
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models - Paper Reading Notes

https://arxiv.org/pdf/2201.11903.pdf

Chain-of-Thought (CoT) can be used to formulate questions for LLMs, but the questions may be incomplete with respect to context.

Exploratory search is made up of several questions formulated over the course of the process.

It is related to fine-tuning for the task at hand, because it teaches the LLM how to answer.

Introduction 

However, scaling up model size alone has not proved sufficient for achieving high performance on challenging tasks such as arithmetic, commonsense, and symbolic reasoning (Rae et al., 2021).
This work explores how the reasoning ability of large language models can be unlocked by a simple method motivated by two ideas. First, techniques for arithmetic reasoning can benefit from generating natural language rationales that lead to the final answer. Prior work has given models the ability to generate natural language intermediate steps by training from scratch (Ling et al., 2017) or finetuning a pretrained model (Cobbe et al., 2021), in addition to neuro-symbolic methods that use formal languages instead of natural language (Roy and Roth, 2015; Chiang and Chen, 2019; Amini et al., 2019; Chen et al., 2019). Second, large language models offer the exciting prospect of in-context few-shot learning via prompting. That is, instead of finetuning a separate language model checkpoint for each new task, one can simply “prompt” the model with a few input–output exemplars demonstrating the task. Remarkably, this has been successful for a range of simple question-answering tasks (Brown et al., 2020).

In this paper, we combine the strengths of these two ideas in a way that avoids their limitations.
Specifically, we explore the ability of language models to perform few-shot prompting for reasoning tasks, given a prompt that consists of triples: 〈input, chain of thought, output〉. A chain of thought is a series of intermediate natural language reasoning steps that lead to the final output, and we refer to this approach as chain-of-thought prompting.
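
As a concrete illustration of the 〈input, chain of thought, output〉 format, here is a minimal sketch in Python of how such a few-shot prompt could be assembled. The worked exemplar mirrors the arithmetic example from the paper's Figure 1; the `build_cot_prompt` helper and the surrounding code are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a few-shot chain-of-thought prompt following the
# <input, chain of thought, output> triple structure. The exemplar mirrors
# the arithmetic example in the paper's Figure 1; the helper is hypothetical.

COT_PROMPT_TEMPLATE = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can
has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis
balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""


def build_cot_prompt(question: str) -> str:
    """Place the new question after the worked exemplar, so the model
    imitates the intermediate reasoning steps before giving the answer."""
    return COT_PROMPT_TEMPLATE.format(question=question)


if __name__ == "__main__":
    print(build_cot_prompt(
        "The cafeteria had 23 apples. If they used 20 to make lunch and "
        "bought 6 more, how many apples do they have?"
    ))
```

The key design point is that the exemplar's answer spells out the intermediate steps before the final number, so the model is nudged to do the same for the new question.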

Chain-of-Thought Prompting

Chain-of-thought prompting has several attractive properties as an approach for facilitating reasoning in language models.
1. First, chain of thought, in principle, allows models to decompose multi-step problems into intermediate steps, which means that additional computation can be allocated to problems that require more reasoning steps.

[Question 1: WHAT, Question 2: WHEN, Question 3: WHERE, Question 4: WHO SAID (where it is written, when it was accessed, how it was generated)... All contextualized (possible) answers already anticipate questions 2 onward.]

2. Second, a chain of thought provides an interpretable window into the behavior of the model, suggesting how it might have arrived at a particular answer and providing opportunities to debug where the reasoning path went wrong (although fully characterizing a model’s computations that support an answer remains an open question).
3. Third, chain-of-thought reasoning can be used for tasks such as math word problems, commonsense reasoning, and symbolic manipulation, and is potentially applicable (at least in principle) to any task that humans can solve via language.
4. Finally, chain-of-thought reasoning can be readily elicited in sufficiently large off-the-shelf language models simply by including examples of chain of thought sequences into the exemplars of few-shot prompting.

Chain-of-thought prompting is a general approach that is inspired by several prior directions: prompting, natural language explanations, program synthesis/execution, numeric and logical reasoning, and intermediate language steps.

[Inspiration for turning the KG's Competency Questions into graph queries via an LLM]
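
A hedged sketch of that idea, assuming a toy citation-graph schema: a few-shot prompt in which the model first writes out its reasoning and only then emits the graph query. The schema, the exemplar competency question, and the SPARQL below are all hypothetical illustrations, not drawn from the paper or the seminar mentioned next.

```python
# Hypothetical sketch: chain-of-thought prompting to turn a KG competency
# question into a graph query. Schema, exemplar, and SPARQL are invented
# for illustration only.

CQ_TO_QUERY_TEMPLATE = """\
Schema: (Paper)-[:cites]->(Paper); Paper has properties title, year.

Competency question: Which papers published after 2020 cite paper P1?
Reasoning: We need papers with a cites edge to P1, keeping only those
whose year is greater than 2020. So match the citing paper, bind its
year, and filter.
Query:
SELECT ?p WHERE {{
  ?p :cites :P1 .
  ?p :year ?y .
  FILTER(?y > 2020)
}}

Competency question: {question}
Reasoning:"""


def build_cq_prompt(question: str) -> str:
    """Format a new competency question into the few-shot template; the
    model is expected to reason first and then output the query."""
    return CQ_TO_QUERY_TEMPLATE.format(question=question)


if __name__ == "__main__":
    print(build_cq_prompt("Which papers does paper P2 cite?"))
```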

Graduate program seminar (YouTube) - Sept 15, 3 p.m. - On the Construction of Database Interfaces based on Large Language Models

https://www.youtube.com/live/b1I2EQ5rclI?si=9nsS85vl2WMb6aeh

