... (cont'd)

[Google Brain] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

https://arxiv.org/abs/2201.11903

Recent work suggests that chain-of-thought prompting changes scaling curves and, therefore, the point where emergence occurs.

In their paper, the Google researchers showed that chain-of-thought prompts could elicit emergent behaviors.

Such prompts, which ask the model to explain its reasoning, may help researchers begin to investigate why emergence occurs at all.
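Concretely, the technique replaces the bare answer in a few-shot exemplar with one that walks through the intermediate reasoning steps. The sketch below adapts the paper's running tennis-ball example; `build_prompt` is a hypothetical helper for assembling the prompt text, not part of any model API.

```python
def build_prompt(exemplar: str, question: str) -> str:
    """Concatenate a worked exemplar with a new question, leaving the
    answer slot ("A:") open for the model to complete."""
    return f"{exemplar}\n{question}\nA:"

# Standard few-shot exemplar: the answer is given with no reasoning.
standard_exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: The answer is 11."
)

# Chain-of-thought exemplar: the same problem, but the answer spells out
# the intermediate steps before the final result.
cot_exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11."
)

new_question = (
    "Q: The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"
)

print(build_prompt(cot_exemplar, new_question))
```

Because the reasoning chain is surfaced as text, a researcher can inspect where the model's intermediate steps go wrong, which is what makes such prompts useful for probing emergence.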

...


Discussion

... (cont'd)

Recent findings like these suggest at least two possibilities for why emergence occurs.

1. As in biological systems, larger models truly gain new abilities spontaneously.

2. What appears to be emergent may instead be the culmination of an internal, statistics-driven process that works through chain-of-thought-style reasoning. Larger models may simply be learning heuristics that are out of reach for models with fewer parameters or lower-quality training data.

...