Wikipedia:Reference desk/Archives/Computing/2023 September 5
September 5
Why do ChatGPT and other LLMs apparently execute procedural steps?
I understand that LLMs are the world's best autocomplete on steroids, but is it understood why ChatGPT and other LLMs apparently execute multi-step procedures that they are asked to perform via prompt engineering? The "Sparks of AGI" paper was not at all helpful to me here. Lavaship (talk) 23:33, 5 September 2023 (UTC)
- I don't know how much detail is publicly available. Lacking specific knowledge, my best guess is that the training material for these chatbots contains dialogues (prompt – response – prompt – response – ...), in which the two categories are marked as such. When it is the chatbot's turn, the model is used to generate plausible continuations of the preceding token string using only the likelihood of response tokens, which include an end-of-response token. --Lambiam 11:46, 6 September 2023 (UTC)
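The mechanism described above (role-marked dialogue, continuation token by token, stopping at an end-of-response token) can be sketched in miniature. This is a toy illustration under the assumptions in the comment, not ChatGPT's actual implementation: the role markers, the `TOY_MODEL` lookup table, and greedy decoding are all stand-ins for a real transformer conditioning on the whole preceding token string.

```python
EOR = "<|end|>"  # hypothetical end-of-response marker

# Stand-in "model": maps the last token to the most likely next token.
# A real LLM conditions on the entire preceding token string, not just
# the last token, but the decoding loop has the same shape.
TOY_MODEL = {
    "<|user|>": "list",
    "list": "three",
    "three": "steps",
    "steps": "<|assistant|>",
    "<|assistant|>": "step1",
    "step1": "step2",
    "step2": "step3",
    "step3": EOR,
}

def generate(prompt_tokens, max_tokens=20):
    """Greedy decoding: append the most likely next token until the
    model predicts the end-of-response token (or a length cap hits)."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        nxt = TOY_MODEL.get(tokens[-1], EOR)
        if nxt == EOR:
            break
        tokens.append(nxt)
    return tokens

# The "assistant turn" is just the continuation after the role marker.
out = generate(["<|user|>", "list", "three", "steps", "<|assistant|>"])
print(out[-3:])
```

The point of the sketch is that multi-step behaviour needs no separate "procedure executor": each emitted token simply becomes part of the context that conditions the next prediction.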
- One could reductively describe a brain as a "ganglion on steroids", but the reality is that increasing the complexity and scale of a system will tend to produce emergent phenomena. Essentially, yes -- the only real thing they're capable of doing is minimizing the loss function for next-token prediction. The fact that this apparently produces an entity capable of logical reasoning and theory of mind, at a level previously assumed to require a full human brain, raises significant philosophical questions to say the least. jp×g 01:08, 8 September 2023 (UTC)
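The loss function mentioned above is ordinary cross-entropy on next-token prediction. A minimal numeric sketch (the probabilities are made-up example values, not from any real model) shows why "minimizing the loss" means "getting better at predicting the correct next token":

```python
import math

def next_token_loss(probs_of_correct_token):
    """Mean negative log-likelihood: the per-token cross-entropy loss,
    given the probability the model assigned to each correct next token."""
    return -sum(math.log(p) for p in probs_of_correct_token) / len(probs_of_correct_token)

# A model that assigns high probability to each correct next token gets
# low loss; gradient descent nudges weights to push these numbers up.
confident = next_token_loss([0.9, 0.8, 0.95])
uncertain = next_token_loss([0.2, 0.1, 0.3])
print(confident < uncertain)  # True
```

Everything a trained LLM does, including apparent multi-step reasoning, is an emergent side effect of driving this single quantity down over an enormous corpus.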