Draft:Prompt Chain
Prompt chaining is a systematic approach in artificial intelligence in which a complex task is broken down into smaller, sequential steps, with each step's output serving as input for the subsequent step. The methodology has gained prominence in the field of large language models (LLMs) as organizations seek to improve output quality and maintain better control over AI-generated content.[1]
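The step-by-step hand-off described above can be sketched in a few lines of code. The function name `call_model` is hypothetical: it stands in for a call to any LLM API and is stubbed here so the example runs without an external service.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call."""
    return f"[response to: {prompt}]"

def run_chain(task: str, steps: list[str]) -> str:
    """Run each step in order, feeding the previous output into the next prompt."""
    result = task
    for step in steps:
        prompt = f"{step}\n\nInput:\n{result}"
        result = call_model(prompt)
    return result

final = run_chain(
    "Write about renewable energy.",
    [
        "Outline the main points.",       # step 1: decomposition
        "Expand the outline into a draft.",  # step 2: consumes step 1's output
        "Edit the draft for clarity.",       # step 3: consumes step 2's output
    ],
)
```

In a real deployment, `call_model` would be replaced by an actual model invocation; the chaining logic itself is unchanged.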
History
The concept of prompt chaining evolved from earlier work on chain-of-thought prompting, first formally described by Wei et al. in 2022.[1] The technique gained wider attention following demonstrations of its effectiveness in complex reasoning tasks.[2]
Theoretical foundation
The effectiveness of prompt chaining builds upon research in:
Chain-of-thought reasoning[1]
Zero-shot task decomposition[2]
Self-consistency in language models[3]
This section needs expansion. You can help by adding to it. (November 2024)
Types
Research has identified several approaches to implementing prompt chains:
Individual prompt chains
These utilize a single LLM throughout the process, similar to the methodology described in chain-of-thought reasoning studies.[1]
Multi-model approaches
This section needs expansion. You can help by adding to it. (November 2024)
Limitations
Current research identifies several limitations:
Potential error propagation between chain steps[3]
Computational overhead in multi-step processing
Challenge of maintaining context across chain steps
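The first limitation above, error propagation, can be illustrated directly: because each step consumes the previous step's output verbatim, a mistake made early in the chain is carried through every later step. As before, `call_model` is a hypothetical stub standing in for a real model call.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call."""
    return f"[step using: {prompt}]"

def run_chain(initial: str, steps: list[str]) -> str:
    """Feed each step's output into the next step's prompt."""
    result = initial
    for step in steps:
        result = call_model(f"{step} :: {result}")
    return result

# A flawed intermediate result from an early step...
out = run_chain("factually WRONG premise", ["summarize", "translate"])
# ...is still embedded in the final output: no later step corrects it.
```

This is why some practitioners validate or filter intermediate outputs between steps, at the cost of additional computational overhead per step.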
See also
Prompt engineering
Large language model
Natural language processing
References
1. Wei, Jason, et al. "Chain-of-thought prompting elicits reasoning in large language models." arXiv preprint arXiv:2201.11903 (2022).
2. Kojima, Takeshi, et al. "Large language models are zero-shot reasoners." arXiv preprint arXiv:2205.11916 (2022).
3. Wang, Xuezhi, et al. "Self-consistency improves chain of thought reasoning in language models." arXiv preprint arXiv:2203.11171 (2022).
Category:Artificial intelligence Category:Natural language processing Category:Machine learning