Draft:Reasoning token

From Wikipedia, the free encyclopedia

Reasoning tokens are specialized tokens that guide a large language model (LLM) to perform step-by-step reasoning during inference. They are used in System 2-style LLMs, which typically employ the chain-of-thought (CoT) technique to divide a task into smaller steps.[1]

They are generated from the user's prompt and inserted into the reasoning process to help the model plan its reasoning steps and analyze its responses.[1]

There are different types of reasoning tokens for different purposes: some studies propose Self-Reasoning Tokens, while others propose Planning Tokens.[2][3][4]

In illustrations, reasoning tokens are often notated with single or double angle brackets, as in the following example:

Question: Chenny is 10 years old. Alyana is 4 years younger than Chenny. How old is Anne if she is 2 years older than Alyana?
<prefix_0><prefix_1><prefix_2><kmeans1_0><kmeans1_1><kmeans1_2> Alyana is 10 - 4 = «10-4=6» 6 years old.
<prefix_0><prefix_1><prefix_2><kmeans3_0><kmeans3_1><kmeans3_2> So, Anne is 6 + 2 = «6+2=8» 8 years old.
<prefix_0><prefix_1><prefix_2><answer_0><answer_1><answer_2> The answer is: 8
Planning token examples in literature.[3]

The planning tokens at the start of each reasoning step guide the model's reasoning process.
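As a minimal sketch (not code from the cited work), the planning-token prefixes in a generated trace can be stripped with a simple pattern match to recover the plain reasoning steps; the trace below reuses the illustrative token names from the example above:

```python
import re

# Illustrative reasoning trace with planning-token prefixes, in the
# style of the planning-token example above.
trace = (
    "<prefix_0><prefix_1><kmeans1_0> Alyana is 10 - 4 = 6 years old.\n"
    "<prefix_0><prefix_1><kmeans3_0> So, Anne is 6 + 2 = 8 years old.\n"
    "<prefix_0><prefix_1><answer_0> The answer is: 8"
)

def strip_planning_tokens(text: str) -> str:
    """Remove tokens of the form <name_N> before displaying the text."""
    return re.sub(r"<[a-z0-9]+_\d+>", "", text).strip()

for line in trace.splitlines():
    print(strip_planning_tokens(line))
```

The planning tokens steer generation step by step, but carry no content of their own, which is why they can be removed without changing the readable answer.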

Other types of system tokens may represent conceptual steps of the reasoning process, such as <Analyze_Problem>, <Generate_Hypothesis>, <Evaluate_Evidence>, and <Draw_Conclusion>. The system can also create custom reasoning tokens tailored to a specific prompt or task, allowing it to focus on the most relevant aspects of the problem.[1]

These system tokens are deleted before the final response is shown to the user. However, they are still metered and included in billing, even though the user cannot verify how many were used.[1]
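A hypothetical sketch of this metering behavior (the token names and counts are illustrative and do not reflect any particular provider's API):

```python
# Hypothetical sketch of reasoning-token metering: every generated token
# is counted toward the bill, but the reasoning tokens are hidden from
# the user-visible response.
RAW_OUTPUT = ["<Analyze_Problem>", "<Draw_Conclusion>",
              "The", "answer", "is:", "8"]
REASONING_TOKENS = {"<Analyze_Problem>", "<Generate_Hypothesis>",
                    "<Evaluate_Evidence>", "<Draw_Conclusion>"}

billed_tokens = len(RAW_OUTPUT)  # all six tokens are metered
visible = [t for t in RAW_OUTPUT if t not in REASONING_TOKENS]

print(billed_tokens)
print(" ".join(visible))
```

The gap between `billed_tokens` and the visible output is what makes the usage unverifiable from the user's side: only four tokens reach the user, yet six are billed.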

References

  1. Lim, Don (September 2024). "Reasoning tokens and techniques used in System 2 LLMs such as OpenAI o1". Medium. Retrieved 2024-09-29.
  2. Wang, Junlin; Jain, Siddhartha; Zhang, Dejiao; Ray, Baishakhi; Kumar, Varun; Athiwaratkun, Ben (2024-06-10). "Reasoning in Token Economies: Budget-Aware Evaluation of LLM Reasoning Strategies". arXiv:2406.06461 [cs.CL].
  3. Wang, Xinyi; Caccia, Lucas; Ostapenko, Oleksiy; Yuan, Xingdi; Wang, William Yang; Sordoni, Alessandro (2023-10-09). "Guiding Language Model Reasoning with Planning Tokens". arXiv:2310.05707 [cs.CL].
  4. Felipe Sens Bonetto (2024-04-20). "Self-Reasoning Tokens: Teaching Models to Think Ahead". reasoning-tokens.ghost.io. Retrieved 2024-09-29.