In this research paper, the authors introduce Graph of Thoughts (GoT), a framework that advances prompting in large language models (LLMs) beyond paradigms such as Chain-of-Thought (CoT) or Tree of Thoughts (ToT). GoT models the information generated by an LLM as a graph: LLM "thoughts" are vertices, and edges represent dependencies between these thoughts. This structure enables combining arbitrary thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, and enhancing thoughts through feedback loops. The authors demonstrate that GoT offers advantages over existing paradigms, for example improving sorting quality by 62% over ToT while reducing costs by more than 31%. This work opens up new possibilities for prompting schemes in LLMs, bringing their reasoning closer to human thinking mechanisms.
https://arxiv.org/abs/2308.09687
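The graph structure described above can be sketched in a few lines. This is a hypothetical, minimal illustration, not the authors' implementation: the `Thought`, `ThoughtGraph`, `aggregate`, and `refine` names are assumptions, and a stub `combine` function stands in for actual LLM calls.

```python
# Minimal sketch of a Graph-of-Thoughts-style structure (hypothetical,
# not the paper's code): thoughts are vertices, edges are dependencies,
# and new thoughts can aggregate or refine existing ones.
from dataclasses import dataclass, field


@dataclass
class Thought:
    content: str                                   # text an LLM produced for this step
    parents: list = field(default_factory=list)    # dependency edges to prior thoughts


class ThoughtGraph:
    def __init__(self):
        self.thoughts = []

    def add(self, content, parents=()):
        t = Thought(content, list(parents))
        self.thoughts.append(t)
        return t

    def aggregate(self, parents, combine):
        # Combine several thoughts into one synergistic outcome:
        # a new vertex with edges from every parent.
        return self.add(combine([p.content for p in parents]), parents)

    def refine(self, thought, improve):
        # Feedback loop: a new vertex depending on the thought it improves.
        return self.add(improve(thought.content), [thought])


# Usage: sort two halves of a list as separate "thoughts" (in practice each
# would be an LLM call), then aggregate them into a merged, sorted result.
g = ThoughtGraph()
a = g.add("[3, 1]")
b = g.add("[4, 2]")
merged = g.aggregate([a, b], lambda cs: str(sorted(sum(map(eval, cs), []))))
print(merged.content)  # -> [1, 2, 3, 4]
```

The graph shape (rather than a chain or tree) is what lets `aggregate` draw on multiple branches at once, which is the core of GoT's advantage on tasks like sorting.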