Large Language Models Are Neurosymbolic Reasoners

This paper explores the use of Large Language Models (LLMs) as symbolic reasoners in text-based games, focusing on tasks such as math, map reading, and applying common sense. The LLM agent is initialized with a prompt describing its role and the symbolic task, and then interacts with the game environment to achieve in-game goals. Experiments show that the agent is effective for automated symbolic reasoning, achieving an average performance of 88% across all tasks. These results point to new possibilities for applying LLMs in real-world scenarios that require symbolic reasoning capabilities.
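To make the role-initialization idea concrete, here is a minimal sketch of a role-prompted LLM agent choosing one action per game step. Everything in it is an assumption for illustration: the system-prompt wording, the `choose_action` helper, and the toy observation are hypothetical and not the paper's actual implementation; only the OpenAI chat-completions call is a real API.

```python
# Minimal sketch of a role-prompted LLM agent for a symbolic text-game task.
# Assumptions (not from the paper): the prompt wording, choose_action helper,
# and toy observation are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are an agent playing a text-based game that requires symbolic "
    "reasoning (e.g. arithmetic or map reading). At each step, read the "
    "observation and the list of valid actions, then reply with exactly one "
    "action from that list."
)

def choose_action(observation: str, valid_actions: list[str]) -> str:
    """Ask the LLM to pick one of the game's valid actions for this step."""
    user_msg = (
        f"Observation:\n{observation}\n\n"
        f"Valid actions: {', '.join(valid_actions)}\n"
        "Answer with one action only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Example single step against a toy observation (the game loop is not shown).
action = choose_action(
    "You see a math problem on the desk: 17 + 26 = ?.",
    ["read math problem", "type 43", "type 33"],
)
print(action)
```

In practice the agent would run this selection step in a loop, feeding each new game observation back in until the in-game goal is reached.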

https://arxiv.org/abs/2401.09334