The study evaluates the deductive and inductive reasoning capabilities of Large Language Models (LLMs), asking which type of reasoning poses the greater challenge. While deductive reasoning has been extensively studied, inductive reasoning remains comparatively underexplored. The authors propose SolverLearner, a framework that isolates inductive reasoning: the LLM only induces a mapping function from in-context input-output examples, while applying that function to test cases is delegated to an external code interpreter, so no LLM-based deduction is involved. Surprisingly, LLMs demonstrate strong inductive reasoning abilities but are notably weaker at deductive reasoning, especially on "counterfactual" tasks whose rules deviate from familiar defaults (e.g., arithmetic in a base other than 10). Through SolverLearner, LLMs achieve near-perfect performance on inductive reasoning tasks. This research sheds light on the reasoning abilities of LLMs and the importance of separating deductive from inductive reasoning.
https://arxiv.org/abs/2408.00114
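
As a rough illustration of how SolverLearner separates the two phases, the sketch below has an LLM induce a Python function from few-shot input-output pairs and then executes that function with the ordinary Python interpreter. The `query_llm` helper, the prompt wording, and the `solve` naming convention are assumptions for illustration, not the paper's actual interface.

```python
"""Minimal sketch of a SolverLearner-style split between induction and
deduction. `query_llm` is a hypothetical placeholder for any chat-completion
call; the prompt format is illustrative, not the paper's exact template."""

from typing import Callable, List, Tuple


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call. Expected to return Python source code
    defining a function named `solve(x)`."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


def induce_solver(examples: List[Tuple[str, str]]) -> Callable[[str], str]:
    """Induction step: the LLM sees only input/output pairs and must
    propose a Python function `solve(x)` capturing the underlying rule."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    prompt = (
        "Infer the rule mapping inputs to outputs from the examples below, "
        "and return only a Python function `solve(x)` implementing it.\n\n"
        f"{shots}"
    )
    code = query_llm(prompt)
    namespace: dict = {}
    exec(code, namespace)  # compile the proposed function locally
    return namespace["solve"]


def apply_solver(solve: Callable[[str], str], test_inputs: List[str]) -> List[str]:
    """Deduction step: the learned function is executed by the Python
    interpreter, so the LLM never applies the rule itself."""
    return [solve(x) for x in test_inputs]
```

Because the final answers come from running the induced code rather than from further LLM generation, any remaining errors can be attributed to the induction step alone, which is what lets the framework measure inductive reasoning in isolation.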