This paper introduces NL2FOL, a framework that uses Large Language Models to automate the translation of natural language into First-Order Logic. By incorporating implicit background knowledge and invoking SMT solvers, NL2FOL detects logical fallacies and provides interpretable insight into the reasoning process. This neurosymbolic approach performs strongly across datasets without fine-tuning or labeled training data, achieving an F1-score of 78% on the LOGIC dataset and 80% on the LOGICCLIMATE dataset.
https://arxiv.org/abs/2405.02318
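The core idea of the solver step is a countermodel check: an argument is valid exactly when its premises together with the negated conclusion are unsatisfiable. The paper applies this to first-order formulas via SMT solvers; the sketch below is only a minimal propositional stand-in (a brute-force truth-table search, with a hypothetical `is_valid` helper and example variable names) to illustrate the principle:

```python
from itertools import product

def is_valid(premises, conclusion, var_names):
    """A propositional argument is valid iff no truth assignment makes
    every premise true and the conclusion false (i.e., no countermodel).
    Premises and conclusion are predicates over an assignment dict."""
    for values in product([True, False], repeat=len(var_names)):
        env = dict(zip(var_names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # countermodel found: the inference does not hold
    return True

# Affirming the consequent: "If it rains, the ground is wet; the ground
# is wet; therefore it rains" -- a classic fallacy, so validity fails.
premises = [lambda e: (not e["rains"]) or e["wet"],  # rains -> wet
            lambda e: e["wet"]]
print(is_valid(premises, lambda e: e["rains"], ["rains", "wet"]))  # False

# Modus ponens ("if it rains, the ground is wet; it rains; so the
# ground is wet") has no countermodel, so it is valid.
print(is_valid([lambda e: (not e["rains"]) or e["wet"],
                lambda e: e["rains"]],
               lambda e: e["wet"], ["rains", "wet"]))  # True
```

An SMT solver replaces this exponential enumeration with efficient satisfiability search and handles quantifiers and predicates, which is what makes the first-order setting of NL2FOL tractable.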