In this web article, the author explores the limitations of current AI models, particularly large language models (LLMs), arguing that they cannot think scientifically or exhibit true intelligence. The author likens the ease with which humans accept fallacies to the phenomenon of “cargo cult science,” and questions the dominance of neural networks in AI, raising concerns about their sustainability and lack of universality. The piece stresses the importance of asking “why” and reasoning from first principles, which current AI models are incapable of doing, and also addresses the problems of nondeterminism and causal inference in AI.
https://queue.acm.org/detail.cfm?id=3595860