Eerke Boiten, Professor of Cyber Security, argues that current AI systems built on large neural networks are too complex to be used responsibly in serious applications. Because their behaviour is emergent rather than explicitly engineered, they lack the manageability and transparency expected of high-stakes software, and established verification and fault-management techniques do not apply to them. Boiten also criticises the weak accountability in AI development, raising ethical concerns about responsibility for training data and for outcomes. He suggests that current AI may be a dead end as far as reliability is concerned and advocates compositional approaches and hybrid models that integrate symbolic AI, while accepting a limited role for today's systems in contexts where their risks can be carefully managed.
https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end/
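The article contains no code, but a minimal sketch may help make the compositional, hybrid idea concrete: an opaque learned component proposes a decision, while a transparent symbolic rule layer composes with it and has the final say, so faults remain localisable. Everything here is hypothetical illustration (the names `NeuralScorer`, `approve_transaction`, the rules and thresholds are invented), not Boiten's design.

```python
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    country: str
    account_age_days: int


class NeuralScorer:
    """Stand-in for an opaque learned model returning a risk score in [0, 1]."""

    def score(self, tx: Transaction) -> float:
        # In practice this would call a trained network; stubbed for the sketch.
        return 0.2 if tx.amount < 1000 else 0.8


# Symbolic layer: explicit, auditable rules that compose with the model.
BLOCKED_COUNTRIES = {"XX"}  # hypothetical deny-list


def approve_transaction(tx: Transaction, scorer: NeuralScorer) -> tuple[bool, str]:
    # Hard symbolic rules run first and cannot be overridden by the model,
    # so a wrong rejection traces back to a named, inspectable rule.
    if tx.country in BLOCKED_COUNTRIES:
        return False, "rule: blocked country"
    if tx.account_age_days < 1 and tx.amount > 500:
        return False, "rule: new account with large amount"
    # Only then is the opaque component consulted, with bounded influence.
    if scorer.score(tx) > 0.7:
        return False, "model: risk score above threshold"
    return True, "approved"


if __name__ == "__main__":
    tx = Transaction(amount=1500.0, country="GB", account_age_days=400)
    print(approve_transaction(tx, NeuralScorer()))
```

The design choice this gestures at is the one the article favours: the unverifiable component is confined to a bounded role inside a structure whose overall behaviour can still be reasoned about rule by rule.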