Large models of what? Mistaking engineering achievements for linguistic agency

The paper challenges sensationalist and misleading claims about Large Language Models (LLMs) by questioning two underlying assumptions: language completeness and data completeness. It argues that language is not a distinct, complete entity but a means of acting, which makes it incompatible with the current architecture of LLMs. The absence in LLMs of key characteristics of human linguistic behavior, such as embodiment, participation, and precariousness, further supports the argument that they cannot function as linguistic agents the way humans do. The concept of ‘algospeak’, a pattern of high-stakes human language activity in online environments, serves as an example illustrating the limitations of LLMs. These findings point to a fundamental misconception of both human language and LLM capabilities.

https://arxiv.org/abs/2407.08790