Beyond ‘Autocomplete on Steroids’: Testing a Neo-Aristotelian Theory of Some Emergent Features of Intentionality in LLMs

Author

Gray Cox, USA

Abstract

This paper explores shortcomings in the "autocomplete on steroids" (AOS) framing of Large Language Models (LLMs). It first sketches that view and some key reasons for its appeal. It then argues that the view, because it tacitly frames its descriptions with the mechanistic metaphor of efficient causality, overlooks ways the Attention function in GPT systems introduces features of emergent intentionality into LLM behavior. A conceptual analysis of the functions of variable Attention in GPT reinforcement learning suggests that the Aristotelian categories of formal and final causality provide a better understanding of the kinds of pattern recognition found in LLMs and of the ways their behaviors seem to exhibit evidence of design and purpose. A conceptual illustration is used to explain the neo-Aristotelian theory proposed. Descriptions and analyses of a series of preliminary experiments with three LLMs are then used to explore empirical evidence for the comparative merits of that theory. The experiments provide preliminary evidence of the LLMs' abilities to produce texts in ways that exhibit formal and final causality and that would be difficult to explain using the mechanical conceptions of efficient causality implied by the AOS view. The paper concludes with a brief review of the key findings, the limits of this study, and the directions for future research that it suggests.

Keywords

Autocompletion, Formal and Final Causality, Emergent Intentionality, Aristotelian theory of AI, Attention

Volume 14, Number 18