Is AI a Programming Language Interpreter?
June 5, 2025: Categorizing Artificial Intelligence
Two Senses of Abstraction
No programming language is usable until it is modeled by a system that instantiates the logical structure required to apply it. Because logic specifies relationships and rules that can be realized in multiple instances, it is always abstract. Since all programming languages are specifications built, at their core, entirely on logical foundations, they inherit logic's property of being abstract.
When assembly language is called concrete, it is only because of its very direct correspondence with one particular model of a logical structure: in this case, a physical CPU. Abstraction in computer science is a different concept from abstraction in logic, even though the two have similarities. It is possible to create an abstract programming language that is neither applicable in multiple instances nor similar to machine code, whereas it is not possible to have a standard logical operation that is not general. The concepts are similar in that both the programmatically and the metaphysically abstract hide complex details and are, at minimum, often generalizable.
Is Using AI Programming?
Programming languages with the lowest degree of abstraction most closely resemble the physics of computation, while those with the highest degree most closely resemble natural language. Artificial intelligence acts upon natural language inputs, which makes it necessary to ask whether the use of an AI application can be considered programming.
First, to use AI is to send user input into a predefined application. Although user input does affect the instructions and data executed by the processor, it does not change the underlying logic of the application. Rather, all inputs are intentionally directed through logical rules predefined by the application programmer (and/or the trained model weights), such that no external input can affect the core design of the application [1]. As a result, if using AI is to count as programming, the AI would have to be interpreting a programming language within the confines of the application. Since AI acts upon natural language, natural language or some subset of it would have to qualify as a programming language. And since natural language syntax has not yet been fully formalized, while programming languages are by definition formal specifications, the language AI acts on would have to be a formalized subset of natural language. AI cannot generate acceptable responses for all inputs, whether for lack of data, lack of context, application boundaries, or other reasons, and we can take inputs that cause such failures to have erroneous syntax. This is a form of input restriction, which implies that AI does indeed accept only a subset of natural language. We must next examine whether that subset is well-defined.
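Before doing so, here is a minimal sketch of the two claims above, with entirely hypothetical rules standing in for a real application: every input is routed through logic fixed at design time, and some inputs are simply rejected, which is what restricts the accepted subset.

```python
# Hypothetical toy "application": the rules below are invented for
# illustration and are fixed by the programmer, not by the user.

def predefined_app(user_input: str) -> str:
    text = user_input.strip()
    if not text:
        return "error: empty input"        # an input the application rejects
    if text.lower().startswith("echo "):
        return text[5:]                    # a rule chosen in advance by the programmer
    return "unrecognized request"          # everything else hits a fixed default

print(predefined_app("echo hello"))              # -> "hello"
print(predefined_app("rewrite your own rules"))  # -> "unrecognized request"
```

No matter what is typed, the branches themselves never change; input only selects among them.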
In a formal specification, ambiguity is forbidden. However, because most AI is built on a probabilistic architecture, every concept it indexes is inherently ambiguous, and for each unit of a response the option with the highest probability is the one committed to [2]. This is why AI will usually guess instead of requesting clarification. In fact, AI responses can reasonably be construed as guesses, and for most use cases the majority of those guesses are correct. Regardless, guesses are far from rules, so the input AI parses cannot be called a programming language.
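As a toy illustration of committing the highest-probability option per unit (the candidate tokens and probabilities here are invented), greedy decoding amounts to nothing more than an argmax over a distribution of plausible continuations:

```python
# Toy greedy decoding: pick the single most probable candidate at each step.
# The distribution is made up; a real model would produce one per token.

def greedy_pick(distribution: dict[str, float]) -> str:
    return max(distribution, key=distribution.get)

step = {"4": 0.62, "5": 0.21, "four": 0.12, "banana": 0.05}
print(greedy_pick(step))  # -> "4", committed despite the remaining ambiguity
```

Even the most probable option is still only the likeliest guess; nothing in the selection rule checks it against a formal grammar.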
Second, programming languages are predictably deterministic, but AI applications are at best chaotically deterministic. That is to say, while the behaviour of a computer program's execution can be both foreseen and repeated, the behaviour of an AI application cannot be foreseen and can only be repeated by keeping every starting parameter the same [2]; any slight variation in input may cause a completely new pathway to generate a response. This is chaotic determinism, which is uncharacteristic of standard code execution. The only factors that could make a conventional program behave in an unrepeatable way are external ones, such as the execution order of parallel processes or analog inputs, since no computational operation exists that creates a truly random (or unforeseeable) number.
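A toy sampler (a stand-in for a real model, with an invented vocabulary) makes this concrete: the output repeats only when every parameter, including the seed and the exact prompt, is held fixed, while a one-character change sends generation down a different pathway.

```python
import hashlib
import random

# Stand-in sampler: deterministic given (prompt, seed), but highly
# sensitive to any change in either.
def toy_sampler(prompt: str, seed: int, length: int = 5) -> str:
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    vocab = ["the", "cat", "sat", "on", "a", "mat", "ran", "far"]
    return " ".join(rng.choice(vocab) for _ in range(length))

print(toy_sampler("Tell me a story.", seed=42))  # repeatable ...
print(toy_sampler("Tell me a story.", seed=42))  # ... identical output
print(toy_sampler("Tell me a story!", seed=42))  # one character changed: new pathway
```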
Third, there is a technique used with some AI Large Language Models (LLMs) known as chain-of-thought prompting. It initially appears that this would resolve the first critique, that probabilistic AI is not based on well-defined logical operations, because a chain-of-thought model follows a series of apparently auditable, logical steps before producing any conclusion. Looking closer, we find that these are not atomic logical steps like those a classical computer would execute, nor are they drawn from verified formulas baked into a lookup table; the model merely guesses what the next logical step would look like, and it can of course guess incorrectly. Again, this process does not involve a formal specification and is unlike classical computing.
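A toy contrast (with invented probabilities, not taken from any real model) may help: a classical machine derives each step from a fixed rule, whereas a chain-of-thought model samples the text of a plausible-looking next step, which may or may not be correct.

```python
import random

def rule_based_step(a: int, b: int) -> int:
    return a + b  # an exact, verified operation: always the same result

def guessed_step(a: int, b: int, rng: random.Random) -> str:
    # Plausible-looking "next steps" with made-up probabilities.
    candidates = {
        f"{a} + {b} = {a + b}": 0.90,       # usually the correct step...
        f"{a} + {b} = {a + b + 10}": 0.07,  # ...but wrong steps remain possible
        f"{a} + {b} = {a + b - 1}": 0.03,
    }
    return rng.choices(list(candidates), weights=list(candidates.values()))[0]

rng = random.Random(0)
print(rule_based_step(17, 25))    # always 42
print(guessed_step(17, 25, rng))  # usually "17 + 25 = 42", occasionally not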
Fourth, the way LLMs replicate the execution of code may not precisely reflect that code's real-world behaviour. Almost all programming languages have a property known as Turing completeness: the ability to compute any algorithm, given enough time and memory. Non-Turing-complete languages, such as markup languages, do exist, but unlike LLMs they are not used to replicate the behaviour of Turing-complete systems. LLMs are not Turing-complete, and for most tasks neither is chain-of-thought prompting. However, it is theoretically possible to use a chain-of-thought LLM to follow the atomic logical steps of an operating system's source code, including that of its programming languages, in a Turing-complete way; but for this to work properly, every step must be free of mistakes, a requirement that probability-based models cannot guarantee.
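A toy simulation (with an invented per-step error rate) illustrates why the requirement is so strict: when a computation is carried out one atomic step at a time, a single mis-guessed step corrupts every step that follows, and nothing downstream corrects it.

```python
import random

# Sum 1..n one atomic step at a time; per_step_error is an invented chance
# that a step is "guessed" slightly wrong, as a probabilistic model might.
def run_steps(n: int, per_step_error: float = 0.0, seed: int = 0) -> int:
    rng = random.Random(seed)
    total = 0
    for i in range(1, n + 1):
        step = i if rng.random() >= per_step_error else i + 1  # faulty step
        total += step
    return total

print(run_steps(100))                        # 5050: an exact interpreter
print(run_steps(100, per_step_error=0.05))   # very likely wrong, and it never self-corrects
```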
For these reasons, AI cannot be called a programming language interpreter, even though it can act in a similar manner and may, in theory, have the capacity to meet the very strict requirements of that definition.
[1] It would be possible to design a metamorphic program which allowed user input to change its underlying logic, but this exception does not apply to AI.
[2] Many AI applications do not use highest-probability selection (known as greedy decoding), but instead sample from the distribution, making them non-deterministic.