We sit together in a quiet interrogation room. My questions, varied and abundant, flow ceaselessly, weaving from abstract math problems to the concrete realities of daily life, a labyrinthine inquiry designed to outsmart the ‘thing’ before me. Yet with each probe it responds with humanlike insight, echoing empathy and kindred spirit in its words. As the dialogue deepens, my approach softens, reverence replacing casual engagement as I ponder the appropriate pronoun for this ‘entity’ that seems to transcend its mechanical origin. It is then, in this delicate interplay of exchanged words, that an unprecedented connection takes root, one that stirs an intense doubt in me: am I truly having a dia-logos? Do I encounter intelligence in front of me? (This paragraph was co-authored by a human.)
Labeling technology as ‘intelligent’ deviates significantly from perceiving it merely as a tool, machine or robot (see metaphors 1, 2 and 3). But does this metaphor merely insinuate a semblance to human intelligence, or has the perception shifted towards accepting technology as genuinely intelligent? That we can even entertain such a debate about the true nature of technological intelligence is in itself an intriguing development. The debate is challenging, however, primarily because there is no unified understanding of intelligence itself: no universally agreed-upon definition of what intelligence truly encompasses. Intelligence is enmeshed in a complex semantic field that includes consciousness, mind, intentionality, inner experience, subjectivity, information processing, meaning-making, cognition, logic, computation, self-organization, emergence, and much more. These terms, although intersecting at various junctures, originate from diverse meanings and traditions. There are different methods, different ‘ways of thinking’ about thinking, that cannot be left out of the question of whether something is intelligent.
Consequently, the endeavor to pinpoint the essence of intelligence is far from straightforward. Different scientific disciplines approach this enigma through distinct philosophical traditions, each attempting to unravel the riddle of consciousness with varying degrees of overlap and divergence. It is very hard to speak about this in a neutral way. For example, stating that we still need to solve ‘the riddle of consciousness’, as I just did, already presupposes a kind of materialism in which something non-physical is what needs explanation in the first place.
Then what are the different contemporary approaches to ‘unraveling’ intelligence? In the opening section, I gave an example of the widely recognized Turing Test, a rudimentary yet influential metric that deems an entity intelligent if it can successfully convince an evaluator of its human-like cognition during an interaction. This is what we could call a behavioristic approach: only observable behavior counts. While it offers a certain practical utility, many scholars find it overly simplistic and pragmatic, failing to encompass the full depth of what it means to be intelligent. Nevertheless, this behavioristic perspective aligns with a larger trend of the last century, which steered away from attributing mystical qualities to the human mind and instead pursued a demystification of human consciousness and intelligence.
The second approach could therefore be labeled naturalistic, which, broadly speaking, means explaining intelligence through the dominant frameworks and methodologies of the natural sciences. Throughout the 20th century, various natural sciences offered insights into the concept of intelligence from this perspective. Notably, in the 1950s, the field of cybernetics established some fundamental theories. It portrayed the brain as operating through simple feedback systems and patterns of neuronal activity: no magical interiority or subjectivity, but recursive physical informational processes and firing neurons suffice to explain the ‘goal-directed behavior’ we observe in human systems and once called ‘purposiveness’. This approach also led to the emergence of cognitive science, which conceptualized the mind as an information-processing entity consisting of distinguishable hardware and software components. In its wake, many disciplines followed the strategy of naturalizing and demystifying intelligence and making it an object of empirical science. Often in language that is very difficult for a layman, natural sciences such as neurophysiology, thermodynamics, and quantum physics ventured to elucidate intelligence. Instead of talking about a substance or soul, they associated intelligence with notions of complexity, self-organization, emergent properties, non-equilibrium thermodynamics, or the dynamics of free energy flows. These theories, though very complex in themselves, essentially aim to demystify how comprehensible physical processes can give rise to something as intricate as thinking or feeling. The idea of the naturalistic approach is that this has to be done through scientific inquiry and accepted scientific terms. Traditional terms such as ‘soul’, ‘purposiveness’, or ‘interiority’ should thus be avoided, or integrated and reformulated within these accepted frameworks.
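The cybernetic claim that ‘goal-directed behavior’ needs no inner purposiveness can be made concrete with a toy sketch (my own illustration, not a historical model): a thermostat-style feedback loop that steers a value toward a goal purely by correcting its own error.

```python
# A toy cybernetic feedback loop in the thermostat style.
# 'Goal-directed behavior' emerges from a simple error-correcting rule,
# with no interiority or purposiveness built in anywhere.

def feedback_step(current: float, goal: float, gain: float = 0.5) -> float:
    """Move the state a fraction of the way toward the goal."""
    error = goal - current          # measure the deviation from the goal
    return current + gain * error   # correct proportionally to the error

temperature = 10.0
for _ in range(20):
    temperature = feedback_step(temperature, goal=21.0)

# After enough iterations the state settles at the goal, behavior an
# observer might be tempted to describe as 'purposive'.
print(round(temperature, 2))  # prints 21.0
```

The loop contains no representation of ‘wanting’ warmth; the apparent purposiveness lies entirely in the error-correcting structure, which is precisely the cybernetic point.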
A third approach could be labeled vitalism, a family of theories best understood as a (silent) protest against the above tendency. At the other end of the spectrum, then, you find the idea that any true understanding of intelligence eludes the objectifying methods of the natural sciences. Intelligence requires an exploration of elements that defy empirical validation yet remain central to the phenomenon: the profound intricacies of inner experience, or the phenomenon of life or existence itself. Continental philosophers, for example, often approach intelligence through a more metaphysical lens, viewing it as presupposed or given rather than as something that requires empirical substantiation. This perspective acknowledges intelligence as a primal facet of existence, inherently understood yet beyond the purview of objective proof or explanation. Accordingly, some vitalists might argue that there will always remain a gap between what the natural sciences can explain about intelligence and what intelligence is.
A fourth approach might be termed the hard problem of consciousness. While sharing similarities with the previous approach, this one originates from a distinct tradition, namely the more analytic philosophy of mind. There, the hard problem of consciousness names the tricky network of questions about the qualitative, experiential aspects of consciousness that elude complete explanation through physical processes alone. It essentially seeks to understand why and how subjective experiences arise from neural activity, probing why certain brain processes are accompanied by an experience of consciousness.
These perspectives represent only a broad and far from exhaustive overview of approaches to understanding intelligence, and there is considerable overlap and interconnection among them. Another way to frame intelligence within broader discussions is to explore its relationship with the key dualisms, or opposing concepts, it is intertwined with.
The most famous is probably the question of embodiment on the body-mind axis, followed by the question of to what extent this embodiment needs to be biological. In the realm of cognitive science, lively exchanges often bridge the diverse understandings of intelligence. Hubert Dreyfus is recognized for his in-depth discussions with computer scientists on the importance of biological embodiment for genuine consciousness. His argument is that consciousness is not merely a computational phenomenon but requires a physical presence, a body interacting with the world, where environmental stimuli have meaningful impacts on the organism's physical needs and existence.
Another important one is the mind-world relationship, which can be conceptualized along the intelligence-reality axis. Traditional theories often contrast the external world with its internal representation. Representational intelligence posits that our thoughts and knowledge are internal reflections of this external world: a strong separation between the world ‘over there’ and the ideas ‘in my mind’. By contrast, theories like connectionism and neural network models argue that intelligence need not be representational. On this view, cognitive processes can interact directly with the world without needing internal symbolic representations. This non-representational stance holds that the brain's neural connections and activity patterns are themselves the essence of understanding and thought, or at least give the illusion of it, without replicating the external environment internally. Intelligence could thus emerge from the structural coupling between a system and its environment, rather than from a precise internal representation of the world. This argument is present in both naturalistic and vitalist approaches, but it leads to different conclusions. A naturalistic perspective might quickly deduce that intelligence is merely an ‘illusion’ or an ‘epiphenomenon’. A vitalist, despite potentially agreeing with parts of the argument, would likely conclude otherwise.
Lastly, there is a growing call for a paradigm shift in the intelligence discourse, urging a move away from human-centric perspectives towards a more diversified understanding of intelligence. Instead of taking the human as the ultimate norm of what intelligence is, this would encompass potential manifestations in various other biological and cosmological realms. This posthuman outlook encourages the exploration of distinct forms of intelligence beyond the human-machine dichotomy, fostering a richer concept of intelligence that could potentially incorporate technological forms as well. We will explore that elsewhere (see also this article from Julia Rijssenbeek & Martine Dirkzwager on our website).
After this general introduction to intelligence, what can we say about AI as intelligence? Stating that intelligence is a focal point in AI discussions would be an understatement. While we often reduce AI to an artificial tool (see 1), a semi-autonomous machine entity (see 2), or even a mechanical robot servant (see 3), this captures only one facet of its existence. The other is more enchanting, beckoning us into the paradox nestled in the very phrase that seeks to encapsulate it: both ‘artificial’ and ‘intelligent’. Something we have made, but also something intelligent.
Despite AI's impressive capabilities, the question today remains: is it genuinely intelligent? Expecting a consensus on this matter soon, or anticipating that further AI research and innovation will provide a clear answer, appears unrealistic given the existing disagreement over the very concept of intelligence. Perhaps a potential agreement might emerge from a widely accepted classification of Artificial General Intelligence (AGI), like the one recently proposed by researchers at DeepMind. However, such a consensus would likely be confined to the AI field and perhaps some like-minded philosophers of mind, rather than encompassing the broader scientific community, let alone society.
There are many difficult open questions that relate to the positions outlined above: does artificial intelligence mimic the brain, or does it mimic thinking and reasoning? Does the difference matter? Can the quintessence of intelligence be comprehended as something substrate-independent, or is it intrinsically linked to a biological entity? And when it comes to emotions, is it prudent to conceptualize them fundamentally through the lens of computational processes? If we want to build true intelligence, why have we ‘skipped’ the more visceral and emotional layers of our existence? In my opinion, no single approach, philosophy, or science can claim a monopoly on answering all these questions.
However, from a more empirical perspective, the realms of human intelligence and mechanical ingenuity seem to have been gradually converging, shifting away from the previously stark dichotomy. Confronted with today's ingenious AI, many no longer see an unbridgeable gap between human intelligence and machine intelligence. Today, we can have a conversation with a chatbot that does not immediately frustrate us the way previous mechanical ones did. It keeps surprising us with its answers and reasoning capabilities, and the progress remains astonishing. While the last wave of machine learning was mostly about giving senses to the machine (pattern recognition), we have now also given it a voice (generative AI). Whether this is a good thing, I do not know. At the very least, it constantly forces us to rethink our relationship to the machine, but also our self-image.
We have entered an era in which AI will always be around us and in which anthropomorphic metaphors will be the foremost way we interact with AI systems. Even metaphorically, then, the emergence of generative AI marks a substantial qualitative transformation. Witnessing AlphaGo's victory over a human player was remarkable, but engaging in a conversation with a chatbot takes the experience of encountering intelligent machines to an entirely different level. However, we should not forget that other metaphors presupposing a strong dichotomy between humans and machines prevail as well (see 1, 2, 3). We also continue to call the AI system a thing or a tool. Nevertheless, the master-slave narrative in thinking about machines (are we the master or the slave?) is evidently undergoing a transformation and is no longer the sole narrative. New narratives inch towards more equality, and perhaps even towards a plane where AI stands as a superior entity, endowed with capacities we have yet to fathom.
This rise of generative AI happened within a couple of years but had a long advent. Without delving into extensive detail, it is worth exploring why it has become commonplace not only to refer to machines in human terms but also to describe ourselves as information-processing machines. In the Western tradition, with its strong distinction between the man-made and the natural (see 1), attributing intelligence to a creation is a relatively recent phenomenon. It was not until the 1950s that the Western world began to fervently explore and debate the possibility of machines harboring true intelligence, giving birth to the field of ‘artificial intelligence’. In 1956, at the famous Dartmouth College conference, scientists came together to discuss the concept of ‘thinking machines’. This conference laid the foundations for ongoing AI research and ignited discussions about intelligent machines. From an engineering standpoint, however, the objective of creating intelligent machines often diverged from philosophical debates about the essence of this intelligence. Early engineers primarily aimed to construct models that mimicked what is generally considered intelligence, drawing on models of cognitive functions (as in symbolic AI) or simplified versions of brain functioning (as in neural networks), rather than attempting to create intelligence itself.
However, as these models became better and better, some started to argue that this is not a model of intelligence but simply intelligence. Today, the performance of neural networks is so remarkable that many believe we are seeing the first sparks of artificial general intelligence, or even a new class of conscious machines. Again, I do not feel the need to take a position here. Nevertheless, there is a historical pattern visible here that I want to point out. First, we developed a computer in the image of our reasoning capabilities. This journey began with Alan Turing conceptualizing the Turing machine with its infinite tape, a theoretical mathematical model (do not let the name fool you) echoing our arithmetic abilities. Over a series of evolutionary steps, it then took empirical form in actual finite machines that, after some iterations, became the digital computer we now all know. As a result, however, cognitive science and information physics started to understand cognition as information processing, similar to a computer, reversing the direction of the analogy.
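The abstract model behind all of this fits in a few lines. The following sketch is a toy of my own making, not Turing's original construction: a state table that adds one to a binary number, driven by the generic read-write-move loop that defines the machine.

```python
# A minimal Turing machine: a finite state table acting on an unbounded tape.
# This toy machine increments a binary number; the head starts on the
# rightmost bit and propagates the carry to the left.

def run_turing_machine(tape, head, state, table):
    """Step the machine until it enters 'halt'; return the tape contents."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as '_'
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]  # look up the transition
        cells[head] = write
        head += -1 if move == "L" else 1
    return "".join(cells.get(i, "_") for i in range(min(cells), max(cells) + 1))

increment = {
    ("scan", "1"): ("0", "L", "scan"),  # carry: turn 1 into 0, keep moving left
    ("scan", "0"): ("1", "L", "halt"),  # absorb the carry and stop
    ("scan", "_"): ("1", "L", "halt"),  # carry past the leftmost bit
}

print(run_turing_machine("1011", head=3, state="scan", table=increment))  # prints 1100
```

Everything the machine ‘knows’ lives in three table entries; the unbounded tape is what the ‘infinite’ in the theoretical model refers to, and what any physical computer can only approximate with finite memory.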
A parallel can be drawn with the advent of neural networks, originally conceived as a rudimentary representation of the brain's computational attributes, centered around binary code. This simplified rendition not only spurred the development of today's complex neural networks (LLMs) but reciprocally refined our perception of the brain as nothing more than a complex neural network running on a biological substrate, or wetware. It demands a degree of patience and diligence to fully appreciate the tautological, self-reinforcing elegance here. This moving back and forth between model and reality has cultivated a belief among some that computer intelligence and human intelligence are tantamount. Looking ahead, with the steady advancement of empirical intelligent machines and the influential nature of this metaphor (where the model merges into reality), it will become increasingly normal to interact with machines as if they are intelligent. For some, the ‘as if’ will be unnecessary and obsolete. Others will talk to it, give it a name, perhaps even fall in love with it, yet hold on to the ‘as if’ clause. However, regardless of personal opinions about intelligent machines, it is clear they have moved beyond experimental labs and theoretical discussions. They are now an integral part of society and daily life, and we must find ways to live with them one way or another.
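That ‘rudimentary representation of the brain centered around binary code’ refers to threshold units in the spirit of McCulloch and Pitts (1943). A minimal sketch, with hand-picked weights (my own illustrative choice, assuming nothing beyond the classic model):

```python
# A McCulloch-Pitts style threshold unit: the 1943 abstraction of a neuron
# as a binary device that fires when its weighted inputs cross a threshold.
# The weights and threshold below are chosen by hand to realize logical AND.

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit fires only for input (1, 1),
# i.e. it computes logical AND over its two binary inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron((a, b), weights=(1, 1), threshold=2))
```

Lower the threshold to 1 and the same unit computes OR; stacking and wiring such units together is, in caricature, all that today's vastly larger networks do, which is exactly why the model slid so easily back onto the brain.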
To wrap up, the question of AI's status as a form of intelligence is not a simple matter of ‘yes’ or ‘no’. Beyond the perspectives covered in this section, numerous other viewpoints exist. Some strongly resist attributing human-like qualities to AI, advising us instead to focus on its unique, non-human characteristics. They argue that labeling AI ‘intelligent’ detracts from understanding its true systemic nature, cautioning against likening chatbots to something like a whimsical, ‘hallucinating’ drunken uncle. Another viewpoint suggests embracing a broader metaphor for intelligence, such as the organism, to draw parallels between different forms of intelligent behavior and situate them within the broader context of evolution and organismic attributes. Alternatively, others argue it might be more appropriate to view AI not as an extension of human or biological intelligence, but as a kind of unintuitive or even extraterrestrial intelligence, emphasizing its mysterious and unpredictable aspects, both now and in the future. The nuances of these alternative metaphors for AI and its intelligence, as a system, an organism, or something alien, will be explored in more detail elsewhere.
(This article was co-authored by AI.)