Humans shape tools. We make them part of our body, fusing their essence with our intentions. They require some finesse to use, but they never fool us or trick us. Humans use tools; tools never use humans. We are the masters determining their course, integrating them gracefully into the minutiae of our everyday lives. Immovable and unyielding, they remain reliant on our guidance. Devoid of desire and intent, they stay exactly where we leave them, their functionality unchanging over time. We retain the ultimate authority, able to discard them at will or, in today's context, simply power them down. Though they may occasionally foster irritation, largely they stand steadfast, loyal allies in our daily toils. Thus we place our faith in tools, acknowledging that they are mere reflections of our own capabilities. In them, there is no entity to venerate or fault but ourselves, for they are but inert extensions of our own being, inanimate and steadfast, awaiting our command. (This paragraph was co-authored by a human.)
A tool, as it is often understood, is an extension of human agency – a passive entity first created and then wielded with purpose by an active human. The word comes from the Old English ‘tōl’, meaning ‘instrument’, and is related to verbs meaning ‘to prepare’ and ‘to make’. Technology as an instrument is the most common way to understand technological entities.
Even today, with a proliferation of digital technologies that bear little resemblance to classic instruments such as the hammer and the plow, we often speak about technology as a mere instrument. Our smartphones, computers, and even advanced algorithms are often perceived as utilities—modern instruments in our vast toolshed, serving human intentions. The tool metaphor is emblematic of the humanist ideal of always staying ‘in control’ or ‘in the loop’.
Normally, a designer or maker decides what the function of a tool or object is. Sometimes, however, tools transcend a one-dimensional role. They can be versatile, repurposed in ways that defy their initial design – though such repurposing rarely strays far from the intended functionality. We can sit on top of a table but seldom use a chair as a desk.
Digital technologies offer more degrees of freedom. Over time, society has often reinterpreted and adapted digital tools for uses not envisioned by their creators, a testament to the spontaneity of society. Who still uses a smartphone mainly to make calls?
To understand the roots of the idea that tools are mere instruments, we need to go way back.
This metaphor of the tool, and the idea that technical beings have ‘no essence’ or ‘purpose of their own’, goes back 2500 years. In Greek philosophy, Aristotle made an influential analysis of the difference between nature (physis) and technics (techne). While natural beings evolve and exist according to inherent principles of movement, technical beings are conceived and brought into existence according to an external form and purpose. Nature thus embodies inherent purposes, and natural beings show capacities to evolve and grow by themselves, while artefacts are made by someone who possesses a techne, for instance, an artisan (see 6). Accordingly, although both nature and techne relate to bringing-forth (poiesis) something as something, such as the flourishing tree or the wooden table, nature does this by itself – what we nowadays would call autopoiesis. Artefacts, in turn, point to the fact that their origin lies elsewhere, in a maker.
This perspective has cemented a hierarchical dichotomy that persists in modern-day language about tools, emphasizing the functional and external objectives they are designed to fulfill. It is an indispensable feature of everyday talk.
However, if we simply look at the technological advancements around us, it becomes increasingly apparent that this dichotomy harbors inherent limitations. As technology advances, tools begin to display ever higher levels of ‘autonomy’, blurring the traditional distinctions. The advent of increasingly autonomous AI systems, for example, challenges the clear-cut boundaries between natural and technical beings. Embedded in highly complex machines and computers and characterized by learning and adaptability, such systems foster a convergence of nature and technology. Hence, since the 20th century there has been much criticism of this dichotomy. One could even reasonably argue that this anthropocentric and instrumental stance gave birth to the philosophy of technology. This relatively young discipline emerged partly to scrutinize and challenge the age-old dichotomy, advocating for a deeper understanding of technology beyond its role as a mere tool. In subsequent chapters, we will analyze how new and different metaphors try to do so.
Nonetheless, we do not need to throw the tool metaphor overboard. It still makes sense, also ontologically, even though reality has become more complex. It continues to be a practical and essential metaphor for interacting with technology in daily life. This approach addresses basic functional questions such as ‘what is the goal of this technology?’ and, when viewed from a moral perspective, ‘how should we utilize this technology, and what purposes should we avoid?’.
What do we gain and lose by interpreting AI as a tool? The tool metaphor allows us to take the following stance: AI is just a tool. Nothing more, nothing less. While it may display a range of intriguing and sometimes baffling features, its core function remains that of an instrument to facilitate tasks. Ultimately, many people hold that in the end we humans decide what to use AI for and which goals are desirable, such as utilizing AI to detect potential diseases on CT and MRI scans, and which applications should be prohibited or strictly regulated, such as targeted advertising on social media. The AI Act primarily adopts this human-centric, risk-based methodology, aiming to harness the benefits while minimizing the potential risks.
We should thus not be fooled by the fact that most people would agree that AI can hardly be called a tool. It is an understatement to say that today’s AI systems do not have much in common with hammers. Nevertheless, despite the contemporary tendency to challenge the traditional tool notion in debates about the complex nature of AI, AI predominantly retains this characterization in practical and everyday talk.
We can see this paradigm everywhere. For example, think about how companies currently have to make strategic choices about generative AI. Many corporate leaders and employers are now asking themselves questions such as ‘what can “we” do with generative AI? What AI “tools” should we incorporate in our workflows to optimize them?’ This typically leads to a journey of discovery with many successes and failures, wherein companies delve deep into the potentials and limitations of AI tools—discerning, for instance, their proficiency in summarizing information or aiding in ideation, while acknowledging their current inadequacies in crafting rich and nuanced content. Experimenting with different use cases of AI, they consistently evaluate the process, questioning whether the AI tool was successful for this or that goal, pondering alternative applications, and contemplating diverse purposes and functionalities. They constantly tell each other that this application is something we should definitely use AI for, and in other cases argue that AI is not suitable for this task (yet). Consequently, during this entire process, they constantly highlight the instrumental tool role of AI while sidelining other elements.
Another example: governments and policymakers are tasked with navigating the ethical labyrinth that AI presents, investigating ways to channel its capabilities towards the greater good. Encouraging innovation often requires a delicate balance: promoting economic growth and public goods through its potential, while also remaining vigilant against potential risks, such as the threat that AI destabilizes democracies when exploited by malicious entities. This is tool-thinking in optima forma, as ultimately the AI tools themselves are neutral and the human intentions are what matters.
The way consultants often talk is another great illustration of the widespread adoption of the tool metaphor in AI debates. Consultants often operate at the intersection of commerce and public welfare, orchestrating dialogues that envision the harmonious integration of AI into diverse sectors. In their global discourse, it is commonplace to encounter discussions pondering ‘how can the healthcare sector leverage AI for the betterment of society?’ or ‘what strategies can energy companies adopt using AI to mitigate climate change?’ Again, tool-thinking in optima forma.
To conclude, at a personal level, individuals adopting an instrumental stance generally harbor hopes for AI, imagining a future where it serves as a personal assistant, facilitating daily tasks and managing their schedules. They envision a future in which AI helps them make life a little easier and more efficient. When we begin to think more fundamentally about how AI can enable self-enhancement and thereby truly change us, we slowly shift our lens towards the ideas of transhumanism.
But we'll save that for another chapter.