With the widespread adoption of generative AI tools, the evolution of AI has entered a new phase. Even though various types of AI have been in use for years, the fact that a range of tools and applications is now at everyone’s disposal, and is easy to use, suggests that AI is on the verge of a breakthrough. It is thus no wonder that the societal impact of AI has become a topic of major concern among opinion leaders and policymakers: from the existential risk of a superintelligent AI, to the impact on our jobs, to how AI-generated content will affect our social and political systems (e.g. meddling in elections, unleashing an avalanche of fake news). Although responses vary widely – from a moratorium on AI development to a manifesto for deregulating AI to speed up its development – it is clear that the stakes are high and that we need to consider how to regulate AI. Surprisingly to some, China was among the first countries to regulate the use of AI, taking notable steps to significantly restrict the use of AI by companies (governmental AI is a different story). Meanwhile, Europe has been diligently crafting its AI Act, and in the US, Senate hearings and President Biden’s executive order signal the start of its regulatory journey.
So while most agree that AI regulation is necessary, how and to what extent AI should be regulated remains a topic of debate. AI legislation faces the challenging task of balancing risk mitigation against maximizing AI’s societal and economic potential. Skeptics of AI regulation warn that overly cautious, or otherwise poorly designed, regulations will stymie innovation and deprive us of future medicines, economic gains, or even geopolitical power. Especially in Europe, people worry about falling behind in the global AI race, particularly with giants like China and the US leading the charge. This concern is valid, but we should nevertheless pause and question whether developing faster always equates to ‘better’ outcomes. Boundless development of AI would probably lead to more powerful tools, but not necessarily to better ones. What matters is that the technology aligns with our needs and values. A careful approach to AI development and use might not only be safer, but also result in AI systems that better resonate with our societal norms – AI, in other words, that is better from our perspective.
In some cases, the ‘best’ solution is not to use AI at all. After all, what benefit does AI offer if it doesn’t mirror our core values? Take the example of robotaxis in San Francisco. In a bold move, the city has allowed these cars to operate on its roads so that their developers can collect more real-life data and improve the technology further. In Europe, we might envy such a daring move and, perhaps rightfully, worry about the technological head start it gives American companies. Yet we should also ask whether robotaxis are really what we need and want for European cities. Beyond the immediate problems these taxis pose today, like blocking streets for no apparent reason or injuring pedestrians, we should ask whether we want robotaxis to reinforce the dominant position of the automobile. Perhaps we are better off investing in public transportation and bicycle sharing instead. Yes, a blanket ban on autonomous cars in the EU poses risks for our automotive sector, but it would also provide an opportunity: a chance to reimagine our cities as car-free zones, enhancing urban life.
Current proposals to regulate AI – in Europe, but also in China and the US – focus on the immediate risks posed by AI. The European AI Act, for instance, is concerned with risks to human health and safety, but also to privacy and civil rights. While this is a logical starting point, it should not be the end of the story. After all, safe AI does not necessarily equate to good AI. Except for a handful of applications posing ‘unacceptable risks’ (such as mass surveillance or social scoring systems), the Act says nothing about the desirability of AI applications. In other words: as long as it’s safe, it’s OK. Yet this overlooks the broader societal implications, both positive and negative, that AI is bound to have.
To draw another parallel with cars: as I argued in my book, From Luxury to Necessity, initial regulations for automobiles also focused on immediate safety concerns – traffic rules, traffic lights, guardrails, seat belts. These measures, while critical in their own right, were blind to the broader impact cars would have on our everyday lives and lifeworld. Almost one hundred years later, we’re struggling with the consequences of our over-reliance on cars: local air pollution, greenhouse gas emissions, transport poverty, and unlivable cities. It may take another century to develop a new mobility system that fixes these problems.
There’s a real danger of history repeating itself with AI: that we focus too much on short-term risks and dangers while ignoring the long-term societal and cultural effects, both negative and positive. Beyond the health, privacy, and civil rights risks addressed today, we should thus also consider the broader alignment of AI with our norms and values. Among many issues, we should worry about the concentration of power in the hands of a few companies, our growing dependence on technology and the vulnerabilities that come with it, a further ‘rationalization’ of our economy and society, and the potential erosion of our autonomy and democracy. Yet we may also worry that a myopic, risk-based approach deprives us of future solutions to problems in health care or the environment. AI solutions that appear risky today may hold the key to novel treatments or renewable energy technology. This does not mean that grand promises of future AI capabilities should overrule any concerns over safety or civil rights, but neither should we reject promising projects based merely on hypothetical risks. A careful balance between risk and opportunity, and hence between conflicting values, needs to be struck on a case-by-case basis. This could, for instance, result in a stepwise approach in which projects are monitored regularly and real-life experiments take place in controlled settings.
While it’s tough to project the long-term outcomes of widespread AI deployment, it’s crucial to have these discussions. Scenarios and other forms of foresight can help us make informed decisions and develop forward-thinking legislation. To be clear, this is not about preventing a hostile superintelligence from eradicating mankind. It is about the ways we design and apply even basic AI systems in our economy and society, and hence about what we do to ourselves. Nor is it about a complete rejection of AI; it is about considering how AI can help us improve our wellbeing without producing undesirable outcomes in the short and long term.