When thinking or talking about technology, we often perceive it as mechanical devices, such as machines or robots assembled by humans, built up from parts into a whole, with controllable and predictable characteristics. However, advances in AI, quantum technology, and synthetic biology will push these technologies towards a degree of complexity and uncertainty that humans can neither comprehend nor steer and control. The question is how we as humans can meaningfully interact with and understand these complex innovations, and whether we can control their effects.
Exponential technologies are complex by nature, and different countries will have different conceptions of these technologies and of what they can bring to our societies and economies; China, for example, has a different conception of AI than the West, seeing it more as a natural phenomenon. For meaningful human control and risk reduction, transparency and openness are required in governance and in research & development models.
AI and quantum technology operate at the intelligence layer and the hard infrastructure layer, respectively. The systemic risks of exponential technologies also demand new governance models to limit risk and ensure meaningful human control.
Mission-driven innovation will likely channel substantial investment into exponential technologies, as they hold the promise of addressing some of society’s biggest challenges and ambitions (e.g. climate change, military superiority).
Exponential technologies will become an increasingly important vector of geopolitical interest, given their political, military and economic implications.
An increasing number of technologies transcend meaningful human control and understanding, and thus pose risks for future applications and development. For example, synthetic biology aims to create synthetic organisms that have new, unpredictable and complex interactions with the environment and other organisms (e.g. this could lead to new viruses or diseases); AI could reach a point of singularity beyond which its intelligence can no longer be understood by humans (e.g. AI could become adversarial to human goals); smart algorithms can be so complex that they are a ‘black box’ (e.g. stock market crashes that we cannot explain, autonomous systems behaving erratically); and quantum computers work on such different computational principles that their calculations are considered incomprehensible to the human mind. Nonetheless, given the huge expected benefits – both economic and strategic – it is likely that their development will continue. As such, we need a new understanding of and relationship with these technologies to ensure they contribute to a better future.