In part I, we considered the geopolitical aspects of semiconductor technology in advancing generative AI. We saw that semiconductor technology is crucial for generative AI progress, and that the semiconductor value chain for AI chips faces bottlenecks due to reliance on a handful of countries for raw materials and advanced technology, leaving the supply chain vulnerable to geopolitical tensions and trade disputes. As nations strive for technological independence to secure their chip supplies for national security, the US and China are particularly active, each enhancing its 'AI chips' capabilities. This affects global economic stability and technological progress. The competition for semiconductor dominance is therefore a major geopolitical issue with significant implications for global power dynamics.
Building on these insights, in this part we will explore the role of Big Tech in AI chips and their importance in developing 'AI Stacks'. Controlling one’s AI Stack brings not only geopolitical but also significant economic benefits, as shown by the rising valuations of companies like Nvidia, Amazon, Meta, Apple, and Google. The concept of the AI Stack will highlight some of these companies’ strategies for developing and strengthening their own AI Stacks. One observation will be that only the largest companies can afford the massive spending required to develop their own, proprietary AI Stack. As such, Big Tech firms are intensifying their control over AI computing hardware and cloud infrastructure, which affects AI innovation, sustainability, and geopolitics. These corporate intra-Stack dynamics will therefore shape the future of AI.
Demand for AI chips has always been high. Since the ‘generative AI hype’ of late 2022, however, demand is said to outstrip supply by a factor of ten, so that access to compute resources at the lowest total cost has become a determining factor in the success of AI companies. In fact, many companies spend more than 80% of their total capital raised on compute resources. For example, Microsoft and OpenAI are collaborating on a massive data center project that could cost up to $100 billion, with the goal of building a powerful AI supercomputer called "Stargate". Stargate would be the fifth phase in a series of supercomputers that Microsoft and OpenAI plan to build over the next six years; the current fourth-phase supercomputer for OpenAI is expected to launch around 2026. For comparison, the supercomputer Microsoft built for OpenAI in 2020 already featured over 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity per GPU server, making it one of the top five most powerful publicly disclosed supercomputers in the world. Amazon has likewise disclosed that it will invest almost $150 billion over the next 15 years in data centers to handle the forthcoming explosion in demand for AI applications and digital services. The magnitude of these projects and the money involved show that managing and developing an AI Stack will likely only be possible for the largest companies.
The AI Now Institute’s recent report Computational Power and AI highlights the increasing verticalization of control over the AI Stack by the big tech companies. These companies are intensifying their already-concentrated control over the computing hardware and cloud infrastructure that are essential for building and deploying large AI systems, as well as over other points of concentration such as chip fabrication. The AI industry faces a bottleneck due to the scarcity and monopolization of computational power. This scarcity is driven by the need for high-end chips, like Nvidia's H100, for training the large-scale AI models at the cutting edge of AI development. (Note that much localized AI computation at the edge of the network also requires specialized AI chips, although less sophisticated ones will do the job. Furthermore, for inference, the phase in which pre-trained models act in real time on new data, specialized ‘inference’ chips such as Google’s Tensor Processing Unit (TPU) can outperform general-purpose GPUs, given the parallel processing demands of AI workloads.) The monopolization of computational power by a few companies, including Nvidia, TSMC, and major cloud providers like AWS and Google Cloud, shapes the AI industry's trajectory. These companies' dominance affects AI innovation, environmental sustainability, and geopolitical dynamics.
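The training-versus-inference distinction drawn above can be made concrete with a toy sketch: training repeatedly runs forward and backward passes to update a model's weights (the compute-hungry workload that demands high-end chips like the H100), while inference is a single forward pass on a frozen model (the lighter workload that dedicated inference accelerators target). The model and numbers below are purely illustrative assumptions, not any real chip benchmark.

```python
# Hypothetical toy model y = w * x, fitted to y = 2x with gradient descent,
# to illustrate why training and inference stress hardware so differently.

def train(samples, lr=0.1, epochs=50):
    """Training: a forward AND backward pass per sample, per epoch."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x                # forward pass
            grad = 2 * (pred - y) * x   # backward pass: d(squared error)/dw
            w -= lr * grad              # weight update
    return w

def infer(w, x):
    """Inference: a single cheap forward pass on the frozen weight."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)                  # thousands of arithmetic ops
print(round(infer(w, 10.0), 2))  # one multiply at serving time -> 20.0
```

Training touches every sample many times and updates state on each step; inference only reads the finished weights once per query, which is why the two phases reward different silicon.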
The relentless surge in demand for AI capabilities has precipitated a host of challenges and bottlenecks in semiconductor chip production. As AI applications proliferate, the need for chips that are not only powerful but also energy-efficient and capable of lightning-fast processing has become critical. Scaling chip production to meet these burgeoning requirements is fraught with hurdles, both technical and logistical.
One of the primary challenges is the technical complexity involved in manufacturing chips suitable for AI. The precision required for the latest generation of semiconductors, often measured in mere nanometers, pushes the limits of current manufacturing capabilities. The intricate designs necessitate advanced lithography and etching processes that are not only expensive but also require a high degree of expertise and precision engineering. As chip features shrink, the likelihood of defects increases, leading to lower yields and higher costs. In addition to manufacturing difficulties, the materials used in chip production are themselves a bottleneck. The industry's reliance on rare materials, which are geopolitically sensitive, adds a layer of complexity and risk. Innovations in materials science are essential to discover and implement alternative materials that can meet the performance standards required by next-generation AI systems while also being more abundantly available and sustainable.
The manufacturing technology itself must evolve to keep pace with the demands of AI. This includes advancements in 3D packaging technology, which allows for higher integration of chip components and better performance. Furthermore, new semiconductor materials like gallium nitride and silicon carbide offer superior electrical efficiency, which could prove pivotal for power-hungry AI applications. The development and refinement of these technologies are crucial to overcoming the current limitations in chip production.
Against this backdrop, OpenAI CEO Sam Altman recently said that he is seeking about $7 trillion of investment to reshape the global chip industry and create a network of AI chip factories. This project aims to address the increasing demand for chips capable of supporting AI workloads. Altman's plan includes developing a global network of fabrication plants and increasing the global supply of semiconductors for AI applications. His vision faces skepticism because the colossal sum he is seeking significantly exceeds the current size of the entire global semiconductor market. Despite challenges such as shortages of skilled workers and material resources in the semiconductor industry, Altman is actively engaging with potential investors, including Middle Eastern investors and chip manufacturing companies. As we saw in the previous piece, these shortages also take on a geopolitical and geo-economic dimension, as the global AI chip industry is now strongly controlled by the US and its allies. Developing or redirecting production flows would therefore also radically change global power dynamics, which could lead to interesting partnerships between companies and countries: how would the US government react if China helped an American company like OpenAI invest in and develop new chip factories? Or if a conglomerate of companies with sufficient economic and political gravity (see below) moved production facilities away from countries and regions in the American sphere of influence to more neutral ground, e.g. India or Indonesia? These considerations show that economic interests could easily clash with strategic imperatives.
To bridge the gap between the complexities of AI chip demand and the broader geopolitical implications described above, we can now examine how these elements are intertwined within the fabric of the global technology and economy of AI chips. As the discussion on AI chip demand highlights, it is not just technological and logistical hurdles that shape the scaling of production, but also geopolitical and economic forces. This transition from the immediate challenges of chip production to the strategic maneuvers of companies within this sector serves as a stepping stone to understanding the layered impacts of AI chip development. At FreedomLab Thinktank, we use the framework of the Stack to understand the anatomy of digital systems as a layered structure of technological and non-technological components; it can serve as a tool for strategic analysis and decision-making. As such, we can analyze how companies and countries are positioning and orienting themselves along the AI Stack. We start with companies, as there have been major developments in recent months.
In this part, I will ‘grade’ the AI-driven companies of the ‘Magnificent Seven’. Riding the wave of the 2023 AI boom, these seven largest US-listed companies (the familiar Big Tech companies Apple, Microsoft, Google parent Alphabet, Amazon, and Meta Platforms, as well as two new entrants, Nvidia and Tesla) saw their combined market capitalization soar by 74% in 2023, a gain of $5.1 trillion to roughly $12 trillion, nearly triple Germany’s GDP. I will grade each company’s strategy based on my estimate of how likely it is to acquire and/or sustain a dominant position in the AI Stack.
In recent months, Nvidia has become the most important private company in the semiconductor AI industry, as its chips are crucial for training and operating AI systems. Nvidia’s valuation has surged more than 160% since the start of the year, to $3.1 trillion, briefly making it the world’s most valuable company in June. With this additional capital and war chest, it is now moving ‘up the Stack’: Nvidia's CUDA software platform has become the industry benchmark that allows developers to use GPUs for general-purpose processing. This integration ensures that Nvidia’s hardware is optimized for the most demanding AI tasks, from machine learning to autonomous vehicles. Nvidia's CEO, Jensen Huang, argues that the shift to upgrade data centers with the chips needed for training powerful AI models is still in its early phases, that it will require roughly $2 trillion of spending to equip all the buildings and computers with chips like Nvidia's, and that the company will make its own investments along the way. By controlling both the hardware and the software, Nvidia is moving from the hard infrastructure layer to higher layers. That means it could take on a stronger role in consumers’ lives, e.g. by developing its own smartphone or AI hardware for the smart (gaming) home. Given the allure of its founder, Jensen Huang, with his rock-star leather jacket, and its strong position in the gaming and developer community, Nvidia could build on both its soft power and its hard chip power to become a full-Stack AI company.
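To give a flavor of why CUDA became the benchmark for general-purpose GPU computing: its core idea is that a developer writes one small ‘kernel’ that processes a single element, and the platform launches that kernel across thousands of GPU threads at once. The sketch below mimics this data-parallel pattern on the CPU with a Python thread pool; it illustrates the programming model only and is not actual CUDA code.

```python
# CPU-only sketch of the CUDA-style data-parallel model: one per-element
# kernel, "launched" over many indices in parallel. Real CUDA kernels run
# on the GPU; here Python's thread pool stands in for the grid of threads.
from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(i, a, x, y, out):
    """Per-element kernel: out[i] = a * x[i] + y[i] (SAXPY, a classic GPGPU demo)."""
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """'Launch' the kernel over n indices, as a CUDA grid of threads would."""
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda i: kernel(i, *args), range(n)))

n = 8
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n
launch(saxpy_kernel, n, 2.0, x, y, out)
print(out)  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

The appeal of this model is that the same tiny kernel scales from eight elements to billions simply by launching more threads, which is exactly the shape of the matrix-heavy workloads that dominate AI.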
Nvidia’s AI Stack rating: 9
Besides Nvidia, other key players stand out not only for their market size but also for their strategic foresight and technological advancements. Amazon, the (primarily) e-commerce platform, is one example: it designs its own chips for generative AI on its cloud computing services, although fabrication is done by third-party semiconductor foundries, most notably TSMC. This shift is part of a broader trend of tech giants bringing chip design in-house, allowing for customization that serves specific cloud-based AI tasks better than off-the-shelf components. Amazon’s Graviton and Inferentia chips are tailored for its AWS cloud computing environment, optimizing operational efficiency and reducing costs. By using its own chips, Amazon can better integrate hardware and software, resulting in improved performance for AI services such as voice recognition, language translation, and data analytics. This move not only enhances AWS's capabilities but also gives Amazon greater control over its supply chain, mitigating the risks associated with supply chain disruptions and fluctuations in supplier pricing. Starting from its platform function, Amazon is thus pursuing vertical integration to tailor its hardware specifically to the needs of its cloud services. Through this approach, Amazon not only secures a competitive edge by enhancing the efficiency and cost-effectiveness of its services but also gains greater control over the technological roadmap of its infrastructure, ensuring that it can rapidly adapt to the evolving demands of AI applications. However, its partnerships with AI companies (e.g. Anthropic) are not exclusive, and its data ownership is limited to mostly retail interactions, meaning it has few unique resources that would help it dominate the AI Stack.
Amazon’s AI Stack rating: 7
Being a similar platform company, Meta is now adopting an open-source strategy (see other article), aiming to develop the leading AI models and circumvent the bottlenecks described above. To this end, CEO Mark Zuckerberg outlined Meta’s ‘future roadmap’ for AI last month, stating that it will require a “massive compute infrastructure”. Similar to Amazon, Meta will deploy its own custom-designed chips in its data centers in 2024, although fabrication is done by other companies. Still, Zuckerberg stressed that the reason Meta could eventually lead the ‘AI race’ over its competitors is the company’s hold over large numbers of valuable Nvidia H100 graphics cards to train its AI models. Meta is, of course, also sitting on a huge pile of training data from its social media empire, including Instagram, Facebook, and WhatsApp. Its latest Llama LLM is therefore positioned as outperforming rivals on natural-sounding conversations. Furthermore, given that Yann LeCun, a leading AI scientist, is its Chief Scientist, overseeing Meta’s AI projects and exploring new ways to create general AI, one could speculate that Meta will be the company that tries new, and necessary, ways to boost the capabilities of generative AI.
Meta’s AI Stack rating: 7.5
Apple has historically prioritized control over its entire Stack, from hardware to software, to deliver a seamless user experience across its ecosystem of devices and thus to integrate the ‘Apple style’ into all of its products. In the realm of AI and chip development, Apple continues this approach by designing its own processors, including the A series for iPhones and the M series for Macs. These chips incorporate advanced AI capabilities for tasks ranging from facial recognition to augmented reality, enhancing device performance while maintaining privacy and security. By controlling chip design, Apple ensures that its hardware is tightly integrated with its software, offering optimized performance for AI-driven applications. This strategy not only differentiates Apple's products in the market but also reduces its dependency on external chip manufacturers, thereby mitigating risks related to supply chain disruptions. This month, Apple announced ‘Apple Intelligence’, built on its proprietary on-device foundation model. This model is far smaller than the foundation models behind OpenAI’s ChatGPT, but it will bring efficient and speedy generative AI to Apple’s massive hardware suite. Furthermore, its focus on local inference and edge computing allows Apple to play the role of differentiator by doubling down on privacy and security, positioning itself as the ‘moral’ generative AI developer compared to other parties. Therefore, although Apple is fairly late to the game, it remains an interesting player given its strong and tight integration of the higher layers of the Stack.
Apple’s AI Stack rating: 8.5
Alphabet (Google’s parent company), on the other hand, leverages its dominance in data and AI algorithms to strengthen its position within the AI Stack. Google's development of the Tensor Processing Unit (TPU) is a testament to its commitment to leading in AI hardware. TPUs are custom-designed to accelerate machine learning workloads, significantly enhancing the performance of Google's AI services, such as search, voice recognition, and translation. Google Cloud also offers access to TPUs, enabling developers and businesses to leverage Google's AI infrastructure for their own applications. By developing proprietary AI chips, Google not only boosts the performance of its services but also positions itself as a key player in the cloud computing and AI markets, competing directly with other tech giants and specialized AI chip companies. Similar to Meta, Google owns a huge pile of proprietary data from its consumer-facing businesses. However, its Gemini models still trail OpenAI’s, and the company has recently blundered with both its chatbot and its image-generation tools. As such, it remains to be seen whether Google can get its AI act together before being outcompeted.
Google’s AI Stack rating: 6
Who would have thought a few years ago that Microsoft, the OS company that looked rather dull compared to Apple in the 1990s and to the youthful social media Big Techs of the 2000s and early 2010s, would re-emerge as the world’s most valuable company (with a market cap over $3.3 trillion as of June 23rd)? Microsoft’s stellar rise on the AI scene was initially driven by being an early and the largest investor in OpenAI, the company that develops the best-performing LLMs. Microsoft is now rolling out ChatGPT’s models across its software suite via Copilot. But it has other partnerships with leading AI companies as well, such as Inflection and Mistral, and it is leveraging its AI leadership by hiring Mustafa Suleyman to spearhead a new division aimed at consumer AI applications. This is part of Microsoft's broader strategy of partnering with innovative startups and forging strategic collaborations instead of outright acquisitions in order to navigate regulatory landscapes. Despite focusing on enterprise AI, Microsoft is expanding into consumer markets, eyeing growth through AI-enhanced products like Copilot and Bing. Given the company’s strong foothold in the cloud with its Azure platform, its huge investments in AI hardware, and its productivity-directed applications and orientation (which could well be the first and only ‘killer app’ for generative AI, making it a fundamental, general-purpose technology), it seems that Microsoft has put all its chess pieces into position along the AI Stack and will now let its strategy pay off in the coming years.
Microsoft’s AI Stack rating: 8.5
This little exercise shows that thinking in terms of the AI Stack can help us understand corporate strategies and provide a framework for making strategic outlooks for companies: where they are investing and why, what their likely next ventures are, and which synergies they aim to establish within their own company as well as with other companies or industries. This approach enables a deeper analysis of how different layers of technology, ranging from hardware and software to the production ecosystem, the value chain, and geopolitical interests, interact with and influence each other.
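As a thought experiment, the Stack framework could even be operationalized as a simple scoring model: rate a company on each layer, then aggregate the layer scores into a single Stack rating like the grades given above. The layer names and per-layer scores below are hypothetical assumptions for illustration only, not FreedomLab's actual methodology; only the company names come from the text.

```python
# Illustrative sketch: the AI Stack as a per-layer scorecard.
# Layer names and scores are hypothetical, chosen for demonstration.
LAYERS = ["chips", "cloud_infrastructure", "models", "data", "applications"]

def stack_rating(scores):
    """Average a company's per-layer scores (0-10) into one AI Stack rating."""
    return round(sum(scores[layer] for layer in LAYERS) / len(LAYERS), 1)

# Hypothetical per-layer assessment of Nvidia, for illustration only.
nvidia = {"chips": 10, "cloud_infrastructure": 8, "models": 8,
          "data": 8, "applications": 8}
print(stack_rating(nvidia))  # 8.4
```

A real analysis would of course weight the layers (chip control arguably matters more than applications today) and justify each score, but even this crude aggregation makes explicit which layer drives a company's overall position.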
In this piece, we have conceptualized the generative AI industry through the lens of the Stack in order to identify how companies strategically navigate and develop their own generative AI Stacks. The perspective of the Stack reveals the interconnectedness of AI innovations, where advancements in one layer, such as chips, can significantly impact the functionality and performance of others, illustrating the complexity of scaling generative AI technologies. Furthermore, understanding the generative AI chip Stack is essential for grasping the geopolitical implications of AI, as control over different layers can confer significant economic and strategic advantages on companies and nations, and thus helps us navigate the global power dynamics around generative AI and its dedicated chips. Analyzing the Stack strategies of the biggest players in the generative AI competition hopefully offers a handle on how to think about the future of generative AI.