The different narratives that we tell ourselves about the impact of AI on the labor market shape its development, and at this very moment several such scenarios are taking shape. As we work through three narratives: (1) the Goldman Sachs free-for-all, (2) the sober economists' view, and (3) the perspective peddlers' view, we explore ideas about the value of work, from the cold, calculated economics of production to the impact on society and the labor market as a whole. Let's go through some contemporary stories that shape our conception of how AI is going to affect the way we work, starting with the scenario that expects everything to change drastically.
The boundless possible improvements to society through the introduction of AI seem vertiginous, perhaps to no one as much as to the Goldman Sachs analyst. They argue that AI could finally bring forth a new era of productivity, basically unseen since the end of the Second World War, and, ideally, be the tide that raises all boats. Not every estimate can be conservative and guarded; are we never allowed to get excited? When the music starts playing, you dance.
According to a Goldman Sachs estimate, we could be looking at an increase of as much as 7% in global GDP, and an increase in productivity growth of 1.5% per year over a ten-year period. While GDP is a popular measure of growth, the number to really keep an eye on is productivity growth, which has been lagging in many industrialized countries for decades, leaving many a central bank and finance ministry holding their heads in despair. AI, in this view, could turn around the downward trend of recent decades. The Goldman Sachs predictions rest on the controversial assumption that AI will be able to create intellectual content that is functionally indistinguishable from human-made content, producing like a skilled, and infinitely faster, human professional.
Moreover, in this scenario, we typically do not need to worry about massive job loss. While about two-thirds of US occupations are exposed to some degree of automation, with 25–50% of their workload potentially replaced, jobs displaced by automation have historically been offset by the creation of new jobs. In fact, 85% of employment growth since the 1970s has come from new occupations driven by technology; hopefully that is enough to offset the 300 million jobs Goldman Sachs projects could be displaced by AI-driven automation. Someone optimistic about the productivity gains coming from AI, like the Goldman study, could reiterate the classic rebuttal that the critics are treating the economy as a zero-sum game, for instance by accusing them of falling for the lump-of-labor fallacy: the belief that there is a finite and fixed amount of work in the economy. In essence, there is no need to worry; the productivity increases due to AI will be large enough to offset the displacement effects.
In the end, they view the technological disruption that AI is likely to generate as being of the same kind as the many technological disruptions we have lived through, especially over the last 200 years. Why would it be so different, after all, to introduce a technology like AI, compared to the introduction of electricity, automated weaving machines, or even the windmill? All of these technologies have had innumerable positive effects on the world economy, and during the introduction of each of them there were people and organized groups (like the Luddites and medieval peasants, not unions yet!) who fought tooth and nail to prevent the increasing automation of their professions. Yet there was no long-term mass unemployment.
In the near future, they say, the generative AI tools that are currently being developed will be layered into all of our existing tasks through software packages and technology platforms, and the productivity gains will spread throughout the economy. These productivity effects are so large, in fact, that no thought need be wasted on the distribution of the effects, since the gains offset any displacement of workers in the economy. Thus, by raising workers' marginal productivity, the introduction of AI technology may serve as a complement to human endeavor, where anyone can learn to become more productive through training and then stand a very good chance of being employed. Pure automation could cause some displacement; however, this would be remedied quickly by explosive growth in adjacent sectors as a result of the automation, which leads to new tasks and the expansion of existing professions that are not automated (yet!).
Before you run off to celebrate your productivity increases (as you would), and recline in your comfy office chair watching the profits soar, there's someone who'd like to have a word with you. Many economists, like Daron Acemoglu, Simon Johnson, and David Autor, take a different view: the outlook is far more uncertain, and probably much less enticing, than your new friends at Goldman Sachs would have you believe.
Let's first lay out their argument against high productivity growth. According to an estimate by the economist Daron Acemoglu, total factor productivity (TFP) across all tasks is likely to increase by only about 0.06% per annum. Even when the investment boom created by AI is factored in, his estimate puts the resulting GDP increase at around 1–1.5% over the coming decade, a far cry from the Goldman Sachs estimate of a 7% increase in global GDP over the same period.
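To get a sense of how far apart these estimates are, we can compound them over the same ten-year horizon (a rough back-of-the-envelope calculation using only the figures cited above, and ignoring that the two camps are not measuring exactly the same thing):

\[
\underbrace{(1 + 0.015)^{10} - 1 \approx 16\%}_{\text{Goldman Sachs, productivity growth}}
\qquad \text{versus} \qquad
\underbrace{(1 + 0.0006)^{10} - 1 \approx 0.6\%}_{\text{Acemoglu, TFP}}
\]

Even allowing for the difference between labor productivity and TFP, the two narratives are more than an order of magnitude apart.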
The reason for this low additional productivity growth comes down largely to how many tasks one believes will be automated. As we saw in the previous scenario, some believe that as much as 50% of tasks will be automated in two-thirds of occupations, or roughly one third of all tasks in the economy. Acemoglu et al. believe this number to be around 3.45–4.6% of total tasks, depending on whether you factor in tasks that are particularly difficult for an AI system based on machine learning to automate. These are tasks that do not have a definite measure of success, and that involve more difficult, context-dependent variables, which make learning from outside observation much harder. Whether we include the especially hard-to-automate tasks or not, we are talking about a whole different category of change compared to the Goldman Sachs estimates.
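For comparison, under the simplifying assumption that tasks are spread roughly evenly across occupations, the upper end of the Goldman figures implies an automated share of

\[
\tfrac{2}{3} \times 50\% \approx 33\% \ \text{of all tasks,}
\]

against Acemoglu et al.'s 3.45–4.6%, close to an order of magnitude lower.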
The effects of AI may themselves be widely distributed, since they touch most work in some way and therefore may not disrupt specific industries deeply enough to displace specific subsections of workers. However, according to this narrative there is also no evidence that AI will reduce inequality or produce any wage growth for these groups. To understand this dynamic, the concept of the “productivity bandwagon”, proposed in Acemoglu and Johnson's book Power and Progress, is helpful.
For the majority of people to see the benefit of productivity growth, this productivity has to be “anchored”, connected to the improved efficiency of human labor, as opposed to automation. In economic terms, the measure of success would be raising workers' marginal productivity, rather than simply raising average productivity. This nuances the “all boats rise with the tide” perspective from the previous scenario: the ship of labor will only rise on a sea of increased worker marginal productivity. An example is the Ford production plants, where increased mechanization actually led to an increase in the demand for workers, thanks to an intensive upskilling effort by the company, as detailed in Power and Progress.
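The distinction can be put in standard textbook terms. With output Y produced by labor L (among other inputs), average productivity is output per worker, while marginal productivity is the extra output from one additional unit of labor; in the standard competitive framework, it is the latter that wages tend to track:

\[
\text{average productivity} = \frac{Y}{L},
\qquad
\text{marginal productivity} = \frac{\partial Y}{\partial L}.
\]

A technology can raise Y/L simply by replacing workers with machines while leaving the marginal product of labor flat, or even lowering it; only the second kind of gain sets the productivity bandwagon in motion.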
Will the type of technological change that we'll see through AI have the same impact on society as earlier disruptive technologies? This seems to be a fundamental bet in the more optimistic Goldman report, and in this scenario it is challenged on several fronts:
For the labor market, this “so-so automation”, that is, automation without substantial productivity gains, spells trouble. The displacement effect, where capital (machines) takes over tasks previously performed by labor, is not countered by a productivity effect and the reinstatement effect that follows from it, which raise demand for labor in other, non-automated tasks, often in adjacent, ancillary industries, and “reinstate” labor in new tasks.
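A stylized way to write this down, loosely in the spirit of Acemoglu and Restrepo's task framework (their actual formulation is considerably more involved), is to decompose the change in the demand for labor into the three effects just mentioned:

\[
\Delta\, \text{labor demand} \;\approx\; \text{productivity effect} \;+\; \text{reinstatement effect} \;-\; \text{displacement effect}.
\]

“So-so” automation is the case where the displacement term is large while the other two terms are small, so the net effect on labor demand is negative.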
This scenario of lackluster growth will squeeze middle-income workers, its advocates argue, and make labor worse off compared to capital, concentrating even more wealth in the hands of a very few tech companies. Displaced middle-income workers would probably try to find work in lower-skilled sectors, thus increasing competition and reducing wage growth there. The share of workers in jobs with an automatability of at least 70% is between 6% and 12% in OECD countries, but even though the total share of tasks performed by labor might initially fall by quite a substantial amount, how the actual occupations will change remains to be seen. It is a pretty dire picture these economists sketch, with the caveat that we actually can do something about it by making sure we follow the productivity bandwagon.
While some see the massive effects of AI as next to inevitable, and others as negligible or outright dangerous, a third group, like the tech pundit Benedict Evans in his eponymous newsletter, sees the development differently. This third scenario urges us to take a step back before jumping to conclusions about AI's impact on the labor market. Its roots lie in understanding technology hype cycles: first come inflated expectations, followed by a period of disillusionment, and finally a return to reality, that is, realistic expectations.
The development of generative AI has indeed been different from other technological disruptors so far, mainly because of its accessibility. While it took 20 years for 20% of US retail to move online and 25 years for a third of US companies to adopt cloud technologies, Evans explains, it took just two months for ChatGPT to pass 100 million users. This would be an amazing development, were it not for the fact that most of those users have not come back to the product. So, at least in the short term, this narrative argues, it seems like we did it again: we overestimated a technology and it all turned out to be a huge bubble, like so many others before it (the dotcom bubble comes to mind).
One major problem arises from the expected speed of deployment of the technology, and the expectations tied to that speed. The idea that generative AI can be deployed everywhere within weeks, and be powerful enough to perform the same tasks as people, is typical of technological solutionism. This is akin to the Goldman scenario, where one of the assumptions behind the fantastic growth is that the output of AI will be practically indistinguishable from human performance. Now, this might happen eventually, but we are pretty far from it at the moment, and therefore current growth estimates could be mistaken.
The main contention of Benedict Evans and others in this camp is that the current mismatch between expectations and the actually achieved results of AI technology comes from the fact that we are following a hype cycle, like so many times before. After an initial stage of AI utopianism, the idealized technology meets the reality of its limitations when implemented for actually useful tasks in a complex reality. The eagerness to get the AI revolution started means that we have skipped the essential phase of finding product-market fit during product development, which has led to a haphazard implementation in which over 80% of AI-related projects fail (compared to an average rate of 40% for all projects, so double the failure rate).
The Gartner Hype Cycle is especially useful for understanding this development: it starts with inflated expectations, followed by disillusionment, and then a slow climb to actual productivity.
One reason for the heightened expectations, both in speed and capabilities, comes from the absolutely massive amounts of investment poured into AI by big tech. According to some estimates, AI technology would have to produce a return on investment to the tune of 600 billion dollars a year, poised to rise to one trillion dollars if the current trend of increasing investment holds through the year. With such massive resources behind it, and the considerable opportunity cost, it makes sense for expectations to be very high indeed.
However, as Evans points out, the wildest dreams of the dotcom era did eventually come true; it just took some more time for the technology to mature and for users to find a use for its massive potential. The same thing might be happening today with AI technology: it may just be a question of timing our expectations accordingly!
If we brought the Goldman Sachs representatives, Acemoglu et al., and Benedict Evans into a room together, we'd end up with some lines of contention and some of agreement. Evans and Acemoglu might agree on taking a more skeptical approach to the gains expected from the introduction of generative AI technologies, citing delayed effects and low effects on productivity growth, respectively. On the other hand, Evans introduces a temporal axis and seems to conclude that AI might be more like other technologies after all, as they all follow a similar hype cycle; in that respect he is closer to the Goldman Sachs analyst. Acemoglu, by contrast, questions whether AI will be a major disruptor in terms of productivity growth at all. Here, too, Evans adds the importance of managing expectations, questioning whether we will see the full benefits of AI or create a bubble that bursts before we get the chance.
In conclusion, both the type of disruption and the timing of expectations will be important in determining the change to come in the labor market. Recently, some of the belief in the potential of AI has been marred by a lack of confidence from one of the definitive arbiters of these things: the market. In light of this downturn, we could venture an educated guess about where in the hype cycle we are, namely in the rather steep fall from the peak of inflated expectations toward the trough of disillusionment, as seen in the graph above.
Now, instead of becoming entirely disillusioned, we might turn our attention to other potential applications of AI technology. Perhaps the focus should be more on complementarity than on automation? Perhaps we should keep in mind the history of implementing other technologies in the past, and the lag in implementation that finding product-market fit entails? In any case, we can be sure that our way of thinking about AI is evolving.
In the end, it's unlikely that any of our scenarios has the ultimate answer to the impact of AI on labor. But for our purposes, at this stage in the development of AI, we don't necessarily need a perfect prediction. What we have instead are some points on the graph, so that the next time a development occurs, we have some likely scenarios to contextualize it with, and can see towards which of them current developments are leaning.