Will AGI happen sooner than we expect?
Most experts agree that it is a question of when, rather than if, we achieve artificial general intelligence (AGI). The term’s definition famously keeps shifting, but in common parlance it tends to refer to a form of AI able to match human capabilities in every task.
This is a separate conversation from whether gen-AI can keep improving. Gen-AI could theoretically make dramatic leaps while still remaining ‘narrow AI’, that is, highly capable but only within a defined scope. Equally, gen-AI could plateau without negating the possibility of AGI, since progress could come from other areas of the AI landscape.
So, will AGI emerge and, if so, when?
For business leaders this isn’t a purely philosophical question; it also has practical implications for workforce strategy. Right now, AI’s limitations mean it is generally used as a tool to augment rather than replace human labour. But if AGI is as capable as a human, that may no longer hold true.
Perhaps AGI will give us firms where much of the workforce is AI. Or perhaps it will lead to a future where humans are still widely employed, but in specific capacities: defining the problems AI works on, ensuring it adheres to defined values, and bringing to the table a strain of higher-order creativity that may continue to elude machine learning. Human employment may also shift to roles that require embodiment.
Until recently, surveys of researchers tended to predict that AGI would arrive around 2060. However, with advances in large language models (LLMs), and a belief that the transformer architecture underlying them could provide a pathway to AGI, that date has been pulled forward. In 2023, a survey of experts predicted AGI by 2040. More recently, leading players in AI have suggested we could see AGI before the end of this decade.
The recently published AI 2027 scenario, written by a group of expert authors, vividly describes how current trends in AI development mean AGI could be with us far sooner than we ever imagined. In this version of the future, AI models train ever more powerful AI models, and within just a few years, superintelligent AI far outstrips human geniuses.
However, versions of the future also exist in which progress is far slower, and perhaps never leads to AGI. A recent research paper suggested, at the very least, a less accelerated progression. It looked at reasoning models, large language models enhanced and trained to work through multi-step reasoning tasks, and found that they had “fundamental limitations”.
When presented with highly complex problems, the models faced what the researchers termed a “complete accuracy collapse”. Not only that, but as the models neared collapse, they began “reducing their reasoning effort”. With the air of disappointed schoolteachers, the researchers said they found this “particularly concerning”. And that’s putting aside the fact that frontier models still struggle with basic maths, among other seemingly elementary tasks.
The typical response when an organisation faces an uncertain future is scenario planning. That might seem irrelevant in the face of a paradigm shift as profound as AGI, but there’s an argument that it could still be worthwhile.
It’s arguably unlikely that AGI, if it were ever achieved, would suddenly be everywhere all at once. Is it not more realistic to think it will be implemented in different ways, in different places, at different speeds? That would make it possible to position a business strategically to take advantage of the transition.
To continue that thought experiment, you might ask yourself: where does your organisation rely on having the smartest people? If intelligence becomes a commodity, that may no longer be a competitive differentiator. If you can shore up your moat in other ways, doing so would be prudent and would strengthen your position, with or without the advent of AGI.