Meta is paying a 24-year-old AI researcher $250 million – echoing the dot-com bubble

By Jeffrey Funk and Gary Smith

Mark Zuckerberg and other AI boosters show they’re players by spending money – not by making profits

We are much further from artificial general intelligence than Mark Zuckerberg, Sam Altman and other tech leaders claim.

During the dot-com bubble, companies showed they were players not by making a profit but by spending money – especially other people’s money. The more you spent, the more important you were.

At the time, one of co-author Gary’s friends told him in all seriousness that making a profit was “so old-economy.” His startup spent money as fast as he could raise it – on stylish offices, ergonomic tables, Aeron chairs, extravagant parties and advertisements for products that didn’t yet exist.

Now we have the AI bubble, where money-losing companies are said to be valued in the hundreds of billions of dollars. As during the dot-com bubble, AI companies show they are players by spending money, not by making profits. One reflection of this current mindset is the drunken-sailor spending for AI engineers and researchers – most notably by Meta Platforms (META) CEO Mark Zuckerberg, who recently signed a 24-year-old AI researcher for $250 million over four years – $35 million more than NBA megastar Stephen Curry’s four-year contract.

The Meta Superintelligence Labs, or MSL, is Zuckerberg’s moonshot to surpass OpenAI, Alphabet’s Google (GOOG) (GOOGL), Anthropic, Microsoft (MSFT) and others in the race to dominate artificial general intelligence. Zuckerberg reportedly has been dangling $100 million bonuses to poach AI talent from competitors (including OpenAI, Google and Anthropic), though Dario Amodei, Anthropic’s CEO, recently claimed that his team has “consistently declined the offers” and that some staff wouldn’t even talk to Zuckerberg.

Meta reportedly has now implemented a hiring freeze after its spree, but it’s hardly alone in this intellectual arms race. Microsoft hired two dozen researchers from Google, while dozens more have played musical chairs among AI companies – to the point that, according to a Financial Times report, some freshly arrived Meta recruits have already headed for the exits. The Wall Street Journal reported that the median annual salary for engineers rose to $280,000 from $220,000 between August 2022 and early 2024. The Journal story quotes one recruiter as saying that “the median salary for six candidates who had consulted the career-services platform about job offers from OpenAI was $925,000 including bonus and equity.”

These AI salaries far outpace those given to famous researchers of the past, even when adjusted for inflation. For example, Meta’s $250 million man is getting 327 times what Robert Oppenheimer earned while developing the atomic bomb, five times Thomas Watson’s peak compensation as CEO of IBM (IBM) in 1941, and many times more than what Claude Shannon was paid in 1948 when he created information theory at Bell Labs.

Why spend so much money? Today’s tech bros believe that artificial general intelligence is imminent and want to be the first to commercialize it. This bet rests on three questionable assumptions: that AGI is indeed imminent; that the commercial value of large language models, or LLMs, will far exceed their costs; and that only the best of the best researchers can get us where we want to go.


As for AGI being imminent, Yann LeCun, vice president and chief AI scientist at Meta, has called AI “dumber than a cat” and said that it will be years before artificial general intelligence is achieved. It is increasingly recognized that scaling up to ever-larger training databases will not get us to AGI. If anything, training on databases that have been increasingly polluted by LLM rubbish may create a rubbish cycle.

ChatGPT and other large language models are not designed to understand – and, in practice, are incapable of understanding – how the text they input and output relates to the real world we live in. They consequently cannot be relied on for tasks that require critical thinking or even common sense. Realizing this, LLM companies use thousands of “trainers” to put millions (perhaps billions) of bandages on LLM missteps, and build in links to calculators that can make accurate mathematical calculations (if given the right inputs, which is still dodgy). None of this will give LLMs intelligence in any meaningful sense of the word.

Regarding the assumption that the commercial value of LLMs will far exceed their costs, their inherent stupidity makes it risky to trust LLMs for decisions in which mistakes are costly. For example, OpenAI CEO Sam Altman has been touting the use of ChatGPT for medical advice – but a recent paper reported that a man, following ChatGPT’s advice, stopped eating salt, began consuming bromide instead and nearly died of bromide poisoning.


It is difficult to imagine a large commercial payoff from a gee-whiz technology that can’t be trusted to do consequential things. Economic history is filled with great products and services that aren’t glitzy but provide great value – for example, economist Robert Gordon notes that few people would trade indoor plumbing for a smartphone. Would you trade indoor plumbing for ChatGPT?

The third assumption – that only the best of the best can get us where we want to go – is actually an admission that we are much further from AGI than the tech bros claim. A history of Bell Labs noted that, during its heyday in the 1950s, it emphasized hiring Midwestern farm boys rather than people with top academic degrees, and yet the research done there by people paid modest salaries led to 11 Nobel prizes. Transistors, integrated circuits, lasers, LEDs, the internet and countless other great technologies were successfully commercialized by thousands of engineers, many of whom were paid modestly and won wide respect only long after the fact.

Why does anyone believe AI should be different? When the history of the AI bubble is written, future generations will surely be amused.


-Jeffrey Funk -Gary Smith

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.


Copyright (c) 2025 Dow Jones & Company, Inc.
