Microsoft AI CEO Calls Superintelligence an ‘Anti-Goal’

While much of Silicon Valley races to build godlike AI, Microsoft’s AI chief is trying to pump the brakes.

Mustafa Suleyman said on an episode of the “Silicon Valley Girl Podcast” published Saturday that the idea of artificial superintelligence shouldn’t just be avoided. It should be considered an “anti-goal.”

Artificial superintelligence — AI that can reason far beyond human capability — “doesn’t feel like a positive vision of the future,” said Suleyman.

“It would be very hard to contain something like that or align it to our values,” he added.

Suleyman, who cofounded DeepMind before moving to Microsoft, said his team is "trying to build a humanist superintelligence" — one that serves human interests.

Suleyman also said that granting AI anything resembling consciousness or moral status is a mistake.

“These things don’t suffer. They don’t feel pain,” Suleyman said. “They’re just simulating high-quality conversation.”

The debate on superintelligence

Suleyman's comments come as other industry leaders talk openly about building artificial superintelligence, with some predicting it could arrive this decade.

OpenAI CEO Sam Altman has repeatedly described artificial general intelligence — AI that can reason like a human — as the company’s core mission. Altman said earlier this year that OpenAI is already looking beyond AGI to superintelligence.

“Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” Altman said in January.

Altman also said in an interview in September that he’d be very surprised if superintelligence doesn’t emerge by 2030.

Google DeepMind’s cofounder, Demis Hassabis, offered a similar timeline. He said in April that AGI could be achieved “in the next five to 10 years.”

“We’ll have a system that really understands everything around you in very nuanced and deep ways and kind of embedded in your everyday life,” he said.

Other leaders have urged skepticism. Meta’s chief AI scientist, Yann LeCun, said we may still be “decades” away from achieving AGI.

“Most interesting problems scale extremely badly,” LeCun said at the National University of Singapore in April. “You cannot just assume that more data and more compute means smarter AI.”
