Google says ‘Big Sleep’ AI tool found bug hackers planned to use

Google said a large language model it developed to find vulnerabilities recently discovered a bug that hackers were preparing to use.

Late last year, Google announced an AI agent called Big Sleep — a project that evolved out of large language model-assisted vulnerability research by Google Project Zero and Google DeepMind. The tool actively searches for and finds unknown security vulnerabilities in software.

On Tuesday, Google said Big Sleep managed to discover CVE-2025-6965 — a critical security flaw that the company said was “only known to threat actors and was at risk of being exploited.”

The vulnerability impacts SQLite, an open-source database engine popular among developers. Google claims it was “able to actually predict that a vulnerability was imminently going to be used” and was able to cut it off beforehand. 

“We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild,” the company said. 

A Google spokesperson told Recorded Future News that the company’s threat intelligence group was “able to identify artifacts indicating the threat actors were staging a zero day but could not immediately identify the vulnerability.” 

“The limited indicators were passed along to other Google team members at the zero day initiative who leveraged Big Sleep to isolate the vulnerability the adversary was preparing to exploit in their operations,” they said.

The company declined to elaborate on who the threat actors were or what indicators were discovered. 

In a blog post touting a variety of AI developments, Google said that since Big Sleep debuted in November, the tool has discovered multiple real-world vulnerabilities, “exceeding” the company’s expectations.

Google said it is now using Big Sleep to help secure open-source projects and called AI agents a “game changer” because they “can free up security teams to focus on high-complexity threats, dramatically scaling their impact and reach.”

The tech giant published a white paper on how it built its AI agents in a way that allegedly safeguards privacy, limits potential “rogue actions” and operates with transparency.

Dozens of companies and U.S. government bodies are hard at work developing AI tools built to quickly search for and discover vulnerabilities in code. 

Next month, the U.S. Defense Department will announce the winners of a years-long competition to use AI to create tools that can automatically secure the critical code undergirding prominent systems used across the globe.
