Hundreds of thousands of Grok chats exposed in Google results

Liv McMahon

Technology reporter

Getty Images: Grok's logo displayed on a smartphone, with a larger, blurred version on the backdrop behind it.

Hundreds of thousands of user conversations with Elon Musk’s artificial intelligence (AI) chatbot Grok have been exposed in search engine results – seemingly without users’ knowledge.

Unique links are created when Grok users press a button to share a transcript of their conversation – but as well as sharing the chat with the intended recipient, the button also appears to have made the chats searchable online.

A Google search on Thursday revealed it had indexed nearly 300,000 Grok conversations.

It has led one expert to describe AI chatbots as a “privacy disaster in progress”.

The BBC has approached X for comment.

The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google.

Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.

Some indexed transcripts also showed users’ attempts to test the limits on what Grok would say or do.

In one example seen by the BBC, the chatbot provided detailed instructions on how to make a Class A drug in a lab.

It is not the first time that people’s conversations with AI chatbots have appeared more widely than they perhaps realised when using “share” functions.

OpenAI recently rowed back on an “experiment” that saw ChatGPT conversations appear in search engine results when shared by users.

A spokesperson told BBC News at the time it had been “testing ways to make it easier to share helpful conversations, while keeping users in control”.

They said user chats were private by default and users had to explicitly opt in to sharing them.

Earlier this year, Meta faced criticism after users’ shared conversations with its chatbot Meta AI appeared in a public “discover” feed on its app.

‘Privacy disaster’

While users’ account details may be anonymised or obscured in shared chatbot transcripts, their prompts may still contain – and risk revealing – personal, sensitive information about someone.

Experts say this highlights mounting concerns over users’ privacy.

“AI chatbots are a privacy disaster in progress,” Prof Luc Rocher, associate professor at the Oxford Internet Institute, told the BBC.

They said “leaked conversations” from chatbots have divulged user information ranging from full names and locations to sensitive details about their mental health, business operations or relationships.

“Once leaked online, these conversations will stay there forever,” they added.

Meanwhile Carissa Veliz, associate professor in philosophy at Oxford University’s Institute for Ethics in AI, said it was “problematic” that users were not told their shared chats would appear in search results.

“Our technology doesn’t even tell us what it’s doing with our data, and that’s a problem,” she said.
