Can Your Chatbot Logs Be Used Against You in Court?

“Courts are built on transparency, explainability and fairness, and those are precisely the areas where AI chatbots still struggle,” says Mark Esposito, a professor in the D’Amore-McKim School of Business.

Could AI reshape legal cases? Photo by Matthew Modoono/Northeastern University

We know that a person’s Google history can be used as evidence in court, but what about a conversation with an artificial intelligence chatbot? 

Many people are turning to large language models for everything from life advice to online searching. 

Can those logs be used against them legally? 

Mark Esposito, a professor of international business and strategy in the D’Amore-McKim School of Business and an expert on AI governance, says that, in theory, AI chatbot logs could be used as evidence, but “we’re still far from having the legal and procedural framework” to make that realistic.

“Courts are built on transparency, explainability and fairness, and those are precisely the areas where AI chatbots still struggle,” he says. 

First, there is the issue of how chatbot data is collected and managed, he says. AI companies store every chatbot interaction in a secured location on their servers, he explains.

“Chatbot interactions are permanent by design since every exchange is logged and stored,” he says. “That raises discovery issues (opposing counsel could demand access to everything, even irrelevant bits), and it complicates how courts would treat those logs as evidence.”

Mark Esposito, teaching professor in the D’Amore-McKim School of Business. Photo by Matthew Modoono/Northeastern University

Accountability is also a major concern, he says. 

“If a chatbot generates an argument or recommendation, who’s responsible? The user, the developer or the system itself? Courts need clear attribution and auditability, but most models are still black boxes,” he says. 

There are also data protection rules to consider, since chatbots “tend to overcollect information compared to what is legally ‘proportionate,’” he says. 

While there are many challenges to using AI chatbots in courtrooms, Esposito doesn’t rule out that they may one day play a bigger role. He and other researchers have developed legal clinics to test how these technologies should be used in the future. 

“These are pilots in legal form that are testing hypotheses that are leading to a specific set of behavior,” he says. 

But the bigger questions AI researchers are grappling with right now are much more about privacy, data ownership and surveillance, he says. 

“From there, the legal implications of those things is not too much of a leap,” he says. “We started to recognize that in the last four years under the Biden administration. Now, we are much more oriented around the question, ‘Where’s the data and how is it being used?’ That is where accountability eventually has to be set up.” 
