They found that interactivity enhanced perceived playfulness and users’ intention to engage with an app, and that this heightened engagement was accompanied by a decrease in privacy concerns. Surprisingly, Sundar said, message interactivity, which the researchers thought would increase user vigilance, instead distracted users from thinking about the personal information they may be sharing with the system. That is, the way AI chatbots operate today, building responses based on a user’s prior inputs, makes individuals less likely to think about the sensitive information they may be sharing, according to the researchers.
“Nowadays, when users engage with AI agents, there’s a lot of back-and-forth conversation, and because the experience is so engaging, they forget that they need to be vigilant about the information they share with these systems,” said lead author Jiaqi Agnes Bao, assistant professor of strategic communication at the University of South Dakota, who completed the research during her doctoral work at Penn State. “We wanted to understand how to better design an interface to make sure users are aware of their information disclosure.”
While user vigilance plays a large part in preventing the unintended disclosure of personal information, app and AI developers can balance playfulness and privacy concerns through design choices that result in win-win situations for individuals and companies alike, Bao said.
“We found that if both message interactivity and modality interactivity are designed to operate in tandem, it could cause users to pause and reflect,” she said. “So, when a user converses with an AI chatbot, a pop-up button asking the user to rate their experience or leave comments on how to improve their tailored responses can give users pause to think about the kind of information they share with the system and help the company provide a better customized experience.”
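As a rough illustration of that design idea (a sketch of the concept, not code from the study), a chatbot’s message loop could interleave a brief rating or reflection prompt every few turns. The interval, the prompt wording and the `generate_reply` placeholder below are assumptions made for the example.

```python
# Hypothetical sketch: interleave a short "check-in" prompt into a chat loop,
# so a rating prompt (modality interactivity) briefly interrupts the
# conversation (message interactivity) and gives the user a moment to reflect.

FEEDBACK_INTERVAL = 3  # ask for a quick rating every 3 user turns (arbitrary choice)


def generate_reply(user_message: str) -> str:
    # Placeholder for a real chatbot backend (e.g., an LLM API call).
    return f"(bot) You said: {user_message}"


def chat_loop() -> None:
    turn = 0
    while True:
        user_message = input("You: ")
        if user_message.lower() in {"quit", "exit"}:
            break
        print(generate_reply(user_message))
        turn += 1
        if turn % FEEDBACK_INTERVAL == 0:
            # The interruption itself is the point: a pause in the back-and-forth
            # that prompts the user to think about what they have shared so far.
            rating = input("Quick check-in: rate the responses so far (1-5), "
                           "or press Enter to skip: ")
            if rating.strip():
                print("Thanks; your rating was recorded.")


if __name__ == "__main__":
    chat_loop()
```

In a deployed app the same check-in would more likely appear as a pop-up or in-chat widget rather than a console prompt, but the timing logic, a prompt inserted periodically into the conversation, reflects the kind of design the researchers describe.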
AI platforms’ responsibility goes beyond simply giving users the option to share or not share personal information via conversation, said study co-author Yongnam Jung, a doctoral candidate at Penn State.
“It’s not just about notifying users, but about helping them make informed choices, which is the responsible way to build trust between platforms and users,” she added.
The study builds on the team’s earlier research, which revealed similar patterns, according to the researchers. Together, they said, the two studies underscore a critical trade-off: while interactivity enhances the user experience by highlighting the app’s benefits, it also draws attention away from potential privacy risks.
Generative AI, for the most part and in most application domains, is based on message interactivity, which is conversational in nature, said Sundar, who is also the director of Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI). He added that this study’s finding challenges current thinking among designers that, unlike clicking and swiping tools, conversation-based tools make people more cognitively alert to negative aspects, like privacy concerns.
“In reality, conversation-based tools are turning out to be a playful exercise, and we’re seeing this reflected in the larger discourse on generative AI where there are all kinds of stories about people getting so drawn into conversations that they do things that seem illogical,” he said. “They are following the advice of generative AI tools for very high-stakes decision making. In some ways, our study is a cautionary tale for this newer suite of generative AI tools. Perhaps inserting a pop-up or other modality interactivity tools in the middle of a conversation may stem the flow of this mesmerizing, playful interaction and jerk users into awareness now and then.”