Introduction
The last year has seen rapid growth in the public use of generative AI tools and systems, and continued development of the features and performance on offer. Many businesses say they have not yet seen clear benefits from their attempts to use the technology, examples of problematic performance continue to surface, and some worry about the scale of investment potentially creating a new dot-com-style bubble, about the risk of further entrenching dominant technology companies, and about the environmental impact of energy-hungry new technologies.
In this report, we use survey data to document and analyse the public use of, perceptions of, and expectations around generative AI when it comes to news specifically, and society more broadly. Building on a similar report covering the same six countries that we published last year (Fletcher and Nielsen 2024), we look at developments in Argentina, Denmark, France, Japan, the UK, and the US, which together represent a range of different media systems and high/upper middle-income contexts across four continents.
While many developments in and around generative AI concern specific organisational adaptations, a central driver continues to be public uptake, and a central issue continues to be how the public makes sense of these technologies. We focus on these issues here, and examine the public’s use, perceptions, and expectations of generative AI.
Overall, our analysis documents a complicated situation in which people increasingly see AI as being used everywhere, while holding nuanced expectations about its impact across different sectors, for society as a whole, and for them personally. There are plenty of misgivings and reservations about some aspects of generative AI and its effects. At the same time, public use continues to grow rapidly: the percentage of respondents across these six countries who say they have used at least one generative AI system in the last week has nearly doubled, from 18% to 34%, in just one year. To put that into perspective, it took about three years for internet use to grow in a similar fashion in these countries in the late 1990s and early 2000s – meaning generative AI use is spreading roughly three times as fast as the internet did initially. There is plenty of unsubstantiated hype around AI performance and adoption, and many concerns about the implications of it all, but our data clearly document rapid public uptake across all the countries covered.
In this report, after explaining the online survey methodology we rely on, we first examine public awareness and use of generative AI. Next, we look specifically at AI-generated search engine answers before turning our attention to public assessment of the use and impact of generative AI across different sectors of society. We then look at public opinion on the use of generative AI in journalism and news specifically. Finally, we end the report with a concluding discussion.
It is important to stress that, as with all survey-based work, we are reliant on people’s own understanding and recall. This is particularly important to keep in mind with a new and sometimes nebulous phenomenon like generative AI, and when considering what people think others (whether individuals or organisations) do with these technologies. But even where these data are at odds with behavioural and passive tracking data from other sources, they remain an important source of information about population-level practices and public opinion overall.
We hope the data and analysis we publish here will be useful for media professionals, scholars, and others interested in generative AI and its societal impact.
Methodology
The report is based on a survey conducted by YouGov on behalf of the Reuters Institute for the Study of Journalism (RISJ) at the University of Oxford and the Department of Communication at the University of Copenhagen. The main purpose is to understand if and how people use generative AI, and what they think about its application in journalism and other areas of work and life.
The survey and this report are follow-ups to a survey carried out in 2024 using the same methodology (Fletcher and Nielsen 2024). They are both part of the AI and the Future of News project at RISJ.
The 2025 data were collected by YouGov using an online questionnaire fielded between 5 June and 15 July 2025 in six countries: Argentina, Denmark, France, Japan, the UK, and the US. YouGov was responsible for the fieldwork and provision of weighted data and tables only, and RISJ was responsible for the design of the questionnaire and the reporting and interpretation of the results.
Samples in each country were assembled using nationally representative quotas for age group, gender, region, and political leaning. The data were weighted to targets based on census or industry-accepted data for the same variables.
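To illustrate the general logic of weighting survey data to population targets, here is a minimal sketch of cell-based post-stratification in Python. It is purely illustrative – the variable names and targets are hypothetical, and this is not YouGov’s actual weighting procedure:

```python
from collections import Counter

def cell_weights(sample, targets):
    """Compute simple post-stratification weights.

    sample  -- a list with one cell label per respondent (e.g. an
               age-group/gender/region combination)
    targets -- a dict mapping each cell label to its population share
    Returns a dict mapping cell label -> weight, where each weight is the
    population share divided by the sample share, so respondents in
    under-represented cells count for more in weighted estimates.
    """
    n = len(sample)
    counts = Counter(sample)
    return {cell: targets[cell] / (counts[cell] / n) for cell in counts}

# Hypothetical example: women are 51% of the population but only 40% of
# the sample, so each female respondent is weighted up to compensate.
sample = ["female"] * 40 + ["male"] * 60
targets = {"female": 0.51, "male": 0.49}
print(cell_weights(sample, targets))  # {'female': 1.275, 'male': ~0.82}
```

In practice, panel providers typically use more sophisticated approaches (such as raking across several variables at once), but the underlying idea is the same.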
Sample sizes are approximately 2,000 in each country. Because we use a non-probability sampling approach, it is not possible to compute a conventional margin of error for individual data points. However, differences of +/- 2 percentage points or less are very unlikely to be statistically significant; we do not regard them as meaningful and, as a general rule, do not refer to them in the text.
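For context on the ‘+/- 2 percentage points’ rule of thumb, the conventional 95% margin of error for a probability sample of around 2,000 would itself be roughly 2 percentage points in the worst case. The sketch below is illustrative only – as noted above, the formula does not strictly apply to non-probability panels:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p with sample size n,
    assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) with roughly 2,000 respondents per country:
print(round(100 * margin_of_error(0.5, 2000), 1))  # -> 2.2 (points)
```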
It is important to note that online samples tend to under-represent the opinions and behaviours of people who are not online (typically those who are older, less affluent, and have limited formal education). Moreover, because people usually opt in to online survey panels, they tend to over-represent people who are well-educated and socially and politically active.
Some parts of the survey require respondents to recall their past behaviour, which can be flawed or influenced by various biases. Additionally, respondents’ beliefs and attitudes related to generative AI may be influenced by social desirability bias, and when asked about complex socio-technical issues, people will not always be familiar with the terminology experts rely on, or understand the terms the same way. We have taken steps to mitigate these potential biases and sources of error by implementing careful questionnaire design and testing.
1. Public awareness and use of generative AI
Awareness of generative AI systems
In comparison to last year, awareness of AI tools and systems has grown significantly. The share of respondents who say they have not heard of any of the 13 tools or systems we asked about shrank to just 10%, meaning that 90% have heard of at least one of the most widely used AI tools. In 2024, the equivalent figure was 78%.
Figure 1 reveals that OpenAI’s ChatGPT, while still not universally recognised, remains by far the most widely recognised generative AI system. No other brand comes close, but generative AI products tied to incumbent technology companies have grown in recognition even more since last year’s data (see Figure 1), closing the gap between ChatGPT and the rest of the pack.
Figure 1.
ChatGPT is the most widely recognised AI tool in each of the six countries, ranging from 79% awareness in Denmark to 68% in Argentina. Beyond ChatGPT, awareness shows strong regional differentiation (Figure 2). Google’s suite of Gemini models enjoys high familiarity in the USA (59%) but drops sharply in Denmark (36%). Microsoft’s AI assistant Copilot likewise registers with around 45% of people in the UK and the US, but falls below 30% in France, Japan, and Argentina, possibly because Microsoft’s search engine integration has more traction in Anglo-American markets. Snapchat’s My AI is relatively prominent in Denmark (28%) and Argentina (20%) but almost invisible in Japan (6%), while awareness of Meta AI (i.e. AI features within WhatsApp or Instagram) is exceptionally high in Argentina (63%), yet only a quarter to a half of respondents in Europe report having heard of it.
Figure 2.
Interestingly, AI systems from two early and still very prominent AI firms – Perplexity and Anthropic (maker of Claude) – barely seem to register with large parts of the public in all of these countries. Meanwhile, China-based DeepSeek has reached 20% recognition across markets, possibly owing to the broad media coverage following the January 2025 release of DeepSeek’s R1 model, seen as potentially disruptive to the US’s dominance in AI, in what some have termed a ‘Sputnik moment’ (Metz 2025). Even X’s Grok – which frequently made headlines over the last year, including for praising Adolf Hitler and producing antisemitic tropes (Murphy et al. 2025) – does not reach similar levels of awareness outside of the US.
Use of generative AI
The pattern we see for awareness extends to use. In 2024, 40% on average across the six countries said that they had ever used generative AI. In 2025 this rose to 61%. Regular use has also grown, nearly doubling from 18% who used generative AI weekly in 2024, to 34% in 2025 (Figure 3).
Figure 3.
The same is also true at the country level, with AI use roughly doubling in most countries (Figure 4). The exception is the US, where the proportion who used generative AI in the last week grew just 5 percentage points, from 31% to 36%. However, generative AI use in the US remains comparatively high, second only to Argentina in our 2025 survey.
Figure 4.
ChatGPT is by far the most widely used generative AI system in the six countries surveyed, with 22% on average saying they used it in the last week. It is followed by the AI offerings of other major technology companies (Figure 5). Since our last survey in May 2024, every major system or tool has seen its core user base roughly double (Figure 5). Overall, the percentage of respondents across these six countries who say they have used at least one generative AI system in the last week has nearly doubled, from 18% to 34%, in just one year – very significant growth, and, in relative terms, about three times faster than the growth of internet use in these countries in the late 1990s and early 2000s. Many other systems, however, are barely used by the wider public in these countries, even if they have seen significant uptake in niche circles and, in some cases, by specific corporations or national governments.
Figure 5. Proportion who use each generative AI system weekly
Breaking the data down by country, we see that ChatGPT is the most widely used system in each (see Figure 6). It is typically followed by either Google Gemini, Meta AI, or Microsoft Copilot.
Figure 6.
At the same time, the majority of people in all the countries covered are not regular users of any AI tool. Even for the four most popular systems, many say they use them monthly or have only ever used them once or twice, and large numbers say they have never used these tools or have not heard of them (Figure 7). Despite the rapid adoption, which is likely to grow further as such systems become more widely integrated across digital media platforms, this is an important reminder that not everyone can or wants to use AI systems directly.
Figure 7.
Just as last year, use of generative AI is slightly more common among those with higher levels of formal education, with the biggest differences again by age group. Younger people are much more likely to have ever used generative AI, and to say they use it on a weekly basis. Averaging across all six countries, and combining the use of all generative AI tools asked about in the survey, 59% of those aged 18–24 say they have used generative AI in the last week, compared to 20% of those 55 and over (see Figure 8).
Figure 8.
When we break this down by generative AI system, we see that the age gaps are primarily driven by large differences in the use of ChatGPT. Nearly half (47%) of those aged 18–24 say they used ChatGPT in the last week, but among those aged 55 and over the figure is just one in ten (9%). While there are some age differences in the use of Google Gemini, there are no real age gaps in the use of Microsoft Copilot and Meta AI. This is likely because these AI tools are embedded within existing products and services that are widely used across different age groups.
Specific uses of AI: Getting information, creating media, getting news
Apart from mapping people’s knowledge and use of different AI systems, we were also interested in understanding what people use these systems for (see also Tamkin et al. 2024). In general, we see a significant change between 2024 and 2025. Averaging across the six countries, 24% say they have used generative AI for getting information in the last week, more than double the 2024 figure of 11%.
Using generative AI for getting information is now the most widespread use of the technology, overtaking using it for creating media such as text, images, video, and code, though the latter also increased, up 7 percentage points to 21% in 2025 (see Figure 9). Of course, it should be acknowledged that the lines between getting information and media creation are fuzzy, given that all generative AI responses require some form of media creation in order for them to be communicated to the user.
We also asked for the first time whether people use AI systems for social interaction – for example, as a friend or adviser – a topic that has received growing attention in recent months owing to news reports and research suggesting that some people are forming close bonds with systems perceived to have a personality (Kirk et al. 2025). Across countries, 7% said they had done so in the last week, with use concentrated among younger people: 13% of 18- to 24-year-olds say they have used AI as a social companion, versus just 4% of those 55 and older.
Figure 9.
In terms of concrete information-retrieval tasks, we see the strongest growth around asking for advice and answering factual questions, both increasing from 6% of people saying they had used AI for this purpose in the last week in 2024 to 11% in 2025. A new option we provided in 2025 was ‘researching a topic’, which 15% of people said they had used AI for (Figure 10).
Meanwhile, although the use of AI for media creation has grown overall, most individual creation tasks are largely stagnant, with the growth driven by more people (from 5% to 9%) saying they use AI to create images, potentially reflecting the wider rollout of free access to such features, for example in Google’s Gemini. The figure for video generation remained similar (3%), as did that for audio (2%), despite more powerful multimodal AI systems coming on to the market in the meantime. Likewise, despite improved coding systems (for example, Claude Code or OpenAI Codex), the use of AI for programming and coding remained flat (Figure 10). This could be because those who wanted to use generative AI for coding were already doing so in 2024.
Figure 10.
The percentage who say they have used generative AI to get the latest news has doubled over the last year, from 3% in 2024 to 6% this year, although this change is mainly driven by changing habits in Japan and Argentina, with the number static in the other countries. In our data, the use of AI systems for news is strongest in Argentina and the USA and among the youngest age group – 8% among the 18–24s, versus 5% for those 55+ (Figure 11). Across countries, using AI systems for news is also more common among people with a degree than among those without. However, it remains one of the least practised information-retrieval tasks: just one in four of those who say they use generative AI for one or more information-retrieval tasks count getting news among them. While these findings for now in many ways document a ‘dog that didn’t bark’ – the absence of a much bigger change that might have been expected – they represent a meaningful starting position. Given how rapidly people adopt convenient interfaces and technologies, it is possible that this number will grow more strongly in the years to come.
Figure 11.
Looking only at the sub-group of people who say they have used a chatbot for getting news in the last week, we can dig deeper into what they used it for (see Hagar 2024 for the list of news uses that helped inform this survey question). The largest number wanted the latest news (54%) or help in dealing with a news story – for example, by having it summarised, evaluated, or rewritten to make it easier to understand (see Figure 12).
Figure 12.
However, an interesting picture emerges when we break this down by age. Older people tend to use AI chatbots or systems primarily to get the latest news, whereas younger people use them more to help navigate the news. Around half (48%) of the 18- to 24-year-olds who used an AI tool for getting news said they had used it specifically to make a news story easier to understand, compared to just 27% of those 55 and older, a 21 percentage point difference (see Figure 13). We should bear in mind that these are small sub-groups of the wider population, but they show that the use of such chatbots for news purposes is far from uniform.
Figure 13.
Trust in specific generative AI products
We also asked about trust in individual generative AI tools and systems. As we saw earlier, awareness of individual brands is low in many cases, and many of those who have heard of a brand may not have an opinion on it, so we must interpret the data with caution.
On average across the six countries, just under one third (29%) say that they trust ChatGPT, ahead of Google Gemini (18%), Microsoft Copilot (12%), and Meta AI (12%) (Figure 14). For each of the other generative AI systems we asked about, fewer than one in ten say they trust it. Again, this is primarily because only around 15% or fewer have actually heard of them, meaning that the proportion who say they distrust them is often equally small.
Figure 14.
It is worth noting that public trust in the most widely trusted generative AI system, ChatGPT, is lower than public trust in news in every country except Argentina (Figure 15). As such, there is a significant ‘trust gap’ between the share of respondents who trust news in their country and the much lower share, across virtually every demographic, who say they trust any of the specific generative AI chatbots or tools we ask about. This is in line with our previous research, which has often documented such a gap between news in general and news found via various digital platforms (Mont’Alverne et al. 2022).
Figure 15.
An alternative way of interpreting and presenting the trust data is to look at the difference between the proportion who say they trust and the proportion who say they distrust, which we can think of as a net trust score (Figure 16). When we do this we see that, on average across the six countries, more people trust ChatGPT (+12), Google Gemini (+7), and Microsoft Copilot (+5) than distrust them. Despite Meta AI’s wide reach via social apps such as Instagram, Facebook, and WhatsApp, its net trust at the population level is negative (-4), and the same is true for DeepSeek (-4). For the rest, the gap between trust and distrust is likely too small to be meaningful.
Figure 16.
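To make the net trust calculation concrete, here is a small Python sketch using the figures reported in this section. The trust shares are those given above; the distrust shares are backed out from the reported net scores, so treat them as approximations:

```python
# Net trust = % who say they trust a brand minus % who say they distrust it.
# Trust shares are from Figure 14; distrust shares are inferred from the
# net scores reported in the text, so they are approximate.
brands = {
    #                     trust  distrust
    "ChatGPT":            (29,   17),
    "Google Gemini":      (18,   11),
    "Microsoft Copilot":  (12,    7),
    "Meta AI":            (12,   16),
}

for name, (trust, distrust) in brands.items():
    print(f"{name}: net {trust - distrust:+d}")
# ChatGPT: net +12 / Google Gemini: net +7
# Microsoft Copilot: net +5 / Meta AI: net -4
```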
There are, however, important differences by country (see Figure 17). In most countries, people are more likely to trust ChatGPT than distrust it. The exception is the UK, where most systems (including ChatGPT) are more distrusted than trusted. Argentina is the opposite: none of the top brands are more distrusted than trusted, including DeepSeek (+3), which is more distrusted than trusted in most of the other countries studied.
Figure 17.
2. AI-generated search answers
Awareness of AI-generated search answers
This year we also included questions on AI-generated answers in response to online searches. These have become more common, especially with the rollout of Google’s ‘AI Overviews’ and ‘AI Mode’ in a growing number of countries. The availability of such answers has sparked concern because they can contain false or misleading information, or so-called hallucinations. News publishers and other content creators have also expressed worries about a negative impact on the referral traffic they receive from online searches (The Economist 2025; Brown and Jaźwińska 2025), as more people come to rely on AI-generated answers instead of seeking out any of the provided search results. While it is clear that some individual publishers and some genres of content have already been affected, data from Chartbeat suggest the feared impact has not yet hit the industry at large. This may be partly because not all news-related queries receive AI-generated responses by default.
An example of an AI-generated answer in response to an online search on Google. Source: Screenshot.
First, we wanted to know how often people encounter such AI-generated search answers in their day-to-day lives. Across all six countries, 54% of respondents say they have seen an AI-generated answer in response to one of their search queries in the last week, although 19% say they don’t know (Figure 18), perhaps highlighting that for some people these answers are hard to separate from standard search results. The figure is also a remarkable illustration of how the integration of generative AI into already widely used products can drive exposure: Google only rolled out AI-generated summaries to most users in the last year or so, yet more people already encounter them on a weekly basis than actively use the standalone AI tools we ask about.
Figure 18.
This figure is highest in Argentina (70%), followed by the UK (64%) and the USA (61%), and lowest in France, where only 29% said they had encountered such responses (Figure 19). At the time of the survey, Google – the most widely used search engine in France – had, to our knowledge, not rolled out the AI Overviews feature there, which likely explains the country’s lower numbers.
Figure 19.
‘Click-through’ behaviour and trust in AI-generated search answers
Zooming in on just the people who said they have seen AI-generated answers in the last week, we can see how often they say that they also click through to the information these overviews sometimes link to (Figure 20). While we have to be careful about the limits of survey data in this context, as what people say they do might not always match what they actually do, our data are broadly in line with web-tracking data on the topic, which are better at capturing people’s behaviour (see Chapekis and Lieb 2025).
Figure 20 shows that, across all countries, about one third (33%) of respondents who say they have seen AI-generated search answers report clicking through regularly (always/often), more than a third (37%) do so only sometimes, and approximately 28% rarely or never click through. The youngest group (18–24) are more likely to do so, with nearly 40% saying they click through often, compared with just 28% of those aged 55+. Conversely, low engagement with these links is highest among the oldest cohort (31%) and lowest among 35- to 44-year-olds (25%). While industry data, for example from Chartbeat, suggest referrals from search have not changed as dramatically as some headlines suggest, it remains possible that there will be a material impact on search referrals over time.
Figure 20.
One advantage of studying this topic through survey research is that we can explore what people think about AI-generated search results. Looking at people’s trust in the AI answers they are served (Figure 21), around half of respondents who say they have seen AI-generated search answers express trust in them: combining ‘strongly trust’ and ‘somewhat trust’, 50% across countries say they have confidence in these answers, with no significant difference between men (50%) and women (49%). Overall, younger adults show slightly higher trust than older users. Neutral attitudes (‘neither trust nor distrust’) account for about one quarter to one third of respondents. Looking at the results across countries (Figure 22), we see relatively small differences, although trust is lower in the UK and higher in Japan.
Figure 21.
Figure 22.
Trust in AI-generated responses is also related to how often people click through the links to the underlying sources (see Figure 23). Those who trust AI search responses are more likely to say that they ‘always’ or ‘often’ click through (46%) than those who distrust them (20%). This perhaps runs counter to the idea that links are used to validate the AI response, and instead suggests that they are used to dig deeper or get more information. However, it is also possible that people who are more likely to click through end up trusting AI responses more because they see that the response matches the underlying content.
Figure 23.
We also asked people to explain why they trust or distrust the answers they receive. Looking at the open-ended survey comments in response to this question provides some valuable pointers on why about half of respondents say they ‘somewhat’ or ‘strongly’ trust AI-generated answers.
A dominant theme in the answers we received is the speed and convenience these AI summaries offer: as one respondent put it, ‘Because it’s fast and saves me time and it’s usually close to what I’m looking for.’ Another noted simply, ‘AI provides quick and accurate responses, saving time and effort.’ For many respondents, the mere fact of an almost instantaneous, concise summary seems to make AI summaries worthy of at least provisional trust. Many also said that they feel AI ‘knows’ more because it aggregates vast amounts of data – from textbooks, websites, and scientific literature – and thus generally produces correct answers, especially in non-critical contexts. In the words of one user, ‘AI models are trained on vast amounts of information … That gives them a general grasp of many topics, which can be useful for surface-level understanding or first drafts.’ The use of phrases like ‘close to what I am looking for’ and ‘general grasp’ are interesting – they are a reminder that people often, in Herbert Simon’s famous term, engage in satisficing, looking for good enough things that get the job done, rather than optimising, taking the time to find the optimal answer or solution.
This, however, does not mean that people trust AI-generated answers blindly, especially for high-stakes or nuanced topics such as health or politics. Many users say they often treat AI as a first pass and then corroborate its output. One respondent explained, ‘After reading the AI response, I went back and checked the responses from another source,’ while another said, ‘I trust it when it gives factual answers that can be verified by non-AI sources.’ Such statements show that trust in these answers is likely conditional, relying in part on users’ ability and willingness to follow up AI suggestions with deeper research.
3. Public assessment of scale and impact of AI use in different sectors
Individuals are of course not the only users of generative AI. Virtually all institutions in our societies – from governments to the banking industry – are at least considering whether and how these technologies might be used in their domains, and many actors have already incorporated them into their work. In this section, we look at public assessment of the scale of generative AI use in different sectors, as well as what people believe the use of these technologies will mean for their own experience of the sector in question, for them personally and for society.
Perceptions of use across sectors
In terms of how frequently, if at all, people think generative AI is being used by different kinds of actors today, there is a widespread public perception that generative AI is already everywhere, at least to some extent: the share of respondents who believe generative AI is used ‘always’ or ‘often’ in different sectors is 41% on average across countries, far exceeding the share who say it is used ‘rarely’ or ‘never’ (15%) (Figure 24). Younger respondents are less likely to say they think various sectors are using generative AI always or often, but more likely to say ordinary people are.
Figure 24.
That said, these technologies are still primarily associated with media and communication; the percentage of people who think they are being used ‘always’ or ‘often’ is much higher for news media (51%) and, especially, for social media companies (68%) and search engine companies (68%).
For most other sectors, public assessment of the frequency of use is roughly in line with public assessment of how frequently ordinary people use generative AI, which in turn is broadly consistent with how many say they have in fact used at least one AI tool in the last week. Of course, it can be hard to assess whether someone or some institution is using generative AI, and for most sectors more than a fifth of our respondents say they don’t know how frequently these technologies are being used.
Overall, our data suggest much of the public believes generative AI is already fairly widely used across society and is especially intensely used in the news media and technology industries. What do people then make of what such use will mean for them?
Perceived effect of use in different sectors on people’s lives
To understand this better, we asked respondents how much better or worse they think different actors’ use of generative AI will make people’s experience of interacting with them. This is necessarily a broadly phrased question, and ‘interacting with’ captures only some of the ways in which the use of generative AI may shape how different institutions affect us. But the data still provide a useful sense of people’s expectations.
For most sectors, half or more of our respondents express a clear expectation for better or worse, even as many answer ‘neither better nor worse’ or simply that they don’t know (Figure 25). On average, across all countries and all sectors, 29% are optimistic and 22% pessimistic.
Figure 25.
The net balance between those who expect their experience of interacting with a given sector or set of actors will get somewhat or much better and those who expect it will get somewhat or much worse provides a sense of how public expectations differ across sectors. Generally, there are more optimists than pessimists; for sectors like healthcare, science, and search engines the preponderance of optimists is particularly clear. In only three sectors do the pessimists outnumber the optimists: news media, government, and, especially, politicians and political parties (Figure 26).
Figure 26.
Our data on public assessment of the scale of generative AI use across different sectors, and on whether people expect this use to make their own experience of dealing with the actors in question better, can be used to map a nuanced overall landscape. By comparing each sector against the average share of respondents who believe generative AI is used ‘always’ or ‘often’ (41%) and the average share who believe its use will improve their experience of interacting with that sector (29%), we can identify sectors that stand out.
In a few sectors, people think generative AI is particularly widely used and many expect this will improve their experience. This is the case for search and social media. In other sectors, people don’t think generative AI use is particularly widespread, but they still expect it will benefit them. This is the case for healthcare and science. Then there are sectors where expectations are particularly low – most notably government use, and use by politicians and political parties (Figure 27), but also the news media. As most people’s personal experience of using generative AI is limited, and our insight into how different institutions use it (let alone our ability to assess the implications) is even more so, it is important to stress that these are aggregate public judgements. They will at least in part reflect the cues and heuristics from popular culture, news media, and various opinion-makers that people rely on when making up their minds – and some of these are more positive around, for example, science than around news and politics (Ross Arguedas 2024).
Figure 27.
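One way to picture this landscape is as a simple two-by-two classification of each sector against the two cross-sector averages (41% perceived use, 29% expected benefit). In the sketch below, the perceived-use figures for search, social media, and news media are from the text; the remaining numbers are illustrative placeholders consistent with the patterns described, not exact survey figures:

```python
AVG_USE, AVG_BENEFIT = 41, 29  # cross-sector averages reported above

# (perceived 'always/often' use %, expect a better experience %)
sectors = {
    "search engines": (68, 45),  # use % from the text; benefit % illustrative
    "social media":   (68, 35),  # use % from the text; benefit % illustrative
    "healthcare":     (35, 40),  # both values illustrative
    "science":        (35, 42),  # both values illustrative
    "news media":     (51, 20),  # use % from the text; benefit % illustrative
    "politicians":    (38, 12),  # both values illustrative
}

for name, (use, benefit) in sectors.items():
    quadrant = (("high" if use > AVG_USE else "low") + " perceived use / "
                + ("high" if benefit > AVG_BENEFIT else "low") + " expected benefit")
    print(f"{name}: {quadrant}")
# e.g. search engines land in the high/high quadrant, healthcare and
# science in low/high, and politicians in low/low.
```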
Personal versus societal impact of generative AI
As we did last year, we asked respondents whether they think generative AI will make their life better or worse, and whether it will make society better or worse. The answers are largely similar to last year’s. About half of our respondents answer ‘neither better nor worse’ or ‘don’t know’ when it comes to their own life, and a third when it comes to society – the latter mainly because there are more pessimists about the societal implications of generative AI than about the individual ones (Figure 28). There may be irrational exuberance around these technologies in some quarters, but not in the public at large.
Figure 28.
When it comes to the impact of generative AI on people’s own lives, the optimists outnumber the pessimists in four of the six countries covered, and only in the UK do we see significantly more pessimists. This overall cautious optimism is well aligned with the overall sense that there are – with some notable exceptions – many important sectors where AI may bring people benefits as documented above.
When it comes to the impact of generative AI on society, there is much more pessimism. In three of the six countries there are significantly more pessimists than optimists – including the USA, where many of the technologies involved have been developed and many of the companies pushing them are based (Figure 29). Compared to last year, there has been a significant swing in US public opinion in a more negative direction: the percentage who expect generative AI to make society better has declined by 6 percentage points, and the share who expect it to make society worse has increased by 7 percentage points.
This pessimism no doubt partly reflects reservations in some quarters over the use of generative AI by social media companies, search engine companies, and other technology companies – and to some extent news media – but probably also worries that the use of generative AI by governments, politicians, and political parties will be for the worse. Again, as noted above, elite debate and media discussion around these technologies can come across as quite positive for some use cases, even as it is often more negative about what they might mean for public life.
Figure 29.
As we found last year, younger people (respondents under 35) tend to be significantly more optimistic about the impact of generative AI on both their own lives and society at large. Expectations around AI also differ along other socio-demographic lines. Female respondents are significantly less likely to expect that generative AI will make their lives better. They are also significantly less likely to say they expect it to make society better, and more likely to expect it will make society worse (Figure 28). This probably reflects the growing number of gendered harms that AI has contributed to.
4. Public opinions on the use of generative AI in journalism and news
Overall comfort with AI in news
This section focuses on what people think of generative AI being used for news and journalism specifically. News organisations have been using various forms of AI for some time, and some have been experimenting with generative AI for several years now (Simon 2024: 46). Nonetheless, as we showed last year, the public remain cautious about its use in newsrooms (see also Mitova et al. 2025; Mitchell et al. 2025; Lipka 2025; Morosoli et al. 2025). On average across the six countries, just 12% say they are very or somewhat comfortable with news made entirely by AI, rising to 21% if there is some human oversight – an arrangement sometimes known as having a ‘human in the loop’ (Figure 30). Comfort with news production that puts humans in the driving seat is considerably higher: 43% say they are comfortable with news made mostly by a human with some help from AI, and 62% with news made entirely by a human journalist – up 4 percentage points compared to 2024. This change has contributed to a slight widening of the ‘comfort gap’ between AI- and human-driven news production over the last year.
Figure 30.
As we highlighted in our 2024 report (Fletcher and Nielsen 2024), younger people tend to be more comfortable with AI-driven news production, but even here – and among most other demographic groups – the comfort gap exists. The comfort gap can be seen in all six countries studied (Figure 31), especially Denmark and the UK, but it is smaller in Japan and Argentina.
Figure 31.
Acceptance across topics and tasks
In our 2024 report (Fletcher and Nielsen 2024), we showed how comfort with the use of AI in the newsroom varies by news topic, with people generally more comfortable with it being used for ‘soft’ news topics, such as arts and lifestyle, and less comfortable with it being used for serious topics, such as politics and international affairs. Comfort also varies by the journalistic task being performed (Figure 32). On average across six countries, people are more comfortable with AI being used for back-end tasks, such as editing the spelling and grammar of an article (55%) or translating it into different languages (53%), but much less comfortable with rewriting the same article for different people (30%), creating a realistic image if a real photograph is not available (26%), and creating an artificial presenter or author (19%). These percentages are very similar to those from 2024, though the proportion comfortable with AI being used to write a headline increased from 38% to 41%. What this shows is that, despite growing awareness of AI use both overall and in the news, as well as higher personal use of AI, people seem to remain sceptical for now of its use in news contexts where it directly affects their experience as users (see also Mitchell et al. 2025).
Figure 32.
Perceived prevalence of AI in news
Although public comfort with these uses of AI is stable, people are now more likely to think journalists are regularly using AI to carry out these tasks (Figure 33). In 2025, the proportion who say journalists ‘always’ or ‘often’ use generative AI has increased by at least 3 percentage points for each task compared to 2024. The proportion who think journalists always or often use generative AI to edit the spelling and grammar of an article has increased by 8 percentage points, from 43% to 51%.
Figure 33.
One positive finding is that, at the task level, people’s comfort with journalists using generative AI for specific tasks is correlated with how often they think journalists actually use generative AI in this way (Figure 34). In other words, people think that journalists use AI in ways that the public are generally comfortable with. For example, comfort with AI being used to edit spelling and grammar is relatively high (55%), and this is also the task that people most often think journalists regularly perform with AI (51%). Conversely, just 19% are comfortable with generative AI being used to create an artificial presenter or author, but only 20% think that this is done regularly.
Figure 34.
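To make this task-level association concrete, here is a minimal sketch computing Pearson’s correlation across tasks. The first and last (comfort, perceived use) pairs are the real figures cited above; the middle pairs are hypothetical fill-ins for illustration only:

```python
import statistics

# (comfort %, perceived regular use %) per journalistic task.
# (55, 51) and (19, 20) are from the text; the rest are hypothetical.
pairs = [(55, 51), (53, 48), (41, 40), (30, 32), (26, 27), (19, 20)]

comfort = [c for c, _ in pairs]
perceived_use = [u for _, u in pairs]

r = statistics.correlation(comfort, perceived_use)  # Pearson's r (Python 3.10+)
print(round(r, 2))  # close to 1: comfort and perceived use track each other
```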
There is some evidence that people’s comfort (or lack thereof) with news being made by AI is rooted in how they think it will shape different qualities of the news, such as how easy it is to understand. As in last year’s survey, we asked respondents whether news produced mostly by artificial intelligence with some human oversight is likely to be more or less trustworthy, easy to understand, transparent, and so on, compared to news produced entirely by a human journalist (Figure 35). If we look at the net score for each quality – the proportion who said ‘much more’ or ‘somewhat more’ minus the proportion who said ‘much less’ or ‘somewhat less’ – we see clear and pronounced differences. On average across six countries, people are more likely to think that generative AI will make news cheaper to make (+39) and more up to date (+22), but also less transparent (-8) and, crucially, less trustworthy (-19). In other words, to some extent people think that generative AI will primarily benefit publishers rather than them as users. What’s more, although this pattern was clearly evident in 2024, public opinion seems to have hardened slightly, with several of the net scores growing in magnitude in 2025 and none of them shrinking.
Figure 35.
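Written out as a formula, the net score for a given quality is simply:

\[
\text{net score} = \big(\%\,\text{much more} + \%\,\text{somewhat more}\big) - \big(\%\,\text{much less} + \%\,\text{somewhat less}\big)
\]

So ‘cheaper to make’ at +39 means positive responses outnumber negative ones by 39 percentage points, while ‘trustworthy’ at -19 means the reverse.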
As we have seen in many places throughout this report, people in Japan and Argentina tend to be more positive about the impact of generative AI on news (Figure 36). Here, people tend to think that generative AI will make most news qualities better. Elsewhere, we see more differentiated views, and in the UK people think that generative AI will make many aspects of the news worse. But across countries, the rank order of the effect on each quality remains broadly consistent.
Figure 36.
Perceived oversight and responsible use of AI in news
A key issue around the use of AI in journalism is human oversight. News organisations and news scholars regularly stress the importance of ‘having a human in the loop’, particularly when it comes to checking generative AI outputs (Porlezza and Schapals 2024). However, when asked, just 33% on average across six countries think that journalists ‘always’ or ‘often’ check AI outputs to make sure they are correct or of a high standard before publishing them, with percentages slightly higher in Japan (42%) and Argentina (44%), but lower in the UK (25%) (Figure 37). The figures are roughly similar to those from 2024, but there was a slight decrease in the USA (-3 percentage points) and slight increases in Japan and Argentina (both +3 percentage points). There was a larger increase in Denmark (+6 percentage points), which may reflect efforts by Danish publishers to promote their responsible use of AI.
Figure 37.
Of course, most people do not have direct experience or knowledge of what goes on in newsrooms, so the fact that a relatively low proportion think regular checking takes place to some extent reflects cynicism about the news media as an institution. This is borne out when we look at perceptions of checking by different levels of trust in news (Figure 38). Among those who strongly trust the news in their country, 57% think journalists always or often check outputs, but this falls to just 19% among those who strongly distrust it. This is another reminder of the importance of trust in news for journalism’s ability to innovate while maintaining the confidence of the public – and perhaps a warning against complacent thinking that the AI transparency measures implemented by many organisations will, on their own, be enough.
Figure 38.
It is also clear that people hold differentiated views about how different news organisations will use generative AI. On average across six countries, people are more likely to think there will be ‘very’ or ‘somewhat’ large differences in how responsibly different news outlets will use generative AI (43%) than to think the differences will be small (28%) (Figure 39). Despite the news landscape being very different in each of the six countries, the pattern is remarkably consistent. France is the exception, where the proportion who think there will be large differences (35%) is roughly the same as the proportion who think the differences will be small (34%). Across countries, between one quarter and one third say they ‘don’t know’, reflecting the considerable remaining uncertainty about how AI will be used.
Figure 39.
Perceptions of audience-facing AI features and labels
In addition to the fact that few people know about the day-to-day work of journalists, another reason for this uncertainty about how AI will be used in news is that most people (60%) do not yet regularly see AI-powered, audience-facing features – such as bullet-point summaries and chatbots – on news websites and apps. While a handful of outlets, like the Financial Times in the UK and the Washington Post in the USA, have experimented with and introduced features like these, many of the most popular and widely used outlets have not.
An example of an AI feature on a news website: The Financial Times’ chatbot ‘Ask FT’. Source: Screenshot.
An example of an AI-summary at The Economist. Source: Screenshot.
We presented people with four different, very common uses of AI by news organisations, based on recent industry research (Newman and Cherubini 2025). The patterns vary by country, but on average, people are slightly more likely to have seen an AI bullet-point summary of a news story (19%) than an AI chatbot that answers questions about the news (16%). Features offering AI audio (14%) and video (11%) conversion of news stories are less frequently encountered (Figure 40).
Figure 40.
Another way people can be made aware of the use of AI in the newsroom is when news outlets label content as being made with the help of (generative) AI. On average across six countries, 19% say they see such labels daily, with a further 9% saying they see them on a weekly basis – meaning that, as with the other AI features discussed earlier, most people do not encounter labels regularly. To put this into context, it is worth remembering that 77% of respondents in our survey say they use the news on a daily basis, meaning that only a small fraction of frequent news users regularly see AI labelling or other forms of AI disclosure.
Again, the numbers vary by country (Figure 41). In the UK, 6% say they see labels daily, compared to 12% in Denmark, 16% in the USA and France, and around one third in Japan (33%) and Argentina (32%). This will partly reflect differences in frequency of news use, but also different approaches to labelling adopted by outlets in different countries. However, it is worth keeping in mind that labels are not always easy for users to spot, and as with any data from surveys, respondents’ recall can be prone to error.
Figure 41.
Although many prominent news outlets have a policy to clearly label content that has been produced with the help of generative AI, the public may suspect that AI has been used to make unlabelled content, especially in the context of low levels of trust in news in many countries (Newman et al. 2025). Fortunately for news organisations, this is a relatively fringe view (Figure 42). Only around 15% on average say that they always or often encounter ‘a news story and suspect that it might have been made entirely or partly using generative AI – even though it was not labelled as such’.
Figure 42.
The figure is as high as 30% in Argentina and 17% in the USA, but falls to around 10% in Japan and Europe. Less reassuring is the fact that most people think they encounter unlabelled AI content on news sites at least some of the time. Of course, people have no way of knowing for sure, and they may well be incorrectly perceiving the use of AI. But these perceptions are still real, and given that the public views the use of AI in news quite cautiously, it remains important for news organisations across the world to communicate their AI policies clearly.
Conclusion
We have documented public use of, perceptions of, and expectations around generative AI in news specifically, and across society and people’s own lives, in six countries. Based on online surveys of nationally representative samples, we have shown how awareness and use of generative AI systems have grown rapidly over the year since we last surveyed these countries, with weekly usage nearly doubling from 18% to 34%.
In terms of what people say they do with generative AI, creating media remains an important use, but information-seeking has become the most widespread type of use overall. This involves many different tasks – researching a topic, asking for advice, or answering factual questions – and includes getting news. The latter is growing, but not yet particularly widespread; it has doubled from a low base, from 3% to 6%, and still only about one in six regular generative AI users say they use these tools to get news.
A crucial question going forward is the balance between the role of standalone generative AI systems such as ChatGPT and its competitors, and the integration of various forms of generative AI into already widely used products and services like search engines and social media. AI-generated search answers, rolled out to many users first by Microsoft and then by Google in the last year or so, are a particularly prominent example of this. In a remarkable illustration of how such integration can drive exposure, seeing AI-generated summaries is now more common (54% say they have seen such answers in the last week) than having used any of the generative AI systems or tools we ask about (34% say they have in the same period). News publishers, who have long received a large volume of search referrals, are understandably concerned about this development: not only are AI summaries widely seen, in line with what others have found in studies of individual countries (Chapekis and Lieb 2025), but across all countries covered many of our respondents also say they do not click through to source links when they encounter AI summaries.
While the vast majority of the public are now at least aware of various generative AI systems and many use them, most people have not made up their mind about whether they trust these systems. Several of the most widely used tools – ChatGPT, Gemini, and Copilot – have more people saying they trust them than distrust them (whereas public opinion is net negative for several others including, for example, DeepSeek and Meta AI). While 50% of those who say they have encountered AI answers in search say they trust them, in most cases, there is a ‘trust gap’ between how many people say they trust the news, and how many say they trust even the most widely used AI tools. This is akin to similar gaps we have documented in the past between how people think of the news media versus news found via search engines and social media (Mont’Alverne et al. 2022).
Looking beyond personal use, our respondents in six countries think that generative AI is already prevalent across sectors, with an average of 41% believing it is used ‘always’ or ‘often’ across a range of different sectors. Of course, it is not always clear whether and how much actors in a particular sector are using specific technologies, but we can at least say that people’s judgement of how often other people are using generative AI is pretty good – 36% of our respondents say others use these tools ‘always’ or ‘often’, compared to the 34% who say they themselves use them at least weekly.
Asked how much better or worse they think different actors’ use of generative AI will make their experience of interacting with them, many respondents answer ‘neither better nor worse’ or simply that they don’t know. But a majority do express a view, and on average, across all countries and sectors, 29% are optimistic and 22% pessimistic. Looking more closely across sectors, however, there is significant variation. Generally there are more optimists than pessimists, especially for sectors like healthcare, science, and search engines, but in other sectors, pessimists outnumber the optimists. This is the case for the news media, government, and, especially, politicians and political parties.
Public opinion is similarly nuanced when it comes to the impact generative AI will have on the respondents’ own lives, versus the impact on society – the optimists outnumber the pessimists in four of the six countries covered in terms of people’s own lives, but when it comes to society there are significantly more pessimists than optimists in three of the six countries. As we have noted above, there may be irrational exuberance around these technologies in some quarters, but that does not seem to be the case for the public at large.
For news media, our findings are in some ways challenging – we document rapid growth in the use of a new set of tools bound to affect how people discover and use information and, by extension and over time, to change the competition for attention, advertising, and the money people are willing to spend on media and content. This will affect the news media irrespective of whether people use generative AI for getting news specifically. We also document that, while many are cautious or pessimistic about what generative AI portends, many of the more regular users are more content with these tools, and many trust, for example, the AI summaries they see in search. Finally, the parts of our survey that focus on news specifically reinforce past findings – people are sceptical of the use of generative AI in the news, with a clear ‘comfort gap’ between AI- and human-led news production, and they have a fairly negative view of what the greater use of generative AI in news will mean for them: basically, more cheaply produced, less trustworthy news.
There are also important encouraging findings, however – at least for some news media. The ‘trust gap’ between news and most generative AI tools will be welcome news for many journalists. The premium on human involvement plays to the strength of those news media willing and able to invest in original reporting and professional editing (even as they may at the same time invest aggressively in using various forms of AI for a whole slew of back-end tasks for efficiency and improved performance). And the fact that a large plurality of the public expect there to be very or somewhat large differences in how responsibly different news outlets will use generative AI provides a basis for individual news media brands to clearly demonstrate and communicate how they stand out – whether from ‘AI slop’ on the wider internet or old-fashioned ‘churnalism’ from competitors in the news media.
Many journalists and media professionals might want the public to look more kindly on the sector as a whole, but ultimately they might settle for people at least valuing those who really do deliver stand-out journalism, including with the help of generative AI.
About the authors
Dr Felix M. Simon is the Research Fellow in AI and News at the Reuters Institute for the Study of Journalism and a research associate at the Oxford Internet Institute at the University of Oxford. His research looks at the implications of a changing news and information environment for democratic discourse and the functioning of democracy, with a special focus on artificial intelligence.
Professor Rasmus Kleis Nielsen is a professor at the Department of Communication of the University of Copenhagen and a senior research associate at the Reuters Institute for the Study of Journalism. His research is focused on the changing role of news and media in our societies. He has written extensively about journalism, digital media, the business of news, political communication, misinformation, and related topics.
Dr Richard Fletcher is Director of Research at the Reuters Institute for the Study of Journalism. He is primarily interested in global trends in digital news consumption, the use of social media by journalists and news organisations, and, more broadly, the relationship between computer-based technologies and journalism.