Updating the EU DisinfoLab Impact-Risk Index: Addressing AI and Coordinated Inauthentic Behavior

Authors: Raquel Miguel Serrano & Maria Giovanna Sessa, EU DisinfoLab; Amaury Lesplingart

The EU DisinfoLab's 2022 impact-risk index provided researchers with a valuable tool for evaluating disinformation threats. However, given the rapid emergence of generative AI and sophisticated cross-platform coordination tactics, a revision of the index is imperative. This paper proposes modest additions to the original framework, introducing new elements to reflect the latest developments whilst maintaining the index's core simplicity and usability for European researchers. At the same time, measurement is improved via data standardisation, and an automated calculator is provided to facilitate impact assessment.

Introduction 

Since 2022, the disinformation landscape has transformed dramatically. The World Economic Forum's 2025 Global Risks Report continues to rank mis- and disinformation as the top immediate-term global risk. Two developments particularly warrant attention: the democratisation of AI-generated content and the evolution of Coordinated Inauthentic Behaviour (CIB) tactics to amplify information manipulation campaigns across multiple platforms.

The original EU DisinfoLab index captured eight key indicators to assess the impact of individual unverified claims. Its strength lay in simplicity — researchers could quickly assess threats without extensive technical expertise. This update preserves that accessibility whilst addressing critical gaps in measuring the impact of the components of contemporary disinformation campaigns.

In parallel, the update includes a calculator that aids computation and enforces statistical data normalisation, ensuring equal weighting across indicators.

Updates to the Index

Modifications to Existing Indicators

Indicator 1 (Engagement): Expand to reflect AI-powered engagement strategies:

  • 0 – 1,000 shares and reactions = 0 points
  • 1,001 – 10,000 shares and reactions = 1 point
  • 10,001 – 100,000 shares and reactions = 2 points
  • More than 100,000 shares and reactions = 3 points
  • AI-powered engagement strategies (i.e., real-time AI-generated responses) = 1 extra point
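As an illustration, a minimal scoring sketch for this indicator, following the tiers above (the function name and parameters are illustrative, not part of the original index):

```python
def score_engagement(shares_and_reactions: int, ai_engagement: bool = False) -> int:
    """Score Indicator 1 (Engagement) according to the tiers above.

    `shares_and_reactions` is the combined count of shares and reactions;
    `ai_engagement` flags AI-powered engagement strategies (e.g. real-time
    AI-generated responses), which add one extra point.
    """
    if shares_and_reactions <= 1_000:
        points = 0
    elif shares_and_reactions <= 10_000:
        points = 1
    elif shares_and_reactions <= 100_000:
        points = 2
    else:
        points = 3
    if ai_engagement:
        points += 1  # extra point for AI-powered engagement strategies
    return points

# Example: 25,000 shares/reactions with AI-generated replies -> 2 + 1 = 3 points
print(score_engagement(25_000, ai_engagement=True))
```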

Indicator 2 (Exposure): Add a higher tier to capture truly viral content:

  • 0 – 10,000 views = 0 points
  • 10,001 – 100,000 views = 1 point
  • 100,001 – 1,000,000 views = 2 points
  • More than 1,000,000 views = 3 points

Indicator 3 (Platform Spread): Expand to reflect platform proliferation, coordination, and include AI management of accounts:

  • Content on 1-2 platforms = 0 points
  • Content on 3-4 platforms = 1 point
  • Content on 5+ platforms OR includes high virality platforms (e.g., Telegram/TikTok)* = 2 points
  • Content spread by accounts managed with AI = 1 extra point
  • Suspected coordination/CIB (similar timing or content across different accounts AND/OR technical identifiers or hashtags shared with other known amplifiers) = 1 extra point

* Telegram and TikTok are specified because research shows they have become primary vectors for unverified claims: Telegram's channel-based distribution system enables mass broadcasting without traditional social media constraints, while TikTok's algorithm particularly favours viral content targeting younger demographics.

Indicator 4 (Diffusion across communities: language as a proxy indicator): Expand to reflect AI-powered mass translations:

  • Content circulated in one language = 0 points
  • Content circulated in more than one language = 1 point
  • Content circulated in multiple languages, including through AI-powered translations = 2 points

Indicator 5 (Media outreach): no updates

  • Content did not reach mainstream media = 0 points
  • Content reached at least one mainstream media outlet = 1 point

Indicator 6 (Actor Type): Include AI-generated personas:

  • Not a public figure/AI persona = 0 points
  • Public figure, repeat offender, OR verified AI influencer = 1 point
  • Public figure AND repeat offender = 2 points 
  • Undisclosed AI-generated personas** = 1 extra point

** AI-generated personas receive an extra point because they deliberately deceive audiences by masquerading as real humans, undermining trust in authentic discourse. This deception amplifies disinformation impact as people are more likely to engage with and believe content from what they perceive as genuine human sources rather than disclosed bots or AI accounts.

Indicator 7 (Formats): Account for synthetic media:

  • Single format = 0 points
  • Multiple formats = 1 point
  • Includes photorealistic AI-generated content = 2 points

Indicator 8 (Call for action and danger of the narrative): Extended to align with the engagement score:

This indicator has a multiplier effect: if the hoax includes an exhortation or call to action, the researcher multiplies the engagement score (Indicator 1) by one and adds the result to the total, so that this indicator mirrors the engagement score (a scoring sketch follows the examples below).

  • If there is no call to action = 0 points
  • If there is a call to action, but engagement is 0, then 0 x 1 = 0 points
  • If there is a call to action and engagement is 1, then 1 x 1 = 1 point
  • If there is a call to action, and engagement is 2, then 2 x 1 = 2 points
  • If there is a call to action, and engagement is 3, then 3 x 1 = 3 points
  • If there is a call to action, and engagement is 4, then 4 x 1 = 4 points
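A minimal sketch of this multiplier logic, assuming the engagement score from Indicator 1 (0–4, including the AI extra point) is already available (names are illustrative):

```python
def score_call_to_action(has_call_to_action: bool, engagement_points: int) -> int:
    """Score Indicator 8 by mirroring the engagement score (Indicator 1).

    With no call to action the indicator contributes 0 points; with a call
    to action it contributes engagement_points x 1, i.e. the engagement
    score itself (0-4, including the AI extra point).
    """
    return engagement_points if has_call_to_action else 0

# Example: a hoax with a call to action and an engagement score of 3 -> 3 points
print(score_call_to_action(True, 3))
```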

Impact calculator and statistical adjustment  

This update introduces a user-friendly calculator that automates researchers’ workflows, eliminating manual computation. The tool applies statistical normalisation to the scoring system to ensure equal weighting across indicators.

Normalisation. To ensure comparability across indicators without imposing weights, each indicator is linearly normalised so that its maximum value is 1. After normalisation, 0 represents the lowest score or absence, 1 represents the maximum, and intermediate values are proportional. The composite score, calculated as the sum of all eight normalised indicators (maximum sum of 8), is then divided by 8 to produce a final normalised score on a 0–1 scale, where 0 represents the lowest possible overall impact-risk and 1 the maximum across all indicators.
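As an illustration of how the calculator could apply this normalisation, a minimal sketch in Python (the indicator names are illustrative; the maxima are derived from the rubric above):

```python
# Minimal sketch of the normalisation step: each raw indicator score is
# divided by that indicator's maximum (derived from the rubric above), the
# eight normalised values are summed, and the sum is divided by 8 to give
# the final 0-1 composite score.

INDICATOR_MAXIMA = {
    "engagement": 4,        # 3 tier points + 1 extra for AI-powered engagement
    "exposure": 3,
    "platform_spread": 4,   # 2 + 1 (AI-managed accounts) + 1 (suspected CIB)
    "languages": 2,
    "media_outreach": 1,
    "actor_type": 3,        # 2 + 1 extra for undisclosed AI personas
    "formats": 2,
    "call_to_action": 4,    # mirrors the engagement score (0-4)
}

def composite_score(raw_scores: dict) -> float:
    """Return the normalised composite score on a 0-1 scale."""
    normalised = [raw_scores[name] / maximum
                  for name, maximum in INDICATOR_MAXIMA.items()]
    return sum(normalised) / len(INDICATOR_MAXIMA)

# Example: every indicator at its maximum yields a composite score of 1.0
print(composite_score(dict(INDICATOR_MAXIMA)))
```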

Final index score and tranches. After normalisation, the composite index score (on a 0–1 scale) is categorised into interpretative tranches as follows:

  • Low Impact-Risk: 0.00–0.25
  • Moderate Impact-Risk: >0.25–0.50
  • High Impact-Risk: >0.50–0.75
  • Alarming Impact-Risk: >0.75–1.00
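A minimal sketch of the tranche mapping, using the thresholds above (the function name is illustrative):

```python
def impact_risk_tranche(score: float) -> str:
    """Map a normalised 0-1 composite score to its interpretative tranche."""
    if score <= 0.25:
        return "Low Impact-Risk"
    elif score <= 0.50:
        return "Moderate Impact-Risk"
    elif score <= 0.75:
        return "High Impact-Risk"
    else:
        return "Alarming Impact-Risk"

# Example: a composite score of 0.62 falls in the High Impact-Risk tranche
print(impact_risk_tranche(0.62))
```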

Despite additions, the index remains user-friendly. Free AI detection tools (such as Deepware Scanner or Sensity AI) enable researchers to identify synthetic content without technical expertise. Similarly, coordination detection tools like CooRTweet provide accessible methods for identifying suspicious patterns.

The updated index requires minimal additional resources. Most indicators rely on publicly available information or free tools.

