AI for financial sector supervision: New evidence from emerging market and developing economies

The rapid development of increasingly powerful AI tools has the potential to reshape business models, market structures, consumer behaviour, and supervisory practices in the financial sector in emerging market and developing economies (e.g. Foucault et al. 2025). Research has found that while the financial sector in advanced economies is at the forefront of integrating machine learning and generative AI (GenAI), emerging market and developing economies are still in the early stages of adoption, including for financial supervision (Consultative Group on Risk Management 2025, Dohotaru et al. 2025). Our recent World Bank report (Boeddu et al. 2025), commissioned by the South African G20 Presidency, is based on a survey of 27 financial sector authorities in emerging market and developing economies and sheds new light on the state of AI adoption in the financial sector.

Most authorities in emerging market and developing economies expect AI to deliver a net positive impact in the financial sector, particularly those reporting higher AI adoption by supervised institutions. Among jurisdictions with at least early-stage AI adoption, the most common AI applications of financial institutions are customer service chatbots and virtual assistants, fraud detection, and anti-money laundering and Know Your Customer compliance (Figure 1). African financial institutions are more likely to use AI for credit scoring and underwriting, reflecting the need to serve populations without formal credit histories. The drive to improve regulatory compliance and meet requirements more efficiently is another key factor promoting AI adoption.

Figure 1 In jurisdictions with AI adoption, financial institutions most commonly use AI for customer service, fraud detection, and anti-money laundering (AML)/combating the financing of terrorism (CFT) and know your customer (KYC) compliance

Notes: Question asked: “What are the top three use cases of AI by financial institutions in your jurisdiction?” Only respondents who reported early-stage, moderate, or advanced levels of adoption by at least one type of financial institution were asked this question. This question was skipped for those who reported very limited adoption or did not know the level of adoption in their jurisdictions. Therefore, the total number of respondents is 25.
Source: World Bank Survey on AI in Supervision, 2025.

AI adoption by regulators

Central banks and other financial authorities are adopting AI for several policy purposes (BIS 2025). In particular, AI has the potential to make supervisory technology tools more efficient and applicable to more complex tasks, augmenting or automating work previously undertaken only by humans. However, most emerging market and developing economy authorities are still in the early stages of using AI for core supervisory tasks such as data collection, on- and off-site supervision, asset quality review, and anomaly detection – and none of those using it are in Africa – with some currently conducting tests and pilot programmes. Some authorities are experimenting with AI for use cases such as fraud detection, complaints analysis, and risk and compliance assessments. Authorities are also optimistic about AI’s potential for tasks such as data collection, risk forecasting, and off-site inspections (Figure 2).

Figure 2 AI adoption by authorities is likely to increase significantly in the medium to longer term across a wide range of supervisory tasks

Notes: N = 27. Questions asked: “For which supervisory activities is your authority currently using or planning to use AI in the next 12 months? Select all that apply” and “For which type of supervisory activities will AI most likely be used in your authority in the medium to longer term (e.g. the next 5 years)? Select all that apply.”
Source: World Bank Survey on AI in Supervision, 2025.

Basic GenAI tools have seen widespread uptake by staff within authorities for general purposes such as drafting and summarisation. Some authorities are also cautiously working to deploy AI agents, chatbots, and other GenAI-based tools for more sophisticated tasks such as internal knowledge management, complaints analysis, and risk and compliance assessments of supervisory documents.

Authorities across emerging market and developing economies have recently started adopting formal policies and strategies regarding their internal use of AI. About a quarter already have such a policy in place, although only one-fifth of African authorities do. Most authorities without a formal strategy or policy expect to establish one within the next 12 months (i.e. by July 2026). Many authorities are mapping supervisory processes to identify areas where AI can add the most value. Some are more proactive, encouraging departments to experiment broadly, while others are more cautious, limiting AI experimentation to certain types of projects or supervisory business lines.

Challenges and risks for regulators

Unlocking and managing large amounts of sensitive data – currently often fragmented or not in readily usable or accessible form – while also complying with data privacy, data security, cybersecurity, and data localisation rules poses challenges for authorities seeking to integrate AI into their supervisory processes (Table 1). Authorities have diverse approaches to leveraging cloud services for AI, with issues such as vendor dependency, data security, and data sovereignty emerging as common challenges.

Table 1 Data privacy and security, internal skills gaps, AI model-related challenges, and integration challenges are the top four barriers to AI adoption in supervision among survey respondents

Notes: N = 27. Survey respondents could select up to five challenges/barriers from a larger set of options. The scores are the weighted scores for each challenge/barrier out of a maximum possible score of 135, which would have been reached if all 27 respondents had ranked the same option first.
Source: World Bank Survey on AI in Supervision, 2025.

Integrating new AI systems into existing and often outdated infrastructure can be cumbersome. As a result, several authorities are strengthening their foundational IT and data infrastructure. Many authorities, especially in Africa, cite skill gaps and struggles to attract and retain talent as fundamental challenges, and are investing in enhancing workforce readiness for AI. Several authorities take a strategic approach to embedding the necessary skill sets within supervisory teams, combining both domain knowledge and relevant technological expertise.

Many emerging market and developing economy authorities lack the capacity to monitor AI developments in the financial sector and assess their impact, yet several risks loom large, including financial stability risks (Financial Stability Board 2024). For example, cybersecurity threats and data breaches are top of mind, prompting authorities to safeguard systems and develop strong governance frameworks. Most authorities outsource critical IT and AI infrastructure, typically to a small set of global vendors, amplifying vendor-related and concentration risks. Emerging market and developing economy authorities will likely need to increase their focus on AI-related consumer risks as financial institutions continue to adopt AI. Currently, they display varying levels of readiness to understand and address these risks.

Well-documented AI risks requiring the attention of regulators (Crisanto et al. 2024, Perez-Cruz et al. 2025) – such as those regarding model transparency, explainability, accuracy, accountability, and biases – are recognised by emerging market and developing economy authorities, but these risks are not yet sufficiently addressed, as AI adoption is still in its early stages.

Looking ahead

Two basic principles emerge from our interactions with a wide range of emerging market and developing economy authorities. First, AI should not replace supervisory judgement and discretion, and supervisors should retain final authority over AI-assisted supervisory decisions and be able to explain their rationale. Second, supervisors should ensure that financial institutions thoroughly understand their AI applications and can be held accountable for decisions made based on model outputs.

We conclude with five recommendations for financial authorities in emerging market and developing economies as they seek to promote the safe adoption of AI for financial supervision:

  • First, authorities need to create a board-level governance framework to align their AI innovation and adoption with organisational objectives and the need to maintain public trust.
  • Second, authorities need to catch up in establishing integrated internal IT and data infrastructures necessary to support effective AI adoption, with attention to challenges related to cloud integration.
  • Third, authorities should develop systematic approaches to attract, retain, and nurture the right technical skills and expertise, as well as integrate both domain knowledge and new digital skills into supervision teams.
  • Fourth, authorities should invest in monitoring AI developments and risk assessments, including bridging data gaps (Financial Stability Board 2025), to strengthen their understanding of the associated opportunities and risks.
  • Finally, collaboration – both domestic and international – is essential to avoid fragmentation, regulatory arbitrage, and the build-up of new risks and to ensure effective oversight as AI technologies and use cases evolve.

References

BIS – Bank for International Settlements (2025), The use of artificial intelligence for policy purposes.

Boeddu, G, E Feyen, S Martinez Jaramillo, S Mesquita, Y Palta, A Sarkar, S Sinha, and A Gutiérrez Traverso (2025), “Artificial intelligence for financial sector supervision: An emerging market and developing economies perspective”, World Bank Prosperity Insight.

Consultative Group on Risk Management (2025), Governance of AI adoption in central banks, BIS Representative Office of the Americas, Bank for International Settlements.

Crisanto, J C, C B Leuterio, J Prenio, and J Yong (2024), “Regulating AI in the financial sector: Recent developments and main challenges”, FSI Insights on Policy Implementation No. 63, Bank for International Settlements.

Dohotaru, M, Y Palta, M Prisacaru, and J H Shin (2025), AI for risk-based supervision: Another nice to have tool or a game-changer, World Bank.

Foucault, T, L Gambacorta, W Jiang, and X Vives (2025), “Artificial intelligence in finance”, VoxEU.org, 5 June.

Financial Stability Board (2024), The financial stability implications of artificial intelligence.

Financial Stability Board (2025), Monitoring adoption of artificial intelligence and related vulnerabilities in the financial sector.

Perez-Cruz, F, J Prenio, F Restoy, and J Yong (2025), “Managing explanations: How regulators can address AI explainability”, BIS Occasional Paper No. 24, Bank for International Settlements.
