A central tension in contemporary AI governance debates concerns the perceived trade-off between advancing innovation and ensuring safety and security. AI safety, which aims to prevent harms from advanced systems, ranging from catastrophic misuse to bias and labor market disruption, and AI security, which focuses on protecting the integrity of models through their design, implementation, and deployment, are both increasingly recognized as necessary. Yet both are often framed as impediments to innovation. This framing has given rise to an implicit narrative: that prioritizing safety and security may delay adoption and therefore prevent countries, particularly those in the Global Majority, from fully capturing AI's economic and developmental benefits.
The evolution of global summits reflects how this dichotomy has been reinforced in policy discourse. The Bletchley Park Summit (2023) placed safety concerns, particularly those surrounding frontier models, at the center of international discussion. The Paris AI Action Summit (2025), by contrast, emphasized innovation, celebrating large-scale funding commitments and framing regulation as a potential barrier to progress. The Global AI Summit on Africa in Kigali (2025) presented yet another perspective: a more techno-optimist vision, formalized in the Africa Declaration on AI, which emphasized developmental potential while devoting comparatively less attention to systemic risk.
For Global Majority countries, the stakes are particularly high. Risks are likely to fall disproportionately on states with fewer resources to absorb systemic shocks, yet the pressure to close the widening "AI divide" creates urgency to adopt AI technologies. In this context, safety and security should be seen not as the cost of innovation but as the conditions for sustainable innovation. Anchored in Prime Minister Modi's vision of democratizing AI to solve real-world challenges, and in IndiaAI's techno-legal approach combining regulation with technological solutions, the upcoming AI Impact Summit in India (February 2026) offers a unique opportunity to contextualize AI safety and security (S&S) for the Global Majority's innovation needs.
Beyond the innovation vs. safety and security dichotomy
The economic consequences of technological failures, cybercrime, or backsliding on the United Nations' Sustainable Development Goals (SDGs) could be amplified as reliance on AI grows and its capabilities expand. The potential costs of inaction thus underscore the necessity of treating AI S&S as foundational pillars of resilient and equitable technological development.
For example, a lack of investment in securing AI models at the state level could significantly raise the costs of future cybercrime and digital incidents, such as attacks on a state's critical infrastructure. In 2021, cybercrime was estimated to cost African nations 10% of their gross domestic product (GDP), or $4.12 billion. A notorious 2022 ransomware attack in Costa Rica, which targeted over 20 government agencies and prompted a declaration of national emergency, cost the country's economy around 2.4% of its annual GDP. Frontier AI capabilities that can assist in, or even lead on, cyberattacks, as recently demonstrated with Google's Gemini models, could compound these existing threats, making attacks more frequent and severe. More broadly, if left unaddressed, systemic AI risks may compromise or even undo global progress on the SDGs, harming individuals' lives and national and international order. In turn, such disruptions and costs pose a direct threat to states' capabilities and authority, calling for an urgent buildup of societal resilience against escalating AI-related threats. Table 1 presents summary examples of how systemic AI risks threaten the SDGs.
Development dividends of safety and security
Previous cases of technology innovation and adoption show how S&S investments have yielded substantial development dividends for Global Majority countries.
Addressing local risks to harness tech for local priorities
Technology and knowledge transfer from developed to emerging economies has been considered an effective tool for attaining sustainable development. For such transfer and deployment to be beneficial, however, it must be attentive to local environments: context-appropriate technologies and their safety requirements are more likely to be effective because they address the unique and most prevalent local threats.
The nuclear sector illustrates how ignoring local risks, such as limited engineering capacity or environmental vulnerabilities, can lead to massive failures. Consider the Bataan plant in the Philippines, which cost over $2 billion without ever becoming operational. In contrast, the International Atomic Energy Agency's approach to nuclear technology transfer treats safety as a development asset rather than an overhead cost, embedding S&S standards into broader capacity-building and infrastructure programs. Similarly, AI's cross-sectoral nature demands close attention to local risks and priorities; generic global frameworks often miss deployment-specific vulnerabilities and opportunities. Context-sensitive AI S&S frameworks should therefore be seen as essential to effective, sustainable tech deployment, maximizing AI's potential while avoiding preventable failures.
Building trust and stability for widespread adoption and international investment
Trust is essential for the uptake of new technologies. Users adopt innovations not because they necessarily understand the technology's underlying mechanisms, but because they believe the system will reliably deliver benefits without causing harm. Trust is also critical to attracting international investment, especially in emerging economies.
S&S measures are foundational to building this trust, as demonstrated by the success of Kenya's M-PESA financial service. M-PESA's institutional backing and robust security reassured users and enabled widespread adoption, transforming the national economy. Its model offers lessons for AI, toward which public trust across Global Majority communities is already high thanks to the technology's tangible developmental benefits. Strengthening AI S&S frameworks can further accelerate adoption, expand access for informal workers, and unlock economic potential. At the international level, clear and stable regulatory environments signal predictability and safety, encouraging investment and supporting domestic innovation in line with SDG 9, which calls on countries to "build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation."
Advocating for Global Majority S&S interests to mitigate global power imbalances
Active participation in international AI governance is crucial for addressing global power imbalances and advancing the development and sovereignty of Global Majority countries, whose interests are often sidelined in existing fora.
India has emerged as a key actor in this effort, using its leadership on digital public infrastructure (DPI) to promote inclusive, secure, and rights-respecting systems designed for resource-constrained settings. Through initiatives like the UN's DPI Safeguards and its 2023 G20 presidency, India has championed a model that embeds safety from the outset. Given the risk that AI S&S frameworks will be shaped by a narrow group of dominant actors in exclusive fora, India is well positioned to realign global AI governance with the development priorities of the Global Majority, especially by steering the upcoming AI Impact Summit toward inclusive decision-making and a renewed focus on the risks of advanced AI systems.
Conclusion
As AI governance frameworks continue to take shape, it is essential for Global Majority countries to move beyond the dichotomy that casts safety and security as obstacles to innovation. As argued in this piece, building domestic capacity for AI S&S and participating in multilateral governance are not secondary to innovation; they are its foundation. By reducing systemic risks, building trust, and creating predictable environments for investment, safety and security measures enable the long-term developmental benefits that rapid but fragile adoption cannot deliver. The upcoming AI Impact Summit in India (February 2026) offers a critical opportunity to bridge these competing AI governance narratives by demonstrating that another path is possible, one where rigorous attention to AI risks enables rather than constrains the technology's potential to address humanity's most pressing challenges.