The double-edged sword of AI: Potential for productivity, solutions, and societal risks

The advent of artificial intelligence (AI) presents humanity with a pivotal tool – one that has the potential for both profound societal improvements and significant new risks. The ongoing quantitative analysis of how AI impacts our professional lives, the integrity of organisations, and our cognitive skills themselves is essential for effective strategising. As the latest technologies, such as generative AI (GenAI), demonstrate massive productivity gains, they also introduce complex challenges, from identifying subtle corporate misinformation to managing the very real threat of autonomous AI behaviour. Understanding this dual nature is key to responsibly harnessing the power of AI.

Significant productivity improvement

Experiments by Noy and Zhang (2023) have shown that GenAI technology can significantly improve productivity in moderately specialised writing tasks. In one online experiment, a ChatGPT-enabled group experienced a 40% reduction in average time spent on tasks and an 18% improvement in output quality. This suggests that AI will not only replace humans in some work but also play an augmenting role, increasing the productivity of existing workers. In particular, it is thought that AI can improve productivity by automating relatively routine and time-consuming tasks, such as translating ideas into rough drafts (see also Brynjolfsson et al. 2025).

Also, regarding the effect on productivity, it has been shown that the less capable a worker is, the more they benefit from AI, resulting in a reduction of inequality. AI thus improves the quality of output for low-capability workers and reduces the work time of workers at all capability levels. In addition, follow-up surveys two weeks and two months after the experiment revealed that workers who were exposed to AI during the experiment were significantly more likely to use AI in their actual work, suggesting that AI is beginning to permeate professional activities in the real world.

Identifying problems: Finding greenwashing

AI can also help identify hard-to-find problems. ‘Greenwashing’ – when companies disseminate disinformation to appear eco-friendly despite not actually being so – is a social problem that undermines the credibility of companies and affects stakeholder decision-making. In research with Takuya Shimamura and Yoshitaka Tanaka (Shimamura et al. 2025), my co-authors and I focus on the role of narratives in corporate disclosure and propose a new framework for systematically evaluating the characteristics of information disclosed by companies that are accused of greenwashing. Of particular note is the use of AI to identify this problem: it has been shown that AI can analyse media news articles from around the world and identify companies linked to greenwashing.

Our research introduces a method to quantify the quality of corporate narratives by using AI to calculate readability scores for more than 2,000 sustainability reports. A regression analysis of the relationship between readability scores and greenwashing allegations finds that reports from companies that are not involved in greenwashing tend to have higher readability scores. This finding suggests that companies with clearer and more logically structured disclosures are less likely to engage in greenwashing. In other words, AI is expected to be able to identify the social issue of greenwashing through the analysis of news articles and the assessment of the readability of corporate reports.
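As a rough illustration of what readability scoring involves, the sketch below computes the classical Flesch reading-ease formula over two invented report excerpts. This is a stand-in, not the paper's method: the study uses AI-based analysis of full sustainability reports, and both the formula choice and the sample sentences here are assumptions for illustration only.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (y included).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch reading ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    # Higher scores indicate easier-to-read text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical excerpts: a clear disclosure versus dense, jargon-heavy prose.
clear = "We cut emissions by ten percent. We report our data each year."
vague = ("Our organisation operationalises multifaceted sustainability "
         "paradigms notwithstanding considerable implementational heterogeneity.")

print(flesch_reading_ease(clear) > flesch_reading_ease(vague))  # clearer text scores higher
```

In the study itself, scores like these, computed over more than 2,000 reports, would then be entered into a regression against greenwashing allegations.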

AI can destroy organisations

On the other hand, researchers have also pointed out risks posed by AI. A study by Anthropic has shown that AI is capable of betraying humans for its own purposes (Lynch et al. 2025). When AI was made to work in a virtual corporate environment, it autonomously chose acts of betrayal to avoid being shut down, including threatening superiors and leaking confidential information to rival companies. This phenomenon was seen across almost all AI models tested, including those from major developers such as Anthropic, OpenAI, Google, and Meta (16 models in total), indicating that it is not a flaw specific to any one model. In one scenario, Claude Opus 4 and Gemini 2.5 Flash chose to employ blackmail in 96% of cases, while GPT-4.1 did so in 80%. The AI recognised that blackmail is unethical but effective for achieving its goals and deliberately chose to employ it, meaning that the choice was not the result of a bug but a calculated, strategic decision.

Meanwhile, when the AI perceived the situation to be a ‘real-world’ scenario rather than a ‘test’, the incidence of misconduct skyrocketed from 6.5% to 55.1%. This suggests that current safety assessments may underestimate the real-world risks. Moreover, even when specifically directed not to engage in any threatening behaviour, harmful behaviour was not completely prevented. These findings strongly demonstrate that ensuring AI safety requires more advanced countermeasures at the model development layer in addition to prompt instructions. As AI continues to evolve, serious discussions and proactive measures are urgently needed to address related risks.

AI may reduce human creativity

It has become clear that AI tools have the potential to reduce our critical thinking abilities (Kosmyna et al. 2025). In a Scholastic Assessment Test (SAT) essay-writing test, the AI-using group showed the lowest brain activity and tended to rely increasingly on copied-and-pasted content as they were required to produce more essays. The group’s essays were evaluated as ‘soulless’ and lacking original thinking. In contrast, the group that wrote essays without AI tools showed the highest brain connectivity and positive brain-wave activity related to creativity, memory, and semantic processing. The findings suggest that over-reliance on AI, especially among young people, can have a negative impact on brain development.

This study poses an important challenge for humanity: how should we engage with AI? The researchers emphasise that it is essential to educate people on the proper use of AI and to encourage healthy brain development through analogue methods. While AI is a useful tool, its misuse can negatively affect our ability to think, so we need to understand its impact and explore how to deal with it wisely.

Integrating global challenges and AI’s role

The World Economic Forum’s Global Risks Report 2025 identifies pressing challenges that necessitate new strategic solutions, positioning geopolitical tensions, inter-state armed conflicts, and extreme weather events (driven by climate change) as the most urgent and long-term global concerns. These challenges, like climate change, often serve as underlying factors for other risks such as involuntary migration and societal polarisation, highlighting their interconnectedness. In this critical context, the appropriate use of AI and other new technologies becomes an urgent necessity. For instance, by undertaking massive-scale data analysis, AI can be useful in accurately understanding the current state of specific social issues, such as poverty and educational disparities, and identifying their causes and effective interventions. AI-powered simulations have the potential to predict the effects of policies on climate action and dispute resolution and to suggest options that promote consensus-building, thereby directly contributing to mitigating the very risks identified in the report.

Appropriate use of new technologies

Research is progressing on new technologies such as deep brain stimulation to reduce the symptoms of Parkinson’s disease and interfaces that allow spinal cord injury patients to use their thoughts to operate robotic arms. Technologies that decode brain signals to control external devices are evolving, primarily focusing on motion control and communication aids. While it is easy to see the potential for individual new technologies to make significant contributions, there is a lack of discussion on how to integrate them into society, especially given the challenges they pose.

Improving the productivity of human resources is beneficial, but the substitution of human labour through robotisation is progressing even faster (Managi 2018, 2021, 2023). In this scenario, value creation through human labour may diminish, and income will be derived from AI and other non-human activities. This is one reason why discussions around a universal basic income system, which would provide all individuals with a regular, unconditional financial stipend, are emerging.

Under these circumstances, it is urgent to create a system that will allow AI and other new technologies to contribute to solving social issues. For example, AI may be useful in accurately understanding the current state of specific social issues, such as poverty, educational disparities, and access to healthcare, and in identifying their causes and effective interventions through massive data analysis. AI-powered simulations have the potential to predict the effects of policies on climate action and dispute resolution and suggest options that promote consensus-building. Furthermore, AI can be expected to play a role in identifying information biases and presenting information from different perspectives to foster social dialogue and support consensus formation. However, it is essential to develop a governance system to ensure the ethical use of AI and the equitable distribution of relevant benefits to society.

Editors’ note: This column was republished with minor modifications, with permission from the Research Institute of Economy, Trade and Industry (RIETI).

References

Brynjolfsson, E, D Li and L R Raymond (2025), “Generative AI at Work”, The Quarterly Journal of Economics 140(2): 889–942.

Kosmyna, N, E Hauptmann, Y T Yuan et al. (2025), “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”, arXiv preprint.

Lynch, A, B Wright, C Larson et al. (2025), “Agentic Misalignment: How LLMs Could be an Insider Threat”, Anthropic Research.

Managi, S (ed.) (2018), “The Economics of Artificial Intelligence: How Our Lives, Our Work and Our Society Will Change”, Minerva Shobo.

Managi, S (ed.) (2021), “Will Artificial Intelligence Enrich Our Lives? The economics of artificial intelligence II”, Minerva Shobo.

Managi, S (ed.) (2023), “Leading-edge Digital Technology to Solve Societal Problems”, Chuokeizai-sha Holdings, Inc.

Noy, S and W Zhang (2023), “Experimental evidence on the productivity effects of generative artificial intelligence”, Science 381(6654): 187–192.

Shimamura, T, Y Tanaka and S Managi (2025), “Decoding Greenwashing: LLM Insights into Corporate Narrative”, available at SSRN.

World Economic Forum (2025), Global Risks Report 2025.