- BNP Paribas Statement on Sudan Litigation – group.bnpparibas
- BNP Paribas To Pay $20 Million Damages For Complicity In Sudan Atrocities – Forbes
- U.S. jury issues $20 million verdict against France’s largest bank over Sudanese atrocities – Fortune
- BNP Paribas states intention to appeal on Sudan litigation – MarketScreener
- BNP Paribas Ordered To Pay Millions Over Sudan Genocide – Evrim Ağacı
Category: Business
- US regional banks’ earnings to test investor nerves after jitters over credit risks – Reuters
- FTSE 100 slides to two-week low as banks weigh – Business Recorder
- US Dollar Forecast: Friday Bounce Fails to Offset Weekly Loss from Trade and Fed Concerns – FXEmpire
- Traders ‘Spooked’ as Bank Lending Risk Puts Stock Market on Edge – Bloomberg.com
- U.S. Equity ETF Tracker | Relief in trade and credit concerns drives a rebound in U.S. stocks, with the triple-leveraged Nasdaq ETF rising nearly 2%; safe-haven assets pressured as gold closes lower, while the double inverse gold miners index ETF surges 1 – 富途牛牛 (Futu)

IBM and Groq Partner to Accelerate Enterprise AI Deployment with Speed and Scale
Partnership aims to deliver faster agentic AI capabilities through IBM watsonx Orchestrate and Groq technology, enabling enterprise clients to take immediate action on complex workflows
Oct 20, 2025
ARMONK, N.Y. and MOUNTAIN VIEW, Calif., Oct. 20, 2025 /PRNewswire/ — IBM (NYSE: IBM) and Groq today announced a strategic go-to-market and technology partnership designed to give clients immediate access to Groq’s inference technology, GroqCloud, on watsonx Orchestrate – providing clients high-speed AI inference capabilities at a cost that helps accelerate agentic AI deployment. As part of the partnership, Groq and IBM plan to integrate and enhance Red Hat open source vLLM technology with Groq’s LPU architecture. IBM Granite models are also planned to be supported on GroqCloud for IBM clients.
Enterprises moving AI agents from pilot to production still face challenges with speed, cost, and reliability, especially in mission-critical sectors like healthcare, finance, government, retail, and manufacturing. This partnership combines Groq’s inference speed, cost efficiency, and access to the latest open-source models with IBM’s agentic AI orchestration to deliver the infrastructure needed to help enterprises scale.
Powered by its custom LPU, GroqCloud delivers over 5X faster and more cost-efficient inference than traditional GPU systems. The result is consistently low latency and dependable performance, even as workloads scale globally. This is especially powerful for agentic AI in regulated industries.
For example, IBM’s healthcare clients receive thousands of complex patient questions simultaneously. With Groq, IBM’s AI agents can analyze information in real-time and deliver accurate answers immediately to enhance customer experiences and allow organizations to make faster, smarter decisions.
This technology is also being applied in non-regulated industries. IBM clients across retail and consumer packaged goods are using Groq for HR agents to help enhance automation of HR processes and increase employee productivity.
“Many large enterprise organizations have a range of options with AI inferencing when they’re experimenting, but when they want to go into production, they must ensure complex workflows can be deployed successfully to ensure high-quality experiences,” said Rob Thomas, SVP, Software and Chief Commercial Officer at IBM. “Our partnership with Groq underscores IBM’s commitment to providing clients with the most advanced technologies to achieve AI deployment and drive business value.”
“With Groq’s speed and IBM’s enterprise expertise, we’re making agentic AI real for business. Together, we’re enabling organizations to unlock the full potential of AI-driven responses with the performance needed to scale,” said Jonathan Ross, CEO & Founder at Groq. “Beyond speed and resilience, this partnership is about transforming how enterprises work with AI, moving from experimentation to enterprise-wide adoption with confidence, and opening the door to new patterns where AI can act instantly and learn continuously.”
IBM will offer access to GroqCloud starting immediately, and the joint teams will focus on delivering the following capabilities to IBM clients:
- High speed and high-performance inference that unlocks the full potential of AI models and agentic AI, powering use cases such as customer care, employee support and productivity enhancement.
- Security and privacy-focused AI deployment designed to support the most stringent regulatory and security requirements, enabling effective execution of complex workflows.
- Seamless integration with IBM’s agentic product, watsonx Orchestrate, providing clients flexibility to adopt purpose-built agentic patterns tailored to diverse use cases.
The partnership also plans to integrate and enhance Red Hat open source vLLM technology with Groq’s LPU architecture to offer different approaches to common AI challenges developers face during inference. The solution is expected to enable watsonx to leverage capabilities in a familiar way and let customers stay in their preferred tools while accelerating inference with GroqCloud. This integration will address key AI developer needs, including inference orchestration, load balancing, and hardware acceleration, ultimately streamlining the inference process.
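The announcement names no concrete interfaces, but both vLLM and GroqCloud expose OpenAI-compatible HTTP endpoints, so "staying in your preferred tools" typically means client code like the sketch below; the base URL, model name, and API key are placeholders, not confirmed product details.

```python
import json
import urllib.request

def build_chat_payload(model, prompt):
    """Assemble an OpenAI-style chat-completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat_completion(base_url, model, prompt, api_key="placeholder"):
    """POST the payload to any OpenAI-compatible server, e.g. a local
    vLLM instance or a hosted endpoint (URL here is a placeholder)."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the request shape is shared, the same client could target a development vLLM server and a hosted inference endpoint without code changes, which is the portability the paragraph above describes.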
Together, IBM and Groq provide enhanced access to the full potential of enterprise AI, one that is fast, intelligent, and built for real-world impact.
Statements regarding IBM’s and Groq’s future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
About IBM
IBM is a leading provider of global hybrid cloud and AI, and consulting expertise. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs, and gain a competitive edge in their industries. Thousands of governments and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM’s hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently, and securely. IBM’s breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and consulting deliver open and flexible options to our clients. All of this is backed by IBM’s long-standing commitment to trust, transparency, responsibility, inclusivity, and service. Visit www.ibm.com for more information.
About Groq
Groq is the inference infrastructure powering AI with the speed and cost it requires. Founded in 2016, Groq developed the LPU and GroqCloud to make compute faster and more affordable. Today, Groq is trusted by over two million developers and teams worldwide and is a core part of the American AI Stack.
Media Contact:
Elizabeth Brophy
elizabeth.brophy@ibm.com
SOURCE IBM

How AI is Reshaping Commercial Insurance and Risk Assessment – with Sidharth Ojha of AXA XL
Commercial insurance has long struggled to adopt new technology at the pace of other financial services. Manual workflows, outdated mainframes, and fragmented systems from years of mergers have slowed modernization efforts. Many insurers still view underwriting as an “art” rather than a process, which has historically delayed even basic digital upgrades.
Industry data underscores the substantial adoption gap across the insurance sector and beyond. In MIT Center for Information Systems Research’s global study of enterprise AI maturity, only 7% of organizations have fully embedded AI across operations, while most remain in pilot or mid-stage phases. At the same time, regulatory agendas are finally catching up.
The EU AI Act came into effect in 2025, requiring insurers to categorize AI systems by risk level and comply with strict transparency rules. Meanwhile, the vast majority of enterprise data — more than 90% — is unstructured, stored in documents, contracts, and PDFs that are difficult to analyze without advanced tools.
This mix of legacy systems, compliance demands, and data challenges creates a critical inflection point for insurers. How can they adopt AI responsibly while ensuring ROI and minimizing risk? Drawing on insights from Sidharth Ojha, Head of Process Optimization, Data & AI at AXA XL, in a recent episode of the AI in Business podcast, this article explores how commercial insurers can modernize operations, empower teams to experiment, and lay the foundations for scaling.
This article examines three key insights from Ojha’s perspective on AI adoption in insurance:
- Empowering business users with low-code AI: Provide underwriters a compliant sandbox to experiment safely and uncover constraints early.
- Turning data into a strategic asset: Map data end to end and convert unstructured contracts into structured insights that drive growth.
- Building foundations for scalable AI: Standardize roles, processes, and data definitions to prevent pilots from stalling and unlock enterprise adoption.
Guest: Sidharth Ojha, Head of Process Optimization, Data & AI, Global Chief Underwriting Office, AXA XL.
Expertise: Commercial Insurance Transformation, Process Optimization, and Applied AI
Brief Recognition: At AXA XL, Ojha leads initiatives to apply AI in underwriting and operations, balancing compliance with efficiency and cultural change. His experience spans legacy process modernization, regulatory alignment, and enabling practical AI adoption in one of the world’s largest commercial insurers.
Empowering Business Users with Low-Code AI
Ojha sees cultural inertia as one of the clearest challenges to driving AI adoption in insurance. Executives often recognize AI’s potential but hesitate to let non-technical staff engage with it directly, which he considers a missed opportunity.
He describes the importance of creating “safe lanes” where underwriters and business users can test AI tools in controlled environments. By embedding low-code platforms into existing systems, insurers can enable experimentation without risking data leaks or regulatory breaches.
“Think of it like bowling with bumpers,” Ojha explains. “You want to let people take the shot, but keep them from rolling into the gutter.” His approach builds confidence and helps uncover limitations early, before a project absorbs significant budget or time.
In the past, insurance tech projects relied on extended handoffs: business analysts translated requirements, developers built systems, and architects ensured alignment. By the time solutions reached production, critical context was often lost. Low-code AI tools enable underwriters to interact with technology directly, bypassing translation layers and accelerating actionable feedback.
Ojha stresses that leaders should not rush to pilots or MVPs. Instead, they should allocate more time to exploration and failure in the sandbox phase.
“The more time you spend failing your hypotheses, the less time you waste scaling something that doesn’t work,” he notes. For an industry where “failure” carries negative connotations, reframing the need for failure tolerance as controlled testing can help insurers adopt AI more comfortably.
This cultural shift is essential for adoption. By giving underwriters direct but safeguarded access, organizations create buy-in and align tools with real business needs — rather than building in isolation and hoping for adoption later.
Turning Data into a Strategic Asset
Ojha insists – as many previous podcast guests have – that technology alone cannot deliver ROI without clean, usable data. He notes that insurance companies face a particularly steep challenge because most of their critical information is locked in unstructured formats, such as policy documents, endorsements, quotes, and schedules of values.
Ojha points out that five years ago, insurers struggled to do something as basic as reading a table in a PDF. Generative AI has solved many of these hurdles, but unstructured data remains diverse and inconsistent, making transformation into structured formats difficult:
“Most of the data insurers rely on isn’t even in their systems — it’s trapped in PDFs, Word documents, and scanned contracts. The real challenge isn’t reading it, it’s standardizing it. Each policy is unique, often written like a legal manuscript. Until we can consistently turn that unstructured data into structured information, every downstream AI use case — from risk analysis to pricing — will be operating in the dark.”
— Sidharth Ojha, Head of Process Optimization, Data & AI, AXA XL
The payoff is significant. With structured data, insurers can answer portfolio questions in seconds, such as: “Which policies exclude communicable disease?” or “How much exposure do we have across a region?”
During the COVID-19 pandemic, many organizations could not respond quickly to such queries. Today, AI tools offer the chance to avoid that blind spot.
Ojha also describes new possibilities in summarization capabilities among these systems. Beyond condensing documents, he notes that AI can compare client submissions against internal appetite and compliance rules.
For high-volume underwriting teams, these capabilities mean touching more submissions per day, declining unsuitable risks faster, and focusing on profitable opportunities. “That’s not just efficiency,” Ojha stresses. “That’s real growth potential.”
For leaders, the mandate is clear: treat data as a first-class asset. Inventory policy wordings, target high-volume pain points, and build systems that push structured outputs back into core platforms. Done well, these steps transform AI from a cost-saving tool into a revenue driver.
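As a minimal illustration of the payoff described above, the sketch below assumes policy wordings have already been parsed into structured records (the schema, IDs, and figures are invented for the example); the portfolio questions quoted earlier then become one-line queries.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    policy_id: str
    region: str
    limit_usd: int
    exclusions: list  # clauses extracted from the policy wording

# Hypothetical structured records, as an extraction pipeline might
# produce from policy PDFs; all IDs, regions, and figures are invented.
portfolio = [
    Policy("P-001", "EMEA", 5_000_000, ["communicable disease", "cyber"]),
    Policy("P-002", "APAC", 2_000_000, ["flood"]),
    Policy("P-003", "EMEA", 8_000_000, ["communicable disease"]),
]

# "Which policies exclude communicable disease?"
excluding = [p.policy_id for p in portfolio
             if "communicable disease" in p.exclusions]

# "How much exposure do we have across a region?"
emea_exposure = sum(p.limit_usd for p in portfolio if p.region == "EMEA")
```

The hard work, as Ojha notes, is not the query but the upstream standardisation that produces records like these from legal-manuscript wordings.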
Building Foundations for Scalable AI
While pilots are common in insurance, scaling remains rare. Ojha estimates that “80-90%” of AI projects stall between proof of concept and deployment. The reasons are less about technology and more about organizational readiness.
He outlines the data infrastructure bottlenecks that often derail scaling AI operations in insurance:
- Unclear accountability for data fields, leading to inconsistent inputs.
- Fragmented processes, where teams record different levels of detail for the same product.
- Legacy stacks that are expensive to integrate with new AI models.
- Inconsistent definitions of key metrics across business units.
Without fixing these foundations, even promising pilots fail to expand. Ojha advises leaders to ask: If this solution went live across three countries tomorrow, what would break first? Addressing gaps in that framework upfront prevents costly surprises later.
Regulation also plays a role, and Ojha sees the EU AI Act as a turning point, providing categories that boards and regulators alike can trust.
“If you are compliant with EU rules, you are largely compliant globally,” he notes, insisting that having such assurance can ease executive concerns and accelerate project approvals.
Ultimately, success comes from patience. Insurers are often eager to jump from idea to MVP, but Ojha emphasizes the value of deeper exploration and testing. Companies that invest in clarity of roles, process alignment, and data quality will find it easier to move AI from experimentation to enterprise-wide adoption.

Data centres and energy consumption: evolving EU regulatory landscape and outlook for 2026
The EU’s regulatory framework for data centres is quickly evolving, combining support and funding programmes with measures that pursue energy transition and climate goals. The European Commission (“EC”) will be putting forward a Data Centre Energy Efficiency Package in Q1 2026 – together with the Strategic Roadmap on Digitalisation and AI for the Energy Sector – the aim of which is to achieve carbon-neutral data centres by 2030. The implications of this package could be significant for key stakeholders, including investors and operators of data centres.
Data centres in the EU: balancing strategic investments and energy efficiency
In the push for EU digital sovereignty and global competitiveness, data centres are a critical infrastructure. The EU’s “State of the Digital Decade 2025” report emphasises the need for further private and public targeted investment in advanced connectivity infrastructure, secure and sovereign cloud and data infrastructures, and AI. While investments in data centres are poised to yield significant returns in growth and productivity, ensuring that the EU remains competitive and resilient in the digital age, such growth comes with substantial energy consumption.1
To tackle these challenges, the EU has adopted several regulatory instruments, and recently announced that in Q1 2026 it will propose a new Data Centre Energy Efficiency Package alongside the Strategic Roadmap on Digitalisation and AI for the Energy Sector, aiming at making data centres carbon-neutral by 2030.
Evolving EU rules to address energy consumption of data centres
Over the last few years, we have seen rapidly evolving EU rules in relation to data centres and their energy consumption. In sum:
- The (revised) Energy Efficiency Directive (“EED”), adopted in 2023, enshrines ‘energy efficiency first’ as a core principle of EU energy policy.2 Member States are mandated to prioritise this principle in all relevant policy decisions and significant investment choices across both energy and non-energy sectors. The EED includes requirements for monitoring and reporting, specifically mandating the assessment and disclosure of data centres’ energy performance. The information provided by data centres with an installed information technology power demand of at least 500 kW will be published in a ‘European database’. The key performance indicators that must be communicated to the European database are set out in the Delegated Regulation (EU) 2024/1364 on sustainability ‘ratings’.3 The various Omnibus packages have so far not targeted the EED, although it remains to be seen whether a specific Omnibus for the energy sector will be launched at some point in the future.
- The Taxonomy Regulation, which establishes a classification system and defines criteria for economic activities that are aligned with a net zero trajectory, as well as broader environmental goals.4 The EU Taxonomy Climate Delegated Act5 enshrines rules for the classification of data centre-related activities with a view to climate change mitigation, building on the European Code of Conduct for Energy Efficiency in Data Centres6, which is a voluntary initiative that provides data centres with guidelines and best practices to reduce energy consumption.
- Energy efficiency requirements under the AI Act, which lays down harmonised rules on the development and use of AI in the EU.7 The AI Act imposes transparency requirements for General-Purpose AI Models (“GPAI Models”), which include energy consumption reporting. The EC, together with existing EU standardisation organisations and stakeholders, will create standards focused on AI, aimed, for example, at improving energy efficiency. While the AI Act does not include rules on data centres, it requires the EC and the Member States to create voluntary codes of conduct on the energy efficiency of data centres.8
- EU funding programmes to take into account energy efficiency in data centres, for example, by supporting green projects through programmes like Connecting Europe Facility 2, Digital Europe programme, Horizon Europe, InvestEU and the Recovery and Resilience Facility.
- The EU Battery Regulation, which establishes stricter requirements on the design, production, and recycling of batteries, to promote sustainability and reduce environmental impact.9 Those requirements may impact energy storage in data centres, both in relation to the installation and recycling of batteries.10
- The Ecodesign Regulation for servers and data storage products, which establishes energy efficiency requirements for enterprise servers and online data storage products, typically used in data centres.11 These products are subject to certain requirements, including minimum efficiency, maximum consumption in idle state and information of the operating temperature. They are also subject to circular economy requirements for the extraction of components and critical raw materials.
- The Report on EU Green Public Procurement criteria for data centres, server rooms and cloud services, which offers a set of guidelines to help public authorities procure data centres’ equipment and services in line with European policy objectives for energy, climate change and resource efficiency, as well as reducing life cycle costs.12
- The European High Performance Computing Joint Undertaking (EuroHPC JU)13, which aims to build and operate an interconnected EU supercomputing and AI infrastructure ecosystem, fostering technological sovereignty and competitiveness. The EuroHPC JU, with a budget of approximately EUR 7 billion for 2021–2027, provides financial support through open calls offering procurement as well as research and innovation grants. Some of the EuroHPC JU’s projects are particularly focused on sustainability.
- The EC regularly assesses the energy efficiency and sustainability of data centres, using various tools. The first technical report on this topic provides insights from the first year of implementation of these rules and the effectiveness of the current reporting scheme. It includes an assessment of the scheme itself, the reported data, and the user experience of the reporting entities.14
- The application of State aid rules to sustainable data centres. The call for evidence for the new Cloud and AI Development Act acknowledges the potential role of financial support in line with applicable State aid rules to data centres with a high sustainability contribution, with the aim to increase capacity. In addition, the Clean Industrial Deal State Aid Framework (CISAF) supports the development of clean energy, industrial decarbonisation and clean technology, which may also have an impact on data centres.
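As a trivial illustration of the EED reporting threshold noted in the list above (installed IT power demand of at least 500 kW), the sketch below flags which of a few hypothetical sites would fall within the European database reporting obligation; the site names and figures are invented, and the real scheme involves a full set of KPIs, not a single number.

```python
# The (revised) EED requires data centres with an installed IT power
# demand of at least 500 kW to report KPIs to the European database.
REPORTING_THRESHOLD_KW = 500

def must_report(installed_it_power_kw: float) -> bool:
    """True when a site falls within the EED reporting obligation."""
    return installed_it_power_kw >= REPORTING_THRESHOLD_KW

# Hypothetical sites (names and figures invented for illustration).
sites = {"Dublin-1": 1200, "Milan-2": 350, "Paris-3": 500}
in_scope = [name for name, kw in sites.items() if must_report(kw)]
```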
Looking ahead
2026 is set to bring new regulatory developments. In Q1 2026, the EC will roll out a proposal for a Data Centre Energy Efficiency Package alongside the Strategic Roadmap on Digitalisation and AI for the Energy Sector.
A public consultation on the Strategic Roadmap is already ongoing and open until 5 November 2025. The Roadmap aims to leverage the potential of digital and AI technologies for the energy system, while mitigating associated risks and supporting the competitiveness and decarbonisation of the EU economy. This would include measures to sustainably integrate data centres’ electricity demand into the broader energy system.15
The EC is also expected to publish a Cloud and AI Development Act in Q4 2025 or Q1 2026, aimed at increasing Europe’s cloud and AI infrastructure capacity.16 The goal of the proposal will be to triple EU data centre processing capacity in the next 5-7 years and allow for simplified permitting and other public support measures, if they comply with requirements on energy efficiency, water efficiency, and circularity.
Finally, on 8 October 2025, the EC published its Apply AI Strategy, where it refers to the Strategic Roadmap and Cloud and AI Development Act as including strategies to improve energy efficiency in data centres.17
To conclude, in the context of the ongoing simplification drive of EU regulation under the various Omnibus packages, energy consumption by data centres and AI infrastructure remains high on the EC’s agenda. So much is evident from the core strategies, consultations and action plans that the EC has published over the past months. Even though stakeholders are still grappling with a quickly evolving regulatory framework, new energy efficiency measures are already on the horizon, with more details expected in Q1 2026. At the same time, investors, developers and operators of data infrastructure will also be able to benefit from the various support measures that the EU and the Member States have rolled out and are expected to further deploy in the future.
Elisabetta Zuddas (White & Case, Legal Trainee, Brussels) contributed to the development of this publication.
1 State of the Digital Decade 2025 report of 16 June 2025, available here.
2 Directive (EU) 2023/1791 of 13 September 2023 on energy efficiency and amending Regulation (EU) 2023/955, available here. The EED sets a (revised) EU energy efficiency target, making it binding for EU countries to collectively ensure an additional 11.7% reduction in energy consumption by 2030.
3 Commission Delegated Regulation (EU) 2024/1364 of 14 March 2024 on the first phase of the establishment of a common Union rating scheme for data centres, available here. The European Commission has published a user manual (available here) for accessing the European database, a reporter guide (available here) and frequently asked questions and guidance (available here).
4 Regulation (EU) 2020/852 of 18 June 2020 on the establishment of a framework to facilitate sustainable investment, and amending Regulation (EU) 2019/2088, available here.
5 Commission Delegated Regulation (EU) 2021/2139 of 4 June 2021 supplementing Regulation (EU) 2020/852 of the European Parliament and of the Council by establishing the technical screening criteria for determining the conditions under which an economic activity qualifies as contributing substantially to climate change mitigation or climate change adaptation and for determining whether that economic activity causes no significant harm to any of the other environmental objectives, available here. The Taxonomy is being revised as part of the Omnibus simplification process, see W&C Client Alert ‘EU Omnibus Package: 10 things you should know about the proposed changes to key sustainability legislation‘.
6 European Code of Conduct for Energy Efficiency in Data Centres of 21 March 2025, available here.
7 Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), available, here.
8 See also W&C Client Alert ‘Energy efficiency requirements under the EU AI Act | White & Case LLP‘.
9 Regulation (EU) 2023/1542 of 12 July 2023 concerning batteries and waste batteries, amending Directive 2008/98/EC and Regulation (EU) 2019/1020 and repealing Directive 2006/66/EC, available here.
10 See also W&C Client Alert ‘New EU Batteries Regulation: introducing enhanced sustainability, recycling and safety requirements | White & Case LLP‘.
11 Commission Regulation (EU) 2019/424 of 15 March 2019 laying down ecodesign requirements for servers and data storage products pursuant to Directive 2009/125/EC and amending Commission Regulation (EU) No 617/2013, available here.
12 Development of the EU green public procurement (GPP) criteria for data centres, server rooms and cloud services report of 8 June 2020, available here.
13 The European High Performance Computing Joint Undertaking (EuroHPC JU), available here.
14 Assessment of the energy performance and sustainability of data centres in EU report of July 2025, available here.
15 Strategic Roadmap for digitalisation and AI in the energy sector – consultations opened, available here.
16 AI Continent – Initiative on new cloud and AI development act, available here.
17 Apply AI Strategy, available here. A public consultation and call for evidence closed on 4 June 2025, feedback available here.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2025 White & Case LLP

ILO Director-General urges action to strengthen decent work amid global uncertainty
GENEVA (ILO News) – Gilbert F. Houngbo, Director-General of the International Labour Organization (ILO), has urged international financial leaders to place decent work and social justice at the heart of their policy agendas, stressing that robust labour institutions are essential to confronting rising geopolitical tensions and trade disruptions.
In written statements delivered to the World Bank Group / International Monetary Fund (IMF) Annual Meetings in Washington D.C., Houngbo emphasized that decent work policies, including minimum wage systems, collective bargaining and social protection, are essential for sustainable and inclusive development.
The ILO Director-General noted that there have been meaningful gains: inequality between countries has declined since the early 2000s and over half the world’s population now has some form of social protection.
Yet he also warned that persistent structural challenges threaten these gains. “As uncertainty in the global economy persists with shifting geopolitical tensions and trade disruptions, the importance of building institutions that foster decent work for all could hardly be more critical,” the ILO Director-General stated.
The ILO forecasts global employment growth at only 1.5 per cent in 2025, with the creation of 53 million new jobs, down from 60 million previously projected. Around 84 million workers, mostly in Asia and the Pacific, face elevated risk due to trade uncertainty. For its part, informal employment continues to outpace formal employment, with 58 per cent of the global workforce remaining in informal employment in 2024.
“These trends underscore ongoing challenges in translating economic growth into formal economy and decent employment opportunities,” Houngbo noted.
The ILO Director-General highlighted that even as global output per worker grew by 17.9 per cent from 2014 to 2024, the labour income share declined from 53.0 per cent to 52.4 per cent.
“Had the labour income share remained at its 2014 level, global labour income would have been US$1 trillion higher in 2024, and each worker would have earned an additional US$290 on average that year.”
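A back-of-envelope check shows the quoted figures are mutually consistent; the implied totals below are inferences from the statement’s own numbers, not ILO-published values.

```python
# Figures quoted in the ILO statement.
share_2014 = 0.530          # labour income share in 2014
share_2024 = 0.524          # labour income share in 2024
income_gap_usd = 1e12       # quoted shortfall had the share held steady
gap_per_worker_usd = 290    # quoted shortfall per worker

# Global labour-related output implied by the quoted shortfall
# (shortfall divided by the 0.6 percentage-point drop in share):
implied_output = income_gap_usd / (share_2014 - share_2024)

# Number of workers implied by the per-worker figure
# (roughly in line with the global workforce of ~3.4 billion):
implied_workers = income_gap_usd / gap_per_worker_usd
```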
Houngbo stressed the importance of minimum wage systems and institutions for collective bargaining to address low pay and wage inequality.
On the future of work, Houngbo addressed the disruptive potential of generative AI, as nearly one in four workers could see their role significantly transformed, with women disproportionately affected, according to ILO estimates.
“Whether AI adoption ultimately leads either to job losses or to complementarity depends on how technology is integrated, management decisions, and – fundamentally – the role of social dialogue between employers and workers in shaping implementation,” he said.
In concluding remarks, Houngbo called for coordinated policy action under a renewed social contract.
“The real challenge is not an inherent conflict between economic and social objectives, but rather the need to take coordinated action that transforms this potential dilemma into a dynamic, mutually reinforcing synergy.”
He stressed that a renewed social contract, anchored in democratic governance, inclusive dialogue, and people-centred policies, provides the institutional foundation and political legitimacy required to sustain progress.

New Research Project on Computer Security For Nuclear AI
The IAEA has launched a new research project to enhance computer security for artificial intelligence systems that may be used in the nuclear sector. The project aims to strengthen computer security strategies to support the adoption of artificial intelligence-enabled technologies by nuclear facilities, including those for small modular reactors and other nuclear applications.
Artificial intelligence (AI) and machine learning (ML) systems are being deployed across the nuclear industry, offering potential benefits such as improved operational efficiency and enhanced security measures, including for threat detection. However, these technologies also create new computer security concerns that require innovative solutions. Risks include manipulation of the data or information used to train or run an AI system. Minimizing such risks will involve robust information security and ensuring that AI systems are used correctly.
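One common mitigation for the data-manipulation risk described above is integrity verification: checking files used to train or run a model against a manifest of known-good hashes. The sketch below is a generic illustration of that idea, not an IAEA or nuclear-sector tool; a real deployment would layer it with access control, provenance tracking, and cryptographic signing of the manifest itself.

```python
# Minimal sketch: detect tampering with AI training/input data by
# comparing file contents against a manifest of known-good SHA-256 hashes.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_dataset(files: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return the names of files that are missing or no longer match the manifest."""
    tampered = []
    for name, expected in manifest.items():
        if name not in files or sha256_of(files[name]) != expected:
            tampered.append(name)
    return tampered

# Example: one record is altered after the manifest was recorded.
dataset = {"sensor_log.csv": b"t,temp\n0,291\n", "labels.csv": b"id,label\n0,ok\n"}
manifest = {name: sha256_of(data) for name, data in dataset.items()}
dataset["labels.csv"] = b"id,label\n0,fault\n"   # simulated manipulation
print(verify_dataset(dataset, manifest))          # -> ['labels.csv']
```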
-

Industry leader predicts all-flash data storage will power the next wave of AI
Yuan Yuan, President of Huawei Scale-Out Storage Domain. Credit: Huawei

As the world’s appetite for data continues to grow, data centres now do far more than simply store information, offering everything from managed storage and backup to infrastructure outsourcing and AI training. According to GlobalData forecasts, the global enterprise data centre and hosting market was worth US$112.4 billion in 2024 and is forecast to grow to $188.2 billion by 2029, a CAGR of 10.9%[i]. Despite this growth, most data centres still struggle with limited space and power, and often face performance bottlenecks, particularly when handling demanding AI workloads.
How can we address CPU performance bottlenecks?
Yuan Yuan is President of Huawei Scale-Out Storage Domain and recently spoke at a restricted round table discussion hosted at Huawei CONNECT 2025 in Shanghai. A leading voice in the industry, Yuan discussed how advances in storage technology are shaping the future of data management and offered valuable insights into the challenges and opportunities facing data centres today.
“CPU card utilisation rate is currently only around 50%,” says Yuan. “This means half of the time it is not visited (used). CPU cards are very expensive… but the CPU card is sitting there waiting for the data. That’s a storage issue.” Managing different types of data can add even more complexity, Yuan added. “The performance of traditional HDDs can no longer meet the demands of AI, big data, and high-concurrency applications for low latency and high IOPS [Input/Output Operations Per Second].”
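Yuan's point can be put in simple terms: an expensive compute card is only productive while data is available, so utilisation is roughly compute time divided by total time including storage stalls. The toy model below illustrates this; the timing numbers are invented for illustration and are not from the article.

```python
# Illustrative model: a compute card that stalls while waiting on storage.
# Utilisation = compute time / (compute time + I/O stall time),
# so shortening the stalls (faster storage) raises utilisation.

def utilisation(compute_s: float, io_stall_s: float) -> float:
    return compute_s / (compute_s + io_stall_s)

# Card spends as long stalled on storage as it does computing -> 50% busy,
# matching the "around 50%" figure quoted in the article.
print(f"{utilisation(10.0, 10.0):.0%}")   # 50%

# Cut the stall time by 10x (e.g. much faster storage) and utilisation jumps.
print(f"{utilisation(10.0, 1.0):.0%}")    # 91%
```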
This is where all-flash data storage comes in: it’s fast, efficient and compact. “Huawei uses all-flash storage solutions to break through bottlenecks and enhance business efficiency,” explained Yuan. “The release of the Huawei OceanStor Pacific 9928/9926 [all-flash scale-out storage] targets high-performance, large-capacity scenarios, enabling intelligent tiering. This addresses the key issues of traditional HDDs: occupying cabinet space, consuming high power, and yet providing low performance.”
Huawei also recently launched the OceanDisk 1800 intelligent disk enclosure, transitioning from JBOD (Just a Bunch of Disks) to diskless enclosures, decoupling storage and computing resources, achieving flexible expansion and efficient management of storage resources. “Moving forward, we will continue to develop GPU+NPU native disk enclosures,” says Yuan.
The advantages of using all-flash data storage over hard disk drives (HDD)
Flash storage (solid state drive – SSD) offers several important advantages over traditional hard disk drives. In short, flash storage is much faster, uses less power, and takes up less room than hard drives, and the performance gap is significant. As Yuan explains: “The throughput can reach 15 gigabytes per second (for SSDs) compared to just 100 megabytes per second for HDDs” – a throughput difference of roughly 150 times. This means SSDs can handle far larger amounts of data far more quickly than traditional hard drives. Flash storage is also more efficient: “Power consumption, space and weight are roughly 10 times less.”
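The practical impact of the throughput figures quoted above can be sketched with a transfer-time comparison. The 10 TB dataset size is an assumption chosen for illustration, not a figure from the article.

```python
# Transfer-time comparison from the quoted throughput figures:
# 15 GB/s for SSD scale-out storage vs 100 MB/s for an HDD.

ssd_gbps = 15.0          # gigabytes per second (SSD)
hdd_gbps = 0.1           # 100 megabytes per second (HDD)

dataset_tb = 10.0        # hypothetical 10 TB AI training corpus
dataset_gb = dataset_tb * 1000

ssd_seconds = dataset_gb / ssd_gbps
hdd_seconds = dataset_gb / hdd_gbps

print(f"SSD: {ssd_seconds/60:.1f} min, HDD: {hdd_seconds/3600:.1f} h, "
      f"speedup: {hdd_seconds/ssd_seconds:.0f}x")
```

At the stated rates, a 10 TB read takes about 11 minutes from flash versus nearly 28 hours from a single HDD, a ratio of roughly 150 to 1.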
The sticking point right now, says Yuan, is the price: “It’s currently about three times more expensive than hard drives but things are changing quickly. AI needs huge amounts of data, and we now have better ways to manage and use that data. As AI becomes more common, customers need more storage and better performance. For example, with hard drives you might store 10 petabytes, but with flash storage you could store 100 petabytes for AI. That’s why we’re working on new features like power saving and data compression to help lower costs and make it easier for customers to switch to flash storage.”
Traditionally, flash storage was not as robust as HDDs, but that is changing, says Yuan. “Five years ago, I might have agreed that SSDs didn’t last as long, since all semiconductor equipment – like CPUs, memory, and SSDs – can fail. But now, thanks to new technology in SSD controllers and memory, they’re much more reliable and can last five to 10 years without any issues. In fact, SSDs now have a similar lifespan to hard drives, which are rated to last about two million hours.”
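The "two million hours" rating mentioned above can be translated into a more intuitive annualised failure rate using the standard approximation AFR ≈ hours-per-year / MTBF, which holds while the rate is small. This is a generic reliability calculation, not a Huawei figure.

```python
# Rough annualised failure rate (AFR) implied by a 2,000,000-hour MTBF rating,
# using the small-rate approximation AFR ~= hours_per_year / MTBF.

MTBF_HOURS = 2_000_000
HOURS_PER_YEAR = 24 * 365

afr = HOURS_PER_YEAR / MTBF_HOURS
print(f"implied AFR: {afr:.2%}")   # about 0.44% of drives failing per year
```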
Powering AI with Huawei’s AI Data Lake Solution
Huawei’s all-flash strategy could contribute significantly to the development of AI. Huawei’s AI Data Lake Solution is designed to accelerate AI adoption across industries by delivering a high-quality AI corpus and speeding up model training and inference. Able to store massive amounts of unstructured data, the AI Data Lake enables centralised, unified data management, breaking down data silos and supporting data-driven business innovation.
Yuan emphasises that AI is driving an exponential increase in data demand: “AI consumes data heavily and to make AI models smarter, they need more data.” Huawei has been investing heavily in new features and architectures to meet these needs, and Yuan notes that technical innovations, such as unified cache and vector databases, are essential for supporting AI’s performance and efficiency:
“For AI inference, we’ve added two key features. First, we use a unified cache manager (UCM) to store important data, so the system doesn’t have to recalculate information every time; instead, it can quickly access what it needs. Secondly, because not all information is available from outside sources, we help customers build local databases to keep their data up-to-date and improve the performance of their AI models.”
Is all-flash data storage better for the environment?
Data centres are highly energy intensive and take up a lot of space. Indeed, the International Energy Agency (IEA)[ii] estimates that data centres accounted for approximately 1.5% of the world’s electricity consumption in 2024 (415 TWh), a figure expected to more than double to around 945 TWh by 2030, representing nearly 3% of total global electricity consumption.
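The growth rate implied by the IEA figures quoted above (415 TWh in 2024 to around 945 TWh by 2030) works out as follows; the compound-growth calculation is this article's illustration, not an IEA-stated rate.

```python
# Implied compound annual growth in data-centre electricity use,
# from 415 TWh in 2024 to ~945 TWh in 2030 (six years).

twh_2024, twh_2030, years = 415.0, 945.0, 6
cagr = (twh_2030 / twh_2024) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")   # ~14.7% per year
```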
However, Huawei’s OceanStor Pacific all-flash scale-out storage has been certified by ENERGY STAR® for energy efficiency[iii].
“We were one of the first companies in Asia to have our storage equipment certified for energy efficiency with the ENERGY STAR label,” says Yuan. “This means our products meet strict standards for saving energy. For example, our all-flash storage uses just 0.25 watts per terabyte, which is the lowest in the industry and much lower than the usual 0.5 to 1 watt per terabyte for standard storage. We also design our systems with features to save even more power, like automatically lowering CPU speed when the workload is light.”
This not only helps customers reduce their power consumption and operating costs, but having ENERGY STAR certification gives customers confidence that they are choosing products that are both high quality and energy efficient, says Yuan.
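The per-terabyte figures Yuan quotes translate directly into power draw at scale. The 10 PB deployment size below is an assumption for illustration, not a figure from the article.

```python
# Power draw implied by the quoted per-terabyte figures, for a
# hypothetical 10 PB deployment.

capacity_tb = 10_000                     # 10 PB
flash_w_per_tb = 0.25                    # quoted all-flash figure
standard_range = (0.5, 1.0)              # quoted "usual" range for standard storage

flash_kw = capacity_tb * flash_w_per_tb / 1000
standard_kw = tuple(capacity_tb * w / 1000 for w in standard_range)

print(f"all-flash: {flash_kw:.1f} kW; standard: "
      f"{standard_kw[0]:.0f}-{standard_kw[1]:.0f} kW")
```

At 10 PB, that is 2.5 kW for all-flash versus 5 to 10 kW for standard storage, a two- to four-fold reduction in steady-state storage power.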
Next steps for data storage
When it comes to the future, Yuan was cautious about divulging Huawei’s next steps, but he did confirm that data demand wasn’t going to plateau any time soon: “Data demand isn’t slowing down – it’s actually growing faster than ever. AI uses huge amounts of data and as AI models get smarter and new technologies appear, the need for data keeps increasing. This is happening in many areas, from science and robotics to new types of AI.”
Yuan concludes: “Our approach means our customers can double their storage capacity every two years without needing more power or extra space. This lets them keep up with growing data needs without major changes to their infrastructure.
“Data is the future. AI needs data, and storage is the foundation that will drive new productivity.”
[i] GlobalData: Strategic Intelligence Technology: Data Centers, September 2025
[ii] https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
[iii] https://www.energystar.gov/productfinder/product/certified-data-center-storage/details/3999750
-

Corporate and M&A Partner Rejoins Squire Patton Boggs in San Francisco
Squire Patton Boggs is pleased to announce that Francesca Crisera Ruiz has rejoined the firm as a partner in its global Corporate Practice Group, based in San Francisco. She returns from Gunderson Dettmer, where she served as a partner in the firm’s Mergers & Acquisitions Practice Group.
“We are thrilled to have Francesca rejoin Squire Patton Boggs, as her depth of expertise in technology-related M&A and commercial work complements a key area of growth within our global practice,” said Cipriano S. Beredo, Americas Chair of the Corporate Practice. “Francesca not only brings a strong track record of client service, but her experience both as an M&A lead and intellectual property specialist uniquely positions her to support technology-driven deal flow, especially in the Americas.”
Ms. Crisera Ruiz, who spent nearly two decades at Squire Patton Boggs earlier in her career, has extensive experience advising technology and other companies on M&A and corporate matters, both domestic and international. She is particularly focused on complex cross-border deals, joint ventures and other transactions such as late-stage venture exits and private equity acquisitions involving technology assets. She has frequently served a dual role as lead deal counsel and subject matter expert on IP and commercial issues, such as contracts, license agreements, and trademarks. Ms. Crisera Ruiz also provides general corporate advisory work, having served as outside general counsel to a number of growth-stage companies.
Mark C. Dosker, Managing Partner of the San Francisco office added, “San Francisco is an important market for our firm and our technology clients, and Francesca’s legal skill set deepens our transactional capabilities in the region. Her return further strengthens our corporate team and enhances the support we provide to clients in California and across the nation. We are delighted to welcome her back to the firm.”
Commenting on her return, Ms. Crisera Ruiz said, “Having spent a significant portion of my career at Squire Patton Boggs, I am honored to rejoin the firm and its talented team across the globe. I have worked with tech-focused companies throughout my career, and through the firm’s extensive global platform, I look forward to further serving clients on their strategic transactions.”
Ms. Crisera Ruiz earned her JD from the University of California, Hastings College of the Law and her BA from the University of California, Los Angeles.
