Ham and bacon sold by supermarkets including Tesco and Marks & Spencer still contain cancer-causing chemicals almost 10 years after the World Health Organization warned about the dangers of their use in processed meats.
Wiltshire ham is the product with the highest concentrations of nitrites, according to analysis that compared it to cooked ham and unsmoked bacon.
All of the 21 products tested in a laboratory were found to contain nitrites, which are used to preserve meat, despite the WHO in October 2015 declaring them to be unsafe.
Tesco’s Wiltshire ham contained the most nitrites – almost 33 milligrams per kilogram. That was 11 times the 2.88mg/kg in its cooked ham and almost four times the 8.64mg/kg in its unsmoked bacon. And it was also almost 18 times the 1.84mg/kg found in Morrisons’ bacon.
Other Wiltshire ham products, including those sold by M&S (28.6mg/kg), Sainsbury’s (21.1mg/kg) and Morrisons (19.2mg/kg), also contained relatively high levels, although Asda’s version had only 8mg/kg.
Food campaigners, who want nitrites banned, said the findings were “alarming”.
Cancer charities said the widespread use of nitrites showed that people should eat as little as possible of processed meats such as ham, bacon and sausages, because consumption increases the risk of bowel cancer. Cancer Research UK estimates that 13% of the 44,100 cases of the disease diagnosed each year in Britain are linked to processed meat.
The analysis was commissioned by the Coalition Against Nitrites and undertaken by Food Science Fusion, an independent company, and the laboratory experts Rejuvetech. However, it found the levels of nitrites in all 21 products were well below the 150mg/kg legal limit.
A spokesperson for the Coalition Against Nitrites, which includes food safety experts, medical specialists and politicians from most of the UK’s major parties, said: “It’s nearly a full decade since the WHO classified nitrite-cured processed meats as a group one carcinogen and it is disappointing and alarming that we continue to see products on sale containing high levels of nitrites.”
They added: “Consumers are increasingly aware of the dangers of nitrites in processed meats, yet they continue to be exposed to their risks.”
Wiltshire ham contains such high levels of nitrites because during the production process the pork is injected with nitrates, as also happens with cooked ham. However, Wiltshire ham is then soaked in a bath of brine and nitrites, to give it its red colour and protect it from deadly bacteria. At that point a chemical reaction occurs, which turns nitrates into nitrites.
Prof Chris Elliott, the food safety expert who led the government-ordered investigation into the 2013 horsemeat scandal, said the research confirmed that nitrites remained “unnecessarily high in certain UK meat products”.
He added: “Given the mounting scientific evidence of their cancer risk, we must prioritise safer alternatives and take urgent action to remove these dangerous chemicals from our diets.”
Several food firms, including Finnebrogue and Waitrose, have responded to mounting concern about nitrites by producing bacon that is free of them.
Dr Rachel Orritt, Cancer Research UK’s health information manager, said: “Eating processed meat increases the risk of bowel cancer. Nitrites … can lead to cell damage, which is one of the ways that processed meat is linked to bowel cancer. The less processed meat you eat, the lower your risk of bowel cancer.”
Dr Giota Mitrou, the director of research and policy at the World Cancer Research Fund, said it recommended “eating as little, if any, processed meat as possible”.
A Tesco spokesperson said: “We follow all UK and EU requirements, alongside guidance from the UK Food Standards Agency, to ensure we get the right balance of improving the shelf life and safety of our products with limited use of additives.
“The nitrites levels in all of our products, including our traditionally cured Finest Wiltshire ham, fall significantly below the legal limits in the UK and EU.
“Nitrates and nitrites are an important part of the curing process for some meats and they are used to prevent the growth of harmful bacteria that cause serious food poisoning.”
Andrew Opie, the director of food and sustainability at the British Retail Consortium, which represents supermarkets, said: “Food safety is paramount to our members and they implement strict policies with their suppliers to ensure all products comply with UK food legislation.”
Any motorist who has ever waited through multiple cycles for a traffic light to turn green knows how annoying signalized intersections can be. But sitting at intersections isn’t just a drag on drivers’ patience — unproductive vehicle idling could contribute as much as 15 percent of the carbon dioxide emissions from U.S. land transportation.
A large-scale modeling study led by MIT researchers reveals that eco-driving measures, which can involve dynamically adjusting vehicle speeds to reduce stopping and excessive acceleration, could significantly reduce those CO2 emissions.
Using a powerful artificial intelligence method called deep reinforcement learning, the researchers conducted an in-depth impact assessment of the factors affecting vehicle emissions in three major U.S. cities.
Their analysis indicates that fully adopting eco-driving measures could cut annual city-wide intersection carbon emissions by 11 to 22 percent, without slowing traffic throughput or affecting vehicle and traffic safety.
Even if only 10 percent of vehicles on the road employ eco-driving, it would result in 25 to 50 percent of the total reduction in CO2 emissions, the researchers found.
In addition, dynamically optimizing speed limits at about 20 percent of intersections provides 70 percent of the total emission benefits. This indicates that eco-driving measures could be implemented gradually while still having measurable, positive impacts on mitigating climate change and improving public health.
“Vehicle-based control strategies like eco-driving can move the needle on climate change reduction. We’ve shown here that modern machine-learning tools, like deep reinforcement learning, can accelerate the kinds of analysis that support sociotechnical decision making. This is just the tip of the iceberg,” says senior author Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).
She is joined on the paper by lead author Vindula Jayawardana, an MIT graduate student; as well as MIT graduate students Ao Qu, Cameron Hickert, and Edgar Sanchez; MIT undergraduate Catherine Tang; Baptiste Freydt, a graduate student at ETH Zurich; and Mark Taylor and Blaine Leonard of the Utah Department of Transportation. The research appears in Transportation Research Part C: Emerging Technologies.
A multi-part modeling study
Traffic control measures typically call to mind fixed infrastructure, like stop signs and traffic signals. But as vehicles become more technologically advanced, an opportunity emerges for eco-driving, a catch-all term for vehicle-based traffic control measures such as the use of dynamic speeds to reduce energy consumption.
In the near term, eco-driving could involve speed guidance in the form of vehicle dashboards or smartphone apps. In the longer term, eco-driving could involve intelligent speed commands that directly control the acceleration of semi-autonomous and fully autonomous vehicles through vehicle-to-infrastructure communication systems.
“Most prior work has focused on how to implement eco-driving. We shifted the frame to consider the question of should we implement eco-driving. If we were to deploy this technology at scale, would it make a difference?” Wu says.
To answer that question, the researchers embarked on a multifaceted modeling study that would take the better part of four years to complete.
They began by identifying 33 factors that influence vehicle emissions, including temperature, road grade, intersection topology, vehicle age, traffic demand, vehicle types, driver behavior, traffic signal timing, and road geometry.
“One of the biggest challenges was making sure we were diligent and didn’t leave out any major factors,” Wu says.
Then they used data from OpenStreetMap, U.S. geological surveys, and other sources to create digital replicas of more than 6,000 signalized intersections in three cities — Atlanta, San Francisco, and Los Angeles — and simulated more than a million traffic scenarios.
The researchers used deep reinforcement learning to optimize each scenario for eco-driving to achieve the maximum emissions benefits.
Reinforcement learning optimizes the vehicles’ driving behavior through trial-and-error interactions with a high-fidelity traffic simulator, rewarding vehicle behaviors that are more energy-efficient while penalizing those that are not.
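The reward structure described here can be sketched in a few lines of Python. This is a minimal illustration with invented terms and coefficients, not the paper's actual reward function:

```python
# Illustrative eco-driving reward (a sketch; the real reward terms and
# weights used in the study are not given here, so these are assumptions).
def eco_reward(fuel_rate, accel, is_stopped):
    """Higher (less negative) reward for smoother, more efficient driving."""
    reward = -fuel_rate            # penalize instantaneous fuel/energy use
    reward -= 0.1 * abs(accel)     # discourage harsh acceleration and braking
    if is_stopped:
        reward -= 0.5              # idling at a red light wastes energy
    return reward

# A smooth approach scores better than stop-and-go at the intersection:
smooth = eco_reward(fuel_rate=0.3, accel=0.2, is_stopped=False)
stop_go = eco_reward(fuel_rate=0.5, accel=2.5, is_stopped=True)
print(smooth > stop_go)  # True
```

In a full training loop, such a reward would be evaluated at every simulation step and fed back to the learning agent by the traffic simulator.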
However, training vehicle behaviors that generalize across diverse intersection traffic scenarios was a major challenge. The researchers observed that some scenarios are more similar to one another than others, such as scenarios with the same number of lanes or the same number of traffic signal phases.
As such, the researchers trained separate reinforcement learning models for different clusters of traffic scenarios, yielding better emission benefits overall.
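The clustering step might look something like the following sketch, which groups hypothetical scenarios by lane count and signal-phase count (the study's actual scenario features and clustering method are richer than this):

```python
from collections import defaultdict

# Hypothetical intersection scenarios; field names and IDs are illustrative.
scenarios = [
    {"id": "atl-001", "lanes": 2, "phases": 2},
    {"id": "sf-014",  "lanes": 2, "phases": 2},
    {"id": "la-203",  "lanes": 3, "phases": 4},
]

# Group scenarios that share structure; one RL policy is then trained
# per cluster rather than one per scenario or one for all scenarios.
clusters = defaultdict(list)
for s in scenarios:
    clusters[(s["lanes"], s["phases"])].append(s["id"])

for key, members in sorted(clusters.items()):
    print(key, members)
```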
But even with the help of AI, analyzing citywide traffic at the network level would be so computationally intensive it could take another decade to unravel, Wu says.
Instead, they broke the problem down and solved each eco-driving scenario at the individual intersection level.
“We carefully constrained the impact of eco-driving control at each intersection on neighboring intersections. In this way, we dramatically simplified the problem, which enabled us to perform this analysis at scale, without introducing unknown network effects,” she says.
Significant emissions benefits
When they analyzed the results, the researchers found that full adoption of eco-driving could result in intersection emissions reductions of between 11 and 22 percent.
These benefits differ depending on the layout of a city’s streets. A denser city like San Francisco has less room to implement eco-driving between intersections, offering a possible explanation for reduced emission savings, while Atlanta could see greater benefits given its higher speed limits.
Even if only 10 percent of vehicles employ eco-driving, a city could still realize 25 to 50 percent of the total emissions benefit because of car-following dynamics: Non-eco-driving vehicles would follow controlled eco-driving vehicles as they optimize speed to pass smoothly through intersections, reducing their carbon emissions as well.
In some cases, eco-driving could also increase vehicle throughput while minimizing emissions. However, Wu cautions that increasing throughput could result in more drivers taking to the roads, reducing emissions benefits.
And while their analysis of widely used safety metrics known as surrogate safety measures, such as time to collision, suggests that eco-driving is as safe as human driving, it could cause unexpected behavior in human drivers. More research is needed to fully understand potential safety impacts, Wu says.
Their results also show that eco-driving could provide even greater benefits when combined with alternative transportation decarbonization solutions. For instance, 20 percent eco-driving adoption in San Francisco would cut emission levels by 7 percent, but when combined with the projected adoption of hybrid and electric vehicles, it would cut emissions by 17 percent.
“This is a first attempt to systematically quantify network-wide environmental benefits of eco-driving. This is a great research effort that will serve as a key reference for others to build on in the assessment of eco-driving systems,” says Hesham Rakha, the Samuel L. Pritchard Professor of Engineering at Virginia Tech, who was not involved with this research.
And while the researchers focus on carbon emissions, the benefits are highly correlated with improvements in fuel consumption, energy use, and air quality.
“This is almost a free intervention. We already have smartphones in our cars, and we are rapidly adopting cars with more advanced automation features. For something to scale quickly in practice, it must be relatively simple to implement and shovel-ready. Eco-driving fits that bill,” Wu says.
This work is funded, in part, by Amazon and the Utah Department of Transportation.
A joint investigation by The Guardian and +972 Magazine has revealed that Microsoft has been providing its Azure cloud computing infrastructure to Israeli military intelligence to store a vast archive of intercepted communications by Palestinians. The data stored and utilized by intelligence agents within Israel Defense Forces (IDF) Unit 8200 has facilitated deadly airstrikes and military operations in both Gaza and the West Bank.
This unprecedented integration of Microsoft with the war crimes of the Israeli military exposes the increasingly central role of giant global tech corporations in, and the correspondence of their interests with, the strategic aims of US imperialism in the ongoing genocide and ethnic cleansing of the Palestinian people.
According to the investigative reports, the collaboration between Microsoft and Unit 8200 was brokered at the highest levels. In late 2021, a delegation from Israel’s military intelligence, led by then-commander Brigadier General Yossi Sariel, met at Microsoft’s Seattle headquarters with CEO Satya Nadella and other key executives.
Nadella personally committed Microsoft’s technological resources to the project, reportedly calling the partnership “critical” for Microsoft’s future, and approved the creation of a customized, segregated area within Azure for exclusive use by Unit 8200.
This platform, according to anonymous sources and leaked internal Microsoft documents, was not for hosting generic cloud services but was designed to meet “the most ambitious demands for mass data collection and analysis ever proposed to the company.” Within months, Unit 8200 was able to begin storing and analyzing intercepted communications on a scale that the Israeli military had previously regarded as technically impossible.
The technological infrastructure that Microsoft built and now maintains is significant in its scale and scope. The foundation of the system is Azure data centers located in the Netherlands, with additional clusters in Ireland and within Israel itself. By July 2025, at least 11,500 terabytes of Israeli military data—equivalent to about 200 million hours of audio—were being stored on Microsoft servers.
Most of the data, according to Israeli and Microsoft sources cited by the investigation, consist of recordings of phone calls by Palestinians in the West Bank and Gaza. Surveillance officers and internal documents described the ambition to “capture and store up to a million calls an hour.”
This required “collaboration on security architecture between Microsoft engineers and military staff,” with Unit 8200 alumni among the Azure development teams responsible for implementation. Internal memos reveal “a rhythm of daily, top-down and bottom-up interaction” between the company’s cloud division and its Israeli military client, with project secrecy so tight that non-essential Microsoft personnel were not permitted to refer to the “8200 project” by name.
The Israeli surveillance of Palestinians is universal. The operations described in the investigation mirror details revealed by former NSA intelligence officer Edward Snowden in 2013 about the scale and scope of US government surveillance, data storage and searching of the electronic communications of everyone in the country, if not throughout the entire world.
Having long controlled the telecommunications infrastructure in the occupied territories, Israel indiscriminately intercepts all phone calls, radio transmissions and internet traffic, gathering up the communications of millions of ordinary Palestinians.
Leaked documents show the primary purpose of this data is to generate rapid “targets for kinetic action,” that is, identify cellular phones and voices for tracking and subsequent drone strikes, air raids or ground operations. Several anonymous Israeli intelligence officers, speaking to reporters, described a machinery that “finds incriminating material to be used for anything: blackmail, mass arrests, administrative detention, or retroactive justification for killing.”
Three sources specifically confirmed the data collected and stored using Microsoft’s cloud has been pivotal in planning lethal airstrikes in Gaza, as well as sweeping military detentions in the West Bank, operations which have left thousands of Palestinians dead or disappeared since October 2023.
These revelations derive from a rare trove of leaked documents, including internal emails, technical memos, contracts and meeting summaries, supplemented by extensive interviews. At least 11 current and former Microsoft and Israeli intelligence sources, some of whom participated directly in the system’s development, provided corroboration under strict anonymity due to the immense legal and career risks involved.
One well-placed official warned that a successful legal challenge to this arrangement, especially under international human rights law, would threaten both Israeli and American interests, given the “codependence of government and cloud monopolies in waging cyber and kinetic war.”
The evidence spans top-secret deals, private communications between Nadella and Sariel, and invoices showing payments amounting to tens of millions of dollars in exchange for priority computing resources and ongoing technical support for both Azure and connected artificial intelligence products.
For its part, Microsoft continues to publicly deny it is abetting Israeli war crimes. In response to questions from The Guardian and +972 Magazine, the company issued carefully worded statements. “Microsoft’s engagement with Unit 8200 has been based on strengthening cybersecurity and protecting Israel from nation state and terrorist cyber-attacks,” a spokesperson said.
With standard language used to deny any connection between US corporations and military violence, Microsoft said, “At no time during this engagement or since that time has Microsoft been aware of the surveillance of civilians or collection of their cellphone conversations using Microsoft’s services, including through the external review it commissioned.”
Nadella’s office insisted that, during the 2021 Seattle meeting, he only “attended for ten minutes at the end” and that “no discussion took place regarding any data the unit intended to transfer to Azure.” The company’s repeated refrain is that it has “found no evidence that Azure or its artificial intelligence products were used to target or harm individuals.” However, the leaked documents and the testimony of Israeli intelligence officers flatly contradict these denials.
The Israeli military, through official IDF spokespeople, has also denied any direct relationship with Microsoft on the processing and storage of surveillance data. One statement reads, “The coordination between the Defense Ministry and the IDF with civilian companies is conducted based on regulated and legally supervised agreements. … The army operates in accordance with international law, with the aim of countering terrorism and ensuring the security of the state and its citizens.”
In a follow-up after the publication of the investigative exposé, the IDF added, “We appreciate Microsoft’s support to protect our cybersecurity. We confirm that Microsoft is not and has not been working with the IDF on the storage or processing of data.” These denials are sharply contradicted by the facts of the reporting, technical documentation and verification by multiple Israeli defense sources.
Unit 8200, the entity at the core of this operation, is Israel’s signals intelligence and cyber warfare unit, widely referred to as the “backbone” of the IDF’s intelligence directorate and described internally as the “Israeli NSA.” Established in 1952 as part of the Aman intelligence directorate, Unit 8200 is responsible for collecting, decrypting and analyzing all forms of electronic communications: calls, texts, radio and internet traffic.
As Israel’s elite military signals intelligence and cyberwarfare unit, Unit 8200 has been central to decades of Israeli military operations, clandestine surveillance and cyber-attacks. It has provided critical intelligence for Israeli wars and targeted assassinations, from the 1967 Six-Day War to operations in Lebanon, Syria and Iran. The unit is closely tied to US and allied intelligence agencies.
Intelligence produced by Unit 8200 has guided countless Israeli attacks on Gaza and underpins the digital infrastructure of occupation, working alongside other Israeli agencies such as Mossad and Shin Bet. Its alumni dominate Israel’s tech sector, reflecting the convergence between surveillance, military violence and privatized digital capitalism.
The Unit 8200 workforce, which reportedly reached 6,000 by 2023, is infamous for developing cutting-edge surveillance technologies and for its ruthless pursuit of “total information dominance” over the Palestinian population. Its alumni permeate Israel’s corporate tech sector and have played roles in high-profile hacking, cyber espionage and malware such as Stuxnet, the computer worm that was designed to sabotage industrial control systems, most notably targeting Iran’s nuclear facilities.
In recent years, and especially under Sariel’s command, the unit was restructured to increase its artificial intelligence (AI) and data-mining capabilities, with over 60 percent of its staff being engineers or data scientists as of October 2023. Unit 8200 was at the center of both the biometric policing—the use of facial recognition, fingerprints, iris scanning, etc. to identify and track individuals in real time—in the West Bank and the aerial kill chains used in Gaza since the intensification of military attacks after October 7, 2023.
Yossi Sariel, who commanded Unit 8200 from 2021 through September 2024, was the architect of the recent shift toward cloud-based surveillance and AI-driven targeting. After a sabbatical in Washington D.C., where he further developed his vision for artificial intelligence on the battlefield, Sariel led the reorganization of the unit’s intelligence efforts into what were dubbed “AI factories” located in a new “targets center” at Nevatim Airbase.
He promoted a doctrine in which the constant harvesting, storage and automated analysis of data—enabled by partners like Microsoft—could reduce the time between interception and lethal action to mere minutes or seconds. The Guardian investigation identified Sariel as the author of “The Human Machine Team,” an influential 2021 treatise published under the pseudonym Brigadier General Y.S.
The book is a manifesto for “synergizing” human and machine decision-making, advocating for the migration of military intelligence databases to secure commercial cloud platforms and the deployment of AI to “generate, prioritize, and execute battlefield targets faster than human adversaries could ever respond.”
Sariel’s text, now available through military academic channels, called on Israeli and allied intelligence services to “leverage cloud supercomputing” as the only path to victory in imperialist warfare.
In January 2025, The Guardian published details showing that Microsoft, along with Google and Amazon, had dramatically deepened their collaboration with the IDF in the wake of the October 2023 assault on Gaza. Leaked documents and commercial records from Israel’s defense ministry showed Microsoft providing at least $10 million worth of technical support and computing resources to the IDF during the most intense phases of the bombardment.
Azure’s participation went far beyond basic administrative services and included core infrastructure essential for “combat and intelligence activities.” The company also provided the IDF with privileged access to artificial intelligence models, including OpenAI’s GPT-4, following policy changes that explicitly allowed such military applications.
At that time, industry analysts noted that Microsoft’s products and expertise played a central role in the IDF’s automation of target selection and in the computerization of “kill/life” decisions in the war on Gaza. Clearly, any errors in this automated selection process were of no concern to the Zionist regime, which has now officially killed more than 60,000 people in Gaza, most of them women and children.
Multiple reports confirm that AI-driven algorithms—built by or with the aid of US tech companies—have been used to rapidly sift through Unit 8200’s Azure-hosted surveillance, producing so-called “target banks” that have enabled targeted strikes on homes, schools and hospitals at a scale and speed previously unknown in world history.
Leaked Israeli documents and internal interviews reveal that commercial cloud access was not a luxury but a strategic requirement for the accelerated pace of massacre in Gaza and the West Bank, with AI-powered identification making mass murder “efficient, scalable, and deniable” for the Israeli government and military command and its US imperialist masters.
The revelations have ignited outrage among workers within Microsoft itself, some of whom have been fired or otherwise sanctioned for their principled opposition to it. On June 14, 2025, GeekWire published an interview with Hossam Nasr, a software engineer who was terminated for organizing protests and open letters against Microsoft’s collaboration with what he called “an apartheid regime committing war crimes.”
Nasr, an American of Egyptian descent, described the atmosphere inside Microsoft as “toxic for dissent,” saying, “When I tried to raise concerns, I was told that contracts with governments are above reproach. The stakes for Palestinians are existential, but for Microsoft’s executives, they are just another revenue stream.”
Nasr also confronted Nadella in a public forum, demanding an end to what he described as the “integration of our cloud with instruments of genocide.” His firing, along with mounting employee resignations and shareholder criticisms, has drawn wider attention to the complicity of the tech giants in modern warfare.
An analysis of Microsoft’s business priorities exposes the link between its drive for profit and the war aims of the US and other imperialist powers in the Middle East and around the world. As of 2025, Microsoft remains the world’s second most valuable company, with annual revenues exceeding $228 billion and a market valuation hovering near $3 trillion.
CEO Satya Nadella’s compensation in the previous fiscal year was over $56 million, a figure linked not only to stock performance but to Microsoft’s ability to secure and expand lucrative government and defense contracts.
Azure’s client roster includes every major branch of the US military, intelligence community and a roster of allied security agencies, making Microsoft’s bottom line dependent on the architecture of surveillance, conquest and war. Key contracts include JEDI (later JWCC) for the US Department of Defense, running into the billions, along with deals with the National Security Agency (NSA), Central Intelligence Agency (CIA) and Department of Homeland Security (DHS).
The company’s strategy is to position its platforms as indispensable “force multipliers” for both commercial and military clients, a model that fuses monopoly capital with the apparatus of military and police violence on a world scale.
The Guardian and +972 Magazine investigation exposes that the ongoing Israeli genocide in Gaza and ethnic cleansing against Palestinians is enabled by the operations and interests of American tech monopolies like Microsoft. As has been clear from the start, the barbarism in Gaza is not an Israeli project alone. It is a global enterprise funded, engineered, supplied and justified at each step by the world capitalist system and its most powerful representatives.
The thermal stability of 1 was assessed kinetically in aqueous solutions at different pH levels relevant to the formulation of commercial products. Andrographolide (1) is the major bioactive metabolite of A. paniculata and is thus a driver for the development of value-added products, either as a single material or as an ingredient in a standardized mixture. The use of standardized preparations of A. paniculata leaves or leaf extracts as functional ingredients for medicinal plant products is also important for commercial development. Knowledge regarding the thermal stability of 1 is therefore of significance for the development of high-quality functional products for the cosmetic and nutraceutical industries. Such standardized extracts and cosmetic products may be exposed to elevated temperatures at different pH levels, depending on the production processes, and may experience a variety of formulation, storage, and manufacturing operations where metabolite integrity could be compromised [13,27,28].
Identification of degradation products
The HPLC method for the analysis of 1 was adapted from the American Herbal Pharmacopoeia [29]. The separation of 1 and its major degradation products from the pH-adjusted solutions in methanol (MeOH) was accomplished within a 25 min time frame. The isolates were characterized using nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry, and by comparison with published spectroscopic data (Supplementary Tables S3 and S4, Figures S1−S10). In the acidic environment (pH 2.0), the degradation products 2 [30–35] and 3 [24,36] eluted at retention times (tR) of 4.1 min and 7.6 min, respectively (Fig. 1). Isoandrographolide (2), a white amorphous powder, possessed a molecular formula of C20H30O5 as determined by (+)-LC-MS-QTOF [M + Na]+ at m/z 373.1976 (Supplementary Figure S11). The 1H- and 13C-NMR spectra of 2 were similar to those of 1, except for a methyl group at C-17 (δH 1.19) and an oxymethine proton at C-12 (δH 4.58), instead of the exo-methylene protons H-17α (δH 4.67) and H-17β (δH 4.89) and the olefinic proton H-12 (δH 6.85) in 1 (Supplementary Table S3). The 13C NMR data of 2 showed two oxymethine carbons at C-8 (δC 83.2) and C-12 (δC 72.3) instead of the two olefinic carbon resonances in 1 (Supplementary Table S4). The NMR data of 2 agree with those reported [35]. 8,9-Didehydroandrographolide (3), also a white amorphous powder, had the molecular formula C20H30O5, based on the (−)-LC-MS-QTOF ion at m/z 395.2069 ([M + HCOO]−) (Supplementary Figure S12). The 1H- and 13C-NMR data were closely related to those of 1, except for an endo-olefinic group at C-8 (δC 129.2), C-9 (δC 134.9), and C-17 (δC 20.9) in 3, instead of the exo-methylene group in 1 (Supplementary Table S4). The NMR spectral data of 3 agreed with those previously reported [24].
For the pH 8.0 solution, the degradation products 4 [3,24,37,38], 5 [39], and 6 [19,20,24] eluted at tR of 5.5, 10.2, and 19.8 min, respectively. Compound 6 was not detected under strongly basic conditions (pH 10.0–12.0) (Fig. 1). 15-Seco-andrographolide (4) was isolated as a white amorphous powder, C20H32O6, m/z 367.2123 ([M – H]–) (Supplementary Figure S13). The structure of 4 was confirmed by comparison of the NMR data with those of 1, through the presence of an acyclic oxymethylene group at C-15 (δH 3.75 and δH 3.61; δC 66.3) instead of the cyclic oxymethylene protons (δH 4.16 and δH 4.46; δC 76.1) in 1 (Supplementary Table S3). The NMR data of 4 agreed with those reported [24]. 14-Deoxy-15-methoxyandrographolide (5) was obtained as a white solid, C21H32O5, m/z 365.2328 ([M + H]+) (Supplementary Figure S14). The NMR data were similar to those of 1, except for resonances for a methylene at C-14 (δH 3.09, 2.67; δC 33.5) and a di-oxymethine at C-15 (δH 5.55; δC 104.4), instead of resonances for an oxymethine and an oxymethylene in 1. The structure of 5 was confirmed by comparison of the NMR data with those reported [39]. The structure of 11,14-dehydro-14-deoxy-andrographolide (6) was confirmed by comparison of the 1H-NMR data with those reported [24]: a trans-olefinic group was observed at C-11 (δH 6.86; δC 136.5) and C-12 (δH 6.16; δC 122.5), together with resonances for an olefinic group at C-13 (δC 129.6) and C-14 (δH 7.44; δC 146.7) instead of an oxymethine in 1. The (+)-LC-MS-QTOF ion at m/z 333.2060 ([M + H]+) confirmed the molecular formula to be C20H28O4 (Supplementary Figure S15).
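As a rough consistency check, the reported adduct ions can be recomputed from monoisotopic atomic masses. The Python sketch below does this with a ±5 mDa tolerance (covering instrument error and the neglected electron mass); note that the observed ion for 6 at m/z 333.2060 ([M + H]+) corresponds to the formula C20H28O4, the 14-deoxy skeleton:

```python
# Monoisotopic masses of the most abundant isotopes (values from standard
# atomic-mass tables).
MONO = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221, "Na": 22.98976928}
PROTON = 1.007276

def mass(formula):
    """Monoisotopic mass of a formula given as an element-count dict."""
    return sum(MONO[el] * n for el, n in formula.items())

M_2 = mass({"C": 20, "H": 30, "O": 5})   # isoandrographolide (2); 3 is isomeric
M_4 = mass({"C": 20, "H": 32, "O": 6})   # 15-seco-andrographolide (4)
M_5 = mass({"C": 21, "H": 32, "O": 5})   # compound 5
M_6 = mass({"C": 20, "H": 28, "O": 4})   # compound 6 (14-deoxy skeleton)

print(round(M_2 + MONO["Na"], 4))                      # [M+Na]+   (obs 373.1976)
print(round(M_2 + mass({"C": 1, "H": 1, "O": 2}), 4))  # [M+HCOO]- (obs 395.2069)
print(round(M_4 - PROTON, 4))                          # [M-H]-    (obs 367.2123)
print(round(M_5 + PROTON, 4))                          # [M+H]+    (obs 365.2328)
print(round(M_6 + PROTON, 4))                          # [M+H]+    (obs 333.2060)
```

All five computed values fall within a few millidaltons of the observed ions, supporting the formula assignments.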
The HPLC chromatograms illustrating the degradation products formed at pH 2.0, pH 6.0, and pH 8.0 are presented in Fig. 1 and Supplementary Figures S16–S18. In experiments using DMSO, the degradation product observed under acidic conditions (pH 2.0) gave signals similar to those in MeOH, whereas under strongly basic conditions (pH 10.0 and pH 12.0) only 4 was formed from 1 (Supplementary Figure S19). Polarity or nucleophilicity therefore plays an important role in the degradation pathways of 1: because MeOH is a stronger nucleophile than DMSO, 5 could be observed under the MeOH conditions.
Fig. 1
HPLC chromatograms of pH 2.0, pH 6.0, and pH 8.0 stressed solutions at 70 °C on a Poroshell EC C18 column (flow rate 1 mL/min, 25 min, 50% MeOH–H2O, UV detection at 224 nm). (A) pH 2.0, showing the peaks of 2 (4.1 min), 1 (5.7 min), and 3 (7.6 min). (B, C) pH 6.0 and pH 8.0, showing the peaks for 4 (5.5 min), 1 (5.7 min), 5 (10.2 min), and 6 (19.8 min). (D, E) pH 10.0 and pH 12.0 at 70 °C, showing the peaks for 4 (5.5 min) and 5 (10.2 min).
The degradation products of 1 persisted in the pH 2.0 solution, being detected for at least 7 days (Supplementary Figure S16), whereas at pH 6.0 and pH 8.0, 4 was the major degradation product, and 5 and 6 were observed in small amounts under these conditions (Fig. 1 and Supplementary Figures S17 and S18). The formation of products 4, 5, and 6 was first detected after 2 days at pH 6.0 and after 1 h at pH 8.0 (Supplementary Figures S17 and S18). These studies support previous reports on the kinetic degradation of 1 in solution (ref. 24). Andrographolide (1) therefore undergoes distinct acid- and base-catalyzed degradation pathways, which provides critical insight into its degradation mechanisms and highlights the importance of pH for the stability and transformation of this bioactive compound when different pharmaceutical or cosmeceutical formulations are being considered.
Kinetics of the degradation of 1
HPLC analysis revealed a direct relationship between the pH of the solution and the rate of the degradation reaction. Specifically, at pH 2.0 and pH 4.0 the degradation of 1 occurred at a slower rate than in solutions at higher pH (pH 6.0 to pH 12.0) (Figs. 2 and 3). The HPLC chromatogram of 1 at pH 2.0 over a period of 7 days at 70 °C indicated two degradation products, 2 and 3 (Supplementary Figure S16), while no degradation was detected at pH 4.0 over the same time frame and temperature (Supplementary Figure S20). Thus, 1 exhibited greater stability in the pH 4.0 buffer solution than at pH 2.0, consistent with previous findings of optimal stability of 1 within the pH range of 3–5 (ref. 40). Although the degradation rate increases below pH 3.0, 1 remains largely intact at pH 2.0, a typical human gastric pH, making this low pH relevant in the context of food, cosmetic, and drug processing.
The metabolite remained stable at the boiling point of MeOH (64.7 °C) for 28 days, whereas in DMSO the degradation rate of 1 depended on the temperature and the composition of the solvent (ref. 23). The rate constant for the degradation of 1 was investigated at pH 2.0, pH 6.0, and pH 8.0, at three elevated temperatures of 70, 77, and 85 °C (Table 1). Chemical kinetic parameters and profiles for the degradation of 1 are shown in Table 2. As expected, in each case the apparent kinetic rate constant (k) increased with increasing temperature. Strong correlation coefficients (0.9800 < r2 < 0.9978) were obtained from the plots of ln(C) against reaction time (days) (Table 1). The rate constant obtained from Eq. (1) was fitted to an Arrhenius-type equation in each kinetic model studied to determine the effect of temperature on the chemical reaction (Fig. 2). The k values indicated the decreased thermal stability of 1 as the temperature was increased. The k value of solid-state andrographolide under heat-accelerated conditions has been reported as 3.8 × 10−6 per day (ref. 17) and 6.58 × 10−6 per day (ref. 18), while in the pH-dependent solutions studied here the k values were 6.5 × 10−5 per day (pH 2.0), 2.5 × 10−3 per day (pH 6.0), and 9.9 × 10−2 per day (pH 8.0). Thus, 1 decomposed faster in acidic and basic solutions than in the solid state. This is the first report documenting the thermal degradation kinetics of 1 under specific pH conditions. These results align with earlier studies that characterized the degradation of 1 in solution as following a first-order reaction model through intermolecular interactions (ref. 23). First-order kinetics indicate that the degradation of 1 is concentration dependent; the amount of 1 degrading per unit of time is therefore not constant under the ambient pH conditions. Most drugs degrade with either zero- or first-order kinetics (ref. 41).
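The first-order treatment above can be sketched numerically. The snippet below is an illustrative example with synthetic data (the rate constant used, 0.065 per day, is hypothetical and is not taken from Table 1): for first-order decay, C(t) = C0·exp(−kt), so ln C is linear in time and the apparent k is the negative slope of a least-squares fit.

```python
import math

def first_order_k(times, concs):
    """Least-squares slope of ln(C) versus time; returns the apparent
    first-order rate constant k (in the same time units as `times`)."""
    ln_c = [math.log(c) for c in concs]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ln_c) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ln_c))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope

# Synthetic concentration-time data generated with k = 0.065 per day
# (hypothetical value, for illustration only).
days = [0, 1, 2, 3, 5, 7]
conc = [1.0 * math.exp(-0.065 * t) for t in days]

k = first_order_k(days, conc)
print(round(k, 4))  # 0.065
```

In practice the quality of the linear fit (the r² values quoted above) is what justifies the first-order model over a zero-order one, where C itself, not ln C, would be linear in time.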
On the other hand, solid-state 1 decomposed through second-order degradation kinetics under accelerated conditions through intramolecular interactions (refs. 19,20). Second-order degradation occurs when the rate is influenced by the concentrations of two separate or identical reactants. The varied matrix or formulation of 1 under study may therefore be the cause of the observed differences in degradation kinetics. The intermolecular interactions of 1 influence the rate of degradation (refs. 17–20). However, the intramolecular interactions of 1, particularly hydrogen bonds and hydrophobic interactions, are crucial for stability and function. The calculated activation energies (Ea) derived from the Arrhenius plots for 1 at pH 2.0, pH 6.0, and pH 8.0 were 118.9, 82.8, and 79.4 kJ/mol, respectively (Table 2). A high Ea generally indicates that the reaction is less sensitive to temperature fluctuations (ref. 15). Hence, the higher Ea value of 1 at pH 2.0 indicates a lower sensitivity to temperature-induced degradation than in the pH 6.0 and pH 8.0 solutions. The stability of 1 under acidic conditions may be due to the presence of free –OH groups causing clustering, followed by strong intermolecular hydrogen bonding. The shelf-life (t90%) values for 1 at pH 2.0, pH 6.0, and pH 8.0 at 25 °C were 4.3 years, 41 days, and 1.1 days, respectively (Table 2), which is highly significant for product formulation studies.
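The Arrhenius and shelf-life arithmetic behind these parameters can be sketched as follows. The example values below (Ea = 80 kJ/mol, k = 2.5 × 10⁻³ per day at 70 °C) are hypothetical placeholders and are not the fitted values of Tables 1–2; the point is only the form of the calculation: Ea from rate constants at two absolute temperatures, extrapolation of k to storage temperature, and the first-order shelf life t90 = ln(10/9)/k.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Ea in J/mol from rate constants at two absolute temperatures,
    via ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

def k_at(T, k_ref, T_ref, Ea):
    """Extrapolate a rate constant to temperature T (K) with the
    Arrhenius equation, assuming a constant pre-exponential factor."""
    return k_ref * math.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def shelf_life_t90(k):
    """First-order shelf life: time for 10% loss, t90 = ln(10/9) / k."""
    return math.log(10.0 / 9.0) / k

# Hypothetical example values (not the paper's Table 2 data):
Ea = 80_000.0        # J/mol
k_70C = 2.5e-3       # per day at 343.15 K
k_25C = k_at(298.15, k_70C, 343.15, Ea)

print(shelf_life_t90(k_25C) > shelf_life_t90(k_70C))  # True: longer shelf life at 25 degC
```

A higher Ea steepens the temperature dependence of k, which is why extrapolating the pH 2.0 data (Ea = 118.9 kJ/mol) down to 25 °C yields a much longer shelf life than the pH 8.0 case.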
Fig. 2
First-order plot and Arrhenius plots of the degradation of andrographolide based on pH, (A) pH 2.0, (B) pH 6.0, and (C) pH 8.0.
Fig. 3
First-order plot of the degradation of andrographolide (1) in MeOH between pH 2.0 and pH 4.0 at 70 °C for 35 days.
Table 2 Predicted shelf-life (t90%) of 1 in pH 2.0, pH 6.0, and pH 8.0 solutions at 25 °C using an Arrhenius method.
Formation of degradation products
The transformation of 1 into 2 can occur through (a) allylic rearrangement of the hydroxyl group on the lactone ring, (b) protonation of the exo-methylene group, and (c) cyclization of the tetrahydrofuran ring (ref. 30). The formation of 3 was envisioned to occur through isomerization of the 8,17-double bond by (a) protonation of the exo-methylene moiety and (b) abstraction of the proton at C-9 (ref. 22) (Scheme 1). However, the 13C-NMR spectrum indicated a mixture of olefinic isomers of 3 for which the precise configuration could not be established. Compound 4 could be produced through fragmentation of the lactone ring of 1, and 6 could arise through base-mediated E2 elimination following abstraction of the δ-proton in 1, leading to 1,4-elimination from an allylic alcohol (ref. 24) (Scheme 1). The formation of 5 may occur through methoxylation at C-15 involving an enol lactone intermediate (m/z 333.2060) (ref. 42), which may be derived from 6 (Scheme 1 and Supplementary Figure S21). However, the 1H-NMR spectrum indicated either a mixture of C-15 epimers of 5 or a single epimer for which the precise configuration could not be established. This mechanistic possibility was supported by the HPLC detection of 5 when 6 was treated at pH 8.0 in MeOH solution (Supplementary Figure S22). This chemical reaction supports the biogenetic pathway for 5 that is proposed to occur in plants (ref. 39). However, 5 could also be an artifact arising from the MeOH in the solution, which acts as a nucleophile toward an enol lactone intermediate (Fig. 4 and Scheme 1). This analysis enumerates some of the mechanisms underlying the degradation of 1 and highlights the potential for exploring reaction opportunities toward further analogues for biological assessment while retaining their stability.
Fig. 4
Chemical structures of 1 and its degradation products.
Scheme 1
Reaction mechanisms for the degradation of 1 under acidic and basic conditions.
Biological activity assessments
To investigate the changes in the biological profile of the degradation products in comparison with 1, two bioassays were performed. The first assay evaluated the inhibitory effect on lipopolysaccharide (LPS)-induced nitric oxide (NO) production in RAW264.7 macrophages. The results of this anti-inflammatory bioassay for 1 and its degradation products are summarized in Table 3. The presence of a newly formed tetrahydrofuran ring and an olefinic bond in 2 did not enhance the anti-inflammatory activity when compared with 1 and the other degradation products. This affirmed that the conjugated Δ12(13)-double bond and the hydroxy group at C-14 are critical structural elements for the inhibition of NO production (ref. 33). In addition, the anti-inflammatory activity of 5 and 6 was not enhanced relative to 1, 3, and 4 by the introduction of a methoxy group at C-15 in 5 or a conjugated double bond in 6. These results align with molecular docking studies on the NO production inhibition activity of 1 and its derivatives (ref. 43). Compounds bearing a C-8 vinylic methyl group (3) or an opened lactone ring (4) were less active than 1. These data confirm that, because the strong anti-inflammatory activity of 1 is not retained in its degradation products, the stability of 1 in formulated medicinal products must be monitored over time to avoid diminished efficacy for the patient. This study affirmed that 1 is extensively degraded under strongly basic conditions. Therefore, alkaline products of 1, such as soaps and shampoos, can be explored for other activities, recognizing that the anti-inflammatory activity of 1 will have been lost.
Table 3 NO production and cytotoxic activity of 1 and its degradation products 2–6.
The cytotoxic activities of the degradation products and 1 were assessed against the SW480 human colon cancer cell line, with the resulting IC50 values presented in Table 3. Compounds 2 and 6 exhibited no activity at the tested concentrations, in agreement with earlier studies across a selection of cancer cell lines (refs. 42–47). In this investigation, the parent compound 1 was identified as the most cytotoxic, with a modest IC50 value of 4.17 µM, suggesting an important role of the allylic hydroxyl lactone moiety of 1 in imparting cytotoxic effects. Compounds with a C-8 vinylic methyl group (3) and with the lactone ring opened (4) demonstrated weaker cytotoxic activity than 1. In summary, the structural integrity of 1 is necessary for maintaining both the anti-inflammatory and cytotoxic activities in developed products.
A cargo ship carries foreign trade containers on the Jiaozhou Bay waterway in Qingdao, Shandong Province, China, on August 5, 2025.
China’s export growth in July sharply beat market expectations as the clock on a tariff truce with the U.S. keeps ticking, while imports rose to their highest in a year.
Exports climbed 7.2% in July in U.S. dollar terms from a year earlier, customs data showed Thursday, exceeding Reuters-polled economists’ estimates of a 5.4% rise.
Imports rose 4.1% last month from a year earlier, marking the biggest jump since July 2024, according to LSEG data. The data also indicated a recovery in import levels following June’s 1.1% rebound. Economists had forecast imports in July to fall 1.0%, according to a Reuters poll.
On a year-to-date basis, China’s overall exports jumped 6.1% from a year earlier, while imports fell 2.7%, customs data showed. China’s trade surplus this year, as of July, reached $683.5 billion, 32% higher than the same period in 2024.
China’s exports have supported the economy “strongly” so far this year, said Zhiwei Zhang, president and chief economist at Pinpoint Asset Management, cautioning that the momentum of businesses’ shipment front-loading may soon fade.
In July, China’s factory activity unexpectedly deteriorated to a three-month low with the official manufacturing purchasing managers’ index falling to 49.3 from 49.7 in June, missing expectations for 49.7.
U.S. and Chinese negotiators have yet to strike an agreement that would keep the triple-digit tariffs at bay as the truce expires on Aug. 12.
US President Donald Trump said the tariff will not impact companies if they have already invested in US facilities.
United States President Donald Trump says he will impose a 100 percent tariff on foreign-made semiconductors, although exemptions will be made for companies that have invested in the US.
“We’ll be putting a tariff on of approximately 100 percent on chips and semiconductors, but if you’re building in the United States of America, there’s no charge, even though you’re building and you’re not producing yet,” Trump told reporters at the Oval Office on Wednesday evening.
The news came after a separate announcement that Apple would invest $600bn in the US, though it was not unexpected by US observers.
Trump told CNBC on Tuesday that he planned to unveil a new tariff on semiconductors “within the next week or so” without offering further details.
Details were also scant at the Oval Office about how and when the tariffs will go into effect, but Asia’s semiconductor powerhouses were quick to respond about the potential impact.
Taiwan, home of the world’s largest chipmaker TSMC, said that the company would be exempt from the tariff due to its existing investments in the US.
“Because Taiwan’s main exporter is TSMC, which has factories in the United States, TSMC is exempt,” National Development Council chief Liu Chin-ching told the Taiwanese legislature.
In March, TSMC – which counts Apple and Nvidia as clients – said it would increase its US investment to $165bn to expand chip making and research centres in Arizona.
A semiconductor wafer displayed at Touch Taiwan, an annual display exhibition in Taipei, Taiwan, on April 16, 2025 [Ann Wang/Reuters]
South Korea was also quick to extinguish any concerns about its top chipmakers, Samsung and SK Hynix, which have also invested in facilities in Texas and Indiana.
Trade envoy Yeo Han-koo said South Korean companies would be exempt from the tariff and that Seoul already faced “favourable” tariffs after signing a trade deal with Washington earlier this year.
TSMC, Samsung and SK Hynix are just some of the foreign tech companies that have invested in the US since 2022, when then-President Joe Biden signed the bipartisan CHIPS Act offering billions of dollars in subsidies and tax credits to re-shore investment and manufacturing.
Less lucky is the Philippines, said Dan Lachica, president of Semiconductor and Electronics Industries in the Philippines Foundation.
He said the tariffs will be “devastating” because semiconductors make up 70 percent of the Philippines’ exports.
Trump’s latest round of blanket tariffs on US trade partners is due to go into effect on Thursday, but the White House has also targeted specific industries like steel, aluminium, automobiles and pharmaceuticals with separate tariffs.
The metal has gained nearly 30% this year due to expectations of rate cuts and diversification from US dollar assets.
Gold held a moderate loss, as traders looked past uncertainty created by US President Donald Trump’s latest trade moves, including threatening a 100% tariff on chip imports.
Bullion was steady around $3,370 an ounce after a 0.3% decline in the previous session. This came after Trump said he would impose a 100% levy on semiconductor imports in a bid to force companies to move production back to the US.
Meanwhile, relations with key trade partners soured, with the US leader doubling the tariff on Indian goods to 50% over the South Asian country’s continued purchases of energy from Russia. Japan may also face higher duties than agreed last month on some products, Kyodo reported.
Traders also watched for Trump’s nomination within days of a temporary Federal Reserve governor who is expected to be more aligned with his agenda to ease monetary policy. Lower rates benefit gold, which doesn’t yield interest.
The precious metal’s recent rally has been driven by rising expectation of rate cuts. Central bank buying and a broad trend of diversifying away from US dollar-denominated assets have also offered support. It’s climbed nearly 30% this year, though the bulk of its gains happened in the first four months, as geopolitical and trade tensions rattled the market.
Gold was 0.1% higher at $3,373.45 an ounce as of 8:42 a.m. in Singapore. The Bloomberg Dollar Spot Index was steady. Silver, palladium and platinum all rose.
At the end of 2024, BBVA launched the Analytics Transformation unit, which houses the functions of advanced analytics and business analytics to accelerate the bank’s transformation to a data-driven organization. This unit includes the more than 1,000 data scientists responsible for designing and executing the AI models and algorithms the bank uses in both its internal operations and in products and services for clients. The more than 2,500 data specialists who help enhance the value of data and AI from the business areas are also part of this unit.
To make the most of these professionals’ technical knowledge, BBVA is offering them a professional development model that lets them keep advancing from the early stages of their careers. They can choose a traditional management role leading teams (manager) or a path of advanced specialization as an expert or individual contributor (IC), which allows them to continue developing advanced AI solutions.
“Many of our professionals stand out for their command of emerging technologies and technical knowledge, which are of great value to the bank’s technological projects. However, traditional career advancement usually leads to management roles, which means that this valuable technical contribution gets lost along the way,” explained Cayetano Gea-Carrasco, Head of Analytics Transformation at BBVA. “With our professional development model for these profiles, we want to recognize and empower technical leadership as a career path parallel to management.”
Within the IC itinerary, BBVA has established three levels of specialization:
Data Scientist Expert: internal professionals who are leaders using advanced algorithms to solve complex business challenges.
Data Scientist Senior Expert: experts with a focus on areas like natural language processing, graphs or recommendation systems, actively participating in publications and training programs.
Data Scientist Master Expert: highly specialized profiles with a key role in innovation, global technical standards and research initiatives.
These experts also belong to a community of practice that encourages knowledge sharing and continuous learning through mentoring, intensive technological training programs (bootcamps) and informal meetings (meetups). Each IC profile also has a personalized development plan, which includes advanced training and participation in strategic projects.
With this model, BBVA is creating an environment where technical knowledge is not only valued – it is also a real path to leadership and professional development.
US Transport Secretary Sean Duffy has announced that the US wants to be the first nation to put a nuclear reactor on the Moon, after an internal directive showed he had ordered the space agency NASA to fast-track the plan.
“We’re going to bring nuclear fission to the lunar surface to power our base,” Mr Duffy wrote on the social media platform X on Thursday, local time.
“If you lead in space, you lead on Earth.“
A directive written by Mr Duffy — first reported by Politico and seen by Agence France Presse (AFP) — demands that NASA build a nuclear reactor that could be used to generate power on the Moon within five years.
It is the first major policy change by Mr Duffy since President Donald Trump appointed him as acting head of the space agency, and it comes just three months after China and Russia announced they were considering a joint effort to also put a nuclear power station on the Moon.
But what would a nuclear reactor help achieve? And what is driving this new space race? Here’s what to know.
What is a nuclear reactor?
Nuclear reactors are the heart of a nuclear power plant. They create electricity by producing a carefully controlled nuclear chain reaction.
Over the years, NASA has funded multiple nuclear reactor research projects. (Four Corners: Ryan Sheridan)
According to the New York Times, Mr Duffy’s directive calls for the agency to solicit proposals from commercial companies for a reactor that could generate 100 kilowatts of power and would be ready for launch in late 2029.
That’s enough electricity to power between 50 and 100 Australian households at once.
As extraordinary as it sounds, this idea to use nuclear energy in space is not new.
Since 2000, NASA has been investing in nuclear reactor research, including in 2022, when it awarded three US$5 million contracts to develop initial designs for the Moon.
But those designs were smaller, producing 40 kilowatts, and were for demonstration purposes to show nuclear power “is a safe, clean, reliable option,” NASA said at the time.
Why put one on the Moon?
A nuclear reactor would be useful for long-term stays on the Moon, as the Trump administration looks to revitalise space exploration.
One lunar day lasts about four Earth weeks, with two weeks of continuous sunshine followed by two weeks of cold darkness.
This cycle makes it difficult for a spacecraft or a Moon base to survive with just solar panels and batteries.
Having a source of power independent of the Sun would be key to a sustained human presence on the lunar surface for at least 10 years, NASA has previously said.
A 2022 concept illustration of NASA’s Fission Surface Power Project on the Moon. (Supplied: NASA)
In the internal directive, Mr Duffy also cites China and Russia’s plans to put a reactor on the Moon by the mid-2030s.
“The first country to do so could potentially declare a keep-out zone which would significantly inhibit the United States from establishing a planned Artemis presence if not there first,” he writes, according to AFP.
Artemis is a reference to NASA’s Moon exploration program, which aims to send four astronauts to the lunar surface in 2026 to establish a lasting presence near the south pole.
Further, Mr Duffy notes it would pave the way for Mars exploration efforts.
“To properly advance this critical technology to be able to support a future lunar economy, high-power energy generation on Mars, and to strengthen our national security in space, it is imperative the agency move quickly,” he reportedly says.
Amid renewed competition for space dominance — more than 50 years after the Cold War spurred the first man to walk on the Moon — it is worth noting that a 1967 UN agreement says no nation can own the Moon.
Mr Duffy’s comments about the potential for another country to declare a “keep-out zone” on its surface appear to be referring to an agreement called the Artemis Accords.
In 2020, seven nations initially signed the agreement to establish principles on how countries should cooperate on the Moon. Since then, 49 more have done so, including Australia, but China is noticeably absent from the list.
These principles include so-called “safety zones” to be established around operations and assets that countries build on the Moon to prevent interference.
So, what is behind this lunar race?
The race to the Moon is driven by scientific knowledge and technological advances, as well as the prospect of accessing valuable resources.
In a 2015 article published on its website, NASA explains why it plans to mine the Moon and how the “lunar gold rush” could work.
Citing data from geological surveys, the space agency says the Moon contains three crucial elements: water, helium and rare earth metals.
The water reserves frozen inside shadowed craters could be used for drinking, and could even be converted into rocket fuel to support future missions to Mars, according to NASA.
The agency says helium would support developments in the energy sector, like nuclear fusion.
As for rare earth metals, it says they would boost the supplies needed for emerging technologies, like smartphones, computers and medical equipment.
China has also tapped the Moon’s potential and made giant leaps in space exploration and technology in recent years.
It has built a space station that is manned by taikonauts, landed a rover on Mars, and became the first nation to touch down on the far side of the Moon.
China, too, wants to set up a lunar base and send people to Mars, adding a layer of political rivalry to the race.