The launch of Google’s Gemini 3 has the entire investing world rethinking the artificial intelligence landscape. The new reasoning model not only leapfrogged the latest from ChatGPT juggernaut OpenAI, the still-private company driving so much of the massive AI spending out there, but was also trained entirely on Google’s custom chips, called tensor processing units (TPUs), which are co-designed by Broadcom.
In a new report, The Information said that Meta Platforms is thinking about using Google’s TPUs in its data centers in 2027. The report fuels the debate about whether custom silicon is going to take a bite out of Nvidia’s graphics processing unit business. Club stock Nvidia sank to nearly three-month lows on Tuesday.
Nvidia put out a statement on X, saying, “We’re delighted by Google’s success — they’ve made great advances in AI, and we continue to supply to Google.” But the post continued, “Nvidia is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done.”
Jim Cramer, who views the recent Nvidia stock drop as a buying opportunity, said Tuesday that Meta or any other tech company shopping around for AI chips won’t lower the price of Nvidia GPUs, which are considered the gold standard in all-purpose chips for running AI workloads. “The demand is insatiable for Nvidia,” Jim said, pointing to last week’s solid earnings and rosy guidance.
The real winners here are Meta and Broadcom, which are also Club holdings. Jim said the idea of using less expensive TPUs gives Meta a chance to show that it’s not going to just spend like a drunken sailor, which was basically what slammed the stock the day after the company boosted its already massive spending guidance. For Broadcom, Jim said it is another feather in the cap of CEO Hock Tan, who also sits on Meta’s board. So, if there is truth to The Information story, that might be the connection.
Broadcom and Nvidia have been top performers for the portfolio in 2025, up more than 60% and 30%, respectively. Meta, also a Club stock, has been up and down, and is up only about 7.5% year to date.
The advent of Gemini 3 and its reliance on TPUs also raises the question of what the model means for OpenAI’s growth trajectory, not to mention its financial commitments. After all, so much of what’s going on with AI nowadays has the ChatGPT creator right at the center of it all. OpenAI is not yet a public company reporting quarterly earnings, but it’s safe to assume it doesn’t currently make enough money to justify its $500 billion valuation, nor its announced spending plans. It’s the momentum of user adoption and, more importantly, the sustainability of that momentum that could, if anything, justify OpenAI’s spending intentions. If it were to lose its lead, OpenAI’s perceived growth path would bear greater scrutiny. ChatGPT has been trained on Nvidia chips. Alphabet’s Google designed its TPUs with the help of Broadcom.
Even before Gemini 3 was released last week, Alphabet stock had been soaring. On Monday, it surged another 6%, extending its year-to-date gain to nearly 70%. Alphabet was up again Tuesday, knocking on the door of a $4 trillion market cap.
While some believe the answer to these questions is that Google and Broadcom are now winning at the expense of Nvidia and OpenAI, and that the future is all about custom silicon, we say: not so fast.
First off, it’s way too early to say the battle of AI reasoning models will play out like the search wars, with the winner taking all. The idea that there will be only one model to rule them all, the way Google Search has dominated for more than two decades, is not where we see this going. Not for the hardware, nor for the software or large language models (LLMs) that run on it. We still think this could all play out in such a way that certain models are better suited to certain tasks. That could mean Gemini for coding and research, Meta AI for more social or creative tasks, Anthropic and Microsoft playing for the enterprise space, and so on. Since we’re still in the early days of AI, the leading model at any given time still must fight to stay on top. For example, when OpenAI’s ChatGPT launched in late 2022 and quickly went viral, Google hastily and disastrously stood up Gemini. But here we are three years later, and Gemini 3 has catapulted Google to the top of the heap as far as capabilities go. ChatGPT is, however, enjoying its first-mover advantage, reporting more than 800 million weekly active users early last month. Google said last week that Gemini has over 650 million monthly active users.
Second, just because Gemini doesn’t rely on Nvidia graphics processing units (GPUs) doesn’t mean that Nvidia hardware is suddenly less relevant. Custom semiconductors are nothing new. While they can bring cost advantages, those savings come with the expense of developing, updating, and manufacturing the chips. Plus, investors must stay mindful that while Gemini may not rely on Nvidia hardware, Google Cloud services do. TPUs are a type of application-specific integrated circuit (ASIC), meaning these chips are suited to a particular type of task or application. That’s all well and good for internal projects, like the advancement of the LLMs that will underlie much of Google’s own services, such as Search, YouTube, or Waymo. However, TPUs are not as attractive when the aim is to rent compute out to customers, which is what Google does as the world’s third-biggest cloud behind Amazon and Microsoft.
For renting cloud compute, Nvidia’s GPUs are the undisputed champions, as they work with Nvidia’s CUDA software platform, which AI researchers have been working with for years. GPUs are flexible, widely available, and already broadly adopted and familiar to developers around the world. A customer that developed strictly on TPUs might realize a cost benefit, but doing so would require giving up CUDA and developing on Google’s own software stack, a stack that doesn’t translate to GPUs or, likely, even to other companies’ custom chips. To be sure, for the biggest LLM companies out there, it may make sense to develop a TPU version alongside a GPU one, if the volume of business warrants it.
We’re monitoring The Information report about Meta, but we are a bit skeptical. For starters, we already know that Meta is working with Broadcom on its own custom chips, so the idea of buying Alphabet’s custom silicon, instead of utilizing the chip it has been working with Broadcom to optimize for its own workloads, is a bit odd. Alphabet is also Meta’s main rival in digital advertising, so the idea that Meta would start shifting to Alphabet as a key supplier, be it for a hardware or a software stack, seems a bit risky.
Nonetheless, the race to build out accelerated AI infrastructure has resulted in plenty of frenemy relationships, so we certainly are not dismissive of the news. However, developing TPU versions of software alongside GPU-based versions is not going to make sense for most companies. Even if a company’s stated goal were to diversify beyond the Nvidia ecosystem, locking itself into another, even more specific software and hardware stack like Google’s TPU environment isn’t a smart way to go about it. In addition to having to rework years of development written in CUDA before realizing any cost benefit from that effort, a company would also be giving up the ability to move to another cloud provider or even bring workloads in-house. Google’s TPUs aren’t available on AWS or Microsoft’s Azure cloud, or on neoclouds like CoreWeave, nor can they be purchased outright by a company that opts to build its own infrastructure. The Information report does suggest that Google may consider selling chips to third parties for use in their own data centers, but it’s not clear when or to what extent. Will that be reserved for large buyers, or open to buyers of all sorts in a more direct challenge to Nvidia? Time will tell, and we will continue to monitor for further details.
What Gemini 3 does indicate is that there are other ways to go about developing a leading LLM, ways that can be run more cheaply than those based on Nvidia hardware. However, it requires years of work and billions of dollars of investment to develop both the hardware and the software necessary to do so. Additionally, what a company like Google develops for internal use to reduce costs may not be as attractive to customers who don’t want to be locked in. The strategy only works for companies doing so much volume internally that the benefit of a cost reduction is worth the loss of flexibility that Nvidia’s GPUs provide. Only a handful of companies in the world have that scale, and fortunately for Nvidia, most of them make more money renting out GPU-based compute.
In the end, we’re back to where we started, believing that custom silicon does make a lot of sense for the big players, which is one key reason we took a position in Broadcom to begin with. But we know that Nvidia’s GPUs have far more reach thanks to their flexibility to run many different types of workloads and a long history that has resulted in broad-based adoption, portability from one cloud or on-premises infrastructure to another, and the largest software library around. Additionally, when we consider sovereign AI spending, these nation-state buyers are going to be far more interested in a flexible, open ecosystem like the one Nvidia provides, which lets buyers write their own code with more control, versus a more specialized, closed ecosystem that puts them more at the mercy of a U.S. company. Consider that Google isn’t even allowed in China, so are Chinese buyers really going to demand Google TPUs, especially if President Donald Trump authorizes Nvidia’s H200 chips for sale into China? Cost savings are important, but from the perspective of a sovereign entity, national security is the priority. The introduction of AI agents may also change some of these dynamics, as it may become easier to switch from one infrastructure to another if AI agents can be deployed to, say, convert CUDA-based programs to something that will run on a TPU.
However, for the time being, we don’t think the introduction of Gemini 3 is enough to derail the demand Nvidia spoke about, or to put on hold the vast number of deals it has made in recent months. Some may argue that renting out compute (infrastructure-as-a-service, or IaaS) will become less relevant as companies like Alphabet instead sell access to their models through an application programming interface (API) in a move toward a model-as-a-service (MaaS) business. It’s a trend we expect to hear more about in a post-Gemini 3 world. However, we’re not at the point of it altering our investment thesis on Nvidia or the broader AI cohort at the moment.
Nonetheless, investors would be remiss not to acknowledge and keep in mind this effort to move away from Nvidia chips in certain instances, and Alphabet’s potential move beyond an IaaS model altogether to a new MaaS business model. Even in that scenario, the world wouldn’t need less compute; the end customer may simply be a bit less picky about the hardware their applications run on, since a MaaS model would allow the API provider to choose the hardware based on cost.
While mindful of the evolving playing field, we see no major change to our view of the AI space. We still think Nvidia is a must-own name and that Broadcom is the way to play the custom silicon space. However, the introduction of Gemini 3 should wake investors up to the changes happening under the surface, and the potential risks they may bring, in different ways, to the juggernauts driving AI innovation.
(Jim Cramer’s Charitable Trust is long NVDA, AVGO, AMZN, META, MSFT.)
Gastric and gastroesophageal junction cancers
Gastric (stomach) cancer is the fifth most common cancer worldwide and the fifth leading cause of cancer mortality.1 Nearly one million new patients were diagnosed with gastric cancer in 2022, with approximately 660,000 deaths reported globally.1 In many regions, its incidence has been increasing in patients younger than 50 years old, along with other gastrointestinal (GI) malignancies.3 In 2024, there were roughly 43,000 drug-treated patients in the US, EU and Japan in early-stage and locally advanced gastric or gastroesophageal junction (GEJ) cancer.2 Approximately 62,000 patients in these regions are expected to be newly diagnosed in this setting by 2030.4
GEJ cancer is a type of gastric cancer that arises from and spans the area where the oesophagus connects to the stomach.5
Disease recurrence is common in patients with resectable gastric cancer despite undergoing surgery with curative intent and treatment with neoadjuvant/adjuvant chemotherapy.6 Approximately one in four patients with gastric cancer who undergo surgery develop recurrent disease within one year, and the five-year survival rate remains poor, with less than half of patients alive at five years.6-7
MATTERHORN
MATTERHORN is a randomised, double-blind, placebo-controlled, multi-centre, global Phase III trial evaluating Imfinzi as perioperative treatment for patients with resectable Stage II-IVA gastric and GEJ cancers. Perioperative therapy includes treatment before and after surgery, also known as neoadjuvant/adjuvant therapy. In the trial, 948 patients were randomised to receive a 1500mg fixed dose of Imfinzi plus FLOT chemotherapy or placebo plus FLOT chemotherapy every four weeks for two cycles prior to surgery. This was followed by Imfinzi or placebo every four weeks for up to 12 cycles after surgery (including two cycles of Imfinzi or placebo plus FLOT chemotherapy and 10 additional cycles of Imfinzi or placebo monotherapy).
In the MATTERHORN trial, the primary endpoint is event-free survival (EFS), defined as time from randomisation until the date of one of the following events (whichever occurred first): RECIST (version 1.1, per blinded independent central review assessment) progression that precludes surgery or requires non-protocol therapy during the neoadjuvant period; RECIST progression/recurrence during the adjuvant period; non-RECIST progression that precludes surgery or requires non-protocol therapy during the neoadjuvant period or discovered during surgery; progression/recurrence confirmed by biopsy post-surgery; or death due to any cause. Key secondary endpoints include pathologic complete response rate, defined as the proportion of patients who have no detectable cancer cells in resected tumour tissue following neoadjuvant therapy, and overall survival (OS). The trial enrolled participants across 176 centres in 20 countries, including the US, Canada and countries in Europe, South America and Asia.
Imfinzi
Imfinzi (durvalumab) is a human monoclonal antibody that binds to the PD-L1 protein and blocks the interaction of PD-L1 with the PD-1 and CD80 proteins, countering the tumour’s immune-evading tactics and releasing the inhibition of immune responses.
In GI cancer, Imfinzi is approved in combination with chemotherapy in locally advanced or metastatic biliary tract cancer (BTC) and in combination with Imjudo (tremelimumab) in unresectable hepatocellular carcinoma (HCC). Imfinzi is also approved as a monotherapy in unresectable HCC in Japan and the EU.
In addition to its indications in GI cancers, Imfinzi is the global standard of care based on OS in the curative-intent setting of unresectable, Stage III non-small cell lung cancer (NSCLC) in patients whose disease has not progressed after chemoradiotherapy (CRT). Additionally, Imfinzi is approved as a perioperative treatment in combination with neoadjuvant chemotherapy in resectable NSCLC, and in combination with a short course of Imjudo and chemotherapy for the treatment of metastatic NSCLC. Imfinzi is also approved for limited-stage small cell lung cancer (SCLC) in patients whose disease has not progressed following concurrent platinum-based CRT; and in combination with chemotherapy for the treatment of extensive-stage SCLC.
Perioperative Imfinzi in combination with neoadjuvant chemotherapy is approved in the US, EU, Japan and other countries for patients with muscle-invasive bladder cancer based on results from the NIAGARA Phase III trial. Additionally, in May 2025, Imfinzi added to Bacillus Calmette-Guérin induction and maintenance therapy met the primary endpoint of disease-free survival for patients with high-risk non-muscle-invasive bladder cancer in the POTOMAC Phase III trial.
Imfinzi in combination with chemotherapy followed by Imfinzi monotherapy is approved as a 1st-line treatment for primary advanced or recurrent endometrial cancer (mismatch repair deficient disease only in the US and EU). Imfinzi in combination with chemotherapy followed by Lynparza (olaparib) and Imfinzi is approved for patients with mismatch repair proficient advanced or recurrent endometrial cancer in the EU and Japan.
Since the first approval in May 2017, more than 414,000 patients have been treated with Imfinzi. As part of a broad development programme, Imfinzi is being tested as a single treatment and in combinations with other anti-cancer treatments for patients with NSCLC, bladder cancer, breast cancer, ovarian cancer and several GI cancers.
AstraZeneca in GI cancers
AstraZeneca has a broad development programme for the treatment of GI cancers across several medicines and a variety of tumour types and stages of disease. In 2022, GI cancers collectively represented approximately 5 million new cancer cases leading to approximately 3.3 million deaths.8
Within this programme, the Company is committed to improving outcomes in gastric, liver, biliary tract, oesophageal, pancreatic, and colorectal cancers.
In addition to its indications in BTC and HCC, Imfinzi is being assessed in combinations, including with Imjudo, in liver, oesophageal and gastric cancers in an extensive development programme spanning early to late-stage disease across settings.
Enhertu (trastuzumab deruxtecan), a HER2-directed antibody drug conjugate (ADC), is approved in the US and several other countries for HER2-positive advanced gastric cancer. Enhertu is jointly developed and commercialised by AstraZeneca and Daiichi Sankyo.
Lynparza, a first-in-class PARP inhibitor, is approved in the US and several other countries for the treatment of BRCA-mutated metastatic pancreatic cancer. Lynparza is developed and commercialised in collaboration with MSD (Merck & Co., Inc. inside the US and Canada).
The Company is also assessing rilvegostomig (AZD2936), a PD-1/TIGIT bispecific antibody, in combination with chemotherapy as an adjuvant therapy in BTC, in combination with bevacizumab with or without Imjudo as a 1st-line treatment in patients with advanced HCC, and as a 1st-line treatment in patients with HER2-negative, locally advanced unresectable or metastatic gastric and GEJ cancers. Rilvegostomig is also being evaluated in combination with Enhertu in previously untreated, HER2-expressing, locally advanced or metastatic BTC.
AstraZeneca is advancing multiple modalities that provide complementary mechanisms for targeting Claudin 18.2, a promising therapeutic target in gastric cancer. These include sonesitatug vedotin, a potential first-in-class ADC licensed from KYM Biosciences Inc., currently in Phase III development; AZD5863, a novel Claudin 18.2/CD3 T-cell engager bispecific antibody licensed from Harbour Biomed in Phase I development; and AZD4360, an antibody drug conjugate, currently being evaluated in a Phase I/II trial in patients with advanced solid tumours.
In early development, AstraZeneca is developing C-CAR031 / AZD7003, a Glypican 3 (GPC3) armoured CAR T, in HCC. C-CAR031 / AZD7003 is being co-developed with AbelZeta in China, where it is under evaluation in an investigator-initiated trial (IIT).
AstraZeneca in immuno-oncology (IO)
AstraZeneca is a pioneer in introducing the concept of immunotherapy into dedicated clinical areas of high unmet medical need. The Company has a comprehensive and diverse IO portfolio and pipeline anchored in immunotherapies designed to overcome evasion of the anti-tumour immune response and stimulate the body’s immune system to attack tumours.
AstraZeneca strives to redefine cancer care and help transform outcomes for patients with Imfinzi as a monotherapy and in combination with Imjudo as well as other novel immunotherapies and modalities. The Company is also investigating next-generation immunotherapies like bispecific antibodies and therapeutics that harness different aspects of immunity to target cancer, including cell therapy and T-cell engagers.
AstraZeneca is pursuing an innovative clinical strategy to bring IO-based therapies that deliver long-term survival to new settings across a wide range of cancer types. The Company is focused on exploring novel combination approaches to help prevent treatment resistance and drive longer immune responses. With an extensive clinical programme, the Company also champions the use of IO treatment in earlier disease stages, where there is the greatest potential for cure.
AstraZeneca in oncology
AstraZeneca is leading a revolution in oncology with the ambition to provide cures for cancer in every form, following the science to understand cancer and all its complexities to discover, develop and deliver life-changing medicines to patients.
The Company’s focus is on some of the most challenging cancers. It is through persistent innovation that AstraZeneca has built one of the most diverse portfolios and pipelines in the industry, with the potential to catalyse changes in the practice of medicine and transform the patient experience.
AstraZeneca has the vision to redefine cancer care and, one day, eliminate cancer as a cause of death.
AstraZeneca
AstraZeneca (LSE/STO/Nasdaq: AZN) is a global, science-led biopharmaceutical company that focuses on the discovery, development, and commercialisation of prescription medicines in Oncology, Rare Diseases, and BioPharmaceuticals, including Cardiovascular, Renal & Metabolism, and Respiratory & Immunology. Based in Cambridge, UK, AstraZeneca’s innovative medicines are sold in more than 125 countries and used by millions of patients worldwide. Please visit astrazeneca.com and follow the Company on Social Media @AstraZeneca.
Three more days of strike action by west London bus workers have been postponed pending further talks next week.
More than 350 staff at London Transit were due to walk out on 26, 27 and 28 November. Drivers, engineers and store workers based at the Westbourne Park depot are unhappy with a below-inflation pay rise offered by the company’s parent firm, First Bus London.
The same Unite union members previously took part in industrial action on 14, 17 and 18 November.
First Bus London said: “We’re pleased that, following positive discussions, the planned strike action has been cancelled and that further talks with Unite will continue.”
It added: “Industrial action causes significant disruption for Londoners who rely on our services and for our colleagues, so we welcome this outcome.”
Unite has agreed to attend a meeting with conciliation service Acas and London Transit on 2 December in an effort to resolve the dispute.
Transport for London confirmed that services were expected to run as normal in the coming days.
‘Utter disregard’
Unite previously requested an above-inflation pay offer with full back pay for all employees, demands it says have not yet been met.
Unite’s general secretary Sharon Graham said: “This is disgraceful behaviour from a company making millions from London bus passengers.
“It shows an utter disregard for its workers and the hard work they do day in, day out.
“Our members won’t stand for such behaviour and Unite will back them all the way in this dispute with a company that has a history of anti-worker behaviour.”
Routes affected during the strike would have included 13, 23, 31, N31, 218, 295 and 452.
WASHINGTON — Commodity Futures Trading Commission Acting Chairman Caroline D. Pham is seeking nominations for the CFTC CEO Innovation Council. The deadline for submissions is December 8. Under Acting Chairman Pham’s leadership, the CFTC has driven rapid advances in innovation and market structure, including the Crypto CEO Forum, prediction markets, perpetual contracts, and 24/7 trading. The CFTC’s Crypto Sprint to implement the recommendations of the President’s Working Group on Digital Asset Markets report is targeted to continue through August 2026 and includes listed spot crypto trading, tokenized collateral and stablecoins, and rulemaking to enable the use of blockchain technology and market infrastructure.
“The U.S. is leading a new era in market structure, and the CFTC is at the forefront of this renaissance accelerated by innovation and technology,” said Acting Chairman Pham. “The CFTC stands ready to carry out our mission over expanded markets and products, including crypto and digital assets, and ensure our markets remain vibrant and resilient while protecting all participants. In order to hit the ground running, it is critical that the CFTC drives public engagement with the support of expert industry leaders and visionaries who are building the future. That is why today I am calling upon CEOs to join us in shaping responsible regulations that will lay the foundation for America’s Golden Age of Innovation.”
Acting Chairman Pham invites members of the public to nominate individuals for the CEO Innovation Council and propose potential topics to prioritize. Each nomination submission should include relevant information about the nominee, such as the individual’s name, title, and organizational affiliation as well as information that supports the individual’s qualifications for the CEO Innovation Council. The submission should also include suggestions for potential topics to prioritize as well as the name and email or mailing address of the person nominating the individual. Submission of a nomination is not a guarantee of selection for the CEO Innovation Council.
CEO Innovation Council nominations and potential topics should be emailed to [email protected]. Please use the subject “CEO Innovation Council Nomination” for submissions.
Just what an embattled chancellor needs on the eve of a tax-raising budget: a leading retailer upping its profits forecast and singing about the joys of the UK economy.
Unfortunately, only the first bit is true. Kingfisher, owner of B&Q and Screwfix (and similar businesses in France and Poland), raised its profit expectations for its current financial year from £480m-£540m to £540m-£570m.
But it definitely didn’t ooze confidence in the UK outlook. Rather, Kingfisher noted “softening market conditions” and added: “We continue to be mindful of inflation, uncertainty ahead of the autumn budget and the softening labour market.”
In other words, the group is saying its improvement in the profits department is a self-help job, which is fair. Like-for-like sales in the UK in the last quarter were up 3%. It is winning market share in the UK (where it helps that Homebase went into administration a year ago), grabbing a bigger slice of the professional “trade” market and improving its e-commerce game. The slick Screwfix operation continues to be streets ahead of its direct rivals. Meanwhile, the recently troubled French operation (Castorama and Brico Dépôt) is being restructured, which helped slightly to offset local “weak consumer sentiment” that sounds several degrees worse than in the UK.
Take a step back and Kingfisher’s progress can be regarded as a parable of the retail scene in two ways. First, it is proof that a basically well managed operator in a strong competitive position can prosper even under subdued economic conditions. For other examples, think Tesco, Sainsbury’s and Next. All have been great shares to own in the 12 months since Rachel Reeves’s last budget, never mind the increase in employers’ national insurance and the rest of it.
The other aspect is more nuanced. On one hand, the ridiculously long and chaotic build-up to the budget has plainly sapped consumer confidence – a CBI distributive trades survey on Tuesday confirmed what we already knew. On the other hand, there still remains a basic level of resilience if Kingfisher is a guide. “Softening” is not the same as outright soft. For that, give thanks for four cuts in interest rates since last October’s budget. Lower mortgage costs matter particularly in DIY businesses for big-ticket items such as kitchens and bathrooms.
So one can – just about – sketch out an optimistic scenario for consumer-facing companies in which Reeves avoids inflation-raising howlers such as last year’s NICs rises and clears the way for the Bank of England to cut interest rates faster. The gilts market has half-bought that story in recent weeks as yields have fallen from their scary September highs.
The alternative, though, is not good from businesses’ point of view. The prospect of rate cuts is virtually the only big-picture factor running in their favour as they contemplate pressure on wages and fixed costs. Remove lower borrowing costs and there’s little to stop softening conditions turning soggy very quickly. Kingfisher, to repeat, can handle most outcomes. But consumer sentiment across the retail landscape looks fragile.
Agentic AI trained on the judgement of elite security analysts automates high-impact workflows to keep public sector defenders ahead of AI-accelerated threats
AUSTIN, Texas – November 25, 2025 – CrowdStrike (NASDAQ: CRWD) today announced that CrowdStrike® Charlotte AI has achieved Federal Risk and Authorization Management Program (FedRAMP) High Authorization. With this designation, CrowdStrike is transforming public sector defense for the AI era, automating the time-consuming tasks better suited for machines. Trained on the judgement of elite security analysts, Charlotte AI is now available to federal, state, and local agencies through the Falcon® platform in GovCloud, elevating defenders from alert handlers to orchestrators of the agentic SOC.
“Government agencies face some of the most advanced cyber threats in the world and demand the highest level of protection,” said Michael Sentonas, president of CrowdStrike. “By bringing Charlotte AI to GovCloud, security teams can automate high-impact workflows with the expertise of the industry’s best SOC operators, stopping sophisticated threats at the speed of AI with precision and control.”
The public sector is under relentless attack, with adversaries weaponizing AI to accelerate every stage of the threat chain and collapse the defender’s window to act. Trained on years of real-world decisions from Falcon® Complete and incident response engagements, Charlotte AI thinks, reasons, and acts like an elite analyst at machine speed, always under defender control. This flips time in defenders’ favor, eliminating high-friction tasks and enabling analysts to focus on the strategic decisions that strengthen security.
Delivered through the FedRAMP-authorized Falcon platform, the first Charlotte AI capabilities available in GovCloud include:
Detection Triage Agent: Triages security detections with over 98% accuracy1, eliminating more than 40 hours of manual work per week on average2 to scale SOC operations and accelerate response to the most critical threats.
Charlotte AI Actions in Falcon Fusion SOAR: Enables analysts to use a drag-and-drop interface to embed AI reasoning into playbooks. For example, determining device containment based on company policies and generating tailored communications for executives, technical teams, and customers, with automatic translation for global reach.
The FedRAMP High Authorization validates that Charlotte AI meets FedRAMP’s most rigorous security and compliance standards, providing data confidentiality, integrity, and availability across mission-critical U.S. government operations.
To learn more about Charlotte AI, visit here. To learn more about CrowdStrike’s ongoing commitment to meeting the highest independent and government-led cybersecurity and information management standards, visit the CrowdStrike Compliance and Certification Page.
About CrowdStrike
CrowdStrike (NASDAQ: CRWD), a global cybersecurity leader, has redefined modern security with the world’s most advanced cloud-native platform for protecting critical areas of enterprise risk – endpoints and cloud workloads, identity and data.
Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities.
Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value.
CrowdStrike: We stop breaches.
Learn more: https://www.crowdstrike.com/
Follow us: Blog | X | LinkedIn | Instagram
Start a free trial today: https://www.crowdstrike.com/trial
1Accuracy rating is a measure of Charlotte AI triage decisions that match the expert decisions from the CrowdStrike Falcon Complete Next-Gen MDR team.
2Calculated by multiplying the average number of alerts triaged by Charlotte AI by a 5-minute triage time per alert as estimated by the Falcon Complete team. Individual results may vary based on factors such as total alert volume.
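As a rough illustration of how those footnoted figures fit together (the implied alert volume below is our own back-calculation for context, not a number stated by CrowdStrike), saving about 40 hours of manual work per week at roughly 5 minutes of triage per alert corresponds to an average volume on the order of:
\[
\frac{40\ \text{hours/week} \times 60\ \text{min/hour}}{5\ \text{min/alert}} \approx 480\ \text{alerts triaged per week}
\]
Actual volumes will differ by environment; as the footnote cautions, individual results vary with total alert volume.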
This recognition, built on a strong, two-year partnership with Google Cloud, confirms Valeo’s pioneering position as it advances into the AI era.
November 25, 2025 — Paris, France — At the Adopt AI Summit held by Artefact, Valeo CEO Christophe Périllat was honored with the “CEO of the Year” award by Google Cloud in France. The title is a powerful recognition of Périllat’s bold, systematic strategy to embed artificial intelligence across the global mobility technology leader. The award celebrates a forward-looking vision pivotal in shaping the future of mobility, demonstrating how Valeo is translating the vast opportunities of AI into concrete, industry-leading solutions.
Christophe Périllat, CEO of Valeo, said: “We see AI as the ultimate productivity game-changer, particularly for complex R&D cycles. The companies that deploy it with precision will pull ahead. Our teams have fully embraced this strategy, building on more than two decades of AI experience. We are not just experimenting — we are leveraging a profound historical advantage to deliver cutting-edge, effective solutions that others are only now beginning to explore.”
Decades of Vision: Valeo’s 20-Year AI Advantage
At Valeo, the application of AI is a long-term strategy that started more than 20 years ago. Over the years, Valeo has developed neural networks and deep learning to understand the vehicle’s environment and to build solutions for safer and more automated mobility. Today, the Group leverages AI at every stage, from research and development and product design to its factories, with more than 9,000 software and system engineers and 200 AI experts.
The Generative Leap: 25% of Certified Code is Now AI-Generated
Through a long-term partnership with Google Cloud, the Group has deployed its coding assistant and trained key users so they can coach and support local teams. By the end of 2024, all software engineers were trained and equipped with generative AI tools, and today more than 25% of Valeo’s certified automotive code is AI-generated, a massive increase from 0% just 16 months ago. The company has also deployed generative AI to greatly simplify the analysis of technical specifications. Through other partnerships, Valeo is also working on the automatic design of mechanical parts and PCBs.
This significant acceleration is further bolstered by internal initiatives focused on hands-on innovation. At the start of its internal transformation in 2024, Valeo organized its first hackathon focused on generative AI, in partnership with Google Cloud and Artefact. During the hackathon, 16 teams took part in training sessions with data scientists from Artefact and experts from Google Cloud, and more than 120 Valeo employees used generative AI to turn innovative ideas into minimum viable products (MVPs). Most of these ideas are now being implemented, and the hackathon laid the foundation for Valeo’s AI4all program, which has led to the deployment of 84 AI agents to boost productivity within the company.
Valeo.AI: Driving Reliability and Safety in Complex Road Scenarios
Valeo’s long-term AI commitment is driven by Valeo.AI, the first global AI research center dedicated specifically to automotive applications, founded in 2017. Connected with the academic world, the center pioneers research in assisted and autonomous driving. Its mission: to overcome key challenges like enhancing Advanced Driving Assistance Systems (ADAS) reliability in complex scenarios (e.g., adverse weather or abnormal road conditions). Valeo.AI focuses on sensor fusion, data-efficient learning, and developing dependable, explainable AI models to ensure safer, more efficient, and robust autonomous travel worldwide.
Christophe Périllat receives the “CEO of the Year” award from Isabelle Fraine, Managing Director of Google Cloud France.