Category: 3. Business

  • Stellantis Advances Global Robotaxi Strategy With New Collaboration With NVIDIA, Uber and Foxconn


    AMSTERDAM – Stellantis today announced a new collaboration with NVIDIA, Uber Technologies, Inc., and Foxconn to explore the joint development and future deployment of Level 4 (driverless) autonomous vehicles for robotaxi services worldwide.

    This initiative marks significant progress in Stellantis’ global robotaxi strategy, following its recently announced agreement with Pony.ai to test autonomous vehicles in Europe. Together, these efforts position Stellantis to play a leading role in the transition toward safe, efficient, and sustainable autonomous transportation.


    Driving the Next Era of Mobility

    Together, the companies intend to combine their strengths – Stellantis’ global vehicle engineering and manufacturing expertise, NVIDIA’s autonomous driving software and AI computing, Foxconn’s electronics and system integration capabilities, and Uber’s leadership in ride-hailing operations – to explore a new generation of Level 4 autonomous vehicles.

    The collaboration will build on Stellantis’ AV-Ready Platforms – specifically the K0 Medium Size Van and STLA Small – powered by the NVIDIA DRIVE AGX Hyperion 10 autonomous vehicle architecture, which includes the safety-certified NVIDIA DriveOS operating system and full-stack NVIDIA DRIVE AV software (NDAS) purpose-built for Level 4 autonomy. Stellantis AV-Ready platforms are designed for maximum flexibility to adapt to multiple passenger and commercial mobility use cases. 

    Uber plans to deploy Stellantis autonomous vehicles in select cities worldwide, starting with 5,000 units, with initial operations beginning in the United States. Pilot programs and testing are expected to ramp up over the coming years, with Start of Production (SOP) targeted for 2028.


    Roles and Responsibilities

    • Stellantis will design, engineer, and manufacture autonomous vehicles based on its LCVs and STLA Small AV-Ready Platforms, integrating NVIDIA DRIVE AV software to enable Level 4 driverless capabilities.
    • NVIDIA will provide its NVIDIA DRIVE AV software, including L4 Parking and L4 Driving capabilities based on the NVIDIA DRIVE AGX Hyperion 10 architecture.
    • Foxconn will collaborate with Stellantis on hardware and systems integration.
    • Uber will operate the robotaxi services, expanding its fleet with Stellantis-built vehicles integrating NVIDIA DRIVE AV software.

    Stellantis’ AV-Ready Platforms are engineered to support Level 4 capabilities through technology upgrades that efficiently integrate all key components, including system redundancies, advanced sensor suites, and high-performance computing, into a flexible and scalable architecture. The result is one of the most competitive platforms in the industry, optimized for safety, reliability, and total cost of ownership for service operators.


    Executive Quotes

    Antonio Filosa, CEO, Stellantis: “Autonomous mobility opens the door to new, more affordable transportation choices for customers. We have built AV-Ready Platforms to meet growing demand, and by partnering with leaders in AI, electronics, and mobility services, we aim to create a scalable solution that delivers smarter, safer, and more efficient mobility for everyone.”

    Dara Khosrowshahi, CEO, Uber: “NVIDIA is the backbone of the AI era and is now fully harnessing that innovation to unleash L4 autonomy at enormous scale, with Stellantis among the first to integrate NVIDIA’s technology for deployment on Uber. We are thrilled to work with Stellantis to bring thousands of their autonomous vehicles to riders around the world.”

    Jensen Huang, Founder and CEO, NVIDIA: “Level 4 autonomy isn’t just a milestone for the auto industry – it’s a leap in AI capability. The vehicle becomes a robot – one that sees, perceives, plans, and drives with superhuman precision. By combining Stellantis’ global scale with NVIDIA DRIVE and Foxconn’s system integration, we’re creating a new class of purpose-built robotaxi fleets – making transportation safer, more accessible, and more affordable for everyone.”

    Young Liu, Chairman, Foxconn: “Autonomous mobility is a strategic priority within Foxconn’s EV program. The strategic partnerships and strengths across NVIDIA, Stellantis, and Uber accelerate the deployment of Level 4 robotaxi technology, with Foxconn delivering HPC and sensor integration to enable a global rollout.”


    About the Collaboration

    The non-binding Memorandum of Understanding (MoU) establishes the framework for future agreements covering technology development, licensing, production, and vehicle procurement. Each company retains the flexibility to pursue additional collaborations in the autonomous driving space.

    This new initiative complements Stellantis’ recent partnership with Pony.ai, announced earlier this month, to co-develop and test Level 4 autonomous vehicles in Europe – a first step toward deploying robotaxi services on European roads.

     

     

    About Stellantis

    Stellantis N.V. (NYSE: STLA / Euronext Milan: STLAM / Euronext Paris: STLAP) is a leading global automaker, dedicated to giving its customers the freedom to choose the way they move, embracing the latest technologies and creating value for all its stakeholders. Its unique portfolio of iconic and innovative brands includes Abarth, Alfa Romeo, Chrysler, Citroën, Dodge, DS Automobiles, FIAT, Jeep®, Lancia, Maserati, Opel, Peugeot, Ram, Vauxhall, Free2move and Leasys. For more information, visit www.stellantis.com.

     

     

    Stellantis Forward-Looking Statements

    This communication contains forward-looking statements. In particular, statements regarding future events and anticipated results of operations, business strategies, the anticipated benefits of the proposed transaction, future financial and operating results, the anticipated closing date for the proposed transaction and other anticipated aspects of our operations or operating results are forward-looking statements. These statements may include terms such as “may”, “will”, “expect”, “could”, “should”, “intend”, “estimate”, “anticipate”, “believe”, “remain”, “on track”, “design”, “target”, “objective”, “goal”, “forecast”, “projection”, “outlook”, “prospects”, “plan”, or similar terms. Forward-looking statements are not guarantees of future performance. Rather, they are based on Stellantis’ current state of knowledge, future expectations and projections about future events and are by their nature, subject to inherent risks and uncertainties. They relate to events and depend on circumstances that may or may not occur or exist in the future and, as such, undue reliance should not be placed on them.

    Actual results may differ materially from those expressed in forward-looking statements as a result of a variety of factors, including: the ability of Stellantis to launch new products successfully and to maintain vehicle shipment volumes; changes in the global financial markets, general economic environment and changes in demand for automotive products, which is subject to cyclicality; Stellantis’ ability to successfully manage the industry-wide transition from internal combustion engines to full electrification; Stellantis’ ability to offer innovative, attractive products and to develop, manufacture and sell vehicles with advanced features including enhanced electrification, connectivity and autonomous-driving characteristics; Stellantis’ ability to produce or procure electric batteries with competitive performance, cost and at required volumes; Stellantis’ ability to successfully launch new businesses and integrate acquisitions; a significant malfunction, disruption or security breach compromising information technology systems or the electronic control systems contained in Stellantis’ vehicles; exchange rate fluctuations, interest rate changes, credit risk and other market risks; increases in costs, disruptions of supply or shortages of raw materials, parts, components and systems used in Stellantis’ vehicles; changes in local economic and political conditions; changes in trade policy, the imposition of global and regional tariffs or tariffs targeted to the automotive industry, the enactment of tax reforms or other changes in tax laws and regulations; the level of governmental economic incentives available to support the adoption of battery electric vehicles; the impact of increasingly stringent regulations regarding fuel efficiency requirements and reduced greenhouse gas and tailpipe emissions; various types of claims, lawsuits, governmental investigations and other contingencies, including product liability and warranty claims and environmental claims, 
investigations and lawsuits; material operating expenditures in relation to compliance with environmental, health and safety regulations; the level of competition in the automotive industry, which may increase due to consolidation and new entrants; Stellantis’ ability to attract and retain experienced management and employees; exposure to shortfalls in the funding of Stellantis’ defined benefit pension plans; Stellantis’ ability to provide or arrange for access to adequate financing for dealers and retail customers and associated risks related to the operations of financial services companies; Stellantis’ ability to access funding to execute its business plan; Stellantis’ ability to realize anticipated benefits from joint venture arrangements; disruptions arising from political, social and economic instability; risks associated with Stellantis’ relationships with employees, dealers and suppliers; Stellantis’ ability to maintain effective internal controls over financial reporting; developments in labor and industrial relations and developments in applicable labor laws; earthquakes or other disasters; risks and other items described in Stellantis’ Annual Report on Form 20-F for the year ended December 31, 2024 and Current Reports on Form 6-K and amendments thereto filed with the SEC; and other risks and uncertainties.

    Any forward-looking statements contained in this communication speak only as of the date of this document and Stellantis disclaims any obligation to update or revise publicly forward-looking statements. Further information concerning Stellantis and its businesses, including factors that could materially affect Stellantis’ financial results, is included in Stellantis’ reports and filings with the U.S. Securities and Exchange Commission and AFM.

     


  • Stock market today: Live updates


    Traders work on the floor of the New York Stock Exchange during morning trading on Oct. 27, 2025 in New York City.

    Michael M. Santiago | Getty Images

    The S&P 500 hit a fresh record on Tuesday as investors stepped further into the artificial intelligence trade a day before the Federal Reserve announces its interest rate decision.

    The broad market index rose about 0.4%. The Nasdaq Composite advanced 0.8%, while the Dow Jones Industrial Average gained 329 points, or 0.7%. The tech-heavy Nasdaq and 30-stock Dow scored new all-time intraday highs alongside the S&P 500.

    Several “Magnificent Seven” names are set to report this week, including Alphabet, Amazon, Apple, Meta Platforms and Microsoft, which together account for roughly one quarter of the S&P 500’s total value. Amazon announced it will begin layoffs on Tuesday, with the move expected to amount to the largest cuts to its workforce in the company’s history. That adds to the slew of job cuts seen in the tech industry this year. Apple and Microsoft were bright spots, however, as both stocks crossed $4 trillion in value during Tuesday’s session.

    Tuesday marks the start of the two-day Fed meeting, where the central bank is expected to cut its benchmark rate for a second time this year. Traders are hoping for a signal from Fed Chair Jerome Powell on Wednesday that the central bank will cut once more at its final meeting of the year in December, a move partly driven by concerns about a weakening labor market. The Fed is dealing with an economic data blackout given the ongoing U.S. government shutdown.

    Investors during Monday’s session cheered cooling tensions between the U.S. and China ahead of a highly anticipated meeting between President Donald Trump and Chinese President Xi Jinping on Thursday. Trump said Monday that both nations were expected to “come away with” a trade deal, which could address China’s rare earth minerals restrictions, soybean purchases and TikTok. The Wall Street Journal also reported Tuesday that tariffs on goods from China would be lowered if Beijing clamps down on the export of chemicals used to produce fentanyl.

    “The market is expecting something conclusive as a result of this meeting,” Dickson said. “If we don’t get an agreement of some type that can splash the headlines, I think that’ll be a disappointment. That doesn’t necessarily mean that the whole thing is solved. It just means there is definitive progress that something has been agreed to.”

    The S&P 500 in the previous session recorded its first-ever close above the 6,800 level, while the tech-heavy Nasdaq Composite and the Dow Jones Industrial Average likewise closed at record highs. The Russell 2000 small-cap benchmark finished at a new all-time high as well.


  • Faraday Future Launches FX Super One MPV in the UAE, with the AIHEREV Max edition priced at 309,000 AED; Soccer Legend Andrés Iniesta Named First Super One Owner and Co-Creation Officer – Faraday Future

    1. Faraday Future Founder and Co-CEO YT Jia Shares Weekly  GlobeNewswire
    2. Jia Yueting: QLGN has reached a partnership with BitGo and will proceed with the configuration of C10 Treasury.  Bitget
    3. Faraday Future Enters UAE Market Through Exclusive Deal  Benzinga
    4. FARADAY FUTURE FOUNDER AND CO-CEO YT JIA SHARES WEEKLY INVESTOR UPDATE: REVEALED  MarketScreener


  • NVIDIA Makes the World Robotaxi-Ready With Uber Partnership to Support Global Expansion


    Stellantis, Lucid and Mercedes-Benz Join Level 4 Ecosystem Leaders Leveraging the NVIDIA DRIVE AV Platform and DRIVE AGX Hyperion 10 Architecture to Accelerate Autonomous Driving

    News Summary:

    • NVIDIA DRIVE AGX Hyperion 10 is a reference compute and sensor architecture that makes any vehicle level 4-ready, enabling automakers and developers to build safe, scalable, AI-defined fleets.
    • Uber will bring together human riders and robot drivers in a worldwide ride-hailing network powered by DRIVE AGX Hyperion-ready vehicles.
    • Stellantis, Lucid and Mercedes-Benz are collaborating on level 4-ready autonomous vehicles compatible with DRIVE AGX Hyperion 10 for passenger mobility, while Aurora, Volvo Autonomous Solutions and Waabi extend level 4 autonomy to long-haul freight.
    • Uber will begin scaling its global autonomous fleet starting in 2027, targeting 100,000 vehicles and supported by a joint AI data factory built on the NVIDIA Cosmos platform.
    • NVIDIA and Uber continue to support a growing level 4 ecosystem that includes Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve and WeRide.
    • NVIDIA launches the Halos Certified Program, the industry’s first system to evaluate and certify physical AI safety for autonomous vehicles and robotics.

    GTC Washington, D.C.—NVIDIA today announced it is partnering with Uber to scale the world’s largest level 4-ready mobility network, using the company’s next-generation robotaxi and autonomous delivery fleets, the new NVIDIA DRIVE AGX Hyperion™ 10 autonomous vehicle (AV) development platform and NVIDIA DRIVE™ AV software purpose-built for L4 autonomy.

    By enabling faster growth across the level 4 ecosystem, NVIDIA can support Uber in scaling its global autonomous fleet to 100,000 vehicles over time, starting in 2027. These vehicles will be developed in collaboration with NVIDIA and other Uber ecosystem partners, using NVIDIA DRIVE. NVIDIA and Uber are also working together to develop a data factory accelerated by the NVIDIA Cosmos™ world foundation model development platform to curate and process data needed for autonomous vehicle development.

    NVIDIA DRIVE AGX Hyperion 10 is a reference production computer and sensor set architecture that makes any vehicle L4-ready. It enables automakers to build cars, trucks and vans equipped with validated hardware and sensors that can host any compatible autonomous-driving software, providing a unified foundation for safe, scalable and AI-defined mobility.

    Uber is bringing together human drivers and autonomous vehicles into a single operating network — a unified ride-hailing service including both human and robot drivers. This network, powered by NVIDIA DRIVE AGX Hyperion-ready vehicles and the surrounding AI ecosystem, enables Uber to seamlessly bridge today’s human-driven mobility with the autonomous fleets of tomorrow.

    “Robotaxis mark the beginning of a global transformation in mobility — making transportation safer, cleaner and more efficient,” said Jensen Huang, founder and CEO of NVIDIA. “Together with Uber, we’re creating a framework for the entire industry to deploy autonomous fleets at scale, powered by NVIDIA AI infrastructure. What was once science fiction is fast becoming an everyday reality.”

    “NVIDIA is the backbone of the AI era, and is now fully harnessing that innovation to unleash L4 autonomy at enormous scale, while making it easier for NVIDIA-empowered AVs to be deployed on Uber,” said Dara Khosrowshahi, CEO of Uber. “Autonomous mobility will transform our cities for the better, and we’re thrilled to partner with NVIDIA to help make that vision a reality.”

    NVIDIA DRIVE Level 4 Ecosystem Grows

    Leading global automakers, robotaxi companies and tier 1 suppliers are already working with NVIDIA and Uber to launch level 4 fleets with NVIDIA AI behind the wheel.

    Stellantis is developing AV-Ready Platforms, specifically optimized to support level 4 capabilities and meet robotaxi requirements. These platforms will integrate NVIDIA’s full-stack AI technology, further expanding connectivity with Uber’s global mobility ecosystem. Stellantis is also collaborating with Foxconn on hardware and systems integration.

    Lucid is advancing level 4 autonomous capabilities for its next-generation passenger vehicles, also using full-stack NVIDIA AV software on the DRIVE Hyperion platform for its upcoming U.S. models.

    Mercedes-Benz is exploring future collaboration with industry-leading partners, powered by its proprietary MB.OS operating system and DRIVE AGX Hyperion. Building on its legacy of innovation, the new S-Class offers an exceptional chauffeured level 4 experience combining luxury, safety and cutting-edge autonomy.

    NVIDIA and Uber will continue to support and accelerate shared partners across the worldwide level 4 ecosystem developing their software stacks on the NVIDIA DRIVE level 4 platform, including Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve and WeRide.

    In trucking, Aurora, Volvo Autonomous Solutions and Waabi are developing level 4 autonomous trucks powered by the NVIDIA DRIVE platform. Their next-generation systems, built on NVIDIA DRIVE AGX Thor, will accelerate Volvo’s upcoming L4 fleet, extending the reach of end-to-end NVIDIA AI infrastructure from passenger mobility to long-haul freight.

    NVIDIA DRIVE AGX Hyperion 10: The Common Platform for L4-Ready Vehicles

    The NVIDIA DRIVE AGX Hyperion 10 production platform features the NVIDIA DRIVE AGX Thor system-on-a-chip; the safety-certified NVIDIA DriveOS™ operating system; a fully qualified multimodal sensor suite including 14 high-definition cameras, nine radars, one lidar and 12 ultrasonics; and a qualified board design.

    DRIVE AGX Hyperion 10 is modular and customizable, allowing manufacturers and AV developers to tailor it to their unique requirements. By offering a prequalified sensor suite architecture, the platform also accelerates development, lowers costs and gives customers a running start with access to NVIDIA’s rigorous development expertise and investments in automotive engineering and safety.

    At the core of DRIVE AGX Hyperion 10 are two performance-packed DRIVE AGX Thor in-vehicle platforms based on the NVIDIA Blackwell architecture. Each delivers more than 2,000 FP4 teraflops (1,000 TOPS of INT8) of real-time compute. DRIVE AGX Thor fuses diverse, 360-degree sensor inputs and is optimized for transformer, vision language action (VLA) models and generative AI workloads — enabling safe, level 4 autonomous driving backed by industry-leading safety certifications and cybersecurity standards.

    In addition, DRIVE AGX’s scalability and compatibility with existing AV software lets companies seamlessly integrate and deploy future upgrades from the platform across robotaxi and autonomous mobility fleets via over-the-air updates.

    Generative AI and Foundation Models Transform Autonomy

    NVIDIA’s autonomous driving approach taps into foundation AI models, large language models and generative AI, trained on trillions of real and synthetic driving miles. These advanced models allow self-driving systems to solve highly complex urban driving situations with humanlike reasoning and adaptability.

    New reasoning VLA models combine visual understanding, natural language reasoning and action generation to enable human-level understanding in AVs. By running reasoning VLA models in the vehicle, the AV can interpret nuanced and unpredictable real-world conditions — such as sudden changes in traffic flow, unstructured intersections and unpredictable human behavior — in real time. AV toolchain leader Foretellix is collaborating with NVIDIA to integrate its Foretify Physical AI toolchain with NVIDIA DRIVE for testing and validating these models.

    To enable the industry to develop and evaluate these large models for autonomous driving, NVIDIA is also releasing the world’s largest multimodal AV dataset. Comprising 1,700 hours of real-world camera, radar and lidar data across 25 countries, the dataset is designed to bolster development, post-training and validation of foundation models for autonomous driving.

    NVIDIA Halos Sets New Standards in Vehicle Safety and Certification

    The NVIDIA Halos system delivers state-of-the-art safety guardrails from cloud to car, establishing a holistic framework to enable safe, scalable autonomous mobility.

    The NVIDIA Halos AI Systems Inspection Lab, dedicated to AI safety and cybersecurity across automotive and robotics, performs independent evaluations and oversees the new Halos Certified Program, helping ensure products and systems meet rigorous criteria for trusted physical AI deployments.

    Companies such as AUMOVIO, Bosch, Nuro and Wayve are among the inaugural members of the NVIDIA Halos AI Systems Inspection Lab — the industry’s first to be accredited by the ANSI National Accreditation Board. The lab aims to accelerate the safe, large-scale deployment of Level 4 automated driving and other AI-powered systems.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.


  • Palantir and NVIDIA Team Up to Operationalize AI — Turning Enterprise Data Into Dynamic Decision Intelligence


    News Summary

    • Palantir is integrating NVIDIA accelerated computing, NVIDIA CUDA-X libraries and open-source NVIDIA Nemotron models into its Ontology framework at the core of the Palantir AI Platform.
    • Lowe’s is pioneering operational AI for its supply chain logistics with Palantir and NVIDIA.

       

    GTC Washington, D.C.—NVIDIA today announced a collaboration with Palantir Technologies Inc. to build a first-of-its-kind integrated technology stack for operational AI — including analytics capabilities, reference workflows, automation features and customizable, specialized AI agents — to accelerate and optimize complex enterprise and government systems.

    Palantir Ontology, at the core of the Palantir AI Platform (AIP), will integrate NVIDIA GPU-accelerated data processing and route optimization libraries, open models and accelerated computing. This combination of Ontology and NVIDIA AI will support customers by providing the advanced, context-aware reasoning necessary for operational AI.

    Enterprises using the customizable technology stack will be able to tap into their data to power domain-specific automations and AI agents for the sophisticated operating environments of retailers, healthcare providers, financial services and the public sector.

    “Palantir and NVIDIA share a vision: to put AI into action, turning enterprise data into decision intelligence,” said Jensen Huang, founder and CEO of NVIDIA. “By combining Palantir’s powerful AI-driven platform with NVIDIA CUDA-X accelerated computing and Nemotron open AI models, we’re creating a next-generation engine to fuel AI-specialized applications and agents that run the world’s most complex industrial and operational pipelines.”

    “Palantir is focused on deploying AI that delivers immediate, asymmetric value to our customers,” said Alex Karp, cofounder and CEO of Palantir Technologies. “We are proud to partner with NVIDIA to fuse our AI-driven decision intelligence systems with the world’s most advanced AI infrastructure.”

    Lowe’s Pioneers AI-Driven Logistics With Palantir and NVIDIA

    Lowe’s, among the first to tap this integrated technology stack from Palantir and NVIDIA, is creating a digital replica of its global supply chain network to enable dynamic and continuous AI optimization. This technology can support supply chain agility while boosting cost savings and customer satisfaction.

    “Modern supply chains are incredibly complex, dynamic systems, and AI will be critical to helping Lowe’s adapt and optimize quickly amid constantly changing conditions,” said Seemantini Godbole, executive vice president and chief digital and information officer at Lowe’s. “Even small shifts in demand can create ripple effects across the global network. By combining Palantir technologies with NVIDIA AI, Lowe’s is reimagining retail logistics, enabling us to serve customers better every day.”

    Advancing Operational Intelligence

    Palantir AIP workloads run in the most complex compliance domains and require the highest standards of privacy and data security. The Ontology at the heart of AIP creates a digital replica of an organization by organizing complex data and logic into interconnected virtual objects, links and actions that represent real-world concepts and their relationships.
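    The object/link/action model described above can be sketched in a few lines of plain Python. This is an illustrative assumption of how an ontology-style digital replica might be structured; the class and function names are hypothetical and do not reflect the actual Palantir AIP API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an ontology-style digital replica:
# real-world concepts become typed objects, relationships become
# links, and permitted state changes become named actions.
# Names are illustrative only, not the Palantir AIP API.

@dataclass
class OntologyObject:
    object_type: str                              # e.g. "Warehouse", "Shipment"
    properties: dict = field(default_factory=dict)
    links: list = field(default_factory=list)     # (link_type, target) pairs

    def link(self, link_type, target):
        """Record a typed relationship to another object."""
        self.links.append((link_type, target))

def apply_action(obj, action_name, **changes):
    """An 'action' mutates object state as a named, auditable operation."""
    obj.properties.update(changes)
    return (action_name, obj.object_type, changes)

# Build a tiny replica of a supply-chain fragment.
warehouse = OntologyObject("Warehouse", {"city": "Charlotte", "capacity": 1200})
shipment = OntologyObject("Shipment", {"status": "in_transit", "units": 300})
shipment.link("destined_for", warehouse)

# Execute an action against the replica and keep its audit record.
log_entry = apply_action(shipment, "mark_delivered", status="delivered")
print(log_entry[0], shipment.properties["status"])  # mark_delivered delivered
```

    The point of the pattern is that automations and AI agents operate on the object graph through actions rather than raw tables, so every state change is expressed in domain terms and leaves an auditable trail.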

    Together, this provides enterprises with an intelligent, AI-enabled operating system that drives efficiency through business process automation.

    To advance enterprise intelligence for the era of AI, NVIDIA data processing, AI software, open models and accelerated computing are now natively integrated with and available through Ontology and AIP. Customers can use NVIDIA CUDA-X™ data science libraries for data processing, paired with NVIDIA accelerated computing, via Ontology to drive real-time, AI-driven decision-making for complex, business-critical workflows.

    The NVIDIA AI Enterprise platform, including NVIDIA cuOpt™ decision optimization software, will enable enterprises to use AI for dynamic supply-chain management.

    NVIDIA Nemotron™ reasoning and NVIDIA NeMo Retriever™ open models will enable enterprises to rapidly build AI agents informed by Ontology.

    NVIDIA and Palantir are also working to bring the NVIDIA Blackwell architecture to Palantir AIP. This will accelerate the end-to-end AI pipeline, from data processing and analytics to model development and fine-tuning to production AI, using long-thinking, reasoning agents. Enterprises will be able to run AIP in NVIDIA AI factories for optimized acceleration.

    Palantir AIP will also be supported in the new NVIDIA AI Factory for Government reference design, announced separately today.

    At NVIDIA GTC Washington, D.C., attendees can register to join Palantir and NVIDIA for a hands-on workshop on operationalizing AI, taking place Wednesday, Oct. 29, from 3:15–5:00 p.m. ET.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.


  • Hyundai Motor Group Executive Chair Euisun Chung Meets HRH the Crown Prince, Reviews New Plant Construction and Group Growth Strategy

    • Executive Chair Chung discusses multifaceted cooperation with HRH the Crown Prince
    • Explains collaboration initiatives and plans as strategic partner in realizing Saudi Arabia’s future vision for mobility and other areas 
    • Exchanges views on Saudi Vision 2030 and expresses expectation for expanded cooperation in future energy sectors 
    • Executive Chair Chung reviews Hyundai Motor’s local plant construction site and Hyundai Motor Group’s mid- to long-term strategy 
    • Hyundai Motor Group expands cooperation with major Saudi institutions in mobility, smart cities, and other sectors 


    SEOUL/RIYADH, October 28, 2025
    – Hyundai Motor Group Executive Chair Euisun Chung visited Saudi Arabia, the Middle East’s largest economy and a nation undergoing major industrial transformation. Chung met with His Royal Highness Prince Mohammed bin Salman bin Abdulaziz Al Saud, Crown Prince and Prime Minister of Saudi Arabia, to review the Group’s local growth strategy and explore future business opportunities.

    During his visit, Executive Chair Chung held discussions with HRH the Crown Prince covering a wide range of topics including the automotive industry and smart cities. This marked the first one-on-one meeting between the two leaders, though they had previously met twice, including during the Crown Prince’s 2022 visit to Korea.

    “Hyundai Motor Group deeply understands the meaning and importance of Saudi Vision 2030,” said Executive Chair Chung. “Based on our competitive business capabilities, we are participating in Saudi Arabia’s giga projects and look forward to expanding collaboration in future energy sectors including renewable energy, hydrogen, Small Modular Reactors (SMRs), and nuclear energy.”

    Strategic Partnership for Vision 2030

    Saudi Arabia is pursuing “Vision 2030,” a national development project aimed at diversifying its economy from energy-focused industries toward manufacturing and hydrogen energy. The kingdom is set to host international events including the World Expo and the FIFA World Cup, positioning itself as one of the world’s most prominent emerging economies.

    As the Middle East’s largest automotive market, Saudi Arabia is actively attracting global automakers, including Hyundai Motor Company, with the long-term goal of becoming an automotive hub serving not only the Middle East but also North Africa.

    Executive Chair Chung expressed gratitude for the Saudi government’s continued interest and support, outlining Hyundai Motor Group’s ongoing collaborative projects and future plans as a partner in realizing Saudi Arabia’s vision for mobility and other sectors.

    Regarding the new manufacturing facility, Chung stated, “Hyundai Motor is building a locally tailored factory with specialized equipment to meet Saudi Arabia’s industrial demands and customer needs. We will also consider expanding production capacity based on future market conditions.”

    New Production Hub Takes Shape

    Prior to meeting HRH the Crown Prince, Executive Chair Chung visited Hyundai Motor Manufacturing Middle East (HMMME) on October 26 at the King Salman Automotive Cluster to review construction progress. Together with José Muñoz, President and CEO of Hyundai Motor Company, Executive Chair Chung engaged in a business update session with Hyundai Motor and Kia local leaders and held in-depth discussions with employees about growth strategies.

    “Establishing a production base in Saudi Arabia represents Hyundai Motor’s new opportunity in the Middle East,” Chung told employees working in extreme heat. “We must thoroughly prepare in every aspect to deliver mobility that exceeds customer expectations on time, in an environment different from our previous bases—characterized by high temperatures and desert conditions.”

    José Muñoz, President and CEO of Hyundai Motor Company said: “Our new Saudi Arabia production facility demonstrates Hyundai’s long-term commitment to the Middle East’s largest automotive market. This plant plays a strategic role in our global mid-term plan while supporting Vision 2030. We’re combining Hyundai’s manufacturing excellence with Saudi Arabia’s talented workforce to deliver mobility solutions across automotive, smart cities, hydrogen energy, and future mobility.”

    HMMME, the first Hyundai Motor production facility in the Middle East, is a cornerstone for establishing Hyundai as a leading brand in Saudi Arabia. The joint venture is 30 percent owned by Hyundai Motor and 70 percent by Saudi Arabia’s Public Investment Fund. Construction began in May 2025, with operations targeted to commence in the fourth quarter of 2026. The facility will have an annual production capacity of 50,000 units, manufacturing both electric vehicles and internal combustion engine vehicles.

    The plant combines Hyundai Motor’s innovative manufacturing technology with Saudi Arabia’s talented workforce and infrastructure, positioning it to play a crucial role in the growth and development of Saudi Arabia’s mobility ecosystem.

    Hyundai Motor plans to operate HMMME as a high-quality vehicle production hub by implementing multi-model production facilities to address diverse customer needs, applying simple and robust design structures for easy maintenance, and incorporating cooling and dust-proof measures to handle high temperatures and sand.

    Market Growth and Expansion Plans

    Hyundai Motor and Kia continue to grow in Saudi Arabia, selling 149,604 units through September 2025, an 8.5 percent increase year over year, with plans to reach approximately 210,000 units by year-end, up 5.9 percent from 2024.

    Leveraging enhanced brand appeal and stable supply through the Saudi production base, Hyundai Motor aims to become the leading automotive company in Saudi Arabia through strategies including Saudi-exclusive special editions, an expanded SUV lineup based on customer preferences, and launches of diverse eco-friendly vehicles including EVs, EREVs (extended-range electric vehicles), and HEVs.

    Kia plans to develop the recently launched Tasman pick-up truck as its flagship model while expanding EV and HEV supply. The brand is also focusing on capturing the PBV market in connection with Saudi Arabia’s smart city projects.

    Expanding Collaboration Across Multiple Sectors

    Hyundai Motor Group is expanding partnerships with key Saudi institutions and companies across mobility, smart cities, and other sectors.

    In September 2024, Hyundai Motor signed an agreement with NEOM for “introducing eco-friendly future mobility,” successfully demonstrating the Universe FCEV (Fuel Cell Electric Vehicle) bus last May on routes connecting NEOM’s central business district with the high-altitude Trojena region at 2,080 meters above sea level. The Group plans to continue collaboration as NEOM’s key partner in future mobility.

    Last month, Kia launched a PV5 pilot project with Red Sea Global (RSG), one of Saudi Arabia’s giga project developers, following up on Hyundai Motor Group’s March 2024 MOU with RSG. Kia will provide PV5 passenger models and technical training support, contributing to eco-friendly mobility adoption and ecosystem development while delivering customized mobility solutions optimized for RSG’s tourism industry.

    Hyundai Motor Group is also partnering with the MISK Foundation, a non-profit organization established by Crown Prince Mohammed bin Salman in 2011, to foster local youth talent and explore smart city collaboration opportunities.

     

    ###

     

    About Hyundai Motor Group
    Hyundai Motor Group is a global enterprise that has created a value chain based on mobility, steel, and construction, as well as logistics, finance, IT, and service. With about 250,000 employees worldwide, the Group’s mobility brands include Hyundai, Kia, and Genesis. Armed with creative thinking, cooperative communication, and the will to take on any challenges, we strive to create a better future for all.

    More information about Hyundai Motor Group can be found at: http://www.hyundaimotorgroup.com or Newsroom: Media Hub by Hyundai, Kia Global Media Center (kianewscenter.com), Genesis Newsroom


    Contact:
    Jiwon Moon
    Global PR Strategy & Planning Team, Hyundai Motor Group
    m00n@hyundai.com


  • Healthcare giant Medline reveals US IPO filing – Reuters


  • Newly trained navigation and verbal memory skills in humans elicit changes in task-related networks but not brain structure

    Newly trained navigation and verbal memory skills in humans elicit changes in task-related networks but not brain structure

    Improvements in the learning rate of the Verbal Memory Transfer task, but not the Navigation Transfer task, were found to correlate with lateral hippocampal volume but not with anterior or posterior hippocampal volume (using volumes averaged across pre- and post-test).

    Consistent findings were obtained when analyzing hemispheric hippocampal volumes (see Appendix 3—figure 3). Specifically, a significant positive correlation was observed between the learning rate in the Verbal Memory group and both left hippocampal volume (r(23) = 0.568, p-FDR=0.014) and right hippocampal volume (r(23) = 0.601, p-FDR=0.007), while no significant correlations were found for the Navigation (left: r(25) = 0.038, p-FDR=0.858; right: r(25) = 0.008, p-FDR=0.970) or Video Control group (left: r(19) = –0.123, p-FDR=0.858; right: r(19) = –0.163, p=0.758), after controlling for sex and site as covariates. Fisher’s z-tests revealed that the positive correlation in the Verbal Memory group was significantly stronger than that in the Navigation group even after controlling for sex and site (left hippocampus: Z=1.942, p-FDR=0.039; right hippocampus: Z=2.326, p-FDR=0.015) and the Video Control group (left hippocampus: Z=2.234, p-FDR=0.038; right hippocampus: Z=2.702, p-FDR=0.010), while no significant difference was observed between the Navigation group and the Video Control group (left hippocampus: Z=–0.439, p=0.669; right hippocampus: Z=–0.553, p-FDR=0.710).

    An analysis was conducted to assess the correlation between anterior and posterior hippocampal volumes and changes in learning rate on the verbal memory transfer task. However, the improvement in learning rate demonstrated no significant correlation with either anterior or posterior hippocampal volume in any of the three groups (ps >0.232, Appendix 2—table 4). This lack of correlation remained consistent after accounting for the influence of sex and site as covariates (ps >0.114).
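    Throughout these analyses, correlations are reported after controlling for sex and site as covariates. As an illustrative sketch only (not the authors’ actual pipeline), a partial correlation of this kind can be computed by residualizing both variables on the covariates and correlating the residuals:

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation between x and y after regressing out covariates.

    Both variables are residualized on the covariate matrix (plus an
    intercept) via ordinary least squares; the residuals are then correlated.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Design matrix: intercept column plus the covariates (e.g. sex, site).
    C = np.column_stack([np.ones(len(x)), np.asarray(covariates, dtype=float)])

    def residualize(v):
        beta, *_ = np.linalg.lstsq(C, v, rcond=None)
        return v - C @ beta

    return float(np.corrcoef(residualize(x), residualize(y))[0, 1])
```

    With sex and site coded as covariate columns, `partial_corr(volume, learning_rate_change, covs)` gives a covariate-adjusted r of the kind reported in the text, though the exact estimator the authors used may differ.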

    We also correlated hippocampal volume and MTL subregions with the change in the average number of words recalled, number of trials to criterion, and slope from linear regression (see Appendix 1) between the post-test and pre-test for the verbal memory transfer task. The analysis revealed no significant correlations between hippocampal volume or MTL subregions and either the average number of words recalled or the number of trials to criterion (Appendix 2—tables 6 and 7), regardless of whether sex and site were included as covariates. However, when correlating slope with hippocampal volume, a positive correlation was found for the Verbal Memory group (total hippocampal volume: r=0.552, p=0.006; left hippocampus: r=0.502, p=0.015; right hippocampus: r=0.561, p=0.005, Appendix 2—table 8), whereas no such correlation was found for the Navigation group (total hippocampus: r=0.094, p=0.656; left hippocampus: r=0.114, p=0.586; right hippocampus: r=0.073, p=0.727) or the Video Control group (total hippocampus: r=–0.239, p=0.325; left hippocampus: r=–0.217, p=0.372; right hippocampus: r=–0.238, p=0.326), even after controlling for sex and site as covariates.

    We also correlated hippocampal volume with the change in learning rate between the post-test and pre-test for the navigation transfer task. No significant correlations were found between total hippocampal volume and changes in learning rate in any of the three conditions (Navigation: r(25) = –0.27, p=0.181; Verbal Memory: r(24) = –0.31, p=0.129; Video Control: r(17) = 0.124, p=0.612), regardless of whether sex and site were included as covariates. Similarly, no significant correlations were observed for either left hippocampal volume (Navigation: r(25) = –0.25, p=0.214; Verbal Memory: r(24) = –0.29, p=0.160; Video Control: r(17) = 0.25, p=0.310) or right hippocampal volume (Navigation: r(25) = –0.27, p=0.167; Verbal Memory: r(24) = –0.302, p=0.142; Video Control: r(17) = –0.018, p=0.940), regardless of whether sex and site were included as covariates. We also did not find any significant correlations when examining hippocampal subfields or surrounding MTL subregions (p’s>0.20, Appendix 2—table 5).

    The relationship between total hippocampal volume, MTL subregion volumes, and changes in performance on the navigation transfer task was assessed through correlational analyses, focusing on path error, overall pointing error, between-environment pointing error, within-environment pointing error, and map accuracy. These analyses demonstrated a lack of significant associations between hippocampal volume and MTL subregions and any of the behavioral metrics under consideration (Appendix 2—table 9, Appendix 2—tables 10–13), regardless of the inclusion of sex and site as covariates.

    Improvements in the learning rate of the Verbal Memory Transfer task, but not the Navigation Transfer task, were found to correlate with both total hippocampal volume and the volume of the CA2/3/DG subfield (using volume data only from the pre-test). We examined the hypothesis that baseline hippocampal volume might be associated with either verbal or navigation performance. For this analysis, we utilized the hippocampal volume (or subfield volume) obtained for each participant at pre-test, rather than averaging volumes across pre and post-test. We then assessed the relationship between hippocampal volumes at pre-test and the change in learning rate from pre-test to post-test for both the Verbal Memory transfer task and the Navigation Transfer task.

    We found a marginal correlation in the verbal memory training group between the pre-test total hippocampal volume and the observed improvement in verbal memory performance from pre- to post-test (r(23) = 0.352, p=0.084). Further analysis accounting for sex and site as covariates revealed a significant positive correlation between total hippocampal volume and the learning rate in the Verbal Memory condition (r(23) = 0.545, p=0.007). This effect was specific to the verbal memory training; no significant correlation was identified for either the Navigation condition (r(25) = 0.072, p=0.721) or the Video Control condition (r(19) = –0.209, p=0.362); the same result was observed when controlling for covariates (Navigation condition: r(25) = 0.082, p=0.695; Video Control condition: r(19) = –0.144, p=0.555). Consistent findings were obtained when analyzing hemispheric hippocampal volumes. Specifically, a significant positive correlation was observed between the learning rate in the Verbal Memory condition and both left hippocampal volume (r(23) = 0.504, p=0.014) and right hippocampal volume (r(23) = 0.545, p=0.007), while no significant correlations were found for the Navigation condition (left: r(25) = 0.122, p=0.561; right: r(25) = 0.045, p=0.832) or Video Control condition (left: r(19) = –0.084, p=0.734; right: r(19) = –0.187, p=0.444), after controlling for sex and site as covariates.

    We further examined the correlation between CA23DG volume and learning rate change. The CA23DG subfield showed a positive correlation with the change in learning rate from pre to post-test in the verbal memory transfer task of the Verbal Memory condition (r(23) = 0.504, p=0.01), suggesting that individuals in the Verbal Memory condition with larger CA23DG volumes exhibited greater improvements in memory performance from pre- to post-test. This correlation persisted even after controlling for sex and site as covariates (r(23) = 0.439, p=0.036). No significant correlations were observed for the CA23DG subfield in the Navigation (r(25) = 0.007, p=0.972) or Video Control conditions (r(19) = –0.05, p=0.828), regardless of whether sex and site were included as covariates.

    Fisher’s z-tests revealed that the positive correlation in the Verbal Memory condition was significantly stronger than that in the Navigation condition even after controlling for sex and site (total hippocampus: Z=1.793, p=0.037; left hippocampus: Z=1.464, p=0.072; right hippocampus: Z=1.919, p=0.027; CA23DG: Z=1.687, p=0.046) and the Video Control condition (total hippocampus: Z=2.382, p=0.009; left hippocampus: Z=2.009, p=0.022; right hippocampus: Z=2.518, p=0.006; CA23DG: Z=1.766, p=0.038), while no significant difference was observed between the Navigation and Video Control conditions (total hippocampus: Z=–0.731, p=0.768; left hippocampus: Z=–0.662, p=0.746; right hippocampus: Z=–0.750, p=0.773; CA23DG: Z=–0.214, p=0.585).
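    The Fisher’s z-tests above compare correlation strength across independent groups. A minimal sketch of that comparison, assuming the standard r-to-z transform with an n−3 degrees-of-freedom correction (the published tests additionally adjust for sex and site, so exact values may differ):

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for the difference between two independent
    correlations; returns the Z statistic and a one-tailed p-value."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transforms
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the difference
    z = (z1 - z2) / se
    p = 0.5 * math.erfc(z / math.sqrt(2.0))          # one-tailed P(Z > z)
    return z, p
```

    Plugging in, e.g., a group correlation of 0.5 versus 0.0 with 30 participants each gives Z ≈ 2.02, a significant one-tailed difference at the 0.05 level.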

    Improvements in the learning rate of the Verbal Memory Transfer task, but not the Navigation Transfer task, were found to correlate with both total hippocampal volume and the volume of the CA2/3/DG subfield (using volume data only from the post-test). We examined the hypothesis that baseline hippocampal volume may be associated with either verbal or navigation performance. For this analysis, we utilized the hippocampal volume (or subfield volume) obtained for each participant at post-test, rather than averaging volumes across pre-test and post-test. We then assessed the relationship between hippocampal volumes at post-test and the change in learning rate from pre-test to post-test for both the Verbal Memory transfer task and the Navigation transfer task.

    We found a marginal correlation in the verbal memory training group between the post-test total hippocampal volume and the observed improvement in verbal memory performance from pre- to post-test (r(23) = 0.360, p=0.078). Further analysis accounting for sex and site as covariates revealed a significant positive correlation between total hippocampal volume and the learning rate in the Verbal Memory condition (r(23) = 0.623, p=0.001). This effect was specific to the verbal memory training; no significant correlation was identified for either the Navigation condition (r(25) = –0.009, p=0.965) or the Video Control condition (r(19) = –0.178, p=0.439); the same was true when controlling for covariates (Navigation condition: r(25) = 0.000, p=0.999; Video Control condition: r(18) = –0.083, p=0.736). Consistent findings were obtained when analyzing hippocampal volumes by hemisphere. Specifically, a significant positive correlation was observed between the learning rate in the Verbal Memory condition and both left hippocampal volume (r(23) = 0.597, p=0.003) and right hippocampal volume (r(23) = 0.594, p=0.003), while no significant correlations were found for the Navigation (left: r(25) = 0.026, p=0.901; right: r(24) = –0.023, p=0.912) or Video Control conditions (left: r(19) = –0.036, p=0.883; right: r(18) = –0.109, p=0.656), after controlling for sex and site as covariates.

    We further examined the correlation between CA23DG volume and learning rate change. The CA23DG subfield showed a positive correlation with the change in learning rate from pre to post-test in the verbal memory transfer task of the Verbal Memory condition (r(23) = 0.545, p=0.005), suggesting that individuals in the Verbal Memory condition with larger CA23DG volumes exhibited greater improvement in memory performance from pre to post-test. This correlation persisted even after controlling for sex and site as covariates (r(23) = 0.537, p=0.008). No significant correlations were observed for the CA23DG subfield in the Navigation (r(25) = –0.027, p=0.894) or Video Control conditions (r(19) = –0.110, p=0.634), regardless of whether sex and site were included as covariates.

    Fisher’s z-tests revealed that the positive correlation in the Verbal Memory condition was significantly stronger than that in the Navigation condition even after controlling for sex and site (total hippocampus: Z=2.473, p=0.007; left hippocampus: Z=2.243, p=0.012; right hippocampus: Z=2.398, p=0.008; CA23DG: Z=2.209, p=0.014) and the Video Control condition (total hippocampus: Z=2.558, p=0.005; left hippocampus: Z=2.280, p=0.011; right hippocampus: Z=2.498, p=0.006; CA23DG: Z=2.337, p=0.014), while no significant difference was observed between the Navigation and Video Control conditions (total hippocampus: Z=–0.266, p=0.605; left hippocampus: Z=–0.2, p=0.841; right hippocampus: Z=–0.276, p=0.609; CA23DG: Z=–0.291, p=0.615).


  • NVIDIA and Partners Build America’s AI Infrastructure and Create Blueprint to Power the Next Industrial Revolution

    NVIDIA and Partners Build America’s AI Infrastructure and Create Blueprint to Power the Next Industrial Revolution

    US Government Labs and Nation’s Leading Companies Investing in Advanced AI Infrastructure to Power AI Factories and Accelerate US AI Development

      News Summary

    • Seven new systems across Argonne and Los Alamos National Laboratories to be deployed, accelerating the Department of Energy’s mission of driving technological leadership across U.S. security, science and energy applications.
    • NVIDIA AI Factory Research Center in Virginia to host the first Vera Rubin infrastructure and lay the groundwork for NVIDIA Omniverse DSX, a blueprint for multi‑generation, gigawatt‑scale build‑outs using NVIDIA Omniverse libraries.
    • Leading U.S. companies across server makers, cloud service providers, model builders, technology suppliers and enterprises are investing in advanced AI infrastructure.

    GTC Washington, D.C. — NVIDIA today announced that it is working with the U.S. Department of Energy’s national labs and the nation’s leading companies to build America’s AI infrastructure to support scientific discovery and economic growth, and to power the next industrial revolution.

    “We are at the dawn of the AI industrial revolution that will define the future of every industry and nation,” said Jensen Huang, founder and CEO of NVIDIA. “It is imperative that America lead the race to the future — this is our generation’s Apollo moment. The next wave of inventions, discoveries and progress will be determined by our nation’s ability to scale AI infrastructure. Together with our partners, we are building the most advanced AI infrastructure ever created, ensuring that America has the foundation for a prosperous future, and that the world’s AI runs on American innovation, openness and collaboration, for the benefit of all.”

    NVIDIA AI Advances Scientific Research at National Labs

    NVIDIA is accelerating seven new systems by providing the AI infrastructure to drive scientific research and innovation at two U.S. Department of Energy (DOE) facilities — Argonne National Laboratory and Los Alamos National Laboratory (LANL).

    NVIDIA is collaborating with Oracle and the DOE to build the department’s largest AI supercomputer for scientific discovery. The Solstice system will feature a record-breaking 100,000 NVIDIA Blackwell GPUs and support the DOE’s mission of developing AI capabilities to drive technological leadership across U.S. security, science and energy applications.

    Another system, Equinox, will include 10,000 NVIDIA Blackwell GPUs and is expected to be available in 2026. Both systems will be located at Argonne and interconnected by NVIDIA networking, delivering a combined 2,200 exaflops of AI performance.

    Argonne is also unveiling three powerful NVIDIA-based systems — Tara, Minerva and Janus — set to expand access to AI-driven computing for researchers across the country. Together, these systems will enable scientists and engineers to revolutionize scientific discovery and boost productivity.

    “Argonne’s collaboration with NVIDIA and Oracle represents a pivotal step in advancing the nation’s AI and computing infrastructure,” said Paul K. Kearns, director of Argonne National Laboratory. “Through this partnership, we’re building platforms that redefine performance, scalability and scientific potential. Together, we are shaping the foundation for the next generation of computing that will power discovery for decades to come.”

    LANL, based in New Mexico, announced the selection of the NVIDIA Vera Rubin platform and the NVIDIA Quantum‑X800 InfiniBand networking fabric for its next-generation Mission and Vision systems, to be built and delivered by HPE. The Vision system builds on the achievements of LANL’s Venado supercomputer, built for unclassified research. Mission, the fifth Advanced Technology System (ATS5) in the National Nuclear Security Administration’s Advanced Simulation and Computing program, which LANL supports, is designed to run classified applications and is expected to be operational in late 2027.

    The Vera Rubin platform will deliver advanced accelerated computing capabilities for these systems, enabling researchers to process and analyze vast datasets at unprecedented speed and scale. Paired with the Quantum‑X800 InfiniBand fabric, which delivers high network bandwidth with ultralow latency, the platform enables scientists to run complex simulations to advance areas spanning materials science, climate modeling and quantum computing research.

    “Our integration of the NVIDIA Vera Rubin platform and Quantum X800 InfiniBand fabric represents a transformative advancement of our lab — harnessing this level of computational performance is essential to tackling some of the most complex scientific and national security challenges,” said Thom Mason, director of Los Alamos National Laboratory. “Our work with NVIDIA helps us remain at the forefront of innovation, driving discoveries to strengthen the resilience of our critical infrastructure.”

    NVIDIA AI Factory Research Center and Gigascale AI Factory Blueprint

    NVIDIA also announced the build-out of an AI Factory Research Center at Digital Realty in Virginia. This facility, powered by the NVIDIA Vera Rubin platform, will accelerate breakthroughs in generative AI, scientific computing and advanced manufacturing and serve as a foundation for pioneering research in digital twins and large‑scale simulation.

    The center lays the groundwork for NVIDIA Omniverse DSX — a blueprint for multi‑generation, gigawatt‑scale build‑outs using NVIDIA Omniverse™ libraries — that will set a new standard of excellence for AI infrastructure. By integrating virtual and physical systems, NVIDIA is creating a scalable model for building intelligent facilities that continuously optimize for performance, energy efficiency and sustainability.

    With this new center, NVIDIA and its partners are collaborating to develop Omniverse DSX, which will integrate autonomous control systems and modular infrastructure to power the next generation of AI factories. NVIDIA is collaborating with companies to enable the gigawatt-scale rollout of hyperscale AI infrastructure:

    • Engineering and construction partners Bechtel and Jacobs are working with NVIDIA to integrate advanced digital twins into validated designs across complex architectural, power, mechanical and electrical systems.
    • Power, cooling and energy equipment partners including Eaton, GE Vernova, Hitachi, Mitsubishi Electric, Schneider Electric, Siemens, Siemens Energy, Tesla, Trane and Vertiv are contributing to the center. Power and system modeling enable AI factories to dynamically interact with utility networks at gigawatt scale. Liquid-cooling, rectification and power-conversion systems optimized for NVIDIA Grace Blackwell and Vera Rubin platforms are also modeled in the earlier NVIDIA Omniverse Blueprint for AI factory digital twins.
    • Software and agentic AI solutions providers including Cadence, Emerald AI, Phaidra, PTC, Schneider Electric ETAP, Siemens and Switch have built digital twin solutions to model and optimize AI factory lifecycles, from design to operation. AI agents continuously optimize power, cooling and workloads, turning the NVIDIA Omniverse DSX blueprint for AI factory digital twins into a self-learning system that boosts grid flexibility, resilience and energy efficiency.


    Building the Next Wave of US Infrastructure

    Leading U.S. companies across server makers, cloud service providers, model builders, technology suppliers and enterprises are investing in advanced AI infrastructure to power AI factories and accelerate U.S. AI development.

    System makers Cisco, Dell Technologies, HPE and Supermicro are collaborating with NVIDIA to build secure, scalable AI infrastructure by integrating NVIDIA GPUs and AI software into their full-stack systems. This includes the newly announced NVIDIA AI Factory for Government reference design, which will accelerate AI deployments for the public sector and highly regulated industries.

    In addition, Cisco is launching the new Nexus N9100 switch series powered by NVIDIA Spectrum-X™ Ethernet switch silicon. The switches’ integration with the existing Cisco Nexus management framework will allow customers to seamlessly deploy and manage the new high-speed NVIDIA-powered fabrics using the same trusted tools and operational models they already rely on.

    Cisco will now offer an NVIDIA Cloud Partner-compliant AI factory with the Cisco Cloud reference architecture based on this switch. The N9100 Series switches will be orderable before the end of the year.

    Leading Cloud Providers and Model Builders Accelerate AI

    Cloud providers and model builders are continuing to invest in AI infrastructure to create a diverse ecosystem for AI innovation, ensuring the U.S. remains at the forefront of AI advancements and their practical applications across industries globally.

    The following companies are expanding their commitments to further bolster U.S.-based AI innovation:

    • Akamai is launching Akamai Inference Cloud, a distributed platform that expands AI inference from core data centers to the edge — targeting 20 initial locations across the globe, including five U.S. states, and plans for further expansion — accelerated by NVIDIA RTX PRO™ Servers.
    • CoreWeave is establishing CoreWeave Federal, a new business focused on providing secure, compliant, high-performance AI cloud infrastructure and services to the U.S. government running on NVIDIA GPUs and validated designs. The initiative includes anticipated FedRAMP and related agency authorizations of the CoreWeave platform.
    • Global AI, a new NVIDIA Cloud Partner, has placed its first major order: 128 NVIDIA GB300 NVL72 racks (featuring more than 9,000 GPUs), which will be the largest GB300 NVL72 deployment in New York.
    • Google Cloud is offering new A4X Max VMs with NVIDIA GB300 NVL72 and G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs, as well as bringing the NVIDIA Blackwell platform on premises and in air-gapped environments with Google Distributed Cloud.
    • Lambda is building a new 100+ megawatt AI factory in Kansas City, Missouri. The supercomputer will initially feature more than 10,000 NVIDIA GB300 NVL72 GPUs to accelerate AI breakthroughs from U.S.-based researchers, enterprises and developers.
    • Microsoft is using NVIDIA RTX PRO 6000 Blackwell GPUs on Microsoft Azure, and has recently announced the deployment of a large-scale Azure cluster using NVIDIA GB300 NVL72 for OpenAI. In addition, Microsoft is adding Azure Local support for NVIDIA RTX™ GPUs in the coming months.
    • Oracle recently launched Oracle Cloud Infrastructure Zettascale10, the industry’s largest AI supercomputer in the cloud, powered by NVIDIA AI infrastructure.
    • Together AI, in partnership with 5C, already operates an AI factory in Maryland featuring NVIDIA B200 GPUs and is bringing a new one online soon in Memphis, Tennessee, featuring NVIDIA GB200 and GB300 systems. Both locations are set for near-term expansion, and additional locations are planned for 2026 to accelerate the development and scaling of AI-native applications.
    • xAI is working on its massive Colossus 2 data center in Memphis, Tennessee, which will house over half a million NVIDIA GPUs — enabling rapid, frontier-level training and inference of next-generation AI models.


    US Enterprises Build AI Infrastructure for Industries

    Beyond cloud providers and model builders, U.S. organizations are looking to build and offer AI infrastructure for themselves and others that will accelerate workloads across a variety of industries, such as pharmaceuticals and healthcare.

    Lilly is building the pharmaceutical industry’s most powerful AI factory with an NVIDIA DGX SuperPOD™ with NVIDIA DGX™ B300 systems, featuring NVIDIA Spectrum-X Ethernet and NVIDIA Mission Control™ software, which will allow the company to develop and train large-scale biomedical foundation models that aim to accelerate drug discovery and design. This builds on Lilly’s use of NVIDIA RTX PRO Servers to power drug discovery and research by accelerating enterprise AI workloads.

    Mayo Clinic — with access to 20 million digitized pathology slides and one of the world’s largest patient databases — has created an AI factory powered by DGX SuperPOD with DGX B200 systems and NVIDIA Mission Control. This delivers the AI computational power needed to advance healthcare applications such as medical research, digital pathology and personalized care for better patient outcomes.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.


  • NVIDIA Introduces NVQLink — Connecting Quantum and GPU Computing for 17 Quantum Builders and Nine Scientific Labs

    NVIDIA Introduces NVQLink — Connecting Quantum and GPU Computing for 17 Quantum Builders and Nine Scientific Labs

    News Summary: 

    • NVIDIA NVQLink high-speed interconnect lets quantum processors connect to world-leading supercomputing labs including Brookhaven National Laboratory, Fermi Laboratory, Lawrence Berkeley National Laboratory (Berkeley Lab), Los Alamos National Laboratory, MIT Lincoln Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories.
    • NVQLink provides quantum researchers with a powerful system for the control algorithms needed for large-scale quantum computing and quantum error correction.
    • NVQLink allows researchers to build hybrid quantum-classical systems, accelerating next-generation applications in chemistry and materials science.

    GTC Washington, D.C. — NVIDIA today announced NVIDIA NVQLink™, an open system architecture for tightly coupling the extreme performance of GPU computing with quantum processors to build accelerated quantum supercomputers.

    Researchers from leading supercomputing centers at national laboratories including Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory (Berkeley Lab), Los Alamos National Laboratory, MIT Lincoln Laboratory, the Department of Energy’s Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories guided the development of NVQLink, helping accelerate next-generation work on quantum computing. NVQLink provides an open approach to quantum integration, supporting 17 QPU builders, five controller builders and nine U.S. national labs.

    Qubits — the units of information enabling quantum computers to process information in ways ordinary computers cannot — are delicate and error-prone, requiring complex calibration, quantum error correction and other control algorithms to operate correctly.

    These algorithms must run over an extremely demanding low-latency, high-throughput connection to a conventional supercomputer to correct qubit errors in real time and enable impactful quantum applications. NVQLink provides that interconnect, enabling the environment needed for future, transformative applications across industries.
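    To make the kind of classical control logic described above concrete, here is a toy sketch (invented for illustration; it is not an NVIDIA API) of syndrome decoding for the textbook three-qubit bit-flip repetition code. Two parity measurements locate a single flipped qubit, and the corrective operation must be computed and applied quickly — which is why these loops demand a low-latency link to classical compute:

    ```python
    # Toy syndrome decoder for the 3-qubit bit-flip repetition code.
    # A logical bit is encoded across qubits q0, q1, q2; two parity
    # (syndrome) measurements s01 = q0 XOR q1 and s12 = q1 XOR q2
    # locate a single bit-flip error without reading the data qubits.

    def decode_syndrome(s01: int, s12: int):
        """Map a syndrome pair to the index of the flipped qubit (None = no error)."""
        lookup = {
            (0, 0): None,  # no error detected
            (1, 0): 0,     # q0 flipped
            (1, 1): 1,     # q1 flipped
            (0, 1): 2,     # q2 flipped
        }
        return lookup[(s01, s12)]

    def correct(bits):
        """Measure syndromes on a noisy codeword and apply the correction."""
        s01 = bits[0] ^ bits[1]
        s12 = bits[1] ^ bits[2]
        flipped = decode_syndrome(s01, s12)
        if flipped is not None:
            bits = bits.copy()
            bits[flipped] ^= 1  # apply the corrective bit flip (Pauli X)
        return bits

    # Any single bit flip is corrected back to the original codeword.
    assert correct([1, 0, 0]) == [0, 0, 0]
    assert correct([1, 1, 0]) == [1, 1, 1]
    ```

    Real error-correcting codes (such as surface codes) involve far more qubits and far more complex decoders, but the structure is the same: measure syndromes, decode classically, apply corrections — all within the qubits' coherence time.
    
    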

    “In the near future, every NVIDIA GPU scientific supercomputer will be hybrid, tightly coupled with quantum processors to expand what is possible with computing,” said Jensen Huang, founder and CEO of NVIDIA. “NVQLink is the Rosetta Stone connecting quantum and classical supercomputers — uniting them into a single, coherent system that marks the onset of the quantum-GPU computing era.”

    U.S. national laboratories, led by the Department of Energy, will use NVIDIA NVQLink to make new breakthroughs in quantum computing.

    “Maintaining America’s leadership in high-performance computing requires us to build the bridge to the next era of computing: accelerated quantum supercomputing,” said U.S. Secretary of Energy Chris Wright. “The deep collaboration between our national laboratories, startups and industry partners like NVIDIA is central to this mission — and NVIDIA NVQLink provides the critical technology to unite world-class GPU supercomputers with emerging quantum processors, creating the powerful systems we need to solve the grand scientific challenges of our time.”

    NVQLink connects the many approaches to quantum processors and control hardware systems directly to AI supercomputing — providing a unified, turnkey solution for overcoming the key integration challenges that quantum researchers face in scaling their hardware.

    With contributions from supercomputing centers, quantum hardware builders and quantum control system providers, NVQLink sets the foundation for uncovering the breakthroughs in control, calibration, quantum error correction and hybrid application development needed to run useful quantum applications.

    Researchers and developers can access NVQLink through its integration with the NVIDIA CUDA-Q™ software platform to create and test applications that seamlessly draw on CPUs and GPUs alongside quantum processors, helping ready the industry for the hybrid quantum-classical supercomputers of the future.

    Partners contributing to NVQLink include quantum hardware builders Alice & Bob, Anyon Computing, Atom Computing, Diraq, Infleqtion, IonQ, IQM Quantum Computers, ORCA Computing, Oxford Quantum Circuits, Pasqal, Quandela, Quantinuum, Quantum Circuits, Inc., Quantum Machines, Quantum Motion, QuEra, Rigetti, SEEQC and Silicon Quantum Computing — as well as quantum control system builders including Keysight Technologies, Quantum Machines, Qblox, QubiC and Zurich Instruments.

    Availability

    Quantum builders and supercomputing centers interested in NVIDIA NVQLink can sign up for access on this webpage.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.
