Category: 3. Business

  • NVIDIA Makes the World Robotaxi-Ready With Uber Partnership to Support Global Expansion

    Stellantis, Lucid and Mercedes-Benz Join Level 4 Ecosystem Leaders Leveraging the NVIDIA DRIVE AV Platform and DRIVE AGX Hyperion 10 Architecture to Accelerate Autonomous Driving

    News Summary:

    • NVIDIA DRIVE AGX Hyperion 10 is a reference compute and sensor architecture that makes any vehicle level 4-ready, enabling automakers and developers to build safe, scalable, AI-defined fleets.
    • Uber will bring together human riders and robot drivers in a worldwide ride-hailing network powered by DRIVE AGX Hyperion-ready vehicles.
    • Stellantis, Lucid and Mercedes-Benz are collaborating on level 4-ready autonomous vehicles compatible with DRIVE AGX Hyperion 10 for passenger mobility, while Aurora, Volvo Autonomous Solutions and Waabi extend level 4 autonomy to long-haul freight.
    • Uber will begin scaling its global autonomous fleet starting in 2027, targeting 100,000 vehicles and supported by a joint AI data factory built on the NVIDIA Cosmos platform.
    • NVIDIA and Uber continue to support a growing level 4 ecosystem that includes Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve and WeRide.
    • NVIDIA launches the Halos Certified Program, the industry’s first system to evaluate and certify physical AI safety for autonomous vehicles and robotics.

    GTC Washington, D.C.—NVIDIA today announced it is partnering with Uber to scale the world’s largest level 4-ready mobility network, spanning Uber’s next-generation robotaxi and autonomous delivery fleets and built on the new NVIDIA DRIVE AGX Hyperion™ 10 autonomous vehicle (AV) development platform and NVIDIA DRIVE™ AV software purpose-built for level 4 autonomy.

    By enabling faster growth across the level 4 ecosystem, NVIDIA can support Uber in scaling its global autonomous fleet to 100,000 vehicles over time, starting in 2027. These vehicles will be developed in collaboration with NVIDIA and other Uber ecosystem partners, using NVIDIA DRIVE. NVIDIA and Uber are also working together to develop a data factory accelerated by the NVIDIA Cosmos™ world foundation model development platform to curate and process data needed for autonomous vehicle development.

    NVIDIA DRIVE AGX Hyperion 10 is a reference production compute and sensor architecture that makes any vehicle level 4-ready. It enables automakers to build cars, trucks and vans equipped with validated hardware and sensors that can host any compatible autonomous-driving software, providing a unified foundation for safe, scalable and AI-defined mobility.

    Uber is bringing together human drivers and autonomous vehicles into a single operating network — a unified ride-hailing service including both human and robot drivers. This network, powered by NVIDIA DRIVE AGX Hyperion-ready vehicles and the surrounding AI ecosystem, enables Uber to seamlessly bridge today’s human-driven mobility with the autonomous fleets of tomorrow.

    “Robotaxis mark the beginning of a global transformation in mobility — making transportation safer, cleaner and more efficient,” said Jensen Huang, founder and CEO of NVIDIA. “Together with Uber, we’re creating a framework for the entire industry to deploy autonomous fleets at scale, powered by NVIDIA AI infrastructure. What was once science fiction is fast becoming an everyday reality.”

    “NVIDIA is the backbone of the AI era, and is now fully harnessing that innovation to unleash L4 autonomy at enormous scale, while making it easier for NVIDIA-empowered AVs to be deployed on Uber,” said Dara Khosrowshahi, CEO of Uber. “Autonomous mobility will transform our cities for the better, and we’re thrilled to partner with NVIDIA to help make that vision a reality.”

    NVIDIA DRIVE Level 4 Ecosystem Grows

    Leading global automakers, robotaxi companies and tier 1 suppliers are already working with NVIDIA and Uber to launch level 4 fleets with NVIDIA AI behind the wheel.

    Stellantis is developing AV-Ready Platforms, specifically optimized to support level 4 capabilities and meet robotaxi requirements. These platforms will integrate NVIDIA’s full-stack AI technology, further expanding connectivity with Uber’s global mobility ecosystem. Stellantis is also collaborating with Foxconn on hardware and systems integration.

    Lucid is advancing level 4 autonomous capabilities for its next-generation passenger vehicles, also using full-stack NVIDIA AV software on the DRIVE Hyperion platform for its upcoming U.S. models.

    Mercedes-Benz is exploring future collaborations with industry-leading partners, powered by its proprietary operating system MB.OS and DRIVE AGX Hyperion. Building on its legacy of innovation, the new S-Class offers an exceptional chauffeured level 4 experience combining luxury, safety and cutting-edge autonomy.

    NVIDIA and Uber will continue to support and accelerate shared partners across the worldwide level 4 ecosystem developing their software stacks on the NVIDIA DRIVE level 4 platform, including Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve and WeRide.

    In trucking, Aurora, Volvo Autonomous Solutions and Waabi are developing level 4 autonomous trucks powered by the NVIDIA DRIVE platform. Their next-generation systems, built on NVIDIA DRIVE AGX Thor, will accelerate Volvo’s upcoming L4 fleet, extending the reach of end-to-end NVIDIA AI infrastructure from passenger mobility to long-haul freight.

    NVIDIA DRIVE AGX Hyperion 10: The Common Platform for L4-Ready Vehicles

    The NVIDIA DRIVE AGX Hyperion 10 production platform features the NVIDIA DRIVE AGX Thor system-on-a-chip; the safety-certified NVIDIA DriveOS™ operating system; a fully qualified multimodal sensor suite including 14 high-definition cameras, nine radars, one lidar and 12 ultrasonics; and a qualified board design.

    DRIVE AGX Hyperion 10 is modular and customizable, allowing manufacturers and AV developers to tailor it to their unique requirements. By offering a prequalified sensor suite architecture, the platform also accelerates development, lowers costs and gives customers a running start with access to NVIDIA’s rigorous development expertise and investments in automotive engineering and safety.

    At the core of DRIVE AGX Hyperion 10 are two performance-packed DRIVE AGX Thor in-vehicle computers based on the NVIDIA Blackwell architecture. Each delivers more than 2,000 FP4 teraflops (1,000 INT8 TOPS) of real-time compute; DRIVE AGX Thor fuses diverse, 360-degree sensor inputs and is optimized for transformer, vision language action (VLA) and generative AI workloads, enabling safe level 4 autonomous driving backed by industry-leading safety certifications and cybersecurity standards.

    In addition, DRIVE AGX’s scalability and compatibility with existing AV software lets companies seamlessly integrate and deploy future upgrades from the platform across robotaxi and autonomous mobility fleets via over-the-air updates.

    Generative AI and Foundation Models Transform Autonomy

    NVIDIA’s autonomous driving approach taps into foundation AI models, large language models and generative AI, trained on trillions of real and synthetic driving miles. These advanced models allow self-driving systems to solve highly complex urban driving situations with humanlike reasoning and adaptability.

    New reasoning VLA models combine visual understanding, natural language reasoning and action generation to enable human-level understanding in AVs. By running reasoning VLA models in the vehicle, the AV can interpret nuanced and unpredictable real-world conditions — such as sudden changes in traffic flow, unstructured intersections and unpredictable human behavior — in real time. AV toolchain leader Foretellix is collaborating with NVIDIA to integrate its Foretify Physical AI toolchain with NVIDIA DRIVE for testing and validating these models.

    To enable the industry to develop and evaluate these large models for autonomous driving, NVIDIA is also releasing the world’s largest multimodal AV dataset. Comprising 1,700 hours of real-world camera, radar and lidar data across 25 countries, the dataset is designed to bolster development, post-training and validation of foundation models for autonomous driving.

    NVIDIA Halos Sets New Standards in Vehicle Safety and Certification

    The NVIDIA Halos system delivers state-of-the-art safety guardrails from cloud to car, establishing a holistic framework to enable safe, scalable autonomous mobility.

    The NVIDIA Halos AI Systems Inspection Lab, dedicated to AI safety and cybersecurity across automotive and robotics, performs independent evaluations and oversees the new Halos Certified Program, helping ensure products and systems meet rigorous criteria for trusted physical AI deployments.

    Companies such as AUMOVIO, Bosch, Nuro and Wayve are among the inaugural members of the NVIDIA Halos AI Systems Inspection Lab — the industry’s first to be accredited by the ANSI National Accreditation Board. The lab aims to accelerate the safe, large-scale deployment of level 4 automated driving and other AI-powered systems.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.

    Continue Reading

  • Palantir and NVIDIA Team Up to Operationalize AI — Turning Enterprise Data Into Dynamic Decision Intelligence

    News Summary

    • Palantir is integrating NVIDIA accelerated computing, NVIDIA CUDA-X libraries and open-source NVIDIA Nemotron models into its Ontology framework at the core of the Palantir AI Platform.
    • Lowe’s is pioneering operational AI for its supply chain logistics with Palantir and NVIDIA.


    GTC Washington, D.C.—NVIDIA today announced a collaboration with Palantir Technologies Inc. to build a first-of-its-kind integrated technology stack for operational AI — including analytics capabilities, reference workflows, automation features and customizable, specialized AI agents — to accelerate and optimize complex enterprise and government systems.

    Palantir Ontology, at the core of the Palantir AI Platform (AIP), will integrate NVIDIA GPU-accelerated data processing and route optimization libraries, open models and accelerated computing. This combination of Ontology and NVIDIA AI will support customers by providing the advanced, context-aware reasoning necessary for operational AI.

    Enterprises using the customizable technology stack will be able to tap into their data to power domain-specific automations and AI agents for the sophisticated operating environments of retailers, healthcare providers, financial services and the public sector.

    “Palantir and NVIDIA share a vision: to put AI into action, turning enterprise data into decision intelligence,” said Jensen Huang, founder and CEO of NVIDIA. “By combining Palantir’s powerful AI-driven platform with NVIDIA CUDA-X accelerated computing and Nemotron open AI models, we’re creating a next-generation engine to fuel AI-specialized applications and agents that run the world’s most complex industrial and operational pipelines.”

    “Palantir is focused on deploying AI that delivers immediate, asymmetric value to our customers,” said Alex Karp, cofounder and CEO of Palantir Technologies. “We are proud to partner with NVIDIA to fuse our AI-driven decision intelligence systems with the world’s most advanced AI infrastructure.”

    Lowe’s Pioneers AI-Driven Logistics With Palantir and NVIDIA

    Lowe’s, among the first to tap this integrated technology stack from Palantir and NVIDIA, is creating a digital replica of its global supply chain network to enable dynamic and continuous AI optimization. This technology can support supply chain agility while boosting cost savings and customer satisfaction.

    “Modern supply chains are incredibly complex, dynamic systems, and AI will be critical to helping Lowe’s adapt and optimize quickly amid constantly changing conditions,” said Seemantini Godbole, executive vice president and chief digital and information officer at Lowe’s. “Even small shifts in demand can create ripple effects across the global network. By combining Palantir technologies with NVIDIA AI, Lowe’s is reimagining retail logistics, enabling us to serve customers better every day.”

    Advancing Operational Intelligence

    Palantir AIP workloads run in the most complex compliance domains and require the highest standards of privacy and data security. The Ontology at the heart of AIP creates a digital replica of an organization by organizing complex data and logic into interconnected virtual objects, links and actions that represent real-world concepts and their relationships.

    Together, this provides enterprises with an intelligent, AI-enabled operating system that drives efficiency through business process automation.

    To advance enterprise intelligence for the era of AI, NVIDIA data processing, AI software, open models and accelerated computing are now natively integrated with and available through Ontology and AIP. Customers can use NVIDIA CUDA-X™ data science libraries for data processing, paired with NVIDIA accelerated computing, via Ontology to drive real-time, AI-driven decision-making for complex, business-critical workflows.

    The NVIDIA AI Enterprise platform, including NVIDIA cuOpt™ decision optimization software, will enable enterprises to use AI for dynamic supply-chain management.

    NVIDIA Nemotron™ reasoning and NVIDIA NeMo Retriever™ open models will enable enterprises to rapidly build AI agents informed by Ontology.

    NVIDIA and Palantir are also working to bring the NVIDIA Blackwell architecture to Palantir AIP. This will accelerate the end-to-end AI pipeline, from data processing and analytics to model development and fine-tuning to production AI, using long-thinking, reasoning agents. Enterprises will be able to run AIP in NVIDIA AI factories for optimized acceleration.

    Palantir AIP will also be supported in the new NVIDIA AI Factory for Government reference design, announced separately today.

    At NVIDIA GTC Washington, D.C., attendees can register to join Palantir and NVIDIA for a hands-on workshop on operationalizing AI, taking place Wednesday, Oct. 29, from 3:15–5:00 p.m. ET.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.

    Continue Reading

  • Hyundai Motor Group Executive Chair Euisun Chung Meets HRH the Crown Prince, Reviews New Plant Construction and Group Growth Strategy

    • Executive Chair Chung discusses multifaceted cooperation with HRH the Crown Prince
    • Explains collaboration initiatives and plans as a strategic partner in realizing Saudi Arabia’s future vision for mobility and other areas
    • Exchanges views on Saudi Vision 2030 and expresses expectation for expanded cooperation in future energy sectors
    • Executive Chair Chung reviews Hyundai Motor’s local plant construction site and Hyundai Motor Group’s mid- to long-term strategy
    • Hyundai Motor Group expands cooperation with major Saudi institutions in mobility, smart cities, and other sectors


    SEOUL/RIYADH, October 28, 2025 – Hyundai Motor Group Executive Chair Euisun Chung visited Saudi Arabia, the Middle East’s largest economy and a nation undergoing major industrial transformation. Chung met with His Royal Highness Prince Mohammed bin Salman bin Abdulaziz Al Saud, Crown Prince and Prime Minister of Saudi Arabia, to review the Group’s local growth strategy and explore future business opportunities.

    During his visit, Executive Chair Chung held discussions with HRH the Crown Prince covering a wide range of topics including the automotive industry and smart cities. This marked the first one-on-one meeting between the two leaders, though they had previously met twice, including during the Crown Prince’s 2022 visit to Korea.

    “Hyundai Motor Group deeply understands the meaning and importance of Saudi Vision 2030,” said Executive Chair Chung. “Based on our competitive business capabilities, we are participating in Saudi Arabia’s giga projects and look forward to expanding collaboration in future energy sectors including renewable energy, hydrogen, small modular reactors (SMRs), and nuclear energy.”

    Strategic Partnership for Vision 2030

    Saudi Arabia is pursuing “Vision 2030,” a national development project aimed at diversifying its economy from energy-focused industries toward manufacturing and hydrogen energy. The kingdom is hosting international events including the World Expo and FIFA World Cup, positioning itself as one of the world’s most prominent emerging economies.

    As the Middle East’s largest automotive market, Saudi Arabia is actively attracting global automakers, including Hyundai Motor Company, with the long-term goal of becoming an automotive hub serving not only the Middle East but also North Africa.

    Executive Chair Chung expressed gratitude for the Saudi government’s continued interest and support, outlining Hyundai Motor Group’s ongoing collaborative projects and future plans as a partner in realizing Saudi Arabia’s vision for mobility and other sectors.

    Regarding the new manufacturing facility, Chung stated, “Hyundai Motor is building a locally tailored factory with specialized equipment to meet Saudi Arabia’s industrial demands and customer needs. We will also consider expanding production capacity based on future market conditions.”

    New Production Hub Takes Shape

    Prior to meeting HRH the Crown Prince, Executive Chair Chung visited Hyundai Motor Manufacturing Middle East (HMMME) on October 26 at the King Salman Automotive Cluster to review construction progress. Together with José Muñoz, President and CEO of Hyundai Motor Company, Executive Chair Chung engaged in a business update session with Hyundai Motor and Kia local leaders and held in-depth discussions with employees about growth strategies.

    “Establishing a production base in Saudi Arabia represents Hyundai Motor’s new opportunity in the Middle East,” Chung told employees working in extreme heat. “We must thoroughly prepare in every aspect to deliver mobility that exceeds customer expectations on time, in an environment different from our previous bases—characterized by high temperatures and desert conditions.”

    José Muñoz, President and CEO of Hyundai Motor Company said: “Our new Saudi Arabia production facility demonstrates Hyundai’s long-term commitment to the Middle East’s largest automotive market. This plant plays a strategic role in our global mid-term plan while supporting Vision 2030. We’re combining Hyundai’s manufacturing excellence with Saudi Arabia’s talented workforce to deliver mobility solutions across automotive, smart cities, hydrogen energy, and future mobility.”

    HMMME, the first Hyundai Motor production facility in the Middle East, is a cornerstone for establishing Hyundai as a leading brand in Saudi Arabia. The joint venture is 30 percent owned by Hyundai Motor and 70 percent by Saudi Arabia’s Public Investment Fund. Construction began in May 2025, with operations targeted to commence in the fourth quarter of 2026. The facility will have an annual production capacity of 50,000 units, manufacturing both electric vehicles and internal combustion engine vehicles.

    The plant combines Hyundai Motor’s innovative manufacturing technology with Saudi Arabia’s talented workforce and infrastructure, positioning it to play a crucial role in the growth and development of Saudi Arabia’s mobility ecosystem.

    Hyundai Motor plans to operate HMMME as a high-quality vehicle production hub by implementing multi-model production facilities to address diverse customer needs, applying simple and robust design structures for easy maintenance, and incorporating cooling and dust-proof measures to handle high temperatures and sand.

    Market Growth and Expansion Plans

    Hyundai Motor and Kia continue to grow in Saudi Arabia, selling 149,604 units through September 2025, an 8.5 percent increase year over year, with plans to reach approximately 210,000 units by year-end, up 5.9 percent from 2024.

    Leveraging enhanced brand appeal and stable supply through the Saudi production base, Hyundai Motor aims to become the leading automotive company in Saudi Arabia through strategies including Saudi-exclusive special editions, an expanded SUV lineup based on customer preferences, and launches of diverse eco-friendly vehicles including EVs, EREVs (extended-range electric vehicles), and HEVs.

    Kia plans to develop the recently launched Tasman pick-up truck as its flagship model while expanding EV and HEV supply. The brand is also focusing on capturing the PBV market in connection with Saudi Arabia’s smart city projects.

    Expanding Collaboration Across Multiple Sectors

    Hyundai Motor Group is expanding partnerships with key Saudi institutions and companies across mobility, smart cities, and other sectors.

    In September 2024, Hyundai Motor signed an agreement with NEOM for “introducing eco-friendly future mobility,” successfully demonstrating the Universe FCEV (Fuel Cell Electric Vehicle) bus last May on routes connecting NEOM’s central business district with the high-altitude Trojena region at 2,080 meters above sea level. The Group plans to continue collaboration as NEOM’s key partner in future mobility.

    Last month, Kia launched a PV5 pilot project with Red Sea Global (RSG), one of Saudi Arabia’s giga project developers, following up on Hyundai Motor Group’s March 2024 MOU with RSG. Kia will provide PV5 passenger models and technical training support, contributing to eco-friendly mobility adoption and ecosystem development while delivering customized mobility solutions optimized for RSG’s tourism industry.

    Hyundai Motor Group is also partnering with the MISK Foundation, a non-profit organization established by Crown Prince Mohammed bin Salman in 2011, to foster local youth talent and explore smart city collaboration opportunities.

    ###

    About Hyundai Motor Group
    Hyundai Motor Group is a global enterprise that has created a value chain based on mobility, steel, and construction, as well as logistics, finance, IT, and service. With about 250,000 employees worldwide, the Group’s mobility brands include Hyundai, Kia, and Genesis. Armed with creative thinking, cooperative communication, and the will to take on any challenges, we strive to create a better future for all.

    More information about Hyundai Motor Group can be found at: http://www.hyundaimotorgroup.com or Newsroom: Media Hub by Hyundai, Kia Global Media Center (kianewscenter.com), Genesis Newsroom


    Contact:
    Jiwon Moon
    Global PR Strategy & Planning Team, Hyundai Motor Group
    m00n@hyundai.com

    Continue Reading

  • Healthcare giant Medline reveals US IPO filing – Reuters


    Continue Reading

  • Newly trained navigation and verbal memory skills in humans elicit changes in task-related networks but not brain structure

    Improvements in the learning rate of the Verbal Memory Transfer task, but not the Navigation Transfer task, were found to correlate with bilateral (left and right) hippocampal volume but not with anterior or posterior hippocampal volume (using volumes averaged across pre- and post-test).

    Consistent findings were obtained when analyzing hemispheric hippocampal volumes (see Appendix 3—figure 3). Specifically, a significant positive correlation was observed between the learning rate in the Verbal Memory group and both left hippocampal volume (r(23) = 0.568, p-FDR=0.014) and right hippocampal volume (r(23) = 0.601, p-FDR=0.007), while no significant correlations were found for the Navigation group (left: r(25) = 0.038, p-FDR=0.858; right: r(25) = 0.008, p-FDR=0.970) or the Video Control group (left: r(19) = –0.123, p-FDR=0.858; right: r(19) = –0.163, p=0.758), after controlling for sex and site as covariates. Fisher’s z-tests revealed that the positive correlation in the Verbal Memory group was significantly stronger than that in the Navigation group (left hippocampus: Z=1.942, p-FDR=0.039; right hippocampus: Z=2.326, p-FDR=0.015) and the Video Control group (left hippocampus: Z=2.234, p-FDR=0.038; right hippocampus: Z=2.702, p-FDR=0.010), even after controlling for sex and site, while no significant difference was observed between the Navigation group and the Video Control group (left hippocampus: Z=–0.439, p=0.669; right hippocampus: Z=–0.553, p-FDR=0.710).
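    The Fisher’s z-tests used here compare Pearson correlations across independent groups by mapping each r through the variance-stabilizing transform z = atanh(r). A minimal sketch follows; the input values are illustrative only and do not attempt to reproduce the reported statistics, which additionally control for covariates and apply FDR correction:

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Compare two independent Pearson correlations via Fisher's r-to-z.

    r1, r2: correlation coefficients from two independent groups.
    n1, n2: corresponding sample sizes.
    Returns (z, one-tailed p-value for the hypothesis r1 > r2).
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher transform of each r
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 0.5 * math.erfc(z / math.sqrt(2))        # P(Z > z) under the null
    return z, p

# Illustrative comparison: a strong vs. a near-zero correlation
z, p = fisher_z_compare(0.568, 25, 0.038, 27)
```

The statistic grows with both the gap between the transformed correlations and the group sizes, which is why similar r differences can yield different Z values across comparisons.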

    An analysis was conducted to assess the correlation between anterior and posterior hippocampal volumes and changes in learning rate on the verbal memory transfer task. However, the improvement in learning rate demonstrated no significant correlation with either anterior or posterior hippocampal volume in any of the three groups (ps >0.232, Appendix 2—table 4). This lack of correlation remained consistent after accounting for the influence of sex and site as covariates (ps >0.114).

    We also correlated hippocampal volume and MTL subregions with the change in the average number of words recalled, the number of trials to criterion, and the slope from linear regression (see Appendix 1) between the post-test and pre-test for the verbal memory transfer task. The analysis revealed no significant correlations between hippocampal volume or MTL subregions and either the average number of words recalled or the number of trials to criterion (Appendix 2—tables 6 and 7), regardless of whether sex and site were included as covariates. However, when correlating slope with hippocampal volume, a positive correlation was found for the Verbal Memory group (total hippocampal volume: r=0.552, p=0.006; left hippocampus: r=0.502, p=0.015; right hippocampus: r=0.561, p=0.005; Appendix 2—table 8), whereas no such correlation was found for the Navigation group (total hippocampus: r=0.094, p=0.656; left hippocampus: r=0.114, p=0.586; right hippocampus: r=0.073, p=0.727) or the Video Control group (total hippocampus: r=–0.239, p=0.325; left hippocampus: r=–0.217, p=0.372; right hippocampus: r=–0.238, p=0.326), even after controlling for sex and site as covariates.

    We also correlated hippocampal volume with the change in learning rate between the post-test and pre-test for the navigation transfer task. No significant correlations were found between total hippocampal volume and changes in learning rate in any of the three conditions (Navigation: r(25) = –0.27, p=0.181; Verbal Memory: r(24) = –0.31, p=0.129; Video Control: r(17) = 0.124, p=0.612), regardless of whether sex and site were included as covariates. Similarly, no significant correlations were observed for either left hippocampal volume (Navigation: r(25) = –0.25, p=0.214; Verbal Memory: r(24) = –0.29, p=0.160; Video Control: r(17) = 0.25, p=0.310) or right hippocampal volume (Navigation: r(25) = –0.27, p=0.167; Verbal Memory: r(24) = –0.302, p=0.142; Video Control: r(17) = –0.018, p=0.940), regardless of whether sex and site were included as covariates. We also did not find any significant correlations when examining hippocampal subfields or surrounding MTL subregions (p’s>0.20, Appendix 2—table 5).

    The relationship between total hippocampal volume, MTL subregion volumes, and changes in performance on the navigation transfer task was assessed through correlational analyses, focusing on path error, overall pointing error, between-environment pointing error, within-environment pointing error, and map accuracy. These analyses demonstrated a lack of significant associations between hippocampal volume and MTL subregions and any of the behavioral metrics under consideration (Appendix 2—table 9, Appendix 2—tables 10–13), regardless of the inclusion of sex and site as covariates.

    Improvements in the learning rate of the Verbal Memory Transfer task, but not the Navigation Transfer task, were found to correlate with both total hippocampal volume and the volume of the CA2/3/DG subfield (using volume data only from the pre-test). We examined the hypothesis that baseline hippocampal volume might be associated with either verbal or navigation performance. For this analysis, we utilized the hippocampal volume (or subfield volume) obtained for each participant at pre-test, rather than averaging volumes across pre and post-test. We then assessed the relationship between hippocampal volumes at pre-test and the change in learning rate from pre-test to post-test for both the Verbal Memory transfer task and the Navigation Transfer task.

    We found a marginal correlation in the verbal memory training group between the pre-test total hippocampal volume and the observed improvement in verbal memory performance from pre- to post-test (r(23) = 0.352, p=0.084). Further analysis accounting for sex and site as covariates revealed a significant positive correlation between total hippocampal volume and the learning rate in the Verbal Memory condition (r(23) = 0.545, p=0.007). This effect was specific to the verbal memory training; no significant correlation was identified for either the Navigation condition (r(25) = 0.072, p=0.721) or the Video Control condition (r(19) = –0.209, p=0.362); the same result was observed when controlling for covariates (Navigation condition: r(25) = 0.082, p=0.695; Video Control condition: r(19) = –0.144, p=0.555). Consistent findings were obtained when analyzing hemispheric hippocampal volumes. Specifically, a significant positive correlation was observed between the learning rate in the Verbal Memory condition and both left hippocampal volume (r(23) = 0.504, p=0.014) and right hippocampal volume (r(23) = 0.545, p=0.007), while no significant correlations were found for the Navigation condition (left: r(25) = 0.122, p=0.561; right: r(25) = 0.045, p=0.832) or the Video Control condition (left: r(19) = –0.084, p=0.734; right: r(19) = –0.187, p=0.444), after controlling for sex and site as covariates.
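    Controlling for sex and site as covariates, as in the analyses above, is equivalent to correlating the residuals of each variable after regressing out the covariates (a partial correlation). A minimal sketch with simulated, hypothetical data (the variable names and values are illustrative, not from the study):

```python
import numpy as np

def partial_corr(x, y, covars):
    """Pearson correlation between x and y after regressing out covariates.

    x, y: 1-D arrays of length n; covars: (n, k) design matrix,
    e.g. dummy-coded sex and site.
    """
    X = np.column_stack([np.ones(len(x)), covars])     # add an intercept
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]  # residualize x
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]  # residualize y
    return float(np.corrcoef(rx, ry)[0, 1])

# Simulated example: a shared covariate inflates the raw correlation
rng = np.random.default_rng(0)
site = rng.normal(size=200)                     # hypothetical covariate
x = site + 0.3 * rng.normal(size=200)
y = site + 0.3 * rng.normal(size=200)
raw = float(np.corrcoef(x, y)[0, 1])            # high: driven by the covariate
adj = partial_corr(x, y, site.reshape(-1, 1))   # near zero after adjustment
```

This illustrates why a correlation can change noticeably once covariates are accounted for, as seen in the pre-test analysis above.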

    We further examined the correlation between CA23DG volume and learning rate change. The CA23DG subfield showed a positive correlation with the change in learning rate from pre to post-test in the verbal memory transfer task of the Verbal Memory condition (r(23) = 0.504, p=0.01), suggesting that individuals in the Verbal Memory condition with larger CA23DG volumes exhibited greater improvements in memory performance from pre- to post-test. This correlation persisted even after controlling for sex and site as covariates (r(23) = 0.439, p=0.036). No significant correlations were observed for the CA23DG subfield in the Navigation (r(25) = 0.007, p=0.972) or Video Control conditions (r(19) = –0.05, p=0.828), regardless of whether sex and site were included as covariates.

    Fisher’s z-tests revealed that the positive correlation in the Verbal Memory condition was significantly stronger than that in the Navigation even after controlling for sex and site (total hippocampus: Z=1.793, p=0.037; left hippocampus: Z=1.464, p=0.072; right hippocampus: Z=1.919, p=0.027; CA23DG: Z=1.687, p=0.046) and Video conditions (total hippocampus: Z=2.382, p=0.009; left hippocampus: Z=2.009, p=0.022; right hippocampus: Z=2.518, p=0.006; CA23DG: Z=1.766, p=0.038), while no significant difference was observed between the Navigation and Video Control conditions (total hippocampus: Z=–0.731, p=0.768; left hippocampus: Z=–0.662, p=0.746; right hippocampus: Z=–0.750, p=0.773; CA23DG: Z=–0.214, p=0.585).
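    The p-FDR values reported throughout reflect false discovery rate control over families of tests. Assuming a Benjamini–Hochberg step-up procedure (the specific correction algorithm is not detailed in this excerpt), adjusted p-values can be sketched as:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adj = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotone adjusted values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adj[i] = prev
    return adj
```

A raw p-value that is significant on its own can become non-significant after adjustment when many tests are performed, which is why the text distinguishes p from p-FDR.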

    Improvements in the learning rate of the Verbal Memory transfer task, but not the Navigation transfer task, also correlated with both total hippocampal volume and the volume of the CA2/3/DG subfield when using volume data from the post-test only. Here we examined the hypothesis that post-test hippocampal volume may be associated with either verbal or navigation performance. For this analysis, we utilized the hippocampal volume (or subfield volume) obtained for each participant at post-test, rather than averaging volumes across pre-test and post-test, and assessed the relationship between post-test hippocampal volumes and the change in learning rate from pre-test to post-test for both the Verbal Memory transfer task and the Navigation transfer task.

    We found a marginal correlation in the verbal memory training group between the post-test total hippocampal volume and the observed improvement in verbal memory performance from pre- to post-test (r(23) = 0.360, p=0.078). Further analysis accounting for sex and site as covariates revealed a significant positive correlation between total hippocampal volume and the learning rate in the Verbal Memory condition (r(23) = 0.623, p=0.001). This effect was specific to the verbal memory training; no significant correlation was identified for either the Navigation condition (r(25) = –0.009, p=0.965) or the Video Control condition (r(19) = –0.178, p=0.439); the same was true when controlling for covariates (Navigation condition: r(25) = 0.000, p=0.999; Video Control condition: r(18) = –0.083, p=0.736). Consistent findings were obtained when analyzing hippocampal volumes by hemisphere. Specifically, a significant positive correlation was observed between the learning rate in the Verbal Memory condition and both left hippocampal volume (r(23) = 0.597, p=0.003) and right hippocampal volume (r(23) = 0.594, p=0.003), while no significant correlations were found for the Navigation (left: r(25) = 0.026, p=0.901; right: r(24) = –0.023, p=0.912) or Video Control conditions (left: r(19) = –0.036, p=0.883; right: r(18) = –0.109, p=0.656), after controlling for sex and site as covariates.

    We further examined the correlation between CA23DG volume and learning rate change. The CA23DG subfield showed a positive correlation with the change in learning rate from pre- to post-test in the verbal memory transfer task of the Verbal Memory condition (r(23) = 0.545, p=0.005), suggesting that individuals in the Verbal Memory condition with larger CA23DG volumes exhibited greater improvement in memory performance from pre- to post-test. This correlation persisted even after controlling for sex and site as covariates (r(23) = 0.537, p=0.008). No significant correlations were observed for the CA23DG subfield in the Navigation (r(25) = –0.027, p=0.894) or Video Control conditions (r(19) = –0.110, p=0.634), regardless of whether sex and site were included as covariates.

    Fisher’s z-tests revealed that the positive correlation in the Verbal Memory condition was significantly stronger than that in the Navigation condition, even after controlling for sex and site (total hippocampus: Z=2.473, p=0.007; left hippocampus: Z=2.243, p=0.012; right hippocampus: Z=2.398, p=0.008; CA23DG: Z=2.209, p=0.014), and stronger than that in the Video Control condition (total hippocampus: Z=2.558, p=0.005; left hippocampus: Z=2.280, p=0.011; right hippocampus: Z=2.498, p=0.006; CA23DG: Z=2.337, p=0.014), while no significant difference was observed between the Navigation and Video Control conditions (total hippocampus: Z=–0.266, p=0.605; left hippocampus: Z=–0.2, p=0.841; right hippocampus: Z=–0.276, p=0.609; CA23DG: Z=–0.291, p=0.615).


  • NVIDIA and Partners Build America’s AI Infrastructure and Create Blueprint to Power the Next Industrial Revolution


    US Government Labs and Nation’s Leading Companies Investing in Advanced AI Infrastructure to Power AI Factories and Accelerate US AI Development

    News Summary:

    • Seven new systems across Argonne and Los Alamos National Laboratories to be deployed, accelerating the Department of Energy’s mission of driving technological leadership across U.S. security, science and energy applications.
    • NVIDIA AI Factory Research Center in Virginia to host the first Vera Rubin infrastructure and lay the groundwork for NVIDIA Omniverse DSX, a blueprint for multi‑generation, gigawatt‑scale build‑outs using NVIDIA Omniverse libraries.
    • Leading U.S. companies across server makers, cloud service providers, model builders, technology suppliers and enterprises are investing in advanced AI infrastructure.

    GTC Washington, D.C. — NVIDIA today announced that it is working with the U.S. Department of Energy’s national labs and the nation’s leading companies to build America’s AI infrastructure to support scientific discovery, drive economic growth and power the next industrial revolution.

    “We are at the dawn of the AI industrial revolution that will define the future of every industry and nation,” said Jensen Huang, founder and CEO of NVIDIA. “It is imperative that America lead the race to the future — this is our generation’s Apollo moment. The next wave of inventions, discoveries and progress will be determined by our nation’s ability to scale AI infrastructure. Together with our partners, we are building the most advanced AI infrastructure ever created, ensuring that America has the foundation for a prosperous future, and that the world’s AI runs on American innovation, openness and collaboration, for the benefit of all.”

    NVIDIA AI Advances Scientific Research at National Labs

    NVIDIA is accelerating seven new systems by providing the AI infrastructure to drive scientific research and innovation at two U.S. Department of Energy (DOE) facilities — Argonne National Laboratory and Los Alamos National Laboratory (LANL).

    NVIDIA is collaborating with Oracle and the DOE to build the U.S. Department of Energy’s largest AI supercomputer for scientific discovery. The Solstice system will feature a record-breaking 100,000 NVIDIA Blackwell GPUs and support the DOE’s mission of developing AI capabilities to drive technological leadership across U.S. security, science and energy applications.

    Another system, Equinox, will include 10,000 NVIDIA Blackwell GPUs and is expected to be available in 2026. Both systems will be located at Argonne, interconnected by NVIDIA networking, and will deliver a combined 2,200 exaflops of AI performance.

    Argonne is also unveiling three powerful NVIDIA-based systems — Tara, Minerva and Janus — set to expand access to AI-driven computing for researchers across the country. Together, these systems will enable scientists and engineers to revolutionize scientific discovery and boost productivity.

    “Argonne’s collaboration with NVIDIA and Oracle represents a pivotal step in advancing the nation’s AI and computing infrastructure,” said Paul K. Kearns, director of Argonne National Laboratory. “Through this partnership, we’re building platforms that redefine performance, scalability and scientific potential. Together, we are shaping the foundation for the next generation of computing that will power discovery for decades to come.”

    LANL, based in New Mexico, announced the selection of the NVIDIA Vera Rubin platform and the NVIDIA Quantum‑X800 InfiniBand networking fabric for its next-generation Mission and Vision systems, to be built and delivered by HPE. The Vision system builds on the achievements of LANL’s Venado supercomputer, built for unclassified research. Mission is the fifth Advanced Technology System (ATS5) in the National Nuclear Security Administration’s Advanced Simulation and Computing program, which LANL supports; it is designed to run classified applications and is expected to be operational in late 2027.

    The Vera Rubin platform will deliver advanced accelerated computing capabilities for these systems, enabling researchers to process and analyze vast datasets at unprecedented speed and scale. Paired with the Quantum‑X800 InfiniBand fabric, which delivers high network bandwidth with ultralow latency, the platform enables scientists to run complex simulations to advance areas spanning materials science, climate modeling and quantum computing research.

    “Our integration of the NVIDIA Vera Rubin platform and Quantum X800 InfiniBand fabric represents a transformative advancement of our lab — harnessing this level of computational performance is essential to tackling some of the most complex scientific and national security challenges,” said Thom Mason, director of Los Alamos National Laboratory. “Our work with NVIDIA helps us remain at the forefront of innovation, driving discoveries to strengthen the resilience of our critical infrastructure.”

    NVIDIA AI Factory Research Center and Gigascale AI Factory Blueprint

    NVIDIA also announced the build-out of an AI Factory Research Center at Digital Realty in Virginia. This facility, powered by the NVIDIA Vera Rubin platform, will accelerate breakthroughs in generative AI, scientific computing and advanced manufacturing and serve as a foundation for pioneering research in digital twins and large‑scale simulation.

    The center lays the groundwork for NVIDIA Omniverse DSX — a blueprint for multi‑generation, gigawatt‑scale build‑outs using NVIDIA Omniverse™ libraries — that will set a new standard of excellence for AI infrastructure. By integrating virtual and physical systems, NVIDIA is creating a scalable model for building intelligent facilities that continuously optimize for performance, energy efficiency and sustainability.

    With this new center, NVIDIA and its partners are collaborating to develop Omniverse DSX, which will integrate autonomous control systems and modular infrastructure to power the next generation of AI factories. NVIDIA is collaborating with companies to enable the gigawatt-scale rollout of hyperscale AI infrastructure:

    • Engineering and construction partners Bechtel and Jacobs are working with NVIDIA to integrate advanced digital twins into validated designs across complex architectural, power, mechanical and electrical systems.
    • Power, cooling and energy equipment partners including Eaton, GE Vernova, Hitachi, Mitsubishi Electric, Schneider Electric, Siemens, Siemens Energy, Tesla, Trane and Vertiv are contributing to the center. Power and system modeling enable AI factories to dynamically interact with utility networks at gigawatt scale. Liquid-cooling, rectification and power-conversion systems optimized for NVIDIA Grace Blackwell and Vera Rubin platforms are also modeled in the earlier NVIDIA Omniverse Blueprint for AI factory digital twins.
    • Software and agentic AI solutions providers including Cadence, Emerald AI, Phaidra, PTC, Schneider Electric ETAP, Siemens and Switch have built digital twin solutions to model and optimize AI factory lifecycles, from design to operation. AI agents continuously optimize power, cooling and workloads, turning the NVIDIA Omniverse DSX blueprint for AI factory digital twins into a self-learning system that boosts grid flexibility, resilience and energy efficiency.


    Building the Next Wave of US Infrastructure

    Leading U.S. companies across server makers, cloud service providers, model builders, technology suppliers and enterprises are investing in advanced AI infrastructure to power AI factories and accelerate U.S. AI development.

    System makers Cisco, Dell Technologies, HPE and Supermicro are collaborating with NVIDIA to build secure, scalable AI infrastructure by integrating NVIDIA GPUs and AI software into their full-stack systems. This includes the newly announced NVIDIA AI Factory for Government reference design, which will accelerate AI deployments for the public sector and highly regulated industries.

    In addition, Cisco is launching the new Nexus N9100 switch series powered by NVIDIA Spectrum-X™ Ethernet switch silicon. The switches’ integration with the existing Cisco Nexus management framework will allow customers to seamlessly deploy and manage the new high-speed NVIDIA-powered fabrics using the same trusted tools and operational models they already rely on.

    Cisco will now offer an NVIDIA Cloud Partner-compliant AI factory with the Cisco Cloud reference architecture based on this switch. The N9100 Series switches will be orderable before the end of the year.

    Leading Cloud Providers and Model Builders Accelerate AI

    Cloud providers and model builders are continuing to invest in AI infrastructure to create a diverse ecosystem for AI innovation, ensuring the U.S. remains at the forefront of AI advancements and their practical applications across industries globally.

    The following companies are expanding their commitments to further bolster U.S.-based AI innovation:

    • Akamai is launching Akamai Inference Cloud, a distributed platform that expands AI inference from core data centers to the edge — targeting 20 initial locations across the globe, including five U.S. states, and plans for further expansion — accelerated by NVIDIA RTX PRO™ Servers.
    • CoreWeave is establishing CoreWeave Federal, a new business focused on providing secure, compliant, high-performance AI cloud infrastructure and services to the U.S. government running on NVIDIA GPUs and validated designs. The initiative includes anticipated FedRAMP and related agency authorizations of the CoreWeave platform.
    • Global AI, a new NVIDIA Cloud Partner, has placed its first major order, for 128 NVIDIA GB300 NVL72 racks (featuring 9,000+ GPUs), which will be the largest GB300 NVL72 deployment in New York.
    • Google Cloud is offering new A4X Max VMs with NVIDIA GB300 NVL72 and G4 VMs with NVIDIA RTX PRO 6000 Blackwell GPUs, as well as bringing the NVIDIA Blackwell platform on premises and in air-gapped environments with Google Distributed Cloud.
    • Lambda is building a new 100+ megawatt AI factory in Kansas City, Missouri. The supercomputer will initially feature more than 10,000 NVIDIA GB300 NVL72 GPUs to accelerate AI breakthroughs from U.S.-based researchers, enterprises and developers.
    • Microsoft is using NVIDIA RTX PRO 6000 Blackwell GPUs on Microsoft Azure, and has recently announced the deployment of a large-scale Azure cluster using NVIDIA GB300 NVL72 for OpenAI. In addition, Microsoft is adding Azure Local support for NVIDIA RTX™ GPUs in the coming months.
    • Oracle recently launched Oracle Cloud Infrastructure Zettascale10, the industry’s largest AI supercomputer in the cloud, powered by NVIDIA AI infrastructure.
    • Together AI, in partnership with 5C, already operates an AI factory in Maryland featuring NVIDIA B200 GPUs and is bringing a new one online soon in Memphis, Tennessee, featuring NVIDIA GB200 and GB300 systems. Both locations are set for near-term expansion, and new locations will be coming up in 2026 to accelerate the development and scaling of AI-native applications.
    • xAI is working on its massive Colossus 2 data center in Memphis, Tennessee, which will house over half a million NVIDIA GPUs — enabling rapid, frontier-level training and inference of next-generation AI models.


    US Enterprises Build AI Infrastructure for Industries

    Beyond cloud providers and model builders, U.S. organizations are looking to build and offer AI infrastructure for themselves and others that will accelerate workloads across a variety of industries, such as pharmaceuticals and healthcare.

    Lilly is building the pharmaceutical industry’s most powerful AI factory with an NVIDIA DGX SuperPOD™ with NVIDIA DGX™ B300 systems, featuring NVIDIA Spectrum-X Ethernet and NVIDIA Mission Control™ software, which will allow the company to develop and train large-scale biomedical foundation models that aim to accelerate drug discovery and design. This builds on Lilly’s use of NVIDIA RTX PRO Servers to power drug discovery and research by accelerating enterprise AI workloads.

    Mayo Clinic — with access to 20 million digitized pathology slides and one of the world’s largest patient databases — has created an AI factory powered by DGX SuperPOD with DGX B200 systems and NVIDIA Mission Control. This delivers the AI computational power needed to advance healthcare applications such as medical research, digital pathology and personalized care for better patient outcomes.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.


  • NVIDIA Introduces NVQLink — Connecting Quantum and GPU Computing for 17 Quantum Builders and Nine Scientific Labs


    News Summary: 

    • NVIDIA NVQLink high-speed interconnect lets quantum processors connect to world-leading supercomputing labs including Brookhaven National Laboratory, Fermi Laboratory, Lawrence Berkeley National Laboratory (Berkeley Lab), Los Alamos National Laboratory, MIT Lincoln Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories.
    • NVQLink provides quantum researchers with a powerful system for the control algorithms needed for large-scale quantum computing and quantum error correction.
    • NVQLink allows researchers to build hybrid quantum-classical systems, accelerating next-generation applications in chemistry and materials science.

    GTC Washington, D.C. — NVIDIA today announced NVIDIA NVQLink™, an open system architecture for tightly coupling the extreme performance of GPU computing with quantum processors to build accelerated quantum supercomputers.

    Researchers from leading supercomputing centers at national laboratories including Brookhaven National Laboratory, Fermi Laboratory, Lawrence Berkeley National Laboratory (Berkeley Lab), Los Alamos National Laboratory, MIT Lincoln Laboratory, the Department of Energy’s Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories guided the development of NVQLink, helping accelerate next-generation work on quantum computing. NVQLink provides an open approach to quantum integration, supporting 17 QPU builders, five controller builders and nine U.S. national labs.

    Qubits — the units of information enabling quantum computers to process information in ways ordinary computers cannot — are delicate and error-prone, requiring complex calibration, quantum error correction and other control algorithms to operate correctly.

    These algorithms must run over an extremely demanding low-latency, high-throughput connection to a conventional supercomputer to keep on top of qubit errors and enable impactful quantum applications. NVQLink provides that interconnect, enabling the environment needed for future, transformative applications across industries.

    “In the near future, every NVIDIA GPU scientific supercomputer will be hybrid, tightly coupled with quantum processors to expand what is possible with computing,” said Jensen Huang, founder and CEO of NVIDIA. “NVQLink is the Rosetta Stone connecting quantum and classical supercomputers — uniting them into a single, coherent system that marks the onset of the quantum-GPU computing era.”

    U.S. national laboratories, led by the Department of Energy, will use NVIDIA NVQLink to make new breakthroughs in quantum computing.

    “Maintaining America’s leadership in high-performance computing requires us to build the bridge to the next era of computing: accelerated quantum supercomputing,” said U.S. Secretary of Energy Chris Wright. “The deep collaboration between our national laboratories, startups and industry partners like NVIDIA is central to this mission — and NVIDIA NVQLink provides the critical technology to unite world-class GPU supercomputers with emerging quantum processors, creating the powerful systems we need to solve the grand scientific challenges of our time.”

    NVQLink connects the many approaches to quantum processors and control hardware systems directly to AI supercomputing — providing a unified, turnkey solution for overcoming the key integration challenges that quantum researchers face in scaling their hardware.

    With contributions from supercomputing centers, quantum hardware builders and quantum control system providers, NVQLink sets the foundation for uncovering the breakthroughs in control, calibration, quantum error correction and hybrid application development needed to run useful quantum applications.

    Researchers and developers can access NVQLink through its integration with the NVIDIA CUDA-Q™ software platform to create and test applications that seamlessly draw on CPUs and GPUs alongside quantum processors, helping ready the industry for the hybrid quantum-classical supercomputers of the future.

    Partners contributing to NVQLink include quantum hardware builders Alice & Bob, Anyon Computing, Atom Computing, Diraq, Infleqtion, IonQ, IQM Quantum Computers, ORCA Computing, Oxford Quantum Circuits, Pasqal, Quandela, Quantinuum, Quantum Circuits, Inc., Quantum Machines, Quantum Motion, QuEra, Rigetti, SEEQC and Silicon Quantum Computing — as well as quantum control system builders including Keysight Technologies, Quantum Machines, Qblox, QubiC and Zurich Instruments.

    Availability

    Quantum builders and supercomputing centers interested in NVIDIA NVQLink can sign up for access on this webpage.

    Learn more about how NVIDIA and partners are advancing AI innovation in the U.S. by watching the NVIDIA GTC Washington, D.C., keynote by Huang.


  • Cushman & Wakefield Launches New Quantitative Insights Group to Advance Investor and Occupier Advisory Capabilities | US


    NEW YORK, October 28, 2025 – Cushman & Wakefield (NYSE: CWK), a leading global real estate services firm, has announced a new Quantitative Insights Group, designed to advise institutional investor and occupier clients. Using advanced mathematics, statistics and our latest AI+ tools, the group will advise clients on capital allocation, investment decision-making and risk management at a portfolio and asset level.  

    The Quantitative Insights Group enhances Cushman & Wakefield’s existing platform of services throughout the asset lifecycle and strengthens its advisory capabilities, solidifying the firm’s expertise and deepening its ability to advise clients through complex market conditions. 

    Rebecca Rockey has been appointed Head of Quantitative Insights and Principal Economist, effective immediately. She will report to Toby Dodd, Chief Revenue Officer, Americas and lead a growing team of experts, including David Hoebbel, Greg Nelson and others.

    “I’m pleased to lead the Quantitative Insights Group and advise our institutional investor and occupier clients as they navigate complex decisions with clarity and confidence,” said Rebecca Rockey, Head of Quantitative Insights and Principal Economist. “By integrating data and our latest AI+ tools, we’re building a powerful, forward-looking advisory engine that complements our already integrated service offerings.”

    “This new team represents a strategic investment in our capabilities, connecting data platforms and insights to drive value for our clients,” said Toby Dodd, Chief Revenue Officer, Americas. “Rebecca’s leadership and economic expertise will be instrumental in shaping this effort and delivering impactful results.”

    “The launch of this group is part of the ongoing evolution of our research, analysis and advisory capabilities,” said Brad Kreiger, Americas Co-Chief Executive. “By combining advanced analytics with our advisors’ deep market expertise, we will equip our clients with sharper tools to make data-driven decisions, optimize portfolios and navigate uncertainty with confidence.”

    Under Rockey’s leadership, the Quantitative Insights Group will complement the firm’s existing client advisory strategies and further Cushman & Wakefield’s commitment to innovation, analytics and client portfolio performance. Rockey is a recognized industry thought leader on macroeconomics and real estate and has been an instrumental leader in Cushman & Wakefield’s research agenda, including development of innovative studies like the recent Reimagining Cities, which confronted post-pandemic urban challenges, offering a blueprint for optimizing city real estate portfolios.


  • US partners with Westinghouse, Cameco and Brookfield on $80B nuclear deployment


    Westinghouse Electric, Cameco and Brookfield Asset Management have entered into a strategic partnership with the U.S. government to deploy $80 billion in new nuclear reactors, the companies announced Tuesday.

    “Our administration is focused on ensuring the rapid development, deployment, and use of advanced nuclear technologies. This historic partnership supports our national security objectives and enhances our critical infrastructure,” Secretary of Commerce Howard Lutnick said in a statement.

    The deal calls for the deployment of Westinghouse AP1000 reactors, which the companies said will create more than 100,000 construction jobs. “The program will cement the United States as one of the world’s nuclear energy powerhouses and increase exports of Westinghouse’s nuclear power generation technology globally.”

    According to the announcement, the partnership contains “profit sharing mechanisms” that allow all parties, “including the American people,” to participate in the “long-term financial and strategic value that will be created within Westinghouse by the growth of nuclear energy and advancement of investment into AI capabilities in the United States.”

    Brookfield has more than half a trillion dollars invested in the critical infrastructure that underpins the U.S. economy, “and we expect to double that investment in the next decade as we deliver on building the infrastructure backbone of artificial intelligence,” Brookfield President Connor Teskey said in a statement.

    The U.S. is trying to rapidly bring power resources online to meet rising demand from data centers, and interest in nuclear is growing. Following a decade of stagnant growth, U.S. electricity demand will increase at a 2.5% compound annual growth rate through 2035, according to Bank of America Institute research published in July.
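    As a back-of-the-envelope check (assuming a ten-year horizon through 2035, which is an assumption, not a figure from the report), a 2.5% compound annual growth rate compounds to roughly 28% cumulative growth:

```python
# 2.5% compound annual growth over a hypothetical ten-year horizon
cumulative = 1.025 ** 10 - 1
print(f"{cumulative * 100:.1f}% cumulative demand growth")  # ≈ 28.0%
```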

    “For all of the energy policy disagreements in Washington, one thing is clear: nuclear energy is the baseload electrical power source of the future,” said Thomas Ryan, a managing partner in K&L Gates’ energy, infrastructure and resources practice. “Its reliability, durability and sustainability have rendered its deployment an apolitical issue.”


  • APM Terminals and Conductix-Wampfler sign third collaboration agreement


    At a special ceremony on 28 October 2025, Conductix-Wampfler and APM Terminals signed a framework purchase agreement for rubber tire gantry crane (RTG) electrification products and terminal operations services.


    The framework agreement was signed by François Bernès, Chief Executive Officer at Conductix-Wampfler and Grant Morrison, Global Head of Asset Category Management at APM Terminals. This was conducted at Conductix-Wampfler’s regional head office in Shanghai and marks the third collaboration agreement between the two companies.

    Over the past 14 years, Conductix-Wampfler has supplied APM Terminals with 413 energy supply systems for eRTGs (new and retrofitted) and 116 battery systems for its operations. The deployment of eRTGs with grid power supply or battery systems is a core initiative driving port decarbonisation. eRTGs eliminate carbon emissions at the source during yard operations, while the battery systems enable either full zero-emission operations (FE-RTG) or provide an optimal balance of reduced emissions and operational flexibility (Hybrid-RTG).

    François Bernès, Chief Executive Officer at Conductix-Wampfler, said, “This renewed partnership with APM Terminals provides Conductix-Wampfler with a strategic anchor and practical platform for our long-term vision. As a strategic partner, we will continue to fully support APM Terminals in achieving its energy-saving, emissions-reduction, and carbon-neutral goals, deeply engaging in the port operator’s decarbonisation journey and contributing our expertise to drive sustainable transformation in the industry.”

    As a leading global terminal operator, APM Terminals is working towards achieving net-zero greenhouse gas emissions by 2040. Shifting from fossil-fuelled equipment in its ports to battery-electric container handling equipment is the main lever for reducing its scope 1 GHG emissions. For its scope 2 emissions, the ambition is to transition to 100% renewable energy by 2030.

    APM Terminals has implemented large-scale “diesel-to-electric” conversions at multiple terminals worldwide, replacing diesel-powered yard cranes with electric-powered ones to reduce emissions at the source. It also actively deploys and utilises fully electric and hybrid port handling equipment and invests in renewable energy sources such as solar power to supply electricity to its terminals.

    Grant Morrison, Global Head of Asset Category Management, APM Terminals, commented, “This framework purchase agreement signifies our continued partnership with Conductix-Wampfler, which will contribute to delivering on our ambitious emissions reduction targets. We look forward to exploring more projects with Conductix-Wampfler as we work towards a more decarbonised future.”
