Groundbreaking SuperPoD Interconnect: Leading a New Paradigm for AI Infrastructure

[Shanghai, China, September 18, 2025] Ladies and gentlemen, good morning. Welcome to Huawei Connect 2025. It’s great to see you here in Shanghai.

The past year has been a memorable one for all of us, especially those of us who work on AI or take a special interest in it. The surprise debut of DeepSeek-R1 back in January gave all of us a taste of AI during the Chinese Spring Festival. Many model training specialists pulled all-nighters, working to adjust their training methods and reproduce DeepSeek’s results. At Huawei, we felt the impact, too. Between DeepSeek-R1’s launch in January and April 30, our teams worked closely together to make sure that the inference capabilities of our Ascend 910B and 910C chips could keep up with customer needs.

Before we begin, I’d like to revisit the five key points that I discussed at last year’s Huawei Connect.

  1. Sustainable computing power is the cornerstone of continuous advancements in AI.
  2. The Chinese mainland will lag behind in semiconductor manufacturing process nodes for a relatively long time.
  3. Sustainable computing power can only be achieved with process nodes that are practically available.
  4. AI has become the predominant source of demand for computing power – and this trend is driving structural changes in computing systems.
  5. Our strategy is to create a new computing architecture, and develop computing SuperPoDs and SuperClusters, to sustainably meet long-term demand for computing power.

Last year, I wanted to elaborate on that last point, but my team didn’t agree. So today I’d like to take this chance to pick up where I left off.

This brings me to the topic of today’s keynote: Groundbreaking SuperPoD Interconnect: Leading a New Paradigm for AI Infrastructure. This theme echoes my fifth point last year, which is about how we’ve been working to create a new computing architecture, and develop both computing SuperPoDs and SuperClusters that can sustainably meet long-term demand for computing power.

Before diving into today’s main topic, I’d like to briefly return to the impact that DeepSeek has had on the industry – and on Huawei in particular. After DeepSeek went open source, our customers began reaching out to us, pointing out all kinds of issues with Ascend, as well as expressing their hopes for the future. We’ve been getting suggestions left and right.

Our team took this feedback to heart, discussed it in depth, and came to a consensus. On August 5, 2025, we held the Ascend Computing Industry Development Summit in Beijing, where I shared the company’s official response. Some of you here today were present at that summit, but others were not.

So I’d like to take this opportunity to share our response with everyone here today. There were four major conclusions:

  1. Our monetization strategy for AI is focused on hardware.
  2. For CANN, we will open interfaces for the compiler and virtual instruction set, and fully open source other software. We will go open source and open access with CANN (based on the existing Ascend 910B/910C design) by December 31, 2025. For future versions, we’ll synchronize the open source and open access plans with each product launch.
  3. For our Mind series application enablement kits and toolchains, we will go fully open source by December 31, 2025.
  4. We will also fully open source our openPangu foundation models.

Now back to today’s theme.

DeepSeek has come up with new ways to train models using significantly less computing power. But artificial general intelligence (AGI) and physical AI will still need a massive amount of computing power. So we believe that computing power is – and will continue to be – key to AI. This is especially true in China.

Chips are the building blocks of computing power. And at Huawei, Ascend chips are the foundation of our AI computing strategy.

We launched the Ascend 310 chip back in 2018, and the Ascend 910 chip in 2019. In 2025, the Ascend 910C chip has become better known in the industry as we’ve scaled up deployment of our Atlas 900 A3 SuperPoD.

Over the past few years, our customers and partners have raised quite a few requirements and hopes for Ascend chips.

As we look ahead, some of you might be wondering about Huawei’s chip roadmap. This is a topic of common interest, if not the topic of greatest interest.

So let me show you what we’ve got in store. And let me assure you this: We’ll keep evolving Ascend chips to strengthen the foundation of AI computing power, both in China and around the world.

Over the next three years, we’ll be working on three new series of Ascend chips: the Ascend 950 series, the Ascend 960 series, and the Ascend 970 series.

The Ascend 950 series includes the Ascend 950PR (optimized for prefill and recommendation) and Ascend 950DT (optimized for decode and training). Of course, we still have other chips in the works, too. Next, let me show you the four Ascend chips that are in our pipeline, with some hitting the market very soon.

At the moment, we’re working on the Ascend 950 series. The Ascend 950PR and Ascend 950DT chips will use the same Ascend 950 Die. Compared to their predecessors, the Ascend 950 chips will be fundamentally stronger in multiple respects.

First, the chips will provide additional support for low-precision data formats, including FP8, MXFP8, and MXFP4. They will be able to deliver 1 PFLOPS in FP8, MXFP8, and HiF8, and 2 PFLOPS in MXFP4. The result is significantly higher training efficiency and inference throughput. In particular, the chips will support Huawei’s proprietary HiF8 data format, which provides precision close to FP16 with efficiency comparable to FP8.
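To make the block-scaled formats concrete, here is a minimal illustrative sketch of 4-bit quantization in the spirit of the OCP MX formats, where MXFP4 groups values into 32-element blocks that share a power-of-two scale. This is a toy under those assumptions, not Huawei’s implementation; the internals of HiF8 and HiF4 are proprietary.

```python
import numpy as np

FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # E2M1 magnitudes

def quantize_mxfp4_block(x: np.ndarray) -> np.ndarray:
    """Quantize one 32-element block to FP4 values sharing a power-of-two scale."""
    assert x.size == 32, "MXFP4 uses 32-element blocks"
    amax = np.abs(x).max()
    if amax == 0:
        return np.zeros_like(x)
    # Shared scale: a power of two that maps the largest magnitude into (3, 6]
    scale = 2.0 ** np.ceil(np.log2(amax / 6.0))
    scaled = x / scale
    # Snap each element to the nearest representable FP4 magnitude, keep the sign
    idx = np.abs(np.abs(scaled)[:, None] - FP4_GRID).argmin(axis=1)
    return np.sign(scaled) * FP4_GRID[idx] * scale

x = np.random.randn(32).astype(np.float32)
print("max abs error:", np.abs(x - quantize_mxfp4_block(x)).max())
```

The shared scale is what lets a tiny 4-bit grid cover a wide dynamic range: each block is rescaled so its largest value lands near the top of the grid before rounding.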

Second, the Ascend 950 chips will offer stronger vector processing. We’ll achieve this in three ways:

  • Allocating more compute for vector processing
  • Adopting an innovative design that combines SIMD and SIMT. SIMD (Single Instruction, Multiple Data) processes regular blocks of vectors in a pipelined fashion, while SIMT (Single Instruction, Multiple Threads) supports flexible processing of more fragmented data.
  • Shrinking the granularity of memory access from 512 bytes to 128 bytes. Finer granularity is essential for discrete, discontinuous memory access, as the sketch after this list illustrates.
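Here is a back-of-the-envelope illustration (ours, not from Huawei’s design documents) of why finer access granularity matters for scattered access: when 4-byte elements are gathered from random addresses, smaller access units waste far less of the fetched data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Gather 4,096 random 4-byte elements from a 1 GiB region
addresses = rng.integers(0, (1 << 30) // 4, size=4096) * 4

for granularity in (512, 128):
    # Each access fetches one aligned unit of `granularity` bytes
    units_touched = np.unique(addresses // granularity).size
    useful = 4096 * 4 / (units_touched * granularity)
    print(f"{granularity:3d}-byte units: {units_touched} fetches, "
          f"{useful:.1%} of fetched bytes actually used")
```

With 512-byte units, under 1% of the fetched bytes are useful; 128-byte units cut the wasted bandwidth roughly fourfold for the same gather.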

Third, the Ascend 950 chips will deliver 2 TB/s interconnect bandwidth, which is 2.5 times higher than the Ascend 910C.

Fourth, different stages of inference have disparate needs for computing power, memory capacity, and memory access bandwidth. The needs of recommendation systems and model training vary too. To address these diverse needs, we will offer two proprietary HBMs for the Ascend 950 chips: HiBL 1.0 and HiZQ 2.0. These HBMs will be separately packaged with the Ascend 950 Die. So we’ll have the Ascend 950PR chip for prefill and recommendation, and the Ascend 950DT chip for decode and training.

Let me show you the details.

 

The first chip is the Ascend 950PR, which is optimized for the prefill stage of inference and for recommendation systems.

As agent-based applications become more and more prevalent, context lengths keep growing, so it takes more and more compute to generate the first token. Recommendation algorithms for e-commerce, content, and social media applications are also raising the bar for accuracy, latency, and compute.

Both the prefill stage of inference and recommendation algorithms are compute-intensive, with higher demand for parallel computing and lower demand for memory access bandwidth. So we’ll offer a layered HBM solution to address these needs. Given that prefill and recommendation algorithms don’t necessarily need huge amounts of memory, our Ascend 950PR chip is designed to support these two scenarios with HiBL 1.0, our proprietary, low-cost HBM. HiBL 1.0 is more cost-effective than the more performant HBM3E and HBM4E, and will help our customers maintain the right level of performance while significantly reducing their investment in the hardware needed for prefill and recommendation systems.
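A simple roofline-style estimate helps explain the split between compute-bound and bandwidth-bound stages. The numbers below are illustrative only, assuming the 1 PFLOPS FP8 per chip quoted earlier and, hypothetically, the 4 TB/s HBM bandwidth quoted later for the 950DT:

```python
# Arithmetic intensity of transformer inference stages (illustrative estimate).
# A dense matmul performs ~2 FLOPs per weight parameter per token processed,
# so reuse of each weight byte scales with the number of tokens in the pass.
def flops_per_weight_byte(tokens_per_pass: int, bytes_per_param: float = 1.0) -> float:
    return 2 * tokens_per_pass / bytes_per_param

print("prefill, 8K-token prompt:", flops_per_weight_byte(8192), "FLOPs/byte")
print("decode, 1 token:         ", flops_per_weight_byte(1), "FLOPs/byte")
# A chip with 1 PFLOPS FP8 and 4 TB/s HBM balances at 1e15 / 4e12 = 250
# FLOPs/byte: prefill sits far above that line (compute-bound), decode far
# below it (bandwidth-bound), which is why the two stages want different HBM.
```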

The Ascend 950PR chip will be available in the first quarter of 2026. It will support two product form factors at first: cards and SuperPoD servers.

 

The next chip is the Ascend 950DT, which is optimized for both the decode stage of inference and for model training.

These two scenarios have high requirements for interconnect bandwidth and memory access bandwidth. That’s where our HiZQ 2.0 HBM comes in, delivering 144 GB of memory and 4 TB/s memory access bandwidth. The chip’s total interconnect bandwidth will reach 2 TB/s.
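To see why decode leans so heavily on memory bandwidth, consider a rough lower bound derived from the HiZQ 2.0 figures above. The model size is a hypothetical example (a mixture-of-experts model activating about 37B parameters per token, roughly the DeepSeek-V3/R1 scale):

```python
# A rough per-token lower bound for decode (illustrative arithmetic, not a
# Huawei figure): each generated token must stream the active weights out of
# HBM at least once.
active_params = 37e9    # hypothetical MoE: ~37B active parameters per token
bytes_per_param = 1     # FP8/HiF8 weights
hbm_bandwidth = 4e12    # the 4 TB/s HiZQ 2.0 figure quoted above

t = active_params * bytes_per_param / hbm_bandwidth
print(f">= {t * 1e3:.1f} ms/token, so <= {1 / t:.0f} tokens/s per sequence")
```

Batching many sequences amortizes the weight reads and raises aggregate throughput, but the bandwidth term stays in the denominator either way.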

The additional data formats supported by this chip will be FP8, MXFP8, MXFP4, and HiF8.

The Ascend 950DT chip will be available in the fourth quarter of 2026.

 

The Ascend 960 is the third chip in our pipeline.

Compared with Ascend 950 chips, the Ascend 960 will have twice the computing power, memory access bandwidth, memory capacity, and number of interconnect ports. It’s designed to significantly boost training and inference performance.

The Ascend 960 will also support Huawei’s proprietary HiF4 data format, which is optimized for 4-bit precision – delivering even greater precision than other FP4 solutions on the market. This chip will bring inference throughput to a new level.

The Ascend 960 chip will be available in the fourth quarter of 2027.

 

The last chip on our immediate roadmap is the Ascend 970, and we’re still working out some of its specs.

But our general goal is to push all of its specs much higher, taking another leap in training and inference performance.

For the time being, the plan is to double its computing power in FP4 and FP8, double its interconnect bandwidth relative to the Ascend 960, and increase its memory access bandwidth by at least 1.5 times.

The plan is to go to market with the Ascend 970 in the fourth quarter of 2028. I’m sure its performance will be worth the wait.

So that’s all for the major specs and roadmap for our Ascend chips. Generally, we will follow a 1-year release cycle and double compute with each release. Throughout this process, we will keep evolving our Ascend chips, making them easier to use, supporting more data formats, and increasing their bandwidth. The goal is to stay on top of ever-growing demand for AI compute.

Compared with the Ascend 910B and Ascend 910C, our newer chips – starting with the Ascend 950 chips – will come with several major changes.

  • We will adopt an innovative design that combines SIMD and SIMT, making the overall development process more user-friendly.
  • The chips will support more data formats, including FP32, HF32, FP16, BF16, FP8, MXFP8, HiF8, MXFP4, and HiF4.
  • They will also have higher interconnect bandwidth: The Ascend 950 series will deliver 2 TB/s, and the Ascend 970 series will jump to 4 TB/s.
  • They will deliver more compute too. The Ascend 950 series will provide 1 PFLOPS in FP8 and 2 PFLOPS in FP4. The Ascend 960 will churn out 2 PFLOPS in FP8 and 4 PFLOPS in FP4. The Ascend 970 will offer 4 PFLOPS in FP8 and 8 PFLOPS in FP4.
  • All chips will come with larger memory capacity and double the memory access bandwidth of their predecessors.

Ascend chips lay the groundwork for us to build computing solutions that meet our customers’ needs.

SuperPoDs have become the main product form factor for large-scale AI infrastructure. They are the new norm.

A SuperPoD is a single logical machine, made up of multiple physical machines that can learn, think, and reason as one.

As computing demand continues to grow, so will SuperPoDs.

In March 2025, Huawei officially launched the Atlas 900 A3 SuperPoD, which packs up to 384 Ascend 910C chips. With all these interconnected chips, the SuperPoD works like a single computer, delivering up to 300 PFLOPS of computing power. To date, the Atlas 900 A3 SuperPoD remains the largest SuperPoD in the world. Maybe you’ve heard about the CloudMatrix384. It’s a cloud service instance that Huawei Cloud has built on top of our Atlas 900 A3 SuperPoDs.

Since its launch, we’ve deployed more than 300 Atlas 900 A3 SuperPoDs, serving over 20 customers in sectors like Internet services, telecoms, and manufacturing. It’s fair to say that this SuperPoD, with its debut in 2025, marks the first milestone in Huawei’s AI SuperPoD journey.

Today, I’d like to show you more SuperPoDs and SuperClusters powered by Ascend chips that are either on the market or still in R&D.

The first is the Atlas 950 SuperPoD, built with our Ascend 950DT chips.

  • This SuperPoD will have up to 8,192 Ascend 950DT chips – more than 20 times the NPUs of our Atlas 900 A3 SuperPoD.
  • In its full configuration, the Atlas 950 SuperPoD will have 160 cabinets, including 128 compute cabinets and 32 communications cabinets, deployed in a 1,000 m² space. All of these cabinets will be linked with all-optical interconnect.
  • It’s a total compute powerhouse, delivering 8 EFLOPS in FP8 and 16 EFLOPS in FP4 – figures that follow directly from the per-chip numbers, as the quick arithmetic after this list shows.
  • Its interconnect bandwidth will be 16 PB/s. This means a single Atlas 950 SuperPoD will have an interconnect bandwidth over 10 times higher than the entire globe’s total peak Internet bandwidth.
  • The Atlas 950 SuperPoD will be available in the fourth quarter of 2026.
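For the curious, here is the sanity arithmetic (illustrative, ours) tying the pod-level specs back to the per-chip Ascend 950DT figures quoted earlier:

```python
# Pod-level specs derived from per-chip figures (illustrative arithmetic).
chips = 8192
print(f"FP8: {chips * 1e15 / 1e18:.2f} EFLOPS")     # ~8.19, quoted as 8
print(f"FP4: {chips * 2e15 / 1e18:.2f} EFLOPS")     # ~16.38, quoted as 16
print(f"NPUs vs Atlas 900 A3: {chips / 384:.1f}x")  # ~21.3x, quoted as 20x
```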

We’re pretty confident that, for the next few years, the Atlas 950 SuperPoD will remain the world’s most powerful SuperPoD. And it will far exceed its counterparts across all major metrics.

NVIDIA plans to launch its NVL144 system in the second half of 2026. Our Atlas 950 SuperPoD will have 56.8 times as many NPUs as that system has GPUs, and will deliver 6.7 times more computing power. Our SuperPoD will also have 15 times more memory capacity, reaching 1,152 TB, and 16.3 PB/s of interconnect bandwidth – 62 times higher than its counterpart. And even against the NVL576 system, which NVIDIA plans to launch in 2027, our Atlas 950 SuperPoD will still be ahead on all fronts.

This SuperPoD will deliver massive boosts in computing power, memory capacity, memory access speeds, and interconnect bandwidth, leading to significantly higher training performance and inference throughput.

Compared to our Atlas 900 A3 SuperPoD, the Atlas 950 SuperPoD will offer a 17-fold improvement in training performance, delivering 4.91 million tokens per second. With support for FP4, its inference performance will be 26.5 times higher, generating 19.6 million tokens per second.

 

The Atlas 950 SuperPoD, with 8,192 NPUs, is not our end goal. We will keep pushing the limits.

Let me introduce our second new SuperPoD product: the Atlas 960 SuperPoD. It will pack up to 15,488 Ascend 960 chips, and comprise 220 cabinets (176 for compute and 44 for communications) deployed in a 2,200 m² space.

This SuperPoD will be available in the fourth quarter of 2027.

 

The Atlas 960 SuperPoD will mark yet another leap for our AI SuperPoDs.

Supercharged with the Ascend 960 chips, this SuperPoD will have twice the computing power, memory capacity, and interconnect bandwidth of the Atlas 950 SuperPoD.

It will deliver 30 EFLOPS in FP8 and 60 EFLOPS in FP4, and come with 4,460 TB of memory and 34 PB/s interconnect bandwidth.

The Atlas 960 SuperPoD will deliver 15.9 million tokens per second during training and 80.5 million tokens per second during inference, which means it will be 3 and 4 times more performant than our Atlas 950 SuperPoD in training and inference, respectively.

With the Atlas 950 SuperPoD and Atlas 960 SuperPoD, we are confident in our ability to provide abundant computing power for rapid AI advancements, both today and tomorrow.

 

SuperPoDs have redefined the paradigm for AI infrastructure. But their impact is not limited to intelligent computing.

They can also create considerable value in general-purpose computing.

In the finance sector, some mission-critical services still run on mainframes and mid-range computers – systems with performance and reliability requirements that ordinary server clusters cannot meet. General-purpose computing SuperPoDs are strong in both performance and reliability.

Tech-wise, SuperPoDs can also inject new life into general-purpose computing.

Our Kunpeng processors will keep evolving to support SuperPoDs, with more cores and higher performance. Built on our proprietary dual-threaded LinxiCore, Kunpeng processors will also support more threads.

In the first quarter of 2026, we will unveil the Kunpeng 950 processor in two models: one with 96 cores and 192 threads, and the other with 192 cores and 384 threads. This processor will support general-purpose computing SuperPoDs, and will offer four layers of security isolation, making it the first Kunpeng datacenter processor with confidential computing capabilities.

We will keep making breakthroughs in Kunpeng processors, including their microarchitecture and advanced packaging technology. In the first quarter of 2028, we plan to introduce two models. There will be a high-performance model with 96 cores and 192 threads, providing a 50%+ improvement in individual core performance and making it a good fit for scenarios like AI hosts and databases. The other will be a high-density model with at least 256 cores and 512 threads – ideal for scenarios like virtualization, containers, big data, and data warehouses.

 

Next, I’d like to introduce our third product for today: the TaiShan 950 SuperPoD. Built on the Kunpeng 950 processor, this SuperPoD will be the world’s first general-purpose computing SuperPoD. It will have up to 16 nodes, 32 processors, and 48 TB of memory, along with memory, SSD, and DPU pooling.

This SuperPoD will significantly improve general-purpose computing performance while offering an ideal solution for the finance sector, which is having a difficult time replacing its legacy mainframes and mid-range computers. The main challenge with legacy setups is support for distributed databases. Integrated with the TaiShan 950 SuperPoD, our GaussDB multi-write architecture requires no application modifications and still delivers a 2.9-fold performance boost. This SuperPoD can help customers in the finance sector seamlessly phase out the traditional databases deployed on their mainframes and mid-range computers. The TaiShan 950 SuperPoD, combined with the distributed GaussDB, can serve as a viable alternative to mainframes, mid-range computers, and even Oracle’s Exadata database servers.

In addition to mission-critical databases, the TaiShan 950 SuperPoD will also deliver a solid performance for other applications. For example, it will increase memory utilization by 20% in virtualized environments. For Spark workloads, it will make real-time data processing 30% faster.

The TaiShan 950 SuperPoD will be available in the first quarter of 2026.

 

The value of SuperPoDs extends beyond intelligent computing and general-purpose computing. They also have the potential to reshape the recommendation systems used in the Internet sector – driving a shift from traditional algorithms to generative recommendation systems. We can build a hybrid SuperPoD that combines TaiShan 950 SuperPoDs and Atlas 950 SuperPoDs, offering a new architecture for generative recommendation systems.

Thanks to its huge bandwidth, ultra-low interconnect latency, and massive memory, a hybrid SuperPoD can create an ultra-large shared memory pool. This pool can support PB-scale embedding tables for recommendation systems, enabling ultra-high-dimensional user features. The hybrid SuperPoD can also provide massive AI computing power, supporting ultra-low-latency inference and feature retrieval.
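As a conceptual sketch of what serving an embedding table out of a pooled memory space could look like, consider the toy below. Every name and shape here is hypothetical – a stand-in for remote pool reads, not a Huawei API – and the table is shrunk from PB scale to megabytes so it runs anywhere:

```python
import numpy as np

# Toy stand-in for a PB-scale table: rows are sharded across pool nodes.
DIM, ROWS_PER_SHARD, SHARDS = 32, 50_000, 4
shards = [np.random.rand(ROWS_PER_SHARD, DIM).astype(np.float32)
          for _ in range(SHARDS)]   # in a real pod these would be remote memory

def lookup(row_ids: np.ndarray) -> np.ndarray:
    """Gather embedding rows; the shard index plays the role of a pool address."""
    out = np.empty((row_ids.size, DIM), dtype=np.float32)
    for i, rid in enumerate(row_ids):
        shard, local = divmod(int(rid), ROWS_PER_SHARD)
        out[i] = shards[shard][local]   # stand-in for a remote, low-latency read
    return out

print(lookup(np.array([3, 199_999, 120_000])).shape)   # (3, 32)
```

The point of a shared pool is that the lookup path stays this simple at full scale: a row ID maps to a global address, and the fabric’s bandwidth and latency decide whether the gather fits an inference-time budget.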

To sum up, hybrid SuperPoDs will offer a new option for generative recommendation systems.

 

Large-scale SuperPoDs are pushing both intelligent and general-purpose computing to new heights, but they also pose major challenges for interconnect technology. For Huawei, though, a global leader in connectivity, no challenge is too big.

We ran into two major challenges as we worked to define and design tech specs for the Atlas 950 SuperPoD and Atlas 960 SuperPoD.

  • The first challenge relates to long-range communications and reliability. A large-scale SuperPoD is made up of many cabinets that sit far apart from one another, and existing copper and optical cabling technologies fall short here. Copper cables provide high bandwidth, but only over short distances – enough to connect two cabinets at most. Optical cables can support long-range connections across multiple cabinets, but they fall short on reliability.
  • The second major challenge has to do with bandwidth and latency. With existing technology, inter-cabinet, inter-NPU bandwidth is only about one-fifth of what a SuperPoD needs. As for latency, the best existing technology can offer between cabinets is about 3 microseconds – still 24% short of what our Atlas 950 and 960 SuperPoDs require. And at 2 to 3 microseconds, we are already pushing up against physical limits, so even a 0.1-microsecond improvement is hard-won, as the sketch after this list shows.
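To see why microsecond-level budgets brush against physics, here is a back-of-the-envelope calculation (illustrative, not Huawei data). Light in optical fiber travels at roughly two-thirds of c, about 5 ns per meter, so cable distance alone consumes a large slice of the latency budget:

```python
# Fiber propagation time vs. the inter-cabinet latency budget.
NS_PER_M = 5        # ~5 ns/m: light in silica fiber at ~2/3 the speed of light
BUDGET_NS = 2100    # the 2.1-microsecond figure quoted below

for distance_m in (50, 100, 200):
    flight = distance_m * NS_PER_M
    print(f"{distance_m:3d} m one way: {flight:4d} ns in flight "
          f"({flight / BUDGET_NS:.0%} of a 2.1 us budget)")
```

At the 200-meter ranges mentioned later, time-of-flight alone is about half the budget, leaving only a microsecond or so for serialization, switching, and protocol processing.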

 

Huawei has honed its connectivity expertise over the past three decades. And by combining this expertise with systems innovations, we’ve managed to overcome these challenges, producing designs that exceed the base requirements for Atlas 950 and 960 SuperPoDs. And in doing so, we can pave the way for SuperPoDs with over 10,000 NPUs.

To ensure long range and high reliability, we have built reliability into every layer of our interconnect protocol, from the physical layer and data link layer, all the way up to the network and transmission layers. There is 100-ns-level fault detection and protection switching on optical paths, making any intermittent disconnections or faults imperceptible at the application layer. That is, applications will still run normally in the event of any faults.

We have also redefined and redesigned optical components, optical modules, and interconnect chips. With these innovations and designs, we’ve made optical interconnect 100 times more reliable, and extended the range of our interconnect to over 200 meters. Our interconnect technology essentially combines copper reliability with optical range.

To ensure high bandwidth and low latency, we’ve developed multi-port aggregation and high-density packaging technologies, a peer-to-peer architecture, and a unified protocol. Together, these deliver TB/s-level bandwidth and 2.1-microsecond latency.

With a series of novel systems innovations, we’ve managed to develop a solid interconnect technology for SuperPoDs – one that provides the high reliability, all-optical interconnect, high bandwidth, and low latency required for large-scale SuperPoDs.

 

Our goal is to make sure that the Atlas 950 SuperPoD and Atlas 960 SuperPoD – which will have several thousand or even more than 10,000 NPUs – will work like a single computer. To meet this goal, we have developed a groundbreaking SuperPoD architecture and a new interconnect protocol for SuperPoDs.

The value proposition of SuperPoD architecture built on this type of interconnect protocol is simple: 10,000+ NPUs working as one machine. In other words, the protocol can connect more than 10,000 NPUs to form a SuperPoD that can work, learn, think, and reason like a single computer.

In terms of the tech itself, we believe that the architecture for a SuperPoD with more than 10,000 NPUs needs to have six key features, namely bus-grade interconnect, peer-to-peer coordination, all-resource pooling, a unified protocol, large-scale networking, and high availability.

This new interconnect protocol for SuperPoDs is called UnifiedBus, or UB for short.

And today we’re officially releasing it.

Today, we are also releasing the technical specifications for UnifiedBus 2.0. You might wonder why we’re starting things out with version 2.0.

Our research on UnifiedBus actually began back in 2019. For reasons everyone here is familiar with, we don’t have access to advanced process nodes, so we’ve decided to focus our efforts on making breakthroughs by combining chips – essentially connecting more computing resources.

We decided on “UnifiedBus” for the English name of the interconnect protocol. We came up with the Chinese name later – “Lingqu” – which in Chinese refers to a massive, well-connected transportation hub[1]. With UnifiedBus, we’re able to interconnect computing resources on a massive scale.

Our Atlas 900 A3 SuperPoD uses UnifiedBus 1.0, and its deliveries began in March 2025. So far, we have deployed more than 300 Atlas 900 A3 SuperPoDs, fully validating the UnifiedBus 1.0 technology in the process.

Building on UnifiedBus 1.0, we have improved the protocol in terms of functionality, performance, and scale. The result is UnifiedBus 2.0, which will lay the groundwork for our Atlas 950 SuperPoD.

Now, we think the time is ripe to release UnifiedBus 2.0 as an open protocol so it can contribute more broadly to interconnect technology and industry development. So today, we are releasing its technical specifications. And we hope that industry partners will adopt this protocol and develop more UnifiedBus-based products and components. Together, we can build an open UnifiedBus ecosystem.
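To give a feel for what “bus-grade interconnect” and “all-resource pooling” mean in practice, here is a purely hypothetical toy. None of these names or message formats come from the UnifiedBus 2.0 specifications – consult the published specs for the real protocol – but they illustrate the core idea: remote resources are accessed with load/store-style semantics rather than send/receive message passing.

```python
from dataclasses import dataclass

@dataclass
class UBRead:             # hypothetical request, not from the UB 2.0 spec
    global_addr: int      # address within a pod-wide pooled memory space
    length: int           # number of bytes to fetch

def remote_load(pool: bytearray, req: UBRead) -> bytes:
    """Toy stand-in: a bus-grade read returns remote bytes as if they were local."""
    return bytes(pool[req.global_addr : req.global_addr + req.length])

pool = bytearray(b"pooled-hbm-contents")   # stands in for another node's memory
print(remote_load(pool, UBRead(global_addr=0, length=6)))   # b'pooled'
```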

 

At last year’s Huawei Connect, I emphasized our goal to sustainably meet long-term computing demand by building SuperPoDs and SuperClusters with the semiconductor manufacturing process nodes that are practically available to the Chinese mainland. And today, I’ve introduced three SuperPoD products that do just that.

UnifiedBus is designed for SuperPoDs. And though it’s an interconnect protocol for SuperPoDs, it’s also a state-of-the-art interconnect technology for computing clusters.

So next, I’ll introduce two cluster products.

The first is our Atlas 950 SuperCluster with over 500,000 NPUs.

The Atlas 950 SuperCluster will be made up of 64 Atlas 950 SuperPoDs. More than 520,000 Ascend 950DT chips, spread across more than 10,000 cabinets, will work together to deliver 524 EFLOPS in FP8. This SuperCluster will go on the market in the fourth quarter of 2026, at the same time as the Atlas 950 SuperPoD.
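The cluster-level figures follow directly from the pod-level figures, as this quick check (illustrative arithmetic) shows:

```python
# Cluster specs derived from SuperPoD specs.
pods, chips_per_pod = 64, 8192
chips = pods * chips_per_pod
print(f"{chips:,} chips")                           # 524,288 -> "more than 520,000"
print(f"{chips * 1e15 / 1e18:.0f} EFLOPS in FP8")   # 524 EFLOPS
```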

The Atlas 950 SuperCluster will support both UBoE (UB over Ethernet) and RoCE (Remote Direct Memory Access over Converged Ethernet) protocols. With UBoE, our UnifiedBus protocol will allow our customers to make use of their existing Ethernet switches.

Compared to a conventional RoCE cluster, a UBoE cluster will have lower static latency and higher reliability, and require fewer switches and optical modules. So we recommend our customers go with UBoE.

That’s the Atlas 950 SuperCluster: it will outstrip even xAI’s Colossus, currently the world’s largest computing cluster, with 2.5 times more NPUs and 1.3 times more computing power. The Atlas 950 SuperCluster will unequivocally be the world’s most powerful computing cluster. From today’s dense and sparse models with over 100 billion parameters, to future models with over 1 trillion or even 10 trillion parameters, the Atlas 950 SuperCluster will be a compute powerhouse for model training, driving efficient and steady innovation in AI.

 

As we launch the Atlas 960 SuperPoD in the fourth quarter of 2027, we will also launch the Atlas 960 SuperCluster. This SuperCluster will integrate more than 1 million NPUs to deliver 2 ZFLOPS in FP8, and 4 ZFLOPS in FP4.

It will also support UBoE and RoCE. UBoE will take the performance and reliability of this SuperCluster to the next level, offering considerable improvements in static latency and mean time between failures (MTBF). Of the two protocols, UBoE is the preferable option for connecting up the SuperCluster.

With the Atlas 960 SuperCluster, we hope to help customers speed up their application innovation and probe new frontiers of intelligence.

 

I’m thrilled to have had the opportunity to share some of the new products in our pipeline. UnifiedBus in particular is a groundbreaking interconnect technology for SuperPoDs, and it will give rise to a new paradigm for AI infrastructure. SuperPoDs and SuperClusters powered by UnifiedBus are our answer to surging demand for computing power, both today and tomorrow.

Moving forward, we hope to work more closely with industry players, and keep pushing advancements in AI to create greater value.

Thank you!

Themed All Intelligence, HUAWEI CONNECT 2025 will delve into AI across three dimensions: strategy, technology, and ecosystems. You can expect an in-depth look at our latest strategic initiatives, and we’ll also be unveiling our all-new digital and intelligent infrastructure products, scenario-specific solutions for industries, and development tools. The event will run from September 18 to 20 at the Shanghai World Expo Exhibition & Convention Center and Shanghai Expo Center. For more information, please visit HUAWEI CONNECT 2025 online at www.huawei.com/en/events/huaweiconnect



[1] Translation note: When describing the origins of the Chinese name “Lingqu”, Xu uses the phrase 九省通衢 (jiǔ shěng tōng qú). This is a phrase first used by Wang Xijue, a prominent statesman during the late Ming Dynasty, back in 1595 A.D. Literally, the phrase means “a thoroughfare connecting nine provinces.” Wang used it to describe the importance of the city of Hanyang as a major hub for transportation and trade.
