Chip designers and gaming companies are scrambling to figure out whether the gaming market will tilt toward the cloud, the edge, or some combination of both. Multi-gigabit internet allows more people to play high-end games in the cloud, but edge-based gaming consoles and devices remain well entrenched, and they offer better security and privacy.
Which one wins? So far, there are more questions than answers. Handheld devices and phones offer basic games online, and they are the most popular way people access games globally today. Edge-based consoles and dedicated devices, meanwhile, provide much more realistic action and better graphics. But there is overlap between these two worlds, as well, blurring what previously were rigid dividing lines.
“Gaming has historically worked on edge,” said Tyrran Ferguson, director of product and strategic partnerships at Imagination Technologies. “That’s how the graphics are being rendered on the device, in your hand, or in your computer, and the compute is happening there too. Cloud gaming is slowly removing it from the edge and moving it to the cloud. Maybe we see a future where the local GPUs are less powerful, running AI workloads locally or compute workloads, and the graphics are all on the cloud or whatever it may be. But the way it has worked up until recently is only edge gaming if you want to call it that — there’s no real term for it.”
One key difference is that there is more collaboration in cloud gaming. “This means that the gaming software isn’t just working with one individual,” said Sathishkumar Balasubramanian, head of products for IC verification and EDA AI at Siemens EDA. “You can think about a gaming ecosystem covering millions of users pinging it, and making sure that they all interact with each other, so the problem changes. The edge becomes more of a client interface, where you have a front end and some edge processing — your tactile inputs. But most of the processing gets done in the cloud, because the entire gaming ecosystem resides in the cloud.”
Yet even though more game processing is happening in the cloud, more intelligence is moving to the edge. “For both these use cases, the chips, OSes, and loads are different,” said Balasubramanian. “On the gaming side you need to provide graphics, because it’s got a display. That means the chip needs to be smart enough, and the driver should be smart enough and powerful enough to drive that resolution. It needs to be more responsive because you’re doing real-time with what you press. It needs to have enough data. Some of the simple things in the gaming system need to be done on the edge, so the processor load is different. Based on that, the chip complexity and performance that’s needed is changing.”
Above all, there needs to be stability and correct functionality, as well as high performance, especially for the graphics processing. “We need to run fundamental functional verification. When I turn this switch, does the light go on? But the key for gaming is, how long does that take?” said Matthew Graham, senior group director, verification software product management at Cadence. “People don’t want to buy the latest gaming GPU and find their favorite game doesn’t run on it, or it doesn’t run on the right OS, whether it is MacOS, Windows, Android, iOS, or Linux that some consoles run on. The important thing is the combination of the hardware and the software. We can’t yet run the games that they want to run pre-silicon, but we can certainly run the game workloads, analyze full HD, 4K frames, and how they’re processed by the various tools and so on.”
Because of the need for real-time processing, the edge will always have a role to play. “If you put everything in the cloud, you lose time,” said Balasubramanian. “You want to make sure things are very fast, more redundant, instantaneous in terms of making decisions. With intelligence coming into these edge devices where you’ve got to make decisions, there’s a lot more processing needed at the edge. I’m not talking about an LLM. I’m talking about a domain-specific language model.”
AI and language models are a key challenge. “Edge computing pushes the limits of latency, bandwidth, and power efficiency — especially as AI workloads move closer to the user,” said Steven Woo, fellow and distinguished inventor at Rambus. “The challenge is delivering real-time responsiveness while managing thermal and energy constraints in compact, distributed environments that may also be battery-powered. AI models need to be smaller than their data center counterparts, but must maintain good accuracy and efficiency to be effective.”
In the gaming and AI space, the fast pace of change makes chip design even harder. “We’re building hardware today that’s going to be in devices in three to five years,” said Anand Patel, senior director of product management for GPUs in the client line of business at Arm. “On the gaming side, it’s our relentless push for delivering more and more performance year on year. People talk about Moore’s Law and the gains diminishing, but we’re not seeing that play out in reality. There’s a continued push on how much performance and efficiency we can get out of the chip, and double-digit gains every year. Advanced nodes are absolutely a tool. There are things we’re having to do at the system level, as well, beyond the GPU. The CPU has to deliver overall performance, then stitch all this together in the board for the memory system (typically LPDDR), memory bandwidth, and new types of memory technology — with storage, since these models can be quite large, combined with the fact that we have to store them in flash — all of this needs to move along together to ensure that we can keep up with those games.”
Gamers’ pain is latency
In the gaming world, latency remains the biggest concern. “What is fascinating is that cloud models are getting so powerful that the latency is acceptable,” said Dave Garrett, vice president of technology and innovation at Synaptics. “I always think it’s going to be a split compute model, with an explosion of very compact data on the edge, where we’ll render the hardest stuff on the edge, see what’s the minimum that you need to communicate between these two nodes, and that’s going to give you the latency and the speed. Lag is the thing that gamers hate the most. If we go to the heart of it, what’s the customer’s experience? They don’t care that you’re using AI. They care if the game is rendering. Is it fast? Is it glitching? That’s why the split will be permanently putting you halfway between the two domains.”
One common application for split compute is massive multiplayer online games, where the server coordinates and aggregates the different gamers while the graphics are still rendered locally. “Technology is always a chicken-and-egg,” said Garrett. “Let’s say I get to the point where the cloud can render the whole game. Somebody’s going to go and invent a game that needs more or does more. It’s a weapons race, so you’re probably permanently in this edge/cloud split because someone’s going to find a more interesting game in the future that requires a massive next level of compute.”
Multiplayer games are essentially a physics simulation. “The console manipulates that physical model by putting the users’ inputs into it, and then manipulates the user by firing back whatever is going on in the network around them as they’re busy losing their game,” said Mike Borza, principal security technologist and scientist at Synopsys. “All of that data, all of the storage for it, exists as part of a massive server infrastructure. A lot of times, there are physics simulations for aspects of the game. Other things get rendered differently, and then the model of the world the player is interacting with gets downloaded to their console or their gaming platform. That’s how they get to see their view of the world, which is a space and time subset of the overall game. They get the piece that’s geographically near them in the timeframe in which they’re playing this game.”
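The "space and time subset" Borza describes is commonly implemented as area-of-interest filtering on the server. A minimal sketch of the idea (the names and the flat 2D world are illustrative assumptions, not any real game server's code):

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    eid: int
    x: float
    y: float

def area_of_interest(player: Entity, world: list[Entity], radius: float) -> list[Entity]:
    """Return only the entities near the player -- the slice of the
    server's authoritative world state that gets sent to this client."""
    def dist(a: Entity, b: Entity) -> float:
        return math.hypot(a.x - b.x, a.y - b.y)
    return [e for e in world if e.eid != player.eid and dist(player, e) <= radius]

# The server runs this per tick, per connected player, so each console
# renders only its local view while the full simulation stays server-side.
player = Entity(1, 0.0, 0.0)
world = [player, Entity(2, 3.0, 4.0), Entity(3, 500.0, 500.0)]
print([e.eid for e in area_of_interest(player, world, radius=100.0)])  # -> [2]
```

Real engines replace the linear scan with spatial partitioning (grids, octrees) so the per-tick cost scales with local density rather than total player count.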
More games on more devices
Weighing the benefits of cloud versus edge is more than idle speculation for chipmakers and providers of gaming technology. The global games market revenue is projected to reach $522.46 billion in 2025 and grow at a 7% CAGR to $733.22 billion by 2030. Today, there are an estimated 2.2 billion users.[1] In fact, gaming is so big that when Netflix went looking for its next competitor for eyeballs on screens, it wasn't other TV and movie streaming services.
“It was the gamers,” said Imagination’s Ferguson. “So, what did they do? They bought a bunch of studios, and now you’ve got Netflix games. They spent who knows how much money getting that up and running, because they know that gaming is such a massive industry.”
The general trend is a consolidation of game engines, such as Unity and Epic's Unreal Engine, across desktop PC, console, and mobile.
“Developers don’t want to target different devices with different types of games. We’re seeing more desktop-like content running on mobile,” said Arm’s Patel. “Both gaming companies and mobile companies need to solve the problem of how to make desktop games work on mobile. We need to engage developers and the engine vendors to make it easy for them to move this stuff. You can’t just take a game running on desktop and run it on mobile. There are bits you need to change or adapt for it to be mobile or battery-powered device-friendly.”
While advancements in GPU architecture are expected to enhance performance across all platforms, consoles and PCs will consistently outperform handheld devices in terms of raw capability due to fewer power and area limitations, said Amol Borkar, director of product management and marketing for Tensilica DSPs in the Silicon Solutions Group at Cadence. “Despite this, the latest generation of handheld gaming devices presents a distinctive combination of portability and robust performance, closely resembling the console experience. For instance, the ROG Xbox Ally enables users to play their preferred games on the move while seamlessly integrating with consoles on the same network using an optimized screen mirroring capability. This handheld system utilizes mobile-optimized processors, effective thermal management, and dynamic resolution scaling to deliver smooth gameplay without requiring extensive cooling solutions.”
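The dynamic resolution scaling Borkar mentions is, at its core, a feedback loop: when a frame misses its time budget, render the next one at a lower internal resolution; when there is headroom, scale back up. A minimal sketch (the thresholds and step size are assumptions for illustration, not any vendor's actual policy):

```python
def next_render_scale(scale: float, frame_ms: float,
                      target_ms: float = 16.7,   # ~60 fps budget (assumed)
                      step: float = 0.05,
                      lo: float = 0.5, hi: float = 1.0) -> float:
    """Adjust the internal render-resolution scale from the last frame time."""
    if frame_ms > target_ms * 1.05:      # over budget: drop resolution
        scale -= step
    elif frame_ms < target_ms * 0.85:    # comfortable headroom: raise it
        scale += step
    return max(lo, min(hi, scale))       # clamp to a sane range

# A heavy frame pushes the scale down; a light frame lets it recover.
s = next_render_scale(1.0, frame_ms=25.0)   # over budget -> 0.95
s = next_render_scale(s, frame_ms=10.0)     # headroom -> back to 1.0
```

Shipping implementations typically smooth the frame-time signal and add hysteresis so the resolution doesn't oscillate visibly, but the control structure is the same.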
Battery-powered mobile gaming is gaining ground, but many gamers still prefer a plug. “Gaming, in many cases, is still wired,” said Michal Siwinski, chief marketing officer at Arteris. “My boys were using mobile devices, but the moment they could afford a plugged-in device with a full NVIDIA GPU rack, they did, because the performance is so much better. Gaming is getting so advanced, and the graphics are getting so sophisticated and so awesome, that you have to plug it in. It’s not a data center. It is an edge device. But it’s a wired edge versus wireless edge.”
Wired versus wireless gaming
Wired gaming devices have different challenges compared to wireless. “In a wired edge application, it’s not about the battery, but it’s about energy,” said Siwinski. “The problem there is you want to have the highest bandwidth, highest performance, the most compute you can, to get the best graphics. That means you’re probably going to go to the most advanced nodes to get the best density, such as TSMC 2nm or Intel 18A. The problem in the advanced nodes is that the wires are so small, and as you’re computing all of this massive machine learning stuff, you have thousands or millions of wires conveying one signal. You have billions of connection points, all of them very tiny. It comes out to the wires and becomes heat, and all of a sudden, you have a thermal problem. Networks on chips help reduce wiring. If you can have fewer wires and be much more efficient with how you connect all of these elements to have these wired devices be efficient, that’s huge.”
Meanwhile, the suppliers of cloud gaming continue to grow rapidly, with the cloud providers becoming system houses. “They want to control the whole stack,” said Graham. “They want to build their own silicon in the places where they see it as a unique advantage.”
A flexible SoC platform could potentially be used for multiple product segments, such as mobile, PC, gaming, and AR/VR wearables, said Gervais Fong, senior director of product marketing for mobile, automotive, and consumer interfaces at Synopsys. “It’s amortizing the very high cost of doing these designs across multiple product lines.”
Fig. 1: A portable gaming device SoC with higher-performance ARC multi-core processors and DesignWare Audio Subsystem. Source: Synopsys
The GPUs in a gaming system can be fully integrated into an SoC or on a separate piece of silicon. “We have companies that will do a specific, fully integrated SoC,” said Kristof Beets, vice president of product management at Imagination Technologies. “That can be an Arm CPU or RISC-V CPU. They put a GPU next to it, along with everything else they need, and it goes into a set-top box or a mobile phone. Increasingly, we’re also seeing more use in the NVIDIA-style desktop market or laptops. There, the GPU could be a completely separate piece of silicon, a separate graphics chip, so it’s graphics and memory controllers and some interfaces. Or it could be a chiplet, allowing a lower cost to scale up performance. You use one chiplet, two, three, four, and scale up the performance from there. These architectures are making their way into cloud systems, as well, with mixed success.”
Additionally, system-level verification is needed to measure that protocols such as PCI Express, HDMI, and DisplayPort on customers’ devices are functionally correct, as well as measure the latency and throughput of the systems that are employing those protocols. “You need the correct latency, the correct throughput, with an appropriate total bandwidth, so you know that the environment or the use model that the customer is going to require enables that end-to-end,” explained Cadence’s Graham. “This applies to console manufacturers, graphics/GPU manufacturers, graphics card manufacturers, and others.”
Upheaval expected in gaming console market
The market most at risk of a shift in demand is the big home console, such as the PlayStation 5 or Xbox.
“Do I really want to pay $600 or $700 and put that device in my house?” asked Beets. “Or would I prefer to pay a subscription for $25 with a tiny box, while Microsoft or Sony pay for the big servers in the cloud, providing you’ve got a fast enough connection? That’s the space where we see the most happening. Still, if you’re truly on the move, then mobile gaming doesn’t tend to work, even though mobile phones continue to get more powerful. Devices like the Nintendo Switch and other dedicated handheld consoles are a growing market, and you see a lot of PC-like devices, like the Steam Deck, that are essentially portable computers, but very focused on the gaming experience.”
Another device that could play a bigger role in gaming is the TV operator set-top box. If it takes off, it’s an easy job to add more compute and AI accelerators to the boxes. “The AI is already doing language models, translation, and other things,” said Synaptics’ Garrett. “When I switch to gaming mode, I reallocate the AI engine so the games can be more powerful without increasing the cost.”
Within gaming controllers, microcontroller chips play a vital role along with capacitive and touch sensors. “You open up a wider set of developers that can access MCUs, because they’re not requiring a special level of knowledge or expertise, for example, to have someone on your team who can lay out a super high-speed DRAM,” said Steve Tateosian, senior vice president of IoT, consumer, and industrial MCUs at Infineon.
Fig. 2: A game controller featuring a programmable SoC. Source: Infineon
Gaming and XR wearables also feature MCUs. For example, Meta recently developed a surface electromyography (sEMG) wristband with an MCU, IMU, ADC, battery, and Bluetooth antenna, allowing people to control computers with hand gestures.
Cloud democratizes gaming, but security needed
For people who cannot afford high-end gaming consoles and controllers, there is great appeal in a future where standards such as Wi-Fi 7 and 6G enable even AAA games to be played on mobile devices via the cloud.
“If you’re playing games in the cloud, then you don’t need local processing power in your device,” said Adam Hepburn, founder and CEO of Elo, a gaming peripherals company. “That comes with other issues, like you don’t own anything. You don’t have any cold storage if a company goes bankrupt or if they decide to sell your information. But in many countries, people don’t have enough resources to buy a PlayStation. They can buy a phone with a screen and a Wi-Fi signal and play the same games we do. Like in the movie ‘Ready Player One,’ users were charged 25 cents to play the ‘best game ever.’ That’s what it is going to be in the future. Gaming will be financially accessible for the entire world, which removes the limitations between the rich and the poor for this type of media.”
To solve issues around network speeds and latency, Hepburn suggested the world needs an orbiting satellite mesh to deliver internet everywhere. “Once we get to that point, then cloud gaming is available.”
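As a rough sanity check on that idea, the physics of a low-Earth-orbit hop can be bounded by the speed of light. The numbers below (550 km altitude, satellite directly overhead, single bent-pipe hop to a nearby ground station) are simplifying assumptions, not measurements of any real constellation:

```python
C_KM_PER_MS = 299_792.458 / 1000  # speed of light, in km per millisecond

def leo_min_rtt_ms(altitude_km: float = 550.0, hops: int = 1) -> float:
    """Lower bound on round-trip time through a LEO relay.

    Best case, each direction is user -> satellite -> ground station,
    i.e. 2x the altitude one way, so a round trip covers 4x the altitude
    per hop. Queueing, processing, and slant-range geometry only add to this.
    """
    one_way_km = 2 * altitude_km * hops
    return 2 * one_way_km / C_KM_PER_MS

print(round(leo_min_rtt_ms(), 1))  # -> 7.3 (ms)
```

A few milliseconds of propagation delay is well inside a gaming latency budget; in practice the dominant costs are the terrestrial backhaul to the game server and the server's own processing, which this bound deliberately ignores.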
However, as gaming shifts further to the cloud, trillions of edge devices will be hammering these enormous clouds, and that won't be sustainable. “The ratio has to be such that we use AI to decide when to fire that up,” said Synaptics’ Garrett. “OpEx is a big deal. If I have the cloud running for millions of customers, you’re going to be drowning in cost, and the edge is a solution to that.”
At the same time, game providers would like gaming to be on the cloud because then they control where it is and what the experience is like. “One reason gamers don’t want that to be the case is that they want their games to work whether or not they have an internet connection,” noted Synopsys’ Borza. “If they think they bought permanent access to a game, they should have that right.”
Another reason gamers prefer the edge is privacy. “People don’t generally want everyone to know what they’re doing, but at some point, convenience trumps the knowledge that the cloud knows what I’m working on,” said Garrett. “If I’m playing this particular game, at this time of night, the cloud has awareness of your behaviors. It’s hard to mask all of that.”
Gaming security
Closely related to privacy is security. “Gaming is a good example of an early area in which people started to understand their need for security,” said Borza. “Originally, console manufacturers understood that well and ratcheted up their security over time. But now you’ve had this metamorphosis into massive multiplayer games, and those are cloud-based or server-based. There are active defenses built around the servers and the cloud infrastructure that hosts all of those games, and those fall back to the same kinds of techniques. You need to authenticate the users of the system. Users shouldn’t be able to escalate their privileges, even if they get into something available to them through a bug, or something available to them because they managed to steal somebody else’s credentials and break into the system. They should have several layers of security, and there are differences in the ways that operators of the system get access to the system versus the way the regular players do.”
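The privilege separation Borza describes, with regular players kept strictly apart from operators and no path to escalate, can be illustrated with a deny-by-default authorization check (the roles and action names here are hypothetical):

```python
# Each role maps to the exact set of actions it may perform. Anything
# not listed is denied, so a bug or a stolen player credential cannot
# grant more than that role already had (deny-by-default).
PERMISSIONS = {
    "player":    {"join_match", "chat", "report_player"},
    "moderator": {"join_match", "chat", "report_player", "mute_player"},
    "operator":  {"view_logs", "restart_shard", "mute_player"},
}

def is_allowed(role: str, action: str) -> bool:
    """Authorize an action strictly from the role's allow-list."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("player", "chat"))           # -> True
print(is_allowed("player", "restart_shard"))  # -> False
```

In a real deployment this check sits behind authenticated sessions, and the operator paths run on separately firewalled systems, as the next paragraph describes.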
The operator’s data needs to be stored completely independently of where the games are being played. “That is the way a well-designed gaming platform will have it,” Borza noted. “There are separate systems firewalled from each other. People try to detect intrusions, so there’s a lot of sensor technology in the network to look for signs that somebody is roaming around in the network doing stuff they shouldn’t be doing or gaining access to things they shouldn’t be getting at.”
Conclusion
While the cloud will expose more people to gaming with less expensive equipment, for many gamers the variety of equipment and graphics cards is part of the appeal.
“It’s nice to have tactile things to touch and play with, and it’s more creative because then we’re not all sharing the same machine that’s running the same specs,” said Imagination’s Ferguson. “From that perspective, cloud gaming can remove a bit of the fun of what kind of hardware you’re running on, because we’re all running the same hardware on a data center somewhere, but it also makes it more accessible for people.”
The global gaming industry, including games and peripherals, is bigger than the music industry, all of American sports, and all of Hollywood combined, according to Elo’s Hepburn. “It’s massive — over $250 billion. Out of that, the majority of it is mobile gaming, which was $136 billion last year in revenue. Just four years ago, it was $90 billion. A lot of that is going to be legacy games like Candy Crush. But the competitive AAA games are the fastest growing in that industry, because they can now be played on all devices.”
Like vinyl records, CDs, cassettes, and DVDs, video gaming consoles are becoming less essential, yet it seems unlikely they will disappear. “They will complement other types of devices, including PC and smartphones,” said Arm’s Patel. “But the shape of consoles may change and adapt. It might not be consoles as we know them.”
Reference
- https://www.statista.com/outlook/amo/media/games/worldwide
Related Reading
AI Drives More Realistic Gaming
Neural networks handle graphics while AI agents coach gameplay; hallucinations help fill in the gaps.
AR/VR Glasses Taking Shape With New Chips
Smart glasses with augmented reality functions look more natural than VR goggles, but today they are heavily reliant on a phone for compute and next-gen communication.