Most GPUs in contemporary data centers use cold plates to cool the chips. Microsoft has successfully tested a new cooling system that removed heat up to three times better than cold plates, the tech giant said in a blog.
The cost of building and operating a data center is typically measured in terms of the power required to run it, and is therefore generally expressed in kilowatts, megawatts or gigawatts. Electrical systems account for 40-45% of that cost, according to one estimate.
Microsoft said it hopes to use the new fluid for next-generation AI chips, but stopped short of giving a timeline for the implementation. “It will also continue to work with fabrication and silicon partners to bring microfluidics into production across its datacenters,” the post said.
Last week, the Redmond, Washington-based tech giant said it will set aside $4 billion to build a new data center in Wisconsin, where it is already building one at a cost of $3.3 billion that is likely to be operational in 2026.
If a GPU can be cooled quickly, it can also be pushed to greater limits during periods of increased demand, improving efficiency.
“Instead of using more chips, the company could just overclock ones for a few minutes,” Jim Kleewein, a Microsoft technical fellow who works with the hardware team on meeting the needs of its Office software products, told Bloomberg.
On the other hand, it’s bad news for companies that make cooling solutions for data centers. Shares of NYSE-listed Vertiv fell more than 6% in trading after Microsoft’s announcement.
(Edited by: Sriram Iyer)