The data centers, server hardware, and global network infrastructure that underpin Meta’s operations are a critical focus of our efforts to address our environmental impact. As we develop and deploy the compute and storage racks used in our data centers, we are focused on our goal of reaching net zero emissions across our value chain in 2030. To do this, we prioritize interventions that reduce the emissions associated with this hardware, including collaborating with hardware suppliers to reduce upstream emissions.
Design for Sustainability is a set of guidelines, developed and proposed by Meta, to aid hardware designers in reducing the environmental impact of IT racks. The guidelines consider factors such as energy efficiency and the selection, reduction, circularity, and end-of-life disposal of the materials used in hardware. Sustainable hardware design requires collaboration between hardware designers, engineers, and sustainability experts to create hardware that meets performance requirements while limiting environmental impact.
In this guide, we specifically focus on the design of racks that power our data centers and offer alternatives for various components (e.g., mechanicals, cooling, compute, storage and cabling) that can help rack designers make sustainable choices early in the product’s lifecycle.
To reach our net zero goal, we are primarily focused on reducing our Scope 3 (or value chain) emissions from physical sources such as data center construction, IT hardware (compute, storage, and cooling equipment), and network fiber infrastructure.
While energy-efficient hardware helps reduce the energy our data centers consume, we must also consider the IT hardware emissions associated with manufacturing and delivering equipment to Meta, as well as with its end-of-life disposal, recycling, or resale.
Our methods for controlling and reducing Scope 3 emissions generally involve optimizing material selection, choosing and developing lower carbon alternatives in design, and helping to reduce the upstream emissions of our suppliers.
For internal teams focused on hardware, this involves:
These actions to reduce Scope 3 emissions from our IT hardware also have the additional benefit of reducing the amount of electronic waste (e-waste) generated from our data centers.
There are many different rack designs deployed within Meta’s data centers to support different workloads and infrastructure needs, mainly:
While there are differences in architecture across these different rack types, most of these racks apply general hardware design principles and contain active and passive components from a similar group of suppliers. As such, the same design principles for sustainability apply across these varied rack types.
Within each rack, there are five main categories of components that are targeted for emissions reductions:
The emissions breakdown for a generic compute rack is shown below.
We focus on four main categories to address emissions associated with these hardware components:
We will cover a few of the levers listed above in detail below.
Modular design allows older rack components to be reused in newer racks. Open Rack designs (ORv2 and ORv3) make up the bulk of the high-volume racks in our data centers.
Here are some key aspects of the ORv3 modular rack design:
These design principles should continue to be followed in successive generations of racks. With the expansion of AI workloads, new specialized racks for compute, storage, power, and cooling are being developed, challenging designers to apply modular design principles as fully as possible.
Retrofitting existing rack designs for new uses/high density is a cost-effective and sustainable approach to meet evolving data center needs. This strategy can help reduce e-waste, lower costs, and accelerate deployment times. Benefits of re-use/retrofitting include:
There are several challenges when considering re-using or retrofitting racks:
Overall, the benefits of retrofitting existing racks are substantial and should be examined in every new rack design.
Steel is a significant portion of a rack and chassis and substituting traditional steel with green steel can reduce emissions. Green steel is typically produced using electric arc furnaces (EAF) instead of traditional basic oxygen furnaces (BOF), allowing for the use of clean and renewable electricity and a higher quantity of recycled content. This approach significantly reduces carbon emissions associated with steel production. Meta collaborates with suppliers who offer green steel produced with 100% clean and renewable energy.
While steel is a significant component of racks and chassis, aluminum and copper are used extensively in heat sinks and wiring. Recycling steel, aluminum, and copper saves much of the energy that would otherwise be needed to produce hardware from raw materials.
As part of our commitment to sustainability, we now require all racks/chassis to contain a minimum of 20% recycled steel. Additionally, all heat sinks must be manufactured entirely from recycled aluminum or copper. These mandates are an important step in our ongoing sustainability journey.
Several of our steel suppliers, such as Tata Steel, provide recycled steel. Product design teams may ask their original design manufacturer (ODM) partners to make sure that recycled steel is included by the steel vendor(s) those partners select. Similarly, many vendors provide recycled aluminum and copper products.
Extending the useful life of racks, servers, memory, and SSDs helps Meta reduce the amount of hardware that needs to be ordered. This has helped achieve significant reductions in both emissions and costs.
A key requirement for extending the useful life of hardware is the reliability of the hardware component or rack. Benchmarking reliability is an important element in determining whether hardware life extensions are feasible, and for how long. Additional consideration needs to be given to the fact that spares and vendor support may have diminishing availability. Extending hardware life also comes with the risk of increased equipment failure, so a clear strategy for dealing with the higher incidence of potential failures should be put in place.
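As a concrete illustration of this kind of reliability benchmarking, here is a minimal sketch that computes an annualized failure rate (AFR) from fleet data and compares it against an acceptance threshold; the fleet size, failure count, and threshold are invented placeholders, not Meta's actual criteria.

```python
# Hedged sketch: deciding whether a life extension looks feasible based on
# an annualized failure rate (AFR). All numbers are illustrative assumptions.

HOURS_PER_YEAR = 8766  # average year length in hours, including leap years

def annualized_failure_rate(failures: int, device_hours: float) -> float:
    """AFR = failures per device-year of operation."""
    return failures / (device_hours / HOURS_PER_YEAR)

fleet_size = 10_000
observed_failures = 120
hours_in_service = fleet_size * HOURS_PER_YEAR  # one year of fleet telemetry

afr = annualized_failure_rate(observed_failures, hours_in_service)
MAX_AFR_FOR_EXTENSION = 0.02  # hypothetical acceptance threshold (2% per year)
verdict = "feasible" if afr <= MAX_AFR_FOR_EXTENSION else "risky"
print(f"AFR: {afr:.2%} -> life extension looks {verdict}")
```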
Dematerialization and removal of unnecessary hardware components can lead to a significant reduction in the use of raw materials, water, and/or energy. This entails reducing the use of raw materials such as steel on racks or removing unnecessary components on server motherboards while maintaining the design constraints established for the rack and its components.
Dematerialization also involves consolidating multiple racks into fewer, more efficient ones, reducing their overall physical footprint.
Extra components on hardware boards are included for several reasons:
In addition, changes in requirements over time can also lead to extra components. While it is very difficult to modify systems in production, it is important to make sure that each hardware design optimizes for the components that will actually be populated.
Examples of extra components on hardware boards include:
In addition to hardware boards, excess components may also be present in other parts of the rack. Removing excess components can lead to lowering the emissions footprint of a circuit board or rack.
Productionizing new technologies can help Meta significantly reduce emissions. Memory and SSD/HDD storage are typically the largest sources of embodied carbon emissions in a server rack. New technologies can help Meta reduce emissions and costs while providing substantially higher power-normalized performance.
Examples of such technologies include:
Teams can explore additional approaches to reduce emissions associated with memory/SSD/HDD, which include:
Meta engages with suppliers to reduce emissions through its net zero supplier engagement program. This program is designed to set GHG reduction targets with selected suppliers to help achieve our net zero target. Key aspects of the program include:
The Clean Energy Procurement Academy (CEPA), launched in 2023 with Meta and other corporations, helps suppliers, especially in the Asia-Pacific region, learn how to procure renewable energy through region-specific curricula.
The Design for Sustainability principles outlined in this guide represent an important step forward in Meta’s goal to achieve net zero emissions in 2030. By integrating innovative design strategies such as modularity, reuse, retrofitting, and dematerialization, alongside the adoption of greener materials and extended hardware lifecycles, Meta can significantly reduce the carbon footprint of its data center infrastructure. These approaches not only lower emissions but also drive cost savings, e-waste reductions, and operational efficiency, reinforcing sustainability as a core business value.
Collaboration across hardware designers, engineers, suppliers, and sustainability experts is essential to realize these goals. The ongoing engagement with suppliers further amplifies the impact by addressing emissions across our entire value chain. As Meta continues to evolve its rack designs and operational frameworks, the focus on sustainability will remain paramount, ensuring that future infrastructure innovations support both environmental responsibility and business performance.
Ultimately, the success of these efforts will be measured by tangible emissions reductions, extended useful life of server hardware, and the widespread adoption of low carbon technologies and materials.
As Meta focuses on achieving net zero emissions in 2030, understanding the carbon footprint of server hardware is crucial for making informed decisions about sustainable sourcing and design. However, calculating the precise carbon footprint is challenging due to complex supply chains and limited data from suppliers. IT hardware used in our data centers is a significant source of emissions, and the embodied carbon associated with the manufacturing and transportation of this hardware is particularly challenging to quantify.
To address this, we developed a methodology to estimate and track the carbon emissions of hundreds of millions of components in our data centers. This approach involves a combination of cost-based estimates, modeled estimates, and component-specific product carbon footprints (PCFs) to provide a detailed understanding of embodied carbon emissions. These component-level estimates are ranked by the quality of data and aggregated at the server rack level.
By using this approach, we can analyze emissions at multiple levels of granularity, from individual screws to entire rack assemblies. This comprehensive framework allows us to identify high-impact areas for emissions reduction.
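To make this concrete, the sketch below shows one way such ranking and roll-up could work: each component keeps its highest-quality available estimate (PCF over modeled over cost-based), and the survivors are summed to a rack total. The component names, quality tiers, and figures are invented for illustration and are not Meta's actual data model.

```python
# Minimal sketch (not Meta's production code): rank component-level estimates
# by data quality, keep the best one per component, and aggregate per rack.
from dataclasses import dataclass

# Lower rank = higher-quality data source.
QUALITY_RANK = {"pcf": 0, "modeled": 1, "cost_based": 2}

@dataclass
class Estimate:
    component_id: str
    method: str      # "pcf", "modeled", or "cost_based"
    kg_co2e: float

def best_estimates(estimates: list[Estimate]) -> dict[str, Estimate]:
    """Keep the highest-quality estimate available for each component."""
    best: dict[str, Estimate] = {}
    for est in estimates:
        current = best.get(est.component_id)
        if current is None or QUALITY_RANK[est.method] < QUALITY_RANK[current.method]:
            best[est.component_id] = est
    return best

def rack_footprint(estimates: list[Estimate]) -> float:
    """Aggregate the chosen component estimates into a rack-level total."""
    return sum(e.kg_co2e for e in best_estimates(estimates).values())

rack = [
    Estimate("dimm-64g", "cost_based", 90.0),
    Estimate("dimm-64g", "pcf", 72.5),        # the PCF supersedes the cost-based figure
    Estimate("chassis-steel", "modeled", 310.0),
]
print(f"Rack total: {rack_footprint(rack):.1f} kg CO2e")  # 382.5
```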
Our ultimate goal is to drive the industry to adopt more sustainable manufacturing practices and produce components with reduced emissions. This initiative underscores the importance of high-quality data and of collaboration with suppliers in enhancing the accuracy of carbon footprint calculations and driving more sustainable practices.
We leveraged AI to help us improve this database and understand our Scope 3 emissions associated with IT hardware by:
For this work we used various AI methods to enhance the accuracy and coverage of Scope 3 emission estimates for our IT hardware. Our approach leverages the unique strengths of both natural language processing (NLP) and large language models (LLMs).
In our first use case (Identifying similar components with AI), we employed NLP techniques such as term frequency-inverse document frequency (TF-IDF) and cosine similarity to identify patterns within a bounded, relatively small dataset. Specifically, we applied this method to determine the similarity between different components. This approach allowed us to develop a highly specialized model for this specific task.
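Below is a minimal sketch of that technique using scikit-learn: TF-IDF over character n-grams of a few invented part descriptions, with cosine similarity flagging likely proxies. The n-gram settings and threshold are illustrative choices, not the tuned values from our pipeline.

```python
# Illustrative sketch of TF-IDF + cosine similarity for finding candidate
# proxy components from free-text descriptions. Data and threshold are
# hypothetical; the real pipeline also filters candidates by category.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = [
    "DIMM, DDR5, 64GB, 4800MT/s, RDIMM",
    "Memory module DDR5 64 GB 4800 MT/s registered",
    "Cable, QSFP-DD, 2m, passive copper",
]

# Character n-grams cope well with part numbers and terse spec strings.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
tfidf = vectorizer.fit_transform(descriptions)

sim = cosine_similarity(tfidf)  # pairwise similarity matrix

THRESHOLD = 0.5  # dataset-specific tuning knob
for i in range(len(descriptions)):
    for j in range(i + 1, len(descriptions)):
        if sim[i, j] >= THRESHOLD:
            print(f"Possible proxies ({sim[i, j]:.2f}): "
                  f"{descriptions[i]!r} ~ {descriptions[j]!r}")
```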
LLMs are pre-trained on a large corpus of text data, enabling them to learn general patterns and relationships in language. They go through a post-training phase to adapt to specific use cases such as chatbots. We apply LLMs, specifically Llama 3.1, in the following three different scenarios:
Unlike the first use case, where we needed a highly specialized model to detect similarities, we opted for an LLM in these three use cases because it leverages general rules of human language. This includes handling different units for parameters, grouping synonyms into categories, and recognizing varied phrasing or terminology that conveys the same concept. This approach allows us to efficiently handle variability and complexity in language that would have required significantly more time and effort to address with traditional AI alone.
When analyzing inventory components, it’s common for multiple identifiers to represent the same parts or slight variations of them. This can occur due to differences in lifecycle stages, minor compositional variations, or new iterations of the part.
PCFs following the GHG Protocol are the highest quality input data we can reference for each component, as they typically account for the Scope 3 emissions estimates throughout the entire lifecycle of the component. However, conducting a PCF is a time-consuming process that typically takes months. Therefore, when we receive PCF information, it is crucial to ensure that we map all the components correctly.
PCFs are typically tied to a specific identifier, along with aggregated components. For instance, a PCF might be performed specifically for a particular board in a server, but there could be numerous variations of this specific component within an inventory. The complexity increases as the subcomponents of these items are often identical, meaning the potential impact of a PCF can be significantly multiplied across a fleet.
To maximize the utility of a PCF, it is essential not only to identify the primary component and its related subcomponents but also to identify all similar parts to which the PCF could be applied. If these similar components are not identified, their carbon footprint estimates will remain at a lower data quality. Therefore, identifying similar components is crucial to ensure that we:
To achieve this, we employed a natural language processing (NLP) algorithm, specifically tailored to the language of this dataset, to identify possible proxy components by analyzing textual descriptions and filtering results by component category to ensure relevance.
The algorithm identifies proxy components in two distinct ways:
Meta’s Net Zero team reviews the proposed proxies and validates our ability to apply them in our estimates. This approach enhances the accuracy and consistency of component data, ensures that high-quality PCF data is effectively utilized across similar components, and enables us to design our systems to more effectively reduce emissions associated with server hardware.
When PCFs are not available, we aim to avoid using spend-to-carbon methods because they tie sustainability too closely to spending on hardware and can be less accurate due to the influence of factors like supply chain disruptions.
Instead, we have developed a portfolio of methods to estimate the carbon footprint of these components, including through parameterized modeling. To adapt any model at scale, we require two essential elements: a deterministic model to scale the emissions, and a list of data input parameters. For example, we can scale the carbon footprint calculation for a component by knowing its constituent components’ carbon footprint.
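As an illustration of the pattern, the sketch below defines two toy parameterized models, one scaled by memory capacity and one by cable type and length. The coefficients are placeholders, not real emission factors.

```python
# Sketch of parameterized emissions models: a deterministic formula plus a
# small set of input parameters. All coefficients below are placeholders.
from dataclasses import dataclass

@dataclass
class MemoryModel:
    base_kg_co2e: float      # fixed overhead per module (PCB, packaging)
    kg_co2e_per_gb: float    # capacity-dependent term

    def estimate(self, capacity_gb: float) -> float:
        return self.base_kg_co2e + self.kg_co2e_per_gb * capacity_gb

@dataclass
class CableModel:
    kg_co2e_per_meter: dict[str, float]  # keyed by cable type

    def estimate(self, cable_type: str, length_m: float) -> float:
        return self.kg_co2e_per_meter[cable_type] * length_m

memory_model = MemoryModel(base_kg_co2e=5.0, kg_co2e_per_gb=0.9)
cable_model = CableModel(kg_co2e_per_meter={"copper": 1.2, "fiber": 0.4})

# The same deterministic model scales across a whole component family.
print(memory_model.estimate(capacity_gb=64))         # 62.6
print(cable_model.estimate("copper", length_m=2.0))  # 2.4
```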
However, applying this methodology can be challenging due to inconsistent description data or locations where information is presented. For instance, information about cables may be stored in different tables, formats, or units, so we may be unable to apply models to some components due to difficulty in locating input data.
To overcome this challenge, we have utilized large language models (LLMs) that extract information from heterogeneous sources and inject the extracted information into the parameterized model. This differs from how we apply NLP, as it focuses on extracting information from specific components. Scaling a common model ensures that the estimates provided for these parts are consistent with similar parts from the same family and can inform estimates for missing or misaligned parts.
We applied this approach to two specific categories: memory and cables. The LLM extracts relevant data (e.g., the capacity for memory estimates and length/type of cable for physics-based estimates) and scales the components’ emissions calculations according to the provided formulas.
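A simplified sketch of this extraction step follows. The `generate` callable stands in for whatever LLM inference endpoint is in use (for example, a hosted Llama 3.1 model); it, the prompt wording, and the field names are hypothetical placeholders.

```python
# Sketch: use an LLM to pull model inputs out of heterogeneous descriptions,
# then hand them to a parameterized model. `generate` is a hypothetical
# stand-in for a real LLM inference call, not an actual library API.
import json
from typing import Callable

PROMPT_TEMPLATE = """Extract the following fields from this hardware component
description and reply with JSON only:
- capacity_gb (number or null)
- cable_type ("copper", "fiber", or null)
- length_m (number or null)

Description: {description}"""

def extract_parameters(description: str, generate: Callable[[str], str]) -> dict:
    """Ask the LLM for structured parameters and parse its JSON reply."""
    reply = generate(PROMPT_TEMPLATE.format(description=description))
    return json.loads(reply)

# Usage (with the toy CableModel from the previous sketch):
# params = extract_parameters("Cable, QSFP-DD, 2m, passive copper", generate)
# if params["cable_type"] is not None:
#     footprint = cable_model.estimate(params["cable_type"], params["length_m"])
```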
We utilize our centralized component carbon footprint database not only for reporting emissions, but also to drive our ability to efficiently deploy emissions reduction interventions. Conducting a granular analysis of component-level emissions enables us to pinpoint specific areas for improvement and prioritize our efforts to achieve net zero emissions. For instance, if a particular component is found to have a disproportionately high carbon footprint, we can explore alternative materials or manufacturing processes to mitigate its environmental impact. We may also determine that we should reuse components and extend their useful life by testing or augmenting component reliability. By leveraging data-driven insights at the component level and driving proactive design interventions to reduce component emissions, we can more effectively prioritize sustainability when designing new servers.
We leverage a bill of materials (BOM) to list all of the components in a server rack in a tree structure, with “children” component nodes listed under “parent” nodes. However, each vendor can have a different BOM structure, so two identical racks may be represented differently. This, coupled with the heterogeneity of methods to estimate emissions, makes it challenging to easily identify actions to reduce component emissions.
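The sketch below shows the tree structure in miniature: each BOM node carries the emissions attributed directly to it and recursively aggregates its children. Names and figures are invented for illustration.

```python
# Sketch of a BOM as a tree, with emissions rolled up from children to parents.
from dataclasses import dataclass, field

@dataclass
class BomNode:
    name: str
    kg_co2e: float = 0.0                 # emissions attributed directly to this part
    children: list["BomNode"] = field(default_factory=list)

    def total_kg_co2e(self) -> float:
        """Aggregate this node plus everything beneath it."""
        return self.kg_co2e + sum(c.total_kg_co2e() for c in self.children)

rack = BomNode("rack", children=[
    BomNode("server", children=[
        BomNode("motherboard", kg_co2e=45.0),
        BomNode("dimm-64g", kg_co2e=72.5),
    ]),
    BomNode("chassis-steel", kg_co2e=310.0),
])
print(f"{rack.total_kg_co2e():.1f} kg CO2e")  # 427.5
```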
To address this challenge, we have used AI to categorize the descriptive data of our racks into two hierarchical levels:
We have developed two classification models: one for “domain” mapping, and another for “component” mapping. The difference between these mappings lies in the training data and the additional set of examples provided to each model. We then combine the two classifications to generate a mutually exclusive hierarchy.
During the exploration phase of the new taxonomy generation, we allowed the GenAI model to operate freely to identify potential categories for grouping. After reviewing these potential groupings with our internal hardware experts, we established a fixed list of major components. Once this list was finalized, we switched to a strict GenAI classifier model that must choose from that fixed list.
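As a rough illustration of what such a strict classifier can look like, the sketch below constrains the model's answer to a fixed label list and routes anything else to human review. The labels, prompt, and `generate` callable are invented examples, not our actual taxonomy or API.

```python
# Sketch of a strict GenAI classifier: the model must pick from a fixed list
# of major components. Labels and prompt are hypothetical examples.
from typing import Callable

DOMAINS = ["compute", "memory", "storage", "power", "cooling", "mechanical", "cabling"]

STRICT_PROMPT = """Classify this hardware BOM line into exactly one of the
following domains: {labels}. Answer with the label only.

Line: {line}"""

def classify_domain(line: str, generate: Callable[[str], str]) -> str:
    reply = generate(STRICT_PROMPT.format(labels=", ".join(DOMAINS), line=line))
    label = reply.strip().lower()
    # Guard against free-form answers: anything off-list goes to review.
    return label if label in DOMAINS else "needs_review"
```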
This methodology was presented at the 2025 OCP regional EMEA summit, with the goals of driving the industry toward a common taxonomy for carbon footprint emissions and of open sourcing the methodology we used to create our taxonomy.
These groupings are specifically created to aid carbon footprint analysis, rather than for other purposes such as cost analysis. However, the methodology can be tailored for other purposes as necessary.
As we work toward achieving net zero emissions across our value chain in 2030, this component-level breakdown methodology is necessary to help understand our emissions at the server component level. By using a combination of high-quality PCFs, spend-to-carbon data, and a portfolio of methods that leverage AI, we can enhance our data quality and coverage to more effectively deploy emissions reduction interventions.
Our next steps include open sourcing:
We are committed to sharing our learnings with the industry as we evolve this methodology, now as part of a collaborative effort with the OCP PCR group.