Blog

  • Intel Targets 2026 for Sampling of Crescent Island GPU Built for Next-Gen AI Data Centers

    This article first appeared on GuruFocus.

    Intel (INTC, Financials) is making another push into the artificial intelligence chip market with its upcoming data center GPU, code-named Crescent Island, which customers will begin testing in the…

    Continue Reading

  • Simple Life Lands $35M For AI-Powered Weight Loss

    Simple Life is getting smarter.

    What’s happening: The London-based behavioral coaching company secured a $35M Series B led by actor Kevin Hart’s HartBeat Ventures to advance AI-powered plans.

    Lean machine. Counting 800K…

    Continue Reading

  • Design for Sustainability: New Design Principles for Reducing IT Hardware Emissions

    • We’re presenting Design for Sustainability, a set of technical design principles for IT hardware that reduce emissions and cost through reuse, extended useful life, and optimized design.
    • At Meta, we’ve been able to significantly reduce the carbon footprint of our data centers by integrating design strategies such as modularity, reuse, retrofitting, dematerialization, greener materials, and extended hardware lifecycles.
    • We’re inviting the wider industry to also adopt the strategies outlined here to help reach sustainability goals.

    The data centers, server hardware, and global network infrastructure that underpin Meta’s products are a critical focus in addressing the environmental impact of our operations. As we develop and deploy the compute and storage racks used in our data centers, we are focused on our goal of reaching net zero emissions across our value chain in 2030. To do this, we prioritize interventions that reduce emissions associated with this hardware, including collaborating with hardware suppliers to reduce upstream emissions.

    What Is Design for Sustainability? 

    Design for Sustainability is a set of guidelines, developed and proposed by Meta, to aid hardware designers in reducing the environmental impact of IT racks. This considers various factors such as energy efficiency and the selection, reduction, circularity, and end-of-life disposal of materials used in hardware. Sustainable hardware design requires collaboration between hardware designers, engineers, and sustainability experts to create hardware that meets performance requirements while limiting environmental impact.

    In this guide, we specifically focus on the design of racks that power our data centers and offer alternatives for various components (e.g., mechanicals, cooling, compute, storage and cabling) that can help rack designers make sustainable choices early in the product’s lifecycle. 

    Our Focus on Scope 3 Emissions

    To reach our net zero goal, we are primarily focused on reducing our Scope 3 (or value chain) emissions from physical sources such as data center construction, our IT hardware (compute, storage, and cooling equipment), and our network fiber infrastructure.

    While the energy efficiency of the hardware deployed in our data centers helps reduce energy consumption, we also have to consider the IT hardware emissions associated with manufacturing and delivering equipment to Meta, as well as with the end-of-life disposal, recycling, or resale of this hardware.

    Our methods for controlling and reducing Scope 3 emissions generally involve optimizing material selection, choosing and developing lower carbon alternatives in design, and helping to reduce the upstream emissions of our suppliers.

    For internal teams focused on hardware, this involves:

    • Optimizing hardware design for the lowest possible emissions, extending the useful life of materials as much as possible with each system design, or using lower carbon materials.
    • Being more efficient by extending the useful life of IT racks to potentially skip new generations of equipment.
    • Harvesting server components that are no longer commercially available so they can be used as spares. When racks reach end-of-life, some components still have service life left in them and can be harvested and reused in a variety of ways. Circularity programs harvest components such as dual in-line memory modules (DIMMs) from end-of-life racks and redeploy them in new builds.
    • Knowing the emissions profiles of suppliers, components, and system designs. This in turn informs future roadmaps that will further reduce emissions.
    • Collaborating with suppliers to electrify their manufacturing processes, to transition to renewable energy, and to leverage lower carbon materials and designs.

    These actions to reduce Scope 3 emissions from our IT hardware also have the additional benefit of reducing the amount of electronic waste (e-waste) generated from our data centers.

    An Overview of the Types of Racks We Deploy 

    There are many different rack designs deployed within Meta’s data centers to support different workloads and infrastructure needs, mainly:

    1. AI – AI training and inference workloads
    2. Compute – General compute needed for running Meta’s products and services
    3. Storage – Storing and maintaining data used by our products
    4. Network – Providing low-latency interconnections between servers

    While there are differences in architecture across these different rack types, most of these racks apply general hardware design principles and contain active and passive components from a similar group of suppliers. As such, the same design principles for sustainability apply across these varied rack types.

    Within each rack, there are five main categories of components that are targeted for emissions reductions: 

    1. Compute (e.g., memory, HDD/SSD)
    2. Storage
    3. Network
    4. Power
    5. Rack infrastructure (e.g., mechanical and thermals)

    The emissions breakdown for a generic compute rack is shown below.

    Our Techniques for Reducing Emissions

    We focus on four main categories of levers to address emissions associated with these hardware components. We cover several of these levers in detail below.

    Modular Rack Designs

    Modular design allows older rack components to be reused in newer racks. Open Rack designs (ORv2 and ORv3) form the bulk of the high-volume racks in our data centers.

    Here are some key aspects of the ORv3 modular rack design:

    • ORv3 separates Power Supply Units (PSUs) and Battery Backup Units (BBUs) into their own shelves.
      This allows for more reliable and flexible configurations, making repairs and replacements easier as each field replaceable unit (FRU) is toolless to replace.
    • Power and flexibility
      The ORv3 design includes a 48 V power output, which allows the power shelf to be placed anywhere in the rack. This is an improvement over the previous ORv2 design, which limited the power shelf to a specific power zone.
    • Configurations
      The rack can accommodate different configurations of PSU and BBU shelves to meet various platform and regional requirements. For example, North America uses a dual AC input per PSU shelf, while Europe and Asia use a single AC input. 
    • Commonization effort
      There is an ongoing effort to design a “commonized” ORv3 rack frame that incorporates features from various rack variations into one standard frame. This aims to streamline the assembly process, reduce quality risks, and lower overall product costs.
    • ORv3N
      A derivative of ORv3, known as ORv3N, is designed for network-specific applications. It includes an in-rack PSU and BBU, offering efficiency and cost improvements over traditional in-row UPS systems.

    These design principles should continue to be followed in successive generations of racks. With the expansion of AI workloads, new specialized racks for compute, storage, power, and cooling are being developed, challenging designers to apply modular design principles as fully as possible.

    Re-Using/Retrofitting Existing Rack Designs

    Retrofitting existing rack designs for new uses/high density is a cost-effective and sustainable approach to meet evolving data center needs. This strategy can help reduce e-waste, lower costs, and accelerate deployment times. Benefits of re-use/retrofitting include:

    • Cost savings
      Retrofitting existing racks can be significantly cheaper compared to purchasing new racks.
    • Reduced e-waste
      Reusing existing racks reduces the amount of e-waste generated by data centers.
    • Faster deployment
      Retrofitting existing racks can be completed faster than deploying new racks, as it eliminates the need for procurement and manufacturing lead times.
    • Environmental benefits
      Reducing e-waste and reusing existing materials helps minimize the environmental impact of data centers.

    There are several challenges when considering re-using or retrofitting racks:

    • Compatibility issues
      Ensuring compatibility between old and new components can be challenging.
    • Power and cooling requirements
      Retrofitting existing racks may require upgrades to power and cooling systems to support new equipment.
    • Scalability and flexibility
      Retrofitting existing racks may limit scalability and flexibility in terms of future upgrades or changes.
    • Testing and validation
      Thorough testing and validation are required to ensure that retrofitted racks meet performance and reliability standards.

    Overall, the benefits of retrofitting existing racks are substantial and should be examined in every new rack design.

    Green Steel

    Steel is a significant portion of a rack and chassis and substituting traditional steel with green steel can reduce emissions. Green steel is typically produced using electric arc furnaces (EAF) instead of traditional basic oxygen furnaces (BOF), allowing for the use of clean and renewable electricity and a higher quantity of recycled content. This approach significantly reduces carbon emissions associated with steel production. Meta collaborates with suppliers who offer green steel produced with 100% clean and renewable energy.

    Recycled Steel, Aluminum, and Copper

    While steel is a significant component of rack and chassis, aluminum and copper are extensively used in heat sinks and wiring. Recycling steel, aluminum, and copper saves significant energy needed to produce hardware from raw materials. 

    As part of our commitment to sustainability, we now require all racks/chassis to contain a minimum of 20% recycled steel. Additionally, all heat sinks must be manufactured entirely from recycled aluminum or copper. These mandates are an important step in our ongoing sustainability journey.
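    Mandates like these lend themselves to automated checks at design time. Below is a minimal, hypothetical sketch of validating a bill of materials against such recycled-content requirements; the field names, categories, and sample parts are all invented for illustration and are not Meta's actual tooling.

```python
# Hypothetical sketch: validating a bill of materials (BOM) against
# recycled-content mandates (20% recycled steel for racks/chassis,
# 100% recycled aluminum/copper for heat sinks).
# Field names and sample parts are invented for illustration.

def check_recycled_content(bom):
    """Return the names of parts that violate the recycled-content mandates."""
    violations = []
    for part in bom:
        recycled = part["recycled_fraction"]  # 0.0 - 1.0
        if part["category"] in ("rack", "chassis") and part["material"] == "steel":
            if recycled < 0.20:
                violations.append(part["name"])
        elif part["category"] == "heat_sink" and part["material"] in ("aluminum", "copper"):
            if recycled < 1.0:
                violations.append(part["name"])
    return violations

bom = [
    {"name": "rack frame", "category": "rack", "material": "steel", "recycled_fraction": 0.25},
    {"name": "CPU heat sink", "category": "heat_sink", "material": "aluminum", "recycled_fraction": 0.9},
]
print(check_recycled_content(bom))  # ['CPU heat sink']
```

    A check like this could run as part of an ODM design review, flagging non-compliant parts before a design is locked in.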

    Several of our steel suppliers, such as Tata Steel, provide recycled steel. Product design teams can ask their original design manufacturer (ODM) partners to ensure that the steel vendors they select supply recycled steel. Similarly, many vendors provide recycled aluminum and copper products.

    Improving Reliability to Extend Useful Life

    Extending the useful life of racks, servers, memory, and SSDs helps Meta reduce the amount of new hardware that needs to be ordered. This has helped achieve significant reductions in both emissions and costs.

    A key requirement for extending the useful life of hardware is the reliability of the component or rack. Benchmarking reliability is an important element in determining whether hardware life extensions are feasible, and for how long. Additional consideration must be given to the diminishing availability of spares and vendor support. Extending hardware life also comes with the risk of increased equipment failure, so a clear strategy for dealing with a higher incidence of failures should be put in place.
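    As a rough illustration of this kind of reliability benchmarking, the sketch below estimates an annualized failure rate (AFR) from fleet observations and flags whether a life extension looks feasible. The threshold and figures are hypothetical, not Meta's actual criteria.

```python
# Illustrative sketch (not Meta's actual methodology): estimating an
# annualized failure rate (AFR) from fleet telemetry and using a
# threshold to flag whether a hardware life extension looks feasible.
# The 2% threshold is a made-up example value.

def annualized_failure_rate(failures, unit_days):
    """AFR = failures per device-year observed."""
    device_years = unit_days / 365.0
    return failures / device_years

def extension_feasible(failures, unit_days, afr_threshold=0.02):
    """A life extension looks feasible if the observed AFR stays under the threshold."""
    return annualized_failure_rate(failures, unit_days) <= afr_threshold

# 10,000 drives observed for one year with 150 failures -> AFR = 1.5%
print(extension_feasible(150, 10_000 * 365))  # True under a 2% threshold
```

    In practice the decision would also weigh spares availability and repair costs, as noted above.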

    Dematerialization

    Dematerialization and removal of unnecessary hardware components can lead to a significant reduction in the use of raw materials, water, and/or energy. This entails reducing the use of raw materials such as steel on racks or removing unnecessary components on server motherboards while maintaining the design constraints established for the rack and its components. 

    Dematerialization also involves consolidating multiple racks into fewer, more efficient ones, reducing their overall physical footprint. 

    Extra components on hardware boards are included for several reasons:

    1. Future-proofing
      Components might be added to a circuit board in anticipation of future upgrades or changes in the design. This allows manufacturers to easily modify the board without having to redesign it from scratch.
    2. Flexibility
      Extra components can provide flexibility in terms of configuration options. For example, a board might have multiple connectors or interfaces that can be used depending on the specific application.
    3. Debugging and testing
      Additional components can be used for debugging and testing purposes. These components might include test points, debug headers, or other features that help engineers diagnose issues during development.
    4. Redundancy
      In some cases, extra components are included to provide redundancy in case one component fails. This is particularly important in high-reliability applications where system failure could have significant consequences.
    5. Modularity
      Extra components can make a board more modular, allowing users to customize or upgrade their system by adding or removing modules.
    6. Regulatory compliance
      Some components might be required for regulatory compliance, such as safety features or electromagnetic interference (EMI) filtering.

    In addition, changes in requirements over time can also lead to extra components. While it is very difficult to modify systems in production, it is important to make sure that each hardware design optimizes for components that will be populated. 

    Examples of extra components on hardware boards include:

    • Unpopulated integrated circuit (IC) sockets or footprints
    • Unused connectors or headers
    • Test points or debug headers
    • Redundant power supplies or capacitors
    • Optional memory or storage components
    • Unconnected or reserved pins on ICs

    In addition to hardware boards, excess components may also be present in other parts of the rack. Removing excess components can lead to lowering the emissions footprint of a circuit board or rack. 

    Productionizing New Technologies With Lower Emissions

    Productionizing new technologies can help Meta significantly reduce emissions. Memory and SSD/HDD are typically the single largest source of embodied carbon emissions in a server rack. New technologies can help Meta reduce emissions and costs while providing a substantially higher power-normalized performance. 

    Examples of such technologies include:

    • Transitioning to SSD from HDD can reduce emissions by requiring fewer drives, servers, racks, BBUs, and PSUs, as well as help reduce overall energy usage. 
    • Depending on local environmental conditions, and the data center’s workload, using liquid cooling in server racks can be up to 17% more carbon-efficient than traditional air cooling.
    Source: OCP Global Summit, Oct 15-17, 2024, San Jose, CA.

    Teams can explore additional approaches to reduce emissions associated with memory/SSD/HDD which include:

    • Adopting alternate technologies such as phase-change memory (PCM) or magnetoresistive random-access memory (MRAM), which offer comparable performance with a lower carbon footprint.
    • Using low-power double data rate (LPDDR) memory instead of DDR for lower power consumption and high bandwidth.
    • Removing or reusing unused memory modules to reduce energy usage, or down-clocking them during idle periods.
    • Using fewer, higher-capacity memory modules to reduce power and cooling needs, or adopting high-bandwidth memory (HBM), which uses much less energy than DDR memory.

    Choosing the Right Suppliers

    Meta engages with suppliers to reduce emissions through its net zero supplier engagement program. This program is designed to set GHG reduction targets with selected suppliers to help achieve our net zero target. Key aspects of the program include:

    • Providing capacity building: Training suppliers on how to measure emissions, set science-aligned targets, build reduction roadmaps, procure renewable energy, and understand energy markets. 
    • Scaling up: In 2021 the program started with 39 key suppliers; by 2024 it expanded to include 183 suppliers, who together account for over half of Meta’s supplier-related emissions. 
    • Setting target goals: Meta aims to have two-thirds of its suppliers set science-aligned greenhouse gas reduction targets by 2026. As of the end of 2024, 48% (by emissions contribution) had done so.

    The Clean Energy Procurement Academy (CEPA), launched in 2023 (with Meta and other corporations), helps suppliers — especially in the Asia-Pacific region — learn how to procure renewable energy via region-specific curricula. 

    The Road to Net Zero Emissions

    The Design for Sustainability principles outlined in this guide represent an important step forward in Meta’s goal to achieve net zero emissions in 2030. By integrating innovative design strategies such as modularity, reuse, retrofitting, and dematerialization, alongside the adoption of greener materials and extended hardware lifecycles, Meta can significantly reduce the carbon footprint of its data center infrastructure. These approaches not only lower emissions but also drive cost savings, e-waste reductions, and operational efficiency, reinforcing sustainability as a core business value.

    Collaboration across hardware designers, engineers, suppliers, and sustainability experts is essential to realize these goals. The ongoing engagement with suppliers further amplifies the impact by addressing emissions across our entire value chain. As Meta continues to evolve its rack designs and operational frameworks, the focus on sustainability will remain paramount, ensuring that future infrastructure innovations support both environmental responsibility and business performance.

    Ultimately, the success of these efforts will be measured by tangible emissions reductions, extended useful life of server hardware, and the widespread adoption of low carbon technologies and materials.


    Continue Reading

  • Advanced labor cesarean birth raises future preterm birth risk

    Advanced labor cesarean birth raises future preterm birth risk | Image Credit: © Michael – stock.adobe.com.

    Scarring in the wound linked to increased preterm birth risk in future pregnancies is 8 times more likely in women with…

    Continue Reading

  • YouTube has a new video player

    YouTube is updating the look of the video player to be “cleaner and more immersive” beginning this week. “This includes updated controls and new icons to make the viewing experience more visually satisfying while obscuring less content,”

    Continue Reading

  • Wall Street’s ‘fear gauge’ surges to highest level since May. Here’s what investors should know.

    By Joseph Adinolfi

    The revival of the U.S.-China trade war has ended a streak of summer calm that had brought about the lowest volatility since January 2020

    The stock market’s “fear gauge” is back above its long-term average.

    After one of the quietest summers for the stock market in years, Wall Street’s fear gauge has once again shot higher as investors fret that a trade standoff between the U.S. and China could escalate further.

    The Cboe Volatility Index VIX, better known as the VIX, or Wall Street’s “fear gauge,” traded as high as 22.76 on Tuesday, its highest intraday level since May 23, when it traded as high as 25.53, according to Dow Jones Market Data. By the time the market closed, the VIX had moved well off its earlier highs. The index ended the day above 20, a level with some significance.

    Since the VIX’s inception in the early 1990s, its long-term average has sat just below 20. As a result, investors tend to see this level as the line in the sand between a relatively calm market and one that is starting to look a bit more panicked.

    The level of the VIX is based on trading activity in options contracts tied to the S&P 500 SPX that are due to expire in roughly one month. It is seen as a proxy for how worried traders are about the possibility that stocks could be due for a nosedive. After all, volatility tends to rise more quickly when the market is falling.

    A summer lull

    Looking back, there were signs that investors were beginning to feel a bit too complacent.

    Stocks trundled higher all summer with few interruptions. Last week, this placid trading ultimately sent the three-month realized volatility for the S&P 500 to its lowest level since January 2020, according to FactSet data and MarketWatch calculations.

    Realized volatility is a calculation that measures how volatile a given index or asset has been in the recent past. The VIX, which measures implied volatility, attempts to gauge how volatile investors expect markets will be in the immediate future.
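    For readers curious about the arithmetic, here is a minimal sketch of a realized-volatility calculation: the annualized standard deviation of daily log returns. The prices are made up, and a real three-month calculation would use a full lookback window (roughly 63 trading days).

```python
# Minimal sketch of realized volatility: the annualized standard
# deviation of daily log returns. Prices below are invented; a real
# three-month window would use ~63 daily closes.
import math

def realized_volatility(prices, trading_days=252):
    """Annualized realized volatility from a series of closing prices."""
    log_returns = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    mean = sum(log_returns) / len(log_returns)
    # Sample variance of daily returns, annualized by the number of trading days.
    variance = sum((r - mean) ** 2 for r in log_returns) / (len(log_returns) - 1)
    return math.sqrt(variance * trading_days)

prices = [100.0, 101.0, 100.5, 102.0, 101.2, 103.0]
print(f"{realized_volatility(prices):.1%}")  # annualized volatility of the sample series
```

    The VIX, by contrast, is derived from S&P 500 option prices, so it reflects the volatility traders expect rather than what has already occurred.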

    For a while, the VIX trended lower alongside realized volatility for the S&P 500. But around Labor Day, the two started to diverge.

    This could mean a couple of different things, according to portfolio managers who spoke with MarketWatch. The first is that investors increasingly preferred to bet on further upside in the stock market using call options instead of actual shares. Call options on the S&P 500 will deliver a payoff if the index rises above a predetermined level before a given time, which is known as the expiration date.

    It might also mean that some traders were scooping up put options, which act like a form of portfolio insurance. Wary of myriad risks that could upset the apple cart following a record-setting rebound earlier in the year, some investors may have preferred to hedge their downside risk, while holding on to their stocks, so as not to miss out on any further gains.

    Signs that the market might be bracing for some upcoming turbulence first started to emerge in late September. Between Sept. 29 and Oct. 3, the S&P 500 and the VIX rose simultaneously for five straight sessions. That hadn’t happened since at least 1996, according to an analysis from Carson Group’s Ryan Detrick.

    Seeing both the VIX and S&P 500 trend higher hinted that the market’s streak of calm might soon be coming to an end, said Michael Kramer, portfolio manager at Mott Capital Management.

    “The tinder was there for something like Friday to occur,” said Mike Thompson, co-portfolio manager at Little Harbor Advisors.

    “You just needed that spark to trigger it,” Mott Capital’s Kramer said.

    While the U.S.-China trade tensions remain far from settled, Thompson and his brother, Matt Thompson, also a co-portfolio manager at Little Harbor Advisors, are keeping an eye out for any indication that a bigger burst of volatility might lie ahead.

    Investors have largely blamed the selloff on the revival of trade tensions between the U.S. and China. On Friday, President Donald Trump threatened 100% tariffs on all Chinese goods imported into the U.S. in retaliation for Beijing stepping up export controls on rare earth metals.

    Then on Tuesday, Beijing sanctioned U.S. subsidiaries of a South Korean shipping firm, sparking a global stock-market selloff that had largely reversed by the time the closing bell rang out on Wall Street.

    But according to the Thompson brothers, the U.S.-China tariff dance has started to feel a little too familiar for it to be a real cause for concern. Investors appear to be catching on to the pattern of escalation, followed immediately by de-escalation, as each side vies for maximum leverage.

    A more plausible threat to market calm, in their view, would be the ructions in the credit market. On Tuesday, JPMorgan Chase & Co. (JPM) Chief Executive Jamie Dimon warned about the potential for more credit problems after the bank lost money on a loan to bankrupt subprime auto lender Tricolor. Trouble in the space could get worse after a long period where conditions in the credit market were relatively favorable.

    On Friday, BlackRock (BLK) and other institutional investors asked for their money back from Point Bonita Capital, a fund managed by the investment bank Jefferies (JEF), after the bankruptcy of auto parts supplier First Brands Group saddled the fund with big losses.

    “We’re keeping an eye out for whether there is another shoe to drop,” Matt Thompson said.

    U.S. stocks were on track to finish mostly higher on Tuesday, until Trump dropped a Truth Social post accusing China of an “Economically Hostile Act” for refusing to purchase soybeans from American farmers. That caused the S&P 500 to finish 0.2% lower, while the Nasdaq Composite COMP ended down 0.8%. Of the three major U.S. indexes, only the Dow Jones Industrial Average DJIA managed to finish higher. Meanwhile, the Russell 2000 RUT, an index of small-cap stocks, quietly notched another record closing high.

    -Joseph Adinolfi

    This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

    (END) Dow Jones Newswires

    10-14-25 1642ET

    Copyright (c) 2025 Dow Jones & Company, Inc.

    Continue Reading

  • How Meta Is Leveraging AI To Improve the Quality of Scope 3 Emission Estimates for IT Hardware

    • As we focus on our goal of achieving net zero emissions in 2030, we also aim to create a common taxonomy for the entire industry to measure carbon emissions.
    • We’re sharing details on a new methodology we presented at the 2025 OCP regional EMEA summit that leverages AI to improve our understanding of our IT hardware’s Scope 3 emissions.
    • We are collaborating with the OCP PCR workstream to open source this methodology for the wider industry. This collaboration will be introduced at the 2025 OCP Global Summit.

    As Meta focuses on achieving net zero emissions in 2030, understanding the carbon footprint of server hardware is crucial for making informed decisions about sustainable sourcing and design. However, calculating the precise carbon footprint is challenging due to complex supply chains and limited data from suppliers. IT hardware used in our data centers is a significant source of emissions, and the embodied carbon associated with the manufacturing and transportation of this hardware is particularly challenging to quantify.

    To address this, we developed a methodology to estimate and track the carbon emissions of hundreds of millions of components in our data centers. This approach involves a combination of cost-based estimates, modeled estimates, and component-specific product carbon footprints (PCFs) to provide a detailed understanding of embodied carbon emissions. These component-level estimates are ranked by the quality of data and aggregated at the server rack level.
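    A simplified sketch of this ranking-and-aggregation step is shown below, assuming three estimate methods ordered by data quality (PCF over modeled over cost-based). The quality tiers, component names, and figures are illustrative only, not Meta's actual data.

```python
# Simplified sketch of quality-ranked aggregation: each component may
# carry several emission estimates (supplier PCF, parameterized model,
# cost-based spend factor); pick the highest-quality one per component
# and sum to the rack level. Tiers and figures are invented.

QUALITY_RANK = {"pcf": 3, "modeled": 2, "cost_based": 1}

def best_estimate(estimates):
    """Choose the estimate with the highest data-quality tier."""
    return max(estimates, key=lambda e: QUALITY_RANK[e["method"]])

def rack_footprint(components):
    """Sum the best per-component estimates to a rack-level figure."""
    return sum(best_estimate(c["estimates"])["kg_co2e"] for c in components)

rack = [
    {"name": "DIMM", "estimates": [
        {"method": "cost_based", "kg_co2e": 60.0},
        {"method": "pcf", "kg_co2e": 45.0},
    ]},
    {"name": "chassis", "estimates": [
        {"method": "modeled", "kg_co2e": 120.0},
    ]},
]
print(rack_footprint(rack))  # 45.0 + 120.0 = 165.0
```

    Because the PCF figure supersedes the cost-based one for the DIMM, upgrading a single component's data quality directly changes the rack-level total.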

    By using this approach, we can analyze emissions at multiple levels of granularity, from individual screws to entire rack assemblies. This comprehensive framework allows us to identify high-impact areas for emissions reduction. 

    Our ultimate goal is to drive the industry to adopt more sustainable manufacturing practices and produce components with reduced emissions. This initiative underscores the importance of high-quality data and collaboration with suppliers to enhance the accuracy of carbon footprint calculations to drive more sustainable practices.

    We leveraged AI to help us improve this database and understand our Scope 3 emissions associated with IT hardware by:

    • Identifying similar components and applying existing PCFs to similar components that lack these carbon estimates.
    • Extracting data from heterogeneous data sources to be used in parameterized models.
    • Understanding the carbon footprint of IT racks and applying generative AI (GenAI) as a categorization algorithm to create a new and standard taxonomy. This taxonomy helps us understand the hierarchy and hotspots in our fleet and allows us to provide insights to the data center design team in their language. We hope to iterate on this taxonomy with the data center industry and agree on an industry-wide standard that allows us to compare IT hardware carbon footprints for different types and generations of hardware.

    Why We Are Leveraging AI 

    For this work, we used various AI methods to enhance the accuracy and coverage of Scope 3 emission estimates for our IT hardware. Our approach leverages the unique strengths of both natural language processing (NLP) and large language models (LLMs).

    NLP For Identifying Similar Components

    In our first use case (identifying similar components with AI), we employed NLP techniques such as term frequency-inverse document frequency (TF-IDF) and cosine similarity to identify patterns within a bounded, relatively small dataset. Specifically, we applied this method to determine the similarity between different components. This approach allowed us to develop a highly specialized model for this specific task.
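    As a toy illustration of this technique, the sketch below computes TF-IDF vectors and cosine similarities over a few invented component descriptions. It is implemented from scratch to stay self-contained; a production system would typically use a library such as scikit-learn.

```python
# Toy sketch of TF-IDF + cosine-similarity matching over component
# descriptions. The descriptions are invented; real inventories would
# use a library implementation and category filtering as described.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (dict of term -> weight) per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}              # smoothed IDF
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

descriptions = [
    "dimm ddr5 64gb rdimm 4800",  # component with a PCF
    "dimm ddr5 64gb rdimm 5600",  # candidate proxy
    "chassis sheet steel 2u",     # unrelated part
]
vecs = tfidf_vectors(descriptions)
print([round(cosine(vecs[0], v), 2) for v in vecs[1:]])  # high vs. zero similarity
```

    The near-duplicate DIMM description scores well above the unrelated chassis part, which is the signal used to propose it as a proxy.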

    LLMs For Handling and Understanding Data

    LLMs are pre-trained on a large corpus of text data, enabling them to learn general patterns and relationships in language. They then go through a post-training phase to adapt to specific use cases such as chatbots. We apply LLMs, specifically Llama 3.1, in three different scenarios.

    Unlike the first use case, where we needed a highly specialized model to detect similarities, we opted for LLMs in these three scenarios because they leverage general human-language rules. This includes handling different units for parameters, grouping synonyms into categories, and recognizing varied phrasing or terminology that conveys the same concept. This approach allows us to efficiently handle variability and complexity in language, which would have required significantly more time and effort using only traditional AI methods.

    Identifying Similar Components With AI

    When analyzing inventory components, it’s common for multiple identifiers to represent the same parts or slight variations of them. This can occur due to differences in lifecycle stages, minor compositional variations, or new iterations of the part.

    PCFs following the GHG Protocol are the highest quality input data we can reference for each component, as they typically account for the Scope 3 emissions estimates throughout the entire lifecycle of the component. However, conducting a PCF is a time-consuming process that typically takes months. Therefore, when we receive PCF information, it is crucial to ensure that we map all the components correctly.

    PCFs are typically tied to a specific identifier, along with aggregated components. For instance, a PCF might be performed specifically for a particular board in a server, but there could be numerous variations of this specific component within an inventory. The complexity increases as the subcomponents of these items are often identical, meaning the potential impact of a PCF can be significantly multiplied across a fleet.

    To maximize the utility of a PCF, it is essential not only to identify the primary component and its related subcomponents but also to identify all similar parts that the PCF could be applied to. If these similar components are not identified, their carbon footprint estimates will remain at a lower data quality. Therefore, identifying similar components is crucial to ensure that we:

    • Leverage PCF information to ensure the highest data quality for all components.
    • Maintain consistency within the dataset, ensuring that similar components have the same or closely aligned estimates.
    • Improve traceability of each component’s carbon footprint estimate for reporting.

    To achieve this, we employed a natural language processing (NLP) algorithm, specifically tailored to the language of this dataset, to identify possible proxy components by analyzing textual descriptions and filtering results by component category to ensure relevance.

    The algorithm identifies proxy components in two distinct ways:

    1. Leveraging New PCFs: When a new PCF is received, the algorithm uses it as a reference point. It analyzes the description names of components within the same category to identify those with a high percentage of similarity. These similar components can be mapped to a representative proxy PCF, allowing us to apply high-quality PCF data to similar components.
    2. Improving Low Data Quality Components: For components with low data quality scores, the algorithm operates in reverse with additional constraints. Starting with a list of low-data-quality components, the algorithm searches for estimates that have a data quality score greater than a certain threshold. These high-quality references can then be used to improve the data quality of the original low-scoring components.
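    The first mode can be sketched as follows. This is a minimal illustration, not the production algorithm: token-set Jaccard similarity stands in for the NLP model tailored to the dataset, and the part numbers, descriptions, and threshold are invented for the example. The second mode runs the same comparison in reverse, starting from a low-data-quality component and keeping only candidates whose estimates exceed a data-quality threshold.

```python
# Minimal sketch of proxy-component matching by description similarity.
# Token-set Jaccard similarity stands in for the specialized NLP model;
# all part numbers, categories, and the threshold are illustrative.

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_proxies(reference: dict, inventory: list, threshold: float = 0.6) -> list:
    """Return parts in the reference's category whose descriptions are
    similar enough to inherit the reference part's PCF."""
    return [
        part for part in inventory
        if part["category"] == reference["category"]
        and part["id"] != reference["id"]
        and similarity(part["description"], reference["description"]) >= threshold
    ]

# A part that just received a PCF, and a small inventory to scan.
pcf_part = {"id": "PN-100", "category": "board", "description": "server main board rev a"}
inventory = [
    {"id": "PN-101", "category": "board", "description": "server main board rev b"},
    {"id": "PN-200", "category": "cable", "description": "board to board power cable"},
]
proxies = find_proxies(pcf_part, inventory)
print([p["id"] for p in proxies])  # ['PN-101']
```

    Filtering by category before comparing descriptions keeps the matches relevant, mirroring the category filter described above.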

    Meta’s Net Zero team reviews the proposed proxies and validates our ability to apply them in our estimates. This approach enhances the accuracy and consistency of component data, ensures that high-quality PCF data is effectively utilized across similar components, and enables us to design our systems to more effectively reduce emissions associated with server hardware.

    When PCFs are not available, we aim to avoid using spend-to-carbon methods because they tie sustainability too closely to spending on hardware and can be less accurate due to the influence of factors like supply chain disruptions. 

    Instead, we have developed a portfolio of methods to estimate the carbon footprint of these components, including through parameterized modeling. To adapt any model at scale, we require two essential elements: a deterministic model to scale the emissions, and a list of data input parameters. For example, we can scale the carbon footprint calculation for a component by knowing its constituent components’ carbon footprint.

    However, applying this methodology can be challenging due to inconsistent description data or variation in where information is presented. For instance, information about cables may be stored in different tables, formats, or units, so we may be unable to apply models to some components because the input data is difficult to locate.

    To overcome this challenge, we have utilized large language models (LLMs) that extract information from heterogeneous sources and inject the extracted information into the parameterized model. This differs from how we apply NLP, as it focuses on extracting information from specific components. Scaling a common model ensures that the estimates provided for these parts are consistent with similar parts from the same family and can inform estimates for missing or misaligned parts.

    We applied this approach to two specific categories: memory and cables. The LLM extracts relevant data (e.g., the capacity for memory estimates and length/type of cable for physics-based estimates) and scales the components’ emissions calculations according to the provided formulas. 
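    For the memory case, the flow can be sketched as below. This is an illustration under stated assumptions: a regex stands in for the LLM extraction step (the LLM exists precisely because real descriptions are too heterogeneous for a regex), and the per-GB factor is a placeholder, not a real emissions factor.

```python
import re

# Sketch of parameterized modeling for memory: extract the capacity from a
# free-text description, then scale a deterministic per-GB model by it.
# KG_CO2E_PER_GB is an assumed placeholder, and the regex stands in for
# the LLM that handles heterogeneous phrasing in production.

KG_CO2E_PER_GB = 0.5  # illustrative factor, not a real emissions value

def extract_capacity_gb(description: str):
    """Pull an 'NN GB' capacity out of a part description, if present."""
    match = re.search(r"(\d+)\s*gb", description, re.IGNORECASE)
    return int(match.group(1)) if match else None

def memory_footprint_kg(description: str):
    """Scale the per-GB model by the extracted capacity parameter."""
    capacity = extract_capacity_gb(description)
    if capacity is None:
        return None  # the model cannot be applied without its input parameter
    return capacity * KG_CO2E_PER_GB

footprint = memory_footprint_kg("DDR5 RDIMM 64GB 4800MT/s")
print(footprint)  # 32.0
```

    The cable case follows the same pattern, with length and type extracted as the parameters for a physics-based estimate.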

    A Component-Level Breakdown of IT Hardware Emissions Using AI

    We utilize our centralized component carbon footprint database not only for reporting emissions, but also to drive our ability to efficiently deploy emissions reduction interventions. Conducting a granular analysis of component-level emissions enables us to pinpoint specific areas for improvement and prioritize our efforts to achieve net zero emissions. For instance, if a particular component is found to have a disproportionately high carbon footprint, we can explore alternative materials or manufacturing processes to mitigate its environmental impact. We may also determine that we should reuse components and extend their useful life by testing or augmenting component reliability. By leveraging data-driven insights at the component level and driving proactive design interventions to reduce component emissions, we can more effectively prioritize sustainability when designing new servers.

    We leverage a bill of materials (BOM) to list all of the components in a server rack in a tree structure, with “children” component nodes listed under “parent” nodes. However, each vendor can have a different BOM structure, so two identical racks may be represented differently. This, coupled with the heterogeneity of methods to estimate emissions, makes it challenging to easily identify actions to reduce component emissions.

    To address this challenge, we have used AI to categorize the descriptive data of our racks into two hierarchical levels:

    • Domain-level: A high-level breakdown of a rack into main functional groupings (e.g., compute, network, power, mechanical, and storage)
    • Component-level: A detailed breakdown that highlights the major components that are responsible for the bulk of Scope 3 emissions (e.g., CPU, GPU, DRAM, Flash, etc.)

    We have developed two classification models: one for “domain” mapping, and another for “component” mapping. The difference between these mappings lies in the training data and the additional set of examples provided to each model. We then combine the two classifications to generate a mutually exclusive hierarchy.

    During the exploration phase of the new taxonomy generation, we allowed the GenAI model to operate freely to identify potential categories for grouping. After reviewing these potential groupings with our internal hardware experts, we established a fixed list of major components. Once this list was finalized, we switched to using a strict GenAI classifier model as follows:

    1. For each rack, recursively identify the highest contributors, grouping smaller represented items together.
    2. Run a GenAI mutually exclusive classifier algorithm to group the components into the identified categories.
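    The two steps above can be sketched as follows. This is a simplified illustration: the BOM node shape, the emissions values, and the 5% grouping threshold are assumptions, and a name-based lookup stands in for the GenAI mutually exclusive classifier.

```python
# Sketch of the rack breakdown: recursively roll leaf emissions up into
# categories, then fold minor contributors into an "other" bucket.
# Node shape, values, classifier, and the 5% threshold are assumptions.

def rollup(node: dict, categorize, totals: dict) -> dict:
    """Recursively attribute each leaf node's emissions to a category."""
    children = node.get("children", [])
    if not children:
        cat = categorize(node)
        totals[cat] = totals.get(cat, 0.0) + node["kg_co2e"]
    for child in children:
        rollup(child, categorize, totals)
    return totals

def group_minor(totals: dict, share: float = 0.05) -> dict:
    """Fold categories below a share of total emissions into 'other'."""
    grand_total = sum(totals.values())
    grouped = {}
    for cat, kg in totals.items():
        key = cat if kg >= share * grand_total else "other"
        grouped[key] = grouped.get(key, 0.0) + kg
    return grouped

bom = {"name": "rack", "children": [
    {"name": "CPU", "kg_co2e": 120.0},
    {"name": "DRAM", "kg_co2e": 300.0},
    {"name": "screw", "kg_co2e": 1.0},
]}
# A real classifier maps free-text descriptions to the fixed category
# list; here the node name is used directly for brevity.
totals = rollup(bom, categorize=lambda n: n["name"], totals={})
breakdown = group_minor(totals)
print(breakdown)  # {'CPU': 120.0, 'DRAM': 300.0, 'other': 1.0}
```

    Because vendors structure BOM trees differently, rolling emissions up to a shared category list is what makes two identical racks comparable.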

    The emissions breakdown for a generic compute rack.

    This methodology was presented at the 2025 OCP EMEA Regional Summit, with the goal of driving the industry toward a common taxonomy for carbon footprint emissions and of open sourcing the methodology we used to create our taxonomy.

    These groupings are specifically created to aid carbon footprint analysis, rather than for other purposes such as cost analysis. However, the methodology can be tailored for other purposes as necessary.

    Coming Soon: Open Sourcing Our Taxonomies and Methodologies

    As we work toward achieving net zero emissions across our value chain in 2030, this component-level breakdown methodology is necessary to help understand our emissions at the server component level. By using a combination of high-quality PCFs, spend-to-carbon data, and a portfolio of methods that leverage AI, we can enhance our data quality and coverage to more effectively deploy emissions reduction interventions. 

    Our next steps include open sourcing:

    • The taxonomy and methodology for server rack emissions accounting.
    • The taxonomy builder using GenAI classifiers.
    • The aggregation methodology to improve facility reporting processes across the industry.

    We are committed to sharing our learnings with the industry as we evolve this methodology, now as part of a collaborative effort with the OCP PCR group.

