The grandson of HB Reese, the inventor of Reese’s Peanut Butter Cups, has accused the chocolate giant Hershey of faking a pledge to investors to switch its popular products – including KitKat – back to their original milk and dark chocolate recipes.
A confectionery-focused dust-up between Brad Reese and the $42bn Pennsylvania-based company began in February when Reese, 70, accused the company of “quietly replacing” the ingredients – or “architecture” – in his grandfather’s invention with cheaper “compound coatings” and “peanut-butter-style crèmes”.
At a recent Hershey investor conference, the company said it would change about 3% of select products to the original recipes but maintained it had never altered the renowned Reese’s Peanut Butter Cups.
The company’s chief growth officer, Stacy Taffet, said Hershey was “transitioning our sweets portfolio to colors from natural sources, and ensuring that all Hershey’s and Reese’s offerings are consistent with their brand’s classic milk and dark chocolate recipes”. The changes are planned to come into effect by next year.
But Reese accused the company of “ingredient drift across flagship brands” and described the move as “a board level accountability problem” that caused shareholders to sell stock. “Your consumers are revolting,” he added.
Reese further told the New York Times that he was not satisfied and that the changes were not coming quickly enough. “This is just a PR stunt; there’s no victory here,” he said in an interview with the outlet. “If they were serious, they would do it right away.”
The company has said the changes were already under way, and not in response to Reese’s criticism, because it had previously decided to revert to classic recipes after increasing research and development spending by 25% to fund talent, technology and nutrition science.
The issue has become something of a crusade for Reese, who alleges Hershey changed the recipes sometime after buying the Reese’s brand in the 1960s.
In his original complaint to the company, issued on Valentine’s Day on the LinkedIn social media platform, Reese railed that recipes were “being rewritten, not by storytellers, but by formulation decisions that replace Milk Chocolate with compound coatings and Peanut Butter with peanut-butter-style crèmes”.
Reese said he’d noticed the difference in taste when he tried Reese’s Unwrapped Chocolate Peanut Butter Creme Mini Hearts.
“I opened it up, and I had about two of them, and I had to spit them out,” he said. “I dumped the entire contents into my kitchen garbage can, and I kept the pouch. I checked it and it wasn’t milk chocolate, it wasn’t real peanut butter.
“I’ve never in my entire life spit out a Reese’s product.”
But Reese’s family do not support his complaints. They said in a statement provided to USA Today by Hershey that “his statements and opinions are entirely his own and do not reflect the view or position of our family”.
“We continue to respect The Hershey Company, its leadership, and its longstanding role in our community,” they added. “We believe HB Reese would take great pride in the products produced under his name today and in the integrity with which the brand continues to be managed.”
Brad Reese didn’t accept that and accused the company of trying to “shoot the messenger”.
“Hershey can issue all the statements it wants,” he fumed on LinkedIn. “They changed the REESE’S product. They got caught. And now they’re trying to manage perception instead of fixing the problem. The evidence chain isn’t going away.”
The U.S. Food and Drug Administration (FDA) has granted traditional approval to the chimeric antigen receptor (CAR) T-cell therapy brexucabtagene autoleucel (Tecartus) for adult patients with relapsed or refractory mantle cell lymphoma (MCL). The full approval now includes efficacy, safety, and pharmacokinetic data from cohort 3 of the ZUMA-2 study (ClinicalTrials.gov identifier NCT05537766) in patients whose disease has relapsed or become refractory after one or more lines of therapy and who are Bruton’s tyrosine kinase (BTK) inhibitor–naive.
This action converts the relapsed or refractory MCL indication to full approval based on the totality of evidence from ZUMA-2, including confirmatory data from cohort 3, which demonstrated high response rates, durable efficacy outcomes, and a manageable safety profile consistent with prior experience. This milestone fulfills the postmarketing requirement for verification and description of clinical benefit in a confirmatory trial under the FDA’s Accelerated Approval pathway for brexucabtagene autoleucel in relapsed/refractory MCL.
“The full approval of [brexucabtagene autoleucel] in relapsed or refractory MCL, along with the inclusion of cohort 3 data in the label, reinforces our confidence in the overall profile of brexucabtagene autoleucel,” said Michael Wang, MD, ZUMA-2 lead investigator and Professor in the Department of Lymphoma and Myeloma, Division of Cancer Medicine at The University of Texas MD Anderson Cancer Center. “The cohort 3 results showed high response rates, including deep remissions, in patients who were BTK inhibitor–naive, with a manageable safety profile consistent with prior experience. These data provide important information to help guide treatment decisions in the relapsed or refractory setting for appropriate patients.”
MCL is a rare form of non-Hodgkin lymphoma that arises from cells originating in the “mantle zone” of the lymph node and predominantly affects men over the age of 60. Approximately 33,000 people worldwide are diagnosed with MCL each year. MCL is highly aggressive following relapse, with many patients’ disease progressing following therapy.
More From ZUMA-2
ZUMA-2 is a single-arm, open-label, multicenter study evaluating brexucabtagene autoleucel in adult patients with relapsed or refractory MCL. Cohorts 1 and 2 evaluated the therapy in patients who had previously received up to five lines of therapy, including anthracycline- or bendamustine-containing chemotherapy, an anti-CD20 antibody, and a BTK inhibitor. Cohort 3 evaluated brexucabtagene autoleucel in patients who had received up to five prior lines of therapy and were BTK inhibitor–naive. A total of 82 patients were treated in cohorts 1 and 2 and 86 patients were treated in cohort 3. The primary endpoint across the study was objective response rate per the Lugano Classification (2014), as assessed by an Independent Radiologic Review Committee.
The objective response rate was 87% in cohort 1 and 91% in cohort 3. The complete remission rates were 62% and 79%, respectively; in both cohorts, the median duration of response was not reached (the median follow-up for duration of response at the time of the primary analysis was 8.6 months in cohort 1 vs 23.0 months in cohort 3).
In the updated U.S. Prescribing Information, MCL safety data are pooled across cohorts 1–3 (n = 168). In this pooled MCL population, cytokine-release syndrome (CRS) occurred in 93% of patients, including grade ≥ 3 CRS in 12%; the median time to onset was 4 days and the median duration was 7 days. Neurologic events occurred in 80% of patients, including grade ≥ 3 neurologic events in 33%; the median time to onset was 6 days and the median duration was 19 days. Infections of any grade occurred in 63% of patients, including grade ≥ 3 infections in 33%. In cohort 3, serious adverse reactions occurred in 65% of patients. The most common serious adverse reactions (occurring in > 2% of patients) were nonventricular arrhythmias, tachycardias, pyrexia, CRS, unspecified pathogen infections, viral infections, bacterial infections, fungal infections, musculoskeletal pain, motor dysfunction, encephalopathy, aphasia, tremor, seizure, delirium, hypoxia, hypotension, hemorrhage, and thrombosis.
NANO Nuclear Energy (NNE) is back on investor radars after a recent 4.8% daily gain, standing out against negative share price returns over both the past month and the past three months.
See our latest analysis for NANO Nuclear Energy.
That 4.8% one-day share price return, with the stock now at $21.38, comes after a year-to-date share price return of negative 22.56% and a one-year total shareholder return of negative 8.91%. This suggests recent momentum is improving after a weaker stretch.
If this move in NANO Nuclear Energy has you looking across the sector, it could be a good moment to scan other nuclear-related names with our list of 93 nuclear energy infrastructure stocks.
With NANO Nuclear Energy still showing weak recent returns despite a 4.8% daily jump and trading at a wide gap to the US$46.67 analyst target, you have to ask whether this is a real opportunity or whether the market is already pricing in future growth.
The most followed narrative currently anchors NANO Nuclear Energy’s fair value at $46.67, which sits well above the latest close of $21.38 and frames that analyst gap in a very specific way.
The focus on vertical integration in enrichment, conversion, transport and TRISO fuel supply aims to address expected bottlenecks in the nuclear fuel chain. This approach may give NANO more control over input costs and availability, with potential implications for future net margins.
Read the complete narrative.
Want to see what kind of revenue ramp and margin profile that narrative is assuming? The numbers rely on rapid top line growth and a future earnings multiple more often associated with high growth stories than early stage nuclear developers.
Result: Fair Value of $46.67 (UNDERVALUED)
Have a read of the narrative in full and understand what’s behind the forecasts.
However, you still need to weigh licensing and construction timing risks, as well as the reliance on future AI and industrial power demand that may not convert into projects.
Find out about the key risks to this NANO Nuclear Energy narrative.
If the mixed tone of this story has you on the fence, act while the data is fresh and weigh both sides with our 2 key rewards and 4 important warning signs.
If this story has caught your attention, do not stop here. Broaden your watchlist with fresh ideas pulled from different corners of the market.
This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.
Companies discussed in this article include NNE.
Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team@simplywallst.com
Atawey Announces the Signing of a Contract for Three Hydrogen Stations in Belgium
2 April 2026
Thyssengas and Cogas have agreed to the first cross-border connection of a Dutch distribution system operator to the German hydrogen core network.
The Dutch distribution system operator Cogas signed a contract with Thyssengas H2 GmbH for a T-piece, which will connect to the hydrogen pipeline from Vlieghuis to Ochtrup. This is the first time that a German transmission system operator has established a connection with a Dutch distribution system operator. The collaboration aims to facilitate an early H2 supply to industry and small and medium-sized enterprises in the Dutch region Twente. The project is supported by a German-Dutch consortium comprising Province of Overijssel, H2HUB Twente and TECHLAND, among others.
The US labor market picked up in March as employers showed signs of resilience amid the US-Israel war with Iran.
After an extraordinary contraction in February, employers added 178,000 jobs last month, ahead of economists’ expectations of about 70,000.
[Chart: monthly change in US jobs]
The unemployment rate fell to 4.3%, according to data from the US Bureau of Labor Statistics. In February, the economy lost 133,000 jobs, according to revised figures. Job figures for January were revised up, from 126,000 to 160,000. With revisions, total employment in January and February is 7,000 lower than previously reported.
Previous data painted a mixed picture of the US labor market, which economists say has been in a static “low-fire, low-hire” state, where both layoffs and new hires are down.
Outplacement firm Challenger, Gray & Christmas found that employers announced 217,362 job cuts in the first quarter of 2026 – the lowest total for that period since 2022. But hiring in February slowed to a six-year low, according to data released earlier this week, with dips seen in construction and leisure and hospitality.
The so-called “quits rate” fell to 1.9%, the lowest since 2020, suggesting that uncertainty in the labor market has prompted more workers to stay put at their jobs.
The trend follows sluggish overall growth in the US jobs market since last year. In 2025, just 116,000 jobs were added to the economy for the entire year – around the same number that was added per month in previous years.
The slowdown points to caution among employers, particularly as consumer inflation experienced whiplash over the last year. US inflation dipped to 2.3% in April 2025 before jumping to 3% in September. Price increases have held steady at 2.4% since the start of this year, though the US-Israel war with Iran is expected to drive inflation higher if the fallout continues to grow. Last month, US average gas prices broke through $4 a gallon, and the squeeze on oil and gas is expected to trickle into other industries.
The oil price shock is reminiscent of higher prices that were seen in 2022, after Russia invaded Ukraine. US average gas prices reached $5 a gallon at the time while inflation reached a generational high of 9%. Experts say that every $10 increase in the price of a barrel of oil can lead to a 0.2 percentage point climb in inflation.
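A quick back-of-envelope check of that rule of thumb, with a hypothetical price move (the figures in this sketch are illustrative, not from any report cited here):

```python
# Illustrative only: applies the "every $10/barrel adds ~0.2 points" rule of thumb.
oil_rise_per_barrel = 40                      # e.g. crude moving from $70 to $110
inflation_add = oil_rise_per_barrel / 10 * 0.2
print(f"~{inflation_add:.1f} percentage points added to inflation")  # ~0.8
```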
Quarterly earnings results are a good time to check in on a company’s progress, especially compared to its peers in the same sector. Today we are looking at FTAI Aviation (NASDAQ:FTAI) and the best and worst performers in the industrial distributors industry.
Supply chain and inventory management are themes that grew in focus after COVID wreaked havoc on the global movement of raw materials and components. Distributors that boast a reliable selection of products–everything from hardhats and fasteners for jet engines to ceiling systems–and quickly deliver goods to customers can benefit from this theme. While e-commerce hasn’t disrupted industrial distribution as much as consumer retail, it is still a real threat, forcing investment in omnichannel capabilities to better interact with customers. Additionally, distributors are at the whim of economic cycles that impact the capital spending and construction projects that can juice demand.
The 24 industrial distributor stocks we track reported a mixed Q4. As a group, revenues beat analysts’ consensus estimates by 1% while next quarter’s revenue guidance was in line.
Amidst this news, share prices of the companies have had a rough stretch. On average, they are down 7.6% since the latest earnings results.
With a focus on the CFM56 engine that powers many Boeing and Airbus planes, FTAI Aviation (NASDAQ:FTAI) sells, leases, maintains, and repairs aircraft engines.
FTAI Aviation reported revenues of $662 million, up 32.7% year on year. This print fell short of analysts’ expectations by 4.1%. Overall, it was a slower quarter for the company with a significant miss of analysts’ revenue and adjusted operating income estimates.
“FTAI delivered exceptional results in 2025, driven by continued demand for our Aerospace Products business and excellent execution across the Company,” said Joe Adams, Chairman and CEO.
[Chart: FTAI Aviation total revenue]
The stock is down 19.1% since reporting and currently trades at $244.24.
Is now the time to buy FTAI Aviation? Access our full analysis of the earnings results here, it’s free.
With roots dating back to 1959 and a strategic focus on extending the life of transportation assets, VSE Corporation (NASDAQ:VSEC) provides aftermarket parts distribution and maintenance, repair, and overhaul services for aircraft and vehicle fleets in commercial and government markets.
VSE Corporation reported revenues of $301.2 million, flat year on year, outperforming analysts’ expectations by 4.6%. The business had an incredible quarter with a beat of analysts’ EPS estimates and a solid beat of analysts’ EBITDA estimates.
[Chart: VSE Corporation total revenue]
Although it had a fine quarter compared with its peers, the market seems unhappy with the results as the stock is down 15.7% since reporting. It currently trades at $185.17.
Is now the time to buy VSE Corporation? Access our full analysis of the earnings results here, it’s free.
Founded in 1952, Distribution Solutions (NASDAQ:DSGR) provides supply chain solutions and distributes industrial, safety, and maintenance products to various industries.
Distribution Solutions reported revenues of $481.6 million, flat year on year, falling short of analysts’ expectations by 3%. It was a disappointing quarter as it posted a significant miss of analysts’ revenue estimates and a significant miss of analysts’ EBITDA estimates.
As expected, the stock is down 10.3% since the results and currently trades at $26.66.
Read our full analysis of Distribution Solutions’s results here.
Founded in NYC’s Little Italy, MSC Industrial Direct (NYSE:MSM) provides industrial supplies and equipment, offering a vast and reliable selection for customers such as contractors.
MSC Industrial reported revenues of $917.8 million, up 2.9% year on year. This result lagged analysts’ expectations by 1.6%. It was a softer quarter as it also produced a miss of analysts’ revenue estimates and a miss of analysts’ adjusted operating income estimates.
The stock is down 2.1% since reporting and currently trades at $90.33.
Serving the pharmaceutical, industrial manufacturing, energy, and chemical process industries, Transcat (NASDAQ:TRNS) provides measurement instruments and supplies.
Transcat reported revenues of $83.86 million, up 25.6% year on year. This number surpassed analysts’ expectations by 4.1%. However, it was a slower quarter as it recorded a significant miss of analysts’ EBITDA estimates and a significant miss of analysts’ EPS estimates.
The stock is up 20.1% since reporting and currently trades at $76.07.
Read our full, actionable report on Transcat here, it’s free.
Late in 2025 into early 2026, there was hand wringing around artificial intelligence. For software companies, the fear was that AI would erode pricing power and compress margins as new tools made it easier to replicate what once required expensive enterprise platforms. Crypto investors had their own version of the same anxiety: if AI agents could trade, allocate capital, and manage wallets autonomously, what exactly was the long-term value of today’s crypto infrastructure?
These concerns triggered a noticeable rotation away from these sectors and into safer havens. But markets rarely dwell on one narrative for long. Spring 2026 came, and the focus shifted abruptly from technological disruption to geopolitical risk. The US’ conflict with Iran became the dominant driver of market psychology, and when geopolitics takes center stage, the script changes quickly. Investors stop debating growth rates and start worrying about oil supply, inflation, and global stability.
Want to invest in winners with rock-solid fundamentals? Check out our 9 Best Market-Beating Stocks and add them to your watchlist. These companies are poised for growth regardless of the political or macroeconomic climate.
StockStory’s analyst team — all seasoned professional investors — uses quantitative analysis and automation to deliver market-beating insights faster and with higher quality.
Data center construction is expanding rapidly across the U.S., driven by surging artificial intelligence (AI) adoption and the resulting need for computing power. Investment is accelerating alongside construction, with annual data center project spending reaching $14 billion in July 2025 — a 100% increase over the prior year.
Comprehensive tax and financial forecasting helps prevent overspending and leaving potential savings on the table, making it essential for projects with such high levels of investment. Depreciation planning plays a central role in those efforts because data centers contain numerous specialized and redundant systems, all of which depreciate at different rates. Neglecting to account for those differences can cause companies to miss strategic tax planning opportunities and could increase their overall tax burdens.
This installment of BDO’s Data Center Dangers series explores how facility owners can leverage cost segregation analyses to help conduct depreciation planning, reduce their total tax liability, and capture all available tax savings.
Understanding Depreciation Schedules
At a basic level, depreciation is the act of expensing the cost of an asset over its usable life. Under federal tax rules, nonresidential real property, such as commercial property, is expensed over 39 years, but some individual building elements can be depreciated using five-, seven-, or 15-year recovery periods. Assigning the appropriate depreciation methods and recovery periods to property is crucial because depreciation can significantly affect after-tax costs and long-term project economics.
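To make the timing difference concrete, here is a minimal sketch comparing annual deductions for the same hypothetical $1M of cost under different recovery periods. It uses straight-line depreciation throughout for simplicity; actual MACRS rules apply declining-balance methods to most short-lived property, so treat this as an illustration of timing only.

```python
# Hypothetical $1M of cost, straight-line only, to show how the recovery
# period drives the size of each year's deduction.
cost = 1_000_000
for years in (5, 7, 15, 39):
    print(f"{years}-year property: ${cost / years:,.0f} deducted per year")
```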
Data centers combine real estate, industrial infrastructure, and advanced technology systems under one roof. Because those elements depreciate at different rates, depreciation planning can be more complex for data centers than for other commercial projects such as office buildings or retail spaces.
Recent tax law changes add another layer of complexity to depreciation planning. The One Big Beautiful Bill Act (OBBBA), passed in 2025, restored 100% bonus depreciation, which had been slated to sunset. Bonus depreciation allows companies to immediately deduct 100% of the purchase price of eligible assets, so its reinstatement opens the door to substantial savings for data center owners that set themselves up to claim them.
Tangible property, computer software, and other assets with a class life of no more than 20 years — categories that encompass many of a data center’s systems — are eligible for bonus depreciation. Common data center assets that fall under this umbrella include specialized components (for instance, electrical and HVAC systems), as well as office equipment, such as computers and AI chips.
Bonus depreciation eligibility also extends beyond new construction. Many data center projects involve renovations to existing structures, which can be treated as 15-year, bonus-eligible qualified improvement property (QIP).
However, data center owners must also consider state-level variations on bonus depreciation. California, for instance, does not conform to federal bonus depreciation rules. Further, states must determine whether they will decouple from the OBBBA, with several states having already decoupled in whole or part. Those decisions will affect bonus depreciation options for data centers.
To take advantage of depreciation opportunities and maintain compliance with state tax filing standards, data center owners need to know how much money they are spending on each depreciable asset. Conducting a cost segregation analysis allows companies to itemize individual project expenses and gain full visibility into their depreciation options.
What is Qualified Improvement Property (QIP)?
Some building or structural upgrades, known as QIP, are depreciable over 15 years, making them eligible for 100% bonus depreciation. But QIP comes with its own nuances that businesses must understand. Elevators, building expansions, and any exterior improvements do not count as QIP, while electrical systems, flooring, and HVAC systems inside a building — all common to data center projects — might qualify.
State conformity related to QIP also varies. For example, consistent with its decoupling from federal bonus depreciation rules, California also does not recognize QIP as 15-year property. As such, data centers in California risk losing both federal QIP treatment and the ability to properly classify five-year property for state purposes unless they conduct a cost segregation analysis. Doing so will allow them to break out costs in a format that is acceptable to federal and state taxing authorities, allowing for greater tax planning flexibility.
Cost Segregation Analysis
Cost segregation is a tax planning technique that can increase cash flow by accelerating the federal tax depreciation of construction-related assets over five-, seven-, and 15-year periods instead of 27.5 or 39 years. By breaking out individual costs rather than grouping them all together, data center owners can make more strategic tax planning decisions regarding when and how to depreciate an asset. Without a cost segregation study, property with a tax depreciable recovery period of less than 20 years might be classified alongside longer-lived assets, eliminating eligibility for bonus depreciation and forfeiting potential tax savings.
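A rough sketch of that cash-flow effect, assuming a hypothetical $50M facility, a made-up 35% share of short-lived property identified by the study, and 100% bonus depreciation on that share (with straight-line treatment of the remainder):

```python
# Illustrative year-1 deduction with vs. without a cost segregation study.
# All percentages and dollar amounts are assumptions, not client data.
total_cost = 50_000_000
short_lived_share = 0.35              # assumed 5/7/15-year property found by study

no_study = total_cost / 39                          # everything at 39-year straight-line
with_study = (total_cost * short_lived_share        # bonus-eligible: deducted immediately
              + total_cost * (1 - short_lived_share) / 39)
print(f"year-1 deduction without study: ${no_study:,.0f}")    # ~$1.3M
print(f"year-1 deduction with study:    ${with_study:,.0f}")  # ~$18.3M
```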
A final cost segregation study cannot occur until a project is completed, but companies should start preliminary analyses at the outset. Breaking out costs early in the process can reduce the need for retroactive work and allow for more precise spend tracking than a price tag that does not include per-item details. Ideally, owners should work with their general contractors — who will have the highest level of line-by-line cost visibility — to break out costs within initial construction contracts.
From there, owners can elect into or out of bonus depreciation by class life, depending on their business and tax planning priorities. They can also use the cost segregation study to properly classify spending categories for other federal and state tax purposes, as well as general financial planning and recordkeeping. For example, a company might not elect to take 100% bonus depreciation for every piece of data center property. Instead, it might choose to forgo bonus depreciation for specific asset classes to better align with business priorities such as cash flow or anticipated future income. Making such strategic decisions is not possible without a robust cost segregation analysis that assigns accurate dollar values to each asset category.
Standing before a friendly crowd in March, Elon Musk laid out his plan for the future of his companies, and it was literally out of this world.
Musk announced that his space-launch company, SpaceX, which had recently merged with his artificial intelligence company, xAI, would put data centers into orbit around the Earth.
It all comes down to electricity, he explained. “You’re power constrained on Earth,” he said. “Space has the advantage that it’s always sunny.”
Musk envisions legions of data-crunching satellites spinning around the planet, powering the AI revolution from above. It’s the perfect pitch for taking SpaceX public. This week, Bloomberg reported that the company had confidentially filed documents with the Securities and Exchange Commission with the goal of going public this summer.
Musk also claims it makes financial sense. “I actually think that the cost of deploying AI in space will drop below the cost of terrestrial AI much sooner than most people expect,” he said. “I think it may be only two or three years.”
Others are skeptical. Musk’s timeline is “an optimistic interpretation,” according to Brandon Lucia, a professor of electrical and computer engineering at Carnegie Mellon University who specializes in putting computers on satellites. The napkin math looks appealing, and power is free up there after all — but it turns out there are a lot of obstacles to building a data center among the stars.
A global power problem
Here on Earth, the problem is glaring: AI is gobbling up electricity around the globe. Global data-center power consumption is expected to roughly double to nearly 1,000 terawatt-hours by the end of the decade, according to an estimate by the International Energy Agency.
High-voltage transmission lines provide electricity to data centers in Ashburn, Virginia. Globally, data centers’ demand for electricity is expected to roughly double by 2030.
Ted Shaffrey/AP
To fill the gap, some companies are building dedicated gas turbines, while others are investing in nuclear technology. It’s not enough, according to Philip Johnston, CEO and co-founder of Starcloud, which is seeking to build orbital data centers.
“We’re very quickly running up on constraints on where you can build new energy projects terrestrially,” Johnston said. “Within six months, they’ll just be leaving chips in warehouses because they don’t have power for turning them on.”
Starcloud launched its first spacecraft last fall with an Nvidia H100 chip on board. The company demonstrated the ability to run a version of Google’s Gemini AI from space, and it plans to launch a second spacecraft in October. “That one has 100 times the power generation of the first one,” Johnston said, though it’s still expected to generate only around 8 kilowatts of power.
Google is also pursuing the idea of building data centers in space through a project known as Suncatcher. It envisions an 81-satellite cluster that it plans to build in partnership with the satellite-imagery company Planet. Two prototype satellites will launch in early 2027, according to the companies.
“Orbital data centers are an idea whose time has come,” Will Marshall, Planet’s CEO, wrote to NPR in an email. “When exactly it will be more cost efficient than terrestrial ones is debatable but now is the time to be working on this.”
Everything must get bigger
To go from a handful of prototype satellites to something useful is not so easy. For one thing, the power requirements of the microchips used for artificial intelligence are enormous.
To get a sense of just how much power is needed, consider the largest power-producing facility in space right now: the International Space Station (ISS).
The solar panels of the ISS are around half the size of a football field and produce around 100 kilowatts of average power, according to Olivier de Weck, a professor of astronautics at the Massachusetts Institute of Technology. “It’s basically the amount of power that a single big car engine produces.”
To replicate a 100-megawatt data center in space would require a facility that’s 500 to 1,000 times larger, depending on the orbit.
“Is that feasible? Yeah, I think it’s feasible, but not next year and certainly not in three years,” he said.
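The arithmetic behind that range is straightforward; the lower bound mostly reflects orbit-dependent factors such as how much time a satellite spends in Earth’s shadow:

```python
# Back-of-envelope scaling: ISS arrays average ~100 kW; a 100 MW data center
# needs roughly 1,000x that generation before orbit-dependent losses.
iss_power_kw = 100
target_mw = 100
print(f"scale factor: {target_mw * 1_000 / iss_power_kw:,.0f}x")  # 1,000x
```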
A slide from Elon Musk’s presentation shows his concept of an “AI Sat Mini” that is larger than SpaceX’s Starship rocket.
Screenshot by NPR/SpaceX
And power is not the only requirement; the satellites also have to provide cooling to the microchips. While it’s true that space is cold, it’s also a vacuum. This means that when a satellite gets hot, there’s no easy way to get rid of that heat — it just builds up.
“All of that heat that the computer generates has to be dispelled,” said Rebekah Reed, a former NASA official now at Harvard University’s Belfer Center for Science and International Affairs.
The best solution is radiators, which move liquids out to giant panels where the heat can be dissipated. So in addition to solar panels, an AI satellite would need another set of large radiators.
“When you put those massive radiators together with massive solar arrays that are required in order to power and cool, you’re actually talking about really large satellites, or very, very large satellite constellations,” Reed said.
An alternative is to build smaller satellites and fly them in preset formations called constellations. Such constellations allow the heat and power problems to be distributed, but to work, the satellites would need to send huge amounts of data back and forth. That likely means using lasers to beam data between satellites. But even moving at the speed of light, the time it takes to get data from one satellite to another is long enough to slow down computing.
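For a sense of scale, here is the one-way, speed-of-light delay across a few illustrative separations (the distances are assumptions, not figures from any specific constellation design); even fractions of a millisecond are enormous compared with the nanosecond-scale interconnects inside a terrestrial data center:

```python
# One-way light-travel time between satellites at illustrative separations.
C = 299_792_458                        # speed of light in vacuum, m/s
for km in (1, 10, 100):
    print(f"{km:>3} km separation: {km * 1_000 / C * 1e6:,.1f} microseconds one-way")
```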
Google’s Project Suncatcher proposes flying groupings of satellites in extremely tight clusters to reduce that latency. Musk, meanwhile, has proposed launching upward of a million satellites and placing them in orbit around Earth’s poles. He recently unveiled the first generation “AI Sat Mini” spacecraft — with solar arrays spanning roughly 180 meters (about 600 feet) — during his presentation.
Launching all that into space would cost money — lots of money. At the moment, it can cost around $1,000 per kilogram to launch a satellite into orbit. Google believes that cost must drop by at least a factor of five to $200 per kilogram before data centers in space will begin to make sense.
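A quick sketch of what those per-kilogram figures imply, assuming a hypothetical 10-tonne satellite (the mass is purely illustrative, not a figure from Google or SpaceX):

```python
# Launch cost per satellite at today's ~$1,000/kg vs the ~$200/kg target.
sat_mass_kg = 10_000                   # assumed satellite mass, for illustration
for price_per_kg in (1_000, 200):
    print(f"${price_per_kg}/kg -> ${sat_mass_kg * price_per_kg:,.0f} per satellite")
```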
SpaceX’s megarocket Starship blasts off for a test flight from Starbase in Boca Chica, Texas, on Oct. 13, 2025. Starcloud CEO Philip Johnston says Starship is central to building orbital data centers. He told investors: “If you don’t think Starship’s going to work, don’t invest in us — that’s totally fine.”
Eric Gay/AP
Musk thinks he can do it with his new Starship rocket, which is still in development. Starcloud’s Johnston says Starship is central to more than just SpaceX’s vision. He told investors: “If you don’t think Starship’s going to work, don’t invest in us — that’s totally fine.”
Upgrading the server
Even if a company could get a data center into space, running it would involve a lot more than just moving microchips into orbit.
Data centers on Earth are not just static buildings full of chips humming away, says Raul Martynek, the CEO of DataBank, a company that maintains 75 data centers, primarily located in the United States. They require constant maintenance and upgrades, all of which is done by workers.
Take DataBank’s IAD1 data center in Ashburn, Virginia. The facility is 144,000 square feet filled with rows and rows of black computer cabinets, which are filled with microprocessors. It’s fairly run-of-the-mill, as these facilities go, but it still consumes around 13 megawatts of power at any given moment (that’s 130 times more than the International Space Station).
“We have vendors here every single day,” says James Mathes, who manages IAD1.
Workers are constantly in and out of these data centers, installing new servers, upgrading microchips and fixing things. And to stay competitive, space data centers would need to do much of the same.
Some of that could be done through software, and Musk points out that chips can be rigorously tested on the ground before they’re sent aloft. But the fact remains that the companies that rent data centers often want to access them physically for one reason or another.
Martynek, who has spent decades in telecom, says he’s not worried about space data centers taking business from his company.
“It seems like there’s a lot of ifs and a lot of advancements that would have to occur, and I find it kind of hard to believe that all that could happen in two or three years,” he said. “No one in data center land is losing any sleep.”
Statistics and decision making are essential for enabling the broader success of precision medicine. We have identified the following strategies and factors as particularly impactful:
Apply enrichment strategies consistently, starting in early phases
Enrichment designs, which focus on recruiting patients most likely to benefit from the therapy, can improve trial efficiency and increase the likelihood of success. For example, targeting high-biomarker expressors in early-phase studies can de-risk later phases. A common challenge is that in early phases, there is limited knowledge on optimal biomarker cut-offs. This can be overcome with trial designs that adapt the biomarker cut-off and hence the degree of enrichment during the trial conduct9.
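As a minimal sketch of the enrichment logic, the simulation below compares the power of an all-comer trial with that of an enriched trial at the same sample size; the effect sizes, prevalence, and sample size are all assumed values for illustration, not numbers from any cited study.

```python
# Toy simulation: enrichment raises power by concentrating the treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, prevalence_high = 100, 0.4   # assumed trial size and prevalence
effect_high, effect_low = 0.6, 0.1      # assumed standardized effects by biomarker status

def simulate_power(enriched, n_sims=2000, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        if enriched:                    # enroll biomarker-high patients only
            eff = np.full(n_per_arm, effect_high)
        else:                           # all-comers: mix of high and low expressors
            high = rng.random(n_per_arm) < prevalence_high
            eff = np.where(high, effect_high, effect_low)
        treated = rng.normal(eff, 1.0)
        control = rng.normal(0.0, 1.0, n_per_arm)
        hits += stats.ttest_ind(treated, control).pvalue < alpha
    return hits / n_sims

print(f"all-comers power: {simulate_power(False):.2f}")
print(f"enriched power:   {simulate_power(True):.2f}")
```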
Invest in robust biomarker subgroup identification and cut-off determination
Identifying biomarker-based subgroups and determining biomarker cut-offs are critical. We acknowledge that this is a challenging task, especially when patient numbers are low. We still recommend starting early by identifying potentially predictive biomarkers (see, e.g.10) and optimizing cut-offs once more information is available. Bayesian methods and adaptive trial designs can help optimize cut-offs and guide decision-making.
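A toy sketch of cut-off determination on synthetic phase-II-like data, using a deliberately simple utility that trades the effect gained by enrichment against the fraction of patients who remain eligible; real applications would rely on the Bayesian and adaptive methods referenced above, and every number here is an assumption.

```python
# Grid search for a biomarker cut-off on simulated data.
import numpy as np

rng = np.random.default_rng(1)
biomarker = rng.uniform(0, 1, 300)                  # synthetic continuous biomarker
response = 0.8 * biomarker + rng.normal(0, 1, 300)  # response rises with biomarker

def utility(cut):
    sel = biomarker >= cut
    if sel.sum() < 30:                              # require a usable subgroup size
        return -np.inf
    gain = response[sel].mean() - response.mean()   # effect gained over all-comers
    return gain * sel.mean()                        # trade gain against eligibility

cuts = np.linspace(0.1, 0.9, 17)
best = max(cuts, key=utility)
print(f"selected cut-off: {best:.2f}")
```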
Use more adaptive trial designs
Adaptive designs, in early-phase as well as in late-phase trials, allow for mid-trial modifications based on data accumulated, enabling more flexible and efficient exploration of biomarker-defined subgroups. This is particularly relevant if the subgroup is identified in a data-driven way during the trial, potentially using machine learning or other advanced methods. Solutions to seamlessly include data-driven subgroup identification into the trial design exist11 and we need to adopt them systematically. Conducting a systematic search of patient subgroups that is embedded in the clinical trial design will empower key decision-makers within an organization to choose between pursuing a precision medicine development approach or an all-comer development strategy.
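The sketch below illustrates the idea on simulated data: stage 1 enrolls all-comers, a simple interim rule decides whether to restrict stage 2 to the biomarker-positive subgroup, and stage 2 then enrolls the chosen population. The effect sizes, prevalence, and decision threshold are assumptions for illustration only, and a real design would also control type I error across the adaptation.

```python
# Two-stage adaptive design with a data-driven subgroup decision at interim.
import numpy as np

rng = np.random.default_rng(2)
effect_pos, effect_neg, prev = 0.5, 0.0, 0.5   # assumed scenario

def stage(n, restrict):
    """Run one stage; optionally restrict enrollment to biomarker-positive patients."""
    pos = np.ones(n, bool) if restrict else rng.random(n) < prev
    treated = rng.normal(np.where(pos, effect_pos, effect_neg), 1.0)
    control = rng.normal(0.0, 1.0, n)
    # return (subgroup estimate, full-population estimate)
    return treated[pos].mean() - control.mean(), treated.mean() - control.mean()

est_pos, est_all = stage(200, restrict=False)  # stage 1: all-comers
restrict = est_pos > est_all + 0.1             # simple data-driven interim rule
est2_pos, est2_all = stage(200, restrict=restrict)
print(f"stage 2 restricted to biomarker-positives: {restrict}")
print(f"stage 2 estimate: {est2_all:.2f}")
```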
Integrate biomarker strategy into program strategy
We have lost count of how often we have heard the statement that biomarker analyses are solely exploratory and hypothesis-generating. Without a clear goal for how the biomarkers impact the overall strategy, precision medicine is likely to fail. Biomarker statisticians should be involved from the outset to ensure that biomarker strategies are seamlessly integrated into the overall development plan. This includes providing risk assessments of complete program strategies to evaluate trade-offs and risks. We recommend assessing different strategies for enrichment designs, subgroup identification and adaptive designs. Tools like probability of success12 and extensive simulations add substantial value in this context.
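A minimal sketch of one such tool: a probability-of-success calculation in which the power of a fixed phase III design is averaged over a prior on the true effect carried forward from phase II. The prior parameters, sample size, and significance level are hypothetical.

```python
# Probability of success = power averaged over a prior on the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
prior_mean, prior_sd = 0.3, 0.15        # assumed phase II posterior for the effect
n_per_arm, alpha = 200, 0.025           # assumed phase III design (outcome sd = 1)

def prob_success(n_sims=10_000):
    true_eff = rng.normal(prior_mean, prior_sd, n_sims)  # draw effects from the prior
    se = np.sqrt(2 / n_per_arm)                          # SE of the mean difference
    z_crit = stats.norm.ppf(1 - alpha)
    power_given_eff = 1 - stats.norm.cdf(z_crit - true_eff / se)
    return power_given_eff.mean()

print(f"probability of success: {prob_success():.2f}")
```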
Integrate scientific and commercial considerations
Organizations should integrate precision medicine approaches from the earliest stages of drug development, combining both scientific and commercial considerations. This includes identifying disease endotypes through molecular characterization to enable precise patient stratification and inform adequately powered sub-population studies. As emphasized by Jenkins et al.13, biomarker studies require prospective planning with meaningful effect size specifications, even in exploratory phases, while gathering prior knowledge on variability sources and maintaining high data quality standards. Following established statistical frameworks, teams should ensure robust methodologies support their precision medicine strategies throughout the development continuum, enabling more targeted therapy development while optimizing resource allocation and enhancing clinical and commercial success probability.
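As a small illustration of why those effect-size specifications matter, the standard two-arm sample-size formula below shows how strongly the required trial size depends on whether the targeted effect is diluted across all-comers or concentrated in a well-defined endotype (the effect sizes here are illustrative):

```python
# Per-arm sample size for a one-sided z-test on a mean difference.
import numpy as np
from scipy import stats

def n_per_arm(effect, alpha=0.025, power=0.9, sd=1.0):
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    return int(np.ceil(2 * (sd * (z_a + z_b) / effect) ** 2))

print(n_per_arm(effect=0.2))   # diluted all-comer effect -> ~526 per arm
print(n_per_arm(effect=0.5))   # concentrated endotype effect -> ~85 per arm
```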
Leverage large datasets using AI/ML
While exploratory biomarker discovery was rarely presented in the reviewed NDAs, large datasets (e.g., OMICS) collected in phase III/IV or sourced from external databases can be used for back-translation and identifying new biomarkers for future programs. Given the complexity and dimensionality of such data sets and the underlying disease biology, best practices and standardized application of Artificial Intelligence / Machine Learning (AI/ML) approaches should be employed to derive robust and actionable insights (see, e.g.14,15).
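As one hedged example of a standardized, reproducible baseline for such analyses, the sketch below fits a sparse (L1-penalized) logistic model to a synthetic high-dimensional “omics” matrix; the data, dimensions, and penalty strength are placeholders for illustration, not a recommended pipeline.

```python
# Sparse logistic regression as a simple biomarker-screening baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 1000))                        # 200 patients x 1000 features
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 200)) > 0  # 5 truly predictive features

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(2))
model.fit(X, y)
print("selected features:", np.flatnonzero(model.coef_[0])[:10])
```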