Hit by AI, edtech firm Chegg slashes jobs and names new CEO in major overhaul – Reuters
- Chegg Earnings: Big Quarter Sends Shares Higher 24/7 Wall St.
- Chegg to Remain a Standalone Public Company to Maximize Shareholder Value Business Wire
- Chegg Announces Major Workforce Reduction and Restructuring TipRanks
- Chegg (CHGG) Announces Restructuring Plan and Leadership Change GuruFocus
-

Barbie, Monopoly makers see bright holiday season despite tariffs
President Donald Trump’s tariffs are hitting toy giants Mattel and Hasbro as the critical holiday season nears. Still, both companies see a successful year end ahead.
“This quarter, our U.S. business was again challenged by industry-wide shifts in retailer ordering patterns,” CEO Ynon Kreiz said on Mattel’s recent earnings call. “That said, consumer demand for our products grew in every region, including in the U.S.”
During the most recent quarter, which ended Sept. 30, Mattel said sales slipped 6% globally, led by a 12% decline in North America. International sales rose 3%.
Some of the company’s top performing categories included Hot Wheels and action figures, primarily from the “Jurassic World,” Minecraft and WWE franchises.
Other Mattel brands saw a drop in sales, however, including Barbie and Fisher-Price.
With retail stores waiting until the last minute to assess the level of tariffs that would apply to their holiday orders, Kreiz said “since the beginning of the fourth quarter, orders from retailers in the U.S. have accelerated significantly.”
Retailers “expect strong demand for the holiday and they are restocking,” he added.
Meanwhile, rival toy giant Hasbro’s revenue jumped 8% in the quarter and it raised its financial guidance for the rest of the year.
Key drivers of that included “Peppa Pig” and Marvel franchise toys, as well as the Wizards of the Coast games.
Hasbro “managed tariff volatility with agility” and used price hikes to protect its margins, said Gina Goetter, the company’s chief financial officer and chief operating officer.
The company remains “firmly on track” to achieve its financial targets.
“As we calculate the various scenarios of where that absolute rates will play out, we’re really putting all of our levers to work,” she said on the company’s recent earnings call.
“From how we think about pricing, how we’re thinking about our product mix, how we’re thinking about our supply chain, and how we’re managing all of our operating expenses to mitigate and offset the impact” of tariffs, she said.
For its part, Hasbro also saw “softness” in the U.S. during the quarter due to retail chains waiting longer to place holiday orders, but said momentum is accelerating as the season gets underway.
In July, Mattel’s chief financial officer, Paul Ruh, said that the company was raising prices because of tariffs.
“We have implemented a variety of actions that will help us withstand some of those headwinds and those include … supply chain efficiencies and some pricing adjustments, particularly in the U.S.,” Ruh said on the company’s earnings conference call.
“So with that array of actions, we’re able to withstand some of the uncertainty that is mostly coming in the top line,” Ruh said. “Our goal is to keep prices as low as possible for our consumers.”
Still, Kreiz said that “consumers are buying our products and the toy industry is growing.”
He also said that consumers are taking price hikes in stride and those increases haven’t hurt demand: “We are not seeing any slowdown in consumer demand so far.”
Hasbro CEO Chris Cocks said the company has also raised some prices, but it was “pretty surgical” in what it chose to adjust.
“In terms of ongoing pricing, I think we just kind of have to see how the holiday goes and the consumer holds up,” he told analysts on the company’s earnings call.
Cocks also cautioned that there may be a two-tier economy forming, something other executives and economists have observed in recent months.
“Right now, I think it’s really kind of a tale of two consumers. The top 20%, particularly in the U.S., continue to spend pretty robustly,” he said. “The balance of households are watching their wallets a bit more.”
On Friday, the Labor Department released the latest consumer price index data, which showed that inflation is rising at a 3% annual pace, up from August’s 2.9%.
In May, Kreiz told CNBC that approximately half of the company’s toys were sourced from China.
Beijing has faced some of the steepest tariffs from Washington of any U.S. trade partner, as Trump has rolled out his disruptive trade agenda this year.
Mattel’s Ruh said the company continued to adjust its supply chains in response to shifting global tariff policies.
“We will be continuing to work with our retailers to make sure that the product is on the shelf,” he said.
At the same time, Hasbro’s Goetter said the company is diversifying its supply chains away from high-tariff countries.
“By 2026, we expect approximately 30% of our total Hasbro toy and game revenue will be sourced from China and 30% of our revenue will be based in the U.S., as we opportunistically lean into our U.S. manufacturing capacity,” she said.
-

Wealthsimple Announces $750 Million Equity Round at $10 Billion Post-Money Valuation to Accelerate Growth
- Dragoneer and GIC co-lead the investment alongside CPP Investments, Power Corporation of Canada, IGM Financial Inc., ICONIQ, Greylock and Meritech, reinforcing growing global conviction in Wealthsimple’s mission to build the financial platform of the future.
- With a profitable and growing business, new capital will accelerate Wealthsimple’s product roadmap and deepen the value it delivers to Canadians.
TORONTO, ON [Oct 27, 2025] – Wealthsimple, Canada’s leading financial innovator, today announced it has signed an equity round of up to CAD $750 million at a post-money valuation of CAD $10 billion. The round, which includes both a $550-million primary offering and a secondary offering up to $200 million, is co-led by Dragoneer Investment Group and GIC, and signals deep conviction from world-renowned investors in Wealthsimple’s role as the future of financial services in Canada. Other investors include new investor Canada Pension Plan Investment Board (CPP Investments), and existing investors Power Corporation of Canada, IGM Financial Inc., ICONIQ, Greylock and Meritech.
Since 2014, Wealthsimple has consistently set the pace for innovation in Canadian finance and reimagined how Canadians build wealth. The company broke down barriers to the markets for a new generation of investors with its managed and self-directed investing platforms and led the charge on many investing firsts for the country, including commission-free trading, regulated crypto trading and 24/5 trading. It has also redesigned everyday banking, with features such as bank draft delivery and automatic paycheque allocation, and a competitive chequing account with no monthly, foreign exchange or ATM withdrawal fees. What began as a simple investing app has become a trusted financial platform that millions of Canadians use to grow and manage their money, whether they’re just starting out, or managing complex portfolios.
The equity round comes after an explosive few years for Wealthsimple, and the company continues to scale from a position of strength. Wealthsimple shared that it was profitable in 2024 and continues to be profitable in 2025. The company reached $50 billion in assets under administration in 2024 and, within one year, has doubled that figure to $100 billion in assets. The company’s latest capital raise will accelerate its roadmap across investing, banking and credit, support strategic opportunities to expand its platform, and deepen the value it delivers to Canadians.
“This raise reflects deep confidence from new and returning investors in our mission and our role as a defining Canadian company,” said Michael Katchen, CEO and co-founder, Wealthsimple. “We were intentional in choosing partners committed to the long-term future of Wealthsimple. These are well-respected, global leaders with a proven track record scaling category leaders, and who believe in our vision for the future of financial services.”
Guided by its mission to help everyone achieve financial freedom, Wealthsimple offers an expansive suite of smart, low-cost financial tools that empower Canadians to build wealth in whatever way works for them. The platform brings together self-directed investing, managed portfolios, cryptocurrency, banking services, tax filing, and advisor services into one simple, integrated experience. The company is also responsible for building Canada’s most-read financial newsletter, TLDR, educating four million weekly subscribers on money and market news.
This year, the company launched a waitlist for its first credit card, surpassing 300,000 Canadians in the first six months. The company also launched Wealthsimple Presents, a bi-annual, live product showcase featuring its latest financial innovations. Nearly 350,000 Canadians registered to attend the 2025 livestream events.
“Few companies have achieved what Wealthsimple has in the last few years,” said Christian Jensen, Partner at Dragoneer Investment Group. “The Wealthsimple team has built an expansive financial platform that millions of Canadians trust. They’re not just participating in Canada’s financial services industry; they’re redefining it. Wealthsimple’s product velocity, customer obsession, and category leadership remind us of some of the most enduring global companies and we’re thrilled to be partnering with them in this next phase of growth.”
Dragoneer is focused on investing in leading growth businesses and recently led OpenAI’s $8.3 billion raise in August 2025 as its largest contributor. The firm previously participated in Wealthsimple’s 2021 funding raise.
“We look for companies that will transform industries for decades to come, and Wealthsimple is one of them,” said Choo Yong Cheen, Chief Investment Officer, Private Equity, GIC. “Their track record of innovation, from investing to trading to banking, combined with deep trust from Canadians, positions them to build a defining, generational company in Canadian financial services.”
GIC is a leading global investment firm delivering long-term, sustainable returns across diverse market landscapes. GIC is one of two new investors in this raise, alongside CPP Investments.
“Wealthsimple has built a strong foundation as a trusted financial platform in Canada, combining innovation with disciplined growth,” said Afsaneh Lebel, Managing Director, Head of Funds, CPP Investments. “Alongside our partner Dragoneer, we’ve seen the company’s innovative approach to making financial products more accessible to Canadians, consistent with our strategy to back technology-driven businesses that deliver lasting value for CPP contributors and beneficiaries.”
Meritech and Greylock co-led Wealthsimple’s raise in May 2021 alongside best-in-class investors DST Global, Sagard, ICONIQ, Dragoneer, TCV and iNovia, among others. This raise builds on Wealthsimple’s 2021 financing round — one of the largest in Canadian history at that time — and marks the next chapter in its mission to transform financial services.
-
Two Major Trials Support Drug-Coated Balloons in PCI – Medscape
- Two Major Trials Support Drug-Coated Balloons in PCI Medscape
- Sirolimus-eluting balloon strategy matches drug-eluting stents in large international PCI trial News-Medical
- REC-CAGEFREE I: DCB vs. DES For Treating de Novo CAD at 3 Years American College of Cardiology
- Selution’s 1-Year Data Push Drug-Coated Balloons Further Into the Mainstream MedPage Today
- New Drug-Eluting Balloon May Be as Safe and Effective as Conventional Metal Stents for Repeat Percutaneous Coronary Interventions Mount Sinai
-

US envoy discusses ZTBL’s digital transformation
ISLAMABAD: United States Chargé d’Affaires Natalie A. Baker visited the head office of Zarai Taraqiati Bank Limited (ZTBL), where discussions focused on the bank’s ongoing digital transformation, including the introduction of internet, WhatsApp and mobile banking services to facilitate farmers in loan repayments, settlements and information access.
The envoy commended ZTBL’s contributions to supporting small farmers and strengthening Pakistan’s agriculture-based economy. She noted that several American companies were actively investing and operating in Pakistan’s agriculture and food sectors. Both sides exchanged views on the pressing challenges of food security and other difficulties facing the agriculture sector. Baker reaffirmed the US commitment to supporting Pakistan’s efforts to build a more resilient and sustainable agriculture sector.
-

Clinical Data Support Use of Low-Carbon Version of Albuterol Metered Dose Inhaler for Asthma
Clinical data confirm that a formulation of the albuterol (Ventolin; GSK) metered dose inhaler (MDI) containing the low-carbon propellant HFA-152a is therapeutically equivalent and comparable in safety to the salbutamol MDI containing HFA-134a, the current propellant, according to a news release from the manufacturer. These findings support regulatory submissions for a next-generation version of albuterol, referred to as salbutamol outside of the US, which will bring a more sustainable option to patients who have respiratory diseases.
Albuterol is approved by the FDA for the treatment and prevention of acute or severe bronchospasm in patients with reversible obstructive airway disease, such as asthma and chronic obstructive pulmonary disease (COPD). Albuterol acts on β2-adrenergic receptors, inducing bronchial smooth muscle relaxation and inhibiting immediate hypersensitivity mediator release, particularly from mast cells. Albuterol also affects β1-adrenergic receptors, but the impact is minimal, thereby exerting little effect on a patient’s heart rate.2
Albuterol is available in various dosage forms and strengths, including an aerosol metered-dose inhaler delivering 90 mcg (base)/actuation, equivalent to 108 mcg of albuterol sulfate; a powder metered-dose inhaler form providing the same values as the aerosol metered-dose inhaler; 2-mg and 4-mg tablets; 4-mg and 6-mg extended-release tablets; nebulized solutions, including 0.083%, 0.5%, 0.63 mg/3 mL, and 1.25 mg/3 mL; and an oral syrup in a concentration of 2 mg/5 mL.2
In the absence of albuterol’s bronchodilatory effects, patients experiencing bronchospasms may face the risk of catastrophic asphyxiation, emphasizing the crucial need for patients to have a readily available treatment. According to the manufacturers, nearly half a billion people are affected by asthma and COPD worldwide.1,2
“Healthy air is essential for healthy lungs, and our next-generation [albuterol] has the potential to reduce greenhouse gas emissions by 92% per inhaler. Almost 6 decades after its first development, this medicine remains highly valued by patients and health care professionals and is a key component of our respiratory portfolio. Today, we are one step closer to a reliever MDI that we believe will continue to help patients for many decades to come,” Kaivan Khavandi, senior vice president, global head of respiratory, immunology & inflammation research and development at GSK, said in a manufacturer news release.1
WHO considers climate change to be the biggest global health issue, and patients with chronic respiratory diseases such as asthma and COPD are particularly susceptible to variable weather conditions and extreme weather events. Short-acting β2-agonists (SABAs) are typically used as reliever medications for the short-term relief of asthma and COPD symptoms; however, they are also responsible for approximately 70% of total inhaler-related greenhouse gas (GHG) emissions. The development of MDI devices that contain low global warming potential (GWP) propellants can reduce the carbon footprint of MDIs while balancing reduced GHG emission goals with patient health and well-being.3
A separate analysis aimed to assess the carbon footprints of albuterol HFA-152a MDI, albuterol HFA-134a MDI, and an albuterol dry-powder inhaler (DPI). In that study, 3 cradle-to-grave lifecycle analyses (LCAs) were undertaken to compare the carbon footprints of the 3 products. Over 600 individual emission factors were calculated from over 2000 data points and categorized into active pharmaceutical ingredient manufacture, micronization, device, formulation and packaging, use phase, distribution, and end-of-life stages.3
The data show that the average carbon footprint values were about 27.09, 2.24, and 0.76 kg CO2e per device for albuterol HFA-134a MDI, albuterol HFA-152a MDI, and albuterol DPI, respectively, representing an approximate 92% reduction in carbon footprint for albuterol HFA-152a MDI compared with albuterol HFA-134a MDI. The investigators observed that the difference was primarily driven by the patient use phase. These findings suggest that substituting the currently available HFA-134a propellant with a new HFA-152a candidate propellant could substantially reduce the carbon footprint of a SABA reliever.3
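As a quick check on the reported figure, the roughly 92% reduction follows directly from the per-device footprints quoted above. A minimal sketch, using only the values stated in the preceding paragraph:

```python
# Per-device cradle-to-grave carbon footprints (kg CO2e), as reported above.
baseline_hfa134a = 27.09
alternatives = {"albuterol HFA-152a MDI": 2.24, "albuterol DPI": 0.76}

for product, footprint in alternatives.items():
    reduction = (baseline_hfa134a - footprint) / baseline_hfa134a * 100
    print(f"{product}: {footprint} kg CO2e, {reduction:.0f}% lower than the HFA-134a MDI")
# The HFA-152a MDI works out to roughly 92% lower than the HFA-134a MDI,
# matching the reduction reported in the study.
```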
“While low carbon alternatives already exist, such as dry powder and soft mist inhalers, we know that many patients worldwide with both asthma and COPD prefer a[n albuterol] MDI to relieve their symptoms. These data should enable patients to use their preferred inhaler choice. This is a crucial advance to help global health care systems meet their climate targets at the same time as optimizing the care of patients,” Ashley Woodcock, professor of respiratory medicine at the University of Manchester, said in the news release.1
REFERENCES
1. GSK. GSK announces positive pivotal phase III data for next-generation low carbon version of Ventolin (salbutamol) metered dose inhaler. News release. October 22, 2025. Accessed October 27, 2025. https://www.gsk.com/en-gb/media/press-releases/gsk-announces-positive-pivotal-phase-iii-data-for-next-generation-low-carbon-version-of-ventolin-salbutamol-metered-dose-inhaler/
2. National Library of Medicine – National Center for Biotechnology Information. Albuterol. Updated January 10, 2024. Accessed October 27, 2025. https://www.ncbi.nlm.nih.gov/books/NBK482272/
3. Plank M, Anzueto A, Janson C, Henderson R, Fulmali S, King J. Decarbonizing Respiratory Care: The Impact of a Low-carbon Salbutamol Metered-dose Inhaler [abstract]. Am J Respir Crit Care Med. 2025;211:A5548. doi:10.1164/ajrccm.2025.211.Abstracts.A5548
-

Fogarty Innovation and CRF unite to accelerate breakthroughs in cardiovascular medicine
After several years of close collaboration and a relationship rooted in mutual respect and shared vision, Fogarty Innovation is coming together with the Cardiovascular Research Foundation® (CRF®) to create a unified platform for advancing transformative healthcare technologies. This strategic combination strengthens CRF’s leadership in medtech by integrating Fogarty’s renowned expertise in early-stage innovation, creating a powerful, cross-specialty platform to accelerate transformative breakthroughs into patient care. The merger was announced during a special keynote session at the Transcatheter Cardiovascular Therapeutics® (TCT®) meeting.
This platform unites two mission-driven organizations with a shared vision: to catalyze the next generation of disruptive medical technologies by advancing innovations that have the power to transform patient care and reshape the future of healthcare. It builds on a strong track record of successful collaboration: CRF and Fogarty Innovation have partnered on the TCT MedTech Innovation Forum over the past four years, fostering early-stage innovation and supporting emerging medtech entrepreneurs. Together, they form a powerful alliance poised to accelerate progress in cardiovascular medicine and transform patient care on a global scale.
The unified platform will unlock immediate access to world-class incubation, long-term strategic growth, and enhanced philanthropic impact. By combining the deep expertise of both organizations, it will provide unparalleled access to seasoned medtech executives, expand innovation education initiatives, and increase business opportunities in Silicon Valley and beyond – all while accelerating the development of cutting-edge technologies and driving measurable improvements in patient care.
Fogarty Innovation will continue to carry its name and mission, now serving as CRF’s West Coast innovation hub. Together, CRF and Fogarty Innovation will amplify their collective impact by combining resources, expertise, and leadership – aligning on strategic initiatives to drive innovation and expand their reach across the cardiovascular landscape.
“We’re thrilled to enter into this partnership with our close friends and colleagues at Fogarty Innovation,” said Juan F. Granada, MD, President and Chief Executive Officer of CRF. “We are entering a groundbreaking era in cardiovascular medicine; one defined by unprecedented technological potential. This move is not just a step forward; it is a bold move to lead the future. By uniting our strengths into a single, purpose-driven platform, we are shaping the development of transformative technologies that will redefine care and bring us closer to a more equitable health care system.”
“Over the past 17 years, Fogarty Innovation has demonstrated that our model of immersive support – through incubation, acceleration, education, and alliances – meaningfully increases the success of innovators in bringing new tools and therapies to clinicians and patients. We are excited to join forces with CRF, as we can now scale that impact globally, giving entrepreneurs a larger stage, stronger resources, and a faster path to delivering transformative care.”
Andrew Cleeland, CEO of Fogarty Innovation
“Our partnerships have always been rooted in trust, shared purpose, the highest ethical standards, performance excellence, and a commitment to transform the lives of patients everywhere,” said Martin B. Leon, MD, Founder and Chairman Emeritus of CRF. “CRF’s mission to advance cardiovascular care through research and education will be amplified by Fogarty Innovation, opening new opportunities for collaborations, advanced innovation, and strategic growth – ultimately, impacting the science and practice of medicine worldwide.”
Source:
Cardiovascular Research Foundation
-

PCI With A Sirolimus-Eluting Balloon And Provisional Stenting Shows Comparable Outcomes To Routine DES Implantation For Treatment Of De Novo Coronary Artery Disease
New study results from a large international all-comer population of percutaneous coronary intervention (PCI) candidates found that utilizing a strategy of sirolimus-eluting balloons with bailout stenting only if necessary was noninferior to routine drug-eluting stent (DES) implantation as part of the treatment for de novo coronary artery disease.
Findings were reported today at TCT® 2025, the annual scientific symposium of the Cardiovascular Research Foundation® (CRF®). TCT is the world’s premier educational meeting specializing in interventional cardiovascular medicine.
DES are implanted in the vast majority of PCIs, with well-established immediate and mid-term outcomes. However, long-term follow-up studies have reported annual adverse event rates of 2-4%, which has led to growing interest in strategies that minimize metallic stent implantation to potentially reduce late events. The study used the SELUTION SLR drug-eluting balloon (DEB), which delivers sustained drug release that maintains therapeutic tissue concentrations for up to 90 days and is designed to have an elution profile similar to that of current DES.
Between August 2021 and July 2024, a total of 3,341 participants were randomized 1:1 to either the DEB strategy (n=1,671) or the DES strategy (n=1,670) at 62 sites in 12 countries across Europe and Asia. Patients with a lesion reference vessel diameter (RVD) ≥2.0 mm and ≤5.0 mm were eligible for inclusion. Baseline characteristics were similar in both groups, with a relatively high proportion of patients presenting with acute coronary syndromes or having high bleeding risk. Randomization occurred after angiography, when all target lesions were considered suitable for either strategy, and prior to lesion wiring and lesion preparation. Eighty percent of participants treated with the DEB strategy did not require a stent.
The primary endpoint of target vessel failure, a composite of cardiac death, target vessel-related myocardial infarction, and clinically driven target vessel revascularization, occurred in 5.3% of the DEB strategy group and 4.4% of the DES strategy group at one year (risk difference, 0.91 percentage points; 95% CI: -0.55 to 2.38; P for noninferiority = 0.02). No acute or late safety concerns were noted with the DEB strategy, with low rates of cardiac death (0.70% vs 1.0%), lesion thrombosis (0.1% vs 0.3%), and target vessel myocardial infarction (2.7% vs 2.6%) comparable with DES.
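For readers who want to see how the reported risk difference and confidence interval relate to the event rates above, the sketch below approximately reproduces them with a simple Wald interval on the difference in proportions; the trial’s prespecified analysis and noninferiority margin are not stated in this excerpt and may differ.

```python
import math

# Event rates and group sizes as reported above (approximate reconstruction).
p_deb, n_deb = 0.053, 1671   # target vessel failure, DEB strategy
p_des, n_des = 0.044, 1670   # target vessel failure, DES strategy

risk_diff = p_deb - p_des
se = math.sqrt(p_deb * (1 - p_deb) / n_deb + p_des * (1 - p_des) / n_des)
ci_low, ci_high = risk_diff - 1.96 * se, risk_diff + 1.96 * se

print(f"Risk difference: {risk_diff * 100:.2f} percentage points")
print(f"95% CI: ({ci_low * 100:.2f}, {ci_high * 100:.2f})")
# Noninferiority is declared if the upper CI bound stays below the prespecified
# margin (the margin itself is not given in the text above).
```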
“The SELUTION DeNovo trial provides the first comparison of a PCI strategy based on the use of sirolimus-eluting balloons versus systematic implantation of DES in a large international all-comer population of PCI candidates. With no acute or late safety concerns, these results apply to a significant segment of PCI procedures, including high-risk patients and complex lesions. We look forward to obtaining five-year data to determine long-term noninferiority or possible superiority of this strategy.”
Christian M. Spaulding, MD, PhD, Chief, Integrated Interventional Laboratory at European Hospital Georges Pompidou in Paris, France
The study was funded by M.A. Med Alliance SA (a Cordis Company), Switzerland.
Dr. Spaulding reported receiving grant/research support from the French Ministry of Health and CERC; consultant fees/honoraria from Medtronic, Techwald, Sanofi, Novartis, Sonivie, Valcare, and Boston Scientific; as well as individual stock(s)/stock options/salary support from Cordis (MedAlliance) and Sonivie.
The results of the study were presented on Sunday, October 26, 2025, at 11:00 a.m. PT in the Main Arena (Hall A, Exhibition Level, Moscone South) at the Moscone Center during TCT 2025.
Source:
Cardiovascular Research Foundation
-

COMPETE Trial: ITM-11 Tops Everolimus for GEP-NET PFS and OS | Targeted Oncology
Final analysis from the phase 3 COMPETE trial (NCT03049189) demonstrated that ITM-11 (177Lu-edotreotide) met its primary and secondary end points in patients with gastroenteropancreatic neuroendocrine tumors (GEP-NETs) compared with everolimus. Data were presented at the 2025 European Society for Medical Oncology (ESMO) Congress on October 18, 2025, by Jaume Capdevila, MD, PhD, of Vall d’Hebron University Hospital, and at the NANETS Symposium on October 25.1 The primary end point of progression-free survival (PFS) was met with a statistically significant and clinically meaningful improvement: median PFS was significantly longer in patients administered ITM-11 than in those administered everolimus. The secondary end point of overall survival (OS) was also longer in patients administered ITM-11 vs everolimus.2
A total of 207 patients were in the ITM-11 group and 102 patients in the everolimus group. The median age was 65 years in the ITM-11 group and 61 years in the everolimus group. The majority of patients in both groups were male, had grade 2, nonfunctional GEP-NETs, and had received prior therapy.
COMPETE Trial Findings
COMPETE met its primary end point of PFS, which was significantly longer in patients treated with ITM-11 vs everolimus. By central assessment, median PFS was 23.9 vs 14.1 months (HR, 0.67; 95% CI, 0.48–0.95; P = .022); by local assessment, it was 24.1 vs 17.6 months (HR, 0.66; 95% CI, 0.48–0.91; P = .010).
In the subgroup analysis of PFS by tumor origin, median PFS was numerically longer with ITM-11 in both GE-NETs and P-NETs. In GE-NETs, median PFS was 23.9 vs 12 months (HR, 0.64; 95% CI, 0.38–1.08; P = .090); in P-NETs, it was 24.5 vs 14.7 months (HR, 0.70; 95% CI, 0.45–1.09; P = .114).
Median PFS was also numerically longer in grade 1 tumors and significantly longer in grade 2 tumors in the ITM-11 arm: 30 vs 23.7 months for grade 1 (HR, 0.89; 95% CI, 0.42–1.8; P = .753) and 21.7 vs 9.2 months for grade 2 (HR, 0.55; 95% CI, 0.37–0.82; P = .0003).
By prior therapy, median PFS was numerically longer in the first line and significantly longer in the second line in the ITM-11 arm. In the first line, median PFS was not reached with ITM-11 vs 18.1 months with everolimus (HR, 0.60; 95% CI, 0.25–1.45; P = .249); in the second line, it was 23.9 vs 14.1 months (HR, 0.68; 95% CI, 0.47–0.98; P = .039).
The overall response rate (ORR), one of the secondary end points of the trial, was significantly higher in the ITM-11 arm: 21.9% vs 4.2% by central assessment (P < .0001) and 30.5% vs 8.4% by local assessment (P < .0001).
Safety Profile
Adverse events (AEs) related to the study drug were experienced by 82% of patients in the ITM-11 group and 97% of patients in the everolimus group. The most common AEs reported were nausea (30% vs 10.1%), diarrhea (14.3% vs 35.4%), asthenia (25.3% vs 31.3%), and fatigue (15.7% vs 15.2%). These AEs were expected based on the known safety profile of ITM-11.2
AEs led to premature study discontinuation in 1.8% vs 15.2% of patients, respectively, and to dose modification or discontinuation in 3.7% vs 52.5%. Delayed study drug administration due to toxicity occurred in 0.9% of the ITM-11 group and 0% of the everolimus group.2
Dosimetry data showed targeted tumor uptake with low exposure to healthy organs, with normal organ absorbed doses well below safety thresholds.
Patient Characteristics
Inclusion criteria included age 18 years or older; well-differentiated, nonfunctional GE-NET or functional/nonfunctional P-NET; grade 1/2 unresectable or metastatic, progressive, SSTR-positive disease; and being either treatment-naive to first-line therapy or progressing on prior second-line therapy.1,2
Morphologic imaging was conducted at 3-month intervals. PFS follow-up was performed every 3 months after the first 30 days, and long-term follow-up every 6 months.
“With these data combining extensive dosimetry information from more than 200 patients included in a prospective trial, ITM is laying the groundwork for improved therapeutic decision-making by providing important insights into tumor uptake and treatment variability,” Emmanuel Deshayes, MD, PhD, professor in biophysics and nuclear medicine at the Montpellier Cancer Institute in France, said in a news release.2 “It may offer clinically meaningful implications for optimizing individualized patient management.”
Dosimetry data from COMPETE shaped the design of ITM’s phase 3 COMPOSE trial (NCT04919226)3 of ITM-11 in well-differentiated, aggressive grade 2 or grade 3 SSTR-positive GEP-NETs, as well as the upcoming phase 1 pediatric KinLET study (NCT06441331)4 in SSTR-positive tumors.
DISCLOSURES: Capdevila noted grants and/or research support from Advanced Accelerator Applications, AstraZeneca, Amgen, Bayer, Eisai, Gilead, ITM, Novartis, Pfizer, and Roche; participation as a speaker, consultant, or advisor for Advanced Accelerator Applications, Advanz Pharma, Amgen, Bayer, Eisai, Esteve, Exelixis, Hutchmed, Ipsen, ITM, Lilly, Merck Serono, Novartis, Pfizer, Roche, and Sanofi; position as advisory board member for Amgen, Bayer, Eisai, Esteve, Exelixis, Ipsen, ITM, Lilly, Novartis, and Roche; and a leadership role and chair position for the Spanish Task Force for Neuroendocrine and Endocrine Tumours Group (GETNE).
REFERENCES:
1. Capdevila J, Amthauer H, Ansquer C, et al. Efficacy, safety and subgroup analysis of 177Lu-edotreotide vs everolimus in patients with grade 1 or grade 2 GEP-NETs: phase 3 COMPETE trial. Presented at: 2025 ESMO Congress; October 17-20, 2025; Berlin, Germany. Abstract 1706O.
2. ITM presents dosimetry data from phase 3 COMPETE trial supporting favorable efficacy and safety profile with n.c.a. 177Lu-edotreotide (ITM-11) in patients with gastroenteropancreatic neuroendocrine tumors at EANM 2025 Annual Congress. News release. ITM. October 8, 2025. Accessed October 18, 2025. https://tinyurl.com/3nuscs4m
3. Lutetium 177Lu-edotreotide versus best standard of care in well-differentiated aggressive grade-2 and grade-3 gastroenteropancreatic neuroendocrine tumors (GEP-NETs) – COMPOSE (COMPOSE). ClinicalTrials.gov. Updated September 10, 2025. Accessed October 18, 2025. https://www.clinicaltrials.gov/study/NCT04919226
4. Phase I trial to determine the dose and evaluate the PK and safety of lutetium Lu 177 edotreotide therapy in pediatric participants with SSTR-positive tumors (KinLET). ClinicalTrials.gov. Updated September 19, 2025. Accessed October 18, 2025. https://www.clinicaltrials.gov/study/NCT06441331
-

A home genome project: How a city learning cohort can create AI systems for optimizing housing supply
Executive summary
Cities in the U.S. and globally face a severe, system-wide housing shortfall—exacerbated by siloed, proprietary, and fragile data practices that impede coordinated action. Recent advances in artificial intelligence (AI) promise to increase the speed and effectiveness of data integration and decisionmaking for optimizing housing supply. But unlocking the value of these tools requires a common infrastructure of (i) shared computational assets (data, protocols, models) required to develop AI systems and (ii) institutional capabilities to deploy these systems to unlock housing supply. This memo develops a policy and implementation proposal for a “Home Genome Project” (Home GP): a cohort of cities building open standards, shared datasets and models, and an institutional playbook for operationalizing these assets using AI. Beginning with an initial pilot cohort of four to six cities, a Home GP-type initiative could help 50 partner cities identify and develop additional housing supply relative to business-as-usual projections by 2030. The open data infrastructure and AI tools developed through this approach could help cities better understand the on-the-ground impacts of policy decisions, while also providing a constructive way to track progress and stay accountable to longer-term housing supply goals.
1. Introduction
More than 150 U.S. communities now participate in the Built for Zero initiative, a data‑intensive model that has helped several localities achieve “functional zero” chronic or veteran homelessness and dozens more to achieve significant, sustainable reductions. For instance, Rockford, Illinois, became the first U.S. community to end both veteran and chronic homelessness by establishing a unified command center that used real-time, person-specific data to identify individuals experiencing homelessness and strategically target resources to achieve functional zero. The work has revealed an important formula: pairing real‑time, person‑level data integrated across agencies with nimble, cross‑functional teams can drive progress on seemingly intractable social problems.
Homelessness is typically downstream of shortages of housing supply. In the U.S. alone, there is an estimated 7.1 million‑unit shortage of homes for extremely low-income renters. But no equivalent playbook, standardized taxonomy, or shared data infrastructure exists to holistically address housing supply at the city or regional level. Developers, school districts, transit agencies, financing authorities, and planning departments each steward partial information and property assets that could translate into expanded housing supply.
Without shared accountability for meeting community housing needs, chronic coordination failure results. Homelessness is one stark result. Individuals and families shuttle between services, attempting to qualify for housing and income assistance while competing for limited housing options. Meanwhile, opportunities to increase housing supply—through repurposing idle land, preserving at-risk units, streamlining development approvals, or other strategies—go unrealized because critical information remains fragmented across agencies or never collected at all.
When attempting to integrate city-level housing data, most cities confront an unsatisfying choice: license an expensive proprietary suite, outsource a one-off dashboard to consultants, or manually assemble spreadsheets in-house. Some jurisdictions run dual systems—an official internal view and a vendor dashboard—further fragmenting workflows and complicating institutional learning. Among the commercial vendors trying to fill the information void are those offering proprietary suites with parcel visualization, market analytics, scenario modeling features, and/or regulatory particulars. Many are built on public data but packaged behind paywalls that limit transparency, interoperability, and reuse. Several also cover only a limited number of geographies.
Without open interfaces, common data standards, or accessible tools, even well-staffed departments struggle to maintain the continuous data integration that drives real outcomes. The manual processes that have enabled breakthrough successes require dedicated teams and sustained funding that most cities cannot maintain through personnel changes and budget cycles. Equally important, fragmented or proprietary data ecosystems can persist because existing arrangements benefit from the opacity—whether by limiting public scrutiny of how housing assets are managed or, as recent cases against rent-setting platforms illustrate, by enabling landlords and data vendors to leverage nonpublic information in ways that reinforce market power and reduce competition. What emerges is a patchwork of partial views—each typically anchored in narrow mandates and reinforced by opaque systems that resist integration.
While many cities have prototyped tools and surfaced effective approaches to optimizing their housing assets despite these challenges, sustaining and scaling them across contexts requires more than heroic individual efforts. The path forward to unlocking hidden housing supply at scale lies in durable data pipelines, cross-functional teams with clear shared goals, and shared data and playbooks that reduce the transaction costs of doing this important work city-by-city.
AI’s potential as an accelerant
Recent breakthroughs in generative AI, computer vision, and geospatial analytics—many of which have only been commercialized since 2023—drastically lower the cost and increase the speed of data integration and analytics. For housing data, early pilots show that machine-learning models can rapidly reconcile parcel IDs across assessor, permit, and utility records; detect latent development sites—such as vacant lots, single-story strip malls, or underutilized garages—by triangulating land-use data with computer-vision analysis of aerial imagery; and forecast supply impacts of zoning tweaks or financing incentives across thousands of parcels.
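To make the data-integration piece concrete, here is a minimal, hypothetical sketch of the kind of parcel reconciliation step described above: normalizing assessor parcel numbers (APNs) before joining assessor and permit records. Real pipelines typically layer fuzzy address matching and model-based entity resolution on top of this; the column names and formats here are illustrative assumptions, not a standard.

```python
import pandas as pd

def normalize_apn(apn: str) -> str:
    """Strip punctuation and whitespace so APNs from different systems can be joined."""
    return "".join(ch for ch in str(apn) if ch.isalnum()).upper()

# Illustrative records; real assessor and permit extracts use locally defined schemas.
assessor = pd.DataFrame({
    "apn": ["123-45-678", "987 65 432"],
    "land_use": ["vacant", "commercial"],
})
permits = pd.DataFrame({
    "parcel_id": ["12345678", "98765432"],
    "permit_type": ["none", "demolition"],
})

assessor["apn_key"] = assessor["apn"].map(normalize_apn)
permits["apn_key"] = permits["parcel_id"].map(normalize_apn)

merged = assessor.merge(permits, on="apn_key", how="left")
print(merged[["apn", "land_use", "permit_type"]])
```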
However, lessons from Built for Zero and elsewhere suggest that new forms of technical automation must be paired with common infrastructure and institutional capabilities to drive measurable outcomes. For instance, meta-analyses of cross-agency Integrated Data Systems (IDS) in the U.S. find that such systems are associated with better targeting and continuous program improvement when paired with governance, standards, and security protocols.
Early experiments applying machine learning (ML) to housing data (discussed in the next section) suggest that computation alone does not eliminate complexity. Legal nuance, inconsistent document formats, and context-specific exceptions routinely defeat even state-of-the-art data integration and machine learning techniques, requiring manual verification and domain expertise. Even when AI delivers efficiency gains, those improvements alone do not build housing. Without clear protocols, shared taxonomies, and durable governance that elevates domain expertise and supports capacity building, even efficient AI tools risk automating the wrong tasks—or failing when local conditions shift.
The core challenge of optimizing housing supply is not simply an absence of tools, but the absence of a common and underlying infrastructure, taxonomy, institutional capacity, and incentives for transparent data sharing needed to support computational tools.
The case for a city-level “Home Genome Project”
Against this backdrop, a coalition of city housing leaders, community-development practitioners, technologists, and funders gathered in “Room 11”—a 17 Rooms flagship working group aligned with Sustainable Development Goal 11 for sustainable cities and communities—to explore how to harness AI tools to increase housing supply. In a rapid sequence of virtual meetings, the group identified data gaps, transaction costs, and governance hurdles that stall housing production and allocation and identified the key technical and institutional ingredients required to harness AI’s potential value for local decisionmakers.
Drawing on these insights, this memo suggests standardizing and integrating city-level public data and capabilities should be a primary focus for leveraging AI’s potential value. A concerted international movement, starting with a learning network of cities, could generate the necessary shared inputs required to develop AI systems (a shared data model, data standards and datasets, and machine learning models) and the playbooks for the infrastructure, human capacities, and institutions needed to operationalize AI systems for optimizing city-level housing supply.
As discussed in Room 11 meetings, the siloed status quo resembles biomedical research before the step-change advances in data sharing and management initiated through the Human Genome Project. By mandating 24-hour public release of DNA sequences across participating laboratories, the HGP’s Bermuda Principles ignited a wave of global discovery that later underpinned AI-driven feats like CRISPR and AlphaFold. When researchers openly shared SARS-CoV-2 genomic sequences in early 2020, it enabled parallel vaccine development that would have been impossible under traditional closed models.
Housing needs an analogous shift—a “Home Genome Project” (Home GP)—defined by shared data standards, open pipelines, and reciprocal learning norms that convert local experiments into a global commons of actionable knowledge. A de-siloed approach to housing data infrastructure and shared learnings could also provide a more structured brokering mechanism for connecting front-line teams with the resources, expertise, and partnerships they need to scale their solutions.
Whereas DNA data is inherently structured and universally interpretable, housing data reflects diverse, locally determined rules and contexts, making integration and standardization far more complex. Achieving a Home GP will require careful, collaborative design of data models and standards from the outset, ensuring consistent definitions, quality inputs, and governance frameworks that can sustain large-scale, cross-jurisdictional use.
By coupling open data standards and city-contributed datasets and ML models with peer-to-peer capacity building, Home GP could help catalyze collaboration, learning, and innovations for increasing housing supply above baseline in 50 partner cities by 2030. Like the Human Genome Project, Home GP would be designed to treat data as critical infrastructure and collaboration as a force multiplier for AI system development. While the start-up phase of Home GP would likely focus on larger cities, the development of the approach would enable integration of data from communities of every size and type—including towns, villages, and counties with unincorporated areas.
2. Home GP foundations: A cross-section of city-level approaches and progress
Room 11 discussions unearthed a rich cross-section of city-level experimentation across three key barriers to housing supply. Several cities in the U.S. and globally are making noteworthy inroads. Teams are integrating data and prototyping digital tools to help map existing land assets, simulate the effects of policy interventions on development, and detect and forecast vacancies.
Different cities possess different raw ingredients—data, models, talent, or political capital—to influence decisionmaking for optimizing housing supply. Cities anchoring massive metropolitan areas, like Atlanta, are developing bespoke solutions, while smaller cities like Santa Fe (U.S.), facing data and resourcing constraints, are developing nimbler, leaner solutions.
Mapping existing land assets and development proposals: Atlanta (U.S.) has merged tax-assessor records with other agency spreadsheets into the city’s first live map of every publicly owned parcel; its new Urban Development Corporation can now identify and package land deals across departments instead of hunting for records one by one. London (U.K.) leverages its tight regulatory framework to systematically collect and standardize data from multiple organizations, capturing information on roughly 120,000 development proposals annually. This regulatory process creates opportunities for comprehensive data gathering that feeds into what functions as a digital twin of the planning system. The Greater London Authority’s planning data map has been accessed 23.4 million times in the past year and serves as the evidence base for public-sector planning across the region. Boston’s (U.S.) citywide land audit surfaced a substantial inventory of underutilized public parcels.
These approaches point toward a “digital twin” approach that gives cities real-time insight into how their built environment is changing—helping planners do long-range scenario planning with more accurate, up-to-date information. A tool like this can also strengthen accountability—by transparently tracking new development, cities can measure progress against housing goals (similar to California’s Regional Housing Needs Allocation process) and hold themselves responsible for delivering results.
Simulating the effects of policy interventions: Denver (U.S.) is coupling parcel-level displacement-risk models with zoning-and-feasibility simulators so that staff can test how ordinances (like parking minimums or inclusive housing regulations) could impact housing developments before policies reach decisionmakers. Charlotte (U.S.) is moving to automate updates to its Housing Location Tool (an Esri workbook that scores parcel-level properties on development potential based on four dimensions: proximity, access, change, and diversity) and to allow the process to proactively score all parcels and recommend areas for development.
Vacancy detection: Water-scarce Santa Fe (U.S.) is beginning to mine 15-minute water meter readings to flag homes that sit idle for months and explore incentives for releasing those units for rental use—an information loop that turns a utility dataset into a housing-supply radar.
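A rough sketch of the Santa Fe-style signal described above: flag accounts whose metered water use stays near zero for several consecutive months. The threshold, window, and data layout here are illustrative assumptions; an operational version would need to account for seasonal residents, irrigation-only accounts, and meter errors.

```python
import pandas as pd

# Illustrative monthly consumption per account (gallons), aggregated from
# 15-minute meter reads; real data would come from the utility's metering system.
monthly_use = pd.DataFrame({
    "account": ["A"] * 6 + ["B"] * 6,
    "month": pd.to_datetime(["2025-01-01", "2025-02-01", "2025-03-01",
                             "2025-04-01", "2025-05-01", "2025-06-01"] * 2),
    "gallons": [5, 3, 0, 2, 4, 1,                         # account A: near-zero use
                3200, 2900, 3100, 2800, 3000, 2950],      # account B: occupied
})

THRESHOLD_GALLONS = 50   # below this, treat the month as "idle" (assumed cutoff)
MIN_IDLE_MONTHS = 4      # consecutive idle months before flagging (assumed)

def likely_vacant(group: pd.DataFrame) -> bool:
    idle = (group.sort_values("month")["gallons"] < THRESHOLD_GALLONS).astype(int)
    # Longest run of consecutive idle months.
    longest = (idle.groupby((idle != idle.shift()).cumsum()).cumsum()).max()
    return bool(longest >= MIN_IDLE_MONTHS)

flags = monthly_use.groupby("account").apply(likely_vacant)
print(flags)  # account A flagged as likely vacant, account B not
```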
Together this range of efforts shows that cities possess different raw ingredients to accelerate the development of new housing supply. What is missing is the infrastructure to convert these opportunities and one-off accomplishments into standardized, repeatable, and shareable playbooks. Just as the Human Genome Project transformed genomics by creating structured vocabularies for machine readability and cross-jurisdictional collaboration, a similar infrastructure and lexicon for describing and cataloging assets such as parcels, vacancies, and zoning types could unlock the innovation and progress needed to meet the country’s urgent housing needs.
Diagnosing the decision problem in city housing markets
Room 11 meetings surfaced several incentive, capacity, and governance barriers that need to be overcome to secure more housing.
1. Demand-driven solutions for surviving the “dashboard valley of death”
The allure of technology‐driven reform is hardly new. Over the past decade, several public funders and technology companies have supported housing dashboards as potentially scalable solutions to generating real-time insights for local decisionmakers. But most, particularly those built top-down, have achieved only limited uptake owing to the fragility of public data and capacity ecosystems underneath them. Where cities have developed their own tools—like Charlotte’s Housing Locational Tool—the data pipelines often rely on manual, once-a-year updates. This so-called “dashboard valley of death,” a key challenge discussed in Room 11, suggests the need for an alternative strategy to technology development, one where front-line agencies are equipped with common infrastructure and capabilities to self-generate tailored solutions to locally defined problems.
In this vein, a new generation of tools demonstrates that success is possible when technology serves clearly defined user needs. The Terner Housing Policy Simulator, developed by UC Berkeley’s Terner Center, for example, models housing supply impacts at the parcel level across 25-30 jurisdictions. By combining zoning analysis with economic feasibility modeling—running multiple pro-formas to assess development probability under different scenarios—the tool provides actionable intelligence that cities like Denver are using to evaluate parking minimum reforms and inclusionary housing policies. Similarly, initiatives like the National Zoning Atlas have achieved traction by tackling a foundational data challenge: digitizing and standardizing the country’s fragmented zoning codes, having already mapped over 50% of U.S. land area, including major metros like Houston and San Diego.
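As an illustration of the kind of parcel-level feasibility test described above, the sketch below runs a single, highly simplified residual-land-value pro-forma; tools like the simulator referenced here run many such pro-formas per parcel under different zoning and cost scenarios. All inputs and the formula’s simplifications are illustrative assumptions, not the Terner Center’s actual model.

```python
def residual_land_value(units: int, rent_per_unit_month: float,
                        hard_cost_per_unit: float, soft_cost_share: float = 0.25,
                        cap_rate: float = 0.05, opex_share: float = 0.35) -> float:
    """Very simplified pro-forma: capitalized net income minus development cost."""
    annual_noi = units * rent_per_unit_month * 12 * (1 - opex_share)
    value_at_completion = annual_noi / cap_rate
    development_cost = units * hard_cost_per_unit * (1 + soft_cost_share)
    return value_at_completion - development_cost

LAND_ASKING_PRICE = 1_000_000  # assumed price the residual value must clear

# Compare two hypothetical zoning scenarios for the same parcel.
for label, units in [("current zoning (8 units)", 8), ("upzoned (24 units)", 24)]:
    rlv = residual_land_value(units, rent_per_unit_month=2600, hard_cost_per_unit=280_000)
    verdict = "feasible" if rlv > LAND_ASKING_PRICE else "infeasible"
    print(f"{label}: residual land value ${rlv:,.0f} -> {verdict}")
```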
2. Fragmented ownership and authority
Basic facts about land are often missing. In many local jurisdictions, no single office can confirm which public entity owns which parcel, let alone coordinate how those assets are deployed for housing. In Atlanta, a dedicated central team eventually stitched together assessor files, handwritten ledgers, and transit-authority spreadsheets to build the city’s first unified public-land map. Armed with this new data—and inspired by Copenhagen’s municipal land corporation—the city established an Urban Development Corporation in 2024 to broker multi-agency approvals and unlock dormant parcels for housing. The exercise surfaced more than 40 developable acres that had been hiding in plain sight. Fifty new development projects are underway.
3. Capacity bottlenecks
Most local housing departments operate with lean staffing and tight budgets; larger cities may command more resources, but must navigate correspondingly larger and more complex operating environments. Smaller communities may not have data or geospatial departments, or may even have difficulties accessing or understanding their own regulations, including zoning. Either way, the personnel and financial headroom required to sustain continuous data collection, community engagement, and policy iteration remain in short supply. Denver’s team, for example, is forced to guess how inclusionary-housing tweaks will land on developers’ pro-formas—an impossible task without automated modeling support.
What AI can—and cannot—do for housing supply
In theory, AI systems can be leveraged to assist in data integration processes by reconciling parcel tables, flagging underused land, and running zoning simulations in minutes rather than months. Successful implementation of these functions could, in turn, help free up local teams to focus on higher-impact activities that move from identifying potential supply to building supply—such as creating new agencies like Atlanta’s Urban Development Corporation, reviewing property records that automated systems cannot interpret, and engaging communities in housing production efforts.
In practice, however, there are many constraints when attempting to use ML to integrate data and automate inference. As learned in Room 11 conversations, the National Zoning Atlas’ two-year collaboration with Cornell Tech, backed by NSF funding, found that even leading ML models could not reliably parse zoning codes. Despite processing thousands of pages of text, researchers concluded that legal nuance, inconsistent formatting, and local exceptions rendered zoning documents effectively unreadable by AI alone. Atlanta’s experience mapping public land similarly revealed that property ownership records required manual verification—automated systems failed to detect transactions between public agencies that did not trigger tax records. In Charlotte, displacement monitoring still depends on human review to distinguish qualified transactions from multi-parcel or otherwise unqualified sales that the model can’t classify automatically. Collectively, these examples demonstrate that civic data often embeds ambiguity and context-specific nuance that resist full automation.
A strategy for supporting cities to leverage AI for housing supply
Room 11 conversations validated the hypothesis that developing more standardized city-level data infrastructures and institutional capabilities will help unlock more opportunities for AI systems to be used to optimize housing supply. Local staff with context and domain expertise are needed to design the inputs and ground-truth the outputs of AI systems, while interpreting and using those inputs in real-world negotiations with developers, residents, and finance authorities. In short, algorithms can deliver speed; local knowledge and policy judgment can turn speed into supply.
To harness this potential, Home GP’s proposed learning cohorts could be designed to convert isolated pilots into shared public infrastructure. In the same way that the Human Genome Project transformed biomedical research through open datasets, protocols, and collaborative standards that supported downstream AI-enabled technologies like CRISPR, Home GP could create the data commons and institutional capacities cities need to support AI systems for optimizing housing supply.
This approach builds three core functions:
- “Vertical ladders” that help each city climb from basic data auditing and management to increasingly sophisticated tools and competencies in its chosen domain. For example, following its first public land use dataset, Atlanta was able to add more and more sophisticated layers of data (e.g., zoning and market data), which in turn enabled more sophisticated inference informing project development.
- “Horizontal branches” for peer exchange: The city that perfects a vacancy-detection model can lend that module as a template for others to adapt while borrowing, say, a permitting-analytics script in return.
- Cross-sector brokerage that connects city teams with technical experts, funders, and peer cities—facilitating the partnerships and resource flows essential for turning pilots into sustainable programs. This brokering function, exemplified by organizations like Community Solutions in the homelessness space, has proven critical for scaling local innovations.
The initial phase of Home GP would develop two pillars of support architecture: (1) shared computational tools (data definitions and standards, datasets, and ML models) that can support context-calibrated AI applications for data integration, pattern recognition, and forecasting housing supply; and (2) an institutional readiness playbook that helps any jurisdiction develop institutional capacities for data integration and AI system deployment.
- Shared computational tools: Initial Home GP efforts to develop shared resources might include data integration standards and tools, integrated city-level datasets (land use, zoning, market data), and historical time series data (on either actual as-built conditions or policies) that can be federated and used to train ML predictive models and develop applications. Innovative approaches to inference developed in specific contexts (e.g., Santa Fe’s water use proxy data for vacancies) could be made available for adaptation to other relevant contexts.
- Institutional readiness playbook: Room 11 discussions identified at least five institutional enabling conditions for harnessing the potential value of AI tools for which playbooks could be developed through a community-of-practice model:
a. Impact-focused mandate. A concrete, numeric, and time-bound housing supply target shared across city-level stakeholders and tied to public reporting—e.g., “add 20 percent versus baseline affordable units by 2030.”
b. Empowered cross-functional teams. Land bank, planning, IT, community liaison—everyone who touches parcels or permits at one table. As in Denver, Atlanta, and Santa Fe, often these mission-driven, cross-functional teams sit within the mayor’s office.
c. Minimum viable data foundations. A clean parcel table, zoning layer, and permitting feed that update on a regular cadence (a minimal illustrative schema is sketched after this list).
d. Technical literacy and readiness. Analysts, organizers, and dealmakers who can translate model outputs into negotiations with developers and residents.
e. Equity guardrails. Bias audits, open-source code, and transparent processes that protect against unintended harm. Denver has already begun developing internal equity review processes as part of its housing data modernization efforts, while Charlotte is focusing on transparent use of displacement data to monitor outcomes.
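To ground item (c) above, here is one hypothetical shape a “minimum viable” parcel record might take if a Home GP cohort were defining shared data standards; the field names and types are illustrative assumptions, not an adopted schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ParcelRecord:
    """Illustrative minimum viable parcel record for a shared city data standard."""
    parcel_id: str                 # normalized assessor parcel number
    jurisdiction: str              # city or county issuing the record
    owner_type: str                # e.g., "private", "municipal", "transit_agency"
    zoning_code: str               # local designation, mapped to a shared taxonomy
    lot_area_sqm: float
    existing_units: int
    allowed_units: Optional[int]   # None when zoning capacity has not been computed
    last_permit_date: Optional[date]
    vacancy_flag: bool             # e.g., derived from utility or postal data

# Example record, purely illustrative.
example = ParcelRecord(
    parcel_id="12345678",
    jurisdiction="Santa Fe, NM",
    owner_type="municipal",
    zoning_code="R-2",
    lot_area_sqm=850.0,
    existing_units=0,
    allowed_units=4,
    last_permit_date=None,
    vacancy_flag=True,
)
print(example)
```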
While cities serve as the natural starting point for this work, the long-term sustainability of these systems may require thinking beyond municipal boundaries. London’s success in collecting standardized data from roughly 120,000 development proposals annually stems from national legislation—the Town and Country Planning Act—that creates regulatory leverage for data collection. This demonstrates how state and federal policy frameworks can enable the data standardization that cities need. Similarly, Metropolitan Planning Organizations (MPOs) could coordinate cross-jurisdictional housing strategies, while state agencies might maintain regional databases and technical infrastructure at scale. The institutional readiness playbook should therefore anticipate how governance structures can evolve from city-led experiments to more distributed models that leverage policy frameworks and regional coordination.
3. Next steps and open considerations
Home GP—hosted initially by a working group of Room 11 organizations—could convene four to six U.S. cities as a proof-of-concept cohort over 12 months. The ultimate goal is to produce open dataset layers released under permissive licenses, reusable AI modules (e.g., for vacancy detection, land-assemblage scoring), and implementation playbooks covering procurement language, governance, and community engagement. A “story bank” could document use cases that demonstrate what cities can achieve with better data.
Cities would select an appropriate peer-learning format: for example, rotating between lead-developer and fast-follower roles can ensure that expertise diffuses rather than concentrates, while a parallel pilot approach might allow cities to adapt quickly to local conditions.
Critically, the working group would also consider the technical and institutional architecture requirements for data and model standardization. The National Zoning Atlas discovered that standardizing data across jurisdictions was among its consultants’ most technically complex projects. Building an integrated and scalable national or international infrastructure may require specialized partners and/or a unified platform with capabilities no single organization currently possesses.
To reach this proof-of-concept stage, a prior planning phase would likely be needed to develop a detailed implementation roadmap, including governance structure, data sharing protocols, and potential funding models. This phase could convene municipal Chief Technical Officers, or equivalent, and housing leaders from those cities—bringing together those with technical expertise, housing expertise, and local commitment to investment in housing innovation capacity.
4. Conclusion: Choosing to build together
Local leaders already possess many important raw ingredients—granular parcel data, courageous front-line teams, and a new generation of AI tools—to close information gaps that have long stymied housing production. The key need is for civic leaders, government partners, philanthropy, and investors to knit those ingredients into a durable, shared infrastructure— analogous to scientific open-data protocols. By treating data pipelines and AI models as shared public infrastructure—and by learning in public through a cohort architecture that amplifies shared competencies and brings relevant stakeholders together—cities can unlock the transformative potential of AI to close housing supply gaps and make homelessness rare and brief. The goal—50 cities each identifying and unlocking more homes than the currently projected new supply by 2030—is ambitious yet reachable.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).