The market wants a Federal Reserve interest-rate cut soon, but it doesn't want to need one. Wall Street economists are fixated on identifying tariff effects, yet stocks either celebrated or shrugged off three warm and sticky inflation readings this week, laboring to hold near record highs.

The S&P 500 immediately processed a moderately elevated consumer price index report Tuesday as solidifying the chances for a September cut by the Fed into a still-steady economy, logging on that day slightly more than what would become a 0.9% gain for the week. Notably, over the next three days — through a hot but noisy producer price index reading and a messy University of Michigan consumer-sentiment survey — the benchmark treated Tuesday's closing level just under 6,450 as a floor, testing it repeatedly and finishing the week right on it. The index has now logged a total return of 10% year to date, having more than recovered the near-20% tariff-panic collapse in April.

The divide between optimists and pessimists on the market entering the second half of August is whether this action seems judicious or oblivious. The starting point for determining such things should be the assumption that the market has it roughly right and isn't overlooking much of the important stuff. Whether or not the Fed "should" look through potential tariff-driven inflation, the market is trying its best to do so.

As Bespoke Investment Group summed things up at week's end: "Price has trended steadily higher in a tight range over the last few months, representing rather sanguine action even though much of the news flow has been negative. While politics plays a big role in the going narrative about the market and the economy, price ultimately tells the real story. Whatever negativity there is out there hasn't been nearly enough to interrupt the uptrend that's been in place since we made new highs in early summer."

The latest push higher has not been terribly emphatic, or broadly inclusive, allowing skeptics to withhold style points from the rally. Both the Dow Jones Industrial Average and the equal-weight S&P 500 tagged new highs this week before faltering a bit, a sign either of fatigue or late-summer indifference.

Big rotations

This week also saw some forced-seeming rotations, with the weakest laggards in the S&P 500 performing best and the Russell 2000 small-cap index making yet another lunge for its late-2021 highs on all the anticipated-rate-cut energy. Neglected groups such as health care showed some life, the likes of Johnson & Johnson breaking higher from a long slumber, even before news of Berkshire Hathaway's second-quarter purchase of UnitedHealth shares jolted that name higher on Friday.

These reinforcements allowed the overtaxed mega-cap AI glamour names to take a rest. Such rotations tend to support and refresh a rally while suppressing volatility, though eventually they can imply an exhaustion of leadership that may make the tape less stable than when the largest index weights are in firm control.

From a certain angle, it could appear odd that the bond market is simultaneously assigning more than an 80% chance of a Fed rate cut in six weeks when equities are at records, valuations are full, crypto is melting higher, credit spreads are drum-tight and buzzy IPOs are rocketing out of the gate.
Registering nominal appreciation for these flush conditions while still asking for monetary-policy help, Wall Street, in the words of the old Elvis Costello song, has "a mouthful of 'Much obliged' and a handful of 'Gimme.'"

This is less a contradiction than it is nuance. The jarring monthly payrolls miss of two weeks ago came after the Fed had last called the risks "balanced" between labor weakness and revived inflation, and the latest inflation upticks weren't enough to offset the job-market softness. Not to mention the relentless White House campaign to browbeat Fed Chair Jerome Powell to lower rates while auditioning dovish successors. And confidence in multiple rate cuts, beyond one in late September, is not as evident in market pricing.

Perhaps the market is conveying comfort with its own ability to hang tough even in the absence of a Fed move next month, casting a 25 basis-point reduction in short-term rates as a "Nice to have" rather than a "Need to get."

More tangibly, earnings forecasts for the remainder of the year are on the rise again, albeit with the AI-propelled tech players the largest contributors. When profits are growing, credit markets are calm and the next Fed move is a cut, stocks tend to have little trouble holding their valuations. Historically, when the Fed resumes an easing campaign after a pause of at least six months (December was the last cut), stocks have responded well over subsequent months, based on a study by Ned Davis Research.

There's no doubt the market sometimes takes credit in advance for a hoped-for future that might never arrive. It could turn out this is one of those moments. Economists at Morgan Stanley argued Friday that Powell's address at the coming week's Jackson Hole symposium is his last best chance to push back against market pricing of a September cut, believing that "the Fed would prefer to retain optionality and, if anything, we look for Powell's remarks at Jackson Hole to be similar to the message from July." In other words, noncommittal and data dependent.

One possible "tell" would come from the bond market's reaction to any data or rhetoric that makes a cut less likely. If the 10-year Treasury yield were to rush lower in the face of reduced perceived chances of a rate cut next week, it could be taken as bonds declaring a high risk of a policy mistake, with the Fed behind the curve. If not, equity markets should take heart.

One answer to those confused by the strength of the equity indexes in the face of still-elevated policy flux and potential stagflationary forces: Maybe markets are still burning off the last of the relief that burst forth after a worst-case scenario was priced in during the spring sell-off.

Similarities to 1998 and 2018?

Around the time that downturn was underway, I repeatedly noted the rich history of sharp, severe corrections that result from a sudden shock, stop just short of a 20% decline and are not associated with a recession. Precedents include the 1998 hedge-fund blowup, the 2011 U.S. debt-downgrade scare and the late-2018 tariff/Fed-mistake tumble. Fidelity's head of global macro Jurrien Timmer tracks such patterns, and the current recovery is in sync with those of 1998 and 2018 so far. Clearly this is a small sample and these are close to best-case paths from here, but the echoes are pretty distinct.

It's no longer possible to argue that most investors are still outright bearish or are fighting the market's four-month advance. Systematic and quantitative funds are back to very full equity exposures.
But there remains a lack of aggressive participation by the broader group of professional investors, by some measures. Deutsche Bank's composite investor positioning gauge is up to the 71st percentile over the past 15 years, not a very high reading when indexes are at a record. Saying that not all investors have pushed every chip they have into the market is not the same as arguing the market enjoys a wide margin of safety, of course. Seasonal factors remain challenging and the tape is probably due for a routine wobble of a few percent before long. But there's not much reason to argue that, if one arrived, it would be the start of the "Big One."
-
Engine Capital takes a stake in Avantor. Activist sees several ways to create value
Company: Avantor (AVTR)
Business: Avantor is a life science tools company and global provider of mission-critical products and services to the life sciences and advanced technology industries. The company’s segments include laboratory solutions and bioscience production. Within its segments, it sells materials and consumables, equipment and instrumentation and services and specialty procurement to customers in the biopharma and health care, education and government and advanced technologies and applied materials industries. Materials and consumables include ultra-high purity chemicals and reagents, lab products and supplies, highly specialized formulated silicone materials, customized excipients and others. Equipment and instrumentation include filtration systems, virus inactivation systems, incubators, analytical instruments and others. Services and specialty procurement include onsite lab and production, equipment, procurement and sourcing and biopharmaceutical material scale-up and development services.
Stock market value: $8.85 billion ($12.98 per share)
Activist: Engine Capital
Ownership: ~3%
Average Cost: n/a
Activist Commentary: Engine Capital is an experienced activist investor led by Managing Partner Arnaud Ajdler. He is a former partner and senior managing director at Crescendo Partners. Engine’s history is to send letters and/or nominate directors but settle rather quickly.
What’s happening
On Aug. 11, Engine sent a letter calling on Avantor’s board to focus on commercial and operational excellence, demonstrate organic growth, reduce costs, optimize the portfolio, refresh the board and use free cash flow to repurchase stock. Engine noted that the company can alternatively consider a sale.
Behind the scenes
Avantor is a market-leading distributor of life science tools and products for the life sciences and advanced technology industries. The company comprises two segments: laboratory solutions (LSS) (67% of revenue) and bioscience production (BPS) (33% of revenue). LSS is one of the top three life sciences distributors in the world (Thermo Fisher and Merck KGaA being the other two).
BPS is a supplier of high-purity materials and is the leading supplier of medical-grade silicones. Despite being one of the few scaled global life science tool distribution platforms, the company has vastly underperformed. At its 2021 investor day, management projected earnings per share above $2 for 2025, and at its 2023 investor day, management targeted an EBITDA margin exceeding 20%. In 2025, these figures stand at 96 cents per share and 11.8%, respectively. Consequently, Avantor's share price has declined 53.96%, 59.69% and 43.41% over the past 1-, 3- and 5-year periods, as of Engine's announcement Monday.
Engine believes that Avantor’s significant underperformance is a consequence of self-inflicted mistakes rooted in a flawed leadership team and framework. A complex matrix organizational structure and resultant lack of accountability have led to mass leadership turnover, including Avantor’s CEO, CFO and both segment leaders within the past three years, contributing to a dysfunctional decision-making process and inefficient employee structure.
The biggest casualty of this rocky management team is LSS, which has lost significant profitability and market share to its peers. Specifically, poor capital allocation decisions have destroyed significant value. In 2020 and 2021, Avantor spent a total of $3.8 billion to acquire Ritter, Masterflex and RIM Bio – companies that were notably purchased during the peak of the pandemic, when life sciences businesses were trading at exceptionally high multiples. Those deals were struck at an average of roughly 28x EBITDA, implying about $135 million of acquired EBITDA; valued at Avantor's current next-12-months multiple of roughly 10x, that EBITDA is worth only about $1.4 billion, implying over $2.4 billion in lost value on these acquisitions and contributing to the company's high leverage.
On top of that, despite LSS’s ongoing underperformance and the need for strong leadership, from June 2024 to April 2025, LSS was left without a leader due to a non-compete lawsuit involving the hiring of its new segment leader, underscoring the operational dysfunction that has been taking place at the company.
But perhaps the nail in the coffin for this management team and board is that despite this cascading set of errors and the internal knowledge of these forecasted losses, they were still given a way out. In 2023, the company was approached by Ingersoll Rand about being acquired at an estimated $25 to $28 per share, a 20% to 35% premium to the share price at the time, yet the board inexplicably rebuffed this approach. Today, Avantor trades at just under $13 per share.
Enter Engine, which has announced an approximately 3% position in Avantor and is urging the board to focus the organization on commercial and operational excellence, demonstrate organic growth, reduce costs, optimize the portfolio, refresh the board and use free cash flow to repurchase its own stock.
Engine points out that Avantor’s reported $6.8 billion in revenue was stretched across 6 million stock keeping units, while Thermo’s peer segment achieves similar revenue with less than half the SKUs, indicating a large opportunity, specifically within LSS, to optimize the portfolio by concentrating purchases to improve inventory turns, rebates and margins.
Divesting non-core assets is another way to optimize the portfolio. For BPS, certain facilities operate in periods of extended downtime, limiting growth. For LSS, subscale facilities in smaller geographies may be more valuable to a competitor, and the same goes for some of the assets purchased under Avantor’s aforementioned acquisition spree.
On the cost discipline side, Avantor's history of poor M&A and its low valuation should limit its accretive M&A opportunities, and while the company is on the path to reducing leverage below 3x, the market remains concerned that once this is achieved, the company will simply resume its costly M&A strategy. Engine argues that free cash flow should instead be allocated evenly to share repurchases and debt reduction.
Executive compensation is also a concern. In 2024, despite organic revenue declining by 2% and a 7% share price decline, the board awarded CEO Michael Stubblefield 110% of his target annual bonus, underscoring the need to align management incentives with shareholder value creation.
Engine believes that all of these changes would be best implemented with a comprehensive board refreshment. Adding directors with executive leadership, capital allocation and distribution expertise to replace board members who have overseen years of value destruction, likely targeting chairman Jonathan Peacock specifically, should signal to the market the start of a new chapter. Engine believes that if these changes are properly implemented, Avantor shares would be worth between $22 and $26 per share by the end of 2027.
As a secondary option, Engine suggests that if a standalone path does not appear viable then the board should consider selling the entire company or splitting LSS and BPS into separate entities.
When Avantor acquired VWR, which is now the core of the LSS business, it was valued at about 12x EBITDA, or $6.5 billion, and BPS peers trade at a median of 17x EBITDA. Neither of these businesses' valuations corresponds to what Avantor trades at, roughly 8x EBITDA, and it's possible that a strategic path could become the best way to unlock this value on a risk-adjusted basis. If this were to become the case, there is likely to be both private equity and strategic interest. New Mountain Capital previously owned Avantor prior to its IPO and still maintains an approximately 2% position. Strategics, like Ingersoll Rand, would likely be interested as well, especially at a significant discount to what they once offered. Engine believes that Avantor could sell for between $17 and $19 per share.
Overall, Engine makes a compelling case that major change is needed at Avantor and lays out a clear multipath plan forward. While some of these changes are already underway (a new CEO is set to start next week, and management has announced a $400 million cost-cutting initiative), the sheer volume of change required here is unlikely to be completed by Engine's 2027 estimate.
Engine's plan includes strengthening execution, instilling a culture of cost discipline, improving capital allocation, evaluating the company's portfolio, aligning executive compensation to shareholder value creation and refreshing the board. Engine's plan is the right one, but this is a company whose top line and operating margins have been in decline since 2022. Refreshing a board, instilling a new culture, reversing declining revenue and operating margins, and evaluating and executing asset sales, many of which cannot be done simultaneously, will likely take much longer than two years, particularly with the director nomination window not opening until Jan. 8. Moreover, the kind of change that Engine calls for here is generally not the kind of change that comes from an amicable settlement.
Ken Squire is the founder and president of 13D Monitor, an institutional research service on shareholder activism, and the founder and portfolio manager of the 13D Activist Fund, a mutual fund that invests in a portfolio of activist 13D investments. Viasat is owned in the fund.
-
Tumor-specific PET tracer imaging and contrast-enhanced MRI-based tumor volume differences inspection of glioblastoma patients
Datasets
We have compiled a comprehensive and well-structured dataset through the meticulous collection of image data obtained from the prestigious Institute of Radiotherapy and Nuclear Medicine (IRNUM) in Khyber Pakhtunkhwa, Pakistan. Utilizing the advanced and state-of-the-art GE SIGNA PET/MRI scanner, we acquired a comprehensive set of imaging data, comprising accurately captured and processed attenuation-corrected and reconstructed PET images, along with the invaluable gadolinium-enhanced T1-weighted images.
Our dataset encompasses a total of 207 meticulously selected digital imaging and communications in medicine (DICOM) images, forming the foundation for our rigorous image analysis endeavors. This intricate analysis process was meticulously carried out employing industry-standard tools such as MATLAB and the renowned imlook4d, ensuring the highest degree of precision and accuracy throughout the analytical pipeline.
To ensure a harmonious and coherent integration of the PET and T1-weighted images, a crucial preprocessing step involved the meticulous resampling of the PET matrix, precisely aligning it with the exact slice positions of the T1-weighted images. Subsequently, to facilitate standardized and consistent quantitative analysis, a vital step encompassed the normalization of voxel values to standardized uptake values (SUV), ensuring the establishment of a common metric for accurate assessment and comparison. This normalization process followed the formulation provided by Eq. (3), seamlessly integrating the essential mathematical framework into our analytical workflow.
While FLT and Gd have emerged as widely utilized PET tracer and MRI contrast agent, respectively, for the assessment of brain tumors, it is imperative to acknowledge their inherent limitations. Notably, FLT uptake, although serving as an indicator of cellular activity, lacks specificity when discerning malignant neoplasms, as it can be influenced by factors such as increased permeability resulting from blood-brain barrier disruption, which may also manifest in bone marrow and treatment-induced alterations. On the other hand, Gd, despite its established utility, necessitates careful chelation to ensure safe utilization in MRI due to its inherent toxicity. Moreover, differentiating between tumor tissues and surgical invasions poses a formidable challenge.
In this study, we employed 18F-fluorothymidine (18F-FLT) as the PET tracer for imaging glioblastoma multiforme (GBM). 18F-FLT is a thymidine analog that functions as a proliferation-specific radiotracer by targeting thymidine kinase-1 (TK1), an enzyme upregulated during DNA synthesis in actively dividing cells. FLT is phosphorylated intracellularly and retained within proliferating cells, making it a valuable marker for assessing tumor cell proliferation. This characteristic renders 18F-FLT a "tumor-specific" tracer, particularly advantageous for identifying active tumor regions beyond the contrast-enhancing zones visible on MRI. Unlike amino acid tracers such as 18F-FET or 11C-methionine, which accumulate based on increased transport in tumor cells, FLT provides more direct insight into mitotic activity, thereby offering complementary biological information that enhances the delineation of aggressive tumor subregions and contributes to more informed treatment planning and monitoring.
All imaging data were acquired using the GE SIGNA PET/MRI hybrid scanner, which integrates simultaneous PET and MRI acquisition. This system enabled exact co-registration between PET and MR images by capturing both modalities during a single imaging session without repositioning the patient. Consequently, issues related to alignment or time-lag variability between modalities were inherently minimized. The MRI component involved contrast-enhanced T1-weighted imaging, using gadolinium-based contrast agents to visualize the enhancing tumor core. Simultaneously, 18F-FLT PET images were acquired to assess tumor proliferation. For PET imaging, attenuation correction was automatically performed using MRI-based correction algorithms native to the hybrid scanner. No separate image smoothing was applied during acquisition.
To evaluate surgical efficacy and determine the presence of any residual tumor tissue, the integration of follow-up MRI with contrast enhancement within a 48-hour timeframe following surgery assumes paramount importance. Such subsequent examinations often reveal contrast enhancement attributed to surgical intervention and the effects of radiotherapy. As shown in Fig. 1, pre-surgical FLT-PET/MRI brain imaging highlights glioblastoma contrast enhancement on Gd-enhanced MRI and FLT-PET activity concentration. The PET image was normalized to create a Gd-enhanced T1-weighted MRI matrix. While FLT PET and contrast-enhanced MRI images of GBM offer invaluable insights and information, it is crucial to consider and account for the aforementioned limitations. Therefore, exploring alternative imaging methodologies and approaches is imperative to enhance the accuracy of diagnosis and improve the efficacy of treatment monitoring, ultimately augmenting patient outcomes and prognoses.
Fig. 1 Pre-surgical FLT-PET/MRI brain imaging showing glioblastoma contrast enhancement on Gd-enhanced MRI and FLT-PET activity concentration. PET image was normalized to create a Gd-enhanced T1-weighted MRI matrix.
The normalization process followed the formulation provided by Eq. (3), which can be expressed as:
$$SUV = \frac{I_{PET}}{C_t \times A_{inj} \times W \times D}$$
Where:
$SUV$ represents the Standardized Uptake Value. $I_{PET}$ denotes the measured intensity in the PET image. $C_t$ stands for the tissue concentration of the tracer. $A_{inj}$ represents the injected activity concentration. $W$ denotes the weight of the patient. $D$ represents the quantity of tracer injected, adjusted for decay.
In this equation, the measured PET image intensity is the pixel intensity value obtained from the PET image, the tissue concentration of the tracer refers to the concentration of the radiotracer within the tissue of interest, the injected activity concentration denotes the concentration of the injected radiotracer dose, and the patient's weight is the weight of the individual undergoing the PET scan. Lastly, the injected tracer quantity is decay-corrected to account for the radioactive decay of the tracer over time.
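For readers who wish to reproduce this step, the short Python sketch below computes a conventional body-weight SUV (tissue activity concentration divided by decay-corrected injected activity per gram of body weight), which is the quantity that Eq. (3) parameterizes with the symbols defined above. The study itself used MATLAB and imlook4d; the function name, array shapes and example values here are illustrative assumptions, not study data.

```python
import numpy as np

def suv_body_weight(pet_kbq_per_ml, injected_mbq, body_weight_kg,
                    minutes_post_injection, half_life_min=109.77):
    """Body-weight SUV: voxel activity concentration divided by the
    decay-corrected injected activity per gram of body weight."""
    # Decay-correct the injected dose to scan time (the role of D above);
    # 109.77 min is the physical half-life of 18F.
    decayed_mbq = injected_mbq * 0.5 ** (minutes_post_injection / half_life_min)
    dose_kbq = decayed_mbq * 1e3        # MBq -> kBq
    weight_g = body_weight_kg * 1e3     # kg  -> g (1 mL of tissue ~ 1 g)
    return pet_kbq_per_ml / (dose_kbq / weight_g)

# Example using the acquisition figures quoted later in the protocol section
# (~370 MBq of 18F-FLT, 60-minute uptake) for a hypothetical 70 kg patient.
pet = np.random.default_rng(0).uniform(0.5, 12.0, size=(64, 64, 32))  # kBq/mL, synthetic
suv = suv_body_weight(pet, injected_mbq=370, body_weight_kg=70,
                      minutes_post_injection=60)
print(round(float(suv.mean()), 3), round(float(suv.max()), 3))
```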
Digital image processing
ROI selection
In the realm of brain tumor imaging, the current study has made significant advancements in the methodologies employed for precise delineation of tumor regions within PET and MRI images through a series of preprocessing steps. Nonetheless, the task of accurately delineating regions of interest (ROIs) in PET images presents challenges due to the influence of the partial volume effect (PVE), which impacts the resolution of PET cameras and results in a low signal-to-noise ratio.
To mitigate these challenges, a meticulous approach was adopted wherein tumor regions were delineated on each transaxial slice of the PET and MRI scans. Notably, the MRI scans exhibited superior resolution and contrast properties, enabling enhanced visualization of tumor regions within the ROIs.
Partial Volume Effect (PVE) Correction Equation:
$$PVE_{Corrected_{PET}} = \frac{PET_{Intensity}}{PVE_{Factor}}$$
Where:
$PET_{Intensity}$ represents the intensity measured in the PET image. $PVE_{Factor}$ denotes the correction factor accounting for the partial volume effect.
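As an illustration of how such a correction can be applied in practice, the sketch below divides a measured ROI mean by a size-dependent recovery coefficient standing in for $PVE_{Factor}$. The recovery-coefficient table is a made-up placeholder; in practice it would be measured from a phantom acquisition on the same scanner.

```python
import numpy as np

# Placeholder recovery coefficients (RC) by object diameter; real values would
# be derived from a phantom scan on the same PET/MRI system.
rc_diameter_mm = np.array([10.0, 13.0, 17.0, 22.0, 28.0, 37.0])
rc_value       = np.array([0.35, 0.50, 0.65, 0.78, 0.88, 0.95])

def pve_corrected_uptake(measured_mean, object_diameter_mm):
    """Corrected = measured / PVE_Factor, with the factor interpolated from a
    size-dependent recovery-coefficient table (the equation above)."""
    pve_factor = np.interp(object_diameter_mm, rc_diameter_mm, rc_value)
    return measured_mean / pve_factor

# e.g. a ~15 mm lesion whose measured mean SUV is 2.1
print(round(pve_corrected_uptake(2.1, 15.0), 2))
```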
In order to establish initial delineation of PET ROIs, two distinct methods were devised. The first method involved excluding meninges and skull bone when tumors were situated in proximity to these areas. Conversely, the second method entailed comparing the affected cerebral hemisphere with its contralateral counterpart in instances where regions of heightened radiotracer uptake were not located in proximity to the skull bone or meninges. Subsequently, the delineated ROIs were subjected to adaptive thresholding techniques, refining the initial approximation. In the image analysis phase, a synergistic combination of both delineation methods was employed to improve the accuracy of tumor delineation and achieve robust ROIs for subsequent volume of interest statistics and standardized uptake value (SUV) analyses. This study underscores the significance of advancing tumor imaging techniques and highlights the potential advantages associated with utilizing a comprehensive amalgamation of methodologies to surmount limitations such as the PVE. By refining and optimizing these techniques, it becomes conceivable to enhance the precision and specificity of brain tumor diagnosis and treatment monitoring, thereby yielding improved patient outcomes.
Adaptive Thresholding Equation:
$$\text{Threshold} = \text{Mean} + k \times \text{Standard Deviation}$$
Where:
Mean represents the mean intensity within the delineated region. Standard Deviation denotes the standard deviation of intensity within the delineated region. The constant $k$ is a multiplier used to adjust the threshold.
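A minimal Python sketch of this rule, applied to the voxel values inside an initial rough ROI (the values and the choice of $k$ are illustrative assumptions, not study data):

```python
import numpy as np

def mean_plus_k_sigma_mask(roi_values, k=0.5):
    """Threshold = mean + k * standard deviation over the rough ROI, as in the
    equation above; returns the refined mask and the threshold used."""
    threshold = roi_values.mean() + k * roi_values.std()
    return roi_values >= threshold, threshold

rng = np.random.default_rng(1)
roi = rng.normal(loc=3.0, scale=1.0, size=500)   # synthetic uptake values in an ROI
mask, threshold = mean_plus_k_sigma_mask(roi, k=0.5)
print(f"threshold = {threshold:.2f}, voxels retained = {int(mask.sum())}")
```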
To enhance the precision of tumor delineation in PET/MRI imaging, it is essential to refine the imaging process by excluding structures such as the skull bone and meninges. As shown in Fig. 2, this refinement involves copying the PET delineation onto the MRI image to define initial boundaries (Fig. 2a and b). Subsequently, the MRI is employed to exclude the skull bone and meninges from the region of interest (ROI) (Fig. 2c). The delineation is further enhanced using adaptive thresholding techniques (Fig. 2d). These refinements are critical for achieving accurate boundary definitions and improving diagnostic precision.
Fig. 2 Refining PET/MRI imaging tumor delineation by excluding skull bone and meninges. PET delineation (a) copied onto MRI image (b) for defining boundaries. MRI used to exclude skull bone and meninges from ROI (c), followed by refinement with adaptive thresholding (d).
Contrast-enhanced MRI and FLT-PET imaging provide complementary insights into tumor characterization. As illustrated in Fig. 3, the blue areas in the contrast-enhanced MRI image (Fig. 3a) represent active tumor regions identified through increased uptake of the contrast agent. Meanwhile, the red delineated regions in the FLT-PET image (Fig. 3b) highlight tumors with high proliferative activity, offering a distinction from the surrounding tissues. This combined imaging approach enhances the accuracy of tumor detection and characterization, aiding in treatment planning and monitoring.
Fig. 3 (a) The blue areas in the image represent active tumor regions detected through contrast-enhanced MRI, where tumor tissues show increased contrast agent uptake. (b) The red delineated regions in the FLT-PET image indicate tumors with high proliferative activity, distinct from surrounding tissues.
For tumor volume delineation, distinct segmentation strategies were applied to PET and MRI modalities. The MRI tumor volumes were manually segmented on contrast-enhanced T1-weighted images using the imlook4d analysis platform by two independent expert radiologists with over five years of neuroimaging experience. To reduce inter-observer variability, consensus segmentation was used for final volume generation. The inter-observer agreement was assessed using the Dice Similarity Coefficient (DSC), yielding an average Dice score of 0.88 ± 0.04 across subjects. For PET imaging, an adaptive thresholding approach was used to delineate the metabolic tumor volume (MTV). Specifically, regions with uptake values exceeding 40% of the lesion's SUV_max were classified as tumor regions, in accordance with previously published clinical guidelines. This method has been shown to provide robust segmentation for 18F-FLT PET imaging in glioma patients. The thresholding algorithm was implemented in MATLAB and validated internally through comparison with manually segmented test cases. To quantify the spatial agreement between PET- and MRI-derived tumor volumes, the Dice Similarity Coefficient was computed for each subject. The average Dice coefficient observed across all patient examinations was 0.42 ± 0.09, consistent with prior reports indicating limited spatial overlap between functional (PET) and structural (MRI) imaging in GBM. This reinforces the complementary nature of the two modalities and highlights the clinical relevance of multimodal imaging in glioblastoma assessment.
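The sketch below illustrates the two operations described in this paragraph on toy data: a fixed-fraction (40% of SUV_max) threshold for the PET metabolic tumor volume, and the Dice similarity coefficient between the PET- and MRI-derived masks. The geometry and uptake values are invented for illustration only.

```python
import numpy as np

def mtv_mask(suv_volume, fraction=0.40):
    """Metabolic tumor volume: voxels whose uptake exceeds `fraction` of the
    lesion's SUV_max (40% here, as described in the text)."""
    return suv_volume >= fraction * suv_volume.max()

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Toy lesion: a bright sphere with a hotter core, plus a slightly larger
# stand-in for the contrast-enhancing MRI volume.
z, y, x = np.mgrid[:48, :48, :48]
r = np.sqrt((x - 24) ** 2 + (y - 24) ** 2 + (z - 24) ** 2)
suv = np.where(r < 10, 8.0, 1.0) + np.where(r < 5, 4.0, 0.0)
pet_mask = mtv_mask(suv, fraction=0.40)
mri_mask = r < 12
print(f"Dice(PET, MRI) = {dice(pet_mask, mri_mask):.2f}")
```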
Inter-observer consistency
To assess the reliability and reproducibility of manual tumor delineation, a rigorous examination was conducted, involving a subset of four randomly selected patients from the overall cohort. In this evaluation, three proficient individuals expertly delineated regions of interest (ROIs) in a total of 20 MRI examinations, thereby yielding 60 delineated datasets for the consistency analysis. The degree of similarity between the segmented tumor volumes was quantitatively assessed by means of the Dice index, a widely recognized metric for measuring spatial overlap between binary segmentations.
Dice Index Equation:
$$\text{Dice Index} = \frac{2 \times \left| ROI_1 \cap ROI_2 \right|}{\left| ROI_1 \right| + \left| ROI_2 \right|} \times 100\%$$
Where:
$ROI_1$ and $ROI_2$ represent the two segmented tumor volumes. $\left| ROI_1 \cap ROI_2 \right|$ denotes the intersection of the two segmented volumes. $\left| ROI_1 \right|$ and $\left| ROI_2 \right|$ represent the total volume of each segmented tumor. The Dice index quantifies the extent of spatial overlap between the two segmented tumor volumes, expressed as a percentage.
In order to gain deeper insights into the consistency of the manual delineation process, the coefficient of variation (CV) was employed to calculate the relative standard deviation of the segmented tumor volumes. This statistical measure allowed for a comprehensive analysis of the degree to which the delineated tumor volumes deviated from the mean volume. By examining the mean tumor volumes associated with different delineated ROIs for each examination, the CV served as a valuable indicator of the degree of clustering or dispersion of the data points around the mean volume.
Coefficient of Variation (CV) Equation:
$$CV = \frac{\text{Standard Deviation}}{\text{Mean Volume}} \times 100\%$$
Where:
Standard Deviation represents the standard deviation of the segmented tumor volumes, and Mean Volume denotes the mean of the segmented tumor volumes. The CV expresses the relative standard deviation of the segmented tumor volumes as a percentage of the mean volume.
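A compact sketch of how these two inter-observer measures can be computed from three observers' binary masks and their volumes (randomly generated masks and an assumed voxel volume, purely for illustration):

```python
import numpy as np
from itertools import combinations

def dice_percent(mask_a, mask_b):
    """Dice index between two binary segmentations, expressed in percent."""
    return 200.0 * np.logical_and(mask_a, mask_b).sum() / (mask_a.sum() + mask_b.sum())

def coefficient_of_variation(volumes_ml):
    """CV = standard deviation of the segmented volumes / mean volume, in percent."""
    v = np.asarray(volumes_ml, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Three observers' masks for one examination (random toy data).
rng = np.random.default_rng(2)
base = rng.random((32, 32, 32)) < 0.10
observers = [np.logical_or(base, rng.random(base.shape) < 0.01) for _ in range(3)]

pairwise_dice = [dice_percent(a, b) for a, b in combinations(observers, 2)]
volumes_ml = [m.sum() * 0.001 for m in observers]   # assumed voxel volume of 0.001 mL
print(f"mean pairwise Dice = {np.mean(pairwise_dice):.1f}%, "
      f"CV of volumes = {coefficient_of_variation(volumes_ml):.1f}%")
```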
This meticulous evaluation shed light on the subjective nature of manual segmentation in tumor delineation. It became evident that the manual approach introduced a certain level of variability, highlighting the need for more objective and standardized methods in the realm of tumor delineation. The findings underscored the importance of adopting quantitative and reproducible techniques to enhance the accuracy and reliability of tumor delineation processes, ultimately contributing to improved diagnostic and treatment outcomes.
Patient cohort and study design
This was a prospective observational study conducted at the Institute of Radiotherapy and Nuclear Medicine (IRNUM), Khyber Pakhtunkhwa, Pakistan, with institutional ethical approval obtained prior to data collection. All patients provided written informed consent in accordance with the Declaration of Helsinki. The study enrolled 22 patients diagnosed with histologically confirmed glioblastoma multiforme (GBM), who were undergoing initial staging and treatment planning. Inclusion criteria comprised: (1) newly diagnosed and treatment-naïve GBM patients; (2) availability of both pre-treatment PET and MRI scans; and (3) no prior radiotherapy, chemotherapy, or neurosurgical intervention except for biopsy. Patients were excluded if they had incomplete imaging data, motion artifacts, or were lost to follow-up. Ultimately, a total of 18 patients met all inclusion criteria and were analyzed. All imaging was conducted within a narrow time window, with PET and MRI scans performed within 48 h of each other to minimize temporal variations in tumor volume. Both imaging modalities were acquired before initiation of any radiochemotherapy or surgical resection to ensure a consistent baseline across the cohort. This approach eliminated potential confounding factors related to treatment-induced changes in enhancement or uptake patterns. Potential biases were mitigated by adopting strict inclusion criteria and standardized imaging protocols. However, we acknowledge that selection bias may persist, as patients undergoing both PET and MRI are often those with more complex or ambiguous presentations. This limitation is discussed in the concluding section. Nevertheless, the consistency in disease stage and imaging timelines across the included patients enhances the reliability of volume comparisons between modalities.
Segmentation performance evaluation
The inherent limitations of imaging systems, particularly their restricted spatial resolution, give rise to partial volume effects (PVEs) that engender spill-out phenomena in small objects or regions, as vividly depicted in Fig. 4. Consequently, when delineating ROIs for analysis, the pervasive influence of PVEs must be duly taken into account. In order to establish a reliable reference point for accurate measurements, a ground truth image was meticulously constructed by amalgamating PET images with predetermined standardized uptake values (SUVs) corresponding to distinct anatomical components, such as the tumor, regions with elevated tumor activity, skull bone, and adjacent background areas. These ground truth images serve as invaluable benchmarks, enabling precise quantification of object dimensions within the imaging domain.
Fig. 4 Constructed phantom with precise dimensions (left) and original PET image with blurred object boundaries and loss of activity in small objects due to PVE (right, white arrow).
Figure 4 serves as a visual illustration, juxtaposing a phantom image on the left and a conventional PET image on the right. Evidently, the PET image shows a discernible attenuation of activity in small objects such as the high-uptake tumor spots: their characteristic red coloration, clearly present in the phantom image, is largely absent from the PET image. Furthermore, a notable discrepancy in spatial resolution is discernible between these two images, underscoring the criticality of addressing PVEs within imaging systems to foster enhanced precision in ROI calculations and tumor delineation endeavors.
By actively mitigating the deleterious effects of PVEs through judicious methodological interventions, one can effectively ameliorate the impact of spatial resolution constraints, thereby bolstering the accuracy and reliability of ROI determinations. This imperative pursuit toward PVE-aware imaging practices holds considerable promise in refining tumor delineation methodologies, ultimately facilitating more robust and dependable analyses with significant ramifications for diagnostic and therapeutic decision-making.
The impact of partial volume effects (PVE) on imaging accuracy can be observed in a constructed phantom study. As shown in Fig. 4, the phantom with precise dimensions is depicted on the left, while the original PET image on the right demonstrates blurred object boundaries and a loss of activity in small objects (indicated by the white arrow) due to PVE. This highlights the need for advanced imaging techniques to mitigate PVE and improve resolution and activity quantification in PET imaging.
To appraise the efficacy and discriminatory capabilities of diverse segmentation algorithms, a comprehensive assessment was conducted utilizing a meticulously constructed phantom and the generation of two synthetic PET images that faithfully emulated the salient characteristics of a genuine PET image dataset. These synthetic images were engineered to exhibit high and low tumor-to-background ratios (TBR), while maintaining a pixel size of 0.4883 mm in both the x- and y-directions, along with a slice spacing of up to 1 mm in the z-direction. Notably, the "Uptake" values for FLT corresponding to each TBR configuration were determined and tabulated for reference in Table 1, serving as invaluable benchmarks for subsequent analyses [1].
To faithfully replicate the inherent blurriness that frequently plagues real-world PET images, a Gaussian smoothing filter was judiciously employed to deliberately introduce blur effects onto the synthetic images. Specifically, the input parameters for the full width at half maximum (FWHM) of the Gaussian smoothing filter were diligently set at 3.5 mm in both the x- and y-directions, with a corresponding value of 5 mm in the z-direction. Such deliberate manipulation effectively induced blurred boundaries, faithfully mirroring the commonplace phenomenon encountered in actual PET images [1].
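The blur step can be reproduced as follows: the FWHM values are converted to Gaussian sigmas in voxel units (using the stated 0.4883 mm in-plane pixel size and 1 mm slice spacing) and applied with a separable Gaussian filter. The phantom geometry and uptake values below are arbitrary stand-ins, not the Table 1 values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Voxel size and smoothing kernel from the text: 0.4883 mm in x/y, 1 mm in z,
# FWHM of 3.5 mm in x/y and 5 mm in z. sigma = FWHM / (2 * sqrt(2 * ln 2)).
voxel_mm = np.array([1.0, 0.4883, 0.4883])      # (z, y, x)
fwhm_mm = np.array([5.0, 3.5, 3.5])
sigma_voxels = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm

# Arbitrary digital phantom: background, tumor and a small high-uptake spot.
phantom = np.full((60, 128, 128), 1.0)
phantom[20:40, 40:90, 40:90] = 6.0
phantom[28:32, 60:66, 60:66] = 15.0
blurred = gaussian_filter(phantom, sigma=sigma_voxels)   # emulates PET resolution loss
print(sigma_voxels.round(2), round(float(blurred.max()), 2))
```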
By subjecting these meticulously crafted synthetic PET images to a comprehensive evaluation, profound insights were gleaned regarding the performance characteristics of the segmentation algorithms under scrutiny. Specifically, the algorithms' capacity to accurately discern tumor activity from background activity was scrutinized, shedding light on their respective strengths and limitations [1]. This rigorous evaluation process constitutes an invaluable contribution to the field, offering crucial guidance and empirical evidence to guide the selection and refinement of segmentation algorithms in pursuit of heightened accuracy and reliability in tumor delineation endeavors.
Table 1 FLT uptake values (in kBq/mL) for different regions used to create synthetic PET images with varying tumor-to-background ratios.

Within the confines of Table 1, a compilation of four distinct anatomical regions is presented, accompanied by their respective FLT uptake values crucial for the creation of PET images boasting divergent tumor-to-background ratios. These FLT uptake values are expressed in kilobecquerels per milliliter (kBq/mL), precisely reflecting the standardized units inherent to PET imaging. It is noteworthy that the high-uptake tumor spots, in particular, manifest significantly elevated FLT uptake values when juxtaposed with the remaining regions, possibly indicative of a more pronounced presence of metabolically active tumor tissue within these localized areas. Conversely, the values assigned to the skull bone and background regions are identical in both the high and low FLT uptake images, effectively indicating a relative paucity in terms of activity levels within these anatomical regions.
To comprehensively evaluate the segmentation accuracy of diverse thresholding methodologies, a meticulous analysis was conducted employing two distinct synthetic PET images thoughtfully engineered to encapsulate disparate tumor-to-background ratios, as comprehensively expounded upon in Sect. 3.2.4. In order to gauge the precision of the segmentation techniques under scrutiny, the Dice index, renowned for its efficacy in quantifying the degree of similarity between the segmented volume and the ground truth volume, was harnessed for evaluation purposes. In accordance with established conventions, the Dice index was calculated as twice the cardinality of the elements common to both the segmented and ground-truth volumes, divided by the sum of the cardinalities of the two volumes: $Dice(V_1, V_2) = \frac{2 \times \left| V_1 \cap V_2 \right|}{\left| V_1 \right| + \left| V_2 \right|}$ [17]. This metric definitively enabled the precise quantification of the segmentation accuracy achieved by each thresholding methodology under scrutiny, thereby facilitating an informed assessment of their relative efficacy.
Methods of PET volume segmentation
Medical Image segmentation, a fundamental aspect of medical image processing, assumes paramount significance in facilitating subsequent computational analysis. Its underlying objective entails partitioning images into distinct regions or segments to enable efficient handling of individual components. Among the manifold techniques employed for this purpose, thresholding emerges as a prominent approach, particularly adept at converting grayscale images into binary counterparts. This transformative process involves assigning foreground or background values to pixels whose intensity surpasses or equals a predetermined threshold value, thereby generating a binary mask that selectively designates pixels of interest as 1 while relegating others to a value of 0.
Within the scope of the present research endeavor, the paramount focus revolves around accurately delineating FLT-PET target cross-sections and effectively characterizing tumor volumes. To achieve this crucial objective, a comprehensive evaluation encompassing three distinct thresholding segmentation methodologies was undertaken, meticulously tailored to the specific nuances of the PET image dataset at hand. Notably, this discerning analysis incorporated two conventional thresholding techniques, esteemed for their established efficacy, alongside an adaptive thresholding technique, celebrated for its capacity to dynamically adapt to the unique characteristics inherent to the dataset under scrutiny. By judiciously applying these diverse segmentation approaches, the research aimed to achieve precise and robust tumor volume definition, thus paving the way for subsequent comprehensive analysis and interpretation of the FLT-PET image data.
The conventional thresholding methods employed a uniform threshold value applied to all pixels within the image, assuming a homogeneous response across the entire image domain. However, the efficacy of this approach is contingent upon various image attributes such as texture, noise characteristics, and the employed image reconstruction techniques. Recognizing the inherent limitations of a global thresholding strategy, an innovative and advanced adaptive thresholding technique was incorporated, wherein distinct threshold values were assigned to individual pixels based on their specific image properties.
This adaptive thresholding methodology, distinguished by its intrinsic adaptability to account for spatial variations in image illumination, proved to be remarkably robust and well-suited for the PET dataset under investigation. Unlike fixed threshold approaches, which faltered in the face of inherent spatial differences in image illumination, the adaptive thresholding method adeptly responded to such variances, facilitating accurate and refined segmentation outcomes. Notably, the adaptive thresholding algorithm was applied to the initial rough delineation of the regions of interest (ROIs), as visually depicted in Fig. 5.
Fig. 5 Segmentation accuracy comparison using traditional and adaptive thresholding methods on a transaxial slice.
In order to ascertain the fidelity and precision of the segmentation achieved through the diverse segmentation methods, a pivotal metric known as the Dice index was leveraged. This quantitative index, extensively employed in the field of image segmentation, facilitated a comprehensive evaluation of the segmentation accuracy vis-à-vis the ground truth-constructed tumor volume. By quantifying the overlap and concordance between the segmented regions and the true tumor volume, the Dice index served as an indispensable tool in assessing the reliability and robustness of the employed image segmentation techniques.
The implementation of adaptive thresholding, an intricate technique rooted in the concept of local mean intensity surrounding each pixel, was realized in MATLAB through the utilization of the integral image method. A critical aspect of this approach involved determining the optimal sensitivity factor, which played a pivotal role in delineating the pixels deemed to belong to the foreground. To determine the most suitable sensitivity factor, an exhaustive evaluation of the PET imaging dataset was undertaken, employing two distinct PET images, each encompassing regions of varying TBR. Notably, these PET images were accompanied by preliminary delineations of the regions of interest (ROIs) to guide the analysis.
Within this evaluation framework, a comprehensive range of sensitivity factors spanning the spectrum of 0.1 to 0.9 was systematically explored. The accuracy and fidelity of tumor boundaries were meticulously assessed using a robust metric known as the Dice index. Through the rigorous analysis, the adaptive thresholding method emerged as the paradigmatic choice, yielding the most precise and reliable segmentation outcomes for the lesions under scrutiny.
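The selection procedure can be sketched as follows: a candidate sensitivity factor is swept from 0.1 to 0.9, a local-mean (box-average, i.e., integral-image style) threshold is applied at each setting, and the factor yielding the highest Dice index against a reference mask is retained. The thresholding rule below is a simple stand-in for the MATLAB implementation actually used, and the image is synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_mask(image, sensitivity, window=61):
    """Stand-in adaptive rule: a pixel is foreground when it exceeds its local
    (box-average) mean scaled by (2 - sensitivity); higher sensitivity therefore
    lowers the threshold and classifies more pixels as foreground."""
    local_mean = uniform_filter(image.astype(float), size=window, mode="nearest")
    return image > local_mean * (2.0 - sensitivity)

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Synthetic test image with a known ground-truth square lesion.
rng = np.random.default_rng(3)
truth = np.zeros((96, 96), dtype=bool)
truth[30:60, 30:60] = True
image = np.where(truth, 5.0, 1.0) + rng.normal(0.0, 0.15, truth.shape)

scores = {round(s, 1): dice(adaptive_mask(image, s), truth)
          for s in np.arange(0.1, 1.0, 0.1)}
best = max(scores, key=scores.get)
print(f"best sensitivity = {best}, Dice = {scores[best]:.2f}")
```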
It is worth emphasizing that the accurate definition of the ROIs assumes paramount significance, as even minute errors or inaccuracies within the delineated margins can impart substantial repercussions on crucial metrics such as SUV and TLA-based metrics. Thus, with utmost meticulousness, the SUV parameters were diligently calculated employing the well-established Eq. (3). Furthermore, the computation of the TLA further enriched the analytical repertoire, affording a more profound and nuanced comprehension of the tumor’s intricate characteristics.
These noteworthy findings not only contribute valuable insights to the existing body of knowledge pertaining to PET imaging data and tumor volume estimation through image segmentation but also hold substantial promise in informing and shaping future endeavors within this domain. Segmentation accuracy plays a critical role in the precise delineation of tumor boundaries in imaging studies. As demonstrated in Fig. 5, a transaxial slice comparison highlights the difference between traditional and adaptive thresholding methods. The adaptive approach shows improved accuracy in defining tumor regions, underscoring its potential for enhancing imaging precision and clinical decision-making.
Image features extraction
Sophisticated algorithms were meticulously crafted to extract intricate features from the PET and MRI image datasets, facilitating a comprehensive analysis of the distinctive characteristics and commonalities in tumor features. A central focus of this investigation was to evaluate the degree of tumor volume overlap (referred to as Voverlap) observed in both modalities. Precisely quantifying Voverlap entailed assessing the segmented volume shared by the VPET and VMR images. Furthermore, the analysis encompassed discerning the distinct volumes unique to each modality, denoted as VonlyPET and VonlyMR, respectively. It is crucial to comprehend that PET imaging enables the acquisition of information at an earlier stage in tumor development compared to MRI. Thus, VonlyPET signifies the tumor exhibiting active growth discernible solely through PET imaging, not yet manifest in MRI. Similarly, VonlyMR represents the active and necrotic tumor volume exclusively visualized through MRI imaging. By judiciously amalgamating the segmented volumes attributed to either VPET or VMR or both, the combined tumor volume portrayed in the MR and PET images can be ascertained. This comprehensive information significantly contributes to a more profound understanding of tumor growth dynamics and provides crucial insights into potential avenues for treatment optimization.

The segmentation of tumor volumes across PET and MR modalities provides valuable insights into spatial overlap and modality-specific differences. As illustrated in Fig. 6, the segmented volumes for PET and MR are displayed, highlighting their spatial overlap (Voverlap) as well as unique volumes identified exclusively by each modality (VonlyPET and VonlyMR). This comparison underscores the complementary nature of PET and MR imaging for comprehensive tumor characterization.
Fig. 6 Illustrates the segmented volumes for PET and MR modalities, their spatial overlap as Voverlap, and the unique volumes as VonlyPET and VonlyMR.
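The volume decomposition described above reduces to simple Boolean operations on the two binary masks, scaled by the voxel volume. A brief sketch follows (toy spherical masks; the voxel dimensions reuse the 0.4883 × 0.4883 × 1 mm geometry quoted earlier):

```python
import numpy as np

def volume_breakdown(pet_mask, mri_mask, voxel_ml):
    """Shared and modality-specific tumor volumes: V_overlap, V_onlyPET,
    V_onlyMR and the combined (union) volume, all in millilitres."""
    to_ml = lambda m: float(m.sum()) * voxel_ml
    return {
        "V_overlap":  to_ml(np.logical_and(pet_mask, mri_mask)),
        "V_onlyPET":  to_ml(np.logical_and(pet_mask, ~mri_mask)),
        "V_onlyMR":   to_ml(np.logical_and(mri_mask, ~pet_mask)),
        "V_combined": to_ml(np.logical_or(pet_mask, mri_mask)),
    }

# Toy masks: two partially overlapping spheres standing in for V_PET and V_MR.
z, y, x = np.mgrid[:40, :64, :64]
pet_mask = (x - 30) ** 2 + (y - 30) ** 2 + (z - 20) ** 2 < 12 ** 2
mri_mask = (x - 34) ** 2 + (y - 30) ** 2 + (z - 20) ** 2 < 10 ** 2
voxel_ml = 0.4883 * 0.4883 * 1.0 / 1000.0       # mm^3 per voxel -> mL
print({k: round(v, 2) for k, v in volume_breakdown(pet_mask, mri_mask, voxel_ml).items()})
```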
Figure 7 exhibits a meticulously selected axial slice of the brain, ingeniously captured utilizing MRI with contrast enhancement. This striking visualization distinctly showcases the derived tumor volumes, meticulously delineating the distinct regions exclusively revealed through PET imaging (referred to as PET-only), the unique areas solely visible in MR imaging (referred to as MR-only), and the captivating overlapping tumor volumes that are prominently observed in both modalities. This comprehensive illustration provides a profound visual representation of the intricate interplay between PET and MR imaging in capturing and characterizing tumor volumes, thereby enriching our understanding of the multifaceted nature of these tumors.
Fig. 7 Axial slice of the brain obtained using contrast-enhanced MR, showing PET and MRI-derived tumor volumes: Voverlap, VonlyPET, and VonlyMR.
Statistical analysis
All statistical analyses were performed using MATLAB and SPSS. To quantitatively compare tumor volumes derived from PET and MRI, we applied both descriptive and inferential statistical tests. A paired t-test was used to evaluate whether statistically significant differences existed between PET-derived and MRI-derived tumor volumes across matched examinations. For datasets violating normality assumptions (as verified using the Shapiro-Wilk test), the Wilcoxon signed-rank test was employed as a non-parametric alternative. A two-tailed significance level of p < 0.05 was considered statistically significant. The relationship between PET and MRI tumor volumes was assessed using Pearson’s correlation coefficient (r). In cases of non-normal data distribution, Spearman’s rank correlation (ρ) was used. However, we acknowledge that correlation reflects association and not agreement.
To assess the level of agreement and identify any systematic bias between PET and MRI tumor volumes, we conducted a Bland-Altman analysis. Bland-Altman plots were generated to illustrate the mean difference (bias) between modalities and the 95% limits of agreement, defined as the mean difference ± 1.96 standard deviations. This approach provided a visual and statistical measure of proportional bias and potential outliers. For each comparison, 95% confidence intervals were reported alongside p-values to support the interpretation of effect sizes and measurement variability.
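The statistical workflow described in this subsection can be expressed compactly with scipy.stats; the sketch below strings together the normality check, the paired test, the correlation and the Bland-Altman summary. The eight volume pairs are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy import stats

def compare_volumes(v_pet, v_mri, alpha=0.05):
    """Paired PET-vs-MRI volume comparison: Shapiro-Wilk on the differences,
    paired t-test or Wilcoxon, Pearson or Spearman correlation, and
    Bland-Altman bias with 95% limits of agreement."""
    v_pet, v_mri = np.asarray(v_pet, float), np.asarray(v_mri, float)
    diff = v_pet - v_mri

    normal = stats.shapiro(diff).pvalue > alpha
    test = stats.ttest_rel(v_pet, v_mri) if normal else stats.wilcoxon(v_pet, v_mri)
    corr = stats.pearsonr(v_pet, v_mri) if normal else stats.spearmanr(v_pet, v_mri)

    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return {"normal_differences": bool(normal), "p_value": float(test.pvalue),
            "correlation": float(corr[0]), "bias_cm3": float(bias),
            "limits_of_agreement": (float(bias - half_width), float(bias + half_width))}

# Illustrative volume pairs (cm^3) for a handful of examinations.
v_pet = [22.1, 30.4, 15.7, 41.2, 27.8, 19.5, 33.0, 25.6]
v_mri = [19.0, 27.9, 13.1, 38.5, 24.2, 17.0, 30.8, 22.4]
print(compare_volumes(v_pet, v_mri))
```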
Given the limited cohort size, we treated statistical outcomes as preliminary. We explicitly note the limited statistical power and advocate for further validation using larger patient cohorts in future studies. As shown in Fig. 8, the Bland-Altman analysis illustrates a mean bias of 3.0 cm3, indicating that PET tends to slightly overestimate tumor volume compared to MRI. As summarized in Table 2, statistical analysis revealed a significant difference between PET- and MRI-derived tumor volumes, with PET consistently measuring larger volumes.
Fig. 8 Bland-Altman plot showing the agreement between PET- and MRI-derived tumor volumes, with a mean bias of 3.0 cm3 and limits of agreement indicating PET tends to slightly overestimate tumor size.
Table 2 Summary of statistical comparison between PET- and MRI-derived tumor volumes.

Imaging protocol and PET tracer details
All imaging was conducted using a hybrid GE SIGNA PET/MRI scanner at the Institute of Radiotherapy and Nuclear Medicine (IRNUM), enabling simultaneous acquisition of PET and MRI data to eliminate registration errors and temporal discrepancies. PET and MRI scans were performed on the same day during each of the four scheduled examination sessions.
We employed the radiotracer 3'-deoxy-3'-[18F]fluorothymidine (FLT), an 18F-labeled thymidine analog, as the PET tracer. FLT is a tumor-specific proliferation marker that accumulates in actively dividing cells by mimicking endogenous thymidine, a DNA synthesis substrate. Due to its uptake via thymidine kinase-1 (TK1), which is upregulated in proliferating glioma cells, FLT-PET provides a reliable assessment of tumor growth and cellular activity. For the PET acquisition, approximately 370 MBq (10 mCi) of 18F-FLT was intravenously administered, followed by a 60-minute uptake period prior to scanning. PET data were acquired in 3D mode for 20 min and corrected for attenuation using MRI-based attenuation maps. MRI scans included a T1-weighted post-contrast sequence (Gd-enhanced) with the following parameters: repetition time (TR) = 500 ms, echo time (TE) = 15 ms, slice thickness = 1 mm, field of view (FOV) = 256 × 256 mm. The contrast agent used was gadolinium-DTPA, administered at a dose of 0.1 mmol/kg body weight immediately prior to scanning. All PET and MRI images were preprocessed using MATLAB and imlook4d, including intensity normalization, motion correction, and resampling to a common voxel size. For PET-MRI co-registration, the hybrid system enabled automatic alignment, and further manual adjustments were made when necessary to ensure spatial fidelity.
-
Pakistan’s Oil and Gas Production Drops to Lowest in 20 Years
Pakistan has reported its lowest oil and gas output in over two decades. Data from fiscal year 2025 (FY25) reveals a significant double-digit decline in both crude oil and natural gas production.
Industry experts attribute the downturn to structural inefficiencies, regulatory challenges, and an oversupply of imported regasified liquefied natural gas (RLNG), which has displaced local production.
According to a report by Topline Securities, Pakistan’s hydrocarbon production saw a sharp decline in FY25. Crude oil output dropped by 12% year-on-year (YoY), while natural gas production fell by 8% YoY. The decline was even more pronounced in the final quarter, with oil production decreasing by 8% quarter-on-quarter (QoQ) and 15% YoY, and gas production contracting by 7% QoQ and 10% YoY.
The oversupply of RLNG played a significant role in this decline. A policy shift redirected industrial users from natural gas to the national power grid, while the government imposed an “off-grid levy” of Rs791 per million British thermal units (mmbtu) on captive gas consumption, raising the total cost to Rs4,291/mmbtu.
This made gas-based electricity generation more expensive than grid power, discouraging industrial gas use and reducing demand for domestic production.
Oil production averaged 62,400 barrels per day (bpd) in FY25, with major fields experiencing declines ranging from 3% to 46%. The Tal Block, which contributes around 17% of Pakistan’s total oil output, recorded a steep 22% YoY decline in the fourth quarter. Within the block, production from the Maramzai and Mardankhel fields plummeted by 54% and 52% YoY, respectively.
Natural gas production also faced significant challenges, averaging 2,886 million cubic feet per day (mmcfd) in FY25. Major fields like Qadirpur and Nashpa saw sharp contractions in the fourth quarter, with output falling by 36% and 34% YoY, respectively. Even the Sui field, Pakistan’s largest gas producer, reported consistent declines.
Topline Securities estimates that the increased reliance on imported fuels added over $1.2 billion to Pakistan’s foreign exchange burden in FY25. This reliance not only inflates the import bill but also leaves the country vulnerable to global fuel price volatility and potential supply disruptions.
The outlook for FY26 remains bleak, with oil production projected to hover between 58,000 and 60,000 bpd and gas output expected to range from 2,750 to 2,850 mmcfd. Without significant policy changes or new investments in exploration and production (E&P), FY26 could mark the third consecutive year of declining hydrocarbon output.
However, there is a glimmer of hope. The government is set to renegotiate its long-term RLNG supply agreement with Qatar in March 2026. Industry experts believe that more flexible contract terms could create room for domestic E&P companies to increase production, provided they prioritize field maintenance and capital investment. Balancing imported LNG with sustainable local production will be crucial for safeguarding Pakistan’s energy security and stabilizing its foreign exchange reserves.
Dubai chocolate desserts hit state fairs as a hot-ticket item
At the upcoming State Fair of Texas, confectioner Stephen El Gidi will offer up his own Dubai chocolate-inspired dessert — a base of rich Belgian chocolate and a pistachio spread layered like a sweet lasagna over cheesecake in a cup.
It’s definitely a departure from the typical corn dogs and cotton candy.
From the West Coast to Middle America, dessert creators at state fairs are hawking their own confections based on Dubai chocolate, a milk chocolate shell filled with creamy pistachio, tahini and crispy kataifi, a Middle Eastern pastry.
The offerings derive inspiration from the original Dubai Chocolate, a bar created in 2021 by Sarah Hamouda, the founder of Fix Dessert Chocolatier, an online confectionery shop in Dubai.
The bar quickly went viral, with influencers touting its gooey, crunchy goodness and Hamouda saying she was selling 100 bars per minute. Now, Dubai chocolate-inspired desserts have hit the masses and are popping up at a handful of state fairs for the first time this year.
The Minnesota State Fair will offer a Dubai chocolate strawberry cup in late August. Wisconsin just featured its version of the Dubai chocolate bar. And the Orange County Fair in Southern California debuted a Dubai chocolate brownie last month. In May, the L.A. County Fair also sold a Dubai chocolate strawberry cup.
Stephen El Gidi with his creation, Dubai Chocolate Cheesecake. (State Fair of Texas)
El Gidi, who owns Drizzle Cheesecakes, based in a Dallas suburb, is originally from Libya and moved to the U.S. in 2021 in hopes of becoming a business owner. He said he aims to sell between 15,000 and 20,000 cups this year at the state fair.
“I became an entrepreneur because of my father, who is also a business owner. He inspired me to be my own boss,” El Gidi told NBC News.
Stores like Trader Joe’s, Costco and even mall kiosks have featured their own versions of the chocolate bars for prices of around $3.99 and up. There’s even a Dubai chocolate pistachio shake at some Shake Shack locations featuring pistachio frozen custard with kataifi and a dark chocolate shell for $11.04.
Currently, people in Dubai can order Hamouda’s bar, which she calls “Can’t get knafeh of it,” from her online shop or through a delivery service. It costs a little over $18 per bar. Additionally, chocolate aficionados can find the bar at Dubai International Airport’s duty-free store in Terminal 3.
In May, product sales in the Confectionary category of Dubai Duty Free reached $20.2 million, up 81% thanks in part to Dubai chocolate, according to a press release from the company.
The versions of Dubai chocolate people are buying in the U.S. are more a replication of the flavor profile than the real thing, says Kristie Hang, a food journalist based in Southern California’s San Gabriel Valley.
These products are more wallet-friendly, selling for around $15 at grocery stores, and they’re made using standard ingredients like milk chocolate, strawberries and nut butters.
Kristie Hang, food and culture expert, tries a viral Dubai chocolate cup. (Kristie Hang via Instagram)
True Dubai chocolate, Hang says, is an artisanal dessert that’s made in small batches.
“The pistachios are imported from Turkey and the chocolate is special chocolate with edible gold,” she said.
There’s an element of luxury and craftsmanship to authentic Dubai chocolate, Hang added, noting a Dubai chocolate-covered strawberry confection would have only the finest, perfectly shaped strawberries dipped in high-quality Belgian or dark chocolate, paired with kataifi bits and pistachio cream from finely ground pistachios.
“It’s definitely a mass fad at this point, but it’s far removed from what the original Dubai chocolate was intended for, which was an exclusive luxury item. Now, it’s being marketed as a very generic thing that anyone can get,” said Hang.
From left, Arielle Federico, Chellsie Duarte, Billy Duarte and Bianca Tamondong said they love trying food trends and posting about them. (Liz Rojas)
Texas-based food reviewer Zain Mohammed said he’s not a fan of the Dubai chocolate trend. Mohammed, who was born in Chicago, raised in Saudi Arabia and now reviews restaurants in Houston, said he thinks the proliferation of the dessert is glossing over the culture and the important role food plays in family.
“There’s more to Dubai than just Dubai chocolate. I grew up in Saudi Arabia, and the Arab culture is very family-oriented and Arab hospitality is very unmatched.”
He said he’s also worried about people benefiting from the trend without appreciating the culture. “I believe that there is cultural appropriation because of the fact that so many people are doing it — they are latching onto the trend and then advertising it as their own.”
Bianca Tamondong, a college student who tried the Dubai brownie dessert from the Mom’s Bakeshoppe stand at the OC Fair, said she thought it was a winning combo. “I’ve tried so many other Dubai chocolate desserts before, such as the actual chocolate bar, ice cream variations and Dubai chocolate-covered strawberries. Ten dollars honestly seemed like such a steal since many other Dubai chocolate desserts cost $15.”
“The pistachios balanced out the sweetness of the brownie perfectly,” she said.
Confection connoisseur Dominic Palmieri sells a Dubai chocolate strawberry cup at the OC Fair.
“It has all the components of the Dubai chocolate. However, we’re putting chocolate on top of the strawberries, and it’s got silky cream chocolate that doesn’t harden,” said Palmieri.
It took more than three months to source enough pistachio cream for the fairs he’s participating in, due to a pistachio shortage and high demand. He projects securing around 2,000 gallons of pistachio cream and more than 10,000 pounds of raw chocolate this year.
It was rare for anyone to find Dubai chocolate in 2024. “You had to find it in specialty chocolate shops, sweet shops or different places that were doing dessert,” says Palmieri.
Now, it’s everywhere, he says.
“When you go to the fair, you’ll go get your corn dog, turkey leg, funnel cake, and you’ll get your Dubai chocolate strawberry cup. This one is quickly becoming a fan favorite,” Palmieri said.
Gold price in Pakistan continues downward trend
The price of gold in Pakistan as well as in the global market dropped for the sixth consecutive day on Saturday, reported 24NewsHDTV Channel.
According to the All Pakistan Sarafa Gems and Jewellers Association, the price of 24-karat gold per tola saw a drop of Rs900 on Saturday to reach Rs356,200. Similarly, the price of 10 grams of gold dropped by Rs771, taking it to Rs305,384.
The global market saw a more modest move, with the price of gold per ounce declining by $9 to $3,335. Silver prices in Pakistan also declined, with a tola of 24-karat silver dropping to Rs4,031 and 10 grams of silver to Rs3,455.
Exploring the role of lipid metabolism related genes and immune microenvironment in periodontitis by integrating machine learning and bioinformatics analysis
Identification of differentially expressed lipid metabolism related genes between periodontitis and healthy individuals
The workflow of this study is displayed in Fig. 1A. A total of 74 differentially expressed lipid metabolism-related genes (DELMRGs) were identified through differential expression analysis. Among these, 44 genes were upregulated, while 30 genes were downregulated (Fig. 1B). The hierarchical clustering heatmap shows the detailed expression patterns of these DELMRGs (Fig. 1C). We then investigated the biological functions of the upregulated and downregulated LMRGs using GO and KEGG enrichment analyses. GO enrichment analysis revealed that the upregulated LMRGs were primarily involved in the phospholipid metabolic process, lipid catabolic process, regulation of phosphatidylinositol 3-kinase activity, regulation of inflammatory response, and regulation of lipid kinase activity (Fig. 1D). In contrast, the downregulated LMRGs were mainly associated with membrane lipid metabolism, membrane lipid biosynthesis, sphingolipid metabolism and biosynthesis, and fatty acid metabolism (Fig. 1F). KEGG enrichment analysis indicated that the upregulated LMRGs were significantly involved in the chemokine signaling pathway, phospholipase D signaling pathway, cytokine-cytokine receptor interaction, NF-kappa B signaling pathway, and ether lipid metabolism (Fig. 1E). Meanwhile, the downregulated genes were primarily associated with arachidonic acid metabolism, fatty acid metabolism, and the biosynthesis of unsaturated fatty acids (Fig. 1G).
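As a rough illustration of the kind of differential-expression filter that yields up- and down-regulated gene sets like these, the sketch below applies a per-gene t-test with Benjamini-Hochberg correction and a log2 fold-change cutoff to synthetic data. The thresholds, sample sizes, and test choice are assumptions; the paper's actual pipeline, and the downstream GO/KEGG enrichment typically run with dedicated tools, are not reproduced here.

```python
# Generic differential-expression sketch on synthetic data (not the paper's pipeline).
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes, n_case, n_ctrl = 200, 25, 15
case = rng.normal(8.0, 1.0, size=(n_genes, n_case))   # hypothetical log2 expression, periodontitis
ctrl = rng.normal(8.0, 1.0, size=(n_genes, n_ctrl))   # hypothetical log2 expression, healthy
case[:30] += 1.5                                      # spike in some artificially "up" genes

log2fc = case.mean(axis=1) - ctrl.mean(axis=1)
t_stat, p_val = stats.ttest_ind(case, ctrl, axis=1)
p_adj = multipletests(p_val, method="fdr_bh")[1]      # Benjamini-Hochberg correction

up = (p_adj < 0.05) & (log2fc > 1)                    # thresholds are illustrative only
down = (p_adj < 0.05) & (log2fc < -1)
print(f"{up.sum()} upregulated and {down.sum()} downregulated candidate genes")
```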
Fig. 1 Identification of differentially expressed lipid metabolism-related genes (LMRGs) in periodontitis and healthy individuals. (A) The workflow of this study. (B–C) Volcano plot (B) and hierarchical clustering heatmap (C) illustrating the 74 differentially expressed LMRGs in periodontitis and healthy individuals. (D–E) Significant terms of GO (Gene Ontology) enrichment analysis (D) and KEGG (Kyoto Encyclopedia of Genes and Genomes) enrichment analysis (E) of the 44 upregulated LMRGs. (F–G) Significant terms of GO enrichment analysis (F) and KEGG enrichment analysis (G) of the 30 downregulated LMRGs.
Hub lipid metabolism related genes identification by multiple machine learning approaches
Next, three machine learning algorithms were employed to identify the most relevant diagnostic LMRGs for periodontitis. Random Forest (RF) identified 10 LMRGs based on importance scores, including FABP4, PLEKHA1, CWH43, CLN8, PDGFD, NEU1, HMGCR, CYP24A1, and OSBPL6 (Fig. 2A, B). Using LASSO regression, 16 LMRGs were identified after selecting the appropriate penalty parameter (Fig. 2C, D), with the coefficients of these genes shown in Fig. 2E. In the XGBoost model, the top 10 important LMRGs were CWH43, RORA, HSD11B1, PLEKHA1, ADGRF5, CLN8, FABP4, LYN, ENPP2, and OSBPL6 (Fig. 2F). Ultimately, five hub LMRGs were identified across these three algorithms: FABP4, CWH43, CLN8, ADGRF5, and OSBPL6 (Fig. 2G). We then explored the expression levels of these genes between periodontitis and healthy samples in the GSE16134 dataset. ADGRF5 and FABP4 were significantly upregulated in periodontitis samples compared to healthy tissues (Fig. 2H). Conversely, the expression levels of CWH43, CLN8, and OSBPL6 were significantly higher in healthy tissues than in periodontitis samples (Fig. 2H).
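A rough Python analogue of this three-algorithm selection scheme is sketched below, using scikit-learn for the Random Forest and the L1-penalized (LASSO-style) logistic regression, the xgboost package for XGBoost, and a set intersection in place of the Venn overlap. The data are synthetic and the top-10 cutoffs are assumptions; the paper's own analysis appears to follow R/glmnet conventions such as lambda.1se, which are not replicated here.

```python
# Hedged sketch of three-algorithm feature selection followed by an intersection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegressionCV
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 74))                     # 200 samples x 74 DELMRGs (synthetic)
y = rng.integers(0, 2, size=200)                   # 1 = periodontitis, 0 = healthy
names = np.array([f"gene_{i}" for i in range(X.shape[1])])

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
rf_top = set(names[np.argsort(rf.feature_importances_)[::-1][:10]])

lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=10,
                             random_state=0).fit(X, y)
lasso_sel = set(names[np.abs(lasso.coef_.ravel()) > 0])

xgb = XGBClassifier(n_estimators=300, max_depth=3, eval_metric="logloss").fit(X, y)
xgb_top = set(names[np.argsort(xgb.feature_importances_)[::-1][:10]])

hub = rf_top & lasso_sel & xgb_top                 # with real data, the overlap would be the hub genes
print(sorted(hub))
```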
Fig. 2 Identification of hub lipid metabolism-related genes (LMRGs) as promising diagnostic biomarkers for periodontitis through machine learning framework. (A–B) Variable selection in the Random Forest algorithm. (A) Line plot illustrating the relationship between the number of trees and the misclassification rate and out-of-bag (OOB) error in the Random Forest model. (C–E) Variable selection in the least absolute shrinkage and selection operator (LASSO) regression model. (C) The variable selection process during LASSO regression, with the horizontal axis representing the penalized parameter lambda (log-transformed) and the vertical axis showing the coefficients of each variable. (D) The 10-fold cross-validation (CV) of the LASSO model. The blue line represents the value of lambda and the corresponding variable number with non-zero coefficients selected by lambda.1se, while the red line represents the value of lambda and the corresponding variable number with non-zero coefficients selected by lambda.min. (E) Bar plot displaying the coefficients of the LMRGs identified by LASSO regression. (F) The importance score of the top 10 variables identified by the XGBoost model. (G) Venn plot illustrating the common LMRGs identified by the three machine learning algorithms. (H–I) Facet boxplots (H) and receiver operator characteristics (ROC) curves (I) to demonstrate the expression pattern and diagnostic ability of key LMRGs in periodontitis. (J–K) Boxplots (J) and ROC curve (K) showing the expression pattern and diagnostic ability of the LMRGs score in periodontitis.
Subsequently, ROC curves were used to evaluate the diagnostic ability of these LMRGs in distinguishing between periodontitis and healthy participants. The results demonstrated that these genes performed well in differentiating periodontitis from healthy individuals (Fig. 2I). The AUC value for CWH43 was 0.898 (0.848–0.947), followed by OSBPL6 (0.894 [0.849–0.939]), CLN8 (0.864 [0.810–0.918]), ADGRF5 (0.855 [0.808–0.903]), and FABP4 (0.850 [0.794–0.906]) (Fig. 2I). To further enhance the diagnostic ability of these genes for periodontitis, we developed an LMRGs score using the following formula: LMRGs score = 1.137 × FABP4 − 1.113 × CWH43 − 1.701 × CLN8 + 2.403 × ADGRF5 − 1.642 × OSBPL6. As expected, periodontitis patients had higher LMRGs scores than healthy individuals (Fig. 2J). Integrating these genes into one index further improved the diagnostic ability, with the LMRGs score yielding excellent discrimination between periodontitis and healthy samples, with an AUC of 0.954 (0.919–0.988) (Fig. 2K). We also validated the expression levels and diagnostic ability of the identified hub LMRGs and LMRGs score in an external validation cohort, with consistent results observed (Figure S1A–D). Finally, we validated the protein expression level of these hub genes in healthy and periodontitis human samples through IHC staining. The results indicated that CLN8, CWH43, and OSBPL6 were upregulated in periodontitis tissues, while ADGRF5 and FABP4 were upregulated in healthy tissues (Figure S2). Taken together, these results indicate that these LMRGs and their integrated index, the LMRGs score, have promising diagnostic ability for periodontitis.
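Applying the published coefficients to an expression matrix and scoring discrimination with an ROC AUC could look like the sketch below. The expression values and labels are synthetic placeholders; only the coefficients come from the text, and scikit-learn's roc_auc_score is assumed as the evaluation routine.

```python
# Sketch: compute the LMRGs score from the reported coefficients and evaluate AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

coef = {"FABP4": 1.137, "CWH43": -1.113, "CLN8": -1.701,
        "ADGRF5": 2.403, "OSBPL6": -1.642}

rng = np.random.default_rng(2)
n = 120
labels = rng.integers(0, 2, size=n)                       # 1 = periodontitis, 0 = healthy
# Synthetic expression: genes with positive coefficients drift up in cases, and vice versa
expr = {g: rng.normal(size=n) + np.sign(w) * labels for g, w in coef.items()}

score = sum(w * expr[g] for g, w in coef.items())          # LMRGs score per sample
print("AUC:", round(roc_auc_score(labels, score), 3))
```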
Immune cell infiltration landscape and immune function status in patients with periodontitis
Next, we investigated immune cell infiltration and immune function in periodontitis using ImmuCellAI and GSVA. In the GSE16134 cohort, we observed infiltration by multiple immune cell types, including CD4+ T cells, Th17 cells, follicular helper T cells (Tfh), nTreg, CD4 naive T cells, iTreg, Th1, Th2, dendritic cells (DC), exhausted T cells, and CD8+ T cells (Fig. 3A). Consistent results were observed in the GSE10334 cohort (Fig. 3D). We then examined the association between LMRG scores and the infiltration levels of different immune cells in periodontitis and healthy tissues. A significant positive correlation was found between LMRG scores and the infiltration of CD4+ T cells, CD4 naive T cells, nTreg, Th1, Th17, Tfh, and exhausted T cells in periodontitis tissues in the GSE16134 dataset (Fig. 3B). In contrast, LMRG scores were significantly negatively correlated with the infiltration of macrophages and cytotoxic T cells (Fig. 3B). Similar correlation patterns were identified in healthy participants in the GSE16134 dataset (Fig. 3C) and all participants in the GSE10334 dataset (Fig. 3E, F). Boxplots demonstrated that periodontitis tissues had significantly higher infiltration levels of CD4+ T cells, Tfh, and Th17 cells than healthy tissues in both the GSE16134 and GSE10334 datasets (Fig. 3G, H). However, the infiltration levels of gamma delta T cells and macrophages in periodontitis were significantly decreased compared to healthy individuals (Fig. 3G, H).
Fig. 3 Immune cell infiltration and immune function estimation in periodontitis. (A–C) Hierarchical clustering heatmap (A) and correlation heatmap (B–C) illustrating the diverse immune cell infiltration status and their correlation with lipid metabolism-related genes (LMRGs) in periodontitis and healthy individuals in the GSE16134 cohort. (D–F) Hierarchical clustering heatmap (D) and correlation heatmap (E–F) illustrating the diverse immune cell infiltration status and their correlation with LMRGs in periodontitis and healthy individuals in the GSE10334 cohort. (G–H) Boxplots demonstrating the differences in immune cell infiltration estimated by ImmuneCellAI between periodontitis and healthy individuals in the GSE16134 (G) and GSE10334 (H) datasets. (I) Significant altered biological processes in periodontitis identified by Gene Set Enrichment Analysis (GSEA). ns no significance; * P-value < 0.05; ** P-value < 0.01; *** P-value < 0.0001.
GSVA was performed to investigate immune function in periodontitis. The results indicated that antigen presentation processes were significantly activated in periodontitis, as evidenced by increased scores for APC co-stimulation, HLA, and aDCs (Figure S3A, B). This further triggered T cell activation and inflammation responses in periodontitis, as indicated by the increased scores of multiple T cell subsets and the inflammation-promoting score in GSVA (Figure S3A, B). GSEA was then employed to investigate significantly altered biological processes and pathways between periodontitis and healthy participants. Consistent with the GSVA analysis, multiple immune-related processes and inflammation response-related pathways were significantly activated in periodontitis, as evidenced by the activation of inflammatory response, TNF-α signaling via NF-κB, IL2-STAT5 signaling, IL6-JAK-STAT3 signaling, and chemokine signaling pathways (Fig. 3I and Figure S4A). Furthermore, epithelial-mesenchymal transition (EMT), angiogenesis, apoptosis, and metabolism pathways were also significantly activated in periodontitis (Fig. 3I and Figure S4A). Collectively, these results revealed that the immune microenvironment plays pivotal roles in periodontitis.
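For readers unfamiliar with how GSEA-style enrichment scores are computed, the sketch below shows the weighted running-sum enrichment score on a synthetic ranked list. It is a minimal illustration only: the gene set and scores are invented, and the permutation-based significance testing used in real GSEA/GSVA workflows is omitted.

```python
# Minimal weighted running-sum enrichment score (the core of GSEA), synthetic data.
import numpy as np

rng = np.random.default_rng(6)
genes = [f"g{i}" for i in range(1000)]
scores = np.sort(rng.normal(size=1000))[::-1]                    # ranked metric, high to low
gene_set = set(rng.choice(genes[:200], size=30, replace=False))  # set skewed toward the top

in_set = np.array([g in gene_set for g in genes])
weights = np.abs(scores) * in_set
hit = np.cumsum(weights / weights.sum())          # weighted "hits" running fraction
miss = np.cumsum((~in_set) / (~in_set).sum())     # "misses" running fraction
running = hit - miss
es = running[np.argmax(np.abs(running))]          # enrichment score = max deviation
print(f"ES = {es:.2f}")
```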
scRNA-seq analysis identifies cell subtypes and cell communication in periodontitis
Considering the limitations of bulk RNA-seq data in reflecting the immune microenvironment components in patients, we performed scRNA-seq analysis to decipher the immune microenvironment of periodontitis at single-cell resolution. After quality control and dimensionality reduction, we identified 12 distinct clusters in both periodontitis and healthy gingival tissue (Fig. 4A and Figure S4B). The top five marker genes of each cluster were visualized using bubble plots (Fig. 4B) and heatmaps (Figure S4C). Ultimately, using the SingleR algorithm, we identified eight main cell types among the 12 clusters: plasma cells, monocytes, multipotent progenitor cells (MPP), fibroblasts, keratinocytes, CD8+ effector memory T cells (Tem), microvascular (mv) endothelial cells, and class-switched memory B cells (Fig. 4C). We then validated the expression levels of key LMRGs at the single-cell level. Our analysis revealed that ADGRF5 and FABP4 were primarily expressed in fibroblasts and microvascular endothelial cells, while CLN8 was expressed across all single-cell types. In contrast, OSBPL6 was not expressed in CD8+ Tem cells, plasma cells, or monocytes (Fig. 4D and Figure S5A-C). Subsequently, we mapped the cell communication network among these cell types, identifying strong interactions mediated by multiple signaling pathways, including MHC-II signaling, CXCL signaling, and ADGRE5 signaling (Fig. 4E). These pathways are crucial in antigen processing and the inflammatory response. Further investigation of these pathways revealed that HLA and CD4 primarily contributed to signaling from various cells to MPP (Figure S6A), CXCL12 and CXCR4 contributed to signaling from plasma cells to other cells (Figure S6B), and ADGRE5 and CD55 contributed to signaling from diverse cells (Figure S6C).
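A generic Python sketch of the quality-control, normalization, dimensionality-reduction, and clustering steps described above is given below using scanpy on synthetic counts. The thresholds and clustering resolution are assumptions, and the SingleR-based annotation and cell-communication analysis used in the paper are not reproduced.

```python
# Generic scRNA-seq clustering sketch (not the paper's exact pipeline).
# Requires the leidenalg package for sc.tl.leiden.
import numpy as np
import scanpy as sc
import anndata as ad

rng = np.random.default_rng(4)
counts = rng.poisson(1.0, size=(3000, 2000)).astype(np.float32)            # cells x genes, synthetic
counts[:1000, :100] += rng.poisson(3.0, size=(1000, 100)).astype(np.float32)       # crude structure
counts[1000:2000, 100:200] += rng.poisson(3.0, size=(1000, 100)).astype(np.float32)
adata = ad.AnnData(counts)
adata.var_names = [f"gene_{i}" for i in range(adata.n_vars)]

sc.pp.filter_cells(adata, min_genes=200)            # basic quality-control cutoffs (assumed)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)        # library-size normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=1000)
sc.tl.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=30)
sc.tl.leiden(adata, resolution=0.8)                 # clusters, analogous to the 12 reported
sc.tl.tsne(adata)                                   # t-SNE embedding as in Fig. 4A
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")   # per-cluster markers
```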
Fig. 4 Single-cell RNA sequencing analysis in periodontitis to decipher its immune microenvironment landscape. (A) The utilization of the t-distributed stochastic neighbor embedding (tsne) algorithm identified 12 clusters in gingival tissues from individuals with periodontitis and healthy individuals. (B) Dot plot visualization of the top five marker genes for each cluster. (C) Annotation of eight cell types in gingival tissues from individuals with periodontitis and healthy individuals. (D) Feature plots displaying the expression levels of hub lipid metabolism-related genes (LMRGs) across the cell subtypes. (E) Cell communication network and the contribution of MHC-II, CXCL, and ADGRE5 signaling among different cell types. (F) Ridge plots displaying the indicated biological pathway scores estimated by single-cell Gene Set Enrichment Analysis (GSEA) among different cell types.
Single-cell GSEA was performed to investigate the most relevant biological functions of different cell subtypes in periodontitis. The results suggested that monocytes and MPPs primarily contributed to the inflammatory response and immune reaction in periodontitis by activating pathways such as the inflammatory response, TNF-α signaling via NF-κB, IL6-JAK-STAT3 signaling, IL2-STAT5 signaling, complement, and reactive oxygen species pathways (Figure S6D). Fibroblasts were mainly involved in processes like EMT, myogenesis, and Wnt/β-catenin signaling (Figure S6D). Keratinocytes and mv endothelial cells were engaged in cell proliferation by activating the E2F targets, MYC targets V1, and MYC targets V2 pathways (Figure S6D). We further performed single-cell GSEA to investigate the significantly altered biological processes of these cell types in periodontitis. Consistent with previous findings, the results indicated that the IL6-JAK-STAT3 signaling, inflammatory response pathway, TNF-α signaling via NF-κB, and interferon gamma response pathways were significantly activated in monocytes (Fig. 4F and Figure S6D). Meanwhile, EMT was exclusively activated in fibroblasts. The process of adipogenesis was activated in multiple cell types in periodontitis, except for CD8+ Tem and class-switched memory B cells (Fig. 4F and Figure S6D). Taken together, these findings highlight the important role of monocytes in periodontitis through the activation of inflammatory response pathways.
Deciphering the heterogeneity among different monocyte clusters in periodontitis
Given the important role and heterogeneity of monocytes in periodontitis, we further investigated the role of different subsets of monocytes by clustering them into distinct groups. Using the t-SNE algorithm, we identified five monocyte clusters, with the top five marker genes for each cluster summarized in Fig. 5A. Based on the unique expression patterns of these marker genes (Figure S7), we annotated the monocyte clusters as follows: C1 (APOE+SELENOP+ monocytes), C2 (CD1E+ monocytes), C3 (GZMB+PTGDS+ monocytes), C4 (CLEC9A+ monocytes), and C5 (IGHA1+ monocytes) (Fig. 5B). Trajectory analysis revealed three differentiation states among these monocyte subtypes. C1 represented the earliest stage of differentiation, while C4 appeared at the terminal stage (Fig. 5C). Additionally, BEAM analysis was conducted to investigate differentially expressed genes (DEGs) before and after branch point 1. We identified the top 100 DEGs, categorizing them into three subtypes as shown in Fig. 5D.
Fig. 5 Deciphering the heterogeneity of monocytes in periodontitis through single-cell analysis. (A) Dot plot visualization of the top five marker genes for each monocyte cluster. (B) Annotation of five monocyte subtypes based on their unique marker genes. (C) Cell trajectory and pseudotime analysis among monocyte subtypes. (D) Differentially expressed genes (DEGs) of branch 1 along the pseudotime were hierarchically clustered into three subclusters, and their biological functions were estimated by Gene Ontology (GO) enrichment analysis. (E) Hierarchical clustering heatmap showcasing the activity of diverse biological processes among different monocyte subtypes.
GO enrichment analysis demonstrated that DEGs in cluster 1 were mainly involved in neutrophil activation, inflammatory response, and immune response. DEGs in cluster 2 were primarily associated with signal peptide processing, while DEGs in cluster 3 were linked to antigen presentation and T cell-mediated cytotoxicity (Fig. 5D). We further performed GSVA to explore the significantly altered biological processes in these monocyte subtypes. As illustrated in Fig. 5E, clusters C1 and C2 were predominantly involved in immune and inflammatory response pathways (Fig. 5E). Additionally, the EMT process was activated in C1 and C2 monocytes (Fig. 5E). In contrast, cell proliferation, adipogenesis, fatty acid metabolism, and oxidative phosphorylation were activated in C3, C4, and C5 monocytes (Fig. 5E). Single-cell GSEA was conducted to validate these findings, with consistent results observed (Figure S8). Taken together, different monocyte subtypes have distinct roles in periodontitis.
Factoring the Cost of Carbon into Long-Term Decision-Making
INTRODUCTION
Decarbonization is a major investment theme for long-term investors, given the worldwide shift already under way in energy infrastructure, transportation, agriculture, business models, and the built environment. Annual climate finance surpassed $1 trillion in 2021 and has been climbing since. Renewable energy generation will meet 35 percent of global demand by 2025; that mix was just 19.5 percent in 2010. Investment flows contrast with the global cost of climate change damages, which could range between $1.7 trillion and $3.1 trillion per year by 2050.
Climate-related risks will have far-reaching implications for the long-term investment portfolios of sovereign wealth funds, pension funds, insurance companies, and endowments. A recent survey of 200 asset owners found that 56 percent plan to increase climate investment over the next one to three years, and 46 percent said that navigating the transition is their most important investment priority over the same period.
Despite momentum, progress feels incremental. Today’s volatile political and geopolitical context has upended climate and industrial policy, creating significant uncertainty for long-term investors. A reshuffling of global trade and supply chains also means a reshuffling of where emissions occur. To be clear, the climate transition was never assumed to progress in a linear fashion. At times, decarbonization pathways may appear to stagnate or even move in the wrong direction. While various regions are at different points in implementing climate policies, greater policy uncertainty has the effect of widening potential outcomes.
Investors are probably wondering, where do we go from here?
Successful long-term investing is built around having a future view of risks and opportunities, including how climate policy and regulations will affect future investment, and in particular, how economies move to price carbon emissions. This paper will provide a fresh look at how asset owners can approach risks and opportunities in the climate transition, by focusing on carbon price risk and how the cost of carbon could affect portfolios over the long run. The end of this chapter contains toolkits for asset owners and investors to facilitate analysis of transition risk in portfolios.
Straight talk on climate risks and opportunities
Imagine a market where the trajectory is unclear. It’s changing rapidly, and the instruments and rules are new and different. There may be disparate prices in different jurisdictions for the same instrument, while opportunities for making money seem overlooked. Is it high yield bonds or mortgage-backed securities in the 1980s? Hedge funds or emerging markets in the 1990s?
Carbon markets today evoke that feeling – they’re nascent and perhaps even inefficient. They possess the ingredients that attract market participants and support the search for alpha, through unique insights, due diligence, and data analysis.
Carbon markets and the cost of emissions within them are not a requirement or a commitment made by investors; rather, they serve as a financial input in computing future risk and return opportunities. Just as interest rates, inflation, growth, and other key macroeconomic factors serve as building blocks for evaluating asset allocation, real estate deals, infrastructure, and private equity, the cost of carbon can serve as an input in the investment process: specifically, a view on where carbon markets and policy are going, and how that could affect the cash flows of investments over various time horizons.
The entire investment industry is built around having a future view on macro-economic factors, yet there seems to be a prevailing thought in financial markets that the cost of emissions is far off in the future. But the future can sneak up on you, and no investor likes to be caught off guard. Investors may be happy collecting interest and dividends now, but if they’re unprepared, portfolios will not be resilient to shocks in carbon markets.
Asset owners are in a unique position
Asset owners that are ahead of the curve assess the cost of carbon as a financial input for transition plans, strategy, and risk assessment, regardless of their location, political views, or whether there is a net-zero commitment or not. Asset owners are at the center of a “value chain” of investment activity: they invest in companies and assets directly that generate emissions or hire external asset managers to invest on their behalf.
Asset owners are currently navigating a period where parts of their portfolios may reflect the economics of carbon emissions, while others may not. In Decarbonizing Long-term Portfolios, FCLTGlobal research found that an adaptable, top-down approach to decarbonization provides long-term investors with multiple levers for addressing climate risk inside their investment portfolios while fulfilling their purpose and capitalizing on new opportunities.
The companies that asset owners invest in may or may not use internal carbon prices (ICPs) or shadow carbon pricing to reflect the cost of emissions over time, or they may be subject to regulated carbon prices. A recent FCLTGlobal study found that 14 percent of MSCI ACWI companies reported using an internal carbon price. This was up from just 5 percent five years ago.
The asset managers that asset owners hire serve as a bridge between portfolio companies and owners, on issues like strategy, company engagement, investment selection, and due diligence. Effective investment mandate design holds managers accountable to the owner’s expectations and views on climate change. Institutional Investment Mandates: Anchors for Long-Term Performance provides tools for asset owners and managers to create mandates that align both parties on long-term goals, including sample mandate terms that consider climate objectives.
Investors face a patchwork of global policy and regulations
Today, the world is at different points on internalizing the economics of emissions. One study places the global average price of carbon at around $23 a ton in 2023, while it is estimated that just 24 percent of global emissions are covered by direct or indirect pricing measures. Analysis from the World Bank Group includes measures such as direct carbon taxes, fuel taxes, and emissions trading schemes (ETS), net of fuel subsidies (exhibit 1). The majority of emissions pricing has been through fuel taxes, with ETS a small but growing portion.
Policy sets the backdrop to climate action, yet regulatory treatment of carbon emissions is uneven around the world. Furthermore, carbon markets exist as mandatory (compliance) schemes as well as voluntary programs. Economists debate the merits of implementing quantity-based instruments (ETS), or price-based (fuel taxes) policies, without a strong consensus on an optimal approach for adoption in the current context. Green incentives and decarbonization subsidies, like tax credits for clean energy investment, have also been a major factor in shaping markets. As a result, investors face the challenge of investing in markets at various stages of developing their domestic policies for pricing carbon emissions. Are we in the midst of a “carbon carry trade” where capital is drawn to regions where emissions are relatively underpriced compared to regions where the cost of carbon is internalized by markets?
It is a fast-evolving landscape, with policy taking steps forwards and backwards. Yet momentum has been building in local markets. The global value of traded carbon dioxide (CO2) permits reached a record $948.75 billion in 2023, while 12.5 billion metric tons of carbon permits changed hands. Exhibit 2 shows regions around the world that have implemented various policies to capture the cost of carbon in emissions.
Even since this graphic was produced in May 2024, major markets have implemented pricing mechanisms. Brazil established the Brazilian Greenhouse Gas Emissions Trading System in November 2024, to be phased in over the next five to six years. Indonesia has been advancing cap-and-trade with plans to expand coverage to industrial sectors.
Investors are closely following developments in the European Union and its Emissions Trading System, which is set to decrease the quantity of free emissions allowances, leading to potentially higher prices over time. New mechanisms are emerging, such as carbon tariffs, which require importers to pay the same carbon prices as domestic producers in order to reduce “carbon leakage.” The first carbon-tariff system, the EU Carbon Border Adjustment Mechanism (CBAM), took effect in October 2023 for reporting purposes and becomes chargeable in January 2026. CBAM has the potential to significantly influence market development because it gives other countries an incentive to price emissions: why would a country let other governments collect revenues it could be collecting on local production exported to the EU?
Understanding the barriers to evaluating the cost of carbon for portfolios and assets
FCLTGlobal convened members for a working group in Q4 2024, with asset owners and asset managers sharing their experiences on the theme of climate change and managing transition and physical risks in portfolios. Member CEOs met at FCLTGlobal’s annual Summit in January 2025, where the subject of addressing future climate costs featured on the agenda. CEOs were concerned with carbon pricing realities and how low carbon prices translate into weak investment signals. They also highlighted the benefits of scenario planning based on climate science fundamentals – actual levels of CO2 in the atmosphere and projections going forward. Several things stood out from these discussions, including:
- Uncertain financial impact – Regulatory uncertainty and the timing of policy development complicate short- and long-term capital allocation decisions. Individual company responses, as well as decarbonization pathways and how emissions costs will affect future earnings, also contribute to financial uncertainty. An investor may determine that there is little impact on future earnings if, for instance, companies can pass higher carbon costs through to consumers. Pass-through depends strongly on market structure and the supply-demand equilibrium.
- Low confidence in financial assumptions – Uncertainty around the trajectory and level of carbon prices leads to lower confidence in risk/return assumptions. Low confidence in assumptions means that the output of models, scenario analysis, and due diligence does not significantly influence investment decision making. There is no single quantitative model or approach that fully captures carbon risk in portfolios. Analyzing a single real estate asset, such as a building, is relatively straightforward; a business, with complex supply chains, shipping routes, and energy sources, is much more complicated.
- Focus on immediate financial performance – Some investors fall into the trap of prioritizing short-term financial performance. While carbon prices are comparatively low in the current environment, assuming they will stay low forever could expose portfolios to future shocks.
- Cost and complexity – While disclosures have been improving, data collection and management, as well as the complexity of measuring emissions, remain a challenge for investors. Not all companies globally report their emissions, requiring estimation methods in some instances. At the same time, there is arguably too much data, which makes it harder to focus on what is material and impactful to a business.
STAYING AHEAD OF THE CURVE
The cost of carbon, reflected as a price per ton of emissions, is a critical financial input.
In a recent survey, 68 percent of financial professionals said they believe climate risk is mispriced in the stock market. Investors clearly grapple with uncertainty about the future path of climate change, the energy transition, policy parameters, and adaptation by firms and households. Market pricing is also hampered by a lack of historical data, consistent methodologies, standardized metrics, and comparable disclosures around climate risks.
There is a real risk of underestimating – or overestimating – the cost of emissions and its impact on asset prices, but that doesn’t mean it should be left out of the formula. Effective processes place less of an emphasis on accurately forecasting future carbon costs, and more on developing fundamental analysis around policy, market development, and company response, with an objective of embedding those views in decision making. Few investors accurately forecast the path for interest rates, yet that doesn’t mean investors shouldn’t develop views on how central banks may set monetary policy.
Not all emissions are the same
Measuring portfolio emissions is a daunting task. A number of asset owners and asset managers have developed methodologies to measure their carbon footprints – there are plenty of good practices for investors to follow in this regard. Yet there are gaps in that reporting, in asset classes like private equity where data may not be available, or other blind spots to consider. Ask an investor how confident they are in their assessment of carbon footprint and you may get a caveated answer.
In developing decarbonization scenarios and cost scenarios, investors could make some simplifying assumptions (Exhibit 3). Unabatable emissions, considered to be mostly part of Scope 1 or Scope 3, are those that are too costly to reduce or eliminate, or for which there is not yet a technological solution. These are the emissions that a company ends up paying for, affecting earnings. Scope 2 emissions would eventually be mitigated through the decarbonization of energy systems, with potential cost pass-through. As one investor put it at FCLTSummit 2025, “Would you rather own 1 ton of Scope 1 emissions, 5 tons of Scope 2, or 1 ton of Scope 3? I would take Scope 2 emissions because Scope 1 are hard to abate, the value at risk comes from Scope 1.” The source of emissions really matters: a company or asset with high Scope 1 emissions will be more sensitive to the price of carbon, with its valuation a function of how difficult those emissions are to abate or mitigate.
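As a minimal numeric sketch of this point, the toy function below attributes an annual earnings hit to unabatable Scope 1 tons and to the share of Scope 2 cost that cannot be passed through. Every number, including the 80 percent unabatable share and 70 percent pass-through rate, is an assumption for illustration only.

```python
# Toy earnings-sensitivity calculation: Scope 1 dominates the carbon cost.
def carbon_earnings_hit(scope1_t, scope2_t, carbon_price,
                        scope1_unabatable=0.8, scope2_pass_through=0.7):
    """Annual pre-tax earnings hit from carbon costs, in the same currency as the price."""
    scope1_cost = scope1_t * scope1_unabatable * carbon_price
    scope2_cost = scope2_t * (1 - scope2_pass_through) * carbon_price
    return scope1_cost + scope2_cost

# Hypothetical heavy emitter: 1.0 Mt Scope 1, 0.3 Mt Scope 2, $100/t carbon price
print(carbon_earnings_hit(1_000_000, 300_000, 100.0))   # ~$89m hit, dominated by Scope 1
```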
Climate winners and losers
Building on the concept of asset valuation sensitivity to emissions, carbon pricing can be used to uncover which companies or assets are poised to benefit from higher carbon prices, and which will be harmed. Just as owning shares in an oil company can be viewed as being long the price of oil, it is also tantamount to being short the price of carbon. The higher the price of carbon, the higher the risk to earnings from unmitigated emissions. The same can be said for companies that are “long carbon prices.” Depending on whether the price of carbon represents a cost or a source of revenue, it is like having a short or long call option on the price of carbon embedded in the portfolio.
Tesla, which only manufactures electric vehicles, has earned billions of dollars selling emissions credits to other automakers, collecting $2.1 billion in the first nine months of 2024 alone, which was 43 percent of net profit. Carbon credits have been a key revenue driver for the automaker, even as other automakers have struggled to meet regulated emissions targets. A long/short mindset to the price of carbon reinforces the view that the price of carbon is a financial input or indicator in valuing an asset and recognizes that there will be winners and losers as carbon prices fluctuate.
The market seems to overlook the possibilities of technological developments, especially in the energy sector, which could put downward pressure on carbon prices. Investing in low carbon technologies and research and development holds the potential to deliver winners and reduce carbon risk in portfolios.
Financial tools in carbon markets are improving
The investment industry spends significant time and resources developing forward-looking views on key financial variables; the price of carbon is simply another to add to the mix. As carbon markets continue to mature, several developments in financial markets have supported investors and companies in managing risks. The emergence of derivatives markets is one such development. Futures and options volumes for EUA, UKA, CCA, and RGGI allowances, the largest cap-and-trade markets in the world, have been increasing (Exhibit 4).
Broadly speaking, derivatives aid in the allocation of risks, provide tools for companies and investors to manage them, and can lower the cost of diversifying portfolios. They are also a compliance tool for meeting emissions caps placed on regulated entities. Companies can lock in prices on their future carbon emissions through derivatives markets. Investors can trade derivatives contracts, speculate on prices, search for arbitrage opportunities, or hedge portfolio exposures. Negative carbon convenience yields have attracted the attention of carbon traders interested in exploiting arbitrage opportunities between carbon spot and futures markets. Furthermore, investors can use price signals from carbon derivatives to assess climate transition risk in their portfolios.
The forward price curves of futures contracts offer market views on the future price of emissions and are analogous to the term structure of interest rates (Exhibit 5). Although academic research in commodities markets finds that futures prices have not been reliable predictors of subsequent price movements, derivatives markets do provide a complementary source of information for investors to analyze.
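For readers who want to connect spot and futures quotes, a standard cost-of-carry relation can back out the implied convenience yield mentioned above. The sketch below uses invented numbers, not market data, and assumes a flat risk-free rate.

```python
# Illustrative cost-of-carry sketch: backing an implied convenience yield out of
# carbon spot and futures quotes (all numbers are made up).
import math

spot = 70.0          # hypothetical EUA spot price, EUR/t
futures = 73.5       # hypothetical futures price for delivery in T years
T = 1.0              # time to delivery, years
r = 0.03             # risk-free rate (assumed)

# Cost-of-carry relation F = S * exp((r - y) * T)  =>  y = r - ln(F/S) / T
convenience_yield = r - math.log(futures / spot) / T
print(f"Implied convenience yield: {convenience_yield:.2%}")   # negative => contango beyond carry
```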
APPROACHES TO INTEGRATE RISKS AND OPPORTUNITIES IN PORTFOLIO INVESTMENT DECISION MAKING
Given significant uncertainty in climate change policy, geopolitics, technology, and, in some cases, backlash against sustainability initiatives, asset owners and long-term investors would benefit from a “reboot” of their approach to managing risks and opportunities resulting from climate change. Despite the near-term uncertainty, investors with a long enough horizon acknowledge the risk that the cost of carbon poses to portfolios.
A recent study combining climate scenario analysis with assessments of impacts on firm revenues and operating costs—capturing both winners and losers—found that aggregate losses on an equity portfolio composed of MSCI World Index companies could range from 0.5 to 6.0 percent. Sector-specific losses were found to be much higher, reaching as much as 10 percent to 60 percent in vulnerable industries such as utilities.
One firm’s added cost could become another firm’s added revenue. As such, investors need forward-looking measures of transition risk and opportunity that are flexible enough to apply across multiple scenarios and that highlight both the upside and the downside for companies and industries.
A handful of leading investors have developed tools
Investors that are ahead of the curve have defined approaches to climate risk and opportunity, integrating tools into investment strategy and decision making. Investors recognize that we are in a period where not all parts of their portfolio reflect the economics of carbon prices. Some leading examples from asset owners include:
– Temasek. The Singapore state investment company currently applies an internal carbon price (ICP) of $65 per tCO2e to embed the cost of carbon in its investment and operating decisions, and to further align its portfolio and business with its net zero target. The ICP is reviewed every two years, taking into account carbon price projections from international bodies. Temasek has also developed a proprietary metric called the “carbon spread”, which reflects its ICP modelled as a spread on top of its risk-adjusted cost of capital, acting as a trigger for deeper analysis of the climate transition and decarbonization plans of prospective investee companies.
– GIC, also of Singapore, has published research on carbon markets and investment portfolio analysis. Carbon Earnings-at-risk Scenario Analysis (CESA) is a forward-looking risk measure that estimates the portfolio’s value at risk due to carbon prices. The tool can be incorporated directly into companies’ valuation analyses and is combined with a scenario analysis approach for assessing carbon earnings-at-risk at the total portfolio level, helping to identify specific areas of vulnerabilities within the portfolio for deeper due diligence. GIC finds that carbon price impact varies widely across climate scenarios, ranging from 0 percent to 14 percent for a global equities portfolio tracking the MSCI All Country World Index.
– Norway’s Norges Bank Investment Management (NBIM) estimates net portfolio losses associated with different climate scenarios using MSCI’s Climate Value at Risk (CVaR) model. CVaR is a bottom-up model that approximates the net climate costs of each individual portfolio company, rolling them up to the portfolio level. In broad terms, the loss estimates are the discounted sum of portfolio losses until 2080 associated with climate policy risk, technological opportunities, and physical climate impacts. Based on NBIM’s global equity investments at the end of 2024, the cumulative impact of climate change on the portfolio’s value by 2080 across various scenarios is estimated to result in a reduction ranging from 2 to 10 percent of present value, and 2 to 8 percent when technology opportunities are considered.
Striking the right balance
This report highlights several approaches, in the following toolkits, that enable asset owners and investors to “reboot” their assessments of climate risks and opportunities. During the working groups, participants told us that striking the right balance between complexity and simplicity of tools was important. Complex methodologies benefit from additional rigor, yet what is required is not necessarily precision but better guidance; complex tools can be more costly to research and populate and can be more difficult for investment committees and boards to interpret. Simpler methodologies are less resource-intensive and can be easier for investment decision makers to act on, although even simple approaches require a certain amount of rigor to support sound decision making.
CONCLUSION
The cost of carbon is no longer a hypothetical concern— it is a material financial input that will increasingly shape the investment landscape. As the global climate transition accelerates, asset owners and investors must adapt to a world where carbon pricing is fragmented but gaining traction. Integrating transition risks into financial decision-making, even amid uncertainty, is vital for building resilient portfolios. This involves not only recognizing which companies and sectors are most exposed to emissions costs but also identifying those poised to benefit from the shift to a low-carbon economy. Tools such as carbon beta, NoCEBITDA, and scenario-based valuation adjustments provide practical entry points for making climate risk analysis more actionable and investment-relevant.
Ultimately, the climate transition is not just a risk to manage, but an opportunity to seize. Investors ahead of the curve are rethinking valuation methods, developing forward-looking tools, and embedding carbon cost assumptions into strategic investment decision making. By reframing carbon as a dynamic financial variable— similar to interest rates or inflation—long-term investors can better navigate an uncertain policy environment and uncover opportunities in a rapidly evolving world.
TOOLKIT: INDICATORS FOR SCENARIO ANALYSIS
This toolkit focuses on indicators that align with already established investment methodologies and concepts, essentially building on the “finance language” and expertise that committees and boards already possess, while applying it to climate concepts. These indicators are based on readily available data and disclosures and can complement climate scenario analysis using different assumptions on carbon price levels.
Market-implied cost of hedging carbon price exposure: Similar to insurance for physical climate exposure, this indicator asks what the cost is to insure against transition risks for asset classes and individual assets with exposure to carbon price risk. The estimated cost to hedge carbon exposure can be used as a “haircut” to prospective returns, in order to map transition risks in equity industries and investments in private equity and real assets. The implied cost to hedge risks can be computed for an individual company, or at the sector/aggregate level, using underlying assumptions of unabated emissions, or emissions exposed to price risk. One study has found that the cost of hedging tail risks using options, on average, amounts to 2 percent of portfolio assets per year.
NoCEBITDA – Net of Carbon Earnings Before Interest, Taxes, Depreciation, and Amortization: Carbon price risk can leave companies exposed to fluctuating costs, leading to volatile earnings. Estimates can also include sources of “green revenues” where companies are poised to benefit from higher carbon prices. Carbon-adjusted EBITDA over multiple periods forms the basis of valuation models, which makes this tool well suited to analyzing private equity, real estate, and infrastructure deals (a minimal sketch follows below).
Carbon-adjusted financial indicators, EPS, ROI, IRR: Adjusting EBITDA for carbon-related costs and changes in revenue can serve as a basis for calculating other financial indicators, notably earnings per share (EPS), return on investment (ROI), and internal rates of return (IRR) with applications in both public and private markets.
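As a minimal sketch of how NoCEBITDA could feed a carbon-adjusted IRR for a hypothetical deal, the example below nets an assumed carbon-price path against unabated emissions before computing the return. All cash flows, emissions, and prices are invented, and numpy_financial is assumed for the IRR calculation.

```python
# Hedged sketch: NoCEBITDA and a carbon-adjusted IRR for a hypothetical project.
import numpy_financial as npf

def nocebitda(ebitda_musd, unabated_mt, carbon_price_usd_t, green_revenue_musd=0.0):
    """EBITDA ($m) net of unabated-emissions cost (Mt x $/t = $m), plus carbon-linked revenue."""
    return ebitda_musd - unabated_mt * carbon_price_usd_t + green_revenue_musd

initial_outlay = -100.0                                 # $m, hypothetical project
ebitda_path = [20.0, 22.0, 24.0, 26.0, 28.0]            # $m per year
unabated_path = [0.10, 0.09, 0.08, 0.07, 0.06]          # Mt of unabated emissions per year
price_path = [60, 75, 90, 110, 130]                     # $/t carbon price scenario (assumed)

plain_irr = npf.irr([initial_outlay] + ebitda_path)
carbon_adj_irr = npf.irr([initial_outlay] +
                         [nocebitda(e, m, p) for e, m, p in
                          zip(ebitda_path, unabated_path, price_path)])
print(f"IRR ignoring carbon: {plain_irr:.1%}  |  carbon-adjusted IRR: {carbon_adj_irr:.1%}")
```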
TOOLKIT: CARBON BETA
Carbon beta is a tool designed to measure transition risk in individual stocks and portfolios of stocks. It is a forward-looking measure that determines the extent to which an asset’s price correlates with a carbon risk factor.
The beta of a stock is a traditional finance indicator of risk with a long academic history, and it is a concept well understood in the finance world. It simply measures how the value of a stock or portfolio of stocks moves in relation to the market. For the purpose of computing carbon beta, the “market” is redefined as a group of high-emitting stocks believed to carry high climate risk and face potentially high costs for abating emissions. The carbon beta of an individual stock or portfolio measures the sensitivity of its returns to those of this carbon risk portfolio. Huij et al. developed a methodology to estimate asset-level climate risk exposure by regressing stock returns on a pollutive-minus-clean portfolio. The authors find that, not surprisingly, climate risk is highest in the energy and utility sectors and lowest in healthcare and financials.
The methodology is relatively simple and straightforward, and investors can readily replicate it using existing historical market data and emissions and transition risk disclosures. Investors can even develop their own methodologies on how climate risk is defined in the “market” portfolio. For example, the pollutive-minus-clean portfolio assumes that all emissions are the same, while investors might assign greater weight to some emissions over others. For firms whose emissions occur in the production of goods that reduce emissions elsewhere (e.g. solar panels), or that operate in sectors for which abatement is expected to be easier, investors might perceive lower risk exposures. Interestingly, Huij et al. find that returns to stocks with high carbon betas are lower during months in which climate change is more frequently discussed in the news, during months in which temperatures are abnormally high, and during exceptionally dry months.
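A bare-bones version of such a replication might look like the regression below: a stock's returns are regressed on a market factor and a pollutive-minus-clean factor, and the loading on the latter is read as the carbon beta. Returns here are simulated for illustration; a real estimate would use historical returns and an emissions-based factor portfolio.

```python
# Sketch of a carbon-beta estimate via OLS on a pollutive-minus-clean factor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 120                                             # months of history
mkt = rng.normal(0.006, 0.04, T)                    # market excess returns (simulated)
pmc = rng.normal(0.000, 0.03, T)                    # pollutive-minus-clean factor returns
stock = 0.9 * mkt + 0.5 * pmc + rng.normal(0, 0.05, T)   # a carbon-exposed stock

X = sm.add_constant(np.column_stack([mkt, pmc]))
fit = sm.OLS(stock, X).fit()
carbon_beta = fit.params[2]                         # loading on the pollutive-minus-clean factor
print(f"carbon beta = {carbon_beta:.2f}")
```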
Carbon beta can also be a tool to identify firms that are investing in green technologies. Green innovation is largely driven by firms in the energy sector, yet paradoxically these firms are generally amongst the worst performers on environmental issues. Using green patents as an indicator, Huij et al. test the association of carbon beta with green innovation and find that green innovators are less exposed to climate risk, including firms in the energy sector.
Carbon beta is but one factor to analyze amongst many. It is an indicator that can be used in concert with other factors while recognizing how factors can interact.
INTRODUCTION – WHY CLIMATE EXTERNALITIES CAN’T BE IGNORED
Despite the policy uncertainty and differences around the world, understanding climate-related risks and opportunities is essential to effective long-term capital allocation. Just as interest rates, inflation, and geopolitical risks shape investment decisions, so too do emissions costs, insurance premiums, and the growing financial toll of climate change.
The effects of climate change have slowly manifested not only in transition risk (e.g., a grey premium for carbon-intensive sectors like oil and gas), but also in physical risk (e.g., depressed exit multiples among real estate developments in climate-affected areas). While some companies still view such risks as immaterial to their strategy, over a long enough horizon these previously insignificant costs are set to hit the bottom line at an increasingly consequential rate.
Climate externalities are the unpriced costs or benefits of business activities that impact the climate system, such as greenhouse gas emissions, water usage, or land degradation. These externalities vary by sector and jurisdiction but are increasingly prevalent worldwide. While some companies may currently view these risks as immaterial or choose to wait for regulatory developments, proactively addressing them (through, e.g. pricing carbon internally, hedging future climate risk, and scenario planning) can help manage uncertainty and enhance competitive positioning. Incorporating climate externalities into capital allocation decisions today—before they are fully reflected in regulations or markets—can enable companies to manage risks, seize opportunities, and build long-term resilience.
While carbon emissions are among the most common and financially significant externalities, especially in emissions-intensive sectors, companies with inherently low carbon footprints—like those in software or professional services—may find other externalities more material. For instance, data centers operated by tech firms often consume substantial amounts of water for cooling purposes, which can strain local water resources, particularly in drought-prone regions.
Many of these climate externalities ultimately are internalized and show up in the economics of a project, whether through costs, pricing dynamics, or asset values. Whatever assumptions companies make about them will inevitably reshape the risk-return profile of their investments.
As such, identifying which climate externalities are material—when and where—and determining appropriate responses is a vital tool in helping companies stay ahead of the curve. By factoring future climate-related costs into investment decisions, companies can maximize long-term value.
In practice, the path to do so is complex. While some firms actively embed climate considerations into their capital allocation decisions, many still struggle to weigh long-term climate risks against short-term financial pressures—particularly when the costs of carbon remain uncertain or inconsistently applied.
Exhibit 6 illustrates how emissions pricing might alter the relative attractiveness of two hypothetical projects: when weighing projects A and B, their attractiveness to the company varies depending on whether the analysis factors in the future cost of carbon.
If emissions are priced over the long term, project A may make more sense than project B, and vice versa if emissions are free. In truth, the payoffs to many of these investments follow a “j-curve”: companies face a decision to “pay now or pay later” when it comes to climate (and carbon especially).
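A toy version of that comparison is sketched below: two hypothetical projects are ranked by NPV with emissions free and with an assumed carbon price, showing how the ranking can flip. All figures are invented for illustration.

```python
# Toy project comparison: NPV with and without a carbon cost (all numbers invented).
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

rate, carbon_price = 0.08, 80.0                     # discount rate and $/t (assumed)

# Project A: pricier upfront but low-emission. Project B: cheaper but emissions-heavy.
projects = {
    "A": {"flows": [-120] + [33] * 5, "tons": [0] + [20_000] * 5},
    "B": {"flows": [-90] + [30] * 5, "tons": [0] + [200_000] * 5},
}

for name, p in projects.items():
    base = npv(rate, p["flows"])
    carbon_cost = [t * carbon_price / 1e6 for t in p["tons"]]          # $m per year
    priced = npv(rate, [cf - c for cf, c in zip(p["flows"], carbon_cost)])
    print(f"Project {name}: NPV free emissions = {base:5.1f}  |  priced = {priced:5.1f} ($m)")
# The ranking flips: B wins if emissions stay free, A wins once carbon is priced in.
```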
This challenge is amplified by the fact that many of the tools and signals needed to inform long-term climate-adjusted decisions—such as internal carbon pricing, external regulatory coverage, and credible forward commitments—remain challenging to adopt or apply.
While still early days, a few key data points illustrate how this plays out across company practice, market coverage, and credibility:
- 18% of MSCI ACWI companies reported using an internal carbon price as of 2024. Among those that do, the median price is $49 per ton.
- 24% of global carbon emissions are currently covered by an emissions trading system (ETS) or carbon tax, up from 13% in 2014.
- 26% of S&P Global BMI companies generated unpriced environmental costs that exceeded their net income—suggesting that externalities remain financially invisible in many corporate accounts.
Evidence suggests that leading firms are increasingly developing internal views on climate externalities, particularly carbon, and are actively incorporating them into investment and strategy decisions.
However, many others do not quantify or operationalize these externalities in capital allocation, especially when carbon prices are uncertain or policies are evolving. Deferring action altogether and adopting a "wait-and-see" approach will leave companies behind the curve.
GETTING AHEAD OF THE CURVE IS EASIER SAID THAN DONE
Amid mounting short-term performance pressures and unclear financial payoffs, few companies meaningfully integrate climate considerations like carbon pricing into capital allocation decisions.
Over the past 12 months, companies have been scaling back sustainability pledges after over-promising and under-delivering, and many have been called out for "greenwashing" and underperformance.
Treating climate as a financial issue requires overcoming several key challenges, as seen in the table below:
In such a challenging environment, how can companies perform well now while keeping an eye toward the future, and what can investors do to support companies on that journey while fulfilling their mandates and fiduciary duty?
TURNING CLIMATE INSIGHTS INTO ACTION
As companies weigh near-term pressures against long-term climate strategy, many are seeking clearer direction—not in theory, but in practice. They want to know: Who is doing this well? What frameworks are working? And how can we make sound decisions amid evolving regulations and stakeholder expectations?
In our discussions with corporate and investor leaders, the message is consistent. Organizations are not just looking for metrics—they are looking for clarity. Clarity about how others are embedding climate into real capital allocation decisions. Clarity around how to navigate uncertainty across jurisdictions. And clarity about how to act decisively, without getting ahead of their boards or behind their peers.
The toolkits on the following pages aim to meet that demand. By showcasing leading company examples and practical tools—ranging from internal carbon pricing and marginal abatement cost curves to climate-adjusted financial metrics—we provide a set of forward-looking approaches for navigating uncertainty, aligning decisions with long-term value, and staying ahead of the curve.
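As one illustration of how such tools translate into arithmetic, the sketch below builds a simple marginal abatement cost curve: hypothetical abatement levers are ranked by cost per tonne and applied cheapest-first until a reduction target is met. The lever names, costs, and potentials are invented for the example and are not drawn from the report's toolkits.

```python
# Minimal sketch of a marginal abatement cost curve (MACC): rank hypothetical
# abatement levers by cost per tonne and walk up the curve to a reduction target.

levers = [
    # (name, cost in $ per tCO2e, abatement potential in tCO2e per year)
    ("LED retrofits",        -30,  2_000),   # negative cost = net savings
    ("Process efficiency",   -10,  5_000),
    ("On-site solar",         20,  8_000),
    ("Fuel switching",        55, 12_000),
    ("Carbon capture pilot", 140,  6_000),
]

target = 20_000  # tCO2e/year reduction target (hypothetical)

abated, total_cost = 0, 0.0
for name, cost, potential in sorted(levers, key=lambda l: l[1]):  # cheapest first
    take = min(potential, target - abated)
    if take <= 0:
        break
    abated += take
    total_cost += take * cost
    print(f"{name:22s} {cost:>5} $/t  abates {take:>6,} t  (cumulative {abated:,} t)")

print(f"Average cost to reach target: {total_cost / abated:.1f} $/tCO2e")
```

Negative-cost levers (net savings) sit at the front of the curve, which is why many firms start with efficiency measures before paying for more expensive abatement.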
CONCLUSION
To get ahead of the curve on climate, forward-thinking companies are integrating climate externalities into overall strategy and capital allocation.
Internalizing climate externalities like carbon is not a one-time fix—it requires ongoing recalibration of strategy, risk management, and investor communication. Organizations that begin now—by applying internal carbon pricing, stress-testing projects against future scenarios, and translating climate metrics into familiar financial language—will be better positioned to capture upside and avoid downside in a decarbonizing economy.
The future will reward those who align capital with long-term value creation. Getting ahead of the curve today means thriving in the economy of tomorrow.
A link to the full report can be found here.
1 Climate Policy Initiative, “Global Landscape of Climate Finance 2024”, 31 October 2024. Link.
2 International Energy Agency, “The Global Power Mix Is Set to Transform by 2030”, accessed 11 April 2025. Link.
3 World Economic Forum, “Climate change is costing the world $16 million per hour: study”, 12 October 2023. Link.
4 BlackRock, “Global perspectives on investing in the low-carbon transition”, 2023. Link.
5 Tiseo, Ian, “Weighted Average Direct Carbon Price Worldwide from 2020 to 2023”, 30 January 2025. Link.
6 World Bank Group, “State and Trends of Carbon Pricing”, 21 May 2024. Link.
7 Roncalli, Thierry, and Raphaël Semet, “The Economic Cost of the Carbon Tax”, Amundi Investment Institute, Working Paper 156, March 2024. Link.
8 Twidale, Susanna, “Global Carbon Markets Value Hit Record $949 Billion Last Year – LSEG”, Reuters, 12 February 2024. Link.
9 International Carbon Action Partnership, “Brazil Adopts Cap-and-Trade System”, 28 November 2024. Link.
10 PwC, “The Hidden Cost of Carbon”, 19 October 2023. Link.
11 Roncalli, Thierry, and Raphaël Semet, “The Economic Cost of the Carbon Tax”, Amundi Investment Institute, Working Paper 156, March 2024. Link.
12 Bauer, Rob, Katrin Gödker, Paul Smeets, and Florian Zimmermann, “Mental Models in Financial Markets: How Do Experts Reason about the Pricing of Climate Risk?”, IZA Discussion Paper No. 17030, 14 June 2024. Link.
13 Eren, Egemen, Floortje Merten, and Niek Verhoeven, “Pricing of Climate Risks in Financial Markets: A Summary of the Literature”, BIS Papers No. 130, December 2022. Link.
14 Bolton, Patrick, et al., “The Green Swan: Central Banking and Financial Stability in the Age of Climate Change”, BIS, January 2020. Link.
15 Davenport, Coral, and Jack Ewing, “Automakers to Trump: Please Require Us to Sell Electric Vehicles”, New York Times, 21 November 2024. Link.
16 Futures and options contracts are linked to the European Union Allowance (EUA), United Kingdom Allowance (UKA), California Carbon Allowance (CCA), and Regional Greenhouse Gas Initiative (RGGI), collectively covering a majority of the world’s traded carbon allowances and credits.
17 Palao, Fernando, and Angel Pardo, “The Inconvenience Yield of Carbon Futures”, September 2021. Link.
18 International Swaps and Derivatives Association, “Role of Derivatives in Carbon Markets”, September 2021. Link.
19 Nixon, Dan, and Tom Smith, “What Can the Oil Futures Curve Tell Us about the Outlook for Oil Prices?”, Bank of England, 2012. Link.
20 Bouchet, Vincent, Thomas Lorens, and Julien Priol, “Beyond Carbon Price: A Scenario-Based Quantification of Portfolio Financial Loss from Climate Transition Risks”, Scientific Portfolio, January 2025. Link.
21 Temasek, “Sustainability Report 2025”. Link.
22 Temasek, “Sustainability Report 2025”. Link.
23 Teo, Rachel, De Rui Wong, and Lloyd Lee, “Carbon Earnings-at-Risk Scenario Analysis (CESA)”, GIC, 25 October 2022. Link.
24 Norges Bank Investment Management, “Climate and Nature Disclosures 2024”.
25 Chang, Linda, Jeremie Holdom, and Vineer Bhansali, “Tail Risk Hedging Performance: Measuring What Counts”, 18 November 2021. Link.
26 Huij, Joop, Dries Laurs, Philip Stork, and Remco C.J. Zwinkels, “Carbon Beta: A Market-based Measure of Climate Risk Exposure”, April 2024. Link.
27 Ibid.
28 Cohen, Lauren, Umit G. Gurun, and Quoc H. Nguyen, “The ESG Innovation Disconnect: Evidence from Green Patenting”, National Bureau of Economic Research, Working Paper No. 27990, October 2020. Link.
29 Griffin, Paul A., and Yuan Sun, “Climate-Related Financial Risk: Insights from a Semi-Systematic Review”, The International Journal of Accounting 56, no. 2 (2021): 1–25. Link.
30 Le Guenedal, Théo, Frédéric Lepetit, Thierry Roncalli, and Takaya Sekine, “Measuring and Managing Carbon Risk in Investment Portfolios”, 2020. Link.
31 Campiglio, Emanuele, Pierre Monnin, and Adrian von Jagow, “Climate Risks in Financial Assets”, CEP Discussion Note (Milan: Centro Europa Ricerche, 2019). Link.
32 Bolton, Patrick, and Martin Kacperczyk, “Do investors care about carbon risk?”, Journal of Financial Economics 142, no. 2 (2021). Link.
33 Bernstein, Asaf, Matthew T. Gustafson, and Ryan Lewis, “Disaster on the horizon: The price effect of sea level rise”, Journal of Financial Economics 134, no. 2 (2019): 253–272. Link.
34 World Economic Forum, “Circular Water Solutions Are Key to Making Data Centres More Sustainable”, November 2024. Link.
35 World Bank, “State and Trends of Carbon Pricing 2024”, 2024. Link.
36 S&P Global, “Unpriced Environmental Costs: The Top Externalities of the Global Market”, 2024. Link.
37 Swiss Re, “CO₂NetZero Programme”, accessed 30 April 2025. Link.
38 FCLTGlobal, “Sustainability or Strategy: Bridging the Gap Between Climate Change and Long-term Value Creation”, 2022.
39 FCLTGlobal, “FCLTCompass 2023 Report”, 2023.
40 FCLTGlobal, “The CEO Shareholder: Straightforward Rewards for Long-term Performance”, 2023.
41 FCLTGlobal, “Walking the Talk: Valuing a Multi-Stakeholder Strategy”, 2022.
42 Pucker, Kenneth P., “Companies are scaling back sustainability pledges. Here’s what they should do instead”, Harvard Business Review, 2024. Link.
43 Yao Yao, Yifan Zhou, and Yifan Zhang, “How Greenwashing Affects Firm Risk: An International Perspective”, Journal of Risk and Financial Management 17, no. 11 (2024): 526. Link.
44 International Foundation for Valuing Impacts, “Environmental Topic Methodology – Interim Methodologies”. Link.
45 Climate Scenario Catalogue, “Final 2024”. Link.
46 B Team, “Reform $1.8 Trillion Yearly Environmentally Harmful Subsidies to Deliver a Nature-Positive Economy”. Link.
47 Science Based Targets initiative (SBTi). Link.
48 UNEP Finance Initiative, “Net Zero Alliance”. Link.
49 Goldstein, John, and Chex Yu, “Climate Metrics 2.0: Measuring What Matters for Real Economy Climate Progress”, Goldman Sachs Asset Management, 2023. Link.
Association of workplace support for health with occupational health literacy and illness avoidance: moderated mediation by functioning through a salutogenic lens | BMC Public Health
Design
A cross-sectional design that follows the STROBE (i.e., Strengthening the Reporting of Observational Studies in Epidemiology) checklist was adopted. Figure 2 is a flowchart of the study design.
Study setting, participants, and recruitment
The study setting was Accra, Ghana, and the participants were community-dwelling middle-aged and older adults aged 50 years or older. Multistage sampling was utilised to select the participants. We first classified the neighbourhoods of Accra into four cardinal blocks (i.e., north, south, west, and east) and randomly selected a representative number of neighbourhoods from each block. Participants were then selected randomly from the chosen neighbourhoods based on three criteria: (1) being aged 50 years or older; (2) being a permanent resident of Accra; and (3) being available and willing to participate in the study. We calculated the minimum sample size with Daniel Soper’s sample size calculator for structural equation modelling [37, 38] based on standard statistics (i.e., moderate effect size = 0.3, power = 0.8, and α = 0.05). The calculated sample size was 823, but we increased this number by 10% to allow for attrition. Thus, the minimum sample size for this study was 905.
Variables, operationalization, and measures
WSH, the independent variable, was measured with a 5-item scale with five descriptive anchors (i.e., 1 – strongly disagree, 2 – disagree, 3 – somewhat agree, 4 – agree, and 5 – strongly agree). This measure was adopted in whole from a previous study [9] and was developed based on our earlier definition of WSH. Sample items are “Overall, my workplace supports me in living a healthier life” and “Most employees here have healthy habits”. It yielded satisfactory internal consistency, with Cronbach’s α ≥ 0.7 (overall α = 0.76; men’s α = 0.74; women’s α = 0.77), which meets the recommended cut-off [9, 39]. Scores, obtained by summing the items, range from 5 to 25, with larger scores indicating higher WSH.
Illness avoidance and functioning were measured with sub-scales from a previously validated successful ageing measure [11], each with five descriptive anchors (i.e., 1 – strongly disagree, 2 – disagree, 3 – somewhat agree, 4 – agree, and 5 – strongly agree), comprising 4 and 9 items, respectively. Illness avoidance, the dependent variable, is a measure of overall health and the avoidance of medication as well as therapy. Sample items are “I did not use medication or therapy” and “I was healthy enough to move around freely”. Functioning, the mediating variable, is a measure of cognitive functioning and how well one could perform physical and social tasks independently. Sample items are “I had enough energy for daily life” and “When I tried to recall familiar names or words, it was not difficult for me to do so”. Illness avoidance (overall α = 0.72; men’s α = 0.76; women’s α = 0.81) and functioning (overall α = 0.84; men’s α = 0.87; women’s α = 0.80) produced Cronbach’s α ≥ 0.7. Scores range from 4 to 20 for illness avoidance and from 9 to 45 for functioning, with larger scores indicating higher illness avoidance and functioning.
OHL, the moderating variable, was measured with a 12-item standard measure adopted in whole from a previous study [36]. The scale had four descriptive anchors (i.e., 1 – strongly disagree, 2 – disagree, 3 – agree, and 4 – strongly agree) and produced satisfactory Cronbach’s α ≥ 0.7 (overall α = 0.75; men’s α = 0.82; women’s α = 0.74). Scores range from 12 to 48, with higher scores indicating higher OHL. Appendix 1 shows the items used to measure WSH, illness avoidance, functioning, and OHL.
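For readers reproducing the scoring, the snippet below shows the generic computation these scale descriptions imply: summing Likert items into a scale score and computing Cronbach’s α from the item and total-score variances. The data are simulated placeholders, not the study’s responses.

```python
# Generic scale scoring and Cronbach's alpha on simulated Likert data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Placeholder 5-item responses on a 1-5 scale for 100 respondents. Because these
# simulated items are independent, alpha will be near zero; real, correlated
# survey items (as in the study) would produce the alphas reported above.
wsh_items = rng.integers(1, 6, size=(100, 5))

wsh_score = wsh_items.sum(axis=1)  # possible range 5-25, as described for WSH
print("alpha =", round(cronbach_alpha(wsh_items), 2))
print("observed score range:", wsh_score.min(), "-", wsh_score.max())
```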
Eight potential covariates were measured following previous research [7, 34, 40, 41]. Chronic disease status was measured with a single question asking participants to report the number of chronic conditions they had, with responses coded into two groups (none – 1, and one or more – 2). Self-reported health was measured with a single question asking participants whether their health was poor or good (poor – 1, and good – 2). Like chronic disease status and self-reported health, sex (men – 1, and women – 2) and marital status (not married – 1, and married – 2) were measured as categorical variables and coded as dummy-type variables. Job tenure, age, education, and income were measured as discrete variables. Job tenure was how long (in years) participants had worked in their current organization, whereas age was a measure of chronological age. Education was measured as years of schooling, whereas income was measured as the individual’s gross monthly earnings in Ghana cedis.
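The coding scheme above can be sketched as follows; the column names and toy values are hypothetical, and the study itself performed this step in SPSS.

```python
# Illustrative recoding of the categorical covariates described above.
import pandas as pd

df = pd.DataFrame({
    "n_chronic": [0, 2, 1, 0],                       # number of chronic conditions reported
    "self_rated_health": ["good", "poor", "good", "good"],
    "sex": ["man", "woman", "woman", "man"],
    "married": [True, False, True, False],
})

df["chronic_status"] = (df["n_chronic"] >= 1).astype(int) + 1   # none = 1, one or more = 2
df["health"] = df["self_rated_health"].map({"poor": 1, "good": 2})
df["sex_code"] = df["sex"].map({"man": 1, "woman": 2})
df["marital"] = df["married"].astype(int) + 1                   # not married = 1, married = 2

print(df[["chronic_status", "health", "sex_code", "marital"]])
```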
Instrumentation
Data were collected with a self-reported questionnaire comprising three sections. The first section presented a statement of the study’s aim, its importance, ethical considerations, and general survey completion instructions. The second section presented the measures of WSH, OHL, functioning, and illness avoidance, whereas the final section captured questions on the covariates and personal factors. We avoided or minimised Common Methods Bias (CMB) at the survey design stage by following recommendations in the literature [34, 42, 43] for structuring the sections and questions of the questionnaire. Specific instructions for completing each scale and section in the right context were provided, and the general instructions guided participants to avoid errors in completing the survey. Standard scales with concise and unambiguous items were used. In the second stage, Harman’s one-factor approach, a statistical procedure, was followed to confirm the absence of CMB in the data.
This technique required exploratory and confirmatory factor analyses to assess the factor structures of the scales used. With this technique, the absence of CMB in the data is supported if each scale produces two or more factors, or if the variance extracted by any single factor is less than 40% [42, 43]. In the exploratory factor analysis, each scale yielded at least two factors, and each factor accounted for less than 40% of the total variance. WSH produced two factors (item factor loadings ≥ 0.5; variance explained by factor 1 = 31.12%, and by factor 2 = 23.09%). OHL (four factors extracted), functioning (three factors extracted), and illness avoidance (two factors extracted) yielded similar results. Confirmatory factor analysis produced consistent results, signifying the absence of CMB in the data.
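The snippet below sketches the spirit of this one-factor check using an unrotated principal component extraction on simulated item data; the study itself used exploratory and confirmatory factor analysis in its own software, and the 40% threshold follows the criterion cited above.

```python
# Harman-style single-factor check: how much variance does the first unrotated
# component explain? (Concern about common method bias if one factor dominates.)
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
items = rng.normal(size=(300, 12))  # placeholder matrix standing in for 12 survey items

pca = PCA().fit(items)
first = pca.explained_variance_ratio_[0]
print(f"Variance explained by first unrotated component: {first:.1%}")
# Under the criterion used in the study, CMB is unlikely if no single factor exceeds 40%.
```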
Ethics and data collection
The study received ethical review and clearance from the ethics review board of the Africa Centre for Epidemiology (no. 005-10-2022-ACE) after the board reviewed the study protocol. All the participants provided written informed consent before participating in the study. We gathered data with three specially trained research assistants who administered questionnaires at designated centres. Some participants could not complete the questionnaire at the centres, so they were allowed to take the questionnaires home and return them over two weeks through a private courier hired by the researchers. Data were gathered over four weeks between July and August 2023. Out of 1501 questionnaires administered, 1015 were analysed, 465 were not returned by the participants, and 21 were discarded because at least 50% of their questions were not answered.
Statistical analysis
We utilised SPSS 28 (IBM Inc., New York, USA) to summarise the data and perform exploratory data analysis, including the first sensitivity analysis for the ultimate covariates. Amos 28 was used to test the moderated mediation model. Data were summarised with descriptive statistics (i.e., frequency and mean), enabling us to identify missing data. Marital status was the only variable with missing data (1%), but we performed the exploratory data analysis with the missing values retained, as they constituted less than 10% of the data on marital status and were randomly distributed [34, 41]. We found no outliers in the data after using box plots to visualise the distribution of all continuous variables. Previous studies [7, 34, 40] were then followed to perform the first sensitivity analysis for the ultimate covariates. This analysis ensured that only measured covariates likely to confound the primary relationships were incorporated into the moderated mediation model. After following the standard steps (see Appendix 2), none of the measured covariates qualified as an ultimate covariate.
Fig. 3 The statistical moderated mediation model fitted. Note: WSH – workplace support for health; OHL – occupational health literacy; e1 and e2 are errors
Figure 3 shows the statistical moderated mediation model tested with Hayes’ Process Model [44, 45] through structural equation modelling. To create the interaction term (i.e., WSH×OHL), we centred the moderator (i.e., OHL) and multiplied it by WSH, in keeping with Hayes’ Process Model. The moderated mediation model was fitted on the whole sample after computing the basic path coefficients (i.e., a, b, and c; see Fig. 3), the Simple Slope (SS), the Conditional Indirect Effect (CIE), and the Index of Moderated Mediation (InModMed) using the “user-defined estimands” function in Amos 28. Appendix 3 shows the equations used to estimate the SS and CIE on the whole sample and the sub-samples (i.e., men and women). The constant in each equation is the standard deviation of the moderator variable. The InModMed, SS, and CIE were estimated at different levels (i.e., low and high) of the moderator variable. These parameters and their significance were based on 2000 bias-corrected bootstrap samples with a 95% confidence interval.
In the second sensitivity analysis, the statistical model (see Fig. 3) was fitted for men and women separately, and the relevant parameters (i.e., direct effects, indirect effects, SS, CIE, and InModMed) were estimated for these samples. A threshold of p < 0.05 was used to determine the statistical significance of the effects. The moderation effect was visualised with figures depicting the effect of WSH on functioning at two levels (i.e., low and high) of OHL. Multivariate normality was not achieved in fitting the models, possibly owing to the relatively large sample used [46], but this violation of the assumption was addressed with the 2000 bias-corrected bootstraps [46].
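For readers working outside Amos, the sketch below reproduces the logic of this first-stage moderated mediation (akin to Hayes’ Model 7) with ordinary least squares and a simple percentile bootstrap, rather than the SEM estimation and bias-corrected bootstrap used in the study; the simulated variables and effect sizes are illustrative assumptions.

```python
# Sketch of first-stage moderated mediation: centre the moderator, form the
# interaction term, fit mediator and outcome models, and bootstrap the index of
# moderated mediation and conditional indirect effects at +/- 1 SD of OHL.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
wsh = rng.normal(size=n)                                   # workplace support for health (simulated)
ohl = rng.normal(size=n)                                   # occupational health literacy (simulated)
functioning = 0.3 * wsh + 0.2 * wsh * ohl + rng.normal(size=n)          # moderated a-path
illness_avoidance = 0.4 * functioning + 0.1 * wsh + rng.normal(size=n)  # b- and c'-paths

def estimate(idx):
    w, o, m, y = wsh[idx], ohl[idx], functioning[idx], illness_avoidance[idx]
    oc = o - o.mean()                                        # centre the moderator
    X_m = sm.add_constant(np.column_stack([w, oc, w * oc]))  # mediator model
    a1, a2, a3 = sm.OLS(m, X_m).fit().params[1:4]
    X_y = sm.add_constant(np.column_stack([m, w]))           # outcome model
    b = sm.OLS(y, X_y).fit().params[1]
    sd = oc.std(ddof=1)
    cie_low = (a1 - a3 * sd) * b                             # conditional indirect effect at -1 SD
    cie_high = (a1 + a3 * sd) * b                            # conditional indirect effect at +1 SD
    return a3 * b, cie_low, cie_high                         # index of moderated mediation, CIEs

point = estimate(np.arange(n))
boot = np.array([estimate(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
labels = ["Index of moderated mediation", "CIE at -1 SD OHL", "CIE at +1 SD OHL"]
for lab, est, l, h in zip(labels, point, lo, hi):
    print(f"{lab}: {est:.3f} (95% CI {l:.3f} to {h:.3f})")
```

Here the index of moderated mediation is the product of the interaction coefficient in the mediator model and the mediator’s effect on the outcome; a bootstrap interval excluding zero would indicate that the indirect effect of WSH on illness avoidance through functioning depends on OHL.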
Table 1 Summary characteristics on demographic and main variables (n = 1015)
Table 2 Direct and indirect effects of workplace support for health on illness avoidance