Category: 3. Business

  • Efficacy of Oxymetazoline 0.1% in Acquired Blepharoptosis: A Systematic Review and Meta-Analysis

    Efficacy of Oxymetazoline 0.1% in Acquired Blepharoptosis: A Systematic Review and Meta-Analysis

    Introduction

    Blepharoptosis, commonly referred to as “ptosis”, is characterized by the abnormal descent of the upper eyelids in the primary gaze position.1 This condition is classified as either acquired or congenital, depending on the underlying structural or neurological abnormalities contributing to its etiology.2 The pathophysiology of blepharoptosis is multifactorial, warranting a comprehensive analysis of both anatomical and neurological components to fully elucidate its origins and implications.2

    Blepharoptosis is among the most frequently encountered eyelid disorders, with a reported prevalence ranging from 4.7% to 13.5%.3,4 Clinically, it is often perceived as a cosmetic concern due to the characteristic “sleepy” appearance it imparts, which may affect one or both eyes.5,6 Beyond aesthetics, blepharoptosis can impair quality of life by reducing functional independence and increasing the risk of psychological distress, including anxiety and depression.7,8 Functionally, even mild cases may obstruct the superior visual field, contributing to declines in health-related quality of life.9–11

    Surgical correction remains the primary treatment for blepharoptosis.1 A variety of surgical procedures and techniques are available, with the selection of an appropriate approach determined by a comprehensive clinical evaluation.12,13 Evidence suggests that surgery can substantially improve functional and quality-of-life outcomes.14–16 However, surgery may not be suitable for all patients—particularly those with mild ptosis or contraindications to surgery.1,5 In such cases, the potential benefits of surgery must be weighed against the risk of adverse events, which range from minor complications, such as infection and bleeding, to more serious outcomes like eyelid asymmetry, over- or undercorrection, atypical lid creases, and scarring.1,5

    Thus, a compelling demand for pharmacological intervention has emerged in response to evolving clinical challenges. Müller’s muscle, a sympathetically innervated smooth muscle within the upper eyelid responsible for approximately 1 to 2 millimeters of eyelid elevation, represents a viable therapeutic target.2,17 Activation of the alpha-adrenergic receptors on Müller’s muscle induces contraction, leading to elevation of the upper eyelid and consequently improvement in the superior visual field.2,3 Accordingly, alpha-adrenergic agonists provide a non-invasive treatment modality for patients with blepharoptosis, especially for those opting against surgical intervention. Numerous clinical trials have investigated the efficacy of a topical adrenergic agonist, specifically oxymetazoline topical solution, in patients afflicted with blepharoptosis.17–20 Oxymetazoline has consistently demonstrated significant improvements in both functional (visual field) and anatomical (marginal reflex distance) outcomes among affected individuals.17–19

    Additionally, certain studies have indicated that the cosmetic enhancements afforded by oxymetazoline may boost self-confidence and psychological well-being, thereby positively impacting patients’ overall quality of life.21 Notably, these effects may be comparable to those achieved through surgical intervention, yet with a reduced incidence of adverse effects.19 Despite these promising findings, a definitive conclusion regarding the efficacy of oxymetazoline remains elusive. Therefore, this meta-analysis seeks to assess the efficacy and safety of oxymetazoline 0.1% ophthalmic solution in patients with blepharoptosis.

    Materials and Methods

    This systematic review and meta-analysis was conducted according to a protocol pre-registered on PROSPERO (CRD42024555846) and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

    Search Strategy

    A comprehensive search was conducted from database inception to June 5th, 2024, using Medline, Web of Science, Google Scholar, Scopus, and the Cochrane Central Register of Controlled Trials (CENTRAL). Additionally, to identify ongoing or recently completed trials, we searched ClinicalTrials.gov, the Australian New Zealand Clinical Trials Registry, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and the International Standard Randomized Controlled Trial Number (ISRCTN) registry. Reference lists of included studies were also searched manually to identify any pertinent RCTs that may have been missed. The search strategy incorporated terms such as “blepharoptosis”, “ptosis”, and “oxymetazoline hydrochloride”. Full search details are provided in the supplementary material.19,22–24

    Eligibility Criteria

    We included only randomized controlled trials (RCTs) comparing oxymetazoline 0.1% ophthalmic solution to placebo in the treatment of acquired ptosis, and that assessed efficacy using the marginal reflex distance 1 (MRD1). Studies not reporting MRD1 as an outcome were excluded. Additional exclusion criteria were: nonhuman studies, non-RCTs, reviews, case reports, cohort studies, duplicates, inaccessible articles, studies on congenital ptosis, and studies involving ocular or systemic confounding conditions. Articles lacking sufficient clinical data (eg, demographic or outcome measures) or not published in English were also excluded.

    The primary efficacy outcome was the mean change in MRD1 from baseline to 14 days post-intervention. Secondary outcomes included the mean change in the Leicester Peripheral Field Test (LPFT) and the incidence of adverse events (AEs) and serious adverse events (SAEs). Reported adverse events included eye pruritus, conjunctival hyperemia, punctate keratitis, dry eye, and headache.

    Study Selection and Data Extraction

    Two reviewers (RB and OB) independently screened titles and abstracts, followed by full-text assessments based on eligibility criteria. Disagreements were resolved through consensus.

    Data were extracted independently by two authors (OB and RH) using a standardized data extraction form. Extracted data included study characteristics, patient demographics (eg, age, gender), efficacy outcomes, and adverse events. Any discrepancies were resolved by mutual agreement.

    Risk of Bias Assessment

    Two reviewers (OB and RH) independently assessed the risk of bias using the Revised Cochrane Risk of Bias Tool (RoB 2).25 Each domain was evaluated and assigned a score of high, low, or some concerns. Discrepancies were resolved through discussion and consensus.

    Meta-Analysis

    Statistical analyses were performed using Stata (StataCorp, 2024).26 For categorical variables, logit-transformed proportions and 95% confidence intervals (CIs) were calculated. Continuous variables were reported as weighted means with 95% CIs. A random-effects model was used for all meta-analyses.27,28 The differences between intervention groups were evaluated using subgroup analyses of weighted mean differences (WMDs) and logit-transformed proportions, each with 95% CIs.29,30 Heterogeneity was assessed using Higgins’ I² and the chi-square (χ²) test. A two-sided p-value <0.05 was considered statistically significant.31 Publication bias was evaluated using Egger’s test and funnel plots, with no significant bias detected (all p > 0.05; Figure 1a and b).32

    Figure 1 Funnel plots of (a) MRD1 and (b) LPFT outcomes.

    Abbreviations: MRD1, marginal reflex distance 1 in mm; LPFT, Leicester peripheral field test; CI, confidence interval.
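
    As an illustration of the pooling described above, the following is a minimal DerSimonian-Laird random-effects sketch in Python using purely hypothetical study estimates; it mirrors the WMD and I² calculations in principle but is not the authors' Stata code.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study mean differences with a DerSimonian-Laird random-effects model."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances                                   # inverse-variance (fixed-effect) weights
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)         # Cochran's Q
    df = len(effects) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))
    w_rand = 1.0 / (variances + tau2)                           # random-effects weights
    pooled = np.sum(w_rand * effects) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0         # Higgins' I^2 (%)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2, i2

# Hypothetical per-study mean differences in MRD1 (mm) and their variances
wmd, ci, tau2, i2 = dersimonian_laird([0.30, 0.42, 0.35, 0.40], [0.010, 0.015, 0.012, 0.020])
print(f"Pooled WMD = {wmd:.2f} mm, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I^2 = {i2:.1f}%")
```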

    Certainty of Evidence

    The certainty of evidence was evaluated using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) framework.33 This structured approach considered study design, consistency, directness, precision, publication bias, and other relevant factors. Evidence quality was categorized as very low, low, moderate, or high.

    Results

    Study Selection

    The initial database search yielded a total of 87 articles. After removal of duplicates, 57 articles remained. Fifty studies were eliminated after screening of titles and abstracts. Seven papers were then retrieved and evaluated for inclusion through full-text review.

    Finally, three articles failed to meet the inclusion criteria and were excluded, leaving four level II evidence articles for inclusion in the analysis (Figure 2).19,22–24

    Figure 2 PRISMA flowchart of the article screening process.

    Demographics and Clinical Characteristics

    Our cohort comprised 448 patients who were diagnosed with blepharoptosis and received either oxymetazoline 0.1% (n = 275/448, 61.4%) or placebo (n = 173/448, 38.6%). The weighted mean age was 57.7 years (95% CI: 44.5–70.9 years) in the oxymetazoline group and 52.2 years (95% CI: 35.5–68.9 years) in the placebo group (p = 0.61). An overall female predominance was noted in both groups [oxymetazoline group: 73.6% (95% CI: 67.3–79.0%) and placebo group: 69.3% (95% CI: 62.0–75.7%), p = 0.36]. See Table 1 and Supplementary Table 1.19,22–24

    Table 1 Demographic Characteristics and Clinical Outcomes

    There were no statistically significant differences between the oxymetazoline group [1.2% (95% CI: 0.4–3.6%)] and the placebo group [1.6% (95% CI: 0.5–5.3%), p = 0.73] in terms of serious adverse events (Table 1). In both groups, punctate keratitis was the most common adverse event [oxymetazoline group: 5.8% (95% CI: 3.3–10.0%) and placebo group: 3.2% (95% CI: 1.1–8.8%), p = 0.33]. In the oxymetazoline group, the second most common adverse event was conjunctival hyperemia [4.6% (95% CI: 2.4–8.6%), p = 0.20]. In contrast, the second most common adverse event in the placebo group was eye pruritus [3.1% (95% CI: 1.1–8.4%), p = 0.50; Table 1].

    Risk of Bias Assessment

    All four included RCTs demonstrated a low risk of bias in the domains of outcome measurement, missing outcome data, and deviations from the intended interventions.19,22–24 However, two of the included studies showed unclear risk with respect to random sequence generation and selection bias.22,24 See Figure 3 and Figure 4.19,22–24

    Figure 3 Risk of bias graph. Review authors’ judgements about each risk of bias item presented as percentages across all included studies.

    Figure 4 Risk of bias summary.

    Intervention vs Placebo Outcomes

    In the subgroup meta-analysis of studies including data from baseline to day 14 of the trial, there was a statistically significant weighted mean difference (WMD) in both outcome measures: MRD1 and the Leicester peripheral field test (LPFT). The overall pooled WMD for the MRD1 outcome [0.37 (95% CI: 0.13–0.60), z = 3.04] showed that oxymetazoline significantly increased MRD1 compared to placebo (p<0.01; Figure 5). Using the GRADE criteria, the quality of evidence for this outcome was rated as moderate certainty (Table 2). For the LPFT outcome, the overall pooled WMD [4.72 (95% CI: 3.37–6.08), z = 6.84] revealed that oxymetazoline significantly improved LPFT scores compared to placebo (p<0.01; Figure 6); this evidence was likewise rated as moderate certainty (Table 2). The weighted mortality rates did not differ significantly between groups (p = 0.73): 0.9% (95% CI: 0.2–3.5%) in the oxymetazoline group versus 1.3% (95% CI: 0.3–4.9%) in the placebo group (Table 1).

    Table 2 Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Evidence Profile

    Figure 5 Forest plots of mean change in MRD1 measurements.

    Abbreviations: MRD1, marginal reflex distance 1 in mm; SD, standard deviation; NCT, national clinical trial; CI, confidence interval.

    Figure 6 Forest plots of mean change in LPFT measurements.

    Abbreviations: LPFT, Leicester peripheral field test; SD, standard deviation; NCT, national clinical trial; CI, confidence interval.

    Discussion

    Blepharoptosis is a common diagnosis encountered in oculoplastic clinics and is typically managed through surgical intervention. Although surgical correction of eyelid position is considered the standard of care, it carries risks such as infection, delayed wound healing, and eyelid scarring.34 These potential complications may be avoided with the use of oxymetazoline eye drops, which act on Müller’s muscle to elevate the upper eyelid and thereby improve ptosis.35,36 Additionally, through its alpha-adrenergic agonist properties, oxymetazoline may also reduce ocular redness.37

    Our findings demonstrated the superiority of oxymetazoline over placebo in improving ptosis. Daily administration of oxymetazoline significantly enhanced both marginal reflex distance 1 and Leicester Peripheral Field Test scores. Across the included studies, MRD1 improvement from baseline ranged from 0.80 mm to 1.06 mm.22–24 Notably, a previous study by Ugradar et al reported a potential increase in MRD1 of up to 1.9 mm.38 This is clinically relevant, as ptosis that improves by at least 1 mm in MRD1 may be functionally and cosmetically significant. Furthermore, LPFT scores were significantly higher in the oxymetazoline group, further supporting the efficacy of the medication.

    Both treatment and placebo groups reported a small number of non-serious adverse events, including pruritus, conjunctival hyperemia, punctate keratitis, dry eye, and headache. Serious adverse events and mortality rates were comparable between the two groups, with no statistically significant differences observed. These findings suggest that oxymetazoline may be considered safe for short-term use over a 14-day period.

    Patients with ptosis may experience psychosocial impacts, including anxiety and depression due to concerns about appearance and social judgment.8 Prior studies have shown that surgical correction of ptosis can positively influence self-perception and psychological well-being.21,39,40 Similarly, quality of life has been reported to improve following ptosis correction.14,39,41 One of the RCTs included in our analysis also reported that oxymetazoline improved patient-perceived eye appearance.19 Moreover, a study by Wirta et al found that oxymetazoline had tolerability and safety profiles comparable to placebo, with most adverse events deemed unrelated to treatment.42

    Given its favorable safety and efficacy profiles, oxymetazoline may serve as a valuable non-surgical option for patients with mild to moderate ptosis or those ineligible for surgery. However, our findings have certain limitations, including the short follow-up duration of existing studies, which evaluated outcomes over only 14 days. This may be partly due to oxymetazoline’s recent approval by the US Food and Drug Administration (FDA) in 2020. Additionally, the limited number of available RCTs and our inclusion of only English-language studies may have introduced selection bias and restricted the generalizability of our findings.

    Conclusion

    In conclusion, this meta-analysis suggests that oxymetazoline 0.1% ophthalmic solution may be an effective and well-tolerated short-term treatment for acquired blepharoptosis. The included randomized controlled trials generally demonstrated improvements in marginal reflex distance and peripheral visual field outcomes. Additionally, oxymetazoline was associated with a low incidence of adverse events, and no serious safety concerns were reported.

    While these findings are promising, further research is warranted to evaluate the long-term efficacy and safety of oxymetazoline. Larger, high-quality randomized controlled trials and comparative studies against surgical interventions are needed to clarify its role in clinical practice. In the interim, oxymetazoline may offer a promising, non-invasive treatment alternative for patients who choose to forgo surgery, with the potential to improve functional vision and quality of life.

    Acknowledgments

    We extend our sincere gratitude to all individuals who contributed to the success of this research project.

    Author Contributions

    All authors contributed substantially to the work, including aspects such as the conception, study design, execution, data acquisition, analysis, and interpretation. They participated in drafting, revising, or critically reviewing the manuscript, approved the final version for publication, agreed on the selected journal, and accepted responsibility for all elements of the work.

    Funding

    This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

    Disclosure

    The authors declare no conflicts of interest in this work.

    References

    1. Bacharach J, Lee WW, Harrison AR, Freddo TF. A review of acquired blepharoptosis: prevalence, diagnosis, and current treatment options. Eye. 2021;35(9):2468–2481. doi:10.1038/s41433-021-01547-5

    2. Thakker MM, Rubin PA. Mechanisms of acquired blepharoptosis. Ophthalmol Clin North Am. 2002;15(1):101–111. doi:10.1016/s0896-1549(01)00005-0

    3. Hashemi H, Khabazkhoob M, Emamian MH, et al. The prevalence of ptosis in an Iranian adult population. J Curr Ophthalmol. 2016;28(3):142–145. doi:10.1016/j.joco.2016.04.005

    4. Kim MH, Cho J, Zhao D, et al. Prevalence and associated factors of blepharoptosis in Korean adult population: the Korea national health and nutrition examination survey 2008-2011. Eye. 2017;31(6):940–946. doi:10.1038/eye.2017.43

    5. Finsterer J. Ptosis: causes, presentation, and management. Aesthetic Plast Surg. 2003;27(3):193–204. doi:10.1007/s00266-003-0127-5

    6. Zoumalan CI, Lisman RD. Evaluation and management of unilateral ptosis and avoiding contralateral ptosis. Aesthet Surg J. 2010;30(3):320–328. doi:10.1177/1090820X10374108

    7. McKean-Cowdin R, Varma R, Wu J, Hays RD, Azen SP, Los Angeles Latino Eye Study Group. Severity of visual field loss and health-related quality of life. Am J Ophthalmol. 2007;143(6):1013–1023. doi:10.1016/j.ajo.2007.02.022

    8. Richards HS, Jenkinson E, Rumsey N, et al. The psychological well-being and appearance concerns of patients presenting with ptosis. Eye. 2014;28(3):296–302. doi:10.1038/eye.2013.264

    9. Alniemi ST, Pang NK, Woog JJ, Bradley EA. Comparison of automated and manual perimetry in patients with blepharoptosis. Ophthalmic Plast Reconstr Surg. 2013;29(5):361–363. doi:10.1097/IOP.0b013e31829a7288

    10. Ho SF, Morawski A, Sampath R, Burns J. Modified visual field test for ptosis surgery (Leicester Peripheral Field Test). Eye. 2011;25(3):365–369. doi:10.1038/eye.2010.210

    11. Meyer DR, Stern JH, Jarvis JM, Lininger LL. Evaluating the visual field effects of blepharoptosis using automated static perimetry. Ophthalmology. 1993;100(5):651–659. doi:10.1016/s0161-6420(93)31593-9

    12. Shields M, Putterman A. Blepharoptosis correction. Curr Opin Otolaryngol Head Neck Surg. 2003;11(4):261–266. doi:10.1097/00020840-200308000-00009

    13. Ben Simon GJ, Lee S, Schwarcz RM, McCann JD, Goldberg RA. External levator advancement vs Müller’s muscle-conjunctival resection for correction of upper eyelid involutional ptosis. Am J Ophthalmol. 2005;140(3):426–432. doi:10.1016/j.ajo.2005.03.033

    14. Battu VK, Meyer DR, Wobig JL. Improvement in subjective visual function and quality of life outcome measures after blepharoptosis surgery. Am J Ophthalmol. 1996;121(6):677–686. doi:10.1016/s0002-9394(14)70634-8

    15. Federici TJ, Meyer DR, Lininger LL. Correlation of the vision-related functional impairment associated with blepharoptosis and the impact of blepharoptosis surgery. Ophthalmology. 1999;106(9):1705–1712. doi:10.1016/S0161-6420(99)90354-8

    16. Cahill KV, Bradley EA, Meyer DR, et al. Functional indications for upper eyelid ptosis and blepharoplasty surgery: a report by the American academy of ophthalmology. Ophthalmology. 2011;118(12):2510–2517. doi:10.1016/j.ophtha.2011.09.029

    17. Slonim CB, Foster S, Jaros M, et al. Association of oxymetazoline hydrochloride, 0.1%, solution administration with visual field in acquired ptosis: a pooled analysis of 2 randomized clinical trials. JAMA Ophthalmol. 2020;138(11):1168–1175. doi:10.1001/jamaophthalmol.2020.3812

    18. Bacharach J, Wirta DL, Smyth-Medina R, et al. Rapid and sustained eyelid elevation in acquired blepharoptosis with oxymetazoline 0.1%: randomized phase 3 trial results. Clin Ophthalmol. 2021;15:2743–2751. doi:10.2147/OPTH.S306155

    19. Shoji MK, Markatia Z, Ameli K, et al. The effects of topical oxymetazoline on eyelid position, eye redness, and patient-reported eye appearance: a randomized controlled trial. J Plast Reconstr Aesthet Surg. 2023;80:66–74. doi:10.1016/j.bjps.2023.02.006

    20. Bernardini FP, Skippen B, Croasdell B, et al. Management of severe botulinum-induced eyelid ptosis with pretarsal botulinum toxin and oxymetazoline hydrochloride 0.1%. Aesthet Surg J. 2023;43(9):955–961. doi:10.1093/asj/sjad070

    21. Maisel A, Waldman A, Furlan K, et al. Self-reported patient motivations for seeking cosmetic procedures. JAMA Dermatol. 2018;154(10):1167–1174. doi:10.1001/jamadermatol.2018.2357

    22. RVL Pharmaceuticals, Inc. Study of safety and efficacy of RVL-1201 in the treatment of blepharoptosis. Available from: https://clinicaltrials.gov/study/NCT03565887?cond=blepharoptosis&intr=Oxymetazoline%20&rank=4. Accessed June 5, 2024.

    23. RVL Pharmaceuticals, Inc. Safety and Efficacy Study of RVL-1201 in acquired blepharoptosis. Available from: https://clinicaltrials.gov/study/NCT01848041?cond=blepharoptosis&intr=Oxymetazoline%20&rank=1. Accessed June 5, 2024.

    24. RVL Pharmaceuticals, Inc. Study of the safety and efficacy of RVL-1201 in the treatment of acquired blepharoptosis. Available from: https://clinicaltrials.gov/study/NCT02436759. Accessed June 5, 2024.

    25. Cochrane Risk of Bias. RoB 2: revised Cochrane risk-of-bias tool for randomized trials. Cochrane; [date not specified]. Available from: https://methods.cochrane.org/bias/resources/rob-2-revised-cochrane-risk-bias-tool-randomized-trials. Accessed June 7, 2024.

    26. StataCorp. Stata Statistical Software: Release 18.5. College Station, TX: StataCorp LLC; 2024. Available from: https://www.stata.com/. Accessed June 7, 2024.

    27. Riley RD, Higgins JP, Deeks JJ. Interpretation of random effects meta-analyses. BMJ. 2011;342:d549. doi:10.1136/bmj.d549

    28. Nikolakopoulou A, Mavridis D, Salanti G. How to interpret meta-analysis models: fixed effect and random effects meta-analyses. Evid Based Ment Health. 2014;17(2):64. doi:10.1136/eb-2014-101794

    29. Andrade C. Mean difference, standardized mean difference (SMD), and their use in meta-analysis: as simple as it gets. J Clin Psychiatry. 2020;81(5):20f13681. doi:10.4088/JCP.20f13681

    30. Barker TH, Migliavaca CB, Stein C, et al. Conducting proportional meta-analysis in different types of systematic reviews: a guide for synthesisers of evidence. BMC Med Res Methodol. 2021;21(1):189. doi:10.1186/s12874-021-01381-z

    31. Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21(11):1539–1558. doi:10.1002/sim.1186

    32. Sterne JA, Sutton AJ, Ioannidis JP, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;343:d4002. doi:10.1136/bmj.d4002

    33. Cochrane Training. Chapter 14: completing “summary of findings” tables and grading the certainty of the evidence. Available from: https://training.cochrane.org/handbook/current/chapter-14. Accessed June 1, 2024.

    34. Hakimbashi M, Kikkawa DO, Korn BS. Complications of ptosis repair: prevention and management. In: Cohen A, Weinberg D editors. Evaluation and Management of Blepharoptosis. Springer; 2011:333–343. doi:10.1007/978-0-387-92855-5_30

    35. Esmaeli-Gutstein B, Hewlett BR, Pashby RC, Oestreicher J, Harvey JT. Distribution of adrenergic receptor subtypes in the retractor muscles of the upper eyelid. Ophthalmic Plast Reconstr Surg. 1999;15(2):92–99. doi:10.1097/00002341-199903000-00005

    36. Skibell BC, Harvey JH, Oestreicher JH, et al. Adrenergic receptors in the ptotic human eyelid: correlation with phenylephrine testing and surgical success in ptosis repair. Ophthalmic Plast Reconstr Surg. 2007;23(5):367–371. doi:10.1097/IOP.0b013e3181462a2e

    37. McLaurin E, Cavet ME, Gomes PJ, Ciolino JB. Brimonidine ophthalmic solution 0.025% for reduction of ocular redness: a randomized clinical trial. Optom Vis Sci. 2018;95(3):264–271. doi:10.1097/OPX.0000000000001182

    38. Ugradar S, Kim JS, Trost N, et al. Changes to eye whiteness and eyelid/brow position with topical oxymetazoline in aesthetic patients. Aesthet Surg J. 2022;42(6):582–589. doi:10.1093/asj/sjab400

    39. Richards HS, Jenkinson E, Rumsey N, Harrad RA. Pre-operative experiences and post-operative benefits of ptosis surgery: a qualitative study. Orbit. 2017;36(3):147–153. doi:10.1080/01676830.2017.1279669

    40. Richards HS, Jenkinson E, White P, Harrad RA. Patient reported psychosocial functioning following successful ptosis surgery. Eye. 2022;36(8):1651–1655. doi:10.1038/s41433-021-01685-w

    41. Smith HB, Jyothi SB, Mahroo OA, et al. Patient-reported benefit from oculoplastic surgery. Eye. 2012;26(11):1418–1423. doi:10.1038/eye.2012.188

    42. Wirta DL, Korenfeld MS, Foster S, et al. Safety of once-daily oxymetazoline HCl ophthalmic solution, 0.1% in patients with acquired blepharoptosis: results from four randomized, double-masked clinical trials. Clin Ophthalmol. 2021;15:4035–4048. doi:10.2147/OPTH.S322326


  • Kuwait inflation alert: Consumer prices rise 2.39% in July 2025 amid climbing costs of essentials | World News

    Kuwait inflation alert: Consumer prices rise 2.39% in July 2025 amid climbing costs of essentials | World News

    Image caption: Consumer Price Index in Kuwait rose 2.39% year-on-year in July 2025.

    Kuwait’s consumer price index (CPI) recorded a significant increase of 2.39% in July 2025 compared to the same month the previous year, according to data from the Kuwait Central Statistical Bureau (CSB). The rise reflects inflationary pressure primarily driven by higher prices in essential groups such as food, health, clothing, and education, placing increased financial stress on Kuwaiti households.

    TL;DR:

    • Consumer Price Index in Kuwait rose 2.39% year-on-year in July 2025. Monthly inflation increased by 0.22% from June to July 2025.
    • Major drivers: food and beverages (up 5.63%), health (2.85%), clothing (3.7%), and education (0.71%).
    • Some sectors like transportation saw a price decrease of 1.75%. Inflation excluding food and beverages rose by 1.61% year-on-year.

    Detailed inflation analysis

    The CSB’s recent report highlights escalating costs across various consumer sectors. The food and beverages group saw the highest surge, with prices climbing 5.63% compared to July 2024. Meanwhile, the cigarettes and tobacco group experienced a slight increase of 0.07% year-on-year.

    The clothing sector followed with a notable 3.7% rise, reflecting ongoing cost pressures on apparel and textiles. Housing service prices edged up by 0.98%, and household furniture costs rose by 3.22% year-on-year, adding to household expenditure burdens.

    Healthcare inflation also contributed, with a 2.85% increase in the price index. Conversely, transportation costs declined by 1.75%, providing some relief against the broader inflationary trend.

    The communications sector saw a marginal price rise of 0.48% year-over-year, while recreation and culture prices increased by 1.76%. The education sector recorded a 0.71% rise in costs, and restaurant and hotel prices increased by 1.94%. The miscellaneous goods and services group also had a significant inflation rate of 4.8%.

    Excluding food and beverages

    When excluding the volatile food and beverages group, inflation in Kuwait still rose by 1.61% compared to last year, indicating pervasive price pressures beyond just food-related items. The month-on-month inflation excluding food and beverages was a modest 0.08% increase.

    Kuwait’s inflation data for July 2025 points to mounting price pressures on essential consumer goods and services, intensifying the financial strain on households. While some sectors like transportation offer limited relief, the overall rising costs in food, health, clothing, and education highlight the challenges facing consumers amid a shifting economic landscape.
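
    For readers who want to see how the cited rates are derived, year-on-year and month-on-month inflation follow directly from CPI index levels. The index values in this short sketch are illustrative assumptions, not CSB data.

```python
# Illustrative CPI index levels (not CSB data) showing the standard inflation arithmetic
cpi_jul_2024 = 100.00
cpi_jun_2025 = 102.17
cpi_jul_2025 = 102.39

yoy = (cpi_jul_2025 / cpi_jul_2024 - 1) * 100   # year-on-year inflation (~2.39%)
mom = (cpi_jul_2025 / cpi_jun_2025 - 1) * 100   # month-on-month inflation (~0.22%)
print(f"YoY: {yoy:.2f}%  MoM: {mom:.2f}%")
```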

    FAQ

    Q. What was Kuwait’s inflation rate in July 2025?
    A. The inflation rate rose to 2.39% year-on-year in July 2025.

    Q. Which categories contributed most to the inflation increase?
    A. Food and beverages saw the largest rise at 5.63%, followed by clothing (3.7%), healthcare (2.85%), and miscellaneous goods (4.8%).

    Q. Did any sectors see a price decrease during this period?
    A. Yes, transportation costs decreased by 1.75% compared to the previous year, slightly offsetting inflationary pressures.

    Q. How did consumer prices change month-on-month from June to July?
    A. Consumer prices increased by 0.22% in July, slightly lower than the 0.29% rise recorded in June.



  • Meat Returns to Eleven Madison Park, Fires Rage in Europe, and Mexico Cuts Poverty – Food Tank

    Each week, Food Tank is rounding up a few news stories that inspire excitement, infuriation, or curiosity.

    Meat Returns to Eleven Madison Park After Four Vegan Years

    Eleven Madison Park will reintroduce meat and seafood to its menu this October, ending its four-year run as a fully plant-based fine dining restaurant. Chef Daniel Humm, who helms the three-Michelin-starred Manhattan restaurant, cites financial pressures and a desire to be more inclusive as key reasons behind the pivot. “While we had built something meaningful, we had also unintentionally kept people out. This is the opposite of what we believe hospitality to be,” Humm says.

    The restaurant introduced its vegan menu after a 15-month pandemic closure. Concerned with the sustainability of how food is sourced and consumed, EMP vowed to create exciting meals without using a single animal product. In 2022, EMP became the first restaurant in the world to earn three Michelin stars for a fully vegan menu.

    But some doubted diners would be willing to pay as much for a plant-based meal, and some dismissed it as a high-end stunt.

    Over the past year at EMP, private bookings dwindled and Humm says the labor-intensive, high-concept menu became harder to sustain. The decision also followed a trip to Greece, where Humm watched a goat being slaughtered with reverence. The experience, combined with guest feedback about the restrictive menu, led him to reconsider.

    Humm says plant-based options will remain on the menu but will no longer be the only choice.

    USDA Cancels Contract for Food Safety Inspectors

    The U.S. Department of Agriculture (USDA) recently moved to cancel collective bargaining agreements with employees at its animal health and food safety inspection agencies, affecting more than 8,000 unionized workers.

    Notices provided to union leaders at the Food Safety and Inspection Service (FSIS) and Animal and Plant Health Inspection Service (APHIS) explain that the move was aligned with Executive Order 14251. Signed in March, the executive order excludes some federal workers from collective bargaining if their agencies have national security roles.

    But according to Cole Austen Gandy, president of the National Association of Agriculture Employees (NAAE), which represents nearly 1,500 APHIS workers, none of the employees’ work involves national security.

    The USDA says the change will allow the agency to be nimble and “farmer-first.” But critics warn the shift may undermine food safety and labor protections. “It doesn’t just erode labor rights — it damages the public’s trust in the safety of our food supply,” says Milton Jones, president of the United Food and Commercial Workers International Union.

    Paula Soldner, chair of the National Joint Council of Food Inspection Locals, which represents some FSIS employees, notes that the move “flies in the face of every promise [FSIS] has made to protect America’s food supply.”

    The National Association of Agriculture Employees recently filed a suit challenging the executive order and its revocation of its collective bargaining rights. According to the union, it is the sixth lawsuit contesting Executive Order 14251.

    Environmental Data Is Disappearing from Government Websites

    A new Environmental Data & Governance Initiative (EDGI) report finds that the scope and speed of the second Trump administration’s changes to websites related to environmental regulation have far exceeded those of the first. According to the researchers, the removals and revisions have significantly altered the federal environmental information landscape.

    EDGI began documenting the loss of access to and usability of government environmental information during the first Trump administration. The organization found that the second Trump administration made 70 percent more changes to government websites related to environmental regulation during its first 100 days than the first administration did.

    The amount of information being scrubbed has surprised Gretchen Gehrke, Co-Founder of EDGI, who tells NPR that the level of “total erasure” of any topic was never seen under Trump’s first term.

    The report also finds that the changes are getting bolder, with intensifying rhetoric and increasing challenge to statutory authorities for information sharing. The changes’ biggest targets include diversity, equity, and inclusion (DEI) efforts and environmental justice, according to EDGI.

    The Council on Environmental Quality’s Climate and Economic Justice Screening Tool, which identified disadvantaged communities to ensure a percent of climate program benefits reached them, was removed. Nine similar screening tools also disappeared.

    On the EPA’s website all pages about environmental justice are gone. And following layoffs at the National Oceanic and Atmospheric Administration (NOAA), climate.gov—which shared information about changing weather patterns, drought conditions, and greenhouse gas emissions—stopped publishing new content this summer.

    Europe Faces Worst Wildfire Season in Nearly 20 Years

    The European Union is experiencing its most destructive wildfire season in nearly two decades. Fueled by heat waves, prolonged drought, and strong winds, the fires have ravaged the continent, displacing tens of thousands, burning homes, and devastating farmland.

    Satellite data from the Copernicus space program show that almost 9,000 square kilometers—roughly the size of Puerto Rico—have burned in 2025. Several deaths and many injuries have been reported. Thousands of fire crews have been deployed across the continent.

    In Greece, flames swept through pine forests and olive groves, burning houses and dozens of vehicles. Spain is experiencing its highest fire emissions since 2003, severely degrading air quality up to several hundred kilometers from the fires.

    Turkey has been battling wildfires since late June, with 18 deaths reported so far. And Portugal, facing its worst year of fires since 2006, has been in a state of emergency for weeks. “We are at war and we must win this war,” says Portugal’s Prime Minister Luis Montenegro.

    Last week, evacuations affected thousands, with more than 31,000 people displaced in Spain alone. Many have lost homes and farmland, and agricultural infrastructure has been severely damaged. Images from the Associated Press show scorched barns, charred equipment, and families fleeing with their livestock. In Albania, where rural communities rely heavily on animals for food and livelihood, some were forced to leave livestock behind. Residents are now returning to assess the damage, with volunteers helping to care for injured animals.

    Though wildfires are common during European summers, their severity can be exacerbated by heatwave conditions—which meteorologists say are becoming more frequent.

    Mexico Celebrates “Historic” Reduction in Poverty

    More than 8.3 million people in Mexico rose above the poverty line between 2022 and 2024, according to a recent report from the national statistics agency. The report shows a nearly 18 percent overall drop in poverty, with extreme poverty falling 23 percent and moderate poverty decreasing by over 16 percent. About one in three Mexicans remains below the poverty line.

    According to Manuel Martínez Espinoza, a researcher at Mexico’s National Council of the Humanities, Sciences and Technologies, the progress can likely be attributed to a number of factors, chief among them the increased wages and social programs introduced under former President López Obrador.

    Between 2018 and 2025, Mexico’s minimum wage rose from 88.40 pesos to 278.80 pesos per day, a threefold increase. Former President López Obrador also implemented cash transfers for elderly people, unemployed youth, farmers, and others, significantly raising total social spending.

    López Obrador’s successor, President Claudia Sheinbaum, called the progress “extraordinary” and “historic,” stating that “‘For the food of all – first the poor’ is not just a slogan, but a reality in Mexico.” Viri Ríos, an independent public policy expert, notes that “there has never been a single six-year term in which poverty has been reduced or decreased so significantly.”

    But experts caution the gains may not be sustainable. “If people stop receiving [the transfers], they could fall back into poverty because there wasn’t enough investment in things other than addressing people’s most immediate needs,” says Manuel Martínez Espinoza.





  • Flecainide mediated sodium channel blockade enhances blood brain barrier integrity and promotes neuroprotection in neuroinflammation

    Flecainide mediated sodium channel blockade enhances blood brain barrier integrity and promotes neuroprotection in neuroinflammation

    Animals, animal models, treatment

    Female, six-week-old C57BL/6J mice were purchased from Janvier Labs (Le Genest-Saint-Isle, France). The NOD/ShiLtJ mice (female, six-week-old) originate from the internal breeding facility (ZETT) at the University of Düsseldorf.

    EAE mouse model

    Mice were immunized with 200 μg of myelin oligodendrocyte glycoprotein fragment 35–55 (MOG35–55, purchased from BIOTREND) emulsified in 200 μl of complete Freund’s adjuvant (CFA) supplemented with 800 μg of heat-killed Mycobacterium tuberculosis (MT) H37Ra (both purchased from BD Difco), injected subcutaneously and distributed over four spots on the hind and front flanks, with additional intraperitoneal injections of 200 ng of pertussis toxin (PTX, Sigma-Aldrich) on days 0 and 2 post immunization (p.i.). The sham control group (sham) also received PTX and CFA, but no MOG35–55 peptide. The substances used for treatment, their concentrations, modes of action, treatment intervals, and treatment start are described in Table 1. The concentrations used for each substance were adapted from recent dose-finding studies in the literature investigating the optimal dose.

    Table 1 Substance treatment details.

    The experimental protocol was reviewed and approved by the State Office for Nature, Environment and Consumer Protection of North Rhine-Westphalia, Germany (Landesamt für Natur, Umwelt und Verbraucherschutz Nordrhein-Westfalen, LANUV) under approval number Az. 81-02.04.2019.A063.

    The in vitro concentrations of 2 µM and 5 µM flecainide used for treating primary mouse brain microvascular endothelial cells (pMBMECs) were selected as part of a dose-finding approach based on previously published studies17,18. These concentrations were chosen to approximate pharmacologically relevant levels corresponding to the systemic in vivo dosage of 30 mg/kg administered subcutaneously in our EAE model, while allowing the assessment of dose-dependent effects on endothelial gene expression and barrier function.

    OCT measurements

    Periodic OCT measurements were conducted using the Spectralis® HRA+OCT device (Heidelberg Engineering, Germany) with several adaptations for rodents, as previously described19. Segmentation of retinal volume scans was performed using the Heidelberg Eye Explorer software, with manual control for segmentation errors. The volume of the parapapillary region was assessed using the ETDRS grid, excluding the center with the disc, as published previously20. Inner retinal layer (IRL) thickness was examined at defined intervals after irradiation and compared with baseline measurements (IRL: NFL, GCL, and IPL layer).

    For the measurements, the animals were anesthetized using isoflurane (Vaporizer from Harvard Apparatus Anesthetic Vaporizors; Isofluran from Piramal critical care). Specifically, induction was carried out at 3.5% isoflurane, followed by maintenance at 2% isoflurane, with a flow rate of 0.6 L/minute of oxygen. The gas was transferred through nose cones to the mice.

    Optomotor response (OMR) measurements

    OMR measurements were performed periodically, exclusively in C57BL/6J mice, using the OptoMotry® device from Cerebral Mechanics, in parallel with OCT measurements. Spatial frequency was monitored as a parameter for visual function. The spatial frequency threshold was determined by randomly changing the spatial frequency to identify the threshold at which the mouse could track the grids, as previously described20,21.

    Histology (optic nerves, retinal cross sections and retinal whole mounts)

    Mice were euthanized using 100 mg/kg ketamine and 20 mg xylazine intraperitoneally (i.p.) (in 250 µl of 0.9% NaCl), followed by cardiac perfusion with phosphate-buffered saline (Gibco, Carlsbad, USA).

    The optic nerves were then isolated and fixed in 4% paraformaldehyde (Carl Roth, Karlsruhe, Germany) overnight. After fixation, the optic nerves were subjected to a sucrose gradient for dehydration and subsequently embedded in O.C.T. compound (Sakura™ Finetek, Alphen aan den Rijn, Netherlands). Longitudinal sections of five micrometers were cut and prepared for fluorescence staining. Longitudinal sections of the optic nerves were used for quantifying T lymphocytes (CD3 (Clone 17A2), 1:400, Biolegend), assessing the state of myelination (MBP (Clone 12), Merck Millipore, 1:500), and evaluating microglial activation (Iba1 (Clone GT10312), 1:500, Merck) using a Leica HyD detector attached to a Leica DMi8 confocal microscope (63× objective lens magnification). Astrocytic activation (GFAP (Clone 173,004), 1:1000, Synaptic Systems) was assessed using retinal cross sections. Cy3 goat anti-mouse (1:500, Millipore), Cy3 goat anti-rat (1:500, Millipore), and Cy3 goat anti-rabbit (1:500, Invitrogen) were used as secondary antibodies. The numbers of cells stained with CD3 and Iba1 were analyzed using ImageJ software by blinded raters and expressed as a ratio to DAPI staining. The overall signal for MBP (positive total area in the red channel) was analyzed using ImageJ software. Cell counts for CD3 and Iba1, as well as the positive areas for MBP and GFAP, were assessed across the whole field in 6–9 images per sample. The mean values were calculated and used for analysis.
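
    The quantification described above (cell counts expressed as a ratio to DAPI and averaged over 6–9 fields per sample) amounts to the following short calculation; the per-field counts below are hypothetical and assumed to have already been exported, for example from ImageJ.

```python
import numpy as np

# Hypothetical per-image counts for one optic nerve sample (e.g., exported from ImageJ)
cd3_counts = np.array([14, 9, 11, 17, 12, 10])           # CD3+ cells per field
dapi_counts = np.array([412, 388, 405, 431, 398, 402])   # DAPI+ nuclei per field

ratios = cd3_counts / dapi_counts   # per-field ratio of CD3+ cells to DAPI+ nuclei
sample_value = ratios.mean()        # mean across the 6-9 imaged fields -> one value per sample
print(f"CD3+/DAPI ratio for this sample: {sample_value:.3f}")
```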

    RGC count was calculated by a semi-automated count of Brn3a+ cells on retinal flat mounts. Briefly, retinae were stained with a Brn3a antibody (1:200, Santa Cruz Biotechnology, cat# sc-31984) and flat-mounted on glass slides. Each retina was then divided into four quadrants (three areas per quadrant: central, mid-periphery, and far-periphery). For each eye, the Brn3a+ cell count was summed across all 6–12 imaged areas.

    Evans Blue Dye assay

    Evans Blue Dye (EBD) is a diazo dye that binds to serum albumin, creating a large molecular complex that normally does not cross the intact BBB. However, in pathological conditions leading to increased BBB permeability, the EBD-albumin complex can cross the BBB and accumulate in brain tissue, thus providing a measurable indication of barrier disruption. At the pre-determined time point of 18 days post-immunization, the mice were injected intravenously with 4% Evans Blue Dye in saline at a dosage of 4 ml/kg body weight. The dye was allowed to circulate for 3 h to ensure systemic distribution. After the circulation period, the mice were anesthetized and transcardially perfused with phosphate-buffered saline (PBS) to remove intravascular dye. The brains and spinal cords were then carefully extracted, weighed, and homogenized in N,N-dimethylformamide to extract the dye from the tissue. The samples were then centrifuged, and the supernatants were collected for spectrophotometric analysis. Quantification of the EBD was performed using a TECAN spectrophotometer. The absorbance of the extracted solutions was measured at 565 nm, a wavelength at which EBD exhibits a distinct peak. EBD concentration was calculated from the absorbance values using a standard curve generated with known concentrations of EBD. The amount of EBD in the brain tissue was then expressed as µg of EBD per g of brain tissue. All data were collected and analyzed using appropriate statistical methods.
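
    A minimal sketch of the standard-curve step described above, assuming a linear relationship between absorbance at 565 nm and EBD concentration; all values are illustrative assumptions, not data from this study.

```python
import numpy as np

# Hypothetical standard curve: known EBD concentrations (µg/ml) and their absorbance at 565 nm
standards_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
standards_abs = np.array([0.01, 0.06, 0.12, 0.24, 0.49, 0.97])

# Fit a straight line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(standards_conc, standards_abs, 1)

def ebd_per_gram(absorbance, extract_volume_ml, tissue_weight_g):
    """Convert a sample absorbance into µg EBD per g tissue using the standard curve."""
    conc_ug_per_ml = (absorbance - intercept) / slope
    return conc_ug_per_ml * extract_volume_ml / tissue_weight_g

# Example: supernatant absorbance 0.31, extracted in 0.5 ml of solvent, 0.42 g of tissue
print(f"{ebd_per_gram(0.31, 0.5, 0.42):.2f} µg EBD per g tissue")
```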

    Flow cytometry analysis

    Flow cytometry was used to examine the lymphocyte subpopulations in the spinal cord and spleen of EAE mice at specified time points. Spinal cords were carefully harvested and dissociated mechanically and enzymatically (collagenase, DNase). Lymphocytes were then isolated from the suspension using a Lymphoprep™ gradient (Stemcell Technologies). The spleen was collected and mechanically dissociated by passing it through a 70 µm cell strainer. Red blood cells were lysed with ACK lysis buffer, and the suspension was again passed through a 70 µm cell strainer.

    Lymphocytes from both organs were centrifuged and resuspended in FACS buffer containing 2 mM EDTA and 2% fetal calf serum. Cells were then stained with fluorochrome-conjugated monoclonal antibodies (Tables 2 and 3, respectively) in the dark at 4 °C for 30 min. The cells were then washed with FACS buffer and prepared for analysis. Cells were analyzed on a CytoFLEX S (Beckman Coulter) and data were interpreted using Kaluza Analysis Software (Beckman Coulter). Lymphocytes were identified based on forward and side scatter properties, with specific populations determined by surface marker expression (a representative gating strategy is shown in Fig. S1). Data are reported as the total cell counts (via flow rate check) of each cell type within the total lymphocyte population.

    Table 2 Antibodies used for flow cytometry analysis on CNS immune cell infiltration analysis (ex vivo experiment).
    Table 3 Antibodies used for flow cytometry analysis on lymphocyte activation, adhesion, and proliferation (in vitro experiment).

    Isolation of pMBMECs

    pMBMECs were isolated according to different protocols for performing the quantitative PCR or the permeability assay. In both procedures, the pMBMECs were never passaged between isolation and experiment. Flecainide treatment was applied for 24 h prior to qPCR analyses and for 7 days prior to Western blot analyses, starting after pMBMECs had reached confluency.

    Isolation pMBMECs for quantitative PCR and western blot

    We began by euthanizing six- to eight-week-old female C57BL/6 mice. After euthanasia, the brains were carefully extracted and immediately transferred onto sterile filter paper to facilitate the removal of the meninges. Following this step, the brain tissue was homogenized to a uniform consistency and subsequently processed according to the protocol of the Adult Brain Dissociation Kit (Miltenyi Biotec), which is optimized for effective dissociation of murine brain tissue. To isolate microvascular endothelial cells, the resulting single-cell suspension was subjected to magnetic-activated cell sorting (MACS). First, CD45+ immune cells were labeled using CD45 MicroBeads and removed by magnetic separation via LS columns and the MACS Separator. The flow-through, containing CD45− cells, was then incubated with CD31 MicroBeads to isolate CD31+ endothelial cells. These CD45−CD31+ cells were collected and plated in 12-well plates for culture. Prior to seeding, the plates were coated overnight at 4 °C with a coating solution consisting of 500 µl dH2O, 400 µl collagen, and 100 µl fibronectin (200 µl per well) to enhance cell adhesion. The isolated endothelial cells were seeded in 1.5 ml of MBMEC medium per well. For the initial two days, cells were cultured in MBMEC medium supplemented with 0.1% puromycin (10 µl puromycin per 10 ml medium) to select for the desired cell population. Thereafter, the medium was replaced with puromycin-free MBMEC medium. The MBMEC culture medium consisted of 40 ml DMEM high glucose, 10 ml fetal calf serum (FCS), 25 µl basic fibroblast growth factor (bFGF), and 50 µl heparin.

    Isolation of pMBMECs for permeability assay

    Cells were isolated by the method of Coisne et al22 as follows: for each preparation, cortices from six- to ten-week-old, sex-matched mice were isolated and the meninges were removed. Preparations were pooled and homogenized in Hank’s balanced salt solution containing 0.1% bovine serum albumin. The homogenate was mixed with 30% dextran and centrifuged at 3000 × g for 25 min at 10 °C. The pellet containing the vascular fraction was collected. Centrifugation and pellet harvesting were repeated once. The collected vascular fraction was then filtered through a 60 μm nylon mesh. The capillary-enriched filtrate was digested with DNase I (10 mg/mL), TLCK (0.147 mg/mL), and collagenase/dispase (2 mg/mL) for 30 min at 37 °C. The digestion was stopped with an excess of wash buffer and the suspension was filtered through a 20 μm nylon mesh. The crude pMBMEC preparation was cultured for 48 h in the presence of 4 µg/ml puromycin, which allowed selective growth of pMBMECs only.

    Western blot analysis of protein expression in pMBMECs

    pMBMECs were cultured to confluence as described in “Isolation pMBMECs for quantitative PCR and western blot” section, then treated for 7 days with flecainide (2 µM or 5 µM) or vehicle. After treatment, cells were washed with PBS and lysed in NP-40 buffer (150 mM NaCl, 50 mM Tris/HCl, 1% NP-40, pH 8.0) on ice for 20 min and then scraped off using a pipette tip. Lysates were cleared by centrifugation (12,000 × g, 10 min, 4 °C), and protein concentration was determined using the BCassay kit (Interchim, France). Equal protein amounts (25 µg) were mixed with 1 × Laemmli buffer, denatured at 95 °C for 5 min, separated by SDS-PAGE, and transferred onto 0.2 µm nitrocellulose membranes. Membranes were blocked for 10 min at room temperature with EveryBlot buffer (Bio-Rad, USA), then incubated overnight at 4 °C with the following primary antibodies: anti-Actin (Invitrogen, PA1-183, 1:4000), anti-PECAM1 (Antibodies Online, ABIN669006, 1:1000), anti-β-Integrin (ABIN739029, 1:1000), anti-JAM3 (ABIN1386406, 1:1000), and anti-JAM2 (ABIN3187667, 1:1000). Following three washes with PBS containing 0.05% Tween-20, membranes were incubated for 1.5 h on a shaker at room temperature with IRDye-labeled secondary antibodies: Goat anti-Rabbit 680RD (LI-COR, 926-68071) and Goat anti-Mouse 800CW (LI-COR, 926-32210). Protein bands were visualized using the ChemiDoc system (Bio-Rad) and quantified with ImageLab software. Expression levels were normalized to β-actin and analyzed using GraphPad Prism.

    Quantitative PCR analysis

    For our qPCR analysis, we utilized the QuantStudio 3 from Thermo Fisher Scientific, employing SYBR Green as the detection chemistry. After preparing and loading the samples, we conducted the qPCR run analyzing the genes listed in Table 4 (in different experimental approaches), with DNA quantity measured via SYBR Green fluorescence. Post-run, we processed and interpreted the data using the QuantStudio Design and Analysis Software from Thermo Fisher Scientific, enabling us to calculate relative gene expression levels using the ΔΔCt and ΔCt methods, respectively.

    Table 4 List of primers used for qPCR analysis.
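
    As a minimal sketch of the ΔΔCt calculation mentioned above (independent of the QuantStudio software), relative expression can be computed directly from Ct values; the gene roles and triplicate Ct values below are hypothetical.

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """2^-ΔΔCt relative expression of a target gene, normalized to a reference gene
    and expressed relative to the vehicle/control condition."""
    delta_ct_treated = np.mean(ct_target) - np.mean(ct_reference)
    delta_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_reference_ctrl)
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical triplicate Ct values: target gene vs. a housekeeping reference gene
fold_change = relative_expression(
    ct_target=[24.1, 24.3, 24.0], ct_reference=[18.2, 18.1, 18.3],            # flecainide-treated
    ct_target_ctrl=[25.6, 25.4, 25.7], ct_reference_ctrl=[18.3, 18.2, 18.1],  # vehicle control
)
print(f"Relative expression (2^-ΔΔCt): {fold_change:.2f}-fold vs. control")
```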

    The treatment groups in the in vitro experiments included cells exposed to 2 µM or 5 µM flecainide or to PBS vehicle control 24 h prior to cell harvesting. The in vitro concentrations were based on previous experiments with flecainide reported in the literature.17,18

    Permeability assay

    Permeability assays were performed in triplicate as reported by Coisne et al,22 with minor adaptations: pMBMECs were grown on Matrigel-coated Transwell® filter inserts (0.4 μm pore size, 6.5 mm diameter; article number 662640, Greiner Bio-One Vacuette Schweiz GmbH, St. Gallen, Switzerland) for 6 to 8 days. Alexa Fluor 680-dextran (3 kDa, 10 μg/ml; LuBioScience, Luzerne, Switzerland) was used as the permeability tracer. Diffused dextran was quantified using the Odyssey Imaging System (LI-COR, Bad Homburg, Germany), and the clearance value (Pe, in cm/min) of the pMBMECs was calculated as reported by Coisne et al.22 Flecainide treatment at 5 µM was applied for 24 h, and IL-1β treatment at 20 ng/ml for 16 h. The in vitro concentrations of flecainide were based on previous experiments reported in the literature.17,18 After the experiment, each filter was examined for confluent growth of pMBMECs by staining with phalloidin-rhodamine and subsequent fluorescence microscopy. The inflammatory state of pMBMECs after IL-1β stimulation was monitored in parallel samples by staining with the homemade rat anti-mouse ICAM-1 monoclonal antibody 29G1, followed by a secondary donkey anti-rat Cy5 antibody (Jackson ImmunoResearch, Milan Analytica AG, Rheinfelden, Switzerland).
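
    One common way to obtain Pe from such Transwell tracer data, in the spirit of the Coisne et al approach cited above, is to convert abluminal tracer signals into cleared volumes, fit their slopes over time for cell-covered and cell-free filters, and correct for the filter contribution. The sketch below assumes pre-computed cleared volumes and uses hypothetical numbers; it is illustrative only, not the authors' exact calculation.

```python
import numpy as np

def permeability_pe(times_min, cleared_ul_cells, cleared_ul_blank, filter_area_cm2):
    """Endothelial permeability coefficient Pe (cm/min) from cleared-volume curves.

    Cleared volume (µl) = abluminal tracer signal * abluminal volume / luminal tracer
    concentration, tracked over time for filters with pMBMECs and for empty (coated,
    cell-free) filters.
    """
    ps_total = np.polyfit(times_min, cleared_ul_cells, 1)[0]   # slope, µl/min (filter + cells)
    ps_filter = np.polyfit(times_min, cleared_ul_blank, 1)[0]  # slope, µl/min (filter alone)
    ps_endothelium = 1.0 / (1.0 / ps_total - 1.0 / ps_filter)  # remove the filter contribution
    return (ps_endothelium / filter_area_cm2) * 1e-3           # µl -> cm^3, per cm^2 => cm/min

# Hypothetical cleared volumes (µl) sampled every 20 min for a 6.5 mm Transwell (~0.33 cm^2)
times = [20, 40, 60, 80]
pe = permeability_pe(times, [0.8, 1.7, 2.5, 3.3], [2.1, 4.2, 6.2, 8.3], 0.33)
print(f"Pe = {pe:.2e} cm/min")
```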

    Isolation of splenocytes for proliferation assay and qPCR analysis

    Immediately following euthanasia, spleens were aseptically removed and placed into wash buffer composed of DMEM supplemented with fetal calf serum and antibiotics. For tissue dissociation, each spleen was transferred onto a pre-wetted 40 µm cell strainer positioned on a conical tube. The spleen was mechanically dissociated by gently pressing it through the strainer using the plunger of a syringe. The cell strainer was then rinsed with additional wash buffer to collect all cells. The resulting single-cell suspension was centrifuged at 4 °C. After removal of the supernatant, the cell pellet was resuspended in pre-warmed ACK lysis buffer to lyse erythrocytes. The suspension was incubated at room temperature, and erythrocyte lysis was subsequently stopped by adding wash buffer, followed by a second centrifugation step at 4 °C. The final cell pellet was resuspended in wash buffer and passed through a freshly rinsed 40 µm cell strainer into a new conical tube to ensure maximal recovery of splenocytes.

    Proliferation assay using CFSE and Ki-67

    To investigate the effects of flecainide on lymphocyte proliferation, we performed a combined assay using CFSE (carboxyfluorescein succinimidyl ester) labeling and intracellular Ki-67 staining. Splenocytes were isolated as described in the “Isolation of splenocytes for proliferation assay and qPCR analysis” section. Prior to culture, cells were labeled with CFSE by incubation in PBS supplemented with fetal calf serum and the dye at 37 °C in the dark. The labeling reaction was quenched by the addition of cold wash medium, followed by incubation on ice. After two washing steps, cells were resuspended in murine T cell medium composed of IMDM supplemented with fetal calf serum, β-mercaptoethanol, L-glutamine, and antibiotics. T cell activation was achieved by seeding the CFSE-labeled splenocytes into anti-CD3-coated 96-well plates. Cells were treated with either vehicle (containing 0.005% DMSO), 2 µM flecainide, or 5 µM flecainide and incubated for five days at 37 °C in a humidified CO2 incubator. On day three, half of the culture medium was carefully replaced with fresh treatment medium. At the end of the incubation period, proliferation was assessed by flow cytometric analysis of CFSE dilution and intracellular Ki-67 expression. For Ki-67 staining, cells were fixed, permeabilized, and stained with a fluorochrome-conjugated anti-Ki-67 antibody according to the manufacturer’s protocol. Data were acquired and analyzed within defined immune cell subsets.

    Statistics and data interpretation

    Statistical analysis was performed using Prism (version 9, GraphPad Software, Inc.) and IBM SPSS Statistics (version 20, IBM Corporation, USA). Total and percent changes of the acquired retinal parameters (OCT and OMR) were analyzed using generalized estimating equation (GEE) models, accounting for within-subject inter-eye correlations, to test for differences between the two groups. For unpaired data, group means were compared using one-way ANOVA with Dunnett's post hoc test, utilizing one optic nerve per animal for the histological investigations. The thickness of the inner retinal layers, ranging from the inner limiting membrane to the bottom of the inner plexiform layer, the total retinal thickness, and the outer retinal layers were assessed in volume scans around the optic disc as the primary OCT-based outcome parameters. The spatial frequency served as the functional primary readout. The other histology-derived parameters were assessed as secondary outcome criteria. The severity of EAE symptoms, extending from minor hind limb weakness to complete paralysis, was assessed through a standardized scoring system (ranging from 0 to 5), providing a quantitative outcome measure of neuroinflammation and neurodegeneration. The EAE scores served as an integral part of the study, illustrating the clinical manifestation and progression of the disease and offering a real-time evaluation of neurological impairment. qPCR was used for the quantification of targeted gene expression changes; the relative changes in gene expression levels were examined as one of the primary outcome measures, offering key insights into the underlying molecular mechanisms. The permeability of the BBB was further evaluated using the Evans Blue assay. This approach involved the systemic administration of Evans Blue dye, the penetration of which into the brain tissue served as a direct indicator of BBB disruption. The extent of dye penetration, extracted and measured spectrophotometrically, offered a primary outcome measure of BBB integrity. The data derived from the Evans Blue assay yielded essential insights into the degree and timing of BBB permeability changes in response to neuroinflammation and following various therapeutic interventions.
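
    For illustration only, a minimal sketch of such a GEE analysis in Python with statsmodels is shown below; the data frame and column names ("thickness_change", "group", "animal") are placeholders, not the actual analysis code.

    ```python
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # oct_df: one row per eye, with columns "thickness_change", "group", "animal" (assumed layout)
    gee = smf.gee("thickness_change ~ group", groups="animal", data=oct_df,
                  cov_struct=sm.cov_struct.Exchangeable(),   # models inter-eye correlation within animals
                  family=sm.families.Gaussian())
    print(gee.fit().summary())
    ```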


  • Spatiotemporal dynamics of ecological quality and its drivers in Shanxi Province and its planned mining areas


    Study area

    Shanxi Province (34°34′-40°44′N, 110°14′-114°33′E) is located in northern China, in the eastern part of the Loess Plateau, with a total area of 156,700 square kilometers (Fig. 1). The province encompasses twelve planned mining areas, featuring a wealth of diverse mineral resources and a complex geological environment. The climate is classified as temperate continental monsoon, characterized by significant temperature variations across regions, with temperatures decreasing from south to north and from plains to mountainous areas. The province’s geomorphology is predominantly mountainous and hilly, comprising approximately 80% of the total area. The study area includes critical regions such as the Taihang and Lüliang mountains, which are rich in resources but face numerous challenges, including resource depletion and the need for ecological restoration.

    Fig. 1

    Location map of the study area. (This image was created using ArcGIS 10.8 software, which is an Esri product and is publicly available at the URL (https://www.esri.com/). The base map is based on the standard map (GS2024, No. 0650), and no modifications have been made to the map boundaries.)

    Technical route

    The detailed workflow of this study is illustrated in Fig. 2: (1) RSEI was constructed based on the GEE platform to invert the ecological quality changes in Shanxi Province and its planned mining areas from 2000 to 2020; (2) Theil-Sen analysis and Mann-Kendall trend tests were employed to analyze the RSEI trend; (3) Moran’s I was utilized to assess the spatial correlation of RSEI in Shanxi Province; (4) The interactions between RSEI and potential driving factors were analyzed using CatBoost and GWR models.

    Fig. 2

    Technology roadmap. (This figure was created using ArcGIS 10.8 software, an Esri product publicly available at https://www.esri.com/, and was generated using our dataset.)

    Datasets and preprocessing

    This study employed the MODIS dataset to construct the ecological quality assessment index. The data sources included the MOD09A1 (2000–2023, 500 m resolution, 8-day interval) and MOD11A2 (2000–2023, 500 m resolution, 8-day interval) from the MODIS collection. The JRC annual water classification historical data was utilized for water body masking. Additional datasets were leveraged to extract potential factors that may influence ecological quality in the study area, including Terra Climate (2000–2020, providing climate factor data), NASA’s SRTM Digital Elevation data (90 m resolution), population density data provided by East View Cartographic, and MYD17A3HGF (2000–2020, net primary productivity data). All data were acquired from the GEE cloud platform, with all datasets resampled to a resolution of 500 m. Atmospheric correction and relevant preprocessing techniques were applied to enhance data quality. To account for seasonal variation and meteorological conditions, the data time window was set from June 15 to September 15 each year, ensuring consistency in vegetation status and ecological outcomes.
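
    A minimal sketch of this acquisition step using the GEE Python API is given below; the asset IDs and the boundary source are assumptions for illustration, and the actual scripts may differ.

    ```python
    import ee

    ee.Initialize()

    # Assumed boundary source; the study's own Shanxi boundary layer may have been used instead.
    shanxi = ee.FeatureCollection("FAO/GAUL/2015/level1") \
               .filter(ee.Filter.eq("ADM1_NAME", "Shanxi"))

    def summer_composite(year):
        """Median composite over the Jun 15 - Sep 15 window used in the study."""
        start, end = f"{year}-06-15", f"{year}-09-15"
        sr = (ee.ImageCollection("MODIS/061/MOD09A1")    # surface reflectance, 8-day
              .filterDate(start, end).filterBounds(shanxi.geometry()).median())
        lst = (ee.ImageCollection("MODIS/061/MOD11A2")   # land surface temperature, 8-day
               .filterDate(start, end).select("LST_Day_1km").median())
        return sr.addBands(lst).clip(shanxi)

    composite_2020 = summer_composite(2020)
    ```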

    Methods

    RSEI indicator construction

    RSEI leverages ground information and ecological data obtained through remote sensing technologies, employing mathematical and statistical methods for processing and analysis, thereby facilitating rapid assessment and monitoring of the health status and environmental quality of specific regions31. Given that greenness, humidity, heat, and dryness are crucial components of the ecological environment, this study selects these four ecological factors to construct the RSEI index32.

    $$RSEI = f(\mathrm{Greenness},\ \mathrm{Wetness},\ \mathrm{Heat},\ \mathrm{Dryness})$$

    (1)

    where f is the set of functions used to perform PCA.

    kNDVI applies a nonlinear kernel transformation to NDVI, capturing the details and variations in vegetation coverage more accurately than NDVI alone. Thus, kNDVI serves as the representation of greenness. The calculation formulas are as follows:

    $$NDVI=\frac{NIR-Red}{NIR+Red}$$

    (2)

    $$kNDVI=\tanh\left(\left(\frac{NIR-Red}{2\sigma}\right)^{2}\right)=\tanh\left(\left(\frac{NDVI}{2\sigma}\right)^{2}\right)$$

    (3)

    where NIR and RED represent the reflectances of the near-infrared (NIR1) and red bands of MOD09A1, respectively. The symbol σ denotes a length scale that is proportional to the average reflectance of the near-infrared and red bands, which can be adjusted. When σ is set to 0.5(NIR + RED), the calculation formula is as follows:

    $$kNDVI=\tanh\left(NDVI^{2}\right)$$

    (4)
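
    The simplified kNDVI of Eq. (4) is straightforward to compute per pixel; a minimal NumPy sketch, with hypothetical band arrays as inputs, is:

    ```python
    import numpy as np

    def kndvi(nir, red):
        """kNDVI with sigma = 0.5 * (NIR + Red), i.e. Eq. (4): tanh(NDVI^2)."""
        ndvi = (nir - red) / (nir + red)
        return np.tanh(ndvi ** 2)

    # nir, red: near-infrared and red reflectance arrays from MOD09A1 (hypothetical inputs)
    ```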

    Humidity (WET) is calculated using the formula:

    $$WET=0.1147\rho_{1}+0.2489\rho_{2}+0.2408\rho_{3}+0.3132\rho_{4}-0.3122\rho_{5}-0.6416\rho_{6}-0.5087\rho_{7}$$

    (5)

    where $\rho_{i}$ (i = 1, 2, …, 7) represents the reflectance of the red, near-infrared 1, blue, green, near-infrared 2, short-wavelength infrared 1, and short-wavelength infrared 2 bands of the MOD09A1 image, respectively.

    Dryness (NDBSI) is composed of the index-based built-up index (IBI) and the bare soil index (SI). The calculation formulas are:

    $$NDBSI=\left(IBI+SI\right)/2$$

    (6)

    $$IBI=\frac{2\rho_{6}/(\rho_{6}+\rho_{2})-\rho_{2}/(\rho_{2}+\rho_{1})+\rho_{4}/(\rho_{4}+\rho_{6})}{2\rho_{6}/(\rho_{6}+\rho_{2})+\rho_{2}/(\rho_{2}+\rho_{1})+\rho_{4}/(\rho_{4}+\rho_{6})}$$

    (7)

    $$SI=\frac{\left(\rho_{6}+\rho_{1}\right)-\left(\rho_{2}+\rho_{3}\right)}{\left(\rho_{6}+\rho_{1}\right)+\left(\rho_{2}+\rho_{3}\right)}$$

    (8)

    where $\rho_{i}$ (i = 1, 2, …, 6) denotes the reflectance of the red, near-infrared 1, blue, green, near-infrared 2, and short-wavelength infrared 1 bands of the MOD09A1 image, respectively.

    Heat is expressed in terms of land surface temperature (LST), with MOD11A2 LST products providing the necessary data. The raw LST data (LST0) require unit conversion from Kelvin (K) to degrees Celsius (°C). The formula for this conversion is:

    $$LST=0.02\,LST_{0}-273.15$$

    (9)

    In this paper, the RSEI is calculated using the following formula:

    $$RSEI=f(kNDVI,\ WET,\ NDBSI,\ LST)$$

    (10)

    After obtaining the results of the four ecological factors, it is essential to linearly map each indicator’s values to the range of [0, 1] for normalization. This process eliminates the impacts caused by differing units and value ranges, allowing for PCA to construct the RSEI.

    Furthermore, RSEI is subjected to additional normalization to facilitate comparison and measurement, as shown in the following formula:

    $$RSEI=\frac{RSEI_{0}-RSEI_{min}}{RSEI_{max}-RSEI_{min}}$$

    (11)

    where $RSEI_{0}$ is the initial RSEI, and $RSEI_{min}$ and $RSEI_{max}$ represent the minimum and maximum values of $RSEI_{0}$, respectively. A higher RSEI value, approaching 1, indicates better ecological environmental quality, while a lower RSEI value signifies poorer ecological environmental quality.

    To enhance the assessment and comparison of ecological conditions, the RSEI values were categorized into five levels from high to low: excellent (0.8–1), good (0.6–0.8), moderate (0.4–0.6), fair (0.2–0.4), and poor (0–0.2). This classification provides a clearer understanding of ecological quality, with values closer to 1 indicating better ecological environment health.
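
    A minimal sketch of the normalization, PCA, and classification steps is shown below (NumPy/scikit-learn); the indicator arrays and the sign convention of the first principal component are assumptions, since PC1 sometimes needs to be inverted so that larger values correspond to better ecological quality.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def minmax(a):
        return (a - np.nanmin(a)) / (np.nanmax(a) - np.nanmin(a))

    # kndvi_img, wet_img, ndbsi_img, lst_img: per-pixel indicator arrays (hypothetical inputs)
    stack = np.column_stack([minmax(x).ravel() for x in (kndvi_img, wet_img, ndbsi_img, lst_img)])

    pc1 = PCA(n_components=1).fit_transform(stack).ravel()   # RSEI_0 from the first component
    # Flip the sign of pc1 here if higher values correspond to worse ecology
    rsei = minmax(pc1)

    # Five quality levels: 0 = poor (0-0.2) ... 4 = excellent (0.8-1)
    levels = np.digitize(rsei, [0.2, 0.4, 0.6, 0.8])
    ```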

    Sen’s slope estimator and Mann–Kendall statistical test

    Sen’s Slope Estimator can accurately estimate the trend slope in time series data, encompassing both linear and nonlinear trends. Its robustness allows it to effectively handle outliers and data with significant fluctuations. By employing Sen’s Slope Estimator, the strength of the trend in each time series can be determined, thereby identifying the rate and magnitude of ecosystem changes. The calculation formula is as follows:

    $$\beta=\mathrm{Median}\left(\frac{X_{j}-X_{i}}{j-i}\right),\quad\forall\, j>i$$

    (12)

    where $\beta$ denotes the slope, and $X_{i}$ and $X_{j}$ denote the data values for year i and year j, respectively.

    The MK test is employed to detect the presence and significance of trends in time series data33. It does not rely on assumptions about the data distribution, making it applicable to various types of ecological data. While Sen’s Slope Estimator provides a quantitative assessment of the trends, the Mann–Kendall test confirms the significance and statistical stability of these trends34. The calculation formulas for the MK test are as follows:

    $$S=\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\mathrm{sgn}\left(x_{j}-x_{i}\right)$$

    (13)

    $$\mathrm{sgn}\left(x_{j}-x_{i}\right)=\left\{\begin{array}{ll}1, & x_{j}-x_{i}>0\\ 0, & x_{j}-x_{i}=0\\ -1, & x_{j}-x_{i}<0\end{array}\right.$$

    (14)

    $$V=n\left(n-1\right)\left(2n+5\right)/18$$

    (15)

    $$Z=\left\{\begin{array}{ll}\frac{S-1}{\sqrt{V}}, & S>0\\ 0, & S=0\\ \frac{S+1}{\sqrt{V}}, & S<0\end{array}\right.$$

    (16)

    where n denotes the time series length; sgn is the sign function; S is the statistic; and Z is the trend significance test statistic.

    In the MK test, the significance of RSEI changes is determined based on a chosen significance level α. Typically, α is set at 0.05, which corresponds to a Z value of ±1.96. Therefore, when the absolute value of the Z statistic from the MK test exceeds 1.96, the change passes the significance test at the 95% confidence level, indicating that the trend is significant; otherwise, the trend is not significant. The results are categorized into five classes, as shown in Table 1.

    Table 1 RSEI trend classification.
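
    A compact Python sketch of both statistics for a single pixel's annual RSEI series is given below; it follows Eqs. (12)–(16) directly and is meant as an illustration, not the production pipeline.

    ```python
    import numpy as np
    from itertools import combinations
    from scipy.stats import norm

    def sen_slope(x):
        """Theil-Sen slope, Eq. (12)."""
        return np.median([(x[j] - x[i]) / (j - i) for i, j in combinations(range(len(x)), 2)])

    def mann_kendall(x, alpha=0.05):
        """MK statistic S, variance V and Z, Eqs. (13)-(16); returns Z and a significance flag."""
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i, j in combinations(range(n), 2))
        v = n * (n - 1) * (2 * n + 5) / 18.0
        z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(v)
        return z, abs(z) > norm.ppf(1 - alpha / 2)   # |Z| > 1.96 at alpha = 0.05

    # rsei_series: annual RSEI values of one pixel, 2000-2020 (hypothetical input)
    ```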

    Hurst exponent

    The Hurst exponent is an indicator used to measure the long-term memory in time series or fractals, aiding in understanding the fundamental dynamics of the series and providing a degree of predictability for changes in ecological quality35. The formula for calculating the Hurst exponent is as follows:

    $$H=\log(R/S)/\log\left(n\right)$$

    (17)

    where H is the Hurst exponent. In the actual computation, it is typically necessary to perform multiple decompositions and calculate the R/S values at different scales, followed by averaging these R/S values to obtain the final estimate of the Hurst exponent.

    The Hurst exponent is commonly employed to analyze the persistence or anti-persistence of time series, with values ranging from 0 to 1. Specifically, when 0 < H < 0.5, it indicates anti-persistent change; the closer H is to 0, the stronger the anti-persistence. When 0.5 < H < 1, it suggests persistent change; the closer H is to 1, the stronger the persistence. When H equals 0.5, it denotes that the time series is independent and random, exhibiting uncertainty with no correlation between past and future trends.
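
    An illustrative rescaled-range (R/S) implementation is sketched below; the window sizes and the averaging scheme are assumptions, since the text does not specify them.

    ```python
    import numpy as np

    def hurst_rs(series, window_sizes=(4, 5, 7, 10, 14, 21)):
        """Estimate H as the slope of log(R/S) against log(n) over several window sizes."""
        series = np.asarray(series, dtype=float)
        rs_means = []
        for n in window_sizes:
            rs_vals = []
            for start in range(0, len(series) - n + 1, n):
                w = series[start:start + n]
                dev = np.cumsum(w - w.mean())          # cumulative deviation from the window mean
                r = dev.max() - dev.min()              # range R
                s = w.std(ddof=1)                      # standard deviation S
                if s > 0:
                    rs_vals.append(r / s)
            rs_means.append(np.mean(rs_vals))
        h, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
        return h
    ```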

    Global moran’s

    Global Moran’s I is a commonly used method for spatial autocorrelation analysis, employed to measure the spatial correlation among observations within a spatial dataset and to reveal the spatial relationships between reference units and their neighboring units26. The Moran’s I statistic ranges from − 1 to + 1; a value close to + 1 indicates that the observations of neighboring geographic units tend to be similar, while a value close to -1 suggests that the observations are inclined to be opposite. A Moran’s I value near 0 indicates that the data are randomly distributed in space, showing no significant spatial autocorrelation. The formula for calculating Moran’s I is as follows:

    $$\mathrm{Moran's}\ I=\frac{m\sum_{i=1}^{m}\sum_{j=1}^{m}w_{ij}(x_{i}-\bar{x})(x_{j}-\bar{x})}{\sum_{i=1}^{m}\sum_{j=1}^{m}w_{ij}(x_{i}-\bar{x})^{2}}$$

    (18)

    where m represents the total number of samples, $x_{i}$ and $x_{j}$ denote the RSEI values at positions i and j, $w_{ij}$ indicates the spatial weight between i and j, and $\bar{x}$ is the mean value of the RSEI.
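
    As a sketch, global Moran's I can be computed directly from a value vector and a spatial weight matrix; the implementation below uses the standard (m / sum of weights) form, which coincides with Eq. (18) up to the weight-normalisation convention, and the weight matrix itself is a hypothetical input.

    ```python
    import numpy as np

    def global_morans_i(values, w):
        """values: 1-D array of RSEI samples; w: (m, m) spatial weight matrix with zero diagonal."""
        m = len(values)
        z = values - values.mean()
        numerator = m * np.sum(w * np.outer(z, z))
        denominator = w.sum() * np.sum(z ** 2)
        return numerator / denominator
    ```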

    CatBoost

    CatBoost is a machine learning algorithm based on gradient-boosted decision trees, demonstrating exceptional performance in handling classification and regression problems36. One notable advantage of CatBoost is its ability to efficiently manage categorical features while achieving high prediction accuracy. Additionally, CatBoost employs regularization techniques to mitigate overfitting, enhancing the robustness of the model when dealing with complex data. The fundamental steps for training a CatBoost model are illustrated in Fig. 3: first, create a CatBoost model and train it using the training dataset; once the model is trained, it can be used to predict new data. Throughout the training and prediction processes, CatBoost simplifies the complexities of data preprocessing and offers an efficient gradient boosting algorithm, helping users construct more accurate models.

    Fig. 3

    CatBoost implementation process. In the figure, X_train, Y_train is the training set; X_test, Y_test is the test set; pt is the prediction of the t-th sample; a is the weight of the t-th sample; and Y is the final prediction obtained by weighting the prediction of each learner.
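
    For reference, the basic train/predict loop described above might look like the following with the catboost Python package; the driving-factor feature matrix and the hyperparameter values are placeholders.

    ```python
    from catboost import CatBoostRegressor

    # X_train / X_test: driving-factor features; y_train / y_test: RSEI values (hypothetical data)
    model = CatBoostRegressor(iterations=500, learning_rate=0.05, depth=6,
                              loss_function="RMSE", verbose=100)
    model.fit(X_train, y_train, eval_set=(X_test, y_test))

    y_pred = model.predict(X_test)
    importance = model.get_feature_importance()   # contribution of each driving factor
    ```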

    Geographically weighted regression (GWR)

    In spatial research, particularly at large scales, conventional multivariate linear regression models may fail to accurately capture the variations and heterogeneity in geographic space. The GWR model, on the other hand, is adept at exploring the relationships between variables in spatial data, effectively reflecting the correlations between dependent and independent variables across geographic space37. Moreover, the GWR model estimates parameters through independent linear regressions within each local area, utilizing local parameter estimates instead of global ones. The kernel type is “bisquare” and the number of neighbors is determined by the optimal bandwidth (gwr_bw). The formula for the GWR model is as follows:

    $$y_{i}=\beta_{0}(u_{i},v_{i})+\sum_{k}\beta_{k}(u_{i},v_{i})x_{k,i}+\epsilon_{i}$$

    (19)

    where $y_{i}$ is the dependent variable for sample i; $(u_{i},v_{i})$ are the coordinates of sample i; $\beta_{0}(u_{i},v_{i})$ is the intercept term for sample i; $\beta_{k}(u_{i},v_{i})$ is the k-th regression parameter for sample i; $x_{k,i}$ is the k-th independent variable for sample i; and $\epsilon_{i}$ represents the random error.
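
    A hedged sketch of fitting such a GWR with the mgwr package is shown below; the coordinate and variable arrays are placeholders, and the library choice is an assumption since the text does not name the implementation used.

    ```python
    import numpy as np
    from mgwr.sel_bw import Sel_BW
    from mgwr.gwr import GWR

    # coords: (m, 2) sample coordinates; y: (m, 1) RSEI; X: (m, k) driving factors (hypothetical)
    coords = np.column_stack([lon, lat])
    selector = Sel_BW(coords, y, X, kernel="bisquare", fixed=False)   # adaptive bandwidth (gwr_bw)
    bw = selector.search()

    gwr_model = GWR(coords, y, X, bw, kernel="bisquare", fixed=False)
    results = gwr_model.fit()
    local_params = results.params   # one row of local beta_k estimates per sample
    ```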

    Model evaluation

    To evaluate the model’s estimation performance, the dataset is divided into an 80% training set and a 20% testing set. Two evaluation metrics are used: the coefficient of determination (R2) and the root mean square error (RMSE). Additionally, to assess the model’s generalization ability, we use 5-fold cross-validation: the data are randomly divided into 5 subsets, and in each fold the model is trained on four subsets and tested on the remaining one. The final model performance is the average of the results across the folds. The calculation formulas are as follows:

    $$R^{2}=1-\frac{\sum_{i=1}^{n}\left(x_{i}-y_{i}\right)^{2}}{\sum_{i=1}^{n}\left(x_{i}-\bar{y}_{i}\right)^{2}}$$

    (20)

    $$RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-y_{i}\right)^{2}}$$

    (21)

    where $x_{i}$ is the true value of RSEI, $y_{i}$ is the estimated value of RSEI, $\bar{y}_{i}$ is the mean of the true RSEI values, and n is the number of samples in the test set.
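
    The split and 5-fold procedure can be sketched as follows with scikit-learn; the estimator and the data arrays are placeholders.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split, KFold
    from sklearn.metrics import r2_score, mean_squared_error
    from catboost import CatBoostRegressor

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = CatBoostRegressor(verbose=0).fit(X_train, y_train)
    pred = model.predict(X_test)
    print("R2  :", r2_score(y_test, pred))
    print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))

    # 5-fold cross-validation: average performance over the folds
    fold_r2 = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        m = CatBoostRegressor(verbose=0).fit(X[train_idx], y[train_idx])
        fold_r2.append(r2_score(y[test_idx], m.predict(X[test_idx])))
    print("mean 5-fold R2:", np.mean(fold_r2))
    ```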


  • Secret Fintech Payments Cloud $725 Million Facebook Class Action Settlement


    Three years ago, Facebook parent company Meta agreed to pay a whopping $725 million to settle a class action lawsuit accusing it of making users’ data available without their consent (Meta denied wrongdoing). Payments were finally scheduled to start hitting consumers’ wallets this month, but court filings from last week show that the portion of the funds slated to be sent through digital prepaid cards are now under intense legal scrutiny. Forbes estimates those digital payments would total $150 million.

    The controversy stems from secret rebates that Blackhawk Network, the fintech that issues the digital cards, agreed to make to Angeion, the claims administration firm in charge of doling out the class action funds to harmed consumers. The plaintiffs’ attorneys in the Meta case that hired Angeion only discovered these rebates over the past few months, after another lawsuit tipped them off about their possible existence. Since then, the lawyers have asked Angeion to forgo the payments from Blackhawk or hand them over to the consumers in the class. So far, Angeion has refused to give up the rebates or to disclose its contract with Blackhawk.

    A few months ago, Forbes chronicled the industry practice of such back-room dealings in our investigation into how private equity-owned firms were quietly pocketing class action payouts.




    Class action lawsuits often let consumers choose from different payout options such as a paper check, direct deposit into a bank account, PayPal or a digital prepaid card. Digital cards arrive in emails and have their benefits, such as being cheaper to administer and potentially easier to use for unbanked Americans. But a big chunk of the funds deposited on them goes unspent, just as it does for gift cards, resulting in what industry professionals call “breakage.”

    Card issuers like Blackhawk typically claw back the lion’s share of unused funds through monthly fees that pop up after six or twelve months of inactivity on a prepaid card. The breakage total can vary based on factors like when the inactivity fees kick in and how high they are, but even for the most consumer-friendly programs, it can easily add up to millions of dollars for large class action settlements. Yet the breakage amounts are never disclosed in court filings. And until recently, plaintiffs’ attorneys and judges were largely unaware of how the digital prepaid cards work, or who collects the breakage they generate.

    Blackhawk, which is owned by private equity firms Silverlake and P2 Capital Partners, has historically offered claims administrators “rebates” in return for them inserting digital prepaid cards as a payout option into a class action. (The rebates were discovered by whistleblower Todd Hilsee years ago, and he published a research paper on them in October 2024.) This past April, a class action suit was filed over the rebates in the Eastern District of Pennsylvania against three big claims administrators, including Angeion. The suit, which accuses them of fraud and various other breaches, asserts that the rebates “are nothing more than kickbacks” and that administrators have kept these agreements secret from attorneys, judges and class members.

    Angeion has called the lawsuit “baseless.” Blackhawk has been added as a defendant in the case, with the plaintiffs accusing it of conspiracy, unjust enrichment, and aiding and abetting fraud. A Blackhawk spokesperson didn’t respond to our requests for comment, but the company has previously told us in a statement that its programs “are in full compliance with applicable federal and state laws and regulations.”

    So far, in the Meta privacy case, Angeion has agreed to share its Blackhawk contract only with Northern District of California Judge Vince Chhabria for his private review. Judge Chhabria can then decide whether it should be filed in the public record.

    A spokesperson for Angeion told Forbes in an emailed statement that the company administers settlements “according to the terms and conditions of the relevant settlement agreement and court orders.” He added, “Although Angeion has not yet received any revenue from Blackhawk with regard to the Meta settlement, its agreement with Blackhawk contemplates financial benefit to Angeion. That benefit does not in any way reduce the funds available to class members or impose an additional cost to the settlement fund.”

    One obvious question remains: How much did Blackhawk agree to pay Angeion? A 2020 email obtained by Todd Hilsee, combined with our own reporting, indicates that a Blackhawk executive offered a “discount” or rebate of up to 3.5% to a claims administrator in exchange for running a class payout through digital debit cards. Using that assumption, a $150 million digital payout would result in a $5 million payment from Blackhawk to Angeion. If the rebate were higher, say 7%, the payment would be $10 million.

    Last week, the attorneys in the Meta case filed a joint status report proposing changes to make the digital prepaid cards more consumer-friendly. For example, when the cards are emailed out, Blackhawk can require that people click a link to activate a card before the money leaves the settlement fund. That would help prevent payouts that go unnoticed in consumers’ inboxes from leaving the settlement fund and eventually getting gobbled up by Blackhawk’s inactivity fees.

    The attorneys also proposed sending multiple email reminders to consumers to activate their cards and use their balances, as opposed to the original email-reminder plan, which appeared to consist of just one reminder sent to a digital card recipient after 11 months of inactivity. Another proposal presented the option of completely replacing digital prepaid cards with other payout methods in the settlement distribution.

    Scrutiny of the dubious practices of class action payouts continues to grow. Earlier this week, in a big class action case that accuses realtors of colluding to inflate real estate agent commissions, Judge Stephen Bough of the Western District of Missouri filed an order asking the plaintiffs’ attorneys to complete a list of new disclosures. The inquiry asks the attorneys whether they have financial relationships with companies involved in the suit, such as litigation financing firms, banks, private equity funds, hedge funds, settlement administrators, vendors or similar institutions. The order aims to prevent any lawyers from having undisclosed conflicts of interest.


  • Towards generalist foundation model for radiology by leveraging web-scale 2D&3D medical data


    In this section, we detail our method. Notably, our study is based on data obtained from open-source websites, as listed in Supplementary Table 5. Therefore, the relevant ethical regulations are governed by the original data-uploading processes outlined in each dataset’s collection pipeline (please refer to each dataset’s website in Supplementary Table 5 for more details). Regarding the data from Radiopaedia, which forms the main component of our newly proposed dataset: Radiopaedia is a peer-reviewed, open-edit radiology resource collection website. Its mission is to “create the best radiology reference available and to make it available for free, forever, and for all.” We have obtained non-commercial use permission from the various uploaders as well as the founder of Radiopaedia. The relevant ethical regulations are governed by the Radiopaedia privacy policy.

    Dataset

    Here, we describe the procedure for constructing the datasets and benchmark. In the section “Medical Multimodal Dataset (MedMD)”, we present several medical multimodal datasets and merge them with an extensive collection of preexisting datasets, resulting in the Medical Multimodal Dataset (MedMD). MedMD is a large-scale, high-quality medical vision-language dataset covering a wide range of anatomies and over 5000 diseases, as shown in Fig. 7a. We further construct a filtered radiology subset, the Radiology Multimodal Dataset (RadMD). In the section “Radiology Evaluation Benchmark (RadBench)”, we introduce a new radiology benchmark for evaluation, termed RadBench, with three distinct tasks, i.e., visual question answering, report generation, and rationale diagnosis, aiming to monitor the progress of developing foundation models.

    Medical multimodal dataset (MedMD)

    To start, we construct a candidate data pool by pulling together a variety of existing visual-language medical datasets, for example, MIMIC-CXR30 and PMC-OA31. Despite the scale of these high-quality datasets, they are fundamentally limited in several aspects: (i) Data format. These datasets are composed only of 2D medical images, which do not fully capture the complexities of clinical use cases, for example, 3D imaging modalities like CT and MRI; (ii) Modality diversity. A noteworthy limitation arises from the fact that only chest X-ray images are provided with medical reports; training models on such data clearly limits generalizability to a broader range of imaging modalities and anatomical regions; (iii) Report quality. Another critical limitation lies in the use of data extracted from figures and captions in research papers. The gap between research-oriented data and real-world clinical scenarios may not support accurate and reliable clinical diagnoses. Therefore, to support the training of our proposed Radiology Foundation Model (RadFM), we augment the dataset with four new ones, namely PMC-Inline, PMC-CaseReport, RP3D-Series, and MPx-Series, resulting in MedMD. MedMD has a total of 16M image-text pairs, including around 15.5M 2D images and 500k 3D scans with corresponding captions or diagnosis labels, as shown in Supplementary Table 3. A more detailed introduction of the different data sources can be found in the Supplementary Section “Detailed Introduction for Different Data Sources”.

    Generally speaking, we split the candidate data pool into two parts (i) interleaved image-language data that is collected from academic papers and (ii) image-language data constructed for visual-language instruction tuning, as detailed below.

    Interleaved dataset

    PMC-Inline. PMC-Inline contains 11M 2D radiology images collected from PubMed Central papers. In contrast to existing work, for example, PMC-OA31, which only contains figures and their corresponding captions, here we focus on the inline references in the main body of papers. For example, a paper may contain sentences like “As shown in Fig. 2, we can see …”; we localise the keyword “Fig. 2” and link the corresponding figure back into the sentence, ending up with interleaved images and text with rich context. This dataset shares the same format as MMC427, which has been shown to be effective for training foundation models in the computer vision community, for example, Flamingo22.

    Visual-language instruction tuning dataset

    PMC-CaseReport. Inspired by former works leveraging clinical case reports42, PMC-CaseReport is a filtered subset of PMC-Inline with around 103K case reports, in which doctors document valuable clinical cases based on their contact with the patients, such as family medical history, preliminary diagnosis, radiographic exam results, and surgical records, together with critical radiologic scans, generally following the real timeline.

    Similar to PMC-VQA8, which generates VQA pairs by querying ChatGPT with image captions, we also generate 1.1M question-answer pairs by querying ChatGPT with the sentences containing inline references in case reports. However, in contrast to PMC-VQA, we keep the background information of the patients to simulate the clinical diagnosis scenario, so the result can be seen as a medical contextual VQA dataset. For example, a question-answer pair may look like: “Question: A 58-year-old woman presented to the emergency department …Postoperatively, her pain significantly relieved. What did the MRI indicate? Answer: The MRI indicated tumor recurrence at L2 and S1-S2.”

    RP3D. RP3D (RadioPaedia 3D) is a novel dataset with 3D radiology scans, sourced from the Radiopaedia website (https://radiopaedia.org/). All privacy issues have already been resolved by the clinician who uploaded the case. Specifically, each patient case comprises one or more images from the same or different modalities, accompanied by high-quality captions that have been meticulously peer-reviewed by experts in the Radiopaedia Editorial Board (https://radiopaedia.org/editors). We have included a response letter from Radiopaedia, with the agreement for us to use the dataset for training under non-commercial cases. In addition, for each disease, we can get corresponding radiological features across different modalities. We convert the image-caption pairs into a variety of formats, namely, RP3D-Caption, RP3D-Modality, RP3D-Rationale, and RP3D-VQA, depending on their corresponding text content. Specifically, RP3D-Caption denotes the images paired with their corresponding captions; RP3D-Modality refers to images with modality labels; RP3D-Rationale incorporates radiological features with disease labels for each case; RP3D-VQA involves visual question-answering pairs generated from captions by querying ChatGPT, as illustrated in Supplementary Fig. 1.

    MPx. MPx is collected from the MedPix website (https://medpix.nlm.nih.gov/) and organized by cases. Each case contains multiple radiologic scans, along with general clinical findings, discussions, and diagnostic results. In addition, MPx also provides annotations on the scan level, including information such as image modality, shooting plane, and captions for each scan. Thus, we separate it into MPx-Single and MPx-Multi, containing annotations on the case-level and scan-level, respectively.

    Radiology multimodal dataset (RadMD)

    For domain-specific finetuning, we filter out the non-radiology images from MedMD and construct a clean subset, named the Radiology Multimodal Dataset (RadMD), dedicated to supervised visual instruction tuning. It contains a total of 3M images, spanning various data formats, modalities, and tasks, and featuring over 5000 diseases, as shown in Fig. 7b.

    In general, we conducted the following filtering process: (i) remove non-radiologic images; (ii) remove the entire PMC-OA and PMC-Inline datasets, as the images in PubMed are 2D-only and thus differ from real clinical cases; additionally, the writing styles of academic papers and real clinical reports are inconsistent; (iii) remove a large portion of 2D image cases from the PMC-Series, to emphasize 3D images in training; (iv) filter out information about patient age or structure size, as the image spacing and patient background information are not provided. Specifically, we applied string matching with Python’s regular expressions to remove any sentences containing terms related to physical measurements, such as “mm”, “cm”, or decimal numbers (e.g., “2.5 cm”), as these are indicative of missing or incomplete metadata related to patient age, structure size, or image spacing. This step primarily addresses the report generation tasks, where such metadata would otherwise cause incorrect or unpredictable descriptions; (v) balance the number of normal and abnormal patients in the diagnosis datasets, as generative models are sensitive to data imbalances. More comprehensive details regarding the filtering process and the resulting dataset sizes can be found in Supplementary Table 3.
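
    A hedged sketch of that measurement filter is shown below; the sentence splitting and the exact pattern are illustrative choices, not the authors' released code.

    ```python
    import re

    # Sentences mentioning absolute sizes (decimal numbers or mm/cm units) are dropped,
    # since image spacing and patient background information are unavailable.
    MEASUREMENT = re.compile(r"\d+\.\d+|\b\d+\s?(?:mm|cm)\b", re.IGNORECASE)

    def strip_measurement_sentences(report: str) -> str:
        sentences = re.split(r"(?<=[.!?])\s+", report)
        return " ".join(s for s in sentences if not MEASUREMENT.search(s))

    print(strip_measurement_sentences("There is a 2.5 cm nodule. No pneumothorax is seen."))
    # -> "No pneumothorax is seen."
    ```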

    Radiology evaluation benchmark (RadBench)

    In addition to the training set, we also introduce RadBench, a comprehensive evaluation benchmark for monitoring progress in the development of radiology foundation models for generative tasks. Considering that most existing medical benchmarks include only plain labels (such as disease categories), which are not suitable for assessing models’ long-sentence generation ability, RadBench is targeted at compensating for this.

    In detail, RadBench is first randomly sampled from the RP3D dataset. We then carry out meticulous manual verification of all samples to ensure data quality. Specifically, we developed a human evaluation interface that visually presents the data source, image, question, and answer of each case. Eight human annotators were asked to assess the quality of these cases against the following criteria:

    • Image types: remove the images that do not fall in radiology.

    • Question reasonability: keep the questions that can be answered from the given radiology image; for example, for visual question answering, remove questions related to size; for report generation, remove cases containing sentences like “Compared with previous cases”; for rationale diagnosis, remove cases lacking the corresponding radiological features.

    • Answer correctness: keep those with correct answers based on the given text reports.

    As a result, we obtained 4229 cases for visual question answering, 1468 for report generation, and 1000 for rationale diagnosis. Additionally, we also consider nine existing tasks for our evaluation, which include plain diagnosis and medical VQA tasks. A detailed breakdown of each dataset, including task descriptions and modalities, is provided in Supplementary Table 4. Combining them with our RadBench, we comprehensively assess models on four tasks, i.e., disease diagnosis, medical VQA, report generation, and rationale diagnosis. The details of the four evaluation tasks and metrics are introduced in the following.

    Disease diagnosis

    This task involves analyzing radiology images to determine the likelihood of specific diseases. Here, we modify this task into an induction task, which uses introductory text explaining the classification task and providing the name of the queried disease at the beginning of the prompt. Given a medical image, we randomly select a disease and a prompt sentence like “Is {disease} shown in this image” as input, querying the model to determine the existence of that disease. Because this is formulated as a generation task, AUC cannot be calculated, so we match the output against the ground truth to calculate the ACC and F1 score. Specifically, we match the output with a closed ground-truth list {“yes”, “no”} using difflib.SequenceMatcher and choose the most similar one as the model’s prediction. Considering that ACC scores may suffer from class imbalance, we sample positive and negative cases at the same ratio. In our dataset, we do not impose a prior on the disease; over 5000 diseases are considered, with a balanced ratio of “yes” and “no” responses.
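
    A minimal sketch of this matching step is shown below; the variable names are placeholders for the generated outputs and ground-truth labels.

    ```python
    import difflib
    from sklearn.metrics import accuracy_score, f1_score

    def to_closed_label(generated: str, choices=("yes", "no")) -> str:
        """Map a free-form generation to the most similar closed-set label."""
        return max(choices,
                   key=lambda c: difflib.SequenceMatcher(None, generated.lower(), c).ratio())

    # outputs: model generations; labels: ground-truth "yes"/"no" strings (hypothetical data)
    preds = [to_closed_label(o) for o in outputs]
    print("ACC:", accuracy_score(labels, preds))
    print("F1 :", f1_score(labels, preds, pos_label="yes"))
    ```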

    Medical visual question answering

    This task is a combination of popular visual question-answering challenges. Given a medical image and a clinically relevant question in natural language as a prompt, the medical VQA system is expected to predict a plausible and convincing answer.

    Radiology report generation

    This task focuses on the automatic generation of reports, i.e., summarizing the radiologic findings based on radiology images, such as X-rays, CT scans, and MRI scans. Given a medical image, we randomly select a prompt sentence like “Please caption this scan with findings” as input.

    Rationale diagnosis

    This task involves analyzing radiology images to predict both the underlying disease and the typical radiologic features of different modalities, such as X-rays, CT scans, and MRI scans, associated with that disease. Specifically, we randomly select a prompt sentence like “Determine the disease that corresponds to the given radiographic images, starting with the established radiological features and concluding with the ultimate diagnosis.” Since we have already evaluated disease diagnosis accuracy in the common “Disease Diagnosis” setting, for rationale diagnosis we mainly focus on how well the foundation model can give reasons.

    Building generalist foundation model for radiology

    In this section, we start by describing the paradigm for unifying different medical tasks into a generative framework, followed by detailing the proposed RadFM model, and its training details. Our training adopts two types of datasets, namely, interleaved datasets and visual instruction datasets. It is worth noting that their training objectives differ slightly, which will be detailed in the following.

    A unified learning paradigm

    In both of our proposed multimodal datasets, i.e., MedMD and RadMD, each training sample essentially consists of two elements, $\mathcal{X}=\{\mathcal{T},\mathcal{V}\}$, where $\mathcal{T}$ refers to the language part of the case, with special placeholder tokens for images, e.g., “The patient is 47-year-old. 〈image-1〉 〈image-2〉 We can see opacity on the X-ray”. $\mathcal{V}$ refers to the visual part, containing a set of 2D or 3D image scans, i.e., $\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}$ with $v_{i}\in\mathbb{R}^{H\times W\times C}$ or $v_{i}\in\mathbb{R}^{H\times W\times D\times C}$, where H, W, D, and C are height, width, depth, and channel, respectively, each image corresponding to an “〈image-i〉” token in $\mathcal{T}$. In general, $\mathcal{T}$ and $\mathcal{V}$ can be considered as a prompt input to the model, with interleaved language and images.

    The goal is to model the likelihood of the generated text tokens in $\mathcal{T}$, conditioned on the interleaved scans, as:

    $$p(\mathcal{T}\mid\mathcal{V})=\prod_{l} p(\mathcal{T}_{l}\mid\mathcal{V}_{<l},\mathcal{T}_{<l}),$$

    (1)

    where $\mathcal{T}_{l}$ represents the l-th token in $\mathcal{T}$, and $\mathcal{V}_{<l}$ and $\mathcal{T}_{<l}$ represent the images and text appearing before the l-th token. We use a generative model ($\Phi_{\mathrm{RadFM}}$) to parameterize the probability p, and our final training objective can be expressed as the negative log-likelihood of the correct next token in the text sequence:

    $$\mathcal{L}_{\mathrm{reg}}=-\sum_{l} w_{l}\log\Phi_{\mathrm{RadFM}}(\mathcal{T}_{l}\mid\mathcal{V}_{<l},\mathcal{T}_{<l}),$$

    (2)

    where $w_{l}$ refers to a per-token weighting, aiming to either emphasize key tokens or skip special tokens. Its value differs across datasets, as detailed in the following.

    Interleaved datasets. For samples in the visual-language interleaved dataset, i.e., PMC-Inline, there are no strong question-and-answer relationships between contexts. We therefore extract medical-related words in each sentence using the Unified Medical Language System (UMLS)43 and give them a higher loss weight. Additionally, we avoid calculating the loss on the image placeholder tokens. Overall, $w_{l}$ can be formulated as:

    $$w_{l}=\left\{\begin{array}{ll}3, & \mathcal{T}_{l}\in\mathrm{UMLS}\\ 1, & \mathcal{T}_{l}\notin\mathrm{UMLS}\\ 0, & \mathcal{T}_{l}=\langle\text{image-i}\rangle\end{array}\right.$$

    (3)

    Note that PMC-Inline is the only dataset that fits this case.
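
    To make the weighting concrete, a minimal PyTorch sketch of the weighted negative log-likelihood in Eqs. (2)–(3) is given below; it assumes the logits are already aligned with the next-token targets and is an illustration rather than the authors' training code.

    ```python
    import torch
    import torch.nn.functional as F

    def weighted_lm_loss(logits: torch.Tensor, targets: torch.Tensor, weights: torch.Tensor):
        """
        logits:  (B, L, V) next-token predictions
        targets: (B, L)    ground-truth token ids
        weights: (B, L)    per-token w_l, e.g. 3 for UMLS terms, 1 for other text tokens,
                           0 for <image-i> placeholders (or instruction tokens, as in Eq. (4))
        """
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             targets.reshape(-1), reduction="none").reshape(targets.shape)
        return (weights * ce).sum() / weights.sum().clamp(min=1)
    ```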

    Visual instruction datasets. Samples from visual instruction datasets like PMC-VQA8 or PMC-CaseReport are often in a dialogue format, for example, “What can you see from the image? 〈image-1〉 I can see lesions.” or “Please describe the scans 〈image-1〉. The scan is …”. We further separate the language part $\mathcal{T}$ into instruction and response, denoted as $\mathcal{I}$ and $\mathcal{R}$, respectively. For example, in the two cases above, $\mathcal{I}$ refers to “What can you see from the image? 〈image-1〉” and “Please describe the scans 〈image-1〉”. In a practical scenario, $\mathcal{I}$ is expected to be given by users, and the model is only required to output correct responses. Overall, $w_{l}$ can be formulated as:

    $$w_{l}=\left\{\begin{array}{ll}3, & \mathcal{T}_{l}\in\mathcal{R}\ \text{and}\ \mathcal{T}_{l}\in\mathrm{UMLS}\\ 1, & \mathcal{T}_{l}\in\mathcal{R}\ \text{and}\ \mathcal{T}_{l}\notin\mathrm{UMLS}\\ 0, & \mathcal{T}_{l}\in\mathcal{I}\end{array}\right.$$

    (4)

    Most samples from MedMD fit the weighting formulation. All prompts used for instruction tuning are listed in the Supplementary Tables 8–11. We describe the detailed prompting for different problem settings:

    • Modality recognition. Here, we adopt two types of prompts: (i) inductive prompts, with the 2D or 3D medical scan as input, for example, “〈image-1〉 Is this image captured by {modality}?”, where the modality category is randomly sampled from the modality set, forming the text input $\mathcal{I}$; if the modality matches the ground-truth label, we set $\mathcal{R}$ to “yes”, otherwise “no”. (ii) open prompts, like “What’s the modality of the input scan 〈image-1〉?”, which form $\mathcal{I}$, with the corresponding modality label translated into $\mathcal{R}$. Samples for training this functionality come from RP3D-Modality and MPx-Single, which provide modality annotations.

    • Disease diagnosis. All the datasets listed as “image data” in Supplementary Table 3 are built for diagnosis; they only have binary labels for diseases. Similarly to modality recognition, we use two prompt types to transform them into our desired format: (i) inductive prompts, like “〈image-1〉 Does the patient have {disease}?”, where the disease category is randomly sampled from a disease set, forming the text input $\mathcal{I}$; if the disease matches the ground-truth labels, we set $\mathcal{R}$ to “yes”, otherwise “no” (note that, during sampling, we balance the positive and negative ratio); (ii) open diagnosis prompts, like “Please make diagnosis based on the images 〈image-1〉 〈image-2〉.”, to construct the instruction ($\mathcal{I}$), with the positive disease labels translated into the response ($\mathcal{R}$) simply using their category names. A simple example is $\mathcal{I}$ = “Please make diagnosis based on the image 〈image-1〉.” with $\mathcal{R}$ = “Edema, pneumothorax.”. With such instructions, the model is thus required to complete a difficult task, i.e., directly outputting the disease name.

    • Visual question answering. Beyond the abovementioned task formulations, more complex questions can be asked, such as those about spatial relationships among objects (“What is the location of the lesion?”) and common-sense reasoning questions (“Given the image context and patient history, what is likely to be the cause of the observed symptoms?”). A robust medical VQA system must be capable of solving a wide range of classic medical diagnosis tasks, as well as reasoning about images. Existing medical VQA datasets like VQA-RAD32, SLAKE33, PMC-VQA8 and RP3D-VQA naturally fit into this paradigm. They contain a mixture of question types, so the language questions can naturally be treated as the text instruction ($\mathcal{I}$) and the corresponding answers as the response ($\mathcal{R}$). It is worth noting that our constructed PMC-CaseReport dataset also falls into this category, with more contextual information available for the instruction, for example, the diagnosis history, thus providing critical information for answering the question.

    • Report generation. MIMIC-CXR30, RP3D-Caption, PMC-OA31, MPx-Multi, and MPx-Single are all captioning datasets; the task is to write a long caption or report given one or a set of images. The language instructions for this task are like “What can you find from the scans 〈image-1〉 〈image-2〉?”.

    • Rationale diagnosis. We construct RP3D-Rationale based on the RP3D dataset. This task encompasses disease prediction and the generation of the typical radiological features associated with the diagnosed disease. Specifically, we design prompts like “What disease can be diagnosed from these radiological images and what specific features are typically observed on the images? 〈image-1〉 〈image-2〉” as the instruction ($\mathcal{I}$), and the response ($\mathcal{R}$) refers to the disease label along with the radiological features collected from the Radiopaedia website.

    Architecture detail

    In this section, we describe the proposed model in detail. As shown in Fig. 1c, our proposed RadFM model consists of a visual encoder $\Phi_{\mathrm{vis}}$ that can process both 2D and 3D medical scans; a perceiver44 module $\Phi_{\mathrm{per}}$ for aggregating a sequence of scans, for example, taken with different modalities (CT, MRI) or at various time points, into a fixed number of tokens; and a large language model (LLM) $\Phi_{\mathrm{llm}}$ that generates free-form text responses based on the input visual-language information.

    Visual encoding. Given one sample instance from our dataset, denoted as $\mathcal{X}=\{\mathcal{T},\mathcal{V}\}$ with $\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}$, we first encode each input image separately with an image encoder $\Phi_{\mathrm{vis}}$. Specifically, we adopt a 3D ViT here to be compatible with both 2D and 3D image input. For 2D images, we expand a new depth dimension by replicating the slices. Each image scan can therefore be denoted as $v_{i}\in\mathbb{R}^{H\times W\times D_{i}\times C}$, where C denotes the image channels and H, W, $D_{i}$ are the height, width, and depth of the image, respectively. The rationale behind this design choice is as follows: (i) increasingly many radiology diagnoses rely on 3D scans, for example, CT and MRI, so the foundation model should certainly be able to process 3D input; (ii) in 3D data, consecutive slices are highly similar, so padding 2D into 3D does not lead to information loss and provides a good approximation of 3D data; (iii) padding 2D images only affects the tokenization layer, i.e., converting image patches into continuous embeddings, while the rest of the model remains shared with 3D scans, thus facilitating knowledge sharing.

    Note that, compared to the typical visual encoding scenario that assumes different images have a unified shape, we do not normalize the depth dimension to an exact size, but only round it to a multiple of 4, depending on the original resolution. All 2D images are padded to four slices on the depth channel. We convert each image into 3D patches, embed them into a token sequence, and feed this into the encoder ($\Phi_{\mathrm{vis}}$). To retain the 3D position of these tokens, we adopt learnable 3D position embeddings; the detailed procedure can be formulated as:

    $$\boldsymbol{v}_{i}=\Phi_{\mathrm{vis}}(v_{i})\in\mathbb{R}^{P_{i}\times d},$$

    (5)

    where $\boldsymbol{v}_{i}$ is the output embedding for image $v_{i}$ encoded with the 3D ViT, $P_{i}$ is the total number of tokens, and d is the feature dimension. Due to the inconsistency in the depth dimension, $P_{i}$ varies across 2D and 3D images, and the model can infer the original image size from the positional encoding.
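
    A hedged sketch of this 2D-to-3D padding and 3D patch tokenization is shown below; the layer sizes follow the 32 × 32 × 4 patching described later in "Training details", and the module is illustrative rather than the released implementation.

    ```python
    import torch
    import torch.nn as nn

    class PatchEmbed3D(nn.Module):
        """Tokenise a (pseudo-)3D scan with a Conv3d patch embedding (patch = depth x height x width)."""
        def __init__(self, in_ch=3, dim=768, patch=(4, 32, 32)):
            super().__init__()
            self.proj = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

        def forward(self, x):                      # x: (B, C, D, H, W)
            x = self.proj(x)                       # (B, dim, D/4, H/32, W/32)
            return x.flatten(2).transpose(1, 2)    # (B, P_i, dim) - P_i depends on the input size

    img2d = torch.randn(1, 3, 512, 512)                    # a 2D scan
    img3d = img2d.unsqueeze(2).repeat(1, 1, 4, 1, 1)       # replicate slices: depth padded to 4
    tokens = PatchEmbed3D()(img3d)                         # shape (1, 256, 768)
    ```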

    Aggregation with perceiver. After visual encoding, we adopt a perceiver44 module $\Phi_{\mathrm{per}}$ to aggregate the visual representation. Specifically, $\Phi_{\mathrm{per}}$ follows the classical perceiver architecture with a fixed number of learnable queries as the latent array input, while the visual embedding $\boldsymbol{v}_{i}$ is treated as the byte array input, so that the final output embeddings are normalized to the same length as the pre-defined learnable query sequence. The aggregation procedure can be formulated as:

    $$\boldsymbol{u}_{i}=\Phi_{\mathrm{per}}(\boldsymbol{v}_{i})\in\mathbb{R}^{P\times d},$$

    (6)

    where $\boldsymbol{u}_{i}$ refers to the aggregated visual embedding and P denotes the number of learnable queries. Leveraging the perceiver architecture, we can map an arbitrary number of patch tokens to the same length, such that images of different sizes can be treated equally in the following fusion flow.

    Multimodal fusion. To fuse the visual-language information, we interleave the visual embeddings with the text embeddings from tokenization, where each special image placeholder token is simply replaced with the corresponding visual embedding. The resulting interleaved sequence is then passed into a decoder-only large language model ($\Phi_{\mathrm{llm}}$); the self-attention transformer layers in the LLM can thus naturally be reused as multimodal fusion modules:

    $$p=\Phi_{\mathrm{llm}}(\mathrm{concat}(\boldsymbol{t}_{1},\boldsymbol{u}_{1},\boldsymbol{t}_{2},\boldsymbol{u}_{2},\boldsymbol{t}_{3},\ldots)),$$

    (7)

    where $\boldsymbol{t}_{i}$ and $\boldsymbol{u}_{i}$ refer to the text and visual embeddings, respectively, and p is the probability distribution over the next token.

    Training procedure

    Our training procedure includes two stages, namely pretraining and domain-specific finetuning, as shown in Fig. 1b. Note that all training settings remain identical across the two stages; the only distinction lies in the training data, which moves from generalist to radiology-specific.

    Generally, all the data used for model training are listed in Supplementary Table 2, with citations indicating their sources (entries without citations denote data contributed by this work). For pretraining, all the listed data are employed. For domain-specific instruction tuning, we further filter out relatively low-quality data, i.e., generated data without human verification or non-radiology data, focusing on high-quality question-answering pairs. Next, we describe this in detail.

    Pretraining. At this stage, we use all available data in MedMD as listed in Supplementary Table 3, the main components of the data are PMC-Inline and PMC-OA31, which are all collected from 2.4M PMC papers. These two datasets contain diverse medical vocabularies and images with cutting-edge medical knowledge, however, they are relatively noisy, so we only use them during pretraining in the hope that the network can accumulate enough knowledge about medical-specific terminologies and images. Additionally, we also include other VQA, captioning, and diagnosis datasets, as they are much cleaner.

    Domain-specific Instruction Tuning. At this stage, we adopt RadMD for domain-specific instruction tuning, which contains over 3M radiological images accompanied by high-quality language instructions and responses. Notably, we filter out PMC-Inline and PMC-OA, as these datasets are not derived from real clinical scenarios. For the remaining data sources, we primarily filter out non-radiology-related content. Specifically, the filtering process targets the MPx-series, RP3D-series, and PMC-CaseReport datasets. For both the MPx-series and RP3D-series, the filtering is straightforward, since the original websites provide the related imaging modalities for each case. For PMC-CaseReport, which is generated from the case-report subset of PMC-Inline using ChatGPT, we rely on the image captions to filter the cases. Only those with captions explicitly mentioning radiology-related terms, such as “MRI”, “CT”, “X-ray”, “ultrasound”, or “mammography”, are retained. We acknowledge that some noisy cases may still remain in the dataset. Therefore, in our evaluation dataset, RadBench, the selected test cases undergo additional manual inspection to further ensure quality.

    Training details

    Image preprocessing. To reduce the differences between medical images of different modalities, certain preprocessing steps are applied. Specifically, (i) to align the intensity distributions, we apply min-max normalization to all images; (ii) given that medical images can exist in either 3D or 2D formats (such as 3D MRI and 2D X-ray), we convert all 2D images to 3D simply by expanding an extra dimension, so that all images, irrespective of their original format, can be processed uniformly as 3D images; (iii) to ensure consistent sizes across all images, we resize them using the torchvision.transforms.Resize function. For the height and width dimensions, we resize to 512 × 512 for 2D images and 256 × 256 for 3D images, because 3D data has more slices and thus requires more memory. For the depth dimension, since our visual encoder, a 3D vision transformer (ViT), requires the input image sizes to be divisible by the patch size of 32 × 32 × 4, we resize the depth dimension to the nearest multiple of 4, not exceeding 64. Please see Supplementary Table 6 for more details.
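
    A rough sketch of these steps is given below; it uses F.interpolate instead of torchvision.transforms.Resize and assumes a (C, H, W[, D]) tensor layout, so it should be read as an approximation of the described pipeline rather than the released code.

    ```python
    import torch
    import torch.nn.functional as F

    def preprocess(scan: torch.Tensor, is_3d: bool) -> torch.Tensor:
        """scan: (C, H, W) for 2D images or (C, H, W, D) for 3D volumes (assumed layout)."""
        v = (scan - scan.min()) / (scan.max() - scan.min() + 1e-8)     # (i) min-max normalisation
        if not is_3d:
            v = v.unsqueeze(-1)                                        # (ii) 2D -> pseudo-3D, depth 1
        hw = 256 if is_3d else 512                                     # (iii) in-plane target size
        depth = min(64, max(4, round(v.shape[-1] / 4) * 4))            # depth: multiple of 4, at most 64
        v = v.permute(0, 3, 1, 2).unsqueeze(0)                         # (1, C, D, H, W) for interpolation
        v = F.interpolate(v, size=(depth, hw, hw), mode="trilinear", align_corners=False)
        return v.squeeze(0).permute(0, 2, 3, 1)                        # back to (C, H, W, D)
    ```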

    A detailed forward example. To better illustrate our model architecture, we present a simple instruction tuning example: a radiology image paired with the text prompt “Does the case 〈image〉 have pneumonia?”, with the ground truth response “Yes.” The model forward procedure will include three main steps, i.e., visual encoding, text fusion, and loss calculation. Visual encoding: A 2D image is first expanded into a pseudo-3D format by adding an extra dimension of size 4. It is then processed by a 3D Vision Transformer (ViT) to produce visual tokens. These are compressed to a fixed length of 32 using a perceiver module, ensuring consistent input regardless of image size. Text fusion: The text prompt is tokenized using the LLM’s embedding layer, and the “〈image〉 ” placeholder is replaced with the visual tokens. This fused sequence is input to the LLM’s causal self-attention layers for multimodal understanding. Loss calculation: The model predicts the next tokens auto-regressively, and the loss is computed against the ground truth “Yes”. During pretraining, the same forward process is used, but the loss is calculated over all text tokens except the image placeholder, following GPT-style training.

    Implementation. For the visual encoder, we adopt a 12-layer 3D ViT with 768 feature dimensions and the perceiver is chosen as a six-layer transformer decoder with a learnable latent array in 32 × 5120 dimensions, so that all images will be embedded as a 32 × 5120 feature embedding after passing visual encoding and perceiver aggregation. When inserting them into the text embedding, we will add two extra special tokens 〈image〉, 〈/image〉 at the beginning and ending, respectively, to distinguish them from common text tokens. For the large language model, we initialize it with the MedLLaMA-13B model introduced by PMC-LLaMA25, which has further finetuned the LLaMA-13B2 model on the medical corpus. Our final model has 14B parameters.

    During training, we vary the per-device batch size: one for 3D images and four for 2D images, with four-step gradient accumulation, and the maximum token length is set to 2048. We train the model for eight epochs in total: four for pretraining and four for instruction tuning. In the first epoch, we freeze the language model to align the image embedding space with that of the text; in the following epochs, all parameters are updated. To improve training speed, we adopt the FSDP acceleration strategy45, together with automatic mixed precision (AMP) and gradient checkpointing46. All models are implemented in PyTorch and trained on 32 NVIDIA A100 GPUs with 80 GB of memory.
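    The sketch below illustrates the mixed-precision, gradient-accumulation loop with optional LLM freezing described above; FSDP sharding and gradient checkpointing are omitted for brevity, and the `language_model` attribute name is an assumption.

    ```python
    import torch

    def train_epoch(model, dataloader, optimizer, accum_steps=4, freeze_llm=False):
        """Hedged sketch of the training loop (AMP + 4-step gradient accumulation)."""
        if freeze_llm:  # first epoch: freeze the LLM to align image embeddings with text
            for p in model.language_model.parameters():   # attribute name is assumed
                p.requires_grad = False
        scaler = torch.cuda.amp.GradScaler()              # automatic mixed precision
        optimizer.zero_grad()
        for step, batch in enumerate(dataloader):
            with torch.cuda.amp.autocast():
                loss = model(**batch).loss / accum_steps  # scale for accumulation
            scaler.scale(loss).backward()
            if (step + 1) % accum_steps == 0:             # update every 4 micro-batches
                scaler.step(optimizer)
                scaler.update()
                optimizer.zero_grad()
    ```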

    Evaluation

    In this section, we introduce three evaluation settings, i.e., zero-shot, few-shot, and task-specific evaluation, together with the models used for comparison. Note that the first two settings require no further training, while the last requires additional finetuning on specific tasks. We then describe the automatic metrics and the human rating process.

    Zero-shot and few-shot evaluation

    The most appealing characteristic of foundation models, as generalist models, is that they can be applied to various tasks with suitable prompting strategies, such as zero-shot or few-shot prompting, without any task-specific training. In the zero-shot setting, the model is given task-related semantic instructions indicating which task it is expected to perform; in the few-shot setting, a few similar examples of the task are provided instead. The idea behind both is to use appropriate textual instructions to tell the model what task to perform; which one is more suitable for a given model depends on how it was trained.

    Baselines. For our RadFM, we mainly adopt zero-shot evaluation, since the instruction-tuning stage focuses on teaching the model to follow diverse zero-shot instructions. For the baselines, we compare with the following publicly accessible foundation models under these two settings:

    • OpenFlamingo13. This is an open-source implementation of the prior state-of-the-art generalist visual-language model Flamingo22, trained on large-scale data from the general visual-language domain. We use the released checkpoint for zero-shot and few-shot evaluation in our study.

    • MedVInT8. This is a visual-instruction-tuned visual-language model based on LLaMA2, trained on PMC-VQA8. Because the PMC-VQA data contain no few-shot cases and mainly target zero-shot prompting, we directly use the released checkpoint of the MedVInT-TD model with PMC-LLaMA and PMC-CLIP backbones for zero-shot evaluation.

    • LLaVA-Med4. LLaVA-Med is a medical vision-language foundation model built on LLaVA47, trained on an instruction-tuning dataset generated from PubMed image-caption pairs. Similar to MedVInT, it mainly targets zero-shot prompting, and we directly use the released LLaVA-Med-v1.5 checkpoint for zero-shot evaluation.

    • Med-Flamingo6. This is a multimodal model developed on top of OpenFlamingo-9B13 that can handle multi-image inputs interleaved with text. We use the released checkpoint for zero-shot and few-shot evaluation.

    • GPT-4V14. GPT-4V, released by OpenAI, is widely considered the most powerful multimodal foundation model. As of our submission, GPT-4V accepts at most 4 input images, which can hardly accommodate few-shot cases with multiple images, so we evaluate it only in the zero-shot setting. Moreover, GPT-4V can only be accessed through its online chat website, so large-scale automatic evaluation is not feasible; in this paper, we use it only for evaluation under the human rating setting.

    For OpenFlamingo and Med-Flamingo, we perform both zero-shot and few-shot evaluations. Specifically, we follow the prompts provided in the official Med-Flamingo repository. The prompt for zero-shot evaluation is: “You are a helpful medical assistant. Please answer the question about the given image. 〈image〉 Question: [the query question]. Answer:”. In the few-shot setting, we expand this format by supplying the models with additional examples to guide their responses, structured as follows: “You are a helpful medical assistant. You are being provided with images, a question about the image, and an answer. Follow the examples and answer the last question. 〈image〉 Question: [the first question]. Answer: [the first answer]. 〈|endofchunk|〉 〈image〉 Question: [the second question]. Answer: [the second answer]. 〈|endofchunk|〉 〈image〉 Question: [the query question]. Answer:”.
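    A small helper like the one below can assemble the few-shot prompt quoted above; it is purely illustrative, and the exact special-token spellings (<image>, <|endofchunk|>) follow the OpenFlamingo convention rather than a confirmed listing of the authors' script.

    ```python
    def build_few_shot_prompt(examples, query_question):
        """Assemble a Med-Flamingo-style few-shot prompt (illustrative sketch)."""
        header = ("You are a helpful medical assistant. You are being provided with images, "
                  "a question about the image, and an answer. Follow the examples and answer "
                  "the last question. ")
        body = "".join(
            f"<image> Question: {q}. Answer: {a}. <|endofchunk|> " for q, a in examples
        )
        return header + body + f"<image> Question: {query_question}. Answer:"

    # Example usage with two hypothetical in-context examples:
    prompt = build_few_shot_prompt(
        [("Is there pleural effusion", "No"), ("Is the heart enlarged", "Yes")],
        "Does the case have pneumonia",
    )
    print(prompt)
    ```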

    To our knowledge, there are currently no existing foundation models that can effectively handle both 2D and 3D radiology images. For comparison, we use strong, publicly accessible baseline models, for example, OpenFlamingo13, MedVInT8, LLaVA-Med4, and Med-Flamingo6, which have demonstrated efficacy in processing slices and making predictions. In addition, we also compare with GPT-4V(ision)14 using its online chat website.

    Datasets. We evaluate the above foundation models on RadBench and 9 existing datasets, as introduced in the section “Radiology evaluation benchmark (RadBench)”. Additionally, we evaluate them on PadChest48, a labeled large-scale, high-resolution chest X-ray dataset comprising 160,000 images from 67,000 patients with 174 different radiographic finding labels. We discard classes with fewer than 10 cases, together with classes that appear in our training set, resulting in 163 entirely unseen classes. This ensures that not only the images but also the categories in the text never appear in training, which demands stronger generalization ability from the models.
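    The class-filtering step above amounts to a simple frequency-and-overlap filter, sketched below; the data structures (a list of per-image label lists and a set of training-set classes) are assumptions for illustration.

    ```python
    from collections import Counter

    def select_unseen_classes(padchest_labels, train_classes, min_cases=10):
        """Sketch of the PadChest label filtering described above: drop classes with
        fewer than `min_cases` examples and classes already seen during training."""
        counts = Counter(label for labels in padchest_labels for label in labels)
        return sorted(
            c for c, n in counts.items()
            if n >= min_cases and c not in set(train_classes)
        )

    # Example usage with toy data:
    labels = [["cardiomegaly", "pleural effusion"], ["cardiomegaly"]] * 6
    print(select_unseen_classes(labels, train_classes={"pleural effusion"}))
    ```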

    Task-specific evaluation

    In addition to directly evaluating the different foundation models with zero-shot or few-shot prompting, without any training, our model can also serve as a pretrained model that is adapted to specific tasks by further finetuning on the corresponding training set, trading cross-task generalization for better performance on a specific task. In this case, we compare our final results against task-specific state-of-the-art (SOTA) methods on the related datasets. In detail, we use the following datasets; the corresponding SOTAs for comparison are listed in Table 3 with citations:

    • VinDr-Mammo49 is a mammography diagnosis dataset comprising 20,000 images (5000 four-view scans). Each scan was manually annotated with a five-level BI-RADS score. We treat this as a multi-class classification task and use the official split, following BenchMD50.

    • CXR1451 is a widely used chest X-ray diagnosis dataset containing 112,120 frontal-view X-ray images of 30,805 unique patients (collected between 1992 and 2015) with 14 finding labels. We follow its official split and evaluate the SOTA52 on that split.

    • LDCT53. Low-dose computed tomography (LDCT) is a procedure that uses an X-ray machine linked to a computer to create 3D images of a patient’s tissues and organs. We use the LIDC-IDRI53 dataset, which contains 1018 low-dose lung CTs, each labeled with small/large/no nodule. Following BenchMD50, we treat this dataset as a 3D diagnosis task and use the BenchMD split.

    • MosMedData54 is a set of 1110 3D CT cases labeled with COVID-19-related findings or the absence of such findings. We treat it as a classification task and randomly split it 8:2 for training and testing, following ref. 54.

    • COVID-CT55 is a set of 349 2D CT slices labeled with COVID-19, collected from 216 patients. We randomly split it 8:2 for training and testing.

    • BraTs201932 is an MRI dataset with four MRI modalities: T1WI, T2WI, T2-FLAIR, and T1 contrast-enhanced (T1CE). It contains 259 volumes of high-grade glioma (HGG) and 73 volumes of low-grade glioma (LGG). We follow the DSM56 setting, which uses T1CE to distinguish HGG from LGG. Because the original paper did not release its splits, we randomly split the dataset 7:3 for training and testing and re-tested the SOTA on it.

    • ADNI (Alzheimer’s Disease Neuroimaging Initiative)57 is a large Alzheimer’s disease dataset collection with 3D brain MRI scans. We follow the setting introduced in ref. 58 and split it randomly 8:2 for training and testing.

    • BTM-17 (Brain-tumor-17)59 is a challenge for classifying MRI cases into 17 tumor types, with 4449 real images. We adopt its official split.

    • Lung-PET-CT-Dx60 consists of CT and PET-CT DICOM images of 355 lung cancer subjects. We treat it as a diagnosis dataset to distinguish whether a patient has adenocarcinoma, small cell carcinoma, large cell carcinoma, or squamous cell carcinoma. Given its limited number of cases, we split it 7:3 (train:test) to ensure enough cases for evaluation.

    • VQA-RAD32 is a radiology VQA dataset containing 3515 questions with 517 possible answers. We follow the official dataset split for our evaluation.

    • SLAKE33 is an English-Chinese medical VQA dataset composed of 642 images and 14K questions. There are 224 possible answers in total. We only use the “English” part, and follow the official split.

    • PMC-VQA8 is an English medical VQA dataset generated with automatic NLP methods, containing 149K images and 227K questions with diverse answers. Because its test set is also auto-generated, we manually cleaned it as described in the section “Radiology Evaluation Benchmark (RadBench)” and re-tested the SOTA MedVInT8 checkpoint on the cleaned test set.

    • MedDiffVQA61 is a large-scale dataset for difference medical VQA (involving comparison with prior studies) on chest X-ray images, with 700,703 question-answer pairs. We follow its official split.

    • IU-X-ray62 is a set of chest X-ray images paired with clinical reports, containing 7470 image-report pairs. We follow the setting and split of CDGPT263, using a single-view image to generate the report.

    Evaluation metrics

    Machine rating. We evaluate four distinct tasks, i.e., disease diagnosis, visual question answering, report generation, and rationale diagnosis. The details of the four tasks and their automatic metrics are introduced in the section “Radiology Evaluation Benchmark (RadBench)”. Distinct evaluation metrics are employed depending on the task type. For tasks with pre-defined answer choices, such as disease diagnosis, we adopt standard community metrics: F1 (“F1 score”) and ACC (“accuracy”). For tasks with open-ended responses, such as report generation, visual question answering (VQA), and rationale diagnosis, we use alternative metrics: BLEU, ROUGE, and BERT-sim. BLEU stands for “BiLingual Evaluation Understudy”64, ROUGE stands for “Recall-Oriented Understudy for Gisting Evaluation”65, and BERT-sim is the “BERT similarity score”, the F1 BERT score between the generated answer and the correct answer66. For BLEU and ROUGE, unless otherwise specified, we use 1-gram by default.
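    For reference, the open-ended metrics above can be computed with common open-source packages as sketched below (nltk for BLEU, rouge-score for ROUGE, bert-score for BERT-sim); this is a convenience sketch, and the authors' exact tooling and settings may differ.

    ```python
    from nltk.translate.bleu_score import sentence_bleu
    from rouge_score import rouge_scorer
    from bert_score import score as bert_score

    def open_ended_metrics(prediction: str, reference: str) -> dict:
        """Hedged sketch of 1-gram BLEU, ROUGE-1 and BERT-sim for one answer pair."""
        bleu1 = sentence_bleu([reference.split()], prediction.split(),
                              weights=(1, 0, 0, 0))                    # 1-gram BLEU
        rouge1 = rouge_scorer.RougeScorer(["rouge1"]).score(
            reference, prediction)["rouge1"].fmeasure                  # ROUGE-1 F-measure
        _, _, f1 = bert_score([prediction], [reference], lang="en")    # BERT F1 similarity
        return {"BLEU-1": bleu1, "ROUGE-1": rouge1, "BERT-sim": f1.item()}

    print(open_ended_metrics("No acute cardiopulmonary abnormality",
                             "No acute cardiopulmonary process"))
    ```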

    In addition, inspired by RadCliQ12, a score designed specifically for evaluating generated chest X-ray reports, we propose two new metrics, UMLS_Precision and UMLS_Recall, which measure the overlap of medical-related words between the ground truth and the predicted response. Specifically, given a ground-truth/prediction pair, we extract the medical-related words from each using the Unified Medical Language System (UMLS)43 and count the overlapping words as true positives. UMLS_Precision follows the classical precision concept, i.e., the number of true positives divided by the total number of medical-related words in the prediction. UMLS_Recall follows the recall concept, i.e., the number of true positives divided by the total number of medical-related words in the ground truth.
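    The counting itself is straightforward once a UMLS term extractor is available, as sketched below; `extract_umls_terms` is a placeholder for any UMLS-based extractor (for example, a scispaCy entity linker) and is not part of the authors' released code.

    ```python
    def umls_precision_recall(prediction: str, ground_truth: str, extract_umls_terms):
        """Sketch of UMLS_Precision / UMLS_Recall as defined above.
        `extract_umls_terms` is a hypothetical callable returning medical terms."""
        pred_terms = set(extract_umls_terms(prediction))
        gt_terms = set(extract_umls_terms(ground_truth))
        true_pos = len(pred_terms & gt_terms)                  # overlapping medical words
        precision = true_pos / len(pred_terms) if pred_terms else 0.0
        recall = true_pos / len(gt_terms) if gt_terms else 0.0
        return precision, recall
    ```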

    Discussion on automatic metrics. Although these automatic metrics have been widely adopted by the community, they often struggle to capture semantic accuracy in generative tasks such as question answering, report generation, and rationale generation. To address these limitations and ensure a more accurate assessment of system performance, we incorporate human evaluation, leveraging the expertise of radiologists to obtain a professional judgment of the quality of the generated answers.

    Human rating. For the sake of clinical utility, we further involve manual checking in the evaluation stage and compute a human rating score. Three radiologists were asked to rate the quality of the generated answers on a 0–5 scale. Each radiologist has five years of clinical experience in radiology departments; one is affiliated with Shanghai General Hospital, and the other two are from Shanghai Sixth People’s Hospital. All three completed their studies in “Medical imaging and nuclear medicine” at Shanghai Jiao Tong University. The ratings are defined as follows:

    0. Garbled – The content is incomprehensible and lacks any readability.

    1. Inaccurate – While readable, the content is entirely incorrect and lacks meaningful information.

    2. Partially informative – The content holds some reference value, yet its correctness is subpar.

    3. Moderately accurate – The content provides reference points, with approximately half of the information being correct, but containing several errors.

    4. Mostly accurate – The content is almost entirely correct, with only a few omissions or errors present.

    5. Completely correct – The content is accurate in its entirety, without any mistakes.

    To facilitate this assessment, we developed a human evaluation interface that visually presents the generated instances alongside their images, as depicted in Supplementary Fig. 2. Prior to the full evaluation, we conducted a preliminary exam on 20 randomly sampled test cases, designed to ensure that the radiologists understood the evaluation criteria. All three radiologists gave consistent results, with one exception: for one case, one radiologist rated the answer as 2 while the others rated it as 3. This indicates that the rating scale was sufficiently clear for evaluating the model’s outputs. The exam results were also reviewed by a senior radiologist with over 10 years of experience from the radiology department of Shanghai Sixth People’s Hospital, further confirming the validity of the evaluation process. During the evaluation, raters are provided with the images, the question, the correct answer, and a set of generated responses from different models, arranged in random order. The scores given by professional radiologists differ from the automatic evaluation metrics in that they offer greater accuracy and flexibility: in the report generation example shown in the figure, the raters focus on the most clinically crucial aspects rather than solely on word matching, recall, or precision.

    Note that human rating is only performed for the open-ended tasks, i.e., medical VQA, report generation, and rationale diagnosis. For disease diagnosis, the answers are drawn from a fixed set without ambiguity, so the automatic metrics already reflect performance well. Considering the cost of human rating, for each open-ended task we randomly sample 400 test cases from RadBench, which are collected from clinical practice across the world and thus represent real scenarios, resulting in 1.2K cases for human rating in total.

    Reporting summary

    Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.


  • EMA Grants Orphan Drug Designation to Pegtarazimod for GVHD


    The European Medicines Agency (EMA) has granted orphan drug designation (ODD) to pegtarazimod (RLS-0071) for the treatment of patients with graft-vs-host disease (GVHD).1

    The EMA designation was based on data from an ongoing phase 2 AURORA trial (NCT06343792), which is evaluating pegtarazimod in 6 experimental and 2 expansion cohorts of hospitalized patients with steroid-refractory acute GVHD.1,2

    “Receiving EMA orphan drug designation represents a significant new regulatory milestone in our efforts to address the urgent unmet need in acute GVHD, and we are particularly encouraged by the EMA’s positive feedback to our initial cohort of phase 2 data,” David Marek, chief executive officer of ReAlta, stated in a news release.1 “This designation, alongside our existing FDA orphan drug and fast track designations for acute GVHD, validates the potential of our novel dual-targeting approach to modulate both neutrophil and complement-mediated inflammation as we advance our phase 2 trial to bring this potential therapy to acute GVHD patients in Europe.”

    In August 2024, the FDA granted ODD and fast track designation to pegtarazimod in the same indication.3

    Pegtarazimod is a 15-amino-acid peptide that targets humoral and cellular inflammation through selective inhibition of complement activation at C1, myeloperoxidase activity, and neutrophil extracellular trap formation.1 Findings from a study (NCT05351671) published in 2024 illustrated the agent’s mechanism, showing that sputum neutrophils were reduced by approximately half compared with placebo (P = .04) 6 hours after healthy participants inhaled low-dose lipopolysaccharide. The neutrophil effectors myeloperoxidase, neutrophil elastase, and IL-1β in sputum were also significantly reduced at the same time point vs placebo.4

    Through direct inhibition of these mechanisms, pegtarazimod prevents the inflammatory cascade and tissue damage that results from donor immune cells attacking recipient tissues after hematopoietic stem cell transplantation (HSCT).1

    AURORA is an ongoing phase 2 open-label, prospective, dose-escalation trial that is enrolling patients globally in the United States, Germany, and Spain.

    Eligible patients include adults or adolescents over the age of 12 years who are hospitalized for the treatment of steroid-refractory acute, grade II to IV GVHD after allogeneic HSCT.2

    Patients must have an expected hospital length of stay of at least 1 week from the time of RLS-0071 initiation and weigh between 40 kg and 140 kg at screening. Additionally, patients cannot plan to undergo additional GVHD treatment or to add, dose-adjust, or discontinue prophylactic GVHD medications during the 7-day treatment course of RLS-0071 (pegtarazimod).

    Additional enrollment parameters include neutrophil recovery following transplant, defined as a blood neutrophil count above 500/mL without growth factor supplementation for at least 3 consecutive measurements.

    Eligible patients will receive pegtarazimod for 7 or 14 days according to their assigned dose group. The experimental cohorts will receive the agent at one of the following doses: 10 mg/kg every 8 hours (Q8H) for 7 days; 40 mg/kg Q8H for 7 days; 10 mg/kg Q8H for 14 days with concurrent ruxolitinib; 40 mg/kg Q8H for 14 days with concurrent ruxolitinib; 10 mg/kg Q8H for 7 days followed by 10 mg/kg daily for 7 days with concurrent ruxolitinib; or 40 mg/kg Q8H for 7 days followed by 40 mg/kg daily for 7 days with concurrent ruxolitinib.

    In the dose-expansion cohorts, patients will receive 10 mg/kg or 40 mg/kg of RLS-0071 Q8H for 7 or 14 days.

    The primary end points are the number of patients with treatment-related adverse effects and treatment-emergent serious adverse effects in addition to overall response rate. Secondary end points include the incidence of refractoriness to RLS-0071 with or without ruxolitinib (Jakafi), overall corticosteroid use, overall survival, non-relapse mortality, and duration of hospital stay. Investigators will also evaluate the change or shift in overall grade of acute GVHD, the change or shift in stage for acute lower gastrointestinal (GI), liver, skin, or upper GI GVHD from baseline according to MAGIC criteria, the initiation of additional or alternative treatments for acute GVHD, and achievement of stage 0 or 1 acute lower GI, liver, skin, or upper GI GVHD.

    “Our targeted intervention addresses the specific pathways driving tissue damage, including the inhibition of extracellular myeloperoxidase, NETosis and neutrophil elastase, while preserving beneficial immune function, unlike broadly immunosuppressive approaches to treat acute GVHD,” Kenji Cunnion, MD, MPH, chief medical officer of ReAlta, stated in a news release.1 “The compelling preclinical and clinical data that we have generated shows pegtarazimod’s potential to address the neutrophil-driven disease process in patients with lower gastrointestinal acute GVHD that is the most difficult to treat and has the highest rate of mortality.”

    Data from additional cohorts of the phase 2 trial are expected in 2026.

    References

    1. ReAlta Life Sciences receives EMA orphan drug designation for RLS-0071 (pegtarazimod) for the treatment of graft-versus-host disease. News release. ReAlta Life Sciences. August 21, 2025. Accessed August 22, 2025. https://realtalifesciences.com/news/realta-receives-ema-orphan-drug-designation-for-rls-0071-pegtarazimod-for-the-treatment-of-graft-versus-host-disease/
    2. Safety, PK, PD, dosing, and efficacy of RLS-0071 for the treatment of hospitalized patients with steroid-refractory acute graft-versus-host disease (AURORA). ClinicalTrials.gov. Updated January 31, 2025. Accessed August 22, 2025. https://www.clinicaltrials.gov/study/NCT06343792
    3. ReAlta granted FDA orphan drug designation and fast track designation for RLS-0071 for the treatment of steroid-refractory acute graft-versus-host disease. News release. ReAlta Life Sciences. August 19, 2024. Accessed August 22, 2025. https://realtalifesciences.com/news/realta-granted-fda-orphan-drug-designation-and-fast-track-designation-for-rls-0071-for-the-treatment-of-steroid-refractory-acute-graft-versus-host-disease/
    4. Cunnion K, Goss J, Hair P, et al. RLS-0071, a novel anti-inflammatory agent, significantly reduced inflammatory biomarkers in a randomised human evaluation of mechanisms and safety study. ERJ Open Res. 2024;10(4):01006-2023. doi:10.1183/23120541.01006-2023


  • HSBC’s Swiss Bank Said to Exit 1,000 Mideast Clients Amid Revamp

    (Bloomberg) — HSBC Holdings Plc’s Swiss private bank is ending relationships with wealthy Middle Eastern clients, including many with assets exceeding $100 million, as the bank seeks to lower its exposure to individuals it deems high-risk, according to people familiar with the matter.

    More than 1,000 clients from Saudi Arabia, Lebanon, Qatar and Egypt are among those being told they can no longer bank with HSBC’s Swiss wealth management business, the people said, asking not to be identified discussing an ongoing process. 

    Some clients have already started to be informed and over the next few months will receive closing letters advising them they could consider transferring to other jurisdictions, the people said.

    “HSBC announced plans in October last year to reshape the Group to accelerate strategic delivery. As part of this, we are evolving the strategic focus of our Swiss Private Bank,” the bank said in an e-mailed statement. 

    The reshuffle is coming at a time of ongoing scrutiny from Swiss banking watchdog Finma, which has found that the lender’s private bank failed to carry out adequate due diligence on high-risk accounts owned by politically exposed persons. The exits are expected to largely be completed within six months and HSBC is putting in place a team to help it with the closures, the people said.  

    “We are creating a simpler, more dynamic organisation, focused on increasing leadership and market share in the areas where we have a clear competitive advantage,” according to HSBC.

    The move would come as a further blow for HSBC in a region that’s become a magnet for wealth managers. Rival firms have beefed up to cater to high-net worth individuals in the Middle East, though HSBC has struggled despite hiring Credit Suisse’s top wealth-management executive Aladdin Hangari a few years ago.

    Last year, Finma ordered HSBC not to enter into any new business relationships with so-called politically exposed persons, or individuals with a public role that may make them more susceptible to corruption. 

    Finma instructed the lender to mandate an external auditor to conduct a review of the relevant business. 

    Clients with over 100 million Swiss francs ($124 million) in assets are deemed by the bank to be high risk. The risk rating is also impacted by factors including the individual’s domicile and nationality.

    HSBC’s Swiss unit had been part of the bank’s effort to build its wealth offerings for the Middle East, which had faced setbacks including the departures of high-profile bankers. While the lender has typically been among the top players in the region’s capital markets, it has struggled against rivals — including the Swiss — in private banking.  

    Last month it was revealed that HSBC’s Swiss private bank is the focus of a Swiss investigation into suspected money-laundering connected to the alleged embezzlement of hundreds of millions of dollars by the former head of Lebanon’s central bank. 

    In June last year, Finma pointed specifically to two high-risk business relationships where it said HSBC Private Bank (Suisse) SA hadn’t adequately checked the origins, purpose or background of the assets involved. The suspect transactions involving more than $300 million moved between Lebanon and Switzerland were carried out between 2002 and 2015, according to Finma.

    –With assistance from Harry Wilson.



  • The AI Bubble Debate: 8 Business Leaders Weigh in

    • OpenAI CEO Sam Altman has given renewed voice to concerns about an AI bubble.
    • Altman recently told reporters that investors are “overexcited” about AI.
    • There’s disagreement, even among business leaders and tech CEOs, around the existence of a bubble.

    It’s AI summer, but some business leaders seem concerned that they’re partying like it’s 1999, just before the dot-com bubble burst.

    OpenAI CEO Sam Altman recently told reporters that the AI market might be too hot, renewing the debate over whether there’s an AI bubble.

    Here’s what leading tech CEOs and business leaders are saying about what’s ahead.

    Sam Altman

    OpenAI CEO Sam Altman said that the AI market is in a bubble.

    “When bubbles happen, smart people get overexcited about a kernel of truth,” Altman recently told reporters, per The Verge.

    Altman said this describes the state of play.

    “Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes,” he said.

    Eric Schmidt


    Former Google CEO Eric Schmidt said just because it looks like a bubble doesn’t mean that it is.

    “I think it’s unlikely, based on my experience, that this is a bubble,” Schmidt said in July during an appearance at the RAISE Summit in Paris. “It’s much more likely that you’re seeing a whole new industrial structure.”

    Schmidt said he takes solace in where the hardware and chip markets stand.

    “You have these massive data centers, and Nvidia is quite happy to sell them all the chips,” he said. “I’ve never seen a situation where hardware capacity was not taken up by software.”

    Joe Tsai


    Alibaba cofounder Joe Tsai has voiced concerns about the scramble for data centers needed to help power the next generation of AI models.

    “I start to see the beginning of some kind of bubble,” Tsai told the HSBC Global Investment Summit in March, Bloomberg News reported.

    Tsai said he’s worried the building rush might outpace demand.

    “I start to get worried when people are building data centers on spec,” he said. “There are a number of people coming up, funds coming out, to raise billions or millions of capital.”

    Lisa Su


    AMD CEO Lisa Su says the bubble talk “is completely wrong.”

    “For those who are talking about a ‘bubble,’ I think they’re being too narrow in their thinking of, what is the return on investment today or over the next six months,” Su told Time Magazine in 2024. “I think you have to look at this technology arc for AI over the next five years, and how does it fundamentally change everything that we do? And I really believe that AI has that potential.”

    Ray Dalio


    Hedge fund icon Ray Dalio voiced concerns about a bubble earlier this year, when DeepSeek’s rollout led analysts to rethink AI’s outlook.

    “Where we are in the cycle right now is very similar to where we were between 1998 or 1999,” Dalio told the Financial Times in January. “There’s a major new technology that certainly will change the world and be successful. But some people are confusing that with the investments being successful.”

    At the time, Dalio cited high stock prices and high interest rates. The good news is that Wall Street widely expects the Federal Reserve to cut rates during its September meeting.

    Tom Siebel


    Billionaire tech CEO Thomas Siebel said there is “absolutely” an AI bubble and that it’s “huge.”

    “So we have this similar thing going on with generative AI that we’ve seen with previous technologies,” Siebel told Fortune in January. “The market is way, way overvaluing.”

    Siebel, who leads C3.ai, singled out OpenAI as overvalued.

    “If it disappeared, it wouldn’t make any difference in the world,” he said. “Nothing would change. I mean, nobody’s life would change. No company would change. Microsoft would find something else to power Copilot. There’s like 10 other products available that would do it equally as good.”

    Mark Cuban


    Mark Cuban, who famously sold Broadcast.com just before the dot-com bubble burst, said he doesn’t see similarities to the current situation.

    “There were people creating companies with just a website and going public. That’s a bubble where there’s no intrinsic value at all,” Cuban told podcaster Lex Fridman in 2024. “People aren’t even trying to make operating cap profits, they’re just trying to leverage the frothiness of the stock market, that’s a bubble. You don’t see that right now.”

    Cuban took particular notice of the quality of AI companies going public.

    “We’re not seeing funky AI companies just go public,” he said. “If all of a sudden we see a rush of companies who are skins on other people’s models or just creating models to create models that are going public, then yeah, that’s probably the start of a bubble.”

