
Journal of Medical Internet Research

    Background

    Gliomas are the most frequent primary tumors affecting the central nervous system (CNS) in adults, with a variable prognosis depending on the specific subtype and histological grade []. The global 5-year prevalence of CNS neoplasms is approximately 771,110 cases. Primary brain tumors account for 1.7% of all cancers, with a global incidence of 3.9 cases per 100,000 people each year. Among these, gliomas are the most common malignant type, accounting for 75% of all malignant CNS tumors, with an incidence of 6 cases per 100,000 people annually []. Gliomas are also among the most difficult tumors to treat, with a 5-year overall survival of less than 35% []. Patients with brain tumors present with variable clinical symptoms according to the part of the brain affected; however, they usually share general symptoms that are nonspecific to anatomic location (eg, seizures, headaches).

    Gliomas are neuroectodermal tumors arising from glial cells or their precursors and encompass astrocytomas, oligodendrogliomas, and ependymomas []. Mutations in isocitrate dehydrogenase 1 and 2 (IDH1 and IDH2) are considered early events in gliomagenesis and occur more frequently in lower-grade gliomas []. Diffuse gliomas harboring IDH1/2 mutations are associated with significantly improved prognosis compared with IDH–wild-type diffuse gliomas [].

    Codeletion of chromosomal arms 1p and 19q results from an unbalanced centromeric translocation, t(1;19)(q10;p10) []. In combination with IDH mutation, 1p/19q codeletion is a defining molecular feature required for the diagnosis of oligodendroglioma, IDH-mutant and 1p/19q-codeleted. This molecular alteration is associated with a favorable prognosis among diffuse gliomas and predicts enhanced responsiveness to alkylating chemotherapy [].

    Conventional classification of gliomas relied on histologic features as a gold standard. Initially, pathologists used microscopic analysis of histochemically stained (eg, hematoxylin and eosin) sections for the diagnosis of gliomas []. However, ancillary immunohistochemistry has been increasingly used over the past few decades to enhance diagnostic accuracy []. With either method, pathologists followed a diagnostic algorithm to rule out nonneoplastic lesions (eg, infarcts) or other types of malignancies (eg, metastases) based on histologic features. Clinical information (eg, age) and radiologic findings (eg, location) are also valuable clues in the diagnosis []. Nonetheless, the histologic-based classification system has a few limitations, particularly interobserver variability, in addition to insufficient or nonrepresentative tissue sampling []. Although histologic-based classification can be accurate for prototypic tumors, classification of tumors encountered in clinical practice is often more challenging due to a degree of mixed features in the same sample [].

    To address these limitations, molecular-based diagnosis has emerged as a possible solution. Recent advancements in genetics have helped to identify specific gene alterations involved in the development of different types of gliomas, and the detection of these alterations forms the basis of molecular-based diagnosis. This method offers improved accuracy and makes it possible to distinguish between primary and secondary gliomas, which was not possible with the previous histologic-based method []. Therefore, the World Health Organization (WHO) integrated molecular features into its most recent classification of CNS tumors, published in 2021, based on the presence or absence of IDH1 and IDH2 mutations, in addition to chromosome 1p/19q codeletion []. It divided adult-type gliomas into three categories: astrocytoma with IDH mutation; oligodendroglioma with IDH mutation and 1p/19q codeletion; and glioblastoma with IDH wild-type []. Furthermore, a combined approach using molecular and histologic features is used for the grading of IDH-mutant astrocytomas as CNS WHO grade 2, 3, or 4. The new WHO classification also enables a more accurate prognosis of gliomas, as the absence of IDH mutation is often a sign of more aggressive tumors [].

    Neuroimaging provides noninvasive insight into the molecular landscape of diffuse gliomas. Advanced magnetic resonance imaging (MRI) features can suggest IDH mutation status, as IDH-mutant tumors typically exhibit lower perfusion and less aggressive radiological behavior than IDH–wild-type gliomas [-]. Magnetic resonance spectroscopy allows in vivo detection of the IDH-specific oncometabolite D-2-hydroxyglutarate, enabling preoperative identification of IDH-mutant tumors []. In addition, oligodendrogliomas defined by combined IDH mutation and 1p/19q codeletion display characteristic imaging features, including cortical localization, calcifications, and relatively increased perfusion, reflecting distinct tumor biology [,]. These radiogenomic associations support the integration of imaging into molecular stratification of adult-type diffuse gliomas [-].

    As gliomas pose diagnostic challenges due to their complex histological and molecular features, artificial intelligence (AI) has emerged as a promising tool to enhance diagnostic accuracy and efficiency. AI models have delivered rapid and accurate results, with accuracy exceeding 90% in some studies, which can further improve the accessibility and speed of molecular diagnosis []. In general, AI implementation in health care can improve data synthesis (eg, patient data, medical literature) []. Additionally, AI can process multimodal data (two or more different data sources, such as histopathological images, radiological data, clinical variables, and genomic markers) to provide more accurate diagnoses and predictions. Moreover, it offers a means of augmenting human performance, as human limitations can prevent processing large quantities of information and making the optimal judgment at all times. Thus, AI can improve care consistency, increase precision, accelerate discoveries, and minimize disparities [].

    Research Gaps and Aim

    The use of AI in the diagnosis of brain tumors and gliomas has been widely studied, with many reviews summarizing the results of numerous studies. However, existing reviews exhibit several limitations. For instance, many reviews focused on AI implementation in other diagnostic methods of gliomas, such as radiology-based [,], surgery-based [], and DNA-based diagnoses []. As for reviews focusing on AI implementation in the molecular diagnosis of gliomas, they fall into one of the following categories: not systematic [], outdated [], restricted by databases or search terms [,], lacking subtyping of gliomas [], not focused on genetic markers (IDH mutations and 1p/19q codeletion) [], or limited to a specific subfield of AI (eg, deep learning or machine learning) []. To address the limitations of the previous reviews, this study aims to systematically assess the performance of AI in detecting and classifying IDH mutation status and 1p/19q gene codeletion in adult-type gliomas using histopathological images.

    This review was conducted in accordance with the PRISMA-DTA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses–Extension for Diagnostic Test Accuracy) guidelines [] (see ) and registered with the International Prospective Register of Systematic Reviews (PROSPERO) under the number CRD420250653668.

    Search Strategy

    The literature search was performed on November 15, 2024, using the following digital databases: MEDLINE, PsycINFO, Embase, IEEE Xplore, ACM Digital Library, Scopus, and Google Scholar. A biweekly automated search was set for 4 months, concluding on March 15, 2025. Given the large number of results from Google Scholar, which are ranked by relevance, we reviewed only the first 100 entries (10 pages). Our query used a combination of relevant keywords and Boolean operators, including but not limited to ((“Artificial Intelligence” OR “Machine Learning” OR “Deep Learning”) AND (“Histopatholog*” OR “Whole-Slide Imag*”) AND (“Glioma*” OR “Astrocytoma*” OR “Oligodendroglioma*” OR “Glioblastoma*”)). The search was restricted to studies published between 2015 and 2025. The search query for each database is shown in . To identify additional studies, we screened the reference lists of included studies (ie, backward reference list checking) and checked studies that cited the included studies (ie, forward reference list checking).
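    A minimal sketch of how such a Boolean query can be assembled from the three keyword groups quoted above; the `or_group` helper is hypothetical, and database-specific syntax (field tags, proximity operators) varies and is intentionally not modeled.

```python
# Sketch: assembling the review's Boolean query from its three keyword groups.
# or_group is a hypothetical helper; exact database syntax is omitted.

def or_group(terms):
    """Quote each term and join the group with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

ai_terms = ["Artificial Intelligence", "Machine Learning", "Deep Learning"]
image_terms = ["Histopatholog*", "Whole-Slide Imag*"]
tumor_terms = ["Glioma*", "Astrocytoma*", "Oligodendroglioma*", "Glioblastoma*"]

# Groups are ANDed together; terms within a group are ORed.
query = " AND ".join(or_group(g) for g in (ai_terms, image_terms, tumor_terms))
print(query)
```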

    Study Selection

    The selection process of relevant studies consisted of three steps. The first step involved removing duplicates using EndNote. In the second step, the studies were assigned to two authors, who independently screened the papers by title and abstract. In the final step, each author independently read the remaining papers in full text. Discussions were held to resolve any differences or ambiguities in steps two and three.

    Study Eligibility Criteria

    Studies fulfilling the following criteria were included: (1) original articles, theses, dissertations, and conference papers written in English, (2) studies published in 2015 onward, (3) studies focused on patients with adult-type glioma with no restrictions on age, (4) studies that used histopathological images gathered from tissue biopsies, tumor excision, or digital histopathological images, (5) studies that assessed the performance of AI algorithms in the detection and classification of the molecular markers, specifically the IDH mutation status and the 1p/19q gene codeletion status using histopathology data, and (6) studies that reported results related to the performance of the developed AI models. No restriction was applied regarding study settings, study design, and country of publication. Studies were excluded if they (1) used data modalities other than histopathology images (eg, radiological images, stimulated Raman histology), (2) did not employ AI algorithms for molecular marker detection or classification, (3) were not human studies, or (4) were categorized as editorials, letters to editors, posters, conference abstracts, reviews, commentaries, short communications, and brief reports.

    Data Extraction

    Two authors independently extracted data from the eligible studies using predefined tables in a Microsoft Excel sheet. Any discrepancies in the extracted data were resolved through discussion. The extracted data included four main categories: (1) study characteristics, including surname of first author, publication type, publication year, and country of publication; (2) participant and dataset characteristics, including number of participants, mean age, female percentage, dataset size, data source, magnification, slide image resolution, level of analysis, number of classes, and ground truth assessor; (3) AI model characteristics, covering problem-solving approaches, AI classifiers, type of classification, aim of classifier, data input to AI algorithms, feature extraction methods, type of validation, and performance metrics; and (4) results, including accuracy, sensitivity, specificity, and area under the curve (AUC). The data extraction form is shown in .

    Risk of Bias Assessment

    To evaluate the quality and reliability of the included studies, two reviewers independently conducted a risk of bias assessment using an adapted version of the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2) tool []. The original QUADAS-2 tool was modified to better suit the context of our review, which focuses on AI applications. Specifically, we incorporated relevant elements from PROBAST (Prediction Model Risk of Bias Assessment Tool) to ensure that the assessment adequately addressed the methodological nuances of AI-based diagnostic models []. Each domain included four tailored signaling questions designed to determine the potential for bias and to evaluate applicability to the review’s research question. The risk of bias was rated separately for each domain, while concerns regarding applicability were assessed for the first three domains (ie, whether the participants, AI model, and outcome definitions were aligned with the review’s objectives). For example, the “Participants” domain examined issues such as sample representativeness, exclusions, and subgroup balance. The “Index Test” domain evaluated the transparency and consistency of the AI model’s design and feature use. The “Reference Standard” domain focused on the validity and consistency of outcome definitions and assessor qualifications, while the “Analysis” domain addressed data inclusion, preprocessing, set division, and model performance evaluation. To refine and validate our modified tool, we conducted a pilot assessment using five studies, which allowed us to fine-tune the criteria before full application. Subsequently, two reviewers independently assessed all included studies using the finalized tool (). Any discrepancies in assessments were resolved through discussion and consensus.

    Data Synthesis

    A narrative approach was used to synthesize the data extracted from the included studies. Specifically, we used text and tables to summarize and describe the characteristics and results of the included studies. We could not perform a meta-analysis because, for any given combination of performance measure (accuracy, sensitivity, specificity, and AUC), AI algorithm, and measured outcome (detection of IDH mutation and 1p/19q codeletion), fewer than two studies reported the confusion matrices or other details (eg, number of cases and controls in the test set) required to calculate the numerators and denominators. Instead of a meta-analysis, we calculated the traditional (unweighted) mean for each metric (ie, accuracy, sensitivity, specificity, and AUC).
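    As an illustration, the unweighted summary described above amounts to nothing more than the following sketch; the per-study accuracy values are hypothetical.

```python
# Sketch: the traditional (unweighted) mean, SD, and range reported per metric.
# Each study contributes equally regardless of sample size; this is why the
# summary is descriptive rather than a true meta-analytic pooled estimate.
from statistics import mean, stdev

accuracies = [0.86, 0.79, 0.91, 0.88, 0.83]  # hypothetical per-study values

summary = {
    "mean": mean(accuracies),
    "sd": stdev(accuracies),
    "range": (min(accuracies), max(accuracies)),
}
```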

    Search Results

    As shown in , the literature searches identified a total of 2453 reports from databases. After removing duplicate records (n=583), we screened the titles and abstracts of 1870 reports. This led to the exclusion of 1753 reports. Of the remaining 117 reports, we could not retrieve the full text for 5 reports. Subsequently, we checked the eligibility of the remaining 112 reports. This led to the exclusion of 91 reports for the following reasons: not related to glioma (n=7), not related to molecular classification (n=53), not pertaining to histopathology (n=11), inappropriate publication type (n=19), and not in the English language (n=1). We included an additional study by checking the reference list of the included studies. Ultimately, only 22 reports met the inclusion criteria and were included in the review [-].

    Figure 1. Flowchart of the study selection process.

    Characteristics of the Included Studies

    The included studies were published between 2015 and 2024 (). The majority of studies were published in 2023 (9/22, 40.91%), followed by 2024 (5/22, 22.73%). Only 3 of the 22 studies (13.64%) were conference papers, whereas 16 (72.73%) were journal articles and 3 (13.64%) were preprints. Studies were carried out in different countries, with the United States (6/22, 27.27%) and China (5/22, 22.73%) being the most frequent contributors. The number of participants in the included studies ranged from 29 to 2845, with an average of 765.1 (SD 815.6). The mean female participation rate was 42.45%, with a range between 34.5% and 49.3%. The dataset size (ie, number of whole-slide images or patches) varied widely (47‐27,000), with an average of 2464 (SD 5851). It is worth mentioning that the unit of analysis used to compute performance metrics was predominantly the whole-slide image level (21/22, 95.5%), while it was the patch level in 1 (4.5%) study. The mean participant age ranged from 42.2 to 54.5 years, with an average of 48.03 (SD 4.2; ). The characteristics of each included study are described in .

    Table 1. Characteristics of included studies.
    Features Number of studies References
    Year of publication, n (%)
    2024 5 (22.73) [,,,,]
    2023 9 (40.91) [,,,,-,]
    2022 2 (9.09) [,]
    2021 3 (13.64) [,,]
    2020 2 (9.09) [,]
    2019 1 (4.55) []
    Publication type, n (%)
    Journal article 16 (72.73) [,,,,-,-,-]
    Conference paper 3 (13.64) [,,]
    Preprint 3 (13.64) [,,]
    Country of publication, n (%)
    United States 6 (27.27) [,,,,,]
    China 5 (22.73) [,,,,]
    Germany 2 (9.09) [,]
    Australia 2 (9.09) [,]
    India 1 (4.55) []
    Luxembourg 1 (4.55) []
    Canada 1 (4.55) []
    Switzerland 1 (4.55) []
    South Korea 1 (4.55) []
    Saudi Arabia 1 (4.55) []
    Japan 1 (4.55) []
    Number of participants, mean (SD); range (%) 765.1 (815.6); 29‐2845 (95.45) [,,-]
    Female percentage (%), mean (SD); range (%) 42.45 (0.03); 34.5‐49.3 (59.09) [,,,,-,-]
    Dataset size, mean (SD); range (%) 2464 (5851); 47‐27,000 (95.45) [-]
    Mean age (year), mean (SD); range (%) 48.03 (4.2); 42.2‐54.5 (59.09) [,,,,-,-]

    aOne study did not report the mean age [].

    bStudies did not report the female percentage [-,-,,,].

    cOne study did not report the dataset size [].

    dStudies did not report the mean age [-,-,,,].

    Features of Histopathological Images

    As presented in , the histopathological images were captured at various magnification levels, including 2.5×, 5×, 10×, 20×, 40×, and 100×. Among these, 20× magnification was the most commonly used (12/22, 54.55%). While 14 (63.64%) studies focused on architectural-level analysis, 13 (59.09%) studies focused on cellular-level analysis. Slide resolutions varied across studies, with 256×256 pixels being the most common (8/22, 36.36%). Datasets were sourced from open-source databases (16/22, 72.73%), closed-source datasets (13/22, 59.09%) (ie, data collected by the study authors or obtained from previous studies), or a combination of both. The open datasets used include The Cancer Genome Atlas (15/22, 68.18%) and the Digital Brain Tumor Atlas (1/22, 4.55%). The histopathological images were labeled into 2 classes in about two-thirds of the studies (14/22, 63.64%). The features of histopathological images in each study are described in .

    Table 2. Features of histopathological images.
    Features Number of studies, n (%) References
    Magnification
    2.5× 1 (4.55) []
    5× 2 (9.09) [,]
    10× 4 (18.18) [,,,]
    20× 12 (54.55) [,-,,,,,,-]
    40× 5 (22.73) [,,,,]
    100× 1 (4.55) []
    Not reported 7 (31.82) [,,,,-]
    Level of analysis
    Architecture level 14 (63.64) [,-,,,,,-,-]
    Cell level 13 (59.09) [,-,,-,,,-]
    Not reported 7 (31.82) [,,,,-]
    Slide image resolution
    224×224 4 (18.18) [,,,]
    256×256 8 (36.36) [,,,,,,,]
    512×512 3 (13.64) [,,]
    1024×1024 5 (22.73) [,,,,]
    Other 7 (31.82) [,,,,,,]
    Not reported 2 (9.09) [,]
    Data sources
    Open source 16 (72.73) [-,,-,-,-,]
    Closed source 13 (59.09) [-,,,-,-]
    Open dataset name
    TCGA 15 (68.18) [,,,-,-,-,]
    DBTA 1 (4.55) []
    Not applicable 6 (27.27) [,,,,,]
    Number of classes
    Two 14 (63.64) [,,-,,]
    Three 2 (9.09) [,]
    Four 4 (18.18) [,,,]
    Five 2 (9.09) [,]
    Six 1 (4.55) []

    aStudies used both data sources, open and closed [,,,,,].

    bValues in italics indicate highest score.

    cTCGA: The Cancer Genome Atlas.

    dDBTA: Digital Brain Tumor Atlas.

    Features of AI

    AI was primarily used for binary classification (19/22, 86.36%), where models differentiate between two outcomes (eg, IDH-mutant vs wild-type), whereas only 5 (22.73%) studies employed multiclass classification, which predicts among 3 or more glioma subtypes (). The majority of studies (20/22, 90.91%) used AI for IDH subtyping (ie, distinguishing between IDH mutant and IDH wild-type), whereas only 6 (27.27%) studies applied AI for subtyping IDH-mutant tumors in relation to 1p/19q gene codeletion. Convolutional neural networks (CNNs) were both the most commonly used classifier (12/22, 54.55%) and the most common feature extraction method (20/22, 90.91%). Among classifiers, CNNs were followed by multiple instance learning (MIL), which handles weakly labeled slide-level data by learning from sets of image patches without requiring pixel-level annotations (11/22, 50%); transformers (4/22, 18.18%); and hybrid models (ie, single end-to-end architectures that integrate convolutional components with attention-based mechanisms [eg, CNN-transformer models] within a unified computational graph and are trained jointly) (2/22, 9.09%). For specific feature extraction models, ResNet50 led with 45.45% (10/22) usage, followed by densely connected convolutional network 121 (DenseNet121; 5/22, 22.73%) and InceptionV3 (4/22, 18.18%). Among the included studies, most models were developed using histopathological images alone (17/22, 77.3%), while a smaller number (5/22, 22.7%) incorporated additional modalities such as MRI scans (2/22, 9.1%), demographic information (3/22, 13.6%), and clinical variables (2/22, 9.1%). The most widely used method for model validation was k-fold cross-validation (11/22, 50%), followed by the train-test split approach (10/22, 45.45%). AI model performance was assessed using multiple metrics, with accuracy being the most frequently reported (19/22, 86.36%), followed by AUC and sensitivity, each used in 68.18% (15/22) of studies. A detailed summary of the AI models used in each study is provided in .
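    To make the MIL idea above concrete, the following NumPy sketch shows attention pooling in the style of attention-based MIL (AttMIL): patch embeddings are weighted by a softmax over learned scores and summed into a single slide-level vector. The parameters V and w are random stand-ins for learned weights, and the dimensions are illustrative, not taken from any included study.

```python
# Sketch of attention-based MIL pooling: a bag of patch feature vectors is
# reduced to one slide-level embedding via softmax attention weights.
# V and w stand in for learned parameters; here they are random for illustration.
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(patch_feats, V, w):
    """patch_feats: (n_patches, d) array. Returns (slide_vec, weights)."""
    scores = np.tanh(patch_feats @ V) @ w        # one attention score per patch
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    return weights @ patch_feats, weights

patches = rng.normal(size=(100, 512))            # eg, ResNet50 patch features
V = 0.01 * rng.normal(size=(512, 128))
w = rng.normal(size=128)
slide_vec, attn = attention_pool(patches, V, w)
```

Only the slide-level label is needed for training such a model, which is what allows MIL to work without pixel-level annotations.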

    Table 3. Features of artificial intelligence (AI).
    Feature Number of studies, n (%) References
    Type of classification
    Binary 19 (86.36) [,,,-]
     Multiclass 5 (22.73) [,,,,]
    Aim of AI algorithms
    Subtyping IDH 20 (90.91) [-,-]
     Subtyping 1p/19q codeletion 6 (27.27) [,,,,,]
    Classifier
    Convolutional neural networks 12 (54.55) [,,,,-,-,]
     Multiple instance learning 11 (50) [-,,,,-,,]
     Transformers 4 (18.18) [,,,]
     Hybrid models 2 (9.09) [,]
     Ensemble model 2 (9.09) [,]
     Logistic regression 2 (9.09) [,]
     Random forest 1 (4.55) []
     Multilayer perceptron 1 (4.55) []
    Specific classifier
    AttMIL 7 (31.82) [,,,,,,]
    ResNet50 5 (22.73) [,,,,]
    Traditional MIL 4 (18.18) [,,,]
    InceptionV3 4 (18.18) [,,,]
    ViT 4 (18.18) [,,,]
    DenseNet121 3 (13.64) [,,]
    VGG19 3 (13.64) [,,]
    Feature extraction methods
    Convolutional neural networks 20 (90.91) [-,-,]
     Transformers 4 (18.18) [,,,]
     Hybrid models 4 (18.18) [,,,]
     MIL 1 (4.55) []
     Random forest 1 (4.55) []
    Specific feature extraction model
    ResNet50 10 (45.45) [,,,,,-,,]
     DenseNet121 5 (22.73) [,,,,]
     InceptionV3 4 (18.18) [,,,]
     ResNet18 3 (13.64) [,,]
     VGG19 2 (9.09) [,,]
     ViT 2 (9.09) [,]
     Not reported 2 (9.09) [,]
    Data modality
     Unimodal data 17 (77.3) [-,,,]
     Multimodal data 5 (22.7) [,-,]
    Types of validation
    k-fold cross-validation 11 (50) [,,,,,,,-,]
     Training-test split 10 (45.45) [,,,,,,-,]
     External validation 7 (31.82) [,,,,,,]
    Performance measures
    Accuracy 19 (86.36) [-,,,-]
     AUC 15 (68.18) [-,,,,,-,-]
     Sensitivity 15 (68.18) [-,,,-,-,,,]
    F1-score 9 (40.91) [,,,,,,,,]
     Precision 8 (36.36) [,,,-,,]
     Specificity 8 (36.36) [,,,,,,,]
     Others 8 (36.36) [,,,-,,]

    aValues in italic indicate highest score.

    bStudies reported results for both IDH mutation status and 1p/19q codeletion status [,,,].

    cIDH: isocitrate dehydrogenase.

    dAttMIL: attention-based multiple instance learning.

    eMIL: multiple instance learning.

    fViT: vision transformer.

    gDenseNet121: densely connected convolutional network 121.

    hVGG: visual geometry group.

    iAUC: area under the curve.

    Results of the Risk of Bias and Applicability

    In the selection of participants domain, nearly a quarter of the included studies (5/22, 23%) provided sufficient information to confirm the use of an appropriate consecutive or random sampling of eligible participants. Nearly all studies (21/22, 95%) reported a sufficient sample size. Additionally, 8 (36%) studies avoided inappropriate exclusions, while over half (12/22, 55%) demonstrated balanced subgroup representation. Consequently, a small proportion of the studies (2/22, 9%) were assessed as having a low risk of bias, while a larger portion (8/22, 36%) were rated as having an unclear risk of bias in the “selection of participants” domain, as shown in .

    In terms of matching participants to the predefined requirements of the review question, a low level of concern was identified in 95% (21/22) of the included studies, as shown in .

    Figure 2. Results of the assessment of risk of bias in the included studies.
    Figure 3. Results of the assessment of applicability concerns in the included studies.

    A substantial majority of the included studies comprehensively detailed their AI models, with 21 of the 22 (95%) studies providing thorough descriptions. All 22 studies (100%) clearly reported the features (predictors) used, and all ensured that these features were sourced without prior knowledge of the outcome data. Consistency in feature assessment across participants was observed in 21 of the 22 (95%) studies. Consequently, the potential for bias in the “index test” domain was assessed as low in 20 of the 22 (91%) studies, as shown in . In addition, all 22 (100%) studies were found to have minimal concerns regarding the alignment between the model’s predictors and the review question’s criteria, as illustrated in .

    In most of the included studies (20/22, 91%), the outcome of interest, histopathology image analysis, was conducted by an expert (eg, pathologist or neuropathologist), ensuring the necessary expertise to accurately classify the outcomes. All studies (22/22, 100%) applied consistent and uniform outcome definitions across all participants, with outcome classification (ie, image labeling) blinded to the AI model predictions, and all determined the outcome without prior knowledge of the predictor information. Furthermore, in all 22 (100%) studies, the diagnostic process was carried out over an appropriate duration, using the same diagnostic criteria for all participants. As a result, the risk of bias in the “reference standard” domain was considered low in the majority of studies (20/22, 91%), as shown in . Additionally, a similar proportion of studies (21/22, 95%) exhibited minimal concerns regarding discrepancies in outcome definition, timing, and determination methods, as shown in .

    Finally, all studies (22/22, 100%) ensured the inclusion of all enrolled participants in the data analysis. Nearly all (21/22, 95%) reported no missing values or, where missing values were present, handled them appropriately (eg, through multiple imputation).

    Similarly, all studies (22/22, 100%) adopted suitable measures to evaluate the performance of their models: the confusion matrix was presented or more than one evaluation measure was used, and the selected measures were deemed appropriate.

    Nearly all studies (21/22, 95%) demonstrated an appropriate split among training, validation, and test sets, with distributions aligned with best practices in the field (eg, 70%‐80% for training, 10%‐15% for validation, and 10%‐20% for testing).
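    A minimal sketch of such a split at the patient level, with hypothetical patient IDs and the 70/15/15 ratio chosen for illustration; splitting by patient rather than by patch keeps tiles from the same slide out of both training and test sets.

```python
# Sketch: a reproducible 70/15/15 train/validation/test split over
# hypothetical patient IDs. Splitting by patient avoids leakage between sets.
import random

patients = [f"case_{i:03d}" for i in range(200)]   # hypothetical cohort
random.Random(42).shuffle(patients)                 # fixed seed for reproducibility

n = len(patients)
n_train, n_val = round(0.70 * n), round(0.15 * n)
train = patients[:n_train]
val = patients[n_train:n_train + n_val]
test = patients[n_train + n_val:]
```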

    Consequently, the conduct and interpretation of the analysis did not introduce bias in most studies, with 21/22 (95%) having a low risk of bias in the “analysis” domain, as shown in . A detailed breakdown of the “risk of bias” and “applicability concerns” for each domain in every study is available in .

    Results of the Included Studies

    As shown in , the accuracy of AI models in detecting or subtyping molecular markers in adult-type gliomas ranged from 58% to 100%, with a mean accuracy of 85.46%. Sensitivity ranged from 62% to 100%, averaging 84.55%, while specificity ranged from 46% to 100%, with a mean of 86.03%. The AUC values ranged from 46% to 99%, with an average of 86.53%. Hybrid models demonstrated the highest accuracy (92.80%) and sensitivity (89.62%), while CNN models achieved the highest specificity (89.30%). Logistic regression outperformed other models in terms of AUC (93.05%). Among specific CNN architectures, DenseNet121 achieved the highest accuracy (90.82%), sensitivity (89.28%), and AUC (91.05%). Moreover, AI models that used multimodal data outperformed those that used unimodal data in terms of sensitivity (90.15% vs 84.31%) and AUC (88.93% vs 86.29%). In outcome prediction, AI models performed better in detecting IDH mutations than 1p/19q codeletions, with higher accuracy (86.13% vs 81.63%), specificity (86.61% vs 78.11%), and AUC (86.74% vs 85.15%). For IDH subtyping, multiclass classification outperformed binary classification in terms of accuracy (91.98% vs 84.02%) and sensitivity (93.41% vs 80.18%). However, these differences should be interpreted as descriptive trends rather than statistically validated superiority, as formal between-group comparisons were not feasible.
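    For reference, the three threshold-based measures summarized above reduce to simple ratios over a confusion matrix; the counts in this sketch are hypothetical, and AUC is omitted because it requires the model's continuous scores rather than hard predictions.

```python
# Sketch: accuracy, sensitivity, and specificity from a hypothetical confusion
# matrix for a binary IDH classifier (positive class = IDH-mutant).
tp, fn, fp, tn = 45, 5, 8, 42   # hypothetical counts

accuracy = (tp + tn) / (tp + tn + fp + fn)   # correct calls over all cases
sensitivity = tp / (tp + fn)                 # recall on IDH-mutant cases
specificity = tn / (tn + fp)                 # recall on IDH-wild-type cases
```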

    Table 4. Summary of overall performance of all models based on number of classes, type of data, and artificial intelligence (AI) algorithms in the target variable. Each cell reports: number of studies; mean % (SD); range. NR: not reported.

    | Groups | Accuracy | Sensitivity | Specificity | AUC |
    | --- | --- | --- | --- | --- |
    | Algorithms | | | | |
    | CNNs | 8; 85.60 (0.09); 0.72-1.00 | 8; 84.09 (0.11); 0.64-1.00 | 7; 89.30 (0.09); 0.75-1.00 | 8; 86.54 (0.06); 0.76-0.99 |
    | MIL | 10; 80.83 (0.09); 0.58-0.91 | 8; 82.54 (0.13); 0.62-0.99 | 3; 68.69 (0.23); 0.46-0.94 | 10; 81.73 (0.16); 0.46-0.97 |
    | Transformer | 4; 88.27 (0.03); 0.83-0.92 | 3; 81.62 (0.10); 0.69-0.92 | NR | 3; 92.37 (0.04); 0.89-0.96 |
    | Hybrid model | 3; 92.80 (0.05); 0.85-0.97 | 3; 89.62 (0.08); 0.81-0.97 | NR | 2; 92.20 (0.04); 0.89-0.95 |
    | Ensemble model | 2; 86.35 (0.12); 0.78-0.95 | 2; 89.60 (0.05); 0.86-0.93 | NR | 2; 92.00 (0.09); 0.85-0.99 |
    | Logistic regression | 2; 87.15 (0.01); 0.86-0.88 | 2; 86.00 (0.12); 0.78-0.94 | NR | 2; 93.05 (0.00); 0.93-0.931 |
    | CNN models | | | | |
    | ResNet (ResNet18, ResNet50) | 7; 87.04 (0.09); 0.72-1.00 | 6; 87.51 (0.11); 0.69-1.00 | 5; 91.48 (0.07); 0.84-1.00 | 6; 90.19 (0.06); 0.81-0.99 |
    | VGG (VGG16, VGG19) | 4; 89.67 (0.09); 0.79-1.00 | 4; 87.03 (0.13); 0.69-1.00 | 2; 90.65 (0.13); 0.81-1.00 | 2; 81.55 (0.01); 0.81-0.82 |
    | DenseNet121 | 3; 90.82 (0.10); 0.79-0.97 | 3; 89.28 (0.10); 0.78-0.97 | NR | 2; 91.05 (0.08); 0.86-0.97 |
    | InceptionV3 | 3; 80.15 (0.05); 0.77-0.86 | 3; 75.82 (0.11); 0.64-0.86 | NR | 3; 83.58 (0.03); 0.80-0.87 |
    | Type of data | | | | |
    | Unimodal data | 20; 85.67 (0.09); 0.58-1.00 | 16; 84.31 (0.11); 0.62-1.00 | 9; 86.27 (0.15); 0.46-1.00 | 16; 86.29 (0.10); 0.46-0.99 |
    | Multimodal data | 5; 83.64 (0.05); 0.78-0.90 | 2; 90.15 (0.06); 0.86-0.94 | NR | 4; 88.93 (0.03); 0.85-0.93 |
    | Outcome prediction | | | | |
    | 1p/19q | 6; 81.63 (0.12); 0.58-1.00 | 6; 87.64 (0.09); 0.75-1.00 | 3; 78.11 (0.28); 0.46-1.00 | 5; 85.15 (0.19); 0.46-0.99 |
    | IDH | 19; 86.13 (0.08); 0.72-1.00 | 14; 83.96 (0.11); 0.62-1.00 | 8; 86.61 (0.12); 0.52-1.00 | 16; 86.74 (0.08); 0.52-0.99 |
    | Class prediction | | | | |
    | Binary IDH | 16; 84.02 (0.07); 0.72-1.00 | 12; 80.18 (0.11); 0.62-1.00 | 8; 86.61 (0.12); 0.52-1.00 | 15; 86.87 (0.08); 0.52-0.99 |
    | Multiclass IDH | 3; 91.98 (0.07); 0.75-1.00 | 2; 93.41 (0.05); 0.85-1.00 | NR | NR |
    | Overall | 20; 85.46 (0.08); 0.58-1.00 | 16; 84.55 (0.01); 0.62-1.00 | 9; 86.03 (0.15); 0.46-1.00 | 17; 86.53 (0.10); 0.46-0.99 |

    a: Specificity not reported in all studies.
    b: AUC: area under the curve.
    c: CNNs: convolutional neural networks.
    d: Values in italics indicate highest score.
    e: MIL: multiple instance learning.
    f: VGG: visual geometry group.
    g: DenseNet121: densely connected convolutional network 121.
    h: IDH: isocitrate dehydrogenase.
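    The summary statistics in Table 4 are simple descriptive aggregates, not meta-analytic pooled estimates. As a minimal sketch of that computation (the per-study values below are illustrative placeholders, not data from the review, and the population SD is assumed):

    ```python
    from statistics import mean, pstdev

    def summarize(values):
        """Descriptive summary of per-study metric values (as fractions):
        number of studies, mean %, SD, and range."""
        return {
            "n": len(values),
            "mean_pct": round(100 * mean(values), 2),
            "sd": round(pstdev(values), 2),
            "range": (min(values), max(values)),
        }

    # Hypothetical per-study accuracies for one model family
    accuracies = [0.72, 0.85, 0.90, 1.00]
    print(summarize(accuracies))
    ```

    Each row of Table 4 is produced by applying this kind of summary to the studies in one group, which is why the row-level means cannot be compared with formal statistical rigor.
    
    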

    Principal Findings

    This review aimed to systematically evaluate the performance of AI models in detecting and classifying IDH mutation status and 1p/19q codeletion in adult-type gliomas using histopathological images. Our analysis indicates that these models demonstrate promising diagnostic capabilities, achieving pooled mean values of 85.46% for accuracy, 84.55% for sensitivity, 86.03% for specificity, and 86.53% for the area under the receiver operating characteristic curve. Notably, performance across all metrics was comparable, suggesting that AI models offer balanced diagnostic capabilities. Despite these promising results, they remain below the commonly cited thresholds for clinical adoption, which often exceed 90% for high-stakes diagnostic tasks []. As such, the current evidence is not sufficient to support stand-alone clinical decision-making. Therefore, the results of this review should be interpreted as early indicators of feasibility rather than clinically actionable performance, underscoring the substantial methodological and validation work still required before these models can be safely integrated into clinical workflows.
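    For reference, the four metrics discussed throughout this review can be computed from per-case predictions as follows. The labels and scores below are illustrative, and AUC is computed here as the Mann-Whitney rank statistic rather than by integrating an ROC curve; this is a definitional sketch, not the procedure of any reviewed study:

    ```python
    def confusion_metrics(y_true, y_pred):
        """Accuracy, sensitivity, and specificity from binary labels
        (here, say, 1 = IDH-mutant, 0 = IDH-wildtype)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        return {
            "accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        }

    def auc(y_true, scores):
        """AUC as the probability that a random positive case scores
        higher than a random negative one (ties count half)."""
        pos = [s for t, s in zip(y_true, scores) if t == 1]
        neg = [s for t, s in zip(y_true, scores) if t == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Toy labels, hard predictions, and model scores
    y_true = [1, 1, 1, 0, 0, 0]
    y_pred = [1, 1, 0, 0, 0, 1]
    scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6]
    print(confusion_metrics(y_true, y_pred), auc(y_true, scores))
    ```
    
    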

    These results are generally consistent with prior reviews. Specifically, a systematic review conducted by Farahani et al [] reported a similar performance in IDH prediction using MRI, with pooled sensitivity of 84% and specificity of 87% and an AUC of 0.89. Further, Chen et al [] found pooled sensitivity of 83%, specificity of 84%, and AUC of 0.90 for predicting IDH mutations using machine learning–based radiomics. Taken together, these comparisons suggest that our findings are in line with the broader body of research, reinforcing the reliability and clinical promise of AI-based histopathological approaches for glioma molecular classification.

    Individual studies showed considerable variability across performance metrics; for instance, specificity values ranged from 46% to 100%. This variability likely stems from population differences, sample sizes, model architectures, and the inherent challenge of predicting rare molecular alterations (eg, telomerase reverse transcriptase mutations) from histopathology images, compounded by class imbalance, domain shift (ie, performance degradation when models are applied to data from different sources), and the absence of distinct histological correlates. While accuracy and sensitivity remain higher for well-characterized targets (eg, IDH, 1p/19q), specificity suffers when the model encounters rare or ambiguous cases in external validation.

    Hybrid models achieved the highest mean accuracy (92.80%) and sensitivity (89.62%), while logistic regression models achieved the top mean AUC, slightly higher than that of hybrid models (93.05% vs 92.20%). These results suggest that hybrid models deliver the strongest overall performance. Hybrid models are defined as AI frameworks that integrate multiple deep learning architectures to leverage the complementary strengths of each component. In the context of glioma diagnosis, recent reviews have highlighted the effectiveness of hybrid models combining CNNs with transformer-based architectures [-]. CNNs excel at extracting local spatial features from histopathological images, such as cellular morphology and texture patterns, while transformers are adept at capturing long-range dependencies and global contextual information through their self-attention mechanisms. By combining these capabilities, hybrid CNN-transformer models can achieve a more comprehensive understanding of complex tissue structures and molecular characteristics within gliomas, leading to improved predictive performance. Our findings align with a recent review by Redlich et al [], which analyzed 83 studies on AI-based methods for whole-slide histopathology image analysis of gliomas. That review highlighted that hybrid models combining CNNs with other architectures, including transformers, consistently outperform single-architecture models in tasks such as subtyping, grading, and molecular marker prediction [].
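    To make this complementarity concrete, here is a deliberately tiny pure-Python sketch (not the architecture of any reviewed study): a one-dimensional convolution extracts local differences from a toy patch sequence, and a single self-attention step then mixes those features across all positions.

    ```python
    import math

    def conv1d(seq, kernel):
        """Local feature extraction: valid 1-D convolution (CNN-style)."""
        k = len(kernel)
        return [sum(seq[i + j] * kernel[j] for j in range(k))
                for i in range(len(seq) - k + 1)]

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def self_attention(feats):
        """Global context: each position attends to every other position,
        with weights from dot products (scalar features for simplicity)."""
        out = []
        for q in feats:
            weights = softmax([q * k for k in feats])
            out.append(sum(w * v for w, v in zip(weights, feats)))
        return out

    # Toy "tissue patch" intensities -> local edge features -> global mixing
    patch_row = [0.1, 0.2, 0.9, 0.8, 0.1, 0.2]
    local = conv1d(patch_row, [-1.0, 1.0])   # local differences (CNN role)
    mixed = self_attention(local)            # long-range context (transformer role)
    print(mixed)
    ```

    Real hybrid models operate on 2-D feature maps with learned multi-head attention, but the division of labor is the same: convolution for local morphology, attention for long-range context.
    
    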

    Among pure CNN backbones, DenseNet121 demonstrated the best performance, achieving an accuracy of 90.82%, sensitivity of 89.28%, and an AUC of 91.05%. DenseNet121’s superior performance may be due to its unique architectural features, notably dense connectivity, which enhance training dynamics, feature representation, and parameter efficiency.

    In terms of data modalities employed by AI models, our analysis demonstrated that multimodal approaches exhibited superior performance in sensitivity (90.15% vs 84.31%) and AUC (88.93% vs 86.29%) compared to unimodal models. This suggests that multimodal models benefit from richer and more diverse feature representations, thereby enhancing their diagnostic capability. However, unimodal models achieved slightly higher accuracy (85.67% vs 83.64%), which may be attributed to the fact that none of the five multimodal studies included in the accuracy analysis employed hybrid modeling. Instead, they utilized simple model ensembles or comparisons involving architectures such as multilayer perceptron, logistic regression, random forests, and CNNs. Our findings are consistent with existing literature. For instance, a systematic review by Alleman et al [] concluded that multimodal deep learning approaches enhance prognostic accuracy in glioma patients compared to unimodal models, particularly in predicting overall survival. Similarly, d’Este et al [] demonstrated that combining multimodal imaging with AI techniques improves visualization of glioma infiltration, surpassing the capabilities of single-modality approaches. Furthermore, the GlioMT model, introduced by Byeon et al [], which integrates imaging and clinical data, exhibited superior performance in predicting adult-type diffuse gliomas compared to conventional CNNs and visual transformers.
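    A common multimodal design is late fusion, in which each modality produces its own probability and the outputs are then combined. This is a minimal sketch with hypothetical modality names, probabilities, and weights, not the fusion scheme of any specific study:

    ```python
    def late_fusion(prob_by_modality, weights=None):
        """Weighted late fusion: combine per-modality predicted probabilities
        (e.g. histopathology, MRI, clinical) into one case-level probability."""
        mods = list(prob_by_modality)
        if weights is None:
            weights = {m: 1 / len(mods) for m in mods}  # equal weighting
        total = sum(weights[m] for m in mods)
        return sum(weights[m] * prob_by_modality[m] for m in mods) / total

    # Hypothetical per-modality IDH-mutation probabilities for one case
    probs = {"histopathology": 0.80, "mri": 0.65, "clinical": 0.50}
    print(late_fusion(probs))
    ```

    Early fusion (concatenating features before classification) is the main alternative; late fusion is shown here only because it is the simplest to express.
    
    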

    Regarding outcome prediction and AI model performance in detecting molecular markers, our analysis revealed that AI models demonstrated stronger performance in identifying IDH mutations compared to 1p/19q codeletions. Specifically, IDH detection yielded higher accuracy (86.13% vs 81.63%), specificity (86.61% vs 78.11%), and AUC (86.74% vs 85.15%). In contrast, AI models achieved slightly better sensitivity for 1p/19q codeletion detection (87.64% vs 83.96%). The discrepancy in performance between IDH mutation and 1p/19q codeletion detection may be attributed to several factors. IDH mutations often result in more distinct histopathological features, making them more amenable to detection by AI models. In contrast, 1p/19q codeletions may not produce as pronounced morphological changes, posing a greater challenge for AI-based detection. The findings of published studies align with our results. For example, the review by Farahani et al [] showed higher performance in identifying IDH mutations compared to 1p/19q codeletions using MRI in terms of sensitivity (84% vs 76%) and specificity (87% vs 85%).

    In our systematic review, CNNs and MIL were the algorithms most commonly used in AI models, likely due to their suitability for histopathological image analysis. CNNs are designed to capture spatial hierarchies in image data through convolutional layers, making them effective at identifying morphological patterns, such as cellular structures and tissue architecture, that are critical in tumor classification. Histopathological slides of gliomas are typically of high resolution and contain complex visual features, which CNNs can learn to recognize with high accuracy. Meanwhile, MIL frameworks address the challenge of weakly labeled data, which is common in medical imaging, where only slide-level labels (eg, tumor type or mutation status) are available rather than precise annotations at the pixel or region level. MIL allows models to learn from sets of image patches (instances) while associating predictions with the whole slide (bag), making it particularly useful in computational pathology, where exhaustive labeling is impractical.
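    The bag-and-instance idea described above can be illustrated with the classic max-pooling MIL aggregation; the patch scores below are made up, and real MIL systems typically use learned (eg, attention-based) pooling instead:

    ```python
    def mil_max_pooling(patch_scores, threshold=0.5):
        """Slide-level ('bag') prediction from patch ('instance') scores:
        under the standard MIL assumption, a slide is positive if at least
        one patch looks positive, i.e. max pooling over instances."""
        slide_score = max(patch_scores)
        return slide_score, slide_score >= threshold

    # One whole slide = a bag of weakly labeled patch scores
    tumor_slide = [0.05, 0.10, 0.92, 0.30]   # one strongly suspicious patch
    normal_slide = [0.05, 0.12, 0.20, 0.08]

    print(mil_max_pooling(tumor_slide))   # (0.92, True)
    print(mil_max_pooling(normal_slide))  # (0.2, False)
    ```
    
    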

    Practical and Research Implications

    While our results are encouraging, they are not yet optimal and arguably fall below what is acceptable in clinical practice []. Thus, our findings should be interpreted with caution due to several limitations. First, 15 of the 22 studies included in the review utilized the same dataset, raising concerns regarding the generalizability of the results. Second, many of the studies were based on small sample sizes, limiting the robustness and reliability of the conclusions. Third, several key findings were derived from fewer than 5 studies, reducing the strength of the evidence. Lastly, the aggregation of results using simple arithmetic means rather than formal meta-analytic techniques may have limited the ability to draw definitive conclusions about the overall efficacy of AI models.

    Given these limitations, AI should not be used as a stand-alone diagnostic tool for the detection or subtyping of adult-type gliomas but rather as a complementary approach alongside conventional methods. Future research should prioritize the development of AI models using larger and more diverse datasets to enhance performance and generalizability. Ensuring adequate sample sizes and improving the transparency of reporting, such as including confusion matrices, will also facilitate more rigorous meta-analyses. Notably, the majority of studies reviewed developed AI models exclusively from histopathological images. Developing multimodal AI models that incorporate additional data types (eg, radiological and clinical information) may further enhance diagnostic accuracy. Thus, continued efforts are needed to advance the integration of multimodal data in AI model development for the detection and subtyping of adult-type gliomas.

    In this review, the majority of studies used AI algorithms that were either MIL or CNNs, indicating a research gap in exploring the performance and effectiveness of other types of AI algorithms (eg, transformers, hybrid models, ensemble models [ie, approaches that combine predictions from multiple independently trained models using averaging, voting, or weighted fusion strategies], or logistic regression). Additionally, we observed a lack of research assessing the performance of AI models that use multimodal data, which may have affected the accuracy of the results. Moreover, the number of studies evaluating AI models’ performance in detecting IDH mutations was noticeably greater than those that used 1p/19q as a predicted outcome. These studies predominantly employed binary IDH classification (ie, IDH mutation vs IDH wild-type), whereas studies using IDH multiclass classification (eg, astrocytoma, oligodendroglioma) were markedly fewer. Future research should address these gaps to produce more accurate and representative findings.
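    The ensemble strategies mentioned in passing (voting, averaging, weighted fusion) reduce to a few lines; the model outputs below are hypothetical and serve only to show the mechanics:

    ```python
    from collections import Counter

    def majority_vote(predictions):
        """Hard-voting ensemble over class labels from independent models."""
        return Counter(predictions).most_common(1)[0][0]

    def average_probs(prob_lists):
        """Soft-voting ensemble: average class probabilities across models."""
        n = len(prob_lists)
        return [sum(p[i] for p in prob_lists) / n
                for i in range(len(prob_lists[0]))]

    # Three hypothetical models classifying one case
    labels = ["IDH-mutant", "IDH-mutant", "IDH-wildtype"]
    probs = [[0.7, 0.3], [0.6, 0.4], [0.4, 0.6]]  # [P(mutant), P(wildtype)]

    print(majority_vote(labels))
    print(average_probs(probs))
    ```

    Soft voting generally exploits model confidence better than hard voting, which is one reason averaging-based ensembles are a natural next step beyond the CNN- and MIL-dominated designs noted above.
    
    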

    Our findings that AI hybrid models performed better in detecting adult-type gliomas than other models underscore the need for more advanced approaches to explore their effectiveness and their potential to achieve diagnostic outcomes comparable to those of histopathologists. However, their real-world applicability requires consideration of computational and operational demands. These models often require greater processing power (eg, GPU-enabled servers or high-performance cloud computing) and longer inference times compared to conventional CNNs. Such requirements may pose challenges for pathology laboratories lacking advanced digital infrastructure. Nonetheless, many modern clinical centers are increasingly equipped with GPU-accelerated digital pathology ecosystems, and cloud-based deployment can significantly reduce the need for on-site hardware. From a cost-benefit perspective, the additional computational resources required by hybrid models may be justified given their superior accuracy and robustness, particularly for molecular prediction tasks where diagnostic errors carry serious clinical implications. However, successful implementation in routine practice will depend on streamlined model optimization, efficient inference pipelines, and prospective validation of runtime performance to ensure that diagnostic gains meaningfully outweigh operational costs.

    As our findings indicated superior performance of multimodal AI models in sensitivity and AUC, we suggest a complementary approach: multimodal AI models may serve as effective initial diagnostic tools due to their comprehensive data integration, while unimodal models can be employed subsequently for more accurate and specific outcome determination. Moreover, our results reflect a stronger performance of AI models in the detection of IDH mutations compared to 1p/19q codeletion in all metrics except sensitivity. Therefore, we suggest that while IDH mutation detection should remain a primary focus for model refinement and deployment due to its balanced high performance, models targeting 1p/19q codeletions may serve as effective initial diagnostic tools in adult-type gliomas due to their higher sensitivity, which is critical in early disease detection.

    Given that the current review focused on IDH mutations and 1p/19q codeletions, we encourage researchers to undertake further reviews covering other glioma molecular markers, including MGMT methylation, ATRX, and telomerase reverse transcriptase mutations. Moreover, further research should extend to other types of brain tumors, such as meningiomas, pituitary adenomas, and schwannomas. These tumors differ significantly in terms of biological behavior, treatment options, and clinical outcomes, warranting a comprehensive evaluation of AI’s potential in diagnosing and subtyping each tumor type.

    Limitations

    This systematic review is subject to several limitations. First, the generalizability of the findings is limited, as approximately 68.18% of the included studies relied on the same dataset, and the number of included studies is relatively small (n=22). This raises concerns about the external validity of the results and their applicability to the broader population of adult-type glioma patients. To address the potential duplication of patients or slides across studies (ie, those relying on The Cancer Genome Atlas), we reviewed the data sources reported by each study to identify where overlap was likely. Because individual-level data were not available, we were unable to verify or remove duplicate cases. Instead, we treated studies drawing from the same dataset as nonindependent samples and interpreted pooled estimates with caution. As a result, our summary metrics should be viewed as descriptive indicators rather than independent, unbiased effect estimates. Second, the review focused exclusively on adult-type gliomas, thereby restricting the ability to draw conclusions about the performance of AI in detecting other adult brain tumors or brain tumors in the general population. Third, the analysis was confined to studies utilizing histopathological images, limiting the assessment of AI performance with alternative imaging modalities, such as MRI or computed tomography scans. Fourth, meta-analysis was not feasible due to incomplete reporting of key performance metrics across studies. Instead, mean values were calculated to provide a descriptive summary, though this approach lacks the statistical rigor and precision of formal meta-analysis. Finally, the inclusion of only English-language studies may have introduced language bias and led to the exclusion of relevant research published in other languages, potentially affecting the comprehensiveness of the review.

    Conclusion

    This systematic review demonstrates that AI models applied to histopathological images show strong potential for the molecular classification of adult-type gliomas, particularly in detecting IDH mutations and 1p/19q codeletions. Our findings indicate that while overall diagnostic performance is promising—especially for IDH mutations—variability in model performance and methodological limitations across studies temper the generalizability of results. Hybrid models and multimodal approaches emerged as particularly effective strategies, offering enhanced sensitivity and discriminative capability. However, the limited use of diverse datasets, the predominance of unimodal designs, and the lack of standardized reporting remain critical barriers to clinical translation. Given these constraints, AI models should currently serve as complementary tools rather than stand-alone diagnostic systems, functioning primarily to assist pathologists in pattern recognition and decision support while final diagnoses remain clinician-led. Future research should prioritize methodological standardization, broader data integration, and validation across larger, heterogeneous populations to support the clinical deployment of AI in glioma diagnostics.

    The authors declare the use of generative AI in the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to generative artificial intelligence (GAI) tools under full human supervision: Proofreading and editing. The GAI tool used was ChatGPT-4. Responsibility for the final manuscript lies entirely with the authors. GAI tools are not listed as authors and do not bear responsibility for the final outcomes. All intellectual content, study design, data collection, analysis, and final interpretations are the sole responsibility of the authors.

    The publication of this article was funded by the Weill Cornell Medicine–Qatar Health Sciences Library.

    The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.

    All authors agreed to submission of the manuscript.

    AA is an associate editor of JMIR Nursing at the time of publication. All other authors declare no conflict of interest.

    Edited by Amy Schwartz; submitted 01.Jun.2025; peer-reviewed by Babul Salam Ibrahim, Khaldoon Dhou, Yi Lu; accepted 06.Jan.2026; published 13.Mar.2026.

    © Obada Almaabreh, Rukaya Al-Dafi, Aliya Tabassum, Ahmad Othman, Alaa Abd-alrazaq. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 13.Mar.2026.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

    Continue Reading

  • CFTC Issues Significant Staff Advisory on Prediction Markets – Dentons

    1. CFTC Issues Significant Staff Advisory on Prediction Markets  Dentons
    2. Letter From America #54  iGamingFuture
    3. CFTC, SEC Acknowledge Need for Prediction Market Rules, Need Help Figuring Them Out  Gaming America
    4. Do Your Insider Trading Policies Cover The Prediction Markets? Should They? | Proskauer – Regulatory & Compliance  JD Supra
    5. Financial Services Roundup: Market Talk  WSJ

    Continue Reading

  • FDA streamlines its process for approving biosimilar drugs


    FDA streamlines its requirements for biosimilar drug approvals


    US Food and Drug Administration commissioner Marty Makary speaks on Jan. 7, 2025. The FDA on Monday announced new guidelines for evaluating biosimilar drug applications.

    Credit:
    Associated Press

    The US Food and Drug Administration announced new guidelines on March 9 for testing biosimilar products, biologic drugs that are made by someone other than the inventor.

    Whereas a generic-drug manufacturer can make exactly the same active pharmaceutical ingredient as that of an approved drug after an inventor’s patent expires, biological drugs’ size and complexity make exact replication more difficult. Biosimilarity is a regulatory solution that lets generic manufacturers prove their therapies work the same way as approved drugs without requiring a chemically identical product.

    Under the new guidance, the FDA says it is open to considering data from clinical studies of how a biosimilar drug candidate is metabolized, even if the study compares the drug candidate with a product that is not licensed in the US, as long as the foreign and US products are comparable. This follows another change in testing requirements from October 2025, when the agency announced that chemical analysis could replace clinical studies in some parts of a biosimilar application.

    In a press release, the agency says the latest change could cut about $20 million from the cost of developing a biosimilar drug, potentially lowering the end price for consumers. The Association for Accessible Medicine, a trade group for generic drug manufacturers, notes in a statement praising the new guidelines that the vast majority of drugs set to come off patent in the next decade do not have a biosimilar in development.

    The guidance is a draft, and the agency is accepting comments through May 11.

    —Laurel Oldach

    FDA creates a new adverse-events database for all regulated products

    The US Food and Drug Administration has launched a new data platform, the Adverse Event Monitoring System (AEMS), that will combine adverse-event reports for all the products the agency oversees and replace what were previously seven separate databases for vaccines, drugs and biologics, medical devices, cosmetics, human foods and dietary supplements, tobacco and nicotine products, and animal drugs and animal foods.

    Adverse events are typically unanticipated side effects but can include “any undesirable experience” associated with using a product.

    The new unified system, announced in a March 11 press release, is also expected to start publishing reports submitted by manufacturers, health-care workers, and consumers in real time by the end of May. Reports will not be vetted before being published, and the system comes with a disclaimer that they “may contain incomplete, inaccurate, untimely, and/or unverified information.” The FDA processes approximately 6 million adverse-event reports per year; the reports were previously published in quarterly batches.

    One of the databases being replaced by AEMS, the Vaccine Adverse Event Reporting System (VAERS), has been cited in political debates over the safety of COVID-19 and other vaccines. Peter Marks, the former director of the Center for Biologics Evaluation and Research, resigned in April 2025 after refusing to give US Health and Human Services secretary Robert F. Kennedy Jr. access to edit data in VAERS, according to reporting by the Associated Press.

    The new AEMS currently includes reports on vaccines, drugs and biologics, and cosmetics. Data on animal food and animal drug adverse events will be published within the next few weeks, the HHS press office said in an email. And in May, the FDA expects to add reports for medical devices, human foods and dietary supplements, and tobacco products.

    —Delger Erdenesanaa

    25 states ask to side with the EPA in endangerment finding lawsuit

    The attorneys general of Kentucky, West Virginia, and 23 other states filed a motion with a federal court on March 6 to join the lawsuit filed Feb. 18 by environmental, health, and youth climate groups challenging the US Environmental Protection Agency’s final rule to overturn the 2009 climate endangerment finding.

    The group, made up of all but three of the Republican attorneys general in the US, filed a motion in the US Court of Appeals for the District of Columbia Circuit to intervene as respondents in the lawsuit, after the group’s involvement in a prior related case. In April 2024, Republican state attorneys general had asked the DC Circuit to review emission standards for light- and medium-duty trucks that the Joe Biden administration EPA had finalized earlier that year. The group had also submitted statements in support of overturning the endangerment finding before the EPA finalized the rule.

    Emission standards raise the cost of maintaining and running state-owned vehicles, increase road and infrastructure expenses, and deprive states of revenue from fuel taxes, the attorneys general say in their March 6 motion.

    The endangerment finding provides the scientific justification for regulating greenhouse gas emissions under the Clean Air Act. The EPA’s final rule to rescind the finding also includes removing some emission standards for passenger vehicles.

    In the motion to join the case, the attorneys general also brought up a question that could potentially extend beyond this particular lawsuit. “The Final Rule raises fundamental questions about how climate-related matters should be addressed—whether through federal administrative agencies, Congress, the States, or other repositories of power,” the filing says.

    Two other groups have filed lawsuits challenging the EPA’s endangerment finding action. The Zero Emission Transportation Association filed a lawsuit on Feb. 23, and the Metropolitan Congregations United for St. Louis and the Missouri Coalition for the Environment filed separately on Feb. 27. According to court records, all four suits have been combined into one case.

    —Leigh Krietsch Boerner

    Continue Reading

  • Terning The Flight to Innovation with Terns Pharma


    Dr. Yaron Werber is a Managing Director and senior research analyst on TD Cowen’s biotechnology team. In this role, Dr. Werber is responsible for providing analysis on large-, mid-, and small-cap biotechnology stocks. Dr. Werber has 20+ years of experience as a research analyst in the financial services industry and has served as an executive in a public biotechnology company.

    Prior to rejoining TD Cowen, Dr. Werber was a founding team member, chief business and financial officer, treasurer and secretary of Ovid Therapeutics, a biotechnology company focused on developing transformative drugs for orphan disorders of the brain. In this role, Dr. Werber established and was responsible for all financial planning and reporting, business development, strategy, operations/IT and investor and public relations and human resources functionality. Dr. Werber also led negotiations to secure several pipeline compounds including an innovative partnership with Takeda Pharmaceutical Company, a deal that expanded Ovid’s pipeline and pioneered a novel approach for partnering the focused expertise of small biotech with big pharma.

    This deal was chosen by Scrip as a finalist for the 2017 Best Partnership Alliance Award. In addition, Dr. Werber oversaw all financing activities and led a $75 million Series B round in 2015 and Ovid’s $75 million IPO in 2017. In that capacity, Dr. Werber was selected as an “Emerging Pharma Leader” by Pharmaceutical Executive magazine in 2017.

    Prior to joining Ovid, Dr. Werber worked at Citi from 2004 to 2015, where he most recently served as a managing director and head of U.S. healthcare and biotech equity research. During his tenure at Citi, Dr. Werber led a team that conducted in-depth analyses of life science companies at all stages of development, ranging from successful, profitable companies to recently public and privately held companies. Previously, Dr. Werber was a senior biotech analyst and vice president at SG Cowen Securities Corporation from 2001 to 2004.

    Dr. Werber has been awarded several accolades for performance and stock picking; he has been highly ranked by Institutional Investor magazine, has received awards from Starmine, and was voted among the top five analysts in biotech in the Wall Street Journal’s “Best on the Street” Greenwich survey. He has frequently been featured as a guest on CNBC, Fox News, and Bloomberg News and has been quoted in the Wall Street Journal, New York Times, Fortune, Forbes, Bloomberg, thestreet.com, and BioCentury.
    Dr. Werber earned his B.S. in Biology from Tufts University, cum laude, and a combined M.D./MBA degree from Tufts University School of Medicine where he was a Terner Scholar.

    Continue Reading

  • The privacy of things: How New Jersey’s Scrap Tire Act creates a criminal surveillance trap


    In the push toward a circular economy, the physical and digital worlds are colliding with unprecedented legal friction. 

    On 20 Jan., former New Jersey Gov. Phil Murphy signed Assembly Bill A5851, known as the Scrap Tire Act. While the law aims to end illegal dumping, it inadvertently creates a regulatory paradox that could expose New Jersey businesses to criminal liability under the state’s strict privacy statutes.

    The traceability mandate vs. criminal surveillance

    The act mandates that “an electronic manifest system,” otherwise described as a “digital tracking system,” monitor the transfer of every scrap tire from source to destination. For the state’s licensed haulers, compliance necessitates the use of telematics and GPS-enabled manifest software to prove the journey of the tire to the Department of Environmental Protection.

    However, this mandate runs headlong into New Jersey A3950, which passed in 2022 and prohibits employers from using tracking devices in vehicles operated by employees without prior written notice. Most critically, a violation of A3950 is not a mere compliance lapse; it is a fourth-degree crime, punishable by up to 18 months in prison, fines of up to USD10,000 and a permanent criminal record.

    Herein lies the trap: To comply with the environmental transparency under A5851, a hauler must track the asset. But if that tracking captures the real-time movement of the driver without a specialized privacy-first notice and consent framework, the employer has moved from environmental steward to potential felon.

    The PII in the rubber: Beyond the license plate

    Privacy professionals must look deeper at the data lineage. A tire manifest is not just a receipt; it is a record of personally identifiable information. Under the New Jersey Data Privacy Act, information that is “reasonably linkable” to an individual — such as the precise geolocation of a small business owner’s garage or the behavioral patterns of a sole-proprietor hauler — is protected.

    When millions of these data points are aggregated, a high-resolution map of commercial and personal movement is created. If this scrap tire database is not governed with the same rigor as a financial database, it becomes a target for de-anonymization, exposing the data generator’s precise geolocation — a category of sensitive data requiring heightened protections under U.S. National Institute of Standards and Technology and ISO 27701 standards.

    Violations of the Scrap Tire Act are punishable by a civil administrative penalty of USD7,500 for a first offense, USD10,000 for a second offense and USD25,000 for a third and every subsequent offense.

    Solving the paradox with responsible AI

    The solution to this regulatory clash cannot be found in paper manifests. It requires a privacy-enhancing technology approach.

    Anonymized spatial mapping: Utilizing differential privacy to report the flow of tires to the DEP without recording the precise GPS coordinates of the driver at every second.

    Automated redaction via artificial intelligence: Implementing natural language processing-based agents to automatically scrub personally identifiable information from digital manifests before they are uploaded to state databases, ensuring transparency without exposure.

    Governance as a service: Mapping the requirements of the NIST AI Risk Management Framework directly to the Scrap Tire Act. By “mapping” the risks of A3950 during the “govern” phase of the tracking system’s life cycle, firms can mitigate criminal liability before a single tire is moved.
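    The first two approaches above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation of any actual manifest system: the field names (driver_name, lat, notes), the two-decimal coordinate coarsening, and the phone-number pattern are all hypothetical, and the noisy count is the textbook Laplace mechanism for a sensitivity-1 counting query.

    ```python
    import math
    import random
    import re

    def redact_manifest(record: dict) -> dict:
        """Remove direct identifiers and coarsen geolocation in one manifest record.

        Field names here are hypothetical; a real manifest schema would differ.
        """
        redacted = dict(record)
        # Drop direct identifiers entirely rather than trying to mask them.
        for field in ("driver_name", "driver_license", "vehicle_plate"):
            redacted.pop(field, None)
        # Coarsen precise GPS to roughly 1 km cells (two decimal places).
        for coord in ("lat", "lon"):
            if coord in redacted:
                redacted[coord] = round(redacted[coord], 2)
        # Scrub US-style phone numbers from free-text notes.
        if "notes" in redacted:
            redacted["notes"] = re.sub(
                r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[REDACTED]", redacted["notes"]
            )
        return redacted

    def noisy_tire_count(true_count: int, epsilon: float = 1.0) -> float:
        """Report a regional tire count under the Laplace mechanism.

        A counting query has sensitivity 1, so Laplace noise with scale
        1/epsilon yields an epsilon-differentially-private total.
        """
        u = random.random() - 0.5  # uniform on [-0.5, 0.5)
        scale = 1.0 / epsilon
        # Inverse-CDF sample from Laplace(0, scale).
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        return true_count + noise
    ```

    Under this sketch, the state database would receive only redacted manifests and noisy per-region totals, trading a small amount of statistical accuracy for a formal privacy guarantee instead of storing raw driver GPS traces.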

    A blueprint for the future

    The Scrap Tire Act is a harbinger of things to come. As more physical assets — from batteries to electronics — require digital passports, the role of the privacy officer will expand into the physical supply chain.

    In New Jersey, we have a unique opportunity to lead. We must prove that environmental accountability does not require the sacrifice of individual privacy. By implementing privacy by design at the source, we can ensure that the journey of a tire does not end in a courtroom.

    Continue Reading

  • US economy ended 2025 on weaker footing than previously thought – Financial Times

    1. US economy ended 2025 on weaker footing than previously thought  Financial Times
    2. Fourth-quarter GDP revised down to just 0.7% growth; January core inflation was 3.1%  CNBC
    3. US economy grew much more slowly than previously reported in fourth quarter  CNN
    4. Breaking News: U.S. economic growth was slower at the end of 2025 than previously thought, and consumer prices crept higher in January.  facebook.com
    5. Dow Jones Top Markets Headlines at 9 AM ET: GDP Grew at a Tepid 0.7% Pace in the Fourth Quarter. The Future Is Foggy, Too. | Fed’s …  富途牛牛

    Continue Reading

  • Uber and Motional Launch Robotaxi Service in Las Vegas – Uber Investor Relations

    1. Uber and Motional Launch Robotaxi Service in Las Vegas  Uber Investor Relations
    2. Uber’s stock rises as new Amazon robotaxi partnership is ‘a positive surprise’  MarketWatch
    3. Uber adds Motional to its stable of robotaxis.  The Verge
    4. Uber seen as well-positioned in autonomous vehicles following Zoox partnership  Proactive financial news
    5. Uber robotaxi rides are now available for passengers in Las Vegas  Engadget

    Continue Reading