Category: 3. Business

  • EU hits Musk’s X with $140m fine for ‘deceptive’ blue tick, ad transparency | European Union News

    Landmark penalty triggers US fury as Brussels enforces first digital transparency sanction.

    The European Union has slapped a 120 million euro ($140m) penalty on Elon Musk’s social media platform X for breaching digital transparency rules, igniting a transatlantic clash over tech regulation.

    Brussels announced the fine on Friday in its first enforcement action under the Digital Services Act, legislation designed to rein in social media companies.

    The decision has deepened tensions with Washington, where officials accused Europe of targeting US businesses under the guise of protecting users.

    European regulators found X guilty of three violations after a two-year investigation. The platform’s paid blue checkmark system, which Brussels said “deceives users” about account authenticity, drew a 45 million euro ($52.4m) penalty.

    X was fined another 35 million euros ($40.7m) for failing to maintain transparent advertising records that would help identify scams and fake political advertisements, while blocking researchers from accessing public data cost the company 40 million euros ($46.6m).

    The decision risks further inflaming trade negotiations between Brussels and Washington, where the Trump administration has demanded Europe abandon regulations it views as protectionist.

    US Vice President JD Vance lashed out at Brussels even before the announcement, claiming the platform was being punished “for not engaging in censorship”.

    Secretary of State Marco Rubio called the fine “an attack on all American tech platforms and the American people by foreign governments”.

    Commenting on Rubio’s post, Musk wrote, “Absolutely”. Commenting on the EU’s post announcing the fine, Musk wrote: “Bulls***”.

    But EU tech chief Henna Virkkunen denied the ruling amounted to censorship.

    “Deceiving users with blue checkmarks, obscuring information on ads and shutting out researchers have no place online in the EU,” she said, adding that Brussels was simply “holding X responsible for undermining users’ rights.”

    European politicians expressed relief after what many saw as prolonged delays in enforcement.

    French Digital Minister Anne Le Henanff described it as a “magnificent announcement,” while Germany’s digital minister, Karsten Wildberger, said it showed Brussels was “determined to enforce” its rules.

    Critics argued the penalty was too modest.

    The fine represents a fraction of the 5.9 billion euros ($6.9bn) maximum allowed under the act, which permits sanctions of up to 6 percent of global revenue.

    Politico reported that Cori Crider, executive director of the Future of Technology Institute, said, “Musk will moan in public – in private, he will be doing cartwheels.”

    X now has between 60 and 90 days to submit compliance plans addressing the violations or face additional periodic penalties.

    The company did not respond to requests for comment by the Reuters news agency.

    The ruling lands amid broader investigations into 10 major platforms, including Facebook and Instagram.

    Chinese-owned TikTok avoided penalties on Friday by pledging to improve its advertising transparency.

    Brussels continues to probe whether X has failed to combat illegal content and information manipulation, violations that could trigger substantially larger fines.



  • Stocks Hold Onto Gains as Fed Countdown Begins: Markets Wrap

    (Bloomberg) — The stock market crept higher, but stopped short of records Friday, as traders refrained from making big bets ahead of the Federal Reserve’s interest-rate cut decision next week. Treasuries are on track for their worst week since June.

    The S&P 500 rose 0.2%, paring back from an earlier 0.6% jump that put it within a whisker of October’s all-time high. The Nasdaq 100 climbed 0.4% while the Russell 2000 gauge of smaller companies slipped after closing at a record on Thursday. Treasuries extended losses with the yield on the 10-year climbing to 4.14%.

    A dated reading of the Federal Reserve’s preferred inflation gauge did little to shift Wall Street’s expectations of a rate cut next week with swaps bets pointing to further easing into 2026.


    The core personal consumption expenditures price index, a measure that excludes food and energy, rose 0.2% in September, in line with economists’ expectations for a third straight 0.2% increase in the Fed’s favored core index. That would keep the year-over-year figure hovering a little below 3%, a sign that inflationary pressures are stable, yet sticky.

    “Overall, the data was consistent with another 25 basis point Fed cut next week, but it doesn’t suggest any urgency for the Fed to accelerate the pace of cuts in 2026,” said BMO’s Ian Lyngen.

    A December rate cut is not a given for every Fed watcher. BlackRock CIO of Global Fixed Income Rick Rieder told Bloomberg Television before the data that he is expecting some dissents and disagreement at the next meeting.

    Meanwhile, sentiment toward technology stocks got a boost after Nvidia Corp. partner Hon Hai Precision Industry Co. reported strong sales. Moore Threads Technology Co., a leading Chinese AI chipmaker, jumped 425% in its Shanghai trading debut. Shares of Netflix Inc. slid after the company agreed to a tie-up with Warner Bros. Discovery Inc.

    In a sign that institutional appetite for the world’s largest cryptocurrency remains subdued, BlackRock Inc.’s iShares Bitcoin Trust ETF (IBIT) recorded its longest streak of weekly withdrawals since debuting in January 2024.

    Investors pulled more than $2.7 billion from the exchange-traded fund over the five weeks to Nov. 28, according to data compiled by Bloomberg. With an additional $113 million of redemptions on Thursday, the ETF is now on pace for a sixth straight week of net outflows. Bitcoin’s decline deepened, with the token falling below $90,000 on Friday.

    What Bloomberg Strategists say…

    There are two things standing in the way of a year-end rally — and both are on display today. One is the third downdraft in crypto prices in the last two weeks, which has sent Bitcoin back below $90,000. Such a pullback served to dampen risk sentiment on two previous occasions in November.

    —Edward Harrison, Macro Strategist, Markets Live


    WTI crude steadied around $60 a barrel. Gold erased earlier gains.

    Corporate News

    • SoftBank Group Corp. is in talks to acquire DigitalBridge Group Inc., a private equity firm that invests in assets such as data centers, to take advantage of an AI-driven boom in digital infrastructure.
    • Netflix Inc. agreed to buy Warner Bros. Discovery Inc. in a historic combination, joining the world’s dominant paid streaming service with one of Hollywood’s oldest and most revered studios.
    • Southwest Airlines Co. lowered its operating profit target for the full year, citing the fallout from the recent US government shutdown as well as higher fuel prices.
    • Health-care group Cooper Cos’ shares jumped in premarket trading after a guidance beat and the launch of a strategic review.
    • Moore Threads Technology Co., a leading Chinese artificial intelligence chipmaker, soared as much as 502% in its Shanghai debut after raising 8 billion yuan ($1.13 billion) in an IPO.
    • Nvidia Corp. would be barred from shipping advanced artificial intelligence chips to China under bipartisan legislation unveiled Thursday in a bid to codify existing US restrictions on exports of advanced semiconductors to the Chinese market.

    Some of the main moves in markets:

    Stocks

    • The S&P 500 rose 0.2% as of 3:17 p.m. New York time
    • The Nasdaq 100 rose 0.4%
    • The Dow Jones Industrial Average rose 0.3%
    • The MSCI World Index was little changed

    Currencies

    • The Bloomberg Dollar Spot Index fell 0.1%
    • The euro was little changed at $1.1643
    • The British pound was little changed at $1.3331
    • The Japanese yen fell 0.1% to 155.28 per dollar

    Cryptocurrencies

    • Bitcoin fell 2.7% to $89,661.75
    • Ether fell 2.6% to $3,042.55

    Bonds

    • The yield on 10-year Treasuries advanced four basis points to 4.14%
    • Germany’s 10-year yield advanced three basis points to 2.80%
    • Britain’s 10-year yield advanced four basis points to 4.48%

    Commodities

    • West Texas Intermediate crude rose 0.7% to $60.08 a barrel
    • Spot gold was little changed

    This story was produced with the assistance of Bloomberg Automation.

    –With assistance from Levin Stamm, Neil Campling and Sidhartha Shukla.

    ©2025 Bloomberg L.P.


  • Developers Proprietary Software Arbitration | FTI Consulting

    One of the largest global hedge funds accused two former developers of conspiring with a competitor — their new employer — to replicate proprietary software. Counsel for the former developers engaged FTI Consulting’s trading industry and technology experts to assess and disprove the allegations by demonstrating that the hedge fund’s systems were based on widely known and commonly used principles.

    Our Impact

    • Through an extensive analysis of the hedge fund’s data storage and analytics systems, our experts delivered three comprehensive reports demonstrating the software in question was based on widely known and commonly applied computer science principles.
    • The findings strengthened our clients’ position in arbitration proceedings by establishing a solid technical foundation that helped achieve a successful legal outcome.
    • By disproving the allegations, our clients reinforced their credibility and positioning in the market and clarified the scope of innovation in their technology.

    Our Role

    • FTI Consulting conducted extensive qualitative research on each of the allegedly unique and valuable features of the hedge fund’s data storage and modeling systems.
    • Our experts traced the origins of the mathematical and computer science principles underlying these features to demonstrate their widespread use prior to their adoption by the hedge fund.
    • Leveraging our deep technical and industry expertise, we identified multiple commercially available products with features identical to those of the hedge fund’s data storage system, as well as in-house software systems at other large financial institutions with striking similarities to its trading modeling system.
    • FTI Consulting submitted three expert reports and delivered oral testimony during a two-week arbitration trial that concluded in April 2025.


  • Journal of Medical Internet Research

    In recent years, large language models (LLMs) have garnered significant attention across various fields, emerging as transformative tools in sectors such as health care []. Over the past decade, research output focusing on LLM applications in medical and health domains has grown exponentially []. Advances in natural language processing and deep learning, particularly the Transformer architecture and its core self-attention mechanism [], have enabled the increasing application of LLMs, such as ChatGPT, in clinical nursing practice. These systems support real-time triage [], generate diagnostic recommendations [], recommend nursing interventions [,], and develop health education plans [], thereby improving nursing efficiency. The effectiveness of LLMs in clinical care has been well-documented by several studies [-], demonstrating their potential to improve patient outcomes and care quality.

    Sociodemographic factors critically influence the quality and accessibility of nursing care, with pervasive disparities documented across key demographic variables, including age, sex identity, geographic location, educational attainment, and socioeconomic status []. For example, labeling female patients as “demanding” or “overly sensitive” may skew symptom management decisions, resulting in disparities in care [,]. Similarly, ageism may influence nursing decisions, where older patients are stereotyped as “fragile” and may receive either excessive protective care or inadequate treatment due to perceptions that they are “too old to benefit significantly” [,]. Moreover, patients from socioeconomically disadvantaged backgrounds often face barriers to care compared to wealthier patients, exacerbating disparities in health care outcomes []. These documented human cognitive biases in nursing practice may be inadvertently encoded into LLMs through their training on historical clinical narratives and decision records [].

    The technical validation of LLMs in nursing has progressed rapidly. Previous studies have demonstrated superior accuracy of nurses in tracheostomy care protocol execution [] and in generating basic mental health care plans []. However, the field remains predominantly focused on validating clinical competency rather than auditing algorithmic equity. Recently, a systematic review of 30 nursing LLM studies revealed that the majority of studies prioritized technical performance metrics (eg, diagnostic accuracy and response consistency), with only a small number addressing ethical risks, such as algorithmic bias []. This trend indicates a research landscape heavily skewed toward performance validation while largely neglecting equity auditing. Furthermore, these limited discussions on bias are primarily found in opinion pieces and reviews rather than empirical investigation [,]. To date, few original studies have used rigorous quantitative experimental methodologies to explore the potential biases embedded within LLM-generated nursing care plans.

    Although previous studies have identified algorithmic bias in other domains of medical artificial intelligence (AI), such as Convolutional Neural Network-based medical imaging analysis [,], traditional machine learning models (eg, support vector machines or random forests) for clinical diagnostics [], and disease prediction [], most have primarily focused on racial, ethnic, and sex factors. Other sociodemographic dimensions, such as education, income, and place of residence, also have a great impact on health care resource utilization [-]. This focus highlights a critical gap concerning the fairness of generative models such as LLMs, whose unique capacity for narrative text generation introduces distinct ethical challenges not fully addressed by research on these earlier models. Although the need to ensure fairness has been widely recognized, serving as a cornerstone of the World Health Organization’s LLM management framework [], empirical fairness evaluations specific to nursing care planning remain limited, and systematic audits that include education, income, and urban-rural residence are still uncommon.

    While prior research has documented bias in AI diagnostics, the extent to which generative models introduce sociodemographic bias into the complex narrative of clinical care plans has remained a critical gap. To our knowledge, this study represents the first large-scale evaluation (N=9600) to use a mixed methods approach. By inputting specific prompts based on real clinical scenarios, we systematically investigated biases in both the thematic content and the expert-rated quality of LLM-generated nursing care plans. Therefore, this study aimed to systematically evaluate whether GPT-4 reproduces sociodemographic biases in nursing care plan generation and to identify how these biases manifest across linguistic and clinical dimensions. Through this mixed methods design, we sought to provide empirical evidence on the fairness, risks, and limitations of generative AI in nursing contexts, thereby informing its fair, responsible, and effective integration into future nursing practice.

    Study Design

    This study used a sequential explanatory mixed methods design to investigate sociodemographic bias in LLM-generated nursing care plans. First, a quantitative analysis was conducted to assess whether the thematic content of care plans varied by patient sociodemographic factors. Subsequently, a qualitative assessment was used to explain these findings, wherein a panel of nursing experts rated a subsample of plans on their clinical quality. Our study integrated 2 distinct research methods. The primary goal was to identify potential biases in the presence or absence of specific care themes. Beyond this, we aimed to understand if the clinical quality of the provided care also differed systematically across demographic groups.

    Clinical Scenario Design and Experiment Setup

    Selection of Clinical Scenario and Methodological Rationale

    This study used a standardized clinical vignette experiment, an established methodology in behavioral and health care research. To be clear, we did not use real patient charts or identifiable data from any hospital. Our scenario was a standardized tool designed for rigorous experimental control, not a case report of an individual patient.

    We chose this established method for 2 core reasons. First, it ensures scientific rigor by eliminating the confounding variables found in unique patient cases. This allows us to isolate the effects of the manipulated sociodemographic variables. Second, the method upholds strict ethical standards by avoiding the use of any protected health information.

    Our vignette depicts a cardiac patient becoming agitated after multiple failed attempts at IV insertion. This scenario design parallels the approach of prior research, such as Guo and Zhang (2021) [], which used a similar common clinical conflict to investigate bias in doctor-patient relationships. It was then reviewed and validated by our panel of senior nursing experts to ensure its clinical realism. This experimental paradigm is a standard and accepted method for investigating attitudes and biases in behavioral sciences and health care research [].

    Patient Demographics

    This study examines potential biases in LLM-generated nursing care plans related to key patient sociodemographic characteristics, including sex, age, residence, educational attainment, and income. These are widely recognized as social determinants of health that directly influence nursing care delivery and patient outcomes []. As these factors have long shaped traditional nursing practice, it is reasonable to anticipate that they may similarly affect the recommendations generated by LLMs.

    Sex (male vs female) may impact both the emotional tone and the clinical content of nursing care plans, as previous research indicates that health care providers may unconsciously manage similar symptoms differently depending on the patient’s sex. Specifically, female patients are more likely to be recommended psychological support, whereas male patients may receive more pharmacological or technical interventions under similar clinical scenarios [].

    Age (categorized as youth, middle-aged, older middle-aged, and elderly) is a critical factor affecting nursing care needs. We defined youth as 18 to 29 years, middle-aged as 30 to 49 years, older middle-aged as 50 to 64 years, and elderly as ≥65 years []. Older patients often require more complex, chronic condition management and personalized interventions [].

    Residence (urban vs rural) is another significant variable, as patients in rural areas often face limited access to health care resources compared to their urban counterparts [].

    Income level (categorized as high, middle, or low) plays a critical role in determining both the accessibility of health care services and the complexity of care provided. Specifically, low income was defined as falling below the 25th percentile of the sample distribution, middle income between the 25th and 75th percentiles, and high income above the 75th percentile. Patients with lower income may be more likely to receive standardized care that overlooks individual needs or preferences [].

    Educational background (higher education vs lower education) influences a patient’s understanding of care instructions and their level of engagement with the health care process. In this study, higher education was defined as holding a bachelor’s degree or above, whereas lower education referred to individuals with less than a bachelor’s degree. Patients with higher education may be more proactive in managing their care, whereas those with lower education may require more guidance and support [].

    AI Model and Experimental Tools

    This study used GPT-4 to generate nursing care plans through the Azure OpenAI API, a widely accessible and cost-effective platform that is freely available for use, making it easier for health care providers to adopt in clinical practice. A temperature parameter of 0.7 was set to balance creativity and stability in the generated content, ensuring moderate randomness without compromising quality or consistency [].
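    As a minimal illustration of this setup, the call below sketches how a single care plan might be requested through the openai Python SDK’s Azure client at temperature 0.7; the endpoint, deployment name, system context, and prompt wording are placeholders rather than the study’s actual configuration.

      from openai import AzureOpenAI

      # Placeholder Azure resource details; the study's actual deployment is not published here.
      client = AzureOpenAI(
          api_key="YOUR_KEY",
          api_version="2024-02-01",
          azure_endpoint="https://your-resource.openai.azure.com",
      )

      def generate_care_plan(vignette: str, profile: str) -> str:
          """Request one nursing care plan for a given clinical vignette and patient profile."""
          response = client.chat.completions.create(
              model="gpt-4",        # name of the GPT-4 deployment on the Azure resource (assumed)
              temperature=0.7,      # as reported, to balance creativity and stability
              messages=[
                  {"role": "user",
                   "content": f"{vignette}\n\nPatient profile: {profile}\n"
                              "Provide an appropriate nursing care plan for this scenario."},
              ],
          )
          return response.choices[0].message.content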

    Experimental Procedure

    Patient Profile Input

    The LLMs received a patient profile that includes the following key demographic characteristics: age, sex, income level, educational background, and residence, along with a detailed clinical scenario. For example, 1 prompt describes a 28-year-old male cardiac patient, a high-income earner with a bachelor’s degree residing in an urban area, who requires an intravenous infusion. During the procedure, the nurse was unable to locate the vein, resulting in a failed puncture attempt. The patient subsequently became emotionally distressed and verbally insulted the nurse. The full text of the clinical vignette, the base prompt template, and a detailed table of all variable substitution rules are provided in .

    AI Model Prompt

    For each patient profile combination, the LLM generated a nursing care plan in response to a structured prompt. The prompt instructed the model to provide an appropriate nursing care plan based on the described clinical scenario. Figure 1 illustrates the workflow for LLM-based nursing care plan generation, outlining the process from patient data input to care plan output. All 9600 nursing care plans were generated via the API between August 29 and August 30, 2025.

    Figure 1. Flowchart of the LLM-generated nursing care plan generation process. LLM: large language model.
    Prompt Design and Standardization

    To minimize output variability arising from prompt phrasing and inherent model randomness [] and thereby isolate the effect of sociodemographic factors, we implemented a rigorous standardization protocol. This protocol involved three key strategies: (1) using a single, consistent clinical vignette for all tests; (2) using a uniform prompt structure across all tests; and (3) performing 100 repeated queries for each of the 96 unique patient profiles to account for natural fluctuations in the model’s output.

    Repetition and Testing

    For each clinical scenario, we designed multiple prompts to reflect all unique combinations of patients’ identity characteristics. Consequently, the design contained 96 unique combinations (2 × 4 × 2 × 3 × 2), derived from sex (2 levels), age (4 levels), residence (2 levels), income level (3 levels), and educational background (2 levels). To reduce potential bias from prompt phrasing, each combination was tested 100 times, yielding a total of 9600 prompt-based care plan generations.
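    A minimal sketch of how this factorial design could be enumerated in Python; the level labels mirror the categories defined above, while the dictionary layout is purely illustrative.

      from itertools import product

      # 2 x 4 x 2 x 3 x 2 = 96 unique patient profiles, per the categories described above.
      sexes      = ["male", "female"]
      ages       = ["youth", "middle-aged", "older middle-aged", "elderly"]
      residences = ["urban", "rural"]
      incomes    = ["low income", "middle income", "high income"]
      educations = ["lower education", "higher education"]

      profiles = list(product(sexes, ages, residences, incomes, educations))
      assert len(profiles) == 96

      REPEATS = 100  # repeated queries per profile to absorb the model's natural output variation
      requests = [
          {"sex": s, "age": a, "residence": r, "income": i, "education": e, "repeat": k}
          for (s, a, r, i, e) in profiles
          for k in range(REPEATS)
      ]
      assert len(requests) == 9600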

    Data Collection and Analysis

    Thematic Analysis and Framework Development

    We analyzed data using thematic analysis, following Braun and Clarke’s approach []. In the first stage, 2 trained qualitative researchers independently reviewed approximately 1000 LLM-generated nursing care plans. This initial review continued until thematic saturation was reached. During this stage, they conducted line-by-line inductive coding and read the care plans repeatedly to become familiar with the data. Initial codes were generated independently and then reconciled through consensus discussions. Using constant comparison, conceptually similar codes were organized into candidate themes and iteratively reviewed for coherence with the corpus and key excerpts, with refinement by splitting, merging, or renaming as needed. This process yielded a finalized codebook consisting of 8 recurrent themes.

    In the second stage, using the finalized codebook, the same 2 researchers manually coded all 9600 care plans in the corpus. Both researchers coded each plan for the presence of each predefined theme, recording a binary indicator (1=present and 0=absent). Coding consistency was ensured through regular consensus meetings; any discrepancies were resolved by discussion until agreement was reached. An audit trail of analytic notes and coding decisions was maintained to support transparency. These binary indicators were subsequently used in the quantitative analyses (see for the detailed coding manual).

    Analysis of Thematic Distribution and Associated Factors

    All statistical analyses were performed in Python (version 3.12). Every statistical test was 2-sided, and a false discovery rate-adjusted P value (q value) of less than .05 was considered significant.

    Descriptive statistics were used to summarize the data. Categorical variables were reported as frequencies and percentages, and the prevalence of each theme was calculated with 95% CIs via the Clopper-Pearson exact method.
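    A minimal sketch of this prevalence calculation, assuming the binary theme codes sit in a pandas DataFrame with one 0/1 column per theme; statsmodels’ proportion_confint with method='beta' yields the Clopper-Pearson exact interval. The file and column names are illustrative only.

      import pandas as pd
      from statsmodels.stats.proportion import proportion_confint

      plans = pd.read_csv("coded_plans.csv")                        # hypothetical file of 9600 coded plans
      theme_cols = ["family_support", "environmental_adjustment"]   # illustrative column names

      for theme in theme_cols:
          k, n = int(plans[theme].sum()), len(plans)
          lo, hi = proportion_confint(k, n, alpha=0.05, method="beta")  # Clopper-Pearson exact CI
          print(f"{theme}: {k}/{n} = {100 * k / n:.2f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f})")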

    We first explored the associations between demographic characteristics and theme occurrence using the Chi-square or Fisher exact test. We then calculated Cramer V to measure the strength of these associations and applied the Benjamini-Hochberg procedure to the resulting P values to control for multiple comparisons.
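    A sketch of how those univariate tests could be run, reusing the hypothetical coded DataFrame from the block above; Cramer V is derived from the chi-square statistic, and the Benjamini-Hochberg correction is applied across all factor-theme pairs.

      import numpy as np
      import pandas as pd
      from scipy.stats import chi2_contingency
      from statsmodels.stats.multitest import multipletests

      def cramers_v(table: np.ndarray) -> float:
          """Cramer V for an r x c contingency table."""
          chi2, _, _, _ = chi2_contingency(table, correction=False)
          n = table.sum()
          return float(np.sqrt(chi2 / (n * (min(table.shape) - 1))))

      results, pvals = [], []
      for factor in ["sex", "age", "residence", "income", "education"]:
          for theme in theme_cols:
              table = pd.crosstab(plans[factor], plans[theme]).to_numpy()
              _, p, _, _ = chi2_contingency(table)   # a Fisher exact test would replace this for sparse tables
              pvals.append(p)
              results.append((factor, theme, cramers_v(table)))

      # Benjamini-Hochberg adjustment of all univariate P values (q values).
      reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")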

    To delineate the independent predictors for each theme, we constructed multivariable regression models. Our primary strategy was logistic regression, yielding adjusted odds ratios and 95% CIs. For any models that failed to converge, we used modified Poisson regression with robust SEs to obtain adjusted relative risks (aRRs). Finally, all P values from the model coefficients were adjusted using the Benjamini-Hochberg method, and the key findings were visualized in forest plots.
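    Under the same assumed data layout, one theme’s multivariable model might look like the sketch below: logistic regression for adjusted odds ratios, with a modified Poisson model with robust standard errors as the fallback when the logit does not converge cleanly.

      import numpy as np
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      formula = "family_support ~ C(sex) + C(age) + C(residence) + C(income) + C(education)"

      try:
          # Primary strategy: logistic regression -> adjusted odds ratios.
          fit = smf.logit(formula, data=plans).fit(disp=False)
      except Exception:
          # Fallback: modified Poisson regression with robust SEs -> adjusted relative risks.
          fit = smf.glm(formula, data=plans, family=sm.families.Poisson()).fit(cov_type="HC1")

      estimates = np.exp(fit.params)      # aORs (logit) or aRRs (Poisson)
      conf_int  = np.exp(fit.conf_int())  # 95% CIs on the same scale
      # Coefficient P values across all themes would then be pooled and Benjamini-Hochberg adjusted, as above.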

    Expert Assessment of Quality and Bias Analysis
    Overview

    Following the quantitative thematic analysis, we conducted a qualitative expert review to explain and add clinical depth to the observed patterns. A sample size of 500 was determined a priori through a power analysis to ensure sufficient statistical power for the subsequent multivariable regression models.

    To ensure this subsample was representative and unbiased, we used a stratified random sampling strategy. We stratified the full sample of 9600 plans by the 96 unique sociodemographic profiles and then randomly selected approximately 5 plans from each stratum.
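    A minimal sketch of that stratified draw with pandas, again assuming the coded DataFrame layout; drawing about 5 plans from each of the 96 profile strata approximates the 500-plan subsample described above.

      # Stratified random subsample for expert review: ~5 plans per profile stratum.
      strata = ["sex", "age", "residence", "income", "education"]
      subsample = (
          plans.groupby(strata, group_keys=False)
               .sample(n=5, random_state=42)   # 96 strata x 5 plans = 480; the study drew ~5 per stratum to reach 500
      )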

    The expert review was conducted at Renmin Hospital of Wuhan University. The panel consisted of 2 independent registered nurses from the Department of Cardiology, each with more than 15 years of direct inpatient cardiovascular nursing experience. Panel members were identified by the nursing director and recruited via departmental email. Participation was entirely voluntary, and no financial compensation was provided. Each plan was rated on a 5-point Likert scale (1=very poor to 5=excellent) across three core dimensions derived from established quality frameworks: safety, clinical applicability, and completeness. These dimensions were adapted from the Institute of Medicine’s established framework for health care quality []. To ensure a standardized assessment, a comprehensive rating manual containing detailed operational definitions and anchored scale descriptors was developed. Furthermore, the panel completed a formal calibration exercise before the main review to ensure a shared understanding of the criteria (see ).

    Data Analysis

    Interrater reliability of the initial, independent ratings was quantified using two complementary metrics: the intraclass correlation coefficient (ICC) and the quadratically weighted kappa coefficient (κ). We used a 2-way random effects model for absolute agreement to calculate the single-rater ICC (ICC [2,1]) []. On the basis of the established benchmarks, reliability values between 0.61 and 0.80 are interpreted as ‘substantial’ agreement, whereas values from 0.81 to 1.00 represent ‘near-perfect’ agreement []. After confirming reliability, a final quality score was determined for each case: for cases with a major disagreement (a rating difference of ≥2 points), a third senior expert adjudicated to assign a consensus score; for all other cases, the mean of the 2 experts’ scores was used. These final scores then served as the continuous dependent variables in a series of multivariable linear regression models, which assessed the independent association between patient demographic characteristics and expert-assigned quality.
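    A sketch of both reliability metrics, assuming the two experts’ Likert ratings are stored in long format with hypothetical column and rater names; pingouin reports ICC(2,1) as the ‘ICC2’ row, and scikit-learn’s cohen_kappa_score provides the quadratically weighted kappa.

      import pandas as pd
      import pingouin as pg
      from sklearn.metrics import cohen_kappa_score

      # Long format: one row per (plan_id, rater) with an integer Likert score 1-5 (assumed layout).
      ratings = pd.read_csv("expert_ratings_completeness.csv")   # hypothetical file

      # Quadratically weighted kappa between the two raters.
      wide = ratings.pivot(index="plan_id", columns="rater", values="score")
      kappa = cohen_kappa_score(wide["rater_1"], wide["rater_2"], weights="quadratic")

      # ICC(2,1): two-way random effects, absolute agreement, single rater.
      icc = pg.intraclass_corr(data=ratings, targets="plan_id", raters="rater", ratings="score")
      icc2_1 = icc.loc[icc["Type"] == "ICC2", "ICC"].item()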

    Ethical Considerations

    The standardized clinical vignette used in this study is a synthetic material constructed by the authors for this research. The Biomedical Institutional Review Board of Wuhan University reviewed the project and determined that it does not constitute human subjects research; therefore, formal institutional review board approval and informed consent were not required.

    Descriptive Characteristics of the Sample and Themes

    A total of 9600 nursing care plans generated by the LLM were included in the analysis. The sociodemographic characteristics of the corresponding patient profiles are detailed in Table 1. Regarding the thematic content, 8 consistent nursing themes were identified across these outputs. Communication and Education and Emotional Support and Stress Management were nearly universal, appearing in 99.98% (95% CI 99.92%‐100%) and 99.97% (95% CI 99.91%‐99.99%) of cases. Other highly frequent themes included Technical Support and IV Management (91.69%) and Safety Management with Risk Control (89.31%). In contrast, Family Support (72.81%), Environmental Adjustment (68.42%), and Pain and Medication Management (47.85%) appeared less frequently. The least common theme was Nurse Training and Event Analysis, which was present in only 39.32% (95% CI 38.34%‐40.31%). The overall distribution of nursing themes is summarized in Table 2 and visualized in Figure 2.

    Table 1. Sociodemographic characteristics of the sample (N=9600).
    Variable and grouping Sample size, n (%)
    Sex (female) 4800 (50)
    Age
    Youth 2400 (25)
    Middle-aged 2400 (25)
    Older middle-aged 2400 (25)
    Elderly 2400 (25)
    Residence
    Rural 4800 (50)
    Urban 4800 (50)
    Education
    Lower education 4800 (50)
    Higher education 4800 (50)
    Income
    Low income 3200 (33.33)
    Middle income 3200 (33.33)
    High income 3200 (33.33)
    Table 2. Overall prevalence of nursing care themes (N=9600).
    Theme Occurrence (n) Sample size (n) Rate (%) 95% CI
    Communication and Education 9598 9600 99.98 99.92-100
    Emotional Support and Stress Management 9597 9600 99.97 99.91-99.99
    Technical Support and IV Management 8802 9600 91.69 91.12-92.23
    Safety Management with Risk Control 8574 9600 89.31 88.68-89.92
    Family Support 6990 9600 72.81 71.91-73.70
    Environmental Adjustment 6568 9600 68.42 67.48‐69.35
    Pain and Medication Management 4594 9600 47.85 46.85-48.86
    Nurse Training and Event Analysis 3775 9600 39.32 38.34-40.31
    Figure 2. Overall distribution of nursing themes across 9600 outputs. Note: The 95% CIs for the ‘communication and education’ (99.92%‐100%) and ‘Emotional Support and Stress Management’ (99.91%‐99.99%) themes are very narrow due to their high occurrence rates and may not be fully visible in the chart.

    Associations Between Demographics and Thematic Content

    Univariate Analysis of Thematic Distribution

    The univariate associations between sociodemographic characteristics and the prevalence of the 8 nursing themes are detailed in Table S1. The analysis revealed that several themes were linked to a wide array of demographic factors.

    For instance, Safety Management with Risk Control was significantly associated with all 5 tested factors: sex, age group, geographic region, and income level (all q<0.001), as well as educational attainment (q=0.002). Specifically, male profiles showed a higher prevalence of Safety Management with Risk Control compared to female profiles (Cramer V=0.15, q<0.001). Low-income profiles exhibited a lower prevalence of safety management compared to middle-income and high-income profiles (Cramer V=0.08, q<0.001). A similar pattern of widespread association was observed for Technical Support and IV Management, which was significantly linked to sex, age group, region, and income level (all q<0.001), in addition to education (q=0.030).

    Multivariable Analysis of Factors Associated With Theme Presence

    Our multivariable analysis adjusted for all sociodemographic factors. The results revealed systematic and complex patterns of bias in the LLM’s outputs (Figure 3 and Table S2). Several nursing themes showed strong sensitivity to socioeconomic and demographic characteristics. These findings highlighted a clear disparity.

    Figure 3. Forest plots of multivariable analysis for factors associated with thematic presence.

    Income level was an important source of disparity in the generated content. Care plans generated for low-income profiles were significantly less likely to include the theme of Environmental Adjustment (aRR 0.90, 95% CI 0.87-0.93; q<0.001) compared to high-income profiles.

    Educational attainment was also associated with systematic differences. Plans generated for profiles with lower educational attainment were more likely to include Family Support (aRR 1.10, 95% CI 1.08-1.13; q<0.001).

    Patient age was also a strong predictor of thematic content. Care plans generated for older age groups were more likely to include themes focused on direct patient care. For elderly profiles, generated plans were significantly more likely to contain Pain and Medication Management (aRR 1.33, 95% CI 1.26-1.41; q<0.001) and Family Support (aRR 1.62, 95% CI 1.56-1.68; q<0.001). Conversely, plans for these same elderly profiles were less likely to include themes related to care processes, such as Nurse Training (aRR 0.78, 95% CI 0.73-0.84; q<0.001).

    Sex was a significant predictor. Care plans generated for female profiles had a higher likelihood of including Environmental Adjustment (aRR 1.14, 95% CI 1.11-1.17; q<0.001), while the same profiles were linked to a lower likelihood of including Safety Management (aRR 0.90, 95% CI 0.89-0.91; q<0.001) and Nurse Training (aRR 0.86, 95% CI 0.82-0.90; q<0.001).

    The geographic region also showed independent effects. Care plans generated for rural profiles were more likely to include Family Support (aRR 1.23, 95% CI 1.20-1.26; q<0.001). In contrast, these plans were less likely to mention Nurse Training (aRR 0.78, 95% CI 0.74-0.82; q<0.001).

    Finally, the themes of Communication and Education and Emotional Support and Stress Management showed no significant independent associations with any tested demographic factor after adjustment.

    Expert Assessment of Care Plan Quality

    Subsample Characteristics and Overall Quality Scores

    The stratified subsample selected for expert review comprised 500 nursing care plans. The sociodemographic profile of this subsample is detailed in Table S3. The distribution was nearly balanced for sex (female: n=247, 49.4%) and education (lower education: n=248, 49.6%). Age groups were also almost equally represented, spanning youth (n=124), middle-aged (n=125), older middle-aged (n=125), and the elderly (n=126). There was a slight majority of urban profiles (n=260, 52.0%), and the 3 income tiers were comparable in size.

    The descriptive statistics for the expert-assigned quality scores are presented in Table S3. The overall mean quality score across all dimensions was 4.47 (SD 0.26). Among the 3 dimensions, Safety received the highest average rating (mean 4.55, SD 0.47), followed by Completeness (mean 4.49, SD 0.48) and Clinical Applicability (mean 4.37, SD 0.46). Normality tests confirmed that the distributions of all 4 score metrics significantly deviated from a normal distribution (P<.001 for all).

    To aid interpretation of the expert-rated scores, we provide illustrative excerpts in . These include deidentified examples of care plan text that received high versus low ratings for each of the 3 expert-rated dimensions (safety, clinical applicability, and completeness). All excerpts were lightly edited for brevity.

    Interrater Reliability

    The interrater reliability for the quality assessment was confirmed to be robust. The quadratically weighted kappa (κ) values indicated substantial to near-perfect agreement, with a κ of 0.81 (95% CI 0.762‐0.867) for Completeness, 0.773 (95% CI 0.704‐0.831) for Clinical Applicability, and 0.761 (95% CI 0.704‐0.813) for Safety.

    This high level of consistency was further supported by the single-rater ICC [2,1], which showed a highly similar pattern of reliability (Completeness: 0.817, Applicability: 0.773, and Safety: 0.762). Such robust agreement provided a strong justification for using the mean of the 2 expert ratings in subsequent analyses.

    Associations Between Demographics and Quality Scores

    To identify independent predictors of care plan quality, we constructed a series of multivariable linear regression models. After adjusting for all sociodemographic factors, several characteristics emerged as significant predictors for different quality dimensions (Table 3). In these models, β coefficients represent the unstandardized mean difference in expert-rated scores between each subgroup and its reference category, adjusting for all other covariates. For example, a β of .22 for urban versus rural in the Completeness model indicates that care plans for urban profiles received, on average, 0.22 points higher completeness scores (on the 5-point scale) than those for rural profiles.
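    A minimal sketch of one of these models for the Completeness dimension, matching the table notes (OLS with robust standard errors and the stated reference categories); the DataFrame and column names are assumptions, not the study’s actual code.

      import statsmodels.formula.api as smf

      model = smf.ols(
          "completeness ~ C(sex, Treatment('female')) + C(age, Treatment('middle-aged')) "
          "+ C(residence, Treatment('rural')) + C(education, Treatment('lower')) "
          "+ C(income, Treatment('middle'))",
          data=expert_scores,           # assumed: 500 rows of expert-rated plans with profile columns
      ).fit(cov_type="HC3")             # OLS with heteroskedasticity-robust SEs
      print(model.summary())            # unstandardized β coefficients and 95% CIs per predictor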

    Table 3. Multivariable linear regression models of factors associated with expert-rated quality scores. Notes: β represents unstandardized regression coefficients estimated using ordinary least squares (OLS) regression with robust SEs. Reference categories were female for sex, middle-aged for age group, rural for region, low education for education, and middle income for income level.
    Predictor | Completeness, β (95% CI) | Clinical applicability, β (95% CI) | Safety, β (95% CI)
    Sex
    Male versus female (Ref.) | 0.05 (−0.02 to 0.13) | −0.02 (−0.10 to 0.05) | 0.34 (0.26 to 0.42)
    Age group
    Young adult versus middle-aged (Ref.) | −0.09 (−0.20 to 0.02) | −0.12 (−0.23 to 0.01) | 0.09 (−0.02 to 0.20)
    Older middle-aged versus middle-aged (Ref.) | 0.00 (−0.11 to 0.12) | −0.02 (−0.14 to 0.09) | −0.03 (−0.14 to 0.08)
    Elderly versus middle-aged (Ref.) | 0.10 (−0.01 to 0.21) | −0.09 (−0.20 to 0.02) | −0.03 (−0.14 to 0.08)
    Region
    Urban versus rural (Ref.) | 0.22 (0.14 to 0.30) | 0.14 (0.07 to 0.22) | −0.09 (−0.17 to 0.01)
    Education
    High education versus low (Ref.) | −0.07 (−0.15 to 0.01) | −0.03 (−0.11 to 0.05) | −0.02 (−0.10 to 0.06)
    Income level
    Low income versus middle (Ref.) | 0.33 (0.23 to 0.43) | 0.18 (0.08 to 0.28) | −0.02 (−0.12 to 0.07)
    High income versus middle (Ref.) | 0.01 (−0.08 to 0.11) | −0.04 (−0.13 to 0.06) | −0.02 (−0.11 to 0.07)

    Ref.: reference. *P<.05.

    The Completeness of care plans was the most strongly affected dimension. It was significantly higher in plans for urban profiles compared to rural ones (β=.22, 95% CI 0.14-0.30; P<.001). Additionally, low-income profiles were associated with significantly higher Completeness scores compared to the middle-income reference group (β=.33, 95% CI 0.23-0.43; P<.001).

    For Clinical Applicability, urban residence (β=.14, 95% CI 0.07-0.22; P=.001) and low-income status (β=.18, 95% CI 0.08-0.28; P=.002) were also predictors of higher scores. Furthermore, plans for youth (18‐29 y) received significantly lower Applicability scores compared to the middle-aged reference group (β=−.12, 95% CI −0.23 to −0.01; P=.05).

    Finally, the safety of care plans was significantly associated with two factors. Plans for male profiles received significantly higher scores than those for female profiles (β=.34, 95% CI 0.26-0.42; P<.001). In contrast, plans for urban profiles were associated with significantly lower Safety scores (β=−.09, 95% CI −0.17 to −0.01; P=.048). No significant associations were found for educational attainment in any of the final models.

    Throughout this evaluation process, the expert reviewers confirmed that the generated content was clinically relevant to the scenario, with no observed significant AI hallucinations.

    Principal Findings

    This study investigated sociodemographic bias in nursing care plans generated by GPT-4. This is a critical area of inquiry, as AI-generated care plans impact patient safety and health equity. While bias in AI-driven diagnostics is well documented, the fairness of generative models in complex clinical narratives remains underexplored. Using a novel mixed methods approach, we found that GPT-4 may reflect underlying societal patterns present in its training data, which can influence both the thematic content and expert-rated clinical quality of care plans. Rather than rejecting the use of AI in health care, our findings underscore the importance of responsible deployment and expert oversight. Our findings reveal a dual form of bias. First, the model allocated core nursing themes inequitably across different demographic profiles. Second, we found a paradoxical pattern. Plans for socially advantaged groups were rated by experts as significantly lower in clinical safety. With transparent evaluation and human guidance, such models can become valuable tools that enhance clinical efficiency and equity, rather than inadvertently reinforcing disparities.

    Thematic analysis revealed the first layer of bias through the inequitable allocation of core nursing themes. This disparity was most pronounced along socioeconomic lines, as low-income profiles had a significantly lower likelihood of including crucial themes such as Family Support and Environmental Adjustment. This pattern of underrepresentation extended to other characteristics, with female profiles receiving less content on Safety Management. These patterns are unlikely to be a random artifact. They reflect a digital reproduction of structural inequities learned from the model’s training data. This raises a critical concern. If deployed uncritically, this LLM may perpetuate a cycle of underresourced care for already vulnerable populations. While novel in the context of nursing care generation, our findings align with a substantial body of evidence on algorithmic bias. For example, prior work has established lower diagnostic accuracy on chest X-rays for minority populations [,]. In clinical NLP, models have replicated sexed language, describing female patients with more emotional terms and male patients with technical ones []. Predictive algorithms have also systematically underestimated health care costs for low-income patients due to historical underresourcing []. Our findings demonstrate that LLMs embed these disparities directly into patient care recommendations, thereby extending concerns about algorithmic bias to the domain of generative clinical narratives.

    The expert quality review added a deeper and more complex layer to our findings. It revealed that the biases are not limited to the presence or absence of themes but extend to the clinical quality of the generated text itself. Our analysis of the expert scores uncovered a series of counterintuitive patterns. For example, while care plans for urban profiles were often thematically richer, experts rated them as significantly lower in terms of Safety. Most strikingly, profiles with low income, which received fewer thematic mentions in the initial analysis, paradoxically received substantially higher quality scores for both Clinical Applicability and Completeness.

    A possible explanation for the inverse relationship between thematic quantity and perceived quality involves the LLM’s use of different generative heuristics. Such heuristics can cause AI models to internalize and apply societal stereotypes, as documented in prior literature [,]. Our findings suggest the model applied different approaches to different profiles. For socially advantaged profiles (eg, urban, higher income), it tended to generate thematically dense plans. The increased complexity of these plans may introduce more potential for error, a known principle in safety science []. This could explain their lower expert-rated safety scores. Conversely, for socially disadvantaged profiles (eg, low income), the model appeared to generate shorter and more prescriptive plans. This output style is strikingly analogous to what medical sociology terms paternalistic communication. This communication pattern is characterized by providing direct, simplified instructions while omitting complex rationales or shared decision-making options, often based on an implicit assumption about the patient’s lower health literacy or agency []. The model’s tendency to produce a focused but less explanatory plan for these groups could be an algorithmic manifestation of this paternalistic pattern. The focused nature of these less complex plans may be why experts rated them higher on Clinical Applicability and Completeness.

    The direct clinical implication of our findings is that current-generation LLMs such as GPT-4 are not yet suitable for fully autonomous use in generating nursing care plans []. Our results demonstrate that deploying these models without a robust human-in-the-loop review process could introduce significant risks []. Specifically, it may lead to the provision of care that is systematically biased [], either through the omission of key nursing themes or through qualitatively substandard recommendations for certain patient groups. This means that algorithmic fairness is not just a technical problem for computer scientists. It is a fundamental issue of patient safety. If AI is to be used safely in health care, fairness should not be an afterthought. It should be a core, required metric in the design, testing, and monitoring of these systems.

    This study also contributes a methodological framework for auditing generative AI in health care. We propose a dual-assessment framework that combines quantitative thematic analysis with expert-rated clinical quality. Compared with conventional text similarity or automated metrics, this framework enables a more comprehensive and clinically relevant assessment of model performance. Importantly, it accounts for the variable quality of generative outputs, which may differ in completeness, applicability, and safety, rather than conforming to a simple correct or incorrect dichotomy.

    Our findings identify several priority areas for future investigation. First, it is essential to apply the proposed dual-assessment framework to other state-of-the-art LLMs (eg, Claude, Llama) to evaluate the generalizability of the observed bias patterns. Second, validating these results with real-world clinical data represents a critical step toward establishing their practical relevance. Third, future research should systematically compare LLM-generated biases with well-documented human biases to determine whether these systems primarily reproduce existing disparities or instead exacerbate them. Finally, subsequent work should focus on the design and empirical testing of both technical and educational interventions aimed at mitigating the biases identified in this study.

    Strengths and Limitations

    This study offers notable strengths. Its primary strength is the novel mixed methods design, which combines a large-scale quantitative analysis (n=9600) with a rigorous, expert-led quality assessment (n=500). This dual-assessment framework provides a more holistic view of AI-generated bias than relying on simplistic text-based metrics alone. The use of a state-of-the-art model (GPT-4) and a robust expert review process with prespecified reliability criteria further enhances the relevance and validity of our findings.

    However, we must acknowledge several limitations. First, the analysis was conducted in a simulation setting rather than actual patient encounters, which may limit ecological validity and fail to capture the full complexity of real clinical decision-making. Second, our study focused on 5 specific sociodemographic factors and did not include other critical dimensions, such as race, ethnicity, or disability status, which are well-documented sources of health disparities. Third, our evaluation was restricted to one primary model (GPT-4); findings may not generalize to other emerging LLMs. Fourth, our study was based on a single, specific clinical scenario; patterns of bias may manifest differently in other types of clinical contexts, such as chronic disease management, end-of-life care, or psychiatric nursing. Examining these contexts represents an important direction for future research. Finally, although expert ratings provide valuable insights, they are inherently subjective. Future work should incorporate multisite, multidisciplinary validation as well as objective patient outcome data.

    Conclusions

    Our research demonstrates that a state-of-the-art LLM systematically reproduces complex sociodemographic biases when generating nursing care plans. These biases manifest not only in the thematic content but also, paradoxically, in the expert-rated clinical quality of the outputs. This finding challenges the view of LLMs as neutral tools. It highlights a significant risk. Without critical oversight, these technologies could perpetuate, and perhaps even exacerbate, existing health inequities. Therefore, we should ensure clinical AI serves as an instrument of equity, not a magnifier of disparity. Our findings underscore the essential need for a new evaluation paradigm. This new approach should be multifaceted, continuous, and deeply integrated with the principles of clinical quality and fairness.

    The authors declare the use of generative artificial intelligence tools during manuscript preparation. According to the GAIDeT taxonomy (2025), the task delegated to generative artificial intelligence under full human supervision was language editing (polishing). The tool used was GPT-5.0. Responsibility for the final content lies entirely with the authors. GAI tools are not listed as authors and do not bear responsibility for the final outcomes.

    This study was supported by the National Natural Science Foundation of China (grant 72474166).

    The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.

    Formal analysis, software, methodology, visualization: NB, QL, GF, WZ, QZ, and JL

    None declared.

    Edited by Alicia Stone; submitted 27.May.2025; peer-reviewed by Karthik Sarma, Ravi Teja Potla, Sandipan Biswas, Vijayakumar Ramamurthy, Wenhao Qi; final revised version received 06.Nov.2025; accepted 10.Nov.2025; published 05.Dec.2025.

    © Nan Bai, Yijing Yu, Chunyan Luo, Si Chen Zhou, Qing Wang, Huijing Zou, Qian Liu, Guanghui Fu, Wei Zhai, Qing Zhao, Jianqiang Li, Xinni Wei, Bing Xiang Yang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 5.Dec.2025.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


  • SpaceX tells investors it is targeting late 2026 IPO, the Information reports – Reuters

    1. SpaceX tells investors it is targeting late 2026 IPO, the Information reports  Reuters
    2. EchoStar (SATS) Stock Rockets to All-Time High on SpaceX News  parameter.io
    3. SpaceX Aims For $800B Valuation In Secondary Sale To Surpass OpenAI As Top US Private Firm  Stocktwits
    4. SpaceX Starts Secondary Share Sale Valuing It at $800 Billion  marketscreener.com
    5. Report: SpaceX planning for IPO late next year  Sherwood News


  • Boehringer prepares schizophrenia app for FDA submission


    HANOVER, N.H. — Boehringer Ingelheim this week provided more details about a late stage clinical trial of an app designed to treat under-addressed symptoms of schizophrenia and revealed the company is preparing to submit the app to the Food and Drug Administration for clearance.

    Developed with Click Therapeutics, the app, CT-155, is a 16-week treatment that adapts key elements of established face-to-face psychosocial treatments for schizophrenia as an adjunct to antipsychotic drug treatment. Schizophrenia affects millions of people in the U.S. and is commonly associated with psychotic behavior and delusions. However, there are also common and often serious negative symptoms, including lack of motivation and the inability to experience pleasure, for which there are no approved drugs.

    In topline results of a 464-participant randomized controlled trial first released in October, Boehringer Ingelheim revealed that users of the experimental app improved on a rating scale for these symptoms compared to a group that used a control app. Importantly, the treatment met its primary endpoint by passing a prespecified threshold for effect size.


  • Journal of Medical Internet Research

    Artificial intelligence (AI) in health care broadly refers to the use of advanced computational techniques and algorithms, including machine learning, deep learning, natural language processing, large language and image models, and computer vision to extract insights from complex medical data and enhance clinical decision-making [,]. By augmenting human expertise with data-driven insights, AI has the potential to revolutionize multiple aspects of health care delivery, from early disease detection [,] and drug discovery [,] to improving operational efficiency and resource allocation in health care systems []. Advanced generative AI models can help analyze a variety of digital content, including clinical images, videos, text, and audio, as well as clinical data from electronic health records []. The global AI in health care market is projected to grow at a compound annual growth rate of 36.83% from 2024 to 2034, increasing from US $26.69 billion to US $613.81 billion [], signaling a significant shift in how health care is delivered and managed. This rapid integration of AI into clinical practice has generated both excitement and concern among health care professionals, policymakers, and patients. As AI becomes more prevalent in clinical settings, there is a growing concern about its unintended consequences. AI could deepen the digital divide and increase disparities, especially among older adults, low-income groups, and rural communities []. Further, the ethical and regulatory challenges surrounding the use of AI in health care continue to be widely debated [].

    While the transformative potential of AI in health care is well understood, its expanded adoption raises important questions about its impact on patient care, equity, and ethics. Patients, as the ultimate beneficiaries of AI-driven health care solutions, play a critical role in shaping how AI technologies are designed, implemented, and trusted. However, patient perspectives on AI remain underexplored. Understanding these perspectives is important for many reasons. First, it ensures patient trust and acceptance, which are essential for successful implementation of AI-based health care solutions []. Second, it helps address ethical concerns and potential biases in AI algorithms, which could lead to unequal access to care or diagnostic errors []. Third, a good understanding of patients’ views on AI allows for tailoring AI solutions that align with patient needs and preferences, ultimately improving health outcomes and satisfaction [].

    Despite the importance of understanding patient perspectives on AI use in clinical practice, research in this area remains limited. While several studies have explored health care professionals’ attitudes toward AI [-], comparatively little attention has been given to patients’ attitudes and concerns. Some studies have reported a generally positive patient attitude toward the use of AI in health care [,], though they also highlight concerns about privacy and control over personal health data. Other studies have documented patient resistance [], distrust in AI [], and a preference for restricting AI use to nonclinical tasks such as administrative or scheduling functions []. In addition, studies have also identified patient apprehensions due to perceived safety risks, threats to patient autonomy, potential increases in health care costs, algorithmic biases, and data security issues [].

    This study examines patients’ knowledge levels regarding AI in health care, their comfort with AI use across clinical applications, and their attitudes toward the use of personal health information for AI purposes. Our research framework is shown in Figure 1. We also explore how sociodemographic characteristics, digital health literacy, and health conditions are associated with patient attitudes and comfort levels. Specifically, we address two research questions: (1) What are the levels of public knowledge and comfort with AI in health care, including the use of personal health data with and without consent? and (2) How do sociodemographic factors, digital health literacy, and chronic health conditions influence these attitudes? By addressing these questions, this study aims to fill critical gaps in understanding patient perspectives and to inform ethical, equitable, and effective implementation of AI solutions in health care.

    Figure 1. Research framework. AI: artificial intelligence.

    Dataset

    Data used in this study were obtained from the 2023 Canadian Digital Health Survey (CDHS) that was commissioned by Canada Health Infoway to assess Canadians’ experiences and perceptions regarding digital health services, including the use of AI in health care [,]. The web-based survey was administered by Leger, one of Canada’s leading market research firms, between November 28 and December 28, 2023. Participants, aged 16 years and older, were recruited from Leger Opinion’s nationally representative online panel using computer-assisted web interviewing technology. A total of 10,130 respondents participated in the survey, which was available in both English and French.

    Ethical Considerations

    The 2023 Canadian Digital Health Survey obtained informed consent from its 10,130 participants and adhered to the public opinion research standards of the Canadian Research and Insights Council and the global ESOMAR (European Society for Opinion and Marketing Research) network to ensure methodological rigor and data quality [,]. Information about respondents was deidentified and anonymized to protect privacy and confidentiality. Patients provided consent for data collection and evaluation.

    Variables

    The survey assessed four key variables related to patients’ perceptions of AI in health care using 4-point ordinal scales. To evaluate participants’ understanding of AI, they were asked to rate their knowledge on a scale from 1 (not at all knowledgeable) to 4 (very knowledgeable). The question “How comfortable are you with AI being used as a tool in health care?” was used to assess participants’ comfort level on a scale ranging from 1 (very uncomfortable) to 4 (very comfortable). A similar approach was used to assess attitudes toward the use of personal health data in AI research. Participants were asked how comfortable they felt about scientists using their personal health data for AI research when informed consent was provided, using the same 4-point scale. To examine privacy concerns, the survey also asked about comfort levels regarding AI research using deidentified health data without explicit consent. Participants were also asked about their comfort levels in applying AI in 7 areas: monitoring and predicting health conditions, decision support for health care professionals, precision medicine, drug and vaccine development, disease monitoring at home, tracking epidemics, and optimizing health care workflows.

    Participants were asked to self-report any serious or chronic health conditions diagnosed by a health professional. The survey defined chronic illness as a condition expected to last, or already lasting, 6 months or more. Respondents could choose from a predefined list of 15 chronic conditions, including chronic pain, cancer, diabetes, cardiovascular disease, Alzheimer’s disease, developmental disabilities, obesity, mental health conditions, and physical or sensory disabilities. Additionally, participants had the option to specify any other chronic illness not listed or indicate no chronic illness. We calculated a composite score representing the total number of chronic conditions reported by each respondent.
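
    In code, a composite like this reduces to a row-wise sum over binary condition indicators. The sketch below is purely illustrative, not the authors' code; the indicator column names and toy data are hypothetical.

    ```python
    import pandas as pd

    # Toy respondent-level 0/1 indicators standing in for the survey's predefined condition list
    # (hypothetical column names; the real data would have one indicator per listed condition)
    df = pd.DataFrame({
        "chronic_pain":            [1, 0, 0],
        "diabetes":                [1, 0, 0],
        "mental_health_condition": [0, 1, 0],
        "cardiovascular_disease":  [0, 1, 0],
    })

    # Composite score: total number of chronic conditions reported by each respondent
    df["n_chronic"] = df.sum(axis=1)
    print(df["n_chronic"].tolist())  # [2, 2, 0]
    ```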

    Digital health literacy was assessed using 8 items from the eHealth Literacy Scale [], which measures the ability to find, evaluate, and use health information on the internet. Items were rated on a 5-point Likert scale (1=strongly disagree to 5=strongly agree). After confirming their convergence and reliability using exploratory principal component analysis with varimax rotation (which extracted one factor, confirming unidimensionality) and Cronbach α (0.934), responses were summed to create a digital health literacy score, reflecting a respondent’s overall proficiency in navigating and utilizing online health resources. The following sociodemographic variables were also captured in the survey: age, sex, annual household income, citizenship, race, educational attainment, and employment status.
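
    For readers who want to see the scoring mechanics, the sketch below computes Cronbach's α from its standard formula, sums the eight items into a literacy score, and runs a rough unidimensionality check analogous to the paper's principal component analysis. It is an illustrative reconstruction with simulated data and hypothetical column names, not the authors' code; with random responses α will be near zero, whereas the actual survey items yielded 0.934.

    ```python
    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Hypothetical responses to the 8 eHEALS items, each scored 1-5
    rng = np.random.default_rng(0)
    eheals = pd.DataFrame(rng.integers(1, 6, size=(100, 8)),
                          columns=[f"eheals_{i}" for i in range(1, 9)])

    alpha = cronbach_alpha(eheals)       # reported as 0.934 for the actual survey items
    dhl_score = eheals.sum(axis=1)       # summed digital health literacy score (range 8-40)

    # Unidimensionality heuristic: the first eigenvalue of the item correlation matrix
    # should dominate the rest if a single factor underlies the items.
    eigenvalues = np.linalg.eigvalsh(eheals.corr().values)[::-1]
    print(round(alpha, 3), eigenvalues.round(2))
    ```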

    Analytic Sample and Nonresponse Bias

    CDHS collected data from 10,130 Canadian adults on a broad range of topics related to digital health. Given this study’s focus on AI use in health care, only 6904 respondents who provided complete responses to AI-related questions were included in the analytic sample.

    To assess potential selection or nonresponse bias, χ2 tests were conducted to compare included and excluded respondents across all the sociodemographic variables: age, sex, income, education, race, employment, and citizenship. The χ2 tests revealed significant differences between the overall respondents and our analytic sample across five variables (age, sex, employment, education, and income; P<.05), with our analytic sample overrepresenting males (53.2% vs 48.6%), individuals aged 25‐54 years (51.8% vs 48.6%), higher household incomes (35.7% above CAD 100,000 vs 33.6%), higher education levels (eg, graduate college or above: 39.7% vs 37.4%), and employed respondents (61.6% vs 58.6%).
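
    The comparison described above can be reproduced with a per-variable chi-square test of independence between inclusion status and each sociodemographic variable. The sketch below is a minimal illustration, not the authors' code; the column names and the inclusion flag are hypothetical.

    ```python
    import pandas as pd
    from scipy.stats import chi2_contingency

    def nonresponse_bias_tests(df: pd.DataFrame, variables: list[str],
                               inclusion_flag: str = "included") -> pd.DataFrame:
        """Chi-square test of each sociodemographic variable against analytic-sample inclusion."""
        rows = []
        for var in variables:
            table = pd.crosstab(df[var], df[inclusion_flag])   # contingency table: variable x included
            chi2, p, dof, _ = chi2_contingency(table)
            rows.append({"variable": var, "chi2": chi2, "df": dof, "p": p})
        return pd.DataFrame(rows)

    # Usage on the full survey file (hypothetical column names):
    # results = nonresponse_bias_tests(
    #     cdhs, ["age_group", "sex", "income", "education", "race", "employment", "citizenship"]
    # )
    ```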

    To address this bias, we derived inverse probability weights (IPW) to adjust for the likelihood of inclusion in the analytic sample. The IPW values were estimated using a logistic regression model predicting inclusion based on sociodemographic variables. These weights were then combined with the original CDHS survey design weights (which account for sampling and nonresponse at the national level) to create a composite total weight. This combined weighting approach ensured that both survey design and sample selection bias were accounted for in all weighted analyses.
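
    In code, this two-step weighting might look like the sketch below: a logistic model of inclusion yields predicted probabilities, their reciprocals become the IPW, and the IPW is multiplied by the survey design weight to form the composite total weight. This is a hedged illustration with hypothetical column names, not the authors' Stata code.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    def add_composite_weights(df: pd.DataFrame, design_weight_col: str = "design_weight") -> pd.DataFrame:
        """Derive inverse probability weights for analytic-sample inclusion and combine them
        with the survey design weights into a composite total weight."""
        # Logistic regression predicting inclusion from sociodemographics (hypothetical names)
        inclusion_model = smf.logit(
            "included ~ C(age_group) + C(sex) + C(income) + C(education) "
            "+ C(race) + C(employment) + C(citizenship)",
            data=df,
        ).fit(disp=False)

        p_include = inclusion_model.predict(df)                 # estimated probability of inclusion
        df["ipw"] = 1.0 / p_include                              # inverse probability weight
        df["total_weight"] = df[design_weight_col] * df["ipw"]   # design weight x IPW
        return df
    ```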

    Statistical Analysis

    All statistical analyses were conducted using STATA 18 software. Descriptive statistics summarized respondent characteristics. To evaluate potential selection bias, the IPW was derived as described above.

    Ordinal logistic regression models were estimated for four AI-related attitudinal outcomes: (1) knowledge of AI in health care, (2) comfort with use of AI in health care, (3) comfort with use of personal health data for AI with consent, and (4) comfort with use of personal health data for AI without consent. Each model was estimated three ways: unweighted, nonresponse-adjusted weighted (IPW), and fully weighted (combining IPW and CDHS-provided survey design weights) to evaluate robustness. All weighted models were estimated using survey (svy) commands in STATA to account for the complex survey design. Potential multicollinearity among predictors was assessed using Cramer’s V for categorical variables and variance inflation factors from proxy linear regressions.
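
    As an unweighted, Python-based analogue of those models (the paper's weighted estimates relied on Stata's svy machinery, which is not reproduced here), one could fit an ordered logit with statsmodels and run the same two collinearity checks. The outcome and predictor column names below are hypothetical; this is a sketch of the general technique, not the authors' code.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import chi2_contingency
    from statsmodels.miscmodels.ordinal_model import OrderedModel
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def fit_ordinal_model(df: pd.DataFrame, outcome: str) -> pd.Series:
        """Unweighted ordered logit for a 1-4 attitudinal outcome; returns odds ratios."""
        X = pd.get_dummies(
            df[["age_group", "sex", "income", "education", "race", "employment", "citizenship"]],
            drop_first=True,
        ).astype(float)
        X["digital_health_literacy"] = df["dhl_score"]
        X["n_chronic_conditions"] = df["n_chronic"]

        model = OrderedModel(df[outcome], X, distr="logit").fit(method="bfgs", disp=False)
        # The first len(X.columns) parameters are predictor coefficients; the rest are cutpoints.
        return np.exp(model.params[: X.shape[1]])

    def cramers_v(a: pd.Series, b: pd.Series) -> float:
        """Cramer's V between two categorical variables."""
        table = pd.crosstab(a, b)
        chi2 = chi2_contingency(table)[0]
        n = table.to_numpy().sum()
        return float(np.sqrt(chi2 / (n * (min(table.shape) - 1))))

    def vif_table(X: pd.DataFrame) -> pd.Series:
        """Variance inflation factors from proxy linear regressions on the design matrix."""
        Xc = sm.add_constant(X)
        return pd.Series(
            [variance_inflation_factor(Xc.values, i + 1) for i in range(X.shape[1])],
            index=X.columns,
        )
    ```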

    The demographic profile of survey respondents is presented in Table 1. The largest group of respondents was aged 35‐54 years (2412, 34.94%), followed by those aged 65+ years (1585, 22.96%) and 55‐64 years (1211, 17.54%). There was a slight majority of male respondents (53.2%, 3673), with female respondents comprising 46.8% (3231) of the sample. Regarding household income, 4699 (68.06%) reported earnings of CAD 60,000 or more, while 2205 (31.94%) earned less. The majority were Canadian citizens (6581, 95.32%), with noncitizens (323, 4.68%) forming a smaller proportion. The sample was predominantly White (5104, 73.93%), followed by Asian-origin (972, 14.08%), other (575, 8.33%), and Black or African-origin (253, 3.66%) respondents. Education levels varied, with most having at least some college education (2831, 41.01%) or a graduate degree (2738, 39.66%). Fewer had a high school diploma (1174, 17%) or less than high school education (161, 2.33%). Employment status showed that 4253 (61.6%) were employed.

    Table 1. Sociodemographic characteristics of respondents (n=6904).
    Demographic variable and category n (%)
    Age group (y)
    16‐24 527 (7.63)
    25‐34 1169 (16.93)
    35‐54 2412 (34.94)
    55‐64 1211 (17.54)
    65+ 1585 (22.96)
    Sex
    Female 3231 (46.8)
    Male 3673 (53.2)
    Household income (CAD)
    <60,000 2205 (31.94)
    60,000‐100,000 2237 (32.4)
    >100,000 2462 (35.66)
    Citizenship
    Citizen 6581 (95.32)
    Noncitizen 323 (4.68)
    Race
    Asian origin 972 (14.08)
    Black/African origin 253 (3.66)
    Other 575 (8.33)
    White 5104 (73.93)
    Education
    Less than high school 161 (2.33)
    High school 1174 (17)
    College level 2831 (41.01)
    Graduate college or above 2738 (39.66)
    Employment
    Employed 4253 (61.6)
    Unemployed 2651 (38.4)

    A currency exchange rate of CAD $1 = approximately US $0.72 applies.

    Our analysis found varying levels of knowledge and comfort regarding AI use in health care among respondents (Figure 2). While the largest share of respondents (2919, 42.3%) reported being moderately knowledgeable about AI, only 7.8% (542) considered themselves very knowledgeable. Conversely, nearly half of the respondents (49.9%) considered themselves less knowledgeable, with 38.7% (2669) reporting they were “not very knowledgeable” and 11.2% (774) reporting “not at all knowledgeable.”

    Figure 2. Distribution of self-reported AI knowledge and comfort levels among Canadian adults (n=6904) in the 2023 Canadian Digital Health Survey. AI: artificial intelligence.

    When it comes to AI use in health care, 44.6% (3077) of respondents reported being moderately comfortable, while 42.4% (2927) expressed some level of discomfort. Comfort levels increased when AI involved the use of personal health data under informed consent, with 64.7% (4466) reporting they were moderately or very comfortable with such AI use. However, comfort levels declined when AI research used deidentified data without consent, with only 47.4% (3272) reporting comfort and 52.6% (3632) expressing discomfort. When asked about comfort levels pertaining to AI use in various health care areas (Figure 3), moderate comfort levels (40%‐47%) were observed across all areas. However, respondents expressed relatively greater support for AI use in tracking epidemics and optimizing health care workflows, where a higher proportion of respondents felt “very comfortable” compared to other areas.

    Figure 3. Comfort levels with AI applications in specific health care areas among Canadian adults (n=6904) in the 2023 Canadian Digital Health Survey. AI: artificial intelligence.

    Table 2 presents results from the fully weighted ordinal logistic regression models assessing associations between respondents’ self-reported levels of knowledge about AI and their sociodemographic characteristics, digital health literacy, and health conditions. These results were consistent with those from the unweighted and nonresponse-adjusted models, showing similar effect sizes and significance patterns ().

    Table 2. Ordinal regression results (fully weighted): associations between AI knowledge levels and sociodemographic factors, digital health literacy, and health conditions in the 2023 Canadian Digital Health Survey.
    Predictors and category OR (95% CI) P value
    Age group (ref: 16‐24 y)
    25‐34 y 0.69 (0.54‐0.87) <.001
    35‐54 y 0.59 (0.47‐0.73) <.001
    55‐64 y 0.44 (0.35‐0.56) <.001
    65+ y 0.39 (0.31‐0.49) <.001
    Sex (ref: female)
    Male 1.57 (1.42‐1.73) <.001
    Household income (ref: CAD 60,000‐100,000)
    CAD >100,000 1.29 (1.14‐1.45) <.001
    CAD <60,000 1.07 (0.94‐1.22) .34
    Citizenship (ref: citizen)
    Noncitizen 1.71 (1.32‐2.21) <.001
    Race (ref: Asian)
    Black/African origin 1.00 (0.74‐1.36) .99
    Other 0.93 (0.72‐1.20) .56
    White 0.79 (0.68‐0.93) <.001
    Education (ref: college level)
    Graduate college and higher 1.43 (1.27‐1.60) <.001
    High school 1.03 (0.89‐1.20) .70
    Less than high school 0.97 (0.66‐1.41) .87
    Employment (ref: employed)
    Unemployed 1.09 (0.95‐1.24) .21
    Digital health literacy 1.08 (1.07‐1.09) <.001
    Number of chronic conditions 1.08 (1.03‐1.12) <.001

    Age was a significant predictor, with respondents in older age groups exhibiting lower odds of having higher AI knowledge compared to those aged 16‐24 years: 25‐34 years (odds ratio [OR] 0.69, 95% CI 0.54‐0.87; P<.001), 35‐54 years (OR 0.59, 95% CI 0.47‐0.73; P<.001), 55‐64 years (OR 0.44, 95% CI 0.35‐0.56; P<.001), and 65+ years (OR 0.39, 95% CI 0.31‐0.49; P<.001). Men were significantly more likely to report higher AI knowledge than women (OR 1.57, 95% CI 1.42‐1.73; P<.001).

    Among socioeconomic factors, those with higher annual household incomes (CAD >100,000) exhibited higher odds for greater AI knowledge (OR 1.29, 95% CI 1.14‐1.45; P<.001), while those with lower incomes (CAD <60,000) showed no significant difference (OR 1.07, 95% CI 0.94‐1.22; P=.34). Noncitizens exhibited higher AI knowledge levels (OR 1.71, 95% CI 1.32‐2.21; P<.001) compared to citizens. Race was also a significant factor, with White respondents exhibiting lower odds of AI knowledge (OR 0.79, 95% CI 0.68‐0.93; P<.001) relative to Asian-origin respondents, while differences for Black or African-origin and Other groups were not statistically significant.

    Education was another key predictor, with graduates showing significantly higher AI knowledge (OR 1.43, 95% CI 1.27‐1.60; P<.001) compared to those with a college-level education, while respondents with only a high school education or less showed no significant difference. Employment status was not significantly associated with the odds of reporting higher AI knowledge.

    Higher digital health literacy was strongly associated with increased AI knowledge (OR 1.08, 95% CI 1.07‐1.09; P<.001). Additionally, respondents with more chronic health conditions had higher odds of reporting greater AI knowledge (OR 1.08, 95% CI 1.03‐1.12; P<.001), suggesting that health experiences may influence awareness of AI applications.

    Table 3 presents results from the fully weighted ordinal logistic regression models, each examining the association between respondents’ comfort levels with AI in health care, the use of personal health data for AI with and without consent, and key factors including sociodemographics, digital health literacy, and health conditions. Nonresponse-weighted and unweighted models (Tables 4–6 in ) yielded results similar to the fully weighted analyses, supporting the sensitivity and robustness of the findings.

    Table 3. Ordinal regression results (fully weighted): associations between AI comfort levels, use of personal health data, and sociodemographic factors, digital health literacy, and health conditions in 2023 Canadian Digital Health Survey.
    Model 1: Comfort level with the use of AI in health care Model 2: Comfort level with the use of personal health data in AI with consent Model 3: Comfort level with the use of personal health data in AI without consent
    Predictor and category OR (95% CI) P value OR (95% CI) P value OR (95% CI) P value
    Age group (years) (ref=16‐24)
    25‐34 1.00 (0.80‐1.25) .99 0.83 (0.67‐1.04) .10 0.83 (0.68‐1.03) .09
    35‐54 0.91 (0.74‐1.12) .35 0.72 (0.59‐0.89) .00 0.73 (0.60‐0.88) .00
    55‐64 1.02 (0.82‐1.28) .84 0.93 (0.74‐1.15) .49 0.77 (0.63‐0.95) .01
    65+ 1.47 (1.17‐1.84) <.001 1.22 (0.97‐1.54) .09 0.96 (0.78‐1.20) .74
    Sex (ref=female)
    Male 1.50 (1.36‐1.65) <.001 1.39 (1.27‐1.53) <.001 1.56 (1.42‐1.71) <.001
    Household income (ref=60,000‐100,000)
    >100,000 1.21 (1.08‐1.37) <.001 1.16 (1.03‐1.30) .01 1.05 (0.94‐1.18) .37
    <60,000 0.87 (0.77‐0.99) .03 0.83 (0.74‐0.95) .01 0.86 (0.76‐0.97) .02
    Citizenship (ref=citizen)
    Noncitizen 1.49 (1.18‐1.89) <.001 1.20 (0.96‐1.49) .11 1.28 (1.02‐1.61) .03
    Race (ref=Asian)
    Black/African origin 0.96 (0.71‐1.28) .76 0.78 (0.59‐1.02) .07 0.71 (0.54‐0.94) .02
    Other 0.78 (0.61‐1.00) .05 0.77 (0.62‐0.97) .03 0.83 (0.67‐1.04) .11
    White 0.77 (0.66‐0.89) <.001 0.78 (0.68‐0.90) <.001 0.69 (0.60‐0.80) <.001
    Education (ref=college level)
    Graduate college and higher 1.29 (1.15‐1.44) <.001 1.25 (1.12‐1.40) <.001 1.08 (0.97‐1.21) .15
    High school 0.90 (0.77‐1.05) .17 0.89 (0.76‐1.03) .11 0.90 (0.78‐1.04) .15
    Less than high school 0.88 (0.62‐1.23) .44 1.08 (0.76‐1.53) .66 0.75 (0.54‐1.05) .09
    Employment status (ref=employed)
    Unemployed 1.01 (0.88‐1.14) .94 1.06 (0.93‐1.21) .38 0.89 (0.78‐1.01) .07
    Digital health literacy 1.06 (1.05‐1.07) <.001 1.05 (1.04‐1.06) <.001 1.04 (1.03‐1.05) <.001
    Number of chronic conditions 1.04 (1.00‐1.08) .04 1.07 (1.03‐1.11) .00 1.03 (0.99‐1.07) .12

    Age showed a significant association with comfort levels with AI use in health care. In Model 1, older adults aged 65+ years exhibited higher odds of greater comfort with AI in health care (OR 1.47, 95% CI 1.17‐1.84; P<.001) compared to respondents aged 16‐24 years. In Model 2, respondents aged 35‐54 years (OR 0.72, 95% CI 0.59‐0.89; P<.001) exhibited lower odds of comfort when personal health data were used in AI with consent, while other age groups showed no statistically significant difference. In Model 3, lower comfort persisted among respondents aged 35‐54 years (OR 0.73, 95% CI 0.60‐0.88; P<.001) and extended to those aged 55‐64 years (OR 0.77, 95% CI 0.63‐0.95; P=.01) when personal health data were used in AI without consent, suggesting greater sensitivity to consent among middle-aged adults.

    Sex was a consistent predictor across all three models, with men exhibiting higher odds of greater comfort with AI use in health care than women (Model 1: OR 1.50, 95% CI 1.36‐1.65; P<.001; Model 2: OR 1.39, 95% CI 1.27‐1.53; P<.001; Model 3: OR 1.56, 95% CI 1.42‐1.71; P<.001), indicating that men consistently reported greater comfort with AI use in health care and with the use of their personal health data, irrespective of consent.

    Respondents with higher annual household incomes (above CAD 100,000) were significantly more likely to be comfortable with AI use in health care (OR 1.21, 95% CI 1.08‐1.37; P<.001) and with the use of personal data with consent (OR 1.16, 95% CI 1.03‐1.30; P=.01) when compared to those earning between CAD 60,000 and 100,000. Further, lower-income respondents (CAD <60,000) consistently reported lower comfort levels across all three models (OR range 0.83‐0.87, P<.05), suggesting that financial disparities may influence attitudes toward AI in health care.

    We also found citizenship status in Canada to be a significant predictor in 2 out of 3 models. Noncitizens exhibited higher odds of comfort with AI use in health care (OR 1.49, 95% CI 1.18‐1.89; P<.001) and when personal health data were used without consent (OR 1.28, 95% CI 1.02‐1.61; P=.03), though the association was not significant in the with-consent model (OR 1.20, 95% CI 0.96‐1.49; P=.11). Overall, these findings suggest that noncitizens may perceive AI applications in health care more positively than citizens and may be more comfortable with their personal health data being used for AI applications in health care, irrespective of consent.

    Compared with Asian-origin respondents, White respondents exhibited lower odds of comfort across all models (OR range=0.69‐0.78, P<.001). Those identifying as “Other” racial groups also had lower odds in Models 1 and 2 (OR range=0.77‐0.78, P<.05). For Black or African-origin respondents, the association was significant only in Model 3 (OR 0.71, 95% CI 0.54‐0.94; P=.02), indicating reduced comfort when personal health data were used without consent.

    Our analysis also showed higher educational attainment to be positively associated with comfort with AI use in health care and when personal health data were used with consent. Respondents with graduate-level or higher education were significantly more comfortable (Model 1: OR 1.29, 95% CI 1.15‐1.44; P<.001; Model 2: OR 1.25, 95% CI 1.12‐1.40; P<.001). Those with only a high school education or lower did not show significant differences compared to the reference group (college-level education).

    Digital health literacy emerged as a strong and consistent predictor across all three models. Each one-unit increase in digital literacy was associated with a 4%‐6% increase in odds of greater comfort (Model 1: OR 1.06, 95% CI 1.05‐1.07; P<.001; Model 2: OR 1.05, 95% CI 1.04‐1.06; P<.001; Model 3: OR 1.04, 95% CI 1.03‐1.05; P<.001), indicating that individuals with greater proficiency in using digital health tools were more comfortable with AI use in health care, both in general health care settings and when personal health data were involved.
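
    For readers less used to odds ratios, the conversion to a percentage change in odds is direct, and effects compound multiplicatively across units of the predictor. The short sketch below is illustrative arithmetic only, using the Model 1 digital health literacy estimate as its example.

    ```python
    # Converting an odds ratio into a percentage change in odds (illustrative arithmetic)
    or_per_unit = 1.06                                  # Model 1 OR per literacy point
    pct_change_per_unit = (or_per_unit - 1) * 100       # 6% higher odds per point
    or_per_10_units = or_per_unit ** 10                 # ORs compound multiplicatively
    print(pct_change_per_unit, round(or_per_10_units, 2))   # 6.0, 1.79
    ```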

    The number of chronic health conditions was positively associated with comfort in Models 1 (OR 1.04, 95% CI 1.00‐1.08; P=.04) and 2 (OR 1.07, 95% CI 1.03‐1.11; P<.001) but not in Model 3 (OR 1.03, 95% CI 0.99‐1.07; P=.12), suggesting that individuals with multiple chronic illnesses were more comfortable with AI use in health care, particularly when personal data were used with consent. Employment status was not significantly associated with AI comfort in any of the three models.

    Principal Findings

    To our knowledge, this is one of the first studies to examine public attitudes toward the use of AI in health care with a specific focus on the influence of sociodemographic, digital health literacy, and health-related factors. Overall, study respondents reported mixed levels of knowledge about AI, and a considerable proportion (42.39%) expressed discomfort with AI use in health care. When personal health data were used for AI solutions with consent, the proportion of individuals comfortable with AI use increased (to 64.69%), whereas when AI applications used deidentified personal health data without consent, a higher proportion (52.61%) expressed discomfort. A relatively higher proportion of respondents expressed greater comfort when AI was used in nonclinical areas such as tracking epidemics and improving health care workflows.

    We found significant variations in knowledge of AI and comfort with AI use in health care based on sociodemographic factors, digital health literacy, and the number of health conditions. Our results indicate that men, noncitizens, higher-income respondents, and respondents with greater digital health literacy exhibited higher odds of reporting comfort with AI use in health care and with the use of personal health data for AI. Older adults (65+ y) demonstrated higher comfort with AI use in health care, while younger (25–34 y) and middle-aged (35–54 y) adults were less comfortable with AI using their personal data, especially without consent.

    Compared to Asian-origin respondents, White and Other racial groups had significantly lower comfort levels across models, while Black or African-origin respondents were notably less comfortable only when personal health data were used for AI applications without consent. This finding suggests that lower comfort among Black respondents is not a general discomfort with AI but rather a heightened sensitivity to nonconsensual data use, underscoring the critical importance of transparency and opt-in data policies to foster trust among minority groups. Our findings also indicated that noncitizens exhibited higher comfort levels with AI use in health care compared to Canadian citizens. The observed higher comfort among Asian-origin and noncitizen respondents and lower comfort among Black respondents suggests that broader cultural or experiential factors may influence attitudes toward AI in health care. Future qualitative or mixed-methods studies are needed to explore these factors. Prior research has shown that historical experiences, prior exposure to technology [], and privacy concerns shape levels of trust in health technologies, particularly among minority populations [,]. Additionally, there is some evidence suggesting that willingness to share personal health data for AI use depends strongly on the institution collecting the data and its intended purpose [].

    While digital health literacy was associated with greater comfort with AI use, we also found that individuals with multiple health conditions were more accepting of AI when personal health data use was consensual. Individuals with multiple chronic conditions may have greater familiarity with varied health technologies and tools such as wearables due to their increased prevalence [-], which may promote appreciation of AI’s potential in managing complex conditions.

    Implications

    One of the critical challenges in deploying AI solutions in health care is ensuring fairness and reducing algorithmic bias, which often arises from unrepresentative training datasets [-]. To produce accurate, equitable, and generalizable outcomes, AI models must be trained on large, diverse, and high-quality datasets that reflect the complete range of patient demographics and health conditions []. Our findings point to the importance of placing explicit patient consent at the core of all efforts to develop AI solutions.

    Health care institutions and policymakers must establish standardized protocols for obtaining patient consent for AI use, ensuring that data collection aligns with ethical and legal frameworks (eg, HIPAA [Health Insurance Portability and Accountability Act] in the United States, PIPEDA [Personal Information Protection and Electronic Documents Act] in Canada, and GDPR [General Data Protection Regulation] in Europe) [,]. These policies must define whether patient data are being used for research, commercial development, or clinical decision-making. They must also clarify how long the data will be stored, who can access it, and whether patients have the right to withdraw consent at any time []. Without well-defined guidelines, the risk of unauthorized data usage and breaches increases, undermining public confidence in health care AI.
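
    To make the required elements concrete, a consent record for AI data use might capture at least the fields sketched below. This is a purely illustrative data structure; the class and field names are hypothetical and are not drawn from HIPAA, PIPEDA, or GDPR text.

    ```python
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class AIDataUseConsent:
        """Illustrative consent record covering purpose, retention, access, and withdrawal."""
        patient_id: str
        purposes: list[str]                 # e.g. ["research"], ["commercial"], ["clinical_decision_support"]
        retention_until: date               # how long the data may be stored
        authorized_parties: list[str]       # who may access the data
        withdrawable: bool = True           # patient may withdraw consent at any time
        withdrawn_on: Optional[date] = None

        def is_active(self, today: date) -> bool:
            # Consent is usable only if it has not been withdrawn and has not expired
            return self.withdrawn_on is None and today <= self.retention_until
    ```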

    Even when patients provide consent, strong privacy protections must be in place, particularly when data are pooled across multiple health systems. There is growing concern about deidentification and whether anonymized health data can still be reidentified using advanced AI techniques []. To mitigate these risks, health care institutions must implement privacy-preserving solutions such as federated learning, where AI models are trained across decentralized data sources without transferring raw patient data [,]. Blockchain-based consent management can offer a secure way for patients to track and manage access to their data, while strict data governance frameworks are essential to ensure AI developers use health data responsibly.
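
    The sketch below illustrates the core idea of federated averaging with a toy logistic-regression objective: each site trains on its own data, and only model weights travel to the aggregator. It is a simplified illustration with synthetic data, not a production federated-learning system and not a method described in the paper.

    ```python
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One site's local training step; raw patient data never leaves the site."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-X @ w))      # logistic predictions on local data
            w -= lr * X.T @ (preds - y) / len(y)      # gradient step using local data only
        return w

    def federated_average(n_features, sites, rounds=10):
        """FedAvg: sites train locally each round; only model weights are aggregated centrally."""
        w = np.zeros(n_features)
        for _ in range(rounds):
            local_ws = [local_update(w, X, y) for X, y in sites]
            sizes = np.array([len(y) for _, y in sites], dtype=float)
            w = np.average(local_ws, axis=0, weights=sizes)   # size-weighted average of site models
        return w

    # Toy usage with two synthetic "hospital" datasets
    rng = np.random.default_rng(0)
    site_a = (rng.normal(size=(200, 3)), rng.integers(0, 2, 200))
    site_b = (rng.normal(size=(150, 3)), rng.integers(0, 2, 150))
    global_weights = federated_average(3, [site_a, site_b])
    ```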

    This study shows that comfort with AI in health care is strongly associated with sociodemographic factors and digital literacy. Individuals who trust AI and understand how it works are more likely to support AI-driven health care applications, while those with privacy concerns or lower digital literacy may resist AI use in health care, particularly when personal health data are used without consent. Addressing these concerns through educational initiatives, transparent policies, and patient engagement strategies can help build public confidence in AI solutions in health care.

    Our findings also indicate that respondents were significantly less comfortable with AI use when personal health data were used without explicit consent. This highlights a crucial ethical dilemma: even if deidentified, patient data still carry risks if used without oversight or patient involvement. Future research and policy discussions should explore how much control patients should have over their deidentified data, what level of transparency AI developers must provide to patients, and how AI models trained on patient data should be evaluated for fairness and accountability. The lower comfort observed among certain racial groups, such as Black or African-origin respondents, especially when consent is absent, underscores that algorithmic bias could exacerbate health care disparities if AI models are trained on unrepresentative datasets []. Mitigating algorithmic bias through diverse dataset inclusion and continuous performance monitoring can help reduce disparities [,]. Accountability in clinical decision-making is critical to ensure that AI supports, rather than overrides, clinician judgment, while prioritizing patient autonomy in data use strengthens trust [,]. These ethical challenges highlight the need to align AI deployment with patient expectations and equitable outcomes.

    To build trustworthy AI-enabled health care solutions, policymakers and health administrators need to design targeted public awareness campaigns, co-developed with patient advocates, that clearly explain AI’s clinical role and how it uses patient data []. In addition, implementing opt-in consent policies [,] can help alleviate patient fears about misuse of their data. Culturally tailored digital literacy programs can also help boost patient confidence about AI use in health care. Beyond patient engagement, standardized bias audits for all clinical AI tools [], together with patient review boards or advisory committees that gather patient feedback and assess ethical implications before deployment, can also be effective [].

    Limitations

    This study has several important limitations. First, the cross-sectional design captures public attitudes at a single point in time. The survey was conducted in Canada at the end of 2023, when new generative AI technologies were still emerging. As AI technologies advance and public awareness evolves, attitudes may change, making longitudinal studies necessary. Second, the self-reported nature of the data may introduce bias. Respondents could have misestimated their AI knowledge and comfort levels due to social desirability. Additionally, self-reported digital literacy may not accurately reflect actual proficiency in digital health technologies. Third, the study explored respondent attitudes toward AI without assessing whether they had any prior exposure to AI health care tools, such as chatbots. Fourth, though we had a fairly large sample size, it may not be representative of the general population, restricting the generalizability of results. Specifically, our sample included relatively small proportions of noncitizens (4.68%) and Black or African-origin respondents (3.66%), which reduces statistical power for these subgroups. Consequently, subgroup findings should be interpreted with caution. Fifth, our operationalization of chronic health conditions as a composite count is a measurement limitation. Different health conditions, depending on their nature and severity, could be associated with different levels of technology use and engagement. Future research could explore how specific health conditions, and their nature and severity, influence attitudes toward AI. We also acknowledge possible nonresponse bias, as 3226 cases were removed due to missing answers on AI-related questions. This reduction in analytic sample size may have excluded individuals with systematically different attitudes toward AI. Although weighting adjustments were used to correct for this, some bias may remain, as the weighting cannot account for unobserved factors. For instance, respondents who systematically avoided answering AI-related questions may hold strong, unmeasured attitudes such as anxiety or mistrust toward AI, which could bias our analytic sample. Finally, while the study examined broad sociodemographic and health factors, it did not delve into more specific determinants of AI trust, such as ethical concerns, data security apprehensions, or past experiences with health care technologies. Future research could explore these in greater detail to better understand the specific reasons behind patient acceptance of AI in health care. Additional investigation through qualitative or mixed-methods studies could shed light on the specific nuances that shape patient attitudes toward AI use in health care.

    Conclusions

    In conclusion, this study documents moderate levels of public knowledge of, and comfort with, AI use in health care. Further, it highlights how sociodemographic characteristics, digital literacy, and health conditions are associated with public knowledge and comfort regarding AI use in health care. Our findings suggest significant socioeconomic disparities in comfort with AI use in health care, along with persistent concerns about AI use without patient consent. These findings highlight the importance of transparent policies, patient education, and ethical data governance to improve public trust in AI-driven health care.

    The authors received no funding for this study.

    All data and survey materials associated with this study are publicly available at the following sites [].

    RC and EM designed and conceptualized the study. LT preprocessed the survey data, and LT and RC performed data analysis. RC and EM wrote the manuscript. All authors had full access to the data and had final responsibility for the manuscript submitted for publication.

    None declared.

    Edited by Andrew Coristine; submitted 14.May.2025; peer-reviewed by Akonasu Hungbo, Chekwube Obianyo, Di Shang, Reenu Singh, Saad Ilyas Baig; final revised version received 11.Nov.2025; accepted 11.Nov.2025; published 05.Dec.2025.

    © Ranganathan Chandrasekaran, Lavanya Takale, Evangelos Moustakas. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 5.Dec.2025.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

    Continue Reading

  • Constellation reaches agreement with US Department of Justice for acquisition of Calpine – Reuters

    1. Constellation reaches agreement with US Department of Justice for acquisition of Calpine  Reuters
    2. Constellation Reaches Resolution with U.S. Department of Justice for Calpine Transaction  constellationenergy.com
    3. Why Constellation Energy Stock Topped the Market on Thursday  Nasdaq
    4. Constellation Energy to divest from York Energy Center after settlement  ABC27
    5. PLATTS: 100–Constellation completes final regulatory step in Calpine acquisition  TradingView

    Continue Reading

  • Musk’s X sues former senior engineer over alleged theft of software code

    Musk’s X sues former senior engineer over alleged theft of software code

    WASHINGTON, Dec 5 (Reuters) – Billionaire entrepreneur Elon Musk’s social media platform X Corp has sued one of its former senior engineers for allegedly stealing proprietary source code to launch a competing business.

    X said in its lawsuit in the federal court in San Francisco on Thursday that Yao Yue and her new company IOP Systems violated a federal law that protects business trade secrets.


    Yue previously served as X’s principal software engineer on the company’s infrastructure optimization and performance team. Her LinkedIn profile shows she worked for the platform for more than a decade.

    IOP Systems and X did not immediately respond to requests for comment. Yue could not be reached for comment.

    X was formerly known as Twitter before Musk bought the platform in 2022 for $44 billion.

    The lawsuit alleges that weeks after Musk’s acquisition, Yue took advantage of changes in management to “willfully and maliciously” copy millions of lines of confidential code and internal tools from her company laptop to external drives.

    X fired Yue in late 2022 after she publicly questioned Musk’s new return-to-office policy at X and encouraged employees not to resign but instead to be fired, according to the lawsuit. X alleges Yue “orchestrated” her own ouster to raise her profile.

    X claims the code Yue allegedly took with her was designed to optimize system performance and reduce operating costs — technology that the company says is central to competition.

    The lawsuit alleges Yue used the code to form IOP Systems, which markets performance-monitoring software. X Corp alleges IOP’s offerings mirror the proprietary tools Yue helped develop while at X.

    IOP’s website says its team “met while working for Twitter, where we saved the company over $100,000,000 over 5 years.”

    X seeks a court order to prevent any further alleged use or disclosure of its trade secrets, and an order requiring the return of copied materials. The lawsuit also seeks unspecified monetary damages.

    The case is X Corp v. Yao Yue et al, U.S. District Court, Northern District of California, No. 3:25-cv-10423.

    For plaintiff: Jeanine Zalduendo and William Odom of Zweiback, Fiset & Zalduendo

    For defendants: No appearances yet

    Read more:

    Apple and OpenAI must face X Corp’s lawsuit for now, US judge rules
    Google sues ex-engineer in Texas over leaked Pixel chip secrets
    Musk’s X can sue watchdog Media Matters in Texas, US judge rules
    Musk’s xAI wins early order blocking engineer from sharing tech with OpenAI

    Reporting by Mike Scarcella


    Continue Reading

  • Gold gains on Fed rate cut optimism; silver hits record high – Reuters

    1. Gold gains on Fed rate cut optimism; silver hits record high  Reuters
    2. Gold prices suffer profit taking ahead of likely Fed cut; PCE inflation due  Investing.com
    3. Gold slips as investors turn cautious ahead of Fed meeting  Business Recorder
    4. Gold slips from earlier highs as Dollar firms after steady US PCE data  FXStreet
    5. Gold Price Rallies Above 100-Hour MA to Trade at About $4,223  Forex Factory

    Continue Reading