Category: 3. Business

  • EMPEROR-Preserved and Beyond: New Data and Perspectives Inform HF Treatment

    A series of papers published in JACC: Heart Failure provides new findings from the EMPEROR-Preserved trial, along with several other reports that could inform the management and treatment of patients with heart failure (HF), specifically those with HF with preserved ejection fraction (HFpEF). Additionally, an expert commentary provides insights into subgroups and special populations in HF clinical trials.

    In EMPEROR-Preserved, original research from João Pedro Ferreira, MD; Milton Packer, MD, FACC; and Javed Butler, MBBS, FACC, et al., reveals that patients with HF with mildly reduced ejection fraction (HFmrEF) or HFpEF who had higher serum magnesium levels had a lower risk of primary outcome events when receiving empagliflozin (10 mg daily) compared with those receiving placebo.

    The study randomized nearly 6,000 patients to either empagliflozin or placebo, with lab results available at baseline, weeks 4, 12, 32 and 52 and then every 24 weeks thereafter for a median follow-up of 26 months. The primary outcome was a composite of cardiovascular death or HF hospitalization.

    Overall results found that patients receiving placebo experienced a higher risk of primary outcome events with higher magnesium levels, while empagliflozin was associated with a greater reduction in primary outcome events at higher baseline magnesium levels. Researchers also noted that patients with higher magnesium levels were older, had a lower estimated glomerular filtration rate, and had a higher prevalence of atrial fibrillation, while patients with lower serum magnesium more often had diabetes and more frequently used thiazide-type diuretic agents.

    According to the authors: “The role of magnesium in HFmrEF/HFpEF should be investigated further, particularly how the use of SGLT2 inhibitors may increase magnesium levels and how this influences the function of cardiomyocytes, endothelial cells, and autophagic flux.”

    In a related editorial comment, Wendy McCallum, MD, et al., highlight that “one of the key messages” from EMPEROR-Preserved is the fact “there was no suggestion of harm with randomization to SGLT2 inhibitor irrespective of baseline magnesium level.” They write that “Ferreira et al., have generated an interesting hypothesis that SGLT2 inhibitors are associated with a differing degree of cardiovascular benefit depending on the baseline magnesium level” and suggest that “future studies to evaluate the role of magnesium and change in magnesium in relation to SGLT2 inhibitor treatment among patients with HFrEF and with HFmrEF/HFpEF are needed.”

    Three separate “brief reports” explored other areas of HFpEF management. In one, authors Yu Kang, MD, et al., assessed whether left ventricular (LV) end-diastolic dimension (LVEDD) is a prognostic metric in HFpEF. Their findings suggest yes, with results showing a “U-shaped relationship with mortality” at one year postdischarge. According to the authors, patients with LVEDD outside the 45 to 59 mm range had increased risks of death and distinct clinical characteristics. “This concept of phenotyping HFpEF by LV remodeling could be considered in future clinical practice and research,” they write.

    In another report, Silav Zeid, MSc, et al., shared the results of the MyoMobile Trial, which found that an app-based adaptive digital coaching intervention increased daily step count in patients with HFpEF compared with standard of care or physical activity tracking alone. The findings “support integrating digital coaching into routine HFpEF care and warrant future studies to explore long-term effectiveness, broader implementation and potential influences on functional capacity and clinical outcomes,” they said.

    In a third brief report, Oluwapeyibomi I. Runsewe, MD, et al., assessed whether use of retinal optical coherence tomography angiography (OCTA) could capture early microvascular changes in HFpEF that are associated with cardio-renal dysfunction. Their results found that retinal OCTA imaging is feasible “in identifying retinal microvascular dysfunction in the form of lower vessel densities (especially for the SCP on 3 × 3 mm angiograms) as a distinct phenotype of HFpEF associated with greater cardiorenal impairment,” and they suggest that “further investigations are warranted to establish the proposed thresholds of retinal OCTA metrics that can noninvasively identify patients with HFpEF and microvascular dysfunction.”

    Outside of HFpEF treatment, “leading edge commentary” from Vanessa Blumer, MD, FACC; Biykem Bozkurt, MD, PhD, FACC; and Marvin A. Konstam, MD, FACC, et al., offers an update from the Heart Failure Collaboratory and the Heart Failure Collaboratory Academic Research Consortium Expert Consensus Panel on “consensus definitions and considerations to better characterize subgroups and special populations, supporting more precise, relevant, and patient-centered trial design.”

    “In the evolving landscape of HF management, the identification and analysis of subgroups and special populations within clinical trials are crucial for enhancing clinical decision-making, guiding further research, and understanding heterogeneity in study outcomes,” the authors write. “…These efforts aim to improve the evaluation of therapeutic interventions, inform regulatory decision-making, and advance personalized care across the spectrum of HF.”

    • Ferreira, J, Packer, M, Butler, J. et al. Serum Magnesium, Outcomes, and the Effect of Empagliflozin in Heart Failure With Mildly Reduced and Preserved Ejection Fraction: Findings From EMPEROR-Preserved. J Am Coll Cardiol HF. Published online Jan. 8, 2026. doi.org/10.1016/j.jchf.2025.102889.
    • Kang, Y, Chen, X, Chen, Y. et al. Phenotyping HFpEF by Using Left Ventricular End-Diastolic Dimension and Its Relationship With Postdischarge 1-Year Mortality. J Am Coll Cardiol HF. Published online Jan. 8, 2026. doi.org/10.1016/j.jchf.2025.102895.
    • Zeid, S, Buch, G, Söhne, J. et al. App-Based Coaching Improves Physical Activity in Patients With HFpEF: Results of the MyoMobile Trial. J Am Coll Cardiol HF. Published online Jan. 8, 2026. doi.org/10.1016/j.jchf.2025.102845.
    • Runsewe, O, Srivastava, S, Province, V. et al. Heart Failure With Preserved Ejection Fraction and Microvascular Disease Using Retinal Optical Coherence Tomography Angiography. J Am Coll Cardiol HF. Published online Jan. 8, 2026. doi.org/10.1016/j.jchf.2025.102846.
    • Blumer, V, Bozkurt, B, Konstam, M. et al. Subgroups and Special Populations in Heart Failure Clinical Trials: Insights From the HFC-ARC Expert Consensus Panel. J Am Coll Cardiol HF. Published online Jan. 8, 2026. doi.org/10.1016/j.jchf.2025.102775.

  • Journal of Medical Internet Research

    Background

    The demand for advanced medical imaging services continues to grow at a rapid pace, and reducing time delay from image capture to radiological reporting remains a priority for public hospital services [,]. Radiologists often need to work through large volumes of information to report on medical images for a wide array of patient groups. This can be challenging in clinical environments with constraints on time, resources, and related workflow pressures. Excessive delay from time of medical image capture to the provision of a definitive radiological report to the referring clinical team can undermine the quality and safety of health care delivery [,].

    Medical imaging has been, and remains, at the forefront of advancements in digital health technologies in everyday clinical practice [,]. Consequently, there is an array of digital tools available to support radiology decision-making that have arisen from advances in machine learning and other related technologies. Artificial intelligence (AI) algorithms that are commercially available, and with regulatory approvals already in place, are now being embedded in digital tools and readily adopted into routine practices for radiologists in various settings internationally [,].

    In experimental and validation studies, AI algorithms show strong technical performance: meta-analyses report pooled sensitivity and specificity values exceeding 0.80-0.85 for tumor metastasis and rib fracture detection, with the mean area under the curve near 0.90 [,]. These results highlight the potential for AI to enhance accuracy and throughput, particularly in resource-constrained health systems [,,]. Yet this technical promise has outpaced the evidence on real-world implementation []. The literature remains dominated by model validation [] and cross-sectional studies of clinician trust [-], with few studies examining how AI systems are adopted, adapted, or sustained within the operational realities of hospital environments. Recent reviews emphasize this gap, noting that most AI-radiology research ends at performance benchmarking and fails to explore workflow integration, organizational readiness, or long-term routinization [,]. Broader governance and workforce analyses likewise underline persistent uncertainty around accountability, medicolegal responsibility, and system-level preparedness for AI-supported care [,]. Collectively, these gaps constrain understanding of how algorithmic potential translates into clinical and organizational value.

    While these quantitative evaluations and meta-analyses have established AI’s diagnostic capability, they provide little insight into how and why such technologies succeed or fail once introduced into routine clinical practice. Many rely on retrospective datasets, simulated environments, or controlled reader studies that remove the influence of real-world complexity [-]. Consequently, they overlook the macro-, meso-, and microlevel dynamics of workflow adaptation, human-technology interaction, and organizational and sociocultural context that determine whether AI enhances or disrupts practice. A qualitative implementation approach is therefore critical for exploring the lived experiences, informal workarounds, and contextual contingencies that shape integration in situ [,]. Such an approach complements quantitative evidence by revealing the social and organizational mechanisms through which AI adoption is negotiated, sustained, or resisted in everyday radiology work and practice.

    Emerging qualitative and mixed methods studies have begun to address aspects of these challenges by exploring radiologists’ perceptions, sources of trust and mistrust, and organizational barriers to adoption [,]. However, most have relied on single-time-point interviews, limited samples (n<20), or hypothetical case vignettes that do not capture the evolving interaction between users, workflows, and technology over time [,,]. Few have been conducted within active service settings or have systematically linked individual experiences to organizational processes or system-level factors [,]. This has resulted in a descriptive but fragmented evidence base that provides limited insight into how implementation unfolds, stabilizes, or falters once AI becomes part of routine care.

    This study responds directly to that gap by presenting a qualitative, end-to-end evaluation of AI implementation within a large tertiary radiology department in Brisbane, Queensland, Australia. Here, end-to-end refers to a lifecycle approach spanning predeployment context and readiness, peri-implementation adaptation, and postimplementation integration and routinization, examining how technological, human, and organizational factors interact over time [,]. There is a broad range of implementation frameworks to assess the implementation of a digital health innovation across a life cycle; however, they are varied in their analytical purpose []. Strategy-based models such as the Expert Recommendations for Implementing Change (ERIC) framework provide detailed lists of discrete implementation actions, but they are limited in explaining the mechanisms through which adoption unfolds in complex clinical environments []. In contrast, our evaluation sought to understand how and why AI integration succeeds or stalls within a dynamic, real-world system. The nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) framework [] was therefore selected to guide data collection and analysis. NASSS offers a theoretically grounded structure for examining the sociotechnical complexity of digital innovation by integrating the domains of technology, adopters, organization, value proposition, and wider context [,]. This systems-oriented lens provides greater explanatory power than more generalized implementation approaches for capturing interdependencies, contextual contingencies, and the temporal evolution of barriers and enablers across the implementation lifecycle.

    By situating implementation within real-world clinical operations rather than experimental or hypothetical conditions, this study provides a rare longitudinal perspective on how AI becomes normalized or resisted within a complex hospital environment. Its findings have direct relevance to current policy efforts to scale AI responsibly in public health systems, where efficiency, safety, and governance imperatives converge [].

    Aims

    Accordingly, this paper aims to examine the real-world implementation of an AI-based clinical decision support tool in radiology through an end-to-end qualitative evaluation across pre- (baseline), peri-, and postimplementation phases. Specifically, it seeks to identify the key contextual, organizational, and human factors shaping adoption and sustainability, to map these influences using the NASSS framework, and to generate insights that inform evidence-based strategies and policy for integrating AI safely and effectively into public hospital imaging services.

    Study Design and Theoretical Framework

    This study used a qualitative prospective design. The study was structured across 3 temporal phases to capture the evolving context of AI implementation within the radiology department.

    The preimplementation or baseline phase (12 months before deployment) corresponded to the period when the AI tool had not yet been introduced. This phase reflected baseline organizational conditions, established workflows, and prevailing attitudes toward digital tools in radiology.

    The peri-implementation phase (an 8-week transition period) covered the initial rollout of the AI system and its integration into existing digital and reporting infrastructure. This period was characterized by early interaction with the tool and short-term adaptation of work processes.

    The postimplementation phase (12 months after deployment) represented a period of stabilization in which the AI tool had become part of routine operations. This phase captured the mature context of use, reflecting how the technology was embedded, maintained, and normalized within everyday practice.

    The reporting of this study was in alignment with the COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist.

    NASSS Framework

    The NASSS framework was used to inform the study design and interview questions. The NASSS framework provides a systematic foundation for examining challenges across multiple domains and their dynamic interactions, which may influence the uptake, implementation, outcomes, and sustainability of technology-supported health programs []. It facilitates consideration of how various factors interact and adapt over time, influencing success, and includes the following domains:

    1. Condition or illness: the nature and complexity of the condition being addressed.
    2. Technology: the specific technology being implemented.
    3. Value attributed to the technology: the perceived benefits and utility of the technology.
    4. Individual adopters: the clinicians and patients using the technology.
    5. Organizational adopters: the health care organizations implementing the technology.
    6. External context: the broader context, including regulatory, economic, and social factors.
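
    As an illustration of how these domains can be operationalized when tagging coded interview data, a minimal sketch follows; the class and field names are hypothetical and are not part of the study's actual tooling.

```python
from dataclasses import dataclass
from enum import Enum

# The six NASSS domains listed above, used here as tags for coded interview data.
class NasssDomain(Enum):
    CONDITION = "Condition or illness"
    TECHNOLOGY = "Technology"
    VALUE_PROPOSITION = "Value attributed to the technology"
    INDIVIDUAL_ADOPTERS = "Individual adopters"
    ORGANIZATIONAL_ADOPTERS = "Organizational adopters"
    EXTERNAL_CONTEXT = "External context"

@dataclass
class CodedDeterminant:
    """One coded barrier or enabler drawn from an interview transcript."""
    phase: str            # "baseline", "peri-implementation", or "postimplementation"
    kind: str             # "barrier" or "enabler"
    description: str      # short descriptive phrase assigned during inductive coding
    domain: NasssDomain   # NASSS domain assigned during deductive mapping

# Hypothetical example: a technology-domain barrier noted during rollout.
example = CodedDeterminant(
    phase="peri-implementation",
    kind="barrier",
    description="Excessive low-value output slows reporting",
    domain=NasssDomain.TECHNOLOGY,
)
```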

    Setting

    This was a single-site study conducted at a large public tertiary referral hospital in Brisbane. The hospital’s Medical Imaging Department offers a comprehensive range of diagnostic imaging services to support patient care across various medical specialties.

    AI Clinical Decision Support System

    The technology adopted by the department is a third-party, commercially available, multiorgan AI-based computerized clinical decision support system (CDSS) for radiologists. The CDSS uses multiple specialized convolutional neural networks across the machine learning cycle, including preprocessing, candidate generation, classification, and postfiltering. It has been classified and approved as a diagnostic tool under the current Australian Therapeutic Goods Administration regulatory framework. The decision support system integrates with existing medical imaging hardware and software so that computed tomography (CT) images are automatically transferred from the scanner, preprocessed, and prepared for interpretation by radiologists. The CDSS flags or highlights any findings within the CT image that require further differential diagnosis by radiologists. In 2021, before full site implementation, the study site tested the tool among a small group of radiologists (n=4), who anecdotally reported positive experiences.
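
    To make the stages named above concrete, the following is a minimal, hypothetical sketch of such a sequential pipeline (preprocessing, candidate generation, classification, and postfiltering). It is illustrative only and does not represent the vendor's actual software or model architecture; every function and value here is a placeholder.

```python
from typing import Any, Dict, List

# Hypothetical stage functions; in a real CDSS each stage would wrap a trained
# convolutional neural network or rule-based component.
def preprocess(ct_volume: Any) -> Any:
    """Normalize and resample the incoming CT series."""
    return ct_volume

def generate_candidates(ct_volume: Any) -> List[Dict[str, Any]]:
    """Propose candidate regions that may contain findings (placeholder output)."""
    return [{"location": (10, 20, 30), "score": 0.4}]

def classify(candidates: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Assign a finding probability to each candidate region (placeholder values)."""
    return [dict(c, probability=0.7) for c in candidates]

def postfilter(candidates: List[Dict[str, Any]], threshold: float = 0.5) -> List[Dict[str, Any]]:
    """Suppress low-probability candidates before anything is flagged to the radiologist."""
    return [c for c in candidates if c["probability"] >= threshold]

def run_pipeline(ct_volume: Any) -> List[Dict[str, Any]]:
    """Chain the stages in the order described in the text."""
    return postfilter(classify(generate_candidates(preprocess(ct_volume))))

# Flagged findings would then be returned to the reporting environment for review.
flags = run_pipeline(ct_volume=None)
```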

    Participant Recruitment and Sampling

    Participants included radiology consultants, registrars, and radiographers employed within the Medical Imaging Department who were involved in chest CT reporting during the study period. Recruitment was undertaken via internal email by an embedded chief investigator. A purposive yet stratified sampling approach was used to achieve broad representation across professional roles and levels of seniority. At the time of data collection, the lead interviewer (SN) was an experienced PhD-trained male implementation science researcher with no supervisory, managerial, or clinical authority over participants. SN had established professional familiarity with the department through earlier collaborative work, but no direct reporting relationships. Participants were informed of SN’s research role, disciplinary background, and interest in understanding real-world implementation challenges before interview commencement. Stratification was guided by the departmental organizational chart to ensure inclusion of participants from different functional areas and reporting responsibilities. This approach sought to capture a range of experiences across the implementation process rather than statistical representativeness. Sampling continued iteratively across the pre- (baseline), peri-, and postimplementation phases until thematic adequacy was reached, indicated by repetition of key concepts and no emergence of new issues in subsequent interviews [].

    Qualitative Interviews

    Semistructured one-on-one interviews, approximately 40 minutes in length, were conducted by an experienced implementation science researcher (SN) in the format preferred by each participant (in person, via Microsoft Teams, or by phone). All interviews were audio-recorded and transcribed with participant consent. Interviews were conducted flexibly, with questions adapted to participant roles and experience. Not all questions or prompts were asked in every interview, but the guide provided a consistent framework to ensure coverage of key domains. The interviewer kept reflexive notes after each interview to document emerging impressions, relational dynamics, and potential influences of their positionality on data generation.

    Study Materials

    We used a reflexive framework method to guide the development of a semistructured interview template, aligning with our study aims [,]. This approach aimed to capture a comprehensive range of insights, perceptions, and experiences, providing a rich dataset for analysis.

    Data Analysis

    Interview transcripts were analyzed using an iterative, multistage process combining thematic analysis with NASSS-informed framework mapping [-]. Before analysis, the research team discussed their disciplinary positions and assumptions regarding AI in radiology, documenting these reflections to support analytic reflexivity. SN led coding with AE providing independent review; neither had clinical authority over participants.

    Analysis and data collection were conducted concurrently to guide purposive sampling and determine saturation. Early transcript review enabled the identification of preliminary codes and gaps, informing subsequent recruitment to ensure variation in experience and role. No participant reviewed the transcripts or findings before publication, consistent with the exploratory and ecological design of the study. However, findings were discussed with senior departmental clinicians and technical leads during routine project meetings. These discussions did not involve revising data or themes but served to ensure that the interpretations accurately reflected the broader organizational context and the realities experienced within the department. This informal sense-checking supported contextual validity while maintaining analytic independence.

    Initial inductive coding was undertaken by one researcher (SN), who read each transcript line by line and assigned short descriptive phrases summarizing perceived barriers, facilitators, or neutral factors related to AI implementation. To strengthen analytic credibility, a second researcher (AE) independently reviewed a subset of transcripts and the draft codebook. Coding discrepancies and interpretive differences were discussed and resolved through consensus, providing a form of cross-checking without imposing rigid interrater reliability metrics. Iterative discussions across the research team further refined code boundaries, ensuring conceptual coherence and maintaining an audit trail of analytic decisions.

    Once inductive coding was complete, higher-order categories were developed to capture recurrent concepts and relationships. These categories were then deductively mapped to the NASSS framework. This process enabled systematic classification of determinants by domain (eg, technology, organization, value proposition, adopters, wider system, and clinical context) while retaining sensitivity to context-specific nuances. The mapping process was iterative, with subthemes revisited and refined to ensure conceptual alignment between inductive insights and NASSS constructs, and to account for determinants that spanned multiple sociotechnical domains. Mapped subthemes were subsequently synthesized into a set of higher-order, cross-cutting determinants representing the dynamic interactions between technological, organizational, and adopter-related factors across implementation phases. This synthesis informed the structure of the results, where inductive findings and NASSS categories are integrated to illustrate how determinants evolved from baseline to peri-implementation and into routine use. Illustrative quotes were selected by consensus to exemplify the range of perspectives within each theme and subtheme. Quote selection focused on demonstrating variation, depth, and temporal evolution rather than providing isolated examples, ensuring that quotations functioned as analytic evidence and supported the integration of inductive insights with NASSS domains. An accompanying summary table provides an at-a-glance depiction of how inductive themes aligned with specific NASSS domains and subdomains, further enhancing analytic transparency and coherence.

    To enhance analytic transparency, a content count of all coded barriers and enablers was compiled in Microsoft Excel and stratified by implementation phase (baseline, peri-implementation, and postimplementation). This numerical summary illustrated the distribution and relative prominence of determinants across NASSS domains (eg, technological, organizational, and adopter-related), providing a structured complement to the qualitative narrative. The count functioned as a descriptive aid to visualize patterns within the dataset, highlight areas of convergence and divergence across phases, and support the organization of complex, multilevel determinants [].
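
    To illustrate what such a phase-stratified content count involves, the sketch below tallies a handful of invented coded determinants by phase and NASSS domain. The study itself compiled its count in Microsoft Excel; all values and labels here are hypothetical and for demonstration only.

```python
from collections import Counter

# Hypothetical coded determinants: (phase, NASSS domain, barrier or enabler).
coded = [
    ("baseline", "organization", "barrier"),
    ("baseline", "technology", "barrier"),
    ("baseline", "value proposition", "enabler"),
    ("peri-implementation", "technology", "barrier"),
    ("peri-implementation", "technology", "barrier"),
    ("postimplementation", "technology", "enabler"),
]

# Tally identical (phase, domain, kind) combinations.
counts = Counter(coded)

# Report the distribution phase by phase, with each count as a share of that phase's total.
for phase in ("baseline", "peri-implementation", "postimplementation"):
    phase_total = sum(n for (p, _, _), n in counts.items() if p == phase)
    print(f"\n{phase} (total coded items: {phase_total})")
    for (p, domain, kind), n in sorted(counts.items()):
        if p == phase:
            print(f"  {domain:<18} {kind:<8} {n} ({100 * n / phase_total:.0f}%)")
```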

    Trustworthiness

    To ensure trustworthiness, the research team engaged in continuous reflexive discussions throughout data collection and analysis, critically examining how their disciplinary backgrounds and assumptions could shape interpretation. Coding decisions were documented in an evolving analytic log, forming an audit trail that supported transparency and replicability. Regular peer debriefings were held to resolve interpretive differences and refine theme definitions. Trustworthiness was further reinforced through the systematic application of the NASSS framework, which provided a theoretically grounded lens for organizing inductive findings. The inclusion of a quantitative content count of coded determinants added descriptive transparency, demonstrating how interpretations were anchored in the underlying data distribution. Together, these strategies strengthened the credibility, confirmability, and dependability of the qualitative findings [,].

    Ethical Considerations

    The Human Research Ethics Committee granted ethical clearance for this research (HREC/2021/QMS/81483). All participants provided written and verbal informed consent before participating in the study. Participation was voluntary, and participants could withdraw at any time. Participants were assured that their responses would be confidential, would not be shared with departmental leadership in identifiable form, and would have no bearing on workplace evaluation or progression. No incentives were offered, and no previous personal relationships existed between the researcher and participants beyond professional familiarity.

    Participant Characteristics

    A total of 43 one-on-one interviews were conducted across the study timeframe, as shown in Table 1. This consisted of 7 (16%) radiographers, 20 (47%) registrar radiologists, and 16 (37%) consultant radiologists. A total of 9 (21%) participants were interviewed across multiple time points, a pattern consistent with public health services experiencing regular staff rotation, shift-based work patterns, and competing clinical pressures. While this posed practical challenges to longitudinal participation, it was also indicative of the practical realities of AI implementation in hospital medical imaging departments. To accommodate this and maintain the integrity of the analysis, each interview was treated as a discrete data point. This allowed us to capture a wider range of perspectives from across the workforce and reflect the dynamic, high-turnover environment typical of public hospital settings.

    Table 1. Participant characteristics across the 3 implementation phases.
    Participants | Baseline | Peri-implementation | Postimplementation
    Total number of participants (N=43), n (%) | 16 (37) | 9 (21) | 18 (42)
    Profession and seniority, n (%)
    Radiographer (N=7) | 4 (57) | 1 (14) | 2 (28)
    Radiology registrar (N=20) | 7 (35) | 4 (20) | 9 (45)
    Consultant radiologist (N=16) | 5 (31) | 4 (25) | 7 (44)
    Sex, n (%)
    Male (N=26) | 9 (35) | 6 (23) | 11 (42)
    Female (N=17) | 7 (41) | 3 (18) | 7 (41)

    NASSS Informed Barriers and Enablers of AI Implementation in Radiology

    A total of 56 barriers and 18 enablers were identified at baseline, 55 barriers and 14 enablers during peri-implementation, and 82 barriers and 33 enablers at postimplementation. Figure 1 presents barriers across NASSS domains and study phases, while Figure 2 shows enablers across the same phases.

    Figure 1. Barriers across nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) domains and study phases.
    Figure 2. Enablers across nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) domains and study phases.

    At baseline, organizational barriers were the most prominent, representing nearly half of all identified barriers (26/56, 46%). These primarily related to limited technological readiness, insufficient training, and inadequate workflow planning for implementation. Technological barriers followed (12/56, 21%), reflecting early concerns about AI performance and output accuracy, while adopter-related barriers (7/56, 12%) centered on uncertainty regarding medicolegal accountability when using AI in reporting. The main enablers at this stage were found within the organizational (6/18, 33%) and value proposition (5/18, 28%) domains, reflecting a collegial, innovation-friendly culture and a belief in the technology’s potential for efficiency and time savings.

    During the peri-implementation phase, technological barriers dominated (31/55, 56%), particularly those concerning interoperability and system performance. These were followed by organizational barriers (14/55, 25%) related to weak implementation planning and inadequate workflow support, and a smaller set of adopter barriers (4/55, 7%) linked to limited trust in the AI system. Despite these issues, several enablers emerged, most notably within the value proposition domain (7/14, 50%), where participants anticipated potential efficiency gains if technical and integration challenges could be addressed. A smaller number of enablers (4/14, 28%) related to technology, as some users began using the AI system to cross-check their own interpretations.

    By postimplementation, technological barriers persisted (41/82, 50%) as problems with accuracy, reliability, and speed remained unresolved. Organizational barriers (18/82, 22%) continued to reflect deficiencies in communication, training, and workflow integration, while adopter barriers (13/82, 16%) indicated ongoing distrust in the AI and reluctance to incorporate it fully into routine practice. However, this phase also saw the most substantial growth in enablers (a total of 33), particularly within technology (22/33, 67%), as users adapted the system for use as a secondary check or safety mechanism. Additional enablers were identified within the value proposition (7/33, 21%), where participants recognized relative efficiency benefits, and among adopters (4/33, 12%) who expressed emerging, albeit cautious, trust in the AI’s evolving role.

    Across all NASSS domains, implementation was characterized by an interplay between anticipated risks, such as workflow integration and information overload, and realized challenges during peri-implementation, many of which persisted into routine use. While optimism and perceived value remained for some, trust and adoption were undermined by ongoing performance and communication barriers. Together, these patterns illustrate how implementation unfolded within a large, dynamic clinical service, with determinants shifting as the AI system moved from anticipation to early use and then attempts to move into routine practice. The relative prominence of technological, organizational, and adopter-related factors at each phase provides a contextual frame for understanding the subsequent themes. These distributions therefore situate the qualitative findings within the broader organizational and technological environment in which the AI was being implemented.

    Framework Analysis and Narrative Synthesis of NASSS Domains and Subdomains

    Table 2 presents a synthesis of inductive themes mapped to the NASSS framework, illustrating how key implementation dynamics evolved across baseline, peri-implementation, and postimplementation phases. The table highlights temporal shifts in organizational readiness, technological integration, value perception, adopter engagement, and wider system influences. These findings are further expanded in the results narrative synthesis, offering deeper insight into the contextual and temporal nuances of implementation.

    Table 2. Mapping of inductive themes to nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) domains and subdomains with indicative change over time.
    NASSS domain | Subdomain | Inductive theme | Change across phases
    Organization | Work needed to plan, implement, and monitor change | Sustained implementation planning | Limited formal planning at baseline; reactive coordination during rollout; structured monitoring and formalized training emerged post implementation.
    Organization | Organizational readiness and capacity to innovate | Relational engagement and communication | Collegial but fragmented culture at baseline; weak interteam communication during rollout; some shared ownership and cross-functional coordination developed post implementation.
    Organization | Extent of change to organizational routines | Workflow optimization | Anticipated efficiency at baseline; workflow disruption and duplication during rollout; individual adaptations post implementation.
    Technology | Knowledge generated | Extraneous data and information overload | Anticipated clutter at baseline became a central frustration during rollout; selective filtering and cognitive habituation emerged post implementation.
    Technology | Material properties | System performance and material integration | Early confidence gave way to concerns about specificity, lag, and reliability during rollout; partial refinement occurred post implementation, though misalignment persisted.
    Value proposition and clinical context | Demand-side value | Perceived benefits for workload and safety | Strong optimism at baseline; mixed experiences during rollout; perceived value became pragmatic and context-dependent post implementation.
    Value proposition and clinical context | Supply-side value | Credibility and clarity of purpose | Limited understanding of purpose and benefit early on; evolved into clearer but modest recognition of niche utility post implementation.
    Adopters | Role and identity | Professional positioning and negotiated use | Curiosity and willingness at baseline; trust declined during rollout due to inconsistency and false positives; cautious, selective engagement stabilized post implementation.
    Adopters | Role and identity | Learning and preparedness | Informal self-learning predominated during rollout; structured and ongoing AIa literacy training emphasized post implementation.
    Wider system | External context | Medicolegal uncertainty and system-level guidance | Policy and liability ambiguity persisted across phases; postimplementation reflections expanded to ethical and regulatory considerations.

    aAI: artificial intelligence.

    Organization

    Three inductive subthemes mapped under the organization domain: challenges with sustained implementation planning, which mapped to the NASSS subdomain of “work needed to plan, implement, and monitor change”; relational engagement and communication, which mapped to “organizational readiness and capacity to innovate”; and workflow optimization, which mapped to “extent of change needed to organizational routines.”

    Work Needed to Plan, Implement, and Monitor Change (Sustained Implementation Planning)

    At baseline, participants expected limited planning and support for rollout, reflecting past experiences with digital systems. As one consultant noted:

    You don’t actually discover issues or problems with that new process or software or whatever until you’re using it, and then often there’s a lack of support on a day-to-day kind of basis.
    [P4, Consultant]

    Feedback mechanisms were also described as weak, with another adding:

    There’s no way for us to feed…I don’t know of a way for me to feed that back.
    [P1, Consultant]

    These comments illustrated low confidence in the organization’s ability to anticipate or respond to implementation challenges.

    During peri-implementation, radiologists described minimal systematic planning or training.

    Not training per se, I think there was one meeting where they said that it was being implemented.
    [P20, Registrar]

    Another reflected:

    Not very well (when asked about implementation planning)…they haven’t really. Besides telling us that we’re going to put it into practice, yeah, there’s just not much that they’re saying about it.
    [P22, Registrar]

    Such experiences made participants cautious about the department’s readiness to adopt AI, with one consultant admitting:

    I think in retrospect, we could have done more in terms of educating people.
    [P21, Consultant]

    Postimplementation reflections reinforced these concerns, highlighting the ongoing absence of structured improvement strategies to support uptake and sustainment. Participants were critical of the ad-hoc implementation process, arguing that:

    If there are programs that are clinically usable and are planned to be rolled out within the department, then I think it makes sense for everyone to have formal training.
    [P33, Registrar]

    Even so, there was a strong appetite for more structured professional development in AI tools, with one consultant remarking:

    I would like to be taught. I am a better learner if I’m taught.
    [P32, Consultant]

    Across phases, participants viewed planning and monitoring as reactive rather than anticipatory. Despite enthusiasm for AI, the lack of systematic preparation and ongoing learning opportunities constrained the department’s ability to embed change effectively. Organizational challenges were compounded by high staff turnover and rotating clinical rosters, which limited continuity of learning and reduced opportunities for cumulative familiarity with the system. These shifting workforce conditions shaped how planning gaps were experienced and helped explain some variation in engagement and confidence across the implementation period.

    Organizational Readiness and Capacity to Innovate (Relational Engagement and Communication)

    At baseline, participants described a workplace that was broadly supportive of new ideas but slow to coordinate change due to limited capacity. As one consultant noted, “in public systems, people just tend to put up with inefficiencies” (P1, Consultant). Such reflections suggested that innovation was encouraged in principle but rarely matched by structured communication or system support.

    During rollout, participants described poor communication and limited coordination between teams.

    Having [vendor redacted] in one corner, radiologists in another, and us talking together… takes resources to get everyone together.
    [P23, Radiographer]

    Another reflected postimplementation, “We didn’t actually tell most radiographers this was happening” (P41, Radiographer), highlighting the persistence of siloed and reactive coordination. Participants attributed emerging resistance partly to this fragmentation, explaining that “a lot of stuff was happening in the background with the PAX guys and the software people” (P43, Consultant).

    Lack of early adopter involvement and unclear lines of responsibility were seen as weakening organizational readiness, even where enthusiasm for innovation remained high. This persisted into postimplementation, undermining the department’s sociotechnical capacity to support AI integration. As one consultant explained:

    One of the issues is that people who understand computers are not the people who understand medicine, and vice versa. So, there’s probably a communications issue.
    [P29, Consultant]

    Participants linked these challenges to the organization’s limited capacity to learn from implementation, with one concluding:

    We could have done more work with the implementation initially; there could have been more clinician involvement.
    [P43, Consultant]

    Across phases, the organization was perceived as open to innovation but constrained by weak communication channels and reactive coordination. Participants emphasized that the capacity to innovate depended less on enthusiasm than on the presence of structured dialogue, feedback loops, and shared ownership across clinical and technical teams. These relational and coordination challenges intersected with changing workload pressures and fluctuating departmental priorities, reinforcing that uptake was influenced not only by communication structures but by the broader organizational environment in which teams were continuously reconfigured.

    Extent of Change Needed to Organizational Routines (Workflow Optimization)

    Across phases, participants described the introduction of the AI tool as requiring substantial adjustments to established reporting routines. At baseline, senior radiologists viewed it as a potential aid to workflow optimization, especially in easing registrar workloads. As one consultant put it:

    If we were able to create a system or facilitate more report completion… particularly for the registrars, that would increase satisfaction.
    [P1, Consultant]

    This optimism reflected expectations that automation would streamline repetitive elements of reporting rather than disrupt them.

    During peri-implementation, participants described the tool as introducing extra steps and interruptions to normal work patterns. One consultant explained:

    It does add to the amount of things you look at… you’ve got to report your CT as normal, and then you’ve got a bunch of other sequences to scroll through at the end. Not that it adds a lot, but yeah, it does… there’s more… people have been like, what’s all these extra images? They don’t really know what to do with it too much yet either.
    [P18, Consultant]

    This illustrates a lack of clear guidance or established routines for how to use or interpret the additional studies. This persisted at postimplementation:

    I was sort of holding on to reports for a few hours before signing off because I didn’t want additional data to come through that I hadn’t looked up before signing off.
    [P30, Consultant]

    Instead of reducing workload, the new process created pauses, re-checks, and deferred sign-offs. Registrars similarly described difficulty maintaining rhythm and concentration, explaining that:

    It’s hard to get a routine…you have to have a different routine in your workflow for a particular assessment.
    [P26, Registrar]

    These comments captured how the tool altered the flow of image review and report finalization, requiring constant recalibration of familiar sequences. Some radiologists had developed compensatory strategies to manage these disruptions; reordering tasks, batching reports, or consciously ignoring low-yield prompts. As one consultant reflected:

    So for differentials, I go back to the normal scan because the tool obscures some details. So, it’s more I use it for identifications…. So, for me, it’s more identifying. Ok, there’s something there. I need to go back and check (normal scans).
    [P31, Consultant]

    Overall, participants described workflow change as cumulative and largely unplanned. The tool demanded continuous microadjustments rather than a one-time shift in practice. What emerged was a pattern of individual adaptation rather than coordinated redesign; clinicians modified existing routines to fit the tool rather than the tool being aligned with the established clinical workflow. The extent of workflow disruption experienced by clinicians also reflected the realities of a service marked by rotating staff, shifting caseloads, and variable daily pressures.

    Technology

    The NASSS technology domain mapped to 2 inductive subthemes: extraneous data and information overload, aligning with the subdomain knowledge generated, and system performance and material integration, aligning with the subdomain material properties. Together, these themes captured how the technical characteristics of the AI tool shaped user experience, trust, and perceived value across implementation phases.

    Knowledge Generated (Extraneous Data and Information Overload)

    At baseline, consultants anticipated risks of information overload based on previous exposure to commercial AI tools. One explained, “You could spend all day circling these things” (P5, Consultant), capturing early concerns that automated outputs might flood readers with marginal or irrelevant findings.

    During peri-implementation, these concerns materialized as the system generated excessive, low-value information:

    Too much data… You really want a traffic-light system.
    [P25, Consultant]

    Another registrar observed:

    I’m not sure how many people look at it. It spits out so many images and random tables
    [P20, Registrar]

    These reactions pointed to an emerging pattern of signal-to-noise imbalance, where radiologists spent more time filtering artefacts than interpreting meaningful results. By postimplementation, some users described partial adaptation, learning to disregard redundant data or mentally triage the AI’s output.

    It gets a little complicated when it picks up things that are artifacts. But yeah, I can work around it.
    [P43, Consultant]

    However, frustration persisted among others who saw the clutter as undermining efficiency rather than enhancing it:

    It’s a waste of time. It’s just clutter, you know? … I usually ignore it.
    [P37, Consultant]

    Across phases, information overload remained one of the most salient barriers to adoption. While individual users developed coping strategies, these adaptations reflected workaround behavior rather than genuine integration, reinforcing perceptions that the AI’s knowledge output was not yet aligned with clinical reasoning or workflow needs.

    Material Properties (System Performance and Material Integration)

    Performance concerns were a defining feature of the AI’s reception, particularly during peri-implementation. A registrar characterized it bluntly as “Not very accurate. Just a splatter approach” (P17, Registrar), reflecting the perception that the system detected excessive findings without adequate specificity. Such errors eroded trust and reduced the incentive to incorporate its output into reporting routines.

    By postimplementation, participants expressed more nuanced but still divided views. Some regarded the system as useful for reassurance or cross-checking:

    Used it more like a check-off — especially when you have things that are complex, and there are a lot of findings.
    [P30, Consultant]

    Others found the persistent false positives distracting and demoralizing. As one put it:

    For me to waste time looking at it…it’s circled this fecal matter in the splenic flexure.
    [P28, Consultant]

    Several participants emphasized that perceived technical performance shaped how often they engaged with the system at all. When lag, sensitivity issues, or interface friction increased, clinicians tended to bypass or ignore the tool. Over time, its role shifted from active decision aid to optional background reference, indicating a decline in both trust and functional value.

    Interoperability problems surfaced most clearly during peri-implementation, where users described limited integration between the AI software, picture archiving and communication system (PACS), and reporting systems. One consultant explained:

    That’s high-level stuff, right? That’s integrating the processing, postprocessing software with the reporting software. But we don’t have that capacity.
    [P21, Consultant]

    They further highlighted redundancy and excess image sets, noting “way too, way too many sequences…we need to distil that down.”

    By postimplementation, interoperability was less salient; some technical issues with integrating the AI into the system appeared resolved, but residual inconsistencies persisted. As one registrar noted:

    There’s not… uniformity to the sequences that are made. The order that they come out, …that’s different from scanner to scanner.
    [P26, Registrar]

    Availability also varied:

    It’s not always there. So, you’ve got to sort of remember…to look for.
    [P26, Registrar]

    Display and PACS constraints continued to affect use:

    I don’t like the way that gets displayed…that’s a PACS system…how the series [are] actually displayed.
    [P26, Registrar]

    Across phases, the material outputs of the AI, its sensitivity, specificity, and responsiveness, directly influenced its perceived usefulness. Participants consistently linked suboptimal performance to disengagement, showing that successful technological integration required not just accuracy, but reliability, responsiveness, and design alignment with radiologists’ expectations of diagnostic precision. Furthermore, while major interoperability barriers had eased, the system never fully aligned with the routine reporting infrastructure, leaving incompatibilities.

    Across all technology-related subdomains, participants described a gap between what the AI produced and what clinicians could use. Information overload and variable system accuracy combined to erode trust and limit engagement. While technical adaptation occurred at the individual level, collective integration into practice remained constrained, signaling that technological refinement and interpretability are prerequisites for sustained adoption.

    Value Proposition and Clinical Context

    Across all phases, discussions of value proposition were less prominent than those of technology or organization, but two inductive subthemes mapped clearly to the NASSS value proposition domain: perceived benefits for clinical workload and safety, which mapped to demand-side value, and credibility and clarity of purpose, which mapped to supply-side value. These intersected closely with the evolving clinical context, in which fluctuating workload pressures and infrastructure challenges shaped how the AI’s value was interpreted.

    Demand-Side Value (Perceived Benefits for Clinical Workload and Safety)

    At baseline, participants viewed the AI as a potential solution to workload strain and reporting delays. Anticipated benefits were framed around efficiency, redistribution of tasks, and registrar support, signaling early optimism that automation would enhance throughput and safety. The department’s intense workload and frequent interruptions reinforced this demand-side appeal: as a consultant noted, AI might “make our job easier” (P1, Consultant).

    During peri-implementation, optimism gave way to more conditional appraisals. While some identified benefits for prioritization, “It highlights a few cases that you can look at first. That’s useful when there’s a backlog” (P19, Registrar), others described it as “unreliable at the moment” (P17, Registrar). Shifts in the clinical environment also tempered expectations as staffing improved and backlogs eased. By the postimplementation phase, perceptions of value became pragmatic and evidence-driven. Clinicians viewed the AI as a limited but occasionally useful decision support tool:

    I’ve usually written my report before I look at this, and I don’t tend to change the report… it’s another look, I wouldn’t think of it as more than that
    [P29, Consultant]

    Concerns over cost-efficiency persisted:

    If it was free, ambivalent… If it’s significant amounts of money… I don’t see the value because it’s more work than less.
    [P29, Consultant]

    Across phases, expectations regarding the AI shifted from broad hopes of efficiency to a more divided assessment. Some saw modest contributions to safety and prioritization, while others viewed the system as duplicating effort rather than providing genuine workload relief.

    Supply-Side Value (Credibility and Clarity of Purpose)

    At the same time, participants reflected on supply-side value, questioning how clearly the system’s purpose and evidence base had been articulated.

    It just gives you pictures with circles. I’m not sure what the end use is meant to be.
    [P20, Registrar]

    By postimplementation, participants had a clearer understanding of what the AI could do but remained unconvinced of its overall value. However, there was also recognition that the AI was credible in concept but still immature in delivery.

    It’s getting clearer now what it could be for, but it needs to evolve. Right now, it’s still just identifying, not interpreting over time.
    [P43, Consultant]

    Some viewed potential uses to optimize efficiency and workflow, with modifications:

    It could identify which studies need to be reported first…or give us measurement readings.
    [P46, Consultant]

    Maybe some of those sorts of irritations around AI could be changed, you know, or fine-tuned.
    [P25, Consultant]

    These reflections indicated that perceptions of supply-side value were prospective, anchored in what the technology could deliver if optimized, rather than what it had yet achieved.

    Across phases, the vendor narrative of innovation and efficiency had not yet translated into tangible or demonstrable benefit for clinicians or the wider health system. Participants viewed the AI as promising but still lacking the evidence and clarity needed to support confident investment or large-scale deployment.

    Adopters

    The NASSS adopter subdomain of role and identity mapped to 2 inductive subthemes: professional positioning and negotiated use, and learning and preparedness. Together, these described how clinicians positioned AI within their expertise and accountability, and how limited exposure and training shaped trust, confidence, and uptake. Across phases, adoption reflected an oscillation between curiosity and skepticism, with trust becoming the key mediating factor.

    Role and Identity (Professional Positioning and Negotiated Use)

    At baseline, radiologists expressed a guarded willingness to engage with AI framed less as enthusiasm and more as a professional obligation.

    I think I would use it…it would be almost negligent not to look at it.
    [P6, Registrar]

    Consultants saw potential for practical support:

    It could certainly help you prioritize what you are watching and what order you report things.
    [P8, Consultant]

    During peri-implementation, practical experience unsettled this cautious trust. Registrars described false positives, excessive outputs, and low sensitivity:

    It’s too junior at the moment.
    [P17, Registrar]

    Trust eroded not from resistance to innovation, but from inconsistency between the AI’s promise and its performance. Clinicians voiced a recurring sentiment that while AI might one day assist safety, it currently distracts from clinical focus:

    If the volume of data presented is overwhelming, then that’s negative…the strength would be as a safety net for subtle findings, not changing an overall clinical picture.
    [P19, Registrar]

    Reflecting on their initial use of the tool during peri-implementation, a registrar noted:

    It was picking up stuff that wasn’t nodules…I still had to go back and look at the images again.
    [P34, Registrar]

    By postimplementation, there was selective use and partial trust.

    I look through the nodules myself first and then correlate with the software to see whether it is congruent with what I’ve come up with.
    [P42, Consultant]

    Others disengaged entirely:

    It slows you down because you have to verify each little dot.
    [P37, Consultant]

    A senior consultant likened the AI to “the registrar with clever ideas, but they’re all wrong” (P40, Consultant), useful for prompting review, yet unreliable without human correction.

    Trust also intersected with medicolegal anxiety. Several raised uncertainties about accountability and liability:

    If the software makes a mistake, who is liable—the vendor or the radiologist? We still haven’t ironed it out.
    [P34, Registrar]

    This uncertainty reinforced their instinct to retain manual control. As a consultant observed:

    If I reported every possible little dot in the chest, I’d end up with a report ten pages long, which nobody would ever read.
    [P37, Consultant]

    The line between cautious trust and defensive practice remained thin.

    Role and Identity (Learning and Preparedness)

    Training and readiness remained persistently underdeveloped. During peri-implementation, there was no structured orientation or clear introduction to the system. Learning was largely self-directed and reliant on peer exchange.

    Personally, I don’t think I’ve had any formal sit-down with it… I’ve just figured it out.
    [P19, Registrar]

    A vendor demonstration was held midphase, but not all clinicians attended, and some felt it was disconnected from practical workflow.

    I just met the software without gathering any prior information about what this new software is.
    [P34, Registrar]

    Without clear instruction or transparency about performance parameters, early experiences became a process of trial and error rather than guided adoption, reinforcing skepticism instead of trust. By postimplementation, clinicians explicitly called for structured and continuous AI education embedded within clinical and professional frameworks.

    If that is incorporated into our routine…every month we have our session doing AI cases.
    [P32, Consultant]

    Others stressed the need for broader institutional responsibility:

    We are severely lacking in training with AI…it should be an integral, assessed part of our training program.
    [P34, Registrar]

    These calls reflected not only a desire for technical competence but also a wish to rebuild confidence and ensure medicolegal clarity, positioning AI as a tool that must be professionally standardized, not individually improvised.

    Ultimately, clinicians saw AI competence as a new layer of professional literacy, necessary to protect judgment, maintain accountability, and engage critically with emerging tools. Their learning needs were not purely technical but ethical and epistemic: how to weigh evidence, interpret probability, and remain vigilant in an era of shared decision automation.

    Wider System

    Participants described the wider system as a persistent barrier across phases. This domain reflects the external political, policy, and institutional forces, such as regulation, professional guidance, legislation, and funding models, that define the environment in which implementation occurs but which local teams cannot directly control. While wider system themes were present across phases, they did not dominate every interview, and some subissues (for example, explicit references to legislation) appeared only sporadically.

    At baseline, consultants depicted a public system tolerant of inefficiency and difficult to influence. Funding constraints were raised in the context of competing pressures.

    Q-Health…there’s not much money around for these sorts of things.
    [P1, Consultant]

    During peri-implementation, registrars and consultants highlighted gaps in professional guidance and medicolegal expectations.

    Training/communication…college says we need to learn AI, but little practical guidance…site-to-site differences.
    [P24, Registrar]

    Others wanted clearer, proactive communication from professional bodies.

    College exposure is low.
    [P27, Registrar]

    Medicolegal norms were seen to expand review obligations when AI added extra views.

    Mentality in radiology, if it’s on a screen, you have to comment on it, and medico-legal, if presented, we must review everything.
    [P21, Consultant]

    By postimplementation, some raised broader ethical and policy concerns about data provenance and the social license for its use:

    There is concern about the way the data is being used…if all these algorithms are being trained on everyone’s data, it should be open source…it’s everyone’s.
    [P42, Registrar]

    Participants contrasted public and private system incentives and capacity, linking retention and deployment choices to wider economics and case-mix.

    Public keeps me for complex cases, teaching, and feedback; private pays double.
    [P28, Consultant]

    Public vs private…pay and tech are better in private; public has collegiality and case mix.
    [P42, Registrar]

    Across phases, the wider system was characterized by limited policy levers and still developing structural and legislative readiness to support the rapid integration of AI into acute health care. Taken together, these wider-system influences interacted with internal organizational dynamics, staffing fluctuations, workload variability, and shifting operational priorities to shape the evolving trajectory of implementation across phases.

    Principal Findings

    This study reports a prospective, qualitative, end-to-end evaluation of implementing an AI-driven clinical decision support system in a public radiology department, structured through the NASSS framework []. By mapping barriers and enablers across domains and phases, the study captures how early expectations shaped adoption, how sociotechnical challenges emerged during the rollout, and how these dynamics influenced long-term integration and adoption. Findings highlight that successful AI adoption depends not only on technical capability but on the alignment of organizational readiness, workflow design, and professional trust. Implementation success was governed by the interaction of multiple NASSS domains, which included interdependencies among technology, organization, adopters, and value rather than any single factor.

    Weak planning and limited feedback structures (organizational barriers) amplified adopter frustrations with false positives, information clutter, and interoperability issues (technology barriers), eroding trust (adopter-level barriers). Even when technical faults were later mitigated, these initial experiences constrained uptake, illustrating how early technical and communication failures created enduring impressions that shaped subsequent patterns of trust and tool use.

    This mutual reinforcement of challenges across domains aligns with the complexity perspective described by Greenhalgh et al [] and Braithwaite et al [], whereby interacting barriers within and across a complex adaptive system, such as health care, tend to compound rather than resolve over time, particularly when they are not addressed in a coordinated and simultaneous manner. Clinicians’ perceptions of value were shaped primarily by how reliably and efficiently the AI system performed within everyday reporting workflows. Early false positives and excessive image sets undermined those expectations, diminishing confidence in the tool’s promised efficiency benefits. This finding is consistent with a recent qualitative study showing that workflow fit determines perceived usefulness, even for AI []. Participants described the system as “a check-off” tool rather than an integrated aid, and acceptance was suboptimal across our study, consistent with a 2023 semistructured interview study of radiologists (n=25), which identified reliability, interpretability, and feedback transparency as decisive for AI acceptance []. Even after performance improved, initial mistrust persisted. This enduring skepticism also mirrors broader evidence that initial experiences set adoption trajectories and that trust is far easier to lose than to regain []. The finding reinforces the importance of consolidating the first-use experience through predeployment testing or “shadow mode” configurations [].

    Organizational conditions were vital in shaping clinician engagement. During peri-implementation, fragmented communication and limited training left some staff unaware that the system had gone live, while others lacked the confidence to use it effectively. This mirrors the work of our team and others who have consistently highlighted that structured rollout, anticipatory planning, and capacity building are critical for sustainable digital adoption [,-]. Participants described the process as reactive and isolated rather than coordinated, reflecting the absence of a shared sense of purpose and mutual accountability between leadership, implementers, and users, necessary for effective implementation []. Although clinicians remained receptive to AI, they expected visible organizational commitment through ongoing education, rapid troubleshooting, and coherent leadership. In its absence, they relied on informal workarounds and peer support to conduct work-as-done, the adaptive, improvised practices that frontline staff develop to keep systems functioning when formal processes or resources fall short []. While such adaptations can sustain local functionality, they also introduce variability in care delivery and make it difficult to scale and standardize best practices across settings.

    These organizational shortfalls also shaped how clinicians experienced implementation. In the absence of clear planning and coordination, several described feeling individually responsible for interpreting and integrating the AI tool into their workflow. This was not an explicit transfer of responsibility but reflected a professional culture in which clinicians relied on their own judgement to make the system workable within local constraints. Medicolegal uncertainty about liability further reinforced this guarded engagement. A 2021 narrative review observed that when accountability is unclear or diffuse, clinicians maintain human oversight to protect both patient safety and professional authority []. These dynamics highlight a core implementation challenge: without clear institutional responsibility and evidentiary assurance, professional caution becomes self-reinforcing, constraining experimentation, shared learning, and the normalization of AI within routine practice.

    The single-site public hospital context influenced organizational capacity. Frequent staff rotations and high turnover meant that many clinicians engaged with the AI system intermittently, limiting opportunities for cumulative learning. Similar patterns are common across public health services, where workforce mobility and resource constraints make it difficult to sustain iterative improvement []. These dynamics interacted with the implementation process itself, shaping the pace and pattern of adoption and modulating how performance issues were perceived over time. Rather than functioning as external confounders, they formed part of the organizational ecology within which the AI system was introduced, influencing continuity, familiarity, and the stability of feedback loops essential for embedding new technologies. These conditions underscore the importance of implementation approaches that are designed for continuity, including modular onboarding, periodic refresher training, and accessible repositories of AI-related resources to support learning across changing teams.

    This study extends the AI-in-radiology literature in 3 key ways. It adopts a temporal perspective, tracing implementation from early anticipation to postintegration and showing how initial optimism and early missteps shape later engagement. It also demonstrates that adoption was driven not by the innovation alone but by the interaction of technical performance, organizational coordination, and context, extending NASSS from description to explanation. In this framing, staff turnover, workload fluctuations, and shifting operational priorities function as active contextual determinants that help explain why implementation trajectories evolve as they do, rather than as background noise. Finally, through reflexive use of the framework, the study generates theoretical insight into why implementation evolves as it does, contributing to emerging work on domain interdependence and temporal complexity in health care innovation [].

    Grounded in our findings and consistent with previous guidance [,,,], effective AI implementation in radiology depends on a combination of technical stability, communication, and organizational preparedness. Early “shadow mode” piloting helps identify faults and build trust before clinical use, while consistent communication about progress and fixes maintains transparency. Workflow-compatible design that minimizes cognitive load supports efficiency and acceptance []. Ongoing professional development, formal feedback channels with vendor responsiveness, and planning for workforce turnover through shared training repositories and local champions help sustain capability over time. Together, these strategies align with the six FUTURE-AI principles of fairness, universality, traceability, usability, robustness, and explainability, an international expert-driven consensus designed to facilitate the adoption of trustworthy AI in medical imaging [], and emphasize that co-design and iterative learning are essential to long-term adoption.

    Study Limitations and Strengths

    This was a single-site study, and further real-world, ecological research is needed to identify determinants of adoption and create solutions that generalize across health systems. Despite this, our findings are consistent with several recently published studies examining radiologists’ perceptions of AI adoption in standard practice [,,,-]. Technical challenges led to low uptake of the tool during peri-implementation, which limited wider evaluation of this crucial phase, particularly in terms of clinical usefulness. The 18-month study period may also have introduced system confounders, including staffing changes and shifting organizational priorities, which may have influenced how participants felt about the AI clinical decision support tool. Finally, social desirability bias cannot be ruled out, particularly as this was a department-wide implementation. However, this was mitigated through participant briefings emphasizing the exploratory nature of the trial, coupled with reflexive practice by the interview team []. A key strength was that this was a real-world evaluation with ecological validity, grounding the findings in practical realities. Second, aligning the findings with a validated implementation science framework supports theoretical transferability and future application in related contexts despite the single-site limitation.

    Future Implications

    Future studies should integrate qualitative and quantitative data, combining workflow observations with metrics such as reporting time, error rates, and AI usage logs, to triangulate findings. Multisite evaluations across differing levels of digital maturity are needed to test transferability and examine how governance, culture, and workforce patterns influence scalability.

    Conclusion

    Implementation of AI-based decision support in radiology is as much an organizational and cultural process as a technological one. Clinicians remain willing to engage, but sustainable adoption depends on consolidating early experiences, embedding communication and training, and maintaining iterative feedback between users, vendors, and system leaders. Applying the NASSS framework revealed how domains interact dynamically across time, offering both theoretical insight into sociotechnical complexity and practical guidance for hospitals seeking to move from pilot to routine, trustworthy AI integration.

    The authors would like to gratefully acknowledge the study participants who gave their valuable time to participate in this study. The authors are grateful to Dr Sue Jeavons and her staff for facilitating this study onsite.

    The datasets generated or analyzed during this study are not publicly available due to confidentiality policies, but may be available in limited form from the corresponding author on reasonable request.

    This work was supported by Digital Health CRC Limited (DHCRC), funded under the Commonwealth Cooperative Research Centres (CRC) program. SM is supported by a fellowship from the National Health and Medical Research Council (NHMRC; #1181138). The funders had no role in the study design or the decision to submit for publication.

    None declared.

    Edited by J Sarvestan; submitted 09.Jul.2025; peer-reviewed by S Mohanadas, L Laverty, X Liang; comments to author 04.Aug.2025; revised version received 09.Dec.2025; accepted 10.Dec.2025; published 28.Jan.2026.

    ©Sundresan Naicker, Paul Schmidt, Bruce Shar, Amina Tariq, Ashleigh Earnshaw, Steven McPhail. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.Jan.2026.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


  • NVIDIA Sets Conference Call for Fourth-Quarter Financial Results

    NVIDIA Sets Conference Call for Fourth-Quarter Financial Results

    Written CFO Commentary to Be Provided Ahead of Call

    SANTA CLARA, Calif., Jan. 28, 2026 (GLOBE NEWSWIRE) — NVIDIA will host a conference call on Wednesday, February 25, at 2 p.m. PT (5 p.m. ET) to discuss its financial results for the fourth quarter and fiscal year 2026, which ended January 25, 2026.

    The call will be webcast live (in listen-only mode) on investor.nvidia.com. The company’s prepared remarks will be followed by a Q&A session, which will be limited to questions from financial analysts and institutional investors.

    Ahead of the call, NVIDIA will provide written commentary on its fourth-quarter results from Colette Kress, the company’s executive vice president and chief financial officer. This material will be posted to investor.nvidia.com immediately after the company’s results are publicly announced at approximately 1:20 p.m. PT.

    The webcast will be recorded and available for replay until the company’s conference call to discuss financial results for its first quarter of fiscal year 2027.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Investor Relations, NVIDIA Corporation: ir@nvidia.com
    Corporate Communications, NVIDIA Corporation: press@nvidia.com

    © 2026 NVIDIA Corporation. All rights reserved. NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries.


  • Journal of Medical Internet Research

    Journal of Medical Internet Research

    Background

    When people look for health information today, they no longer only consult physicians, pharmacists, or search engines. Increasingly, they also encounter generative artificial intelligence (AI) tools such as ChatGPT or the World Health Organization (WHO)’s chatbot Sarah, which simulate human-like conversations and provide instant responses. These tools promise a new way of accessing medical knowledge: fast, convenient, and interactive. At first glance, this accessibility seems to hold great potential for reducing barriers to health information, therefore directly impacting digital health equity—defined as equitable access to and use of digital health information technology that supports informed decision-making and enhances health [].

    However, the picture is more complex. On the one hand, generative AI can offer cost-free entry points (eg, basic versions of ChatGPT or automatically displayed answers in Google search via Google’s Gemini), deliver content in multiple languages, and rephrase complex medical concepts into more understandable terms. In doing so, it could strengthen patient education, address health inequalities, and help bridge communication gaps between citizens and health care providers [,]. On the other hand, effective use still depends on internet-enabled devices and adequate digital skills, which are not equally distributed. As a result, the very technology that appears open and inclusive may also risk exacerbating existing digital divides [].

    Moreover, unlike other types of information, health-related questions are often sensitive and personal. At the same time, the inner workings of generative AI remain opaque, and the accuracy of its outputs is not guaranteed []. All these tensions raise important questions about adoption: Who is most likely to turn to generative AI for health information, and what factors shape this intention? Moreover, since health communication practices and digital infrastructures differ across countries, cross-national research is urgently needed.

    To address these questions, this study draws on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) []. The model proposes that performance expectancy, effort expectancy, facilitating conditions, social influence, habit, and hedonic motivation shape technology use. We extend this framework by also examining the roles of health literacy and health status in predicting intention to use generative AI for health information. Using cross-national survey data from Austria, Denmark, France, and Serbia, we investigate the drivers of adoption to shed light on both individual and contextual factors that may guide the diffusion of generative AI in health contexts.

    Generative AI as Novel Health Information Source

    Generative AI constitutes a potentially disruptive force in the health information ecosystem []. However, despite its rapid advancement and widespread availability, empirical research on its role in health information–seeking remains limited []. At the same time, broader trends highlight the ongoing digitalization of health-related knowledge acquisition. A representative survey conducted in Germany in 2019 revealed that only 48% of respondents consulted a medical professional for their most recent health issue, while 1 in 3 turned first to the internet []. Similar findings show that online sources—particularly search engines—are the primary means of accessing health information, both for caregivers and the general population [,]. Family and friends and traditional mass media (eg, print media and health-related TV programming) rank behind medical professionals and online sources [].

    The introduction of generative AI tools like ChatGPT may shift these established hierarchies. Unlike static web content or conventional search engines, generative AI enables dialogic, personalized interactions that simulate human conversation. These features may position generative AI as a compelling alternative to established online and offline health information sources. However, current evidence suggests that trust in generative AI—especially regarding complex health-related issues—is still limited [], which might restrict its present adoption potential to early adopters []. This raises questions about how generative AI integrates into the broader ecosystem of health information sources.

    To address this gap, we first explore: How does the use of generative AI for health information–seeking compare to that of more established health information sources?

    Explaining Predictors of Technology Adoption: UTAUT2

    The UTAUT2 [] is one of the most popular models to explain technology adoption. It builds on the technology acceptance model [], emphasizing perceived usefulness and ease of use, and the initial UTAUT model [], which added performance expectancy, effort expectancy, facilitating conditions, and social influence as predictors of adoption behavior. UTAUT2 extends these frameworks to consumer contexts by incorporating hedonic motivation and habit [,]. The UTAUT2 model has demonstrated its versatility in explaining the adoption of diverse eHealth technologies, such as wearable devices [], health websites [], and health apps []. Additionally, recent studies have highlighted its relevance in understanding the uptake of generative AI technologies [-], showcasing its capacity to extend beyond traditional eHealth domains. However, so far, studies on predictors of usage intentions in the context of AI health information–seeking are lacking.

    Performance expectancy, a central construct in the UTAUT2 framework, reflects the belief that using technology will lead to performance benefits []. In the context of health information–seeking using generative AI, performance expectancy is shaped by users’ perceptions of how effectively these tools can enhance their own lives, including aspects such as health decision-making and task efficiency []. Consequently, as users anticipate greater usefulness from adopting generative AI as a health information source, their intention to use such technologies strengthens [-,]. Based on this, we propose the following hypothesis: “the higher the performance expectancy, the stronger the intention to use generative AI for health information–seeking” (H1).

    Effort expectancy, closely tied to ease of use, emphasizes simplicity in technology adoption []. Generative AI tools like ChatGPT benefit from high effort expectancy when users find them intuitive and easy to integrate into their workflows, particularly during the early adoption phase [,,,]. Addressing usability concerns early can reduce resistance and build user confidence, strengthening behavioral intention [,]. Therefore, we propose that “the higher the effort expectancy, the stronger the intention to use generative AI for health information–seeking” (H2).

    Facilitating conditions refer to the resources, skills, and support necessary for using technology []. These include training, knowledge, technical assistance, and system compatibility, which significantly enhance behavioral intention and usage [,]. In technologically mature settings, facilitating conditions are critical for sustained adoption and user satisfaction []. In line with the UTAUT2, we hypothesize that “the better the facilitating conditions, the stronger the intention to use generative AI for health information–seeking” (H3).

    Social influence indicates the perception that peers, such as family, friends, or colleagues, believe one should adopt a technology []. It plays a crucial role in early adoption, where external validation often outweighs personal experience []. Positive reinforcement within social or professional networks can normalize usage [,,,]: if people perceive that their peers already use generative AI for health information–seeking, their own intention to do so might increase as well. We therefore propose that “the greater the perceived social influence, the stronger the intention to use generative AI for health information–seeking” (H4).

    Habit specifies the extent to which behavior becomes automatic through repetition and prior use []. It strongly influences behavioral intention and long-term adoption, emphasizing the importance of regular engagement with technology [,]. For generative AI as a health information source, fostering habitual use can solidify its integration into daily routines and enhance sustained adoption []. This leads us to state, “the more it is a habit to use generative AI, the stronger the intention to use generative AI for health information–seeking” (H5).

    Hedonic motivation refers to the enjoyment or pleasure derived from using technology, particularly relevant in consumer contexts []. It directly impacts behavioral intention, especially for technologies involving entertainment or leisure []. For generative AI like ChatGPT, it can be expected that the interaction is perceived as fun or entertaining, which can boost user engagement and drive adoption []. Accordingly, we suggest the following hypothesis: “the higher the hedonic motivation, the stronger the intention to use generative AI for health information–seeking” (H6).

    Influence of Health Literacy and Health Status

    With the growing integration of digital tools into everyday lives, the role of health literacy in online health information–seeking has garnered increasing attention. Health literacy has been conceptualized as an individual’s capacity to search, access, comprehend, and critically evaluate health information, as well as to use the acquired knowledge to effectively address health-related issues [,]. Digital health literacy refers to these abilities in the context of digital environments [-]. Generally, low health literacy scores have been associated with undesirable health outcomes [].

    Research suggests that low levels of health literacy are associated with decreased trust in online health resources [], including the outputs of AI tools [], and lower overall adoption of online health technologies []. Furthermore, initial studies indicate a positive association between health literacy levels and attitudes toward the use of AI tools for medical consultations [].

    On the other hand, individuals with higher levels of health literacy are generally better equipped to critically evaluate online health information and scrutinize it in greater detail [,]. This heightened evaluative capacity could make them more aware of the limitations and potential risks of generative AI outputs, such as inaccurate information, bias, data privacy concerns, or oversimplified medical advice []. Moreover, individuals with higher health literacy are more likely to trust and use high-quality medical online resources, whereas those with limited health literacy prefer accessible but potentially less reliable sources []. In this context, outputs from generative AI might be perceived as lower-quality sources by highly digital health–literate individuals. As a result, while higher health literacy could foster openness to using generative AI for health purposes, it might also lead to greater skepticism or hesitancy in relying on these tools. Nonetheless, there is not enough research in the context of generative AI specifically to make conclusive predictions.

    Another well-established factor in online health information–seeking, yet underexplored in the context of AI, is individuals’ health status: Studies suggest that people with poor health are significantly more likely to consult the internet for health information compared to those with good health [,]. Being chronically ill has also been associated with increased reliance on internet-based technologies for health-related purposes []. This relationship can be explained by the fact that individuals in poor health often experience greater health-related concerns, which in turn heightens their motivation to seek information online.

    Given these complex relationships, we propose the second research question: How does health literacy and health status influence the intention to seek health information using generative AI?

    Cross-National Comparison

    In this study, we investigate the predictors of generative AI adoption for health information–seeking across 4 European countries: Austria, Denmark, France, and Serbia. While these countries share certain similarities, they also display notable differences that could shape the strength of the UTAUT2 predictors on the intention to use generative AI for health purposes. Thus, this cross-national approach ensures that the observed effects are generalizable and not confined to specific national contexts or unique country conditions.

    The selected countries share two key characteristics. First, all 4 countries provide universal health coverage, ensuring broad access to health care services for their populations. Second, a significant portion of health care expenditure in these countries is publicly funded [-].

    Despite these commonalities, there are also critical factors that differ among the countries and may shape the predictors of generative AI adoption. On the one hand, variations in digital infrastructure could significantly impact facilitating conditions, effort expectancy, and social influence as predictors of generative AI use. Denmark consistently ranks among Europe’s most digitally advanced nations, boasting high internet penetration and widespread adoption of e-health solutions []. This strong digital ecosystem likely enhances the perceived ease of use and social endorsement of generative AI. In contrast, Austria, France, and Serbia exhibit more moderate levels of digital adoption in the context of health information, which may limit the perceived ease of use and supportive social norms regarding such technologies [].

    On the other hand, access to and trust in health care providers vary significantly across these countries, potentially influencing performance expectancy and social influence. In nations with robust health care systems—characterized by a high availability of medical professionals and easy access to care—individuals are more likely to rely on doctors for health advice, as they are often viewed as the most trusted source of health information []. Denmark exemplifies this with its high levels of public trust in the health care system [], which may reduce the perceived benefits and social norms around using generative AI for health purposes. Conversely, in western Balkan countries like Serbia, studies report generally low levels of trust in the health care system []. In such contexts, individuals may be more inclined to seek alternative information sources, potentially amplifying the perceived benefits of generative AI use.

    By examining these diverse national contexts, this study not only tests the universality of the UTAUT2 model but also deepens our understanding of the contextual factors that shape generative AI adoption for health purposes. We ask: How do the predictors of generative AI use for health information–seeking differ across Austria, Denmark, France, and Serbia?

    Ethical Considerations

    Before data collection, the study received ethical approval from the institutional review board of the Department of Communication at the University of Vienna (approval ID: 1205). All participants provided written informed consent prior to participating in the study. The data were collected in anonymized form and no personal identifiers were recorded or stored. Participants received a compensation of €1.50 (US $1.74) for completing the study through the panel provider.

    Recruitment

    Recruitment of participants occurred during September 2024 via Bilendi, an international panel provider. Bilendi recruited the participants via email. The panel is checked for quality and attendance on a regular basis. The study was conducted in Austria, France, Denmark, and Serbia, with participants randomly selected to achieve samples representative of age, gender, and educational background. The provider’s panel sizes in the respective countries were as follows: Austria: n=60,000; Denmark: n=90,000; France: n=815,000; and Serbia: n=15,100. Per country, the study aimed to reach 500 participants.

    Inclusion criteria required participants to be aged between 16 and 74 years. Additionally, respondents who completed the survey in less than one-third of the median completion time (speeders) were excluded. Completion rates (excluding screened-out participants) were high across all 4 countries, ranging from 84.3% to 89.8%. Further details on survey design, administration, and response rates are provided in the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) checklist ().
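
    As a rough illustration of the speeder rule described above, the sketch below drops respondents whose completion time falls below one-third of the median. The data frame, the column names (duration_sec, country), and the per-country application of the rule are assumptions for illustration only, not the authors' actual cleaning script.

    ```python
    import pandas as pd

    # Hypothetical survey export: one row per respondent, completion time in seconds.
    df = pd.DataFrame({
        "respondent_id": [1, 2, 3, 4, 5, 6],
        "country": ["AT", "AT", "DK", "FR", "RS", "RS"],
        "duration_sec": [620, 180, 540, 710, 95, 660],
    })

    def drop_speeders(frame: pd.DataFrame) -> pd.DataFrame:
        """Drop respondents faster than one-third of the median completion time."""
        cutoff = frame["duration_sec"].median() / 3
        return frame[frame["duration_sec"] >= cutoff]

    # Whether the median was computed per country or overall is not stated in the paper;
    # this sketch applies the rule within each country.
    cleaned = df.groupby("country", group_keys=False).apply(drop_speeders)
    print(f"Kept {len(cleaned)} of {len(df)} respondents")
    ```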

    Procedure and Measures

    Overview

    The study consisted of two components. The first component, a survey, investigated predictors of the intention to use generative AI for health information–seeking. The second component, an experimental study, explored the influence of disease-related factors on these intentions []. To ensure respondents shared a common understanding of the concept, the survey began with a short definition of “generative AI,” describing it as technologies that engage in natural language conversations with users and generate responses in real-time. Examples such as ChatGPT, Google’s Gemini, and Microsoft Copilot were provided.

    The original questionnaire was developed in English and subsequently translated into German, French, Danish, and Serbian. Each translation was performed by a bilingual team member and back-translated into English by a different native speaker to ensure conceptual equivalence with the original items.

    This study focused on the following constructs (a complete item list with descriptive analysis and construct reliability values can be found in ).

    Dependent Variables
    Sources of Health Information–Seeking

    Participants rated the frequency with which they use 8 health information [] sources on a 7-point Likert scale (1 = “never” to 7 = “very often”). The sources included conversations with medical professionals, pharmacists, and family or friends, as well as books, mass media, internet search engines (eg, Google and Ecosia), and generative AI.

    AI Usage Intentions

    Participants’ willingness to use generative AI [] for health information–seeking was assessed using 3 items (eg, “I intend to use generative AI for health information seeking”), rated on a 7-point Likert scale (1 = “strongly disagree” to 7 = “strongly agree”).

    UTAUT2 Predictor Variables

    All UTAUT2 model [,,,] predictors were measured on a 7-point Likert scale (1 = “strongly disagree” to 7 = “strongly agree”). The predictors have been described below.

    Performance Expectancy

    Perceived benefits of using generative AI for health information–seeking were measured with 4 items (eg, “Using generative AI would save me time when researching health topics”).

    Effort Expectancy

    The perceived ease of using generative AI as a health information source was assessed with 4 items (eg, “Learning to use generative AI for health information–seeking seems easy for me”).

    Social Influence

    Three items measured the extent to which participants felt that important others encouraged their use of generative AI for health information–seeking (eg, “People who are important to me think that I should use generative AI for health-information seeking”).

    Hedonic Motivation

    The enjoyment of using generative AI for health information–seeking was assessed with 3 items (eg, “I think using generative AI for health-information seeking could be fun”).

    Facilitating Conditions

    Participants’ perceptions of available resources and support for using generative AI to seek health information were measured with 4 items (eg, access to devices and reliable internet, and knowledge).

    Habit

    The extent to which turning to generative AI when seeking health information had become a habitual behavior was measured with 3 items (eg, “I automatically turn to generative AI whenever I have questions about my health”).

    Model Extension Variables

    Health Literacy

    Health literacy [] was assessed with 10 items, asking participants to rate their confidence in tasks such as finding understandable health information. Responses were recorded on a 4-point Likert scale (1 = “not at all true” to 4 = “absolutely true”).

    Health Status

    We measured participants’ health status using 1 item (“How would you describe your current health status?”; 1 = “very poor” to 7 = “very good”).

    Control Variables: Sociodemographic Variables

    We further measured participants’ age, gender, and educational level.
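
    Because each construct above is a short battery of Likert items, a common processing step is to form mean composite scores and check internal consistency. The sketch below shows one way to do this for the 3 usage-intention items; the item names and responses are invented for illustration, and the paper does not specify the software used for scale scoring.

    ```python
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the item sum)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Invented responses to the three 7-point usage-intention items (rows = respondents).
    intention_items = pd.DataFrame({
        "intent_1": [6, 2, 5, 7, 3],
        "intent_2": [5, 1, 5, 6, 4],
        "intent_3": [6, 2, 4, 7, 3],
    })

    composite = intention_items.mean(axis=1)   # mean score per respondent
    alpha = cronbach_alpha(intention_items)    # internal consistency of the scale
    print(f"Cronbach's alpha = {alpha:.2f}")
    print(composite.round(2).tolist())
    ```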

    Statistical Analysis

    Power

    An a priori power analysis was conducted to determine the required sample size for structural equation modeling. Assuming an anticipated effect size of 0.25, a desired statistical power of 0.95, and a significance level of .05, the analysis indicated that a minimum of 391 participants per country would be necessary to detect the hypothesized effects [].
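
    For orientation, the closed-form logic behind such a calculation can be sketched as follows. This uses the generic normal-approximation formula for a standardized effect and will not reproduce the exact figure of 391, which the authors presumably obtained from a dedicated tool (eg, G*Power) with a test family matched to their model; the formula and function name here are illustrative assumptions.

    ```python
    import math
    from scipy.stats import norm

    def approx_n(effect_size: float, alpha: float = 0.05, power: float = 0.95) -> int:
        """Approximate n for a two-sided one-sample z-test of a standardized effect.

        n = ((z_{1-alpha/2} + z_{power}) / effect_size)^2, rounded up.
        """
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

    # Inputs reported in the paper: effect size 0.25, alpha .05, power .95.
    print(approx_n(0.25, alpha=0.05, power=0.95))
    ```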

    Analytical Plan

    We used AMOS version 26 (IBM Corp) to run a latent-variable, multigroup structural equation model with a full-information maximum-likelihood estimator. We computed the comparative fit index, the Tucker-Lewis Index, the chi-square to degrees of freedom ratio (χ²/df), and the root mean square error of approximation. We also established metric measurement invariance so that paths could be compared across countries. We controlled for age, gender, and education (binary coded).

    User Statistics

    In total, data were collected from 1990 respondents, comprising 502 from Austria, 507 from Denmark, 498 from France, and 483 from Serbia. The overall mean age of participants was 45.1 (SD 15.7) years, with 50.2% (n=998) identifying as female. In terms of educational attainment, 83.8% (n=1634) of the sample reported completing at least a medium or higher level of education (secondary level II or higher). Furthermore, 39.5% (n=787) of respondents indicated prior use of generative AI for health information–seeking (at least rarely). Detailed demographic and background characteristics of the sample are summarized in .

    Table 1. Descriptive characteristics of survey respondents from Austria, Denmark, France, and Serbia (N=1990; September 2024).

    Demographic characteristics | Overall, n (%) | Austria, n (%) | Denmark, n (%) | France, n (%) | Serbia, n (%)
    Educationᵃ
    Secondary I or lower | 356 (18.24) | 93 (18.6) | 136 (26.9) | 109 (21.9) | 18 (3.7)
    Secondary II | 1080 (55.36) | 303 (60.3) | 179 (35.3) | 224 (45.0) | 374 (77.4)
    Tertiary | 554 (28.39) | 106 (21.1) | 192 (37.9) | 165 (33.1) | 91 (18.8)
    Gender
    Female | 998 (50.15) | 252 (49.8) | 251 (49.5) | 256 (51.4) | 239 (49.5)
    Male | 992 (49.85) | 250 (50.2) | 256 (50.5) | 242 (48.6) | 244 (50.5)
    Prior experienceᵇ
    No | 1203 (60.45) | 316 (62.9) | 328 (64.7) | 326 (65.5) | 233 (48.2)
    Yes | 787 (39.54) | 186 (37.1) | 179 (35.3) | 172 (34.5) | 250 (51.8)

    ᵃEducational attainment was categorized as low (secondary level I or below) and medium or high (secondary level II or higher). In Serbia, however, representativeness was achieved by grouping educational levels into low or medium (secondary level II or below) and high (tertiary education) due to sampling limitations.

    ᵇPrior experience: no = “I have never used Generative AI for health information seeking” and yes = “I have used Generative AI for health information seeking at least rarely.”

    Statistical tests revealed no significant differences in gender distribution across countries (χ²₃=0.48; P=.92) and no significant differences in age (Kruskal-Wallis χ²₃=2.15; P=.54). In contrast, educational attainment varied significantly between countries (χ²₃=550.76; P<.001), reflecting sampling-related imbalances in Serbia, where low versus medium or high education was assessed differently than in the other countries. Finally, prior experience with generative AI for health information–seeking showed significant country differences (Kruskal-Wallis χ²₃=30.95; P<.001), with higher levels reported in Serbia.
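
    The between-country tests reported here can be reproduced with standard SciPy routines. The sketch below uses the gender counts from Table 1 for the chi-square test of independence; the age vectors are invented placeholders, since the raw age data are not reported.

    ```python
    from scipy.stats import chi2_contingency, kruskal

    # Gender-by-country counts from Table 1 (rows: female, male; columns: AT, DK, FR, RS).
    gender_table = [
        [252, 251, 256, 239],
        [250, 256, 242, 244],
    ]
    chi2_stat, p_gender, dof, _ = chi2_contingency(gender_table)
    print(f"Gender: chi2({dof}) = {chi2_stat:.2f}, P = {p_gender:.2f}")  # approx. 0.48, P=.92

    # Kruskal-Wallis comparison of age across the four countries (placeholder age vectors).
    age_at = [34, 51, 45, 60, 29, 48]
    age_dk = [41, 55, 38, 62, 47, 36]
    age_fr = [36, 49, 53, 58, 31, 44]
    age_rs = [39, 44, 50, 61, 33, 52]
    h_stat, p_age = kruskal(age_at, age_dk, age_fr, age_rs)
    print(f"Age: H(3) = {h_stat:.2f}, P = {p_age:.2f}")
    ```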

    Evaluation Outcomes

    Descriptive Analysis

    In our first research question, we explored how generative AI compares to more established health information sources in terms of usage frequency across countries. As illustrated in , generative AI ranks last among all measured sources, indicating that, as of autumn 2024, it is rarely used for health information–seeking (mean 2.08, SD 1.66). In stark contrast, online search engines like Google are highly used, ranking second with a mean usage frequency of 4.57 (SD 1.88), following conversations with physicians, which hold the top position (4.77, SD 1.70). Family and friends also play a significant role, ranking third (4.27, SD 1.73), alongside pharmacists (3.52, SD 1.81). In comparison, traditional mass media such as TV, newspapers, and magazines are used less frequently (2.74, SD 1.68), as are books (2.68, SD 1.70) and free magazines provided by pharmacies or health insurance companies (2.60, SD 1.65). The relative ranking of information sources was consistent across all 4 countries, with physicians, internet search engines, and family or friends occupying the top positions and generative AI ranking last. However, some variation in mean usage frequencies was observed between countries; detailed country-level results are presented in .

    Figure 1. Mean usage frequency of different health information sources among survey respondents (N=1990) in Austria, Denmark, France, and Serbia (95% CI; September 2024). AI: artificial intelligence.
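
    As a small illustration of how this ranking can be tabulated, the sketch below reuses the overall means and SDs reported in the paragraph above; it is a presentation aid only, not a re-analysis of the raw data.

    ```python
    import pandas as pd

    # Overall mean usage frequency (1 = never, 7 = very often) and SD, as reported above.
    sources = pd.DataFrame(
        [
            ("Medical professionals", 4.77, 1.70),
            ("Internet search engines", 4.57, 1.88),
            ("Family and friends", 4.27, 1.73),
            ("Pharmacists", 3.52, 1.81),
            ("Mass media", 2.74, 1.68),
            ("Books", 2.68, 1.70),
            ("Free magazines", 2.60, 1.65),
            ("Generative AI", 2.08, 1.66),
        ],
        columns=["source", "mean", "sd"],
    )

    # Sort from most to least used to reproduce the ranking described in the text.
    print(sources.sort_values("mean", ascending=False).to_string(index=False))
    ```
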
    Model Evaluation

    For the hypothesis tests, the results are shown in . Model fit was good (comparative fit index=0.95; Tucker-Lewis Index=0.93; χ²/df=2.47, P<.001; root mean square error of approximation=0.03, 95% CI 0.03-0.03). We examined the metric measurement invariance of all latent variables by constraining all factor loadings to be equal across the 4 countries. When comparing the constrained model to the unconstrained model, we found no significant difference in model fit (P=.16). Thus, metric invariance across countries was established.
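
    The invariance check reported here is a chi-square difference test between the loading-constrained and unconstrained multigroup models. The sketch below shows that comparison in isolation; the chi-square and df values are placeholders, because the paper reports only the resulting P value (.16).

    ```python
    from scipy.stats import chi2

    def chi_square_difference_p(chi2_constrained: float, df_constrained: int,
                                chi2_free: float, df_free: int) -> float:
        """P value of the chi-square difference test between two nested SEM models."""
        delta_chi2 = chi2_constrained - chi2_free
        delta_df = df_constrained - df_free
        return chi2.sf(delta_chi2, delta_df)

    # Placeholder fit statistics for the unconstrained and metric-constrained models.
    p_value = chi_square_difference_p(chi2_constrained=1520.4, df_constrained=612,
                                      chi2_free=1490.1, df_free=588)
    print(f"Chi-square difference test: P = {p_value:.2f}")  # nonsignificant -> metric invariance holds
    ```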

    Table 2. Structural equation model predicting the intention to use generative artificial intelligence (AI) for health information among survey respondents in Austria, Denmark, France, and Serbia (N=1990; September 2024).

    Predictor variables | Austria (b, SE, P value) | Denmark (b, SE, P value) | France (b, SE, P value) | Serbia (b, SE, P value)
    Performance expectancy | 0.47, 0.05, <.001 | 0.52, 0.05, <.001 | 0.53, 0.05, <.001 | 0.44, 0.05, <.001
    Effort expectancy | −0.07, 0.05, .20 | 0.03, 0.05, .54 | −0.11, 0.05, .04 | −0.02, 0.06, .77
    Facilitating conditions | 0.12, 0.04, .01 | 0.17, 0.05, <.001 | 0.22, 0.05, <.001 | 0.24, 0.05, <.001
    Social influence | −0.05, 0.04, .29 | −0.08, 0.05, .17 | −0.09, 0.05, .10 | −0.05, 0.04, .27
    Habit | 0.29, 0.04, <.001 | 0.32, 0.04, <.001 | 0.28, 0.05, <.001 | 0.28, 0.04, <.001
    Hedonic motivation | 0.45, 0.06, <.001 | 0.22, 0.05, <.001 | 0.33, 0.05, <.001 | 0.23, 0.05, <.001
    Health literacy | −0.004, 0.09, .97 | 0.04, 0.10, .67 | −0.02, 0.08, .08 | 0.08, 0.10, .40
    Health status | −0.002, 0.03, .95 | 0.02, 0.03, .61 | −0.09, 0.03, .01 | −0.05, 0.03, .13

    Explained variance: Austria=0.84; Denmark=0.80; France=0.86; Serbia=0.79.

    Different subscripts in each row indicate a significant difference between paths (P<.05).

    In line with H1, we found a highly significant positive association between performance expectancy and the intention to use generative AI for health information–seeking across all 4 countries (Austria: b=0.47, P<.001; Denmark: b=0.52, P<.001; France: b=0.53, P<.001; Serbia: b=0.44, P<.001). In contrast, H2 was not supported: effort expectancy showed no significant association with behavioral intention in any of the countries. Turning to H3, results revealed a positive association between facilitating conditions and the intention to use generative AI as a health information source, consistently observed across all 4 contexts (Austria: b=0.12, P=.005; Denmark: b=0.17, P<.001; France: b=0.22, P<.001; Serbia: b=0.24, P<.001). By contrast, no support was found for H4: perceived social influence was unrelated to behavioral intention in any of the countries. As predicted in H5, habit was positively associated with behavioral intention to use generative AI for health information–seeking throughout the sample (Austria: b=0.29, P<.001; Denmark: b=0.32, P<.001; France: b=0.28, P<.001; Serbia: b=0.28, P<.001). A similar pattern emerged for H6: hedonic motivation was significantly positively related to behavioral intention in all countries (Austria: b=0.45, P<.001; Denmark: b=0.22, P<.001; France: b=0.33, P<.001; Serbia: b=0.23, P<.001).

    Finally, with regard to our second research question—which examined whether health literacy and health status predict the intention to seek health information using generative AI—we found no substantial associations. Only in France did health status show a small negative effect (b=−0.09; P=.007).

    Principal Results

    This study investigated the predictors of intention to use generative AI for health information–seeking, drawing on the UTAUT2 framework and expanding it with health literacy and health status. Using cross-national survey data from Austria, Denmark, France, and Serbia, our findings show that generative AI is still only rarely used for health information–seeking. At the same time, performance expectancy, facilitating conditions, habit, and hedonic motivation consistently emerged as significant predictors of behavioral intention, whereas effort expectancy, social influence, health literacy, and health status were not related to intention. These patterns were consistent across all 4 countries, suggesting a robust set of psychological drivers underlying the early adoption of generative AI in health contexts. A detailed examination of these findings is provided as follows.

    First, with regard to overall usage patterns, the data shows that generative AI currently plays only a minor role in health information–seeking: 60% of the respondents reported never having used a generative AI tool for health-related questions. This result lends itself to 2 contrasting interpretations.

    On the one hand, it challenges the popular narrative that generative AI is rapidly transforming health information–seeking behavior. Instead, the findings align with previous studies, showing that generative AI is currently infrequently used in the context of health information []. Traditional sources—such as medical professionals and search engines—continue to dominate [], underscoring that generative AI has yet to achieve mainstream adoption.

    On the other hand, despite persistent concerns about data privacy, algorithmic bias, and accuracy, it is noteworthy that 40% of the respondents have already experimented with generative AI for health purposes. Given that this technology only became widely accessible relatively recently, such early uptake is remarkable. From the perspective of technology adoption models, such as Rogers’ Diffusion of Innovations [], this pattern is characteristic of early adopters. It is therefore plausible to assume that the use of generative AI for health information–seeking will increase further as the technology matures and moves toward mainstream adoption.

    To better understand the drivers of future uptake, we applied an extended version of the UTAUT2 model. Our findings confirmed the predictive power of performance expectancy, facilitating conditions, habit, and hedonic motivation. This aligns with prior research on digital health tools, indicating that users value usefulness, access, familiarity, and enjoyment [,,].

    In detail, the results show that performance expectancy—the perceived usefulness of the technology—had a strong positive effect on behavioral intention in all four countries. This finding suggests that the more respondents believe generative AI is useful to manage health-related questions, the more they will use it. Thus, if public health stakeholders or developers aim to encourage responsible AI use, they should emphasize the tangible benefits of generative AI, such as 24/7 availability, rapid response times, and the potential for personalized information. Perceived usefulness may also be fostered when individuals try out generative AI for the first time, that is, they learn that they can benefit from the technology.

    At the same time, our study challenges established UTAUT2 assumptions. Effort expectancy, often seen as central to technology adoption, was not a relevant factor—possibly due to the intuitive nature of generative AI tools and the ubiquity of basic digital skills []. Using generative AI does not require any specific background knowledge beyond opening a webpage. Since online search engines are already the most frequently used health source, the basic skills needed for generative AI are widely present, potentially rendering effort expectancy less decisive.

    Taken together, this emerging pattern—the strong effect of performance expectancy and the null effect of effort expectancy—underscores the distinction between usefulness and usability, which are closely related but not identical []. Usability refers to the ease of interacting with a system (eg, ease of learning and error prevention), whereas usefulness (utility) captures whether the system provides the functions and information that users actually need. Our findings suggest that in health contexts, utility is the decisive factor: people intend to use generative AI if its outputs are perceived as useful, while usability-related aspects appear less influential.

    Importantly, this does not mean that barriers to adoption are absent. Rather, our findings show that they lie not in usability but in facilitating conditions—the structural and contextual resources that enable technology use. Across all countries, the availability of digital infrastructure, devices, and basic knowledge significantly shaped behavioral intention. In other words, while generative AI may be easy to use once accessed, unequal access to the necessary resources continues to pose a substantial adoption barrier. Consequently, facilitating conditions emerge as a key digital health equity concern []. Without adequate access, disadvantaged populations may be excluded from benefiting from generative AI, meaning that the technology risks widening rather than narrowing the digital divide in health information–seeking.

    We also found that social influence—an important predictor in other studies on AI uptake [,]—did not play a meaningful role in shaping behavioral intention. This suggests that health-related information search is a rather personal topic, and that individuals may not always be willing to disclose what kind of information they are looking for. As a result, the intention to use generative AI for health information–seeking is largely independent of peer opinions or social norms.

    In contrast, habit consistently predicted behavioral intention across all countries. From this finding, we may conclude that generative AI use for health information is likely to occur automatically, similar to how people use search engines. When individuals feel familiar with a technology, they are more likely to rely on it without conscious deliberation. However, this finding should be interpreted with caution, as the majority of participants had never used generative AI for health purposes. Much of the variance in habit may therefore reflect mere use versus nonuse. Accordingly, variables capturing initial adoption should be clearly distinguished from those measuring habit.

    By including health literacy and health status as additional predictors, our study adds a novel dimension to existing research. In contrast to studies showing direct paths between these constructs and online health information–seeking [,,], we found no such association for AI health information–seeking. However, each of these findings carries different implications. First, the absence of a significant association between health literacy and intention indicates that individuals’ ability to understand and evaluate health information was not related to whether they reported turning to generative AI. This finding may suggest that the use of such tools is driven less by informed decision-making and more by general curiosity or interest in new technologies. Importantly, this raises concerns: people with lower health literacy may be just as likely to consult generative AI as those with higher health literacy—despite being less equipped to critically assess its outputs. Given the known risk of AI hallucinations—fabricated or inaccurate information presented in a confident tone []—this could lead to misinformation and, in the worst case, harmful health decisions, as users with limited health literacy might find it difficult to distinguish between reliable and misleading content.

    Second, the lack of an association between self-reported health status and intention suggests that the current use of generative AI is not primarily driven by medical need or urgency. People do not seem more likely to consult generative AI when facing a health problem; rather, usage may occur proactively or even recreationally. This challenges assumptions that such tools are primarily used in response to a health issue, and it underscores the importance of understanding user motivations beyond immediate health concerns.

    Importantly, these patterns were largely consistent across all 4 countries, as confirmed by the measurement-invariant structural model. This cross-national consistency suggests that the psychological drivers of generative AI adoption in health contexts may transcend national boundaries and cultural differences, pointing to a universal set of adoption mechanisms.

    Limitations

Several limitations should be acknowledged. First, due to the cross-sectional nature of this study, no causal conclusions can be drawn. Future research should therefore aim to replicate these findings using experimental or longitudinal designs. Second, we relied on self-reported data, which may be subject to social desirability bias. Future work using behavioral data is thus warranted to validate these findings. Third, including additional predictors—such as individual differences or specific concerns—could provide deeper insights into the use of generative AI. Fourth, our comparative findings are based on data from only 4 countries, which limits the ability to conduct multilevel analyses. Also, as in all cross-sectional research, there is a risk of unmeasured third variables. In particular, we did not include AI trust and perceived AI risk. However, these constructs are conceptually close to performance expectancy, as trust reduces uncertainty about the system’s outputs and thereby enhances expected performance gains, whereas perceived risk erodes expected utility. In this sense, they are likely to be partially reflected in the performance expectancy construct already included in our model. That said, even though our model explains around 80% of the variance, trust and perceived risk could still suppress some of the predictors we have modeled. Thus, future research should include additional constructs outside the UTAUT2 framework []. Finally, health status was measured with a single self-rated item. While single-item measures of subjective health may not capture the full complexity of an individual’s medical condition, this approach is widely used in demographic and population health research. Prior work has demonstrated that the self-rated health item is a valid and reliable indicator, predicting key outcomes such as mortality, use of health services, and health expenditures in large-scale surveys []. Nevertheless, we acknowledge that a more fine-grained measure (eg, including specific chronic conditions or severity indices) could have provided additional insights, and future studies may benefit from applying such extended health measures.

    Conclusions

This study applied the UTAUT2 model to investigate the factors that drive the use of generative AI for health information–seeking. Although overall usage remains limited, our findings show that performance expectancy, facilitating conditions, habit, and hedonic motivation are positively associated with behavioral intentions. These patterns, observed across all 4 countries—Austria, Denmark, France, and Serbia—suggest that current users of generative AI are likely to be early adopters: individuals who are tech-savvy, curious, and open to innovation. This aligns with Rogers’ Diffusion of Innovations theory, which conceptualizes adoption as a gradual process beginning with a small, innovation-oriented segment of the population.

    The lack of significant effects for effort expectancy and social influence across all countries reinforces this interpretation: early adopters tend to base their decisions on personal evaluations rather than external opinions and are rarely deterred by usability concerns. Furthermore, the fact that behavioral intention was unrelated to health status or health literacy underscores that current usage is not driven by acute medical need or advanced health literacy, but rather by interest, convenience, and technological exploration.

    The cross-national consistency of these findings is particularly striking. Despite differences in health care systems, digital infrastructures, and culture, the same psychological and contextual factors influenced generative AI use in all countries surveyed. This suggests a shared adoption logic that transcends national boundaries—at least in the early stages of diffusion.

    Looking ahead, these insights help illuminate how generative AI might transition from a niche tool to a widely used resource. As the technology becomes more embedded in everyday life, broader segments of the population—the so-called early and late majority—will likely demand stronger assurances of trustworthiness, safety, and added value. To enable responsible and inclusive adoption, it is therefore crucial to reduce digital access barriers, enhance transparency, and implement safeguards against health misinformation, especially for users with limited health literacy.

    From a practical perspective, our findings suggest that communication strategies aiming to promote generative AI for health purposes should emphasize concrete benefits, ease of access, and even enjoyment. Rather than exclusively targeting individuals with chronic or urgent health needs, positioning generative AI as an engaging, low-barrier tool may broaden its appeal—reaching users who might otherwise be disengaged from traditional health information sources.

    In sum, generative AI holds significant potential as a future health information resource—but its trajectory will depend on how well we understand and support the evolving needs of its users across different adoption phases and contexts.

    The authors used ChatGPT (OpenAI) to support language editing of prewritten text sections (eg, to improve grammar and phrasing). All suggestions from the artificial intelligence tool were reviewed by the authors and revised or rejected as necessary. No content was generated solely by the artificial intelligence, and the authors remain fully responsible for the final text.

    The work was supported by Circle U [2024-09 – AIHEALTH].

    All data and supplementary materials related to this study are openly accessible via the Open Science Framework (OSF) at the following link [].

    None declared.

    Edited by Amaryllis Mavragani; submitted 08.Apr.2025; peer-reviewed by Armaun Rouhi, Dillon Chrimes, Sonish Sivarajkumar; accepted 27.Oct.2025; published 28.Jan.2026.

    © Jörg Matthes, Anne Reinhardt, Selma Hodzic, Jaroslava Kaňková, Alice Binder, Ljubisa Bojic, Helle Terkildsen Maindal, Corina Paraschiv, Knud Ryom. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.Jan.2026.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

    Continue Reading

  • Cognizant and Travel + Leisure Co. Renew Strategic Collaboration to Accelerate Digital Transformation

    Cognizant and Travel + Leisure Co. Renew Strategic Collaboration to Accelerate Digital Transformation

    The collaboration aims to modernize technology infrastructure and infuse AI to enhance member experiences

    TEANECK, N.J., Jan. 28, 2026 /PRNewswire/ — Cognizant (Nasdaq: CTSH) announced today the renewal of a multi-million-dollar strategic collaboration with Travel + Leisure Co. (NYSE: TNL), a leading leisure travel company. The extended collaboration will focus on accelerating the digital transformation of Travel + Leisure Co. by modernizing its technological infrastructure and infusing AI to deliver enhanced experiences for its members and owners.

    Under the agreement, Cognizant will leverage its extensive hospitality domain expertise to optimize the technology ecosystem at Travel + Leisure Co., with the goal of elevating digital service experiences for its travel club members and 800,000 owner families.

    “Renewing our partnership with Cognizant reflects the deep collaboration and mutual trust we’ve built over the years,” said Sy Esfahani, Chief Technology Officer at Travel + Leisure Co. “Cognizant’s broad technology expertise and global resources will propel our continued digital transformation, helping us deliver innovative solutions to service our members and guests at every touchpoint.”

Throughout the term of the agreement with Travel + Leisure Co., Cognizant will assist with modernizing the application landscape, strengthening infrastructure scalability and reliability, and harnessing data- and AI-driven capabilities.

    “We are thrilled to deepen our long-standing relationship with Travel + Leisure Co., a valued partner whose forward-looking vision aligns with our commitment to reimagine how the modern traveler interacts with technology,” said Anup Prasad, SVP and Consumer Business Head at Cognizant. “This expanded partnership reinforces our focus on delivering tailored digital transformation solutions that meet the evolving needs of the global leisure and hospitality industry.”

    About Cognizant

    Cognizant (Nasdaq: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we’re improving everyday life. See how at www.cognizant.com or @cognizant.

    About Travel + Leisure Co.

    Travel + Leisure Co. (NYSE:TNL) is a leading leisure travel company, providing more than six million vacations to travelers around the world every year. The company operates a portfolio of vacation ownership, travel club, and lifestyle travel brands designed to meet the needs of the modern leisure traveler, whether they’re traversing the globe or staying a little closer to home. With hospitality and responsible tourism at its heart, the company’s nearly 19,000 dedicated associates around the globe help the company achieve its mission to put the world on vacation. Learn more at travelandleisureco.com.

    For more information, contact:
    Katrina Cheung
    [email protected]

    SOURCE Cognizant Technology Solutions Corporation

    Continue Reading

  • CrowdStrike Named Customers’ Choice in 2026 Gartner® EPP Report

    CrowdStrike Named Customers’ Choice in 2026 Gartner® EPP Report

    CrowdStrike has the most 5-star ratings of any Customers’ Choice vendor and is the only vendor named a Customers’ Choice in every iteration of the Voice of the Customer for EPP report since its launch

    AUSTIN, Texas – January 28, 2026 – CrowdStrike (NASDAQ: CRWD) today announced it has been named the Customers’ Choice in the 2026 Gartner Peer Insights™ ‘Voice of the Customer’ for Endpoint Protection Platforms report. CrowdStrike received the most 5-star ratings of any Customers’ Choice vendor with a 97% Willingness to Recommend score, based on 800 overall responses as of November 2025. CrowdStrike is the only vendor named a Customers’ Choice in every iteration of the Voice of the Customer for Endpoint Protection Platforms report since its launch, earning this recognition six times. 

    “The strongest validation in cybersecurity comes from customers,” said Elia Zaitsev, chief technology officer at CrowdStrike. “To me, this recognition reflects what teams value most in endpoint security: simple deployment, low operational overhead, and protection they can rely on to stop breaches. That day-to-day experience is why organizations continue to trust CrowdStrike.”

    What Customers Are Saying

    Here is a sampling of our reviews:

    • Best in Class Detection and Ease of Use: “The tool is the best on the market and does exactly what is expected… easy to deploy and maintain.” – System Security Manager, Services (non-Government) Industry
    • CrowdStrike Offers Strong Threat Protection with Simple Deployment and Low CPU Usage: “I decided that we needed the best protection in the market and it made sense to select CrowdStrike. Very easy to deploy and also low CPU demand. High protection levels from new threats.” – Director, IT Security and Risk Management, Retail Industry
    • Seamless Deployment and Strong Detection Capabilities Highlight CrowdStrike Experience: “My overall experience has been excellent. As a previous customer for several years, I have brought CrowdStrike into several organizations. The main need has been to detect novel malicious and anomalous endpoint behavior. After evaluating several vendors, CrowdStrike was the clear winner… As a CISO, I have peace of mind knowing I can verify its monitoring and blocking.” – CISO, IT Services Industry


    Additional Resources

    • To learn more about CrowdStrike’s recognition in the 2026 Gartner Peer Insights™ ‘Voice of the Customer’ for Endpoint Protection Platforms report, visit our website and read our blog.
    • To learn about CrowdStrike’s recognition as a Leader for the sixth consecutive time in the 2025 Gartner® Magic Quadrant™ for Endpoint Protection Platforms, visit our website and read our blog.
    • To learn more about CrowdStrike Falcon Endpoint Security, visit our website.


    GARTNER is a registered trademark and service mark, and MAGIC QUADRANT and PEER INSIGHTS are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

    Gartner Peer Insights content consists of the opinions of individual end users based on their own experiences, and should not be construed as statements of fact, nor do they represent the views of Gartner or its affiliates. Gartner does not endorse any vendor, product or service depicted in this content nor makes any warranties, expressed or implied, with respect to this content, about its accuracy or completeness, including any warranties of merchantability or fitness for a particular purpose.

    About CrowdStrike

    CrowdStrike (NASDAQ: CRWD), a global cybersecurity leader, has redefined modern security with the world’s most advanced cloud-native platform for protecting critical areas of enterprise risk – endpoints and cloud workloads, identity and data.

    Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities.

    Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value.

    CrowdStrike: We stop breaches.

    Learn more: https://www.crowdstrike.com/

    Follow us: Blog | X | LinkedIn | Instagram

    Start a free trial today: https://www.crowdstrike.com/trial

    © 2026 CrowdStrike, Inc. All rights reserved. CrowdStrike and CrowdStrike Falcon are marks owned by CrowdStrike, Inc. and are registered in the United States and other countries. CrowdStrike owns other trademarks and service marks and may use the brands of third parties to identify their products and services.

    Media Contact

    Jake Schuster

    CrowdStrike Corporate Communications

    press@crowdstrike.com

     



1. Gartner, Voice of the Customer for Endpoint Protection Platforms, Peer Contributors, January 23, 2026



    Continue Reading

  • Stock market today: Live updates

    Stock market today: Live updates

    Traders work at the New York Stock Exchange on Jan. 27, 2026.

    NYSE

    The S&P 500 reached a milestone level on Wednesday, hitting 7,000 for the first time, before pulling back ahead of the Federal Reserve’s interest rate decision and earnings reports from major tech companies.

    The broad market index was last down 0.1% after advancing 0.3% to an all-time intraday high of 7,002.28 earlier in the session. The Nasdaq Composite traded around the flatline, as did the Dow Jones Industrial Average.

[Chart: S&P 500, 1-year]

The broader market’s earlier rise was bolstered by gains in chip stocks following upbeat earnings results. Seagate Technology shares jumped more than 19% after the storage infrastructure company’s second-quarter earnings and revenue topped analyst expectations, with CEO Dave Mosley citing strong demand for artificial intelligence data storage. Additionally, semiconductor equipment giant ASML reported record orders and issued rosy 2026 guidance due to the AI boom. However, ASML shares later reversed their earlier gains on Wednesday.

    Beyond those earnings, China has given approval to ByteDance, Alibaba and Tencent to buy Nvidia’s H200 AI chips, Reuters reported Wednesday. Nvidia shares rose more than 1%. Fellow semiconductor names Micron Technology and Taiwan Semiconductor Manufacturing saw gains as well. The VanEck Semiconductor ETF (SMH) moved about 2% higher and hit a new 52-week high.

    “The story for 2023, 2024, most of 2025 was AI-related semiconductors — awesome, great demand. All the other semiconductor-demand sources, whether that be auto or industrial or telecom, etc. — weak. That has shifted now,” Jed Ellerbroek of Argent Capital Management told CNBC. “Demand is well in excess of supply really everywhere at this point within semiconductors,” the portfolio manager also said.

    The rally failed to broaden past chip stocks, however, as the S&P 500 was eventually dragged lower heading into the Fed decision.

    The central bank is widely expected to keep its benchmark interest rate steady at a target range of 3.5% to 3.75%, but traders will be seeking hints on longer-term changes to monetary policy. Fed funds futures trading suggests two quarter percentage point cuts by the end of 2026, according to the CME FedWatch Tool.

    Earnings from a slate of major technology companies are on deck. Microsoft, Meta Platforms and Tesla are set to post their quarterly financial results Wednesday after the closing bell. Apple will post its results on Thursday.

    Outside tech, Starbucks traded higher by 2% after the coffee chain reported that its traffic grew for the first time in two years. Its first-quarter revenue also beat expectations, while its earnings missed.

    Continue Reading

  • Who holds the keys? Navigating legal and privacy governance in third-party AI API access

    Who holds the keys? Navigating legal and privacy governance in third-party AI API access

    In today’s rapidly evolving artificial intelligence environment, organizations are increasingly relying on third-party application programming interfaces from platforms like OpenAI, Google and Amazon Web Services to embed advanced features into their products. These APIs offer significant benefits, particularly in terms of time and cost savings, by enabling companies to leverage existing technology rather than building solutions from scratch. 

    While this approach can speed up deployment and reduce the burden of managing complex infrastructure, it also raises key legal and privacy issues — like how data flows are controlled, who is responsible for data security, and how licensing restrictions are enforced. The situation becomes even more challenging when the procuring organization opts to use its own API keys instead of those provided by the AI feature developer.

    Data flow and responsibilities when developers access AI services on behalf of a procuring organization

    When developers leverage third‑party AI APIs to build and deliver their own AI features, they often do so using their own licensed API keys to access those services. Prompts — for example, data queries, order‑processing commands, or report generation instructions — are sent from the procuring organization’s systems to the developer’s platform and then forwarded to the API provider. The provider applies its AI models and returns outputs, which the developer delivers to the procuring organization.
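As a rough illustration of this forwarding pattern, the sketch below shows how a developer-side service might pass a customer prompt to an AI provider under the developer's own key. It assumes the OpenAI Python SDK purely as an example provider; the environment variable name, the model identifier, and the forward_prompt helper are hypothetical and not drawn from this article.

```python
# Minimal sketch (hypothetical), assuming the OpenAI Python SDK as the example
# provider: the developer's platform receives a prompt from the procuring
# organization and forwards it to the API provider under the developer's own key.
import os
from openai import OpenAI

# The developer holds the key and therefore decides what is sent to the provider.
client = OpenAI(api_key=os.environ["DEVELOPER_OPENAI_API_KEY"])  # hypothetical variable name

def forward_prompt(customer_prompt: str) -> str:
    """Forward a customer prompt to the provider and return the model output."""
    # Developer-supplied template: one way the developer "combines or enriches"
    # the prompts it collects before they reach the provider.
    messages = [
        {"role": "system", "content": "You are an order-processing assistant."},
        {"role": "user", "content": customer_prompt},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an illustrative assumption
        messages=messages,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(forward_prompt("Summarise the status of order #12345."))
```

The relevant point for the controller/processor analysis is that, in this setup, the developer rather than the procuring organization decides what ultimately reaches the provider.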

In this process, the developer assumes the role of the data controller because it determines the purpose and means of processing: it decides which prompts to collect, how to combine or enrich them (for example, with developer-supplied templates), and how outputs are used and delivered. As controller, the developer must ensure lawful processing, provide transparency and implement appropriate technical and organizational measures — such as encryption, access controls, logging and regular audits — to protect personal data throughout the life cycle in line with the EU General Data Protection Regulation.

    If there is sensitive data involved — such as personal data under the GDPR or personal health information under the Health Insurance Portability and Accountability Act — the developer, who has control over its API keys, can apply appropriate privacy-enhancing technologies before transmitting. These include measures like anonymization, pseudonymization, zero data retention endpoints, and in-flight filtering, to prevent identification and reduce risk, thereby supporting compliance with applicable data protection laws. 
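The following minimal sketch shows, purely for illustration, what in-flight filtering of direct identifiers might look like before a prompt is transmitted. It assumes simple pattern matching is acceptable for the example; production systems would rely on dedicated PII-detection and pseudonymization tooling rather than the naive regular expressions used here.

```python
# Illustrative sketch only: naive regex-based in-flight filtering that redacts
# obvious direct identifiers (emails, phone numbers) before a prompt leaves the
# developer's infrastructure. Real deployments would use dedicated PII-detection
# or pseudonymization tooling and, where offered, zero-data-retention endpoints.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(prompt: str) -> str:
    """Replace common direct identifiers with placeholder tokens."""
    prompt = EMAIL_RE.sub("[EMAIL]", prompt)
    prompt = PHONE_RE.sub("[PHONE]", prompt)
    return prompt

raw = "Patient Jane Roe (jane.roe@example.com, +44 20 7946 0958) reports chest pain."
print(redact(raw))  # -> Patient Jane Roe ([EMAIL], [PHONE]) reports chest pain.
```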

    Once the developer submits prompt data to the API provider, the provider acts as a data processor and is responsible for processing data only in accordance with the developer’s documented instructions. To ensure proper governance, the parties should establish a written agreement — such as a data processing agreement that clearly outlines the scope and lawful purposes of processing, as well as the provider’s obligations regarding data retention and deletion. 

    The agreement should also require the provider to maintain records of processing activities, cooperate with audits, assist the developer with data subject requests and breach notifications, and implement appropriate safeguards — including encryption, access controls, logging, and incident detection/response — all in compliance with GDPR requirements.

    Shifting dynamics: Customers bringing their own API keys to developer AI features

    As organizations increasingly use AI internally — whether embedding off‑the‑shelf features or developing bespoke capabilities — there is a good chance they already hold API licenses for major platforms such as OpenAI or Azure.

    As such, it is increasingly common for procuring organizations to ask that the AI feature developer use the organization’s own API keys to access the feature. This gives the procuring organization more direct control over the data, use and costs associated with the API. However, this shift significantly impacts the role and control of the AI feature developer. 

When the procuring organization uses its own API keys to access a developer AI feature, responsibility for transmitting, storing and controlling access to the data mostly shifts to the procuring organization. This means the developer no longer has full visibility into how the data is handled once it leaves their infrastructure. As a result, it becomes much harder for the developer to verify whether safeguards — like encryption, access controls or quick data deletion — are properly in place, or to enforce policies that prevent misuse or breaches.
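To make the shift in control concrete, here is a hypothetical configuration sketch showing how the same feature code might run with either a developer-managed key or a customer-supplied key. The class and field names are illustrative; the point is that key provenance, not the code path, largely determines who governs transmission, retention, and logging once data leaves the developer's systems.

```python
# Hypothetical configuration sketch: the same AI feature can run with either a
# developer-managed key or a customer-supplied ("bring your own") key. Who
# supplies the key largely determines who controls transmission, retention,
# and access once data leaves the developer's infrastructure.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIFeatureConfig:
    api_key: str
    key_owner: str     # "developer" or "customer"
    log_prompts: bool  # whether the developer retains prompt logs

def build_config(customer_key: Optional[str], developer_key: str) -> AIFeatureConfig:
    """Pick the key to use; responsibilities follow whoever supplies it."""
    if customer_key:
        # Customer-managed key: the developer loses visibility once data leaves
        # its systems, so it should minimise its own retention and rely on the
        # contract to allocate security and compliance duties.
        return AIFeatureConfig(api_key=customer_key, key_owner="customer", log_prompts=False)
    # Developer-managed key: the developer keeps controller-style oversight
    # and can enforce its own safeguards end to end.
    return AIFeatureConfig(api_key=developer_key, key_owner="developer", log_prompts=True)

# Example: a procuring organization supplies its own key.
cfg = build_config(customer_key="sk-customer-example", developer_key="sk-developer-example")
print(cfg.key_owner, cfg.log_prompts)  # -> customer False
```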

    Because of this, it’s crucial to have clear, well-structured contracts between the developer and the organization. These should lay out who’s responsible for what — covering data security, liability and compliance — and reflect the actual level of control each party has over the data and the API.

    Key takeaways

    Effectively managing third‑party AI integrations requires balancing the benefits of rapid deployment and cost savings with the obligation to address privacy and data protection exposures. 

    Whether data flows go through company‑controlled APIs or customer‑managed keys, robust data‑governance frameworks ensure risks are equitably allocated and information is safeguarded in line with applicable jurisdictional requirements and the sensitivity of the data involved. 

    Ultimately, clear contractual responsibilities, active oversight and strong governance are essential when deploying AI features via third‑party APIs, especially as organizations increasingly want to use and control access to the AI capabilities they procure.

    Rachel Webber, AIGP, CIPP/E, CIPP/US, CIPM, CIPT, FIP, is senior counsel for a software as a service and AI organization. 

    Continue Reading

  • Insurance Newsletter – January 2026 | Insights

    1. Refinements to Solvency II – Third-Country Insurance Branches
    2. Solvency II Reporting and Disclosure: Post-implementation Amendments
    3. UK Berne Financial Services Agreement Guidelines
    4. Enhancing Banks’ and Insurers’ Approaches to Managing Climate-Related Risks
    5. Alternative Life Capital: Supporting Innovation in the Life Insurance Sector
    6. FCA Simplifies Complaints Reporting Process
    7. FCA and PRA Announce Plans to Support Growth of Mutuals Sector
    8. FCA Simplification of the Insurance Rules
    9. FCA Confirms Final Guidance to Tackle Serious Non-financial Misconduct in Financial Services
    10. Reform of Anti-Money-Laundering and Counter-Terrorism Financing Supervision
    11. Lloyd’s Market Bulletin: Update to the Agency Circumstances Procedure
    12. EIOPA Issues Guidance on Group Supervision

    1. Refinements to Solvency II – Third-Country Insurance Branches

    PRA Proposal of Further Refinements to Solvency II Regarding Third-Country Insurance Branches (CP20/25)

    Following Brexit, the UK Government has worked with the Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) to implement Solvency II into the UK’s financial services regulatory framework. Implementation of the amended regime (Solvency UK) was largely completed in December 2024, although the PRA and FCA continue to implement amendments as necessary.

    From September to December 2025, the PRA consulted on proposed changes to the treatment of third-country insurance branches.

    The primary change proposed is the increase of the subsidiarisation threshold from £500m to £600m in liabilities covered by the Financial Services Compensation Scheme. The PRA believes that inflation has artificially caused some branches to reach the liability threshold and therefore unnecessarily become UK subsidiaries.

    In addition, third-country branches are required to notify the PRA in the event they anticipate reaching the subsidiarisation threshold within three years.

    The PRA also confirmed a series of minor PRA Rulebook amendments, namely:

    • Removal of volatility adjustment eligibility for branches;
    • Absorption of two modifications by consent into the PRA Rulebook;
• Reinstatement of reporting templates IR.19.01.01 (non-life-insurance claims) and IR.20.01.01 (development of the distribution of the claims incurred) for category 3 and 4 branches and discontinuation of quarterly reporting for these branches.

The changes to the subsidiarisation threshold will take effect upon publication of the relevant policy statement (expected in the first half of 2026). The other changes noted will take effect on December 31, 2026.

    2. Solvency II Reporting and Disclosure: Post-implementation Amendments

    PRA Consultation on UK Solvency II Reporting and Disclosure: Post-implementation Amendments (CP22/25)

    As part of the continued review of Solvency UK, the PRA has opened consultation on amendments to the reporting and disclosure requirements under the regime.

    Key proposals:

    •  Introduction of a requirement for third-country branches to report total projected Financial Services Compensation Scheme liabilities data to enhance risk visibility;
    • Introduction of new and amended templates for non-life income/expenditure/business-line reporting;
    • Clarification on paired asset, derivatives, and dividends reporting and removal of certain quarterly items considered non-essential;
    • Transfer of the reporting format of the Matching Adjustment Asset and Liability Information Return templates to XBRL (presently formatted in Excel);
    • Removal of duplicate reporting, and
    • Simplification of reporting where proportional reinsurance data is unavailable at a subclass level.

    Consultation closes on March 4, 2026. The implementation date of any resulting changes is anticipated to be on or after December 31, 2026.

    3. UK Berne Financial Services Agreement Guidelines

    PRA/FCA Guidelines on the UK Berne Financial Services Agreement (BFSA)

    In November 2025, the PRA and FCA jointly published guidelines for providing services under the BFSA.

    UK Insurance Firms

    Eligible UK insurance firms must notify the Swiss Financial Market Supervisory Authority (FINMA) with specified information and be placed on the FINMA register before providing services.

    To be eligible, UK insurance firms (insurers and intermediaries) must:

    • Be incorporated or formed under UK law, a UK resident, or a UK branch of a Covered Swiss Financial Services Supplier;
    • Be authorized or supervised as a UK insurer or insurance intermediary; and
    • Supply covered services for non-Swiss risks.

    To be eligible, UK insurers must:

    • Be subject to Solvency II (excluding UK branches of Covered Swiss Financial Services Suppliers);
    • Meet the solvency requirements without capital relief measures;
    • Fulfil company-specific management buffer requirements;
    • Have no life insurance liabilities exceeding 10% of the total best estimate liabilities under Solvency II (without capital relief measures); and
    • Ensure staff are knowledgeable of relevant Swiss legislation.

    Clients to whom a UK insurer may provide services must be incorporated in Switzerland and meet at least two of the following requirements:

    1.  Net turnover in excess of CHF 40 million
    2.  Balance sheet total in excess of CHF 20 million
    3.  In excess of 250 employees

    4. Enhancing Banks’ and Insurers’ Approaches to Managing Climate-Related Risks

    PRA Policy Statement: Enhancing Banks’ and Insurers’ Approaches to Managing Climate-Related Risks (PS25/25)

    On December 3, 2025, the PRA published its final policy on banks’ and insurers’ (Firms) management of climate-related risks, following consultation in April 2025.

    The final policy builds on the PRA’s 2019 expectations for Firms’ management of climate-related risks, providing greater clarity and aligning the expectations with international standards.

    Key changes to the final policy (from the draft proposals):

    • Further guidance on how Firms should apply expectations proportionately, reflecting their exposure to material climate-related risks and the scale and complexity of their business.
    • Clarification that the six-month review period proposed is not an implementation timeline but rather a period during which Firms would be expected to conduct an internal review of their current status in meeting the final policy expectations.
    • Firms may integrate climate-related risks into existing risk registers/governance frameworks rather than establishing new ones if risk identification remains robust.
• Confirmation of the PRA’s view that existing Solvency Capital Rules “provide sufficient flexibility for an insurer to take account of climate-related risks in a way that it considers appropriate.” That said, insurers may also exercise discretion in making appropriate adjustments for internal models and market prices of climate-related risks.

    The policy took effect upon publication.

    5. Alternative Life Capital: Supporting Innovation in the Life Insurance Sector

    PRA Discussion Paper on Alternative Life Capital: Supporting Innovation in the Life Insurance Sector (DP2/25)

    The PRA is seeking feedback on potential policy changes that could enable life insurers to transfer defined tranches of risk to the capital markets. At this stage, no specific policy changes are proposed. Rather, the PRA is gathering stakeholder feedback on how to facilitate life insurers’ access to alternative forms of capital that do not derive from equity or debt issuance, with particular focus on identifying regulatory barriers to capital entering the sector. Feedback is sought by February 6, 2026.

    The PRA indicates that it is open to a broad range of innovative structures – including potential reforms to the Insurance Special Purpose Vehicle (ISPV) framework and adaptation of mechanisms used in other markets, such as banking.

    Nevertheless, the PRA has highlighted a non-exhaustive set of risk transformation examples (below) and is seeking views on their feasibility/attractiveness and associated risks: i) ISPVs; ii) significant risk transfers (SRTs); and iii) life insurance sidecars and joint ventures.

    ISPVs

    The PRA acknowledges that the current UK regime is targeted towards non-life and short-term risks, which may present challenges when considering its application to longer-term insurance liabilities.

    SRTs

The PRA invites views on the potential adaptation of SRTs (well established in the banking sector) for use by life insurers, noting uncertainty over long-term outcomes and that the degree/effectiveness of risk transfer may vary over time depending on asset performance.

    Life Insurance Sidecars and Joint Ventures

    The PRA notes that some alternative structures (including strategic partnerships and joint ventures) are more prevalent internationally and have been developing in the UK, and it invites views on the potential use of life insurance sidecars.

    The Discussion Paper also sets out six overarching principles intended to guide the PRA’s consideration of alternative life capital.

    6.  FCA Simplifies Complaints Reporting Process

    FCA Simplifies Complaints Reporting Process (PS25/19)

    The FCA has confirmed plans to streamline the way firms report complaints. Five existing complaints returns will be replaced by a single consolidated return. The first reporting period under the new process will run from January 1, 2026 to June 30, 2027.

    Insurance sector impact:

    • All firms will now report their complaints data on a fixed six-month and calendar-year basis. This replaces the use of each firm’s Accounting Reference Date.
    • Complaints reporting will be based on firms’ permissions; firms will only need to complete the sections of the new return relevant to their regulated activities.
    • Group reporting has been removed such that firms must now submit complaints data at the individual legal entity level.
• The FCA is imposing a threshold of 500 complaints or more for insurers (and banks), above which it will publish the relevant firm’s data.
    • Clarification has been provided on the scope of product categories, insurance complaint issues, and the parameters of insurance permissions for the purposes of insurance consolidated complaints returns.

    7. FCA and PRA Announce Plans to Support Growth of Mutuals Sector

    Regulators Announce Plans to Support Growth of Mutuals Sector (Mutuals Landscape Report)

    In December 2025, the PRA and FCA jointly published the Mutuals Landscape Report (the Report) and announced a series of measures designed to support the growth of the mutuals sector.

The Report focuses on the mutuals sector as a whole, notes the UK Government’s commitment to doubling its size, and sets out plans for credit unions, building societies, and mutual insurers.

    The Report notes challenges specifically for mutual insurers, including the following:

• Smaller insurers face issues of economies of scale and business model sustainability, augmented in recent years by rising operational costs;
• Insurance mutuals face difficulties in capital raising when looking to invest and grow; and
• There are legislative barriers to capital management.

    The new targeted initiatives to support mutual insurers include the following:

    1. The establishment of a new FCA Mutual Societies Development Unit

    This will act as a central hub to help mutuals (including insurers) navigate policy and regulatory change by offering expertise on legislation and regulatory processes.

    2. Reduced barriers to entry

    The FCA is to provide free preapplication support for firms wishing to form/convert to mutual societies.

    The application processing window for new societies is also to be reduced from 15 to 10 working days, intended to incentivise more society registrations.

    3. The launch of a joint PRA and FCA Scale-up Unit to provide regulatory support to eligible firms, including mutuals that are looking to grow rapidly.

     

    8. FCA Simplification of the Insurance Rules

    FCA Policy Statement: Simplifying the Insurance Rules (PS25/21)

    The FCA has confirmed various measures that simplify regulations for firms across the insurance and funeral plans sectors and announced further changes affecting insurance firms in 2026 to support growth and innovation.

    Key final rules:

    • Firms now have the option to appoint a single lead manufacturer responsible for all Product Intervention and Product Governance Sourcebook 4 (insurance product governance) obligations.
    • Removal of the requirement for insurance product manufacturers to review insurance products at least every 12 months; firms must now determine the appropriate review frequency based on risk and potential customer harm.
    • Removal of the 15-hour minimum training and competence requirement for insurance employees; firms are now permitted to tailor training and competence arrangements to business needs.
    • Removal of the existing notification and annual reporting requirements regarding employers’ liability insurance, although firms must continue to notify the FCA of significant rule breaches.
• The FCA has clarified its expectations of firms working together to manufacture products or services under the Consumer Duty. Firms are not required to have a say in each other’s decisions, nor is joint decision-making or an even allocation of responsibilities required.

    Key proposed changes:

    • Consulting on changes to the client categorisation rules, in line with the Consumer Duty.
• Consultation on disapplying the Consumer Duty to non-UK business by the end of Q2 2026.
• Review of core FCA Handbook definitions to promote “consistency and clarity.”

    9. FCA Confirms Final Guidance to Tackle Serious Non-financial Misconduct in Financial Services

    FCA Confirms Final Guidance to Tackle Serious Non-financial Misconduct in Financial Services (PS25/23)

    In July 2025, the FCA updated its rules to more broadly capture non-financial misconduct (NFM) across banks and non-banks (including insurers) that are subject to the Code of Conduct (COCON) and the Fit and Proper test (FIT). In December 2025, the FCA finalised its regulatory framework on NFM and provided further guidance on how to determine when there has been a breach and how to proceed thereafter.

    The FCA has introduced a new COCON rule that extends NFM to include “unwanted conduct that has the purpose or effect of violating a colleague’s dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for them.”

    Not all poor behaviour satisfies the regulatory threshold. But serious NFM is considered a conduct breach and requires declaration to the FCA. It may result in enforcement action against firms and/or FIT consequences for individuals.

The rules do not require employers to monitor employees’ private lives; external conduct will be relevant only where there is a risk of future regulatory breach(es) or the conduct is serious enough to erode public confidence.

    The guidance will come into effect on September 1, 2026 and will not have retrospective effect.

    Please see the corresponding Sidley briefing note for further information.

    10. Reform of Anti-Money-Laundering and Counter-Terrorism Financing Supervision

    HM Treasury Consultation Response: Reform of the Anti-Money-Laundering and Counter-Terrorism Financing Supervision Regime

    In 2022, HM Treasury undertook review of the UK’s anti-money-laundering and counter-terrorism financing (AML/CTF) supervisory system. The review concluded that weaknesses in supervision may require structural reform. In summer 2023, HM Treasury then consulted on reform of the supervisory regime. As part of its consultation, HM Treasury requested feedback on four possible models for reform. In October 2025, HM Treasury published its response to this consultation.

    At present, the supervisory system comprises three supervisors: the FCA; His Majesty’s Revenue & Customs (HMRC); and 22 private sector professional body supervisors (PBSs).
The UK Government has decided to proceed with model 3, the creation of a single professional services supervisor (SPSS). Under this model, the FCA will be granted responsibility for all AML/CTF supervision for the legal and accountancy sectors and trust and company service providers. As the SPSS, the FCA will carry out these functions independently of HM Treasury and will in practice replace the PBSs and HMRC in AML/CTF supervision.

    The UK Government has confirmed that efforts are underway to introduce the necessary primary legislation and establish a transition plan but has not committed itself to a strict timetable.
    A separate consultation on SPSS specific powers closed in December 2025, with a response anticipated in early 2026.

Following the 2022 review, HM Treasury has also proposed reform of the Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017 and related legislation via the draft Money Laundering and Terrorist Financing (Amendment and Miscellaneous Provisions) Regulations 2025 (the SI). The instrument focuses on targeted amendments to close regulatory loopholes, address proportionality concerns, and account for evolving risks in relation to AML/CTF. For example, the SI provides clarity on the scope of “unusually complex or unusually large” transactions for the purposes of enhanced due diligence – confirming that this measure is relative to what is standard for the sector/nature of the transaction.

The final statutory instrument is expected to be laid before Parliament in early 2026.

    11. Lloyd’s Market Bulletin: Update to the Agency Circumstances Procedure

    Lloyd’s Market Bulletin: Update to the Agency Circumstances Procedure (Ref: Y5474)

    Introduced in 2000, Part A of the Agency Agreements (Amendment No.20) Byelaw amended the standard managing agent’s agreement and provided that no transaction, arrangement, relationship, act or event which would or might otherwise be regarded as constituting or giving rise to a contravention of a managing agent’s fiduciary obligations shall be regarded as constituting a contravention if it occurs in circumstances, and in line with requirements, specified by the Council (the Agency Circumstances Procedure).

    In December 2025, Lloyd’s confirmed amendments to the ballot requirement of the Agency Circumstances Procedure.

    Previously, when a certain proportion of syndicate members objected to a specified proposal by the syndicate’s managing agent, it was required that a ballot be held of unaligned members (seeking approval of the proposal, usually following efforts by the managing agent to address concerns).

    December 2025 changes:

    • The ballot, if required, must be undertaken by unaligned syndicate members who are not Related Persons (as defined in Lloyd’s Market Bulletin, Agency Circumstances Procedure, Ref: Y3439, 2004).
    • The managing agent must offer a postal option for voting.

The managing agent has discretion to offer the option to vote by email or other electronic means, so long as the integrity of the voting process is maintained.

    12. EIOPA Issues Guidance on Group Supervision

    European Insurance and Occupational Pensions Authority (EIOPA) Final Report on Guidelines on Exclusion of Undertakings From the Scope of Group Supervision

    EIOPA has published guidelines on exclusions from group supervision, specifying the conditions under which group supervisors may exclude undertakings from group supervision. The Guidelines will become applicable on January 30, 2027.

    Exclusions are only permissible in “exceptional circumstances” and must be duly justified to EIOPA and, where applicable, to the other supervisory authorities concerned.

    Guideline 1: Supervisors should not exclude if the entity (i) has material intragroup transactions, (ii) has significant influence/coordination over group insurers, or (iii) is needed to understand group risk.

    Guideline 2: Supervisors should only consider exclusion based on legal impediments to information exchange between authorities where (i) the undertaking is located in a third country with no equivalence decision; (ii) the undertaking is not party to the International Association of Insurance Supervisors Multilateral Memorandum of Understanding; and (iii) the entity is small relative to the group and its risks are already captured and managed at sole level. Before excluding, supervisors should first consider signing a memorandum of understanding with the third-country supervisor.

Guideline 3: Where exclusion would lead to non-application of group supervision for an undertaking under Article 214(2) point (B) or (C), exclusion should occur only if the entity is small relative to the group, its risks are already captured and managed at individual entity level, and (if a parent undertaking) the risks arise almost entirely from the group.

    Guideline 4: Ultimate parents should only be excluded if the parent is not in any of the circumstances set out in Guideline 1; all group risks arising from all other undertakings and intra-group transactions that could affect the undertakings are fully captured at the intermediate level; and the supervisor has adequate information on parent-level group transactions.

    Guideline 5: Exclusions must be reassessed and monitored – including ongoing review of intragroup transactions – to ensure conditions remain met.

    Continue Reading

  • Green Assist: transforming agricultural waste into cosmetics and food

    Green Assist: transforming agricultural waste into cosmetics and food

    Plinius Labs, a Belgian research company in the bioeconomy sector, is transforming agricultural waste into bio-based premium ingredients for cosmetics and food. The company has developed AMPLE – a new process to extract natural compounds from flax shives – and sought Green Assist support to scale it up and overcome financial and industrialisation challenges.

    Plinius Labs is built on the belief that plant waste is full of useful natural chemicals. Too often treated as simple waste, biomass contains molecules that serve essential roles in nature – from protection to healing. Plinius Labs works to extract and valorise this potential, combining 40+ years of green chemistry expertise with cutting-edge R&D to develop natural ingredients and help industries reduce their environmental footprint.

    The company’s mission is therefore to replace petroleum‑derived additives with greener alternatives and, to move forward, it needed to build a pilot plant and attract funding. The team faced challenges around investment planning, market positioning, and how to scale the AMPLE process efficiently and sustainably. That’s when they turned to Green Assist for help.

    Between July and October 2025, Green Assist provided expert advisory support tailored to the project’s needs. This included guidance on developing a business model, refining the company’s value proposition, and identifying potential investors and clients. The expert also helped the team strengthen their financial strategy and assess how to grow in line with circular economy goals.

    Thanks to Green Assist, Plinius Labs is now ready to implement its pilot project and bring its eco-innovation closer to the market. With clear business and financial plans in place, the company is better equipped to demonstrate the environmental and commercial value of its work and to form key partnerships.

    “Green Assist boosted the scaling-up of Plinius Labs. Potential investors were successfully identified as well as target customers for partnering contracts in the development of our proprietary process to produce bio-based cosmetic components. The assigned expert showed dedication and expertise, keeping our team focused and enhancing the market credibility of Plinius Labs’ core competences in green chemistry,” said Yves Boonen, CEO of the company.

     

    Green Assist aims to build a pipeline for high-impact green investment projects in sectors related to biodiversity, natural capital and circular economy, as well as in non-environmental sectors. 

Learn more about how Green Assist can help you get free tailored support for your green project or contact us at cinea-green-assist[at]ec[dot]europa[dot]eu. To request advisory services from Green Assist, simply fill out this short form.

    Continue Reading