Category: 3. Business

  • NVIDIA Sets Conference Call for Fourth-Quarter Financial Results

    Written CFO Commentary to Be Provided Ahead of Call

    SANTA CLARA, Calif., Jan. 28, 2026 (GLOBE NEWSWIRE) — NVIDIA will host a conference call on Wednesday, February 25, at 2 p.m. PT (5 p.m. ET) to discuss its financial results for the fourth quarter and fiscal year 2026, which ended January 25, 2026.

    The call will be webcast live (in listen-only mode) on investor.nvidia.com. The company’s prepared remarks will be followed by a Q&A session, which will be limited to questions from financial analysts and institutional investors.

    Ahead of the call, NVIDIA will provide written commentary on its fourth-quarter results from Colette Kress, the company’s executive vice president and chief financial officer. This material will be posted to investor.nvidia.com immediately after the company’s results are publicly announced at approximately 1:20 p.m. PT.

    The webcast will be recorded and available for replay until the company’s conference call to discuss financial results for its first quarter of fiscal year 2027.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Investor Relations              Corporate Communications
    NVIDIA Corporation              NVIDIA Corporation
    ir@nvidia.com                   press@nvidia.com

    © 2026 NVIDIA Corporation. All rights reserved. NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries.

  • Journal of Medical Internet Research

    Background

    When people look for health information today, they no longer only consult physicians, pharmacists, or search engines. Increasingly, they also encounter generative artificial intelligence (AI) tools such as ChatGPT or the World Health Organization (WHO)’s chatbot Sarah, which simulate human-like conversations and provide instant responses. These tools promise a new way of accessing medical knowledge: fast, convenient, and interactive. At first glance, this accessibility seems to hold great potential for reducing barriers to health information, thereby directly supporting digital health equity—defined as equitable access to and use of digital health information technology that supports informed decision-making and enhances health [].

    However, the picture is more complex. On the one hand, generative AI can offer cost-free entry points (eg, basic versions of ChatGPT or automatically displayed answers in Google search via Google’s Gemini), deliver content in multiple languages, and rephrase complex medical concepts into more understandable terms. In doing so, it could strengthen patient education, address health inequalities, and help bridge communication gaps between citizens and health care providers [,]. On the other hand, effective use still depends on internet-enabled devices and adequate digital skills, which are not equally distributed. As a result, the very technology that appears open and inclusive may also risk exacerbating existing digital divides [].

    Moreover, unlike other types of information, health-related questions are often sensitive and personal. At the same time, the inner workings of generative AI remain opaque, and the accuracy of its outputs is not guaranteed []. All these tensions raise important questions about adoption: Who is most likely to turn to generative AI for health information, and what factors shape this intention? Furthermore, since health communication practices and digital infrastructures differ across countries, cross-national research is urgently needed.

    To address these questions, this study draws on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) []. The model proposes that performance expectancy, effort expectancy, facilitating conditions, social influence, habit, and hedonic motivation shape technology use. We extend this framework by also examining the roles of health literacy and health status in predicting intention to use generative AI for health information. Using cross-national survey data from Austria, Denmark, France, and Serbia, we investigate the drivers of adoption to shed light on both individual and contextual factors that may guide the diffusion of generative AI in health contexts.

    Generative AI as Novel Health Information Source

    Generative AI constitutes a potentially disruptive force in the health information ecosystem []. However, despite its rapid advancement and widespread availability, empirical research on its role in health information–seeking remains limited []. At the same time, broader trends highlight the ongoing digitalization of health-related knowledge acquisition. A representative survey conducted in Germany in 2019 revealed that only 48% of respondents consulted a medical professional for their most recent health issue, while 1 in 3 turned first to the internet []. Similar findings show that online sources—particularly search engines—are the primary means of accessing health information, both for caregivers and the general population [,]. Family and friends and traditional mass media (eg, print media and health-related TV programming) rank behind medical professionals and online sources [].

    The introduction of generative AI tools like ChatGPT may shift these established hierarchies. Unlike static web content or conventional search engines, generative AI enables dialogic, personalized interactions that simulate human conversation. These features may position generative AI as a compelling alternative to established online and offline health information sources. However, current evidence suggests that trust in generative AI—especially regarding complex health-related issues—is still limited [], which might restrict its present adoption potential to early adopters []. This raises questions about how generative AI integrates into the broader ecosystem of health information sources.

    To address this gap, we first explore: How does the use of generative AI for health information–seeking compare to that of more established health information sources?

    Explaining Predictors of Technology Adoption: UTAUT2

    The UTAUT2 [] is one of the most popular models for explaining technology adoption. It builds on the technology acceptance model [], which emphasizes perceived usefulness and ease of use, and the initial UTAUT model [], which added performance expectancy, effort expectancy, facilitating conditions, and social influence as predictors of adoption behavior. UTAUT2 extends these frameworks to consumer contexts by incorporating hedonic motivation and habit [,]. The UTAUT2 model has demonstrated its versatility in explaining the adoption of diverse eHealth technologies, such as wearable devices [], health websites [], and health apps []. Additionally, recent studies have highlighted its relevance in understanding the uptake of generative AI technologies [-], showcasing its capacity to extend beyond traditional eHealth domains. However, studies on predictors of usage intentions in the context of AI health information–seeking are still lacking.

    Performance expectancy, a central construct in the UTAUT2 framework, reflects the belief that using technology will lead to performance benefits []. In the context of health information–seeking using generative AI, performance expectancy is shaped by users’ perceptions of how effectively these tools can enhance their daily lives, including aspects such as health decision-making and task efficiency []. Consequently, as users anticipate greater usefulness from adopting generative AI as a health information source, their intention to use such technologies strengthens [-,]. Based on this, we propose the following hypothesis: “the higher the performance expectancy, the stronger the intention to use generative AI for health information–seeking” (H1).

    Effort expectancy, closely tied to ease of use, emphasizes simplicity in technology adoption []. Generative AI tools like ChatGPT benefit from high effort expectancy when users find them intuitive and easy to integrate into their workflows, particularly during the early adoption phase [,,,]. Addressing usability concerns early can reduce resistance and build user confidence, strengthening behavioral intention [,]. Therefore, we propose that “the higher the effort expectancy, the stronger the intention to use generative AI for health information–seeking” (H2).

    Facilitating conditions refer to the resources, skills, and support necessary for using technology []. These include training, knowledge, technical assistance, and system compatibility, all of which significantly enhance behavioral intention and usage [,]. In technologically mature settings, facilitating conditions are critical for sustained adoption and user satisfaction []. In line with the UTAUT2, we hypothesize that “the better the facilitating conditions, the stronger the intention to use generative AI for health information–seeking” (H3).

    Social influence denotes the perception that peers, such as family, friends, or colleagues, believe one should adopt a technology []. It plays a crucial role in early adoption, where external validation often outweighs personal experience []. Positive reinforcement within social or professional networks can normalize usage [,,,]: if people perceive that their peers already use generative AI for health information–seeking, their own intention to do so might increase as well. We therefore propose that “the greater the perceived social influence, the stronger the intention to use generative AI for health information–seeking” (H4).

    Habit captures the extent to which behavior becomes automatic through repetition and prior use []. It strongly influences behavioral intention and long-term adoption, emphasizing the importance of regular engagement with technology [,]. For generative AI as a health information source, fostering habitual use can solidify its integration into daily routines and enhance sustained adoption []. This leads us to propose that “the more it is a habit to use generative AI, the stronger the intention to use generative AI for health information–seeking” (H5).

    Hedonic motivation refers to the enjoyment or pleasure derived from using technology and is particularly relevant in consumer contexts []. It directly impacts behavioral intention, especially for technologies involving entertainment or leisure []. For generative AI like ChatGPT, the interaction may well be perceived as fun or entertaining, which can boost user engagement and drive adoption []. Accordingly, we suggest the following hypothesis: “the higher the hedonic motivation, the stronger the intention to use generative AI for health information–seeking” (H6).

    Influence of Health Literacy and Health Status

    With the growing integration of digital tools into everyday life, the role of health literacy in online health information–seeking has garnered increasing attention. Health literacy has been conceptualized as an individual’s capacity to search for, access, comprehend, and critically evaluate health information, as well as to use the acquired knowledge to effectively address health-related issues [,]. Digital health literacy refers to these abilities in the context of digital environments [-]. Generally, low health literacy scores have been associated with undesirable health outcomes [].

    Research suggests that low levels of health literacy are associated with decreased trust in online health resources [], including the outputs of AI tools [], and lower overall adoption of online health technologies []. Furthermore, initial studies indicate a positive association between health literacy levels and attitudes toward the use of AI tools for medical consultations [].

    On the other hand, individuals with higher levels of health literacy are generally better equipped to critically evaluate online health information and scrutinize it in greater detail [,]. This heightened evaluative capacity could make them more aware of the limitations and potential risks of generative AI outputs, such as inaccurate information, bias, data privacy concerns, or oversimplified medical advice []. Moreover, individuals with higher health literacy are more likely to trust and use high-quality medical online resources, whereas those with limited health literacy prefer accessible but potentially less reliable sources []. In this context, individuals with high digital health literacy might perceive generative AI outputs as lower-quality sources. As a result, while higher health literacy could foster openness to using generative AI for health purposes, it might also lead to greater skepticism or hesitancy in relying on these tools. Nonetheless, there is not yet enough research on generative AI specifically to make conclusive predictions.

    Another well-established factor in online health information–seeking, yet underexplored in the context of AI, is individuals’ health status: Studies suggest that people with poor health are significantly more likely to consult the internet for health information compared to those with good health [,]. Being chronically ill has also been associated with increased reliance on internet-based technologies for health-related purposes []. This relationship can be explained by the fact that individuals in poor health often experience greater health-related concerns, which in turn heightens their motivation to seek information online.

    Given these complex relationships, we pose the second research question: How do health literacy and health status influence the intention to seek health information using generative AI?

    Cross-National Comparison

    In this study, we investigate the predictors of generative AI adoption for health information–seeking across 4 European countries: Austria, Denmark, France, and Serbia. While these countries share certain similarities, they also display notable differences that could shape the strength of the UTAUT2 predictors on the intention to use generative AI for health purposes. This cross-national approach thus helps establish whether the observed effects generalize beyond specific national contexts or unique country conditions.

    The selected countries share two key characteristics. First, all 4 countries provide universal health coverage, ensuring broad access to health care services for their populations. Second, a significant portion of health care expenditure in these countries is publicly funded [-].

    Despite these commonalities, there are also critical factors that differ among the countries and may shape the predictors of generative AI adoption. On the one hand, variations in digital infrastructure could significantly impact facilitating conditions, effort expectancy, and social influence as predictors of generative AI use. Denmark consistently ranks among Europe’s most digitally advanced nations, boasting high internet penetration and widespread adoption of e-health solutions []. This strong digital ecosystem likely enhances the perceived ease of use and social endorsement of generative AI. In contrast, Austria, France, and Serbia exhibit more moderate levels of digital adoption in the context of health information, which may limit both the perceived ease of use of such technologies and the social norms surrounding them [].

    On the other hand, access to and trust in health care providers vary significantly across these countries, potentially influencing performance expectancy and social influence. In nations with robust health care systems—characterized by a high availability of medical professionals and easy access to care—individuals are more likely to rely on doctors for health advice, as they are often viewed as the most trusted source of health information []. Denmark exemplifies this with its high levels of public trust in the health care system [], which may reduce the perceived benefits and social norms around using generative AI for health purposes. Conversely, in western Balkan countries like Serbia, studies report generally low levels of trust in the health care system []. In such contexts, individuals may be more inclined to seek alternative information sources, potentially amplifying the perceived benefits of generative AI use.

    By examining these diverse national contexts, this study not only tests the universality of the UTAUT2 model but also deepens our understanding of the contextual factors that shape generative AI adoption for health purposes. We ask: How do the predictors of generative AI use for health information–seeking differ across Austria, Denmark, France, and Serbia?

    Ethical Considerations

    Before data collection, the study received ethical approval from the institutional review board of the Department of Communication at the University of Vienna (approval ID: 1205). All participants provided written informed consent prior to participating in the study. The data were collected in anonymized form and no personal identifiers were recorded or stored. Participants received a compensation of €1.50 (US $1.74) for completing the study through the panel provider.

    Recruitment

    Recruitment of participants occurred during September 2024 via Bilendi, an international panel provider. Bilendi recruited the participants via email. The panel is checked for quality and attendance on a regular basis. The study was conducted in Austria, France, Denmark, and Serbia, with participants randomly selected to achieve samples representative of age, gender, and educational background. The provider’s panel sizes in the respective countries were as follows: Austria: n=60,000; Denmark: n=90,000; France: n=815,000; and Serbia: n=15,100. Per country, the study aimed to reach 500 participants.

    Inclusion criteria required participants to be aged between 16 and 74 years. Additionally, respondents who completed the survey in less than one-third of the median completion time (speeders) were excluded. Completion rates (excluding screened-out participants) were high across all 4 countries, ranging from 84.3% to 89.8%. Further details on survey design, administration, and response rates are provided in the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) checklist ().

    Procedure and Measures

    Overview

    The study consisted of two components. The first component, a survey, investigated predictors of the intention to use generative AI for health information–seeking. The second component, an experimental study, explored the influence of disease-related factors on these intentions []. To ensure respondents shared a common understanding of the concept, the survey began with a short definition of “generative AI,” describing it as technologies that engage in natural language conversations with users and generate responses in real-time. Examples such as ChatGPT, Google’s Gemini, and Microsoft Copilot were provided.

    The original questionnaire was developed in English and subsequently translated into German, French, Danish, and Serbian. Each translation was performed by a bilingual team member and back-translated into English by a different native speaker to ensure conceptual equivalence with the original items.

    This study focused on the following constructs (a complete item list with descriptive analysis and construct reliability values can be found in ).

    Dependent Variables
    Sources of Health Information–Seeking

    Participants rated the frequency with which they use 8 health information sources [] on a 7-point Likert scale (1 = “never” to 7 = “very often”). The sources included conversations with medical professionals, pharmacists, and family or friends, as well as books, mass media, internet search engines (eg, Google and Ecosia), and generative AI.

    AI Usage Intentions

    Participants’ willingness to use generative AI [] for health information–seeking was assessed using 3 items (eg, “I intend to use generative AI for health information seeking”), rated on a 7-point Likert scale (1 = “strongly disagree” to 7 = “strongly agree”).

    UTAUT2 Predictor Variables

    All UTAUT2 model [,,,] predictors were measured on a 7-point Likert scale (1 = “strongly disagree” to 7 = “strongly agree”). The predictors are described below.

    Performance Expectancy

    Perceived benefits of using generative AI for health information–seeking were measured with 4 items (eg, “Using generative AI would save me time when researching health topics”).

    Effort Expectancy

    The perceived ease of using generative AI as a health information source was assessed with 4 items (eg, “Learning to use generative AI for health information–seeking seems easy for me”).

    Social Influence

    Three items measured the extent to which participants felt that important others encouraged their use of generative AI for health information–seeking (eg, “People who are important to me think that I should use generative AI for health-information seeking”).

    Hedonic Motivation

    The enjoyment of using generative AI for health information–seeking was assessed with 3 items (eg, “I think using generative AI for health-information seeking could be fun”).

    Facilitating Conditions

    Participants’ perceptions of available resources and support for using generative AI to seek health information were measured with 4 items (eg, access to devices and reliable internet, and knowledge).

    Habit

    The extent to which turning to generative AI when seeking health information had become a habitual behavior was measured with 3 items (eg, “I automatically turn to generative AI whenever I have questions about my health”).

    Model Extension Variables

    Health Literacy

    Health literacy [] was assessed with 10 items, asking participants to rate their confidence in tasks such as finding understandable health information. Responses were recorded on a 4-point Likert scale (1 = “not at all true” to 4 = “absolutely true”).

    Health Status

    We measured participants’ health status using 1 item (“How would you describe your current health status?”; 1 = “very poor” to 7 = “very good”).

    Control Variables: Sociodemographic Variables

    We further measured participants’ age, gender, and educational level.

    Statistical Analysis

    Power

    An a priori power analysis was conducted to determine the required sample size for structural equation modeling. Assuming an anticipated effect size of 0.25, a desired statistical power of 0.95, and a significance level of .05, the analysis indicated that a minimum of 391 participants per country would be necessary to detect the hypothesized effects [].
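    The reported minimum sample size can be approximated with a noncentral-F power computation. The sketch below is illustrative rather than a reproduction of the original analysis: it treats the anticipated effect size of 0.25 as Cohen's f (so f² = 0.0625) and assumes 8 predictors, since the paper does not state the degrees of freedom it used.

```python
# A priori power analysis for an omnibus F test via the noncentral F distribution.
from scipy.stats import f as f_dist, ncf

def required_n(f2, n_predictors, alpha=0.05, target_power=0.95):
    """Smallest N at which the omnibus F test reaches the target power.

    f2 is Cohen's f^2; the noncentrality parameter is lambda = f2 * N.
    """
    for n in range(n_predictors + 2, 5000):
        df1 = n_predictors
        df2 = n - n_predictors - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F under H0
        power = ncf.sf(crit, df1, df2, f2 * n)   # P(reject | H1)
        if power >= target_power:
            return n
    raise ValueError("no sample size below 5000 reaches the target power")

# f = 0.25 -> f^2 = 0.0625; the predictor count (8) is an assumption here.
n_required = required_n(f2=0.0625, n_predictors=8)
```

    With these assumptions the result lands near, but not necessarily exactly at, the 391 participants reported, because the required N depends on the numerator degrees of freedom chosen.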

    Analytical Plan

    We used AMOS version 26 (IBM Corp) to run a latent-variable multigroup structural equation model with a full-information maximum-likelihood estimator. We computed the comparative fit index, the Tucker-Lewis index, the chi-square to degrees of freedom ratio (χ²/df), and the root mean square error of approximation. We also established metric measurement invariance to allow path comparisons across countries. We controlled for age, gender, and education (binary coded).
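    The fit indices named above are simple functions of the model and baseline chi-square statistics. A minimal sketch of the standard formulas, using illustrative values (χ²/df = 2.5, close to the reported 2.47) rather than the study's actual statistics:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """CFI, TLI, and RMSEA from the model (m) and baseline (b) chi-squares."""
    d_m = max(chi2_m - df_m, 0.0)                 # model misfit beyond df
    d_b = max(chi2_b - df_b, 0.0)                 # baseline misfit beyond df
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Illustrative chi-square values only, not the study's output.
cfi, tli, rmsea = fit_indices(chi2_m=250.0, df_m=100, chi2_b=2000.0, df_b=120, n=1990)
```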

    User Statistics

    In total, data were collected from 1990 respondents, comprising 502 from Austria, 507 from Denmark, 498 from France, and 483 from Serbia. The overall mean age of participants was 45.1 (SD 15.7) years, with 50.2% (n=998) identifying as female. In terms of educational attainment, 82.1% (n=1634) of the sample reported completing at least a medium or higher level of education (secondary level II or higher). Furthermore, 39.5% (n=787) of respondents indicated prior use of generative AI for health information–seeking (at least rarely). Detailed demographic and background characteristics of the sample are summarized in .

    Table 1. Descriptive characteristics of survey respondents from Austria, Denmark, France, and Serbia (N=1990; September 2024).
    Demographic characteristics  Overall, n (%)  Austria, n (%)  Denmark, n (%)  France, n (%)  Serbia, n (%)
    Education^a
    Secondary I or lower  356 (17.9)  93 (18.6)  136 (26.9)  109 (21.9)  18 (3.7)
    Secondary II  1080 (54.3)  303 (60.3)  179 (35.3)  224 (45.0)  374 (77.4)
    Tertiary  554 (27.8)  106 (21.1)  192 (37.9)  165 (33.1)  91 (18.8)
    Gender
    Female  998 (50.2)  252 (49.8)  251 (49.5)  256 (51.4)  239 (49.5)
    Male  992 (49.8)  250 (50.2)  256 (50.5)  242 (48.6)  244 (50.5)
    Prior experience^b
    No  1203 (60.5)  316 (62.9)  328 (64.7)  326 (65.5)  233 (48.2)
    Yes  787 (39.5)  186 (37.1)  179 (35.3)  172 (34.5)  250 (51.8)

    ^a Educational attainment was categorized as low (secondary level I or below) and medium or high (secondary level II or higher). In Serbia, however, representativeness was achieved by grouping educational levels into low or medium (secondary level II or below) and high (tertiary education) due to sampling limitations.

    ^b Prior experience: no = “I have never used generative AI for health information seeking” and yes = “I have used generative AI for health information seeking at least rarely.”

    Statistical tests revealed no significant differences in gender distribution across countries (χ²₃=0.48; P=.92) and no significant differences in age (Kruskal-Wallis χ²₃=2.15; P=.54). In contrast, educational attainment varied significantly between countries (χ²₃=550.76; P<.001), reflecting sampling-related imbalances in Serbia, where low versus medium or high education was assessed differently than in the other countries. Finally, prior experience with generative AI for health information–seeking showed significant country differences (Kruskal-Wallis χ²₃=30.95; P<.001), with higher levels reported in Serbia.
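    The gender homogeneity test can be reproduced directly from the counts in Table 1; a short sketch using scipy:

```python
from scipy.stats import chi2_contingency

# Gender counts per country from Table 1 (Austria, Denmark, France, Serbia).
observed = [
    [252, 251, 256, 239],  # female
    [250, 256, 242, 244],  # male
]
# For a 2x4 table, chi2_contingency applies no continuity correction.
chi2_stat, p_value, dof, expected = chi2_contingency(observed)
```

    The statistic and P value match the reported χ²₃=0.48, P=.92 up to rounding.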

    Evaluation Outcomes

    Descriptive Analysis

    In our first research question, we explored how generative AI compares to more established health information sources in terms of usage frequency across countries. As illustrated in , generative AI ranks last among all measured sources, indicating that, as of autumn 2024, it was rarely used for health information–seeking (mean 2.08, SD 1.66). In stark contrast, online search engines like Google are widely used, ranking second with a mean usage frequency of 4.57 (SD 1.88), behind only conversations with physicians, which hold the top position (mean 4.77, SD 1.70). Family and friends also play a significant role, ranking third (mean 4.27, SD 1.73), followed by pharmacists (mean 3.52, SD 1.81). In comparison, traditional mass media such as TV, newspapers, and magazines are used less frequently (mean 2.74, SD 1.68), as are books (mean 2.68, SD 1.70) and free magazines provided by pharmacies or health insurance companies (mean 2.60, SD 1.65). The relative ranking of information sources was consistent across all 4 countries, with physicians, internet search engines, and family or friends occupying the top positions and generative AI ranking last. However, some variation in mean usage frequencies was observed between countries; detailed country-level results are presented in .

    Figure 1. Mean usage frequency of different health information sources among survey respondents (N=1990) in Austria, Denmark, France, and Serbia (95% CI; September 2024). AI: artificial intelligence.
    Model Evaluation

    For the hypothesis tests, the results are shown in . Model fit was good (comparative fit index=0.95; Tucker-Lewis index=0.93; χ²/df=2.47, P<.001; root mean square error of approximation=0.03, 95% CI 0.03-0.03). We examined the metric measurement invariance of all latent variables by constraining all factor loadings to be equal across the 4 countries. When comparing the constrained model to the unconstrained model, we found no significant difference in model fit (P=.16). Thus, metric invariance across countries was established.
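    The invariance comparison is a chi-square difference (likelihood-ratio) test between the constrained and unconstrained models. A sketch with hypothetical fit statistics, since the paper reports only the resulting P value:

```python
from scipy.stats import chi2

def lr_test(chi2_constrained, df_constrained, chi2_free, df_free):
    """Chi-square difference (likelihood-ratio) test for nested SEMs."""
    delta = chi2_constrained - chi2_free      # difference in chi-square
    delta_df = df_constrained - df_free       # difference in degrees of freedom
    return chi2.sf(delta, delta_df)           # P value of the difference test

# Hypothetical values for illustration; a nonsignificant P supports invariance.
p = lr_test(chi2_constrained=1280.0, df_constrained=520,
            chi2_free=1250.0, df_free=496)
```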

    Table 2. Structural equation model predicting the intention to use generative artificial intelligence (AI) for health information among survey respondents in Austria, Denmark, France, and Serbia (N=1990; September 2024).
    Predictor variables Austria Denmark France Serbia
    b SE P value b SE P value b SE P value b SE P value
    Performance expectancy 0.47 0.05 <.001 0.52 0.05 <.001 0.53 0.05 <.001 0.44 0.05 <.001
    Effort expectancy −0.07 0.05 .20 0.03 0.05 .54 −0.11 0.05 .04 −0.02 0.06 .77
    Facilitating conditions 0.12 0.04 .01 0.17 0.05 <.001 0.22 0.05 <.001 0.24 0.05 <.001
    Social influence −0.05 0.04 .29 −0.08 0.05 .17 −0.09 0.05 .10 −0.05 0.04 .27
    Habit 0.29 0.04 <.001 0.32 0.04 <.001 0.28 0.05 <.001 0.28 0.04 <.001
    Hedonic motivation 0.45 0.06 <.001 0.22 0.05 <.001 0.33 0.05 <.001 0.23 0.05 <.001
    Health literacy −0.004 0.09 .97 0.04 0.10 .67 −0.02 0.08 .80 0.08 0.10 .40
    Health status −0.002 0.03 .95 0.02 0.03 .61 −0.09 0.03 .01 −0.05 0.03 .13

    ^a Explained variance=0.84.

    ^b Explained variance=0.80.

    ^c Explained variance=0.86.

    ^d Explained variance=0.79.

    ^e,f Different subscripts in each row indicate a significant difference between paths (P<.05).

    In line with H1, we found a highly significant positive association between performance expectancy and the intention to use generative AI for health information–seeking across all 4 countries (Austria: b=0.47, P<.001; Denmark: b=0.52, P<.001; France: b=0.53, P<.001; Serbia: b=0.44, P<.001). In contrast, H2 was not supported: effort expectancy showed no significant association with behavioral intention in any of the countries. Turning to H3, results revealed a positive association between facilitating conditions and the intention to use generative AI as a health information source, consistently observed across all 4 contexts (Austria: b=0.12, P=.005; Denmark: b=0.17, P<.001; France: b=0.22, P<.001; Serbia: b=0.24, P<.001). By contrast, no support was found for H4: perceived social influence was unrelated to behavioral intention in any of the countries. As predicted in H5, habit was positively associated with behavioral intention to use generative AI for health information–seeking throughout the sample (Austria: b=0.29, P<.001; Denmark: b=0.32, P<.001; France: b=0.28, P<.001; Serbia: b=0.28, P<.001). A similar pattern emerged for H6: hedonic motivation was significantly positively related to behavioral intention in all countries (Austria: b=0.45, P<.001; Denmark: b=0.22, P<.001; France: b=0.33, P<.001; Serbia: b=0.23, P<.001).

    Finally, with regard to our second research question—which examined whether health literacy and health status predict the intention to seek health information using generative AI—we found no substantial associations. Only in France did health status show a small negative effect (b=−0.09; P=.007).

    Principal Results

    This study investigated the predictors of intention to use generative AI for health information–seeking, drawing on the UTAUT2 framework and expanding it with health literacy and health status. Using cross-national survey data from Austria, Denmark, France, and Serbia, our findings show that generative AI is still only rarely used for health information–seeking. At the same time, performance expectancy, facilitating conditions, habit, and hedonic motivation consistently emerged as significant predictors of behavioral intention, whereas effort expectancy, social influence, health literacy, and health status were not related to intention. These patterns were consistent across all 4 countries, suggesting a robust set of psychological drivers underlying the early adoption of generative AI in health contexts. A detailed examination of these findings is provided below.

    First, with regard to overall usage patterns, the data show that generative AI currently plays only a minor role in health information–seeking: 60% of the respondents reported never having used a generative AI tool for health-related questions. This result lends itself to 2 contrasting interpretations.

    On the one hand, it challenges the popular narrative that generative AI is rapidly transforming health information–seeking behavior. Instead, the findings align with previous studies, showing that generative AI is currently infrequently used in the context of health information []. Traditional sources—such as medical professionals and search engines—continue to dominate [], underscoring that generative AI has yet to achieve mainstream adoption.

    On the other hand, despite persistent concerns about data privacy, algorithmic bias, and accuracy, it is noteworthy that 40% of the respondents have already experimented with generative AI for health purposes. Given that this technology only became widely accessible relatively recently, such early uptake is remarkable. From the perspective of technology adoption models, such as the Rogers Diffusion of Innovations [], this pattern is characteristic of early adopters. It is therefore plausible to assume that the use of generative AI for health information–seeking will increase further as the technology matures and moves toward mainstream adoption.

    To better understand the drivers of future uptake, we applied an extended version of the UTAUT2 model. Our findings confirmed the predictive power of performance expectancy, facilitating conditions, habit, and hedonic motivation. This aligns with prior research on digital health tools, indicating that users value usefulness, access, familiarity, and enjoyment [,,].

    In detail, the results show that performance expectancy—the perceived usefulness of the technology—had a strong positive effect on behavioral intention in all 4 countries. This finding suggests that the more respondents believe generative AI is useful for managing health-related questions, the stronger their intention to use it. Thus, if public health stakeholders or developers aim to encourage responsible AI use, they should emphasize the tangible benefits of generative AI, such as 24/7 availability, rapid response times, and the potential for personalized information. Perceived usefulness may also be fostered when individuals try out generative AI for the first time and learn that they can benefit from the technology.

    At the same time, our study challenges established UTAUT2 assumptions. Effort expectancy, often seen as central to technology adoption, was not a relevant factor—possibly due to the intuitive nature of generative AI tools and the ubiquity of basic digital skills []. Using generative AI does not require any specific background knowledge beyond opening a webpage. Since online search engines are already the most frequently used health source, the basic skills needed for generative AI are widely present, potentially rendering effort expectancy less decisive.

    Taken together, this emerging pattern—the strong effect of performance expectancy and the null effect of effort expectancy—underscores the distinction between usefulness and usability, which are closely related but not identical []. Usability refers to the ease of interacting with a system (eg, ease of learning and error prevention), whereas usefulness (utility) captures whether the system provides the functions and information that users actually need. Our findings suggest that in health contexts, utility is the decisive factor: people intend to use generative AI if its outputs are perceived as useful, while usability-related aspects appear less influential.

    Importantly, this does not mean that barriers to adoption are absent. Rather, our findings show that they lie not in usability but in facilitating conditions—the structural and contextual resources that enable technology use. Across all countries, the availability of digital infrastructure, devices, and basic knowledge significantly shaped behavioral intention. In other words, while generative AI may be easy to use once accessed, unequal access to the necessary resources continues to pose a substantial adoption barrier. Consequently, facilitating conditions emerge as a key digital health equity concern []. Without adequate access, disadvantaged populations may be excluded from benefiting from generative AI, meaning that the technology risks widening rather than narrowing the digital divide in health information–seeking.

    We also found that social influence—an important predictor in other studies on AI uptake [,]—did not play a meaningful role in shaping behavioral intention. This suggests that health-related information search is a rather personal topic, and that individuals may not always be willing to disclose what kind of information they are looking for. As a result, the intention to use generative AI for health information–seeking is largely independent of peer opinions or social norms.

    In contrast, habit consistently predicted behavioral intention across all countries. From this finding, we may conclude that generative AI use for health information is likely to occur automatically, similar to how people use search engines. When individuals feel familiar with a technology, they are more likely to rely on it without conscious deliberation. However, this finding should be interpreted with caution, as the majority of participants had never used generative AI for health purposes. Much of the variance in habit may therefore reflect mere use versus nonuse. Accordingly, variables capturing initial adoption should be clearly distinguished from those measuring habit.

    By including health literacy and health status as additional predictors, our study adds a novel dimension to existing research. In contrast to studies showing direct paths between these constructs and online health information–seeking [,,], we found no such association for AI health information–seeking. However, each of these findings carries different implications. First, the absence of a significant association between health literacy and intention indicates that individuals’ ability to understand and evaluate health information was not related to whether they reported turning to generative AI. This finding may suggest that the use of such tools is driven less by informed decision-making and more by general curiosity or interest in new technologies. Importantly, this raises concerns: people with lower health literacy may be just as likely to consult generative AI as those with higher health literacy—despite being less equipped to critically assess its outputs. Given the known risk of AI hallucinations—fabricated or inaccurate information presented in a confident tone []—this could lead to misinformation and, in the worst case, harmful health decisions, as users with limited health literacy might find it difficult to distinguish between reliable and misleading content.

    Second, the lack of an association between self-reported health status and intention suggests that the current use of generative AI is not primarily driven by medical need or urgency. People do not seem more likely to consult generative AI when facing a health problem; rather, usage may occur proactively or even recreationally. This challenges assumptions that such tools are primarily used in response to a health issue, and it underscores the importance of understanding user motivations beyond immediate health concerns.

    Importantly, these patterns were largely consistent across all 4 countries, as confirmed by the measurement-invariant structural model. This cross-national consistency suggests that the psychological drivers of generative AI adoption in health contexts may transcend national boundaries and cultural differences, pointing to a universal set of adoption mechanisms.

    Limitations

    Several limitations should be acknowledged. First, due to the cross-sectional nature of this study, no causal conclusions can be drawn. Future research should therefore aim to replicate these findings using experimental or longitudinal designs. Second, we relied on self-reported data, which may be subject to social desirability bias. The use of behavioral data is thus warranted to validate these findings. Third, including additional predictors—such as individual differences or specific concerns—could provide deeper insights into the use of generative AI. Fourth, our comparative findings are based on data from only 4 countries, which limits the ability to conduct multilevel analyses. Also, as in all cross-sectional research, there is a risk of unmeasured third variables. In particular, we did not include AI trust and perceived AI risk. However, these constructs are conceptually close to performance expectancy, as trust reduces uncertainty about the system’s outputs and thereby enhances expected performance gains, whereas perceived risk erodes expected utility. In this sense, they are likely to be partially reflected in the performance expectancy construct already included in our model. That said, and highlighting that our model explains around 80% of the variance, trust and perceived risk could still suppress some of the predictors we have modeled. Thus, future research should include additional constructs outside the UTAUT2 framework []. Finally, health status was measured with a single self-rated item. While single-item measures of subjective health may not capture the full complexity of an individual’s medical condition, this approach is widely used in demographic and population health research. Prior work has demonstrated that the self-rated health item is a valid and reliable indicator, predicting key outcomes such as mortality, use of health services, and health expenditures in large-scale surveys []. 
Nevertheless, we acknowledge that a more fine-grained measure (eg, including specific chronic conditions or severity indices) could have provided additional insights, and future studies may benefit from applying such extended health measures.

    Conclusions

    This study applied the UTAUT2 model to investigate the factors that drive the use of generative AI for health information–seeking. Although overall usage remains limited, our findings show that performance expectancy, facilitating conditions, habit, and hedonic motivation are positively associated with behavioral intentions. These patterns, observed across all 4 countries—Austria, Denmark, France, and Serbia—suggest that current users of generative AI are likely to be early adopters: individuals who are tech-savvy, curious, and open to innovation. This aligns with the Rogers Diffusion of Innovations theory, which conceptualizes adoption as a gradual process beginning with a small, innovation-oriented segment of the population.

    The lack of significant effects for effort expectancy and social influence across all countries reinforces this interpretation: early adopters tend to base their decisions on personal evaluations rather than external opinions and are rarely deterred by usability concerns. Furthermore, the fact that behavioral intention was unrelated to health status or health literacy underscores that current usage is not driven by acute medical need or advanced health literacy, but rather by interest, convenience, and technological exploration.

    The cross-national consistency of these findings is particularly striking. Despite differences in health care systems, digital infrastructures, and culture, the same psychological and contextual factors influenced generative AI use in all countries surveyed. This suggests a shared adoption logic that transcends national boundaries—at least in the early stages of diffusion.

    Looking ahead, these insights help illuminate how generative AI might transition from a niche tool to a widely used resource. As the technology becomes more embedded in everyday life, broader segments of the population—the so-called early and late majority—will likely demand stronger assurances of trustworthiness, safety, and added value. To enable responsible and inclusive adoption, it is therefore crucial to reduce digital access barriers, enhance transparency, and implement safeguards against health misinformation, especially for users with limited health literacy.

    From a practical perspective, our findings suggest that communication strategies aiming to promote generative AI for health purposes should emphasize concrete benefits, ease of access, and even enjoyment. Rather than exclusively targeting individuals with chronic or urgent health needs, positioning generative AI as an engaging, low-barrier tool may broaden its appeal—reaching users who might otherwise be disengaged from traditional health information sources.

    In sum, generative AI holds significant potential as a future health information resource—but its trajectory will depend on how well we understand and support the evolving needs of its users across different adoption phases and contexts.

    The authors used ChatGPT (OpenAI) to support language editing of prewritten text sections (eg, to improve grammar and phrasing). All suggestions from the artificial intelligence tool were reviewed by the authors and revised or rejected as necessary. No content was generated solely by the artificial intelligence, and the authors remain fully responsible for the final text.

    The work was supported by Circle U [2024-09 – AIHEALTH].

    All data and supplementary materials related to this study are openly accessible via the Open Science Framework (OSF) at the following link [].

    None declared.

    Edited by Amaryllis Mavragani; submitted 08.Apr.2025; peer-reviewed by Armaun Rouhi, Dillon Chrimes, Sonish Sivarajkumar; accepted 27.Oct.2025; published 28.Jan.2026.

    © Jörg Matthes, Anne Reinhardt, Selma Hodzic, Jaroslava Kaňková, Alice Binder, Ljubisa Bojic, Helle Terkildsen Maindal, Corina Paraschiv, Knud Ryom. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.Jan.2026.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

  • Cognizant and Travel + Leisure Co. Renew Strategic Collaboration to Accelerate Digital Transformation

    The collaboration aims to modernize technology infrastructure and infuse AI to enhance member experiences

    TEANECK, N.J., Jan. 28, 2026 /PRNewswire/ — Cognizant (Nasdaq: CTSH) announced today the renewal of a multi-million-dollar strategic collaboration with Travel + Leisure Co. (NYSE: TNL), a leading leisure travel company. The extended collaboration will focus on accelerating the digital transformation of Travel + Leisure Co. by modernizing its technological infrastructure and infusing AI to deliver enhanced experiences for its members and owners.

    Under the agreement, Cognizant will leverage its extensive hospitality domain expertise to optimize the technology ecosystem at Travel + Leisure Co., with the goal of elevating digital service experiences for its travel club members and 800,000 owner families.

    “Renewing our partnership with Cognizant reflects the deep collaboration and mutual trust we’ve built over the years,” said Sy Esfahani, Chief Technology Officer at Travel + Leisure Co. “Cognizant’s broad technology expertise and global resources will propel our continued digital transformation, helping us deliver innovative solutions to service our members and guests at every touchpoint.”

    Throughout the term of the agreement with Travel + Leisure Co., Cognizant will assist with modernizing its application landscape, strengthening infrastructure scalability and reliability, and harnessing data- and AI-driven capabilities.

    “We are thrilled to deepen our long-standing relationship with Travel + Leisure Co., a valued partner whose forward-looking vision aligns with our commitment to reimagine how the modern traveler interacts with technology,” said Anup Prasad, SVP and Consumer Business Head at Cognizant. “This expanded partnership reinforces our focus on delivering tailored digital transformation solutions that meet the evolving needs of the global leisure and hospitality industry.”

    About Cognizant

    Cognizant (Nasdaq: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes and transform experiences so they can stay ahead in our fast-changing world. Together, we’re improving everyday life. See how at www.cognizant.com or @cognizant.

    About Travel + Leisure Co.

    Travel + Leisure Co. (NYSE:TNL) is a leading leisure travel company, providing more than six million vacations to travelers around the world every year. The company operates a portfolio of vacation ownership, travel club, and lifestyle travel brands designed to meet the needs of the modern leisure traveler, whether they’re traversing the globe or staying a little closer to home. With hospitality and responsible tourism at its heart, the company’s nearly 19,000 dedicated associates around the globe help the company achieve its mission to put the world on vacation. Learn more at travelandleisureco.com.

    For more information, contact:
    Katrina Cheung
    [email protected]

    SOURCE Cognizant Technology Solutions Corporation

  • CrowdStrike Named Customers’ Choice in 2026 Gartner® EPP Report

    CrowdStrike has the most 5-star ratings of any Customers’ Choice vendor and is the only vendor named a Customers’ Choice in every iteration of the Voice of the Customer for EPP report since its launch

    AUSTIN, Texas – January 28, 2026 – CrowdStrike (NASDAQ: CRWD) today announced it has been named the Customers’ Choice in the 2026 Gartner Peer Insights™ ‘Voice of the Customer’ for Endpoint Protection Platforms report. CrowdStrike received the most 5-star ratings of any Customers’ Choice vendor with a 97% Willingness to Recommend score, based on 800 overall responses as of November 2025. CrowdStrike is the only vendor named a Customers’ Choice in every iteration of the Voice of the Customer for Endpoint Protection Platforms report since its launch, earning this recognition six times. 

    “The strongest validation in cybersecurity comes from customers,” said Elia Zaitsev, chief technology officer at CrowdStrike. “To me, this recognition reflects what teams value most in endpoint security: simple deployment, low operational overhead, and protection they can rely on to stop breaches. That day-to-day experience is why organizations continue to trust CrowdStrike.”

    What Customers Are Saying

    Here is a sampling of our reviews:

    • Best in Class Detection and Ease of Use: “The tool is the best on the market and does exactly what is expected… easy to deploy and maintain.” – System Security Manager, Services (non-Government) Industry
    • CrowdStrike Offers Strong Threat Protection with Simple Deployment and Low CPU Usage: “I decided that we needed the best protection in the market and it made sense to select CrowdStrike. Very easy to deploy and also low CPU demand. High protection levels from new threats.” – Director, IT Security and Risk Management, Retail Industry
    • Seamless Deployment and Strong Detection Capabilities Highlight CrowdStrike Experience: “My overall experience has been excellent. As a previous customer for several years, I have brought CrowdStrike into several organizations. The main need has been to detect novel malicious and anomalous endpoint behavior. After evaluating several vendors, CrowdStrike was the clear winner… As a CISO, I have peace of mind knowing I can verify its monitoring and blocking.” – CISO, IT Services Industry


    Additional Resources

    • To learn more about CrowdStrike’s recognition in the 2026 Gartner Peer Insights™ ‘Voice of the Customer’ for Endpoint Protection Platforms report, visit our website and read our blog.
    • To learn about CrowdStrike’s recognition as a Leader for the sixth consecutive time in the 2025 Gartner® Magic Quadrant™ for Endpoint Protection Platforms, visit our website and read our blog.
    • To learn more about CrowdStrike Falcon Endpoint Security, visit our website.


    GARTNER is a registered trademark and service mark, and MAGIC QUADRANT and PEER INSIGHTS are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

    Gartner Peer Insights content consists of the opinions of individual end users based on their own experiences, and should not be construed as statements of fact, nor do they represent the views of Gartner or its affiliates. Gartner does not endorse any vendor, product or service depicted in this content nor makes any warranties, expressed or implied, with respect to this content, about its accuracy or completeness, including any warranties of merchantability or fitness for a particular purpose.

    About CrowdStrike

    CrowdStrike (NASDAQ: CRWD), a global cybersecurity leader, has redefined modern security with the world’s most advanced cloud-native platform for protecting critical areas of enterprise risk – endpoints and cloud workloads, identity and data.

    Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized observability of vulnerabilities.

    Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity and immediate time-to-value.

    CrowdStrike: We stop breaches.

    Learn more: https://www.crowdstrike.com/

    Follow us: Blog | X | LinkedIn | Instagram

    Start a free trial today: https://www.crowdstrike.com/trial

    © 2026 CrowdStrike, Inc. All rights reserved. CrowdStrike and CrowdStrike Falcon are marks owned by CrowdStrike, Inc. and are registered in the United States and other countries. CrowdStrike owns other trademarks and service marks and may use the brands of third parties to identify their products and services.

    Media Contact

    Jake Schuster

    CrowdStrike Corporate Communications

    press@crowdstrike.com

    1. Gartner, Voice of the Customer for Endpoint Protection Platforms, Peer Contributors, January 23, 2026

  • Stock market today: Live updates

    Traders work at the New York Stock Exchange on Jan. 27, 2026. (Photo: NYSE)

    The S&P 500 reached a milestone level on Wednesday, hitting 7,000 for the first time, before pulling back ahead of the Federal Reserve’s interest rate decision and earnings reports from major tech companies.

    The broad market index was last down 0.1% after advancing 0.3% to an all-time intraday high of 7,002.28 earlier in the session. The Nasdaq Composite traded around the flatline, as did the Dow Jones Industrial Average.

    [Chart: S&P 500, 1-year]

    The broader market’s earlier rise was bolstered by gains in chip stocks following upbeat earnings results. Seagate Technology shares jumped more than 19% after the storage infrastructure company’s second-quarter earnings and revenue topped analyst expectations, with CEO Dave Mosley citing strong demand for artificial intelligence data storage. Additionally, semiconductor equipment giant ASML reported record orders and issued rosy 2026 guidance due to the AI boom. However, the stock reversed its gains from earlier Wednesday.

    Beyond those earnings, China has given approval to ByteDance, Alibaba and Tencent to buy Nvidia’s H200 AI chips, Reuters reported Wednesday. Nvidia shares rose more than 1%. Fellow semiconductor names Micron Technology and Taiwan Semiconductor Manufacturing saw gains as well. The VanEck Semiconductor ETF (SMH) moved about 2% higher and hit a new 52-week high.

    “The story for 2023, 2024, most of 2025 was AI-related semiconductors — awesome, great demand. All the other semiconductor-demand sources, whether that be auto or industrial or telecom, etc. — weak. That has shifted now,” Jed Ellerbroek of Argent Capital Management told CNBC. “Demand is well in excess of supply really everywhere at this point within semiconductors,” the portfolio manager also said.

    The rally failed to broaden past chip stocks, however, as the S&P 500 was eventually dragged lower heading into the Fed decision.

    The central bank is widely expected to keep its benchmark interest rate steady at a target range of 3.5% to 3.75%, but traders will be seeking hints on longer-term changes to monetary policy. Fed funds futures trading suggests two quarter-percentage-point cuts by the end of 2026, according to the CME FedWatch Tool.

    Earnings from a slate of major technology companies are on deck. Microsoft, Meta Platforms and Tesla are set to post their quarterly financial results Wednesday after the closing bell. Apple will post its results on Thursday.

    Outside tech, Starbucks traded higher by 2% after the coffee chain reported that its traffic grew for the first time in two years. Its first-quarter revenue also beat expectations, while its earnings missed.

  • Who holds the keys? Navigating legal and privacy governance in third-party AI API access

    Who holds the keys? Navigating legal and privacy governance in third-party AI API access

    In today’s rapidly evolving artificial intelligence environment, organizations are increasingly relying on third-party application programming interfaces (APIs) from platforms like OpenAI, Google and Amazon Web Services to embed advanced features into their products. These APIs offer significant benefits, particularly in terms of time and cost savings, by enabling companies to leverage existing technology rather than building solutions from scratch.

    While this approach can speed up deployment and reduce the burden of managing complex infrastructure, it also raises key legal and privacy issues — like how data flows are controlled, who is responsible for data security, and how licensing restrictions are enforced. The situation becomes even more challenging when the procuring organization opts to use its own API keys instead of those provided by the AI feature developer.

    Data flow and responsibilities when developers access AI services on behalf of a procuring organization

    When developers leverage third‑party AI APIs to build and deliver their own AI features, they often do so using their own licensed API keys to access those services. Prompts — for example, data queries, order‑processing commands, or report generation instructions — are sent from the procuring organization’s systems to the developer’s platform and then forwarded to the API provider. The provider applies its AI models and returns outputs, which the developer delivers to the procuring organization.
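
The hop-by-hop flow described above (procuring organization to developer to API provider and back) can be sketched as follows. Every function name and key here is a hypothetical placeholder, not any vendor's real API; an actual integration would use the provider's SDK over authenticated HTTPS.

```python
# Minimal sketch of the relay: the developer enriches the organization's
# prompt, forwards it to the provider under the developer's own key, and
# returns the provider's output. All names are invented for illustration.

def call_provider_api(prompt: str, api_key: str) -> str:
    """Stand-in for the third-party AI API (eg, a chat-completion endpoint)."""
    return f"[model output for: {prompt!r}]"

def developer_platform(prompt: str, developer_key: str) -> str:
    # 1. The developer enriches the prompt with its own template,
    enriched = f"You are a reporting assistant. Task: {prompt}"
    # 2. forwards it to the API provider under the developer's licensed key,
    output = call_provider_api(enriched, developer_key)
    # 3. and delivers the output back to the procuring organization.
    return output

print(developer_platform("generate the Q4 order-processing report", "DEV-KEY-123"))
```

Note that the developer sits between both parties at every step, which is precisely why it takes on the controller role discussed next.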

    In this process, the developer assumes the role of the data controller because it determines the purpose and means of processing: it decides which prompts to collect, how to combine or enrich them (eg, with developer-supplied templates), and how outputs are used and delivered. As controller, the developer must ensure lawful processing, provide transparency and implement appropriate technical and organizational measures — such as encryption, access controls, logging and regular audits — to protect personal data throughout the life cycle in line with the EU General Data Protection Regulation.

    If there is sensitive data involved — such as personal data under the GDPR or personal health information under the Health Insurance Portability and Accountability Act — the developer, who has control over its API keys, can apply appropriate privacy-enhancing technologies before transmitting. These include measures like anonymization, pseudonymization, zero data retention endpoints, and in-flight filtering, to prevent identification and reduce risk, thereby supporting compliance with applicable data protection laws. 
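
As one hedged illustration of such in-flight filtering, the sketch below pseudonymizes email addresses with a salted hash before a prompt leaves the developer's systems. The regex, salt, and helper names are invented for this example; production systems would pair this with dedicated PII-detection tooling, proper key management, and provider-side controls such as zero-data-retention endpoints.

```python
import hashlib
import re

# Illustrative regex-based pseudonymization of email addresses. A stable,
# salted hash lets the same person map to the same token without the raw
# identifier ever reaching the API provider.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(text: str, salt: str = "per-tenant-secret") -> str:
    """Replace each email address with a stable, non-reversible token."""
    def token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()
        return f"<user-{digest[:8]}>"
    return EMAIL_RE.sub(token, text)

prompt = "Summarize the complaint filed by jane.doe@example.com yesterday."
print(pseudonymize(prompt))
```

Because the token is derived with a per-tenant salt, the mapping cannot be reproduced by the provider, yet repeated mentions of the same address remain linkable within the tenant's own logs.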

    Once the developer submits prompt data to the API provider, the provider acts as a data processor and is responsible for processing data only in accordance with the developer’s documented instructions. To ensure proper governance, the parties should establish a written agreement — such as a data processing agreement that clearly outlines the scope and lawful purposes of processing, as well as the provider’s obligations regarding data retention and deletion. 

    The agreement should also require the provider to maintain records of processing activities, cooperate with audits, assist the developer with data subject requests and breach notifications, and implement appropriate safeguards — including encryption, access controls, logging, and incident detection/response — all in compliance with GDPR requirements.

    Shifting dynamics: Customers bringing their own API keys to developer AI features

    As organizations increasingly use AI internally — whether embedding off‑the‑shelf features or developing bespoke capabilities — there is a good chance they already hold API licenses for major platforms such as OpenAI or Azure.

    As such, it is increasingly common for procuring organizations to ask that the AI feature developer use the organization’s own API keys to access the feature. This gives the procuring organization more direct control over the data, use and costs associated with the API. However, this shift significantly impacts the role and control of the AI feature developer. 

    When the procuring organization uses its own API keys to access a developer AI feature, responsibility for transmitting, storing and controlling access to the data largely shifts to the procuring organization. This means the developer no longer has full visibility into how the data is handled once it leaves their infrastructure. As a result, it becomes much harder for the developer to verify whether safeguards — like encryption, access controls or quick data deletion — are properly in place, or to enforce policies that prevent misuse or breaches.
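
A minimal sketch of how a developer might support both models — its own key by default, the customer's key when supplied ("bring your own key") — while recording which party controls the call for audit purposes. None of these names come from a real product; the configuration object and keys are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical per-tenant configuration: if the procuring organization
# supplies its own API key, calls run under that key and the organization
# takes on greater responsibility for the data flow.

@dataclass
class TenantConfig:
    customer_api_key: Optional[str] = None

DEVELOPER_KEY = "DEV-KEY"  # placeholder for the developer's licensed key

def resolve_api_key(cfg: TenantConfig) -> Tuple[str, str]:
    """Return (key, controlling party) so each call can be audited."""
    if cfg.customer_api_key:
        return cfg.customer_api_key, "procuring organization"
    return DEVELOPER_KEY, "developer"

print(resolve_api_key(TenantConfig(customer_api_key="CUST-KEY")))
print(resolve_api_key(TenantConfig()))
```

Logging the controlling party alongside each request is one simple way to make the contractual allocation of responsibility discussed below verifiable in practice.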

    Because of this, it’s crucial to have clear, well-structured contracts between the developer and the organization. These should lay out who’s responsible for what — covering data security, liability and compliance — and reflect the actual level of control each party has over the data and the API.

    Key takeaways

    Effectively managing third‑party AI integrations requires balancing the benefits of rapid deployment and cost savings with the obligation to address privacy and data protection exposures. 

    Whether data flows go through company‑controlled APIs or customer‑managed keys, robust data‑governance frameworks ensure risks are equitably allocated and information is safeguarded in line with applicable jurisdictional requirements and the sensitivity of the data involved. 

    Ultimately, clear contractual responsibilities, active oversight and strong governance are essential when deploying AI features via third‑party APIs, especially as organizations increasingly want to use and control access to the AI capabilities they procure.

    Rachel Webber, AIGP, CIPP/E, CIPP/US, CIPM, CIPT, FIP, is senior counsel for a software as a service and AI organization. 

  • Insurance Newsletter – January 2026 | Insights

    1. Refinements to Solvency II – Third-Country Insurance Branches
    2. Solvency II Reporting and Disclosure: Post-implementation Amendments
    3. UK Berne Financial Services Agreement Guidelines
    4. Enhancing Banks’ and Insurers’ Approaches to Managing Climate-Related Risks
    5. Alternative Life Capital: Supporting Innovation in the Life Insurance Sector
    6. FCA Simplifies Complaints Reporting Process
    7. FCA and PRA Announce Plans to Support Growth of Mutuals Sector
    8. FCA Simplification of the Insurance Rules
    9. FCA Confirms Final Guidance to Tackle Serious Non-financial Misconduct in Financial Services
    10. Reform of Anti-Money-Laundering and Counter-Terrorism Financing Supervision
    11. Lloyd’s Market Bulletin: Update to the Agency Circumstances Procedure
    12. EIOPA Issues Guidance on Group Supervision

    1. Refinements to Solvency II – Third-Country Insurance Branches

    PRA Proposal of Further Refinements to Solvency II Regarding Third-Country Insurance Branches (CP20/25)

    Following Brexit, the UK Government has worked with the Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) to implement Solvency II into the UK’s financial services regulatory framework. Implementation of the amended regime (Solvency UK) was largely completed in December 2024, although the PRA and FCA continue to implement amendments as necessary.

    From September to December 2025, the PRA consulted on proposed changes to the treatment of third-country insurance branches.

    The primary change proposed is the increase of the subsidiarisation threshold from £500m to £600m in liabilities covered by the Financial Services Compensation Scheme. The PRA believes that inflation has artificially caused some branches to reach the liability threshold and therefore unnecessarily become UK subsidiaries.

    In addition, third-country branches are required to notify the PRA in the event they anticipate reaching the subsidiarisation threshold within three years.

    The PRA also confirmed a series of minor PRA Rulebook amendments, namely:

    • Removal of volatility adjustment eligibility for branches;
    • Absorption of two modifications by consent into the PRA Rulebook;
    • Reinstatement of reporting templates IR.19.01.01 (non-life-insurance claims) and IR.20.01.01 (development of the distribution of the claims incurred) for category 3 and 4 branches and discontinuation of quarterly reporting for these branches.

    The changes to the subsidiarisation threshold will take effect upon publication of the relevant policy statement (expected in the first half of 2026). The other changes noted will take effect on December 31, 2026.

    2. Solvency II Reporting and Disclosure: Post-implementation Amendments

    PRA Consultation on UK Solvency II Reporting and Disclosure: Post-implementation Amendments (CP22/25)

    As part of the continued review of Solvency UK, the PRA has opened consultation on amendments to the reporting and disclosure requirements under the regime.

    Key proposals:

    • Introduction of a requirement for third-country branches to report total projected Financial Services Compensation Scheme liabilities data to enhance risk visibility;
    • Introduction of new and amended templates for non-life income/expenditure/business-line reporting;
    • Clarification on paired asset, derivatives, and dividends reporting and removal of certain quarterly items considered non-essential;
    • Transfer of the reporting format of the Matching Adjustment Asset and Liability Information Return templates to XBRL (presently formatted in Excel);
    • Removal of duplicate reporting; and
    • Simplification of reporting where proportional reinsurance data is unavailable at a subclass level.

    Consultation closes on March 4, 2026. The implementation date of any resulting changes is anticipated to be on or after December 31, 2026.

    3. UK Berne Financial Services Agreement Guidelines

    PRA/FCA Guidelines on the UK Berne Financial Services Agreement (BFSA)

    In November 2025, the PRA and FCA jointly published guidelines for providing services under the BFSA.

    UK Insurance Firms

    Eligible UK insurance firms must notify the Swiss Financial Market Supervisory Authority (FINMA) with specified information and be placed on the FINMA register before providing services.

    To be eligible, UK insurance firms (insurers and intermediaries) must:

    • Be incorporated or formed under UK law, a UK resident, or a UK branch of a Covered Swiss Financial Services Supplier;
    • Be authorized or supervised as a UK insurer or insurance intermediary; and
    • Supply covered services for non-Swiss risks.

    To be eligible, UK insurers must:

    • Be subject to Solvency II (excluding UK branches of Covered Swiss Financial Services Suppliers);
    • Meet the solvency requirements without capital relief measures;
    • Fulfil company-specific management buffer requirements;
    • Have no life insurance liabilities exceeding 10% of the total best estimate liabilities under Solvency II (without capital relief measures); and
    • Ensure staff are knowledgeable of relevant Swiss legislation.

    Clients to whom a UK insurer may provide services must be incorporated in Switzerland and meet at least two of the following requirements:

    1.  Net turnover in excess of CHF 40 million
    2.  Balance sheet total in excess of CHF 20 million
    3.  In excess of 250 employees

    4. Enhancing Banks’ and Insurers’ Approaches to Managing Climate-Related Risks

    PRA Policy Statement: Enhancing Banks’ and Insurers’ Approaches to Managing Climate-Related Risks (PS25/25)

    On December 3, 2025, the PRA published its final policy on banks’ and insurers’ (Firms) management of climate-related risks, following consultation in April 2025.

    The final policy builds on the PRA’s 2019 expectations for Firms’ management of climate-related risks, providing greater clarity and aligning the expectations with international standards.

    Key changes to the final policy (from the draft proposals):

    • Further guidance on how Firms should apply expectations proportionately, reflecting their exposure to material climate-related risks and the scale and complexity of their business.
    • Clarification that the six-month review period proposed is not an implementation timeline but rather a period during which Firms would be expected to conduct an internal review of their current status in meeting the final policy expectations.
    • Firms may integrate climate-related risks into existing risk registers/governance frameworks rather than establishing new ones if risk identification remains robust.
    • Confirmation of the PRA’s view that existing Solvency Capital Rules “provide sufficient flexibility for an insurer to take account of climate-related risks in a way that it considers appropriate,” although insurers may also exercise discretion in making appropriate adjustments for internal models and market prices of climate-related risks.

    The policy took effect upon publication.

    5. Alternative Life Capital: Supporting Innovation in the Life Insurance Sector

    PRA Discussion Paper on Alternative Life Capital: Supporting Innovation in the Life Insurance Sector (DP2/25)

    The PRA is seeking feedback on potential policy changes that could enable life insurers to transfer defined tranches of risk to the capital markets. At this stage, no specific policy changes are proposed. Rather, the PRA is gathering stakeholder feedback on how to facilitate life insurers’ access to alternative forms of capital that do not derive from equity or debt issuance, with particular focus on identifying regulatory barriers to capital entering the sector. Feedback is sought by February 6, 2026.

    The PRA indicates that it is open to a broad range of innovative structures – including potential reforms to the Insurance Special Purpose Vehicle (ISPV) framework and adaptation of mechanisms used in other markets, such as banking.

    Nevertheless, the PRA has highlighted a non-exhaustive set of risk transformation examples (below) and is seeking views on their feasibility/attractiveness and associated risks: i) ISPVs; ii) significant risk transfers (SRTs); and iii) life insurance sidecars and joint ventures.

    ISPVs

    The PRA acknowledges that the current UK regime is targeted towards non-life and short-term risks, which may present challenges when considering its application to longer-term insurance liabilities.

    SRTs

    The PRA invites views on the potential adaptation of SRTs (well established in the banking sector) for use by life insurers, noting uncertainty over long-term outcomes and that the degree/effectiveness of risk transfer may vary over time depending on asset performance.

    Life Insurance Sidecars and Joint Ventures

    The PRA notes that some alternative structures (including strategic partnerships and joint ventures) are more prevalent internationally and have been developing in the UK, and it invites views on the potential use of life insurance sidecars.

    The Discussion Paper also sets out six overarching principles intended to guide the PRA’s consideration of alternative life capital.

    6. FCA Simplifies Complaints Reporting Process

    FCA Simplifies Complaints Reporting Process (PS25/19)

    The FCA has confirmed plans to streamline the way firms report complaints. Five existing complaints returns will be replaced by a single consolidated return. The first reporting period under the new process will run from January 1, 2026 to June 30, 2027.

    Insurance sector impact:

    • All firms will now report their complaints data on a fixed six-month and calendar-year basis. This replaces the use of each firm’s Accounting Reference Date.
    • Complaints reporting will be based on firms’ permissions; firms will only need to complete the sections of the new return relevant to their regulated activities.
    • Group reporting has been removed such that firms must now submit complaints data at the individual legal entity level.
    • The FCA is imposing a threshold of 500 complaints or more for insurers (and banks), above which it will publish the relevant firm’s data.
    • Clarification has been provided on the scope of product categories, insurance complaint issues, and the parameters of insurance permissions for the purposes of insurance consolidated complaints returns.

    7. FCA and PRA Announce Plans to Support Growth of Mutuals Sector

    Regulators Announce Plans to Support Growth of Mutuals Sector (Mutuals Landscape Report)

    In December 2025, the PRA and FCA jointly published the Mutuals Landscape Report (the Report) and announced a series of measures designed to support the growth of the mutuals sector.

    The Report focuses on the mutuals sector as a whole, noting the UK Government’s commitment to doubling its size, and sets out plans for credit unions, building societies, and mutual insurers.

    The Report notes challenges specifically for mutual insurers, including the following:

    • Smaller insurers face issues of economies of scale and business model sustainability, augmented in recent years by rising operational costs;
    • Insurance mutuals face difficulties in capital raising when looking to invest and grow; and
    • There are legislative barriers to capital management.

    The new targeted initiatives to support mutual insurers include the following:

    1. The establishment of a new FCA Mutual Societies Development Unit

    This will act as a central hub to help mutuals (including insurers) navigate policy and regulatory change by offering expertise on legislation and regulatory processes.

    2. Reduced barriers to entry

    The FCA is to provide free preapplication support for firms wishing to form/convert to mutual societies.

    The application processing window for new societies is also to be reduced from 15 to 10 working days, a change intended to incentivise more society registrations.

    3. The launch of a joint PRA and FCA Scale-up Unit to provide regulatory support to eligible firms, including mutuals that are looking to grow rapidly.


    8. FCA Simplification of the Insurance Rules

    FCA Policy Statement: Simplifying the Insurance Rules (PS25/21)

    The FCA has confirmed various measures that simplify regulations for firms across the insurance and funeral plans sectors and announced further changes affecting insurance firms in 2026 to support growth and innovation.

    Key final rules:

    • Firms now have the option to appoint a single lead manufacturer responsible for all Product Intervention and Product Governance Sourcebook 4 (insurance product governance) obligations.
    • Removal of the requirement for insurance product manufacturers to review insurance products at least every 12 months; firms must now determine the appropriate review frequency based on risk and potential customer harm.
    • Removal of the 15-hour minimum training and competence requirement for insurance employees; firms are now permitted to tailor training and competence arrangements to business needs.
    • Removal of the existing notification and annual reporting requirements regarding employers’ liability insurance, although firms must continue to notify the FCA of significant rule breaches.
    • The FCA has clarified its expectations of firms working together to manufacture products or services under the Consumer Duty. Firms are not required to have a say in each other’s decisions, nor is joint decision-making or an even allocation of responsibilities required.

    Key proposed changes:

    • Consulting on changes to the client categorisation rules, in line with the Consumer Duty.
    • Consultation on disapplying the Consumer Duty to non-UK business by the end of Q2 2026.

    • Review of core FCA Handbook definitions to promote “consistency and clarity.”

    9. FCA Confirms Final Guidance to Tackle Serious Non-financial Misconduct in Financial Services

    FCA Confirms Final Guidance to Tackle Serious Non-financial Misconduct in Financial Services (PS25/23)

    In July 2025, the FCA updated its rules to more broadly capture non-financial misconduct (NFM) across banks and non-banks (including insurers) that are subject to the Code of Conduct (COCON) and the Fit and Proper test (FIT). In December 2025, the FCA finalised its regulatory framework on NFM and provided further guidance on how to determine when there has been a breach and how to proceed thereafter.

    The FCA has introduced a new COCON rule that extends NFM to include “unwanted conduct that has the purpose or effect of violating a colleague’s dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for them.”

    Not all poor behaviour satisfies the regulatory threshold. But serious NFM is considered a conduct breach and requires declaration to the FCA. It may result in enforcement action against firms and/or FIT consequences for individuals.

    The rules do not require employers to monitor employees’ private lives; external conduct will be relevant only where there is risk of future regulatory breach(es) or the conduct is serious enough to erode public confidence.

    The guidance will come into effect on September 1, 2026 and will not have retrospective effect.

    Please see the corresponding Sidley briefing note for further information.

    10. Reform of Anti-Money-Laundering and Counter-Terrorism Financing Supervision

    HM Treasury Consultation Response: Reform of the Anti-Money-Laundering and Counter-Terrorism Financing Supervision Regime

    In 2022, HM Treasury undertook a review of the UK’s anti-money-laundering and counter-terrorism financing (AML/CTF) supervisory system. The review concluded that weaknesses in supervision may require structural reform. In summer 2023, HM Treasury then consulted on reform of the supervisory regime. As part of its consultation, HM Treasury requested feedback on four possible models for reform. In October 2025, HM Treasury published its response to this consultation.

    At present, the supervisory system comprises three supervisors: the FCA; His Majesty’s Revenue & Customs (HMRC); and 22 private sector professional body supervisors (PBSs).

    The UK Government has decided to proceed with model 3, the creation of a single professional services supervisor (SPSS). Under this model, the FCA will be granted responsibility for all AML/CTF supervision for the legal and accountancy sectors and trust and company service providers. As SPSS, the FCA will carry out these functions independently of HM Treasury and will in practice replace PBSs and HMRC in AML/CTF supervision.

    The UK Government has confirmed that efforts are underway to introduce the necessary primary legislation and establish a transition plan but has not committed itself to a strict timetable.

    A separate consultation on SPSS-specific powers closed in December 2025, with a response anticipated in early 2026.

    Following the 2022 review, HM Treasury has also proposed reform of the Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017 and related legislation via the draft Money Laundering and Terrorist Financing (Amendment and Miscellaneous Provisions) Regulations 2025 (the SI). The instrument is focused on targeted amendments to close regulatory loopholes, address proportionality concerns, and account for evolving risks in relation to AML/CTF. For example, the SI provides clarity on the scope of “unusually complex or unusually large” transactions for the purposes of enhanced due diligence – confirming that this measure is relative to what is standard for the sector/nature of the transaction.

    The final statutory instrument is expected to be laid before Parliament in early 2026.

    11. Lloyd’s Market Bulletin: Update to the Agency Circumstances Procedure

    Lloyd’s Market Bulletin: Update to the Agency Circumstances Procedure (Ref: Y5474)

    Introduced in 2000, Part A of the Agency Agreements (Amendment No.20) Byelaw amended the standard managing agent’s agreement. It provided that no transaction, arrangement, relationship, act or event which would or might otherwise be regarded as constituting or giving rise to a contravention of a managing agent’s fiduciary obligations shall be regarded as constituting a contravention if it occurs in circumstances, and in line with requirements, specified by the Council (the Agency Circumstances Procedure).

    In December 2025, Lloyd’s confirmed amendments to the ballot requirement of the Agency Circumstances Procedure.

    Previously, when a certain proportion of syndicate members objected to a specified proposal by the syndicate’s managing agent, it was required that a ballot be held of unaligned members (seeking approval of the proposal, usually following efforts by the managing agent to address concerns).

    December 2025 changes:

    • The ballot, if required, must be undertaken by unaligned syndicate members who are not Related Persons (as defined in Lloyd’s Market Bulletin, Agency Circumstances Procedure, Ref: Y3439, 2004).
    • The managing agent must offer a postal option for voting.

    The managing agent has discretion to offer the option to vote by email or other electronic means, so long as the integrity of the voting process is maintained.

    12. EIOPA Issues Guidance on Group Supervision

    European Insurance and Occupational Pensions Authority (EIOPA) Final Report on Guidelines on Exclusion of Undertakings From the Scope of Group Supervision

    EIOPA has published guidelines on exclusions from group supervision, specifying the conditions under which group supervisors may exclude undertakings from group supervision. The Guidelines will become applicable on January 30, 2027.

    Exclusions are only permissible in “exceptional circumstances” and must be duly justified to EIOPA and, where applicable, to the other supervisory authorities concerned.

    Guideline 1: Supervisors should not exclude if the entity (i) has material intragroup transactions, (ii) has significant influence/coordination over group insurers, or (iii) is needed to understand group risk.

    Guideline 2: Supervisors should only consider exclusion based on legal impediments to information exchange between authorities where (i) the undertaking is located in a third country with no equivalence decision; (ii) the undertaking is not party to the International Association of Insurance Supervisors Multilateral Memorandum of Understanding; and (iii) the entity is small relative to the group and its risks are already captured and managed at sole level. Before excluding, supervisors should first consider signing a memorandum of understanding with the third-country supervisor.

    Guideline 3: Where exclusion would lead to non-application of group supervision for an undertaking under Article 214(2) point (b) or (c), exclusion should occur only if the entity is small relative to the group, its risks are already captured and managed at individual entity level, and (if a parent undertaking) the risks arise almost entirely from the group.

    Guideline 4: Ultimate parents should only be excluded if the parent is not in any of the circumstances set out in Guideline 1; all group risks arising from all other undertakings and intra-group transactions that could affect the undertakings are fully captured at the intermediate level; and the supervisor has adequate information on parent-level group transactions.

    Guideline 5: Exclusions must be reassessed and monitored – including ongoing review of intragroup transactions – to ensure conditions remain met.

    Continue Reading

  • Green Assist: transforming agricultural waste into cosmetics and food

    Green Assist: transforming agricultural waste into cosmetics and food

    Plinius Labs, a Belgian research company in the bioeconomy sector, is transforming agricultural waste into bio-based premium ingredients for cosmetics and food. The company has developed AMPLE – a new process to extract natural compounds from flax shives – and sought Green Assist support to scale it up and overcome financial and industrialisation challenges.

    Plinius Labs is built on the belief that plant waste is full of useful natural chemicals. Too often treated as simple waste, biomass contains molecules that serve essential roles in nature – from protection to healing. Plinius Labs works to extract and valorise this potential, combining 40+ years of green chemistry expertise with cutting-edge R&D to develop natural ingredients and help industries reduce their environmental footprint.

    The company’s mission is therefore to replace petroleum‑derived additives with greener alternatives and, to move forward, it needed to build a pilot plant and attract funding. The team faced challenges around investment planning, market positioning, and how to scale the AMPLE process efficiently and sustainably. That’s when they turned to Green Assist for help.

    Between July and October 2025, Green Assist provided expert advisory support tailored to the project’s needs. This included guidance on developing a business model, refining the company’s value proposition, and identifying potential investors and clients. The expert also helped the team strengthen their financial strategy and assess how to grow in line with circular economy goals.

    Thanks to Green Assist, Plinius Labs is now ready to implement its pilot project and bring its eco-innovation closer to the market. With clear business and financial plans in place, the company is better equipped to demonstrate the environmental and commercial value of its work and to form key partnerships.

    “Green Assist boosted the scaling-up of Plinius Labs. Potential investors were successfully identified as well as target customers for partnering contracts in the development of our proprietary process to produce bio-based cosmetic components. The assigned expert showed dedication and expertise, keeping our team focused and enhancing the market credibility of Plinius Labs’ core competences in green chemistry,” said Yves Boonen, CEO of the company.


    Green Assist aims to build a pipeline for high-impact green investment projects in sectors related to biodiversity, natural capital and circular economy, as well as in non-environmental sectors. 

    Learn more about how Green Assist can help you get free tailored support for your green project or contact us at cinea-green-assist@ec.europa.eu. To request advisory services from Green Assist, simply fill out this short form.

    Continue Reading

  • Starbucks Says Turnaround ‘Ahead of Schedule’ as Sales Rebound – The New York Times

    1. STARBUCKS CORP SEC 10-Q Report  TradingView
    2. SBUX Forecasts Growth in Sales and Expansion by FY26  GuruFocus
    3. Starbucks’s (NASDAQ:SBUX) Q4 CY2025 Sales Top Estimates, Stock Soars  FinancialContent
    4. Starbucks Stock Jumps As the Coffee Giant’s Turnaround Starts to Click  Business Insider

    Continue Reading

  • UK media groups should be allowed to opt out of Google AI Overviews, CMA says | Digital media

    UK media groups should be allowed to opt out of Google AI Overviews, CMA says | Digital media

    Web publishers and news organisations could be given the power to stop Google scraping their content for its AI Overviews, under measures announced by the UK competition watchdog to loosen the company’s grip on online search.

    Media organisations have experienced a drop in click-through traffic to their websites – and therefore their revenue – since Google started posting AI summaries at the top of search results, which many people read without clicking through to the original journalism.

    Sites have been unable to opt out of their content being scraped for those overviews without also withdrawing from traditional Google search, which, given the company’s market dominance, would hugely affect the visibility of their journalism.

    On Wednesday, the Competition and Markets Authority proposed “a fairer deal” over how their content was used and launched a month-long consultation on allowing publishers to “be able to opt out of their content being used to power AI features such as AI Overviews or to train AI models outside of Google search”.

    In the first measures to be announced under the UK’s new digital markets competition regime, the CMA also said Google would have to rank its search results fairly, including not uprating organisations with which it has commercial relationships or potentially punishing websites for speaking out against it. Google says it does not provide special treatment based on an organisation’s relationship with it.

    News media organisations hope the changes will increase their leverage to get paid if their content is used in Google’s AI mode. However, there was disappointment that the CMA also announced it would wait a year to decide whether to take further action to ensure publishers receive fair and reasonable terms for their content.

    Owen Meredith, the chief executive of the News Media Association trade body, welcomed the moves. He said the CMA had recognised Google was “able to extract valuable data without reward, harming publishers and giving the company an unfair advantage over competitors in the AI model market, including British startups”.

    Google said: “Any new controls need to avoid breaking search in a way that leads to a fragmented or confusing experience,” but added that it was “working on ways to let news sites opt out of AI Overviews”.

    The CMA is also expected to legally require Google to install “choice screens” to allow users to more easily switch to other search services on Android mobiles and introduce them on the Google Chrome browser.

    This month a report from the Reuters Institute for the Study of Journalism found media executives around the world feared search engine referrals would fall by 43% over the next three years amid the rise of AI summaries and chatbots.

    Google search referrals to news sites are down 33% globally, according to data for more than 2,500 news sites sourced by Chartbeat, with lifestyle, celebrity and travel content more heavily affected than current affairs and news outlets.

    Sarah Cardell, the CMA chief executive, said the moves would give UK businesses and consumers more control over how they interacted with Google search, unlock opportunity for innovation across the UK tech sector and “provide a fairer deal for content publishers, particularly news organisations, over how their content is used in Google’s AI Overviews”.

    Ron Eden, Google’s principal for product management, said: “Our goal is to protect the helpfulness of search for people who want information quickly, while also giving websites the right tools to manage their content. We look forward to engaging in the CMA’s process and will continue discussions with website owners and other stakeholders on this topic.”

    Continue Reading