Category: 3. Business

  • Slightly firmer tone to end the week


  • Dollar set for weekly loss amid investor unease about trade – Reuters

    1. Dollar set for weekly loss amid investor unease about trade  Reuters
    2. Dollar down against major currencies  Business Recorder
    3. Dollar set for weekly slide as trade, shutdown concerns weigh  Dunya News
    4. U.S. Dollar Gains Ground As Treasury Yields Rise: Analysis For EUR/USD, GBP/USD, USD/CAD, USD/JPY  FXEmpire
    5. DXY: Softer on the day for now – OCBC  FXStreet


  • Journal of Medical Internet Research

    Background

Patient experience is the general satisfaction a patient obtains during the process of receiving health care services. In particular, patient experience is considered one of the central pillars of health care quality. Various questionnaires and scales have been developed to measure patient experience in different health care settings, such as the Outpatient Experiences Questionnaire, the Chinese Patient Experience Questionnaire, and the Picker scale. Previous research has shown that patient experience is closely related to the quality of health care delivery, involving outcomes such as patient safety and clinical effectiveness. Some qualitative studies further indicated that patients’ more positive communication experience with their physicians is associated with higher general satisfaction with the quality of health care delivery. Thus, policymakers worldwide increasingly prefer patient-experience data over performance indicators for evaluating the quality of health care services.

In recent years, the Chinese government has been dedicated to improving the patient experience and has encouraged the development and application of artificial intelligence in health services in various scenarios (eg, conversational agents, artificial intelligence-assisted diagnosis, and decision-making). This initiative promotes the construction of smart hospitals with the aim of solving the urgent and difficult problems people face when seeking health services, thereby continuously improving the quality of health services.

Conversational agents are artificial intelligence programs (ie, chatbots) that engage in dialogs with patients via mobile devices before a consultation with their physicians in outpatient departments. With these contextual question-answering agents, patients’ information on their conditions, symptoms, and past medical history (eg, disease history; examination, medication, or operation history; allergy history; family history; and personal history of drinking or smoking) can be collected and then sent to their physicians’ workstations in a structured form. Review studies have pointed out that artificial intelligence-assisted conversational agents have the potential to reduce the time required for history taking and improve consultation efficiency, thereby resulting in high levels of satisfaction. These studies further indicated that few quantitative studies have evaluated these effects and outcomes of conversational agents with objective measures.

    Objectives

Under the national policies on the digital transformation of the health care industry, artificial intelligence-assisted conversational agents have begun to be applied to enhance health care delivery in tertiary public hospitals in economically developed regions of China (eg, Shanghai in 2021). These agents not only help patients report their information in detail with enough time but also allow their physicians to quickly grasp their conditions before a consultation. It is apparent that artificial intelligence-assisted conversational agents play a positive role in improving the efficiency and quality of health care delivery related to physicians.

Despite the application of artificial intelligence-assisted conversational agents in tertiary public hospitals in economically developed regions of China, evidence of their effectiveness in improving health care delivery is lacking to date. Some studies have investigated the factors influencing patients’ adoption of and continuance intention toward artificial intelligence-assisted conversational agents in outpatient departments. Several studies have explored the design of these agents and the issues and barriers in their usage. Another study assessed the impact of an artificial intelligence-based conversational agent on operational performance. However, little further research has evaluated the effect of artificial intelligence-assisted conversational agents on the patient experience related to physicians during outpatient visits.

Therefore, this study aimed to examine whether the use of artificial intelligence-assisted conversational agents during outpatient visits improves the patient experience related to physicians and to further identify differences in patient experience between conversational agent users and nonusers.

    Questionnaire Design

The Chinese Outpatient Experience Questionnaire was the basis of our survey. It includes 6 dimensions (physical environment and convenience, medical service fees, physician-patient communication, health information, short-term outcome, and general satisfaction), 28 items, and participant characteristics (eg, sex, age, marital status, education, living place, monthly income, self-rated health status, and visit information). The questionnaire has been verified with good reliability and validity (χ²/df=2.775, goodness-of-fit index=0.893, comparative fit index=0.930, Tucker-Lewis index=0.921, root mean square error of approximation=0.055, root mean square residual=0.038). We selected 4 dimensions (physician-patient communication, health information, short-term outcome, and general satisfaction; Cronbach α=.968 in this study) and the corresponding 19 items to survey the outpatient experience related to physicians. Moreover, we added another question in the participant characteristics section—“Did you use the artificial intelligence-assisted conversational agents during this outpatient visit?”—to distinguish conversational agent users from nonusers.

    Data Collection

The target population was adult residents of China who had sought outpatient services from tertiary public hospitals within the past 2 weeks, selected using random sampling. We used a professional data collection platform in China (Credamo) to create an electronic questionnaire with which to survey the targeted residents. Credamo’s sample database includes more than 3.0 million members with confirmed personal information from all provinces and regions in China. With the support of Credamo, this study distributed electronic questionnaires to the targeted population nationwide and invited them to participate in the survey from April 1 to 15, 2025. Specifically, the questionnaire links were randomly sent through Credamo, in a targeted manner, to members who met the following inclusion criteria, set using Credamo’s sample feature screening function: being aged 18 years or older, being located within China, and having had an outpatient visit at a tertiary public hospital within the past 2 weeks. Each invited participant could click on the link via their mobile phone to access and complete the electronic questionnaire. Before the survey, we introduced the nature and objective of the study and guaranteed that the collected data would not be used for other purposes. The survey was conducted only after an individual’s consent was obtained. Each invited participant was prompted to fill in the electronic questionnaire based on their outpatient experience in tertiary public hospitals within the past 2 weeks, and each internet protocol address could fill in the questionnaire only once.

    Ethical Considerations

The institutional review board of Xuzhou Medical University approved this study before data collection (number 2024Z048). General information about the nature and objective of this study was provided at the beginning of the survey as a means of obtaining informed consent. All participants were informed that their participation was voluntary and that they were free to refuse or discontinue participation at any time. Only after an individual’s consent was obtained online could he or she continue to participate in the survey. During data collection, no identifying information was collected, and the researchers only had access to the user ID. Participants who met the inclusion criteria and carefully completed the questionnaire received a monetary reward (US $0.70). This online survey was designed in accordance with the CHERRIES checklist.

    Measures

The dependent variable in the multiple linear regression analysis was the total patient experience score related to physicians. The 4 dimensions (physician-patient communication, health information, short-term outcome, and general satisfaction) and the corresponding 19 items of the Chinese Outpatient Experience Questionnaire were used to calculate the patient experience scores related to physicians during outpatient visits. Each item was rated on a 5-point Likert scale, with a higher score indicating a better experience. Each dimension score was calculated by summing the scores of all items in that dimension and dividing by the number of items in the dimension. The total patient experience score was calculated by summing the scores of the 19 items and dividing by 19; it therefore ranged from 1 to 5. The independent variables included whether an artificial intelligence-assisted conversational agent was used during the outpatient visit (coded as 1=yes, 0=no), as well as the participant characteristics, including demographic and visit information.
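The scoring scheme described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors’ code; the dimension names follow the paper, and the item groupings (8, 7, 2, and 2 items) are inferred from Table 2.

```python
# Each of the 19 items is rated 1-5. A dimension score is the mean of its
# items, and the total score is the mean of all 19 items (range 1-5).
DIMENSIONS = {
    "physician-patient communication": 8,
    "health information": 7,
    "short-term outcome": 2,
    "general satisfaction": 2,
}

def score_response(item_scores):
    """item_scores: list of 19 Likert ratings (1-5), ordered by dimension."""
    assert len(item_scores) == 19 and all(1 <= s <= 5 for s in item_scores)
    scores, i = {}, 0
    for dim, n in DIMENSIONS.items():
        scores[dim] = sum(item_scores[i:i + n]) / n  # dimension mean
        i += n
    scores["total"] = sum(item_scores) / len(item_scores)  # overall mean
    return scores
```

Because every score is a mean of 1-to-5 ratings, the total necessarily stays within the 1-to-5 range reported in the text.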

    Statistical Analysis

Descriptive statistics were performed to summarize the characteristics of participants. When the data followed a normal distribution, t tests were used to test the mean difference in patient experience scores between conversational agent users and nonusers. Multiple linear regression analysis was then performed to determine whether the use of artificial intelligence-assisted conversational agents during outpatient visits was associated with a better patient experience related to physicians. Moreover, the average percentage change in the dependent variable associated with a one-unit increase in an independent variable was calculated by dividing the independent variable’s unstandardized regression coefficient by the mean of the dependent variable and multiplying by 100%. Benjamini-Hochberg adjusted P values ≤.05 were considered statistically significant. All data analyses were done using SPSS (version 23.0; IBM Corp) and Stata (version 15.0; StataCorp).
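The Benjamini-Hochberg step-up procedure used above can be sketched in pure Python. This is an illustrative implementation of the standard procedure, not the authors’ code, and the input P values below are invented for the example:

```python
def benjamini_hochberg(p_values):
    """Return Benjamini-Hochberg adjusted P values.

    For sorted P values p_(1) <= ... <= p_(m), the adjusted value at rank i
    is min over j >= i of (m * p_(j) / j), mapped back to the input order.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest P value down, keeping the running minimum so the
    # adjusted values are monotone in the raw P values.
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, p_values[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.005]))  # [0.02, 0.04, 0.04, 0.02]
```

A variable is then flagged as significant when its adjusted P value is ≤.05, matching the criterion stated in the text.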

    Participant Characteristics of Conversational Agent Users and Nonusers

A total of 462 online responses were received, of which 394 were eligible; 68 responses were excluded because they showed a logical contradiction on the screening question (ie, whether the respondent had an outpatient visit at a tertiary public hospital within the past 2 weeks), contained the same answer to all questions, or were completed in less than 120 seconds. The detailed characteristics of the participants are shown in Table 1. Among these participants, 53.0% (209/394) reported using conversational agents during the outpatient visit. Conversational agent users and nonusers differed in sex (χ²=9.90, df=1, P=.002), educational level (χ²=6.025, df=2, P=.049), monthly income (χ²=24.262, df=3, P<.001), self-rated health status (χ²=31.247, df=2, P<.001), and physician title (χ²=9.643, df=3, P=.02). Moreover, participants who rated their health status better were more likely to use the conversational agents during the outpatient visit.

    Table 1. Differences in the participant characteristics of conversational agent users and nonusers.
    Characteristic Overall, n (%) Conversational agent users, n (%) Nonusers, n (%) χ2 (df) P value
    Sex 9.90 (1) .002
    Male 120 (30.5) 78 (37.3) 42 (22.7)
    Female 274 (69.5) 131 (62.7) 143 (77.3)
    Age (years) 3.393 (3) .34
    18‐25 187 (47.5) 91 (43.5) 96 (51.9)
    26‐30 82 (20.8) 46 (22.0) 36 (19.5)
    31‐40 80 (20.3) 44 (21.1) 36 (19.5)
>40 45 (11.4) 28 (13.4) 17 (9.2)
    Marital status 2.924 (1) .09
    Unmarried 251 (63.7) 125 (59.8) 126 (68.1)
    Married 143 (36.3) 84 (40.2) 59 (31.9)
    Educational level 6.025 (2) .049
    High School and below 59 (15.0) 30 (14.4) 29 (15.7)
    College and undergraduate 272 (69.0) 154 (73.7) 118 (63.8)
    Postgraduate and above 63 (16.0) 25 (12.0) 38 (20.5)
    Monthly income (US $) 24.262 (3) <.001
    <417.97 139 (35.3) 54 (25.8) 85 (45.9)
417.97‐696.48 86 (21.8) 58 (27.8) 28 (15.1)
    696.62‐1114.45 82 (20.8) 54 (25.8) 28 (15.1)
    ≥1114.59 87 (22.1) 43 (20.6) 44 (23.8)
Current living place 0.279 (1) .60
    Urban areas 315 (79.9) 165 (78.9) 150 (81.1)
    Rural areas 79 (20.1) 44 (21.1) 35 (18.9)
    Self-rated health status 31.247 (2) <.001
    Fair 112 (28.4) 36 (17.2) 76 (41.1)
    Good 198 (50.3) 114 (54.5) 84 (45.4)
    Very good 84 (21.3) 59 (28.2) 25 (13.5)
Specialty services 7.777 (8) .46
    Internal medicine 131 (33.2) 69 (33.0) 62 (33.5)
    Surgery 67 (17.0) 40 (19.1) 27 (14.6)
    Obstetrics and gynecology 33 (8.4) 11 (5.3) 22 (11.9)
    Orthopedics 29 (7.4) 14 (6.7) 15 (8.1)
    Traditional Chinese medicine 24 (6.1) 13 (6.2) 11 (5.9)
    Otorhinolaryngology 23 (5.8) 14 (6.7) 9 (4.9)
    Ophthalmology 17 (4.3) 9 (4.3) 8 (4.3)
    Stomatology 30 (7.6) 18 (8.6) 12 (6.5)
    Dermatology 40 (10.2) 21 (10.0) 19 (10.3)
    Physician title 9.643 (3) .02
    Senior 140 (35.5) 86 (41.1) 54 (29.2)
    Deputy Senior 113 (28.7) 53 (25.4) 60 (32.4)
    Intermediate 119 (30.2) 63 (30.1) 56 (30.3)
    Junior 22 (5.6) 7 (3.3) 15 (8.1)
    Whether this outpatient visit was a revisit 0.126 (1) .72
    Yes 57 (14.5) 29 (13.9) 28 (15.1)
    No 337 (85.5) 180 (86.1) 157 (84.9)

a: Represents a significant difference between the 2 groups.

    Differences in Patient Experience Related to Physicians Between Conversational Agent Users and Nonusers

Table 2 shows the patient experience scores of conversational agent users and nonusers. There were significant differences between the 2 groups in the total patient experience scores, the 4 dimensions, and the 19 items.

Specifically, conversational agent users obtained significantly higher total patient experience scores related to physicians than nonusers (t(392)=5.589, P<.001). Conversational agent users also reported significantly higher scores than nonusers in the dimensions of physician-patient communication (t(392)=5.013, P<.001), health information (t(392)=5.758, P<.001), short-term outcome (t(392)=4.608, P<.001), and general satisfaction (t(392)=5.080, P<.001).

Moreover, conversational agent users also reported significantly higher scores than nonusers on all 19 items of patient experience related to physicians (see Table 2).

    Table 2. Patient experience scores of conversational agent users and nonusers.
    Dimension/item Conversational agent users scores, mean (SD) Nonusers scores, mean (SD) t test (df) P value
Physician-patient communication 4.11 (0.74) 3.75 (0.70) 5.013 (392) <.001
Clear explanation 4.13 (0.78) 3.91 (0.77) 2.753 (392) .006
    Careful listening 4.22 (0.84) 3.89 (0.81) 3.920 (392) <.001
    Enough time for communication 3.93 (1.00) 3.55 (1.02) 3.726 (392) <.001
    Courtesy and respect attitude 4.20 (0.79) 3.84 (0.81) 4.498 (392) <.001
    Cared about anxieties or fears 4.03 (0.92) 3.52 (1.05) 5.141 (392) <.001
    Involve in decision making 4.04 (0.92) 3.67 (0.99) 3.807 (392) <.001
    Respect opinions 4.10 (0.82) 3.72 (0.85) 4.519 (392) <.001
    Protect personal privacy 4.22 (0.93) 3.87 (0.88) 3.892 (392) <.001
    Health information 4.17 (0.74) 3.73 (0.75) 5.758 (392) <.001
    Explanations for your illness 4.13 (0.91) 3.84 (0.84) 3.324 (392) .001
    Dangerous signals at home 4.22 (0.85) 3.90 (0.83) 3.857 (392) <.001
    Health knowledge 4.13 (0.88) 3.71 (0.99) 4.494 (392) <.001
    Explain following examination 4.17 (0.91) 3.64 (0.96) 5.608 (392) <.001
    Explain examination result 4.16 (0.92) 3.71 (0.97) 4.734 (392) <.001
    Explain drug effects in a way you could understand 4.07 (0.89) 3.54 (1.01) 5.511 (392) <.001
    Medication precautions 4.27 (0.72) 3.79 (0.93) 5.626 (392) <.001
Short-term outcome 4.19 (0.79) 3.81 (0.84) 4.608 (392) <.001
    Reduce/prevent from health problems 4.23 (0.84) 3.86 (0.91) 4.155 (392) <.001
    Handle health problems after visit 4.15 (0.86) 3.76 (0.90) 4.401 (392) <.001
General satisfaction 4.24 (0.76) 3.85 (0.78) 5.080 (392) <.001
    Satisfaction overall 4.27 (0.81) 3.84 (0.82) 5.210 (392) <.001
    Choose this hospital again 4.22 (0.81) 3.85 (0.86) 4.293 (392) <.001
    Total patient experience scores 4.15 (0.71) 3.76 (0.69) 5.589 (392) <.001

a: Represents a significant difference between the 2 groups.

b: Represents the dimensions in the questionnaire.

    Influence of Conversational Agents on Patient Experience Related to Physicians

As shown in Table 3, after controlling for other participant characteristics (including demographic and visit information) and adjusting the P values using the Benjamini-Hochberg procedure, whether a conversational agent was used during the outpatient visit was a significant factor influencing the total patient experience scores related to physicians (B=0.298, adjusted P=.013). The standardized regression coefficient for conversational agent use was 0.205. Thus, with other covariates held constant, the use of an artificial intelligence-assisted conversational agent increased the total patient experience score related to physicians by 7.51% on average (0.298/3.97×100%).
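The percentage-change figure can be reproduced directly from the values reported above (the unstandardized coefficient B and the sample mean of the total score); this is a worked check of the arithmetic, not the authors’ code:

```python
# Average percentage change per one-unit increase in the predictor:
# (unstandardized B / mean of the dependent variable) * 100%.
B = 0.298          # coefficient for conversational agent use (Table 3)
mean_score = 3.97  # mean total patient experience score from the text

pct_change = B / mean_score * 100
print(f"{pct_change:.2f}%")  # 7.51%
```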

    Table 3. Factors influencing the total patient experience scores related to physicians in the multiple linear regression.
    Variables B SE t test P value Adjusted P value
    Constant 3.241 0.211 15.35 <.001
    Whether the conversational agent was used (ref: No)
    Yes 0.298 0.076 3.95 <.001 .013
    Sex (ref: male)
    Female 0.047 0.082 0.57 .57 .780
    Age (ref: 26‐30 y old)
    18‐25 0.197 0.127 1.55 .12 .496
    31‐40 −0.111 0.117 −0.95 .34 .678
>40 0.189 0.150 1.26 .21 .496
    Marital status (ref: unmarried)
    Married 0.279 0.117 2.38 .02 .156
    Educational level (ref: high school and below)
    College and undergraduate 0.020 0.120 0.16 .87 .906
    Postgraduate and above −0.101 0.148 −0.68 .497 .780
Monthly income (ref: <417.97)
417.97‐696.48 −0.065 0.111 −0.58 .56 .780
696.62‐1114.45 −0.023 0.102 −0.23 .82 .886
≥1114.59 0.172 0.124 1.40 .16 .496
    Current living place (ref: Rural areas)
    Urban areas 0.226 0.103 2.20 .03 .182
    Self-rated health status (ref: fair)
Good 0.079 0.090 0.88 .38 .678
    Very good 0.520 0.110 4.74 <.001 .013
    Specialty services (ref: dermatology)
    Internal medicine 0.096 0.112 0.86 .39 .678
    Surgery 0.011 0.128 0.08 .93 .933
    Obstetrics and gynecology −0.062 0.156 −0.40 .69 .781
    Orthopedics −0.231 0.171 −1.35 .18 .496
    Traditional Chinese medicine 0.222 0.135 1.65 .10 .496
    Otorhinolaryngology −0.240 0.189 −1.27 .21 .496
    Ophthalmology −0.090 0.202 −0.45 .66 .781
    Stomatology 0.172 0.131 1.31 .19 .496
    Physician title (ref: senior)
    Deputy Senior −0.082 0.090 −0.91 .36 .678
    Intermediate 0.036 0.089 0.40 .69 .781
    Junior −0.070 0.173 −0.40 .69 .781
Whether this outpatient visit was a revisit (ref: Yes)
    No −0.062 0.102 −0.60 .55 .780

a: B: unstandardized regression coefficient.

b: SE: standard error.

c: The adjusted P values of the independent variables were obtained using the Benjamini-Hochberg procedure.

d: Represents a variable that is significant in the multiple linear regression.

Among the control factors, self-rated health status (B=0.520, adjusted P=.013) was a significant factor influencing the total patient experience scores related to physicians. Residents who rated their health status as “very good” were more likely to report a higher patient experience score related to physicians during the outpatient visit.

Moreover, the regression model explained 25.54% of the variance in the total patient experience scores related to physicians (R²=0.2554). We further calculated variance inflation factors to check for collinearity; the values for all independent variables were between 1.15 and 2.51, indicating no collinearity.

    Principal Findings

We found that the use of artificial intelligence-assisted conversational agents increased the total patient experience scores related to physicians by 7.51% on average. In this study, conversational agent users reported a better experience in physician-patient communication, access to health information, short-term outcomes, and general satisfaction, as well as on the corresponding 19 items.

The use of artificial intelligence-assisted conversational agents can improve communication efficiency between physicians and patients during outpatient visits. After completing registration, patients can tap “pre-consultation” on the registration and appointment page of the hospital’s mobile app and then engage in a dialog with the artificial intelligence chatbot. A corresponding structured preconsultation report is then generated and delivered to the visiting physician, who can thereby quickly grasp the patient’s condition before the consultation and conduct a targeted inquiry. This could improve consultation efficiency between physicians and patients and, in turn, contribute to positive outcomes, such as better physician-patient communication, access to more targeted health information, improved short-term outcomes, and increased general satisfaction. Our quantitative study confirmed these positive outcomes that appear to result from the use of artificial intelligence-assisted conversational agents.

Currently, the lack of adequate communication between Chinese patients and their physicians during outpatient visits in tertiary hospitals has not been effectively remedied, which hinders improvement of the physician-patient relationship. Artificial intelligence-assisted conversational agents can help patients communicate more effectively with their physicians and access more targeted health information within the existing limited time. This could, in turn, result in a better physician-patient relationship during outpatient visits.

Our quantitative study found that the overall patient experience related to physicians improved significantly when an artificial intelligence-assisted conversational agent was used during the outpatient visit. This finding is supported by Lu et al’s finding that using mobile health apps could improve the patient experience. Moreover, the extent to which the artificial intelligence-assisted conversational agents improved the patient experience in this study was higher than that reported for mobile health apps in previous research in 2018 (7.51% vs 5.35%). This difference might reflect the fact that artificial intelligence-assisted conversational agents not only allow patients to report their information in detail before a consultation but also help them communicate more efficiently with their physicians within the existing limited time, thereby providing a better communication experience. In contrast, earlier mobile health apps were dedicated to saving patients’ waiting time throughout their visits and thereby improving their visit experience. More importantly, there is increasing evidence that improved health care delivery improves the patient experience, which in turn brings better health outcomes to patients. Therefore, we have reason to believe that increased application and use of artificial intelligence-assisted conversational agents in outpatient departments could contribute to better health outcomes for outpatients.

Nevertheless, artificial intelligence-assisted conversational agents are currently applied in outpatient departments mainly in tertiary public hospitals in big cities of China (eg, Shanghai, Shenzhen, and Wuhan). Survey research in 2022 found that 6 months after tertiary hospitals in Shanghai deployed artificial intelligence-assisted conversational agents in outpatient departments, patients’ usage rates fell short of expectations (26% and 20% for the second- and fourth-ranked hospitals, respectively). Therefore, we suggest that public hospitals be encouraged to promote the application of artificial intelligence-assisted conversational agents in outpatient departments and to integrate them into their existing mobile health apps to continuously improve the patient experience related to physicians during outpatient visits. More importantly, given that public hospitals in less-developed regions generally lack sufficient funds to deploy artificial intelligence-assisted conversational agents, we also suggest that the Chinese government increase financial support for these hospitals. This would accelerate the adoption of artificial intelligence-assisted conversational agents in outpatient departments and thereby improve the patient experience on a large scale.

Moreover, our study also showed that self-rated health status was a significant factor influencing the patient experience related to physicians during outpatient visits. This result is similar to the findings of several studies on the effect of mobile health apps, in which patients who rated their health status better were more likely to report a better patient experience. Another study, by Li et al, also indicated that patients with worse self-rated health status are more likely to experience a negative physician-patient relationship. Therefore, we suggest that hospitals make full use of artificial intelligence-assisted conversational agents to further improve the medical experience of patients with worse self-rated health status.

    Limitations

This study has some limitations. First, the data were self-reported by adult residents based on their outpatient experience within the past 2 weeks, which might introduce recall bias and selection bias. Second, our conclusions might have been biased by sample distributions, such as sex and age. Therefore, multiple linear regression analysis controlling for participant characteristics was performed to examine whether the use of artificial intelligence-assisted conversational agents improves the patient experience related to physicians during outpatient visits, which should make the conclusions more reliable and stable. Third, patients could freely choose whether to use the artificial intelligence-assisted conversational agents during outpatient visits, a choice that might be influenced by factors such as digital literacy, education, and general attitude toward health technology; these factors might also confound the observed differences in patient experience. Further research is needed to explore the mechanism by which the use of artificial intelligence-assisted conversational agents improves the patient experience related to physicians during outpatient visits.

    Conclusions

Our work provides evidence supporting the use of artificial intelligence-assisted conversational agents for improving the patient experience related to physicians during outpatient visits, especially in terms of better physician-patient communication, access to more targeted health information, improved short-term outcomes, and increased general satisfaction. All of these may further bring positive health outcomes to patients. Therefore, we suggest that public hospitals consider the benefits of artificial intelligence-assisted conversational agents and actively deploy them in outpatient departments so as to continuously improve the patient experience related to physicians during outpatient visits.

    The authors would like to thank all participants involved in the survey. This work was supported by the National Social Science Foundation of China (grant number 19BGL251). The funder had no involvement in the study design, data collection, analysis, interpretation, or the writing of the manuscript.

    The datasets generated during or analyzed during this study are available from the corresponding author on reasonable request.

    None declared.

    Edited by Alicia Stone; submitted 25.Apr.2025; peer-reviewed by John Grosser, Maria Chatzimina; final revised version received 16.Sep.2025; accepted 16.Sep.2025; published 17.Oct.2025.

    ©Dehe Li, Heman Zhang, Chuntao Lu, Chunxia Miao. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 17.Oct.2025.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


  • FTC Approves Final Divestiture Order in Synopsys and Ansys Deal

    The Federal Trade Commission finalized a consent order that requires Synopsys, Inc. and Ansys, Inc. to divest certain assets to resolve antitrust concerns surrounding their $35 billion merger.

    The FTC’s final consent order preserves competition across several software tool markets that are critical for the design of semiconductors and light simulation devices. 

The final consent order resolves FTC allegations that the deal would eliminate head-to-head competition between Synopsys and Ansys in three critical software tool markets, leading to higher prices for tools used in the design of semiconductors and light simulation devices, as well as decreased innovation. Under the order, Synopsys is required to divest its optical software tools and its photonic software tools, while Ansys will divest a power consumption analysis tool called PowerArtist. Both Synopsys and Ansys will divest their assets to Keysight Technologies, Inc. The FTC alleged that, without the divestitures, the deal would ultimately harm device manufacturers and consumers.

    Following a public comment period, the Commission voted 3-0 to approve the final order.

    Continue Reading

  • Bank of Canada to focus more on risks ahead of rate decision, says head – Reuters

    1. Bank of Canada to focus more on risks ahead of rate decision, says head  Reuters
    2. Bank of Canada Survey, CPI Data to Weigh on Next Rate Decision, Gov. Macklem Says  The Wall Street Journal
    3. BOC’s Macklem: We’re putting more emphasis on risk when it comes to the next rate decision  TradingView
    4. BoC to resume economic forecasting with rate decision this month  Advisor.ca
    5. Bank of Canada’s Macklem sees slow growth, soft job market ahead of rate decision  The Globe and Mail

    Continue Reading

  • BPI and The Clearing House Association Urge Treasury to Modernize AML Rules for the Digital Age

    BPI and The Clearing House Association Urge Treasury to Modernize AML Rules for the Digital Age

    Washington, D.C. – The Bank Policy Institute and The Clearing House Association today issued recommendations to the U.S. Department of the Treasury as part of Treasury’s request for information to modernize the Bank Secrecy Act. The recommendations aim to align Bank Secrecy Act requirements with the growth of digital asset technologies and the adoption of emerging technologies that enhance banks’ ability to combat illicit finance.

    “A financial crime is a financial crime, whether it happens in a bank branch or on the blockchain,” the associations stated after filing the letter. “Banks are in an arms race with increasingly sophisticated global criminals, and the laws designed to prevent financial crime must match the latest technologies and tactics. These recommendations position the AML framework to evolve alongside innovation, not behind it.”

    Why This Matters: America’s anti-money laundering laws must be flexible enough to adapt to new technologies and respond to more sophisticated illicit finance tactics. Technology contributes to a safer financial system; however, rules must apply consistently to the riskiness of an activity, not the technology or the type of company engaged in those activities.

    Our Recommendations:

    • Apply anti-money laundering rules consistently. Apply “same activity, same risk, same rules” across traditional banks and digital asset firms and clearly define when entities must comply with Know Your Customer obligations.
    • Encourage banks to adopt technologies that help combat illicit finance. Allow financial institutions to experiment and adopt proven new technologies, such as AI, machine learning and digital ID, through clear and consistent guidance and coordination among federal banking agencies. The banking regulators should not require banks to maintain legacy systems without clear and reasonable phase-out periods.
    • Rescind the existing model risk management guidance, which stifles innovation. Financial crimes monitoring platforms should not necessarily be treated as “models” subject to the same standards as other models, such as those created for capital and liquidity requirements.
    • Improve information sharing. Remove existing restrictions or create a safe harbor so banks can share illicit finance intelligence with law enforcement, national security experts and other affected industries.
    • Bring DeFi into the regulatory framework. Clarify when decentralized finance actors must adhere to digital asset service provider rules.

     To access a copy of the letter, please click here.

    ###

    About Bank Policy Institute.

    The Bank Policy Institute is a nonpartisan public policy, research and advocacy group that represents universal banks, regional banks and the major foreign banks doing business in the United States. The Institute produces academic research and analysis on regulatory and monetary policy topics, analyzes and comments on proposed regulations, and represents the financial services industry with respect to cybersecurity, fraud, and other information security issues.

    About The Clearing House Association.

    The Clearing House Association L.L.C., the country’s oldest banking trade association, is a nonpartisan organization that provides informed advocacy and thought leadership on critical payments-related issues. Its sister company, The Clearing House Payments Company L.L.C., owns and operates core payments system infrastructure in the U.S., clearing and settling more than $2 trillion each day.


    Continue Reading

  • Journal of Medical Internet Research

    Journal of Medical Internet Research

    Heart disease remains the leading cause of death for women in the United States []. Over 60 million women in the United States are living with heart disease []. Despite public campaigns, such as “Go Red for Women” by the American Heart Association [], awareness of heart disease as the leading cause of death among women has declined from 65% in 2009 to 44% in 2019 []. The most significant declines have been observed among Hispanic women, Black women, and younger women []. Given this troubling trend, there is an urgent need for alternative and scalable approaches to increase knowledge and awareness of heart disease in women.

    An artificial intelligence (AI) chatbot could be a promising approach to improving women’s awareness of heart disease. AI chatbots are built on natural language processing, natural language understanding, and machine learning. Several systematic reviews have investigated the usability and potential efficacy of AI chatbots in managing patients with various health conditions. Overall, AI chatbot–based interventions have shown the potential to improve mental health, such as depressive and anxiety symptoms; promote healthy diets; and enhance cancer screenings [-]. However, the fast-growing capabilities of AI chatbots raise questions about how they compare with human cognitive and emotional intelligence, yet only a few randomized controlled trials (RCTs) have directly compared the efficacy of AI chatbots with that of human agents. For example, studies have reported that AI chatbots offer counseling to patients with breast cancer that is comparable in efficacy to that delivered by health professionals [-]. To the best of our knowledge, no clinical trial has examined whether an AI chatbot is effective in increasing women’s heart attack awareness and knowledge. Further empirical investigation is needed to more comprehensively evaluate the efficacy of AI chatbots compared with human agents.

    Our research team initiated an AI chatbot development project aimed at increasing women’s knowledge and awareness of heart attack. As a first step, we collected a conversational dataset in which a research interventionist texted each participant with educational content on heart health (Human dataset) over 2 days. We subsequently developed and tested a fully automated SMS text messaging–based AI chatbot system named HeartBot, available 24/7, designed to achieve similar objectives and collected a conversational dataset between HeartBot and participants. The detailed study design, including HeartBot’s development mechanism and algorithmic structure, is published elsewhere []. This project presents a valuable opportunity for a comparative secondary analysis, and this paper focuses specifically on examining the outcomes of the 2 studies.

    The aim of this secondary data analysis is to evaluate and explore the potential efficacy of the 2 heart attack education interventions (SMS text messaging intervention delivered by a human research interventionist vs an AI chatbot [hereafter HeartBot]) in community-dwelling women without a history of heart disease. The primary outcome is participants’ knowledge and awareness of symptoms and response to a heart attack. In addition, we examined differences in participants’ evaluations of user experience and conversational quality across the 2 formats by assessing message effectiveness, message humanness, naturalness, coherence, and conversational metrics. Our study is among the first to provide a detailed understanding and multidimensional comparison of human-delivered and automated AI chatbot interventions in the context of heart attack education. These findings contribute new insights into the relative strengths of human- and AI-driven health communication, offering practical guidance for designing more effective education and behavior change programs.

    Study Design and Sample

    This was a secondary analysis of 2 datasets collected from the AI Chatbot Development Project conducted from September 2022 to January 2024 []. The aim of the AI Chatbot Development Project is to conduct a series of studies to develop a fully automated AI chatbot that increases knowledge and awareness of heart attack among women in the United States. After convening a multidisciplinary team, we developed a knowledge bank from clinical guidelines, published papers, and the American Heart Association’s “Go Red for Women” materials [] to inform the content of the conversation. We then conducted a Wizard of Oz experiment with the Human dataset cohort, in which participants interacted with a system they believed to be autonomous but that was operated by a research interventionist [], to test the content and aid the development of a text-based HeartBot with natural language capabilities. A master’s-prepared, experienced cardiovascular nurse served as the research interventionist and interacted with the participants through SMS text messaging (phase 1: Human dataset).

    After the first study (phase 1), we developed a fully automated AI chatbot, the HeartBot, to deliver the intervention through SMS text messaging (phase 2: HeartBot dataset). The detailed design of the project, including the protocol, participant eligibility criteria, and description of the HeartBot platform, was published elsewhere [].

    The eligibility criteria for both studies were women (1) aged 25 years or older, (2) living in the United States, (3) with access to the internet to complete the online survey and a cell phone with SMS text messaging capabilities, (4) with no history of cognitive impairment, heart disease, or stroke, and (5) who were not health care professionals or students. The eligibility criteria were consistent across the 2 studies. Participants in both studies were recruited mainly from Facebook (Meta) and Instagram (Meta), from September 2022 to January 2023 and from October 2023 to January 2024, respectively.

    Procedure and Interventions

    For the Human dataset (phase 1), participants who were interested in the study were recruited online and underwent screening to confirm eligibility. Eligible participants provided written informed consent prior to enrollment and completed a baseline survey online. Participants then engaged in 2 online conversation sessions with a research interventionist, held on 2 separate days within a week, with each session covering educational content related to heart attack symptoms and response. Table 1 presents the content of heart attack topics used in both studies. After the text conversations, participants completed a post online survey measuring knowledge and awareness of symptoms and response to a heart attack, message effectiveness, message humanness, conversation naturalness and coherence, and perception of chatbot identity. Participants were provided with a $40 Amazon e-gift card upon completion of all study procedures.

    Table 1. Content of heart attack topics in the artificial intelligence (AI) Chatbot Development Project.

    Phase 1: Human dataset

    Session 1
    • Greetings
    • What is a heart attack
    • Symptoms of heart attacks
    • Leading cause of death for women in the United States
    • Gender factors of heart attacks
    • How angina happens
    • Risk factors for heart disease
    • Female-specific risk factors for heart disease
    • Racial risk factors of heart disease

    Session 2
    • First action
    • Importance of calling 911
    • Waiting duration
    • Tests to diagnose a heart attack
    • Medicines for heart attack
    • Operational procedures for treating heart attack
    • Prevention of heart attacks
    • End of the conversation

    Phase 2: HeartBot dataset (single session)
    • Greetings
    • Participants’ name retrieval
    • Knowledge on heart attacks
    • Symptoms of heart attacks
    • Leading cause of death for women in the United States
    • Gender factors of heart attacks
    • First action
    • Importance of calling 911
    • Waiting duration
    • Treatment of heart attacks
    • Action during waiting for 911
    • Risk factors for heart disease
    • Female-specific risk factors for heart disease
    • Racial risk factors for heart disease
    • Multiple-choice quiz questions
    • Further questions to ask
    • End of the conversation

    We conducted a follow-up phase, developing and evaluating the text-based AI chatbot called HeartBot. A comprehensive description of the HeartBot was published previously []. In short, HeartBot was designed as a rule-guided, SMS-based conversational agent that delivers pre-authored educational messages in a structured format. We implemented it on the Google Dialogflow CX platform and linked it to Twilio [] for text messaging conversation based on the intents and entities paradigm []. HeartBot identified the general intent of each incoming message and responded with an appropriate, scripted reply. Although HeartBot can recognize a range of user inputs, its responses are intentionally constrained to maintain accuracy and consistency in delivering heart disease education. HeartBot engaged in 1 conversation session with the participants. We condensed the conversational messaging to a single session to reduce the chance of participant attrition and to ensure participants could receive all educational information within 1 interaction. In contrast to the first phase of the project (Human dataset), 3 topics (how angina happens, medicines for heart attack, and operational procedures for treating heart attack) were dropped in the second phase (HeartBot dataset), and 2 quiz questions were included at the end of the conversation to assess participants’ retention of key knowledge outcomes. Participants then completed the post online survey and received a $20 Amazon e-gift card. Both studies used the same questionnaires for the baseline and post online surveys to measure knowledge and awareness of symptoms and response to a heart attack, hosted on a secure online tool called Research Electronic Data Capture [].
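    The intents-and-entities pattern described above can be sketched in a few lines. This is a hypothetical illustration only: the keyword patterns, replies, and names such as `INTENT_REPLIES` are invented, not taken from HeartBot's actual Dialogflow CX configuration.

    ```python
    import re

    # Hypothetical rule-guided intent matcher: each intent pairs a trigger
    # pattern with one pre-authored, scripted reply (all content invented).
    INTENT_REPLIES = {
        "symptoms": (
            re.compile(r"\b(symptom|sign|chest pain)", re.I),
            "Common heart attack symptoms in women include chest pressure, "
            "shortness of breath, nausea, and unusual fatigue.",
        ),
        "call_911": (
            re.compile(r"\b(911|ambulance|emergency)", re.I),
            "If you think you are having a heart attack, call 911 right away.",
        ),
    }
    FALLBACK = "I'm not sure I understood. Could you tell me more?"

    def reply(message: str) -> str:
        """Return the scripted reply for the first matching intent."""
        for pattern, scripted in INTENT_REPLIES.values():
            if pattern.search(message):
                return scripted
        return FALLBACK
    ```

    Constraining every response to a scripted reply, as HeartBot does, trades conversational flexibility for accuracy and consistency of the educational content.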

    Ethical Considerations

    The first and second studies (phases 1 and 2) were conducted in accordance with the ethical standards outlined in the Declaration of Helsinki. Institutional Review Board approvals were obtained from the University of California, Los Angeles (approval number: 23-000878), for the first study and from the University of California, San Francisco (approval number: 23-29793), for the second study. For both studies, all participants provided written informed consent prior to study enrollment. Participation was voluntary, and participants were informed that they could withdraw at any time without penalty. All collected data were deidentified prior to analysis, and no personally identifiable information was retained. Data were stored on secure, password-protected servers accessible only to the research team. As compensation, participants in the first and second studies who completed all study requirements received a $40 e-gift card and a $20 e-gift card, respectively.

    Measures

    Primary Outcomes: Knowledge and Awareness of Symptoms and Response to Heart Attack

    To assess the potential efficacy of a conversational intervention to increase the knowledge and awareness of symptoms and response to a heart attack, we adapted a previously validated scale [,]. These items have also been used in prior research involving women from diverse backgrounds to ensure broad applicability [-]. Participants were asked the following 4 questions on a scale of 1-4, where 1 indicated “not sure” and 4 indicated “sure”: (1) “How sure are you that you could recognize the signs and symptoms of a heart attack in yourself?,” (2) “How sure are you that you could tell the difference between the signs or symptoms of a heart attack and other medical problems?,” (3) “How sure are you that you could call an ambulance or dial 911 if you thought you were having a heart attack?,” and (4) “How sure are you that you could get to an emergency room within 60 minutes after onset of your symptoms of a heart attack?” The same questions were asked before and after the interaction with the research interventionist and HeartBot. A higher score indicates better knowledge and awareness of symptoms and response to a heart attack.

    Other Measures 
    Overview

    We used the AI Chatbot Behavior Change Model [] to assess user experience and conversational quality as key dimensions of effective chatbot communication. Message effectiveness and perceived message humanness were assessed to capture how participants interpreted and responded to HeartBot’s messages. These key measures were selected to better understand how participants evaluated the interaction and how specific communication features may have influenced their experience.

    User Experience: Message Effectiveness

    Based on the AI Chatbot Behavior Change Model [], message effectiveness is conceptualized as an aspect of the broader category of “user experiences,” which measures the level of usefulness and convenience in chatbot conversations. Participants completed a post-survey measure known as the Effectiveness Scale, a semantic-differential scale originally developed based on prior research [,]. The scale consists of 5 items, including bipolar adjective pairs (effective vs ineffective, helpful vs unhelpful, beneficial vs not beneficial, adequate vs not adequate, and supportive vs not supportive). Each item was rated on a 7-point Likert scale, 1 being the negative pole (eg, “ineffective”) and 7 being the positive pole (eg, “effective”). The scores for each item were summed and averaged to create a mean composite score, with higher scores indicating greater perceived effectiveness of the messages.
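    As a minimal illustration, the composite scoring described above (sum the 5 item ratings and average them) can be written as follows; the function name is ours, not from the study.

    ```python
    def mean_composite(ratings, n_items=5, lo=1, hi=7):
        """Mean composite of semantic-differential item ratings.
        For the Effectiveness Scale: 5 items, each rated 1 (negative pole)
        to 7 (positive pole); higher means greater perceived effectiveness."""
        if len(ratings) != n_items or not all(lo <= r <= hi for r in ratings):
            raise ValueError(f"expected {n_items} ratings in [{lo}, {hi}]")
        return sum(ratings) / len(ratings)
    ```

    The same sum-and-average scoring applies to the 5-item Anthropomorphism Scale described below, only with the interpretation of high scores reversed.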

    Conversational Quality
    Message Humanness

    The humanness of chatbot messages in the AI Chatbot Behavior Change Model [] is conceptualized as a part of “conversational quality,” a construct that reflects the perceived human-likeness and naturalness of chatbot interactions. To evaluate participants’ impressions of the messages sent by the research interventionist and HeartBot, participants completed the Anthropomorphism Scale [] during the post-survey. The scale includes 5 pairs of bipolar adjectives (natural vs fake, humanlike vs machine-like, conscious vs unconscious, lifelike vs artificial, and adaptive vs rigid). Participants rated each pair on a 7-point Likert scale, where 1 indicated the first adjective in the pair (eg, “natural”) and 7 indicated the second adjective (eg, “fake”). The scores for each item were summed and averaged to create a mean composite score. Higher scores indicate a greater perception of chatbot messages as mechanical or artificial.

    Conversational Naturalness and Coherence

    Conversational quality can be assessed by participants’ subjective evaluation of the conversation’s naturalness and coherence []. To evaluate conversational quality, participants were asked to answer the following question in the post-survey: “Overall, how would you rate the conversations with your texting partner?” The response options were as follows: (1) Very unnatural, (2) Unnatural, (3) Neutral, (4) Natural, and (5) Very natural. Participants were also asked to answer the following question in the post-survey: “Overall, how would you rate the messages you received?” The response options were as follows: (1) Very incoherent, (2) Incoherent, (3) Neutral, (4) Coherent, and (5) Very coherent.

    Conversational Metrics

    Objective content and linguistic analyses of conversations can be used to evaluate specific dimensions of conversations, such as the length of conversations and the amount of information exchanged []. To measure these dimensions, the Linguistic Inquiry and Word Count (LIWC-22; Pennebaker Conglomerates) software [] was used to process and quantify the total word count of a conversation between the participant and the research interventionist or HeartBot. The number of words used by each agent (participant, research interventionist, and HeartBot) was separately measured to process individual contributions within each conversation.
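    A per-agent word count of this kind can be approximated with a plain whitespace tokenizer. This sketch is a rough stand-in for the LIWC-22 word-count output, not the software itself; the speaker labels are illustrative.

    ```python
    from collections import Counter

    def words_per_agent(transcript):
        """Total whitespace-delimited word count per speaker.
        `transcript` is a list of (speaker, message) pairs; speaker labels
        (eg, "participant", "heartbot") are illustrative, not from the study."""
        counts = Counter()
        for speaker, message in transcript:
            counts[speaker] += len(message.split())
        return counts
    ```

    Tallying counts separately per speaker is what lets the analysis compare each agent's individual contribution within a conversation.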

    Perception of Chatbot Identity (Human vs AI Chatbot)

    At the end of the intervention, we asked the question, “Do you think you texted a human or an artificially intelligent chatbot during your conversation?” Participants selected 1 of 2 dichotomous response options: (1) human or (2) artificial agent.

    Sociodemographic, Past Chatbot Use, and Cardiovascular Risks

    Self-reported sociodemographic information (ie, age, race or ethnicity, education, household income, marital status, and employment status) and cardiovascular risks (ie, smoking history; prescribed blood pressure, cholesterol, and diabetes medication intake; and family history of heart disease) were collected in the baseline online survey. The cardiovascular risk factor variables were selected based on the latest clinical guidelines []. In addition, the question “Have you used any chatbot in the past 30 days?” was used to assess past AI chatbot use experience. Participants were asked to select either Yes or No.

    Statistical Analysis

    We conducted a descriptive analysis to calculate counts and percentages, or means and SDs, for sociodemographic characteristics, past chatbot use, and cardiovascular risks. To compare the 2 datasets, we performed independent t tests to assess mean differences for continuous variables and used χ2 tests to examine group distributions. We first conducted Wilcoxon signed-rank tests to evaluate statistically significant changes in heart attack knowledge and awareness outcome responses (not sure, somewhat not sure, somewhat sure, and sure) between baseline and post-interaction, within the Human dataset (phase 1) and the HeartBot dataset (phase 2). Then, to adjust for potential confounders, we fit a series of ordinal mixed-effects models using the R (version 4.1.0; The R Foundation for Statistical Computing) [] package ordinal v2022.11.16 [], with each of the 4 knowledge questions as outcomes. We first fit these models stratified by the Human dataset (phase 1) and the HeartBot dataset (phase 2), adjusting for fixed effects of post (vs pre; the primary coefficient of interest for these models, indicating whether each of the 2 interventions was successful), White (vs non-White), age, interaction group type, education, number of words used by the participants, and mean text message effectiveness and humanness scores, and a random effect for individual. We then fit a model on the entire dataset, additionally adjusting for HeartBot (vs Human) and the interaction between HeartBot and the post timepoint (ie, whether HeartBot is more effective than the human; the primary coefficient of interest for this model). As a sensitivity analysis, we attempted to fit a mixed-effects multinomial logistic regression model in Stata (version 16.1; StataCorp LLC) [] via the generalized structural equations command, but the models would not converge (likely owing to the small sample size and the increased number of parameters to estimate compared with an ordinal logistic regression model). A 2-sided test was used with significance set at P<.05.
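    The pre-post comparison above can be sketched with SciPy's Wilcoxon signed-rank test (the authors conducted their analyses in R; the paired 1-4 ratings below are invented for illustration, not study data).

    ```python
    from scipy.stats import wilcoxon

    # Paired 1-4 "how sure are you" ratings for one knowledge question,
    # before and after the intervention (illustrative values only).
    pre  = [1, 2, 2, 1, 3, 2, 1, 2, 3, 2]
    post = [3, 4, 3, 3, 4, 4, 3, 3, 4, 4]

    stat, p = wilcoxon(pre, post)  # paired, two-sided by default
    significant = p < .05          # the paper's significance threshold
    ```

    The signed-rank test is appropriate here because the outcome is ordinal and the pre and post ratings come from the same participant, so a paired nonparametric test is preferred over an unpaired t test.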

    Sample Characteristics

    shows screening, enrollment, and follow-up of the study participants. A total of 171 participants in the Human dataset (phase 1) and 92 participants in the HeartBot dataset (phase 2) completed the study. Table 2 presents the baseline sample characteristics for the 2 datasets. The mean (SD) age of participants was 41.06 (12.08) years in phase 1 and 45.85 (11.94) years in phase 2. In the Human dataset (phase 1), participants were primarily Black/African American (n=70, 40.9%), college graduates (n=103, 60.3%), and earning moderate-to-high income (n=68, 39.8%). Participants in the HeartBot dataset (phase 2) were primarily White (n=37, 40.2%), college graduates (n=66, 71.7%), and earning moderate-to-high income (n=39, 42.4%). A majority of participants in the Human dataset (phase 1) reported prior chatbot use (n=96, 56.1%), as did participants in the HeartBot dataset (phase 2; n=53, 57.6%).

    Table 2. Study sample characteristics: sociodemographic, previous chatbot use, cardiovascular risks.
    Characteristic Human dataset (n=171) HeartBot dataset (n=92) P value
    Age (years), mean (SD)/[range] 41.06 (12.08)/[25.0‐76.0] 45.85 (11.94)/[26.0‐70.0] .002
    Race/ethnicity, n (%) .097
    Black/African American (non-Hispanic) 70 (40.9) 22 (23.9)
    Hispanic/Latino 29 (17.0) 19 (25.0)
    Asian 10 (5.8) 6 (6.5)
    White (non-Hispanic) 50 (29.2) 37 (40.2)
    American Indian/Native Hawaiian/more than 1 race/ethnicity 12 (7.0) 8 (8.7)
    Education, n (%) .06
    Completed some college course work, but did not finish or less 68 (39.8) 26 (28.3)
    Completed college/graduate school 103 (60.3) 66 (71.7)
    Household income, n (%) .24
    ≤US $40,000 59 (34.5) 23 (25.0)
    US $40,001-$75,000 44 (25.7) 30 (32.6)
    >US $75,000 68 (39.8) 39 (42.4)
    Marital status, n (%) .63
    Never married 46 (26.9) 21 (22.8)
    Currently married/cohabitating 108 (63.2) 59 (64.1)
    Divorced/widowed 17 (9.9) 12 (13.0)
    Employment status, n (%) .15
    Full-time/part-time 108 (63.2) 56 (60.9)
    Unemployed/homemaker/student 42 (24.5) 17 (18.5)
    Retired/disabled/other 21 (12.3) 19 (20.7)
    Chatbot use (eg, Amazon’s Alexa, Google Assistant, Siri, Facebook Messenger bot etc) in the past 30 days, n (%) .82
    Yes 96 (56.1) 53 (57.6)
    No 75 (43.9) 39 (42.4)
    Cardiovascular risks, n (%)
    Smoked at least one cigarette in the last 30 days .08
    Yes 14 (8.2) 14 (15.2)
    No 157 (91.8) 78 (84.8)
    Blood pressure medication .045
    Yes 71 (41.5) 25 (27.2)
    No/don’t know 100 (58.5) 67 (72.8)
    Cholesterol medication .69
    Yes 62 (36.3) 29 (31.5)
    No/don’t know 109 (63.8) 63 (68.5)
    Diabetes medication .91
    Yes 23 (13.5) 12 (13.0)
    No/don’t know 148 (86.6) 80 (87.0)
    Family history of heart disease/stroke .11
    Yes 38 (22.2) 13 (14.1)
    No/don’t know 133 (77.8) 79 (85.9)

    Changes in Knowledge and Awareness of Heart Disease

    Table 3 presents the results of Wilcoxon signed-rank tests examining pre- to post-changes in the 4 knowledge and awareness of heart disease outcomes. Supplementary Tables S1-S3 (in ) present the full ordinal logistic regression models: Table S1 for the human-delivered conversations, Table S2 for HeartBot conversations, and Table S3 for the combined data. Overall, Wilcoxon signed-rank tests revealed a significant increase in knowledge and awareness of heart disease across all 4 outcome measures following interactions with both the research interventionist and HeartBot (human-delivered conversations: all P<.001; HeartBot conversations: P<.001 for Q1-Q3 and P=.002 for Q4).

    Table 3. Change in participants’ knowledge and awareness of symptoms and response to heart attack between pre- and post-human conversation (n=171) and pre- and post-HeartBot conversation (n=92) for 4 outcome questions.
    Human-delivered conversations (n=171) HeartBot conversations (n=92)
    Pre-human conversation (%) Post-human conversation (%) P value Pre-HeartBot conversation (%) Post-HeartBot conversation (%) P value
    Q1: Recognizing signs and symptoms of a heart attack <.001 <.001
    1: Not sure 17.50 1.20 26.10 3.30
    2: Somewhat unsure 37.40 4.10 34.80 30.40
    3: Somewhat sure 36.30 56.10 35.90 43.50
    4: Sure 8.80 38.60 3.30 22.80
    Q2: Telling the difference between the signs or symptoms of a heart attack and other medical problems <.001 <.001
    1: Not sure 26.90 6.40 30.40 8.70
    2: Somewhat unsure 48.00 20.50 41.30 38.00
    3: Somewhat sure 19.30 50.90 26.10 43.50
    4: Sure 5.80 22.20 2.20 9.80
    Q3: Calling an ambulance or dialing 911 when experiencing a heart attack <.001 <.001
    1: Not sure 19.90 1.80 14.10 3.30
    2: Somewhat unsure 24.60 7.00 21.70 14.10
    3: Somewhat sure 22.80 23.40 34.80 21.70
    4: Sure 32.70 67.80 29.30 60.90
    Q4: Getting to an emergency room within 60 minutes after onset of symptoms of a heart attack <.001 .002
    1: Not sure 18.10 1.80 18.50 6.50
    2: Somewhat unsure 22.80 7.00 18.50 13.00
    3: Somewhat sure 26.30 18.70 31.50 33.70
    4: Sure 32.70 72.50 31.50 46.70

    aWilcoxon matched pairs tests were conducted.

    shows the adjusted odds ratios (AORs) from a series of ordinal logistic regression analyses predicting each knowledge question for the Human dataset (phase 1). In the Human dataset (phase 1), after controlling for age, ethnicity, education, message effectiveness, message humanness, and chatbot use history, the human-delivered conversations improved participants’ knowledge and awareness in recognizing the signs and symptoms of a heart attack (AOR 15.19, 95% CI 8.46‐27.25, P<.001), telling the difference between the signs or symptoms of a heart attack and other medical problems (AOR 9.44, 95% CI 5.60‐15.91, P<.001), calling an ambulance or dialing 911 during a heart attack (AOR 6.87, 95% CI 4.09‐11.55, P<.001), and getting to an emergency room within 60 minutes after symptom onset (AOR 8.68, 95% CI 4.98‐15.15, P<.001). In the HeartBot dataset (phase 2), these effects were attenuated but still substantial (see ; full model in ), for example, for recognizing the signs and symptoms question (AOR 7.18, 95% CI 3.59-14.36, P<.001). A formal interaction test showed a significantly greater improvement in the Human dataset than in the HeartBot dataset for all but the third question (calling an ambulance; P=.09), as shown in Table S3 (in ). We could not adjust for word count because all human-delivered conversations in the Human dataset (phase 1) were longer than any of the HeartBot conversations in the HeartBot dataset (phase 2), so the model would not fit; thus, the intervention effect could not be fully differentiated from word count.

    Table 4. Ordinal logistic regression models comparing post- versus pre-intervention (Human or HeartBot) on the 4 knowledge questions.
    Cohort Term Q1: Recognizing signs and symptoms of a heart attack Q2: Telling the difference between the signs or symptoms of a heart attack and other medical problems Q3: Calling an ambulance or dialing 911 when experiencing heart attack Q4: Getting to an emergency room within 60 minutes after onset of symptoms of a heart attack
    AOR
    (95% CI)
    P value AOR
    (95% CI)
    P value AOR
    (95% CI)
    P value AOR
    (95% CI)
    P value
    Human-delivered conversation Post (vs pre) 15.19 (8.46, 27.25) <.001 9.44 (5.60, 15.91) <.001 6.87 (4.09, 11.55) <.001 8.68 (4.98, 15.15) <.001
    HeartBot conversation Post (vs pre) 7.18 (3.59, 14.36) <.001 5.44 (2.76, 10.74) <.001 5.74 (2.84, 11.60) <.001 2.86 (1.55, 5.28) <.001
    All Post × HeartBot 0.38 (0.19, 0.78) .008 0.40 (0.20, 0.80) .01 0.53 (0.25, 1.10) .09 0.26 (0.12, 0.55) <.001

    aModels are additionally adjusted for White (vs non-White), age, group type, education, user word count, mean text message effectiveness, and humanness of scores (full models in Table S2); Q1, How sure are you that you could recognize the signs and symptoms of a heart attack in yourself? (Select a number from 1: not sure to 4: sure); Q2: How sure are you that you could tell the difference between the signs or symptoms of a heart attack and other medical problems? (Select a number from 1: not sure to 4: sure); Q3: How sure are you that you could call an ambulance or dial 911 if you thought you were having a heart attack? (Select a number from 1: not sure to 4: sure); Q4, How sure are you that you could get to an emergency room within 60 minutes after onset of your symptoms? (Select a number from 1: not sure to 4: sure).

AOR: adjusted odds ratio.

CI: confidence interval.


    Human-Delivered Conversation Versus HeartBot Conversation

Table 5 presents the comparison of conversation quality between the 2 studies. In the Human dataset (phase 1), participants interacted with the research interventionist and completed conversation sessions over the course of 2 days. The mean (SD) and median numbers of words used by the participants and their conversing agent overall were 2322.00 (875.65) and 2097.00 words in the Human dataset (phase 1), and 888.04 (76.04) and 852 words in the HeartBot dataset (phase 2). Participants in the Human dataset (phase 1) rated all conversational qualities, including message effectiveness, message humanness, conversation naturalness, and coherence, significantly higher than those in the HeartBot dataset (phase 2). About 74.3% (127/171) of participants in the Human dataset and 66.3% (61/92) in the HeartBot dataset correctly identified whether they were conversing with a human or HeartBot, respectively.

Table 5. Comparing the evaluation of conversation quality between the Human dataset and the HeartBot dataset. Values are Human dataset (n=171) vs HeartBot dataset (n=92).

User experience (P<.001)
 Score of Message Effectiveness scale, mean (SD): 6.35 (0.85) vs 5.66 (1.23)
Conversation quality, subjective measure (P<.001)
 Score of Message Humanness scale, mean (SD): 5.86 (1.24) vs 5.19 (1.19)
 “Overall, how would you rate the conversations with your texting partner?”, n (%) (P<.001)
  Very unnatural/unnatural: 9 (5.3) vs 5 (5.4)
  Neutral: 19 (11.1) vs 33 (35.9)
  Natural/very natural: 143 (83.6) vs 54 (58.7)
 “Overall, how would you rate the messages you received?”, n (%) (P<.001)
  Very incoherent/incoherent: 1 (0.6) vs 0 (0)
  Neutral: 4 (2.3) vs 23 (25.0)
  Coherent/very coherent: 143 (83.6) vs 69 (75.0)
Conversation quality, objective measure, mean (SD) [range], median (P<.001)
 Number of words used by the participants and research interventionist/HeartBot: 2322.55 (875.65) [1314.0‐8073.0], 2097.0 vs 888.04 (76.4) [778‐1274], 852
 Number of words used by the participants: 298.94 (227.90) [83.0‐1986.0], 231.0 vs 80.57 (60.19) [34‐377], 63
“Do you think you texted a human or artificial intelligent chatbot during your conversation?”, n (%) (P<.001)
 Human: 127 (74.3) vs 31 (33.7)
 Artificial intelligence chatbot: 44 (25.7) vs 61 (66.3)

    Principal Results

We compared the potential efficacy of human-delivered conversations versus HeartBot conversations in increasing participants’ knowledge and awareness of the symptoms of and appropriate responses to a heart attack in the United States, while controlling for potential confounding factors. Since this study was not an RCT, the efficacy of the HeartBot intervention, compared to the SMS text messaging intervention delivered by a research interventionist, cannot be established, and caution is needed when interpreting the findings. The findings suggest that interacting with both the research interventionist and HeartBot was associated with increased knowledge and awareness of a heart attack among participants (ie, recognizing the signs and symptoms of a heart attack, telling the difference between the signs or symptoms of a heart attack and other medical problems, calling an ambulance or dialing 911 when experiencing a heart attack, and getting to an emergency room within 60 minutes after onset of symptoms of a heart attack). However, human-delivered conversations appeared to have a stronger association than HeartBot conversations on all questions except the one regarding calling an ambulance (P=.09). This may be because calling emergency services is a well-known emergency response behavior that may not require adaptive or relational communication to be effectively understood. Yet, this does not suggest that HeartBot was ineffective: interacting with HeartBot still led to significant improvements in knowledge and awareness of a heart attack. Given its automated nature and lower cost, we view HeartBot as a promising and useful alternative, particularly in contexts where human resources are limited.

Several potential explanations can be considered, given the fundamental structural differences in the content and duration of the conversation sessions between the 2 studies. First, human-delivered conversations involved a more extended engagement process, comprising 2 separate sessions over a week, allowing participants to engage in a more prolonged and reflective learning process. In contrast, the HeartBot conversation was limited to a single session, which may have constrained the depth of discussion. Second, participants in the Human dataset (phase 1) produced significantly more words during the conversation, with a mean (SD) word count of 298.94 (227.90), compared to 80.57 (60.19) in the HeartBot dataset (phase 2). The greater verbosity in the Human dataset (phase 1) may have contributed to deeper discussions and enhanced knowledge reinforcement, potentially explaining the observed increase in efficacy. However, we were not able to statistically account for word count, as models adjusting for this covariate would not converge, likely owing to the very different distributions of word counts, with little overlap, in the 2 groups (Human: mean [SD] 2322.00 [875.65] words; HeartBot: mean [SD] 888.04 [76.04] words). Finally, human-delivered conversations were facilitated by a research interventionist, a master’s-prepared cardiovascular nurse, allowing for greater flexibility in language use, response adaptation, and addressing participant queries in a more personalized manner. In contrast, HeartBot was inherently limited by its conversational algorithm, which followed a structured script and was less personalized and flexible, limiting its ability to adjust dynamically to participants’ specific concerns.

HeartBot, a fully automated AI chatbot, was significantly associated with increases in participants’ knowledge and awareness of the symptoms of and responses to a heart attack, and it demonstrates significant potential as an innovative AI intervention. AI chatbots offer a scalable, 24/7 accessible, and personalized approach to health education for broader populations. Their adaptive algorithms allow for dynamic personalization, tailoring responses to individual user queries and comprehension levels, which may enhance engagement and knowledge retention beyond one-size-fits-all campaigns. Additionally, chatbot interactions require active engagement as participants read, process, and respond to information, reinforcing learning through interaction rather than passive intake []. The anonymized nature of chatbot conversations can also reduce psychological barriers, encouraging users to seek information more openly, especially on sensitive health topics []. Finally, HeartBot integrates structured quiz components, encouraging reinforcement of learning through immediate self-assessment and cognitive recall.

    While these advantages highlight AI chatbots’ potential, findings from this study suggest room for improvement to further enhance their efficacy. First, increasing the number of interaction sessions—rather than a single 1-time interaction—may allow for more sustained engagement and deeper knowledge retention, aligning more closely with the multi-session format of human-delivered conversations. Second, further iterations could leverage machine learning algorithms to continuously refine conversation models and improve HeartBot’s flexibility in answering participants’ queries, which could make interaction with HeartBot feel more responsive and personalized. Lastly, to fully evaluate HeartBot’s long-term efficacy and potential parity with human-delivered conversations, a rigorously designed RCT would be instrumental. While this study provides promising preliminary insights, causal relationships cannot be established. Future research should prioritize RCTs to confirm these findings and support evidence-based deployment of such interventions.

    Interestingly, user experience and conversational quality were perceived to be high across both studies, as participants generally rated the message as effective, humanlike, coherent, and natural. However, these perceptions were significantly higher in the Human dataset (phase 1). This may be due to participants subconsciously detecting cues that felt more human. Although the identity of the conversing partner was not disclosed, a substantial portion of participants misperceived whether they were interacting with a human or an AI chatbot. While the perception of partner identity was not a primary focus of this study, these misattributions nonetheless provide insight into how users process conversational agency. They highlight the inherent ambiguity in conversational agency and may reflect the challenge of replicating human communication subtleties in algorithmic interactions. While HeartBot demonstrated considerable communicative competence, it encountered limitations in fully imitating the nuanced relational aspects of human dialog. Drawing from the Computers Are Social Actors paradigm [], participants apply social interaction schemas to technological interfaces yet experience these interactions with less emotional depth and relational intimacy. Key communication studies have consistently highlighted the critical role of relational cues in establishing trust and engagement and promoting human-chatbot relationships. For example, research has shown that conversational agents can build positive relationships in health and well-being settings through verbal behaviors like humor [], social dialog [], and empathy []. Although HeartBot successfully delivered equivalent factual content, it inherently struggled to reproduce the affective dimensions that characterize human-to-human communication. 
These findings suggest that while AI chatbots provide a promising technological intervention, they must continue to evolve in their ability to simulate the nuanced relational components of effective human health communication.

    Limitations and Suggestions for Future Studies

Several limitations of this study need to be acknowledged. Without a true RCT, causal inferences regarding the 2 interventions cannot be drawn, and the findings provide only exploratory comparative insights, for the following reasons. The 2 datasets were not collected under a single randomized protocol, and participants were not randomly assigned, making the study vulnerable to selection bias and unmeasured confounders. In particular, human-delivered conversations were much longer (~2322 words) than HeartBot conversations (~888 words), and statistical adjustment was not possible because of the nonoverlapping distributions. This is a major confounder that prevents clear attribution of effects to delivery mode versus conversation length; in other words, the differences in exposure length make it impossible to disentangle the “agent effect” (human vs HeartBot) from the “dose effect” (amount of content). The interventions also differed not only in delivery agent but also in structure: (1) the human-delivered arm included 2 sessions, while the chatbot arm was a single session; (2) some topics were omitted in the HeartBot group; and (3) incentives differed ($40 vs $20). An RCT addressing these limitations is warranted to validate this study’s findings.

    Another limitation is related to the study measures and the timing of the measures. The outcome assessment relied on subjective Likert scale responses, which may be influenced by recall or social desirability bias. Furthermore, the outcomes were assessed between 4 and 6 weeks after the intervention. Thus, the study only captures short-term awareness and knowledge gains rather than sustained retention or behavior change. Future studies need to include objective or performance-based measures (eg, quizzes, simulated scenarios) to complement self-reports, longitudinal follow-up (ie, 2-24 mo) to assess retention, and whether increased awareness and knowledge translate into real-world emergency response behaviors. Additionally, the multinomial mixed effects logistic regression model would not converge. This is a known problem with these models due to a combination of small cell counts in specific outcome categories and the high-dimensional nature of the random effects in the model. However, our more parsimonious ordinal mixed effects logistic regression model converged and appeared to fit the data well.

The last limitation relates to the generalizability of the findings. The recruitment strategy relied on social media (Facebook or Instagram) and self-selected women who were comfortable with technology. This may skew the sample toward digitally literate participants and limit generalizability to more diverse or higher-risk groups. Thus, future studies should include purposive recruitment strategies targeting underrepresented groups (eg, older women, nondigital populations, and those with lower health literacy).

    Conclusions

    The study’s findings provide new insights into the fully automated AI HeartBot, compared to the human-driven text message conversation, and suggest that it has potential in improving women’s knowledge and awareness of heart attack symptoms and appropriate response behaviors. Nevertheless, the current evidence remains preliminary. To rigorously establish the efficacy of the HeartBot intervention, future research should employ RCT designs with the capacity to reach broad and diverse populations.

    The project was supported by the Noyce Foundation and the UCSF School of Nursing Emile Hansen Gaine Fund. The project sponsors had no role in the study design, collection, analysis, or interpretation of data, writing the report, or deciding to submit the report for publication.

    The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.

    None declared.

    Edited by Javad Sarvestan; submitted 26.Feb.2025; peer-reviewed by Chidinma Chikwe, Neeladri Misra, Reenu Singh; final revised version received 22.Sep.2025; accepted 22.Sep.2025; published 17.Oct.2025.

    © Diane Dagyong Kim, Jingwen Zhang, Kenji Sagae, Holli A DeVon, Thomas J Hoffmann, Lauren Rountree, Yoshimi Fukuoka. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 17.Oct.2025.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


  • Abemaciclib Plus Endocrine Therapy Provides OS Benefit in HR+/HER2-Negative Breast Cancer


    Adjuvant abemaciclib (Verzenio) plus endocrine therapy significantly reduced the risk of death by 15.8% compared with endocrine therapy alone in patients with hormone receptor–positive, HER2-negative, high-risk early breast cancer (HR, 0.842; 95% CI, 0.722-0.981; P = .0273), according to findings from the primary overall survival (OS) analysis of the phase 3 monarchE trial (NCT03155997), which were presented at the 2025 ESMO Congress.1

    At a median follow-up of 76 months (6.3 years) and a data cutoff date of July 15, 2025, all patients had stopped receiving abemaciclib for at least 4 years. There were 301 OS events in the abemaciclib arm vs 360 in the endocrine therapy–alone arm. The OS rates at 60, 72, and 84 months were 91.2%, 89.2%, and 86.8% in the abemaciclib arm vs 90.2% (Δ = 1.0), 87.9% (Δ = 1.3), and 85.0% (Δ = 1.8) in the control arm.

    “Abemaciclib represents the first CDK4/6 inhibitor to achieve a statistically significant improvement in OS for these high-risk, node-positive patients,” Stephen Johnston, MD, PhD, stated in the presentation.

    Johnston is head of the Breast Unit, a professor of breast cancer medicine, and a consultant medical oncologist at The Royal Marsden NHS Foundation Trust and The Institute of Cancer Research in London, United Kingdom.

    Primary OS Analysis of the Phase 3 monarchE Trial: Key Takeaways

    • The use of adjuvant abemaciclib plus endocrine therapy significantly reduced the risk of death by 15.8% (HR, 0.842; 95% CI, 0.722-0.981; P = .0273) compared with endocrine therapy alone in patients with hormone receptor–positive, HER2-negative, high-risk early breast cancer.
    • The addition of abemaciclib to endocrine therapy reduced the risk of IDFS events by 26.6% (HR, 0.734; 95% CI, 0.657-0.820; nominal P < .0001) compared with endocrine therapy alone, with IDFS rates at 84 months standing at 77.4% in the abemaciclib arm vs 70.9% in the control arm.
• In the primary OS analysis, the control arm had higher percentages of patients who died due to breast cancer (10.5%, n = 296) or were alive with metastatic disease (9.4%, n = 266) compared with the abemaciclib arm (7.9%, n = 221 and 6.4%, n = 180, respectively).

    What Was the Impetus for Conducting the monarchE Trial?

    “Improving OS and cure rates is the goal of adjuvant therapy in early breast cancer, but it’s difficult to prove OS, and we often approve treatments on benefits in reducing risk of recurrence,” Johnston explained.

    He summarized key findings from the past several years of investigating endocrine therapy in patients with hormone receptor–positive early breast cancer, including reductions in the risk of death conferred by the use of tamoxifen vs no treatment, the use of an aromatase inhibitor (AI) vs tamoxifen, and the use of extended AI treatment vs 5 years of endocrine therapy. He noted that unlike in prior trials, previously reported data from monarchE indicated that an OS benefit could emerge from the addition of abemaciclib to endocrine therapy vs endocrine therapy alone.

    What Was the Design of the monarchE Trial?

monarchE enrolled patients with hormone receptor–positive, HER2-negative, node-positive, high-risk early breast cancer. Cohort 1 included patients with high-risk disease based on clinical and pathological factors, including at least 4 positive axillary lymph nodes or 1 to 3 positive axillary lymph nodes and grade 3 disease and/or a tumor size of at least 5 cm. Cohort 2 included patients with high-risk disease based on Ki-67 score, defined as 1 to 3 positive axillary lymph nodes plus a Ki-67 score of at least 20%, as well as grade 3 or lower disease and a tumor size of less than 5 cm.

    Patients in cohort 1 (n = 5637) were randomly assigned 1:1 to receive abemaciclib at 150 mg twice daily plus endocrine therapy or endocrine therapy alone for the on-treatment study period of 2 years. During the follow-up period, patients received endocrine therapy for 3 to 8 years as clinically indicated. Patients were stratified by prior chemotherapy, menopausal status, and region.

    The primary end point was invasive disease–free survival (IDFS). Secondary end points included IDFS in the high Ki-67 populations, distant relapse–free survival (DRFS), OS, safety, pharmacokinetics, and patient-reported outcomes.

    What Data Have Been Previously Reported From monarchE?

    The initial readout of the monarchE trial showed that patients who received 2 years of adjuvant abemaciclib plus endocrine therapy had significant improvements in IDFS compared with those who received endocrine therapy alone (HR, 0.75; 95% CI, 0.60-0.93; P = .01).2 Furthermore, at the 5-year landmark analysis of the trial, at a median follow-up of 54 months (IQR, 49-59), the IDFS (HR, 0.68; 95% CI, 0.599-0.772; nominal P < .001) and DRFS (HR, 0.675; 95% CI, 0.588-0.774; nominal P < .001) benefits with abemaciclib were sustained, and an OS trend favoring the abemaciclib arm had emerged, although it had not yet reached statistical significance (HR, 0.903; 95% CI, 0.749-1.088; P = .284).3 

    What Additional Data Were Seen in the Primary OS Analysis of monarchE?

    In the primary OS analysis, the OS benefit with the addition of abemaciclib was consistent across prespecified patient subgroups.1 However, Johnston emphasized that the point estimates for the individual subgroups should be interpreted with caution because the trial was not powered or controlled to evaluate treatment effects in individual subgroups.

    There were approximately 30% fewer patients living with metastatic disease in the abemaciclib arm vs the control arm. In the abemaciclib arm, 2.8% of patients (n = 80) had died from causes unrelated to breast cancer, 7.9% of patients (n = 221) had died due to breast cancer, and 6.4% of patients (n = 180) were alive with metastatic disease. In the control arm, these rates were 2.3% (n = 64), 10.5% (n = 296), and 9.4% (n = 266), respectively.

    The addition of abemaciclib to endocrine therapy reduced the risk of IDFS events by 26.6% compared with endocrine therapy alone (HR, 0.734; 95% CI, 0.657-0.820; nominal P < .0001). There were 547 and 722 IDFS events in these respective arms.

    During the follow-up period, the respective IDFS rates in the abemaciclib and control arms were as follows:

    • 24 months: 92.7% vs 89.9%
    • 36 months: 89.2% vs 84.4%
    • 48 months: 85.9% vs 80.0%
    • 60 months: 83.1% vs 76.5%
    • 72 months: 80.0% vs 74.1%
    • 84 months: 77.4% vs 70.9%

    A consistent IDFS benefit was observed across all prespecified subgroups.

    Johnston noted that most IDFS events observed in the trial were distant metastatic disease, and that the addition of abemaciclib reduced the number of patients with metastases at common sites. In the abemaciclib arm (n = 2808), 18.0% of patients in the intent-to-treat (ITT) population had a first recurrence, consisting of distant recurrence (13.6%), local/regional recurrence (2.5%), second primary neoplasm (1.7%), and contralateral breast cancer (0.5%). The sites of initial distant recurrence in this arm included bone (5.9%), liver (3.5%), lung (2.5%), brain/central nervous system (CNS; 1.1%), lymph node (1.0%), and pleura (0.3%).

    In the control arm (n = 2829), 24.5% of patients in the ITT population had a first recurrence, consisting of distant recurrence (18.5%), local/regional recurrence (3.9%), second primary neoplasm (1.8%), and contralateral breast cancer (0.8%). The sites of initial distant recurrence in this arm included bone (9.4%), liver (4.7%), lung (2.7%), brain/CNS (1.1%), lymph node (1.6%), and pleura (1.0%).

    Overall, investigators observed low rates of second primary neoplasms across both arms.

    A sustained DRFS benefit with the addition of abemaciclib was also observed, reducing the risk of DRFS events by 25.4% compared with endocrine therapy alone (HR, 0.746; 95% CI, 0.662-0.840; nominal P < .0001). There were 476 DRFS events in the abemaciclib arm vs 621 DRFS events in the control arm.

    During the follow-up period, the respective DRFS rates in the abemaciclib and control arms were as follows:

    • 24 months: 94.0% vs 91.5% (Δ = 2.5)
    • 36 months: 90.9% vs 86.6% (Δ = 4.3)
    • 48 months: 88.2% vs 83.1% (Δ = 5.1)
    • 60 months: 85.4% vs 79.5% (Δ = 5.9)
    • 72 months: 82.6% vs 77.6% (Δ = 5.0)
    • 84 months: 80.0% vs 74.9% (Δ = 5.1)

    The investigators also reported a consistent DRFS benefit with abemaciclib across prespecified subgroups.

Among patients in the abemaciclib arm with distant recurrence who entered the post-2-year treatment follow-up period (n = 407), 78.9% received any first systemic therapy in the first-line metastatic setting, 32.7% received chemotherapy, 46.7% received endocrine therapy, 33.2% received targeted therapy (CDK4/6 inhibitor, 30.0%; PI3K/AKT/mTOR inhibitor, 3.2%), and 5.2% received other therapy. These respective rates in the control arm (n = 565) were 83.4%, 23.7%, 58.4%, 48.0% (CDK4/6 inhibitor, 47.3%; PI3K/AKT/mTOR inhibitor, 0.7%), and 4.8%.

    Differences in CDK4/6 inhibitor and chemotherapy use between the arms were predominantly seen among patients with early recurrences and were less pronounced among those with later recurrences. Among patients with early recurrences, the rates of chemotherapy and CDK4/6 inhibitor use were 43.5% and 15.2%, respectively, in the abemaciclib arm (n = 191) vs 26.8% and 44.7%, respectively, in the control arm (n = 313). Among patients with late recurrences, the usage rates of these respective classes of therapy were 23.1% and 43.1% in the abemaciclib arm (n = 216) vs 19.8% and 50.4% in the control arm (n = 252).

    “This makes clinical sense,” Johnston reported. “Patients relapsing on their adjuvant CDK4/6 inhibitor may be more likely to be offered chemotherapy. Those who have not had it in the adjuvant setting may be offered it for their metastatic disease. Remember, this was a global trial. Investigators could treat the patients as they saw fit. Globally, not all therapies in metastatic disease are equally available around the world, so we do not believe that this analysis confounds the OS impact we’ve seen. If there were more CDK4/6 inhibitors [used] in the endocrine therapy–alone arm, that might diminish the OS benefit, not enhance it.”

    What Were the Long-Term Safety Findings From monarchE?

Safety results from long-term follow-up were consistent with those from prior analyses, as all treated patients had completed treatment at least 4 years earlier. Investigators observed no relevant differences in adverse effect (AE)–related causes of death between the 2 arms.

    Among safety-evaluable patients in the abemaciclib arm (n = 2791), during therapy, 15 deaths occurred; the most common causes of death were infections and infestations (COVID-19, n = 3) and cardiac disorders (n = 5). Following treatment discontinuation in this arm, 197 patients had at least 1 serious AE during late-term follow-up, regardless of causality. There were 44 deaths attributed to AEs, including infections and infestations (n = 13; COVID-19, n = 6), second primary neoplasm (n = 13), and cardiac disorders (n = 6).

    Among safety-evaluable patients in the control arm (n = 2800), during therapy, 11 deaths occurred; the most common causes of death were infections and infestations (n = 5; COVID-19, n = 1) and second primary neoplasm (n = 1). Following treatment discontinuation in this arm, 213 patients had at least 1 serious AE during late-term follow-up, regardless of causality. There were 30 deaths attributed to AEs, including infections and infestations (n = 5; COVID-19, n = 2), second primary neoplasm (n = 7), and cardiac disorders (n = 9).

    “The 7-year analysis has continued to show a sustained benefit in IDFS and DRFS, and there are no new safety signals,” Johnston concluded.

    Disclosures: Johnston reported performing consultant or advisory roles with Eli Lilly and Company, Novartis, AstraZeneca, Roche-Genentech, and Pfizer; receiving grant/research funding from Pfizer, Eli Lilly and Company, and AstraZeneca; receiving honoraria from AstraZeneca, Roche-Genentech, Eli Lilly and Company, Novartis, and Pfizer; and giving expert testimony for Novartis.

    References

    1. Johnston S, Martin M, O’Shaughnessy J, et al. Overall survival with abemaciclib in early breast cancer. Ann Oncol. Published online October 17, 2025. doi:10.1016/j.annonc.2025.10.005
    2. Johnston SRD, Harbeck N, Hegg R, et al. Abemaciclib combined with endocrine therapy for the adjuvant treatment of HR+, HER2-, node-positive, high-risk, early breast cancer (monarchE). J Clin Oncol. 2020;38(34):3987-3998. doi:10.1200/JCO.20.02514
    3. Rastogi P, O’Shaughnessy J, Martin M, et al. Adjuvant abemaciclib plus endocrine therapy for hormone receptor-positive, human epidermal growth factor receptor 2-negative, high-risk early breast cancer: results from a preplanned monarchE overall survival interim analysis, including 5-year efficacy outcomes. J Clin Oncol. 2024;42(9):987-993. doi:10.1200/JCO.23.01994


  • Orrick Continues to Expand Asia Energy Platform with Tokyo Hire


    • Akira Takahashi joins Orrick from RWE Renewables as a partner on our Energy & Infrastructure team in Tokyo.
    • Akira, who is Japanese and New York law-qualified, brings deep experience advising on international energy projects and project finance transactions. He will collaborate with the firm’s Tokyo and Singapore teams representing Japanese export credit and insurance agencies, development banks, trading houses and other commercial lending institutions in their investments in Singapore and across Southeast Asia.
    • Prior to his time as senior legal counsel at RWE Renewables, Akira practiced at Allen & Overy for 11 years. He also spent four years on secondment to the Japan Bank for International Cooperation, where he supported their energy and natural resources businesses. His inbound and outbound experience spans the full spectrum of E&I technologies, including offshore wind, solar, LNG, hydropower and storage.
    • With his arrival, Orrick has added 15 partners to its global Energy & Infrastructure platform in 2025, including Chambers Band 1 projects advisor Adam Moncrieff in Singapore, and Global Head of Oil & Gas Anna Howell in London, who is also Chambers Band 1-ranked. The firm now has nine E&I partners across Tokyo and Singapore.

    “We’re excited to welcome a top-quality practitioner like Akira and strengthen our support for Japanese clients in their investments across Asia, as well as our work in inbound investment in Japan,” said E&I Partner Minako Wakabayashi, the leader of Orrick’s Tokyo office.

    Orrick’s Tokyo office, in collaboration with the firm’s Singapore team, advises on both traditional and innovative energy projects across Japan, Singapore, Indonesia, Vietnam, India, Taiwan and other markets with accelerating energy needs. This includes leading some of the largest wind and solar projects, as well as some of the first renewables PPAs to support data centers in the region.

    “Akira brings a deep understanding of the needs of today’s energy market participants, having advised on complex outbound transactions across Southeast Asia,” said Singapore-based Orrick Energy & Infrastructure Partner Michael Tardif, also a Chambers Band 1 advisor. “I’ve seen firsthand his versatile skillset and commitment to client service. We’re thrilled to welcome him as we continue building a full-service energy offering for our clients.”

    “I’m delighted to reunite with my former colleagues, Michael and Adam, and to collaborate with the entire Orrick team to support our clients on their most sophisticated financings and projects,” Akira said.

    Orrick is the No. 2 law firm globally for Energy Transition and No. 1 globally for PPAs (inspiratia, 2024). The firm acts for four of the top 10 oil & gas majors, 60 of the top 100 energy & infrastructure investors globally and half of the top 50 renewables sponsors worldwide.


  • Customers sue maker of popular On Cloud shoes over ‘noisy and embarrassing’ squeaks


    Athletic shoe company On is facing a lawsuit from customers who claim that its popular sneakers make a “noisy and embarrassing squeak”.

    The “CloudTec” sneakers typically cost around $200 (£150) and have holes in the sole designed to make users feel like they are “running on clouds”. Instead, the lawsuit says, they cause issues in daily life – especially for nurses who wear them all day.

    “No reasonable consumer would purchase Defendant’s shoes – or pay as much for them as they did – knowing each step creates an audible and noticeable squeak,” the customers allege.

    The company, which did not immediately respond to a BBC enquiry, has declined to comment on the allegations.

    The class action lawsuit was filed on October 9 in US District Court in Oregon.

    The customers say that multiple On sneaker styles are unwearable without “significant DIY modifications”. They accused the company of “deceptive marketing”.

    The plaintiffs, who claim they were unable to return the shoes after complaining about the noise, are seeking refunds and other damages.

    The Switzerland-based sneaker company could have “fixed the design, and/or offered to fix the shoes or [given] consumers their money back but did none of those things”, the complaint alleges, citing the Cloudmonster and Cloudrunner models, among others.

    One customer claimed in the complaint that she was “no longer able to use her shoes as intended due to the embarrassment and annoyance”.

    The plaintiffs in their complaint reference social media posts, on TikTok and Reddit, from other frustrated customers who have suggested at-home remedies for the noise – including applying coconut oil to the soles of the shoes.

    On, which is backed by the tennis player Roger Federer, reported better-than-expected earnings in August. Its quarterly revenue was boosted by direct-to-consumer sales.

    Earlier this year, the company said sales of its Cloudmonster and Cloudsurfer sneaker models contributed “significantly” to its growth.
