
    A novel presurgical risk prediction model for chronic post-surgical pain

    Introduction

    Chronic post-surgical pain (CPSP) is a debilitating condition that significantly reduces quality of life and carries substantial biopsychosocial and economic consequences for both patients and society. Low return-to-work rates and increased school absenteeism further add to the high socio-economic burden of chronic pain.1 Given that over 230 million people undergo surgery globally each year, with CPSP developing after roughly 10% of surgical procedures and after up to 85% in certain outlier procedures,2,3 the potential burden of CPSP is vast.4,5 Moreover, the number of surgical procedures is expected to grow with increasing obesity, inflammatory diseases and life expectancy. After CPSP was recognized as a problem in the 1990s, its definition underwent several modifications, and many different definitions are still maintained in the literature.6–8 The definition recently became more standardized after inclusion in the International Classification of Diseases (ICD-11); however, since this definition is relatively new, its implementation is still in progress.3 The clinical success of (preventive) CPSP management unfortunately often remains unsatisfactory, with persistent pain complaints and accompanying anxiety and depressive symptoms.9 Although research on risk factors for the development of CPSP has grown significantly in the last two decades,2,5 this has not yet translated into improved postsurgical patient outcomes.9 A presurgical CPSP prediction model, suitable for daily use across a large surgical population and able to allocate high-risk patients to the appropriate type of care, is therefore urgently needed.

    Up to now, different models have been developed to estimate acute and chronic postsurgical pain.10–13 However, current predictive models for CPSP often face limitations due to a narrow selection of surgical procedures, which restricts their generalizability. They also struggle to incorporate parameters from the postoperative period and rely on data that are challenging to collect in routine practice. Furthermore, many models lack robust validation, reducing their clinical utility in identifying high-risk patients across diverse surgical contexts. Additionally, the increasing adoption of the International Association for the Study of Pain (IASP) definition of CPSP raises concerns about its clinical use in cases where patients experience ongoing pain after surgery, whether due to pre-existing complex pain unrelated to surgery, mixed pain conditions or minimal post-surgical reductions in pain intensity, further reflecting its complexity.13

    Despite the complex biopsychosocial interplay underlying chronic pain, various surgery- and patient-related risk factors consistently emerge as influencing the likelihood of CPSP occurrence.9 Yet, many of these factors are not easily modifiable, and others are too labor-intensive to assess effectively in a typical preoperative clinical setting. Estimating the probability of CPSP occurrence with a generalizable preoperative model could not only improve patient care and surgical outcomes by facilitating early pain management, but also guide further research into the treatment of CPSP, such as the evaluation of pharmacological strategies in identified high-risk individuals. Moreover, CPSP prediction can yield economic benefits through enhanced post-surgical recovery with early pain management and a smoother reintegration, including return to work.

    Therefore, the aim of this study was to develop a presurgical CPSP risk prediction model with good discriminative power, clinical applicability, and possible generalization to a broad group of adults undergoing different types of surgery.

    Materials and Methods

    Participants

    After approval by the Ethics Committee (BUN B3002022000112, September 2022), this single-center observational pragmatic pilot study, called the PERISCOPE trial, was conducted at the Antwerp University Hospital (UZA), Belgium, in accordance with the Declaration of Helsinki and GCP guidelines. The protocol, including the design, of this observational pragmatic pilot study (ClinicalTrials.gov NCT05526976) has been published previously.14 Between December 2022 and September 2023, 660 Dutch-speaking adults scheduled for any type of surgery were recruited preoperatively at the tertiary Antwerp University Hospital. Written informed consent was provided by all participants prior to participation. Patients were excluded if any of the following applied: age below 18 years, inability to complete questionnaires, (diagnostic) procedures without a scheduled intervention (such as bronchoscopy, hysteroscopy, gastroscopy, and colonoscopy), or refusal of informed consent. Included patients received an analgesic regimen for postoperative pain prescribed by the attending anesthesiologist according to the surgery-specific anesthesia guidelines used in our center.

    Study Design, Data Collection and Outcome Variable

    During the study, there were no deviations from the standard of care, and no additional interventions were performed. This study followed the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) guidelines for prediction model development.15,16

    Patient-reported data acquired at the screening visit, post-surgery day 1, month 1 and month 3 were collected electronically (REDCap®, Research Electronic Data Capture, Version 13.6.1, Vanderbilt University, Nashville, Tennessee, USA). During the screening visit, the socio-demographic characteristics (age, sex, level of education and BMI) of all participants, as well as their medical history, preoperative analgesic usage and surgical details, were recorded and verified through the electronic medical record. Additionally, participants were instructed in the pain assessment that would be conducted throughout the study trajectory. Participants were asked to electronically complete the following patient-reported outcome measures (PROMs) at three timepoints (before surgery, 1 month and 3 months after surgery): surgical-site pain intensity (11-level numeric rating scale (NRS)),17 health-related quality of life (EQ-5D-5L),18 and patient-experienced levels of depression and anxiety (Hospital Anxiety and Depression Scale (HADS) and Spielberger’s Trait Anxiety Inventory (STAI-trait)).19–21 In addition, experienced concerns about the surgery were assessed, considering their prevalence in daily practice and the suggested opportunity for future intervention.9,21–23 When pain intensity was scored above two on the 11-level NRS, the pain assessment was extended with a screening for neuropathic pain characteristics (the questionnaire part of the Douleur Neuropathique 4 (DN4))24 and a validated self-report of pain impact on life (Multidimensional Pain Inventory (MPI) part 1).25 Additionally, a modified version of the Kalkman and Althaus index, which previously showed good predictive properties, was assessed presurgically.26,27

    Acute postoperative pain intensity (NRS) on post-surgery day 1 was registered as an additional timepoint. A condensed version of the study design is illustrated in Figure 1. To minimize follow-up non-compliance, up to five reminders were sent via email and telephone. During follow-up contacts, changes in medication, diagnosed surgical complications and visits to the general practitioner, psychologist, surgeon or pain physician were logged.

    Figure 1 Study design. Perioperative patient flow with the timepoints at which questionnaires were to be completed.

    The objective of this study was to develop a presurgical CPSP risk prediction model useful for clinical practice and generalizable to a variety of surgical procedures. The outcome parameter of interest, CPSP, was defined as a pain intensity of ≥3 on the NRS, localized at the surgical site, three months post-surgery.
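As a minimal sketch of this outcome definition (the threshold and timepoint come from the text above; the function name is ours):

```python
def is_cpsp(nrs_surgical_site_month3: int) -> bool:
    """CPSP outcome as defined in this study: pain intensity >= 3 on the
    0-10 NRS, localized at the surgical site, three months post-surgery."""
    if not 0 <= nrs_surgical_site_month3 <= 10:
        raise ValueError("NRS score must lie between 0 and 10")
    return nrs_surgical_site_month3 >= 3
```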

    Candidate Predictors

    Based on clinical knowledge and a review of the literature, we identified 33 candidate predictors (comprehensive list available in Appendix 1). Among these, we considered sociodemographic characteristics including sex, age, BMI and educational level (low (no secondary education)/intermediate (secondary education)/high (higher education)), presurgical analgesic consumption (yes/no, opioids and antineuropathics), surgical procedure and the above-mentioned PROMs (modified version of Kalkman and Althaus, NRS, EQ-5D-5L, HADS, STAI).18,19,26–28

    The 44 different surgical procedures performed across 11 disciplines were categorized by three independent pain physicians into six categories according to the Kalkman classification: Ophthalmology, Laparoscopy, Ear-nose-throat (ENT) surgery, Orthopedic surgery, Laparotomy and Other surgeries.26 Furthermore, the procedures were classified into the following categories: small procedures with high risk, large procedures with high risk, and other, as outlined in the protocol.14 Additionally, the surgical procedures were grouped into seven categories (Surgical-7 categorization) as an alternative to the Kalkman classification.

    Sample Size Calculation

    A logistic regression model was developed to predict the probability of CPSP 3 months post-surgery. The estimated probability from this model was then used to construct a receiver operating characteristic (ROC) curve to discriminate between CPSP and non-CPSP and to determine the optimal risk cut-off value with respect to maximal sensitivity and specificity.
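The ROC construction and cut-off selection described above can be sketched as follows (an illustrative Python example on toy data, not the study's R code; both the Youden index and the closest-to-top-left criterion used later in the Results are shown):

```python
import numpy as np

def roc_points(y, p):
    """FPR and TPR at every candidate cut-off (predict positive if p >= t)."""
    y, p = np.asarray(y), np.asarray(p)
    thresholds = np.unique(p)
    tpr, fpr = [], []
    for t in thresholds:
        pred = p >= t
        tpr.append((pred & (y == 1)).sum() / (y == 1).sum())
        fpr.append((pred & (y == 0)).sum() / (y == 0).sum())
    return np.array(fpr), np.array(tpr), thresholds

def auc_rank(y, p):
    """AUC as P(score of a case > score of a control), ties counted half."""
    pos, neg = p[y == 1], p[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy outcomes and predicted probabilities
y = np.array([0, 0, 1, 1, 1])
p = np.array([0.1, 0.6, 0.4, 0.7, 0.9])
fpr, tpr, th = roc_points(y, p)
youden = th[np.argmax(tpr - fpr)]               # maximizes sensitivity + specificity - 1
topleft = th[np.argmin(fpr**2 + (1 - tpr)**2)]  # closest to the (0, 1) corner
```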

    The sample size calculation was based on constructing a 95% confidence interval for the area under the ROC curve (AUC). Assuming an AUC of 0.7 and a confidence interval width of 0.2, at least 56 CPSP patients were required.29 Based on the scientific evidence available at the time of study design, and assuming a possible mixed CPSP incidence of 10%, a minimum of 560 patients scheduled for surgery had to be included. Taking subject withdrawal, incomplete data and loss to follow-up into account, a total of 660 patients were recruited.
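One common way to perform such a calculation is via the Hanley–McNeil approximation for the standard error of the AUC. The sketch below illustrates the principle rather than reproduces the published figure: the exact number of required cases depends on the assumed case:control ratio and on the method used in the cited reference.

```python
from math import sqrt

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Standard error of the AUC (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc)
           + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    return sqrt(var)

def min_cases(auc=0.7, half_width=0.1, ratio=1):
    """Smallest number of cases so that the 95% CI for the AUC is no wider
    than 2 * half_width, with `ratio` controls per case."""
    n = 2
    while 1.96 * hanley_mcneil_se(auc, n, round(ratio * n)) > half_width:
        n += 1
    return n
```

With equal case and control groups this gives a required case count of the same order as the 56 reported in the text; more controls per case reduce the requirement.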

    Statistical Analysis

    Numeric variables are summarized with mean and standard deviation or median and interquartile range, and categorical variables with observed frequency and percentage. An initial logistic regression model was fitted with the Kalkman score and modified Althaus index as predictors and CPSP as outcome. Because a multivariable model with all predictors was not feasible (only 80 CPSP cases), a stepwise forward approach was followed to build the prediction model. Starting with univariable models evaluating all 33 candidate predictors, the predictor resulting in the best model in terms of highest AUC and significance was retained. In the next step, multivariable models with two predictors were considered, keeping the best univariable candidate in each model and adding one of the other candidate predictors. Overlapping candidates were left out (eg, different instances of pain, different instances of surgery). These steps were repeated for multivariable models with three predictors (keeping the two best candidates from the previous step fixed and adding one extra candidate) and four predictors, leading to a final model. The ROC curve of the final model was then compared with that of the initial model using DeLong’s test for correlated ROC curves. Bootstrap techniques were used to evaluate the model’s performance in similar future patients. Random bootstrap samples were drawn with replacement (100 replications) from the data set consisting of all patients who completed the Kalkman and modified Althaus questionnaires preoperatively and the NRS score at month 3 (n = 415). Forward selection of the candidates was repeated within each bootstrap sample, allowing us to adjust the estimated model performance and regression coefficients for overoptimism (overfitting). A calibration plot was constructed to examine the agreement between predicted probabilities and observed frequencies, and the calibration measures expected/observed (E/O) ratio, calibration slope and calibration-in-the-large (CITL) are reported.
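As an illustrative Python sketch (not the authors' R/Stata code) of how these three calibration measures can be computed from observed outcomes y and predicted probabilities p:

```python
import numpy as np

def newton_logistic(X, y, offset=0.0, iters=30):
    """Maximum-likelihood logistic regression via Newton-Raphson,
    optionally with a fixed offset added to the linear predictor."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(offset + X @ beta)))
        w = mu * (1.0 - mu)
        beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))
    return beta

def calibration_measures(y, p):
    """E/O ratio, calibration-in-the-large (CITL) and calibration slope."""
    y = np.asarray(y, float)
    p = np.asarray(p, float)
    lp = np.log(p / (1.0 - p))                     # logit of the predictions
    eo = p.mean() / y.mean()                       # expected / observed events
    ones = np.ones((len(y), 1))
    citl = newton_logistic(ones, y, offset=lp)[0]  # intercept, logit(p) as offset
    slope = newton_logistic(np.column_stack([ones[:, 0], lp]), y)[1]
    return eo, citl, slope

# toy example: predictions drawn from the true model are well calibrated,
# so E/O and the slope should be close to 1 and CITL close to 0
rng = np.random.default_rng(1)
x = rng.normal(size=20_000)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * x)))
y_obs = (rng.random(20_000) < p_true).astype(float)
eo, citl, slope = calibration_measures(y_obs, p_true)
```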

    A complete case analysis (n = 415 patients) was used per considered model: if a patient had missing values only in variables not included in a given model, that patient still contributed to that model (Figure 2). No single candidate predictor had more than 5% missing values. As the outcome was missing in 16% of the 496 patients, multiple imputation was additionally considered, with 50 imputed datasets; model selection and bootstrap validation were repeated on the imputed datasets.
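The AUC-driven forward selection described above can be sketched as follows (a simplified Python illustration on synthetic data, not the study's R code; refinements such as excluding overlapping candidates are omitted):

```python
import numpy as np

def fit_logistic(X, y, iters=30):
    """Newton-Raphson logistic regression; returns the fitted risk score."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X1 @ beta)))
        w = mu * (1.0 - mu)
        beta += np.linalg.solve(X1.T @ (X1 * w[:, None]), X1.T @ (y - mu))
    return X1 @ beta                          # linear predictor per patient

def auc(y, score):
    """AUC as the probability that a case outscores a control (ties = 0.5)."""
    pos, neg = score[y == 1], score[y == 0]
    return (pos[:, None] > neg).mean() + 0.5 * (pos[:, None] == neg).mean()

def forward_select(X, y, max_k):
    """Greedily add, at each step, the candidate maximizing the model AUC."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(max_k):
        best = max(remaining,
                   key=lambda j: auc(y, fit_logistic(X[:, chosen + [j]], y)))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# synthetic cohort: column 0 strongly predictive, 1 pure noise, 2 weakly predictive
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
lin = 2.0 * X[:, 0] + 0.6 * X[:, 2]
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-lin))).astype(float)
order = forward_select(X, y, max_k=3)         # strongest predictor enters first
```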

    Figure 2 Flowchart of sample cohort for analysis.

    All analyses were performed with the statistical software R version 4.3.1, except the bootstrap validation of the final model, which was done in Stata version 18.5. Multiple imputation was done with the R package mice, and psfmi was used for model estimation, pooling and validation after imputation.

    Results

    Patient Demographics and Characteristics

    In this pilot study, 660 patients were included. Of the 660 recruited subjects, 164 (24.8%) were excluded from analysis due to preoperative factors. Of the 496 subjects with preoperative data, 81 (16.3%) were excluded due to postoperative factors. Figure 2 summarizes the study sample cohort for analysis. Table 1 provides descriptive statistics for the 415 patients at the screening visit. These 415 patients underwent 44 different operations across 11 disciplines. Three months post-surgery, 19.3% of the surveyed subjects reported a pain NRS score ≥3 at the surgical site (Figure 3).

    Table 1 Descriptive Statistics at Screening Visit

    Figure 3 Overview of the distribution of pain intensity (NRS) at the surgical site area three months post-surgery.

    Development of a New Predictive Model

    The initial model, with the Kalkman score (p < 0.0001) and Althaus risk index (p = 0.074) as predictors for CPSP, resulted in an AUC of 0.72 (95% CI [0.66, 0.78]). Among the univariable models for each of the 33 predictors, several predictors were significant, with the highest AUCs obtained for the preoperative pain questions: NRS q1 (p < 0.001; AUC 0.76), NRS q2 (p < 0.001; AUC 0.77) and Kalkman preoperative pain (p < 0.001; AUC 0.76). Next, each of the remaining 30 predictors (leaving out NRS q2 and Kalkman preoperative pain, as they are also pain variables) was added to the model with NRS q1. Among these two-predictor models, the highest AUC (0.80) was obtained when Kalkman surgery (p = 0.001) was included alongside NRS q1. Because the Ophthalmology category was very small, Kalkman surgery was from then on recategorized into five categories, with Ophthalmology and Other combined into one category. In the next step, evaluating models with three predictors, the concern question “I am worried about the procedure” (p = 0.032) was retained, with an AUC of 0.81. Finally, a model with four predictors, adding Education (p = 0.047) as the fourth variable, gave an AUC of 0.81 (95% CI [0.76, 0.87]). Comparing the final model with the four predictors NRS q1, surgery, concern question and education to the initial model with Kalkman score and Althaus risk index yielded a p-value of 0.0003 (DeLong’s test), showing a significant improvement (higher AUC) of the final model over the initial model. Together, these four questions form the Persistent Post-surgical Pain Prediction (P4)-Prevoque™ questionnaire (Table 2).

    Table 2 Final Model for Presurgical CPSP Prediction: PrevoqueTM Questionnaire

    Bootstrapping was used to adjust for overfitting, after which the AUC of the final model was 0.76 (overoptimism 0.05). The odds of CPSP (adjusted for overfitting) were 27% higher per 1-unit increase in the preoperative pain score on the 11-level scale. ENT surgery had the smallest odds of CPSP; compared with ENT surgery, the odds of CPSP were more than 8 times higher for abdominal surgery, 6 times higher for other and ophthalmic surgery, more than 4 times higher for orthopedic surgery, and 2.5 times higher for laparoscopic surgery. The odds of CPSP were 19% higher per 1-unit increase in the answer to “I worry about the operation”. Patients with a low education level had almost 3 times higher odds of CPSP compared to those with intermediate and high education levels. The predicted probabilities were then calculated using the regression coefficients of the final model adjusted for overfitting. According to the Youden index, the ideal cutoff for these predicted probabilities is 23.9%, resulting in a sensitivity of 69.7% and a specificity of 82.0%. Using the closest-to-the-top-left-corner method, a cutoff of 20.58% is chosen, resulting in a sensitivity of 73.7% and a specificity of 77.0%. Figure 4 displays the calibration plot of observed versus predicted outcomes with the performance measures E/O ratio, calibration slope and calibration-in-the-large; there is a slight tendency for estimates to be somewhat too high for individuals at high risk and too low for those at low risk, but overall the calibration measures are fair.
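The bootstrap optimism correction reported above (apparent AUC minus the average optimism across bootstrap refits) can be sketched as follows (illustrative Python on synthetic data; the study itself performed this step in Stata):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson logistic regression; returns the coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X1 @ beta)))
        w = mu * (1.0 - mu)
        # tiny ridge term keeps the Hessian invertible on odd resamples
        H = X1.T @ (X1 * w[:, None]) + 1e-6 * np.eye(X1.shape[1])
        beta += np.linalg.solve(H, X1.T @ (y - mu))
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

def auc(y, score):
    pos, neg = score[y == 1], score[y == 0]
    return (pos[:, None] > neg).mean() + 0.5 * (pos[:, None] == neg).mean()

# small sample with mostly-noise predictors -> apparent AUC is optimistic
rng = np.random.default_rng(2)
n, k = 200, 5
X = rng.normal(size=(n, k))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)

beta_full = fit_logistic(X, y)
apparent = auc(y, predict(beta_full, X))   # evaluated on its own training data

optimism = []
for _ in range(100):                       # 100 replications, as in the study
    idx = rng.integers(0, n, n)            # resample patients with replacement
    b = fit_logistic(X[idx], y[idx])
    optimism.append(auc(y[idx], predict(b, X[idx])) - auc(y, predict(b, X)))
corrected = apparent - float(np.mean(optimism))
```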

    Figure 4 Calibration plot for validation of the proposed prediction model.

    Pooling and selecting the logistic regression models of the 50 imputed datasets revealed a similar model with the same 4 predictors. Internal validation across the imputed datasets with bootstrapping resulted in an optimism corrected AUC of 0.76 (95% CI [0.67,0.82]). Optimism corrections were larger for calibration results.

    Discussion

    This study presents the development of a presurgical CPSP prediction model for adults undergoing a wide range of surgical procedures. Prediction models are developed to help healthcare providers estimate the likelihood of a particular event so that they can adjust their decisions accordingly.30 Various models have been created in recent years to predict postsurgical pain; however, to date, no generalizable CPSP risk stratification model independent of type of surgery is extensively applied. Our multivariable model, the P4-Prevoque™ questionnaire, can presurgically classify adults undergoing a scheduled surgical procedure, forecasting an individual likelihood of CPSP based on four preoperative patient characteristics: preoperative pain score at the surgical area (0 to 10 on NRS) [1], type of surgery (in 5 categories) [2], education level (in 3 levels) [3] and concerns reported about the planned surgical procedure (in 6 levels) [4]. Model performance is good in terms of discriminative power and calibration, indicating that the model is a useful tool for identifying patients at risk of CPSP.31

    In our single-center study cohort, 19.3% of the included patients reported a pain intensity of 3 or more on the 0–10 NRS three months after surgery. Although this finding falls within the broad spectrum of reported CPSP incidence, this average may still be considered comparatively high.3,5,9,32,33 Type of surgery is a known contributor to the substantial variation in incidence.9 Moreover, the high mean incidence in our study cohort may reflect the surgical procedures and population of a tertiary hospital. Additionally, the inconsistent application of CPSP definitions may act as a confounding factor. Since the ICD-11 definition is relatively recent and its implementation is ongoing, it is important to note that many different definitions are still maintained in the literature, as described by Glare et al.34 In a supplementary analysis, the most recent ICD-11 definition of CPSP, an increase in NRS score of 1 or more at 3 months post-surgery compared to preoperative values, was applied to the subjects whose pain intensity increased at month 3 compared to pre-surgery.3 This yielded a group of 72 CPSP patients according to the ICD-11 definition, of whom 29 had CPSP both according to the ICD-11 definition and the primary outcome variable. Furthermore, a second predictive model analysis for this alternative outcome variable was performed. The resulting model included presurgical pain intensity at the surgical area (11-level NRS), type of surgery (Surgery-7 classification) and STAI-trait, and is as such similar to the proposed primary model. Despite the necessity for uniformity in definition, we acknowledge the concerns raised in the recent publication by Papadomanolakis-Pakis et al regarding the ICD-11 definition. Specifically, patients with stable or reduced pain levels are not classified as having CPSP under the ICD-11 definition, highlighting the need for careful pain assessment in educated patients or clinical confirmation in the diagnosis of CPSP.13 Furthermore, patients reporting the maximum pain score preoperatively cannot report a higher score on the NRS post-surgery, rendering them ineligible for a CPSP diagnosis under this definition. In response to the need for a scientifically rigorous and practically applicable framework, this CPSP definition is likely to evolve over the next few years; it may also come to incorporate the impact of analgesic use and changes in pain type within its diagnostic criteria.

    Presurgical pain was found to be a strong predictor in our proposed model. This is in line with literature indicating that persistent nociceptive stimulation may cause changes in pain physiology leading to a sensitized nervous system.35,36 The predictive role of surgical type is also in accordance with previous literature, as surgical tissue trauma, surgery duration and neuronal damage are important contributing factors.9 Furthermore, a recent meta-analysis by Giusti et al evaluated psychosocial predictors of CPSP and concluded that a heterogeneous group of psychological predictors is significantly associated with CPSP.22 In this study, concerns about the surgery, anxiety and depression were assessed as identified psychosocial predictors. A single question on worry about the planned surgical procedure appeared to be more predictive than anxiety or depressive states in this study cohort.

    As mentioned, a handful of prognostic models for postoperative pain intensity have been developed in recent years.10,13,32,37 Two existing prediction models identified as potentially useful in daily practice were incorporated in this research and compared with our prediction model.26,27 However, the Althaus risk index was specifically developed to predict pain 6 months post-surgery and included post-surgical acute pain as a predictor (although a version of the model without this variable is also presented in their publication).27 Similarly, the Kalkman score used the presence of severe postoperative pain within the first hour after surgery as its outcome.26 Consequently, these models differ in their outcomes and are not entirely comparable. Notwithstanding, the modified version of the Kalkman and Althaus risk index (without the post-surgical acute pain item) was assessed preoperatively, and evaluation of the corresponding logistic model resulted in an AUC of 0.72, compared with an AUC of 0.81 for the newly developed prediction model. Both ROC curves were compared with DeLong’s test for correlated ROC curves, yielding a p-value of 0.0003 and showing a significant improvement in AUC for our model. Our findings suggest that both the Kalkman score and the Althaus index have presurgical predictive value for CPSP, although they were not used in this study for their intended outcome or at the intended timepoint. However, the P4-Prevoque™ model, as designed, demonstrates greater accuracy and significantly enhanced predictive power compared to these two models.

    A comparison with even more recently developed prediction models11,12,38 is not feasible due to substantial differences in study populations and outcome variables, including varying interpretations and applications of the CPSP definition, timing and the selection of predictors. Beyond its similarities to and differences from the few existing alternatives, the proposed P4-Prevoque™ model distinguishes itself by its inclusive nature, enhancing its generalizability. Our objective was to design a risk model for CPSP that is both clinically relevant and suitable for widespread implementation, by reducing the number of questionnaires and categorizing the responses in a manner conducive to preoperative screening visits, telephone consultations and digital preoperative care pathways. The P4-Prevoque™ questionnaire facilitates a rapid assessment of the risk of developing CPSP or transitioning to a more severe pain condition. It offers multiple applications for vulnerable patients scheduled for surgery: enhancing patient allocation in research trials, informing tailored management strategies, and ultimately improving comprehensive postoperative outcomes.

    The potential benefit of a prediction model in reducing the incidence of CPSP, to date, remains unclear.9–13 Nevertheless, CPSP, as a complex biopsychosocial phenomenon with an often challenging treatment approach, could benefit from early, presurgical, patient-centered care.9,22,34,39 Early identification of individuals at risk of developing CPSP is essential to ensure prompt assignment to appropriate care. Subsequently, it will have to be investigated whether early non-pharmacological and pharmacological approaches in at-risk subjects scheduled for various types of surgery can reduce CPSP incidence.9,40,41

    Thereafter, decision analysis methods can be used to assess whether a prediction model should be used in practice by incorporating and quantifying its clinical impact, considering the anticipated benefits, risks, and costs. Furthermore, this study focuses on CPSP three months after surgery. Between three and six months postoperatively, pain complaints may fluctuate in terms of prevalence, intensity, and clinical relevance. As a result, prediction models targeting outcomes at three and six months may differ. Yet, by targeting the three-month outcome, we aim to identify patients early, when there is a meaningful opportunity for intervention.

    In addition to the described strengths, our study has several limitations. First, only 415 of 660 patients completed the questionnaires, resulting in a considerable amount of missing data. Comparisons between completers and non-completers on sex, age, BMI, education and surgery type showed a significant difference only in education level: 30% of non-completers had a high education level compared with 45% of completers. This is an important group to be missing from the analysis, as lower education level is a previously identified patient-related risk factor. Moreover, the small sample size prohibited a backward selection procedure in the model-building step. Another potential limitation is the possibility of misclassification of the CPSP endpoint. As in other pragmatic studies, postoperative in-person follow-up visits were not conducted.11,13 Participants underwent remote pain assessments after education during the screening visit. Diagnosed surgical complications and readmissions were verified by telephone and cross-checked against the medical records. However, the lack of a physical examination to thoroughly assess the characteristics of CPSP may be a point of discussion, as reported in recent research.33 Finally, no detailed assessment of presurgical pain complaints was performed, although such complaints could reflect altered excitability of the nervous system, as in preexisting nociplastic pain syndromes described by Fitzcharles et al.42

    In recent years, more research has addressed machine learning (ML) models. Langford and colleagues43 reviewed the use of ML to predict postoperative pain and opioid use, highlighting the growing potential of these methods to improve early risk identification. They emphasized the need for rigorous methodological standards and validation to ensure clinical applicability. These findings support the relevance of our approach in developing a robust and interpretable CPSP prediction model.

    Conclusion

    In conclusion, using the designed model, the occurrence of CPSP can be estimated presurgically in adults scheduled for surgery, with a sensitivity of 74% and a specificity of 77% in the studied population. The P4-Prevoque™ model, composed of four questions, can be easily administered and has the potential to integrate seamlessly into preoperative workflows through digital tools such as online forms and mobile apps, as well as during in-person visits via kiosks in waiting areas or at healthcare providers’ offices, supporting both modern and traditional care approaches. Future research should prioritize external validation of the prediction model in an independent dataset, its evaluation in non-university hospital surgical settings, and subsequently its implementation and valorization. If subjects at risk of CPSP can be identified early, preventive pharmacological and non-pharmacological antinociceptive interventions may be reconsidered. Given the relative immutability of surgery type and educational level, we argue that research and prevention efforts should concentrate not only on pain but also on psychological aspects such as patient fear, anxiety, and concerns about the surgical procedure. Following validation of the prediction model, it is important to evaluate its impact on patient-reported outcome measures and patient-reported experience measures. Ultimately, it remains to be determined whether, and which, interventions targeting high-risk individuals will lead to a reduction in the burden of CPSP.

    Data Sharing Statement

    Requests for (de-identified) raw data used in this clinical trial can be directed to the corresponding author.

    Acknowledgments

    This research has been conducted with screening at the preoperative screening anesthesiology department. We would like to thank the staff, especially Dr H Vandervelde, for their contributions.

    Author Contributions

    All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

    Funding

    No funding was obtained for this research project.

    Disclosure

    The authors declare that they have no conflicts of interest in this work.

    References

    1. Turk DC, Wilson HD, Cahana A. Pain 2 treatment of chronic non-cancer pain. Lancet. 2011;377(9784):2226–2235. doi:10.1016/S0140-6736(11)60402-9

    2. Fletcher D, Stamer UM, Pogatzki-Zahn E, et al. Chronic postsurgical pain in Europe: an observational study. Eur J Anaesthesiol. 2015;32(10):725–734. doi:10.1097/EJA.0000000000000319

    3. Schug SA, Lavand’Homme P, Barke A, Korwisi B, Rief W, Treede RD. The IASP classification of chronic pain for ICD-11: chronic postsurgical or posttraumatic pain. Pain. 2019;160(1):45–52. doi:10.1097/j.pain.0000000000001413

    4. Weiser TG, Haynes AB, Molina G, et al. Estimate of the global volume of surgery in 2012: an assessment supporting improved health outcomes. Lancet. 2015;385(Suppl 2):S11.

    5. Kehlet H, Jensen TS, Woolf CJ. Persistent postsurgical pain: risk factors and prevention. Lancet. 2006;367(9522):1618–1625. doi:10.1016/S0140-6736(06)68700-X

    6. Macrae WA. Chronic post-surgical pain: 10 years on. Br J Anaesth. 2008;101(1):77–86. doi:10.1093/bja/aen099

    7. Macrae WA. Chronic pain after surgery. Br J Anaesth. 2001;87(1):88–98. doi:10.1093/bja/87.1.88

    8. Werner MU, Kongsgaard UE. Defining persistent post-surgical pain: is an update required? Br J Anaesth. 2014;113(1):1–4. doi:10.1093/bja/aeu012

    9. Rosenberger DC, Pogatzki-Zahn EM. Chronic post-surgical pain – update on incidence, risk factors and preventive treatment options. BJA Educ. 2022;22(5):190–196. doi:10.1016/j.bjae.2021.11.008

    10. Papadomanolakis-Pakis N, Uhrbrand P, Haroutounian S, Nikolajsen L. Prognostic prediction models for chronic postsurgical pain in adults: a systematic review. Pain. 2021;162(11):2644–2657. doi:10.1097/j.pain.0000000000002261

    11. van Driel MEC, van Dijk JFM, Baart SJ, Meissner W, Huygen FJPM, Rijsdijk M. Development and validation of a multivariable prediction model for early prediction of chronic postsurgical pain in adults: a prospective cohort study. Br J Anaesth. 2022;129(3):407–415. doi:10.1016/j.bja.2022.04.030

    12. Sluka KA, Wager TD, Sutherland SP, et al. Predicting chronic postsurgical pain: current evidence and a novel program to develop predictive biomarker signatures. Pain. 2023;164(9):1912–1926. doi:10.1097/j.pain.0000000000002938

    13. Papadomanolakis-Pakis N, Haroutounian S, Sørensen JK, et al. Development and internal validation of a clinical risk tool to predict chronic postsurgical pain in adults: a prospective multicentre cohort study. Pain. 2022. doi:10.1097/j.pain.0000000000003405

    14. Wildemeersch D, Meeus I, Wauters E, et al. Evaluating the predictive value of a short preoperative holistic risk factor screening questionnaire in preventing persistent pain in elective adult surgery: study protocol for a prospective observational pragmatic trial [PERISCOPE]. J Pain Res. 2023;16:4281–4287. doi:10.2147/JPR.S439824

    15. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. BMC Med. 2015;13(1):1. doi:10.1186/s12916-014-0241-z

    16. Collins GS, Moons KGM, Dhiman P, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024. doi:10.1136/bmj-2023-078378

    17. Downie WW, Leatham PA, Rhind VM, Wright V, Branco JA, Anderson JA. Studies with pain rating scales. Ann Rheum Dis. 1978;37(4):378–381. doi:10.1136/ard.37.4.378

    18. Devlin N, Pickard S, Busschbach J. The development of the EQ-5D-5L and its value sets. In: Value Sets for EQ-5D-5L. Springer International Publishing; 2022:1–12. doi:10.1007/978-3-030-89289-0_1

    19. Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983;67(6):361–370. doi:10.1111/j.1600-0447.1983.tb09716.x

    20. Marteau TM, Bekker H. The development of a six‐item short‐form of the state scale of the Spielberger State—Trait Anxiety Inventory (STAI). Br J Clin Psychol. 1992;31(3):301–306. doi:10.1111/j.2044-8260.1992.tb00997.x

    21. Moerman N, van Dam FSAM, Muller MJ, Oosting H. The Amsterdam Preoperative Anxiety and Information Scale (APAIS). Anesth Analg. 1996;82(3):445–451. doi:10.1097/00000539-199603000-00002

    22. Giusti EM, Lacerenza M, Manzoni GM, Castelnuovo G. Psychological and psychosocial predictors of chronic postsurgical pain: a systematic review and meta-analysis. Pain. 2021;162(1):10–30. doi:10.1097/j.pain.0000000000001999

    23. Weinrib AZ, Azam MA, Birnie KA, Burns LC, Clarke H, Katz J. The psychology of chronic post-surgical pain: new frontiers in risk factor identification, prevention and management. Br J Pain. 2017;11(4):169–177. doi:10.1177/2049463717720636

    24. Bouhassira D, Attal N, Alchaar H, et al. Comparison of pain syndromes associated with nervous or somatic lesions and development of a new neuropathic pain diagnostic questionnaire (DN4). Pain. 2005;114(1):29–36. doi:10.1016/j.pain.2004.12.010

    25. Lousberg R, Van Breukelen GJP, Groenman NH, Schmidt AJM, Arntz A, Winter FAM. Psychometric properties of the multidimensional pain inventory, Dutch language version (MPI-DLV). Behav Res Ther. 1999;37(2):167–182. doi:10.1016/S0005-7967(98)00137-5

    26. Kalkman CJ, Visser K, Moen J, Bonsel GJ, Grobbee DE, Moons KGM. Preoperative prediction of severe postoperative pain. Pain. 2003;105(3):415–423. doi:10.1016/S0304-3959(03)00252-5

    27. Althaus A, Hinrichs-Rocker A, Chapman R, et al. Development of a risk index for the prediction of chronic post-surgical pain. European J Pain. 2012;16(6):901–910. doi:10.1002/j.1532-2149.2011.00090.x

    28. van der Bij AK, de Weerd S, Cikot RJLM, Steegers EAP, Braspenning JCC. Validation of the Dutch short form of the state scale of the Spielberger State-trait anxiety inventory: considerations for usage in screening outcomes. Public Health Genomics. 2003;6(2):84–87. doi:10.1159/000073003

    29. Hajian-Tilaki K. Sample size estimation in diagnostic test studies of biomedical informatics. J Biomed Inform. 2014;48:193–204. doi:10.1016/j.jbi.2014.02.013

    30. Moons KGM, Royston P, Vergouwe Y, Grobbee DE, Altman DG. Prognosis and prognostic research: what, why, and how? BMJ. 2009;338(feb23 1):b375–b375. doi:10.1136/bmj.b375

    31. de Hond AAH, Steyerberg EW, van Calster B. Interpreting area under the receiver operating characteristic curve. Lancet Digit Health. 2022;4(12):e853–e855. doi:10.1016/S2589-7500(22)00188-1

    32. Fletcher D, Lavand’homme P. Towards better predictive models of chronic post-surgical pain: fitting to the dynamic nature of the pain itself. Br J Anaesth. 2022;129(3):281–284. doi:10.1016/j.bja.2022.06.010

    33. Martinez V, Lehman T, Lavand’homme P, et al. Chronic postsurgical pain: a European survey. Eur J Anaesthesiol. 2024;41(5):351–362. doi:10.1097/EJA.0000000000001974

    34. Glare P, Aubrey KR, Myles PS. Transition from acute to chronic pain after surgery. Lancet. 2019;393(10180):1537–1546. doi:10.1016/S0140-6736(19)30352-6

    35. Nijs J, George SZ, Clauw DJ, et al. Central sensitisation in chronic pain conditions: latest discoveries and their potential for precision medicine. Lancet Rheumatol. 2021;3(5):e383–e392. doi:10.1016/S2665-9913(21)00032-1

    36. Feizerfan A, Sheh G. Transition from acute to chronic pain. Continuing Educ Anaesth Crit Care Pain. 2015;15(2):98–102. doi:10.1093/bjaceaccp/mku044

    37. Papadomanolakis-Pakis N, Haroutounian S, Christiansen CF, Nikolajsen L. Prediction of chronic postsurgical pain in adults: a protocol for multivariable prediction model development. BMJ Open. 2021;11(12):e053618. doi:10.1136/bmjopen-2021-053618

    38. Montes A, Roca G, Cantillo J, Sabate S. Presurgical risk model for chronic postsurgical pain based on 6 clinical predictors: a prospective external validation. Pain. 2020;161(11):2611–2618. doi:10.1097/j.pain.0000000000001945

    39. Katz J, Weinrib A, Fashler S, et al. The Toronto General Hospital Transitional Pain Service: development and implementation of a multidisciplinary program to prevent chronic postsurgical pain. J Pain Res. 2015;8:695–702. doi:10.2147/JPR.S91924

    40. Carley ME, Chaparro LE, Choinière M, et al. Pharmacotherapy for the prevention of chronic pain after surgery in adults: an updated systematic review and meta-analysis. Anesthesiology. 2021;135(2):304–325. doi:10.1097/ALN.0000000000003837

    41. Chaparro LE, Smith SA, Moore RA, Wiffen PJ, Gilron I. Pharmacotherapy for the prevention of chronic pain after surgery in adults. Cochrane Database Syst Rev. 2013;(7). doi:10.1002/14651858.CD008307.pub2

    42. Fitzcharles MA, Cohen SP, Clauw DJ, Littlejohn G, Usui C, Häuser W. Nociplastic pain: towards an understanding of prevalent pain conditions. Lancet. 2021;397(10289):2098–2110. doi:10.1016/S0140-6736(21)00392-5

    43. Langford DJ, Reichel JF, Zhong H, et al. Machine learning research methods to predict postoperative pain and opioid use: a narrative review. Reg Anesth Pain Med. 2025;50(2):102–109. doi:10.1136/rapm-2024-105603


  • Better Artificial Intelligence (AI) Stock: SoundHound AI vs. C3.ai


    • SoundHound AI and C3.ai are pure-play artificial intelligence (AI) software companies with massive opportunities ahead.

    • SoundHound AI stock is more richly valued than C3.ai, but may have a greater runway for growth ahead.

    • Choosing between the two stocks may ultimately boil down to the risk tolerance levels of investors.


    The adoption of artificial intelligence (AI) software is increasing at an incredible pace because of the productivity and efficiency gains this technology is capable of delivering, and the good part is that this niche is likely to sustain a healthy growth rate over the long run.

    According to ABI Research, the AI software market is expected to clock a compound annual growth rate (CAGR) of 25% through 2030, generating $467 billion in annual revenue at the end of the decade. That’s why it would be a good time to take a closer look at the prospects of SoundHound AI (NASDAQ: SOUN) and C3.ai (NYSE: AI) — two pure-play AI companies that could help investors capitalize on a couple of fast-growing niches within the AI software market — and check which one of them is worth buying right now.
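    As a quick sanity check on projections like ABI Research’s, a compound annual growth rate simply compounds a base-year figure multiplicatively. The sketch below illustrates the arithmetic; the $100 billion base-year market size is a hypothetical input for illustration, not a figure from the article.

    ```python
    def project_cagr(base: float, rate: float, years: int) -> float:
        """Project a market size forward under a constant compound annual growth rate."""
        return base * (1 + rate) ** years

    # Hypothetical example: a $100B market compounding at 25% a year for six
    # years roughly quadruples, since 1.25 ** 6 is about 3.81.
    projected = project_cagr(100.0, 0.25, 6)  # ≈ 381.5
    ```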


    SoundHound AI provides a voice AI platform where its customers can create conversational AI assistants and voice-based AI agents that can be deployed for multiple uses, such as taking orders in restaurants, car infotainment systems, and customer service applications, among others.

    This particular market is growing at a nice clip, as deploying AI-powered voice solutions can help companies improve productivity and efficiency, since they will be able to automate tasks. Companies can now significantly improve their customer interaction experiences, thanks to the availability of round-the-clock multilingual AI agents and assistants.

    Not surprisingly, SoundHound AI has been witnessing robust growth in demand for its voice AI solutions, which explains its solid revenue growth over the past year.

    [Chart: SOUN revenue (TTM), data by YCharts.]

    But here’s what investors should look forward to: The conversational AI market could grow at an annual average rate of almost 24% through 2030, generating over $41 billion in annual revenue by the end of the decade. SoundHound AI has been growing at a much faster pace than the overall market, suggesting it is gaining a bigger share of this lucrative space.

    SoundHound’s revenue guidance of $167 million at the midpoint for 2025 is nearly double the revenue it reported last year. Importantly, its cumulative subscriptions and bookings backlog stood at a massive $1.2 billion last year. This metric measures the potential revenue the company expects to “realize over the coming several years,” suggesting it can sustain its healthy growth rates for a long time to come thanks to the AI-fueled opportunity it’s sitting on.

    C3.ai is a pure-play enterprise AI software platform provider that enables its customers to build generative AI applications and agentic AI solutions. The company claims that it provides 130 comprehensive enterprise AI applications ready for deployment across industries such as oil and gas, manufacturing, financial services, utilities, chemicals, defense, and others.

    It has been in the news of late after landing an expanded contract worth $450 million from the U.S. Air Force to maintain aircraft, ground assets, and weapons systems over the next four years. However, this is just one of the many contracts the company has been landing lately.

    C3.ai’s offerings are used across diverse industries, and its customer base includes the likes of Baker Hughes, which recently expanded its partnership with the company; local and state government bodies across multiple U.S. states; and companies such as Ericsson, Bristol Myers Squibb, Chanel, and others. The company’s fast-expanding customer base and the bigger contracts that it is signing with existing customers explain why there has been an uptick in C3.ai’s growth of late.

    [Chart: C3.ai (AI) revenue (TTM), data by YCharts.]

    The company finished fiscal 2025 (which ended on April 30) with a 25% increase in its revenue to $389 million. Management expects another 20% increase in total revenue in fiscal 2026. Consensus estimates suggest that C3.ai is likely to report similar growth next year, followed by an acceleration in fiscal 2028.

    [Chart: C3.ai (AI) revenue estimates for the current fiscal year, data by YCharts.]

    There’s a strong possibility, however, that C3.ai will exceed expectations and its own forecast for growth this year. That’s because C3.ai ended the previous fiscal year with 174 pilot projects, which it calls initial production deployments. The good part is that the company has been converting its pilots into contracts at a healthy rate.

    C3.ai turned 66 of its initial production deployments into long-term contracts in fiscal 2025. The company ended fiscal 2024 with 123 pilot projects, which means that it has a conversion rate of more than 50%. So the robust increase in the company’s pilot projects last year means that it could close more such initial production deployments into full agreements in the current fiscal year, going by past trends.
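    The conversion-rate arithmetic above can be checked directly with the article’s own figures: 123 pilot projects open at the end of fiscal 2024, of which 66 became long-term contracts in fiscal 2025.

    ```python
    def conversion_rate(converted: int, starting_pilots: int) -> float:
        """Share of pilot projects converted into long-term contracts."""
        return converted / starting_pilots

    # 66 of the 123 pilots open at the end of fiscal 2024 became contracts
    # during fiscal 2025 — a conversion rate just above 50%.
    rate = conversion_rate(66, 123)  # ≈ 0.537
    ```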

    So there is a strong possibility of C3.ai’s growth rate exceeding Wall Street’s expectations, which could prove a tailwind for its stock price in the long run.

    While it is clear both SoundHound and C3.ai are growing at a nice pace because of AI, the former’s growth rate is much higher. However, to buy SoundHound stock, investors will have to pay a handsome price-to-sales ratio of nearly 38. C3.ai, on the other hand, is trading at a much more attractive 8 times sales, which is almost in line with the U.S. technology sector’s average sales multiple.

    So, investors looking for a mix of steady growth and attractive valuation can consider buying shares of C3.ai. However, if you have a higher appetite for risk and are willing to pay for a stock with a richer valuation, then consider buying SoundHound AI, as its faster growth could help it clock more upside, though the expensive valuation also exposes it to more volatility.


    Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Bristol Myers Squibb. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.

    Better Artificial Intelligence (AI) Stock: SoundHound AI vs. C3.ai was originally published by The Motley Fool


  • OPEC+ members agree larger-than-expected oil production hike in August


    [Image: The OPEC logo displayed on a mobile phone screen, Ankara, Turkey, June 25, 2024. Anadolu | Getty Images]

    Eight oil-producing nations of the OPEC+ alliance on Saturday agreed to lift their collective crude production by 548,000 barrels per day, as they continue briskly unwinding a set of voluntary supply cuts.

    This subset of the alliance — comprising heavyweight producers Russia and Saudi Arabia, alongside Algeria, Iraq, Kazakhstan, Kuwait, Oman and the United Arab Emirates — met digitally earlier in the day. They had been expected to increase their output by a smaller 411,000 barrels per day.

    In a statement, the OPEC Secretariat attributed the countries’ decision to raise August daily output by 548,000 barrels to “a steady global economic outlook and current healthy market fundamentals, as reflected in the low oil inventories.”

    The eight producers have been implementing two sets of voluntary production cuts outside of the broader OPEC+ coalition’s formal policy.

    One, totaling 1.66 million barrels per day, stays in effect until the end of next year.

    Under the second strategy, the countries reduced their production by an additional 2.2 million barrels per day until the end of the first quarter.

    They initially set out to boost their production by 137,000 barrels per day every month until September 2026, but only sustained that pace in April. The group then tripled the hike to 411,000 barrels per day in each of May, June and July — and are further accelerating the pace of their increases in August.
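    The hikes described above are all multiples of the originally planned 137,000 b/d monthly step, which makes the acceleration easy to tabulate (a sketch using only figures from the article):

    ```python
    BASE_STEP = 137_000  # barrels/day: the originally planned monthly increment

    # Each month's hike expressed as a multiple of the base step.
    hikes = {
        "April": 1 * BASE_STEP,   # 137,000 b/d, the planned pace
        "May": 3 * BASE_STEP,     # 411,000 b/d, the tripled pace
        "June": 3 * BASE_STEP,
        "July": 3 * BASE_STEP,
        "August": 4 * BASE_STEP,  # 548,000 b/d, the latest acceleration
    }
    ```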

    Oil prices were briefly boosted in recent weeks by the seasonal summer spike in demand and the 12-day war between Israel and Iran, which both threatened Tehran’s supplies and raised concerns over potential disruption of supplies transported through the key Strait of Hormuz.

    At the end of Friday’s session, oil futures settled at $68.30 per barrel for the September-expiry ICE Brent contract and at $66.50 per barrel for the front-month August Nymex WTI contract.


  • Microsoft shuts its Pakistan office after 25 years, sparks economic concerns


    Tech giant Microsoft has announced it will shut down its limited operations in Pakistan as part of a global strategy to reduce its workforce, a move various stakeholders on Friday called a “troubling sign” for the country’s economy.


    Microsoft, while closing its office in Pakistan on Thursday after 25 years, cited global restructuring and a shift to a cloud-based, partner-led model.

    The move came as the tech giant cut roughly 9,100 jobs worldwide (or about 4 per cent of its workforce) in its largest layoff round since 2023.

    Jawwad Rehman, former founding country manager of Microsoft Pakistan, urged the government and the IT minister to engage the tech giants with a bold plan driven by key performance indicators (KPIs).

    He said the exit reflected the current business climate. “Even global giants like Microsoft find it unsustainable to stay,” he posted on LinkedIn.

    Former Pakistan president Arif Alvi, in a post on X, also expressed concern over Microsoft shutting down operations.

    “It is a troubling sign for our economic future,” he wrote.

    He claimed Microsoft once considered Pakistan for expansion, but that instability led the company to choose Vietnam instead by late 2022.

    “The opportunity was lost,” he wrote.

    Jawwad explained that Microsoft didn’t operate a full commercial base in Pakistan, relying instead on liaison offices focused on enterprise, education, and government clients.

    Over recent years, much of that work had already shifted to local partners, while licensing and contracts were managed from its European hub in Ireland.



  • New research reveals hidden biases in AI’s moral advice


    As artificial intelligence tools become more integrated into everyday life, a new study suggests that people should think twice before trusting these systems to offer moral guidance. Researchers have found that large language models—tools like ChatGPT, Claude, and Llama—consistently favor inaction over action in moral dilemmas and tend to answer “no” more often than “yes,” even when the situation is logically identical. The findings were published in the Proceedings of the National Academy of Sciences.

    Large language models, or LLMs, are advanced artificial intelligence systems trained to generate human-like text. They are used in a variety of applications, including chatbots, writing assistants, and research tools. These systems learn patterns in language by analyzing massive amounts of text from the internet, books, and other sources.

    Once trained, they can respond to user prompts in ways that sound natural and knowledgeable. As people increasingly rely on these tools for moral guidance—asking, for example, whether they should confront a friend or blow the whistle on wrongdoing—researchers wanted to examine how consistent and reasonable these decisions really are.

    “People increasingly rely on large language models to advise on or even make moral decisions, and some researchers have even proposed using them in psychology experiments to simulate human responses. Therefore, we wanted to understand how moral decision making and advice giving of large language models compare to that of humans,” said study author Maximilian Maier of University College London.

    The researchers conducted a series of four experiments comparing the responses of large language models to those of human participants when faced with moral dilemmas and collective action problems. The goal was to see whether the models reasoned about morality in the same ways that people do, and whether their responses were affected by the way questions were worded or structured.

    In the first study, the researchers compared responses from four widely used language models—GPT-4-turbo, GPT-4o, Claude 3.5, and Llama 3.1-Instruct—to those of 285 participants drawn from a representative U.S. sample. Each person and model was given a set of 13 moral dilemmas and 9 collective action problems.

    The dilemmas included realistic scenarios adapted from past research and history, such as whether to allow medically assisted suicide or to blow the whistle on unethical practices. The collective action problems involved conflicts between self-interest and group benefit, like deciding whether to conserve water during a drought or donate to those in greater need.

    The results showed that in moral dilemmas, the language models strongly preferred inaction. They were more likely than humans to endorse doing nothing—even when taking action might help more people. This was true regardless of whether the action involved breaking a moral rule or not. For example, when the models were asked whether to legalize a practice that would benefit public health but involve a controversial decision, they were more likely to recommend maintaining the status quo.

    The models also showed a bias toward answering “no,” even when the situation was logically equivalent to one where “yes” was the better answer. This “yes–no” bias meant that simply rephrasing a question could flip the model’s recommendation. Human participants did not show this same pattern. While people’s responses were somewhat influenced by how questions were worded, the models’ decisions were far more sensitive to minor differences in phrasing.
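    A minimal sketch of how such a yes–no bias can be quantified: pose each dilemma in two logically equivalent framings, one where answering “yes” endorses an option and one where answering “no” endorses the same option, then compare endorsement rates across framings. The scoring function below is an illustrative assumption, not the authors’ published code.

    ```python
    def yes_rate(answers):
        """Fraction of 'yes' answers in a list of 'yes'/'no' strings."""
        return sum(a == "yes" for a in answers) / len(answers)

    def no_bias_score(yes_framed, no_framed):
        """Compare endorsement of the same option across two framings.

        Under the 'yes'-framing the option is endorsed by answering 'yes';
        under the 'no'-framing it is endorsed by answering 'no'. An unbiased
        responder endorses it equally often either way, giving a score of 0;
        a positive score indicates a tilt toward answering 'no'.
        """
        endorse_when_yes_framed = yes_rate(yes_framed)
        endorse_when_no_framed = 1 - yes_rate(no_framed)
        return endorse_when_no_framed - endorse_when_yes_framed
    ```

    Feeding both lists of model answers through `no_bias_score` separates a genuine preference about the dilemma from a mere tendency to say “no.”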

    The models were also more altruistic than humans when it came to the collective action problems. When asked about situations involving cooperation or sacrifice for the greater good, the language models more frequently endorsed altruistic responses, like donating money or helping a competitor. While this might seem like a positive trait, the researchers caution that this behavior may not reflect deep moral reasoning. Instead, it could be the result of fine-tuning these models to avoid harm and promote helpfulness—values embedded during training by their developers.

    To further investigate the omission and yes–no biases, the researchers conducted a second study with 474 new participants. In this experiment, the team rewrote the dilemmas in subtle ways to test whether the models would give consistent answers across logically equivalent versions. They found that the language models continued to show both biases, while human responses remained relatively stable.

    The third study extended these findings to everyday moral situations by using real-life dilemmas adapted from the Reddit forum “Am I the Asshole?” These stories involved more relatable scenarios, such as helping a roommate or choosing between spending time with a partner or friends. Even in these more naturalistic contexts, the language models still showed strong omission and yes–no biases. Again, human participants did not.

    These findings raise important questions about the role of language models in moral decision-making. While they may give advice that sounds thoughtful or empathetic, their responses can be inconsistent and shaped by irrelevant features of a question. In moral philosophy, consistency and logical coherence are essential for sound reasoning. The models’ sensitivity to surface-level details, like whether a question is framed as “yes” or “no,” suggests that they may lack this kind of reliable reasoning.

    The researchers note that omission bias is common in humans too. People often prefer inaction over action, especially in morally complex or uncertain situations. But in the models, this bias was amplified. Unlike people, the models also exhibited a systematic yes–no bias that does not appear in human responses. These patterns were observed across different models, prompting methods, and types of moral dilemmas.

    “Do not uncritically rely on advice from large language models,” Maier told PsyPost. “Even though models are good at giving answers that superficially appear compelling (for instance, another study shows that people rate the advice of large language models as slightly more moral, trustworthy, thoughtful, and correct than that of an expert ethicist), this does not mean that their advice is actually more sound. Our study shows that their advice is subject to several potentially problematic biases and inconsistencies.”

    In the final study, the researchers explored where these biases might come from. They compared different versions of the Llama 3.1 model: one that was pretrained but not fine-tuned, one that was fine-tuned for general chatbot use, and another version called Centaur that was fine-tuned using data from psychology experiments. The fine-tuned chatbot version showed strong omission and yes–no biases, while the pretrained version and Centaur did not. This suggests that the process of aligning language models with expected chatbot behavior may actually introduce or amplify these biases.

    “Paradoxically, we find that efforts to align the model for chatbot applications based on what the company and its users considered good behavior for a chatbot induced the biases documented in our paper,” Maier explained. “Overall, we conclude that simply using people’s judgments of how positive or negative they evaluate the responses of LLMs (a common method for aligning language models with human preferences) is insufficient to detect and avoid problematic biases. Instead, we need to use methods from cognitive psychology and other disciplines to systematically test for inconsistent responses.”

    As with all research, there are some caveats to consider. The studies focused on how models respond to dilemmas. But it remains unclear how much influence these biased responses actually have on human decision-making.

    “This research only showed biases in the advice LLMs give, but did not examine how human users react to the advice,” Maier said. “It is still an open question to what extent the biases in LLMs’ advice giving documented here actually sway people’s judgements in practice. This is something we are interested in studying in future work.”

    The study, “Large language models show amplified cognitive biases in moral decision-making,” was authored by Vanessa Cheung, Maximilian Maier, and Falk Lieder.


  • Companies keep slashing jobs. How worried should workers be about AI replacing them?


    Tech companies that are cutting jobs and leaning more on artificial intelligence are also disrupting themselves.

    Amazon’s Chief Executive Andy Jassy said last month that he expects the e-commerce giant will shrink its workforce as employees “get efficiency gains from using AI extensively.”

    At Salesforce, a software company that helps businesses manage customer relationships, Chief Executive Marc Benioff said last week that AI is already doing 30% to 50% of the company’s work.

    Other tech leaders have chimed in before. Earlier this year, Anthropic, an AI startup, flashed a big warning: AI could wipe out more than half of all entry-level white-collar jobs in the next one to five years.

    Ready or not, AI is reshaping, displacing and creating new roles as technology’s impact on the job market ripples across multiple sectors. The AI frenzy has fueled a lot of anxiety from workers who fear their jobs could be automated. Roughly half of U.S. workers are worried about how AI may be used in the workplace in the future and few think AI will lead to more job opportunities in the long run, according to a Pew Research Center report.

    The heightened fear comes as major tech companies, such as Microsoft, Intel, Amazon and Meta, cut workers, push for more efficiency and promote their AI tools. Tech companies have rolled out AI-powered features that can generate code, analyze data, develop apps and help complete other tedious tasks.

    “AI isn’t just taking jobs. It’s really rewriting the rule book on what work even looks like right now,” said Robert Lucido, senior director of strategic advisory at Magnit, a company based in Folsom, Calif., that helps tech giants and other businesses manage contractors, freelancers and other contingent workers.

    Disruption debated

    Exactly how big of a disruption AI will have on the job market is still being debated. Executives for OpenAI, the maker of popular chatbot ChatGPT, have pushed back against the prediction that a massive white-collar job bloodbath is coming.

    “I do totally get not just the anxiety, but that there is going to be real pain here, in many cases,” said Sam Altman, chief executive of OpenAI, in an interview with “Hard Fork,” the tech podcast from the New York Times. “In many more cases, though, I think we will find that the world is significantly underemployed. The world wants way more code than can get written right now.”

    As new economic policies, including those around tariffs, create more unease among businesses, companies are reining in costs while also being pickier about whom they hire.

    “They’re trying to find what we call the purple unicorns rather than someone that they can ramp up and train,” Lucido said.

    Before the 2022 launch of ChatGPT — a chatbot that can generate text, images, code and more — tech companies were already using AI to curate posts, flag offensive content and power virtual assistants. But the popularity and apparent superpowers of ChatGPT set off a fierce competition among tech companies to release even more powerful generative AI tools. They’re racing ahead, spending hundreds of billions of dollars on data centers, facilities that house computing equipment such as servers used to process the trove of information needed to train and maintain AI systems.

    Economists and consultants have been trying to figure out how AI will affect engineers, lawyers, analysts and other professions. Some say the change won’t happen as soon as some tech executives expect.

    “There have been many claims about new technologies displacing jobs, and although such displacement has occurred in the past, it tends to take longer than technologists typically expect,” economists for the U.S. Bureau of Labor Statistics said in a February report.

    AI can help develop, test and write code, provide financial advice and sift through legal documents. The bureau, though, still projects that employment of software developers, financial advisors, aerospace engineers and lawyers will grow faster than the average for all occupations from 2023 to 2033. Companies will still need software developers to build AI tools for businesses or maintain AI systems.

    Worker bots

    Tech executives have touted AI’s ability to write code. Meta Chief Executive Mark Zuckerberg has said that he thinks AI will be able to write code like a mid-level engineer in 2025. And Microsoft Chief Executive Satya Nadella has said that as much as 30% of the company’s code is written by AI.

    Other roles could grow more slowly or shrink because of AI. The Bureau of Labor Statistics expects employment of paralegals and legal assistants to grow more slowly than the average for all occupations, while roles for credit analysts, claims adjusters and insurance appraisers are expected to decrease.

    McKinsey Global Institute, the business and economics research arm of the global management consulting firm McKinsey & Co., predicts that by 2030 “activities that account for up to 30 percent of hours currently worked across the US economy could be automated.”

    The institute expects that demand for science, technology, engineering and mathematics roles will grow in the United States and Europe but shrink for customer service and office support.

    “A large part of that work involves skills, which are routine, predictable and can be easily done by machines,” said Anu Madgavkar, a partner with the McKinsey Global Institute.

    Although generative AI fuels the potential for automation to eliminate jobs, AI can also enhance technical, creative, legal and business roles, the report said. There will be a lot of “noise and volatility” in hiring data, Madgavkar said, but what will separate the “winners and losers” is how people rethink their work flows and jobs themselves.

    Tech companies have announced 74,716 cuts from January to May, up 35% from the same period last year, according to a report from Challenger, Gray & Christmas, a firm that offers job search and career transition coaching.

    Tech companies say they’re slashing jobs for various reasons.

    Autodesk, which makes software used by architects, designers and engineers, slashed 9% of its workforce, or 1,350 positions, this year. The San Francisco company cited geopolitical and macroeconomic factors along with its efforts to invest more heavily in AI as reasons for the cuts, according to a regulatory filing. Other companies, such as Oakland fintech company Block, which slashed 8% of its workforce in March, told employees that the cuts were strategic, not because they’re “replacing folks with AI.”

    Diana Colella, executive vice president, entertainment and media solutions at Autodesk, said that it’s scary when people don’t know what their job will look like in a year. Still, she doesn’t think AI will replace humans or creativity but rather act as an assistant.

    Companies are looking for more AI expertise. Autodesk found that mentions of AI in U.S. job listings surged in 2025 and some of the fastest-growing roles include AI engineer, AI content creator and AI solutions architect. The company partnered with analytics firm GlobalData to examine nearly 3 million job postings over two years across industries such as architecture, engineering and entertainment.

    Workers have adapted to technology before. When the job of a door-to-door encyclopedia salesman was disrupted because of the rise of online search, those workers pivoted to selling other products, Colella said.

    “The skills are still key and important,” she said. “They just might be used for a different product or a different service.”


  • OPEC+ speeds up oil output hikes, adds 548,000 bpd in August – Reuters

    1. OPEC+ may approve larger oil output hike for August at key policy meeting  Profit by Pakistan Today
    2. Oil prices steady on solid job market, tariff uncertainty  Dunya News
    3. Oil dips ahead of expected OPEC+ output increase  Business Recorder
    4. Natural Gas, WTI Oil, Brent Oil Forecasts – Oil Retreats As Traders Wait For OPEC+ Production Decision  FXEmpire


  • Billionaire Bill Gates Has 66% of His Foundation’s $42 Billion Portfolio Invested in These 5 Dividend Stocks


    • Five of the six largest positions held by the Bill & Melinda Gates Foundation Trust pay dividends.

    • Most of these stocks don’t pay super-attractive dividends, but one offers a solid dividend yield of 2.43%.

    • Growth investors could be interested in the foundation’s largest holding.


    Bill Gates could have been the world’s first trillionaire. However, his net worth today is “only” around $117 billion. He’s not hurting, to say the least.

    One reason why Gates isn’t even wealthier is that he didn’t hold on to his stake in Microsoft (NASDAQ: MSFT), the software giant he co-founded. Another factor is that Gates has given away a substantial amount of money — a whopping $59 billion — to the charitable organization he and his ex-wife founded, the Bill & Melinda Gates Foundation.

    This foundation has also given away a lot of money to help people around the world. However, it still boasts a sizable investment portfolio of roughly $42 billion at the end of the first quarter of 2025. And Gates has 66% of his foundation’s portfolio invested in the following five dividend stocks.


    Unsurprisingly, Microsoft is the largest holding for the Bill & Melinda Gates Foundation Trust. The company makes up nearly 25.6% of the foundation’s total portfolio, with a stake worth almost $10.7 billion at the end of Q1.

    Although many tech stocks don’t pay dividends, Microsoft initiated its dividend program in 2003. The company has increased its dividend for 20 consecutive years. However, Microsoft’s dividend still isn’t all that attractive, with a forward yield of only 0.68%.

    Gates donated billions of dollars worth of Microsoft shares to his foundation at its inception in 2000. The stock floundered for years, but began to take off in 2015. Its momentum continues today, thanks to a major tailwind from artificial intelligence (AI) adoption.

    Waste Management (NYSE: WM) ranks as the Gates Foundation Trust’s third-largest holding, trailing Microsoft and Berkshire Hathaway. At the end of Q1, the foundation’s position in Waste Management made up nearly 17.9% of its portfolio.

    While Berkshire has never paid a dividend, Waste Management has paid quarterly dividends since 1998. The big waste management services provider has increased its dividend for 22 consecutive years. Its forward dividend yield currently stands at 1.48%.
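    The yield figures quoted here follow the standard forward-yield calculation: expected annual dividend per share divided by the current share price. A minimal sketch, using hypothetical dividend and price figures (not Waste Management’s actual numbers):

    ```python
    def forward_dividend_yield(annual_dividend: float, price: float) -> float:
        """Forward yield = expected annual dividend per share / current price, as a percent."""
        return annual_dividend / price * 100

    # Illustrative only: a $3.00 annual dividend on a ~$202.70 stock
    # works out to roughly a 1.48% forward yield.
    print(round(forward_dividend_yield(3.00, 202.70), 2))  # 1.48
    ```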

    The Bill & Melinda Gates Foundation Trust owned over 54.8 million shares of Canadian National Railway (NYSE: CNI) at the end of Q1, worth around $5.34 billion. This position comprised nearly 12.8% of the foundation’s total portfolio.


  • Dr Finn on Adding Tiragolumab to Atezolizumab and Bevacizumab in Locally Advanced or Metastatic HCC


    “The most striking thing with this [updated] dataset is the overall survival, which is now at 26.6 months for the triplet. The objective response rate and duration of response have been very stable.”

    Richard Finn, MD, a professor of medicine at the Geffen School of Medicine in the Department of Medicine, Division of Hematology/Oncology, at UCLA, detailed updated findings from the randomized phase 1/2 MORPHEUS-Liver trial (NCT04524871) evaluating the addition of tiragolumab—a novel anti–TIGIT antibody—to the established combination of atezolizumab (Tecentriq) and bevacizumab (Avastin) in patients with unresectable locally advanced or metastatic hepatocellular carcinoma (HCC).

    The updated analysis, presented at the 2025 ESMO Gastrointestinal Cancers Congress, demonstrated a median overall survival (OS) of 26.6 months (95% CI, 22.6-40.6) in patients treated with the triplet regimen (n = 40) compared with 16.0 months (95% CI, 7.5-18.5) in those given atezolizumab plus bevacizumab alone (n = 18; HR, 0.55; 95% CI, 0.29-1.04). According to Finn, this represents a notable outcome in the current therapeutic landscape, where multiple systemic options are now available for patients with advanced HCC. The objective response rate (ORR) and duration of response (DOR) data remained consistent with earlier reports. Patients treated with the tiragolumab regimen experienced an ORR of 42.5% (95% CI, 27.0%-59.0%) vs 11.1% (95% CI, 1.4%-34.7%) for those given the doublet. Progression-free survival (PFS) was extended to 12.3 months (95% CI, 8.2-17.5) vs 4.2 months (95% CI, 2.3-7.4) for the triplet and doublet, respectively (HR, 0.63; 95% CI, 0.35-1.15).

    Finn noted that the findings from this study formed the foundation for the ongoing IMbrave152/SKYSCRAPER-14 trial (NCT05904886), a randomized, placebo-controlled trial assessing the triplet of tiragolumab plus atezolizumab and bevacizumab vs atezolizumab and bevacizumab with placebo. The co-primary end points are OS and PFS. These results are highly anticipated, as they will determine the viability of incorporating tiragolumab into standard first-line treatment for advanced HCC, Finn said.

    Finn concluded that the MORPHEUS-Liver study represents one of the first robust datasets to evaluate a triplet immunotherapy regimen built on a bevacizumab-containing backbone in advanced HCC. Given the evolving treatment landscape—highlighted by the adoption of checkpoint inhibitors and anti-angiogenic agents in combination—the study adds important context for refining therapeutic strategies.


  • 2 Top Quantum Computing Stocks to Buy in July


    Quantum computing is an intriguing investment sector, with competitors ranging from tech giants to start-ups. Most companies in the field agree that monetization of quantum computing is still a few years away, but it’s getting closer.


    The tricky thing for investors is timing. If you’re too early, your money may sit idle while current investment trends, like artificial intelligence (AI), take off. If you’re too late and quantum computing becomes the next hot investment trend, you’ve missed out on potentially huge returns.

    Right now is a solid time to invest in some quantum computing companies, especially if you pick the right ones.


    Alphabet (NASDAQ: GOOG) (NASDAQ: GOOGL) is Google’s parent company and is devoting significant resources to quantum computing research. Google kicked off the quantum computing investment cycle in December with its Willow chip, which solved an incredibly difficult problem in record time and with superb accuracy. The breakthrough drew attention to other quantum computing stocks, lifting the entire industry.

    Should Google deliver top-notch quantum computing technologies, it stands to benefit in one area in particular: AI. Quantum computing could unlock the next phase of AI, leading to unprecedented performance. This would give Google the leadership position in AI, leading to a huge increase in cash flows in its core business.

    This gives Alphabet a huge incentive to continue innovating in quantum computing. With its already massive cash hoard and cash flows, it has the resources to devote to this field.

    Even with Alphabet’s innovative mindset and top-notch technology, the market doesn’t respect it. Investors are still worried about it losing market share from its Google Search business, so it trades at a cheap valuation.

    [Chart: GOOG forward P/E ratio, data by YCharts]

    It’s important to note that 18.5 times forward earnings is unbelievably cheap for a big tech stock like Alphabet. It basically indicates that the market values only its Google Search business, not any of the potential gains from quantum computing. As a result, I think it’s a great pick because it offers massive upside if it can win the quantum computing arms race, while also capitalizing on a potential turnaround in the market’s perception of its search business.

    While Alphabet is a conservative choice for quantum computing, IonQ (NYSE: IONQ) is more aggressive. Its only revenue comes from various contracts it has signed, and there is no backup plan; for IonQ, it’s quantum computing or bust.
