Category: 3. Business

  • Measuring US workers’ capacity to adapt to AI-driven job displacement

    Introduction

    Extensive research has investigated the “exposure” of occupations to artificial intelligence (AI). While definitions vary, studies using exposure measures seek to estimate the extent to which AI systems can help complete the work tasks of different jobs. But these measures are not predictions of job displacement. Rather, they provide signals about where AI’s complex labor market effects are most likely to emerge first.

    However, most exposure-focused analyses overlook a critical dimension: workers’ ability to adapt if job loss does occur.

    Capacity to adapt after job loss is not evenly distributed across the workforce. Financial security, age, skills, union membership, and the state of local labor markets are just some of the many factors that can influence the real-life consequences of job loss. For that reason, forecasts of AI exposure, disruption, and potential work dislocation would benefit from incorporating such factors.

    This is the purpose of new research explained here and in a new paper for the National Bureau of Economic Research (NBER) by research colleagues Sam Manning and Tomás Aguirre.

    To address the heterogeneity of how AI-induced job loss may impact workers, the new analysis combines estimates of AI exposure with a novel measure of “adaptive capacity” that takes into account workers’ varied individual characteristics. Along these lines, the new work supplements occupation-level exposure analysis with relevant measures of workers’ savings, age, labor market density, and skill transferability in order to assess their varied capacity to weather job displacement and transition to new work. As such, the new approach provides a useful way of distinguishing between highly AI-exposed workers with relatively strong means to adjust to potential AI-driven job loss and those with more limited adaptive capacity.

    Overall, the analysis finds both broad resilience and concentrated pockets of potential vulnerability in the U.S. labor market when it comes to AI-driven job displacement.

    Of the 37.1 million U.S. workers in the top quartile of occupational AI exposure, 26.5 million also have above-median adaptive capacity, meaning they are among those best positioned to make a job transition if displacement occurs. However, the analysis also documents that some 6.1 million workers (4.2% of the workforce in the sample) will likely contend with both high AI exposure and low adaptive capacity. These workers tend to be concentrated in clerical and administrative roles, and about 86% are women (gender shares are calculated using Lightcast data). Geographically, these workers are concentrated in smaller metropolitan areas, particularly university towns and midsized markets in the Mountain West and Midwest.

    In short, the new analysis asks: If AI does cause job displacement, who is best positioned to adapt, and who will struggle most? In asking those questions, this analysis intends to help policymakers focused on AI’s labor market impacts better target their attention and resources.

    Background: Why AI exposure alone does not account for workers’ varied ability to adjust to a changing labor market

    To identify which workers have the most and least capacity to weather AI-driven job displacement, the inquiry discussed here links analysis that forecasts AI’s labor market impacts with research examining how workers adjust to job displacement. As Manning and Aguirre write in their NBER report, “By bridging the two literatures, we move beyond identifying which jobs face potential AI exposure to understanding which workers might face the greatest or least adjustment costs if disruption leads to displacement.”

    Research examining workers’ exposure to AI has frequently mapped descriptions of worker tasks to AI capabilities in order to estimate the potential for AI-driven disruption in different occupations. Studies by Brynjolfsson and others (2018); Webb (2020); Muro and others (2019); Felten and others (2023); Eloundou and others (2024); Kinder and others (2024); and Hampole and others (2025) all find that higher-income, white collar occupations requiring postsecondary education show the highest exposure to AI capabilities.

    High exposure estimates for highly educated, high-income workers have led many to assume that these workers will bear the greatest burden of AI disruption. Yet such exposure measures fail to capture core non-technological factors that influence which workers would experience the most severe welfare costs if AI were to eventually be a cause of job displacement.

    Along these lines, several factors are known to shape worker vulnerability to harms from job displacement (for more detail see the underlying NBER paper).

    • Liquid financial resources: Workers with greater savings weather economic storms more effectively. Chetty (2008) shows individuals with greater liquid savings are less financially distressed after job loss and take longer to find better-matching jobs, while low-wealth individuals are forced into lower-quality employment.
    • Age: Age significantly influences job displacement costs. Farber (2017) shows that workers aged 55 to 64 who experienced job loss during the Great Recession were about 16 percentage points less likely than those aged 35 to 44 to find employment afterward. Older workers are less likely to retrain, relocate, or switch occupations compared to their younger counterparts, and a range of studies has found that job loss for older workers leads to greater earnings losses and lower reemployment rates.
    • Geographic density: Where a worker lives can affect their displacement experience and recovery prospects. Bleakley and Lin (2012) show that workers in more densely populated areas face lower costs to make work transitions compared to those in low-density areas.
    • Skill transferability: Transferable skills—those that can be applied across many different jobs—offer more occupational mobility than highly specialized skills. Nawakitphaitoon and Ormiston (2016) show that skill transferability is associated with smaller earnings losses following displacement.
    • Other factors such as income, the routine-task intensity of one’s job, and union representation may also influence outcomes, but are excluded from the core capacity analysis due to data limitations or ambiguity about their unique contributions to adaptive capacity.

    In sum, linking exposure measures with these indicators of adaptive capacity provides a more complete picture of who is likely to experience the greatest costs if AI exposure translates into job loss. More specifically, such an approach can suggest, on one hand, how the AI disruptions that may befall higher-income, white collar workers may be partly mitigated by those workers’ savings, skills, and networks; while on the other hand, downside risks for less adaptive workers may be harder to manage.

    Approach: Measuring adaptive capacity alongside AI exposure

    The analysis expands prior research on AI exposure by complementing Eloundou and others’ occupational exposure estimates with various measures of workers’ adaptive capacity, allowing for a sharper picture of risk and resilience in the labor market. The aim is to identify which occupations are likely to be impacted by large language models (LLMs) as well as which workers are the most and least able to weather a job transition if one becomes necessary.

    To develop this picture, the NBER report combines six primary datasets to create a composite measure of adaptive capacity by occupation. The Survey of Income and Program Participation (SIPP) provides data on workers’ net liquid wealth; the American Community Survey (ACS) offers age distributions; the Occupational Employment and Wage Statistics (OEWS) program contributes wage and employment figures; Lightcast provides employment shares by county and metro area; Bureau of Labor Statistics employment projections provide estimated employment growth rates by occupation; and O*NET details the range of skills required in each occupation. Together, these sources cover the vast majority of the U.S. workforce. (See the Appendix below and the underlying paper.)

    From this combined data, the NBER report calculates an occupation-level adaptive capacity index based on four standardized components (net liquid wealth, growth-weighted skill transferability, geographic density, and age), which capture financial, skill-based, geographic, and age-related adjustment capacity. This index is then presented alongside AI exposure measures from Eloundou and others to identify how adaptive capacity varies across highly exposed occupations.
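    As a concrete illustration of this construction, the sketch below standardizes four hypothetical occupation-level components and averages them into a single index. It is a minimal sketch, not the authors’ code: the column names, toy values, and the equal-weighted average of z-scores are assumptions made here for illustration.

    ```r
    library(dplyr)

    # Hypothetical occupation-level inputs (illustrative values only)
    occ <- data.frame(
      soc_code          = c("15-1252", "43-9061", "23-1011"),
      net_liquid_wealth = c(45000, 3200, 60000),  # median net liquid wealth (SIPP)
      skill_transfer    = c(0.72, 0.41, 0.65),    # growth-weighted skill transferability (O*NET + BLS projections)
      geo_density       = c(0.81, 0.35, 0.77),    # labor market density component (Lightcast)
      age_component     = c(0.88, 0.71, 0.79)     # age component, e.g., share of workers under 55 (ACS)
    )

    # Standardize each component, then combine into a single index
    # (an unweighted mean of z-scores; the paper may weight components differently)
    occ_index <- occ |>
      mutate(across(net_liquid_wealth:age_component, ~ as.numeric(scale(.x)))) |>
      mutate(adaptive_capacity = rowMeans(across(net_liquid_wealth:age_component)))
    ```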

    Importantly, the adaptive capacity index focuses on factors influencing workers’ ability to find new jobs and their earnings after reemployment—not the full range of welfare costs that job displacement can impose, such as job insecurity or the loss of meaning and identity that work provides.

    Findings: On average, highly AI-exposed workers appear well-equipped to handle job transitions relative to the rest of the workforce, yet 6.1 million workers still face both high exposure and low adaptive capacity

    Combining AI exposure measures with the new adaptive capacity index paints a new picture of AI’s potential impacts on the workforce.

    To begin, the analysis here shows that the workers with the highest AI exposure rates possess characteristics that give them higher capacity to navigate job transitions successfully—finding new employment quickly and minimizing earnings losses after job displacement. That is, the most exposed workers may have the most resilience if AI automation or another cause leads to job loss.

    Figure 1 shows a large group of 26.5 million workers concentrated to the upper right of the bubble chart. Across this cluster of occupations, many high-exposure occupations such as software developers, financial managers, lawyers, and other professionals benefit from strong pay, financial buffers, diverse skills, and deep professional networks. Given that, these well-positioned workers—who observers often cite as being highly threatened by AI automation—likely possess relatively strong means to adjust to AI-driven dislocation if it were to occur (though of course few such transitions are easy, or come without costs to a worker’s well-being).

    Figure 1

    By contrast, roughly 6.1 million workers (see Appendix) face both high exposure to LLMs and low adaptive capacity to manage a job transition. Concentrated in jobs located in the lower-right quadrant of Figure 1, these potentially more vulnerable workers are employed in occupations with both top-quartile AI exposure and bottom-quartile adaptive capacity.  Many of these workers occupy administrative and clerical jobs where savings are modest, workers’ skill transferability is limited, and reemployment prospects are narrower. As such, if faced with an AI-related job loss, workers in these roles are likely among the most at risk of lower reemployment rates, longer job searches, and more significant relative earnings losses compared to other workers. 
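    The quadrant definition above comes down to two quartile cutoffs. The sketch below flags occupations with top-quartile exposure and bottom-quartile adaptive capacity and sums their employment; the data frame and values are hypothetical, and the 6.1 million figure reported in the paper comes from the authors’ full occupation-level dataset rather than anything shown here.

    ```r
    library(dplyr)

    occ <- data.frame(
      soc_code   = c("43-9061", "43-6014", "15-1252", "11-3031"),
      exposure   = c(0.66, 0.63, 0.61, 0.45),   # LLM exposure (Eloundou et al. E1 + 0.5*E2; illustrative values)
      capacity   = c(-0.9, -0.7, 0.8, 1.1),     # adaptive capacity index (illustrative values)
      employment = c(2500000, 1700000, 1600000, 800000)
    )

    flagged <- occ |>
      mutate(
        high_exposure = exposure >= quantile(exposure, 0.75),
        low_capacity  = capacity <= quantile(capacity, 0.25),
        vulnerable    = high_exposure & low_capacity
      )

    # Employment in the high-exposure / low-adaptive-capacity cell
    sum(flagged$employment[flagged$vulnerable])
    ```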

    Looking more closely, the interplay of adaptive capacity scores with AI exposure scores reveals a positive correlation: As exposure increases, adaptive capacity generally increases as well. This reflects the fact that many highly exposed roles are held by financially secure, skilled, and well-networked workers—often in larger cities—who may have more opportunities to find continued employment. In that fashion, numerous workers in managerial and technical occupations are highly exposed to AI yet are nevertheless relatively well positioned to adapt (see Table 1).

    Table 1

    Occupations with the highest adaptive capacity among those with high AI exposure (top quartile) (Table)

    At the same time, it’s clear that the collection of occupations characterized by high AI-exposure levels and low adaptive capacity encompasses numerous routine office jobs, which are often held by workers who may struggle to adapt to disruption (see Table 2). Door-to-door sales workers and news and street vendors show the least adaptive capacity among the occupations in the top quartile of AI exposure, followed by a number of clerking and administrative occupations, such as court, municipal, and license clerks; secretaries and administrative assistants; and payroll and timekeeping clerks. In terms of these occupations’ size, office clerks (2.5 million workers); secretaries and administrative assistants (1.7 million); receptionists and information clerks (965,000); and medical secretaries and administrative assistants (831,000) stand out as some of the largest occupations in the list. The combination of employment size, potentially elevated automation impacts, and precarious worker traits highlights occupations where policymakers may benefit from greater visibility into AI’s workforce effects.

    Table 2

    Occupations with the lowest adaptive capacity among those with high AI exposure (top quartile) (Table)

    Shifting focus to the geographical incidence of AI exposure and adaptive capacity, the analysis here shows concentrations of highly exposed and highly adaptive workers are greatest in tech hubs such as San Jose, Calif., and Seattle. Conversely, the share of workers in highly exposed but low-adaptive-capacity occupations ranges from 2.4% to 6.9% in the nation’s metro areas, with a national average of 3.9%. The concentration of exposed and vulnerable workers is greatest in smaller metro areas and college towns, particularly in the Mountain West and Midwest—reflecting such areas’ elevated presence of administrative and clerical workers. Key metro areas with elevated shares of potentially vulnerable workers (those with high exposure but low adaptive capacity) include college towns such as Laramie, Wyo., Huntsville, Texas, and Stillwater, Okla.; state capitals such as Springfield, Ill., Carson City, Nev., and Frankfort, Ky.; and small towns in New Mexico and Oklahoma.
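    As a sketch of how such metro-level shares can be computed, the example below aggregates occupation-by-metro employment into each metro’s share of workers in flagged occupations. The metro names and employment counts are illustrative placeholders, not Lightcast figures.

    ```r
    library(dplyr)

    metro_occ <- data.frame(
      metro      = c("Laramie, WY", "Laramie, WY", "San Jose, CA", "San Jose, CA"),
      vulnerable = c(TRUE, FALSE, TRUE, FALSE),   # occupation flagged as high exposure / low capacity
      employment = c(1200, 16500, 22000, 1050000)
    )

    metro_occ |>
      group_by(metro) |>
      summarise(share_vulnerable = sum(employment[vulnerable]) / sum(employment))
    ```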

    Map 1

    Geographic distribution of high exposure and low adaptive capacity occupations (Choropleth map)

    Overall, the figures, charts, and map here suggest that supplementing AI exposure with measures of worker characteristics yields a different (and potentially more useful) level of insight into potential worker resilience and vulnerability.

    Limitations: Significant uncertainty surrounds the question of how AI will impact labor markets, and occupation-level measures cannot tell the whole story

    This analysis is not without limitations, and despite the new evidence generated here, there remains significant uncertainty about both the extent to which AI will impact labor markets and the differential burdens and opportunities that AI can bring for affected workers. The full NBER paper includes a more complete description of potential limitations. We briefly discuss several here.

    First, the adaptive capacity index is computed at the occupation level, but the adaptive capacity of different workers within the same occupation can vary substantially. For example, even though computer network architects score highly on the index, a 30-year-old computer network architect with a diverse range of past industry experience living in San Francisco may be better positioned to manage a job transition than a 56-year-old worker who shares the same job title but has worked at one small IT company in a smaller market for their entire career. Similarly, two software developers may have very different levels of liquid savings to help weather an income shock, and two office clerks may work in labor markets that offer very different sets of alternative work opportunities if displaced.

    Additionally, there are numerous ways one could compose a measure of adaptive capacity to displacement. The approach taken here represents an initial attempt to introduce this concept. However, there are dozens of confounding individual, firm, occupation, and local labor market factors that will ultimately shape a worker’s ability to navigate technological displacement, and that evade measurement in this index. The result that AI exposure is positively correlated with measures of adaptive capacity appears robust across many alternative ways of computing the index, but individual occupation-level results will be more sensitive to different approaches. More data from the U.S. context on other factors and their relative importance for shaping post-displacement outcomes would help expand the utility of any adaptive capacity measure.

    Finally, the evidence underlying the adaptive capacity estimates here is derived primarily from observed effects in localized displacement events, rather than from large-scale employment shifts across occupations. As a result, the index may be most informative when displacement is relatively isolated—for example, when a worker loses their job but related occupations remain stable. In scenarios in which AI affects clusters of related occupations simultaneously, structural job availability may matter more than individual-level characteristics. Moreover, if AI fundamentally transforms the economy on a scale comparable to the industrial revolution (as some experts have suggested could be possible), it could make entire skill sets redundant across several occupations simultaneously.

    How the economy will react to structural changes AI may bring is difficult to predict, and any occupation-level adaptive capacity measure could drastically change as AI impacts skill demands and helps create new jobs and industries. The measure discussed here represents one snapshot in time based on available data on the drivers of adaptive capacity.

    Conclusion: Adaptability analysis can help reveal who may be most in need of support to weather AI-driven job transitions

    Overall, this analysis offers a more nuanced picture of AI’s possible impacts on workers than AI exposure measures can on their own.

    Specifically, the analysis focuses on understanding the degree to which workers in different highly exposed occupations could manage a job transition after involuntary displacement. In doing so, it makes clear the existence of both large zones of strong resilience to job loss across the workforce as well as concentrated pockets of heightened vulnerability if displacement were to occur.

    Given this, the report likely has practical use for workforce and employment development practitioners because understanding where workers are most and least resilient to AI-driven labor market change may help inform the optimal use of public funding for workforce adjustment programs.

    Such information can also be used to inform efforts to track labor market impacts. For example, policymakers concerned about potential negative impacts from AI-induced displacement may be able to use adaptive capacity measures to target investment in new data collection on groups of workers with lower estimated adaptive capacity. Additionally, such measures could be considered to target and streamline eligibility for particular workforce transition assistance programs.

    In sum, as AI continues to spread across the economy, adaptability analysis can provide a starting point for policymakers to better understand who may be most in need of better support to weather job transitions.

    • Appendix

      Complete list of high-vulnerability occupations

      All occupations with high exposure and low adaptive capacity

      Geographic distribution of high-vulnerability occupations 

      Top 40 metropolitan statistical areas by share of workers in high-vulnerability occupations

      State-level geographic patterns 

      State-level concentration of high vulnerability workers

      Data sources

      The authors combine data from seven sources:

      • Survey of Income and Program Participation (SIPP) 2022-2024 Panels: Detailed information on workers’ income, savings, and demographic characteristics used for constructing occupation-level measures of median net liquid wealth.
      • American Community Survey (ACS) 2024: Microdata on workers’ age distributions across occupations used for calculating the share of workers aged 55 and older.
      • Occupational Employment and Wage Statistics (OEWS) 2024: Occupation-level wage and employment data used for cross-dataset harmonization of weights and income measures.
      • Bureau of Labor Statistics Employment Projections: Data on projected employment growth rates by occupation (2024 to 2034) used to calculate growth-weighted skill transferability.
      • Lightcast 2023: Occupation-level employment data by county and metropolitan statistical area.
      • O*NET Database 30.1 (2025): Skill importance ratings to measure skill transferability across occupations.
      • AI exposure data: Measures of occupational exposure to LLMs from Eloundou et al. (2024), specifically their E1+0.5E2 measure.

      For smooth data integration, the authors first harmonize occupation codes across datasets to create a common occupational taxonomy. This includes modifications (such as weighted averages) to group occupations that are classified differently across data sources. More details are available in the full paper.

      • O*NET > SOC > OEWS > Modified SIPP
      • Census > SIPP > Modified SIPP

      Only occupations meeting strict data quality thresholds (e.g., ≥15 SIPP respondents) are included. The final dataset covers 95.9% of the U.S. workforce (356 occupations) based on OEWS data.

      See Online Appendix available here for more detail.


  • Journal of Medical Internet Research – Longitudinal Between

    Background

    Daytime sleepiness is an important, yet understudied, dimension of adolescents’ sleep health []. Its prevalence varies widely across countries, ranging from 7.8% to 55.8% [], and it is notably higher in adolescence than in adulthood []. Importantly, daytime sleepiness plays a central role in mediating the adverse effects of sleep impairment on adolescent health and well-being []. Studies have linked it to lower health-related quality of life [], depressive symptoms, anxiety [], heightened risk of mood disorders [], and lower educational achievement []. Given its central role in linking sleep impairment to adverse outcomes, understanding the factors and processes contributing to daytime sleepiness in adolescence warrants greater scholarly attention.

    Daytime sleepiness arises from an interplay of intrinsic (eg, brain maturation and sleep disorders) and extrinsic (eg, early school start times and poor sleep hygiene) factors []. Among these, insufficient sleep and late bedtimes on schooldays have been identified as the most direct contributors to daytime sleepiness among adolescents []. Digital media use is an important extrinsic factor known to affect sleep duration and bedtime timing; yet, most research on this association has been cross-sectional, limiting causal interpretations and leaving the direction of effects unclear [,].

    Although the number of longitudinal studies is increasing [], the vast majority do not distinguish between-person from within-person associations, which can lead to misleading conclusions about causal effects [,]. Studies that do separate these effects typically focus on short-term dynamics, such as day-to-day changes [-], often in small convenience samples, which limits their relevance for understanding longer-term processes.

    To address these gaps in prior research, this study is the first to examine the longitudinal, reciprocal associations among screen time, bedtime, and daytime sleepiness, accounting for both stable between-person differences and within-person processes.

    The study also tests whether restricting screen time before sleep moderates these associations. Clarifying whether daytime sleepiness emerges primarily from stable between-person differences, dynamic within-person processes, or both can advance theoretical understanding of how digital media use and adolescent sleep health influence each other. It may also help determine whether interventions should target stable behavioral patterns—such as sleep-related lifestyle habits, household routines, or family norms around screen use—or instead focus on longer-term individual trajectories—such as gradual increases in screen time or seasonal shifts in bedtime habits—or integrate both approaches.

    Prior Work

    Associations Among Screen Time, Bedtime, and Daytime Sleepiness

    Cross-sectional research consistently demonstrates positive associations between various screen-based activities—such as television watching, internet use, video gaming, and phone use—and both delayed bedtimes and increased daytime sleepiness [,]. While this evidence cannot support causal interpretations, it suggests that, for some adolescents, higher screen time, later bedtimes, and greater daytime sleepiness tend to co-occur. This pattern likely reflects stable between-person differences that may be linked to external factors such as individual traits (eg, social anxiety), lifestyle demands (eg, extracurricular commitments), and family environment characteristics (eg, parenting style and household rules) [-].

    A recent synthesis of evidence suggests that the causal link between screen time and sleep health is bidirectional, involving 2 potential pathways []. The screen-time-affecting-sleep pathway posits that media use, in particular before or after bedtime, contributes to shorter sleep duration and poorer sleep quality. Four explanatory mechanisms have been proposed: melatonin suppression due to blue light exposure, psychological arousal, displacement of sleep time, and sleep interruptions [-]. Of these, only displacement—that is, delayed bedtime due to screen time—and nighttime interruptions from notifications appear to have a substantial impact on sleep [].

    Conversely, the impaired-sleep-affecting-screen-time pathway posits that changes in sleep can contribute to increased media use. Three mechanisms explain this effect: circadian phase shifts in puberty extend the evening free time available for media use [,]; adolescents may use digital media to cope with sleep difficulties [,]; and daytime sleepiness is associated with more sedentary behavior, including prolonged screen time [].

    Longitudinal evidence supporting the 2 pathways is mixed. Some adolescent studies support the screen-time-affecting-sleep pathway (eg, meta-analysis by Pagano et al []), others report reciprocal associations [,], and some find minimal or no effects [-]. Evidence for the sleep-impairment-affecting-screen-time pathway exists, but only in young adult samples []. Other longitudinal studies found little or no support for either pathway, or only marginal effects [-]. Such mixed findings may partly stem from conflating between-person and within-person associations in prior longitudinal studies.

    To date, only 2 longitudinal studies have investigated the within-person associations between electronic media use and sleep-related outcomes—one focusing on daytime sleepiness [] and the other on bedtime []. The former did not find significant within-person associations between the frequency of social media use and daytime sleepiness in Dutch adolescents (aged 11-15 years), but did find between-person associations []. The latter, a 5-wave study of Finnish adolescents (aged 13-14 years at baseline), found no lagged effects and limited evidence of concurrent within-person associations—higher-than-usual social media use coincided with later-than-usual bedtime, but only in wave 1 []. These sparse findings suggest that the link between media use and sleepiness may arise from stable individual differences rather than changes over time.

    A Moderating Role of Screen Time Restriction Before Sleep

    Not all screen use is equally adverse for sleep health. In particular, evening screen time is considered detrimental to adolescent sleep [], and restricting it is a common sleep hygiene recommendation []. Among adolescents, presleep screen restriction often results from parent-set technology rules, which cross-sectional studies have linked to less screen use, an earlier bedtime, and longer sleep duration []. While many adolescents do not follow their parents’ technology rules and recommendations [], research synthesis suggests that interventions aimed at reducing prebedtime screen use lead to modest improvements in bedtime and sleep duration []. Although this evidence suggests a potentially protective effect of reducing evening screen use, evidence on whether presleep screen restrictions moderate the longitudinal relationship between adolescents’ screen time and sleep health is largely missing.

    Covariates

    Both screen use and sleep health vary by age and sex and therefore are important to consider when interpreting associations among adolescents’ screen time and sleep health. In particular, older adolescents sleep less, go to bed later, and spend more time on screens, and younger adolescents are more likely to limit evening screen use [-]. Findings on sex differences in sleep are mixed. Some studies report no substantial differences [], while others show girls sleep more than boys [], or the reverse []. Daytime sleepiness findings are also inconsistent []. Sex differences in screen time are clearer. Boys exceed screen time limits more often [], and sleep-disrupting screen activities differ—girls’ sleep is more affected by social media, while boys’ is impacted by video games []. Together, these patterns indicate that age and sex are important individual factors in understanding variation in adolescents’ screen use and sleep health.

    This Study

    Prior longitudinal studies have rarely distinguished between stable between-person differences and within-person fluctuations in digital media use and sleep, leaving uncertainty about whether observed associations reflect enduring individual characteristics or dynamic changes over time []. The few adolescent studies that applied this distinction produced inconclusive results, with limited evidence for lagged or concurrent within-person effects [,]. To address this gap, this study extends prior work by examining the reciprocal longitudinal associations among adolescents’ screen time, bedtime, and daytime sleepiness while separating between- and within-person processes. This allows us to clarify whether screen time and sleep co-vary because they influence each other over time or because of stable individual differences among adolescents. Furthermore, testing the moderating role of screen time restriction before sleep provides evidence on whether this common sleep hygiene recommendation mitigates longer-term effects of screen use on sleep health.

    Specifically, we hypothesize that adolescents with higher overall screen time go to bed later and experience greater daytime sleepiness (Hypothesis 1); that increases in screen time are associated with a corresponding delay in bedtime and an increase in daytime sleepiness at the subsequent wave (Hypothesis 2), as well as within the same wave (Hypothesis 3); that delayed bedtime and increased daytime sleepiness are each associated with a subsequent increase in screen time (Hypothesis 4). The within-person effects expected in Hypotheses 2-4 reflect changes relative to a person’s typical patterns. Finally, we hypothesize that within-person associations are weaker among adolescents who restrict their screen use before sleep (Hypothesis 5).

    Ethical Considerations

    The study was approved by the Research Ethics Committee at Masaryk University (EKV-2018-068). Before participation, respondents were informed about the nature and purpose of the survey, their right to decline involvement, and their ability to skip any questions by selecting the “I prefer not to say” option available for all items. Informed consent was obtained from both adolescents and parents. Parents were instructed not to be present during the adolescent survey to protect privacy. Adolescents were asked to indicate if an adult had observed or intervened. Although most caregivers appeared to comply, this could not be independently verified. All data were fully deidentified prior to analysis, and no identifying information was collected or stored. No identification of individual participants in any images of the manuscript or supplementary material is possible.

    Participants received reward points equivalent to approximately US $4, added to the panelist’s account and redeemable as cash or for charity donations.

    Study Design and Setting

    A longitudinal observational design was used. This 3-wave prospective panel study was a part of a larger multifocal study examining various aspects of adolescents’ use of information and computer technologies and their impact on well-being. The first wave of data collection took place in June 2021, the second in November and December 2021, and the third in May and June 2022, with approximately 6 months between each wave. This study adhered to the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) reporting guidelines []; the completed STROBE checklist is provided in [].

    Participants

    This study was conducted on a sample of 2500 Czech adolescents aged 11-16 years (mean age 13.43, SD 1.70 years; 1250/2500, 50% girls). Data were collected in the Czech Republic by an external research agency that recruited participants from existing online panels using face-to-face interviews, computer-assisted telephone interviewing, and online methods. Eligible participants were Czech households with at least 1 adolescent aged 11-16 years and a caregiver, enabling data collection from adolescent-parent dyads within the same household. Quota sampling was used to ensure equal representation of gender, age, and their combination and to ensure that the sample reflected the distribution of Czech households with children based on households’ socioeconomic status (head of the household’s education level) and place of residence (Nomenclature of Territorial Units for Statistics, level 3, municipality size, European Commission, 2020). Of the 2500 participants recruited at Wave 1, a total of 1654 completed Wave 2, corresponding to an attrition rate of 33.8% (846/2500). At Wave 3, a total of 1102 participants (44.1% of the initial sample) remained in the study, with an incremental attrition rate of 33.4% (552/1654) between Wave 2 and Wave 3.

    Measures

    Screen Time

    Screen time was assessed with 3 items, each starting with the question: “How much time (hours and minutes) do you spend doing the following activities during a typical school day?” The three items were: (1) “using a computer (PC or notebook),” (2) “using a cell phone or tablet,” and (3) “watching TV, including various videos on TV (eg, DVD, Netflix).” In response to these items, respondents picked hours and minutes using a time spinner. The screen time score was then computed by adding up the scores of each item.
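    As a worked example of this scoring (with invented responses), the sketch below converts each hh:mm item to decimal hours and sums the three items.

    ```r
    # Convert an hh:mm response to decimal hours
    to_hours <- function(hh, mm) hh + mm / 60

    # One respondent's typical school-day screen time across the three items
    screen_time <- to_hours(2, 30) +  # computer (PC or notebook)
      to_hours(3, 0) +                # cell phone or tablet
      to_hours(1, 15)                 # TV, including videos on TV
    screen_time  # 6.75 hours
    ```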

    Bedtime

    Bedtime was measured with 1 item: “When do you usually go to bed before school days?” In response to this item, respondents picked hours and minutes using a time spinner.

    Daytime Sleepiness

    Daytime sleepiness was measured using 4 items from the Pediatric Daytime Sleepiness Scale, which contains 8 items assessing the frequency of specific daytime sleepiness symptoms []. The 4 items were “You get sleepy or drowsy while doing your homework,” “You have trouble getting out of bed in the morning,” “You tell yourself that you need more sleep,” and “You are tired and grumpy during the day.” The items were rated on a 5-point scale: (1) “never,” (2) “rarely,” (3) “sometimes,” (4) “often,” and (5) “very often.” For each measurement occasion, a composite score was calculated as the mean of the items measuring the construct. A higher score indicates higher daytime sleepiness. Cronbach α was computed to assess the reliability of the scale across 3 waves. Reliability estimates were α=0.77 for Wave 1, α=0.81 for Wave 2, and α=0.82 for Wave 3. These results indicate that the scale has acceptable internal consistency over time. Mean scores of the observed items were used for daytime sleepiness in analyses due to convergence issues when the latent variable was incorporated into the trivariate random intercept cross-lagged panel model (RI-CLPM).
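    A minimal sketch of this scoring with invented responses: the composite is the row mean of the 4 items, and Cronbach α can be computed with the psych package (the package choice is an assumption; the paper does not name its reliability routine).

    ```r
    library(psych)

    # Hypothetical Wave 1 responses to the 4 sleepiness items (1 = never ... 5 = very often)
    items_w1 <- data.frame(
      ds1 = c(3, 2, 4),
      ds2 = c(4, 2, 5),
      ds3 = c(3, 3, 4),
      ds4 = c(2, 2, 4)
    )

    ds_w1 <- rowMeans(items_w1, na.rm = TRUE)          # composite daytime sleepiness score
    alpha_w1 <- psych::alpha(items_w1)$total$raw_alpha # internal consistency at Wave 1
    ```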

    Screen Time Restriction Within 1 Hour Before Sleep

    Screen time restriction within 1 hour before sleep was measured at Wave 1. First, respondents were asked: “How long before going to sleep do you usually stop using all devices with a screen, ie, phone, tablet, computer, television?” Respondents picked hours and minutes using a time spinner in response to this item. Then, these data were transformed into a binary variable with values of 0 for adolescents who reported less than 60 minutes and 1 for adolescents who reported 60 minutes or more.

    Covariates

    Sex and age at baseline were self-reported at Wave 1 and were both included as time-invariant covariates in the analysis. Sex was coded as 0 for girls and 1 for boys, and age was grouped into 11-13 years (0) and 14-16 years (1).

    Statistical Analysis

    To examine the associations between screen time, bedtime, and daytime sleepiness over time while accounting for both between- and within-person sources of variance, we used RI-CLPMs fitted in lavaan (version 0.6-18) in R (version 4.4.1; R Core Team), allowing unbiased estimation of within-person effects net of stable individual differences []. The robust maximum likelihood estimator (MLR) was used, as it adjusts standard errors and chi-square statistics to accommodate nonnormal data (Section 3: “Testing normality assumptions” in supplementary materials provided by Tkaczyk et al []), yielding more accurate parameter estimates []. The proportion of missing data for the key time-varying variables ranged from 0.0% to 7.3% across waves. Little’s Missing Completely at Random (MCAR) test indicated that the data were not completely missing at random (χ²(377)=787.8; P<.001; normed χ²/df=2.1), suggesting a small to moderate deviation from MCAR. Given the low proportion of missing data (<8% per variable), full information maximum likelihood (FIML) estimation was used to handle missing values. A detailed breakdown of the percentage of missingness for each variable and wave, along with results of logistic regressions testing the relationships among key analytical variables, demographics, and dropout, is provided in Section 1: “Attrition analysis” in supplementary materials provided by Tkaczyk et al [].
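    For readers who want to reproduce this setup, the sketch below specifies a trivariate RI-CLPM in lavaan following Mulder and Hamaker’s published template and fits it with the MLR estimator and FIML. The wide-format variable names (st1-st3, bt1-bt3, ds1-ds3), the data frame dat, and the omission of the age and sex covariates are simplifying assumptions, not the authors’ exact code.

    ```r
    library(lavaan)

    riclpm <- '
      # Between-person part: random intercepts with all loadings fixed to 1
      RIst =~ 1*st1 + 1*st2 + 1*st3
      RIbt =~ 1*bt1 + 1*bt2 + 1*bt3
      RIds =~ 1*ds1 + 1*ds2 + 1*ds3

      # Within-person part: wave-specific deviations from the person-level means
      wst1 =~ 1*st1; wst2 =~ 1*st2; wst3 =~ 1*st3
      wbt1 =~ 1*bt1; wbt2 =~ 1*bt2; wbt3 =~ 1*bt3
      wds1 =~ 1*ds1; wds2 =~ 1*ds2; wds3 =~ 1*ds3

      # Autoregressive and cross-lagged paths among the within-person components
      wst2 + wbt2 + wds2 ~ wst1 + wbt1 + wds1
      wst3 + wbt3 + wds3 ~ wst2 + wbt2 + wds2

      # Wave-1 covariances and later residual (innovation) covariances
      wst1 ~~ wbt1 + wds1; wbt1 ~~ wds1
      wst2 ~~ wbt2 + wds2; wbt2 ~~ wds2
      wst3 ~~ wbt3 + wds3; wbt3 ~~ wds3

      # Variances and covariances of the random intercepts
      RIst ~~ RIst + RIbt + RIds; RIbt ~~ RIbt + RIds; RIds ~~ RIds

      # (Residual) variances of the within-person components
      wst1 ~~ wst1; wbt1 ~~ wbt1; wds1 ~~ wds1
      wst2 ~~ wst2; wbt2 ~~ wbt2; wds2 ~~ wds2
      wst3 ~~ wst3; wbt3 ~~ wbt3; wds3 ~~ wds3

      # Measurement error variances of the observed indicators fixed to zero
      st1 ~~ 0*st1; st2 ~~ 0*st2; st3 ~~ 0*st3
      bt1 ~~ 0*bt1; bt2 ~~ 0*bt2; bt3 ~~ 0*bt3
      ds1 ~~ 0*ds1; ds2 ~~ 0*ds2; ds3 ~~ 0*ds3
    '

    fit <- lavaan(riclpm, data = dat, estimator = "MLR", missing = "fiml",
                  meanstructure = TRUE, int.ov.free = TRUE)
    summary(fit, fit.measures = TRUE, standardized = TRUE)
    ```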

    To obtain more robust estimates, nonparametric bootstrapping with 2000 resamples was used to estimate 95% CIs for both unstandardized and standardized effects. Standardized coefficients represent the SD change in outcomes per 1 SD change in exposure. Chi-square difference tests were used to compare the fit of a nested model with constraints to the fit of the unconstrained model unless otherwise specified. The modeling approach was adapted from Mulder and Hamaker []. In the first step, the unconstrained RI-CLPM was compared to a model where all random intercept variances and covariances were set to zero (statistically equivalent to a cross-lagged panel model [CLPM]) to test for stable between-person differences, using the chi-bar-square test []. The comparison showed that the RI-CLPM fit the data better (Δχ²(6)=286.8; P<.001). In addition, random intercepts of all 3 constructs had significant variance, indicating that there were some stable between-person differences in screen time, bedtime, and daytime sleepiness over time. Second, to assess population-level changes in observed variables, we fixed grand means over time and compared this model to the unconstrained version. The comparison showed that the model without the constraints fit the data better (Δχ²(6)=74.1; P<.001), which implies that, on average, there was some change over time in all 3 variables. Third, to test whether the associations between screen time, bedtime, and daytime sleepiness were time-invariant, we constrained the autoregressive and cross-lagged paths, as well as the residual covariances. The model-building procedure indicated that the fully unconstrained model was the best-fitting model (Table 1). At this point, covariates (age and sex) were added to the model. The final model showed an adequate fit (χ²(15)=46.7; P<.001; Comparative Fit Index [CFI]=0.994; Tucker-Lewis Index [TLI]=0.977; root-mean-square error of approximation [RMSEA]=0.029, 90% CI 0.020-0.039; standardized root-mean-square residual [SRMR]=0.020). Fourth, moderation by screen time restriction before bed was tested using a multiple-group extension of the RI-CLPM [].
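    The nested comparisons described above can be run with chi-square difference tests; a minimal sketch is shown below, assuming the fitted object from the previous sketch plus constrained variants with hypothetical names. Note that the first comparison (RI-CLPM vs CLPM) constrains variances to a boundary value, which is why the authors use a chi-bar-square test rather than a standard difference test.

    ```r
    library(lavaan)

    # fit          : unconstrained RI-CLPM (from the sketch above)
    # fit_clpm     : random-intercept variances and covariances fixed to zero (CLPM-equivalent)
    # fit_means    : grand means constrained to be equal over time
    # fit_invariant: lagged paths and residual (co)variances constrained over time

    lavTestLRT(fit, fit_means)      # are the means stable over time?
    lavTestLRT(fit, fit_invariant)  # are the within-person dynamics time-invariant?
    ```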

    Table 1. Model fit indices for random intercept cross-lagged panel models (RI-CLPMs) examining longitudinal associations between screen time, bedtime, and daytime sleepiness across 3 waves in a longitudinal study of adolescents (aged 11-16 years). Data were collected in the Czech Republic between June 2021 and June 2022.
    Model χ²(df) CFIa SRMRb RMSEAc TLId AICe BICf
    M0g 7.3 (3) 0.999 0.010 0.024 0.989 45591.676 45888.702
    M1h 294.0 (9) 0.938 0.044 0.113 0.750 45866.448 46128.530
    M2i 81.4 (9) 0.984 0.026 0.057 0.937 45653.804 45915.886
    M3j 42.7 (18) 0.995 0.023 0.029 0.989 45597.131 45806.796
    M3 + Covsk 46.7 (15) 0.994 0.020 0.029 0.977 45301.270 45633.241

    aCFI: Comparative Fit Index.

    bSRMR: standardized root-mean-square residual.

    cRMSEA: root-mean-square error of approximation.

    dTLI: Tucker-Lewis Index.

    eAIC: Akaike information criterion.

    fBIC: Bayesian information criterion.

    gM0: fully unconstrained RI-CLPM.

    hM1: cross-lagged panel model [CLPM].

    iM2: RI-CLPM with grand means constrained over time.

    jM3: RI-CLPM with constraint over time imposed on auto-regressive paths, cross-lagged paths, and residual (co)variances.

    kM3 + Covs: M0 with covariates (age and sex).

    Descriptive Analysis

    Table 2 displays pairwise correlations for time-varying variables across waves, along with their descriptive statistics, skewness, and kurtosis. The means of daytime sleepiness are close to “sometimes” (Wave 1: 2.81, SD 0.80; Wave 2: 2.82, SD 0.82; Wave 3: 2.84, SD 0.82). At Wave 1, approximately every third (788/2500, 32%) participant reported having trouble getting out of bed in the morning often or very often. Getting sleepy or drowsy while doing homework was the least frequent symptom—at Wave 1, approximately every sixth (400/2494, 16%) participant reported experiencing it often or very often.

    Table 2. Pearson correlations and descriptive statistics for screen time, bedtime, and daytime sleepiness across 3 waves in a longitudinal study of adolescents (aged 11-16 years). Data were collected in the Czech Republic between June 2021 and June 2022. All correlation coefficients (r) are significant at P<.001.
    Variable STa (W1b) ST (W2c) ST (W3d) BTe (W1) BT (W2) BT (W3) DSf (W1) DS (W2) DS (W3)
    ST (W1), r 1.00
    ST (W2), r 0.63 1.00
    ST (W3), r 0.59 0.62 1.00
    BT (W1), r 0.23 0.16 0.14 1.00
    BT (W2), r 0.17 0.20 0.21 0.61 1.00
    BT (W3), r 0.13 0.11 0.18 0.56 0.63 1.00
    DS (W1), r 0.13 0.09 0.09 0.22 0.17 0.13 1.00
    DS (W2), r 0.15 0.15 0.13 0.20 0.23 0.18 0.57 1.00
    DS (W3), r 0.14 0.13 0.16 0.21 0.19 0.20 0.55 0.64 1.00
    Mean (SD), hh:mm or scale 06:23 (02:40) 06:11 (02:36) 06:02 (02:37) 09:48 (00:56) 09:47 (00:56) 09:57 (00:58) 2.81 (0.80) 2.82 (0.82) 2.84 (0.82)
    Skewness 0.38 0.51 0.59 0.04 0.31 0.26 0.15 0.13 0.10
    Kurtosis −0.54 −0.29 −0.23 0.28 0.83 0.58 0.14 0.11 0.09

    aST: screen time (hh:mm).

    bW1: Wave 1.

    cW2: Wave 2.

    dW3: Wave 3.

    eBT: bedtime (hh:mm PM).

    fDS: daytime sleepiness (1-5).

    Average bedtimes at each wave were before 10 PM (Wave 1: 9:48, SD 00:56; Wave 2: 9:47, SD 00:56; Wave 3: 9:57, SD 00:58). At Wave 1, a total of 14% (353/2465) of participants reported bedtime at 11:00 PM or later (213/1621, 13% at Wave 2 and 190/1093, 17% at Wave 3). Average total daily screen times were close to 6 hours at each wave and showed a decreasing tendency across time (Wave 1: 06:23, SD 02:40; Wave 2: 06:11, SD 02:36; Wave 3: 06:02, SD 02:37).

    Intraclass correlation coefficients (ICCs) revealed that between-person differences accounted for approximately 64% of the variance in screen time, 60% in bedtime, and 58% in daytime sleepiness, indicating a smaller but substantial proportion of variance due to within-person changes over time. All variables showed statistically significant and positive correlations both within and across waves.
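    One common way to obtain such ICCs (offered here as an illustration rather than the authors’ procedure) is an intercept-only multilevel model on long-format data, one row per person-wave, with the ICC computed as the between-person variance over the total variance. The data frame dat_long and its columns are assumptions.

    ```r
    library(lme4)

    # Null (intercept-only) model: variance is split into between- and within-person parts
    m0 <- lmer(screen_time ~ 1 + (1 | id), data = dat_long)

    vc  <- as.data.frame(VarCorr(m0))
    icc <- vc$vcov[vc$grp == "id"] / sum(vc$vcov)
    icc  # approximately 0.64 for screen time in this study
    ```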

    Between-Person Associations Among Screen Time, Bedtime, and Daytime Sleepiness

    Standardized path coefficients of the final RI-CLPM are presented in Figure 1.

    The analysis revealed significant positive associations between the random intercepts of screen time and bedtime (r=0.23, 95% CI 0.15-0.31; P<.001), screen time and daytime sleepiness (r=0.25, 95% CI 0.16-0.34; P<.001), and bedtime and daytime sleepiness (r=0.31, 95% CI 0.22-0.41; P<.001). Consistent with Hypothesis 1, these correlations indicate that adolescents who typically use screens more also tend to go to bed later and experience higher daytime sleepiness. Additionally, those with later bedtimes tend to experience higher daytime sleepiness.

    Figure 1. Standardized path coefficients of the final random intercept cross-lagged panel model testing between- and within-person associations among screen time, bedtime, and daily sleepiness across 3 measurement waves in a longitudinal study of adolescents (aged 11-16 years) conducted in the Czech Republic between June 2021 and June 2022. The model controls for the effects of age (at Wave 1 [W1]) and sex on the random intercepts of the time-varying variables. Solid black lines represent significant paths, and dashed lines represent nonsignificant paths. Solid gray paths were fixed to 1. *P<.05; **P<.01; ***P<.001.

    Within-Person Associations Among Screen Time, Bedtime, and Daytime Sleepiness

    The analysis identified 2 significant cross-lagged effects. Consistent with Hypothesis 2, screen time at Wave 1 that was elevated relative to a person’s usual patterns was associated with a later bedtime at Wave 2 (β=.14, 95% CI 0.01-0.27; P=.02). Similarly, in line with Hypothesis 4, a later-than-usual bedtime at Wave 2 was associated with increased screen time at Wave 3 (β=.24, 95% CI 0.11-0.36; P<.001). No evidence was found for the remaining cross-lagged paths hypothesized in Hypothesis 2 or Hypothesis 4.

    Consistent with Hypothesis 3, the analysis revealed concurrent associations between the within-person components of screen time and bedtime (Wave 1: β=.16, 95% CI 0.04-0.27; P=.007; Wave 2: β=.23, 95% CI 0.010-0.36; P<.001; Wave 3: β=.09, 95% CI 0.01-0.19; P=.049), indicating that an increase in screen time—relative to a person’s usual patterns—was associated with a corresponding delay in bedtime within the same wave. However, no evidence was found for the concurrent association between screen time and daytime sleepiness that Hypothesis 3 also posited. Additionally, a significant concurrent association between bedtime and daytime sleepiness was observed at Wave 2 (β=.13, 95% CI 0.01-0.26; P=.045) and Wave 3 (β=.08, 95% CI 0.00-0.17; P=.04). This indicates that, within these waves, a delay in bedtime was associated with elevated daytime sleepiness relative to a person’s usual level of sleepiness.

    The analysis also revealed autoregressive effects. Elevated bedtime at Wave 2, relative to a person’s usual patterns, was associated with elevated bedtime at Wave 3 (β=.20, 95% CI 0.05-0.36; P<.001), indicating that a delay in bedtime—relative to a person’s usual patterns—tends to carry over time. A similar autoregressive effect was observed for daytime sleepiness, with elevated sleepiness at Wave 2 associated with elevated sleepiness at Wave 3 (β=.24, 95% CI 0.12-0.37; P<.001).

    The Role of Covariates

    Age significantly predicted the intercepts of screen time (β=.21, 95% CI 0.16-0.25; P<.001), bedtime (β=.36, 95% CI 0.32-0.41; P<.001), and daytime sleepiness (β=.16, 95% CI 0.11-0.21; P<.001), indicating that older adolescents (aged 14-16 years) typically spend more time using screen media, have later bedtimes, and experience higher daytime sleepiness compared with younger adolescents (aged 11-13 years). Sex (boy=1) significantly predicted the intercept of daytime sleepiness (β=.14, 95% CI 0.10-0.19; P<.001), indicating that typical levels of daytime sleepiness are higher for boys than for girls (Table 3).

    Table 3. Estimated parameters of the random intercept cross-lagged panel model (RI-CLPM) testing between- and within-person associations of screen time, bedtime, and daytime sleepiness across 3 measurement waves in a longitudinal study of adolescents (aged 11-16 years) conducted in the Czech Republic between June 2021 and June 2022. The model controls for the effects of age (at Wave 1) and sex on the random intercepts of the time-varying variables.
    Parameter B SE 95% CI P value β
    Between-person associations
    Correlations
    STia ↔ BTib 0.314 0.062 0.184 to 0.436 <.001 .229
    STi ↔ DSic 0.296 0.050 0.177 to 0.407 <.001 .250
    BTi ↔ DSi 0.123 0.019 0.084 to 0.159 <.001 .312
    Covariates
    Age → STi 0.859 0.100 0.658 to 1.05 <.001 .206
    Sex → STi 0.050 0.099 −0.138 to 0.247 .62 .012
    Age → BTi 0.523 0.033 0.457 to 0.591 <.001 .362
    Sex → BTi 0.054 0.033 −0.012 to 0.112 .08 .038
    Age → DSi 0.191 0.029 0.135 to 0.248 <.001 .160
    Sex → DSi 0.172 0.028 0.113 to 0.228 <.001 .144
    Within-person associations
    Autoregressive paths
    STd (W1e) → ST (W2f) 0.100 0.070 −0.049 to 0.235 .15 .100
    BTg (W1) → BT (W2) 0.047 0.081 −0.145 to 0.233 .56 .045
    DSh (W1) → DS (W2) 0.027 0.074 −0.147 to 0.183 .72 .025
    ST (W2) → ST (W3i) 0.056 0.068 −0.103 to 0.201 .41 .056
    BT (W2) → BT (W3) 0.224 0.059 0.04 to 0.387 <.001 .200
    DS (W2) → DS (W3) 0.245 0.051 0.112 to 0.362 <.001 .242
    Cross-lagged paths
    ST (W1) → BT (W2) 0.051 0.023 0.002 to 0.099 .03 .139
    ST (W1) → DS (W2) 0.008 0.020 −0.037 to 0.051 .71 .023
    BT (W1) → ST (W2) 0.131 0.171 −0.238 to 0.463 .44 .046
    BT (W1) → DS (W2) 0.006 0.057 −0.124 to 0.133 .92 .006
    DS (W1) → ST (W2) 0.001 0.186 −0.411 to 0.398 >.99 .000
    DS (W1) → BT (W2) -0.018 0.072 −0.178 to 0.125 .81 −.015
    ST (W2) → BT (W3) -0.009 0.022 −0.057 to 0.037 .69 −.021
    ST (W2) → DS (W3) 0.008 0.018 −0.037 to 0.051 .66 .024
    BT (W2) → ST (W3) 0.635 0.162 0.309 to 0.976 <.001 .235
    BT (W2) → DS (W3) -0.010 0.044 −0.113 to 0.084 .83 −.011
    DS (W2) → ST (W3) 0.116 0.173 −0.25 to 0.481 .50 .038
    DS (W2) → BT (W3) 0.102 0.061 −0.025 to 0.23 .09 .080
    Residual covariances
    ST (W1) ↔ BT (W1) 0.156 0.058 0.04 to 0.277 .007 .158
    ST (W1)↔ DS (W1) 0.043 0.047 −0.059 to 0.156 .05 .050
    BT (W1) ↔ DS (W1) 0.025 0.018 −0.011 to 0.063 .15 .084
    ST (W2) ↔ BT (W2) 0.229 0.066 0.087 to 0.364 <.001 .225
    ST (W2) ↔ DS (W2) 0.098 0.057 −0.021 to 0.213 .09 .108
    BT (W2) ↔ DS (W2) 0.042 0.021 −0.005 to 0.088 .045 .127
    ST (W3) ↔ BT (W3) 0.099 0.050 −0.008 to 0.209 .049 .091
    ST (W3) ↔ DS (W3) 0.077 0.040 −0.006 to 0.159 .06 .090
    BT (W3) ↔ DS (W3) 0.030 0.014 −0.001 to 0.061 .04 .083

    aSTi: screen time latent intercept.

    bBTi: bedtime latent intercept.

    cDSi: daytime sleepiness latent intercept.

    dST: screen time.

    eW1: Wave 1.

    fW2: Wave 2.

    gBT: bedtime.

    hDS: daytime sleepiness.

    iW3: Wave 3.

    The Moderating Role of Screen Time Restriction Before Bed

    Against Hypothesis 5, comparisons of multiple-group RI-CLPMs with and without constraints across groups showed no differences in correlations between random intercepts (Δχ²(3)=6.0; P=.11), residual covariances (Δχ²(6)=3.5; P=.74), or cross-lagged associations (Δχ²(6)=5.3; P=.51) across adolescents who restricted screen time 1 hour before bed at Wave 1 and those who did not. However, some significant differences between those groups were found (Δχ²(2)=32.0; P<.001). Adolescents who restricted their screen time before bed reported, on average, shorter screen time (by 27 minutes and 28 seconds), earlier bedtime (22 minutes and 12 seconds), and lower daytime sleepiness (Δ=0.159).
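    A minimal sketch of this multiple-group test, reusing the riclpm model string from the Statistical Analysis sketch: the model is fitted with a grouping variable and compared with a variant whose regression paths are constrained equal across groups. The grouping variable name restrict_1h is an assumption.

    ```r
    library(lavaan)

    # Group-specific (unconstrained) RI-CLPM
    fit_free <- lavaan(riclpm, data = dat, group = "restrict_1h",
                       estimator = "MLR", missing = "fiml",
                       meanstructure = TRUE, int.ov.free = TRUE)

    # Same model with autoregressive and cross-lagged paths constrained equal across groups
    fit_eq <- lavaan(riclpm, data = dat, group = "restrict_1h",
                     group.equal = "regressions",
                     estimator = "MLR", missing = "fiml",
                     meanstructure = TRUE, int.ov.free = TRUE)

    # A nonsignificant difference suggests the lagged associations do not differ by restriction status
    lavTestLRT(fit_free, fit_eq)
    ```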

    Principal Results

    This 3-wave prospective panel study examined bidirectional relationships between screen time, bedtime, and daytime sleepiness in a large representative sample of early to midadolescents in the Czech Republic. Findings at the between-person level showed that higher screen time, later bedtimes, and increased daytime sleepiness tend to co-occur among adolescents. At the within-person level, results revealed a bidirectional, transactional association between screen time and bedtime, suggesting mutual reinforcement over time. Additionally, temporary, wave-specific deviations in screen time and bedtime—relative to a person’s usual patterns—were positively correlated, suggesting that increases in screen time and delays in bedtime tend to co-occur within individuals at the same wave. Finally, while restricting screen time before sleep did not modify these associations, adolescents who restricted screen time had lower typical screen time, earlier bedtimes, and less daytime sleepiness on average.

    Between-Person Associations Among Screen Time, Bedtime, and Daytime Sleepiness

    Consistent with Hypothesis 1, the analysis revealed small to medium positive correlations between screen time, bedtime, and daytime sleepiness at the between-person level, aligning with findings from cross-sectional studies [,,,]. However, previous RI-CLPM studies reported mixed correlation patterns. For instance, Maksniemi et al [] found no significant between-person correlations between active social media use and bedtime, whereas 2 other studies reported medium positive correlations between social media use and daytime sleepiness [] and between media multitasking and sleep problems []. Such inconsistencies may reflect differences across studies in how media use was conceptualized and defined (eg, active vs general social media use).

    The between-person associations observed in this study indicate that higher screen time and poorer sleep co‐occur as relatively stable individual tendencies, likely shaped by other stable factors. For example, late chronotype may predispose some adolescents to later bedtimes and heavier evening media use []. Prior work has shown that modifiable factors—such as parenting style [], parental sleep [], media habits [], and household rules [,]—also influence both adolescent media habits and sleep. To guide better-targeted interventions, future longitudinal RI-CLPM studies should investigate how various modifiable family and lifestyle factors influence the media–sleep association over time.

    Within-Person Associations Among Screen Time, Bedtime, and Daytime Sleepiness Over Time

    Consistent with Hypothesis 2, increased screen time was associated with delayed bedtime 6 months later, but only between Waves 1 and 2. According to the interpretation guidelines proposed by Orth et al [], the effect is considered large. The association, although not consistent across all waves, aligns with prior longitudinal research, including a 6-wave study based on data from the ABCD study among adolescents aged 11‑14 years [] and a 2-wave study among adolescents aged 13-14 years [], both of which link media use to later bedtimes over time. The present findings extend the prior evidence by demonstrating the association even when controlling for stable between‐person differences. Other RI-CLPM studies did not find cross-lagged effects; for instance, Maksniemi et al [] found no association between active social media use and bedtime. Such discrepancies may reflect differences in conceptualizing media use—overall screen time versus active social media use—which involve distinct pathways linking media to sleep. Whereas active social media use mainly disrupts sleep through presleep arousal [], total screen time is more closely related to blue light exposure and sleep displacement, the latter showing stronger and substantial associations with reduced sleep duration [].

    Contrary to Hypothesis 2, this study found no evidence of a direct within-person association between screen time and daytime sleepiness in the long term. This result is consistent with 2 previous RI-CLPM studies. Van der Schuur et al [] also found no evidence of a long-term within-person association between social media use and daytime sleepiness, aside from a small effect of social media stress among girls. Van der Schuur et al [] found no direct path from media multitasking to sleep problems (including daytime sleepiness), except for a marginally significant effect of media multitasking among girls. Although direct effects were absent, indirect pathways remain plausible. Daytime sleepiness may occur when screen use results in later bedtimes []. Although bedtime was not formally tested as a mediator in this study, which should be considered a limitation, future longitudinal studies might examine bedtime delay as a pathway linking screen time to daytime sleepiness.

    Consistent with Hypothesis 3, temporary increases in screen time coincided with temporary delays in bedtime across all 3 waves, indicating concurrent within-person associations between the two. Similar results were found by Maksniemi et al [] in a single wave, whereas other RI-CLPM studies did not examine concurrent associations [,]. This pattern likely reflects the mutually exclusive nature of screen use and sleep within daily time allocation [], with the association manifesting in period-specific patterns of behavior: during periods when bedtime is delayed, adolescents have more opportunities for screen use, and conversely, during periods with greater screen use, they have less time available for sleep. Findings further indicate that bedtime remains sensitive to short-term, period-specific changes in screen time (and vice versa) and that both may share common contextual drivers.

    Against Hypothesis 3, this study found no evidence of a correlated change between screen time and daytime sleepiness, suggesting that short-term increases in screen time do not directly coincide with increased sleepiness. Similarly, a diary study on smartphone use and next-day sleepiness found no such effects []. Delayed bedtimes in Waves 2 and 3 were concurrently linked to increased daytime sleepiness, likely due to shorter sleep duration []. Overall, the pattern of longitudinal associations found in this study suggests that while screen time and daytime sleepiness are not directly linked at the within-person level, an indirect path is possible, whereby delayed bedtime may mediate the association between technology use and daytime sleepiness.

    Consistent with Hypothesis 4, this study found a within-person cross-lagged effect of bedtime on screen time in the subsequent wave: a later-than-usual bedtime predicted increased screen time 6 months later, but only between Waves 2 and 3. This finding aligns with prior longitudinal research showing reciprocal effects between poorer sleep and greater media use [,]. The RI-CLPM study by Van der Schuur et al [] provided partial evidence for the opposite direction, with increased daytime sleepiness predicting decreased social media use over time among boys. Unlike earlier RI-CLPM studies, this study supports the sleep-impairment-affecting-screen-time pathway, demonstrating a substantial effect even after accounting for stable between-person differences. Although this effect was not consistent across all waves, it suggests that adolescents may extend screen use to fill the additional evening waking hours that likely arise from circadian shifts or related factors [].

    Overall, discrepancies in cross-lagged effects across RI-CLPM studies may partly reflect differences in the time intervals between measurements. This study used a 6-month interval, whereas Van der Schuur et al [,] used a 3- to 4-month interval, and Maksniemi et al [] used a 1-year interval. The absence of cross-lagged effects in some cases suggests that these intervals may not have been optimal for capturing the underlying dynamics []. Future research could benefit from greater use of different temporal designs, such as shortitudinals, to identify optimal temporal windows for detecting within-person effects and the temporal dynamics through which media use influences sleep across adolescence.

    Taken together, the cross-lagged pattern (screen time → bedtime between Waves 1 and 2; bedtime → screen time between Waves 2 and 3) suggests a reinforcing cycle between increased screen time and delayed bedtime over time. While previous research identified bidirectional links between screen time and sleep [], this study extends prior work by being the first to demonstrate this reinforcing pattern longitudinally using an RI-CLPM that accounts for stable between-person differences. The autoregressive effects further indicate that delayed bedtimes tend to carry over across waves—a finding also reported in other RI-CLPM studies [], which may reflect adolescent circadian shifts or habitual delays associated with greater autonomy or increased school demands []. Considering that delayed bedtimes were concurrently linked to greater daytime sleepiness and prospectively to higher screen time, interventions that promote earlier and more consistent sleep schedules, rather than solely limiting screen use, may be more effective for improving adolescent sleep health.

    Effects of Screen Time Restriction Before Sleep

    Contrary to Hypothesis 5, this study found no evidence that restricting screen time before sleep affected within-person associations between screen time and sleep, particularly regarding the development of sleep displacement over time. Prior findings are mixed—while experimental studies have shown improvements in sleep outcomes [,], observational studies often report no adverse effects of prebedtime smartphone use [,], with inconsistent adherence to parent-set rules frequently cited as a limiting factor []. These discrepancies likely reflect differences in study design, sampling strategies, and time frames (eg, short- vs long-term). It should also be noted that the comparison groups were defined based on screen time restriction assessed at Wave 1 only. However, this behavior was not stable over time—among those who reported limiting their screen use at Wave 1, only 40% (284/710) did so across all 3 waves, and 65% (460/710) did so at least once thereafter. Future research should account for this temporal variability when examining the long-term effects of screen time restriction.

    Adolescents who reported restricting screen use before sleep also tended to report lower overall screen exposure, earlier bedtimes, and less daytime sleepiness than their peers. Although these between-person differences may indicate a protective role of screen time restriction, they could also reflect other stable characteristics such as family environment (eg, parenting style), chronotype, or self-regulation. Prior research has linked adverse parenting styles to poorer sleep quality and greater daytime sleepiness [], and greater sleepiness to lower self-regulation and eveningness chronotype []. Future longitudinal studies should account for these factors and examine their potential moderating roles in the relationship between screen use and sleep outcomes.

    Limitations

    Several limitations should be considered when interpreting these findings. First, the study relied on self-reported measures of screen time and sleep, which may be prone to inaccuracy [,]. Because overall screen time was calculated by summing reported use across multiple devices that could have been used simultaneously, average values may overestimate actual exposure. Future research should integrate digital trace data [] and wrist-worn accelerometer data [] for more accurate measurements.

    Second, measurement simplifications—using total screen time and an abbreviated version of the Pediatric Daytime Sleepiness Scale []—may have reduced precision and obscured associations with sleep []. Future studies should use more detailed measures that account for media functions, content, and context of use [,].

    Third, with only 3 waves spaced 6 months apart, the design was insufficient for modeling longer-term developmental trajectories [,] or accounting for seasonal variability in screen time and sleep [,]. Longer follow-up and more frequent measurement occasions would allow finer modeling of these changes.

    Fourth, attrition was higher than in comparable school-based studies [-], likely because data were collected through an online panel and required the agreement of both adolescent and parent or caregiver. Online panels typically exhibit higher attrition rates due to the sustained participant burden and email-based recontact [,], and similar rates have been reported in other adolescent panel studies []. Notably, attrition remained high despite offering substantially increased incentives (160% in Waves 1 and 2; 280% in Wave 3). Dropouts reported slightly higher baseline screen time (Tables S1 and S2 in supplementary materials provided by Tkaczyk et al []), which may limit generalizability to heavy screen users.

    Finally, data collection partially overlapped with COVID-19 social distancing measures, which were associated with increased screen time and later bedtimes among adolescents [,]. The stringency of restrictions varied across waves: Wave 1 (June 2021) coincided with the strictest measures, Wave 2 (November-December 2021) with moderate restrictions, and Wave 3 (May-June 2022) after their removal []. This variation may partly explain the observed decrease in screen time and the stability of bedtime between Waves 1 and 2.

    Conclusion

    This study is the first to test reciprocal longitudinal associations among adolescents’ screen time, bedtime, and daytime sleepiness while separating between- and within-person processes, thereby addressing bias common in prior cross-lagged panel studies. The findings refine theoretical understanding by showing a complex, bidirectional, and mutually reinforcing interplay between screen time and bedtime over time, even after accounting for stable individual differences. Between-person associations revealed that adolescents with higher screen use had poorer sleep, likely reflecting the influence of relatively stable individual and environmental factors. Although specific cross-lagged effects varied across waves, the overall pattern supports both the screen-time-affecting-sleep and sleep-impairment-affecting-screen-time pathways, whereas daytime sleepiness was not directly involved in this dynamic. Positively correlated within-person fluctuations in screen time and bedtime further indicate that screen use and sleep partly compete for the same hours and may share contextual drivers.

    Screen time restriction before sleep did not moderate within-person effects. However, at the between-person level, adolescents who practiced it reported lower screen use, earlier bedtimes, and less daytime sleepiness. Taken together, these findings suggest that interventions emphasizing consistent sleep schedules and supportive family routines—rather than focusing solely on limiting screen use—may be most effective for promoting adolescent sleep health. Future research should incorporate objective measurements on multiple time scales and relevant moderators.

    The authors thank Dr David Lacko, Dr Vojtěch Mýlek, and Martin Tancoš for their thoughtful consultations during the preparation of this manuscript. During the preparation of this work, we used the generative artificial intelligence (AI) tool ChatGPT by OpenAI [] to improve language clarity and readability. After using this service, we reviewed and edited the content as needed and take full responsibility for the publication’s content.

    This work has been funded by a grant from the Programme Johannes Amos Comenius under the Ministry of Education, Youth and Sports of the Czech Republic from the project “Research of Excellence on Digital Technologies and Wellbeing CZ.02.01.01/00/22_008/0004583” which is co-financed by the European Union. The work of AJK was supported from Operational Programme Johannes Amos Comenius—Project MSCAfellow5_MUNI (No. CZ.02.01.01/00/22_010/0003229). The funding sources were not involved in any research decisions.

    The data used in this study are openly available on the Open Science Framework (OSF) [].

    None declared.

    Edited by S Brini; submitted 13.Jun.2025; peer-reviewed by R Taylor, T Poulain; comments to author 08.Sep.2025; revised version received 01.Dec.2025; accepted 02.Dec.2025; published 21.Jan.2026.

    ©Michał Tkaczyk, Albert J Ksinan, David Smahel. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 21.Jan.2026.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


  • Gates Foundation, OpenAI Launch $50M AI Health Initiative in Africa

    Gates Foundation, OpenAI Launch $50M AI Health Initiative in Africa

    The Gates Foundation and OpenAI have announced a $50 million initiative, Horizon1000, to harness artificial intelligence (AI) for strengthening primary health care across African countries, starting in Rwanda, according to Reuters.1 The collaboration seeks to equip up to 1000 clinics with AI‑driven tools that support health workers, improve care delivery, and help counter the effects of declining international aid in regions with acute health workforce shortages.

    “It is about using AI responsibly to reduce the burden on health care workers, to improve the quality of care, and to reach more patients,” Paula Ingabire, Rwanda’s minister of information and communications technology and innovation, said in a video statement released on Wednesday.

    Horizon1000 seeks to address the growing challenges faced by African health systems, including declining international aid and shortages of trained medical staff. By integrating AI into primary care workflows, the initiative aims to enhance clinical decision-making, streamline operations, and improve outcomes for patients in underserved regions.

    Last year, the Trump administration’s decision to pause US foreign development assistance for 90 days pending a policy review triggered widespread concern among humanitarian and development experts, as it effectively halted the flow of funds that support health, food security, education, and civil society programs in dozens of countries, according to Reuters.2 Critics warned that this abrupt suspension, which applies to most United States Agency for International Development and State Department foreign aid programs unless specifically exempted, forced organizations to suspend operations, disrupted services like HIV treatment and nutritional support, and left millions of vulnerable people without critical assistance.

    Human rights groups argued that the sudden cuts create a life‑threatening vacuum in essential services that other donors cannot immediately fill, undermining US global health and development leadership.

    Using AI will help health systems get back on track after global development assistance for health fell by just under 27% last year compared with 2024, Gates told Reuters.1

    The collaboration underscores the potential for technology-driven solutions to strengthen health systems in low-resource settings. OpenAI and the Gates Foundation envision that these efforts could serve as a model for scalable, AI-supported health care interventions across the continent.

    AI has significant potential to transform health care in Africa by supporting the achievement of the United Nations’ Sustainable Development Goal 3, which aims to ensure healthy lives and well-being for all, according to one study.3 Although AI has been applied in medical settings globally since the 1970s, its adoption in Africa has largely been limited to pilot projects addressing maternal and child health, infectious diseases, and non-communicable conditions.

    “We recommend the acceleration of the ongoing improvement in Africa’s infrastructure, especially electricity and Internet penetration,” wrote the researchers of the study. “Reliable power supply and affordable Internet services will catalyze data generation and analysis needed for advanced automation of processes involved in patient care. Widespread use of electronic medical records and large medical databases will foster machine learning that is programmable for Africa.”

    Key challenges include limited clinical data, inadequate digital infrastructure, high costs, legal and policy gaps, and risks of algorithmic bias. Despite these barriers, AI applications—from disease diagnosis and drug authentication to health worker scheduling and telehealth—show promise in improving care quality, efficiency, and access.

    Accelerating infrastructure development, expanding local AI expertise, leveraging smartphones, and implementing supportive policies are critical steps for scaling AI in African health care systems.

    “A major lesson from the experience of AI professionals working in resource-poor settings is that AI implementation should focus on building intelligence into existing systems and institutions rather than attempting to start from scratch or hoping to replace existing systems,” wrote the researchers. “African countries must also enact laws and policies that will guide the application of this technology to health care and protect the users.”

    References

    1. Rigby J. Gates and OpenAI team up for AI health push in African countries. Reuters. January 21, 2026. Accessed January 21, 2026. https://www.reuters.com/business/healthcare-pharmaceuticals/gates-openai-team-up-ai-health-push-african-countries-2026-01-21/

    2. Trump pauses US foreign aid for 90 days pending review. Reuters. January 20, 2025. Accessed January 21, 2026. https://www.reuters.com/world/us/trump-pauses-us-foreign-aid-90-days-pending-review-2025-01-21/

    3. Owoyemi A, Owoyemi J, Osiyemi A, Boyd A. Artificial intelligence for healthcare in Africa. Front Digit Health. 2020;2:6. doi:10.3389/fdgth.2020.00006


  • A UW materials lab probes the mysteries of toughness at the nano scale – UW News

    A UW materials lab probes the mysteries of toughness at the nano scale – UW News

    Researchers in the Meza Research Group at the University of Washington draw inspiration from natural structures to develop new materials. On the left is a scanning electron microscope (SEM) image of naturally occurring spider silk. On the right is an SEM image of an engineered plastic material with a similar structure. The plastic is foamed using tiny carbon dioxide bubbles to make it lighter and tougher. Photo: Haynl et al./Nature Scientific Reports (left) and Dwivedi et al./Journal of the Mechanics and Physics of Solids (right).

    Biology is full of architecture. Materials like wood, crab shells and bone all contain microscopic structures such as layers, lattices, cells and interwoven fibers. Those structures give natural materials an ideal combination of lightness and toughness, and they’ve inspired engineers to build artificial materials with similar properties. But how those tiny architectures lead to such tough materials is something of a mystery.

    In 2019, Lucas Meza, assistant professor of mechanical engineering, set up the Meza Research Group at the University of Washington to tease out the mechanical secrets of structures that are as small as 100 nanometers, which is about the size of a virus. He arrived with an ambitious plan to build a new generation of nanomaterials, but soon discovered that the field was missing a fundamental understanding of toughness at tiny scales.

    “We had to go back to basics,” Meza said. 

    In the years since, Meza and his team have flipped the script on nanomaterial toughness. They’re applying what they’ve learned to new kinds of bespoke materials, though along the way they’re still surprised by tiny structures behaving in ways they theoretically shouldn’t.

    Meza spoke with UW News about his strange and surprising journey into the nano realm.

    What questions did you establish your lab to tackle?

    Lucas Meza: Very broadly, we’re trying to design better materials, but not by introducing new material chemistries. Instead, we use architecture. This is something humans have done throughout history — think of woven textiles and fabrics, or straw-reinforced mud bricks. These are “architected materials,” where the structure of materials allows us to control useful properties like strength, toughness and flexibility. 

    The thing that I was particularly interested in was introducing architecture at the nanoscale. What if, instead of building a wall with bricks, we could use nanoplatelets? Or instead of making fabrics with yarn, we could use nanofibers? How would those properties change?

    Engineers have found that nanomaterials are stronger, more flaw resistant and more deformable. The challenge is: How do you actually do something with them? We need to build them into large-scale materials in a way that preserves their unique nanoscale properties. 

    What material properties are you most interested in?

    LM: We’re using architecture to tinker with a few interrelated properties. The first is a material’s strength, which is how much stress it can take before it permanently deforms. The second is ductility, which is how much a material can stretch before it breaks. Those two features sort of combine to determine a material’s toughness, which is the total amount of energy you have to put into a material to break it.

    To give a couple of opposing examples: A ceramic plate is strong, meaning it can take a lot of stress, but it has very low ductility, meaning it barely deforms before breaking. So overall, it’s not a very tough material. Conversely, a rubber band is not strong at all — you can bend and stretch it with very little stress. But, it’s extremely ductile — it can stretch to many times its original dimensions without snapping. So as a result, rubber is very tough.
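
    One way to make these definitions concrete (a textbook formalization, not a claim from the interview) is to write toughness as the area under the stress-strain curve, so that both strength and ductility contribute:

    $$U_T = \int_0^{\varepsilon_f} \sigma(\varepsilon)\, d\varepsilon$$

    Here $\sigma$ is stress, $\varepsilon$ is strain, and $\varepsilon_f$ is the strain at fracture: a strong but brittle ceramic gives a tall, narrow curve, a weak but stretchy rubber gives a low, wide one, and the enclosed area is the energy absorbed per unit volume before failure.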


    Toughness is a particularly interesting property to study because there’s no limit on how tough a material can be. There are very hard limits on how strong and how stiff a material can be, and you can use architecture to optimize them, but you can’t exceed the properties of the base material. On the other hand, you can use architecture to improve the overall toughness of a material. 

    Nature has already created a lot of really interesting micro- and nano-structures. Every natural material has to be porous to transport nutrients, and on top of that we see things like lattices in some bone and in sea sponges; shells all have layered architectures; wood and bone are fiber composites; and all of this happens at the micro- and nanoscale. 

    There had to be a reason that nature was making these architectural motifs at the micro and nanoscale, and I had a strong hunch that it had to do with toughness. 

    What has your lab learned about toughness at the small scale?

    LM: Initially, we learned a surprising amount about what we didn’t know. My thought in getting into this work was that people know enough about fracture mechanics — how things break and why — so we can just dive into making really complicated architectures and studying their toughness, like this nanoBouligand material made by my former doctoral student, Zainab Patel. But we realized the scientific community has some big gaps in its understanding of fracture toughness. So instead, we had to go simple — basically we pulled and pushed and broke a lot of small things to understand what gives a material ductility and toughness.

    We learned that all material behavior centers around something called a “plastic zone size.” Basically, when you pull on a part that has a crack, a little ball of energy builds up right at the tip of that crack. That energy ball grows as you add more stress, and at a certain point it shoots through the sample and causes a break. The size of the ball at its breaking point is the material’s plastic zone size, and it’s different for every material. 

    We realized that what makes a material ductile or not is the ratio between its size and the plastic zone size. If a material is smaller than its plastic zone size, that ball of energy can’t grow big enough to cause the crack to grow, so instead it spreads outward and the material bends. 
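
    As a rough quantitative anchor (a standard textbook estimate, not a figure from the lab), the plane-stress plastic zone size scales as

    $$r_p \approx \frac{1}{2\pi}\left(\frac{K_{\mathrm{Ic}}}{\sigma_y}\right)^{2}$$

    where $K_{\mathrm{Ic}}$ is the fracture toughness and $\sigma_y$ is the yield strength. When a sample, strut, or fiber is smaller than $r_p$, the crack-tip energy cannot localize enough to drive the crack forward, so the material deforms instead of snapping, which is the size effect described here.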

    The four material samples in this video are all the same size, but structural differences at the nanoscale produce different levels of ductility. In each example, the cyan color represents the sample’s plastic zone size. In less ductile samples, the cyan-colored area remains small and the material snaps, whereas in more ductile samples, the cyan area spreads out and the material stretches. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids

    This is the key for how to use architecture to cheat and get more ductility out of a material. If you take a brittle material and make a nanoscale lattice or foam out of it, the building blocks magically become ductile. The new tougher “architected material” can also have a larger plastic zone size, sometimes as much as 100 times larger, meaning it is likely to be ductile as well. This is why things like fabrics and meshes can be really hard to tear. 

    How are you applying what you’re learning to real-world materials?

    LM: We’re building lots of our material architectures painstakingly at the small scale using resources like the Washington Nanofabrication Facility and the UW Molecular Analysis Facility. That “bottom-up” approach — building things one nanofeature at a time — gives us lots of control over the building blocks we’re playing with, but it’s a real challenge to scale.

    The “top-down” approach, where you let physics and kinetics just self-assemble things for you, is much easier. One example is “solid-state foaming,” a technique my colleague Vipin Kumar has been working on for decades. Basically, you take a thermoplastic material — something that melts when you heat it up — throw it in a chamber with high-pressure carbon dioxide so it saturates the sample, then heat it up so that the dissolved gas forms tiny bubbles in the material. With this process we have less control over the precise architecture — it’s a random foam — but by controlling the amount of dissolved gas we can easily control the size of the bubbles. Those materials turned out to be super tough! My doctoral student Kush Dwivedi has a paper on nanofoam fracture, where we show they could even be tougher than the material they were made from. This goes against everything we knew about normal foam fracture processes. 

    A plastic nanofoam material created by Kush Dwivedi, a doctoral student in Meza’s lab, seen at 2,500x, 12,000x and 35,000x magnifications. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids.

    I’m currently pursuing an earlier-stage commercialization effort to use tiny foams as a filtration material for biomedical applications. We can make nanoporous filter materials — think of the reverse osmosis system that might be under your sink — but we can do it without using any of the harsh chemical processes that are currently used. 

    I also recently got an NSF CAREER grant to study fracture in architected materials, and we’re exploring ways to make tougher sustainable and biodegradable materials. Think of the last time you used a biodegradable fork that broke off in your food. Materials like wood are actually great alternatives for this, but we’re trying to figure out how to do it without cutting down a tree or harvesting bamboo. 

    For more information contact Meza at lmeza@uw.edu.


  • Journal of Medical Internet Research

    Journal of Medical Internet Research

    Key Takeaways

    • UnitedXR Europe 2025 highlighted that health care extended reality (XR) is no longer constrained by technical capability but by alignment between industry, academic evidence, and clinical governance.
    • Persistent gaps remain between how success, risk, and readiness are defined by developers, researchers, and health care professionals.
    • Formats that enable direct cross-stakeholder dialogue may be as critical as technological advances for translating XR potential into routine clinical practice.

    Extended reality (XR) is not new to health care, but over the past decade, its use has become more widespread in medical training, rehabilitation, pain management, and mental health care, supported by a growing body of evidence [-]. Despite clear gains in technical maturity, the adoption of XR in health care remains uneven. Immersive tools are increasingly capable, yet their integration into routine clinical practice appears to depend less on technical performance than on organizational readiness, governance, and professional acceptance—a pattern well described in implementation research [,].

    UnitedXR Europe 2025 offered a concentrated view of this tension and an effort to resolve it. The event marked the first joint edition of two historically distinct strands of the European XR ecosystem. Stereopsia—anchored in Brussels since 2009—has long served as a meeting point for immersive research, cultural production, and policy dialogue. In parallel, the Augmented World Expo evolved into a global, industry-led platform focused on enterprise deployment and market scale.

    Their integration under UnitedXR Europe has brought these traditions into direct contact, highlighting gaps and areas of convergence, as well as creating a shared space for dialogue between industry, academia, health care, and policy actors.

    UnitedXR Europe 2025 brought together more than 125 exhibitors across 14 parallel tracks, including a dedicated Healthcare, Pharma, & Wellbeing track. Across the event’s agenda, health care XR appeared to be entering an infrastructure phase, positioned not as a peripheral demonstration but as a maturing vertical confronted with questions of scale, integration, and long-term sustainability.

    Beyond formal sessions, interaction extended into curated environments, such as the European Market for Immersive Creativity and business-to-business matchmaking between developers, researchers, and institutional actors. Roundtables, workshops, start-up pitch competitions, and the European XR Awards Gala reinforced the event’s dual role as a space for exchange and an indicator of technical maturity.

    This shift was visible on the expo floor, where novelty gave way to utility. Across the XR ecosystem, major headset vendors relied on real-world implementation partners to demonstrate use cases rather than on product-centric stands, with a dedicated demonstration pavilion hosted by the International Virtual Reality Healthcare Association. Within this context, hands-on demonstrations, such as those by VirtualiSurg [], illustrated how VR is being embedded into established training pathways, particularly when combined with modular haptic systems, such as those provided by SenseGlove [].

    The scientific program reflected a similar maturation. As noted by Oliver Schreer, PhD, track chair, submissions increased from barely a dozen in previous editions to over 45 this year, accompanied by a clear shift toward more application-oriented research.

    Moving between tracks revealed a persistent asymmetry. Enterprise discussions were supported by a settled language of scaling, procurement, and return on investment. Health care discussions, by contrast, were anchored in regulation, safety, and professional accountability. Side by side, these perspectives exposed a central tension: while the XR industry largely seems prepared to scale, health care systems are still negotiating why, and under what conditions, that scale should occur.

    What emerged from these observations was a coordination problem. The remaining barriers to health care XR adoption lie less in hardware or software performance than in the absence of shared decision-making structures.

    This gap surfaced repeatedly in discussions about adoption criteria. In one interactive panel within the health care track, participants were asked to rank factors that guided XR implementation. Priorities clearly diverged, with safety treated as a nonnegotiable baseline by some and as one consideration among many by others. The exchange was brief but revealing, with high audience participation.

    Audience engagement during an interactive panel session. Faces have been blurred to protect participant privacy. Photograph credit: Sonya Seddarasan, track chair for Healthcare, Pharma, & Wellbeing at UnitedXR Europe 2025.

    What this exchange made visible was that XR initiatives often stall not because immersive systems fail to perform but because organizations lack an agreed framework for guiding early-stage decisions about readiness for clinical use. A recurring takeaway from the session was the growing consensus that health care XR must move beyond repeated proofs of concept. Without a shared way to assess safety, credibility, and contextual fit before implementation begins, pilots tend to accumulate without a clear pathway toward sustained, real-world integration.

    Points of agreement and divergence became most visible during interactive panel sessions. For example, open discussion forced participants to make their criteria explicit and revealed how differently evidence, risk, and readiness were understood across institutional and professional contexts.

    The organizers appeared attentive to this dynamic. Sonya Haskins—Augmented World Expo’s head of programming—noted that future editions may place greater emphasis on roundtable formats intended to support cross-disciplinary negotiation. Alexandra Gérard—codirector of UnitedXR Europe—similarly pointed out the need to broaden participation within the health care track, including participation by patients and frontline practitioners with direct insight into care delivery.

    For health care XR, where adoption depends on trust and legitimacy as much as it does on performance metrics, creating conditions for this kind of dialogue may be as consequential as any technical advance.

    Health care discussions at UnitedXR Europe reflected a growing recognition that immersive technologies carry a different kind of responsibility than that of most digital health tools. XR was framed not merely as a delivery medium but as a technology that directly shapes perception, attention, and embodied experience—a distinction that is well established in prior experimental and neuroscientific work [,,].

    This concern surfaced most clearly in informal exchanges. Sonya Haskins described XR creation as “hacking the brain,” using the phrase as a metaphor to underscore why immersive systems cannot be treated as neutral software development. When technologies act directly on perception and experience, questions of intent, safety, and oversight move rapidly to the foreground.

    Policy frameworks are beginning to respond, albeit unevenly. The event coincided with the launch of the European Partnership for Virtual Worlds, which explicitly identifies health care as a strategic domain []. This marks progress when compared with earlier global digital health strategies, including those of the World Health Organization, where immersive technologies remain largely unnamed despite their growing use in practice [,].

    Terminology remains a point of friction. As several clinicians noted, framing clinical XR under the umbrella of “virtual worlds” sits uneasily with health care practice. Recent European policy work differentiates professional and health care uses of immersive technologies from consumer-oriented virtual worlds, emphasizing that clinical applications are bounded; task specific; and subject to heightened safety, ethical, and governance requirements []. In this sense, UnitedXR functioned as a translation layer, highlighting where policy language must be refined to align with the risk-aware logic of patient care.

    UnitedXR Europe 2025 made clear that health care XR has moved beyond the phase of technical proof. What remains unresolved is not what the technology can do but how it is integrated into the social, ethical, and organizational fabric of health care. From my perspective as a clinician-researcher involved in real-world XR deployment, these tensions are familiar from practice and, importantly, addressable.

    The challenge ahead is one of alignment: synchronizing industrial velocity with clinical deliberation and narrowing the vocabulary gap between policymakers, developers, and practitioners.

    As this event demonstrated, that work rarely happens through polished presentations alone. It happens in shared spaces where assumptions are tested, priorities collide, and the real conditions for responsible adoption begin to take shape.

    None declared.

    © JMIR Publications. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 21.Jan.2026.


  • Boehringer Ingelheim Launches Phase 2 Trial of IL-11 Antibody for IPF Treatment

    Boehringer Ingelheim Launches Phase 2 Trial of IL-11 Antibody for IPF Treatment

    A new clinical trial (NCT07036523) evaluating a novel monoclonal antibody to treat patients with idiopathic pulmonary fibrosis (IPF), conducted by Boehringer Ingelheim, a biopharmaceutical company and investor in research and development, began recruiting participants in Germany.1,2

    The drug BI 765423 signifies another step forward in Boehringer Ingelheim’s goal to advance innovative care for those living with IPF.1 Despite numerous standard therapies that may help slow the progression of the disease, there remains an unmet need to stop disease progression and reverse lung tissue damage completely. Prior research and ongoing clinical trials suggest a potential breakthrough in stopping the progression of the disease with experimental, targeted, oral therapies.3 BI 765423 targets IL-11, a pleiotropic cytokine that is a key biomarker and mediator of fibrosis.1

    “With BI 765423, we aim to go beyond slowing disease and to pursue next-generation therapies that could restore lung functionality for people living with IPF,” Vittoria Zinzalla, global head of Experimental Medicine at Boehringer Ingelheim, said in a press release.

    BI 765423 is designed to bind directly to IL-11, thus interrupting the signaling pathways that cause fibrosis. Prior preclinical studies have shown that inhibiting IL-11 can halt fibrosis and restore barrier function.1 Another phase 2a clinical trial demonstrated reduced type 2 (IL-4 and IL-13) and type 3-associated cytokines (IL-17a and IL-22) when targeting invariant natural killer T (iNKT) cells in patients with IPF, supporting Boehringer Ingelheim’s objective to target profibrotic cytokines, which may reduce fibrosis and lung tissue damage in patients with IPF while restoring lung functionality.

    In phase 1 of the biopharmaceutical company’s clinical trial, BI 765423 demonstrated favorable safety and tolerability in participants. Phase 2 of the clinical trial aims to evaluate the efficacy of the drug.

    The double-blind, randomized, placebo-controlled, parallel-group study is open to adults with IPF aged 40 or older of either sex. Participants must have a forced vital capacity (FVC) greater than or equal to 45% and fibrosis of 20% or more confirmed by high-resolution computed tomography. Patients must also have a hemoglobin-corrected diffusing capacity of the lungs for carbon monoxide (DLCO) of 20% or greater.

    Participants will be randomized 1:1 to receive BI 765423 or a placebo, administered intravenously every 4 weeks for 8-10 months. During the study, participants can continue their regular treatment for IPF and will frequently visit the study site for screenings, treatment, and follow-up.2

    Some of the outcomes expected to be measured include absolute change in FVC, DLCO, oxygen saturation, distance walked during a 6-minute walk test, and log10-transformed SPD plasma concentration from baseline to 12 weeks.

    There are currently 71 participants enrolled in the clinical trial. The study is expected to be completed in September 2027. Of the 50 study locations, 6 sites have begun recruiting participants: Chesterfield, Missouri; Franklin, Tennessee; Chermside, Queensland, Australia; Hanover, Germany; Yoshida-gun, Fukui, Japan; and Busan, South Korea.

    Although Boehringer Ingelheim is evaluating the safety and efficacy of BI 765423, the company acquired the drug from Enleofen to continue its development and maintain control. The drug’s intellectual property—including patents, data, and know-how—was in-licensed by Enleofen from Singapore Health Services and the National University of Singapore.1

    “Our aim is to transform the lives of patients and their families by demonstrating the potential of this first-in-class IL-11 inhibitor to deliver clear benefits for patients, backed by compelling evidence and delivered at speed,” Zinzalla said.

    References:

    1. Boehringer Ingelheim advances potential first-in-class IL-11 inhibitor to phase II clinical research in idiopathic pulmonary fibrosis. News release. Boehringer-Ingelheim. January 13, 2026. Accessed January 20, 2026. https://www.boehringer-ingelheim.com/science-innovation/human-health-innovation/new-trial-explores-advancements-ipf-treatment

    2. A study to find out whether BI 765423 has an effect on lung function in people with idiopathic pulmonary fibrosis (IPF) with or without standard treatment. Clinicaltrials.gov. January 6, 2026. Accessed January 20, 2026. https://clinicaltrials.gov/study/NCT07036523?a=11#contacts-and-locations

    3. McCrear S. Phase 2a trial shows Gri-0621 improves biomarkers and lung repair in IPF. AJMC. January 20, 2026. Accessed January 20, 2026. https://www.ajmc.com/view/phase-2a-trial-shows-gri-0621-improves-biomarkers-and-lung-repair-in-ipf

    4. Gri Bio announces additional positive data from phase 2A study in idiopathic pulmonary fibrosis, strengthening clinical proof-of-concept for GRI-0621. News release. GRIbio. January 8, 2026. Accessed January 20, 2026. https://gribio.com/gri-bio-announces-additional-positive-data-from-phase-2a-study-in-idiopathic-pulmonary-fibrosis-strengthening-clinical-proof-of-concept-for-gri-0621


  • Gen Xers the new baby boomers: analysis identifies Australia’s richest landholders by generation | Housing

    Gen Xers the new baby boomers: analysis identifies Australia’s richest landholders by generation | Housing

    Gen X households now hold the most property wealth of any generation, as baby boomers downsize their homes and move more of their money into cash and retirement accounts.

    Once known as the “slacker generation”, those born between 1965 and 1980 are mostly now aged over 50 years and have enjoyed years of inflated home prices.

    Gen X households average $1.455m in wealth from dwellings and land, according to an analysis of ABS and census data by KPMG.


    This compared to $1.36m in average property wealth among boomers – who remain the wealthiest overall, due to their super holdings and relative lack of debts.

    Terry Rawnsley, an urban economist at KPMG, said the figures showed a generational “passing of the baton” when it came to property riches, which remains a “cornerstone” of Australians’ wealth.

    In contrast, household property wealth among millennials – aged between 29 and 44 – was $890,000 on average, the analysis shows, reflecting lower rates of home ownership among this generation.


    But with housing more unaffordable than ever, home ownership among younger households has dropped sharply and risks a major rise in intergenerational inequity, Rawnsley said.

    By age bracket, households of those aged between 25 and 34 had $575,000 in property wealth on average – although this is offset by an average $346,000 in debt.

    Rawnsley said while home ownership rates among older generations hovered about 80%, only around half of households in this younger cohort owned homes.

    “The median house price is almost $1m across the country, so with that 50% home ownership rate you get to $500,000 in property wealth pretty easily.”


    It is the remaining half of younger households who risk being left behind by the time they are in their 40s and 50s if they are unable to get on to the property ladder.

    “If you miss out on that property purchase when you are in your 20s or 30s, the impact of that on your wealth will carry on for the next 30-40 years,” Rawnsley said.

    With only slightly more than one in 10 homes for sale affordable for the average first home buyer, according to KPMG research, the pressure on younger Australians to make the right decisions was growing.

    “There are a lot more difficult questions being asked of a 25-year-old today than there were 10, 20 or 40 years ago,” Rawnsley said.

    “Being a ‘forever renter’ is not just about a lifestyle choice, it can be about locking in generational disadvantage – and not just for you, but for your kids as well, given the growing role of the bank of mum and dad.”

    There has been strong criticism of the government’s first home buyer scheme on the basis it simply adds to demand, and therefore prices.

    But Rawnsley’s “contrarian” view was that paying what he judged as 1-2% more for a home because of such schemes was worth it.

    “If someone can save five years of rent in Sydney and get into the property market five years earlier, then that means a lot of people who can’t rely on the bank of mum and dad will ultimately be better off.”


  • CrowdStrike Endpoint Security Delivers 273% ROI in Forrester Study

    CrowdStrike Endpoint Security Delivers 273% ROI in Forrester Study

    Total Economic Impact study quantifies the ROI organizations achieve by modernizing endpoint security with CrowdStrike

    AUSTIN, Texas – January 21, 2026 – CrowdStrike (NASDAQ: CRWD) today announced the findings of a commissioned Total Economic Impact™ (TEI) study, conducted by Forrester Consulting on behalf of CrowdStrike. The study found that a composite organization representative of interviewed customers that replaced legacy endpoint security with CrowdStrike achieved a 273% return on investment (ROI) by reducing breach risk and simplifying security operations, with a payback period of under six months, and $5 million in total quantified benefits over three years.
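
    The headline figures follow from simple ratio arithmetic. The sketch below shows how a TEI-style ROI and payback period are typically derived; only the $5 million three-year benefit figure comes from the announcement, while the cost assumptions are hypothetical values chosen purely to illustrate the calculation (Forrester’s underlying cost model is not detailed in the release).

    ```python
    # Illustrative TEI-style ROI and payback arithmetic.
    # Only the $5M three-year benefit figure comes from the announcement;
    # the cost assumptions below are hypothetical, for illustration only.

    months = 36
    total_benefits = 5_000_000        # quantified benefits over 3 years (per the study)
    upfront_cost = 550_000            # assumed initial licensing/deployment cost
    monthly_cost = 21_750             # assumed ongoing monthly cost

    total_costs = upfront_cost + monthly_cost * months
    monthly_benefit = total_benefits / months

    # Usual TEI convention: ROI = (benefits - costs) / costs.
    roi = (total_benefits - total_costs) / total_costs
    print(f"ROI over 3 years: {roi:.0%}")          # roughly in line with the reported ~273%

    # Payback period: first month in which cumulative net benefit turns positive,
    # assuming (for illustration) that benefits accrue evenly from month one.
    cumulative = -upfront_cost
    payback_month = None
    for month in range(1, months + 1):
        cumulative += monthly_benefit - monthly_cost
        if payback_month is None and cumulative >= 0:
            payback_month = month
    print(f"Payback period: about {payback_month} months")   # under six months
    ```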

    “The endpoint is a primary risk and productivity point in today’s enterprise, but many organizations are still relying on legacy endpoint security built for a different threat era,” said Elia Zaitsev, chief technology officer at CrowdStrike. “Our Forrester study shows that modern endpoint security isn’t just more effective, it’s more economically rational. Replacing legacy endpoint approaches with CrowdStrike reduces breach risk, simplifies operations, and delivers measurable ROI that makes the decision to modernize clear.”

    Endpoint Security Modernization Drives Measurable Outcomes

    Key findings from the Forrester TEI study include clear economic and operational value tied directly to endpoint consolidation and modernization, including: 

    • Economic Value from Endpoint Modernization: CrowdStrike Endpoint Security delivered $5 million in total benefits over three years, driven by lower technology and labor costs, simplified security operations, and faster deployment across new environments and acquisitions.
    • Stopping Breaches at the Endpoint: Interviewed organizations reported a significant reduction in endpoint-related breach risk, with Forrester quantifying $1.7 million in avoided breach-related costs over three years for a representative organization based on four interviewed customers.
    • Improved Analyst Experience – by Design: By deploying a single, lightweight endpoint sensor, organizations reduced endpoint security management labor by 95% and significantly reduced alert noise and false positives, allowing analysts to focus on real threats and accelerate investigations without adding headcount.
    • Built for Consolidation and Scale: The study notes that Falcon’s cloud-native, single-sensor architecture enables organizations to expand protection across identity, next-gen SIEM, cloud security, and additional modules without new deployments or operational disruption.


    Customer interviews:

    “[Our legacy provider] was very hard to manage and we wanted to go to something simpler. Then we looked at CrowdStrike, did the proof of concept, we liked it, and we decided to go all in. We have their Endpoint product, Identity product, and then some of the other SIEM solutions as well.” – Enterprise Security Manager, Oil & Gas 

    “I was pleasantly surprised by how, from just that single agent deployment, we were able to expand past EDR with little to no effort and there weren’t additional deployments.”  – Director of Cyber Defense, Healthcare

    “The visibility that we get in CrowdStrike is second to none. Being able to query and do those types of investigations across your enterprise at a moment’s notice in five minutes is just really handy.” – CISO, Retail

    To learn more about the Total Economic Impact™ study and CrowdStrike Endpoint Security, visit our website and read our blog.

    About CrowdStrike

    CrowdStrike (NASDAQ: CRWD), a global cybersecurity leader, has redefined modern security with the world’s most advanced cloud-native platform for protecting critical areas of enterprise risk – endpoints and cloud workloads, identity and data.

    Powered by the CrowdStrike Security Cloud and world-class AI, the CrowdStrike Falcon® platform leverages real-time indicators of attack, threat intelligence, evolving adversary tradecraft, and enriched telemetry from across the enterprise to deliver hyper-accurate detections, automated protection and remediation, elite threat hunting, and prioritized observability of vulnerabilities.

    Purpose-built in the cloud with a single lightweight-agent architecture, the Falcon platform delivers rapid and scalable deployment, superior protection and performance, reduced complexity, and immediate time-to-value.

    CrowdStrike: We stop breaches.

    Learn more: https://www.crowdstrike.com/

    Follow us: Blog | X | LinkedIn | Instagram

    Start a free trial today: https://www.crowdstrike.com/trial

    © 2026 CrowdStrike, Inc. All rights reserved. CrowdStrike and CrowdStrike Falcon are marks owned by CrowdStrike, Inc. and are registered in the United States and other countries. CrowdStrike owns other trademarks and service marks and may use the brands of third parties to identify their products and services.

    Media Contact

    Jake Schuster

    CrowdStrike Corporate Communications

    press@crowdstrike.com




  • Next buyout saves footwear brand Russell & Bromley but 400 jobs likely to be lost | Next

    Next buyout saves footwear brand Russell & Bromley but 400 jobs likely to be lost | Next

    Next has rescued the footwear retailer Russell & Bromley out of administration for £3.8m but about 400 jobs are likely to go at 33 shops not included in the deal.

    The British brand, founded in 1879 in Eastbourne, East Sussex, trades from 36 stores and nine concessions across the UK and Ireland. Next will take on only three stores – in Chelsea, Mayfair and the Bluewater shopping centre – and about 48 store staff, it is understood.

    The rescue deal, which includes Russell & Bromley’s brand and other assets including £1.3m of stock, is the latest brand acquisition for Next. The British fashion and homeware retailer now controls a swathe of labels from FatFace, Joules and Made.com to the UK distribution of Gap and Victoria’s Secret.

    Next said in a statement: “This acquisition secures the future of a much-loved British footwear brand. Next intends to build on this legacy and provide the operational stability and expertise to support Russell & Bromley’s next chapter, allowing it to return to its core mission, the design and curation of world-class, premium footwear and accessories, for many years to come.”

    Andrew Bromley, the chief executive of Russell & Bromley, which until now has been a family-owned business, said: “Following a strategic review with external advisers, we have taken the difficult decision to sell the Russell & Bromley brand. This is the best route to secure the future for the brand and we would like to thank our staff, suppliers, partners and customers for their support throughout our history.”

    Will Wright, the head of Interpath, which is acting as administrator to Russell & Bromley, said the 33 stores and nine concessions not included in the deal would remain open and continue to trade while the joint administrators continue to assess options for them.

    He said: “Across its 147-year history, Russell & Bromley has been at the forefront of contemporary style. We’re pleased therefore to have concluded this transaction, which will preserve the brand and the commitment to quality craftsmanship that it has become so well known for.

    “Our intention is to continue to trade the remaining portfolio of stores for as long as we can while we explore the options available.”


  • Start of the Circular economy: Zwickau vehicle plant launches business areas

    Start of the Circular economy: Zwickau vehicle plant launches business areas

    Andreas Walingen, Head of Group Circular Economy: “The circular economy will become increasingly important for Volkswagen AG in the coming years. It addresses key challenges facing the automotive industry: raw material resilience, decarbonisation, economic efficiency and employment. Specifically, we are pursuing the goal of reusing raw materials for the construction of new vehicles. This will make Volkswagen less dependent on the global raw materials trade, reduce the CO2 footprint of its vehicles and create new business models. The circular economy promotes technological and digital innovation and secures jobs at the site and value creation in Germany. That is the mission of the Zwickau vehicle plant. Here, we define, test and review all the necessary processes and standards. In the medium term, we will need a CE value creation network with additional locations and partnerships throughout Europe in order to scale the circular economy successfully in economic terms.”

    To get started with the circular economy, up to 90 million euros will be invested in conversion work, technical equipment and AI applications at the site over the next few years. This year, 500 pre-series vehicles (test vehicles) are already being processed. From 2027, the number of vehicles will increase. A modular dismantling concept will allow capacity to be gradually increased to 15,000 vehicles per year by 2030.

    Danny Auerswald, spokesperson for the management board of Volkswagen Saxony: “Volkswagen Saxony is once again taking on a pioneering role. We were the first plant to switch completely to e-mobility. Now we are tapping into the important business area of the circular economy. With our experience in large-scale production and the excellent university landscape in Saxony, we will examine these new business areas for the Group, present them in an economically viable manner and expand them.”

    Dirk Panter, Minister for Economic Affairs in the Free State of Saxony: “With the recycling topic here in Zwickau, we are breaking new ground for VW as a whole. The plant in Mosel is thus taking on an important function and pioneering role within the Group. Saxony can once again prove that it has solutions for the future of the automotive industry. The new project highlights the responsible use of existing resources and also offers new prospects for employees in Mosel. The diversification of the Zwickau location thus strengthens the future viability of this Saxon automotive region.”

    The circular economy will play a greater role in future apprenticeships and university courses. In close cooperation with the Volkswagen Education Institute and the West Saxon University of Applied Sciences, existing career paths and courses of study will be supplemented with content on the circular economy. The Zwickau site will thus also take on the training and further education of employees at future locations.

    The move into the circular economy was agreed for the Zwickau site during collective bargaining negotiations in December 2024. In addition to vehicle production, this business area is a second pillar for securing sustainable employment and building expertise in the Central Germany region.
