Category: 3. Business

  • Virtual Brain Inference (VBI), a flexible and integrative toolkit for efficient probabilistic inference on whole-brain models


    Understanding the complex dynamics of the brain and their neurobiological underpinnings, with the potential to advance precision medicine (Falcon et al., 2016; Tan et al., 2016; Vogel et al., 2023; Williams and Whitfield-Gabrieli, 2025), is a central goal in neuroscience. Modeling these dynamics provides crucial insights into causality and mechanisms underlying both normal brain function and various neurological disorders (Breakspear, 2017; Wang et al., 2023b; Ross and Bassett, 2024). By integrating the average activity of large populations of neurons (e.g. neural mass models; Wilson and Cowan, 1972; Jirsa and Haken, 1996; Deco et al., 2008; Jirsa et al., 2014; Montbrió et al., 2015; Cook et al., 2022) with information provided by structural imaging modalities (i.e. the connectome; Honey et al., 2009; Sporns et al., 2005; Schirner et al., 2015; Bazinet et al., 2023), whole-brain network modeling has proven to be a powerful and tractable approach for simulating brain activity and emergent dynamics as recorded by functional imaging modalities (such as (s)EEG, MEG, and fMRI; Sanz-Leon et al., 2015; Schirner et al., 2022; Amunts et al., 2022; D’Angelo and Jirsa, 2022; Patow et al., 2024; Hashemi et al., 2025).

    Whole-brain models are well established in network neuroscience (Sporns, 2016; Bassett and Sporns, 2017) for understanding brain structure and function (Ghosh et al., 2008; Honey et al., 2010; Park and Friston, 2013; Melozzi et al., 2019; Suárez et al., 2020; Feng et al., 2024; Tanner et al., 2024) and investigating the mechanisms underlying brain dynamics at rest (Deco et al., 2011; Wang et al., 2019; Ziaeemehr et al., 2020; Kong et al., 2021), normal aging (Lavanga et al., 2023; Zhang et al., 2024), and also altered states such as anesthesia and loss of consciousness (Barttfeld et al., 2015; Hashemi et al., 2017; Luppi et al., 2023; Perl et al., 2023b). This class of computational models, also known as virtual brain models (Jirsa et al., 2010; Sanz Leon et al., 2013; Sanz-Leon et al., 2015; Schirner et al., 2022; Jirsa et al., 2023; Wang et al., 2024), has shown remarkable capability in delineating the pathophysiological causes of a wide range of brain diseases, such as epilepsy (Jirsa et al., 2017; Proix et al., 2017; Wang et al., 2023b), multiple sclerosis (Wang et al., 2024; Mazzara et al., 2025), Alzheimer’s disease (Yalçınkaya et al., 2023; Perl et al., 2023a), Parkinson’s disease (Jung et al., 2022; Angiolelli et al., 2025), neuropsychiatric disorders (Deco and Kringelbach, 2014; Iravani et al., 2021), stroke (Allegra Mascaro et al., 2020; Idesis et al., 2022), and focal lesions (Rabuffo et al., 2025). In particular, they enable the personalized simulation of both normal and abnormal brain activities, along with their associated imaging recordings, thereby stratifying between healthy and diseased states (Liu et al., 2016; Patow et al., 2023; Perl et al., 2023a) and potentially informing targeted interventions and treatment strategies (Jirsa et al., 2017; Proix et al., 2018; Wang et al., 2023b; Jirsa et al., 2023; Hashemi et al., 2025). Although a few tools are available for forward simulations at the whole-brain level, for example the brain network simulator The Virtual Brain (TVB; Sanz Leon et al., 2013), tools for addressing the inverse problem, that is, finding the set of control (generative) parameters that best explains the observed data, are lacking. This study aims to bridge this gap by addressing the inverse problem in large-scale brain networks, a crucial step toward making these models operable for clinical applications.

    Accurately and reliably estimating the parameters of whole-brain models remains a formidable challenge, mainly due to the high dimensionality and nonlinearity inherent in brain activity data, as well as the non-trivial effects of noise and network inputs. A large number of previous studies in whole-brain modeling have relied on optimization techniques to identify a single optimal value from an objective function, scoring the model’s performance against observed data (Wang et al., 2019; Kong et al., 2021; Cabral et al., 2022; Liu et al., 2023). This approach often involves minimizing metrics such as the Kolmogorov-Smirnov distance or maximizing the Pearson correlation between observed and generated data features such as functional connectivity (FC), functional connectivity dynamics (FCD), and/or power spectral density (PSD). Although fast, such a parametric approach results in only point estimates and fails to capture the relationship between parameters and their associated uncertainty. This limits the generalizability of findings and hinders identifiability analysis, which explores the uniqueness of solutions. Furthermore, optimization algorithms can easily get stuck in local extrema, requiring multi-start strategies to address potential parameter degeneracies. These additional steps, while necessary, ultimately increase the computational cost. Critically, the estimation heavily depends on the form of the objective function defined for optimization (Svensson et al., 2012; Hashemi et al., 2018). These limitations can be overcome by employing Bayesian inference, which naturally quantifies the uncertainty in the estimation and statistical dependencies between parameters, leading to more robust and generalizable models. Bayesian inference is a principled method for updating prior beliefs with information provided by data through the likelihood function, resulting in a posterior probability distribution that encodes all the information necessary for inferences and predictions. This approach has proven essential for understanding the intricate relationships between brain structure and function (Hashemi et al., 2021; Lavanga et al., 2023; Rabuffo et al., 2025), as well as for revealing the pathophysiological causes underlying brain disorders (Hashemi et al., 2023; Yalçınkaya et al., 2023; Wang et al., 2024; Hashemi et al., 2025; Hashemi et al., 2024).

    In this context, simulation-based inference (SBI; Cranmer et al., 2020; Gonçalves et al., 2020; Hashemi et al., 2023; Hashemi et al., 2024) has gained prominence as an efficient methodology for conducting Bayesian inference in complex models where traditional inference techniques become inapplicable. SBI leverages computational simulations to generate synthetic data and employs advanced probabilistic machine learning methods to infer the joint distribution over parameters that best explain the observed data, along with associated uncertainty. This approach is particularly well-suited for Bayesian inference on whole-brain models, which often exhibit complex dynamics that are difficult to retrieve from neuroimaging data with conventional estimation techniques. Crucially, SBI circumvents the need for explicit likelihood evaluation and the Markovian (sequential) property required in sampling. Markov chain Monte Carlo (MCMC; Gelman et al., 1995) is the gold-standard nonparametric technique and asymptotically exact for sampling from a probability distribution. However, for Bayesian inference on whole-brain models given high-dimensional data, the likelihood function becomes intractable, rendering MCMC sampling computationally prohibitive. SBI offers significant advantages, such as parallel simulation while leveraging amortized learning, making it effective for personalized inference from large datasets (Hashemi et al., 2024). Amortization in artificial neural networks refers to the idea of reusing learned computations across multiple tasks or inputs (Gershman and Goodman, 2014). Amortization in Bayesian inference refers to the process of training a shared inference network (e.g. a neural network) with an intensive upfront computational cost, to perform fast inference across many different observations. Instead of re-running inference for each new observation, the trained model can rapidly return posterior estimates, significantly reducing computational cost at test time. Following an initial computational cost during simulation and training to learn all posterior distributions, subsequent evaluation of new hypotheses can be conducted efficiently, without additional computational overhead for further simulations (Hashemi et al., 2023). Importantly, SBI sidesteps the convergence issues caused by complex geometries that are often encountered when using gradient-based MCMC methods (Betancourt and Girolami, 2013; Betancourt et al., 2014; Hashemi et al., 2020). It also substantially outperforms approximate Bayesian computation (ABC) methods, which rely on a threshold to accept or reject samples (Sisson et al., 2007; Beaumont et al., 2009; Gonçalves et al., 2020). Such a likelihood-free approach provides us with generic inference on complex systems as long as we can provide three modules:

    1. A prior distribution, describing the possible range of parameters from which random samples can be easily drawn, that is θ ∼ p(θ).

    2. A simulator in computer code that takes parameters as input and generates data as output, that is x ∼ p(x | θ).

    3. A set of low-dimensional data features, which are informative of the parameters that we aim to infer.

    These elements provide us with a training dataset {(θ_i, x_i)} for i = 1, …, N_sim, given a budget of N_sim simulations. Then, using a class of deep neural density estimators, such as masked autoregressive flows (MAFs; Papamakarios and Pavlakou, 2017) or neural spline flows (NSFs; Durkan et al., 2019), we can approximate the posterior distribution of parameters given a set of observed data, that is p(θ | x_obs). Therefore, a versatile toolkit should be flexible and integrative, adeptly incorporating these modules to enable efficient Bayesian inference over complex models.
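
    As a concrete illustration of how these three modules fit together, the short Python sketch below wires them up with the open-source sbi package, using a toy two-parameter simulator and made-up summary features standing in for a whole-brain model; it shows the general recipe rather than VBI's own interface, and the parameter names (a coupling-like g and an excitability-like eta) are purely illustrative.

```python
# Minimal SBI sketch with the open-source `sbi` package. The simulator and
# "features" are toy stand-ins for a whole-brain model and FC/FCD/PSD summaries.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# 1) Prior: plausible ranges for two hypothetical control parameters.
prior = BoxUniform(low=torch.tensor([0.0, -5.0]), high=torch.tensor([1.5, 0.0]))

# 2) Simulator: maps parameters theta to low-dimensional data features x.
def simulator(theta):
    g, eta = theta[0], theta[1]
    features = torch.stack([g + eta, g * eta, torch.tanh(g - eta)])
    return features + 0.05 * torch.randn(3)           # observation noise

# 3) Training set: a budget of N_sim simulations {(theta_i, x_i)}.
n_sim = 2000
theta = prior.sample((n_sim,))
x = torch.stack([simulator(t) for t in theta])

# Train a masked autoregressive flow (MAF) as the neural density estimator
# and build the amortized posterior p(theta | x).
inference = SNPE(prior=prior, density_estimator="maf")
inference.append_simulations(theta, x)
posterior = inference.build_posterior(inference.train())

# Amortized inference: sample the posterior for a new "observed" feature vector.
x_obs = simulator(torch.tensor([0.8, -2.0]))
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))                             # posterior mean of (g, eta)
```

    Amortization is visible in the last step: once the estimator is trained, the same posterior object can be queried for any new observation without running further simulations.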

    To address the need for widely applicable, reliable, and efficient parameter estimation from different (source-localized) neuroimaging modalities, we introduce Virtual Brain Inference (VBI), a flexible and integrative toolkit for probabilistic inference at whole-brain level. This open-source toolkit offers fast simulation through just-in-time (JIT) compilation of various brain models in different programming languages (Python/C++) and devices (CPUs/GPUs). It supports space-efficient storage of simulated data (HDF5/NPZ/PT), provides a memory-efficient loader for batched data, and facilitates the extraction of low-dimensional data features (FC/FCD/PSD). Additionally, it enables the training of deep neural density estimators (MAFs/NSFs), making it a versatile tool for inference on neural sources corresponding to (s)EEG, MEG, and fMRI recordings. VBI leverages high-performance computing, significantly enhancing computational efficiency through parallel processing of large-scale datasets, which would be impractical with current alternative methods. Although SBI has been used for low-dimensional parameter spaces (Gonçalves et al., 2020; Wang et al., 2024; Baldy et al., 2024), we demonstrate that it can scale to whole-brain models with high-dimensional unknown parameters, as long as informative data features are provided. VBI is now accessible on the cloud platform EBRAINS (https://ebrains.eu), enabling users to explore more realistic brain dynamics underlying brain (dys)functioning using Bayesian inference.
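
    To picture the storage and loading workflow, the following generic sketch (with a hypothetical file layout and class names, not VBI's actual I/O helpers) writes parameter/feature pairs once to a compressed HDF5 file and then streams them in batches, so that training never needs the full simulation set in memory.

```python
# Generic sketch of space-efficient storage and memory-efficient batched loading
# of simulations. File layout and names are hypothetical, not VBI's own API.
import h5py
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

# Write N_sim (theta, features) pairs to a compressed HDF5 file.
with h5py.File("simulations.h5", "w") as f:
    f.create_dataset("theta", data=np.random.rand(2000, 2), compression="gzip")
    f.create_dataset("features", data=np.random.rand(2000, 3), compression="gzip")

class SimulationDataset(Dataset):
    """Lazily reads one (theta, features) pair at a time from HDF5."""
    def __init__(self, path):
        self.path = path
        with h5py.File(path, "r") as f:
            self.n = f["theta"].shape[0]
        self._file = None

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        if self._file is None:                    # open lazily, once per worker
            self._file = h5py.File(self.path, "r")
        theta = torch.from_numpy(self._file["theta"][idx]).float()
        x = torch.from_numpy(self._file["features"][idx]).float()
        return theta, x

loader = DataLoader(SimulationDataset("simulations.h5"), batch_size=128, shuffle=True)
for theta_batch, x_batch in loader:               # batches materialized on demand
    pass                                          # e.g. feed into density-estimator training
```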

    In the following sections, we will describe the architecture and workflow of the VBI toolkit and demonstrate its validation through a series of case studies using in silico data. We explore various whole-brain models corresponding to different types of brain recordings: whole-brain network models of Wilson-Cowan (Wilson and Cowan, 1972), Jansen-Rit (Jansen and Rit, 1995; David and Friston, 2003), and Stuart-Landau (Selivanov et al., 2012) for simulating neural activity associated with EEG/MEG signals, the Epileptor (Jirsa et al., 2014) related to stereoelectro-EEG (sEEG) recordings, and the Montbrió (Montbrió et al., 2015) and Wong-Wang (Wong and Wang, 2006; Deco et al., 2013) models mapped to fMRI BOLD signals. Although these models represent source signals and could be applied to other modalities (e.g. Stuart-Landau representing generic oscillatory dynamics), we focused on their capabilities to perform optimally in specific contexts. For instance, some are better suited for encephalographic signals (e.g. EEG/MEG) due to their ability to preserve spectral properties, while others have been used for fMRI data, emphasizing their ability to capture dynamic features such as bistability and time-varying functional connectivity.

    VBI workflow

    Figure 1 illustrates an overview of our approach in VBI, which combines virtual brain models and SBI to make probabilistic predictions on brain dynamics from (source-localized) neuroimaging recordings. The inputs to the pipeline are structural imaging data (for building the connectome), functional imaging data such as (s)EEG/MEG and fMRI as the target for fitting, and prior information in the form of a plausible range over control parameters for generating random simulations. The main computational costs involve model simulations and data feature extraction. The output of the pipeline is the joint posterior distribution of control parameters (such as excitability, synaptic weights, or effective external input) that best explains the observed data. Since the approach is amortized (i.e. it learns across all combinations in the parameter space), it can be readily applied to any new data from a specific subject.

    The workflow of Virtual Brain Inference (VBI).

    This probabilistic approach is designed to estimate the posterior distribution of control parameters in virtual brain models from whole-brain recordings. (A) The process begins with constructing a personalized connectome using diffusion tensor imaging and a brain parcellation atlas, such as Desikan-Killiany (Desikan et al., 2006), Automated Anatomical Labeling (Tzourio-Mazoyer et al., 2002), or VEP (Wang et al., 2021). (B) The personalized virtual brain model is then assembled. Neural mass models describing the averaged activity of neural populations, in the generic form ẋ = f(x, θ, I_input), are placed at each brain region and interconnected via the structural connectivity matrix. Initially, the control parameters are randomly drawn from a simple prior distribution. (C) Next, VBI operates as a simulator that uses these samples to generate time series data associated with neuroimaging recordings. (D) We extract a set of summary statistics from the low-dimensional features of the simulations (FC, FCD, PSD) for training. (E) Subsequently, a class of deep neural density estimators is trained on pairs of random parameters and their corresponding data features to learn the joint posterior distribution of the model parameters. (F) Finally, the amortized network allows us to quickly approximate the posterior distribution for new (empirical) data features, enabling us to make probabilistic predictions that are consistent with the observed data.

    In the first step, non-invasive brain imaging data, such as T1-weighted MRI and diffusion-weighted MRI (DW-MRI), are collected for a specific subject (Figure 1A). T1-weighted MRI images are processed to obtain brain parcellation, while DW-MRI images are used for tractography. Using the estimated fiber tracts and the defined brain regions from the parcellation, the connectome (i.e. the complete set of links between brain regions) is constructed by counting the fibers connecting all regions. The structural connectivity (SC) matrix, with entries representing the connection strength between brain regions, forms the structural component of the virtual brain, which constrains the generation of brain dynamics and functional data at arbitrary brain locations (e.g. cortical and subcortical structures).
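
    In code, the final step of this construction amounts to turning a matrix of streamline counts into a normalized SC matrix, as in the minimal sketch below (synthetic counts; real pipelines typically add corrections, for example for region volume or fiber length).

```python
# Minimal sketch: from streamline counts between parcellated regions to a
# normalized structural connectivity (SC) matrix. The counts here are synthetic.
import numpy as np

n_regions = 68                                    # e.g. Desikan-Killiany cortical parcels
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5, size=(n_regions, n_regions)).astype(float)

sc = (counts + counts.T) / 2.0                    # symmetrize (undirected connectome)
np.fill_diagonal(sc, 0.0)                         # no self-connections
sc = np.log1p(sc)                                 # compress heavy-tailed fiber counts
sc /= sc.max()                                    # scale connection weights to [0, 1]
```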

    Subsequently, each brain network node is equipped with a computational model of average neuronal activity, known as a neural mass model (see Figure 1B and Materials and methods). These models can be represented in the generic form of a dynamical system, ẋ = f(x, θ, I_input), with the system variables x (such as membrane potential and firing rate), the control parameters θ (such as excitability), and the input current I_input (such as stimulation). This integration of mathematical mean-field modeling (neural mass models) with anatomical information (the connectome) allows us to efficiently analyze functional neuroimaging modalities at the whole-brain level.
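
    The generic form ẋ = f(x, θ, I_input), coupled through the SC matrix, can be made tangible with a deliberately simplified one-variable node model integrated with the Euler-Maruyama scheme; this toy system only stands in for the actual neural mass models described in Materials and methods.

```python
# Toy whole-brain network: a simplified one-variable node model, where the input
# current to each region is the SC-weighted activity of all other regions.
# Illustrative only; not one of the neural mass models shipped with VBI.
import numpy as np

def simulate(sc, theta, g=0.5, dt=0.01, t_end=10.0, sigma=0.02, seed=0):
    rng = np.random.default_rng(seed)
    n = sc.shape[0]
    x = np.zeros(n)                               # state of each region
    n_steps = int(t_end / dt)
    traj = np.empty((n_steps, n))
    for t in range(n_steps):
        i_net = g * sc @ x                        # network input from coupled regions
        dx = -x + np.tanh(theta + i_net)          # local dynamics f(x, theta, I_input)
        x = x + dt * dx + np.sqrt(dt) * sigma * rng.standard_normal(n)
        traj[t] = x
    return traj

# Example: 68 regions with a heterogeneous excitability-like parameter per region.
sc = np.random.rand(68, 68)
sc = (sc + sc.T) / 2.0
np.fill_diagonal(sc, 0.0)
theta = np.random.uniform(-1.0, 1.0, size=68)
ts = simulate(sc, theta)                          # (time steps, regions) time series
```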

    To quantify the posterior distribution of control parameters given a set of observations, p(θ | x), we first need to define a plausible range for the control parameters based on background knowledge, p(θ), that is, a simple base distribution known as the prior. We draw random samples from the prior and provide them as input to the VBI simulator (implemented by the Simulation module) to generate simulated time series associated with neuroimaging recordings, as shown in Figure 1C. Subsequently, we extract low-dimensional data features (implemented by the Features module), as shown in Figure 1D for FC/FCD/PSD, to prepare the training dataset {(θ_i, x_i)} for i = 1, …, N_sim, with a budget of N_sim simulations. Then, we use a class of deep neural density estimators, such as MAF or NSF models, as schematically shown in Figure 1E, to learn the posterior p(θ | x). Finally, we can readily sample from p(θ | x_obs), which determines the probability distribution in parameter space that best explains the observed data.

    Figure 2 depicts the structure of the VBI toolkit, which consists of three main modules. The first module, referred to as the Simulation module, is designed for fast simulation of whole-brain models such as Wilson-Cowan, Jansen-Rit, Stuart-Landau, Epileptor, Montbrió, and Wong-Wang. These whole-brain models are implemented across various numerical computing libraries, such as CuPy (GPU-accelerated computing with Python), C++ (a high-performance systems programming language), Numba (a JIT compiler for accelerating Python code), and PyTorch (an open-source machine learning library for creating deep neural networks).


    Flowchart of the VBI Structure.

    This toolkit consists of three main modules: (1) The Simulation module, implementing various whole-brain models, such as Wilson-Cowan (WCo), Jansen-Rit (JR), Stuart-Landau (SL), Epileptor (EPi), Montbrió (MPR), and Wong-Wang (WW), across different numerical computing libraries (C++, Cupy, PyTorch, Numba). (2) The Features module, offering an extensive toolbox for extracting low-dimensional data features, such as spectral, temporal, connectivity, statistical, and information theory features. (3) The Inference module, providing neural density estimators (such as MAF and NSF) to approximate the posterior of parameters.

    The second module, Features, provides a versatile tool for extracting low-dimensional features from simulated time series (see Comprehensive feature extraction). The features include, but are not limited to, spectral, temporal, connectivity, statistical, and information-theoretic features, along with the associated summary statistics. The third module focuses on Inference, that is, training deep neural density estimators, such as MAF and NSF (see Simulation-based inference), to learn the joint posterior distribution of control parameters. See Figure 2—figure supplement 1 and Figure 2—figure supplement 2 for benchmarks comparing CPU/GPU and MAF/NSF performance, and Figure 2—figure supplement 3 for the estimation of the global coupling parameter across different whole-brain network models, evaluated under multiple configurations.
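
    As an example of the summary statistics such a module produces, the sketch below computes static FC and a compact FCD summary from a (time × regions) array; the windowing and normalization choices are illustrative and need not match the toolkit's defaults.

```python
# Generic sketch of FC/FCD summary statistics from a (time, regions) array.
# Window length, step, and the chosen statistics are illustrative only.
import numpy as np

def fc_features(ts):
    """Upper triangle of the static functional connectivity (FC) matrix."""
    fc = np.corrcoef(ts.T)                        # regions x regions correlations
    iu = np.triu_indices_from(fc, k=1)
    return fc[iu]

def fcd_features(ts, win=100, step=20):
    """Functional connectivity dynamics (FCD): similarity of windowed FC patterns."""
    windows = [fc_features(ts[s:s + win]) for s in range(0, len(ts) - win, step)]
    fcd = np.corrcoef(np.stack(windows))          # windows x windows similarity matrix
    iu = np.triu_indices_from(fcd, k=1)
    return np.array([fcd[iu].mean(), fcd[iu].var()])   # compact summary statistics

ts = np.random.randn(1000, 68)                    # placeholder regional time series
x = np.concatenate([fc_features(ts), fcd_features(ts)])  # feature vector for training
```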


  • HENSOLDT supplies radars for Rheinmetall air defence systems


    With the framework agreement now in place, both companies have laid the foundation for predictable, efficient and reliable cooperation. The agreement offers both sides a high degree of planning security and binding purchase conditions and ensures that core components of the SPEXER 2000 product range are available as needed and in resilient supply chains. Various companies within the Rheinmetall Group can place orders under the framework agreement.
    The SPEXER radar family offers high-performance surveillance radars for various ranges for the automatic detection and classification of ground, sea and low-flying air targets. The SPEXER 2000 is used by the German Armed Forces in field camp protection (ASUL), in the high-energy laser system for drone defence (HoWiSM) and in the air defence system for close and very short-range protection (NNbS), among other applications.

    Together, Rheinmetall and HENSOLDT are making an important contribution to strengthening the air defence capabilities of the German Armed Forces and allied nations. The cooperation underscores the shared goal of strengthening the European defence industry, keeping critical technologies available and providing long-term support for the operational readiness of modern air defence systems.


  • Lululemon boss to step down early next year


    The boss of Lululemon Athletica, the brand known for its expensive yoga leggings and other sports clothing, is to leave the company early next year.

    Calvin McDonald, the firm’s chief executive, will depart at the end of January after more than seven years at the helm.

    The decision comes amid a run of poor sales for Lululemon in the US, its main market, and a fall in its share price of almost 50% over the past year.

    However, this week the company upgraded its annual revenue forecast after better-than-expected sales in the past few months.

    Mr McDonald said the decision to leave the company was taken after discussions with the board.

    “As we near the end of our five-year strategy, and with our strong senior leadership team in place, we all agree that now is the time for a change,” he said in a LinkedIn post.

    While the Canadian company’s latest results revealed a boost to its sales internationally driven by its business in China, its performance in the Americas has been going in the opposite direction.

    The brand’s share price on the US Nasdaq index peaked in late 2023 and has been on a downward trend since. In September, its shares fell sharply after it warned of the impact tariffs imposed by US President Donald Trump would have on its business.

    Lululemon’s particular concern was over the ending of the so-called de-minimis exemption, a former duty-free loophole for low-cost goods entering America from countries such as China.

    Many of the Canadian company’s suppliers are based in China, Vietnam and other Asian countries. In September, it estimated the newly-imposed import taxes would cost it about $240m (£178.4m) this year.

    However, sales in China and around the rest of the world have been positive, lifting its net revenues in the period to the start of November to $2.6bn.

    “As we enter the holiday season, we are encouraged by our early performance,” said Mr McDonald. However, he said that despite a good Thanksgiving period, demand had slowed since then as consumers continued to look for cheaper products.

    Lululemon has faced increasing competition for its products from lower-priced rivals such as Vuori and Alo Yoga.

    Dan Coatsworth, head of markets at AJ Bell, told the BBC that competition had been “fierce”, and that the brand needed to “go back to the drawing board and work out ways to make its products ‘must-have’ items again”.

    Mr Coatsworth also said that under Mr McDonald the brand had gone through the “embarrassment of having to pull its Breezethrough product line after negative reviews and customer complaints about the leggings being uncomfortable to wear” last year.

    The company halted sales of its then newly-launched $98 leggings last summer after shoppers criticized the V-shaped back seam of the tights as “unflattering” and others said the seam at the top of the waistband dug into their waists.

    Lululemon was also mocked on social media in 2020 for promoting an event about how to “resist capitalism”.

    Lululemon named its finance boss Meghan Frank and chief commercial officer André Maestrini as co-interim chief executives while it searches for a new leader.

    Marti Morfitt, chair of the brand’s board, thanked Mr McDonald for “his visionary leadership building Lululemon into one of the strongest brands in retail”.

    “During his tenure, Calvin led Lululemon through a period of impressive revenue growth, with differentiated products and experiences that resonated with guests around the world.”

    Mr Coatsworth described Mr McDonald’s tenure as “akin to the highs and lows of an athlete hitting their peak and then rapidly going off the boil”.

    “He steered Lululemon to greatness as the athleisure trend boomed, with people happy to pay top dollar for posh leggings.

    “Then came a series of mistakes which were compounded by factors outside of the company’s control. It’s no wonder Lululemon is looking to bring in fresh leadership,” he added.


  • Advocate of the Global South, global provider of green tech: China has come to dominate the climate discourse


    The COP30 climate conference ended on November 21st without much success. The hoped-for roadmap for phasing out climate-damaging energy sources such as coal, oil, and gas failed due to fierce resistance from some countries. But the conference made clear: China is dominating the climate discourse now, says Johanna Krebs. 

    “China keeps its promises and delivers on its commitments.” This is how China’s leading negotiator on international climate issues, Vice Premier Ding Xuexiang, described his country’s approach at the COP30 conference in November. And China’s contribution to the conference was indeed more visible than that of European countries, let alone of the United States. 

    China sent the second largest delegation to COP after host country Brazil, signaling that it considers these meetings important. In the runup to the conference, Ding had stated China’s priorities: further drive the green and low-carbon transformation, uphold the principle of “common but differentiated responsibilities” and remove trade barriers that are an obstacle to the development of green products. China also openly pushed to put “unilateral trade measures” – like the EU’s Carbon Border Adjustment Mechanism (CBAM) and EV tariffs – on the agenda of COP. It didn’t succeed, but the move showed that climate and trade interests are increasingly intertwined, pitting China against the EU. 

    At the conference, it became clear that China dominates the climate discourse now. Its envoys engaged in the discussions around green development in the Global South, with an event on the topic in the China pavilion reportedly being “packed”. China offers real solutions for climate mitigation like, for instance, the export of cheap green tech, wind turbines and solar panels. By doing so, it shifts the international climate discourse to an international green tech discourse – an area where China and the EU are competing. 

    China’s image as a climate actor benefits from the US’s withdrawal and Europe’s dividedness 

    Germany and the EU used to be pioneers in climate action, but this image has suffered greatly, in part because the EU published a weakened NDC (nationally determined contribution) due to disagreement among its member states. Germany has a legacy as a climate leader; however, at this year’s COP it did not stand out as such, partly because the Merz government has ranked the topic lower on its priority list. 

    China, on the other hand, stresses that it will stay the course: In autumn, Xi Jinping stated that “green and low-carbon transition is the trend of the time. While some country is acting against it, the international community should stay focused on the right direction.” The United States’ (second) withdrawal from the Paris agreement and other formats enables China to present itself as an alternative model for tackling the climate crisis. 

    Beijing also assertively criticizes the EU, with climate envoy Liu Zhenmin calling the Union’s pollution-cutting targets insufficient and a Chinese diplomat allegedly calling the EU’s backtracking on its climate targets right before COP30 “shameful”. Vice Premier Ding stressed that China still expects developed countries to “implement their obligations to take lead in reducing emissions” and “fulfill their investment commitments”. By positioning itself as a defender of Global South interests and criticizing developed countries for abandoning the developing world, China’s government currently occupies much of the international climate negotiation space while avoiding having to justify its own climate commitments (widely seen as less ambitious), its opaque climate finance, or the installation of new coal-fired plants. 

    Bridging the gap between industrial upgrading and climate policy in the next Five-Year Plan

    How is China making progress on its domestic climate goals? The proposal for the 15th Five-Year Plan (FYP), which will be passed next spring, shows that Beijing views decarbonization and industrial development as two sides of the same coin. It offers a glimpse into how China imagines its industrial future: a climate role model that fuses industrial and climate policy to the benefit of all. In the coming five years, China plans to double down on domestic industrial modernization and technological progress. As the document states, “green development is a defining feature of Chinese modernization” and part of “high quality development”. 

    For the first time, an FYP proposal mentions the “safe, reliable, and orderly replacement of fossil fuels”, describing top level momentum to accelerate the structural transition to non-fossil fuel energy. Moreover, it stresses the need to control both the total amount and intensity of emissions, a levelling up compared to the 14th FYP which strongly focused on emission intensity. The proposal also identifies the installation of energy storage technologies, and construction of smart-grids and microgrids as crucial. 

    However, at the same time, it advocates for the “clean and efficient use of fossil fuels” and the upgrading of coal-fired power plants. Fossil fuelled energy production will continue to play a considerable role in China in the upcoming years.  

    China’s climate risks: China is highly vulnerable to climate change and, due to its large territory, will experience varied climate-related risk profiles. Historically, China is prone to heat waves, storms, floods and droughts, all of which will become more extreme as climate change progresses. Moreover, China is already witnessing alarming glacier retreat. Lastly, given that a large part of the population lives in China’s coastal provinces, the country will also be greatly affected by sea level rise, with sea levels projected to rise 40-60 centimeters by the end of the century, potentially pushing the coastline back by more than 10 meters in some areas. 

    China’s own climate targets remain “highly insufficient”

    China’s officially announced policies and climate targets, including the 2035 NDC, remain “highly insufficient” and, according to estimates, would lead to global warming of above 4°C. Its five-year carbon intensity target, as defined in the 14th FYP, would have required a total emissions reduction of four percent in 2025 – China is set to underdeliver here. More stringent policies will be required to reach the energy intensity targets for 2030. 

    Observers fear that carbon-reduction policies might be counteracted by the renewed uptake in permits for coal plants, which reached a high in 2023. Although the number of plants permitted has noticeably decreased since then, the fact that some are still under construction and have yet to come online means that China is still adding coal capacity to its power grid. 

    However, as data from the last five years shows, adding coal capacity to the grid does not imply a proportional increase in coal-power output; despite building new coal power plants, China does not seem to be increasing its reliance on coal, leading to a falling utilization rate of coal plants. More relevant to the future trajectory of Chinese carbon emissions are China’s industries, especially the chemical industry, which has seen rising demand for plastics production. The trajectory strongly depends on how fast China can decouple its industries from CO2 emissions. 


  • Pakistan’s car sales rise 43 pct in first 5 months of fiscal year-Xinhua

    ISLAMABAD, Dec. 12 (Xinhua) — Pakistan’s passenger car sales recorded strong growth during the first five months of the current fiscal year, supported by improved economic sentiment, new model launches and better financing conditions, industry data showed on Thursday.

    According to the Pakistan Automotive Manufacturers Association, car sales rose 43 percent year on year to 55,239 units from July to November in fiscal year 2025-26, compared with 38,597 units in the same period last year.

    In November alone, car sales reached 12,408 units, up more than 50 percent compared with the same month last year, though they declined 8 percent on a monthly basis.

    Sales of jeeps and pickups increased by 62 percent to 19,803 units in the first five months, while truck sales rose 101 percent to 2,753 units and bus sales grew 72 percent to 407 units. Motorcycles and rickshaws also posted growth of 32 percent, reaching 762,778 units.

    However, tractor sales continued to decline due to lower demand from farmers, who industry experts say are facing reduced agricultural output linked to climate-related challenges.


  • Gold prices soar in Pakistan, up Rs10,700 per tola


    December 12, 2025 (MLN): Gold price in Pakistan increased on Friday, with 24-karat gold being sold at Rs454,262 per tola, up Rs10,700.

    Similarly, 24-karat gold per 10-gram was sold at Rs389,456 after a gain of Rs9,174, according to rates shared by the All-Pakistan Gems and Jewelers Sarafa Association (APGJSA).

    The price of 22-karat gold was also quoted higher at Rs357,014 per 10-gram.

     

    Similarly, silver prices rose in the domestic market, with 24-karat silver being sold at Rs6,684 per tola (+Rs232) and Rs5,730 per 10-gram (+Rs199).

    PKR (24-karat, per tola)   Dec 12, 2025   Dec 11, 2025     DoD   1 Month      FYTD      CYTD
    Gold                            454,262        443,562  10,700    11,200   104,062   181,662
    Silver                            6,684          6,452     232     1,022     2,902     3,334

    Globally, spot gold traded near $4,329 an ounce, up $53.4 or 1.25% from the previous session, supported by a weaker dollar.

     

    Copyright Mettis Link News


  • Tissue-Free ctDNA in Early TNBC


    The c-TRAK TN Trial, presented at the 2025 San Antonio Breast Cancer Symposium (SABCS), reported results from a tissue-free circulating tumor DNA (ctDNA) analysis, providing new evidence that methylation-based, tumor-agnostic assays detect minimal residual disease earlier, and in more patients, than tumor-informed approaches in high-risk early triple-negative breast cancer (TNBC).

    Background and Rationale

    Detection of ctDNA after completion of curative-intent therapy is a powerful prognostic marker for recurrence in early breast cancer. To date, most supporting evidence has come from tumor-informed assays, which require sequencing of the primary tumor to design personalized mutation panels. While analytically sensitive, these approaches are limited by the need for archival tissue, sequencing turnaround time, and logistical complexity.

    Tissue-free ctDNA assays, which do not rely on prior tumor sequencing, offer a potential alternative if sufficient accuracy and prognostic performance can be demonstrated. The current analysis aimed to evaluate the prognostic significance of tissue-free ctDNA detection in early TNBC and to directly compare its performance with tumor-informed digital droplet PCR (ddPCR).

    c-TRAK TN Trial’s Study Design

    The analysis utilized plasma samples from c-TRAK TN, the first prospective ctDNA surveillance study in early-stage TNBC. In c-TRAK TN, patients at moderate to high risk of relapse underwent serial ctDNA testing every three months following completion of standard therapy, using ddPCR to track tumor-specific mutations.

    Archived plasma samples from this cohort were retrospectively analyzed using a tissue-free, methylation-based ctDNA assay, enabling a direct comparison between approaches within the same well-characterized population.

    Tissue-Free ctDNA Assay Methodology

    The tissue-free assay exploits differentially methylated regions (DMRs) between cancer and non-cancer DNA. Cell-free DNA was extracted from 2–4 mL of plasma, partitioned by methylation status, and enriched using a targeted capture panel covering approximately 20,000 DMRs, including 3,000 breast-specific regions. Tumor methylation fraction was reported in ctDNA-positive samples and correlated with tumor purity.

    Importantly, this approach eliminates the need for tumor sequencing while maintaining biological specificity for breast cancer–derived ctDNA.

    Patient Cohort and Follow-Up

    A total of 1,026 plasma samples from 159 patients were analyzed using the tissue-free assay, with a median of 10 samples per patient. Quality control pass rates were high (98.6%). Median follow-up from the first surveillance blood draw was 33.9 months.

    The cohort was representative of high-risk early TNBC, with the majority of tumors being high grade and most patients having received neoadjuvant and/or adjuvant chemotherapy.

    ctDNA Detection and Risk of Recurrence

    Detection of ctDNA at any time point during serial surveillance was strongly associated with recurrence risk. Patients with detectable ctDNA had a median recurrence-free survival (RFS) of 14.9 months, whereas median RFS was not reached in patients without ctDNA detection.

    The association was robust, with a time-dependent hazard ratio of 28.7 (p < 0.001), underscoring the powerful prognostic value of ctDNA detection in this high-risk population.

    Tissue-Free Assay Versus Tumor-Informed ddPCR

    When directly compared with ddPCR, the tissue-free assay demonstrated 95.4% concordance at the sample level. Among 42 patients with ctDNA detected by both methods at any time point, two-thirds were detected simultaneously by both assays. Notably, 33.3% were detected earlier by the tissue-free assay, while no patients were detected earlier by ddPCR.

    At 12 months, the estimated ctDNA detection rate was 29.0% with the tissue-free assay compared with 23.7% with ddPCR, indicating that the tissue-free approach identified ctDNA in more patients.

    Clinical Lead Time to Relapse

    The tissue-free assay also showed a trend toward a longer clinical lead time between ctDNA detection and overt relapse. Median time from first ctDNA detection to recurrence was 7.8 months with the tissue-free assay versus 5.8 months with ddPCR. Although this difference did not reach conventional statistical significance (HR 0.63, p = 0.07), the numerical advantage suggests earlier molecular detection may be achievable without tumor sequencing.


    Conclusions and Implications

    This SABCS 2025 presentation demonstrates that tissue-free ctDNA detection can anticipate relapse with high accuracy in patients with high-risk early TNBC. Compared with tumor-informed ddPCR, the tissue-free assay detected ctDNA more frequently and at earlier time points, while maintaining strong prognostic discrimination.

    Ongoing comparisons with whole-exome sequencing–based tumor-informed assays will further clarify relative performance. While these findings establish strong analytical and prognostic validity, prospective interventional studies will be required to determine whether tissue-free ctDNA surveillance can guide treatment decisions and improve clinical outcomes.

     

     



  • Charting new ground in cruise ship LNG dry-docks


    This collaborative planning effort included shipboard visits, risk assessments, technical workshops and repeated sessions with Carnival’s technical teams in Miami, Southampton, Marseille and at the Carnival training centre. The aim was to build a shared and granular understanding of system conditions, survey requirements, and the operational profile of each vessel. 

    Vincenzo Prinzi, Technical Operation Director, CCL, reflects: “The preparation for this project was extensive, and it allowed us to engage in meaningful discussions and work together with determination. Our strong communication played a key role in reaching a consensus on everything we planned.”

    John Waters, LNG Inspection Project Manager, CUK, says: “The recent inspection of the LNG fuel system on Iona marked a first for Carnival UK and the collaboration with LR was important in its success. The communication and constructive approach helped navigate this new ground with confidence. Working closely together allowed us to address challenges quickly, maintain high safety standards and strengthen our capabilities in this emerging area.”

    The inherent complexity of LNG fuel systems shaped every decision. Both Iona and Mardi Gras feature three fuel tanks and dual fuel trains designed for full redundancy, supported by sophisticated control logic and an extensive cryogenic piping network.  

    “Managing the inspection, testing and recommissioning of these systems within the confines of a passenger ship’s operational profile required not only expert knowledge but tight integration between the shipboard team, the technical office and the attending class surveyors,” says van Ee.  

    “We had to define a very detailed Inspection and Test Plan for these vessels, down to the smallest valve which required overhaul well in advance of the actual surveys.”


  • IKEA store transforms for one night only: Inside SMAKFEST in Stockholm


    In Swedish, SMAKFEST means “taste feast,” and for the first time in its 80-year history, the IKEA store was transformed into an artistic installation built around food, culture, and imagination. More than 30 sensory moments unfolded across the space, blending Swedishness with global influences, which is a true reflection of modern Swedish society.  

    Inside the event, guests were welcomed by pantomimes and a full gospel choir singing from the long escalator that led to the top floor. A giant FRAKTA bar, built solely for the night and shaped like the iconic IKEA blue bag, served the first drink of the evening. Across the store, installations and roomset exhibits created a series of surprising moments: a human kaleidoscope, eating in the dark, and spoon-fed tastings delivered through a tiny peep hole. Live music performances were woven directly into the installations and appeared throughout the entire space, elevating the energy and the experience. The music stars of the night were headlined by Cherrie, the Stockholm-based R&B artist with a fast-growing global audience, alongside Diaspora, the creative collective known for connecting local and international talent through music, as well as Mike Näselius, Hatami Siamak, Chez Ali, Diana Emerita, and DJ Majk. 

    The highlight of the night was an entire floor dedicated to the IKEA meatball’s 40-year birthday. The milestone was celebrated with a meatball birthday cake, gravy fountains, and a rich and diverse showcase of meatballs from around the world, from Eastern Europe to Latin America to Oceania to South Asia. True to the IKEA food experience, plant balls, veggie balls, fish balls, and chicken balls were also part of the menu. 

    Part feast and part art installation, SMAKFEST was an exploration of a new kind of IKEA experience that reflects the deep curiosity that lives in the brand’s DNA. And this is only a glimpse of what’s to come. 

     

    About Ingka Group 

    With IKEA retail operations in 32 markets, Ingka Group is the largest IKEA retailer and represents 87% of IKEA retail sales. It is a strategic partner to develop and innovate the IKEA business and help define common IKEA strategies. Ingka Group owns and operates IKEA sales channels under franchise agreements with Inter IKEA Systems B.V. It has three business areas: IKEA Retail, Ingka Investments and Ingka Centres. Read more on Ingka.com.


  • Cultura Bank selects Tieto Banktech in 5-year tech partnership


    Tieto Banktech has entered into a 5-year agreement with Cultura Bank to deliver modern full-service technology solutions, ensuring cost-effective banking operations and predictable technology costs. 

    Cultura Bank, a Norwegian ethical savings bank with approximately NOK 1.4 billion in assets and more than 4,500 customers, is joining Lokalbanksamarbeidet (the Local Bank Collaboration), a consortium of 16 independent Norwegian savings banks. This provides access to competitive and modern solutions in line with the strategic partnership entered with Tieto Banktech in May 2025. 

    “Through the Local Bank Collaboration and our long-term technology agreement with Tieto Banktech, we are laying a solid foundation for sound banking operations going forward. Modern technological solutions enable us to be a positive driving force for growth and development for our customers. This strategic move represents a major step towards becoming Norway’s leading bank in sustainable finance,” says Karoline Bakka Hjertø, acting CEO of Cultura Bank. 

    Overall, the partnership with Tieto Banktech strengthens the competitiveness of the entire 16-bank collaboration. 

    “Together with the other banks in the Local Bank Collaboration, Cultura Bank will operate on a unified technology platform that streamlines operations. In close collaboration with Tieto Banktech, we are developing digital processes and user-friendly tools that will free up employees for proactive advisory and sales activities. Combined with a modern digital customer interface, this delivers a superior customer experience,” says Bent R. Eidem, CEO of Lokalbanksamarbeidet. 

    Modern technology platform 

    Tieto Banktech will deliver an end-to-end core banking platform based on industry-standard technology that is scalable and adapted to Norwegian conditions and regulatory requirements. The agreement includes mobile and online banking solutions, card and payment solutions, and effective tools for combating financial crime. 

    “We’re honored that Cultura Bank has chosen Tieto Banktech as its strategic partner through the Local Bank model. We will now migrate Cultura Bank, together with the entire collaboration, to our modernized core banking system in a well-tested and controlled manner. We are intensifying our work on innovation and renewal of customer-centric solutions with increasing integration of responsible AI that streamlines internal work processes and enhances customer engagement and relevance for Cultura Bank and the entire Local Bank Collaboration,” says Mario Blazevic, Managing Director Tieto Banktech. 

    For more information, please contact:
    Geir Remman, Head of Communication and Marketing, Tieto Banktech, geir.remman@tietoevry.com
    Tietoevry Newsdesk: +358 40 5704072 

    Tieto is a leading software and digital engineering services company with global market reach and capabilities. We provide customers across different industries with mission-critical solutions through our specialized software businesses Tieto Caretech, Tieto Banktech and Tieto Indtech, as well as our Tieto Tech Consulting business. Our approximately 15,000 talented vertical software, design, cloud and AI experts are dedicated to empowering our customers to succeed and innovate with the latest technology. 

    Tieto’s annual revenue is approximately EUR 2 billion. The company’s shares are listed on the NASDAQ exchange in Helsinki and Stockholm, as well as on Oslo Børs. www.tieto.com 
