Author: admin

  • Virtual Brain Inference (VBI), a flexible and integrative toolkit for efficient probabilistic inference on whole-brain models

    Understanding the complex dynamics of the brain and their neurobiological underpinnings, with the potential to advance precision medicine (Falcon et al., 2016; Tan et al., 2016; Vogel et al., 2023; Williams and Whitfield Gabrieli, 2025), is a central goal in neuroscience. Modeling these dynamics provides crucial insights into causality and the mechanisms underlying both normal brain function and various neurological disorders (Breakspear, 2017; Wang et al., 2023b; Ross and Bassett, 2024). By integrating the average activity of large populations of neurons (e.g. neural mass models; Wilson and Cowan, 1972; Jirsa and Haken, 1996; Deco et al., 2008; Jirsa et al., 2014; Montbrió et al., 2015; Cook et al., 2022) with information provided by structural imaging modalities (i.e. the connectome; Honey et al., 2009; Sporns et al., 2005; Schirner et al., 2015; Bazinet et al., 2023), whole-brain network modeling has proven to be a powerful and tractable approach for simulating brain activity and emergent dynamics as recorded by functional imaging modalities (such as (s)EEG, MEG, and fMRI; Sanz-Leon et al., 2015; Schirner et al., 2022; Amunts et al., 2022; D’Angelo and Jirsa, 2022; Patow et al., 2024; Hashemi et al., 2025).

    Whole-brain models are well established in network neuroscience (Sporns, 2016; Bassett and Sporns, 2017) for understanding brain structure and function (Ghosh et al., 2008; Honey et al., 2010; Park and Friston, 2013; Melozzi et al., 2019; Suárez et al., 2020; Feng et al., 2024; Tanner et al., 2024) and for investigating the mechanisms underlying brain dynamics at rest (Deco et al., 2011; Wang et al., 2019; Ziaeemehr et al., 2020; Kong et al., 2021), in normal aging (Lavanga et al., 2023; Zhang et al., 2024), and in altered states such as anesthesia and loss of consciousness (Barttfeld et al., 2015; Hashemi et al., 2017; Luppi et al., 2023; Perl et al., 2023b). This class of computational models, also known as virtual brain models (Jirsa et al., 2010; Sanz Leon et al., 2013; Sanz-Leon et al., 2015; Schirner et al., 2022; Jirsa et al., 2023; Wang et al., 2024), has shown remarkable capability in delineating the pathophysiological causes of a wide range of brain diseases, such as epilepsy (Jirsa et al., 2017; Proix et al., 2017; Wang et al., 2023b), multiple sclerosis (Wang et al., 2024; Mazzara et al., 2025), Alzheimer’s disease (Yalçınkaya et al., 2023; Perl et al., 2023a), Parkinson’s disease (Jung et al., 2022; Angiolelli et al., 2025), neuropsychiatric disorders (Deco and Kringelbach, 2014; Iravani et al., 2021), stroke (Allegra Mascaro et al., 2020; Idesis et al., 2022), and focal lesions (Rabuffo et al., 2025). In particular, they enable the personalized simulation of both normal and abnormal brain activity, along with the associated imaging recordings, thereby enabling stratification between healthy and diseased states (Liu et al., 2016; Patow et al., 2023; Perl et al., 2023a) and potentially informing targeted interventions and treatment strategies (Jirsa et al., 2017; Proix et al., 2018; Wang et al., 2023b; Jirsa et al., 2023; Hashemi et al., 2025). Although a few tools are available for forward simulations at the whole-brain level, for example the brain network simulator The Virtual Brain (TVB; Sanz Leon et al., 2013), there is a lack of tools for addressing the inverse problem, that is, finding the set of control (generative) parameters that best explains the observed data. This study aims to bridge this gap by addressing the inverse problem in large-scale brain networks, a crucial step toward making these models operable for clinical applications.

    Accurately and reliably estimating the parameters of whole-brain models remains a formidable challenge, mainly due to the high dimensionality and nonlinearity inherent in brain activity data, as well as the non-trivial effects of noise and network inputs. A large number of previous studies in whole-brain modeling have relied on optimization techniques that identify a single optimal parameter set by optimizing an objective function scoring the model’s performance against observed data (Wang et al., 2019; Kong et al., 2021; Cabral et al., 2022; Liu et al., 2023). This approach often involves minimizing metrics such as the Kolmogorov-Smirnov distance or maximizing the Pearson correlation between observed and generated data features such as functional connectivity (FC), functional connectivity dynamics (FCD), and/or power spectral density (PSD). Although fast, such an approach yields only point estimates and fails to capture the relationship between parameters and their associated uncertainty. This limits the generalizability of findings and hinders identifiability analysis, which explores the uniqueness of solutions. Furthermore, optimization algorithms can easily get stuck in local extrema, requiring multi-start strategies to address potential parameter degeneracies. These additional steps, while necessary, ultimately increase the computational cost. Critically, the estimation heavily depends on the form of the objective function defined for optimization (Svensson et al., 2012; Hashemi et al., 2018). These limitations can be overcome by employing Bayesian inference, which naturally quantifies the uncertainty in the estimation and the statistical dependencies between parameters, leading to more robust and generalizable models. Bayesian inference is a principled method for updating prior beliefs with information provided by data through the likelihood function, resulting in a posterior probability distribution that encodes all the information necessary for inferences and predictions. This approach has proven essential for understanding the intricate relationships between brain structure and function (Hashemi et al., 2021; Lavanga et al., 2023; Rabuffo et al., 2025), as well as for revealing the pathophysiological causes underlying brain disorders (Hashemi et al., 2023; Yalçınkaya et al., 2023; Wang et al., 2024; Wang et al., 2024; Hashemi et al., 2025; Hashemi et al., 2024).
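
    To make this concrete, the sketch below illustrates the kind of objective used in such optimization-based fitting. This is not VBI code; the function names, array shapes, and the equal weighting of the two terms are assumptions for illustration. It combines a Pearson-correlation term on FC with a Kolmogorov-Smirnov distance on FCD values, and an optimizer would return only the single parameter set that minimizes this scalar, with no uncertainty attached.

    ```python
    # Illustrative objective for optimization-based fitting (not VBI's API).
    # Assumes BOLD arrays of shape (n_regions, n_timepoints) and precomputed FCD values.
    import numpy as np
    from scipy.stats import pearsonr, ks_2samp

    def fc_upper_triangle(bold):
        """Region-by-region correlation matrix, vectorized over its upper triangle."""
        fc = np.corrcoef(bold)
        return fc[np.triu_indices_from(fc, k=1)]

    def objective(bold_obs, bold_sim, fcd_obs, fcd_sim):
        """Lower is better: (1 - FC correlation) plus KS distance between FCD distributions."""
        r, _ = pearsonr(fc_upper_triangle(bold_obs), fc_upper_triangle(bold_sim))
        ks, _ = ks_2samp(np.ravel(fcd_obs), np.ravel(fcd_sim))
        return (1.0 - r) + ks
    ```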

    In this context, simulation-based inference (SBI; Cranmer et al., 2020; Gonçalves et al., 2020; Hashemi et al., 2023; Hashemi et al., 2024) has gained prominence as an efficient methodology for conducting Bayesian inference in complex models where traditional inference techniques become inapplicable. SBI leverages computational simulations to generate synthetic data and employs advanced probabilistic machine learning methods to infer the joint distribution over parameters that best explains the observed data, along with the associated uncertainty. This approach is particularly well suited for Bayesian inference on whole-brain models, which often exhibit complex dynamics that are difficult to retrieve from neuroimaging data with conventional estimation techniques. Crucially, SBI circumvents the need for explicit likelihood evaluation and for the Markovian (sequential) property required by sampling-based methods. Markov chain Monte Carlo (MCMC; Gelman et al., 1995) is the gold-standard nonparametric technique, asymptotically exact, for sampling from a probability distribution. However, for Bayesian inference on whole-brain models given high-dimensional data, the likelihood function becomes intractable, rendering MCMC sampling computationally prohibitive. SBI offers significant advantages, such as parallelized simulation combined with amortized learning, making it effective for personalized inference from large datasets (Hashemi et al., 2024). Amortization in artificial neural networks refers to the idea of reusing learned computations across multiple tasks or inputs (Gershman and Goodman, 2014). Amortization in Bayesian inference refers to the process of training a shared inference network (e.g. a neural network), with an intensive upfront computational cost, to perform fast inference across many different observations. Instead of re-running inference for each new observation, the trained model can rapidly return posterior estimates, significantly reducing computational cost at test time. Following an initial computational cost for simulation and training to learn the posterior distributions, subsequent evaluation of new hypotheses can be conducted efficiently, without additional computational overhead for further simulations (Hashemi et al., 2023). Importantly, SBI sidesteps the convergence issues caused by complex geometries that are often encountered when using gradient-based MCMC methods (Betancourt and Girolami, 2013; Betancourt et al., 2014; Hashemi et al., 2020). It also substantially outperforms approximate Bayesian computation (ABC) methods, which rely on a threshold to accept or reject samples (Sisson et al., 2007; Beaumont et al., 2009; Gonçalves et al., 2020). Such a likelihood-free approach provides generic inference on complex systems, as long as we can provide three modules:

    1. A prior distribution, describing the possible range of parameters from which random samples can be easily drawn, that is, θ ∼ p(θ).

    2. A simulator in computer code that takes parameters as input and generates data as output, that is, x ∼ p(x | θ).

    3. A set of low-dimensional data features, which are informative of the parameters that we aim to infer.

    These elements provide us with a training data set {(θ_i, x_i)}, i = 1, …, N_sim, given a budget of N_sim simulations. Then, using a class of deep neural density estimators, such as masked autoregressive flows (MAFs; Papamakarios and Pavlakou, 2017) or neural spline flows (NSFs; Durkan et al., 2019), we can approximate the posterior distribution of parameters given a set of observed data, that is, p(θ | x_obs). Therefore, a versatile toolkit should be flexible and integrative, adeptly incorporating these modules to enable efficient Bayesian inference over complex models.
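
    The three ingredients map directly onto simulation-based inference software. The minimal sketch below uses the open-source sbi package rather than VBI’s own interface, and the combined simulator/feature function is a hypothetical placeholder; it only illustrates how prior samples and simulated features are paired to train an MAF-based posterior estimator and then queried for a new observation.

    ```python
    # Minimal SBI sketch using the `sbi` package (not VBI's own interface).
    import torch
    from sbi.inference import SNPE
    from sbi.utils import BoxUniform

    n_params, n_sims = 2, 1000

    # 1) Prior: a plausible range for the control parameters, theta ~ p(theta).
    prior = BoxUniform(low=torch.zeros(n_params), high=torch.ones(n_params))

    # 2) Simulator + 3) low-dimensional features, x ~ p(x | theta).
    def simulate_and_featurize(theta):
        # Placeholder: in practice, run the whole-brain model and extract FC/FCD/PSD.
        return theta + 0.1 * torch.randn_like(theta)

    theta = prior.sample((n_sims,))
    x = simulate_and_featurize(theta)

    # Train a neural density estimator (here an MAF) on (theta, x) pairs ...
    inference = SNPE(prior=prior, density_estimator="maf")
    density_estimator = inference.append_simulations(theta, x).train()
    posterior = inference.build_posterior(density_estimator)

    # ... then sample from the approximate posterior p(theta | x_obs).
    x_obs = simulate_and_featurize(prior.sample((1,)))
    samples = posterior.sample((5000,), x=x_obs)
    ```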

    To address the need for widely applicable, reliable, and efficient parameter estimation from different (source-localized) neuroimaging modalities, we introduce Virtual Brain Inference (VBI), a flexible and integrative toolkit for probabilistic inference at the whole-brain level. This open-source toolkit offers fast simulation through just-in-time (JIT) compilation of various brain models across different programming languages (Python/C++) and devices (CPUs/GPUs). It supports space-efficient storage of simulated data (HDF5/NPZ/PT), provides a memory-efficient loader for batched data, and facilitates the extraction of low-dimensional data features (FC/FCD/PSD). Additionally, it enables the training of deep neural density estimators (MAFs/NSFs), making it a versatile tool for inference on neural sources corresponding to (s)EEG, MEG, and fMRI recordings. VBI leverages high-performance computing, significantly enhancing computational efficiency through parallel processing of large-scale datasets, which would be impractical with current alternative methods. Although SBI has previously been used for low-dimensional parameter spaces (Gonçalves et al., 2020; Wang et al., 2024; Baldy et al., 2024), we demonstrate that it can scale to whole-brain models with high-dimensional unknown parameters, as long as informative data features are provided. VBI is now accessible on the cloud platform EBRAINS (https://ebrains.eu), enabling users to explore more realistic brain dynamics underlying brain (dys)function using Bayesian inference.
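
    As an illustration of the storage and batched-loading step, the sketch below saves parameter/feature pairs to HDF5 and streams them back in batches. The file layout, dataset names, and batch size are assumptions for illustration, not VBI’s exact storage format.

    ```python
    # Sketch of space-efficient storage and memory-friendly batched loading (assumed layout).
    import numpy as np
    import h5py

    theta = np.random.rand(10_000, 3)         # control parameters
    features = np.random.rand(10_000, 256)    # extracted data features (e.g. FC/FCD/PSD)

    with h5py.File("training_set.h5", "w") as f:
        f.create_dataset("theta", data=theta, compression="gzip")
        f.create_dataset("features", data=features, compression="gzip")

    def iter_batches(path, batch_size=512):
        """Yield (theta, features) batches without loading the full file into memory."""
        with h5py.File(path, "r") as f:
            n = f["theta"].shape[0]
            for start in range(0, n, batch_size):
                stop = min(start + batch_size, n)
                yield f["theta"][start:stop], f["features"][start:stop]

    for theta_batch, x_batch in iter_batches("training_set.h5"):
        pass  # e.g. feed each batch to the density-estimator training loop
    ```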

    In the following sections, we describe the architecture and workflow of the VBI toolkit and demonstrate its validation through a series of case studies using in silico data. We explore various whole-brain models corresponding to different types of brain recordings: whole-brain network models of Wilson-Cowan (Wilson and Cowan, 1972), Jansen-Rit (Jansen and Rit, 1995; David and Friston, 2003), and Stuart-Landau (Selivanov et al., 2012) for simulating neural activity associated with EEG/MEG signals; the Epileptor (Jirsa et al., 2014) related to stereo-electroencephalographic (sEEG) recordings; and the Montbrió (Montbrió et al., 2015) and Wong-Wang (Wong and Wang, 2006; Deco et al., 2013) models mapped to fMRI BOLD signals. Although these models represent source signals and could be applied to other modalities (e.g. Stuart-Landau representing generic oscillatory dynamics), we focus on their capabilities to perform optimally in specific contexts. For instance, some are better suited for encephalographic signals (e.g. EEG/MEG) due to their ability to preserve spectral properties, while others have been used for fMRI data, emphasizing their ability to capture dynamic features such as bistability and time-varying functional connectivity.

    VBI workflow

    Figure 1 illustrates an overview of our approach in VBI, which combines virtual brain models and SBI to make probabilistic predictions about brain dynamics from (source-localized) neuroimaging recordings. The inputs to the pipeline include structural imaging data (for building the connectome), functional imaging data such as (s)EEG/MEG and fMRI (as the target for fitting), and prior information specifying a plausible range over control parameters for generating random simulations. The main computational costs involve model simulations and data feature extraction. The output of the pipeline is the joint posterior distribution of control parameters (such as excitability, synaptic weights, or effective external input) that best explains the observed data. Since the approach is amortized (i.e. it learns across all combinations in the parameter space), it can be readily applied to any new data from a specific subject.

    The workflow of Virtual Brain Inference (VBI).

    This probabilistic approach is designed to estimate the posterior distribution of control parameters in virtual brain models from whole-brain recordings. (A) The process begins with constructing a personalized connectome using diffusion tensor imaging and a brain parcellation atlas, such as Desikan-Killiany (Desikan et al., 2006), Automated Anatomical Labeling (Tzourio-Mazoyer et al., 2002), or VEP (Wang et al., 2021). (B) The personalized virtual brain model is then assembled. Neural mass models describing the averaged activity of neural populations, in the generic form ẋ = f(x, θ, I_input), are placed at each brain region and interconnected via the structural connectivity matrix. Initially, the control parameters are randomly drawn from a simple prior distribution. (C) Next, VBI operates as a simulator that uses these samples to generate time series data associated with neuroimaging recordings. (D) We extract a set of summary statistics from the low-dimensional features of the simulations (FC, FCD, PSD) for training. (E) Subsequently, a class of deep neural density estimators is trained on pairs of random parameters and their corresponding data features to learn the joint posterior distribution of the model parameters. (F) Finally, the amortized network allows us to quickly approximate the posterior distribution for new (empirical) data features, enabling us to make probabilistic predictions that are consistent with the observed data.

    In the first step, non-invasive brain imaging data, such as T1-weighted MRI and diffusion-weighted MRI (DW-MRI), are collected for a specific subject (Figure 1A). T1-weighted MRI images are processed to obtain a brain parcellation, while DW-MRI images are used for tractography. Using the estimated fiber tracts and the brain regions defined by the parcellation, the connectome (i.e. the complete set of links between brain regions) is constructed by counting the fibers connecting each pair of regions. The structural connectivity (SC) matrix, with entries representing the connection strength between brain regions, forms the structural component of the virtual brain, which constrains the generation of brain dynamics and functional data at arbitrary brain locations (e.g. cortical and subcortical structures).

    Subsequently, each brain network node is equipped with a computational model of average neuronal activity, known as a neural mass model (see Figure 1B and Materials and methods). These models can be represented in the generic form of a dynamical system, ẋ = f(x, θ, I_input), with the system variables x (such as membrane potential and firing rate), the control parameters θ (such as excitability), and the input current I_input (such as stimulation). This integration of mathematical mean-field modeling (neural mass models) with anatomical information (the connectome) allows us to efficiently analyze functional neuroimaging modalities at the whole-brain level.
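
    To illustrate the generic form ẋ = f(x, θ, I_input) coupled through the SC matrix, the toy sketch below integrates a placeholder local dynamics with an Euler-Maruyama scheme. The local function f(·), the parameter values, and the noise level are arbitrary illustrative choices, not one of VBI’s implemented neural mass models.

    ```python
    # Toy network of coupled nodes with generic dynamics dx/dt = f(x, theta, I_input).
    import numpy as np

    def simulate(sc, theta, i_input, dt=0.01, n_steps=10_000, sigma=0.01, seed=0):
        """sc: (n, n) structural connectivity; theta: per-region control parameter."""
        rng = np.random.default_rng(seed)
        n = sc.shape[0]
        x = np.zeros(n)
        trace = np.empty((n_steps, n))
        for t in range(n_steps):
            coupling = sc @ x                                       # network input from connected regions
            dx = -x + np.tanh(theta * x + coupling + i_input)       # placeholder local dynamics f(.)
            x = x + dt * dx + np.sqrt(dt) * sigma * rng.standard_normal(n)
            trace[t] = x
        return trace

    sc = np.random.rand(4, 4)
    np.fill_diagonal(sc, 0.0)
    ts = simulate(sc, theta=np.full(4, 1.2), i_input=0.1)
    ```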

    To quantify the posterior distribution of control parameters given a set of observations, p(θ | x), we first need to define a plausible range for the control parameters based on background knowledge, p(θ), that is, a simple base distribution known as the prior. We draw random samples from the prior and provide them as input to the VBI simulator (implemented by the Simulation module) to generate simulated time series associated with neuroimaging recordings, as shown in Figure 1C. Subsequently, we extract low-dimensional data features (implemented by the Features module), as shown in Figure 1D for FC/FCD/PSD, to prepare the training dataset {(θ_i, x_i)}, i = 1, …, N_sim, with a budget of N_sim simulations. Then, we use a class of deep neural density estimators, such as MAF or NSF models, as schematically shown in Figure 1E, to learn the posterior p(θ | x). Finally, we can readily sample from p(θ | x_obs), which determines the probability distribution in parameter space that best explains the observed data.

    Figure 2 depicts the structure of the VBI toolkit, which consists of three main modules. The first module, referred to as the Simulation module, is designed for fast simulation of whole-brain models, such as Wilson-Cowan (Wilson-Cowan model), Jansen-Rit (Jansen-Rit model), Stuart-Landau (Stuart-Landau oscillator), Epileptor (Epileptor model), Montbrió (Montbrió model), and Wong-Wang (Wong-Wang model). These whole-brain models are implemented across various numerical computing libraries, such as CuPy (GPU-accelerated computing in Python), C++ (a high-performance systems programming language), Numba (a JIT compiler for accelerating Python code), and PyTorch (an open-source machine learning library for creating deep neural networks).


    Flowchart of the VBI Structure.

    This toolkit consists of three main modules: (1) The Simulation module, implementing various whole-brain models, such as Wilson-Cowan (WCo), Jansen-Rit (JR), Stuart-Landau (SL), Epileptor (EPi), Montbrió (MPR), and Wong-Wang (WW), across different numerical computing libraries (C++, CuPy, PyTorch, Numba). (2) The Features module, offering an extensive toolbox for extracting low-dimensional data features, such as spectral, temporal, connectivity, statistical, and information-theoretic features. (3) The Inference module, providing neural density estimators (such as MAF and NSF) to approximate the posterior over parameters.

    The second module, Features, provides a versatile tool for extracting low-dimensional features from simulated time series (see Comprehensive feature extraction). The features include, but are not limited to, spectral, temporal, connectivity, statistical, and information-theoretic features, along with the associated summary statistics. The third module focuses on Inference, that is, training the deep neural density estimators, such as MAF and NSF (see Simulation-based inference), to learn the joint posterior distribution of control parameters. See Figure 2—figure supplement 1 and Figure 2—figure supplement 2 for benchmarks comparing CPU/GPU and MAF/NSF performance, and Figure 2—figure supplement 3 for the estimation of the global coupling parameter across different whole-brain network models, evaluated under multiple configurations.
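
    As an example of the kind of features this module produces, the sketch below computes per-region power spectral densities via Welch’s method and an FCD matrix from sliding-window FC. Window length, step size, and function names are illustrative assumptions rather than VBI’s Features API.

    ```python
    # Sketch of two low-dimensional feature types: PSD (Welch) and FCD (sliding-window FC).
    import numpy as np
    from scipy.signal import welch

    def psd_features(ts, fs):
        """ts: (n_regions, n_timepoints). Returns frequencies and per-region Welch PSDs."""
        freqs, pxx = welch(ts, fs=fs, nperseg=min(256, ts.shape[1]))
        return freqs, pxx

    def fcd_matrix(bold, window=30, step=5):
        """Correlate FC patterns between sliding windows of a BOLD time series."""
        n_regions, n_t = bold.shape
        iu = np.triu_indices(n_regions, k=1)
        fcs = []
        for start in range(0, n_t - window + 1, step):
            fc = np.corrcoef(bold[:, start:start + window])
            fcs.append(fc[iu])
        fcs = np.array(fcs)          # (n_windows, n_pairs)
        return np.corrcoef(fcs)      # (n_windows, n_windows) FCD matrix
    ```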

    Continue Reading

  • Components of RNA among life’s building blocks found in NASA asteroid sample – Physics World

    Continue Reading

  • Rheinmetall and HENSOLDT sign framework agreement for SPEXER 2000 radars

    With the framework agreement now in place, both companies have laid the foundation for predictable, efficient and reliable cooperation. The agreement offers both sides a high degree of planning security and binding purchase conditions and ensures that core components of the SPEXER 2000 product range are available as needed and in resilient supply chains. Various companies within the Rheinmetall Group can place orders under the framework agreement.

    The SPEXER radar family offers high-performance surveillance radars covering various ranges for the automatic detection and classification of ground, sea and low-flying air targets. The SPEXER 2000 is used by the German Armed Forces in, among other applications, field camp protection (ASUL), the high-energy laser for drone defence (HoWiSM), and the air defence system for close- and very-short-range protection (NNbS).

    Together, Rheinmetall and HENSOLDT are making an important contribution to strengthening the air defence capabilities of the German Armed Forces and allied nations. The cooperation underscores the shared goal of strengthening the European defence industry, keeping critical technologies available and providing long-term support for the operational readiness of modern air defence systems.

    Continue Reading

  • Liouville Quantum Gravity Demonstrates Geodesic Confluence, Proving Almost All Geodesics Merge Before Reaching A Fixed Target

    The behaviour of lines representing the shortest paths between points, known as geodesics, within complex, randomly shaped spaces presents a long-standing challenge in theoretical physics. Manan Bhatia and Konstantinos Kavvadias demonstrate a…

    Continue Reading

  • A nearby Earth-size planet just got much more mysterious

    Of the seven Earth-sized planets circling the red dwarf star TRAPPIST-1, one world has become a special focus for astronomers. This planet, TRAPPIST-1e, orbits within the star’s “Goldilocks zone” — a region where temperatures could allow liquid…

    Continue Reading

  • expert reaction to study looking at benefits and risks of plant-based diets in children

    A study published in the Critical Reviews in Food Science & Nutrition looks at the risks and benefits of vegan…

    Continue Reading

  • Gut microbe Turicibacter shows promise as obesity therapeutic

    Medication alone won’t solve the global obesity crisis, claims the WHO’s director-general – yet GLP-1 therapies are now recommended by the global health body to support the billion people who live with obesity.

    But the weight-loss jabs are…

    Continue Reading

  • Lululemon boss to step down early next year

    The boss of Lululemon Athletica, the brand known for its expensive yoga leggings and other sports clothing, is to leave the company early next year.

    Calvin McDonald, the firm’s chief executive, will depart at the end of January after more than seven years at the helm.

    The decision comes amid a run of poor sales for Lululemon in the US, its main market, and a share price that has fallen almost 50% over the past year.

    However, this week the company upgraded its annual revenue forecast after better-than-expected sales in the past few months.

    Mr McDonald said the decision to leave the company was taken after discussions with the board.

    “As we near the end of our five-year strategy, and with our strong senior leadership team in place, we all agree that now is the time for a change,” he said in a LinkedIn post.

    While the Canadian company’s latest results revealed a boost to its sales internationally driven by its business in China, its performance in the Americas has been going in the opposite direction.

    The brand’s share price on the US Nasdaq index peaked in late 2023 and has been on a downward trend since. In September, its shares fell sharply after it warned of the impact tariffs imposed by US President Donald Trump would have on its business.

    Lululemon’s particular concern was over the ending of the so-called de-minimis exemption, a former duty-free loophole for low-cost goods entering America from countries such as China.

    Many of the Canadian company’s suppliers are based in China, Vietnam and other Asian countries. In September, it estimated the newly-imposed import taxes would cost it about $240m (£178.4m) this year.

    However, sales in China and around the rest of the world have been positive, driving its net revenues in the period to the start of November to $2.6bn.

    “As we enter the holiday season, we are encouraged by our early performance,” said Mr McDonald. However, he said that despite a good Thanksgiving period, demand had since slowed as consumers continued to look for cheaper products.

    Lululemon has faced increasing competition for its products from lower-priced rivals such as Vuori and Alo Yoga.

    Dan Coatsworth, head of markets at AJ Bell, told the BBC that competition had been “fierce”, and that the brand needed to “go back to the drawing board and work out ways to make its products ‘must-have’ items again”.

    Mr Coatsworth also said that under Mr McDonald the brand had gone through the “embarrassment of having to pull its Breezethrough product line after negative reviews and customer complaints about the leggings being uncomfortable to wear” last year.

    The company halted sales of its then newly launched $98 leggings last summer after some shoppers criticized the V-shaped back seam of the tights as “unflattering” and others said the seam at the top of the waistband dug into their waists.

    Lululemon was also mocked on social media in 2020 for promoting an event about how to “resist capitalism”.

    Lululemon named its finance boss Meghan Frank and chief commercial officer André Maestrini as co-interim chief executives while it searches for a new leader.

    Marti Morfitt, chair of the brand’s board, thanked Mr McDonald for “his visionary leadership building Lululemon into one of the strongest brands in retail”.

    “During his tenure, Calvin led Lululemon through a period of impressive revenue growth, with differentiated products and experiences that resonated with guests around the world.”

    Mr Coatsworth described Mr McDonald’s tenure as “akin to the highs and lows of an athlete hitting their peak and then rapidly going off the boil”.

    “He steered Lululemon to greatness as the athleisure trend boomed, with people happy to pay top dollar for posh leggings.

    “Then came a series of mistakes which were compounded by factors outside of the company’s control. It’s no wonder Lululemon is looking to bring in fresh leadership,” he added.

    Continue Reading