Blog

  • Tiny protein pairs may hold the secret to life’s origin

    Tiny protein pairs may hold the secret to life’s origin

    Genes are the building blocks of life, and the genetic code provides the instructions for the complex processes that make organisms function. But how and why did it come to be the way it is? A recent study from the University of Illinois Urbana-Champaign sheds new light on the origin and evolution of the genetic code, providing valuable insights for genetic engineering and bioinformatics.

    “We find the origin of the genetic code mysteriously linked to the dipeptide composition of a proteome, the collective of proteins in an organism,” said corresponding author Gustavo Caetano-Anollés, professor in the Department of Crop Sciences, the Carl R. Woese Institute for Genomic Biology, and Biomedical and Translational Sciences of Carle Illinois College of Medicine at U. of I.

    Caetano-Anollés’ work focuses on phylogenomics, which is the study of evolutionary relationships between the genomes of organisms. His research team previously built phylogenetic trees mapping the evolutionary timelines of protein domains (structural units in proteins) and transfer RNA (tRNA), an RNA molecule that delivers amino acids to the ribosome during protein synthesis. In this study, they explored the evolution of dipeptide sequences (basic modules of two amino acids linked by a peptide bond), finding the histories of domains, tRNA, and dipeptides all match.

    Life on Earth began 3.8 billion years ago, but genes and the genetic code did not emerge until 800 million years later, and there are competing theories about how it happened.

    Some scientists believe RNA-based enzymatic activity came first, while others suggest proteins first started working together. The research of Caetano-Anollés and his colleagues over the past decades supports the latter view, showing that ribosomal proteins and tRNA interactions appeared later in the evolutionary timeline.

    Life runs on two codes that work hand in hand, Caetano-Anollés explained. The genetic code stores instructions in nucleic acids (DNA and RNA), while the protein code tells enzymes and other molecules how to keep cells alive and running. Bridging the two is the ribosome, the cell’s protein factory, which assembles amino acids carried by tRNA molecules into proteins. The enzymes that load the amino acids onto the tRNAs are called aminoacyl tRNA synthetases. These synthetase enzymes serve as guardians of the genetic code, monitoring that everything works properly.

    “Why does life rely on two languages – one for genes and one for proteins?” Caetano-Anollés asked. “We still don’t know why this dual system exists or what drives the connection between the two. The drivers couldn’t be in RNA, which is functionally clumsy. Proteins, on the other hand, are experts in operating the sophisticated molecular machinery of the cell.”

    The proteome appeared to be a better fit to hold the early history of the genetic code, with dipeptides playing a particularly significant role as early structural modules of proteins. There are 400 possible dipeptide combinations whose abundances vary across different organisms.

    The research team analyzed a dataset of 4.3 billion dipeptide sequences across 1,561 proteomes representing organisms from the three superkingdoms of life: Archaea, Bacteria, and Eukarya. They used the information to construct a phylogenetic tree and a chronology of dipeptide evolution. They also mapped the dipeptides to a tree of protein structural domains to see if similar patterns arose.

    In previous work, the researchers had built a phylogeny of tRNA that helped provide a timeline of the entry of amino acids into the genetic code, categorizing amino acids into three groups based on when they appeared. The oldest were Group 1, which included tyrosine, serine, and leucine, and Group 2, with 8 additional amino acids. These two groups were associated with the origin of editing in synthetase enzymes, which corrected inaccurate loading of amino acids, and an early operational code, which established the first rules of specificity, ensuring each codon corresponds to a single amino acid. Group 3 included amino acids that came later and were linked to derived functions related to the standard genetic code.

    The team had already demonstrated the co-evolution of synthetases and tRNA in relation to the appearance of amino acids. Now, they could add dipeptides to the analysis.

    “We found the results were congruent,” Caetano-Anollés explained. “Congruence is a key concept in phylogenetic analysis. It means that a statement of evolution obtained with one type of data is confirmed by another. In this case, we examined three sources of information: protein domains, tRNAs, and dipeptide sequences. All three reveal the same progression of amino acids being added to the genetic code in a specific order.”

    Another novel finding was duality in the appearance of dipeptide pairs. Each dipeptide combines two amino acids, for example, alanine-leucine (AL), while a symmetrical one — an anti-dipeptide — has the opposite combination of leucine-alanine (LA). The two dipeptides in a pair are complementary; they can be considered mirror images of each other.
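
    To make the dipeptide/anti-dipeptide idea concrete, here is a minimal Python sketch (not the study’s pipeline) that counts overlapping dipeptides in a toy set of protein sequences and pairs each one with its reversed counterpart. The sequences are illustrative placeholders, not data from the paper.

    ```python
    from collections import Counter

    def dipeptide_counts(sequence: str) -> Counter:
        """Count overlapping dipeptides (adjacent residue pairs) in one protein sequence."""
        return Counter(sequence[i:i + 2] for i in range(len(sequence) - 1))

    # Toy "proteome": two short illustrative sequences, not data from the study.
    proteome = ["MALWMRLLPLLALLALWGPDPAAA", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]

    totals = Counter()
    for seq in proteome:
        totals.update(dipeptide_counts(seq))

    # Compare each dipeptide with its anti-dipeptide (the reversed pair, e.g. AL vs. LA).
    for dipep in sorted(totals):
        anti = dipep[::-1]
        if dipep < anti:  # print each complementary pair only once
            print(f"{dipep}: {totals[dipep]:2d}    {anti}: {totals.get(anti, 0):2d}")
    ```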

    “We found something remarkable in the phylogenetic tree,” Caetano-Anollés said. “Most dipeptide and anti-dipeptide pairs appeared very close to each other on the evolutionary timeline. This synchronicity was unanticipated. The duality reveals something fundamental about the genetic code with potentially transformative implications for biology. It suggests dipeptides were arising encoded in complementary strands of nucleic acid genomes, likely minimalistic tRNAs that interacted with primordial synthetase enzymes.”

    Dipeptides did not arise as arbitrary combinations but as critical structural elements that shaped protein folding and function. The study suggests that dipeptides represent a primordial protein code emerging in response to the structural demands of early proteins, alongside an early RNA-based operational code. This process was shaped by co-evolution, molecular editing, catalysis, and specificity, ultimately giving rise to the synthetase enzymes, the modern guardians of the genetic code.

    Uncovering the evolutionary roots of the genetic code deepens our understanding of life’s origin, and it informs modern fields such as genetic engineering, synthetic biology, and biomedical research.

    “Synthetic biology is recognizing the value of an evolutionary perspective. It strengthens genetic engineering by letting nature guide the design. Understanding the antiquity of biological components and processes is important because it highlights their resilience and resistance to change. To make meaningful modifications, it is essential to understand the constraints and underlying logic of the genetic code,” Caetano-Anollés said.

    The paper, “Tracing the origin of the genetic code and thermostability to dipeptide sequences in proteomes,” is published in the Journal of Molecular Biology. Authors include Minglei Wang, M. Fayez Aziz, and Gustavo Caetano-Anollés.

    The study was supported by grants from the National Science Foundation (MCB-0749836 and OISE-1132791), the United States Department of Agriculture (ILLU-802-909 and ILLU-483-625) and Blue Waters supercomputer allocations from the National Center for Supercomputing Applications to Caetano-Anollés.

    Continue Reading

  • Kebinatshipi becomes Botswana's first men's world champion with 400m win in Tokyo | News | Tokyo 25 – worldathletics.org

    1. Kebinatshipi becomes Botswana’s first men’s world champion with 400m win in Tokyo | News | Tokyo 25  worldathletics.org
    2. How to watch 200m final at World Athletics Championships 2025  TechRadar
    3. World Athletics Championships Tokyo 2025: Busang Collen Kebinatshipi smokes another world lead to win men’s 400m gold  Olympics.com
    4. ‘The Dream’ sprints to silver medal  Trinidad Express Newspapers
    5. Team SA pin medal hopes on Zakithi Nene in 400m as Wayde van Niekerk cruise in 200m  dailynews.co.za

    Continue Reading

  • EU research funding should focus on ‘sizeable priority programmes,’ Draghi says

    The €175 billion budget proposed by the European Commission for the next iteration of Horizon Europe “is welcome,” but the funding could “fall short” of its goal to boost the EU’s competitive advantage in advanced technologies, according to Mario Draghi, a former president of the European Central Bank and the godfather of the EU’s competitiveness agenda.

    Draghi made the comments on September 16, at an event marking one year since the publication of his 401-page report warning Europe that its global relevance is set to decline unless member states and Brussels agree to push through swift reforms and raise investments in cutting edge technologies. 

    The report prompted a flurry of proposals, including a future restructuring of the EU budget, which signalled that the Commission is indeed putting economic competitiveness at the top of its political agenda.

    However, Draghi said the Commission’s actions have not matched the ambitions his report set out a year ago, even as rapid geopolitical shifts since the start of Donald Trump’s second term in the White House make his doomsday diagnosis even more acute.

    In July, the Commission presented plans to almost double Horizon Europe’s budget to €175 billion as part of a wider €409 billion European Competitiveness Fund (ECF), which is set to be launched under the next long-term budget for 2028-34. 

    But, according to Draghi, doubling the budget will not help the EU close the innovation gap with China and the US “unless the additional resources are concentrated into sizeable priority programmes.” 


    In his speech, Draghi said that the EU executive, together with member states, should agree on and implement more quickly the “28th regime” for start-ups, a new EU-wide incorporation method that would reduce administrative friction and allow innovative businesses to operate, trade and raise money across all 27 member states.

    Draghi acknowledged that the Commission is already moving in this direction but pointed to “uncertain backing from member states” as the main reason for an underwhelming start. “The first step towards the 28th regime will likely be limited to a digital business identity,” he said.  

    Budget uncertainty

    Meanwhile, at another Brussels conference on September 16, research Commissioner Ekaterina Zaharieva did not seem confident that the €175 billion figure will survive two years of negotiations with national governments. “Hopefully, during the negotiations, we are going to keep this budget,” Zaharieva said.

    However, she added that the Commission will not wait until 2028, when the next EU budget kicks in, to make changes to the way it coordinates its R&D. It is already allocating large chunks of R&D funding to industries that Draghi deemed existential for the competitiveness of the EU, with €1 billion going to research and innovation projects that would help Europe’s car manufacturers catch up with China on electric vehicles.

    Zaharieva also urged the private sector to chip in more, as private R&D investment in Europe is much lower than in the US and China, while public investments are at comparable levels. The Commission will use its main tool for national financial reviews to advise governments on how they could boost private R&D investment through tax exemptions, she said.

    Continue Reading

  • Huawei challenges Nvidia with new AI chip technology

    Huawei challenges Nvidia with new AI chip technology

    Huawei reports that it has developed its own HBM memory for the next generation of AI chips. The company presented the technology as an important step in increasing the performance of its Ascend processors and positioning itself as a serious competitor to Nvidia.

    HBM, or High-Bandwidth Memory, plays a crucial role in the operation of modern AI chips. By stacking DRAM layers vertically, signal paths become shorter and the chip’s bandwidth increases significantly. This not only delivers higher performance, but also reduces energy consumption for data-intensive tasks such as training and applying large language models. Because the memory is placed directly next to the processor, unnecessary data movement is minimized.

    This step is particularly important for Huawei, as US sanctions prevent it from accessing HBM technology from foreign suppliers. With its own solution, the company aims to break this dependency and strengthen its technological autonomy.

    The first generation consists of two variants. The HiBL 1.0 has a bandwidth of 1.6 terabytes per second and a capacity of 128 gigabytes. This version will be used for the Ascend 950PR, which will be launched in the first quarter of next year. The memory supports a variety of low-precision data types, including FP8 and MXFP8, and is designed to provide improved vector computing power and double the number of interconnections.

    In addition, there is the HiZQ 2.0, intended for the Ascend 950DT. This variant has a bandwidth of 4 terabytes per second and a capacity of 144 gigabytes. According to Huawei, the emphasis here is on accelerating inference and improving decoding performance.
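
    For a rough sense of what those numbers mean, the back-of-envelope sketch below computes the minimum time needed to stream each memory’s full contents once at its quoted peak bandwidth, a bound that matters when model weights must be read on every inference step. It assumes decimal units (1 TB = 1,000 GB) and ignores real-world efficiency losses.

    ```python
    # Minimum time to stream the full HBM contents once at the quoted peak bandwidth.
    # Assumes decimal units (1 TB = 1,000 GB); real throughput depends on access patterns.
    variants = {
        "HiBL 1.0 (Ascend 950PR)": {"bandwidth_tb_s": 1.6, "capacity_gb": 128},
        "HiZQ 2.0 (Ascend 950DT)": {"bandwidth_tb_s": 4.0, "capacity_gb": 144},
    }

    for name, spec in variants.items():
        seconds = (spec["capacity_gb"] / 1000) / spec["bandwidth_tb_s"]
        print(f"{name}: ~{seconds * 1000:.0f} ms to read all {spec['capacity_gb']} GB once")
    ```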

    New SuperPod technology

    At the same time, Huawei presented the new SuperPod technology, which allows up to 15,488 graphics cards with Ascend chips to be linked together. The company states that it now has a supercluster with approximately one million cards in operation. This approach is intended to compensate for the fact that a single Huawei chip is less powerful than Nvidia’s most advanced AI processors. By bringing chips together in large clusters, Huawei aims to deliver competitive performance.
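
    Taking the two figures above at face value, a quick sanity check relates the quoted SuperPod size to the claimed supercluster; the arithmetic is illustrative only and assumes fully populated pods.

    ```python
    # Rough sanity check on the quoted figures: how many fully populated SuperPods
    # would it take to reach a cluster of roughly one million Ascend cards?
    cards_per_superpod = 15_488
    cluster_cards = 1_000_000      # "approximately one million cards"

    pods_needed = cluster_cards / cards_per_superpod
    print(f"~{pods_needed:.0f} SuperPods for a ~1M-card cluster")  # roughly 65
    ```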

    The manufacturer has also announced a roadmap for the coming years. The Ascend 950PR, which will be released early next year, will be followed by the 950DT at the end of 2026, the 960 at the end of 2027, and the 970 at the end of 2028. This underscores Huawei’s ambition to gain market share in the AI chip market in the coming years.

    With the combination of self-developed HBM memory, cluster technology, and a new chip series, Huawei is taking a clear step toward challenging Nvidia. The introductions show that, despite Western sanctions, the Chinese chip industry is steadily continuing to develop its own alternatives.

    Continue Reading

  • As obesity management medications explode in popularity, obesity experts issue caution on who should prescribe them – Temple University

    1. As obesity management medications explode in popularity, obesity experts issue caution on who should prescribe them  Temple University
    2. Beyond the scale  wng.org
    3. What Hospitalists Should Know About GLP-1s  Medscape
    4. The Weight of Progress: A New Era Dawns for Anti-Obesity Drugs, Reshaping Markets and Health  FinancialContent
    5. 1 in 8 Americans have already tried Ozempic and similar weight loss medications  ScienceDaily

    Continue Reading

  • Oscar Piastri insists McLaren ‘know how we’re going to go racing going forward’ following Monza team order debate

    Oscar Piastri insists McLaren ‘know how we’re going to go racing going forward’ following Monza team order debate

    Oscar Piastri has voiced his “trust” in McLaren’s handling of the battle for the Formula 1 Drivers’ title, after being ordered to allow team mate Lando Norris past at Monza.

    McLaren sparked debate at the Italian Grand Prix when instructing Piastri to allow his title rival past for second place following a slow pit stop for Norris – this after the latter had suggested Piastri stop first to prevent an unlikely undercut from Charles Leclerc.

    Prior to this weekend’s Azerbaijan Grand Prix, McLaren discussed their ‘papaya rules’ – the team’s code for their rules of engagement on track – with both drivers said to be “aligned” on racing scenarios going forward.

    “Naturally, there have been thoughts, yes,” said Piastri. “We’ve had good discussions with the team.

    “Obviously, it’s a highly talked about moment, but we’ve had a lot of discussions, have clarified a lot of things, and we know how we’re going to go racing going forward, which is the most important thing. What’s happened is done, and I’m excited to go racing here.”

    To this point, McLaren have maintained a focus on securing the Teams’ Championship for a second year running, an achievement which could be sealed this weekend with a record-breaking seven events to go.

    Pressed on whether he expects less interference from the pit wall after this consideration is out of the way, Piastri added: “Not necessarily because of the Constructors’ Championship.

    “But I think we’ve had a lot of discussions about how we want to go racing and a lot of that is to stay for us, because ultimately, if we give out that information, then we become very easy targets to pick off because everyone knows what we’re going to do. That’s all very aligned with all of us, but stays in-house.”

    While unwilling to detail wider scenarios, points-leader Piastri did confirm an expectation that papaya rules would be invoked should the McLaren pairing find themselves in “exactly the same scenario”.

    Conceding a belief that he didn’t deserve to finish higher than third given his pace across the weekend, Piastri had an element of sympathy for his team, highlighting how the on-track scenario placed McLaren between a rock and a hard place.

    “If we had done the opposite thing, then you’d have had the opposite half of the fans saying that it was wrong. Ultimately, there’s no correct decision in that,” he said.

    “Am I surprised? Not really. Obviously, it’s a big moment from the race and I feel like a lot of fans are quite quick to jump on things that are deemed controversial. I’m not that surprised, but I do think that we have enough freedom to control our own destiny in the championship.”

    Continue Reading

  • Nvidia to invest $5B in Intel and develop chips with onetime rival

    Nvidia to invest $5B in Intel and develop chips with onetime rival

    Nvidia, the semiconductor company powering the artificial intelligence revolution, said Thursday that it was buying a $5 billion stake in ailing rival Intel.

    The two companies will also begin a partnership to develop chips together for PCs and data centers.

    Intel shares surged more than 25% on the announcement, lifting its stock market value to about $147 billion. The stock is now up about 55% this year. Nvidia, valued at more than $4.2 trillion, advanced 3%.

    The announcement Thursday from Nvidia comes just weeks after the U.S. government purchased a 10% stake in Intel, worth nearly $9 billion, and after Japan’s SoftBank invested $2 billion.

    “This is a game changer deal for Intel as it now brings them front and center into the AI game,” wrote tech analyst Dan Ives in a note Thursday. “Along with the recent U.S. Government investment for 10% this has been a golden few weeks for Intel after years of pain and frustration for investors.”

    “Today’s announcement further strengthens the US lead in the AI Arms Race against China as Intel now goes from a laggard to a catalyst,” Ives concluded.

    It is highly unusual for the U.S. government to hold positions in private companies outside of major financial crises.

    Intel was once the standard-bearer for semiconductors for computers, servers and other electronics. But the company has been struggling in recent years with multiple CEO changes, technical blunders and strategy shifts. The firm fell far behind Nvidia and other rivals such as AMD and Broadcom in the mobile phone space and the AI arms race.

    Sales and profit margins at the company have been hit hard as Intel lost its dominance.

    There are major stakes for all chipmakers, the stock market and the economy in this technically challenging industry. The White House has viewed AI and chipmaking as a top national security priority.

    Intel CEO Lip-Bu Tan, named in March, faced a call from President Donald Trump in August to resign. Trump said Tan was “highly conflicted” after Sen. Tom Cotton, R-Ark., sent a letter to the company expressing “concerns” about Tan’s past work for Chinese firms.

    “There is no other solution to this problem,” Trump added in the social media post.

    Days later, Tan and Intel management met with Trump in the Oval Office. Trump’s tone toward Tan changed dramatically after that encounter. “The meeting was a very interesting one. His success and rise is an amazing story,” Trump wrote in a Truth Social post.

    A week later, the administration and Intel announced the unprecedented investment in the company, using money from the Biden-era CHIPS Act.

    At the same time, Nvidia CEO Jensen Huang has been a familiar face at Trump administration events. On Wednesday night, Huang attended the state dinner at Windsor Castle alongside Trump and King Charles III.

    As part of Trump’s visit to the U.K., Nvidia announced it was plowing more than $14 billion into AI and data center infrastructure around the country.

    At a news conference Thursday alongside U.K. Prime Minister Keir Starmer, Trump gave Huang a shoutout: “You’re taking over the world, Jensen.”

    Continue Reading

  • FUJIFILM Biotechnologies Expands Strategic Partnership with argenx to Include U.S. Manufacturing Operations

    FUJIFILM Biotechnologies Expands Strategic Partnership with argenx to Include U.S. Manufacturing Operations

    HOLLY SPRINGS, NORTH CAROLINA – September 18, 2025 – FUJIFILM Biotechnologies, a world-leading contract development and manufacturing organization for biologics, vaccines, and advanced therapies, today announced a significant expansion of its global partnership with argenx SE, a global immunology company. As part of the expanded agreement, FUJIFILM Biotechnologies will initiate manufacturing of argenx’ drug substance for efgartigimod at the Holly Springs, North Carolina, site in 2028.

    argenx is the first announced tenant in FUJIFILM Biotechnologies’ Phase II expansion in Holly Springs, which will add 8 x 20,000-liter (L) mammalian cell culture bioreactors to the site’s existing 8 x 20,000 L bioreactors.

    Efgartigimod is a monoclonal antibody (mAb) fragment that targets the neonatal Fc receptor (FcRn) in patients with severe autoimmune disease. It is approved globally (as VYVGART® and VYVGART® Hytrulo) for the treatment of adults with generalized myasthenia gravis (gMG) and chronic inflammatory demyelinating polyneuropathy (CIDP) – both chronic autoimmune neuromuscular diseases characterized by significant muscle weakness.

    With the expanded global manufacturing agreement, argenx will benefit from FUJIFILM Biotechnologies’ global kojoX™ network, which provides local-for-local supply, manufacturing in close proximity to patients. Through kojoX, the industry’s largest interconnected modular network, FUJIFILM Biotechnologies offers flexible capacity at clinical and commercial scales from its manufacturing sites in the United States, the United Kingdom, and Denmark, as well as the FUJIFILM Group’s site in Japan.

    “This partnership with argenx marks our first global end-to-end program in support of a customer utilizing our kojoX modular network of facilities. By expanding manufacturing in the United States, we will help to meet argenx’ global supply chain needs for efgartigimod,” said Lars Petersen, president and chief executive officer, FUJIFILM Biotechnologies. “We are honored to support the manufacturing of this life-impacting therapy to supply to patients in need.”

    “Our expanded partnership with FUJIFILM Biotechnologies at its Holly Springs site adds to our existing U.S. manufacturing footprint and further strengthens our global supply chain,” said Filip Borgions, chief technology innovation officer, argenx. “The kojoX concept enables consistent capabilities across the U.S. and Europe, allowing us to manufacture medicines in the U.S. for American patients while supporting our broader global reach. We’re excited to partner with the FUJIFILM Biotechnologies team to unlock the full potential of the kojoX platform.”

    “Our kojoX manufacturing approach provides our global biopharmaceutical partners with the flexibility and agility to strengthen supply chain resilience, supporting a seamless delivery of critical therapies to patients worldwide,” added Petersen.

    Continue Reading

  • Something Deep Within the Earth is Altering Our Planet’s Gravity—and Satellite Data May Hold Clues to the Mystery

    Something Deep Within the Earth is Altering Our Planet’s Gravity—and Satellite Data May Hold Clues to the Mystery

    Processes occurring deep within the Earth could be responsible for our planet’s changing gravitational field, according to new research that scoured orbital data for clues to the mystery.

    The shift occurred nearly two decades ago, between 2006 and 2008, but went unnoticed at the time. Only through a recent reexamination of the data were scientists able to detect gravitational variations likely linked to processes near the Earth’s core.

    Action at the Earth’s Core

    For decades, scientists have studied Earth’s core, though direct observation remains impossible. The idea that rocks at the core–mantle boundary could be responsible is still a hypothesis, but one consistent with what researchers have inferred from indirect observations of this hidden region.

    “It’s a really new observation,” said co-author Isabelle Panet, a geophysicist at the University Gustave Eiffel in Paris, about using observational data as the team has in their new work published in Geophysical Research Letters.

    Earth’s layers each have distinct properties: the crust is brittle, the mantle is solid, and the outer core is molten. Where these boundaries meet, some of the planet’s most powerful forces—earthquakes, the magnetic field, and others—are generated.

    Collecting Gravitational Field Data

    The findings draw on information from GRACE, the U.S.–German Gravity Recovery and Climate Experiment, which consisted of two satellites that orbited Earth in formation from 2002 to 2017. GRACE measured variations in gravity by detecting tiny changes in the satellites’ positions as they responded to mass concentrations on Earth, such as mountain ranges.

    Originally, GRACE was designed to monitor large-scale water displacement, including groundwater depletion and glacial melt. Panet, however, had already been applying the data to study mass changes preceding large earthquakes. In this case, the analysis probed even deeper, down to 2,900 kilometers at the core–mantle boundary.
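
    The relationship GRACE exploits is, at heart, Newton’s law: a mass redistribution ΔM at distance r perturbs gravitational acceleration by Δg = GΔM/r². The sketch below is a purely illustrative back-of-envelope with a hypothetical mass anomaly placed at the core–mantle boundary depth quoted above; it is not the study’s inversion method and ignores the satellites’ altitude.

    ```python
    # Back-of-envelope: surface gravity perturbation from a point-mass anomaly near the
    # core-mantle boundary, using Newton's law dg = G * dM / r^2. The anomalous mass
    # below is hypothetical and purely illustrative, not a figure from the study.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    r = 2.9e6            # distance to the anomaly, m (~2,900 km, the depth quoted above)
    delta_mass = 1e16    # hypothetical mass redistribution, kg

    delta_g = G * delta_mass / r**2                    # m/s^2
    print(f"dg ~ {delta_g:.1e} m/s^2 ({delta_g / 1e-8:.1f} microGal)")
    ```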

    A Strange Signal

    The team’s analysis zeroed in on an unusual signal located off the coast of Africa. Peaking in 2007, the detection defied the researchers’ first attempt at an explanation when it couldn’t be correlated to any shifting surface water. 

    “So at least partially, there has to be an origin within the solid Earth,” Panet says. “It has to come from very deep.”

    Other satellite data monitoring Earth’s magnetic field showed disturbances in the same region and time period, suggesting a possible link. The researchers propose that perovskite—a mineral common in mantle rocks—underwent a phase transition under extreme pressure. This change could have increased rock density, triggering shifts in surrounding material and ripples that reached as far down as the core–mantle boundary. These ripples may have deformed the core by up to 10 centimeters, altering molten flow and affecting the planet’s magnetic field. Though the hypothesis best fits the current data, further research will be required to confirm it.



    “For the first time, we have convincing evidence of dynamic processes at the base of the mantle that are occurring quickly enough to study as they happen,” said Barbara Romanowicz, a seismologist at the University of California, Berkeley.

    While the new findings are unique, Panet and her colleagues now plan to examine data from satellites still in orbit to see if similar events can be found.

    The paper, “GRACE Detection of Transient Mass Redistributions During a Mineral Phase Transition in the Deep Mantle,” appeared in Geophysical Research Letters on August 28, 2025.

    Ryan Whalen covers science and technology for The Debrief. He holds an MA in History and a Master of Library and Information Science with a certificate in Data Science. He can be contacted at ryan@thedebrief.org, and follow him on Twitter @mdntwvlf.

    Continue Reading

  • Neuroimaging in traumatic brain injury: a bibliometric analysis | International Journal of Emergency Medicine

    Neuroimaging in traumatic brain injury: a bibliometric analysis | International Journal of Emergency Medicine

    Traumatic Brain Injury (TBI) is a significant cause of death and disability [1, 2]. It is caused by an external force such as a blow, jolt, or penetrating injury, resulting in bleeding, swelling, and damage to brain tissue. Globally, an estimated 69 million people sustain a TBI each year [3, 4]. This burden falls disproportionately on low- and middle-income countries, where road traffic collisions and falls are more prevalent [5]. In the United States, there are over 25 million injury-related emergency department visits annually. According to the Centers for Disease Control and Prevention (CDC), TBIs are a major cause of death and disability nationwide, with approximately 3 to 4 million new cases annually [2, 6].

    Young adults aged 15 to 24 account for the highest number of emergency department visits related to TBI. While most TBIs are classified as mild, severe injuries can lead to prolonged unconsciousness, memory loss, seizures, and sleep disturbances [2]. The Glasgow Coma Scale (GCS) is routinely used in the initial evaluation to assess level of consciousness (LOC) but does not provide detailed information about the injury itself. Beyond the clinical implications, the global economic impact of TBI is estimated to exceed $400 billion annually, placing a significant burden not only on individuals and their families but also on healthcare systems [3, 4, 7].

    Depending on the force and its effect, TBIs can range widely in severity and lasting symptoms, as noted in Table 1 [2, 8]. Specifically, the GCS, duration of LOC, presence and length of post-traumatic amnesia (PTA), and imaging findings help define the classification of TBI. The majority of TBI cases (70–90%) are considered mild and are typically limited to headaches, dizziness, confusion, and other symptoms that do not suggest permanent or long-term damage [2].

    Table 1 Classification of TBI
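
    Table 1 is not reproduced here, but the severity bands it summarizes follow the conventional Glasgow Coma Scale cut-offs (mild 13–15, moderate 9–12, severe 3–8), refined by duration of LOC, PTA, and imaging findings. The snippet below is a minimal sketch of the GCS mapping alone, assuming those standard cut-offs; it is not a clinical tool.

    ```python
    def classify_tbi_by_gcs(gcs: int) -> str:
        """Map a Glasgow Coma Scale score to the conventional TBI severity band.
        Mild: 13-15, moderate: 9-12, severe: 3-8. Full classification also weighs
        duration of LOC, post-traumatic amnesia, and imaging findings."""
        if not 3 <= gcs <= 15:
            raise ValueError("GCS scores range from 3 to 15")
        if gcs >= 13:
            return "mild"
        if gcs >= 9:
            return "moderate"
        return "severe"

    print(classify_tbi_by_gcs(14))  # mild
    print(classify_tbi_by_gcs(10))  # moderate
    print(classify_tbi_by_gcs(6))   # severe
    ```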

    Neuroimaging techniques identify the location and characteristics of a TBI and inform prognosis [6, 9,10,11,12,13]. Noncontrast CT is the neuroimaging scan of choice for quickly and effectively locating hemorrhages, providing context when determining the severity of the injury or the need for surgical intervention. MRI scans, though less commonly used, have proved more effective at locating non-hemorrhagic or micro-hemorrhagic injuries that may not appear severe at the time of imaging.

    The heterogeneity of TBI cases makes it difficult to standardize treatments at an individual level, and there is no consensus on guidelines to determine when neuroimaging is necessary for a TBI patient [14]. Neuroimaging use for TBI management is a highly debated and researched field, with decision rules, such as the Canadian CT Head Rule, developed with the purpose of limiting neuroimaging for head injuries [15]. At the same time, there is evidence to suggest that neuroimaging use should be increased for acute care of mild TBIs, due to its value in predicting the occurrence of post-concussion syndrome, ED readmission, and length of hospital admission [16]. This bibliometric analysis aims to provide an overview of the research done on neuroimaging use for TBI patients over the past four decades, looking at where the research originates, what categories comprise it, and where trends are pointing.
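
    As a rough illustration of the kind of tallying such an analysis involves, the sketch below groups a handful of made-up bibliographic records by year, country, and imaging category using pandas. The column names and records are assumptions for the example, not the dataset analyzed here.

    ```python
    import pandas as pd

    # Hypothetical bibliographic records (e.g. an export from Web of Science or Scopus);
    # the columns and values below are illustrative, not the study's dataset.
    records = pd.DataFrame({
        "year":     [1995, 2004, 2012, 2012, 2019, 2023],
        "country":  ["USA", "Canada", "USA", "China", "UK", "China"],
        "category": ["CT", "CT", "MRI", "CT", "MRI", "MRI"],
    })

    papers_per_year = records.groupby("year").size()          # publication trend over time
    papers_per_country = records["country"].value_counts()    # where research originates
    papers_per_category = records["category"].value_counts()  # which imaging categories dominate

    print(papers_per_year, papers_per_country, papers_per_category, sep="\n\n")
    ```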

    Continue Reading