Category: 7. Science

  • Tech From NASA’s Hurricane-hunting TROPICS Flies on Commercial Satellites


    NASA science and American industry have worked hand-in-hand for more than 60 years, transforming novel technologies created with NASA research into commercial products like cochlear implants, memory-foam mattresses, and more. Now, a NASA-funded device for probing the interior of storm systems has been made a key component of commercial weather satellites.

    The novel atmospheric sounder was originally developed for NASA’s TROPICS (short for Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of SmallSats), which launched in 2023. Boston-based weather technology company Tomorrow.io integrated the same instrument design into some of its satellites.

    Atmospheric sounders allow researchers to gather data describing humidity, temperature, and wind speed — important factors for weather forecasting and atmospheric analysis. From low-Earth orbit, these devices help make air travel safer, shipping more efficient, and severe weather warnings more reliable.

    In the early 2000s, meteorologists and atmospheric chemists were eager to find a new science tool that could peer deep inside storm systems and do so multiple times a day. At the same time, CubeSat constellations (groupings of satellites each no larger than a shoebox) were emerging as promising, low-cost platforms for increasing the frequency with which individual sensors could pass over fast-changing storms, which improves the accuracy of weather models.

    The challenge was to create an instrument small enough to fit aboard a satellite the size of a toaster, yet powerful enough to observe the innermost mechanisms of storm development. Preparing these technologies required years of careful development that was primarily supported by NASA’s Earth Science Division.

    William Blackwell and his team at MIT Lincoln Laboratory in Cambridge, Massachusetts, accepted this challenge and set out to miniaturize vital components of atmospheric sounders. “These were instruments the size of a washing machine, flying on platforms the size of a school bus,” said Blackwell, the principal investigator for TROPICS. “How in the world could we shrink them down to the size of a coffee mug?”

    With a 2010 award from NASA’s Earth Science Technology Office (ESTO), Blackwell’s team created an ultra-compact microwave receiver, a component that can sense the microwave radiation within the interior of storms.

    The Lincoln Lab receiver weighed about a pound and took up less space than a hockey puck. This innovation paved the way for a complete atmospheric sounder instrument small enough to fly aboard a CubeSat. “The hardest part was figuring out how to make a compact back-end to this radiometer,” Blackwell said. “So without ESTO, this would not have happened. That initial grant was critical.”

    In 2023, that atmospheric sounder was sent into space aboard four TROPICS CubeSats, which have been collecting torrents of data on the interior of severe storms around the world.

    By the time TROPICS launched, Tomorrow.io developers knew they wanted Blackwell’s microwave receiver technology aboard their own fleet of commercial weather satellites. “We looked at two or three different options, and TROPICS was the most capable instrument of those we looked at,” said Joe Munchak, a senior atmospheric data scientist at Tomorrow.io.

    In 2022, the company worked with Blackwell to adapt his team’s design into a CubeSat platform about twice the size of the one used for TROPICS. A bigger platform, Blackwell explained, meant they could bolster the sensor’s capabilities.

    “When we first started conceptualizing this, the 3-unit CubeSat was the only game in town. Now we’re using a 6-unit CubeSat, so we have room for onboard calibration,” which improves the accuracy and reliability of gathered data, Blackwell said.

    Tomorrow.io’s first atmospheric sounders, Tomorrow-S1 and Tomorrow-S2, launched in 2024. By the end of 2025, the company plans to have a full constellation of atmospheric sounders in orbit. The company also has two radar instruments that were launched in 2023 and were influenced by NASA’s RainCube instrument — the first CubeSat equipped with an active precipitation radar.

More CubeSats lead to more accurate weather data because there are more opportunities each day — revisits — to collect data. “With a fleet size of 18, we can easily get our revisit rate down to under an hour, maybe even 40 to 45 minutes in most places. It has a huge impact on short-term forecasts,” Munchak said.
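Munchak's numbers can be checked with back-of-the-envelope arithmetic: if the satellites are evenly phased in time, the average gap between overpasses is simply one satellite's revisit interval divided by the fleet size. The 12-hour single-satellite figure below is an assumption chosen for illustration, not a number from the article.

```python
# Rough revisit-rate estimate for an evenly phased sounder constellation.
# ASSUMPTION: a single low-Earth-orbit sounder revisits a given spot about
# every 12 hours; the real interval varies with orbit and swath width.

def revisit_minutes(fleet_size: int, single_sat_revisit_hours: float = 12.0) -> float:
    """Average gap between overpasses if satellites are evenly spaced in time."""
    return single_sat_revisit_hours * 60.0 / fleet_size

print(revisit_minutes(18))  # 40.0 minutes, consistent with the "40 to 45 minutes" quoted above
```

Under that assumption, a fleet of 18 lands right in the 40-to-45-minute range Munchak describes.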

    Having access to an atmospheric sounder that had already flown in space and had more than 10 years of testing was extremely useful as Tomorrow.io planned its fleet. “It would not have been possible to do this nearly as quickly or nearly as affordably had NASA not paved the way,” said Jennifer Splaingard, Tomorrow.io’s senior vice president for space and sensors.

    The relationship between NASA and industry is symbiotic. NASA and its grantees can drive innovation and test new tools, equipping American businesses with novel technologies they may otherwise be unable to develop on their own. In exchange, NASA gains access to low-cost data sets that can supplement information gathered through its larger science missions.

    Tomorrow.io was among eight companies selected by NASA’s Commercial SmallSat Data Acquisition (CSDA) program in September 2024 to equip NASA with data that will help improve weather forecasting models. “It really is a success story of technology transfer. It’s that sweet spot, where the government partners with tech companies to really take an idea, a proven concept, and run with it,” Splaingard said.

    By Gage Taylor

    NASA’s Goddard Space Flight Center, Greenbelt, Md.


  • NASA’s Webb Space Telescope Reveals Secrets of Interstellar Comet 3I/ATLAS


Date: September 2, 2025
Source: NASA
Summary: Webb, Hubble, and SPHEREx are joining forces to study the interstellar comet 3I/ATLAS, revealing details about its structure and chemistry. The comet isn’t dangerous, but it’s offering scientists a rare chance to explore material from outside our solar system.


NASA’s James Webb Space Telescope observed interstellar comet 3I/ATLAS on Aug. 6 with its Near-Infrared Spectrograph instrument. The research team has been analyzing insights from Webb’s data, and a preprint is available online. Webb is one of several NASA space telescopes observing this comet: NASA’s Hubble Space Telescope and the recently launched SPHEREx mission have also observed it, and together they are providing more information about its size, physical properties, and chemical makeup. While the comet poses no threat to Earth, these observations support the agency’s ongoing mission to find, track, and better understand solar system objects.


    Story Source:

    Materials provided by NASA. Note: Content may be edited for style and length.


    Journal Reference:

    1. Martin A. Cordiner, Nathaniel X. Roth, Michael S. P. Kelley, Dennis Bodewits, Steven B. Charnley, Maria N. Drozdovskaya, Davide Farnocchia, Marco Micheli, Stefanie N. Milam, Cyrielle Opitom, Megan E. Schwamb, Cristina A. Thomas. JWST detection of a carbon dioxide dominated gas coma surrounding interstellar object 3I/ATLAS. arXiv, 29 Aug 2025 DOI: 10.48550/arXiv.2508.18209



  • Supernova theory links an exploding star to global cooling and human evolution


    What’s the link between an exploding star, climate change and human evolution? Francis Thackeray, who has researched ancient environments and fossils for many years, sets out his ideas about what happened in the distant past – with enormous consequences.

    Global cooling that happened millions of years ago was thought to be the result of ocean currents. He suggests instead it could have been due to the impacts of remnants of supernovae. The timing of supernovae, climate changes and species evolution coincides.

    What is your supernova hypothesis?

    My hypothesis is that remnants of a supernova – an exploding star – had an impact on the Earth’s past climate, causing global cooling, between 3 million and 2.6 million years ago and that this indirectly affected the evolution of hominins (ancient relatives of humans).

    How does this change assumptions held until now?

    It has been considered by some that global cooling in the Plio-Pleistocene might have been due to changes in ocean currents. This may well be correct to some extent, but I think that the supernova hypothesis needs to be explored.

    It’s super-exciting to think that our evolution may to some extent be associated with supernovae as part of our dynamic universe.

    How did you come to your supernova hypothesis?

Supernovae are the explosions of stars that are extremely massive (at least around eight times the mass of our Sun) and have reached the end of their stellar evolution. These explosions are rare. On average, within our galaxy (the Milky Way), only one or two per century are visible from Earth as temporary bright stars.

As a result of such explosions, material is expelled into outer space at enormous speeds, and some particles are accelerated to nearly the speed of light. Chemical elements are formed, including a radioactive isotope of iron (the element Fe) known as Fe-60. It has 26 protons and 34 neutrons.

    Traces of Fe-60 iron isotopes from supernovae within the last ten million years have been discovered on Earth in marine deposits such as those drilled in cores in the east Indian Ocean.






    The deep-sea deposits with Fe-60 can be dated using radioactive elements which decay at a known rate. This is called radiometric dating.
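The arithmetic behind radiometric dating is exponential decay: the age follows from the fraction of the original isotope that survives. The sketch below uses Fe-60's half-life of roughly 2.62 million years; the surviving fractions are illustrative only, and in practice deep-sea deposit ages are often established with other isotopes as well.

```python
import math

# Radiometric-dating sketch: recover a deposit's age from the fraction of the
# original Fe-60 still present. Fe-60's half-life is about 2.62 million years.
HALF_LIFE_MYR = 2.62

def age_from_fraction(surviving_fraction: float) -> float:
    """Age in Myr implied by the surviving fraction: t = -t_half * log2(f)."""
    return -HALF_LIFE_MYR * math.log2(surviving_fraction)

print(round(age_from_fraction(0.5), 2))   # 2.62 Myr (one half-life has passed)
print(round(age_from_fraction(0.25), 2))  # 5.24 Myr (two half-lives)
```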

There was a regular increase in extremely small traces of Fe-60 over the period between 3 million and 2.6 million years ago. We know this from data published by Anton Wallner and his colleagues. Since this is a linear trend, I have been able to extrapolate back to 3.3 million years ago, when the initial cosmic rays may have first hit Earth. I have proposed in Quest magazine that this initial cosmic impact correlates with a major glaciation (cooling) event, called M2, in an otherwise warm period.
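The extrapolation described here is ordinary line-fitting: fit a straight line to the rising Fe-60 trend and solve for the age at which it reaches zero. The abundance values below are invented purely to illustrate the method; they are not Wallner's actual measurements.

```python
# Linear extrapolation sketch: take (hypothetical) Fe-60 abundances at the ends
# of the 3.0-2.6 Myr interval, fit the line through them, and find where the
# trend hits zero, i.e. the implied onset of the cosmic-ray influx.

def onset_age_myr(age1, fe1, age2, fe2):
    """Age (Myr ago) at which the straight line through two points reaches zero."""
    slope = (fe2 - fe1) / (age2 - age1)
    intercept = fe1 - slope * age1
    return -intercept / slope

# Hypothetical points: 0.3 units at 3.0 Myr ago rising to 0.7 units at 2.6 Myr ago.
print(round(onset_age_myr(3.0, 0.3, 2.6, 0.7), 1))  # 3.3 -> onset at ~3.3 Myr ago
```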

    A “near earth” supernova could have produced cosmic rays (radiation from outer space) which might have caused a reduction in the earth’s ozone layer. Increased cloud cover associated with cosmic radiation could have been a factor related to changes in global climate. Specifically, the change would have been global cooling.

    This cooling would have affected the distribution and abundance of plant species, in turn affecting that of animals dependent on such vegetation.

    What potential new insights does the hypothesis give us into human evolution?

    Populations of Australopithecus may have been indirectly affected by the decrease in temperature.

    Australopithecus is the genus name for distant human relatives which lived in Africa in geological periods called the Pliocene and Pleistocene. The boundary between these time intervals is 2.58 million years ago. At that time, certain species went extinct. The period coincides closely with the maximum of Fe-60 in marine deposits and a change in Earth’s magnetic field.

    Australopithecus africanus: cast of Taung child.
    Wikimedia Commons, CC BY-SA

    The first fossil of Australopithecus to be described, 100 years ago, was placed by the palaeontologist Raymond Dart in a species called A. africanus. Dubbed the “Taung Child”, it was discovered in South Africa. Its biochronological age, recently based on mathematical analyses of tooth dimensions, is about 2.6 million years – at the Plio-Pleistocene boundary.

    It cannot be concluded that the death of the Taung Child was directly caused by a supernova. This would be far-fetched. There is in fact evidence that this individual, about 3 years old, was killed by an eagle.

    However, it is plausible to suggest that in Africa, in the Plio-Pleistocene, populations of Australopithecus were affected by a decrease in temperature affecting the distribution and abundance of vegetation and the animals dependent on it.

    Recently, a new species of Australopithecus (as yet not named, from Ledi-Geraru) has been discovered in Ethiopia, in deposits dated at about 2.6 million years ago – also the time of the maximum in Fe-60 in deep-sea deposits.

    The appearance of the genus Homo is close to the Plio-Pleistocene boundary, reflected by fossils reported recently by Brian Villmoare and his colleagues and well dated at about 2.8 million years ago. The origin of Homo may relate to changes in temperature and associated changes in habitat, as recognised five decades ago by South African palaeontologists Elisabeth Vrba and Bob Brain, although they emphasised a date of 2.5 million years ago.

    Is it possible that cosmic radiation stimulated genetic changes?

    I have been told by my peers that I am inclined to think “out of the box”. Well, in this case I would like to propose a “hominoid mutation hypothesis”. The hypothesis states that the speciation of hominoids (including human ancestors and those of chimpanzees and gorillas) was to some extent associated with mutations and genetic variability caused by cosmic rays.

    It is interesting to consider the possibility that the origin of our genus Homo relates in part to cosmic radiation. Going deeper back in time, Henrik Svensmark has demonstrated that there is a correlation between supernova frequency and speciation (increased biodiversity associated with the evolution of new species), for the last 500 million years (the Phanerozoic period). I think it’s entirely possible that one important cause behind this correlation was the mutagenic (mutation-causing) effect of cosmic rays on DNA, such that rates of speciation exceeded those of extinction.






    In hominoids, cosmic rays could have contributed not only to global cooling but also to genetic changes, with subsequent anatomical (morphological) changes related to speciation.

    If we go back to about 7 million years ago (when Fe-60 again reflects supernova activity), we would expect to find fossils that are close to a common ancestor for chimpanzees and humans. In terms of the hominoid mutation hypothesis, the split could have been associated with cosmic radiation. One hominoid species about 7 million years old is Sahelanthropus (discovered by Michel Brunet in Chad). In my opinion this species is very close to the common ancestor for Homo sapiens (us) and chimps.


  • 60% of Earth’s Land Now Outside “Safe Zone”


    A new global study warns that most of Earth’s land is already beyond safe ecological limits, with far-reaching consequences for climate, ecosystems, and humanity’s future. Credit: Stock

    Sixty percent of global land is outside safe biosphere limits, with human use of biomass driving widespread ecological strain.

    A new study provides the first detailed mapping of the planetary boundary known as “functional biosphere integrity,” tracing its status across centuries and in specific regions. The analysis shows that 60 percent of the world’s land surface has already moved beyond the safe operating range, with 38 percent falling into the category of high risk. The research was conducted by the Potsdam Institute for Climate Impact Research (PIK) in collaboration with BOKU University in Vienna and published in the journal One Earth.

    Functional biosphere integrity describes the capacity of the plant world to help regulate the stability of the Earth system. To do so, vegetation must generate sufficient energy through photosynthesis to sustain the circulation of carbon, water, and nitrogen that underpins ecosystems and their interconnected processes, even under conditions of intense human disturbance.

    Alongside biodiversity loss and climate change, this measure of integrity forms one of the central elements of the Planetary Boundaries framework, which defines the conditions necessary for a safe operating space for humanity.

    “There is an enormous need for civilization to utilize the biosphere – for food, raw materials and, in the future, also for climate protection,” says Fabian Stenzel, lead author of the study and member of the PIK research group Terrestrial Safe Operating Space. “After all, human demand for biomass continues to grow – and on top of that, the cultivation of fast-growing grasses or trees for producing bioenergy with carbon capture and storage is considered by many to be an important supporting strategy for stabilizing the climate. It is therefore becoming even more important to quantify the strain we’re already putting on the biosphere – in a regionally differentiated manner and over time – to identify overloads. Our research is paving the way for this.”

    Two indicators to measure the strain and the risk

    The research builds on the most recent update of the Planetary Boundaries framework, released in 2023. “The framework now squarely puts energy flows from photosynthesis in the world’s vegetation at the center of those processes that co-regulate planetary stability,” explained Wolfgang Lucht, head of PIK’s Earth System Analysis department and coordinator of the study. “These energy flows drive all of life – but humans are now diverting a sizeable fraction of them to their own purposes, disturbing nature’s dynamic processes.”

    The resulting strain on the Earth system can be assessed by examining how much of natural biomass productivity is redirected for human use—such as harvested crops, timber, and plant residues—as well as by the decline in photosynthetic activity caused by land conversion and soil sealing. To complement this, the study also introduced a second key measure of biosphere integrity: an indicator of ecosystem destabilization risk, which tracks large-scale structural shifts in vegetation along with imbalances in the water, carbon, and nitrogen cycles.

    Europe, Asia, and North America are particularly affected

    Based on the global biosphere model LPJmL, which simulates water, carbon, and nitrogen flows on a daily basis at a resolution of half a degree of longitude/latitude, the study provides a detailed inventory for each individual year since 1600, based on changes in climate and human land use.

    The research team not only computed, mapped, and compared the two indicators for functional integrity of the biosphere, but also evaluated them by conducting a mathematical comparison with other measures from the literature for which “critical thresholds” are known. This resulted in each area being assigned a status based on local tolerance limits of ecosystem change: Safe Operating Space, Zone of Increasing Risk, or High Risk Zone.

    The model calculation indicates that worrying developments began as early as 1600 in the mid-latitudes. By 1900, the proportion of global land area where ecosystem changes went beyond the locally defined safe zone, or were even in the high-risk zone, was 37 percent and 14 percent, respectively, compared to the 60 percent and 38 percent we see today. Industrialization was beginning to take its toll; land use affected the state of the Earth system much earlier than climate warming. At present, this biosphere boundary has been transgressed on almost all land surface – primarily in Europe, Asia, and North America – that underwent strong land cover conversion, mainly due to agriculture.

    PIK Director Rockström: Impetus for international climate policy

    “This first world map showing the overshoot of the boundary for functional integrity of the biosphere, depicting both human appropriation of biomass and ecological disruption, is a breakthrough from a scientific perspective, offering a better overall understanding of planetary boundaries,” says Johan Rockström, PIK Director and one of the co-authors of the study. “It also provides an important impetus for the further development of international climate policy. This is because it points to the link between biomass and natural carbon sinks, and how they can contribute to mitigating climate change. Governments must treat it as a single overarching issue: comprehensive biosphere protection together with strong climate action.”

    Reference: “Breaching planetary boundaries: Over half of global land area suffers critical losses in functional biosphere integrity” by Fabian Stenzel, Liad Ben Uri, Johanna Braun, Jannes Breier, Karlheinz Erb, Dieter Gerten, Helmut Haberl, Sarah Matej, Ron Milo, Sebastian Ostberg, Johan Rockström, Nicolas Roux, Sibyll Schaphoff and Wolfgang Lucht, 15 August 2025, One Earth.
    DOI: 10.1016/j.oneear.2025.101393



  • ‘Very rare’ pygmy sperm whale found dead on Devon beach


    Zhara SimpsonBBC News, South West

A tractor with a trailer at Bigbury Beach, a large whale tail hanging over its edge, with three people stood to the right. Credit: Sarah Porretta

    Devon Wildlife Trust said the rare pygmy sperm whale was found dead at Bigbury

    A “very rare” species of whale has been found dead on a beach, a wildlife trust has said.

    Devon Wildlife Trust (DWT) said the pygmy sperm whale, which was about 11ft-13ft (3.5m-4m) long, was found dead at Bigbury on Sea on Monday.

The trust said the species was rarely seen alive in UK waters and there had been “only a handful” of strandings since records by the Cetacean Strandings Investigation Programme (CSIP) started in the 1990s.

    Tom Miller, British Divers Marine Life Rescue (BDMLR) mammal medic, said a local landowner and farmer helped lift the animal from the rocks with a tractor, on to a trailer and up to a secure location, where it would be collected by CSIP on Wednesday.

    The CSIP, based at Zoological Society of London, hoped to carry out an autopsy to find the cause of death of the “magnificent creature”, the trust said.

The trust said: “As their name suggests, pygmy sperm whales are much smaller than true sperm whales, only reaching a maximum size of around four metres.

“They have only been recognised as a species since 1966.

    “They are so rare, very little is known about them, but it is thought that they eat a variety of fish, octopus, crab and shrimp.”

A dead pygmy sperm whale between rocks on a sandy beach. It is dark grey and about four metres long. Credit: Sarah

    Marine engagement officer Coral Smith said it was incredibly sad to see

    It added: “They are usually found in warm, deep water but have been spotted in all temperate subtropical and tropical seas.”

    Marine engagement officer Coral Smith said: “This is a very rare stranding indeed, with only a handful occurring in UK waters since CSIP began recording.

    “Although incredibly sad to see such a magnificent and rare mammal, this incident highlighted the brilliant partnership working between BDMLR and Devon Wildlife Trust volunteers.

    “It shows the huge value that local people in local communities can play in marine citizen science and conservation.”

Sarah and her 11-year-old son Osian. Credit: Sarah

Sarah and her son Osian were surfing at Bigbury when they heard about the whale

    Sarah and her 11-year-old son Osian, from Dorset, were surfing at Bigbury when they were told about the whale stranding.

    “It was kind of wedged in some rock pools,” she said.

    “It was quite a calm atmosphere. The farmers turned up in their wellies and really calmly worked out how to get the whale out without any fuss.

“I think everybody was sad to see it but also fascinated to see this amazing creature… you would rather see them swimming free.”

Osian said: “It was sad to see a dead whale but I think it was also interesting.”

The farmer told them the whale weighed about 700kg, roughly the weight of a bull.


  • ‘World Models,’ an Old Idea in AI, Mount a Comeback


    The latest ambition of artificial intelligence research — particularly within the labs seeking “artificial general intelligence,” or AGI — is something called a world model: a representation of the environment that an AI carries around inside itself like a computational snow globe. The AI system can use this simplified representation to evaluate predictions and decisions before applying them to its real-world tasks. The deep learning luminaries Yann LeCun (of Meta), Demis Hassabis (of Google DeepMind) and Yoshua Bengio (of Mila, the Quebec Artificial Intelligence Institute) all believe world models are essential for building AI systems that are truly smart, scientific and safe.

The fields of psychology, robotics and machine learning have each been using some version of the concept for decades. You likely have a world model running inside your skull right now — it’s how you know not to step in front of a moving train without needing to run the experiment first.

So does this mean that AI researchers have finally found a core concept whose meaning everyone can agree upon? As a famous physicist once wrote: Surely you’re joking. A world model may sound straightforward — but as usual, no one can agree on the details. What gets represented in the model, and to what level of fidelity? Is it innate or learned, or some combination of both? And how do you detect that it’s even there at all?

It helps to know where the whole idea started. In 1943, a dozen years before the term “artificial intelligence” was coined, a 29-year-old Scottish psychologist named Kenneth Craik published an influential monograph in which he mused that “if the organism carries a ‘small-scale model’ of external reality … within its head, it is able to try out various alternatives, conclude which is the best of them … and in every way to react in a much fuller, safer, and more competent manner.” Craik’s notion of a mental model or simulation presaged the “cognitive revolution” that transformed psychology in the 1950s and still rules the cognitive sciences today. What’s more, it directly linked cognition with computation: Craik considered the “power to parallel or model external events” to be “the fundamental feature” of both “neural machinery” and “calculating machines.”

    The nascent field of artificial intelligence eagerly adopted the world-modeling approach. In the late 1960s, an AI system called SHRDLU wowed observers by using a rudimentary “block world” to answer commonsense questions about tabletop objects, like “Can a pyramid support a block?” But these handcrafted models couldn’t scale up to handle the complexity of more realistic settings. By the late 1980s, the AI and robotics pioneer Rodney Brooks had given up on world models completely, famously asserting that “the world is its own best model” and “explicit representations … simply get in the way.”

    It took the rise of machine learning, especially deep learning based on artificial neural networks, to breathe life back into Craik’s brainchild. Instead of relying on brittle hand-coded rules, deep neural networks could build up internal approximations of their training environments through trial and error and then use them to accomplish narrowly specified tasks, such as driving a virtual race car. In the past few years, as the large language models behind chatbots like ChatGPT began to demonstrate emergent capabilities that they weren’t explicitly trained for — like inferring movie titles from strings of emojis, or playing the board game Othello — world models provided a convenient explanation for the mystery. To prominent AI experts such as Geoffrey Hinton, Ilya Sutskever and Chris Olah, it was obvious: Buried somewhere deep within an LLM’s thicket of virtual neurons must lie “a small-scale model of external reality,” just as Craik imagined.

    The truth, at least so far as we know, is less impressive. Instead of world models, today’s generative AIs appear to learn “bags of heuristics”: scores of disconnected rules of thumb that can approximate responses to specific scenarios, but don’t cohere into a consistent whole. (Some may actually contradict each other.) It’s a lot like the parable of the blind men and the elephant, where each man only touches one part of the animal at a time and fails to apprehend its full form. One man feels the trunk and assumes the entire elephant is snakelike; another touches a leg and guesses it’s more like a tree; a third grasps the elephant’s tail and says it’s a rope. When researchers attempt to recover evidence of a world model from within an LLM — for example, a coherent computational representation of an Othello game board — they’re looking for the whole elephant. What they find instead is a bit of snake here, a chunk of tree there, and some rope.

    Of course, such heuristics are hardly worthless. LLMs can encode untold sackfuls of them within their trillions of parameters — and as the old saw goes, quantity has a quality all its own. That’s what makes it possible to train a language model to generate nearly perfect directions between any two points in Manhattan without learning a coherent world model of the entire street network in the process, as researchers from Harvard University and the Massachusetts Institute of Technology recently discovered.

    So if bits of snake, tree and rope can do the job, why bother with the elephant? In a word, robustness: When the researchers threw their Manhattan-navigating LLM a mild curveball by randomly blocking 1% of the streets, its performance cratered. If the AI had simply encoded a street map whose details were consistent — instead of an immensely complicated, corner-by-corner patchwork of conflicting best guesses — it could have easily rerouted around the obstructions.
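The rerouting claim is easy to illustrate: given a coherent map, an ordinary shortest-path search handles a blocked street automatically. The toy grid and blocked edge below are invented for illustration; they are not the researchers' actual Manhattan setup.

```python
from collections import deque

def shortest_path(n, blocked, start, goal):
    """Breadth-first search on an n x n grid, skipping blocked (cell, cell) edges."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < n and 0 <= nxt[1] < n and nxt not in seen
                    and ((x, y), nxt) not in blocked
                    and (nxt, (x, y)) not in blocked):
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route exists

# Block one "street" and the search simply routes around it.
route = shortest_path(3, {((0, 0), (0, 1))}, (0, 0), (0, 2))
print(route)
```

A patchwork of local heuristics has no such guarantee: each rule of thumb may silently assume the blocked street is still open.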

    Given the benefits that even simple world models can confer, it’s easy to understand why every large AI lab is desperate to develop them — and why academic researchers are increasingly interested in scrutinizing them, too. Robust and verifiable world models could uncover, if not the El Dorado of AGI, then at least a scientifically plausible tool for extinguishing AI hallucinations, enabling reliable reasoning, and increasing the interpretability of AI systems.

That’s the “what” and “why” of world models. The “how,” though, is still anyone’s guess. Google DeepMind and OpenAI are betting that with enough “multimodal” training data — like video, 3D simulations, and other input beyond mere text — a world model will spontaneously congeal within a neural network’s statistical soup. Meta’s LeCun, meanwhile, thinks that an entirely new (and non-generative) AI architecture will provide the necessary scaffolding. In the quest to build these computational snow globes, no one has a crystal ball — but the prize, for once, may just be worth the AGI hype.

  • Earth could be sitting in the centre of a giant cosmic void, according to astronomers

    It’s human to feel alarmed by the sheer emptiness of space.

    Now, astronomers from the University of Portsmouth in the UK suggest this unsettling vastness may be worse than we thought.

    They reckon Earth, our Solar System and even our entire Milky Way sit inside a mysterious giant hole.

    This void, they believe, may cause the cosmos to expand more quickly in our local environment than in other parts of the Universe. 

    We (the green dot) may be adrift in a giant cosmic void, with matter flowing away into denser regions beyond. Credit: Moritz Haslbauer and Zarija Lukic

    A solution to a cosmic problem

    The idea that Earth and the Milky Way are sitting in a void was proposed as a way of solving what’s known as the Hubble tension (or the Hubble crisis).

    This is a conundrum that has puzzled astronomers for years.

    The tension refers to the fact that the rate of the expansion of the Universe varies depending on where it’s measured.

    Illustration of the expansion of the Universe. Understanding more about this phenomenon could reveal clues as to how the Universe will end. Credit: Mark Garlick / Science Photo Library

    This discrepancy is a major problem for cosmologists, who need to know the expansion rate to accurately determine the Universe’s age.
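    The link between expansion rate and age can be seen in a back-of-the-envelope way: a rough age scale for the Universe is the Hubble time, t ≈ 1/H0. The sketch below uses illustrative round-number values for the two disputed H0 measurements, not the study's own calculation.

```python
# A rough age scale for the Universe is the Hubble time, t ~ 1/H0,
# which is why the disputed value of H0 feeds directly into age estimates.
KM_PER_MPC = 3.0857e19    # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16    # seconds in one billion years

def hubble_time_gyr(h0_km_s_per_mpc):
    """Convert H0 (km/s/Mpc) into the Hubble time in billions of years."""
    h0_per_sec = h0_km_s_per_mpc / KM_PER_MPC
    return 1.0 / h0_per_sec / SEC_PER_GYR

print(round(hubble_time_gyr(67.4), 1))  # early-Universe estimate -> 14.5
print(round(hubble_time_gyr(73.0), 1))  # local-distance-ladder estimate -> 13.4
```

    A difference of a few km/s/Mpc in H0 shifts this crude age scale by about a billion years, which is why the tension matters so much to cosmologists.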

    “A potential solution to this inconsistency is that our Galaxy is close to the centre of a large, local void,” explains Dr Indranil Banik, who proposed the idea at an astronomy conference in Durham in the UK.

    Banik presented data examining baryon acoustic oscillations (BAOs) – which roughly translate as the ‘sound’ of the Big Bang.

    The findings showed that a void model is about 100 million times more likely than a void-free model.
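    As a rough way to read that figure — an illustration of the arithmetic, not the study's actual statistics — a "100 million times more likely" comparison corresponds to a Bayes factor of about 1e8, which on the log-likelihood scale commonly used in cosmological model fitting is a gap of only about 18.

```python
import math

# "About 100 million times more likely" is a Bayes factor of roughly 1e8.
# On the natural-log scale, that is a log-likelihood gap of about 18.
bayes_factor = 1e8
delta_ln_likelihood = math.log(bayes_factor)
print(round(delta_ln_likelihood, 1))  # 18.4
```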

    Hubble tension is the name given to differences in measurements of the rate of the expansion of the Universe. Credit: Mark Garlick / Science Photo Library / Getty Images

    Could the theory really be correct?

    For this theory to hold, Earth and our local galactic neighbourhood would need to lie near the centre of a void around a billion lightyears in radius, with a density about 20% lower than the Universe’s average. 

    But the idea of voids is controversial within the cosmology community.

    The standard cosmological model instead suggests that matter should be more uniformly spread throughout the Universe.

    While the answer hasn’t been definitively found, evidence for the void model could help cosmologists advance our understanding of the Universe’s structure.

    This article appeared in the September 2025 issue of BBC Sky at Night Magazine

  • Humans inherited Neanderthal genes that limit our muscle activity

    Most of us carry a small trace of Neanderthal ancestry and, in some cases, that legacy sits in our legs. A single change in a muscle enzyme can subtly throttle how hard muscles can work under pressure.

    People outside Africa typically carry about 2 percent Neanderthal DNA in their genomes, a result of ancient interbreeding between populations. That shared history still influences traits today, including how our muscles manage energy during all-out effort.

    What the enzyme variant does


    In the new study, lead author Dominik Macak from the Max Planck Institute for Evolutionary Anthropology (MPI EVA) and colleagues focused on AMPD1, an enzyme that helps skeletal muscle recycle energy-rich molecules during effort.

    The researchers show that Neanderthals carried a version of AMPD1 with lower activity than the typical modern human form.

    The team expressed both versions of the enzyme in cells and measured activity in a controlled setup. The Neanderthal version showed about a quarter less activity in test tubes, and when the change was engineered into mice, total AMPD activity measured in leg muscle extracts dropped sharply.

    How the muscle enzyme reached modern humans

    The variant appears in all sequenced Neanderthals and is absent in other primates, which points to a change specific to that lineage. Some modern humans carry the Neanderthal-derived form because of archaic introgression, the movement of DNA across populations through interbreeding.

    Today, the gene for this variant enzyme is found most often in Europe and Western Asia at modest frequencies. The pattern is consistent with gene flow into early modern humans who met Neanderthals around 50,000 years ago, interbred and then spread across Eurasia.

    From bench to muscle

    The lab assays matter because AMPD1 sits in a critical energy pathway known as the purine nucleotide cycle. When muscles need ATP (the molecule that provides power for cellular activities in all living organisms) in a hurry, AMPD1 helps pull the chemical levers that keep ATP production humming.

    In practice, the variant’s impact shows up most clearly when muscles are pushed. Mouse muscle carrying the engineered change showed large decreases in measured AMPD activity in extracts. In addition, prior case reports hint at reduced enzyme activity in rare human carriers with combined AMPD1 defects.

    Neanderthal muscle enzyme and sport

    The research also looked at athletic outcomes by using a well-known human knockout allele of AMPD1 as a stand-in for reduced enzyme function. That analysis covered more than a thousand elite athletes across endurance and power disciplines.

    “Carrying one dysfunctional AMPD1 allele confers approximately a 50 percent lower probability of achieving elite athletic performance,” wrote Macak. The sentence sums up where the enzyme matters most, at the razor’s edge where physiology meets peak performance.
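    To illustrate how a probability shift like that can be read off allele frequencies, here is a minimal sketch. The counts below are hypothetical round numbers chosen for clarity, not the study's data.

```python
# Hypothetical counts for illustration only (not the study's data):
# how often a low-activity allele turns up in elite athletes vs. controls.
carriers_athletes, total_athletes = 60, 1000
carriers_controls, total_controls = 120, 1000

f_ath = carriers_athletes / total_athletes    # 0.06
f_ctl = carriers_controls / total_controls    # 0.12

# Bayes' rule turns the two carrier frequencies into the relative
# probability of reaching elite status for carriers vs. non-carriers.
relative_prob = (f_ath / f_ctl) * ((1 - f_ctl) / (1 - f_ath))
print(round(relative_prob, 2))  # 0.47, i.e. roughly a 50% lower probability
```

    In other words, if an allele is half as common among elite athletes as in the general population, carriers have roughly half the chance of reaching that tier.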

    Health signals to watch

    Reduced AMPD1 activity is common in clinical genetics, yet many carriers feel fine most of the time. The clinical picture of myoadenylate deaminase deficiency (MAD) ranges from exercise-induced cramps and early fatigue to no symptoms, a pattern known as incomplete penetrance.

    Large data resources add nuance. Biobank analyses suggest a small increase in risk for varicose veins among people with AMPD1 variants that reduce activity, although replication across cohorts is mixed and the absolute risk increase is modest.

    Why the Neanderthal muscle enzyme stuck around

    If reduced enzyme activity can hinder elite performance, why did the variant enzyme persist? One likely factor is relaxed purifying selection, which occurs when a gene becomes less crucial for day-to-day survival across a population.

    Another possibility is that culture and technology reduced the constant demand for extreme muscle output. If everyday life did not require maximum sprint power or heavy loads, then a small energetic inefficiency would be tolerated.

    What counts as a meaningful effect

    The findings do not imply that someone with the variant cannot excel at sports or live a healthy life. Most carriers have no obvious health problems, and plenty of other genes and training factors shape performance.

    Still, the enzyme’s role appears during stress. When energy turnover spikes, AMPD1 helps buffer the system, and slightly less activity can tip the balance in high stakes settings like championship level competition.

    A closer look at enzyme chemistry

    To keep terms clear, an enzyme is a protein catalyst that speeds up chemical reactions in cells. Purine molecules are key building blocks for DNA and RNA, and they also form ATP, the energy currency that pays for muscle contractions.

    An allele is one version of a gene among alternatives, and the Neanderthal-derived allele in AMPD1 swaps one amino acid in a position that helps the enzyme’s subunits stick together. That subtle change lowers catalytic efficiency without removing the protein entirely.

    A bigger shift in energy chemistry

    This is not the first sign that energy metabolism took a different path in humans when compared to other primates. Earlier work found that modern humans carry a unique change in another enzyme, ADSL, which tunes the same pathway and is linked to lower purine levels in key tissues, especially the brain.

    Together, these threads suggest that parts of our energy machinery became less dependent on certain purine reactions over evolutionary time. The Neanderthal AMPD1 story adds a muscle-specific chapter and ties it directly to present day physiology.

    Where this leaves us

    The signal here is not alarm but perspective. Daily life proceeds as usual for almost everyone who carries the muscle enzyme variant. However, an interbreeding event some 50,000 years ago still leaves a fingerprint on who is more likely to reach the top tier of sport.

    This work also emphasizes why population history matters in medicine and performance science. Small shifts in enzyme activity, inherited across tens of thousands of years, can still modulate outcomes when humans are pushed to the limit.

    The study is published in Nature Communications.

    —–

    Like what you read? Subscribe to our newsletter for engaging articles, exclusive content, and the latest updates. 

    Check us out on EarthSnap, a free app brought to you by Eric Ralls and Earth.com.

    —–

  • Scientists Uncover Hidden Megathrust That Could Trigger Massive Earthquakes

    Scientists have captured the first detailed images of the Queen Charlotte fault system offshore Haida Gwaii, revealing that the region has the potential to unleash powerful megathrust earthquakes. Credit: SciTechDaily.com

    Scientists used advanced hydrophone technology to image the Queen Charlotte fault, confirming its potential for destructive megathrust earthquakes.

    New research on the Queen Charlotte fault system has produced the first images of its subsurface structure off the coast of Haida Gwaii, confirming that northern British Columbia is capable of generating megathrust earthquakes.

    These types of earthquakes occur where one tectonic plate is forced beneath another—in this case, the Pacific plate being driven under the North American plate—and they are known for producing both intense shaking and tsunamis.

    Advanced hydrophone technology

    An international team of scientists from American and Canadian institutions, including Dalhousie University, collected the data using a 15-kilometre-long hydrophone streamer. This instrument, equipped with thousands of underwater microphones, was towed through the region to capture seismic signals and map the deep structure of Earth’s crust.

    Map Showing Queen Charlotte Fault and Surrounding Tectonic Plates
    Map of the study area, showing the location of the Queen Charlotte Fault (QCF) in relation to the Pacific (PAC), North America (NA), Yakutat (YAK), Explorer (EXP) and Juan De Fuca (JdF) tectonic plates. Credit: 10.1126/sciadv.adt3003

    The findings, published in Science Advances, present the first definitive evidence that the Pacific plate is beginning to collide with and subduct beneath the North American plate in the Haida Gwaii area. In practical terms, this means the region has the potential to generate earthquakes capable of both strong ground shaking and destructive tsunamis.

    In fact, the Queen Charlotte fault system represents the greatest seismic hazard in Canada, producing the country’s largest recorded earthquake in 1949.

    “This region is actively becoming a subduction zone, so understanding the fault structure here tells us about the early stages of subduction zone development,” says lead author Collin Brandl, a postdoctoral research scientist at the Lamont-Doherty Earth Observatory, part of the Columbia Climate School.

    “Our study provides the first direct observations of the Haida Gwaii thrust, the ‘megathrust’ of this system, which can help improve hazard analysis in the region, better preparing residents for future earthquakes and tsunamis.”

    Reference: “Seismic imaging reveals a strain-partitioned sliver and nascent megathrust at an incipient subduction zone in the northeast Pacific” by Collin C. Brandl, Lindsay L. Worthington, Emily C. Roland, Maureen A. L. Walton, Mladen R. Nedimović, Andrew C. Gase, Olumide Adedeji, Jose Castillo Castellanos, Benjamin J. Phrampus, Michael G. Bostock, Kelin Wang and Sarah Jaye Oliva, 18 July 2025, Science Advances.
    DOI: 10.1126/sciadv.adt3003

    Never miss a breakthrough: Join the SciTechDaily newsletter.

  • Earth’s violent birth: What it takes to make a living world

    Earth today is teeming with life. We have oceans, breathable air, and the perfect combination of chemical ingredients necessary for living organisms to thrive. But when Earth first started forming, it lacked some of the most fundamental elements required for life.

    So how did our world transition from being barren and inhospitable to what it is today?


    A team of scientists just found new clues that show Earth’s original mix of elements was complete surprisingly early – only a short time after the solar system came together.

    Formation of the solar system

    When the solar system began to form billions of years ago, it emerged from a gigantic cloud of gas and dust. This cloud contained important elements such as hydrogen, carbon, and sulfur – chemicals essential for life.

    Conditions were not uniform across the forming solar system, though. The inner zone, the region nearest the Sun, was extremely hot.

    Due to this heat, most of the life-critical components never condensed into solid form. Instead, they remained in the form of gas and didn’t persist long enough to become part of the rocky material that formed the tiny inner worlds such as Mercury, Venus, Earth, and Mars.

    As a result, early Earth was built mostly from dry, rocky stuff. It missed out on a lot of the “wet” ingredients that came from the cooler, outer parts of the solar system.

    The puzzle of life on Earth

    Scientists have long wondered when Earth picked up the materials that would one day allow life to appear. If the inner solar system didn’t have them, then they had to come from somewhere else. And if they came later, when exactly did that happen?

    That’s what scientists at the University of Bern’s Institute of Geological Sciences wanted to know. They analyzed rocks from ancient Earth and meteorites, using radioactive isotopes to calculate time with astonishing accuracy.

    “A high-precision time measurement system based on the radioactive decay of manganese-53 was used to determine the precise age. This isotope was present in the early solar system and decayed to chromium-53 with a half-life of around 3.8 million years,” said Dr. Pascal Kruttasch, who led the study.

    The team’s method allowed them to measure ages with less than a million years of error – even on materials that are billions of years old.
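    The quoted half-life is enough to sketch the basic arithmetic of radiometric clocks. The snippet below uses the textbook decay law, a simplification of the manganese-chromium isochron methods such studies actually rely on.

```python
import math

# Textbook decay law: with half-life T, the fraction of a radioactive
# isotope surviving after time t is f = (1/2)**(t / T).
HALF_LIFE_MYR = 3.8  # manganese-53 -> chromium-53, in millions of years

def age_from_fraction(surviving_fraction):
    """Elapsed time (Myr) given the fraction of Mn-53 still undecayed."""
    return -HALF_LIFE_MYR * math.log2(surviving_fraction)

print(round(age_from_fraction(0.5), 1))   # one half-life  -> 3.8
print(round(age_from_fraction(0.25), 1))  # two half-lives -> 7.6
```

    Because the half-life is only 3.8 million years, almost all the original Mn-53 is long gone; in practice the clock is read from excess chromium-53 in ancient minerals, which is what allows sub-million-year precision on billion-year-old samples.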

    “These measurements were only possible because the University of Bern has internationally recognized expertise and infrastructure for the analysis of extraterrestrial materials and is a leader in the field of isotope geochemistry,” noted Klaus Mezger, co-author of the study.

    Earth’s chemistry was locked in fast

    The team found that Earth’s chemical signature – the unique mix of elements that made up the young planet – was complete in less than 3 million years after the solar system formed.

    “Our solar system formed around 4,568 million years ago. Considering that it only took up to 3 million years to determine the chemical properties of Earth, this is surprisingly fast,” said Kruttasch.

    “Thanks to our results, we know that the proto-Earth was initially a dry rocky planet. It can therefore be assumed that it was only the collision with Theia that brought volatile elements to Earth and ultimately made life possible there,” explained Kruttasch.

    The collision that changed everything

    Scientists have long believed that Earth was hit by a planet-sized object called Theia early in its history. This impact is also what likely created the Moon.

    However, this study adds something new: evidence that Theia may have delivered the materials that made Earth capable of supporting life.

    Theia likely formed farther from the Sun, where cooler temperatures allowed water and other volatiles to collect. When it slammed into Earth, it didn’t just shake things up – it may have delivered the very elements we needed to build oceans, an atmosphere, and the chemistry of life.

    “The Earth does not owe its current life-friendliness to a continuous development, but probably to a chance event – the late impact of a foreign, water-rich body. This makes it clear that life-friendliness in the universe is anything but a matter of course,” said Mezger.

    What this means for other planets

    If Earth only became habitable thanks to a lucky collision, that has big implications for other planets – both in our solar system and beyond.

    Even if a rocky planet forms in the right zone around its star, it might not be enough. The timing and location of volatile delivery, plus the exact kind of collision, may all play a role. And those things don’t happen everywhere.

    It’s possible that many planets stay dry forever. Others might get hit too hard or too often. Earth’s path may not be typical – it may be one of the rare cases where the right ingredients arrived at the right time, in just the right way.

    Understanding Earth’s massive collision

    We still don’t fully understand what happened during that massive collision between proto-Earth and Theia. Kruttasch and his team want to explore the event further.

    “So far, this collision event is insufficiently understood. Models are needed that can fully explain not only the physical properties of the Earth and moon, but also their chemical composition and isotope signatures,” Kruttasch said.

    In other words, scientists still need to untangle the chemistry of Earth and its satellite. The Moon and Earth share strikingly similar chemical fingerprints – a mystery that challenges the idea of a foreign body like Theia delivering the missing ingredients for life.

    If Theia really formed farther from the Sun, where water and volatile elements were abundant, why don’t those differences show up more clearly in the Moon’s composition?

    Future research could help answer that question – and may also help us figure out how common this kind of planet-forming “recipe” is in the universe.

    The full study was published in the journal Science Advances.
