Figure: GC content, total number, and frequencies of potential G-quadruplex-forming sequences (PQS) in the H. volcanii genome. (A) Schematic of a G-quartet (left) and a G-quadruplex (right). (B) Total PQS counts, GC percentage, and PQS frequency for the main chromosome and mini-chromosomes. (C, D) G4 prediction in H. volcanii’s promoters: number and localization relative to the TSS. (Source: biorxiv.org)
Archaea, a domain of microorganisms found in diverse environments including the human microbiome, represent the closest known prokaryotic relatives of eukaryotes.
This phylogenetic proximity positions them as a relevant model for investigating the evolutionary origins of nucleic acid secondary structures such as G-quadruplexes (G4s), which play regulatory roles in transcription and replication. Although G4s have been extensively studied in eukaryotes, their presence and function in archaea remain poorly characterized.
In this study, a genome-wide analysis of the halophilic archaeon Haloferax volcanii identified over 5,800 potential G4-forming sequences. Biophysical validation confirmed that many of these sequences adopt stable G4 conformations in vitro. Using G4-specific detection tools and super-resolution microscopy, G4 structures were visualized in vivo in both DNA and RNA across multiple growth phases.
Comparable findings were observed in the thermophilic archaeon Thermococcus barophilus. Functional analysis using helicase-deficient H. volcanii strains further identified candidate enzymes involved in G4 resolution. These results establish H. volcanii as a tractable archaeal model for G4 biology.
Archaeal G-Quadruplexes: A Novel Model for Understanding Unusual DNA/RNA Structures Across the Tree of Life
Linguists have long known that when cultures collide, languages rub off on one another. We borrow words, swap sounds, and even reshape grammar. But charting those exchanges across centuries and continents is hard when the written record is patchy or nonexistent.
A new study flips the problem on its head: instead of starting with history books, the researchers read our DNA to reconstruct past human contact – then asked what happened to the languages.
DNA traces human language history
A team led by Anna Graff at the University of Zurich pulled together genetic data from more than 4,700 people in 558 populations. They paired this with two of the world’s largest linguistic databases, which catalogue everything from word order to consonant inventories across thousands of languages.
Genetic “admixture” – the telltale signature of populations mixing – provides a reliable, globally comparable record of contact. With that proxy in hand, the team could identify more than 125 instances where groups clearly met and mingled, even if historians never wrote the encounters down.
What they saw was unambiguous: when people mix, their languages tend to move closer together. Unrelated languages spoken by populations with genetic contact were four to nine percent more likely to share features than expected.
That might sound modest, but across entire grammars and sound systems, it’s a strong, consistent nudge toward similarity.
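How do you put a number like “four to nine percent” on feature sharing? The paper has its own statistical machinery, but the underlying logic resembles a permutation test: compare the feature similarity of language pairs whose speakers show genetic admixture against randomly drawn pairs of unrelated languages. The sketch below is only illustrative of that logic, not the authors’ pipeline; all names and data structures are assumptions.

```python
import random

def similarity(a, b):
    """Simple matching coefficient: fraction of binary structural
    features (word order, consonant inventory, etc.) on which two
    languages agree."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def contact_excess(features, contact_pairs, all_pairs, n_perm=10_000, seed=0):
    """Mean similarity of genetically-in-contact language pairs minus the
    mean over random pair sets of the same size, with a permutation p-value.

    features: dict mapping language name -> list of 0/1 features
    contact_pairs: pairs whose speaker populations show admixture
    all_pairs: the full pool of comparable, unrelated language pairs
    """
    rng = random.Random(seed)
    def mean_sim(pairs):
        return sum(similarity(features[a], features[b]) for a, b in pairs) / len(pairs)
    obs = mean_sim(contact_pairs)
    pool = list(all_pairs)
    null = [mean_sim(rng.sample(pool, len(contact_pairs))) for _ in range(n_perm)]
    p = sum(v >= obs for v in null) / n_perm
    return obs - sum(null) / n_perm, p
```

A positive excess with a small p-value would indicate that contact pairs share more features than chance predicts, which is the shape of the study’s headline result.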
One surprise was how steady that effect proved to be. Whether the contact was vast and recent – say, colonial movements between continents – or ancient and regional, like Neolithic migrations within Eurasia, the degree of linguistic convergence was strikingly similar.
“No matter where in the world populations come into contact, their languages become more alike to remarkably consistent extents,” said senior author Chiara Barbieri, now at the University of Cagliari.
That consistency cuts against a common assumption that only intense, prolonged encounters reshape grammar and sound systems, or that small-scale neighborly contact leaves barely a trace.
The genetic lens suggests languages are sensitive instruments, registering social touchpoints across the full spectrum – from trade and intermarriage to conquest and diaspora.
Not all grammar is transferable
Of course, not every part of a language is equally malleable. Some features seem to travel more readily. Word order patterns and certain consonant sounds were more likely to converge than deeper layers of morphology or prosody.
But here, too, the study counsels caution. The researchers didn’t find a one-size-fits-all rulebook that says, for example, “Nouns move; verb endings don’t.”
That challenges decades of “borrowability hierarchies” that rank which features can be shared across languages. Instead, the authors argue, social dynamics – prestige, power, identity, and the practicalities of multilingual life – can override structural inertia.
If a community prizes a dominant group’s speech, it may adopt conspicuous elements quickly; if it resists assimilation, the most “borrowable” features might barely budge.
You can see the everyday version of this push and pull in familiar loanwords. English picked up “sausage” from French after the Norman Conquest; centuries later, French borrowed “sandwich” from English.
Vocabulary swaps like these are the visible tip. Beneath the surface, the new study shows, subtler shifts in sound and syntax can ripple outward whenever people share space and stories.
The team even found the mirror image of borrowing: divergence on purpose. In some places, features became less alike after contact, not more.
That happens when communities lean into linguistic differences to mark who they are – tightening vowel systems, restoring older word orders, or reinforcing local pronunciations as a badge of identity. In other words, contact doesn’t always melt boundaries; sometimes it sharpens them.
This duality – convergence alongside intentional distancing – is central to how languages evolve. It helps explain why some neighboring tongues grow steadily more alike, while others cling to distinctiveness despite centuries of cohabitation.
DNA tells story of language
Methodologically, the study’s move is elegant. Historical documents and oral traditions can be rich but patchy, and for many regions and eras they simply don’t exist.
Genes keep a different kind of ledger. When populations intermix, they leave a durable statistical imprint that persists for thousands of years.
By aligning those genetic signals with language structures, the authors could quantify contact and its linguistic consequences on a global canvas – from Amazonia and the Sahel to the Pacific and the Arctic.
The approach also helps disentangle coincidence from contact. Two unrelated languages might independently develop similar features for internal reasons. But when that similarity lines up with clear genetic admixture between the speakers, the balance of probability shifts toward historical interaction.
Languages evolve in real time
Beyond satisfying our curiosity about how English, Hausa, Quechua, or Hmong came to be what they are, the findings carry a warning for the present. Contact has always been part of human life, but the pace and scale are accelerating.
Globalization, urbanization, climate-driven displacement, and land-use change are bringing communities together – and pushing others apart – in new ways. As that churn intensifies, we should expect languages to converge more in some respects and to splinter or vanish in others.
It also reframes the stakes of language loss. We often focus on shrinking vocabularies and disappearing oral literatures.
This study suggests that deeper, structural layers – sound patterns, grammatical architectures, the hidden wiring of a language – can erode under sustained contact, even when a language survives on the surface.
Protecting linguistic diversity, then, isn’t only about counting how many languages remain; it’s about preserving the full range of their internal variety.
DNA and language share histories
The research tells a familiar human story with fresh evidence. When people meet – through trade or conquest, migration or marriage – we share more than DNA. We pass around technologies, beliefs, recipes, and ways of speaking.
Some of those exchanges are voluntary, others are imposed; some knit communities together, others spark efforts to stand apart. Our languages, like our DNA, carry the scars and gifts of those encounters.
By treating genes as a time machine, this study gives linguistics a new global baseline. It doesn’t replace fieldwork, historical scholarship, or typological theory.
It makes them sharper, pointing to where contact likely mattered and how deeply. And it leaves us with a clear takeaway for the century ahead: as our lives intertwine ever more tightly, the sounds and structures we use to make meaning will change with them.
The study is published in the journal Science Advances.
We stroll past wheat, clover, and grass and see only the green half of the story. The other half – the roots – do the heavy lifting: anchoring plants, pulling up water and nutrients, and locking away carbon in the soil.
Yet because roots are hidden, scientists have spent decades using muddy, labor-intensive methods to guess their size and spread, often missing the finest, most active threads. According to a team of researchers at Aarhus University, that guesswork can finally stop.
A high-tech root census
“It’s a bit like studying marine ecosystems without ever being able to dive,” said senior author Henrik Brinch-Pedersen, a professor in the Department of Agroecology.
Until now, the standard approach was to carve out big blocks of soil, wash away the dirt, sort and dry what remained, and weigh the roots. It’s slow, destructive, and misses the fine roots that absorb nutrients and release carbon to the soil.
The new approach swaps spades for droplet digital PCR (ddPCR), a DNA technology that partitions the DNA extracted from a teaspoon of soil into tens of thousands of microscopic droplets. Each droplet then answers a yes/no question: does it contain plant DNA with a specific genetic signature?
The team uses a marker called ITS2 – think of it as a barcode that differs among species – so a single run can reveal not just that roots are present, but whose roots they are. Crucially, it also shows how much underground biomass each species contributes.
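The arithmetic behind ddPCR quantification is standard Poisson statistics rather than anything bespoke to this study: a droplet reads negative only if it captured zero template molecules, so the average number of copies per droplet can be back-calculated from the fraction of negative droplets. A minimal sketch (the ~0.85 nL partition volume is a typical instrument value, an assumption here rather than a figure from the paper):

```python
import math

def ddpcr_copies_per_ul(positive, total, droplet_nl=0.85):
    """Poisson-corrected ddPCR quantification.

    A droplet is negative only if it received zero template molecules,
    so the mean copies per droplet is lambda = -ln(fraction negative).
    droplet_nl is the partition volume in nanoliters (instrument-specific).
    """
    if positive >= total:
        raise ValueError("all droplets positive: sample too concentrated")
    lam = -math.log(1.0 - positive / total)   # mean template copies per droplet
    return lam / droplet_nl * 1000.0          # copies per microliter of reaction

# e.g. 4,200 positive droplets out of 18,000:
print(f"{ddpcr_copies_per_ul(4200, 18000):.0f} copies/uL")  # ~313
```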
“It’s a bit like giving the soil a DNA test,” Brinch-Pedersen said. “We can suddenly see the hidden distribution of species and biomass without digging up the whole field.”
Mapping roots with precision
Because ddPCR counts DNA molecules across thousands of droplets, it can quantify roots that would otherwise be pulverized or rinsed away.
That makes it possible to map root communities at high resolution in living fields, pastures, and mixed-species grasslands and to repeat measurements over time without disturbing the site.
The payoff spans several fronts. For climate research, it lets scientists measure how much carbon different crops actually push belowground – data that’s been frustratingly hard to pin down but is essential for credible climate accounting in agriculture.
For plant breeding, the digital DNA method creates a path to select varieties that invest more in roots without sacrificing grain or forage aboveground.
And for biodiversity science, the technology finally illuminates the underground dynamics in species mixes – who’s competing, who’s complementing – insights that were “almost impossible before,” Brinch-Pedersen noted.
Roots matter for the climate
We tend to picture wind turbines and EVs when we think of climate solutions, but roots are a vast, quiet carbon pump.
As plants photosynthesize, some of the captured carbon flows belowground into roots and the surrounding soil. Depending on the crop, soil type, and management, a fraction of that carbon can persist for decades or even centuries.
Farmers and policymakers talk about “soil carbon sequestration,” but without precise measurement tools it’s been hard to document gains in ways that stand up to scrutiny. A rapid, species-resolved root assay changes that equation.
DNA test in soil
In practice, researchers collect small soil cores, extract DNA, and run ddPCR with species-specific probes keyed to the ITS2 barcode. The number of positive droplets scales with root biomass for that species in that sample.
Because the test targets DNA directly in soil, it captures fine roots and root fragments as well as thicker roots that are tough to wash and weigh.
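Turning droplet counts into grams of root is, in principle, a calibration exercise: spike soil with known amounts of root material, measure the ddPCR signal, and fit a curve. The numbers below are invented for illustration; the study’s actual standards and fitting procedure are in the paper.

```python
import numpy as np

# Hypothetical calibration: soil spiked with known root biomass (mg) of one
# species, and the resulting ITS2 copies/uL from that species' ddPCR probe.
root_mg   = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
copies_ul = np.array([40.0, 85.0, 160.0, 410.0, 820.0])

# Least-squares line through the calibration points (assumed linear range)
slope, intercept = np.polyfit(copies_ul, root_mg, 1)

def biomass_mg(copies):
    """Estimate root biomass in a field sample from its ddPCR reading."""
    return slope * copies + intercept

print(f"300 copies/uL -> ~{biomass_mg(300.0):.1f} mg of root")
```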
There are, however, limits. Close relatives can be tricky to tell apart when their barcodes are nearly identical – ryegrass and Italian ryegrass hybrids, for instance, can blur the signal.
And the method recognizes only what it’s trained to find. Researchers must validate a probe for each species, so expanding the “DNA library” is both the hurdle and the goal.
“For us, the most important thing is that we have shown it can be done,” Brinch-Pedersen said. “Our vision is to expand the library so we can measure many more species directly in soil samples.”
Rapid answers from soil
Speed is the other advantage. Traditional root studies hinge on days to weeks of field and lab work per site. The ddPCR method turns around results in hours, making it practical to scale from a few experimental plots to entire farms, seasons, and regions.
That opens the door to experiments that were previously unrealistic. Researchers can compare cover crops for their belowground carbon contributions across soil types.
They can track how drought shifts root allocation among species in a pasture. They can also screen hundreds of breeding lines for deeper, denser root systems.
Plants that work smarter
Brinch-Pedersen sees a straight line from measurement to design. If breeders can quantify underground investment as easily as they count kernels or measure protein, they can begin to select for crops that are not only high-yielding but also high-sequestering. These are plants that do more of the climate work for us.
The same logic applies to mixtures. With species-level root data, agronomists can compose plant communities that pack more carbon belowground while maintaining forage or grain output above.
The bigger picture is simple: half a plant lives out of sight, and that half shapes soil health, farm resilience, and the climate. With a “DNA test for dirt,” researchers can finally watch that hidden half at work – no shovels required.
The study is published in the journal Plant Physiology.
Most people recognize Alzheimer’s by its devastating symptoms, such as memory loss, while new drugs target pathological hallmarks of the disease, such as plaques of amyloid proteins. Now a sweeping new study by MIT researchers, published in the Sept. 4 issue of Cell, shows the importance of understanding the disease as a battle over how well brain cells control the expression of their genes. The study paints a high-resolution picture of a desperate struggle to maintain healthy gene expression and gene regulation, where the consequences of failure or success are nothing less than the loss or preservation of cell function and cognition.
The study presents a first-of-its-kind, multimodal atlas of combined gene expression and gene regulation spanning 3.5 million cells from six brain regions, obtained by profiling 384 post-mortem brain samples across 111 donors. The researchers profiled both the “transcriptome,” showing which genes are expressed into RNA, and the “epigenome,” the set of chromosomal modifications that establish which DNA regions are accessible, and thus used, in different cell types.
The resulting atlas revealed many insights and shows that the progression of Alzheimer’s is characterized by two major epigenomic trends. The first is that vulnerable cells in key brain regions suffer a breakdown of the rigorous nuclear “compartments” they normally maintain to ensure some parts of the genome are open for expression while others remain locked away. The second is that susceptible cells experience a loss of “epigenomic information,” meaning they lose their grip on the unique pattern of gene regulation and expression that gives them their specific identity and enables their healthy function.
Accompanying the evidence of compromised compartmentalization and the erosion of epigenomic information are many specific findings pinpointing molecular circuitry that breaks down by cell type, by region, and by gene network. The researchers found, for instance, that deteriorating epigenomic conditions open the door to the expression of many disease-associated genes, whereas cells that keep their epigenomic house in order can hold those genes in check. Moreover, where epigenomic breakdowns occurred, people lost cognitive ability, but where epigenomic stability remained, so did cognition.
“To understand the circuitry, the logic responsible for gene expression changes in Alzheimer’s disease, we needed to understand the regulation and upstream control of all the changes that are happening, and that’s where the epigenome comes in. This is the first large-scale single-cell multi-region gene-regulatory atlas of AD, systematically dissecting the dynamics of epigenomic and transcriptomic programs across disease progression and resilience.”
Manolis Kellis, senior author, professor in the Computer Science and Artificial Intelligence Lab and head of MIT’s Computational Biology Group
By providing that detailed examination of the epigenomic mechanisms of Alzheimer’s progression, the study provides a blueprint for devising new Alzheimer’s treatments that can target factors underlying the broad erosion of epigenomic control or the specific manifestations that affect key cell types such as neurons and supporting glial cells.
“The key to developing new and more effective treatments for Alzheimer’s disease depends on deepening our understanding of the mechanisms that contribute to the breakdowns of cellular and network function in the brain,” said Picower Professor and co-corresponding author Li-Huei Tsai, director of The Picower Institute for Learning and Memory and a founding member of MIT’s Aging Brain Initiative, along with Kellis. “This new data advances our understanding of how epigenomic factors drive disease.”
Kellis Lab members Zunpeng Liu and Shanshan Zhang are the study’s co-lead authors.
Compromised compartments and eroded information
Among the post-mortem brain samples in the study, 57 came from donors to the Religious Orders Study or the Rush Memory and Aging Project (collectively known as “ROSMAP”) who did not have AD pathology or symptoms, while 33 came from donors with early-stage pathology and 21 came from donors at a late stage. The samples therefore provided rich information about the symptoms and pathology each donor was experiencing before death.
In the new study, Liu and Zhang combined analyses of single-cell RNA sequencing of the samples, which measures which genes are being expressed in each cell, and ATAC-seq, which measures whether chromosomal regions are accessible for gene expression. Considered together, these transcriptomic and epigenomic measures enabled the researchers to understand the molecular details of how gene expression is regulated across seven broad classes of brain cells (e.g., neurons and glial cell types) and 67 cell subtypes (e.g., 17 kinds of excitatory neurons and six kinds of inhibitory ones).
The researchers annotated more than 1 million gene-regulatory control regions that different cells employ to establish their specific identities and functionality using epigenomic marking. Then, by comparing the cells from Alzheimer’s brains to the ones without, and accounting for stage of pathology and cognitive symptoms, they could produce rigorous associations between the erosion of these epigenomic markings and ultimately loss of function.
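The generic version of this kind of association between regulatory regions and genes is a correlation screen across cells: does accessibility at a peak rise and fall with the expression of a nearby gene? The sketch below shows only that generic idea; the paper’s actual pipeline is far more statistically careful (pseudobulking, distance filters, matched background peaks).

```python
import numpy as np

def peak_gene_links(atac, rna, peaks, genes, min_r=0.3):
    """Nominate regulatory peak-gene links by correlation.

    atac: (cells x peaks) accessibility matrix; rna: (cells x genes)
    expression matrix, both over the same cells. Returns candidate
    (peak, gene, r) links ranked by Pearson correlation.
    """
    # z-score columns so Pearson r reduces to a scaled dot product
    az = (atac - atac.mean(0)) / (atac.std(0) + 1e-9)
    rz = (rna - rna.mean(0)) / (rna.std(0) + 1e-9)
    r = az.T @ rz / atac.shape[0]              # (peaks x genes) Pearson r
    hits = [(peaks[i], genes[j], float(r[i, j]))
            for i, j in zip(*np.where(r > min_r))]
    return sorted(hits, key=lambda t: -t[2])
```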
For instance, they saw that among people who advanced to late-stage AD, normally repressive compartments opened up for more expression, while compartments that were normally open in health became more repressed. Worryingly, when brain cells’ normally repressive compartments opened up, the cells became more afflicted with disease.
“For Alzheimer’s patients, repressive compartments opened up, and gene expression levels increased, which was associated with decreased cognitive function,” explained first author Zunpeng Liu.
But when cells managed to keep their compartments in order such that they expressed the genes they were supposed to, people remained cognitively intact.
Meanwhile, based on the cells’ expression of their regulatory elements, the researchers created an epigenomic information score for each cell. Generally, information declined as pathology progressed, and the decline was particularly notable among cells in the two brain regions affected earliest in Alzheimer’s: the entorhinal cortex and the hippocampus. The analyses also highlighted especially vulnerable cell types, including microglia, which play immune and other roles; oligodendrocytes, which produce the myelin insulation for neurons; and particular kinds of excitatory neurons.
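The paper’s precise definition of the score lives in its methods. As a loose, illustrative stand-in for the idea – quantifying how much cell-type-specific regulatory identity a cell retains – one could score each cell’s accessibility profile against its cell type’s reference signature; everything below is an assumption made for illustration, not the authors’ metric.

```python
import numpy as np

def identity_score(cell_profiles, type_signature):
    """Cosine similarity between each cell's accessibility profile and
    its cell type's reference signature. Cells drifting away from their
    regulatory identity score lower. (Illustrative stand-in only, not
    the paper's actual epigenomic information score.)"""
    sig = type_signature / np.linalg.norm(type_signature)
    norms = np.linalg.norm(cell_profiles, axis=1, keepdims=True)
    return (cell_profiles / np.clip(norms, 1e-12, None)) @ sig

# One could then compare, e.g.:
# identity_score(late_stage_microglia, microglia_signature).mean()
# versus identity_score(control_microglia, microglia_signature).mean()
```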
Risk genes and ‘chromatin guardians’
Detailed analyses in the paper highlighted how epigenomic regulation tracked with disease-related problems, Liu noted. The e4 variant of the APOE gene, for instance, is widely understood to be the single biggest genetic risk factor for Alzheimer’s. In APOE4 brains, microglia initially responded to the emerging disease pathology with an increase in their epigenomic information, suggesting that they were stepping up to their unique responsibility to fight off disease. But as the disease progressed the cells exhibited a sharp drop off in information, a sign of deterioration and degeneration. This turnabout was strongest in people who had two copies of APOE4, rather than just one. The findings, Kellis said, suggest that APOE4 might destabilize the genome of microglia, causing them to burn out.
Another example is the fate of neurons expressing the gene RELN and its protein Reelin. Prior studies, including by Kellis and Tsai, have shown that RELN- expressing neurons in the entorhinal cortex and hippocampus are especially vulnerable in Alzheimer’s, but promote resilience if they survive. The new study sheds new light on their fate by demonstrating that they exhibit early and severe epigenomic information loss as disease advances, but that in people who remained cognitively resilient the neurons maintained epigenomic information.
In yet another example, the researchers tracked what they colloquially call “chromatin guardians” because their expression sustains and regulates cells’ epigenomic programs. For instance, cells with greater epigenomic erosion and advanced AD progression displayed increased chromatin accessibility in areas that were supposed to be locked down by Polycomb repression genes or other gene expression silencers. While resilient cells expressed genes promoting neural connectivity, epigenomically eroded cells expressed genes linked to inflammation and oxidative stress.
“The message is clear: Alzheimer’s is not only about plaques and tangles, but about the erosion of nuclear order itself,” Kellis said. “Cognitive decline emerges when chromatin guardians lose ground to the forces of erosion, switching from resilience to vulnerability at the most fundamental level of genome regulation.
“And when our brain cells lose their epigenomic memory marks and epigenomic information at the lowest level deep inside our neurons and microglia, it seems that Alzheimer’s patients also lose their memory and cognition at the highest level.”
Other authors of the paper are Benjamin T. James, Kyriaki Galani, Riley J. Mangan, Stuart Benjamin Fass, Chuqian Liang, Manoj M. Wagle, Carles A. Boix, Yosuke Tanigawa, Sukwon Yun, Yena Sung, Xushen Xiong, Na Sun, Lei Hou, Martin Wohlwend, Mufan Qiu, Xikun Han, Lei Xiong, Efthalia Preka, Lei Huang, William F. Li, Li-Lun Ho, Amy Grayson, Julio Mantero, Alexey Kozlenkov, Hansruedi Mathys, Tianlong Chen, Stella Dracheva, and David A. Bennett.
Funding for the research came from The National Institutes of Health, The National Science Foundation, the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, Eduardo Eurnekian, and Joseph P. DiSabato.
Journal reference:
Liu, Z., et al. (2025). Single-cell multiregion epigenomic rewiring in Alzheimer’s disease progression and cognitive resilience. Cell. doi.org/10.1016/j.cell.2025.06.031
New research investigates the possibility that different spacecraft could visit Comet 3I/ATLAS, giving scientists a unique on-location view of the interstellar visitor, or even offering the chance to collect material that could be much older than the bodies of our solar system.
Discovered on July 1, 2025 by ATLAS (the Asteroid Terrestrial-impact Last Alert System), 3I/ATLAS is just the third object ever found drifting through our solar system that is believed to have originated from around another star. It therefore offers scientists a unique opportunity to study material from another planetary system. However, recent examination of this interstellar intruder’s trajectory and its velocity of around 130,000 mph (210,000 km/h) has revealed that it could actually originate from a region of our galaxy much older than the solar system and its immediate surroundings.
The fact that 3I/ATLAS seems to originate from the Milky Way’s “thick disk” of stars means that it could be 7 billion years old or even older, making it at least around 2.5 billion years older than our sun, the planets of the solar system, and the solar system’s horde of asteroids and comets. It thus represents the chance to study material that formed, during a period of the universe called “cosmic noon,” much earlier than anything native to the solar system, an exciting prospect for scientists. But there is a problem.
Like the solar system’s own comets, 3I/ATLAS sheds material as it approaches the sun: solar radiation heats the ice at its core and converts it directly to gas, which erupts from the comet’s surface. This process gives comets their distinctive tails and haloes (or “comas”) and gives scientists a chance to assess what chemicals lie at the core of the comet.
A comet loses the most material when it comes closest to the sun, at a point in its orbit known as perihelion. The authors of this new research point out that when 3I/ATLAS makes its perihelion passage, it will be on the far side of the sun as seen from Earth.
That means the telescopes that have been integral to the study of 3I/ATLAS will miss what is potentially the most revealing period of its passage through the solar system. Ground-based telescopes like the four ATLAS telescopes in Hawaii, Chile, and South Africa, and space telescopes like the James Webb Space Telescope (JWST) and the Hubble Space Telescope, have all studied the comet so far, but the comet will be out of their view during perihelion.
“A telescope on Earth will be at a huge disadvantage, as 3I/ATLAS will unfortunately reach its closest point to the sun behind the sun, if viewed from the Earth. It means that we would need to look through or past the sun to observe 3I/ATLAS,” research author and University of Luxembourg researcher Andreas M. Hein told Space.com. “This is a big issue, as the sun is a very bright object and 3I/ATLAS is very faint. We would not be able to see 3I/ATLAS from Earth.”
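The geometry behind that problem is easy to quantify: what matters is the solar elongation, the sun-Earth-comet angle, which shrinks toward zero as the comet slips behind the sun. Here is a toy sketch with made-up heliocentric positions; for real planning you would pull positions from an ephemeris service such as JPL Horizons.

```python
import numpy as np

def elongation_deg(earth_xyz, comet_xyz):
    """Sun-Earth-comet angle in degrees, with the sun at the origin.

    Small elongation means the comet sits near the sun in Earth's sky
    and is lost in the glare.
    """
    earth = np.asarray(earth_xyz, dtype=float)
    comet = np.asarray(comet_xyz, dtype=float)
    to_sun, to_comet = -earth, comet - earth
    cosang = to_sun @ to_comet / (np.linalg.norm(to_sun) * np.linalg.norm(to_comet))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Illustrative positions in au (NOT real ephemerides): Earth on the +x axis,
# comet roughly behind the sun as seen from Earth.
print(f"{elongation_deg([1.0, 0.0, 0.0], [-1.35, 0.35, 0.05]):.0f} deg")  # ~9 deg: unobservable
```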
3I/ATLAS traveling through a background of stars as seen by the ground-based telescopes of the Las Cumbres Observatory. (Image credit: ESA)
Thus, Hein and his colleagues, including lead author T. Marshall Eubanks, Chief Scientist at Space Initiatives Inc., set about assessing which spacecraft could avoid this issue and get an up close and personal view of 3I/ATLAS, or even pass through its cometary tail and collect some material.
“The Earth and its satellites, including our space telescopes, are, purely by chance, poorly situated to observe 3I/ATLAS when it is closest to the sun, as it will also happen to be on the far side of the sun as seen from Earth,” Eubanks said. “The period at and just after perihelion is when a comet (including 3I/ATLAS) is heated the most, and thus throws off the most material, has the biggest tail, and has the highest chance of fragmenting, or even disappearing.
“Spacecraft offer a chance of observing this period.”
What spacecraft could catch a glimpse (or a chunk) of 3I/ATLAS?
3I/ATLAS will pass within the orbit of Mars, which will allow several spacecraft from Earth to catch a glimpse of it. Eubanks pointed out two interplanetary voyagers in prime position to observe 3I/ATLAS: NASA’s Psyche mission, heading to the metal-rich asteroid 16 Psyche in the main asteroid belt between Mars and Jupiter; and the European Space Agency (ESA) spacecraft Jupiter Icy Moons Explorer (JUICE), heading to the Jovian system.
“Thanks to its Venus gravity assist on Aug. 31, JUICE will be in the best position for the important period around the 3I/ATLAS perihelion, when observations from Earth will be the hardest,” Eubanks explained. “Various spacecraft orbiting Mars, including the Mars Reconnaissance Orbiter (MRO), Tianwen-1, and Hope, all have both the vantage point and good equipment to provide good data on 3I/ATLAS.
“Of all of these, I think that the JUICE data near perihelion is likely to be the most critical.”
The researcher added that at its closest approach, Psyche will be around 28 million miles (45 million km) from 3I/ATLAS, while JUICE will come to around 43 million miles (68 million km) from the comet. At their closest, the spacecraft orbiting Mars will be around 18 million miles (29 million km) from the comet.
All of these spacecraft will be much closer to 3I/ATLAS than the comet will ever come to Earth. The interstellar comet’s closest approach to our planet is estimated to be 168 million miles (269 million km) on Dec. 19, 2025.
The Venus flyby performed by JUICE in August 2025 could put it in prime position to observe 3I/ATLAS. (Image credit: ESA)
Eubanks added that 3I/ATLAS will also pass through the fields of view of the ESA’s Solar and Heliospheric Observatory (SOHO), NASA’s PUNCH solar monitoring spacecraft, and the Parker Solar Probe as the comet passes near the sun. That means these spacecraft will provide scientists the opportunity to monitor the day-by-day behavior of 3I/ATLAS, albeit at larger distances and lower resolutions.
Other spacecraft may not get a good look at 3I/ATLAS, but could still play an important role in investigating the interstellar visitor.
“The Europa Clipper, Hera, and the Lucy spacecraft will all be exterior to 3I/ATLAS, with the sun being behind the interstellar object from their vantage point, which will mostly or entirely prevent them from imaging 3I/ATLAS,” Eubanks said. “However, 3I/ATLAS will also provide the possibility that they will fly through the cometary tail of this body, providing a different means of studying it.”
The likelihood of a spacecraft passing through the cometary tail of 3I/ATLAS will depend on how it develops.
“It depends on the direction of the tail. The Europa Clipper and Hera spacecraft are farther from the sun than 3I/ATLAS and may pass through its tail or observe it at a closer distance,” Hein said. “Ideally, we would be able to do mass spectroscopy to find out about the composition of 3I/ATLAS. The results would tell us if 3I/ATLAS indeed has its origin in the thick disk.
“It’s like an aeon-old fridge, which will open during the next months to release some of its contents.”
A rare opportunity to study the universe
Eubanks and Hein explained to Space.com just why scientists shouldn’t miss this opportunity to study 3I/ATLAS and to potentially snatch some material from its tail.
“Stars in the galactic thick disk have formed billions of years earlier than those in the thin disk, which includes our sun,” Hein explained. “If 3I/ATLAS has been ejected from a thick disk star system, it means that we could gain insights into it without flying there – something which we will not be able to do for the foreseeable future. Hence, observing 3I/ATLAS is a literal example of: If the prophet cannot go to the mountain, let the mountain come to the prophet.”
Eubanks added that as far as scientists know, almost everything in the solar system was formed around 4.6 billion years ago, or afterwards, barring some small “pre-solar” grains in some meteorites that are older than the solar system.
“The oldest of these, the pre-solar grains found in the Murchison meteorite, are ‘only’ 5 to 7 billion years old. 3I/ATLAS should be considerably older than that,” Eubanks continued. “The cosmic noon, the period in which it seems to have formed, a time of high star formation, can be observed in other galaxies, but at a distance of 6 billion light-years. Now, we can study it in our own solar system, many trillions of times closer.”
Images of 3I/ATLAS as seen by the James Webb Space Telescope. The $10 billion telescope may miss the best part of 3I/ATLAS’s traversal of the solar system. (Image credit: NASA/James Webb Space Telescope)
As for just how rare an opportunity this is, even this team of scientists isn’t sure. That’s because we currently can’t say exactly how many interstellar invaders from beyond the solar system are zipping through our cosmic backyard.
“Imagine being in a dark room. There might be all kinds of objects buzzing around, but you cannot see them in the dark, and hence, it is impossible to know how many there are. Telescopes are a bit like searchlights in a dark room that allow us to see some of those objects,” Hein said. “Of course, this is just an analogy, as real telescopes collect light instead of shining light on objects. Their resolution is limited, and a lot of them can only observe narrow spots. Hence, we still only get a sketchy picture of what is going on. That was basically the situation until recently.
“Some believe that those objects are fairly common, but I am skeptical. If similar objects had buzzed through the solar system, we would have observed them much earlier than in 2017 when we observed ʻOumuamua or 2019 when we observed 2I/Borisov.”
A stunning image of Rubin observing the night sky over Earth as it conducts the 10-year LSST, a groundbreaking astronomical survey that could uncover more interstellar invaders in the solar system (Image credit: NSF-DOE Rubin Observatory/AURA/B. Quint)
Hein added that the Vera Rubin Observatory, which has just opened its eye to the cosmos, is acting as a sweeping searchlight that should be able to observe large parts of the sky, thus allowing scientists to identify many more interstellar objects.
“However, I believe that these objects are rarer than commonly assumed and particularly thick disk objects, as the number of thick disk stars is only a fraction of thin disk stars,” Hein continued. “Hence, it might be decades or even longer until we may have another opportunity to observe a potential thick disk object.”
Whether 3I/ATLAS is unique or not, the team is clear that this is the first time we’ve been presented with a clear opportunity to study an object from cosmic noon in our own backyard.
“This is certainly the first time we have been offered such an opportunity,” Eubanks said. “This could be literally a once-in-a-lifetime opportunity.”
Excitingly, 3I/ATLAS may leave behind fragments that allow it to be studied after it has departed the solar system to continue its voyage through the Milky Way.
“Due to the size of 3I/ATLAS, it may actually have satellites up to a distance intersecting with the orbits of Mars and Earth. Hence, we might be able to observe dust particles in the form of a meteor shower entering the atmospheres of Mars and Earth or even larger pieces of 3I flying past those planets,” Hein concluded. “The coming months will be very exciting as more and more data from 3I/ATLAS will be collected. We hope that some of its mysteries will be solved!”
The team’s research is available on the paper repository site arXiv.
There’s a cacophony of acoustic signals below the range of human hearing, many quite intense, that you can pick up with the right “ears.” (Image credit: Elena Zhukova)
(Santa Barbara, Calif.) — Along the coast, waves break with a familiar sound. The gentle swash of the surf on the seashore can lull us to sleep, while the pounding of storm surge warns us to seek shelter.
Yet these are but a sample of the sounds that come from the coast. Most of the acoustic energy from the surf is far too low in frequency for us to hear, traveling through the air as infrasound and through the ground as seismic waves.
Scientists at UC Santa Barbara have recently characterized these low-frequency signals to track breaking ocean waves. In a study published in Geophysical Journal International, they identified the acoustic and seismic signatures of breaking waves and located where along the coast the signals came from. The team hopes to develop this into a method for monitoring sea conditions using acoustic and seismic data.
The low rumble of the waves
The surf produces infrasound and seismic waves in addition to the higher frequency sound we hear at the beach. Exactly how this works is still an open question, but scientists believe it’s connected to the air that mixes into a breaking wave. “All those bubbles oscillate due to the pressure instability, expanding and contracting basically in synch,” said first author Jeremy Francoeur, a former graduate student in Professor Robin Matoza’s group. This generates an acoustic signal that transfers into the air at the sea surface and into the ground on the sea floor.
Pressure waves between about 0.01 and 20 hertz (Hz) are still ordinary acoustic waves, but their frequency, or “pitch,” is too low for humans to hear. “These hidden sounds of Earth’s atmosphere are produced by numerous natural and anthropogenic sources,” explained senior author Matoza, a geophysicist in UCSB’s Department of Earth Science. These include volcanoes, earthquakes and landslides; ocean storms, hurricanes and tornadoes; even auroras and the wind flowing over mountains. Understanding the type of signals generated by each phenomenon can provide a bounty of information about these events.
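One way to see why surf lands in the infrasound band is the classical Minnaert resonance of an oscillating air bubble in water. This is a textbook formula, not a calculation from the paper, but it shows the scales involved: a single millimeter-sized bubble rings in the kilohertz range, while a bubble cloud oscillating collectively, behaving roughly like one meters-scale bubble, rings at a few hertz.

```python
import math

def minnaert_hz(radius_m, pressure_pa=101_325.0, rho_kg_m3=1025.0, gamma=1.4):
    """Minnaert resonance of an air bubble in (sea)water:
    f = sqrt(3 * gamma * P / rho) / (2 * pi * a)."""
    return math.sqrt(3.0 * gamma * pressure_pa / rho_kg_m3) / (2.0 * math.pi * radius_m)

print(f"1 mm bubble: {minnaert_hz(0.001):,.0f} Hz")  # ~3 kHz, an audible ring
print(f"1 m 'cloud': {minnaert_hz(1.0):.1f} Hz")     # ~3 Hz, squarely infrasonic
```

That 1-meter effective-bubble figure lands right in the 1 to 5 Hz band the team observed, though treating a bubble cloud as one large bubble is only a rough analogy.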
Working from UCSB’s seaside campus, it was natural that Matoza eventually turned his attention toward the beach. He and his students were curious what their seismo-acoustic techniques could tell them about the surf breaking along the coast.
Francoeur deployed an array of sensors atop the headland at UCSB’s Coal Oil Point Reserve, part of the UC Natural Reserve System, to record infrasound and seismic waves produced by the surf. He paired this data with video footage of the beach to identify what signals corresponded to a breaking wave.
Seeing the surf with sound
Many infrasound studies have used only one sensor. Deploying an array provided the team with much more information. The crash of a wave acted like the snap of a clapperboard on a Hollywood set, allowing Francoeur to align the video and infrasound channels with each other. This enabled them to better identify the specific signal from crashing waves since they could correlate the footage with pulses in the infrasound. They then searched for the same signature in the longer archive of infrasound data they recorded at Coal Oil Point.
While many phenomena produce infrasound, the signal from the surf was fairly clear in the data. It arrived at the sensors as repetitive pulses between 1 and 5 Hz.
It was also fairly loud. Well, sort of. “‘Loudness’ is a description of a human perception,” Matoza explained, “so infrasound cannot have ‘loudness.’” However, what we perceive as volume relates to the amplitude of the acoustic wave.
Most of the wave infrasound was around 0.1 to 0.5 pascals (Pa). Shifted into the frequency range of human hearing, that would be about the volume of busy traffic (74 to 88 decibels (dB) relative to a 20 µPa reference pressure). Particularly strong swells reached 1 to 2 Pa, the din of a noisy factory (94 to 100 dB).
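Those decibel figures follow directly from the standard sound-pressure-level formula, referenced to 20 µPa:

```python
import math

def spl_db(pressure_pa, p_ref=20e-6):
    """Sound pressure level in dB re 20 micropascals."""
    return 20.0 * math.log10(pressure_pa / p_ref)

for p in (0.1, 0.5, 1.0, 2.0):
    print(f"{p:4.1f} Pa -> {spl_db(p):5.1f} dB")
# 0.1-0.5 Pa -> 74-88 dB (busy traffic); 1-2 Pa -> 94-100 dB (noisy factory)
```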
“The sound of the surf is pretty loud when you’re out there on the beach,” Francoeur said, “so it’s interesting that the majority of energy is actually produced in the infrasound range.”
The team was curious whether this signal would align with sea conditions. They found that the infrasound amplitude correlated with significant wave height, which is the height of swells on the open ocean. “But the correlation between what we were seeing with the video data compared to what we were seeing acoustically and seismically was a lot more complex than we initially imagined it to be,” Matoza said.
Francoeur was also able to use the array to triangulate the signals’ origin from small differences in arrival times, a technique called reverse-time-migration. “It was interesting to me that all of the directions seemed to align to the same region of the beach,” he said: “the rock shelf at Coal Oil Point.” The authors suspect that the point’s bathymetry forces a large proportion of waves to crash simultaneously, producing those synchronized bubble oscillations.
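The textbook version of locating a source with an array, simpler than the reverse-time migration used in the study, treats the arriving wavefront as a plane wave and solves for its slowness vector from inter-sensor arrival delays. The sketch below is that generic plane-wave method with invented numbers, not the team’s field data or code.

```python
import numpy as np

def plane_wave_source_direction(xy_m, delays_s):
    """Back-azimuth and apparent speed from inter-sensor arrival delays.

    xy_m: (n, 2) sensor positions relative to a reference sensor (meters).
    delays_s: (n,) arrival delays relative to that sensor, e.g. from
    cross-correlating each infrasound channel against the reference.
    Fits a plane wave, delays = xy @ s, for the slowness vector s; the
    back-azimuth (toward the source) is opposite the propagation direction.
    """
    xy = np.asarray(xy_m, dtype=float)
    s, *_ = np.linalg.lstsq(xy, np.asarray(delays_s, dtype=float), rcond=None)
    baz = (np.degrees(np.arctan2(-s[0], -s[1])) + 360.0) % 360.0  # deg east of north
    return baz, 1.0 / np.linalg.norm(s)  # apparent sound speed, m/s

# Illustrative 3-sensor array and delays: a wave arriving from due south
# at roughly the speed of sound.
print(plane_wave_source_direction([(50.0, 0.0), (0.0, 50.0), (-40.0, -30.0)],
                                  [0.0, 0.147, -0.088]))  # ≈ (180.0, ~340)
```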
Future opportunities
The researchers are curious if it’s common for one area of a beach to produce most of the infrasound, like they observed in this study. They also want to know if the signals they detected are typical of breaking surf. “Does a wave here have the same infrasound signal as, say, a wave in Tahiti?” Francoeur asked. “And as tides change, as winds change, and the conditions out there change, how does that affect the infrasound that’s produced?”
Matoza will continue to investigate these questions with his lab, a task made simpler by the project’s location merely 2.5 miles from his office. “Having this field site very close to campus was really a fantastic opportunity because it was a lot of trial and error trying to figure out good array geometries,” he said. “The proximity meant that we could quickly deploy.”
It’s also a boon to his students’ budding careers. “They get to take part in the whole geophysical workflow — from collecting data in the field, deploying the instruments, analyzing the data, hypothesis testing and writing the paper. And we can do that all within Goleta,” Matoza said.
He hopes to ultimately develop a way to characterize surf conditions solely from infrasound and seismic signatures. This could complement video monitoring systems that may be limited by darkness and fog.
The use of assisted reproductive technologies like in vitro fertilization is becoming more common worldwide. However, while these technologies successfully create viable embryos, a little over half of all embryos are lost because they fail to implant into the uterus.
Now, in an article published recently in Nature Communications, researchers describe a technique for studying embryonic implantation in mice by using uterine tissue outside the body (or “ex vivo“), which they hope will lead to improved implantation rates in humans.
Studying implantation is inherently more difficult than the prior stages of in vitro fertilization because it requires observing deep tissues in a live uterine environment. These challenges made researchers wonder: what if they could find a way to keep part of the uterus alive outside of the body, so that implantation could be observed without any barriers?
“Previous studies have used model embryos, created from stem cells, to emulate embryonic development pre- and post-implantation. However, without the uterus, they cannot genuinely replicate embryo implantation, and studies have been unable to recreate this process.”
Takehiro Hiraoka, lead author
Using a specialized culture method, in which mouse uterine tissue is held at an interface with liquid medium on one side and gas on the other, the researchers were able to place mouse embryos onto small pieces of endometrial tissue. They could then evaluate how the embryos implanted and developed. Impressively, their technique achieved over 90% implantation efficiency, which was followed by embryo development and invasion of the uterine lining by the embryo.
“We also saw some features that are characteristic of what happens in implantation inside the body,” says Masahito Ikawa, senior author. “For example, the maternal implantation regulator COX-2 was induced at the site of embryonic attachment.”
To further highlight the potential of their system, the research team looked at the downstream pathways of COX-2 induction. They found that embryonic AKT, a protein that is involved in placental formation and arrangement as well as in cell survival, migration, and invasion, was affected by maternal COX-2.
“Further experiments indicated that introducing an activated form of AKT into embryos recovered defective implantation that was triggered by maternal COX-2 inhibition,” confirms Ikawa. “We were thus able to find a potential way to overcome the issue of implantation failure, indicating the strong potential of our technique for improving assisted reproduction in the future.”
Although challenges remain, such as how to keep the embryos developing past embryonic day 5.5, the results are promising. With future methodological improvements, this technique could enable the development of treatments for people with recurrent implantation failure. It may also improve implantation rates in assisted reproductive technologies, thereby allowing families with previously untreatable conditions to achieve their dreams.
Journal reference:
Hiraoka, T., et al. (2025) An ex vivo uterine system captures implantation, embryogenesis, and trophoblast invasion via maternal-embryonic signaling. Nature Communications. doi.org/10.1038/s41467-025-60610-x
An explosive new space image reveals a star’s inner conflict hours before it blew up.
The picture shows the inside of a star turning on itself in the short time before it spectacularly exploded, according to a new study from NASA’s Chandra X-ray Observatory.
The shattered star, known as the Cassiopeia A supernova remnant, is one of the best-known and most-studied objects in the sky.
NASA explains: “Over three hundred years ago, however, it was a giant star on the brink of self-destruction.
“The new Chandra study reveals that just hours before it exploded, the star’s interior violently rearranged itself.
“This last-minute shuffling of its stellar belly has profound implications for understanding how massive stars explode and how their remains behave afterwards.”
Cassiopeia A (Cas A for short) was one of the first objects Chandra looked at after its launch in 1999, and astronomers have repeatedly returned to observe it.
“It seems like each time we closely look at Chandra data of Cas A, we learn something new and exciting,” said Toshiki Sato of Meiji University in Japan who led the study. “Now we’ve taken that invaluable X-ray data, combined it with powerful computer models, and found something extraordinary.”
The new research with Chandra data reveals a change that happened deep within the star in the very last moments of its life. After shining for more than a million years, the star that became Cas A underwent major changes in its final hours before exploding.
“Our research shows that just before the star in Cas A collapsed, part of an inner layer with large amounts of silicon traveled outwards and broke into a neighboring layer with lots of neon,” said co-author Kai Matsunaga of Kyoto University in Japan. “This is a violent event where the barrier between these two layers disappears.”
The strong turbulent flows created by the star’s internal changes may have promoted the development of the supernova blast wave, facilitating the star’s explosion.
“Perhaps the most important effect of this change in the star’s structure is that it may have helped trigger the explosion itself,” said co-author Hiroyuki Uchida, also of Kyoto University. “Such final internal activity of a star may change its fate—whether it will shine as a supernova or not.”
The results have been published in the latest issue of The Astrophysical Journal.