Citizen science projects can have an overwhelmingly positive impact on the polar tourism experience. That’s according to a new paper analyzing participant experiences during the first two years of FjordPhyto, a NASA Citizen Science project.
The FjordPhyto citizen science project invites travelers aboard expedition cruise vessels to gather data and samples during the polar summer season, helping researchers understand how microalgae communities respond to melting glaciers. From November to March, travelers in Antarctica collect phytoplankton samples and ocean data, facilitated by trained expedition guides.
The new research found that ninety-seven percent of respondents reported that participating in citizen science enriched their travel experience. The paper provides a first understanding of the impact of citizen science projects on the tourism experience.
“I was worried that I would feel guilty being a tourist in a place as remote and untouched as Antarctica,” said one anonymous FjordPhyto participant. “But being able to learn and be a part of citizen science, whilst constantly being reminded of our environmental responsibilities, made me feel less like just a visitor and more a part of keeping the science culture that Antarctica is known for alive and well.”
For more information and to sign up, visit the FjordPhyto website.
Space might be far colder than Earth, but the frozen water found on comets, icy moons, and interstellar dust is far more complex than the ice in a home freezer.
A new computational and laboratory study from scientists at University College London (UCL) and the University of Cambridge overturns a decades-old assumption about “space ice.”
Instead of being a totally shapeless solid, low-density amorphous ice contains countless nanoscopic crystals. The team reports that these crystals are only a few billionths of a meter across.
Earth ice versus space ice
Earthly ice is familiar because its molecules assemble into a highly ordered lattice. The six-fold symmetry of snowflakes reflects that underlying structure.
By contrast, researchers have long believed that water freezing in the ultra-cold, near-vacuum conditions of space lacks the energy needed to form such lattices, locking into a completely random arrangement. The new study challenges that view.
Viewing space ice at an atomic level
Using state-of-the-art molecular dynamics simulations, the team modeled how water vapor condenses at temperatures around –120 °C (-184 °F), a typical environment for amorphous ice formation.
Next, the researchers compared the predicted structures with archival X-ray diffraction data collected from laboratory samples.
The best match occurred when about 20 percent of the simulated solid consisted of orderly nanocrystals, each only slightly wider than a single strand of DNA, embedded within a disordered matrix.
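That matching step can be illustrated with a toy calculation: mix a sharp “crystalline” diffraction profile with a broad “amorphous” one in varying proportions, and find the fraction that best reproduces a target curve. The sketch below is a minimal stand-in for the idea, with invented peak shapes and the 20 percent answer built in; it is not the authors’ analysis pipeline.

```python
import numpy as np

# Toy version of matching simulated structures to diffraction data:
# find the crystalline fraction that best reproduces a target profile.
q = np.linspace(1, 5, 400)                    # scattering vector, arbitrary units
crystal = np.exp(-((q - 2.5) / 0.05) ** 2)    # sharp Bragg-like peak (invented shape)
amorphous = np.exp(-((q - 2.7) / 0.6) ** 2)   # broad amorphous halo (invented shape)
target = 0.2 * crystal + 0.8 * amorphous      # stand-in for the experimental curve

fractions = np.linspace(0, 1, 101)
residuals = [np.sum((f * crystal + (1 - f) * amorphous - target) ** 2)
             for f in fractions]
best = fractions[np.argmin(residuals)]        # recovers 0.2 in this toy setup
print(f"best-fit crystalline fraction: {best:.2f}")
```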
Study lead author Michael B. Davies, a professor at UCL, argued the discovery provides a long-missing atomic-level picture of the Universe’s dominant ice.
“We now have a good idea of what the most common form of ice in the Universe looks like at an atomic level,” said Davies. “This is important as ice is involved in many cosmological processes, for instance in how planets form, how galaxies evolve, and how matter moves around the Universe.”
Hidden crystals grow with warming
To confirm the simulations, the researchers fabricated several kinds of amorphous ice in cryogenic chambers.
One method mimicked interstellar dust conditions by depositing water vapor on a surface kept at –110 °C (-166 °F). Another began with high-density amorphous ice – created by mechanically crushing ordinary ice at –200 °C (-328 °F) – and then let it relax until it reached the low-density state.
Next, the team warmed each sample gently so that hidden crystals could grow large enough to be detected.
Crucially, the final crystalline arrangement differed depending on how the ice had originally formed, implying that each amorphous sample had retained a “memory” of its past in the form of pre-existing seed crystals.
If the material had been fully disordered to begin with, every warm-up should have produced identical crystals.
Space ice and life’s origins
For decades, some astrobiologists have proposed that comets delivered the first amino acids to early Earth, a scenario called panspermia.
These theories often assumed that low-density amorphous ice could store organic molecules in its wide, random pores.
Since the new work shows the ice is partly crystalline, those cavities may be scarcer than previously thought.
Visual representation of the structure of low-density amorphous ice. Many tiny crystallites (white) are concealed in the amorphous material (blue). Credit: Michael B Davies, UCL and University of Cambridge
“Our findings suggest this ice would be a less good transport material for these origin of life molecules,” Davies explained.
“That is because a partly crystalline structure has less space in which these ingredients could become embedded. The theory could still hold true, though, as there are amorphous regions in the ice where life’s building blocks could be trapped and stored.”
Implications for advanced technology
The results ripple into materials science on Earth. Study co-author Christoph Salzmann pointed out that everyday technologies – from fiber-optic cables to pharmaceutical tablets – depend on truly amorphous solids.
“Ice on Earth is a cosmological curiosity due to our warm temperatures. You can see its ordered nature in the symmetry of a snowflake,” Salzmann said.
“Ice in the rest of the Universe has long been considered a snapshot of liquid water – that is, a disordered arrangement fixed in place.
“Our findings show this is not entirely true. Our results also raise questions about amorphous materials in general. These materials have important uses in much advanced technology.”
Glass fibers that carry data over long distances must remain amorphous – or disordered – to function properly, so eliminating any tiny crystals within them could enhance their performance.
An expanding ice family
Scientists already recognized two amorphous ices – low-density and high-density – before the same UCL-Cambridge collaboration revealed a medium-density form in 2023.
The medium type matches liquid water’s density, so it would neither sink nor float in a glass. The latest study shows that even supposedly structureless ice can contain hidden order.
Researchers now wonder: could there be a truly crystal-free variant, or does some degree of hidden order inevitably form whenever water freezes in space?
A foundation for future missions
Future investigations will probe whether the fraction or size of these nanocrystals depends on how quickly the ice forms, the presence of salts and organics, or exposure to cosmic radiation.
Answers could refine climate models for icy moons, improve planning for missions that mine extraterrestrial ice as fuel, and deepen fundamental understanding of water’s many quirks.
“Water is the foundation of life but we still do not fully understand it. Amorphous ices may hold the key to explaining some of water’s many anomalies,” said co-author Professor Angelos Michaelides.
According to Davies, ice is potentially a high-performance material in space. “It could shield spacecraft from radiation or provide fuel in the form of hydrogen and oxygen. So we need to know about its various forms and properties.”
Whether for fueling rockets, protecting astronauts, or seeding life, the Universe’s most common ice now appears more sophisticated – and more structured – than scientists ever suspected.
The study is published in the journal Physical Review B.
A Los Alamos collaboration has replicated an important but largely forgotten physics experiment: the first deuterium-tritium (DT) fusion observation. As described in Physical Review C, the reworking of the previously unheralded experiment confirmed the role of University of Michigan physicist Arthur Ruhlig, whose 1938 experiment and observation of deuterium-tritium fusion likely planted the seed for a physics process that informs national security work and nuclear energy research to this day.
“As we’ve uncovered, Ruhlig’s contribution was to hypothesize that DT fusion happens with very high probability when deuterium and tritium are brought sufficiently close together,” said Mark Chadwick, associate Laboratory director for Science, Computation and Theory at Los Alamos. “Replicating his experiment helped us interpret his work and better understand his role, and what proved to be his essentially correct conclusions. The course of nuclear fuel physics has borne out the profound consequences of Arthur Ruhlig’s clever insight.”
The DT fusion reaction is central to enabling fusion technologies, whether as part of the nation’s nuclear deterrence capabilities or in ongoing efforts to develop fusion for civilian energy. For instance, the deuterium-tritium reaction is at the center of efforts at the National Ignition Facility to harness fusion. Los Alamos physicists developed a theory about where the idea came from — Ruhlig — and then built an experiment that would confirm the import and accuracy of Ruhlig’s suggestion.
Tracking down the origin of DT fusion
In 2023, Chadwick was working with colleagues, including theoretical physicist Mark Paris, on a compendium of the early history of the development of fusion. A reasonably well-known part of that history is physicist Emil Konopinski’s suggestion, made at future Manhattan Project director J. Robert Oppenheimer’s July 1942 Berkeley physics conference, that DT fusion, among numerous possible fusion reactions, would be particularly advantageous as a component of a fission-reaction weapon.
But Chadwick and his Los Alamos colleagues wondered: How did Konopinski arrive at this insight about DT fusion? Seizing on the most viable fusion process among several options this early in the program — the Manhattan Project had started in earnest only months before — certainly proved a fortuitous decision.
Searching the National Security Research Center archive one night, Chadwick came across a 1986 audio recording of Konopinski recounting the decision making around deuterium-tritium reactions. (The team made the Konopinski recording available on YouTube.) His voice coming out of the past, reflecting on an even further past, Konopinski several times attributed his motivation to explore deuterium-tritium research to his knowledge of “pre-war” research.
Tritium was discovered in 1934 by experimental physicist Ernest Rutherford’s team. Rutherford was a titan of early physics, having advanced the model of the atom with Niels Bohr and having overseen James Chadwick’s discovery of the neutron. Starting from 1934, Paris scoured the published physics literature, eventually coming across Ruhlig’s single-author 1938 letter to the editor, published in Physical Review, about a gamma-ray experiment.
Ruhlig was working with deuterium-on-deuterium interactions: blasting deuterium with a beam of deuterons and studying gamma-ray effects. (A deuteron is the nucleus — one neutron and one proton — of a deuterium atom.) In what is nearly an aside in the final paragraph of the letter, Ruhlig described observing protons with extremely high energies, inferring that they were generated by secondary reactions: tritium-on-deuterium fusion neutrons scattering protons out of a thin cellophane foil placed inside a cloud chamber. He cited a private conversation with Hans Bethe in unpacking what he’d seen. The DT reaction “must be an exceedingly probable one,” he concluded, estimating a ratio of about one energetic proton for every 1,000 lower-energy protons.
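In modern notation, the chain Ruhlig inferred runs as follows (a standard summary of these well-known reactions, not language from his letter):

D + D → T (1.0 MeV) + p (3.0 MeV)
T + D → ⁴He (3.5 MeV) + n (14.1 MeV)

The 14.1 MeV fusion neutron is what scatters energetic protons out of the cellophane foil, producing the telltale tracks Ruhlig saw in his cloud chamber.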
And there the matter dropped; Ruhlig’s paper was infrequently cited, with the few citations bearing mostly on the gamma-ray issues. But Konopinski appears to have remembered the work.
Paris and Chadwick put together the pieces: As it happens, Ruhlig and Konopinski were both University of Michigan students, overlapping in their doctoral studies in the 1930s. Ruhlig’s thesis adviser, Richard Crane, was a colleague of Bethe, and Konopinski served on a research fellowship overseen by Bethe. They also shared a mentor in University of Michigan physicist George Uhlenbeck, co-discoverer of electron spin. And though Ruhlig’s paper was not often cited, that does not necessarily mean it was unknown — the journal would have been part of many physicists’ regular reading.
“The evidence for Konopinski interpreting and taking up Ruhlig’s suggestion of the probability of DT fusion is circumstantial, but nonetheless strong,” Paris said. “We’re left to ask, what did Ruhlig actually observe? Are his conclusions consistent with what we would arrive at with a computational approach and an understanding of modern cross sections? Ultimately, the way to answer those remaining questions is to replicate the experiment.”
Chadwick mentioned the Ruhlig paper, and their theories about the 1938 experiment’s role in the development of DT fusion, to Lab Director Thom Mason, who insisted on the team conducting an experiment — not just a simulation — to validate their conclusions.
Replicating the experiment
The team collaborated with experimental physicists from Duke University, based at the Triangle Universities Nuclear Laboratory in North Carolina, on a modern, rigorously executed duplication of Ruhlig’s original experiment, accompanied by theoretical and computational analysis.
The team used the laboratory’s Tandem accelerator at its lowest operating power, producing a 3.5-mm deuteron beam. They paired that beam with a thin cobalt-alloy foil between the accelerator vacuum and the target, reproducing Ruhlig’s 500 keV beam as closely as possible. As in 1938, the beam was directed at a target of deuterated phosphoric acid, with a liquid scintillator neutron detector tracking the neutrons of interest to gauge the secondary reactions.
“In contrast to fusion experiments such as in the inertial confinement fusion efforts at the National Ignition Facility, we were able to perform, for the first time at a low-energy nuclear physics facility, a DT fusion experiment as a secondary reaction following the initial deuterium-deuterium interaction which provides the tritium,” said Werner Tornow, Duke University physicist for the Triangle Universities Nuclear Laboratory. “This work helps answer some intriguing questions about physics history, but it’s also impactful in extending our ability to work with DT fusion in a considerably more challenging environment.”
Confirming Ruhlig’s essential observations
The modern experiment did observe secondary DT reactions, although the results suggest that Ruhlig overestimated the ratio of excess neutron production, the product of fusion; the researchers detected a much smaller ratio. Because Ruhlig’s 1938 letter describing the experiment provides only sparse details as to how he arrived at his determination, though, it is ultimately difficult to decisively gauge the Michigan physicist’s accuracy against the modern results. The team’s value calculated using modern methods did agree with the measured value gleaned from the replicated experiment.
Importantly, the measurements derived from the experimental techniques employed by Ruhlig and re-tested by the Los Alamos and Triangle Universities Nuclear Laboratory researchers can be applied to active fusion efforts such as those at NIF.
“Regardless of the inconsistency of Ruhlig’s rate of fusion against our modern understanding, our replication leaves no doubt that he was at least qualitatively correct when he said that DT fusion was ‘exceedingly probable,’” Chadwick said. “Ruhlig’s accidental observation of DT fusion, together with subsequent Manhattan Project cross section measurements, contributed to the peaceful application of DT fusion in tokamaks focused on energy projects and in inertial confinement fusion experiments like NIF. I think we’re all proud to lift Arthur Ruhlig up again out of history as an important contributor to ongoing, vital research.”
Notably, the team published its results in Physical Review — the same journal that published Ruhlig’s first observation of DT fusion in 1938.
Arthur J. Ruhlig: A Physics Life
While Arthur (Art) Ruhlig never received wide acclaim for his initial observation of deuterium-tritium fusion, he was an important contributor to essential physics for many years. Born June 13, 1912, in Michigan, he graduated from high school in Fort Wayne, Indiana, before setting off for the University of Michigan. Studying under H. Richard Crane, Ruhlig was awarded a doctorate in physics in January 1938 for his thesis, “The Passage of Fast Electrons and Positrons Through Lead.” Ruhlig’s critical publication in Physical Review, “Search for Gamma-Rays from the Deuteron-Deuteron Reaction,” with its observation of DT fusion, followed in August of that year.
Ruhlig’s career spanned government and private industry research across several disciplines. He joined the Naval Research Laboratory as an electrical engineer in 1940 and stayed for more than 15 years. In 1946, he moved to the Rocket Sonde Research Branch, an arm of the Naval Research Laboratory charged with developing rocketry that uses instruments to study the atmosphere. He went on to head first the radiation division and then the electron tubes group. Much of his work from this time was and remains classified, though he occasionally published in the open literature.
In a serendipitous turn, Ruhlig was a member of a Naval Research Laboratory team that supported Los Alamos’ 1951 Operation Greenhouse in the Pacific. Ruhlig led a diagnostic group responsible for amplifiers and transmission lines. Having been the first to observe DT fusion in 1938, he was thus among the first to observe ignited burning fusion plasma as deployed in the series of thermonuclear tests. Ruhlig developed a formula, widely used for decades, to infer the temperature of a burning plasma from the observed neutron spectrum.
In 1956, Ruhlig joined the engineering and research company Aeronutronic (later purchased by Ford and merged with Philco), managing a radar and electronics laboratory. In 1960, Ruhlig was named manager of physics and computing for what was by then the Aeronutronic division of Ford Motor Company in Newport Beach, California, and in 1961 he was named a senior staff scientist. The company noted Ruhlig’s “wide-ranging competence” on display during his tenure there in the 1960s, including his valuable role in developing a laser system proposal for the U.S. Air Force. He read German, French and Russian fluently and was praised as a “brilliant scientist” whose “company loyalty and (…) personal and professional integrity are of the highest order.”
A family-oriented man, Ruhlig married his wife, Emily, in 1934, and they were married for nearly 67 years before her death in 2001. Arthur Ruhlig died in 2003 in Santa Ana, California. The Los Alamos-Duke University research team replicating the experiment connected with Ruhlig’s daughter Vivian Lamb, living in North Carolina. She had been searching family history to share with her granddaughter, and, seeing the team’s request for information about Ruhlig online, reached out to the research team and graciously shared her time and memories. She also passed along a picture of her hardworking father, likely from sometime following his 1938 work — a portrait of, in Vivian’s words, the “consummate scientist,” one who paired a “lifelong curiosity about problems in physics” with abiding “respect for careful scientific experiments.”
Scientists have traditionally described long-term changes to Earth’s marine ecosystems by measuring biodiversity—the number of different species that show up in ancient rock samples.
Until now, no one had measured how marine biomass—the sheer amount of organic material—fluctuated over hundreds of millions of years. A new study published in Current Biology does just that, using limestone samples to show for the first time that marine biomass and biodiversity trends aligned over the past 541 million years. The results may help answer questions about how ecosystems evolve over geologic time and how humans are driving a mass extinction in the modern world.
“[Biomass] patterns really followed the biodiversity curve, at least on macroevolutionary timescales,” said Pulkit Singh, a paleobiologist at Stanford University and coauthor of the new study. Singh’s graduate research forms the basis of the new study.
“This provides a new type of data that allows us for the first time to test some very influential ideas about the causality of long-term biodiversity changes,” said Seth Finnegan, a paleobiologist at the University of California, Berkeley, who was not involved in the new study.
Counting Skeletons and Shells
As organisms living in shallow marine environments die and settle to the seafloor, their calcium carbonate shells and skeletons are preserved as fossil-filled limestone. The successive layers of this limestone serve as an inventory of the diversity and abundance of life in the oceans over millions of years. They are especially valuable to paleontologists because of their high shell content and because limestone deposition rates likely remain stable over time, even in the absence of shells and skeletons.
To get a comprehensive picture of biomass over the Phanerozoic eon, Singh and the research team collected troves of data from previous studies that included counts of skeleton and shell fragments in marine limestone samples. In all, the team found data for more than 7,000 samples from 111 studies and conducted point counts for 73 new samples, too.
The data collection required a lot of “intellectual courage” from Singh, said Jonathan Payne, a paleobiologist at Stanford University and coauthor of the new study. “It took a lot of hard work with no guarantee that we’d get anything informative in the end.”
The gamble paid off: Results showed that “shelliness,” as Payne calls it—a proxy for biomass—generally increased over the past 541 million years alongside recorded trends in marine biodiversity, with dips in biomass aligning with known major extinction events.
The study “provides a link that has been missing until now” that connects long-term biodiversity processes to biomass trends, Finnegan said. The data appear to confirm an idea many paleobiologists expected but had not had the data to demonstrate—that marine animal biomass and biodiversity aligned over Earth’s history, he said.
Singh and the team performed a series of analyses to ensure the trends they were seeing weren’t due to other factors such as depositional environment, latitude, ocean depth, and ecosystem type. No matter how they sliced up the data, the results showed the same trends.
“It’s really rare to get the first chance to document a pattern about life across long histories of time,” Payne said. “There’s theory, but in the end, theory is meaningful when you can compare it to real data.”
The patterns the team uncovered in the limestone were reflected, too, in language past researchers used to describe their samples: An analysis of nearly 16,000 abstracts including descriptions of sedimentary carbonate rock over geologic time showed that the “shelliness” of words used to describe limestone samples increased alongside biomass trends. Words like “skeletal” and “fossiliferous” showed up at higher ratios compared to nonskeletal words in descriptions of samples from times in Earth’s history when biomass was higher.
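The underlying scoring idea is simple enough to sketch. Here is a minimal illustration of counting descriptor ratios in Python (the word lists are our guesses at diagnostic terms, not the study’s actual lexicon or corpus):

```python
# Minimal sketch of scoring an abstract's "shelliness"; the word lists
# below are illustrative guesses, not the study's actual lexicon.
skeletal_words = {"skeletal", "fossiliferous", "shelly", "bioclastic"}
nonskeletal_words = {"micritic", "oolitic", "dolomitic", "evaporitic"}

def shelliness(abstract: str) -> float:
    """Fraction of diagnostic descriptors in an abstract that are skeletal."""
    words = [w.strip(".,;:()") for w in abstract.lower().split()]
    n_skel = sum(w in skeletal_words for w in words)
    n_non = sum(w in nonskeletal_words for w in words)
    return n_skel / (n_skel + n_non) if (n_skel + n_non) else float("nan")

print(shelliness("A fossiliferous, skeletal packstone with oolitic interbeds."))
# -> 0.67: two skeletal descriptors against one nonskeletal descriptor
```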
“It was an interesting, independent confirmation of the rest of the study,” Payne said.
What Biomass Tells Us
Biomass indicates how much energy is available in an ecosystem. For animals, the ultimate source of that energy is the primary productivity of photosynthetic organisms such as plants and algae. Understanding the relationship between biomass and biodiversity can provide insight into how ecosystems evolve, how diversity arises and collapses, and what ultimately limits biodiversity in an ecosystem.
“It has been suggested for a long time that the long-term increase in biodiversity is a response to higher primary productivity,” Finnegan said. “When there is more stuff to eat at the base of the food chain, ecosystems can support more and larger individuals, and maybe they can also support more different kinds of organisms.”
In the ecology of the modern world, scientists have evidence that this is true. But modern scientists live in a “thin little time slice” in which any observations of ecosystems occur on very short timescales relative to Earth’s history, Finnegan said.
Scientists don’t know whether ecosystems work the same now as they did for all of Earth’s history. Long ago, biodiversity may have dictated biomass instead, or the relationship may have been a feedback loop. “Really understanding biodiversity processes means understanding them on the million-year timescale,” he said.
Since humans started to dominate ecosystems, biodiversity has plummeted. Biomass, however, has increased significantly, mostly as a result of animal husbandry and pet ownership. “We have a lot of humans, and a lot of cats and dogs, but not a lot of diversity,” Singh said. The world’s oceans are also “very likely in the early stages of a significant extinction event,” Finnegan said.
Deeper knowledge of how biomass and biodiversity relate over geologic time could help scientists better understand the effects of human-caused ecosystem changes and the drivers of this sixth mass extinction. Humans are altering the planet in a “massive experiment,” Payne said. And the only way to understand planetary-scale experiments is to use the geologic record, he said. “It is the only source of information at the same temporal and spatial scales.”
At least during the Phanerozoic, biomass and biodiversity seem to have been coupled, according to the new study. The results provide a coarse, but robust, picture, Payne said, though “there’s a lot more to learn.”
—Grace van Deelen (@gvd.bsky.social), Staff Writer
Citation: van Deelen, G. (2025), Biomass and biodiversity were coupled in Earth’s past, Eos, 106, https://doi.org/10.1029/2025EO250243. Published on 9 July 2025.
Using the James Webb Space Telescope (JWST), astronomers observing a monstrous black hole in an unspoiled galaxy just 700 million years after the Big Bang have found a hint at how these celestial titans grew.
The observations could indicate that supermassive black holes in the early universe grew from so-called primordial black holes, created by density fluctuations just after the Big Bang.
This theory has an advantage over supermassive black hole formation ideas that need time for the first generation of massive stars to form and die, and then for the black holes they birth to merge and feed on copious amounts of gas and dust.
The supermassive black hole observed by JWST is A2744-QSO1 (QSO1), which has a mass of around 10 million times that of the sun, equivalent to 10% of the total mass of its host galaxy.
Supermassive black holes seen in the local universe, and thus at later cosmic epochs, have masses as low as 0.005% that of their galactic hosts. But the host galaxy of QSO1 isn’t just remarkably diminutive.
This galaxy, seen as it was 13 billion years ago, is also poor in metals, the name that astronomers give to elements heavier than hydrogen and helium. This indicates it has experienced very few stars exploding in supernovas and dispersing metals forged during their lifetimes.
“QSO1 is extremely poor in oxygen abundance, less than 1% of the solar value, and makes it one of the most chemically unevolved systems found in the early universe,” research team member Roberto Maiolino, an astrophysicist at the Cavendish Laboratory at the University of Cambridge in England, told Space.com.
“As oxygen is quickly produced by the first generations of stars, the extremely low chemical enrichment indicates that the host galaxy of this black hole must be fairly unevolved,” Maiolino added. “This is a remarkable finding, as it is telling us that massive black holes can form and grow fairly big in the early universe without being accompanied by much star formation.”
QSO1 can’t have formed from mergers of many smaller black holes created when stars die in supernovas. The lack of metals indicates that widespread stellar death hadn’t happened in this galaxy at the time that JWST saw it.
This could help solve the mystery of how supermassive black holes grew so fast in the early universe, by favoring mechanisms that skip dying stars as cosmic middlemen.
Black holes and ‘heavy seeds’
Ever since JWST started making observations of the early universe, the powerful space telescope has presented cosmologists with a problem: the detection of supermassive black holes prior to 1 billion years after the Big Bang.
“The standard scenario is that supermassive black holes should originate from stellar remnants, black holes with masses of a few tens of solar masses, and then grow to very large masses by accreting gas from the surrounding medium,” team member and University of Cambridge researcher Hannah Uebler explained to Space.com. “However, there is a theoretical limit at which black holes can accrete gas and grow.”
Uebler explained that black holes are fed this meal of gas from flattened clouds that surround them, called accretion disks. The immense gravity of these central black holes generates huge amounts of friction in the accretion disk, which becomes very hot and radiates energy strongly, especially in ultraviolet wavelengths.
“The light emitted by the accretion disk exerts pressure on the incoming gas, hence counteracting the effect of gravity,” Uebler said. “If radiation pressure exceeds the black hole’s gravitational pull on gas, that’s when accretion must (in theory) stop. This is the so-called Eddington limit.”
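In textbook form (our gloss on Uebler’s description, for accretion of ionized hydrogen), that balance point is

$$L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.26 \times 10^{38}\,\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}},$$

where $M$ is the black hole mass, $m_p$ the proton mass, and $\sigma_T$ the Thomson scattering cross section. Because the limit scales with mass, a small seed can only grow so fast before its own radiation throttles the fuel supply.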
What JWST seems to have found in the early universe, especially within the first billion years after the Big Bang, are black holes that are far too massive to have formed and grown via this scenario.
“Very simply, at such early epochs in the universe, there was not enough time to produce such monsters starting from small seeds and with a growth constrained by the Eddington limit,” Uebler said.
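That timing argument is easy to check at the back-of-envelope level. Eddington-limited growth is exponential, with a Salpeter e-folding time of roughly 50 million years for a standard ~10% radiative efficiency; the sketch below is our illustration of the arithmetic, not the team’s calculation.

```python
import numpy as np

# Eddington-limited accretion grows a black hole exponentially:
#   M(t) = M_seed * exp(t / t_Salpeter)
t_salpeter_myr = 50.0    # e-folding time for ~10% radiative efficiency (assumed)
m_seed = 10.0            # stellar-remnant seed mass, in solar masses (assumed)
m_final = 1.0e7          # QSO1's reported mass, in solar masses

n_folds = np.log(m_final / m_seed)
t_needed_myr = n_folds * t_salpeter_myr
print(f"{n_folds:.1f} e-folds -> ~{t_needed_myr:.0f} Myr of uninterrupted "
      f"Eddington-limited accretion")
# ~690 Myr: essentially the universe's entire age at QSO1's epoch, leaving
# no time for the seed itself to form from the first stars.
```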
An illustration of the region surrounding a feeding supermassive black hole. (Image credit: Robert Lea (created with Canva))
Scientists have posited a few theories to explain how black holes could have reached supermassive status, with masses millions of times that of the sun, in the early cosmos.
“One possibility is that black holes were born big. This is typically the ‘Direct Collapse Black Hole’ scenario,” Maiolino said. “Under some specific conditions, pristine massive clouds of gas may have collapsed directly to form an extremely massive protostar which would have then collapsed into a very massive black hole, possibly with a mass of 100,000 times the mass of the sun, which can then grow further via gas accretion.”
Another possibility, Maiolino added, is that the cores of primeval galaxies were highly dense with stars, and the rapid merging of stars and stellar remnants may have produced intermediate-mass black holes several thousand times heftier than the sun.
“Yet, another possibility is that early black holes were ‘reckless’ and managed to exceed the Eddington limit,” Maiolino said. “Even short, recurrent bursts of this ‘super-Eddington accretion’ can be very effective in rapidly growing the mass of black holes that were originally very small.”
Maiolino explained that the above scenarios can account for QSO1 only in extreme cases. In the majority of simulations and models involving these mechanisms, it’s very difficult to simultaneously reproduce the very high black hole mass, the very high black hole-to-galaxy mass ratio, and, crucially, the very low metallicity of QSO1.
“In the direct collapse scenario, the black hole should be located near an active region where stars have formed vigorously,” Maiolino continued. “Gas from such active nearby regions must rapidly pollute also the surroundings of the newly formed black hole while it grows.
“The super-Eddington accretion scenario runs into similar problems. The large amounts of gas needed to boost its accretion must unavoidably also form a lot of stars, which rapidly enrich the surrounding medium with metals.”
There is another scenario devised to account for the rapid growth of monster black holes that involves primordial black holes, which are hypothesized to have formed within the first second after the Big Bang.
“In this scenario, such putative primordial black holes would have been the very first structures formed in the universe, well before stars and galaxies,” Maiolino said.
This scenario may well be a better fit for the team’s JWST observations of QSO1.
Starting off small: Growth from primordial black holes
The team behind these observations of QSO1 with the JWST points out that the concept of primordial black holes is one that has grown in favor over the last four decades.
“Primordial black holes can emerge from the very early universe pretty much already very massive,” Uebler said. “Additionally, theories predict that they should be very clustered; hence, they could be merging quickly and therefore grow rapidly even before gas accretion.”
According to some theories, these primordial black holes would be the initial seeds around which galaxies subsequently form. The initial phases of accretion onto the black hole would be from pristine or nearly pristine gas, not enriched with metals.
Indeed, recently published research has also suggested that supermassive black holes could have grown from black holes created shortly after the Big Bang.
An illustration (not to scale) of a primordial black hole growing to supermassive scales (Image credit: Robert Lea (created with Canva))
“As first suggested in the 1960s, a population of primordial black holes may have formed even earlier than the first stars,” Lewis Prole, the lead author of that separate research, told Space.com.
“The existence of primordial black holes would bypass the need for massive stars in the early universe, by acting as the initial seeds for the supermassive black holes observed with JWST,” added Prole, a postdoctoral researcher at Maynooth University in Ireland. “Depending on the formation mass of primordial black holes, which is currently unknown, they can have different effects.”
Prole and colleagues performed the first detailed simulations of the cosmos factoring in primordial black holes. They found that those with intermediate masses could implant themselves in halos containing dense gas and start growing early enough to achieve supermassive black hole status prior to the universe being 1 billion years old.
“Very large primordial black holes may already be massive enough to account for the observed supermassive black holes, while smaller primordial black holes would need to embed themselves into early galaxies and accrete up to the observed masses,” Prole said.
Indeed, the new JWST observations of QSO1 could be the first observational evidence of this growth by primordial black holes. But there is a long way to go before this can be confirmed.
“It is important to note that the primordial black hole scenario also has caveats and does not perfectly reproduce the observations,” Maiolino said. “These issues should be explored with additional modeling and simulations.”
In addition to better modeling, higher resolution observations could help researchers to better constrain the actual number of stars that are present in the surroundings of this black hole and, if detected, their properties.
“This kind of data would help to understand whether the black hole really formed unaccompanied by much star formation,” Maiolino said. “Ultimately, a definite proof of the primordial black hole scenario would come from detecting such massive black holes at even earlier times in the universe.”
The team’s research has been submitted to the journal Nature and appears as a preprint on the repository site arXiv.
Newswise — NEWPORT NEWS, VA – A rather unassuming particle is playing an important role in the hunt for subatomic oddities. Similar to protons and neutrons, mesons are composed of quarks bound together by the strong nuclear force. But these short-lived particles have different characteristics that can reveal new information about the atomic nucleus and how the universe works.
Advancing this understanding could one day enable new discoveries in many fields, ranging from nuclear power to medicine and materials engineering.
The so-called a2 meson is a relatively lightweight system of quarks. It is produced in experiments at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility.
Now, for the first time, scientists at Jefferson Lab have measured the probability of the a2 being produced by a polarized photon beam hitting a proton target. This measurement is called a cross section and was recently published in the journal Physical Review C. The result enables the search for the lightest spin-exotic meson, the pi1, with an ultimate goal of mapping the mass spectrum of hybrid systems.
“We took a well-understood particle and measured a quantity that is new, using a sophisticated technique that we will need to study exotics, such as hybrids,” said Malte Albrecht, a Jefferson Lab staff scientist who is part of the Gluonic Excitations Experiment (GlueX). “It’s a physics result on its own but also a step on our roadmap toward exotic results with GlueX.”
GlueX at Jefferson Lab
Gluons are fundamental particles that carry the strong nuclear force, or the “glue” that clumps quarks into composites such as protons, neutrons, mesons and other hadrons.
The theory of quantum chromodynamics (QCD) describes this strong force. It is the main research thrust of the GlueX Collaboration, an international team of scientists who conduct their research at Jefferson Lab.
“One thing GlueX is specifically interested in understanding is the quark-gluon degrees of freedom,” said Sean Dobbs, an associate professor at Florida State University and a member of the GlueX collaboration. “The excitation of the gluons that hold hybrid mesons together contributes directly to their properties.”
To explore QCD, GlueX takes advantage of photons produced from the electrons of the Continuous Electron Beam Accelerator Facility (CEBAF), a DOE Office of Science user facility enabling the research of more than 1,650 physicists worldwide.
The GlueX apparatus uses a thin wafer of diamond to convert CEBAF’s electrons into a beam of photons. These light particles are linearly polarized, meaning their electric field oscillates in a particular direction as they hurtle toward their atomic target. A specialized detector system measures the spectral spray from the resulting collisions, allowing scientists to peer into the “black box” of particle production and decay.
“What goes on inside that box could be a lot of things, but the polarization gives us a hint at what it could be,” said Lawrence Ng, a postdoctoral researcher at Jefferson Lab who studied under Dobbs at Florida State. “It tells us a little bit about how these mesons are produced.”
Piecing the Pi
The pi1 meson is a close cousin of the a2, just a little weirder.
Both are QCD systems that exist in nearby energy regions. They have similar quark content – some combination of the “up” and “down” flavors of quark and antiquark. They have relatively light masses, with a2 checking in at 1,320 million electron-volts (MeV) and pi1 at around 1,600 MeV. But they have different spin states, represented by the numerals after their names, and inside a pi1 meson, the gluons behave differently.
“Imagine quarks as billiard balls and gluons as rubber bands holding them together,” Dobbs said. “The difference between a ‘normal’ meson like the a2 and hybrids like the pi1 is that in the latter, the rubber band is excited. You pluck it and let it vibrate, giving you extra energy.”
The pi1(1600) has been difficult to pin down experimentally, but the a2(1320) is relatively well understood. That made a2 the ideal candidate.
In the search for the pi1 at GlueX, physicists refer to the a2 as their “standard candle.” The term is borrowed from astronomy and describes an interstellar object with a known brightness that provides a stable reference signal from which additional information, like cosmic distances, could be inferred.
To identify the a2(1320) among the many other particles produced in the photon-on-proton collisions, the GlueX team used an elaborate technique called a partial-wave analysis. This method filters out contributions from other reactions, allowing researchers to home in on the a2.
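At its core, a partial-wave analysis exploits the fact that the measured angular distribution is the squared magnitude of a coherent sum of amplitudes with definite angular momentum, so interference between waves encodes information that simple event counting does not. A stripped-down toy version follows (the wave content and strengths are invented for illustration, not GlueX’s actual amplitudes):

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Toy angular distribution: coherent sum of an S-wave "background" and a
# D-wave resonance (spin 2, like the a2), with invented complex strengths.
costh = np.linspace(-1.0, 1.0, 201)
c_s = 1.0                                  # S-wave strength (assumed)
c_d = 0.8 * np.exp(1j * 0.5)               # D-wave strength and phase (assumed)

amp = c_s * legval(costh, [1.0]) + c_d * legval(costh, [0.0, 0.0, 1.0])  # P0 + P2
intensity = np.abs(amp) ** 2               # what the detector actually measures
incoherent = np.abs(c_s) ** 2 + np.abs(c_d * legval(costh, [0.0, 0.0, 1.0])) ** 2

# The gap between `intensity` and `incoherent` is the interference term;
# fitting that structure is how a partial-wave analysis disentangles
# overlapping contributions.
print(f"max interference contribution: {np.max(np.abs(intensity - incoherent)):.2f}")
```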
Now that the concept is proven using GlueX’s machinery, software and extraction techniques, the group can use the measurement as a reference for exploring the pi1(1600) and working toward an established spectrum of hybrid states.
According to Albrecht, the next step is to continue the hunt for these rare meson systems to not only confirm their existence but also find evidence for systems not previously observed.
“Proving our ability to perform a partial-wave analysis and get something new out, with a particle that is well known, is the first step toward understanding contributions that are possibly much smaller and more elusive, because they are exotic.”
Further Reading
Journal publication: First Measurement of a2(1320) Polarized Photoproduction Cross Section
GlueX Experiment | GlueX is a particle physics experiment located at Jefferson Lab in Newport News, VA. Its primary goal is to search for and study exotic hybrid mesons.
GlueXWiki
Jefferson Lab Experimental Hall D
First Result from Jefferson Lab’s Upgraded CEBAF Opens Door to Exploring the Universal Glue | Jefferson Lab
GlueX Completes First Phase | Jefferson Lab
Jefferson Science Associates, LLC, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy’s Office of Science. JSA is a wholly owned subsidiary of the Southeastern Universities Research Association, Inc. (SURA).
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science
A fascinating glimpse into how a solar system like our own is born has been revealed with the detection of planet-forming ‘pebbles’ around two young stars.
These seeds of new worlds are thought to gradually clump together over time, in much the same way Jupiter first formed 4.5 billion years ago, followed by Saturn, Uranus, Neptune, Mercury, Venus, Earth and Mars.
The pebbles were spotted out to at least Neptune-like orbits within the planet-forming discs, known as protoplanetary discs, around the young stars DG Tau and HL Tau, both around 450 light-years from Earth.
The new observations, revealed at the Royal Astronomical Society’s National Astronomy Meeting 2025 in Durham, are helping to fill in a missing piece of the planet formation puzzle.
“These observations show that discs like DG Tau and HL Tau already contain large reservoirs of planet-forming pebbles out to at least Neptune-like orbits,” said researcher Dr Katie Hesterly, of the SKA Observatory.
“This is potentially enough to build planetary systems larger than our own solar system.”
The latest research is part of the PEBBLeS project (Planet Earth Building-Blocks — a Legacy eMERLIN Survey), led by Professor Jane Greaves, of Cardiff University.
By imaging the rocky belts of many stars, the team are looking for clues to how often planets form, and where, around stars that will evolve into future suns like our own.
The survey uses e-MERLIN, an interferometer array of seven radio telescopes spanning 217 km (135 miles) across the UK and connected by a superfast optical fibre network to its headquarters at Jodrell Bank Observatory in Cheshire.
It is currently the only radio telescope able to study protoplanetary discs — the cosmic nurseries where planets are formed — at the required resolution and sensitivity for this science.
“Through these observations, we’re now able to investigate where solid material gathers in these discs, providing insight into one of the earliest stages of planet formation,” said Professor Greaves.
Since the 1990s, astronomers have found both discs of gas and dust and nearly 2,000 fully formed planets, but the intermediate stages of formation are harder to detect.
“Decades ago, young stars were found to be surrounded by orbiting discs of gas and tiny grains like dust or sand,” said Dr Anita Richards, of the Jodrell Bank Centre for Astrophysics at the University of Manchester, who has also been involved in the research.
“Enough grains to make Jupiter could be spread over roughly the same area as the entire orbit of Jupiter, making this easy to detect with optical and infra-red telescopes, or the ALMA submillimeter radio interferometer.
“But as the grains clump together to make planets, the surface area of a given mass gets smaller and harder to see.”
Because centimetre-sized pebbles emit best at wavelengths similar to their size, e-MERLIN, which observes at around 4 cm, is ideally suited to look for them.
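Those two numbers, the roughly 4 cm observing wavelength and baselines spanning 217 km, set the angular scales e-MERLIN can resolve. A rough estimate (our arithmetic, not the survey team’s):

```python
import math

# Diffraction-limited resolution of an interferometer: theta ~ wavelength / baseline
wavelength_m = 0.04                            # ~4 cm observing wavelength
baseline_m = 217e3                             # longest e-MERLIN baseline, 217 km
theta_rad = wavelength_m / baseline_m
theta_mas = math.degrees(theta_rad) * 3.6e6    # radians -> milliarcseconds

distance_pc = 450 / 3.26                       # 450 light-years in parsecs
scale_au = (theta_mas / 1000.0) * distance_pc  # small angle: arcsec x pc = au

print(f"~{theta_mas:.0f} mas, i.e. ~{scale_au:.0f} au at DG Tau and HL Tau")
# ~38 mas, or ~5 au: comfortably fine enough to map pebble belts out to
# Neptune-like (~30 au) orbits, as described above.
```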
One new e-MERLIN image of DG Tau’s disc reveals that centimetre-sized pebbles have already formed out to Neptune-like orbits, while a similar collection of planetary seeds has also been detected encircling HL Tau.
These discoveries offer an early glimpse of what the Square Kilometre Array (SKA) telescope in South Africa and Australia will uncover in the coming decade with its improved sensitivity and scale, paving the way to study protoplanetary discs across the galaxy in unprecedented detail.
“e-MERLIN is showing what’s possible, and SKA telescope will take it further,” said Dr Hesterly.
“When science verification with the SKA-Mid telescope begins in 2031, we’ll be ready to study hundreds of planetary systems to help understand how planets are formed.”
Editors’ Highlights are summaries of recent papers by AGU’s journal editors.
Source: AGU Advances
Over two-thirds of Earth’s surface lies underwater, and the seafloor boundary between the hydrosphere and the lithosphere represents an important area of study for scientists investigating Earth and ocean processes, encompassing both materials (rock, sediment, fluid, and gas) and ecosystems.
In a new commentary, FUTURE 2024 PI-team et al. [2025] report on the U.S. Seafloor Sampling Capabilities 2024 Workshop, which assessed the current state and future needs for U.S. oceanographic assets, including the evolution and design of multiscale science infrastructure. A key finding of the workshop is that future study of science at the seafloor interface will be severely limited by recent reductions in the oceanographic infrastructure available in the U.S.
Such infrastructure includes, among others, scientific deep drilling platforms, which enable human access to ice-covered seas in the polar regions; an expansion of ships in the U.S.-Academic Research Fleet that can handle heavy over-the-side shipboard coring and deeper rock dredging; and sample repository infrastructure that maximizes the value of returned samples by better supporting discoverability and accessibility of archived materials. The authors also emphasize the importance of workforce training and knowledge transfer through inclusive educational and professional development opportunities, particularly for early-career researchers.
Citation: FUTURE 2024 PI-team, Appelgate, B., Dugan, B., Eguchi, N., Fornari, D., Freudenthal, T., et al. (2025). The FUTURE of the US marine seafloor and subseafloor sampling capabilities. AGU Advances, 6, e2024AV001560. https://doi.org/10.1029/2024AV001560