Category: 7. Science

  • Researchers synthesise elusive methanetetrol molecule by simulating interstellar conditions in the laboratory

    For the first time, researchers have synthesised and characterised the ‘only possible structure’ with four hydroxyl groups linked to a carbon atom – methanetetrol.

    Also known as orthocarbonic acid, this molecule was first hypothesised in 1922 by physical chemist Ernst Wilke. Despite the stability and prevalence of tetravalent structures like orthocarbonic esters, the more modest methanetetrol – unstable and elusive – had long evaded detection. ‘We used a different synthetic strategy,’ says lead author Ralf Kaiser from the University of Hawaii at Mānoa, US.

    As in previous papers reporting the preparation of methanetriol, the researchers resorted to interstellar conditions simulated in the lab. To create this exotic environment, they exposed a mixture of carbon dioxide and water to highly energetic electron beams under ultra-high vacuum at temperatures close to absolute zero. These conditions enable counterintuitive chemical reactions, such as the formation of the otherwise unstable methanetetrol.

    Under irradiation of the ‘interstellar ice’, the researchers simultaneously synthesised and detected the unique tetra-alcohol. The ultra-high vacuum ‘avoids collisions with other molecules’ and prevents the otherwise usual decomposition into products such as water and carbonic acid.

    To detect the presence of methanetetrol, the team combined computational and experimental techniques. The latter included photoionisation, which uses powerful beams of UV light to split molecules into smaller ions, and mass spectrometry to ‘trap’ and identify the resulting fragments. Although the identified fragments could have come from several different combinations of atoms, isotopic substitution resolved the ambiguity – using labelled carbon dioxide and deuterium oxide produced peaks specific to methanetetrol. ‘It confirmed the findings,’ says Kaiser.
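
    The arithmetic behind that confirmation is simple to sketch. Below is a back-of-the-envelope illustration in Python (our sketch, not the authors’ analysis) of why swapping in carbon-13 and deuterium shifts methanetetrol’s molecular mass to a value no unlabelled fragment matches; the isotope masses are standard values.

    ```python
    # Approximate isotope masses in daltons (standard values).
    MASSES = {"12C": 12.000, "13C": 13.003, "1H": 1.008, "2H": 2.014, "16O": 15.995}

    def methanetetrol_mass(carbon: str, hydrogen: str) -> float:
        """Molecular mass of C(OH)4 built from the chosen carbon and hydrogen isotopes."""
        return MASSES[carbon] + 4 * MASSES["16O"] + 4 * MASSES[hydrogen]

    print(methanetetrol_mass("12C", "1H"))  # ~80.0 Da: unlabelled methanetetrol
    print(methanetetrol_mass("13C", "2H"))  # ~85.0 Da: 13CO2 + D2O shifts the peak by ~5 Da
    ```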

    After this ‘final frontier’, Kaiser is already planning the next extreme experiment – tetrahydroperoxymethane, a molecule with four hydroperoxyl groups attached to a single carbon atom. ‘It would be a unique challenge,’ he adds.

  • Humans And Neanderthals Hooked Up Three Times. Here’s Where It Happened

    Like Ross and Rachel, modern humans and Neanderthals had something of an on-again, off-again love affair. Yet while Friends may have gone extinct after 10 seasons, our ancient ancestors repeatedly reunited over hundreds of thousands of years, and new research may have pinpointed exactly where on the planet these romantic episodes occurred.

    Previous studies have identified three waves of gene flow between the two human species, with the first interbreeding event taking place some 250,000 to 200,000 years ago. A second hook-up then happened around 120,000 to 100,000 years ago, followed by a final fling roughly 60,000 to 50,000 years ago.

    Recently, it was demonstrated that the second of these get-togethers likely went down in the Zagros Mountains, which stretch across the Persian Plateau and were inhabited by both modern humans and Neanderthals at the time of this middle mating event. To learn more about the most recent admixture, the authors of an as yet un-peer-reviewed study analyzed all known archaeological sites showing evidence of occupation by the two lineages between 60 and 50 millennia ago.

    They also analyzed paleoenvironmental data to identify regions capable of supporting large populations of both Homo sapiens and Neanderthals at the same time. Overall, they found that the most suitable habitats for ancient members of our own species were in southern Europe, northern and southern Africa, and large patches of Asia.

    Neanderthals, meanwhile, are likely to have thrived along the coasts of the Mediterranean and the Black Sea. Seeking regions where the two hominids may have overlapped, the researchers identified the Iberian Peninsula and the Levant as the most likely settings for the third and final installment of our inter-species romance.

    “The most suitable habitats of Neanderthals were located in Western Europe particularly the Iberian Peninsula at [60-50,000 years ago] where Neanderthals and modern humans live[d] alongside each other,” write the study authors. They therefore conclude that the region encompassing modern-day Spain and Portugal was “a highly probable area for the two species interbreeding.”

    Around the same time, groups of Homo sapiens were flowing out of Africa and into Eurasia via a key corridor running through the Levant, which includes the Mediterranean shores of the Middle East. Though smaller than the Iberian Peninsula, this highly important region was something of a hominin melting pot during the Middle and Late Pleistocene, where populations of Homo sapiens, Neanderthals, and other human lineages lived side by side.

    Putting all of this evidence together, the researchers say that these two areas were likely to have seen considerable species overlap at the time of the third interbreeding event, and that modern humans and Neanderthals may have done the dirty in either of these locations. Not wishing to sit on the fence, though, the authors put their money on the Levant, which they describe as the “main potential interbreeding area.”

    The study is currently available as a preprint on bioRxiv.

  • Quantum entanglement can now be recycled for networks

    Quantum scientists have long treated quantum entanglement as precious cargo, forging fresh links for every secure message or computation. A new theoretical study proposes a thriftier route, letting an existing pair pass portions of its entanglement down an extended chain.

    Experts from the Harish-Chandra Research Institute (HRI) report that the hand-off could continue, in principle, without end, though each recipient would receive a smaller slice of the original connection.

    Why quantum entanglement matters

    Entanglement links the properties of two or more particles so tightly that measuring one instantly tells you the state of the other, regardless of separation. This non-local bond underpins quantum key distribution, distributed sensing, and proposals for a global quantum internet.

    Because the link cannot be copied, every application has relied on fresh pairs produced on demand, often with delicate lasers or cryogenic circuits. Generating and storing those pairs remains an experimental bottleneck.

    A strategy that lets devices recycle an existing pair would trim that overhead and simplify network design. It could also cut the number of fragile qubit memories required at future repeater stations.

    Networks built from recycled links could therefore run longer without waiting for central stations to reset and re-synchronize. That endurance matters for satellites or remote sensors that see only brief contact windows with the ground.

    In their mathematical model, the team considers two participants – quantum systems called Alice and Bob – who start with one entangled pair.

    Two newcomers, whom the team calls Charu and Debu, each hold a separate particle that can interact locally with Alice and Bob but not with each other.

    After one withdrawal, Alice and Bob still retain a reduced connection. That leftover can be used to fund the next pair in line without requiring any fresh entanglement.

    Math behind quantum network sharing

    The authors prove that the protocol can be iterated indefinitely, as long as each new customer is satisfied with a smaller balance. Their proof relies on a measure called concurrence, which tracks how much two-particle entanglement survives each step.
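
    For the curious, concurrence is computable in a few lines. Here is a minimal sketch of Wootters’ standard recipe for a two-qubit density matrix (our illustration, not code from the paper):

    ```python
    import numpy as np

    def concurrence(rho: np.ndarray) -> float:
        """Wootters concurrence of a 4x4 two-qubit density matrix."""
        sy = np.array([[0, -1j], [1j, 0]])
        yy = np.kron(sy, sy)
        rho_tilde = yy @ rho.conj() @ yy                 # the "spin-flipped" state
        lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0, None))
        lam = np.sort(lam)[::-1]                         # descending order
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    # A maximally entangled Bell pair, like Alice and Bob's opening resource,
    # has concurrence 1; a product (unentangled) state has concurrence 0.
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    print(concurrence(np.outer(bell, bell.conj())))      # ~1.0
    ```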

    Charu and Debu receive an amount that depends only on the local operation chosen at their step, not on the growing crowd waiting behind them. That independence lets the chain grow as long as the math permits.

    The authors also derive an upper bound on the entanglement each newcomer can hope to gain, ensuring the protocol respects the monogamy rules that forbid one qubit from sharing full strength links with multiple partners. 

    The calculations confirmed that no party can cheat the system by claiming more than its share. The algebra itself, however, imposed no hard cutoff on how long the chain could continue.

    Real-world devices limit repeated sharing

    Real hardware will impose limits long before the math does. Each interaction leaks photons, picks up phase noise, or generates detector errors that sap usable correlations.

    Engineers must therefore weigh the cumulative losses against the cost of preparing fresh pairs. A hybrid architecture might recycle links only once or twice before moving on.

    Numerical simulations with realistic loss and detector models suggest usefulness may fade after a dozen rounds at telecom wavelengths. Superconducting circuits, with higher gate fidelities, could squeeze a few more iterations before noise dominates.

    Testing quantum networks in labs

    The protocol needs just local interactions and classical coordination, tools already common in superconducting and photonic labs.

    A tabletop demonstration could couple two microwave qubits to a shared resonator and then transfer part of their entanglement to two fresh qubits downstream.

    Optical groups could pursue a version with time-bin entangled photons stored in fiber loops. Each loop could spin a photon back through a beam splitter network that hands off correlations to a new path without touching the partner loop.

    Researchers will need a clear metric for “useful” entanglement in realistic settings. Measures linked to secret-key rate or teleportation fidelity would offer concrete benchmarks.

    Cross-platform trials, in which a superconducting node hands off entanglement to a spin qubit or trapped ion, could test compatibility between disparate hardware families. Such demonstrations would mirror the heterogeneous nature of future metropolitan networks.

    Future of quantum networks

    The new model also raises new theoretical questions. How many independent customers can extract a fixed minimum entanglement from a single source before the well runs dry?

    Future research could combine the sharing approach with entanglement distillation, replenishing the degraded entanglement after multiple uses.

    Scientists might also investigate whether multipartite connections – like Greenberger-Horne-Zeilinger states – behave differently under repeated sharing.

    Policymakers investing in quantum infrastructure will likely pay close attention to these findings. Greater utilization could mean quicker returns on costly hardware deployments.

    The study is published in the journal Physical Review A.

  • New Heat-Conducting Behaviors Appear in ‘Twisted Bilayers’

    Aug. 4, 2025 — Twisted 2D material bilayers, consisting of two atom-thick layers on top of each other, can have different electrical and heat-conducting properties when the layers are rotated. Using the Pittsburgh Supercomputing Center’s (PSC’s) flagship Bridges-2 system and the National Center for Supercomputing Applications’ Delta in a dialog between computer simulation and microscopic imaging, a team from the University of Illinois reported today how, for the first time, they directly observed atomic vibrations stemming from the angle of misalignment between the layers. These vibration patterns affect how heat moves through the material, and may be useful in designing new, more heat-resistant electronics.

    Twisted 2D material bilayers, consisting of two atom-thick layers on top of each other, can form alignment patterns that convey different electrical and heat conducting properties.

    If you’ve ever looked at a smartphone or computer screen through polarized sunglasses at just the right angle, the image blacks out. That’s because the light coming from a flat LCD screen is polarized. When the polarization angle of that light hits the polarization angle of the glasses just right, the light can’t get through.

    Nature is full of instances where a simple twist can change the picture dramatically. One of these, whose behavior is even stranger than that polarity trick, is a set of substances called twisted 2D materials, or twisted bilayers. These are made by layering a sheet of atoms atop another identical layer at an angle. For example, when sheets of tungsten diselenide (WSe₂) are aligned perfectly, the atoms of each layer lie directly over one another. But when you twist them, even by a degree or so, you get a pattern of alignments, with some atoms in line and others displaced.

    “Imagine you have a two-dimensional material like a layer of atoms,” said Elif Ertekin, associate professor of mechanical science and engineering at the University of Illinois. “You take a second one and you [twist] it … You get this super-pattern forming. It’s called a moiré superlattice … you have regions where the atoms of both layers are right on top of each other … and then in the middle … you have regions where in the two layers the atoms are offset from each other … They’re kind of important.”
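
    The length scale of that super-pattern follows from a standard small-angle relation, L ≈ a / (2 sin(θ/2)), where a is the lattice constant and θ is the twist angle. A quick sketch (the WSe₂ lattice constant here is an approximate textbook value, not a number from this article):

    ```python
    import math

    def moire_period_nm(lattice_constant_nm: float, twist_deg: float) -> float:
        """Moire superlattice period L = a / (2 * sin(theta / 2))."""
        theta = math.radians(twist_deg)
        return lattice_constant_nm / (2 * math.sin(theta / 2))

    # WSe2's lattice constant is roughly 0.33 nm; a one-degree twist stretches
    # that atomic spacing into a superlattice pattern tens of nanometers across.
    print(moire_period_nm(0.33, 1.0))   # ~18.9 nm
    ```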

    That pattern matters, because as the alignment of the atoms in each layer changes, those atoms’ vibrational energy can either build on or interfere with each other. They can form moiré phason modes, soft and squishy regions in the material that can help prevent the transport of heat. This makes these regions super-efficient at thermal insulation — preventing heat from moving through and out of the material. These properties make twisted bilayers and their phason modes promising for a number of practical applications. These include creating new, more efficient heat protection for satellites and spacecraft, insulating windows in houses, and miniaturizing electronic components that won’t melt under the heat of their own operation.

    Pinshane Huang, an associate professor of materials science and engineering at the University of Illinois, wanted to use a new, ultra-precise microscopy technique called electron ptychography to image the moiré phasons in WSe₂. Her goal was to learn exactly how atoms in the twisted bilayers vibrate as a result of thermal fluctuations arising from their interaction with the surrounding environment.

    To understand these images more completely and to generate questions that could be answered by her measurements, she partnered with Ertekin, a specialist in molecular dynamics computer simulations of materials. Ertekin, whose lab has conducted many such simulations of promising materials, uses PSC’s flagship Bridges-2 supercomputer and the National Center for Supercomputing Applications’ Delta system for her work. She obtained time on both systems through ACCESS, the National Science Foundation’s network of supercomputing resources.

    Electron ptychography can image objects so small that individual atoms can be seen. Even better, its resolution is in theory good enough that scientists can use it to measure atoms’ vibrations. In effect, it creates a time-lapse image showing the atoms’ movements.

    “Mapping phasons in moiré superlattices requires exceptional spatial resolutions enabled by electron ptychography,” said the University of Maryland’s Yichao Zhang. “This computation-aided microscopy technique relies on high-performance computing to achieve the resolution needed for thermal vibration mapping. Thanks to the computational power of NCSA Delta, we drastically reduced reconstruction times and accelerated optimization of reconstruction parameters. Comparison with simulations validated our experimental observation. This allows us to map thermal vibrations atom by atom, which is a breakthrough that wouldn’t have been possible without such advanced resources.”

    Yichao Zhang, now an assistant professor at the University of Maryland but then a post-doctoral researcher in Huang’s group, used the Delta supercomputer to reconstruct atomic images from the electron microscope data. To understand the movements, they would need to know what telltale signs to look for in the time-lapse. Here, previous theoretical research would be their guide, in combination with simulations of the atoms’ behavior on Bridges-2.

    Bridges-2 is Ertekin’s go-to for such molecular dynamics work. The system combines three strengths to support a variety of the simulations that her lab conducts.

    For crunching the complex equations governing atomic vibrations, Bridges-2 offers 1,008 parallel regular memory, or RM, central processing units (CPUs). These allow difficult computations to be broken up into little pieces that can rapidly be processed at the same time.

    Some calculations require the computer to hold large molecular structures and datasets in memory. To spare the computer many time-consuming trips between storage and processors, Bridges-2 offers 256 or 512 gigabytes of RAM in its RM nodes. In addition, it has a monster 4 terabytes of RAM in each of four extreme memory nodes.

    Finally, Bridges-2 has 360 high-powered graphics processing units (GPUs). These accelerate image processing, as well as other calculations that naturally separate into many, many parallel tasks.

    “We rely on Bridges-2 a lot for the work in our group,” said Ertekin. “We do all sorts of atomistic simulations, sometimes at the quantum level of theory and sometimes classical. In this work we used classical molecular dynamics simulations, which show the time dynamics of atoms vibrating. So [we are] using Bridges to help us integrate equations of motion, to tell us microscopically about the way that materials behave, and why they behave the way that they behave more or less directly from solving fundamental equations … And it would be really hard to do the things we do without Bridges.”

    Thanks to the dialog between Bridges-2’s simulations and the microscope’s images, Huang and Ertekin were able to show for the first time how, when the two layers were twisted by small angles, phason patterns dominate the thermal vibration patterns of the atoms. In the other parts of the material, the atoms vibrate randomly. In the phason areas, they vibrate only in the direction of the pattern.

    This phenomenon had been predicted by theory but never before seen directly, because observing it requires resolving atomic vibrations at better than 15 picometers – less than a billionth of an inch – a resolution that was impossible to achieve before the current work.

    The result also shows why these materials are so good at thermal insulation and offers an important clue as to how such materials will behave in future electronic devices.

    The team reported their results in an article in the prestigious journal Science. Their results demonstrate that electron ptychography is a powerful method for directly measuring the behavior of materials at the tiniest scales, allowing scientists to see for the first time these types of atomic vibration and study how they govern materials’ behaviors.

    Source: Ken Chiacchia, PSC

  • Space hurricanes are real — and they wreak more havoc than we thought

    Hurricane season has a new contender, and it’s swirling above the poles.

    Behold, the space hurricane. Just like its terrestrial namesake, it spins in vast spirals and has a calm, eye-like center. But instead of clouds and rain, these electromagnetic tempests are made of plasma, charged particles whipped into motion by Earth’s magnetic field.

  • This star survived its own supernova and shined even brighter

    Rich with detail, the spiral galaxy NGC 1309 shines in this NASA/ESA Hubble Space Telescope Picture of the Week. NGC 1309 is situated about 100 million light-years away in the constellation Eridanus.

    This stunning Hubble image encompasses NGC 1309’s bluish stars, dark brown gas clouds and pearly white centre, as well as hundreds of distant background galaxies. Nearly every smudge, streak and blob of light in this image is an individual galaxy. The only exception to the extragalactic ensemble is a star, which can be identified near the top of the frame by its diffraction spikes. It is positively neighborly, just a few thousand light-years away in the Milky Way galaxy.

    Hubble has turned its attention toward NGC 1309 several times; previous Hubble images of this galaxy were released in 2006 and 2014. Much of NGC 1309’s scientific interest derives from two supernovae, SN 2002fk in 2002 and SN 2012Z in 2012. SN 2002fk was a perfect example of a Type Ia supernova, which happens when the core of a dead star (a white dwarf) explodes.

    SN 2012Z, on the other hand, was a bit of a renegade. It was classified as a Type Iax supernova: while its spectrum resembled that of a Type Ia supernova, the explosion wasn’t as bright as expected. Hubble observations showed that in this case, the supernova did not destroy the white dwarf completely, leaving behind a ‘zombie star’ that shone even brighter than it did before the explosion. Hubble observations of NGC 1309 taken across several years also made this the first time the white dwarf progenitor of a supernova has been identified in images taken before the explosion.

  • Hubble Sees Dusty Clouds in Tarantula Nebula

    A striking new image from the NASA/ESA Hubble Space Telescope shows incredible details in the Tarantula Nebula, a turbulent star-birth region located in the Large Magellanic Cloud.

    This Hubble image shows part of the Tarantula Nebula, a star-forming region some 163,000 light-years away in the constellation of Dorado. The color image is a composite of separate exposures acquired by Hubble’s Wide Field Camera 3 (WFC3) instrument in the ultraviolet, near-infrared, and optical parts of the spectrum. It is based on data obtained through four filters. The color results from assigning different hues to each monochromatic image associated with an individual filter. Image credit: NASA / ESA / Hubble / C. Murray.

    The Tarantula Nebula lies about 163,000 light-years away in the southern constellation of Dorado.

    Also known as NGC 2070 or 30 Doradus, this nebula is part of the Large Magellanic Cloud, one of our closest galactic neighbors.

    The bright glow of the nebula was first recorded by the French astronomer Nicolas-Louis de Lacaille in 1751.

    At its heart are some of the most massive stars known, a few with up to 200 solar masses, making the region perfect for studying how gas clouds collapse under gravity to form new stars.

    “The Tarantula Nebula is the largest and brightest star-forming region not just in the Large Magellanic Cloud, but in the entire group of nearby galaxies to which the Milky Way belongs,” the Hubble astronomers wrote in a statement.

    “The nebula is home to the most massive stars known, some of which are roughly 200 times as massive as our Sun.”

    “The scene pictured here is located away from the center of the nebula, where there is a super star cluster called R136, but very close to a rare type of star called a Wolf-Rayet star.”

    “Wolf-Rayet stars are massive stars that have lost their outer shell of hydrogen and are extremely hot and luminous, powering dense and furious stellar winds,” they explained.

    The Tarantula Nebula is a frequent target for Hubble, whose multiwavelength capabilities are critical for capturing sculptural details in the nebula’s dusty clouds.

    “The data used to create this image come from an observing program called Scylla, named for a multi-headed sea monster from the Greek myth of Ulysses,” the astronomers said.

    “The Scylla program was designed to complement another Hubble observing program called ULYSSES (Ultraviolet Legacy library of Young Stars as Essential Standards).”

    “ULYSSES targets massive young stars in the Small and Large Magellanic Clouds, while Scylla investigates the structures of gas and dust that surround these stars.”

  • Radar that could find life on Europa just nailed its first big test

    NASA’s largest interplanetary probe tested its radar during a Mars flyby. The results include a detailed image and bode well for the mission at Jupiter’s moon Europa.

    As it soared past Mars in March, NASA’s Europa Clipper conducted a critical radar test that had been impossible to accomplish on Earth. Now that mission scientists have studied the full stream of data, they can declare success: The radar performed just as expected, bouncing and receiving signals off the region around Mars’ equator without a hitch.

    Called REASON (Radar for Europa Assessment and Sounding: Ocean to Near-surface), the radar instrument will “see” into Europa’s icy shell, which may have pockets of water inside. The radar may even be able to detect the ocean beneath the shell of Jupiter’s fourth-largest moon.

    “We got everything out of the flyby that we dreamed,” said Don Blankenship, principal investigator of the radar instrument, of the University of Texas at Austin. “The goal was to determine the radar’s readiness for the Europa mission, and it worked. Every part of the instrument proved itself to do exactly what we intended.”

    The radar will help scientists understand how the ice may capture materials from the ocean and transfer them to the surface of the moon. Above ground, the instrument will help to study elements of Europa’s topography, such as ridges, so scientists can examine how they relate to features that REASON images beneath the surface.
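
    The core conversion a radar sounder performs is simple to sketch: a subsurface reflector’s depth follows from an echo’s round-trip delay and the reduced speed of radio waves in ice. The numbers below are illustrative, not REASON mission specifications:

    ```python
    C = 299_792_458.0    # speed of light in vacuum, m/s
    N_ICE = 1.78         # approximate refractive index of water ice at radar frequencies

    def reflector_depth_m(two_way_delay_s: float) -> float:
        """Depth of a subsurface reflector from a radar echo's round-trip delay."""
        v_ice = C / N_ICE                   # wave speed in ice, ~1.7e8 m/s
        return v_ice * two_way_delay_s / 2  # halve it: the signal travels down and back

    # An echo arriving ~119 microseconds after the surface return would place
    # a reflector roughly 10 kilometers below the ice surface:
    print(reflector_depth_m(119e-6) / 1000, "km")   # ~10.0
    ```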

    Limits of Earth

    Europa Clipper has an unusual radar setup for an interplanetary spacecraft: REASON uses two pairs of slender antennas that jut out from the solar arrays, spanning a distance of about 58 feet (17.6 meters). Those arrays themselves are huge — from tip to tip, the size of a basketball court — so they can catch as much light as possible at Europa, which receives about 1/25th as much sunlight as Earth does.

    The instrument team conducted all the testing that was possible prior to the spacecraft’s launch from NASA’s Kennedy Space Center in Florida on Oct. 14, 2024. During development, engineers at the agency’s Jet Propulsion Laboratory in Southern California even took the work outdoors, using open-air towers on a plateau above JPL to stretch out and test engineering models of the instrument’s spindly high-frequency and more compact very-high-frequency antennas.

    But once the actual flight hardware was built, it needed to be kept sterile and could be tested only in an enclosed area. Engineers used the giant High Bay 1 clean room at JPL, where the spacecraft was assembled, to test the instrument piece by piece. To test the “echo,” or the bounceback of REASON’s signals, however, they’d have needed a chamber about 250 feet (76 meters) long — nearly three-quarters the length of a football field.

    Enter Mars

    The mission’s primary goal in flying by Mars on March 1, less than five months after launch, was to use the planet’s gravitational pull to reshape the spacecraft’s trajectory. But it also presented opportunities to calibrate the spacecraft’s infrared camera and perform a dry run of the radar instrument over terrain NASA scientists have been studying for decades.

    As Europa Clipper zipped by the volcanic plains of the Red Planet — starting at 3,100 miles (5,000 kilometers) down to 550 miles (884 kilometers) above the surface — REASON sent and received radio waves for about 40 minutes. In comparison, at Europa the instrument will operate as close as 16 miles (25 kilometers) from the moon’s surface.

    All told, engineers were able to collect 60 gigabytes of rich data from the instrument. Almost immediately, they could tell REASON was working well. The flight team scheduled the full dataset to download, starting in mid-May. Scientists relished the opportunity over the next couple of months to examine the information in detail and compare notes.

    “The engineers were excited that their test worked so perfectly,” said JPL’s Trina Ray, Europa Clipper deputy science manager. “All of us who had worked so hard to make this test happen — and the scientists seeing the data for the first time — were ecstatic, saying, ‘Oh, look at this! Oh, look at that!’ Now, the science team is getting a head start on learning how to process the data and understand the instrument’s behavior compared to models. They are exercising those muscles just like they will out at Europa.”

    Europa Clipper’s total journey to reach the icy moon will be about 1.8 billion miles (2.9 billion kilometers) and includes one more gravity assist — using Earth — in 2026. The spacecraft is currently about 280 million miles (450 million kilometers) from Earth.

    More About Europa Clipper

    Europa Clipper’s three main science objectives are to determine the thickness of the moon’s icy shell and its interactions with the ocean below, to investigate its composition, and to characterize its geology. The mission’s detailed exploration of Europa will help scientists better understand the astrobiological potential for habitable worlds beyond our planet.

    Managed by Caltech in Pasadena, California, NASA’s Jet Propulsion Laboratory in Southern California leads the development of the Europa Clipper mission in partnership with the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland, for NASA’s Science Mission Directorate in Washington. APL designed the main spacecraft body in collaboration with JPL and NASA’s Goddard Space Flight Center in Greenbelt, Maryland, NASA’s Marshall Space Flight Center in Huntsville, Alabama, and Langley Research Center in Hampton, Virginia. The Planetary Missions Program Office at NASA Marshall executes program management of the Europa Clipper mission. NASA’s Launch Services Program, based at NASA Kennedy, managed the launch service for the Europa Clipper spacecraft. The REASON radar investigation is led by the University of Texas at Austin.

  • 2 spacecraft flew exactly in line to imitate a solar eclipse, capture a stunning image and test new tech

    During a solar eclipse, astronomers who study heliophysics are able to study the Sun’s corona – its outer atmosphere – in ways they are unable to do at any other time.

    The brightest part of the Sun is so bright that it blocks the faint light from the corona, so it is invisible to most of the instruments astronomers use. The exception is when the Moon blocks the Sun, casting a shadow on the Earth during an eclipse. But as an astronomer, I know eclipses are rare, they last only a few minutes, and they are visible only on narrow paths across the Earth. So, researchers have to work hard to get their equipment to the right place to capture these short, infrequent events.

    In their quest to learn more about the Sun, scientists at the European Space Agency have built and launched a new probe designed specifically to create artificial eclipses.

    Meet Proba-3

    This probe, called Proba-3, works just like a real solar eclipse. One spacecraft, which is roughly circular when viewed from the front, orbits closer to the Sun, and its job is to block the bright parts of the Sun, acting as the Moon would in a real eclipse. It casts a shadow on a second probe that has a camera capable of photographing the resulting artificial eclipse.

    The two spacecraft of Proba-3 fly in precise formation about 492 feet (150 meters) apart.
    ESA-P. Carril, CC BY-NC-ND
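
    The geometry is straightforward: from the camera’s viewpoint, the occulting disc must appear at least as large as the Sun. A quick sketch with approximate figures (the disc diameter here is illustrative, not taken from this article):

    ```python
    import math

    SUN_ANGULAR_DIAMETER_DEG = 0.533   # the Sun's apparent diameter from Earth's distance

    def angular_diameter_deg(disc_diameter_m: float, separation_m: float) -> float:
        """Apparent angular diameter of a disc seen from a given distance."""
        return math.degrees(2 * math.atan(disc_diameter_m / (2 * separation_m)))

    # A disc of roughly 1.4 m at the mission's 150 m separation just covers the Sun:
    print(angular_diameter_deg(1.4, 150.0))   # ~0.535 degrees vs. the Sun's ~0.533
    ```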

    Having two separate spacecraft flying independently but in such a way that one casts a shadow on the other is a challenging task. But future missions depend on scientists figuring out how to make this precision choreography technology work, and so Proba-3 is a test.

    This technology is helping to pave the way for future missions that could include satellites that dock with and deorbit dead satellites or powerful telescopes with instruments located far from their main mirrors.

    The side benefit is that researchers get to practice by taking important scientific photos of the Sun’s corona, allowing them to learn more about the Sun at the same time.

    An immense challenge

    The two satellites launched in 2024 and entered orbits that approach Earth as close as 372 miles (600 kilometers) – that’s about 50% farther from Earth than the International Space Station – and reach more than 37,282 miles (60,000 km) at their most distant point, about one-sixth of the way to the Moon.

    During this orbit, the satellites move at speeds between 5,400 miles per hour (8,690 kilometers per hour) and 79,200 mph (127,460 kph). At their slowest, they’re still moving fast enough to go from New York City to Philadelphia in one minute.

    While flying at that speed, they can control themselves automatically, without a human guiding them, and fly 492 feet (150 meters) apart – a separation that is longer than the length of a typical football stadium – while still keeping their locations aligned to about one millimeter.

    They needed to maintain that precise flying pattern for hours in order to take a picture of the Sun’s corona, and they did it in June 2025.

    The Proba-3 mission is also studying space weather by observing high-energy particles that the Sun ejects out into space, sometimes in the direction of the Earth. Space weather causes the aurora, also known as the northern lights, on Earth.

    While the aurora is beautiful, solar storms can also harm Earth-orbiting satellites. The hope is that Proba-3 will help scientists continue learning about the Sun and better predict dangerous space weather events in time to protect sensitive satellites.

  • Meet ‘lite intermediate black holes,’ the supermassive black hole’s smaller, much more mysterious cousin

    Black holes are massive, strange and incredibly powerful astronomical objects. Scientists know that supermassive black holes reside in the centers of most galaxies.

    And they understand how certain stars form the comparatively smaller stellar mass black holes once they reach the end of their lives. Understanding how stellar mass black holes could grow into supermassive black holes helps astronomers learn about how the universe grows and evolves.

    But there’s an open question in black hole research: what about black holes with masses in between? These black holes, with masses ranging from a few hundred to a few hundred thousand times the mass of the Sun, are much harder to find than their stellar and supermassive peers.

    We’re a team of astronomers who are searching for these in-between black holes, called intermediate-mass black holes. In a new paper, two of us (Krystal and Karan) teamed up with a group of researchers, including postdoctoral researcher Anjali Yelikar, to look at ripples in space-time and spot a few of these elusive black holes merging.

    Take me out to the (gravitational wave) ball game

    To gain an intuitive idea of how scientists detect stellar mass black holes, imagine you are at a baseball game where you’re sitting directly behind a big concrete column and can’t see the diamond. Even worse, the crowd is deafeningly loud, so it is also nearly impossible to see or hear the game.

    But you’re a scientist, so you take out a high-quality microphone and your computer and write a computer algorithm that can take audio data and separate the crowd’s noise from the “thunk” of a bat hitting a ball.

    You start recording, and, with enough practice and updates to your hardware and software, you can begin following the game, getting a sense of when a ball is hit, what direction it goes, when it hits a glove, where runners’ feet pound into the dirt and more.

    Admittedly, this is a challenging way to watch a baseball game. But unlike baseball, when observing the universe, sometimes the challenging way is all we have.

    This principle of recording sound and using computer algorithms to isolate certain sound waves to determine what they are and where they are coming from is similar to how astronomers like us study gravitational waves. Gravitational waves are ripples in space-time that allow us to observe objects such as black holes.
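
    In signal-processing terms, that “pick out the thunk” algorithm is a matched filter: cross-correlate the noisy recording against a template of the expected signal and look for a peak. Here is a toy sketch of the idea (far simpler than any real gravitational-wave pipeline):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    t = np.arange(n) / n
    template = np.sin(2 * np.pi * (20 + 80 * t) * t) * np.hanning(n)  # toy "chirp"

    data = rng.normal(0.0, 2.0, 20_000)          # loud stationary noise ("the crowd")
    offset = 10_000
    data[offset:offset + n] += 0.8 * template    # a weak buried signal ("the thunk")

    # Slide the template along the data; the correlation peaks where they line up.
    score = np.correlate(data, template, mode="valid")
    print(int(np.argmax(np.abs(score))))         # ~10000: the filter finds the signal
    ```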

    Now imagine implementing a different sound algorithm, testing it over several innings of the game and finding a particular hit that no legal combination of bats and balls could have produced. Imagine the data was suggesting that the ball was bigger and heavier than a legal baseball could be. If our paper was about a baseball game instead of gravitational waves, that’s what we would have found.

    Listening for gravitational waves

    While the baseball recording setup is designed specifically to hear the sounds of a baseball game, scientists use a specialized observatory called the Laser Interferometer Gravitational-Wave Observatory, or LIGO, to observe the “sound” of two black holes merging out in the universe.

    The LIGO detector in Hanford, Wash., uses lasers to measure the minuscule stretching of space caused by a gravitational wave.
    LIGO Laboratory

    Scientists look for the gravitational waves that we can measure using LIGO, which has one of the most mind-bogglingly advanced laser and optics systems ever created.

    In each event, two “parent” black holes merge into a single, more massive black hole. Using LIGO data, scientists can figure out where and how far away the merger happened, how massive the parents and resultant black holes are, which direction in the sky the merger happened and other key details.
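
    One of those key details falls almost directly out of the waveform: the chirp mass, a particular combination of the two parent masses that sets how the signal’s frequency rises over time. A minimal sketch of the standard formula:

    ```python
    def chirp_mass(m1: float, m2: float) -> float:
        """Chirp mass (m1*m2)**(3/5) / (m1+m2)**(1/5), in the units of the inputs.
        It is the mass combination a binary's gravitational waveform pins down best."""
        return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

    # Two 30-solar-mass parent black holes give a chirp mass of about 26 solar masses.
    print(chirp_mass(30.0, 30.0))   # ~26.1
    ```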

    Most of the parent black holes in merger events originally form from stars that have reached the end of their lives – these are stellar mass black holes.

    This artist’s impression shows a binary system containing a stellar mass black hole called IGR J17091-3624. The strong gravity of the black hole, on the left, is pulling gas away from a companion star on the right.
    NASA/CXC/M.Weiss, CC BY-NC

    The black hole mass gap

    Not every dying star can create a stellar mass black hole. The ones that do are usually between about 20 and 100 times the mass of the Sun. But due to complicated nuclear physics, really massive stars explode differently and don’t leave behind any remnant, black hole or otherwise.

    These physics create what we refer to as the “mass gap” in black holes. A smaller black hole likely formed from a dying star. But we know that a black hole more massive than about 60 times the mass of the Sun, while not a supermassive black hole, is still too big to have formed directly from a dying star.

    The exact cutoff for the mass gap is still somewhat uncertain, and many astrophysicists are working on more precise measurements. However, we are confident that the mass gaps exist and that we are in the ballpark of the boundary.

    We call black holes in this gap lite intermediate mass black holes or lite IMBHs, because they are the least massive black holes that we expect to exist from sources other than stars. They are no longer considered stellar mass black holes.

    Calling them “intermediate” also doesn’t quite capture why they are special. They are special because they are much harder to find, astronomers still aren’t sure what astronomical events might create them, and they fill a gap in astronomers’ knowledge of how the universe grows and evolves.

    Evidence for IMBHs

    In our research, we analyzed 11 black hole merger candidates from LIGO’s third observing run. These candidates were possible gravitational wave signals that looked promising but still needed more analysis to be conclusively confirmed.

    The data suggested that for those 11 we analyzed, their final post-merger black hole may have been in the lite IMBH range. We found five post-merger black holes that our analysis was 90% confident were lite IMBHs.

    Even more critically, we found that one of the events had a parent black hole that was in the mass gap range, and two had parent black holes above the mass gap range. Since we know these black holes can’t come from stars directly, this finding suggests that the universe has some other way of creating black holes this massive.

    A parent black hole this massive may already be the product of two other black holes that merged in the past, so observing more IMBHs can help us understand how often black holes are able to “find” each other and merge out in the universe.

    LIGO is in the end stages of its fourth observing run. Since this work used data from the third observing run, we are excited to apply our analysis to this new dataset. We expect to continue to search for lite IMBHs, and with this new data we will improve our understanding of how to more confidently “hear” these signals from more massive black holes above all the noise.

    We hope this work not only strengthens the case for lite IMBHs in general but helps shed more light on how they are formed.
