- Starlink Satellites Interfere With Nearly a Third of Low Frequency Radio Astronomy extremetech.com
- Scientists analyze 76 million radio telescope images, find Starlink satellite interference ‘where no signals are supposed to be present’ Space
- Musk messing with cosmic data. Will alien hunters save us? theregister.com
- Interference to astronomy the unintended consequence of faster internet Curtin University
- SpaceX’s Starlink Satellites May Be Threatening Astronomical Discoveries, Say Researchers The Daily Galaxy
Category: 7. Science
-
Starlink Satellites Interfere With Nearly a Third of Low Frequency Radio Astronomy – extremetech.com
-
Meet Apep, a swirling nebula around 2 dying stars
This is Apep, a multiple star system in the direction of the small, southern constellation Norma. Apep is named for an ancient Egyptian deity (Apep, aka Apophis), the embodiment of chaos and darkness. Image via NASA/ JWST/ Judy Schmidt.
- Apep is a multiple star system. Surrounding it is an intricate, swirling nebula that has intrigued scientists.
- The James Webb Space Telescope has taken a new image of Apep. The image gives astronomers an even better view of what is happening within this star system.
- Two Wolf-Rayet stars lie at the heart of this system, along with a third companion. And the third star is taking a bite out of their dust shells.
By Benjamin Pope, Macquarie University
The twisted world of Apep
The day before my thesis examination, my friend and radio astronomer Joe Callingham showed me an image we’d been awaiting for five long years. It was an infrared photo of two dying stars we’d requested from the Very Large Telescope in Chile.
I gasped. The stars were wreathed in a huge spiral of dust, like a snake eating its own tail.
We named it Apep, for the Egyptian serpent god of destruction. Now, our team has finally been lucky enough to use NASA’s James Webb Space Telescope to look at Apep.
If anything could top the first shock of seeing its beautiful spiral nebula, it’s this breathtaking new image. The new Webb data are now analyzed in two papers on arXiv.
The European Southern Observatory’s Very Large Telescope captured the coils of Apep. Image via ESO/ Callingham et al., CC BY.
Violent star deaths
Right before they die as supernovas, the universe’s most massive stars violently shed their outer hydrogen layers, leaving their heavy cores exposed.
These are Wolf-Rayet stars, named after their discoverers Charles Wolf and Georges Rayet. Wolf and Rayet noticed powerful streams of gas blasting out from these objects, much stronger than the stellar wind from our sun. The Wolf-Rayet stage lasts only millennia – a blink of the eye in cosmic time scales – before they violently explode.
Unlike our sun, many stars in the universe exist in pairs known as binaries. This is especially true of the most massive stars, such as Wolf-Rayets.
When the fierce gales from a Wolf-Rayet star clash with its weaker companion’s wind, they compress each other. In the eye of this storm forms a dense, cool environment in which the carbon-rich winds can condense into dust. The earliest carbon dust in the cosmos – the first of the material making up our own bodies – was made this way.
The dust from the Wolf-Rayet star blows out in almost a straight line, and the orbital motion of the stars wraps it into a spiral-shaped nebula – so, viewed from above, it looks just like water from a sprinkler.
We expected Apep to look like one of these elegant pinwheel nebulas, discovered by our colleague and co-author Peter Tuthill. To our surprise, it did not.
The ‘pinwheel’ nebula of the triple Wolf-Rayet star system WR104. Image via Peter Tuthill.
Equal rivals
Webb’s infrared camera took the new image. The camera works like the thermal cameras that hunters or the military use: the image shows hot material in blue and colder material in green to red.
It turns out Apep isn’t just one powerful star blasting a weaker companion, but two Wolf-Rayet stars. The rivals have near-equal strength winds, and the dust spreads out in a wide cone, wrapping into a wind-sock shape.
When we originally described Apep in 2018, we noted a third, more distant star, speculating whether it was also part of the system or a chance interloper along the line of sight.
The dust appeared to be moving much slower than the winds, which was hard to explain. We suggested the dust might be carried on a slow, thick wind from the equator of a fast-spinning star, rare today but common in the early universe.
The new, much more detailed data from Webb reveals three more dust shells zooming farther out, each cooler and fainter than the last and spaced perfectly evenly, against a background of swirling dust.
The Apep nebula in false color, displaying infrared data from Webb’s MIRI camera. Image via Han et al./White et al./Dholakia; NASA/ESA.
New data on Apep, new knowledge
Researchers have now published the Webb data, interpreted in a pair of papers. One is led by Caltech astronomer Yinuo Han, and the other by Macquarie University Master’s student Ryan White.
Han’s paper reveals how the nebula’s dust cools, links the background dust to the foreground stars and suggests the stars are farther away from Earth than we thought. This implies they are extraordinarily bright, but weakens our original claim about the slow winds and rapid rotation.
In White’s paper, he develops a fast computer model for the shape of the nebula and uses this to decode the orbit of the inner stars very precisely.
He also noticed there’s a “bite” taken out of the dust shells, exactly where the wind of the third star would be chewing into them. This proves the Apep family isn’t just a pair of twins … they have a third sibling.
An illustration of the cavity carved by the third stellar companion in the Apep system. Image via White et al. (2025).
Understanding systems like Apep tells us more about star deaths and the origins of carbon dust. But these systems also have a fascinating beauty that emerges from their seemingly simple geometry.
The violence of stellar death carves puzzles that would make sense to Newton and Archimedes, and it is a scientific joy to solve them and share them.
Benjamin Pope, Associate Professor, School of Mathematical and Physical Sciences, Macquarie University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Bottom line: The James Webb Space Telescope has captured an image of Apep, which includes the swirling nebula around two dying stars. A scientist explains what we know about it now.
Continue Reading
-
Functional and genomic analysis of Enterococcus phage A155: a potential agent for reducing VRE gut colonization | Virology Journal
Bacteria strains
The 59 strains of E. faecalis, 5 strains of E. faecium, 2 strains of Staphylococcus aureus, 2 strains of Listeria monocytogenes, and 2 strains of Escherichia coli used in this study (for details, see Table 2) were conserved by the Institute of Microbe & Host Health, Linyi University (Linyi, Shandong Province, China). VR-Efs V583 was provided by Utrecht University (the Netherlands) and stored at the Institute of Microbe & Host Health (Linyi, Shandong Province, China).
Isolation and purification of phage
Phage A155 was isolated from a sewage sample from a farm (Minggang Farm, Linyi) using VR-Efs V583 as the host, through a modified enrichment technique [14]. In brief, 15 mL of pre-settled sewage supernatant was centrifuged at 2876 ×g for 5 min (Pingke-165–6N). Subsequently, 2.5 mL of clarified supernatant was mixed with 2.5 mL of 2× Brain Heart Infusion (BHI) broth. The mixture was incubated overnight at 37 °C with shaking (200 r/min). Then, 1.0 mL of the culture was centrifuged at 11,586 ×g for 5 min (ALLSheng-mini-15k), the supernatant was filtered through a 0.22 μm membrane filter (Jinteng, China), and the double-layer plate method [15] was used to screen for the presence of phage.
For purification, individual plaques were excised using sterile pipette tips, soaked in 200 µL of SM buffer overnight, serially diluted, and re-plated using the double-layer plate method. The purification cycle was repeated until a homogeneous plaque morphology was achieved. Phage A155 was then propagated and stored at 4 °C, and at −80 °C in 25% glycerol.
Phage morphology
The purified phage was amplified and then concentrated with PEG-8000. After overnight precipitation at 4 °C, the phage was pelleted by centrifugation at 6236 ×g for 10 min (Eppendorf-5810R-SL142) and the supernatant was discarded. The pellet was resuspended in 2.0 mL of SM buffer (NaCl 5.8 g/L, MgSO4 1.5 g/L, Tris 6.06 g/L, gelatin (Aladdin) 0.1 g/L, pH adjusted to 7.5 with 1 M HCl), followed by extraction with an equal volume of chloroform to obtain the phage concentrate. The concentrate was mixed with 1.5 g CsCl in a 15-mL centrifuge tube and centrifuged at 6236 ×g for 20 min at 4 °C (Eppendorf-5810R-SL142). The phage band was collected with a syringe and dialysed overnight against phosphate-buffered saline (PBS) to obtain purified phage particles. Samples were negatively stained with phosphotungstic acid for 15 min [16] and observed under a transmission electron microscope at 80 kV.
Optimal MOI
The multiplicity of infection (MOI) is the ratio of phage to host bacteria used for phage amplification. VR-Efs V583 was cultured to the logarithmic growth phase (OD₆₀₀ ≈ 0.7), and the bacterial concentration was adjusted to 1 × 10⁹ CFU/mL. The phage was serially diluted and mixed with VR-Efs V583 and BHI broth at MOIs of 10, 1, 0.1, 0.01, and 0.001. The mixtures were incubated at 37 °C with shaking at 200 r/min for 6 h. The phage titer of each mixture was determined using the double-layer plate method, and the MOI that yielded the highest titer was taken as the optimal MOI.
One-step growth curve
VR-Efs V583 was cultured to the logarithmic growth phase (OD₆₀₀ ≈ 0.7), and 1.0 mL of the culture was centrifuged at 2200 ×g for 5 min at 4 °C (Eppendorf-5418R). The pellet was washed twice by resuspension in PBS and centrifugation under the same conditions to remove residual impurities, and then resuspended in 1.0 mL of sterile BHI broth. Phage was added at the optimal MOI (MOI = 0.001) and incubated at 37 °C for 15 min. The mixture was then centrifuged at 4827 ×g for 5 min (AllSheng-mini-15k), the supernatant was discarded, and the pellet was resuspended in 10 mL of sterile BHI broth. The suspension was incubated at 37 °C with shaking at 200 r/min. Samples (50 µL) were collected every 5 min for the first 30 min and every 10 min for the next 90 min (2 h in total) for phage titer determination. Three replicates were performed at each time point, and the average values were used. The one-step growth curve was plotted with sampling time on the x-axis and the logarithm of phage titer on the y-axis.
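For readers who want to reproduce this kind of plot, the sketch below shows how a one-step growth curve might be drawn from the averaged titer measurements described above; the sampling schedule follows the text, but the titer values are hypothetical and not data from this study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sampling schedule from the text: every 5 min for 30 min, then every 10 min up to 120 min
times = np.concatenate([np.arange(0, 35, 5), np.arange(40, 130, 10)])

# Hypothetical mean phage titers (PFU/mL) at each time point (three replicates averaged)
titers = np.array([1e4, 1e4, 1.2e4, 1.5e4, 5e4, 4e5, 2e6,
                   8e6, 2e7, 5e7, 8e7, 9e7, 9.5e7, 9.5e7, 9.5e7, 9.5e7])
assert len(times) == len(titers)

plt.plot(times, np.log10(titers), marker="o")
plt.xlabel("Time post-infection (min)")
plt.ylabel("log10 phage titer (PFU/mL)")
plt.title("One-step growth curve (illustrative data)")
plt.tight_layout()
plt.savefig("one_step_growth_curve.png", dpi=300)
```

From such a curve, the latent period is typically read off as the plateau before the titer rises, and the burst size as the fold-increase in titer after the rise.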
Temperature and pH stability
In a modification of a previously described assay [13], phage A155 (10⁹ PFU/mL) was incubated in a thermostatic water bath at 20 °C, 30 °C, 40 °C, 50 °C, 60 °C, 70 °C, and 80 °C for 30 min and 60 min. After incubation, 100 µL of the phage solution was serially diluted 10-fold, and the phage titer was determined using the double-layer plate method. Three replicates were performed at each temperature, and the average values were used to analyze the changes in phage titer.
Phage A155 (10⁹ PFU/mL) was incubated at pH levels ranging from 2.0 to 14.0 (in increments of 1.0) for 60 min at 37 °C. After incubation, 100 µL of the phage solution was serially diluted 10-fold, and the phage titer was determined using the double-layer plate method. Three replicates were performed, and the average values were used to analyze the changes in phage titer under different pH conditions.
Sequencing and bioinformatics analysis of phage A155 genome
Following the instructions of the TIANamp Virus DNA/RNA Kit (Tiangen Bio-Tek Inc., Beijing, China), phage A155 was amplified and concentrated, and its nucleic acid was extracted. Whole-genome sequencing was conducted by Shanghai Personal Biotechnology Co., Ltd., following the method described by Han [17]. Briefly, a 2 × 250 bp paired-end DNA library with an average fragment length of 400 bp was constructed using the Illumina TruSeq Nano DNA LT protocol (Illumina TruSeq DNA Sample Preparation Guide). Raw sequencing data quality was assessed with FastQC v0.11.7, followed by adapter trimming. The genome was assembled using A5-MiSeq [18] and SPAdes [19]. Viral genome identification was performed by extracting high-depth sequences and conducting BLASTn alignment against the NCBI NT database [20]. Synteny analysis and gap closure were conducted with MUMmer [21], with subsequent error correction performed via Pilon v1.24 [22]. Protein-coding genes were predicted with GeneMarkS [23]. The genome data were deposited in the National Center for Biotechnology Information (NCBI) database under GenBank accession number PQ093903, and a comparative gene circle map was generated with BRIG.
Phylogenetic tree construction
A phylogenetic analysis based on whole-genome sequences was performed comparing phage A155 with multiple Enterococcus faecalis phages. Sequence alignment and phylogenetic tree construction were conducted using VICTOR [24].
Efficiency of plating (EOP)
Following a previously described method with certain modifications [25], the phage host range was quantitatively assessed through the efficiency of plating (EOP). Briefly, the phage A155 solution was serially diluted 10-fold, and phage titers were determined in triplicate for each bacterial strain using the double-layer agar method. EOP values were calculated as (mean PFU on target bacteria / mean PFU on host bacteria). An EOP value of 0.5 or higher was classified as “high production”, indicating that at least 50% of PFUs were produced on the target bacteria compared with the host bacteria. An EOP value between 0.1 and 0.5 was classified as “medium production”, an EOP value between 0.001 and 0.1 as “low production”, and an EOP below 0.001 as inefficient.
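To make the classification concrete, here is a minimal sketch of the EOP calculation and the cut-offs quoted above; the PFU values in the example are hypothetical.

```python
def efficiency_of_plating(pfu_target: float, pfu_host: float) -> float:
    """EOP = mean PFU on target strain / mean PFU on host strain (VR-Efs V583)."""
    return pfu_target / pfu_host

def classify_eop(eop: float) -> str:
    """Classify an EOP value using the thresholds given in the text."""
    if eop >= 0.5:
        return "high production"
    if eop >= 0.1:
        return "medium production"
    if eop >= 0.001:
        return "low production"
    return "inefficient"

# Hypothetical triplicate means: 3.0e8 PFU/mL on a target strain vs 1.0e9 PFU/mL on the host
eop = efficiency_of_plating(3.0e8, 1.0e9)   # 0.3
print(eop, classify_eop(eop))               # 0.3 medium production
```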
Inhibitory effect of phage A155 on VR-Efs V583 in vitro
VR-Efs V583 was cultured to the stationary phase, harvested, and resuspended in fresh BHI broth. The suspension was distributed into 12-well plates, adjusted to a final bacterial concentration of 10⁷ CFU/mL (1/100 of the original bacterial culture volume), and supplemented with BHI broth. Phage A155 was added at MOIs of 0.001, 0.01, 0.1, 1, or 10, with a final volume of 2.5 mL per well. Bacterial growth without phage served as the control. All experiments were performed in triplicate. The plates were incubated in a bacterial growth curve instrument (Scientz MGC-200, Ningbo, China), and OD₆₀₀ was measured every 30 min for 16 h to assess the lytic efficiency of phage A155 in vitro.
Phage therapy in the murine bacteremia model
Female BALB/c mice aged 6–8 weeks were purchased from a commercial supplier (Pengyue, Shandong, China). They were provided with water and standard mouse chow ad libitum and monitored daily.
An intestinal model of E. faecalis infection in mice was established as previously described [26] (Fig. 7a). Sixteen SPF BALB/c mice (6–8 weeks old) were randomly divided into two groups (n = 8). After one week of acclimatisation, antibiotics were administered orally to deplete the intestinal flora and create more ecological niches for E. faecalis colonization [27]. Mice received a mixture of antibiotics (vancomycin 10 mg, neomycin 10 mg, ampicillin 10 mg, metronidazole 10 mg) by gavage for 3 days, followed by the same antibiotics in drinking water (vancomycin 500 mg/L, neomycin 500 mg/L, ampicillin 500 mg/L, metronidazole 500 mg/L) for 7 days. Faecal bacterial counts, particularly Enterococci, were monitored using the plate-counting method during this period. After 2 days, 100 µL of VR-Efs V583 suspension (1 × 10⁹ CFU/mL) was administered to both groups to simulate enterococcal proliferation. Faecal Enterococci levels were monitored using PSE agar.
To assess the effect of phage treatment, a single dose of phage A155 (2.4 × 10⁸ PFU/mouse) was administered orally 4 days post-bacterial challenge. The control group received an equivalent volume of BHI broth. Faecal samples were collected daily, and Enterococci counts on selective media were used to evaluate the efficacy of phage treatment in reducing E. faecalis colonization.
Data analysis
Statistical analysis of the experimental data was performed in GraphPad Prism 8.3.0 using one-way ANOVA. Results are expressed as mean ± standard deviation (SD); error bars indicate the SD of the mean, and p-values were used to indicate statistical significance.
Continue Reading
-
Climate change could make ‘droughts’ for wind power 15% longer, study says
Extreme “wind droughts” that reduce power output from turbines for extended periods could become 15% longer by the end of the century across much of the northern hemisphere under a moderate warming scenario.
That is according to a new study in Nature Climate Change, which explores how climate change could impact the length and frequency of prolonged low-wind events around the world.
According to the study, “prominent” wind droughts have already been documented in Europe, the US, northeastern China, Japan and India.
As the planet warms, wind droughts will become longer in the northern hemisphere and mid-latitudes – especially across the US, northeastern China, Russia and much of Europe – the paper says.
The study – which focuses on onshore wind – warns that “prolonged” wind droughts could “threaten global wind power security”.
However, they add that research into the effects of climate change on wind supply can help “prepare for and mitigate the adverse impacts” of these prolonged low-wind events.
Combining wind power with other energy technologies – such as solar, hydro, nuclear power and energy storage – can help reduce the impact of wind droughts on global energy supply, the study says.
One expert not involved in the research tells Carbon Brief that the findings do not “spell doom for the wind industry”.
Instead, he says the study is a “navigation tool” which could help the energy industry to “counteract” future challenges.
Wind drought
Wind power is one of the fastest-growing sources of energy in the world and currently makes up around 8% of global electricity supply. It is also playing a crucial role in the decarbonisation of many countries’ energy systems.
Wind is the result of air moving from areas of high pressure to areas of low pressure. These differences in air pressure are often due to the Earth’s surface being heated unevenly.
Human-caused climate change is warming the planet’s atmosphere and oceans. However, different regions are heating at different rates, resulting in a shift in global wind patterns. The IPCC finds that global average wind speeds (excluding Australia) slowed down slightly over 1979-2018.
There have already been dozens of recorded instances of prolonged low-wind events, known as wind droughts, which can drive down power production from wind turbines.
Dr Iain Staffell is an associate professor at the Centre for Environmental Policy at Imperial College London who was not involved in the study. He tells Carbon Brief that wind droughts often “push up power prices” as countries turn to more expensive alternative energy supplies, such as fossil fuels.
For example, Staffell tells Carbon Brief that, in the winter of 2024-25, Germany saw an “extended cold-calm spell which sent power prices to record highs”. (In German, this type of weather event is referred to as a “dunkelflaute”, often translated as “dark doldrums”.) He adds:
“It’s important to note that I’m not aware of anywhere in the world that has suffered a blackout because of a wind drought.”
Capacity factor
The productivity of wind power sites is often measured by their “capacity factor” – the amount of electricity that is actually generated over a period of time, relative to the maximum amount that could have been generated in theory.
A capacity factor of one indicates that wind turbines are generating the maximum possible amount of electricity, while zero indicates that they are not producing any power.
The authors define a wind drought as wind speeds below the 20th percentile in each grid cell – in other words, winds in the slowest fifth of those typically recorded in that region.
They look at the frequency of prolonged wind droughts and how that might change as the world warms.
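As a rough illustration of that definition (not the study’s actual code), the sketch below flags hours whose capacity factor falls below a grid cell’s own 20th percentile and measures how long each consecutive low-wind spell lasts, using a synthetic hourly series.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic hourly capacity factors for one grid cell: one year of hours, values in [0, 1]
cf = np.clip(rng.beta(2, 3, size=8760), 0, 1)

# Wind-drought threshold: the 20th percentile of the cell's own capacity-factor distribution
threshold = np.percentile(cf, 20)
is_drought_hour = cf < threshold

# Durations (in hours) of consecutive low-wind spells
durations, run = [], 0
for flag in is_drought_hour:
    if flag:
        run += 1
    elif run:
        durations.append(run)
        run = 0
if run:
    durations.append(run)

print(f"threshold = {threshold:.3f}")
print(f"longest spell = {max(durations)} h, mean spell = {np.mean(durations):.1f} h")
```

The study’s projections of “elongation” then amount to comparing statistics like these between present-day and end-of-century model output.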
The map below shows regions’ average capacity factor at 100 metres above the ground level, derived from the ERA5 reanalysis data over 1980-2022, where darker shading indicates a higher capacity factor.
It also shows 19 wind droughts recorded since the year 2000 across Europe, the US, northeastern China, Japan and India. Wind droughts are indicated by yellow triangles for local events and hashed areas for larger-scale events.
Wind droughts, indicated by yellow triangles for local events and hashed areas for larger regions. Shading shows the region’s average capacity factor at 100 metres above the ground level, derived from the ERA5 reanalysis data over 1980-2022, where darker shading indicates a higher capacity factor. Source: Qu et al (2025).
The map also shows that the darker shading for “abundant wind resources” is typically found in the mid-latitudes near “major storm tracks”, including the central US, northern Africa, northwestern Europe, northern Russia, northeastern China and Australia.
Modelling wind
To assess the severity of past and future wind droughts, the authors consider both the frequency and duration of these low-wind events.
To calculate wind drought duration, the authors use reanalysis data and models from the sixth Coupled Model Intercomparison Project (CMIP6) – the international modelling effort that feeds into the influential assessment reports from the Intergovernmental Panel on Climate Change (IPCC).
The authors then look at how wind drought conditions may change in the future, by modelling wind speeds over 2015-2100 under a range of future warming scenarios.
They find that wind drought frequency and duration will both increase in the northern hemisphere and mid-latitudes by the end of the century. The authors identify “particularly notable increases” in wind drought frequency in the US, northeastern China, Russia and much of Europe.
In the northern mid-latitudes, there will be a one-to-two hour increase in average wind drought duration by the end of the century under the moderate SSP2-4.5 scenario, according to the study. This is a 5-15% increase compared to today’s levels.
The authors also assess “extreme long-duration events” by looking at the longest-lasting wind drought that could happen once every 25 years.
The study projects roughly a 10%, 15% and 20% “elongation” in these long-duration wind droughts across “much of the northern mid-latitude regions” under the low, moderate and very high warming scenarios, by the end of the century.
However, the authors find “strong asymmetric changes” in their results, projecting a decrease in wind drought frequency and intensity in the southern hemisphere.
The authors suggest that the increase in wind droughts in the northern hemisphere is partly because of Arctic amplification – the phenomenon whereby the Arctic warms more quickly than the rest of the planet.
Accelerated warming in the Arctic narrows the temperature gap between the north pole and the equator and alters atmosphere-ocean interactions, which reduces wind speeds in the northern hemisphere.
Conversely, the authors suggest that increasing wind speeds in the southern hemisphere are caused by the land warming faster than the ocean, resulting in a greater difference in temperature between the land and the sea.
Record-breaking wind droughts
Finally, the authors also investigate the risk of “record-breaking wind droughts” – extreme events that would only be expected once every 1,000 years under the current climate.
They use CMIP6 models, based on historical data over 1980-2014, to assess how long-lasting such an event would be in different regions of the world. These results are shown on the map below, where darker brown indicates longer-duration wind droughts.
One-in-1,000 year “record-breaking wind droughts”, based on observed data over 1980-2014. Source: Qu et al (2025).
These 1,000-year record-breaking wind droughts typically last for 150-350 hours (6-15 days), occasionally reaching up to 400 hours in regions such as India, eastern Russia, east Africa and east Brazil, the paper says.
The authors go on to assess the risk of record-breaking wind droughts for existing wind turbines under different warming scenarios.
The plot below shows the fraction of the CMIP6 models used in this study that project record-breaking wind droughts for onshore wind turbines.
Blue bars show the percentage of wind turbines that face a “weak” risk of exposure, meaning that fewer than 25% of models predict that the turbine will be exposed to record-breaking wind droughts by the year 2100. Green bars indicate a “moderate” risk of 25-50% and brown bars denote “severe” risk of greater than 50%.
Each panel shows a different region of the world, with results for low (left), moderate (middle) and very high (right) warming scenarios.
Fraction of models used that predict record-breaking wind droughts for currently deployed wind turbines under different climate scenarios. Blue bars show turbines with “weak” risk, green bars indicate a “moderate” risk and brown bars denote “severe” risk. Source: Qu et al (2025).
The study finds that, globally, around 15% of wind turbines will face “severe” risk from record-breaking wind droughts by the end of the century, regardless of the future warming scenario. However, different parts of the globe are expected to face different trends.
In North America, the percentage of turbines facing a “severe” risk from such extended wind droughts in the year 2100 rises from 14% in a low warming scenario to 39% in a very high warming scenario. Europe also faces a higher risk to its wind turbines under higher emissions scenarios.
However, the trends vary across the world. In south-east Asia, for example, the percentage of wind turbines at “severe” risk of the longest wind droughts drops from 18% under a low warming scenario to 11% under a very high warming scenario.
Energy security
The planet currently has 1,136 GW of wind capacity. The authors say that, according to a report by the International Renewable Energy Agency, “wind power capacity is projected to grow substantially as the world pursues decarbonisation, aiming for 6,000GW by 2050”.
The paper sets out a number of ways that energy suppliers could reduce their exposure to record-breaking wind droughts.
The authors say that developers can avoid building new turbines in areas that are prone to frequent wind droughts. They add:
“Other effective mitigation measures include complementing wind power with other renewable energy sources, such as solar, hydro, nuclear power and energy storage.”
Staffell tells Carbon Brief the study provides helpful insights for how the world’s power supply could be made less vulnerable to prolonged low-wind events:
“I don’t see this study as spelling doom for the wind industry, instead it’s a navigation tool, telling us where to expect challenges in future so that we can counteract them.”
Staffell argues that there are “many solutions” for combatting wind droughts – including building the infrastructure to enable “more interconnection” between countries’ power grids.
For example, he says the UK could benefit from connecting its grid to Spain’s, noting that “wind droughts in the UK tend to coincide with [periods of] higher wind production in Spain”.
He adds:
“Increasing flexibility and diversity in power systems is a way to insure ourselves against extreme weather and cheaper than panic-buying gas whenever the wind drops.”
Similarly, Dr Enrico Antonini, a senior energy system modeller at Open Energy Transition, who was not involved in the study, tells Carbon Brief that wind droughts “do not necessarily threaten the viability of wind power”. He continues:
“Areas more exposed to these events can enhance their resilience by diversifying energy sources, strengthening grid connections over large distances and investing in energy storage solutions.”
In a news and views piece about the new study, Dr Sue Ellen Haupt, director of the weather systems assessment programme at the University of Colorado, praises the “robust” analysis.
She says the work “would ideally be accomplished with higher-resolution simulations that better resolve terrain, land-water boundaries and smaller-scale processes”, but acknowledges that “such datasets are not yet available on the global scale”.
Meanwhile, Dr Frank Kaspar is the head of hydrometeorology at Germany’s national meteorological service. He tells Carbon Brief how additions to this study could further help energy system planning in Germany.
Kaspar tells Carbon Brief it would be helpful to know how climate change will affect seasonal trends in wind drought, noting that in Germany, wind power “dominat[es] in winter” while solar plays a larger role in the energy mix in summer. [The UK sees a similar pattern.]
He adds that the study does not address offshore wind – a component of Germany’s energy mix that is “important” for the country.
Qu, M. et al (2025), Prolonged wind droughts in a warming climate threaten global wind power security, Nature Climate Change, doi:10.1038/s41558-025-02387-x
Continue Reading
-
Stable 20-Electron Ferrocene Molecule Thought “Improbable” Has Been Made For First Time
A derivative of the metal-organic complex ferrocene has 20 valence electrons in a stable configuration, overturning the expectation for the last 100 years of a ceiling at 18 valence electrons.
Combinations of metals and carbon-based molecules show rich possibilities for unusual chemistry. One class of these metal-organic complexes is metallocenes, where organic rings sit either side of a metal atom. Ferrocene (Fe(C5H5)2), the first metallocene discovered, was considered such a breakthrough when it was made in 1951 that it won its makers the 1973 Nobel Prize. Its creation is considered to mark the start of modern organometallic chemistry, but it seems there is still plenty for us to learn about it.
Ferrocene normally has 18 valence electrons (those that can contribute to forming chemical bonds), and according to Dr Satoshi Takebayashi of Okinawa Institute of Science and Technology Graduate University, that’s part of a pattern first observed 30 years earlier. “For many transition metal complexes, they are most stable when surrounded by 18 formal valence electrons. This is a chemical rule of thumb on which many key discoveries in catalysis and materials science are based,” Takebayashi said in a statement.
Indeed, the expectation that certain molecules will have 18 valence electrons is so useful for chemists looking to predict molecular structure, it has contributed to three more recent Nobel Prizes.
However, just because 18 electrons represent a sweet spot does not mean there are no alternatives. Some 16-electron complexes are stable, and paramagnetic nickelocene has 20 valence electrons. Now the list of exceptions to the rule has expanded following Takebayashi and co-authors’ creation of a 20-electron ferrocene derivative using an iron-nitrogen bond.
The new 20-electron ferrocene derivative’s structure, with nitrogen (blue), iron (orange), hydrogen (green), and carbon (grey) atoms highlighted. Image credit: Modified from Figure 2c in Takebayashi et al., 2025, Nature Communications, CC-BY-4.0.
“The additional two valence electrons induced an unconventional redox property that holds potential for future applications,” Takebayashi said. Ferrocene and its derivatives are used in reactions where electrons are transferred (redox), both as a catalyst and a reactant, with applications as diverse as solar cells and medical instruments. However, its previously narrow range of oxidation states has limited the reactions it can drive. The discovery should change that. Indeed, the new derivative is noted for its stability in many environments.
A 19-electron anion ferrocene derivative has been made previously, but only with a powerful reductant; 20-electron ferrocenes turned out to require a less extreme approach. When a nitrogen atom bonds to the iron atom at ferrocene’s heart, two electrons are directed away from the iron, ready to bond to other molecules.
The study is open access in Nature Communications.
Continue Reading
-
World-first maps inform call for better protection of underground fungal networks
The mycorrhizal mushroom Cortinarius sp. emerges from a hyper-diverse but hidden underground fungal community in Tierra del Fuego, Chile (image credit: Tomás Munita).
Scientists have released the world’s first high-resolution, predictive biodiversity maps of Earth’s underground mycorrhizal fungal communities, which they say show that over 90% of Earth’s most diverse underground mycorrhizal fungal ecosystems remain unprotected, threatening carbon drawdown, crop productivity, and ecosystem resilience to climate extremes.
The research, published on 23 July in the journal Nature, marks the first large-scale scientific application of the global mapping initiative launched by the Society for the Protection of Underground Networks (SPUN) in 2021.
Mycorrhizal fungi help regulate Earth’s climate and ecosystems by forming underground networks that provide plants with essential nutrients, while drawing ~13 billion tons of CO2 per year into soils – equivalent to roughly one-third of global emissions from fossil fuels. Despite their key role as planetary circulatory systems for carbon and nutrients, mycorrhizal fungi have been overlooked in climate change strategies, conservation agendas, and restoration efforts. This is problematic because the disruption of these networks accelerates climate change and biodiversity loss.
Using machine learning techniques on a dataset containing more than 2.8 billion fungal sequences sampled from 130 countries, scientists have created the first high-resolution diversity maps to predict mycorrhizal diversity at a 1 km² scale across the planet. Surprisingly, only 9.5% of these fungal biodiversity hotspots fall within existing protected areas, revealing major conservation gaps.
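The article does not detail SPUN’s modelling pipeline, but the general idea (training a model on sampled sites and their environmental covariates, then predicting a diversity metric for every unsampled grid cell) can be sketched as follows; the covariates, model choice, and data here are placeholders rather than the team’s actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder training data: one row per sampled site, with environmental covariates
# (e.g. temperature, precipitation, soil pH) and observed fungal richness from sequencing.
n_sites = 500
X = rng.normal(size=(n_sites, 3))                       # hypothetical covariates
richness = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=5, size=n_sites)

X_train, X_test, y_train, y_test = train_test_split(X, richness, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 2))

# Prediction step: apply the fitted model to covariates for every 1 km^2 grid cell
grid_covariates = rng.normal(size=(10_000, 3))          # placeholder global grid
predicted_richness = model.predict(grid_covariates)     # values that would be mapped
```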
“For centuries, we’ve mapped mountains, forests, and oceans. But these fungi have remained in the dark, despite the extraordinary ways they sustain life on land”, says Dr. Toby Kiers, Executive Director, SPUN. “They cycle nutrients, store carbon, support plant health, and make soil. When we disrupt these critical ecosystem engineers, forest regeneration slows, crops fail, and biodiversity aboveground begins to unravel. This is the first time we’re able to visualize these biodiversity patterns — and it’s clear we are failing to protect underground ecosystems.”
This effort, led by SPUN, brings together GlobalFungi, Fungi Foundation, the Global Soil Mycobiome consortium, and researchers around the world to reveal patterns of fungal richness and rarity across biomes, from the Amazon to the Arctic, and marks a major breakthrough in how we understand and visualize life beneath our feet.
“For too long, we’ve overlooked mycorrhizal fungi. These maps help alleviate our fungus blindness and can assist us as we rise to the urgent challenges of our times,” says Dr. Merlin Sheldrake, Director of Impact at SPUN.
Advancing underground science
In 2021, SPUN launched with a clear goal: to map Earth’s underground fungal communities and develop concrete resources for decision-makers, including in law, policy, and conservation and climate initiatives. “These maps are more than scientific tools—they can help guide the future of conservation,” said Dr. Michael Van Nuland, lead author and SPUN’s Lead Data Scientist. “Food security, water cycles, and climate resilience all depend on safeguarding these underground ecosystems.”
This work is being guided by a team of prominent advisors, including conservationist Jane Goodall, authors Michael Pollan and Paul Hawken, and the founder of the Fungi Foundation, Giuliana Furci.
A new tool for conservation
SPUN’s findings are now accessible through an interactive tool, Underground Atlas, allowing users to explore mycorrhizal diversity patterns anywhere on Earth. “The idea is to ensure underground biodiversity becomes as fundamental to environmental decision-making as satellite imagery”, says Jason Cremerius, Chief Strategy Officer at SPUN. Conservation groups, researchers, and policymakers can use the platform to identify biodiversity hotspots, prioritize interventions, and inform protected area designations. The tool enables decision-makers to search for underground ecosystems predicted to house unique, endemic fungal communities and explore opportunities to establish underground conservation corridors.
The maps will also be critical for leveraging fungi to regenerate degraded ecosystems. “Restoration practices have been dangerously incomplete because the focus has historically been on life aboveground,” said Dr. Alex Wegmann, a lead scientist at The Nature Conservancy. “These high-resolution maps provide quantitative targets for restoration managers to establish what diverse mycorrhizal communities could and should look like.”
Urgent action is needed to incorporate findings into international biodiversity law and policy. For example, the Ghanaian coast is a global hotspot for mycorrhizal biodiversity. But the country’s coastline is eroding at roughly two meters per year. Scientists worry this critical biodiversity will soon be washed into the sea.
“Underground fungal systems have been largely invisible in law and policy,” said César Rodriguez-Garavito, Professor of Law and Faculty Director of the More-Than-Human Life (MOTH) Program at NYU School of Law. “These data are incredibly important in strengthening law and policy on climate change and biodiversity loss across all of Earth’s underground ecosystems.”
Global reach, local impact
Together with partners, SPUN has now assembled a dataset of over 40,000 samples comprising 95,000 mycorrhizal fungal taxa. With a global network of over 400 scientists and 96 “Underground Explorers” from 79 countries, the international team is now sampling Earth’s most hard-to-access, remote underground ecosystems, including in Mongolia, Bhutan, Pakistan, and Ukraine. This global effort establishes a critical baseline for understanding how these underground communities function and respond to environmental changes. “These maps reveal what we stand to lose if we fail to protect the underground,” says Dr Kiers.
SPUN is seeking new collaborators and funders to scale this work. Currently, only 0.001% of Earth’s surface has been sampled. More data means better maps, more precise restoration benchmarks, and more accurate identification of at-risk underground biodiversity. SPUN invites the public, conservationists, researchers and restoration groups to make use of the Underground Atlas, and provide feedback to help refine future versions.
Supporters of SPUN
Dr. Rebecca Shaw, Chief Scientist at WWF, explains: “Mycorrhizal fungi need to be recognized as a priority in the ‘library of solutions’ to some of the world’s greatest challenges – biodiversity decline, climate change, and declining food productivity. They deliver powerful ecosystem services whose benefits flow directly to people. This research should help elevate the protection and restoration of fungi and their networks to the top of conservation priorities.”
Call to action:
- Researchers: Partner with SPUN to expand data collection and analysis.
- Conservationists: Collaborate with SPUN to design informed conservation priorities and strategies.
- Policymakers: Leverage SPUN’s research to incorporate fungi and underground ecosystems into global and national biodiversity policies and restoration targets.
- Public: Explore the Underground Atlas and donate.
- Funders and donors: Connect with SPUN to fund the next phase of sampling and community-led restoration.
Continue Reading
-
The Media Tsunami of 3I/ATLAS. The two phone calls from Washington DC… | by Avi Loeb | Jul, 2025
(Image credit: Times of India) The two phone calls from Washington DC arrived within the same split second, but I declined both on my Apple Watch as I was answering questions in a live TV interview about the new interstellar object 3I/ATLAS, discovered on July 1, 2025. As soon as the interview ended, I listened to the two recorded messages and found that one was from the office of Representative Anna Paulina Luna and the second from Reuters. Both wanted to know more about 3I/ATLAS. I informed Representative Luna about the opportunity to get a closer look at 3I/ATLAS with the Juno spacecraft in orbit around Jupiter, based on a paper that I submitted for publication a few hours earlier (accessible here) with Adam Hibberd and Adam Crowl from the Initiative for Interstellar Studies.
If I had only a few months left to live with a choice of where to spend them, I would have loved to board Juno on a collision course with 3I/ATLAS. The pre-collision view of a large interstellar object that travelled through interstellar space for billions of years must be magnificent.
A few hours later, I was informed that Joe Rogan posted a YouTube segment in which he discusses my recent papers on 3I/ATLAS. This segment received more than a million views within 12 hours. It was followed by interview requests that I received from CBS, CNN, NewsNation and international TV channels.
Communicating my research to the public is an important responsibility. However, I find less value in talking about scientific work than doing it. This is why I wrote a total of nine new scientific papers over the past month alone (accessible here). Before the internet was invented and when written news was printed on paper, it was often said that “Today’s newspaper is tomorrow’s fish and chip paper.” People forget the news of yesterday, but the physical reality maintains its nature. Therefore, any new scientific knowledge about that physical reality is far more precious than the chitchat in social media or news outlets about it. The nature of 3I/ATLAS will not be revealed by listening to opinions of commentators but rather by analyzing data collected by state-of-the-art telescopes. We can learn more by observing the interstellar show of 3I/ATLAS on the sky during the coming months with our best telescopes from the radio band to X-rays.
As a scientist, I respond to evidence collected by instruments. As of now, we have anomalies but we need more data on 3I/ATLAS or other interstellar objects in order to ascertain whether any one of them is technological in origin. Once we find an interstellar artifact beyond a reasonable doubt, the next step will be to figure out its technological capabilities and intent. The analysis of the available data could benefit from artificial intelligence (AI), especially if the object shows complex patterns. The reverse engineering of the alien technologies could spark new growth frontiers in human-made products. Those would represent a quantum leap in our abilities if the aliens benefitted from thousands or millions of years of advanced scientific research compared to the one century we had so far after quantum mechanics was discovered.
But there is also a security aspect to technological interstellar objects. For objects inside the Earth’s atmosphere, it is necessary to employ state-of-the-art cameras to monitor the sky around the globe at infrared, optical and radio wavebands and then analyze the data with the best AI software available. This is the approach taken by the Galileo Project, under my leadership. For interstellar objects far away from Earth, one would like to analyze data coming from the NSF-DOE Vera C. Rubin Observatory in the southern hemisphere and build a similar observatory in the northern hemisphere. Again, the data should be analyzed by AI algorithms. Over the coming decade, the Rubin Observatory is expected to discover a new interstellar object every few months.
To do a good job on both types of objects, it is imperative to allocate major funds and attract the best minds in the world, as I noted in a congressional briefing, attended by Representative Luna, on May 1, 2025 (accessible here).
When approaching a ‘blind date’, it is prudent to listen to the other side before speaking. This is a particularly good practice for a blind date with a visitor from another star, since all bets are off. We must first learn what the alien visitor is about and then design optimal communication and mitigation strategies. A friendly appearance might be misleading, as it may conceal a ‘Trojan Horse’. A neighbor with superior intelligence could manipulate us. This is a concern for the artificial intelligence we are currently creating, but even more so for alien intelligence. It is unclear which of the two poses a bigger existential threat to the future of humanity. The answer will depend on the nature of any interstellar artifacts uncovered by the Rubin Observatory and other survey telescopes over the next decade.
As I suggested in my CBS interview (accessible here), it is important to establish a risk scale for interstellar encounters, with 0 marking a natural comet or asteroid and 10 marking an alien spaceship of unknown intent.
The U.S. government has little to offer other than alerting citizens to major natural disasters, such as the recent tsunami warning issued to Hawaiian residents, triggered by an 8.8-magnitude earthquake off Russia’s eastern Kamchatka Peninsula. An encounter with an artifact rated 10 on the interstellar risk scale would be a tsunami of astronomical proportions. Trading options on stock market volatility would not make much sense, because money will lose its value in the aftermath of the encounter.
ABOUT THE AUTHOR
(Image Credit: Chris Michel, National Academy of Sciences, 2023) Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. The paperback edition of his new book, titled “Interstellar”, was published in August 2024.
Continue Reading
-
After 100 Years of Quantum Mechanics, Physicists Still Can’t Agree on Anything
In July 1925—exactly a century ago—famed physicist Werner Heisenberg wrote a letter to his equally famous colleague, Wolfgang Pauli. In it, Heisenberg confesses that his “views on mechanics have become more radical with each passing day,” requesting Pauli’s prompt feedback on an attached manuscript he’s considering whether to “complete…or to burn.”
That was the Umdeutung (reinterpretation) paper, which set the foundation for a more empirically verifiable version of quantum mechanics. For that reason, scientists consider Umdeutung’s publication date to be quantum mechanics’ official birthday. To commemorate this 100th anniversary, Nature asked 1,101 physicists for their take on the field’s most fiercely debated questions, revealing that, as in the past, the field of quantum physics remains a hot mess.
Published today, the survey shows that physicists rarely converge on their interpretations of quantum mechanics and are often unsure about their answers. They tend to see eye-to-eye on two points: that a more intuitive, physical interpretation of the math in quantum mechanics is valuable (86%), and that, perhaps ironically, quantum theory itself will eventually be replaced by a more complete theory (75%). A total of 15,582 physicists were contacted, of whom 1,101 responded, giving the survey a 7% response rate. Of the 1,101, more than 100 respondents sent additional written answers with their takes on the survey’s questions.
‘Textbook’ approach still tops, with a caveat
Participants were asked to name their favored interpretation of the measurement problem, a long-standing conundrum in quantum theory regarding the uncertainty of quantum states in superposition. No clear majority emerged from the options given. The frontrunner, with 36%, was the Copenhagen interpretation, in which (very simply) quantum worlds are distinct from classical ones, and particles in quantum states only gain properties when they’re measured by an observer in the classical realm.
© Nature
It’s worth noting that detractors of the Copenhagen interpretation scathingly refer to it as the “shut up and calculate” approach. That’s because it often glosses over weedy details for more practical pursuits, which, to be fair, is really powerful for things like quantum computing. However, more than half of physicists who chose the Copenhagen interpretation admitted they weren’t too confident in their answers, evading follow-up questions asking them to elaborate.
Still, more than half of the respondents, 64%, demonstrated a “healthy following” of several other, more radical viewpoints. These included information-based approaches (17%), many worlds (15%), and the Bohm-de Broglie pilot wave theory (7%). Meanwhile, 16% of respondents submitted written answers that either rejected all options, claimed we don’t need any interpretations, or offered their personal takes on the best interpretation of quantum mechanics.
So, much like many other endeavors in quantum mechanics, we’ll just have to see what sticks (or more likely, what doesn’t).
Divided results, equivocal reviews
Physicists who discussed the results with Nature had mixed feelings about whether the lack of consensus is concerning. Elise Crull at the City University of New York, for instance, told Nature that the ambiguity suggests “people are taking the question of interpretations seriously.”
Experts at the cross-section of philosophy and physics were more critical. Tim Maudlin, a philosopher of physics at New York University, told Gizmodo that the survey’s categorization of certain concepts is misleading and conducive to contradictory answers—a discrepancy that the respondents don’t seem to have realized, he said. “I think the main takeaway from this is that physicists do not think clearly—and have not formed strongly held views—about foundational issues in quantum theory,” commented Maudlin, my professor in graduate school.
In an email to Gizmodo, Sean Carroll, a theoretical physicist at Johns Hopkins who responded to the survey, expressed similar concerns. Several factors may be behind this lack of consensus, he said, but there’s a prevalent view that it “doesn’t matter as long as we can calculate experimental predictions,” which he said is “obviously wrong.”
“It would be reasonable if we thought we otherwise knew the final theory of physics and had no outstanding puzzles,” added Carroll, who was part of an expert group consulted for the survey. “But nobody thinks that.”
“It’s just embarrassing that we don’t have a story to tell people about what reality is,” admitted Carlton Caves, a theoretical physicist at the University of New Mexico in Albuquerque who participated in the survey, in Nature’s report.
However, the survey’s results do seem to hint at a general belief in the importance of a solid theoretical groundwork, with almost half of the participants agreeing that physics departments don’t give sufficient attention to quantum foundations. On the other hand, 58% of participants answered that experimental results will help inform which theory ends up being “the one.”
Schrödinger’s consensus, kind of
For better or worse, the survey represents the lively, fast-developing field of quantum science—which, if you’ve been following our coverage, can get really, really weird. A lack of explanation or consensus isn’t necessarily bad science—it’s just future science. After all, quantum mechanics, for all its complexity, remains one of the most experimentally verified theories in the history of science.
It’s fascinating to see how these experts can disagree so wildly about quantum mechanics, yet still offer solid evidence to support their views. Sometimes, there’s no right or wrong answer—just different ones.
Wolfgang Pauli, Werner Heisenberg, and Enrico Fermi during the 1927 International Congress of Physicists, where the new quantum mechanics was discussed in depth. To the left are the first lines of Heisenberg’s letter to Pauli on July 9, 1925. Credit: Heisenberg Society/CERN, Wolfgang Pauli Archive.
For you fellow quantum enthusiasts, I highly recommend that you check out the full report for the entire account of how and where physicists were split. You can also find the original survey, the methodology, and an anonymized version of all the answers at the end of the report.
And if you do take the survey, or at least part of it, feel free to share your answers. Oh, and let me know whether you believe Heisenberg should have burned Umdeutung after all.
Continue Reading
-
Hubble Space Telescope spots rogue planet with a little help from Einstein: ‘It was a lucky break’
Astronomers discovered a new rogue planet lurking in archival data gathered by the Hubble Space Telescope, and the find is thanks to a little serendipity — and a little help from the genius himself, Albert Einstein.
Rogue, or “free-floating,” planets are worlds that don’t orbit a star. They earn their rogue status when they are ejected from their home systems due to interactions with their sibling planets or via gravitational upheaval caused by passing stars.
The most successful way of detecting an extrasolar planet, or exoplanet, in general is waiting until it crosses, or “transits,” the face of its parent star. Being cosmic orphans without a stellar parent, however, rogue planets can’t be detected in this way. Fortunately, a phenomenon first predicted by Einstein in 1915 offers a way to spot these rogue worlds.
“Free-floating planets, unlike most known exoplanets, don’t orbit any star. They drift alone through the galaxy, in complete darkness, with no sun to illuminate them. That makes them impossible to detect using traditional planet-detection techniques, which rely on light from a host star,” Przemek Mroz, study team member and a professor at the University of Warsaw, told Space.com. “To find these elusive objects, we use a technique called gravitational microlensing.”
How Einstein became a rogue planet hunter
Einstein’s 1915 theory of gravity, general relativity, suggests that objects with mass cause the very fabric of space to “warp.” The bigger the mass, the greater the warp and thus the stronger the gravity that arises from the warp.
Gravitational lensing arises when light from a background source passes by the warp and its path gets curved. This can amplify that background source, an effect that astronomers use with Hubble and the James Webb Space Telescope (JWST) to study extremely distant galaxies that would usually be too faint to see.
“This phenomenon occurs when a massive object, the lens, passes in front of a distant star (the source), magnifying the star’s light due to the lens’s gravity,” Mroz explained. “The beauty of microlensing is that it works even if the lensing object emits no light at all.
“During microlensing events, the source star gets temporarily magnified. We can estimate the mass of the lensing object by measuring the duration and other properties of the event.”
Mroz added that when microlensing events are generated by passing rogue planets, they are usually very short, lasting less than a day.
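For context, the "duration" used to estimate the lens mass is usually expressed as the Einstein timescale: the time the source takes to cross the lens's angular Einstein radius. The standard microlensing relations (general textbook formulas, not results specific to this study) are:

```latex
\[
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_S - D_L}{D_L D_S}},
\qquad
t_E = \frac{\theta_E}{\mu_{\mathrm{rel}}},
\]
```

where M is the lens mass, D_L and D_S are the distances to the lens and the source, and μ_rel is the relative lens-source proper motion. Because the Einstein radius scales with the square root of the lens mass, planetary-mass lenses produce events lasting hours rather than the weeks typical of stellar lenses, which is why rogue-planet events are so short.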
A diagram shows an exaggerated gravitational microlensing situation (Image credit: Robert Lea (created with Canva)).
The particular microlensing event the team studied to reveal this new rogue world is designated OGLE-2023-BLG-0524 and was observed by Hubble on May 22, 2023, remaining buried in data from the space telescope.
“It was discovered in the direction of the Galactic bulge by the Optical Gravitational Lensing Experiment [OGLE] survey, and independently observed by the Korea Microlensing Telescope Network [KMTNet],” Mroz said. “The Einstein timescale of the event was just eight hours, making it one of the shortest microlensing events on record.”
Based on the microlensing event’s properties, Mroz and colleagues were able to estimate that the lensing object could be a Neptune-mass planet located in the Milky Way’s galactic disk, around 15,000 light-years away. Alternatively, the rogue world could be a larger but more distant Saturn-mass object in the Milky Way’s galactic bulge, roughly 23,000 light-years away.
“Both scenarios are consistent with the microlensing signal we observed,” Mroz said.
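To make that degeneracy concrete, here is a minimal back-of-the-envelope sketch, not the team’s actual model: it assumes a bulge source roughly 8 kiloparsecs away and the ~5 milliarcseconds-per-year relative motion Mroz quotes later in the article, then computes the standard Einstein-radius crossing time for each candidate lens.

```python
# Back-of-the-envelope illustration of the microlensing mass-distance degeneracy.
# Assumptions (illustrative, not the team's fitted values): a bulge source at
# ~8 kpc and a lens-source relative proper motion of ~5 mas/yr.
import math

G    = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
C    = 2.998e8                      # speed of light, m/s
KPC  = 3.086e19                     # metres per kiloparsec
MAS  = math.radians(1e-3 / 3600)    # one milliarcsecond in radians
YEAR = 365.25 * 24 * 3600           # seconds per year

D_SOURCE = 8.0 * KPC                # assumed source distance (~26,000 light-years)
MU_REL   = 5.0 * MAS / YEAR         # assumed relative proper motion, rad/s

def einstein_timescale_hours(lens_mass_kg, lens_distance_kpc):
    """Einstein-radius crossing time t_E = theta_E / mu_rel, in hours."""
    d_l = lens_distance_kpc * KPC
    theta_e = math.sqrt(4 * G * lens_mass_kg / C**2
                        * (D_SOURCE - d_l) / (d_l * D_SOURCE))   # radians
    return theta_e / MU_REL / 3600

M_NEPTUNE = 1.02e26   # kg
M_SATURN  = 5.68e26   # kg

# 15,000 and 23,000 light-years are roughly 4.6 and 7.1 kiloparsecs.
print(f"Neptune-mass lens in the disk: ~{einstein_timescale_hours(M_NEPTUNE, 4.6):.0f} h")
print(f"Saturn-mass lens in the bulge: ~{einstein_timescale_hours(M_SATURN, 7.1):.0f} h")
# Both come out at order ten hours, the same ballpark as the observed ~8 h
# event, so the light curve alone cannot separate the two scenarios.
```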
Hunting for planets in Hubble’s archives
One of the most important tasks facing the team after the discovery of the microlensing event OGLE-2023-BLG-0524 was determining that it was indeed caused by a rogue planet, and not by a planet bound to a star but orbiting on a wide path, far from its stellar parent.
They reasoned that if the planet had a nearby host star, within 10 times the distance between Earth and the sun (10 AU), they would have likely seen a second, longer-lasting microlensing signal from the star. The researchers saw no such signature, so they could rule out that the planet had a close stellar companion.
However, if the planet orbits a star at a much wider separation, greater than 10 AU, the odds of detecting the host star are much lower.
“This means we can’t fully rule out the wide-orbit scenario, but here’s where it gets interesting,” Mroz said. “Because the lens and the background star are slowly moving relative to each other, they will eventually separate in the sky.
“If we detect light from the lensing object at that point, we’ll know it’s not completely free-floating.”
This artist’s impression shows the free-floating planet CFBDSIR2149, which at 100 light-years away is the closest such rogue world to our own solar system. (Image credit: ESO/L. Calçada/P. Delorme/Nick Risinger (skysurvey.org)/R. Saito/VVV Consortium)
Unfortunately, Mroz explained, the sheer distances to the planet and the background star mean that their relative motion across the sky appears incredibly small, about 5 milliarcseconds per year.
“It will take at least a decade before we can hope to resolve them with current instruments, such as the Hubble Space Telescope or large ground-based telescopes,” Mroz said.
Hubble was particularly useful in this rogue planet hunt because the region of the sky that hosts the microlensing event was observed by the long-serving space telescope way back in 1997. That’s over 25 years before the microlensing event.
“That gave us a unique opportunity to test whether there might be a star associated with the lens,” Mroz said. “According to our model, by 1997, the lens and source should have been separated by 0.13 arcseconds. That’s tiny, but within Hubble’s capabilities. If the lens were a bright star, we would have seen it in those old images. But we didn’t.”
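As a rough sanity check on those numbers, the quoted 0.13 arcseconds is simply the ~5 milliarcseconds-per-year relative motion accumulated over the 26 years between the 1997 Hubble images and the 2023 event:

```python
# Quick consistency check on the quoted figures (not the team's calculation).
years_elapsed = 2023 - 1997          # archival Hubble images to microlensing event
relative_motion_mas_per_yr = 5.0     # value quoted by Mroz
separation_arcsec = years_elapsed * relative_motion_mas_per_yr / 1000
print(f"Expected 1997 lens-source separation: {separation_arcsec:.2f} arcsec")  # ~0.13
```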
The absence of detectable light at the expected lens position told the team that any potential host star would have to be very faint.
“Depending on the stellar population model we use, that rules out around 25% to 48% of possible companion stars,” Mroz said. “That pushes us further toward the conclusion that this may truly be a free-floating planet.”
Mroz explained that OGLE-2023-BLG-0524 was discovered by team member Mateusz Kapusta by chance while the team was following up on microlensing events.
“This discovery was partly serendipity!” Mroz said. “It was a lucky break, but we believe there are many more such opportunities hidden in the data.
“Microlensing events occur all the time in dense stellar fields, and many of those fields have been observed by Hubble in the past. That means there could be more interesting events waiting to be discovered in the Hubble data.”
The team’s research is available as a preprint on the paper repository arXiv.
-
Greenland subglacial flood bursts through ice sheet surface
Using data from several Earth-observing satellites, including ESA’s CryoSat and the Copernicus Sentinel-1 and Sentinel-2 missions, scientists have discovered that a huge flood beneath the Greenland Ice Sheet surged upwards with such force that it fractured the ice sheet, resulting in a vast quantity of meltwater bursting through the ice surface.
Partially funded by ESA’s Earth Observation FutureEO programme, an international team of researchers, led by scientists at Lancaster University and the Centre for Polar Observation and Modelling in the UK, studied a previously undetected lake beneath the ice sheet in a remote region of northern Greenland.
Using 3D models of the ice sheet surface from the ArcticDEM project, alongside data from multiple satellite missions including ESA’s ERS, Envisat and CryoSat, and Europe’s Copernicus Sentinel-1 and Sentinel-2, and NASA’s ICESat-2 missions, the researchers discovered that, in 2014, this subglacial lake suddenly drained.
Their research, published today in Nature Geoscience, reveals how, under extreme conditions, flooding from the drainage of a lake underneath the ice could force its way upwards and escape at the ice sheet surface.
These new findings shed light on the destructive potential of meltwater stored beneath the ice sheet.
Greenland subglacial lake outburst
Over a 10-day period in the summer of 2014, a massive crater – 85 metres deep and spanning 2 square kilometres – formed on the surface of the ice sheet as 90 million cubic metres of water were suddenly released from this hidden subglacial lake.
This is equivalent to about nine hours’ worth of water thundering over Niagara Falls at peak flow, making it one of the largest recorded subglacial floods in Greenland.
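For readers who want to see where that comparison comes from, here is a quick check, assuming a commonly cited peak flow over Niagara Falls of roughly 2,800 cubic metres per second (a figure that is not from the paper itself):

```python
# Back-of-the-envelope check of the Niagara Falls comparison (illustrative only):
# 90 million cubic metres released over roughly nine hours implies an average
# discharge of about 2,800 m^3/s, comparable to the commonly cited peak flow
# over Niagara Falls.
flood_volume_m3 = 90e6
duration_s = 9 * 3600
print(f"Implied average discharge: {flood_volume_m3 / duration_s:,.0f} m^3/s")  # ~2,778
```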
While the sudden surge of meltwater was startling in itself, even more alarming was the accompanying damage – towering 25-metre-high ice blocks torn from the surface, deep fractures in the ice sheet, and the ice surface scoured by the flood’s destructive force.
Subglacial lake outburst fractures and elevation change
Jade Bowling, who led this work as part of her PhD at Lancaster University, said, “When we first saw this, because it was so unexpected, we thought there was an issue with our data. However, as we went deeper into our analysis, it became clear that what we were observing was the aftermath of a huge flood of water escaping from underneath the ice.
“The existence of subglacial lakes beneath the Greenland Ice Sheet is still a relatively recent discovery, and – as our study shows – there is still much we don’t know about how they evolve and how they can impact on the ice sheet system.
“Importantly, our work demonstrates the need to better understand how often they drain, and, critically, what the consequences are for the surrounding ice sheet.”
While it was previously believed that meltwater travels downwards from the surface of the ice sheet to its base and eventually flows into the ocean, these new findings reveal that water can also move in the opposite direction – upwards through the ice.
Cross-section (A–A) of elevation change, Greenland Ice Sheet
Even more unexpected was the discovery that the flood took place in an area where models had indicated that the ice bed was frozen. This led researchers to suggest that intense pressure caused fractures beneath and through the ice sheet, creating channels through which the water could rise.
Current models that predict how ice sheets will respond to climate change and increased melting do not account for these upward-flowing, fracture-driven processes.
Mal McMillan, Co-Director of the Centre of Excellence in Environmental Data Science at Lancaster University, and Co-Director of Science at the UK Centre for Polar Observation and Modelling, said, “This research demonstrates the unique value of long-term satellite measurements of Earth’s polar ice sheets, which – due to their vast size – would otherwise be impossible to monitor.
“Satellites represent an essential tool for monitoring the impacts of climate change, and provide critical information to build realistic models of how our planet may change in the future.
“This is something that all of us depend upon for building societal resilience and mitigating the impacts of climate change.”
Cross-section (B–B) of elevation change, Greenland Ice Sheet
ESA’s Diego Fernandez, Head of the Earth Observation Science Section, noted, “This discovery is remarkable, and we’re proud that our Science for Society 4D Greenland project has played a key role in making it possible.
“The project’s goal is to deepen our understanding of the hydrology of the Greenland Ice Sheet by leveraging data from Earth observation satellites, and, in particular, to shed light on how the ice sheet is responding to climate change.
“This result adds to the body of knowledge we are establishing through the ESA Polar Science Cluster on how the Arctic is changing in response to increased warming. Gaining insight into its hydrology is crucial for understanding these changes – and for predicting how the ice sheet will contribute to global sea-level rise in a warming climate.
“We congratulate the research team on advancing our understanding of this vulnerable region.”