- An Unusual Carbon dioxide-rich Disk Detected Around A Young Star Challenges Planet Formation Models – astrobiology.com
- James Webb Space Telescope spots odd planet-forming disk around infant star – Space
- Abundant carbon dioxide in planet-forming disk challenges planet origin models – Penn State University
- Butterfly Star shows off its planet-forming disc – Earth.com
- James Webb Telescope Captures Space Butterfly – The Weather Channel
Category: 7. Science
-
An Unusual Carbon dioxide-rich Disk Detected Around A Young Star Challenges Planet Formation Models – astrobiology.com
-
Species-specific detection of Schistosoma japonicum using the ‘SNAILS’ DNA-based biosensor
-
Researchers fold glass for optical devices using photonic origami
Researchers from Tel Aviv University have developed a chip-based method to fold glass sheets into microscopic 3D photonic structures – a process called photonic origami. The technique could produce tiny, complex optical devices for data processing, sensing, and experimental physics.
“Existing 3D printers produce rough 3D structures that aren’t optically uniform and thus can’t be used for high-performance optics,” said Tal Carmon of Tel Aviv University. Inspired by pinecone scales bending to release seeds, his team used a laser-induced technique to bend ultra-thin glass sheets into ultra-smooth, transparent structures.
The paper’s first author, Manya Malhotra. Credit: Tal Carmon, Tel Aviv University.
In Optica, the team reported record-setting 3 mm-long structures just 0.5 microns thick, with less than a nanometer of surface variation. They fabricated helixes and concave and convex mirrors that reflect light without distortion. “Similar to how large 3D printers can fabricate almost any household item, photonic origami could enable a variety of tiny optical devices,” said Carmon. Potential uses include micro-zoom lenses for smartphones and light-based components for faster, more efficient computing.
The discovery happened by accident. Graduate student Manya Malhotra was asked to locate an invisible laser beam on glass by raising its power until the glass glowed – instead, the glass folded. She went on to become the pioneering expert in photonic origami. The folding occurs when one side of the sheet liquefies under laser heat and surface tension overtakes gravity, pulling the glass into a fold.
Researchers used their new technique to fold a glass bar (a), create an optical resonator (b), achieve helical bending (c), and create a table with a parabolic reflector (middle, lower row). Credit: Tal Carmon, Tel Aviv University.
Lab engineer Ronen Ben Daniel fabricated silica glass layers on silicon chips, undercutting them with etching before folding them with CO2 laser pulses. Sheets folded in under a millisecond, at speeds of 2 m/s and accelerations above 2,000 m/s². “The level of control we had over 3D microphotonic architecture came as a pleasant surprise,” said Carmon.
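The two reported figures are mutually consistent: reaching 2 m/s within a sub-millisecond fold already implies the quoted acceleration scale. A quick sanity check (our arithmetic, not the paper’s):

```python
# a = v / t: folding to 2 m/s in under a millisecond
v = 2.0       # m/s, reported fold speed
t = 1e-3      # s, upper bound on fold duration ("under a millisecond")
print(v / t)  # 2000.0 m/s^2 -- matches "accelerations above 2,000 m/s^2"
```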
The team folded sheets up to 10 microns thick into 90-degree bends and helices with precision of 0.1 microradians. They also created a microscopic glass ‘table’ with a concave cavity mirror, inspired by work from P.K. Lam of Australian National University on exploring deviations from Newtonian gravity. Starting with a sheet 5 microns thick, they patterned and folded it into a 3D structure light enough to be optically levitated. Such experiments could help probe dark matter mysteries.
“High-performance, 3D microphotonics had not been previously demonstrated,” said Carmon. “This new technique brings silica photonics – using glass to guide and control light – into the third dimension, opening up entirely new possibilities for high-performance, integrated optical devices.”
-
Aug. 30, 1984: Space Shuttle Discovery launches
Today in the history of astronomy, a space program workhorse takes flight.
Discovery‘s launchpad abort was the first since 1965’s Gemini VI-A. Credit: NASA
- NASA’s Space Shuttle Discovery, the program’s third orbiter, experienced significant pre-launch setbacks including thermal shield flaws and multiple aborted launch attempts due to computer and engine malfunctions.
- Initial launch attempts were delayed or aborted in June and July 1984 due to identified technical issues necessitating repairs and software updates.
- Discovery successfully launched on August 30, 1984, commencing a 27-year operational lifespan.
- During its operational life, Discovery participated in pivotal missions including International Space Station construction and resupply, Hubble Space Telescope deployment, and Mir Space Station visits, ultimately accumulating 149 million miles of flight.
The third member of NASA’s space shuttle program, Discovery had a fraught journey to its launch. First, a test in June 1984 found a flaw in the thermal shield; then a launch scheduled for later the same month was delayed due to a computer failure. Attempt No. 2 was halted only four seconds before liftoff, when the computer registered a problem with one of the engines. Once the engine had been replaced, a third launch attempt was scheduled and then scratched due to a software problem. Finally, on Aug. 30, 1984, Discovery blasted off, beginning a 27-year career – it would become the most-flown orbiter in NASA’s fleet. During that long lifetime, Discovery took part in the construction and resupply of the International Space Station, deployed the Hubble Space Telescope, and flew two missions to the Mir Space Station. By the time it was retired after its final mission, in 2011, it had flown 149 million miles.
-
A new mega-earthquake hotspot could be forming beneath the Atlantic
A new tectonic fault could be emerging beneath the Atlantic Ocean, raising the risk of powerful earthquakes and tsunamis that could ripple across the basin. That’s according to a new study published this week in Nature Geoscience.
For centuries, scientists have puzzled over why Portugal has suffered huge earthquakes despite lying far from the world’s major fault lines.
On 1 November 1755, Lisbon was devastated by a magnitude 8.7 quake that killed tens of thousands and sent tsunami waves as far as the Caribbean. More recently, a magnitude 7.8 tremor struck off Portugal’s coast in 1969, killing 25 people.
“One of the problems is that these earthquakes occurred on a completely flat plain, far from the faults,” Prof João Duarte, a geologist at the University of Lisbon and lead author of the study, told BBC Science Focus.
“After the 1969 earthquake, people started to realise that something strange was going on, because it had the signature of a subduction zone, yet there isn’t one there.”
Subduction zones – where one tectonic plate dives beneath another – are responsible for the planet’s most devastating ‘megathrust’ quakes, such as the 2004 Indian Ocean and 2011 Tōhoku disasters. But the Atlantic has long been considered relatively calm as its plates slowly drift apart along a mid-ocean ridge.
Duarte’s team pieced together seismic records and computer models of the Horseshoe Abyssal Plain, a stretch of deep seafloor southwest of Portugal. They found evidence that the mantle – the hot, dense layer beneath Earth’s crust – is peeling away in a process called delamination.
“The base of the plate is separating like the sole of a shoe peeling off,” Duarte said. “That was the first Eureka moment when I thought, ‘aha, there’s something there’. The second was when the computer models also showed delamination was happening.”
This engraving depicts the 1755 Lisbon earthquake. A combination of the earthquake, tsunami and subsequent fires almost completely destroyed the Portuguese capital. Credit: Getty
Such unpeeling is almost unheard of in oceanic crust, which usually behaves like a “crème brûlée”: a rigid, buoyant layer sitting atop a squishier one beneath.
But here, water appears to have seeped into the rock over millions of years, chemically weakening it and allowing chunks of mantle to sink into Earth’s depths.
The findings suggest we may be witnessing the birth of a new subduction zone in the Atlantic – one that could eventually pull Africa, Europe and the Americas back together into a future supercontinent.
For now, the more immediate concern is seismic hazard.
“Big earthquakes are going to happen again,” Duarte said, warning that the impacts of these could devastate unprepared coastal regions across the Atlantic.
“If you see on the forecast that it’s going to rain tomorrow, you take an umbrella,” he continued. “You don’t need to know exactly what minute it will start raining because you are prepared.
“With earthquakes, it’s the same thing; we don’t know when a major one will hit, but we know that one will, so we need to be prepared for that.”
About our expert
João Duarte is an assistant professor in tectonics at the University of Lisbon and president of the Tectonics and Structural Geology Division of the European Geosciences Union. His research has been published in journals including Geophysical Research Letters, Nature Communications and Geology.
-
Unlocking the hidden half of plants with DNA
When we walk across farmland, we see only the parts of plants above ground. Below the surface, roots provide stability, take up water and nutrients, and store carbon in the soil. Yet despite their importance, researchers have had no precise way to measure roots.
“We have always known that roots are important, but we have lacked a precise tool to measure them. It’s a bit like studying marine ecosystems without ever being able to dive beneath the surface of the water,” noted Henrik Brinch-Pedersen from Aarhus University.
Old ways failed to measure roots
The usual way meant digging up soil, rinsing roots, drying them, and weighing them. It took hours and destroyed many of the fine roots that matter most.
Scientists also tried methods like rhizotrons or tracer studies. These helped but often underestimated root biomass.
Fine roots – the ones that take up nutrients and release carbon – were especially easy to lose. Without them, much of the underground story stayed invisible.
DNA reveals hidden root systems
Now comes droplet digital PCR, or ddPCR. Instead of tearing apart the soil, scientists divide a tiny sample into thousands of droplets and test each for DNA. They use a genetic marker called ITS2, which acts like a barcode.
This doesn’t just show that roots exist in the soil. It shows which species they belong to and how much they contribute.
“It’s a bit like giving the soil a DNA test. We can suddenly see the hidden distribution of species and biomass without digging up the whole field,” said Brinch-Pedersen.
Reliable root measurements
Earlier approaches like qPCR struggled with soil contaminants and inconsistent reactions. The ddPCR method avoids these problems by turning each droplet into a tiny reaction chamber.
The technology counts DNA molecules directly, using statistics to link those counts to root biomass. The result is a reliable measurement, even in complex soils with multiple plant species.
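The statistics in question are the standard Poisson correction used across droplet digital PCR. A minimal sketch of that calculation (ours, not the study’s code; the ~0.85 nL droplet volume is an assumption based on common commercial systems):

```python
import math

def ddpcr_concentration(positive: int, total: int,
                        droplet_volume_ul: float = 0.00085) -> float:
    """Estimate target copies per microlitre from droplet counts.

    With a fraction p of droplets testing positive, the Poisson-corrected
    mean number of target molecules per droplet is lambda = -ln(1 - p);
    dividing by the droplet volume gives the concentration.
    """
    p = positive / total
    lam = -math.log(1.0 - p)        # mean copies per droplet
    return lam / droplet_volume_ul  # copies per uL of partitioned sample

# Example: 4,500 positive droplets out of 18,000 -> ~338 copies/uL
print(round(ddpcr_concentration(4500, 18000)))
```

Mapping such copy counts onto root biomass per species then additionally requires per-species calibration, which the ITS2 barcode makes possible.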
The method also works across a wide range of concentrations. Whether the sample contains lots of roots or very few, ddPCR maintains accuracy. That flexibility makes it suitable for both small experimental plots and large, diverse fields.
DNA method works in mixed roots
To test it, researchers mixed soils containing ryegrass, white clover, and yarrow. They found that ddPCR identified each plant and measured its share of the underground biomass.
Hybrids like ryegrass crosses sometimes confused the system, but overall, the accuracy remained high.
Even when species grew together, the method could still tease apart their contributions. That is something old root washing methods could never do with confidence.
Why it matters
The new digital DNA method has the potential to transform how we study crops. Climate researchers can now calculate how much carbon plants actually store underground.
Breeders can choose crop varieties that send more growth into roots without hurting yields. Ecologists can finally see how plants compete or cooperate beneath the surface.
“We see great potential in using this method to develop varieties that store more carbon in the soil. It could become an important tool in future agriculture,” noted Brinch-Pedersen.
Roots play a role in storing carbon
Roots are more than anchors. They could be powerful allies in slowing climate change. Plants pull in carbon dioxide and send some of it underground, where it can stay locked away for decades. But without precise tools, proving this has been difficult. The ddPCR technology gives scientists that missing tool.
Imagine grasslands where certain species funnel extra carbon into deep roots. Or croplands where breeders select varieties with stronger underground storage potential. With the right choices, agriculture could shift from being a major emitter to a carbon sink.
Challenges still remain
The method isn’t perfect. Hybrids remain tricky, since their DNA overlaps with parent species. Each plant type needs its own probe, which takes effort to design and test.
Probe development requires careful validation, because even small genetic changes can throw results off. Still, once a probe library expands, the technology becomes faster and easier to apply.
“For us, the most important thing is that we have shown it can be done. That is the foundation we can build upon. Our vision is to expand the DNA library so that in the future we can measure many more species directly in soil samples,” noted Brinch-Pedersen.
Droplet digital PCR already works in medicine for spotting rare mutations and pathogens. Applying it to roots is a natural next step.
Once established, the method is simple, scalable, and powerful. It could help create crops that store more carbon belowground while keeping food production steady.
Beyond crops, the technology could also reshape how we manage ecosystems. Grasslands, forests, and mixed farmlands all depend on root systems we rarely measure well. Now, we finally have a way to look underground without tearing everything apart.
For the first time, researchers can explore the hidden world beneath our feet – and measure it with confidence.
The study is published in the journal Plant Physiology.
-
AI exposes 1,000+ fake science journals
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.
The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: spam messages from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, offering to publish his papers — for a hefty fee.
Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.
“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”
His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?
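The article doesn’t publish the model itself, so the following is only a rough sketch of the general shape of such a screener — website-derived features, DOAJ labels for training, humans reviewing whatever gets flagged. Every feature name and the model choice here are illustrative assumptions, not the study’s pipeline:

```python
# Illustrative sketch, not the study's published pipeline.
from sklearn.ensemble import RandomForestClassifier

FEATURES = [
    "board_has_established_researchers",  # 0/1, from the editorial-board page
    "grammar_errors_per_1k_words",        # from website text
    "articles_per_year",                  # publication volume
    "mean_affiliations_per_author",
    "author_self_citation_rate",
]

def train_screener(X, y):
    """X: one row of FEATURES per journal; y: 1 if DOAJ flagged it."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model

def prescreen(model, X_new, threshold=0.5):
    """Return indices of journals to hand to human reviewers."""
    probs = model.predict_proba(X_new)[:, 1]  # P(questionable)
    return [i for i, p in enumerate(probs) if p >= threshold]
```

Consistent with the team’s stated goal of interpretability, a tree ensemble like this also exposes which features drive each flag (e.g., via feature importances), though the authors’ actual model may differ.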
Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.
But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.
“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”
The shakedown
When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality — or, at least, that’s the goal.
A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.
Often, they target researchers outside of the United States and Europe, such as in China, India and Iran — countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.
“They will say, ‘If you pay $500 or $1,000, we will review your paper,’” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”
A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)
But keeping pace with the spread of those publications has been daunting for humans.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.
Among those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”
A firewall for science
Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.
“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”
The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.
The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data — what he calls a “firewall for science.”
“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”
Co-authors on the study included Han Zhuang at the Eastern Institute of Technology in China and Lizheng Liang at Syracuse University in the United States.
-
Future directions for understanding the coevolution of life and oxygen
Defining an oxygenated world
In order to constrain Earth System changes associated with environmental oxygen availability, a variety of proxies have been developed. These proxies are calibrated through careful laboratory-based investigations that determine specific redox sensitive chemical behaviours, alongside threshold calibrations derived from large datasets of modern and ancient depositional environments, where redox/palaeoredox state is either known, or can be estimated using independent data (e.g., presence/absence of fossil organisms inferred to have had high metabolic demands). After careful calibration, these proxies are then applied to suitable lithologies throughout the sedimentary rock record, and oxygen dynamics are reconstructed based on collations and collective interpretations of geochemical proxy data.
Herein, experts were asked what proxies they prefer to use to define changes in Earth’s surface oxygen concentrations through time (n = 49). Although only 49 experts responded to this question, 41 survey respondents in total self-identified as geochemists, so this is likely a fair representation of experts in the field (Fig. 1). To produce Fig. 1, each respondent’s single vote was partitioned equally among their chosen proxies, ensuring every respondent contributed exactly one vote; for example, a respondent who chose two proxies contributed half a vote to each (see the sketch below). Respondents who self-identified with multiple disciplines likewise had their vote partitioned across those disciplines in Fig. 1, which does not alter the overall vote totals for the proxies. The raw data are available at https://doi.org/10.6084/m9.figshare.29591216.v1.
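A minimal sketch of that weighting scheme (ours, not the authors’ code):

```python
from collections import defaultdict

def partition_votes(responses: dict[str, list[str]]) -> dict[str, float]:
    """Split each respondent's single vote evenly across their chosen
    proxies, so totals still sum to the number of respondents."""
    totals = defaultdict(float)
    for chosen in responses.values():
        for proxy in chosen:
            totals[proxy] += 1.0 / len(chosen)
    return dict(totals)

# A respondent naming two proxies contributes half a vote to each:
print(partition_votes({"r1": ["MIF-S", "Fe speciation"], "r2": ["MIF-S"]}))
# {'MIF-S': 1.5, 'Fe speciation': 0.5}
```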
Fig. 1: Current and future proxy development. A Geochemical proxies for constraining oxygenation through time, with preference allocated by expert respondents. B Proxies that may require further development, based on votes by expert respondents. Numbered proxies are listed in full in the Supplementary Information.
The top four “favourite” palaeoredox proxies were: mass-independent fractionation of sulfur isotopes (MIF-S) (>14% of responses), redox sensitive element enrichments (RSEs) (>12%), cerium anomalies (>11%), and iron (Fe) speciation (>7%).
As noted above, the loss of MIF-S has been used as a metric for defining the timing of the GOE for over two decades7 and constrains the point at which atmospheric oxygen rose across the 10−6 PAL threshold10,11. Based on the clear mechanistic interpretation for the loss of MIF-S, and consistent support for this interpretation based on independent multi-proxy records, it is considered as one of the defining characteristics, not only of the GOE, but as a benchmark for Earth’s overall oxygen trajectory. Redox sensitive elements were also cited as a preferred palaeoredox proxy, and incorporate a suite of elements including molybdenum (Mo), vanadium (V), uranium (U), and rhenium (Re), whose absolute and relative enrichments (most commonly calculated as enrichment factors, or ‘EFs’, in fine grained siliciclastic rocks, e.g., shale) can be used to distinguish the dominant redox conditions of overlying waters that existed during deposition39,40,41. Cerium anomalies (Ce/Ce*), which also constitute one of the top four favourite proxies, are distinguished by enrichments or depletions in Ce relative to a specific rare Earth element (REE) profile20,42. Ce is readily adsorbed onto manganese oxides, and oxygen concentrations sufficient to oxidise reduced Mn can therefore lead to more efficient Ce removal from seawater, equating to a negative Ce/Ce* anomaly (relative to the REE profile), most commonly recorded in early marine carbonate cements or carbonate sediments20,42. The fourth favourite palaeoredox proxy was Fe speciation. The Fe speciation protocol for modern sediments and ancient sedimentary rocks (commonly shales) has remained largely unchanged for almost 20 years43, and is contingent upon the quantification of the proportion of total Fe (FeT) considered highly reactive (FeHR) to biological or abiological reduction under anoxic conditions. FeHR constitutes the sum of Fe in carbonates (Fecarb), oxides (Feox) and magnetite (Femag), which are operationally defined via a three step sequential extraction procedure, in addition to pyrite (Fepy), which is quantified via a separate extraction43,44. All proxy datasets require thorough screening to determine the degree to which post-depositional conditions may have skewed geochemical data, and numerous articles have outlined best practices in the production and interpretation of Fe speciation data45.
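Stated compactly, the operational definition above, together with the interpretive cutoffs commonly applied in the wider Fe-speciation literature (the threshold values are that literature’s convention, not figures from this article), reads:

```latex
\mathrm{Fe_{HR}} = \mathrm{Fe_{carb}} + \mathrm{Fe_{ox}} + \mathrm{Fe_{mag}} + \mathrm{Fe_{py}},
\qquad
\frac{\mathrm{Fe_{HR}}}{\mathrm{Fe_{T}}} > 0.38 \;\Rightarrow\; \text{anoxic deposition},
\qquad
\frac{\mathrm{Fe_{py}}}{\mathrm{Fe_{HR}}} \gtrsim 0.7 \;\Rightarrow\; \text{euxinic (sulfidic) water column.}
```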
Experts were also asked what proxies they would like to see better developed in the future (n = 44). Responses to this question can be interpreted in several ways: either the expert considers their chosen proxy to require more thorough calibration prior to widespread use (amounting to concern about overinterpreting premature datasets, or about methodological/analytical uncertainties), or the expert believes that an underappreciated proxy is sufficiently novel to warrant more concerted development and application, or both. The top proxies that experts wish to see better developed are: carbonate-bound iodine (>13% of responses), redox sensitive elements (RSEs) (>8%), mass independent fractionation of oxygen isotopes (>7%), trace metal isotopes (>6%), and the oxygen requirements of biology (>4%).
Only RSEs appear on both the favourite and further-development lists, which may be due to the wide range of elements grouped under this term and ongoing research into the redox sensitivities and behaviours of specific elements. Carbonate-bound iodine has recently shown promise as a means to constrain shallow-water oxygen concentrations in deep time, given that oxidised iodate is the only species of iodine that can be readily incorporated into the carbonate lattice46,47. Mass independent fractionation of oxygen isotopes (O-MIF) attempts to capture a transition in atmospheric pO2 driven by ozone formation48,49,50. Interpretations of O-MIF data have also been taken further, in attempts to constrain the dependence of productivity on pO2 and pCO2 estimates51. The umbrella term ‘trace metal isotopes’ encapsulates a variety of elemental systems that would be beyond the scope of this manuscript to describe in full. As examples, Cr isotopes have been used over the past decade to constrain atmospheric O2 concentrations, given the sensitivity of Cr isotope fractionation to oxidative weathering and manganese cycling52,53; Mo isotopes can be used to constrain the extent of oceanic sulfidic conditions54; and U isotopes can be used to quantify the extent of oxic and anoxic global seafloor55, owing to the long residence times of these elements in the modern ocean. The variety of proxies that sit within the term ‘trace metal isotopes’ demonstrates that each must be extensively calibrated independently, but together they will provide more nuanced interpretations of depositional redox conditions when employed in concert. Additionally, research on the O2 requirements of animals may offer a more direct means by which to constrain oxygen concentrations in the water column, given that specific oxygen concentrations are required to sustain specific metabolisms. However, determining the precise oxygen requirements of these metabolisms demands both careful study under laboratory conditions28 and assumptions regarding the affinities and life habits of complex multicellular organisms preserved in the fossil record42.
Understanding the evidence across disciplines: is there a holistic view?
Pieces of information are often misinterpreted during interdisciplinary communication. The Precambrian fossil record is rife with enigmatic fossils, many of which have uncertain phylogenetic placement or even questioned biogenicity56. A specific example of a recent palaeontological finding that has been publicly refuted is the reassessment of a putative fossil from India that superficially resembled the Ediacaran organism Dickinsonia57 as the decaying impression of a modern beehive58. However, published rebuttals are relatively uncommon; in most cases, uncertain or disputed fossils are simply ignored in subsequent scientific literature, without published critique. From the perspective of an exterior discipline (e.g., geochemistry), such a lack of published critique may be read as acceptance of the outdated interpretation, especially if open cross-discipline communication is lacking. In sum, this highlights the necessity for active cross-discipline communication.
Expert respondents were asked to self-categorise their primary discipline, and then to suggest which pieces of respective geochemical and palaeontological evidence are most often misinterpreted or misused. Experts who identified as palaeobiologists repeatedly outlined several instances of misinterpretation of the fossil record with regards to understanding the coevolution of life and oxygen (n = 34). Occurrences of stromatolites are often misinterpreted as direct evidence for the presence of cyanobacteria, and thus the onset of oxygenic photosynthesis (highest proportion of responses, c. 20%). This confusion likely derives from uncertainty over the affinities of stromatolites in deep time: recent microbialites, built largely by prokaryotic and eukaryotic photosynthesizers, are not always suitable analogues for Archean and Proterozoic ones59, and the metabolisms of ancient microbialite- and stromatolite-builders were not limited to oxygenic photosynthesis60,61. Absent or poor age constraints and/or unestablished affinities of Precambrian fossils constitute the second most often misinterpreted class of palaeontological evidence (>17%), and may lead to confusion about the timing of the appearance of major lineages. This not only impacts evolutionary studies, but also our understanding of the coevolution of life and the changing environment. It is associated with the third most often misinterpreted class of evidence: the interpretation of fossil biogenicity, especially of some Archean microfossils (>14%). This response is likely driven by recent experimental studies that show inorganic growth of sulfur biomorphs with similar morphologies to early fossil cells62. Beyond the Archean, the biogenicity of the Paleoproterozoic ‘Francevillian biota’63,64 is also specifically referred to (>14% of responses). As an indicator of common scientific perspective, these expert responses identify areas of contention in deep-time palaeontological research that might not always be obvious to external disciplines.
Numerous geochemical data are also often misinterpreted, leading to an extensive list of proxies named by self-categorised experts in geochemistry (n = 47), including: carbon and oxygen isotopes (>13% of responses), molecular biomarkers (>11%), iron speciation (>9%), trace metal concentrations (>9%), chromium, sulfur and metal isotopes (>23% in sum), and the S-MIF record (>3%). In addition, specific reference was made to the misinterpretation of local/regional proxies as being informative of global environmental change (>5% of responses). The diversity of proxies reported suggests either poor intra-disciplinary communication or a common agreement that the majority of proxies require ongoing detailed study and continued calibration (e.g., incorporating novel laboratory or field-based insights). It is important to note, however, that a large proportion of oxygenation/deoxygenation events are supported by a combination of proxies rather than relying on a single one.
A further, broader question asked respondents what they deemed to be the most pressing questions in the field (106 unique responses from 45 respondents). One question commonly raised was how to better distinguish between local and global signatures, or ‘how spatially representative is my geochemical dataset?’ (>16%). Specific responses highlighted the need to clarify local vs. global environmental drivers of oxygenation, as well as the extent of oxygen oases, throughout the Proterozoic. Several respondents also acknowledged that limited geological material exists with which to clarify a global understanding. Another pressing question repeatedly raised was how best to improve our proxy calibrations (>16% of responses). Geochemical proxies used to constrain past environmental change rely on calibrations based on observations in modern depositional environments, or on controlled laboratory conditions that attempt to recreate representative environments that may have existed in the geologic past. Given the likelihood of non-uniformitarian environmental conditions throughout much of Earth’s history, it is necessary to continue investigating how these proxies behave in as many different chemical environments and depositional settings as possible. Furthermore, given that some ocean chemistries in the rock record are not available to study in nature today, theoretical or experimental studies are also required to supplement interpretations based on proxy data. The final, and potentially most pressing, question concerns whether O2 availability actually limited eukaryotic evolution, or indeed the timing and pace of early animal evolution (>20% of responses). Current estimates of atmospheric O2 requirements for early eukaryotes are between 0.001% and 0.4% PAL28,29,65. While the oldest eukaryotic fossils are found in ~1.63 Ga66 sedimentary rocks, O2 concentrations of this magnitude are thought by some to have been reached much later, sometime during the Neoproterozoic Era, further complicating potential linkages between environmental oxygenation and early evolution1,2 (Fig. 2). Additional pressing questions arose repeatedly, including the timing of oxygenation events (>10%), which requires both better chronostratigraphic constraints and a more integrated use of geochemical proxies, alongside a more nuanced understanding of geochemical proxy interpretations. Importantly, each of these thematic questions received a maximum of only 20% of responses, highlighting that the broad research community is approaching the problem with a wide variety of methods.
Fig. 2: Summary of current understanding and intervals of time for future research. A Reported intervals of high priority according to experts (n = 37). Highest reported interval of priority occurs during the mid-late Tonian 800–720 Ma (timescale overlap of 14 individual responses). B Atmospheric oxygen throughout Earth history2. C Global carbon isotope record based on a preliminary compilation72.
Future directions
This questionnaire offers a direct method of investigating and communicating a subsample of the current state of research and cross-disciplinary understanding in the Earth sciences. It also provides an opportunity for open communication and discussion regarding areas of future research that may advance, or challenge, the current paradigm. With this in mind, experts were asked what field-based, experimental and/or modelling efforts they believed would be most interesting or useful going forward, to better understand the evolution of environmental oxygenation and its importance for the biosphere (total of 80 unique responses from 38 respondents). The most frequently requested items are listed in the Supplementary Information, but we expand on the most noted three here.
The first recurring item calls for more isotope fractionation experiments, in order to better understand how isotope systems inform specific environmental changes (>8%). As noted previously, this reiterates the widespread view that ongoing geochemical proxy calibration is essential.
The second item calls for more accurate temporal calibration of the rock record, resulting in higher-precision temporal calibration of geochemical and fossil data (>13%). Specifically, it is widely recognised that more data are required to constrain age models for the Paleoproterozoic to early Neoproterozoic. The resulting dataset would ideally permit the calibration of geochemical and palaeontological information from mixed lithologies within a unified chronostratigraphic framework, or series of possible frameworks, each of which would be anchored in time by absolute ages derived from high-precision radiometric methods. Exploring alternative correlation frameworks may also help to discriminate between the relative likelihoods of alternative palaeogeographic reconstructions in deep time. In particular, more high-precision radiometric constraints are needed for the time intervals of high research priority identified by the survey responses (Fig. 2).
The third item calls for greater efforts to pursue global syntheses that demand interdisciplinary collaboration (>6%). Numerous international initiatives that aim to collate sedimentary, geochemical and palaeontological data throughout Earth history are enhancing our ability to constrain aspects of the rock record with greater confidence over both longer and shorter timescales. These databases include the Sedimentary Geochemistry and Palaeoenvironments Project67, the North American MacroStrat database68, and the growing Deeptime Digital Earth initiative (DDE, formerly known as the Geobiodiversity Database69), each of which not only collate data, but also facilitate large dataset interpretations by providing novel user interfaces and statistical methods for database interrogation.
Experts were also asked what they believe to be the greatest remaining uncertainties in attempts to constrain the influence of evolving atmospheric and oceanic oxygen concentrations on life, throughout Earth history. The following items dominated responses, and summarise three key questions that captivate the research community, as a whole:
(1) What environmental O2 concentration is required to facilitate the evolution of animal life?
(2) What were the major mechanisms driving long-term changes in environmental O2?
(3) What were the upper and lower limits of environmental O2 through time?
Attempting to answer these questions demands interdisciplinary studies that interrogate large swathes of Earth history. However, the patchiness of the rock record in deep time often limits research during certain periods. With this in mind, experts were asked what intervals of the geologic record they believe to be priority targets for future sampling and investigation (Fig. 2). This revealed a notable interval of interest during the mid-Paleoproterozoic (c. 2.2–1.8 Ga), in the immediate aftermath of the GOE, encompassing the Lomagundi-Jatuli Event and the estimated origin of eukaryotes (per Betts et al.24). Other target intervals include the late Mesoproterozoic (c. 1.3–1.0 Ga), coinciding with the radiation of eukaryotic lineages and the origin of multicellularity70,71, and the mid-late Tonian Period (c. 0.8–0.72 Ga) during eukaryotic diversification and the inception of climatic instability associated with the onset of the Sturtian Snowball Earth26.
Overall, we hope this study promotes future interdisciplinary research and conversations that attempt to further our understanding of how the Earth became habitable for complex life. We believe that future conferences and workshops should continue to promote interdisciplinary research and conversations in order to address several of the topics highlighted herein. More specifically, we hope that the suggestions here concerning specific sedimentary intervals, geochemical analyses, and palaeobiological assessments, which together represent a community-wide perspective of themes that warrant further investigation, provide direction and support for continued research.
-
As Ocean Water Gets Worse, Sharks’ Teeth Start to Dissolve
Sharks have been on this planet for more than 400 million years. They’re older than the first trees, the North Star, and even the rings of Saturn. They’ve seen and been through it all — but the mounting effects of human-driven climate change could be what finally proves too much for these ancient beings.
As we continue to pump astronomical amounts of carbon dioxide into the atmosphere, nearly a third of it gets absorbed by the ocean, gradually making its water more acidic.
For sharks, the consequences could be horrifying. New research suggests that this acidification could dissolve and weaken shark teeth, severely damaging the ability of these apex predators to feed and defend themselves. Not even their famed ability to regrow their rows of deadly chompers could be enough to offset the phenomenon.
“Shark teeth, despite being composed of highly mineralized phosphates, are still vulnerable to corrosion under future ocean acidification scenarios,” Maximilian Baum, a biologist at Heinrich Heine University in Germany, and lead author of a new study published in the journal Frontiers in Marine Science, said in a statement about the work. “They are highly developed weapons built for cutting flesh, not resisting ocean acid. Our results show just how vulnerable even nature’s sharpest weapons can be.”
Currently, the average pH of the ocean is 8.1. In the roughly 200 years since the industrial revolution began, it has dropped by about 0.1 pH units, according to NOAA, representing a roughly 30 percent increase in acidity. One study projects that the ocean could fall to a pH of 7.3 by 2300 if current emission rates hold. Meanwhile, some research has found that current pH levels are already damaging denticles, the tiny serrated scales that form the top layer of a shark’s skin.
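Those figures follow directly from the logarithmic pH scale, as a quick back-of-envelope check shows (our arithmetic on the article’s round numbers, not NOAA’s exact values):

```python
# pH is the negative base-10 log of hydrogen-ion activity,
# so acidity scales as 10 ** (pH drop).
drop_so_far = 0.1                   # pH units since the industrial revolution
factor_so_far = 10 ** drop_so_far   # ~1.26x, i.e. roughly 26-30% more acidic
factor_by_2300 = 10 ** (8.1 - 7.3)  # ~6.3x if pH really falls to 7.3
print(f"{factor_so_far - 1:.0%} more acidic now; {factor_by_2300:.1f}x by 2300")
```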
“Since ocean acidification is known to damage calcified structures like corals and shells, we wanted to investigate whether shark teeth, especially in species… that swim with their mouths open to ventilate their gills and have constant seawater exposure, might also be vulnerable,” Baum told CNN.
In an experiment, the researchers collected shark teeth that were shed by blacktip reef sharks — a vital predator in tropical coral reefs — housed in a local aquarium. For eight weeks, they submerged one batch in a tank containing water with the ocean’s current pH of 8.1, and another batch in a tank with the projected pH of 7.3.
After the researchers examined them with electron microscopy, it was immediately clear which set of teeth was worse off.
“We observed visible surface damage such as cracks and holes, increased root corrosion, and structural degradation,” said coauthor Sebastian Fraune, head of the Institute of Zoology and Organismic Interactions at HHU, in the statement.
Bizarrely, the acid-bathed teeth had a larger circumference — not because they actually grew, but because their surfaces had become more irregular. Bigger teeth may sound like an advantage, but not if they’re weakened.
“Many shark species use several rows of teeth at once, and individual teeth can remain in use for weeks or even months,” Baum told CNN, “so cumulative damage can reduce feeding efficiency and increase energy demands, especially in species with slower replacement cycles and many rows of teeth that are used at the same time.”
The study doesn’t perfectly simulate real-world conditions, cautioned Ivan Nagelkerken, a professor of marine ecology at the University of Adelaide in Australia, speaking to CNN. The teeth had already been shed and came from aquarium sharks. And while sharks generally do keep their mouths open all the time to help them breathe, it’s unclear whether simply soaking teeth in acidified water for months on end is a good simulation of that reality. Moreover, the study relied on a fairly extreme projection of ocean acidification.
Baum agrees, but argues this is an important first foray into an understudied area.
“Our study focused on naturally shed teeth because there’s currently very little data on this topic,” he told CNN. “By isolating the chemical effects of acidified seawater on the mineralized structure itself, we want to provide a baseline for understanding vulnerability of shark teeth.”
-
Scientists Confirm the Incredible Existence of ‘Second Sound’
Here’s what you’ll learn when you read this story:
- Usually, when something gets warmed up, heat tends to spread outward before eventually dissipating. But things are a little different in the world of superfluid quantum gas.
- For the first time, MIT scientists have successfully imaged how heat actually travels in a wave, known as a “second sound,” through this exotic fluid.
- Understanding this dynamic could help answer questions about high-temperature superconductors and neutron stars.
In the world of average, everyday materials, heat tends to spread out from a localized source. Drop a burning coal into a pot of water, and that liquid will slowly rise in temperature before its heat eventually dissipates. But the world is full of rare, exotic materials that don’t exactly play by these thermal rules.
Instead of spreading out as one would expect, these superfluid quantum gases “slosh” heat side to side — it essentially propagates as a wave. Scientists call this behavior a material’s “second sound” (the first being ordinary sound, a density wave). Although this phenomenon has been observed before, it had never been imaged. But recently, scientists at the Massachusetts Institute of Technology (MIT) were finally able to capture this movement of pure heat by developing a new method of thermography (a.k.a. heat-mapping).
The results of the study were published in the journal Science, and in a university press release highlighting the achievement, MIT assistant professor and co-author Richard Fletcher continued the boiling-pot analogy to describe the inherent strangeness of “second sound” in these exotic superfluids.
Simplified example of “sloshing” heat in a superfluid compared to a normal fluid. Credit: MIT
“It’s as if you had a tank of water and made one half nearly boiling,” Fletcher said. “If you then watched, the water itself might look totally calm, but suddenly the other side is hot, and then the other side is hot, and the heat goes back and forth, while the water looks totally still.”
These superfluids are created when a cloud of atoms is subjected to ultra-cold temperatures approaching absolute zero (−459.67 °F). In this rare state, atoms behave differently, as they create an essentially friction-free fluid. It’s in this frictionless state that heat has been theorized to propagate like a wave.
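For reference, the speed of this temperature wave in Landau’s two-fluid model (standard textbook background the article doesn’t spell out, included here as context rather than as the study’s own result) is set by the ratio of superfluid to normal-fluid density:

```latex
u_2^{2} = \frac{\rho_s}{\rho_n}\,\frac{T S^2}{c_v}
```

where \rho_s and \rho_n are the superfluid and normal densities, T the temperature, S the entropy per unit mass, and c_v the specific heat per unit mass. The wave exists only when \rho_s is nonzero, which is why second sound is regarded as a hallmark of superfluidity.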
“Second sound is the hallmark of superfluidity, but in ultracold gases so far you could only see it in this faint reflection of the density ripples that go along with it,” lead author Martin Zwierlein said in a press statement. “The character of the heat wave could not be proven before.”
To finally capture this second sound in action, Zwierlein and his team had to think outside the usual thermal box, as there’s a big problem with trying to track the heat of an ultracold object — it doesn’t emit the usual infrared radiation. So, the MIT scientists designed a way to leverage radio frequencies to track lithium-6 atoms, which are fermions and can be captured at different frequencies depending on their temperature (i.e., warmer temperatures mean higher frequencies, and vice versa). This novel technique allowed the researchers to zero in on the “hotter” frequencies (which were still very much cold) and track the resulting second sound over time.
This might feel like a big “so what?” After all, when’s the last time you had a close encounter with a superfluid quantum gas? But ask a materials scientist or astronomer, and you’ll get an entirely different answer.
While exotic superfluids may not fill up our lives (yet), understanding the properties of second sound could help answer questions regarding high-temperature superconductors (again, still at very low temperatures) or the messy physics that lies at the heart of neutron stars.
-