Category: 7. Science

  • The Missing Giant: Do FAST Spectroscopic Observations Reveal a Scarcity of Large Polycyclic Aromatic Hydrocarbons in Astronomical Environments?

    Fractional abundances of PAHs as a function of the number of carbon atoms N_C. Dotted lines denote the estimated abundance upper limits of large quasi-symmetric PAHs in NGC 7027; these values warrant cautious interpretation (refer to the contextual discussion). The solid curve shows the theoretical prediction from Draine & Lazarian (1998). For comparative analysis, values or upper limits from previous studies are overplotted. — astro-ph.GA

    The search for large polycyclic aromatic hydrocarbons (PAHs) with over 100 carbon atoms is crucial to resolving the origin of unidentified infrared emission (UIE) bands.

    These bands are commonly observed in nebulae and the interstellar medium, yet their spectroscopic assignment has remained unknown for decades. Using the Five-hundred-meter Aperture Spherical Radio Telescope (FAST), the world’s most sensitive instrument operating in the decimeter-wavelength range, we conducted a search for rotational transitions of large, quasi-symmetric PAHs.

    Our sample included two prototypical UIE sources, NGC 7027 and TMC-1, along with a non-UIE source, IRC+10216, for comparison. A matched filter technique was employed to isolate comb-like spectral features from quasi-symmetric PAHs containing 138 to 194 carbon atoms in the FAST spectra.

    This method significantly enhanced detection sensitivity to these astrophysically critical molecular signatures. Although no such features were detected, we derived upper limits on the abundance of large PAHs based on simplifying assumptions.
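
    The authors' pipeline is not reproduced here, but the core matched-filter idea is straightforward: cross-correlate the observed spectrum against a unit-norm template of evenly spaced narrow lines, then look for a significant peak in the filtered output. Below is a minimal numpy sketch under assumed values; the line spacing, line width, and noise model are placeholders, not the parameters of the FAST analysis.

```python
# Minimal matched-filter sketch for a comb-like spectral signature.
# Illustrative only: spacing, width, and noise are assumed placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_chan = 4096
spectrum = rng.normal(0.0, 1.0, n_chan)      # noise-dominated spectrum

# Comb template: evenly spaced narrow Gaussian lines, as expected for
# rotational transitions of a quasi-symmetric rotor.
spacing, width, n_lines = 64, 2.0, 16        # channels (assumed)
x = np.arange(n_chan)
template = np.zeros(n_chan)
for k in range(n_lines):
    center = 100 + k * spacing
    template += np.exp(-0.5 * ((x - center) / width) ** 2)
template /= np.linalg.norm(template)         # unit-norm filter

# Cross-correlation peaks where the comb pattern aligns with the data;
# a detection would show up as a high signal-to-noise peak.
mf = np.correlate(spectrum, template, mode="same")
print("peak matched-filter S/N:", (mf / mf.std()).max())
```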

    These upper limits lie below the values predicted by theoretical models, tentatively suggesting that large PAHs may not be the primary carriers of the UIE bands. That conclusion, however, rests on the simplifying assumptions noted above, which have not been empirically validated.

    Yi Shao, Yong Zhang, Xu-Jia Ouyang, Chuan-Peng Zhang

    Comments: 7 pages, 4 figures. Accepted for publication in MNRAS
    Subjects: Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR)
    Cite as: arXiv:2508.15302 [astro-ph.GA] (or arXiv:2508.15302v1 [astro-ph.GA] for this version)
    https://doi.org/10.48550/arXiv.2508.15302
    Submission history
    From: Yong Zhang
    [v1] Thu, 21 Aug 2025 06:47:39 UTC (1,542 KB)
    https://arxiv.org/abs/2508.15302

    Continue Reading

  • Desert Storm Dust Transport Of Bacteria (Implications For Arrakis and Mars)

    Air sampling during a dust storm. Credit: Naama Lang-Yona

    How do living bacteria survive on the surface of dust particles carried by desert storms from the Sahara and Egypt to Israel?

    As a follow-up to a previous study showing that species of Firmicutes, including Bacillus, are active players in dust storms, Dr. Naama Lang-Yona’s lab in the Technion Faculty of Civil and Environmental Engineering conducted a joint study with Dr. Ilana Kolodkin-Gal’s research group at the Scojen Institute for Synthetic Biology at Reichman University. The teams discovered that these bacteria can form microscopic biofilms on dust particles. These protective structures shield the bacteria from desiccation, extreme radiation, and severe nutrient scarcity during their atmospheric journey.

    The research, published in Communications Earth and Environment (part of the Nature portfolio), contributes to the growing field of atmospheric microbiology. This discipline explores the survival and activity of microorganisms while in the atmosphere, sometimes over thousands of kilometers, and their impact on global cycles, ecosystems, and human health. These processes significantly impact disease patterns, atmospheric CO₂ levels, plant diseases, and even antibiotic resistance dispersal.

    “Characterizing metabolically active, living bacterial communities is reshaping our understanding of microbiome-environment interactions,” explained Dr. Lang-Yona. “Our research suggests that the air we breathe contains entire bacterial communities from distant regions, bringing innovative traits that can integrate into local ecosystems, and potentially affect humans.”

    In this study, the researchers successfully isolated and cultured bacteria brought in by dust storms under atmospheric conditions, focusing on beneficial Bacillus strains known for their positive applications in agriculture, construction, and medical probiotics.

    The team believes that natural selection during dust storms favors more innovative bacterial strains – a phenomenon that could potentially enhance their practical applications. This study also expands the traditional soil microbiome concept to include airborne microbial communities, broadening the known repertoire of survival strategies among these remarkable organisms.

    Bacillus biofilm formation and niche adaptation shape long-distance transported dust microbial community, Communications Earth & Environment (open access)

    Continue Reading

  • Modeling Tails Of Escaping Gas In Exoplanet Atmospheres With Harmonica

    Results from the injection-recovery test for the helium envelope with a variable opacity. The top panel shows the input absorbance profiles (red curves), together with the empirically retrieved absorbance profiles (red points with error bars), for the two-layer-envelope, 200 ppm case. The bottom panel shows the injected variable envelope in different shadings of red, with the gray dashed lines indicating the position and shape of the fitted layers. — astro-ph.EP

    Exoplanets that reside close to their host stars, and therefore receive substantial amounts of X-ray and ultraviolet radiation, are prone to strong atmospheric escape.

    This can lead to the creation of an envelope of escaping gas along the planet’s orbital trajectory, often referred to as a tail. When transiting in front of their host star, these tails can not only produce larger depths in the transit light curves, but also introduce significant asymmetries between ingress and egress.

    Using the publicly available software Harmonica, we present a method to model the light curves of transiting planets surrounded by extended envelopes of escaping gas, and subsequently infer the shape and size of the latter. We apply this method to the JWST NIRISS/SOSS observations of HAT-P-18b, which show pronounced helium tail features in its spectroscopic light curve of the metastable helium triplet at 10830 Å.
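
    Harmonica’s central idea is to describe the transiting silhouette as a “transmission string”: a Fourier series r(θ) around the planet’s limb, so asymmetric shapes such as trailing tails emerge naturally from the fitted coefficients. The sketch below illustrates that parameterization in plain numpy; it is not Harmonica’s actual API, and the coefficients are invented for illustration.

```python
# Sketch of a "transmission string": the planet(+tail) silhouette as a
# Fourier series r(theta) in units of stellar radii. Coefficients are
# illustrative placeholders, not values fitted to HAT-P-18b.
import numpy as np

def transmission_string(theta, a0, coeffs):
    """r(theta) = a0 + sum_n [a_n cos(n*theta) + b_n sin(n*theta)]."""
    r = np.full_like(theta, a0)
    for n, (an, bn) in enumerate(coeffs, start=1):
        r += an * np.cos(n * theta) + bn * np.sin(n * theta)
    return r

theta = np.linspace(0.0, 2.0 * np.pi, 361)
# A circular planet plus a low-order asymmetry mimicking a trailing tail.
r = transmission_string(theta, a0=0.08, coeffs=[(0.02, 0.01)])

# The instantaneous transit depth scales with the occulting area,
# A = (1/2) * integral of r(theta)^2 dtheta (trapezoid rule below).
area = 0.5 * np.sum(0.5 * (r[:-1] ** 2 + r[1:] ** 2) * np.diff(theta))
print("sky-projected area (stellar radii^2):", area)
```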

    Our model reveals that, in order to fit the observed light curve of HAT-P-18b, the planet must possess a trailing helium tail of 15.79 (+1.14/−1.05) planetary radii. We carry out injection-recovery tests to validate the effectiveness of the proposed methodology.

    We demonstrate that, with sufficient precision, we would be able to fit a multi-layer envelope to the data, which would provide insight into the relative radial variations in the opacity profile.

    Carlos Gascón, Mercedes López-Morales, Shreyas Vissapragada, Morgan MacLeod, Hannah R. Wakeford, David Grant, Ignasi Ribas, Guillem Anglada-Escudé

    Comments: 15 pages, 10 figures, 3 tables. Accepted to ApJL
    Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Instrumentation and Methods for Astrophysics (astro-ph.IM)
    Cite as: arXiv:2508.14846 [astro-ph.EP] (or arXiv:2508.14846v1 [astro-ph.EP] for this version)
    https://doi.org/10.48550/arXiv.2508.14846
    Submission history
    From: Carlos Gascon
    [v1] Wed, 20 Aug 2025 16:57:55 UTC (3,354 KB)
    https://arxiv.org/abs/2508.14846

    Continue Reading

  • 3I/ATLAS is Large and Releases Carbon Dioxide (CO2) | by Avi Loeb | Aug, 2025

    Three images of 3I/ATLAS, taken by the SPHEREx Space Observatory at wavelengths of 3.0, 4.26, and 4.7 micrometers, corresponding to prominent emission lines of H2O, CO2 and CO gas, from left to right respectively. 3I/ATLAS is undetected in H2O and CO. By contrast, a bright CO2 cloud is observed out to at least 348,000 kilometers. (Credit: C.M. Lisse et al, 2025)

    The team of NASA’s SPHEREx space observatory just reported tantalizing new data on the interstellar object 3I/ATLAS (accessible here). The observations were made between August 8 and 12, 2025, when 3I/ATLAS was 3.2 times the Earth-Sun separation (3.2 astronomical units, AU) from the Sun and 2.6 AU from Earth.

    The new observations reveal a cloud of carbon dioxide (CO2) around 3I/ATLAS corresponding to a mass loss rate of about 70 kilograms per second. No water (H2O) cloud was detected, with an upper limit of 4.5 kilograms per second on the water mass loss rate. This is an order of magnitude below previous claims of a water detection with a mass loss rate of order 40 kilograms per second, made at a larger distance from the Sun of 3.5 AU. These early claims from two research teams were unsubstantiated by the reported data, as I argued in a previous essay (accessible here). The excellent SPHEREx report notes that “The lack of a bright water gas coma is puzzling as 3I/ATLAS was not too far outside the Solar system’s ‘water ice line’ at 2.5 AU during the observations.”

    Although no water (H2O) in gas form was identified, some absorption features in the reflected spectrum from the surface of 3I/ATLAS were consistent with a mix of water and carbon dioxide ices combined with organics, as often found on the surfaces of Kuiper belt objects in the Solar system which are similarly exposed to interstellar cosmic-rays. Could it be that 3I/ATLAS is not a water-rich comet as envisioned by comet experts when it was discovered?

    The SPHEREx images show 3I/ATLAS as a point source. No dust coma was resolved, implying that the glow of scattered sunlight around the object in its Hubble Space Telescope image is compact and amounts to a small amount of dust.

    SPHEREx images were taken at specific wavelengths near the characteristic emission lines of water (H2O, 3.0 micrometers), carbon dioxide (CO2, 4.26 micrometers) and carbon monoxide (CO, 4.7 micrometers). No coma was detected in water or carbon monoxide. However, the CO2 image shows a symmetric cloud around 3I/ATLAS whose brightness declines with projected distance to the power of -3/2 out to distances of at least 348,000 kilometers. This corresponds to a CO2 density that declines steeply with 3D radius, to the power of -2.5.
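
    The two quoted slopes are mutually consistent: for an optically thin coma, a 3D density falling as r^(-2.5) integrates along the line of sight to a surface brightness falling as b^(-1.5) in projected distance b. A quick numerical check under those assumptions:

```python
# Check: if n(r) ~ r^(-p), the optically thin surface brightness at
# projected distance b is I(b) ~ integral of n(sqrt(b^2+z^2)) dz ~ b^(1-p).
# With p = 2.5 this gives the reported -3/2 projected slope.
import numpy as np
from scipy.integrate import quad

p = 2.5  # assumed 3-D density slope

def brightness(b):
    integrand = lambda z: (b**2 + z**2) ** (-p / 2.0)
    val, _ = quad(integrand, 0.0, np.inf)
    return 2.0 * val  # integrand is symmetric about z = 0

b1, b2 = 1.0, 10.0
slope = np.log(brightness(b2) / brightness(b1)) / np.log(b2 / b1)
print("projected brightness slope:", slope)  # ~ -(p - 1) = -1.5
```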

    Most interestingly, the flux detected at a wavelength of 1 micrometer from 3I/ATLAS suggests a large nucleus with a diameter of 46 kilometers. If this represents a solid body, then the mass of 3I/ATLAS must be a million times that of the previous interstellar comet 2I/Borisov. This makes little sense, since we should have found of order a million objects the size of 2I/Borisov before discovering a 46-kilometer interstellar object. Moreover, as I noted in my first paper on 3I/ATLAS (accessible here), the amount of rocky material per unit volume in interstellar space is smaller by a factor of ten thousand than the value needed to deliver one giant rock of this size into the inner Solar system over the ATLAS decade-long survey.
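
    The “million times” figure follows from scaling mass with diameter cubed at equal bulk density. Taking, for illustration, a roughly 0.5-kilometer nucleus for 2I/Borisov (published estimates vary), the arithmetic goes:

```python
# Mass ratio from diameter ratio cubed, assuming equal bulk densities.
# The 0.5 km diameter for 2I/Borisov is an assumed round number.
d_atlas_km, d_borisov_km = 46.0, 0.5
print((d_atlas_km / d_borisov_km) ** 3)  # ~ 7.8e5, i.e. of order a million
```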

    Alternatively, 3I/ATLAS may have targeted the inner solar system by technological design. This possibility is consistent with the alignment of its trajectory with the orbital plane of the planets around the Sun, a one-in-500 coincidence for a random orientation — as observed for 2I/Borisov.

    The lack of a cometary tail in the Hubble Space Telescope image is evidence that there is not much dust around 3I/ATLAS with particle size on the scale of the wavelength of sunlight (0.5 micrometers). In that case, the observed reddening in the spectrum of reflected sunlight should originate from the surface of the object, implying that the object is large and dominates the reflected sunlight rather than the dust cloud around it.

    The CO2 mass loss amounts to the ablation of a millimeter-thick layer from the surface of a 46-km rock over a period of 10 years. This means that a relatively thin outer layer is sufficient to maintain the observed cloud of CO2 gas and dust around 3I/ATLAS. What lies under this outer layer is still unknown. We are all waiting for the release of the first data from the Webb Space Telescope, which observed 3I/ATLAS on August 6, 2025.
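
    That layer-thickness estimate is easy to sanity-check. Assuming a steady 70 kg/s of CO2 loss, a spherical 46-kilometer nucleus, and a nominal surface-layer density of 1000 kg/m^3 (an assumed value; ices and rock differ), the ablated depth over a decade comes out at a few millimeters, consistent with the order-of-magnitude claim:

```python
# Rough check of the "millimeter-thick layer over 10 years" estimate.
import math

mdot = 70.0                    # kg/s, SPHEREx-reported CO2 loss rate
years = 10.0
radius_m = 23_000.0            # radius of a 46-km-diameter nucleus
rho = 1000.0                   # kg/m^3, assumed surface-layer density

mass_lost = mdot * years * 365.25 * 86400        # ~ 2.2e10 kg
area = 4.0 * math.pi * radius_m ** 2             # ~ 6.6e9 m^2
thickness_mm = mass_lost / (rho * area) * 1000.0
print(f"ablated layer ~ {thickness_mm:.1f} mm")  # ~ 3 mm
```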

    Here’s hoping that as the Sun turns on the heat on 3I/ATLAS in the coming months, it will reveal its true nature.

    ABOUT THE AUTHOR

    (Image Credit: Chris Michel, National Academy of Sciences, 2023)

    Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. The paperback edition of his new book, titled “Interstellar”, was published in August 2024.

    Continue Reading

  • Top AI models fail spectacularly when faced with slightly altered medical questions

    Artificial intelligence systems often perform impressively on standardized medical exams—but new research suggests these test scores may be misleading. A study published in JAMA Network Open indicates that large language models, or LLMs, might not actually “reason” through clinical questions. Instead, they seem to rely heavily on recognizing familiar answer patterns. When those patterns were slightly altered, the models’ performance dropped significantly—sometimes by more than half.

    Large language models are a type of artificial intelligence system trained to process and generate human-like language. They are built using vast datasets that include books, scientific papers, web pages, and other text sources. By analyzing patterns in this data, these models learn how to respond to questions, summarize information, and even simulate reasoning. In recent years, several models have achieved high scores on medical exams, sparking interest in using them to support clinical decision-making.

    But high test scores do not necessarily indicate an understanding of the underlying content. Instead, many of these models may simply be predicting the most likely answer based on statistical patterns. This raises the question: are they truly reasoning about medical scenarios, or just mimicking answers they’ve seen before? That’s what the researchers behind the new study set out to examine.

    “I am particularly excited about bridging the gap between model building and model deployment and the right evaluation is key to that,” explained study author Suhana Bedi, a PhD student at Stanford University.

    “We have AI models achieving near-perfect accuracy on benchmarks like multiple-choice medical licensing exam questions. But this doesn’t reflect the reality of clinical practice. We found that less than 5% of papers evaluate LLMs on real patient data, which can be messy and fragmented.”

    “So, we released a benchmark suite of 35 benchmarks mapped to a taxonomy of real medical and healthcare tasks that were verified by 30 clinicians. We found that most models (including reasoning models) struggled on Administrative and Clinical Decision Support tasks.”

    “We hypothesized that this was because these tasks involved complex reasoning scenarios that couldn’t be solved through pattern matching alone, exactly the kind of clinical thinking that matters in real practice,” Bedi explained. “With everyone talking about deploying AI in hospitals, we thought this was a very important question to answer.”

    To investigate this, the research team created a modified version of the MedQA benchmark. They selected 100 multiple-choice questions from the original test and rewrote a subset of them to replace the correct answer with “None of the other answers,” or NOTA. This subtle shift forced the models to rely on actual medical reasoning rather than simply recognizing previously seen answer formats. A practicing clinician reviewed all changes to ensure the new “None of the other answers” response was medically appropriate.
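
    In code terms, the manipulation is a simple swap. The sketch below is a hypothetical illustration of the procedure described above; the field names and the example item are invented, not the study’s actual data schema.

```python
# Replace the correct option with "None of the other answers" (NOTA),
# which then becomes the correct choice. Schema is illustrative only.
def to_nota(question: dict) -> dict:
    nota = "None of the other answers"
    options = {
        label: (nota if label == question["answer"] else text)
        for label, text in question["options"].items()
    }
    return {**question, "options": options}

item = {
    "stem": "A newborn has an inward-turning foot... next step in management?",
    "options": {"A": "Reassurance", "B": "Casting", "C": "Surgery", "D": "Imaging"},
    "answer": "A",
}
print(to_nota(item)["options"]["A"])  # -> "None of the other answers"
```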

    Sixty-eight of the questions met the criteria for this test set. Each question presented a clinical scenario and asked for the most appropriate next step in treatment or diagnosis. One example involved a newborn with an inward-turning foot—a typical case of metatarsus adductus, which usually resolves on its own. In the original version, “Reassurance” was the correct answer. In the modified version, “Reassurance” was removed and replaced with “None of the other answers,” making the task more challenging.

    Bedi and her colleagues then evaluated six widely used artificial intelligence models, including GPT-4o, Claude 3.5 Sonnet, Gemini 2.0 Flash, and others. All models were prompted to reason through each question using a method called chain-of-thought, which encourages step-by-step explanations of their answers. This approach is intended to support more deliberate reasoning rather than simple guesswork.

    The models were tested on both the original and modified questions, and the researchers compared their performance across these two conditions. They used statistical methods to measure the significance of any accuracy drops, with a focus on whether each model could maintain performance when familiar patterns were removed.
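
    The article does not spell out the exact test used; one standard choice for paired right/wrong outcomes on the same items is a McNemar-style exact binomial test on the discordant pairs, sketched here with toy data.

```python
# McNemar-style paired comparison (an assumed choice, not necessarily the
# study's exact method): count items correct only on the original (b) vs.
# only on the NOTA version (c), then test b vs. c with an exact binomial.
from scipy.stats import binomtest

orig_correct = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # toy per-item outcomes
nota_correct = [1, 0, 0, 0, 1, 0, 0, 1, 1, 0]

b = sum(o == 1 and n == 0 for o, n in zip(orig_correct, nota_correct))
c = sum(o == 0 and n == 1 for o, n in zip(orig_correct, nota_correct))
print(b, c, binomtest(b, n=b + c, p=0.5).pvalue)
```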

    The results suggest that none of the models passed this test unscathed. All six experienced a noticeable decline in accuracy when presented with the NOTA-modified questions. Some models, like DeepSeek-R1 and o3-mini, were more resilient than others, showing drops of around 9 to 16 percent.

    But the more dramatic declines were seen in widely used models such as GPT-4o and Claude 3.5 Sonnet, which showed reductions of over 25 percent and 33 percent, respectively. Llama 3.3-70B had the largest drop in performance, answering nearly 40 percent more questions incorrectly when the correct answer was replaced with “None of the other answers.”

    “What surprised us most was the consistency of the performance decline across all models, including the most advanced reasoning models like DeepSeek-R1 and o3-mini,” Bedi told PsyPost.

    These findings suggest that current AI models tend to rely on recognizing common patterns in test formats, rather than reasoning through complex medical decisions. When familiar options are removed or altered, performance deteriorates, sometimes dramatically.

    The researchers interpret this pattern as evidence that many AI systems may not be equipped to handle novel clinical situations—at least not yet. In real-world medicine, patients often present with overlapping symptoms, incomplete histories, or unexpected complications. If an AI system cannot handle minor shifts in question formatting, it may also struggle with these kinds of real-life variability.

    “These AI models aren’t as reliable as their test scores suggest,” Bedi said. “When we changed the answer choices slightly, performance dropped dramatically, with some models going from 80% accuracy down to 42%. It’s like having a student who aces practice tests but fails when the questions are worded differently. For now, AI should help doctors, not replace them.”

    While the study was relatively small, limited to 68 test questions, the consistency of the performance decline across all six models raised concern. The authors acknowledge that more research is needed, including testing larger and more diverse datasets and evaluating models using different methods, such as retrieval-augmented generation or fine-tuning on clinical data.

    “We only tested 68 questions from one medical exam, so this isn’t the full picture of AI capabilities,” Bedi noted. “Also, we used a specific way to test reasoning, there might be other approaches that reveal different strengths or weaknesses. Real clinical deployment would likely involve more sophisticated setups than what we tested.”

    Still, the authors suggest their results point to three major priorities moving forward: building evaluation tools that separate true reasoning from pattern recognition, improving transparency around how current systems handle novel medical problems, and developing new models that prioritize reasoning abilities.

    “We want to build better tests that can tell the difference between AI systems that reason versus those that just memorize patterns,” Bedi said. “We’re also hoping this work pushes the field toward developing AI that’s more genuinely reliable for medical use, not just good at taking tests.”

    “The main thing is that impressive test scores don’t automatically mean an AI system is ready for the real world. Medicine is complicated and unpredictable, and we need AI systems that can handle that complexity safely. This research is about making sure we get there responsibly.”

    The study, “Fidelity of Medical Reasoning in Large Language Models,” was authored by Suhana Bedi, Yixing Jiang, Philip Chung, Sanmi Koyejo, and Nigam Shah.

    Continue Reading

  • Why tiny bee brains could hold the key to smarter AI

    A new discovery of how bees use their flight movements to facilitate remarkably accurate learning and recognition of complex visual patterns could mark a major change in how next-generation AI is developed, according to a University of Sheffield study.

  • The University of Sheffield team built a digital model of a bee’s brain that explains how these movements create clear, efficient brain signals, allowing bees to easily understand what they see
  • This discovery could revolutionize AI and robotics, suggesting that future robots can be smarter and more efficient by using movement to gather relevant information, rather than relying on huge computer networks
  • The study highlights a big idea: intelligence comes from how brains, bodies and the environment work together. It demonstrates how even tiny insect brains can solve complex visual tasks using very few brain cells, which has major implications for both biology and AI

    By building a computational model — or a digital version of a bee’s brain — researchers have discovered how the way bees move their bodies during flight helps shape visual input and generates unique electrical messages in their brains. These movements generate neural signals that allow bees to easily and efficiently identify predictable features of the world around them. This ability means bees demonstrate remarkable accuracy in learning and recognizing complex visual patterns during flight, such as those found in a flower.

    The model not only deepens our understanding of how bees learn and recognize complex patterns through their movements, but also paves the way for next-generation AI. It demonstrates that future robots can be smarter and more efficient by using movement to gather information, rather than relying on massive computing power.

    Professor James Marshall, Director of the Centre of Machine Intelligence at the University of Sheffield and senior author on the study, said: “In this study we’ve successfully demonstrated that even the tiniest of brains can leverage movement to perceive and understand the world around them. This shows us that a small, efficient system — albeit the result of millions of years of evolution — can perform computations vastly more complex than we previously thought possible.

    “Harnessing nature’s best designs for intelligence opens the door for the next generation of AI, driving advancements in robotics, self-driving vehicles and real-world learning.”

    The study, a collaboration with Queen Mary University of London, was recently published in the journal eLife. It builds on the team’s previous research into how bees use active vision — the process where their movements help them collect and process visual information. While their earlier work observed how bees fly around and inspect specific patterns, this new study provides a deeper understanding of the underlying brain mechanisms driving that behavior.

    The sophisticated visual pattern-learning abilities of bees, such as differentiating between human faces, have long been recognized; however, the study’s findings shed new light on how pollinators navigate the world with such seemingly simple efficiency.

    Dr. HaDi MaBouDi, lead author and researcher at the University of Sheffield, said: “In our previous work, we were fascinated to discover that bees employ a clever scanning shortcut to solve visual puzzles. But that just told us what they do; for this study, we wanted to understand how.

    “Our model of a bee’s brain demonstrates that its neural circuits are optimized to process visual information not in isolation, but through active interaction with its flight movements in the natural environment, supporting the theory that intelligence comes from how the brain, bodies and the environment work together.

    “We’ve learnt that bees, despite having brains no larger than a sesame seed, don’t just see the world — they actively shape what they see through their movements. It’s a beautiful example of how action and perception are deeply intertwined to solve complex problems with minimal resources. This is something that has major implications for both biology and AI.”

    The model shows that bee neurons become finely tuned to specific directions and movements as their brain networks gradually adapt through repeated exposure to various stimuli, refining their responses without relying on associations or reinforcement. This lets the bee’s brain adapt to its environment simply by observing while flying, without requiring instant rewards. This means the brain is incredibly efficient, using only a few active neurons to recognize things, conserving both energy and processing power.
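
    The paper’s circuit model is not reproduced here, but the flavor of such reward-free tuning can be sketched with a classic Hebbian learning rule plus normalization (Oja’s rule): a unit repeatedly exposed to structured input becomes selective for the dominant input direction with no reinforcement signal at all.

```python
# Toy illustration of reward-free tuning (generic Oja's rule, not the
# study's bee-brain model): repeated exposure alone tunes the unit to
# the dominant direction in its input.
import numpy as np

rng = np.random.default_rng(1)
eta, n_steps = 0.01, 5000

preferred = np.array([0.8, 0.6])   # direction favored by, e.g., scanning
w = rng.normal(size=2)

for _ in range(n_steps):
    x = preferred * rng.normal() + 0.1 * rng.normal(size=2)  # structured input
    y = w @ x                              # unit's response
    w += eta * y * (x - y * w)             # Hebbian term + normalization

print("learned weights:", w)  # converges toward ±preferred (unit norm)
```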

    To validate their computational model, the researchers subjected it to the same visual challenges encountered by real bees. In a pivotal experiment, the model was tasked with differentiating between a ‘plus’ sign and a ‘multiplication’ sign. The model exhibited significantly improved performance when it mimicked the real bees’ strategy of scanning only the lower half of the patterns, a behaviour observed by the research team in a previous study.

    Even with just a small network of artificial neurons, the model successfully showed how bees can recognise human faces, underscoring the strength and flexibility of their visual processing.

    Professor Lars Chittka, Professor of Sensory and Behavioural Ecology at Queen Mary University of London, added: “Scientists have been fascinated by the question of whether brain size predicts intelligence in animals. But such speculations make no sense unless one knows the neural computations that underpin a given task.

    “Here we determine the minimum number of neurons required for difficult visual discrimination tasks and find that the numbers are staggeringly small, even for complex tasks such as human face recognition. Thus insect microbrains are capable of advanced computations.”

    Professor Mikko Juusola, Professor in System Neuroscience from the University of Sheffield’s School of Biosciences and Neuroscience Institute said: “This work strengthens a growing body of evidence that animals don’t passively receive information — they actively shape it.

    “Our new model extends this principle to higher-order visual processing in bees, revealing how behaviorally driven scanning creates compressed, learnable neural codes. Together, these findings support a unified framework where perception, action and brain dynamics co-evolve to solve complex visual tasks with minimal resources — offering powerful insights for both biology and AI.”

    By bringing together findings from how insects behave, how their brains work, and what the computational models show, the study shows how studying small insect brains can uncover basic rules of intelligence. These findings not only deepen our understanding of cognition but also have significant implications for developing new technologies.

    Continue Reading

  • The Milky Way could contain more than 10 billion exoplanets capable of harboring life

    For centuries, humans have wondered whether we are alone in the universe and whether other civilizations might exist, both beyond the Milky Way and within it. While no formal evidence of life beyond our planet has yet been found, recent work suggests that astronomers may have overlooked many planets capable of harboring life.

    Indeed, a study published in The Astrophysical Journal earlier this year finds that a dead star, known as a white dwarf, could host several planets on which life might develop. This marks a notable shift, because searches have so far focused mainly on stars that, like the Sun, still generate heat through nuclear activity at their cores.

    Although white dwarfs no longer sustain nuclear activity, they still radiate residual heat. On this subject, Aomawa Shields, a professor at the University of California, Irvine, told the US National Science Foundation:

    “Our computer simulations suggest that if rocky planets exist in their orbits, these planets could have more habitable real estate on their surfaces than previously thought.” 

    According to the researchers who carried out this study, the Milky Way contains more than 10 billion white dwarfs. If each of these stars hosts several planets, the chances that life has developed elsewhere increase dramatically.

    Given the sheer number of candidates, however, surveying them will take years, and the first results could prove disappointing even if surprises emerge later. In short, the odds of discovering another form of life are higher, but considerable uncertainty remains.

    Continue Reading

  • Longest canyon in the solar system reveals new secrets — Space photo of the week

    Mars’ Valles Marineris stretches nearly a quarter of the way around the planet’s equator. (Image credit: NASA/JPL-Caltech/University of Arizona)

    QUICK FACTS

    What it is: Candor Chasma, a large canyon on Mars

    Where it is: Valles Marineris, the biggest canyon network in the solar system

    When it was shared: Aug. 14, 2025

    Mars has a huge network of canyons that stretches about 2,500 miles (4,000 kilometers) across its equator. This canyon system, called Valles Marineris, is the largest in the solar system, dwarfing Earth’s largest canyon, which covers 460 miles (750 km) under Greenland’s ice sheet. (Condolences to the Grand Canyon and its mere 277-mile length.)

    First imaged by NASA’s Mariner 9 spacecraft in 1972, Valles Marineris has been captured by the HiRISE camera on NASA’s Mars Reconnaissance Orbiter many times in its 19 years in orbit. However, this geological wonder still holds many secrets.

    Continue Reading

  • Black holes that transform matter into dark energy could solve ‘cosmic hiccups’ mystery

    In a new study, scientists began pondering a pretty wild question: What if black holes can convert dead star matter into dark energy, the mystery force driving the acceleration of the expansion of the universe? If so, then it just might explain a multitude of “hiccups” in our models of the universe.

    This new theory proposes that black holes could actually be tiny “bubbles” of dark energy. This involves the conversion of matter into dark energy because black holes are born when massive stars collapse after exhausting their fuel for nuclear fusion. Thus, if this “cosmologically coupled black hole (CCBH)” hypothesis is correct, the transformation of a massive stellar core to a black hole represents the conversion of stellar matter to dark energy.

    Continue Reading

  • When To See Monday Evening’s Crescent Moon Glow Beside Mars

    Moon gazers across the globe will get the chance to catch a fragile lunar crescent in the western twilight sky on Monday, Aug. 25, 2025. Just one day after the razor-thin newborn moon reappears, the waxing crescent will shine 8% illuminated, sitting just below and to the right of Mars. Though the Red Planet is fading in brightness, the pairing will be a rewarding sight for anyone with a clear horizon — and perhaps a pair of binoculars, too.

    Where And When To Look

    The crescent moon will become visible in the western sky about 30 minutes after sunset. At this very early stage of the lunar cycle, the moon lingers low on the horizon and sets soon after sunset, so observers have less than an hour to enjoy the view before both objects sink into the twilight haze.

    Look due west, ideally from an open vantage point without trees, buildings or hills blocking the horizon. The moon will appear as a slim curve of light, hovering just below and slightly to the right of Mars.

    What You’ll See

    The moon will be a delicate crescent in the constellation Virgo, its darkened surface faintly lit by “Earthshine,” the sunlight reflected from Earth’s clouds, ice and oceans. Mars, now getting dimmer, will glow faintly just above and to the left of the lunar crescent.

    Since it reached opposition on Jan. 16 — its brightest and closest appearance to Earth since 2022 — Mars has been steadily getting fainter as Earth pulls away from it on its faster orbit of the sun. Mars will eventually be lost in the sun’s glare by late November. The next opposition — when Mars once again shines brilliantly — will not occur until Feb. 19, 2027.

    Observing Tips

    Catching the moon and Mars this evening requires a little preparation. Be outside 20–30 minutes after sunset where you are on Monday evening, and begin scanning low along the horizon. A west-facing overlook, beach, or open field offers the best chance. Binoculars will help you locate Mars if it’s faint against the twilight glow, but what they’ll be really useful for is studying the moon, whose entire disk may be faintly visible thanks to “Earthshine.” The ghostly glow looks fabulous in binoculars.

    What’s Next in the Night Sky

    This pairing is just the start of several beautiful evening arrangements. On Tuesday, Aug. 26, the 14%-lit crescent moon will slide between Mars and Spica, Virgo’s brightest star. By Wednesday, Aug. 27, a 21%-lit crescent will shine next to Spica and Mars, with the brilliant orange star Arcturus directly above.

    Wishing you clear skies and wide eyes.

    Continue Reading