Category: 7. Science

  • Primordial Black Holes Could Act As Seeds For Quasars

    Plenty of groups have been theorizing about Primordial Black Holes (PBHs) recently, in part because of their candidacy as a potential source of dark matter. But if they existed, they would also have had other roles to play in the early universe. According to a recent draft paper released on arXiv by Jeremy Mould and Adam Batten of Swinburne University, one of those roles could have been as the seeds that eventually grow into both quasars and radio galaxies.

    PBHs are theorized to have formed very early in the universe, within the first few thousand years. Instead of being created by the collapse of a massive star, like the black holes we are most familiar with today, they would have formed from small density fluctuations in the hot radiation environment that filled the entire universe at that time. We still see those fluctuations as small variations in the cosmic microwave background signal reaching Earth, though no one has yet definitively tied them to the formation of PBHs.

    But if PBHs did exist that early, their gravitational pull could have acted as a “seed” for accreting the gas and dust that would eventually allow them to grow into the supermassive black holes at the centers of the brightest known objects in the universe – quasars. The quasar population’s luminosity evolves over cosmic time, and the quasar luminosity function (QLF), which describes that evolution, is an important piece of mathematics showing how quasars evolve and shape the formation of the galaxies around them.

    Importantly, the mathematics behind the QLF, which is built from observations of the quasars themselves, aligns neatly with the predictions of the theory in which a PBH acts as a seed that evolves into a quasar. The QLF also fits a mathematical formula called the Schechter function, lending more credence to the theory. Just as importantly, the theory offers an answer to what fuels quasars in the first place.
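    For reference, the Schechter function mentioned above has a standard general form (quoted here from the standard literature, not reproduced from the Mould & Batten paper):

```latex
% Schechter luminosity function (general form)
\phi(L)\,dL \;=\; \phi^{*}\left(\frac{L}{L^{*}}\right)^{\alpha} e^{-L/L^{*}}\,\frac{dL}{L^{*}}
```

    Here φ* sets the overall number density, L* marks the characteristic “knee” luminosity, and α controls how steeply the faint end rises; fitting those parameters to quasar surveys is what defines the observed QLF.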

    Tiny galaxies, invisible to us at such distances, could be the fuel that lights up quasars as the quasars swallow them. As the supermassive black hole at the center of a quasar consumes the galaxies nearby, it slowly loses luminosity, tracking the QLF curve, which shows that a quasar is likely to be less bright the older it is (i.e., the lower its redshift).

    Another interesting consequence of the theory put forward in the paper is a tie between quasars and radio galaxies, a type of galaxy that emits strongly in the radio spectrum. If the idea that quasars are “seeded” by primordial black holes is correct, a quasar could eventually become a radio galaxy once it has calmed down and consumed all of the matter nearest to it. To support the point, the authors note the similarities between the luminosity functions of quasars and radio galaxies, with the radio galaxies’ overall amplitude scaled down. Because they are scaled down, though, they also last longer, with the expected lifetime of a radio galaxy being about 10 times that of a quasar, according to the paper.

    While this theory aligns well with much of the observational data collected on quasars and radio galaxies so far, it also makes predictions that render it falsifiable. First, it suggests that quasars could be used as standard candles for measuring cosmological distances, a role currently held by Type Ia supernovae because of their standardized brightness. Quasars’ origins from PBHs could provide a baseline for understanding their brightness, eventually allowing them to be used as standard candles as well.

    More concretely, the James Webb Space Telescope will be able to observe quasars even further back in time than has ever been possible before. If the new data align with the theory’s predictions, then, according to the idealized version of the scientific method, the theory should gain traction among other scientists. It might be a while before Webb releases any data that could prove or disprove it, but it is always nice to have a cosmological theory with clearly testable predictions. If it also gives cosmologists another way to measure distance and a better understanding of galaxy formation in the early universe, that would be an added bonus.

    Learn More:

    J Mould & A Batten – If quasars form from primordial black holes

    UT – Could Primordial Black Holes Be Hiding in Plain Sight?

    UT – Primordial Black Holes Could Have Accelerated Early Star Formation

    UT – JWST Cycle 4 Spotlight, Part 3: Supermassive Black Holes and Cosmic Noon

  • MIT confirms Einstein was wrong in century-old light experiment

    Scientists have long grappled with a fundamental question: what exactly is light?

    Is it a wave, flowing like ripples across water, or is it made up of tiny particles, like miniature paintballs zipping through space?

    This fundamental question was at the heart of the double-slit experiment, demonstrating light’s dual nature.

    Recently, physicists at MIT conducted a version of the experiment with remarkable atomic precision.

    The result definitively resolves a long-standing debate between quantum giants Albert Einstein and Niels Bohr about the elusive nature of light.

    “MIT physicists confirm that, like Superman, light has two identities that are impossible to see at once,” the researchers noted.

    Light’s dual nature

    The double-slit experiment, first demonstrated by Thomas Young in 1801, has evolved into a cornerstone of quantum mechanics.

    It famously illustrates that light exhibits both wave-like and particle-like behavior.

    Initially, the experiment showed light creating an interference pattern—alternating bright and dark stripes—when passed through two slits, behaving like a wave.

    However, if scientists tried to observe which slit the light went through, the interference pattern vanished, and light suddenly behaved like particles. 

    This puzzling phenomenon highlights a core principle of quantum mechanics: light exists as both a particle and a wave, but you can only observe one aspect at a time.

    This seemingly paradoxical “wave-particle duality” baffled early quantum pioneers.

    This quantum riddle led to a famous debate between Einstein and Bohr in 1927.

    Einstein believed he could devise an experiment to observe light’s particle path and wave interference simultaneously. 

    Bohr, leveraging the uncertainty principle, argued that any attempt to measure the photon’s path would inevitably disturb it and destroy the interference pattern.

    Over the decades, many versions of the double-slit experiment have confirmed Bohr’s view. 

    But now, MIT physicists, led by Professor Wolfgang Ketterle, have performed the most “idealized” version yet, taking it to its quantum core.

    Instead of physical slits, they used individual ultracold atoms as the “slits.” 

    The team cooled over 10,000 atoms to near absolute zero and arranged them in a precise, crystal-like lattice using lasers. Each atom was effectively an isolated, identical slit.

    They then shone a very weak light beam, ensuring that “each atom scattered at most one photon.”

    The scientists hypothesized that their setup—using individual atoms precisely arranged—could serve as a miniature double-slit experiment. 

    By directing a faint light beam at the atoms, they could study how single photons interacted with two neighboring atoms, revealing whether the light behaved as a wave or a particle.

    “What we have done can be regarded as a new variant to the double-slit experiment. These single atoms are like the smallest slits you could possibly build,” said Ketterle. 

    Settling the debate

    The team was able to precisely tune the “fuzziness” of these atomic slits by adjusting the laser light holding them.

    The fuzziness of these atoms affected how much information was obtained about a photon’s path.

    Their results fully agreed with quantum theory and definitively sided with Bohr.

    They discovered a clear relationship: the more precisely they determined a photon’s path (confirming its particle-like behavior), the more the wave-like interference pattern faded.
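    The trade-off described above is often written as the standard wave-particle duality relation (a textbook result quoted here for context, not a formula from the new paper):

```latex
% Englert-Greenberger-Yasin duality relation
\mathcal{D}^{2} + \mathcal{V}^{2} \;\le\; 1
```

    Here D is the which-path distinguishability (how well the photon’s route through the slits can be known) and V is the visibility, or contrast, of the interference fringes. Perfect path knowledge (D = 1) forces the fringes to vanish (V = 0), which is exactly the behavior the MIT measurements traced out as the atomic “slits” were made more or less fuzzy.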

    The researchers observed that the wave interference pattern weakened any time an atom was nudged by a photon passing by. This confirmed that getting information about the photon’s route automatically erased its wave-like properties.

    The findings solidified that Einstein was incorrect in this specific quantum scenario, reaffirming the strange and counterintuitive nature of quantum reality.

    The findings were published in the journal Physical Review Letters.

  • Tackling Brain Disorders by Targeting the “Junk” in Their Genes

    If you could look at the entire human genome, you might notice that large sections seem to have been created by a photocopier stuck in the on position.

    Fully half of our genome consists of recurring DNA sequences, from short, three-letter strings arranged one right after another to long stretches of hundreds of base pairs scattered over multiple chromosomes.

    “For a long time, people thought that this was just ‘junk DNA’—byproducts that accumulated during the evolution of the human genome,” says Xiao Shawn Liu, PhD, the Joan and Paul Marks, MD ’49 Assistant Professor of Physiology and Cellular Biophysics at Columbia University Vagelos College of Physicians and Surgeons.

    But scientists have increasingly found that while some repetitive sequences—collectively dubbed the “repeatome”—play important roles in gene regulation, others may have devastating impacts on health.

    Short tandem repeats

    Liu is focused on a small and little understood category of the repeatome called short tandem repeats, or STRs. STRs are brief sequences of DNA, typically two to 12 base pairs in length, repeated one right after the other (e.g., CAGCAGCAGCAG).
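    To make the idea of a repeat “expanding” concrete, the short sketch below counts the longest uninterrupted run of a motif in a DNA string. It is a generic illustration with made-up example data, not code or thresholds from Liu’s lab:

```python
def longest_tandem_run(sequence: str, motif: str) -> int:
    """Return the largest number of back-to-back copies of `motif` found in `sequence`."""
    best = 0
    for start in range(len(sequence)):
        count = 0
        pos = start
        # count consecutive copies of the motif beginning at this position
        while sequence.startswith(motif, pos):
            count += 1
            pos += len(motif)
        best = max(best, count)
    return best

# Hypothetical example: a CAG tract like those implicated in Huntington's disease
seq = "GGTCC" + "CAG" * 42 + "TTAGC"
print(longest_tandem_run(seq, "CAG"))  # -> 42 copies
```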

    “For reasons we don’t understand, STRs sometimes expand, adding more copies to the sequence,” Liu says. “Though some of these expansions are harmless, others can cause significant problems, particularly in developing or aging neurons.”

    Expanded STRs are now thought to play a role in 50 different neurodegenerative diseases, including amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD), and Huntington’s disease.

    Liu’s work hints that it may be possible to prevent STR expansion, or at least counteract their effects, by leveraging the power of epigenetic mechanisms to control these pathological repeats.

    Early in his career, Liu made a name for himself creating tools to edit the epigenome—chemical compounds and proteins that attach themselves to the genome and tell it what to do. Liu’s CRISPR-based methods allow precise editing of methyl groups, a common epigenetic attachment that turns genes on or off. These tools give researchers new abilities to explore how specific epigenetic attachments contribute to disease and, possibly, develop therapies that counteract deleterious attachments.

    “Naturally, when you make a new tool, you want to see what it can do,” explains Liu, who joined Columbia University in 2020 after finishing a postdoctoral fellowship at the Whitehead Institute. “After developing our DNA methylation editing tools, we were curious to see if they could be used to edit expanded STRs.”

    Editing the epigenome

    In tests with motor neurons derived from patients with ALS, DNA methylation editing dramatically reduced the number of repeats inside the STR, eliminated toxic molecules that cause neuronal damage, and restored the function of the diseased neurons—potentially paving the way for an entirely new approach to the treatment of brain diseases.

    Based on these preliminary studies, Liu recently received a coveted MIND (Maximizing Innovation in Neuroscience Discovery) Prize, awarded by Pershing Square Philanthropies to empower early-to-mid-career investigators to “rethink conventional paradigms around neurodegenerative diseases.”

    It is still unclear how methylation editing reduces the repeats inside the ALS gene’s STR, and Liu is using the three-year, $750,000 prize to further explore the mechanism and apply this strategy to other models of neurodegenerative disease.

    In laboratory experiments, Liu’s DNA methylation tools have already shown promise in counteracting STRs that cause fragile X syndrome, the most common inherited cause of intellectual disability and autism spectrum traits. By removing methyl groups around the gene that is silenced in fragile X, Liu’s editing tools reactivate the gene and restore normal functions to affected neurons.

    “We’re just beginning to understand these parts of our DNA,” says Liu. “Some of these copies may well be leftover bits of genetic code, but some clearly have important functions, and others are potentially dangerous. There’s a lot we need to sort out.”

  • Chinese lab invents machine to make bricks on moon -Xinhua

    Lunar soil samples collected from the far side of the moon are on display at the preview of a science exhibition marking the 10th Space Day of China at Shanghai World Expo Exhibition and Convention Center in east China’s Shanghai, April 23, 2025. (Xinhua/Zhang Jiansong)

    HEFEI, July 28 (Xinhua) — A Chinese research team has developed a “lunar brick-making machine” that can produce bricks from moon soil, bringing the sci-fi vision of “building houses on the moon with local materials” closer to reality.

    The in-situ lunar soil 3D printing system, developed by the Deep Space Exploration Laboratory (DSEL) based in east China’s Hefei, uses concentrated solar energy to melt and mold lunar soil, the Science and Technology Daily reported on Monday.

    MAKING BRICKS ON MOON

    According to Yang Honglun, a senior engineer at DSEL, the lunar brick-making machine uses a parabolic reflector to concentrate solar energy. This concentrated energy is then transmitted through a fiber optic bundle. At the end of this bundle, the solar concentration ratio can exceed 3,000 times the normal intensity. A high-precision optical system then focuses this concentrated sunlight onto a small point, heating it beyond 1,300 degrees Celsius to melt lunar soil.
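    As a rough sanity check (our own back-of-the-envelope estimate, not a calculation from DSEL), a ~3,000x solar concentration is more than enough in principle to reach the quoted temperatures, since the ideal re-radiation limit at the focus sits far above 1,300 degrees Celsius:

```python
# Idealized upper bound on the focal-spot temperature of a solar concentrator,
# using the Stefan-Boltzmann law. Illustrative numbers only.
SOLAR_FLUX = 1361.0   # W/m^2, solar constant (no atmosphere to attenuate it on the Moon)
SIGMA = 5.670e-8      # W/(m^2 K^4), Stefan-Boltzmann constant
CONCENTRATION = 3000  # concentration ratio reported for the fiber-optic bundle

flux_at_focus = CONCENTRATION * SOLAR_FLUX    # W/m^2 delivered to the focal spot
t_limit_k = (flux_at_focus / SIGMA) ** 0.25   # K, blackbody equilibrium limit

print(f"~{flux_at_focus:.2e} W/m^2 at the focus, ~{t_limit_k - 273.15:.0f} deg C upper bound")
# prints roughly 2,600 deg C, comfortably above the >1,300 deg C needed to melt lunar soil
```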

    The bricks produced by the machine are made entirely from in-situ lunar soil resources without any additional additives. Moreover, these lunar soil bricks exhibit high strength and density, making them suitable not only for constructing buildings but also for infrastructure needs such as equipment platforms and road surfaces.

    From conceptual design to prototype development, the research team spent about two years figuring out how to overcome multiple technical challenges, such as efficient energy transmission and lunar soil transport.

    For example, the mineral composition of lunar soil varies significantly across different regions of the moon. To ensure the machine can adapt to various types of lunar soil, researchers developed multiple simulated lunar soil samples and conducted extensive testing on the machine before finalizing its design.

    BUILDING HOUSES ON MOON

    “Although the lunar brick-making machine has achieved breakthroughs, constructing habitable structures on the moon still requires overcoming other technological barriers,” Yang said.

    He explained that under the moon’s extreme conditions, such as high vacuum and low gravity, lunar soil bricks alone cannot support habitat construction.

    “The bricks will primarily serve as protective surface layers for habitats. They must be integrated with rigid structural modules and inflatable soft-shell modules to complete the construction of a lunar base,” he added.

    He mentioned a series of technological developments, including lunar brick manufacturing, architectural component assembly, and evaluation of building structure, along with operational validation of both the brick-making machine and construction processes under actual lunar surface conditions.

    The habitat modules are designed to withstand the air pressure necessary for human occupancy and are also equipped to integrate with the lunar brick-making machine and surface construction robots, creating a complete building system, he added.

    China initiated the International Lunar Research Station (ILRS), a scientific experimental facility consisting of sections on the lunar surface and in lunar orbit. It is projected to be built in two phases: a basic model to be built by 2035 in the lunar south pole region, and an extended model to be built in the 2040s.

    As of April 2025, a total of 17 countries and international organizations, and more than 50 international research institutions, have joined the ILRS.

    Chinese scientists have made simulated lunar soil bricks and sent them to China’s space station via the Tianzhou-8 cargo spacecraft launched in November 2024. Astronauts aboard the space station are set to conduct space exposure experiments on these bricks to evaluate their mechanical properties, thermal performance, and radiation resistance to acquire critical data for future lunar construction. 

  • New system dramatically speeds the search for polymer materials | MIT News

    Scientists often seek new materials derived from polymers. Rather than starting a polymer search from scratch, they save time and money by blending existing polymers to achieve desired properties.

    But identifying the best blend is a thorny problem. Not only is there a practically limitless number of potential combinations, but polymers interact in complex ways, so the properties of a new blend are challenging to predict.

    To accelerate the discovery of new materials, MIT researchers developed a fully autonomous experimental platform that can efficiently identify optimal polymer blends.

    The closed-loop workflow uses a powerful algorithm to explore a wide range of potential polymer blends, feeding a selection of combinations to a robotic system that mixes chemicals and tests each blend.

    Based on the results, the algorithm decides which experiments to conduct next, continuing the process until the new polymer meets the user’s goals.

    During experiments, the system autonomously identified hundreds of blends that outperformed their constituent polymers. Interestingly, the researchers found that the best-performing blends did not necessarily use the best individual components.

    “I found that to be good confirmation of the value of using an optimization algorithm that considers the full design space at the same time,” says Connor Coley, the Class of 1957 Career Development Assistant Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science, and senior author of a paper on this new approach. “If you consider the full formulation space, you can potentially find new or better properties. Using a different approach, you could easily overlook the underperforming components that happen to be the important parts of the best blend.”

    This workflow could someday facilitate the discovery of polymer blend materials that lead to advancements like improved battery electrolytes, more cost-effective solar panels, or tailored nanoparticles for safer drug delivery.

    Coley is joined on the paper by lead author Guangqi Wu, a former MIT postdoc who is now a Marie Skłodowska-Curie Postdoctoral Fellow at Oxford University; Tianyi Jin, an MIT graduate student; and Alfredo Alexander-Katz, the Michael and Sonja Koerner Professor in the MIT Department of Materials Science and Engineering. The work appears today in Matter.

    Building better blends

    When scientists design new polymer blends, they are faced with a nearly endless number of possible polymers to start with. Once they select a few to mix, they still must choose the composition of each polymer and the concentration of polymers in the blend.

    “Having that large of a design space necessitates algorithmic solutions and higher-throughput workflows because you simply couldn’t test all the combinations using brute force,” Coley adds.

    While researchers have studied autonomous workflows for single polymers, less work has focused on polymer blends because of the dramatically larger design space.

    In this study, the MIT researchers sought new random heteropolymer blends, made by mixing two or more polymers with different structural features. These versatile polymers have shown particular promise for stabilizing enzymes used in high-temperature enzymatic catalysis, a process that speeds up chemical reactions.

    Their closed-loop workflow begins with an algorithm that, based on the user’s desired properties, autonomously identifies a handful of promising polymer blends.

    The researchers originally tried a machine-learning model to predict the performance of new blends, but it was difficult to make accurate predictions across the astronomically large space of possibilities. Instead, they utilized a genetic algorithm, which uses biologically inspired operations like selection and mutation to find an optimal solution.

    Their system encodes the composition of a polymer blend into what is effectively a digital chromosome, which the genetic algorithm iteratively improves to identify the most promising combinations.

    “This algorithm is not new, but we had to modify the algorithm to fit into our system. For instance, we had to limit the number of polymers that could be in one material to make discovery more efficient,” Wu adds.

    In addition, because the search space is so large, they tuned the algorithm to balance its choice of exploration (searching for random polymers) versus exploitation (optimizing the best polymers from the last experiment).

    The algorithm sends 96 polymer blends at a time to the autonomous robotic platform, which mixes the chemicals and measures the properties of each.
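    The sketch below shows the general shape of such a loop under our own simplifying assumptions: a blend is a vector of polymer concentrations, and a stand-in scoring function plays the role of the robotic assay. It is not the MIT group’s code, and it omits details such as their cap on the number of polymers per blend:

```python
import random

N_POLYMERS = 20     # assumed size of the polymer library (illustrative)
BATCH = 96          # blends sent to the robotic platform per round, as in the article

def random_blend():
    """A 'chromosome': one concentration per candidate polymer, normalized to sum to 1."""
    w = [random.random() for _ in range(N_POLYMERS)]
    s = sum(w)
    return [x / s for x in w]

def mutate(blend, rate=0.1):
    """Perturb a few concentrations, then renormalize."""
    w = [max(0.0, x + random.gauss(0, rate)) if random.random() < 0.2 else x for x in blend]
    s = sum(w) or 1.0
    return [x / s for x in w]

def crossover(a, b):
    """Mix two parent blends component-wise."""
    w = [random.choice(pair) for pair in zip(a, b)]
    s = sum(w) or 1.0
    return [x / s for x in w]

def measure_rea(blend):
    """Stand-in for the robotic assay: returns a fake 'retained enzymatic activity' score."""
    return sum(x * random.uniform(0.2, 0.9) for x in blend)  # placeholder, not real chemistry

population = [random_blend() for _ in range(BATCH)]
for generation in range(10):
    scored = sorted(population, key=measure_rea, reverse=True)
    parents = scored[: BATCH // 4]                              # exploitation: keep the best blends
    children = [crossover(random.choice(parents), random.choice(parents)) for _ in range(BATCH // 2)]
    explorers = [random_blend() for _ in range(BATCH // 4)]     # exploration: fresh random blends
    population = [mutate(c) for c in parents + children] + explorers

print("best blend this run:", max(population, key=measure_rea)[:5], "...")
```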

    The experiments were focused on improving the thermal stability of enzymes by optimizing the retained enzymatic activity (REA), a measure of how stable an enzyme is after mixing with the polymer blends and being exposed to high temperatures.

    These results are sent back to the algorithm, which uses them to generate a new set of polymers until the system finds the optimal blend.

    Accelerating discovery

    Building the robotic system involved numerous challenges, such as developing a technique to evenly heat polymers and optimizing the speed at which the pipette tip moves up and down.

    “In autonomous discovery platforms, we emphasize algorithmic innovations, but there are many detailed and subtle aspects of the procedure you have to validate before you can trust the information coming out of it,” Coley says.

    When tested, the optimal blends their system identified often outperformed the polymers that formed them. The best overall blend performed 18 percent better than any of its individual components, achieving an REA of 73 percent.

    “This indicates that, instead of developing new polymers, we could sometimes blend existing polymers to design new materials that perform even better than individual polymers do,” Wu says.

    Moreover, their autonomous platform can generate and test 700 new polymer blends per day and only requires human intervention for refilling and replacing chemicals.

    While this research focused on polymers for protein stabilization, their platform could be modified for other uses, like the development of new plastics or battery electrolytes.

    In addition to exploring additional polymer properties, the researchers want to use experimental data to improve the efficiency of their algorithm and develop new algorithms to streamline the operations of the autonomous liquid handler.

    “Technologically, there are urgent needs to enhance thermal stability of proteins and enzymes. The results demonstrated here are quite impressive. Being a platform technology and given the rapid advancement in machine learning and AI for material science, one can envision the possibility for this team to further enhance random heteropolymer performances or to optimize design based on end needs and usages,” says Ting Xu, an associate professor at the University of California at Berkeley, who was not involved with this work.

    This work is funded, in part, by the U.S. Department of Energy, the National Science Foundation, and the Class of 1947 Career Development Chair.

  • Fermented meat with a side of maggots: A new look at the Neanderthal diet

    Traditionally, Indigenous peoples almost universally viewed thoroughly putrefied, maggot-infested animal foods as highly desirable fare, not starvation rations. In fact, many such peoples routinely and often intentionally allowed animal foods to decompose to the point where they were crawling with maggots, in some cases even beginning to liquefy.

    This rotting food would inevitably emit a stench so overpowering that early European explorers, fur trappers, and missionaries were sickened by it. Yet Indigenous peoples viewed such foods as good to eat, even a delicacy. When asked how they could tolerate the nauseating stench, they simply responded, “We don’t eat the smell.”

    Neanderthals’ cultural practices, similar to those of Indigenous peoples, might be the answer to the mystery of their high δ¹⁵N values. Ancient hominins were butchering, storing, preserving, cooking, and cultivating a variety of items. All these practices enriched their paleo menu with foods in forms that nonhominin carnivores do not consume. Research shows that δ¹⁵N values are higher for cooked foods, putrid muscle tissue from terrestrial and aquatic species, and, with our study, for fly larvae feeding on decaying tissue.
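    For readers unfamiliar with the notation, δ¹⁵N is the standard isotope-ratio measure, defined relative to atmospheric nitrogen (this is the conventional definition, not something specific to the study described here):

```latex
% delta notation for nitrogen isotope ratios, reported in per mil
\delta^{15}\mathrm{N} \;=\; \left(\frac{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\text{sample}}}
{\left(^{15}\mathrm{N}/^{14}\mathrm{N}\right)_{\text{air}}} - 1\right) \times 1000
```

    Higher values mean a tissue is enriched in the heavy isotope; because each step up the food chain typically adds a few per mil, the unusually high values measured in Neanderthal bone collagen were long read as evidence of a hypercarnivorous, meat-heavy diet.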

    The high δ¹⁵N values of maggots associated with putrid animal foods help explain how Neanderthals could have included plenty of other nutritious foods beyond only meat while still registering δ¹⁵N values we’re used to seeing in hypercarnivores.

    We suspect the high δ¹⁵N values seen in Neanderthals reflect routine consumption of fatty animal tissues and fermented stomach contents, much of it in a semi-putrid or putrid state, together with the inevitable bonus of both living and dead ¹⁵N-enriched maggots.

    What still isn’t known

    Fly larvae are a fat-rich, nutrient-dense, ubiquitous, and easily procured insect resource, and both Neanderthals and early Homo sapiens, much like recent foragers, would have benefited from taking full advantage of them. But we cannot say that maggots alone explain why Neanderthals have such high δ¹⁵N values in their remains.

    Several questions about this ancient diet remain unanswered. How many maggots would someone need to consume to account for an increase in δ¹⁵N values above the expected values due to meat eating alone? How do the nutritional benefits of consuming maggots change the longer a food item is stored? More experimental studies on changes in δ¹⁵N values of foods processed, stored, and cooked following Indigenous traditional practices can help us better understand the dietary practices of our ancient relatives.

    Melanie Beasley is assistant professor of anthropology at Purdue University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

  • This Week in Astronomy with Dave Eicher: The Crescent Moon meets Mars – Astronomy Magazine

    1. This Week in Astronomy with Dave Eicher: The Crescent Moon meets Mars  Astronomy Magazine
    2. See Mars shine close to the waxing crescent moon after sunset on July 28  Space
    3. Witness the Stunning Conjunction of Moon and Mars with Meteor Showers on July 28!  MSN
    4. Lunar pairings and meteor showers  worldjournalnewspaper.com
    5. The Sky Today on Monday, July 28: The Maiden hosts the Moon and Mars  NewsBreak: Local News & Alerts

  • Vera Rubin Observatory glows under recalibration LEDs photo of the day for July 28, 2025

    Named in honor of astronomer Vera Rubin, whose pioneering work on galaxy rotation provided key evidence for dark matter, the Vera Rubin Observatory uses the world’s largest digital camera to peer at the night sky, looking for dark matter in our universe.

    What is it?

    This state-of-the-art facility houses the Simonyi Survey Telescope, which includes a 27.5-foot (8.4-meter) primary mirror. Inside the dome, Rubin uses special recalibration lights to check and correct for variations in the sensors and other instruments. These lights help engineers map uneven pixel responses and find shadows or other irregularities.
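    Conceptually, the maps made with these calibration lights are used the way flat fields are used in any astronomical camera. The snippet below is a generic flat-fielding sketch under that assumption, not Rubin Observatory pipeline code:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Generic flat-field correction: remove dark signal, divide out pixel-to-pixel response."""
    flat_norm = (flat - dark) / np.median(flat - dark)    # normalized sensor response map
    return (raw - dark) / np.clip(flat_norm, 1e-6, None)  # avoid dividing by dead pixels

# Toy example with a synthetic 4x4 sensor
rng = np.random.default_rng(0)
dark = np.full((4, 4), 5.0)                          # constant dark level
flat = dark + 100.0 * rng.uniform(0.8, 1.2, (4, 4))  # uneven pixel response under the LEDs
raw = dark + 50.0 * (flat - dark) / 100.0            # a "uniform" sky seen through that response
print(np.round(flat_field_correct(raw, dark, flat), 2))
# roughly 50 everywhere once the uneven pixel response is divided out
```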

    Where is it?

    The Vera Rubin Observatory sits atop the mountain Cerro Pachón in northern Chile.

    The red and blue lights reveal details of the Rubin Observatory. (Image credit: RubinObs/NSF/DOE/NOIRLab/SLAC/AURA/W. O’Mullane)

    Why is it amazing?

  • How brain-inspired analog systems could make drones more efficient

    Electrical and computer engineers want to mimic the brain’s visual system to create AI tools for guiding autonomous systems.

    The artificial intelligence systems that guide drones and self-driving cars rely on neural networks—trainable computing systems inspired by the human brain. But the digital computers they run on were originally designed for general-purpose tasks, from word processing to scientific calculations, and achieve ultra-high reliability at the cost of high power consumption.

    To explore novel computing systems that are energy efficient, particularly for machine learning, engineers at the University of Rochester are developing new analog hardware, with a possible application in more efficient drones. The Rochester engineers are attempting to do so by abandoning the conventional state-of-the-art neural networks developed on digital hardware for computer vision. Instead, they’re turning to predictive coding networks, which are based on neuroscience theories that the brain keeps a mental model of the environment and constantly updates it based on feedback from the eyes.

    “Research by neuroscientists has shown that the workhorse of developing neural networks—this mechanism called back propagation—is biologically implausible and our brains’ perception systems don’t work that way,” says Michael Huang, a professor of electrical and computer engineering, of computer science, and of data science and artificial intelligence at Rochester. “To solve the problem, we asked how our brains do it. The prevailing theory is predictive coding, which involves a hierarchical process of prediction and correction—think paraphrasing what you heard, telling it to the speaker, and using their feedback to refine your understanding.”
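    The “prediction and correction” loop in the quote can be written down in a few lines. The snippet below is a minimal, single-layer toy sketch of the idea under our own assumptions; it is not the Rochester team’s design, which targets hierarchical networks implemented in analog circuits:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.5, (8, 4))   # generative weights: latent features -> predicted input
x = rng.normal(0.0, 1.0, 8)        # an observed "image" (8 pixels, toy scale)
z = np.zeros(4)                    # the network's internal estimate of what it is seeing

lr = 0.1
for step in range(50):
    prediction = W @ z             # top-down: predict the input from the internal model
    error = x - prediction         # bottom-up: the prediction error is the "correction" signal
    z += lr * (W.T @ error)        # update the internal estimate to shrink the error
    # in a full predictive coding network, W itself is also updated from local z/error
    # signals, avoiding the backpropagation step criticized in the quote above

print("remaining prediction error:", round(float(np.linalg.norm(x - W @ z)), 3))
```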

    Huang notes that the University of Rochester has a rich history in computer vision research and that the late computer science professor Dana Ballard was an author on one of the earliest, most influential papers on predictive coding networks.

    The Rochester-led team includes Huang and electrical and computer engineering professors Hui Wu and Tong Geng, their students, as well as two research groups from Rice University and UCLA. The team will receive up to $7.2 million from the Defense Advanced Research Projects Agency (DARPA) over the next 54 months to develop biologically inspired predictive coding networks for digital image recognition built on analog circuits. While the initial prototype will look at classifying static images, if they can get the analog system to approach the performance of existing digital approaches, they believe it can be translated to more complex perception tasks needed by self-driving cars and autonomous drones.

    And while the approach is novel, the system will not use any experimental devices; instead, it will be manufactured using existing technologies such as complementary metal-oxide-semiconductor (CMOS) circuits.

  • Report paints grim picture of how nuclear war could impact oceans | CU Boulder Today

    Recent conflicts in Europe and the Middle East have reignited fears about the use of nuclear weapons. What would a nuclear conflict do to the planet’s environment today?

    In a new congressionally mandated report, oceanographer Nicole Lovenduski, who directs CU Boulder’s Institute of Arctic and Alpine Research (INSTAAR), outlined the dangerous fallout a nuclear war could bring, from firestorms and global cooling to ecosystem collapse and potentially irreversible ocean disruption.

    “The ocean makes up three-quarters of our planet’s surface,” said Lovenduski, a professor in the Department of Atmospheric and Oceanic Sciences. “Knowing how the ocean responds to changes in the environment is really important, because it can influence the global climate system.”

    To date, the only use of nuclear weapons in conflict occurred in 1945, when the United States dropped two atomic bombs on Japan. At the time, scientists were not tracking the environmental impact.

    According to Lovenduski’s previous research, nine nations possess more than 13,000 nuclear weapons in the world, with the United States and Russia controlling the most operational nuclear weapons. The stockpiles in countries including India, Pakistan, China, and North Korea have also increased in the past eight decades.

    Compiled by dozens of scientists across the country, the report aims to reevaluate the long-term environmental impacts of nuclear war using the latest scientific evidence.

    “What we learned in writing the report was that we need additional scientific research to adequately describe the potential environmental and climate consequences of a nuclear conflict. But we know enough to know that a nuclear war would be a horrific outcome for humanity,” Lovenduski said.

    CU Boulder Today sat down with Lovenduski to discuss how a nuclear conflict could change the ocean and why those changes are important.

    What happens when a nuclear weapon is detonated?

    Hypothetically, if there were to be a large-scale nuclear conflict on this planet that starts a lot of fire, there could be a firestorm that releases a lot of soot into the atmosphere. If it makes it all the way up into the stratosphere, where the air flow tends to be more stable, soot can stay there for a really long time and encapsulate the entire planet. That will lead to a dramatic reduction in the amount of sunlight that comes into our planet.

    Without sunlight, we cannot have photosynthesis.  Photosynthetic organisms, like plants on land and algae in the ocean, form the base of the food web for everything else. Without photosynthesis, we cannot have a source of food.

    What would happen to the ocean, specifically?

    If a lot of soot gets up into the stratosphere and blocks sunlight, it would cool the planet suddenly and significantly. That’s where the concept of nuclear winter first arose many decades ago.  

    Sea ice could extend all the way down to places in the Pacific and Atlantic that don’t currently have ice. That would affect how ocean currents move and whether surface seawater can sink and slow down large-scale circulation.

    We may no longer have, for example, the Gulf Stream bringing warm water northwards into the Atlantic, resulting in dramatic cooling of Northern Europe. Ocean currents are important in making sure many parts of the world remain habitable.

    Would people living far from coastal zones be impacted?

    As a result of nuclear winter, crops on land could fail. We might look to the ocean for a source of food. But if the fish don’t have anything to eat, we’re all going to starve. So even if there’s a conflict, and it doesn’t affect us directly where we live, the global population is at risk of starvation.

    How long would it take for the ocean to recover?

    The atmosphere moves pretty fast. If the soot above the Middle East enters the stratosphere, it can spread globally within one to two years.

    But the ocean moves really slowly. When water sinks in the North Atlantic, it can take hundreds, if not thousands, of years for that water to reemerge. So if you perturb the ocean, it can take a long time to recover. In some of the computer simulations we did, the simulation stopped before we even saw that recovery happen, because we were out of computing time. So we never saw the ocean recover in our simulations, which is scary.

    I hope we don’t ever go down this road. I hope that the people in charge of deciding whether or not to engage in nuclear conflict can learn from some of the work that we have done. I hope that the report leads to a world where there is no nuclear conflict.
