Category: 7. Science

  • a new way to detect disease at a glance

    Design and working principle of the multicolor urea biosensor.

GA, UNITED STATES, August 1, 2025 /EINPresswire.com/ — A team of researchers has developed a new biosensor that turns invisible chemical signals into visible color changes, offering a simple way to detect urea levels. The sensor combines gold nanoparticles with a pH-controlled chemical reaction whose color depends on the amount of urea present. As more urea is present, the reaction slows and the gold nanoparticles keep their shape, producing a visible color shift. This yields a clear, multicolor visual cue across a wide concentration range. The sensor can detect urea down to 0.098 µM in solution and 0.2 µM in solid form. It significantly outperforms traditional methods and performed well in real urine samples, opening new possibilities for easy, accurate urea monitoring at the point of care.

Urea is a vital indicator of human health, particularly for diagnosing kidney and liver function. It is also widely used in agriculture, where overuse can lead to environmental contamination. While colorimetric tests are popular for their simplicity, most existing tools rely on single-color changes that are difficult to interpret by the naked eye, especially at low concentrations. More advanced methods, like fluorometry or electrochemical sensing, require complex equipment or training, limiting their accessibility. Improving the clarity and sensitivity of urea detection, especially through a visual, low-tech method, could bridge this gap. Due to these challenges, there is a growing need to develop multicolor sensors capable of intuitive, high-resolution readouts for both clinical and environmental use.

Scientists at Sungkyunkwan University, South Korea, have unveiled a multicolor biosensing platform for urea detection, published (DOI: 10.1038/s41378-025-00931-5) on June 5, 2025, in Microsystems & Nanoengineering. The new sensor uses an enzyme called urease to break down urea, producing ammonia and raising the pH. This rise in pH prevents the chemical reaction that would normally change the gold nanoparticles, so the particles keep their shape. Unlike conventional tests, this biosensor offers five visually distinct colors that can be read by the naked eye: blue, violet, purple, pink, and red, depending on the urea level. The team validated the sensor’s performance in both liquid and solid formats, paving the way for convenient, ultra-sensitive urea testing in clinical and field settings.

To make the sensor more practical and easier to handle, the team also developed a solid version by embedding the sensing chemistry into a hydrogel, which makes it easier to store and use. Both the liquid and solid versions worked well and were not affected by other substances in urine. The sensor’s performance rivaled that of commercial urea kits, while offering the unique advantage of real-time, multicolor visual feedback. A built-in self-validation feature further ensures reliability by showing a clear color change only when all components function properly, making the sensor both powerful and foolproof.

    “This sensor is not only technically advanced but also user-centric,” said Professor Dong-Hwan Kim, senior author of the study. “Its multicolor output allows anyone—even without lab training—to interpret results clearly and quickly. By controlling the Fenton etching through a simple pH shift, we’ve unlocked a highly tunable visual signal that outperforms many current diagnostic tools. We believe this is a significant step forward for point-of-care diagnostics, especially in resource-limited settings.”

    This visually intuitive biosensor holds enormous potential for healthcare and environmental monitoring. Its solid-state format simplifies storage and usage, making it ideal for portable diagnostic kits, home testing, and rural clinics. In medical settings, the sensor can offer early warnings of kidney dysfunction or metabolic imbalance through simple urine analysis. In agriculture, it could be adapted for on-site detection of urea-based fertilizer runoff. Moreover, the underlying principle—pH-modulated nanoparticle etching—could be expanded to detect other analytes using similar strategies. With its combination of accuracy, ease of use, and multicolor feedback, this biosensor represents a meaningful leap toward accessible, next-generation diagnostics.

    DOI
    10.1038/s41378-025-00931-5

    Original Source URL
    https://doi.org/10.1038/s41378-025-00931-5

    Funding information
    This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2023-00228994), RS-2024-00346003, and the National Research Foundation of Korea (2020R1A5A1018052) and (RS-2024-00410209).

    Lucy Wang
    BioDesign Research



  • These States Could See Aurora Borealis Friday

    Topline

Northern residents of seven continental states may be able to view the northern lights Friday night, despite no significant geomagnetic storms being predicted, according to the latest National Oceanic and Atmospheric Administration forecast.

    Key Facts

    NOAA forecast a Kp index of two on a scale of nine for Friday, suggesting the northern lights might be more visible farther from the poles and into the northern United States.

Friday will give Americans the best chance of the next three days to see the lights, with the likelihood of geomagnetic activity and storms dropping daily through Sunday.


    Where Will The Northern Lights Be Visible?

The northern lights will have the best chance of being seen throughout Canada and Alaska, but NOAA’s predicted “view line” dips into Washington, Idaho, Montana, North Dakota, Minnesota, Wisconsin, and upper Michigan.

    What’s The Best Way To See The Northern Lights?

Usually from a high vantage point, away from light pollution, while facing north sometime between 10 p.m. and 2 a.m. local time, according to NOAA. The lights will be most visible this weekend at around 3 a.m. Saturday, according to NOAA’s Kp index forecast.

    What’s The Best Way To Photograph The Northern Lights?

    Flash on smartphones should be turned off and night mode enabled, NOAA suggests, and using a tripod can help to stabilize the image. With a separate camera, photography experts told National Geographic it’s best to use a wide-angle lens, an aperture or F-stop of four or less and a focus set to the furthest possible setting.

    Key Background

Also known as the Aurora Borealis, the Northern Lights appear as a colorful phenomenon in the night sky when electrically charged particles from the sun collide with the Earth’s atmosphere. The Northern Lights are most visible near the Arctic Circle because Earth’s magnetic field redirects the particles toward the poles, but they can stretch far beyond their usual range during times of high solar activity. The lights’ bright colors are determined by the chemical composition of the atmosphere.

    Further Reading

Forbes: Northern Lights Displays Hit A 500-Year Peak In 2024—Here’s Where You Could Catch Aurora Borealis In 2025


  • Scientists finally solve the mystery of what triggers lightning

Though scientists have long understood how lightning strikes, the precise atmospheric events that trigger it within thunderclouds remained a perplexing mystery. The mystery may now be solved: a team of researchers led by Victor Pasko, professor of electrical engineering in the Penn State School of Electrical Engineering and Computer Science, has revealed the powerful chain reaction that triggers lightning.

In the study, published on July 28 in the Journal of Geophysical Research, the authors described how they determined that strong electric fields in thunderclouds accelerate electrons that crash into molecules like nitrogen and oxygen, producing X-rays and initiating a deluge of additional electrons and high-energy photons — the perfect storm from which lightning bolts are born.

    “Our findings provide the first precise, quantitative explanation for how lightning initiates in nature,” Pasko said. “It connects the dots between X-rays, electric fields and the physics of electron avalanches.”

The team used mathematical modeling to confirm and explain field observations of photoelectric phenomena in Earth’s atmosphere — when electrons at relativistic energies, seeded by cosmic rays entering the atmosphere from outer space, multiply in thunderstorm electric fields and emit brief high-energy photon bursts. This phenomenon, known as a terrestrial gamma-ray flash, comprises invisible, naturally occurring bursts of X-rays and accompanying radio emissions.

    “By simulating conditions with our model that replicated the conditions observed in the field, we offered a complete explanation for the X-rays and radio emissions that are present within thunderclouds,” Pasko said. “We demonstrated how electrons, accelerated by strong electric fields in thunderclouds, produce X-rays as they collide with air molecules like nitrogen and oxygen, and create an avalanche of electrons that produce high-energy photons that initiate lightning.”

    Zaid Pervez, a doctoral student in electrical engineering, used the model to match field observations — collected by other research groups using ground-based sensors, satellites and high-altitude spy planes — to the conditions in the simulated thunderclouds.

    “We explained how photoelectric events occur, what conditions need to be in thunderclouds to initiate the cascade of electrons, and what is causing the wide variety of radio signals that we observe in clouds all prior to a lightning strike,” Pervez said. “To confirm our explanation on lightning initiation, I compared our results to previous modeling, observation studies and my own work on a type of lightning called compact intercloud discharges, which usually occur in small, localized regions in thunderclouds.”

    Published by Pasko and his collaborators in 2023, the model, Photoelectric Feedback Discharge, simulates physical conditions in which a lightning bolt is likely to originate. The equations used to create the model are available in the paper for other researchers to use in their own work.

    In addition to uncovering lightning initiation, the researchers explained why terrestrial gamma-ray flashes are often produced without flashes of light and radio bursts, which are familiar signatures of lightning during stormy weather.

    “In our modeling, the high-energy X-rays produced by relativistic electron avalanches generate new seed electrons driven by the photoelectric effect in air, rapidly amplifying these avalanches,” Pasko said. “In addition to being produced in very compact volumes, this runaway chain reaction can occur with highly variable strength, often leading to detectable levels of X-rays, while accompanied by very weak optical and radio emissions. This explains why these gamma-ray flashes can emerge from source regions that appear optically dim and radio silent.”

    In addition to Pasko and Pervez, the co-authors include Sebastien Celestin, professor of physics at the University of Orléans, France; Anne Bourdon, director of research at École Polytechnique, France; Reza Janalizadeh, ionosphere scientist at NASA Goddard Space Flight Center and former postdoctoral scholar under Pasko at Penn State; Jaroslav Jansky, assistant professor of electrical engineering and communication at Brno University of Technology, Czech Republic; and Pierre Gourbin, postdoctoral scholar of astrophysics and atmospheric physics at the Technical University of Denmark.

    The U.S. National Science Foundation, the Centre National d’Etudes Spatiales (CNES), the Institut Universitaire de France and the Ministry of Defense of the Czech Republic supported this research.


  • Astronaut savors the moment and shares a stunning aurora shot | On the International Space Station July 28-Aug. 1, 2025

    As their time on the International Space Station winds down, the Expedition 73 crew continued science and maintenance activities while also preparing for the arrival of the crewmates who will take their place.

    Orbital observation

    Anne McClain was not planning to take any photographs.


  • Microrobots for targeted drug delivery – University of Michigan News

1. Microrobots for targeted drug delivery – University of Michigan News
2. New method uses magnetism for targeted drug delivery – News-Medical
3. Magnetically controlled microrobots show promise for precision drug delivery – Physics World
4. Microrobots that can carry drugs and steer could provide targeted drug delivery – Phys.org


  • Hexagonal diamond: Scientists create bulk samples of ultra-hard carbon allotrope predicted nearly 60 years ago

    The first bulk synthesis of hexagonal diamond marks a milestone for carbon allotropes, offering researchers an opportunity to extensively characterise this unique material.

    Diamond is one of the hardest materials known to exist in nature, arising from its structure in which carbon atoms covalently bond together in a perfect tetrahedral arrangement. Nearly 60 years ago, scientists predicted a harder, alternative form known as hexagonal diamond, which has a hexagonal lattice, rather than the cubic lattice adopted by conventional diamond.

Natural hexagonal diamond has been discovered on Earth and is thought to have formed during meteorite strikes, when immense temperatures and pressures can rapidly transform graphite into this rare form of diamond. However, only small grains of natural hexagonal diamond have ever been found, mixed in with cubic diamond and graphite. Methods replicating the heat and pressure of a meteorite strike in the lab have often produced nanocrystalline hexagonal diamond, and these samples are often impure, making it difficult to study hexagonal diamond in isolation.

    ‘Now we have made a millimetre-sized chunk of [near] pure hexagonal diamond,’ says Ho-Kwang Mao at the Centre for High Pressure Science and Technology Advanced Research in China.


    To do this, Mao and his team in the US and China applied about 200,000 times atmospheric pressure to a single crystal of pure graphite using a diamond anvil cell. In situ x-ray diffraction before and after applying this pressure allowed the scientists to observe the microscopic conversion of graphite to hexagonal diamond. With the sample still under pressure, laser heating at 1400°C stabilised the phase, allowing the researchers to recover and subsequently study the near-pure sample.
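For scale (a back-of-the-envelope unit conversion, not a figure quoted in the article), 200,000 atmospheres corresponds to roughly

\[
2\times10^{5}\ \mathrm{atm} \times 1.013\times10^{5}\ \mathrm{Pa/atm} \approx 2\times10^{10}\ \mathrm{Pa} \approx 20\ \mathrm{GPa},
\]

a pressure comfortably within the range accessible to diamond anvil cells.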

    ‘This new two-step method provides the first definitive evidence of hexagonal diamond as a distinct and recoverable bulk material,’ says Eiichi Nakamura, an inorganic chemist at the University of Tokyo who has previously worked on carbon allotropes.

Through this method, the team synthesised crystals of near-pure hexagonal diamond, containing only a few microcrystals of the more familiar cubic diamond. The crystals ranged from 100 μm to several millimetres in size, marking a first for forming a distinct and recoverable amount of the material.


    ‘If you make [the bonding pattern of carbon atoms] three dimensional, there are only two ways of packing the layers,’ says Mao. ‘There’s ABC packing within cubic diamond, and AB packing for hexagonal.’ High-resolution transmission electron microscopy confirmed that the sample had AB stacking of buckled honeycomb layers, a structure indicative of hexagonal diamond.

    The scientists probed the structure further using various spectroscopic techniques. ‘We found that one of the bonds between the layers is actually shorter, compared to the other three, so this helps explain why the structure is stronger [compared with cubic diamond],’ says Mao. Results also showed that all bonds were sp3 σ bonds, with no sp2 π bonds that would signal the presence of graphite.

The team tested the hardness of the material using a 1 mm diameter disc of hexagonal diamond, finding that its hardness was comparable to that of natural diamond, a result attributed to the minor cubic diamond defects. Future efforts will likely focus on refining synthesis conditions, with Nakamura noting that ‘this [synthetic] breakthrough marks a milestone in the study of carbon allotropes’.


  • Prehistoric Humans Began Eating Tubers 700,000 Years Before Our Teeth Evolved To Do So

    Around 2.3 million years ago, ancient human species such as Homo rudolfensis and Homo erectus suddenly changed their diets. Using their large brains, these extinct hominins manufactured digging tools that they used to access carbohydrate-rich tubers, bulbs, and corms, despite the fact their teeth were unsuited to chewing these starchy plant fibers.

    By analyzing the carbon and oxygen isotopes in the fossilized teeth of prehistoric humans, the authors of a new study were able to reconstruct these dietary changes, revealing that it took a further 700,000 years for our ancestors’ molars to catch up with their culinary behaviors. The findings provide concrete evidence to support the theory of behavioral drive, which holds that dietary habits and other behaviors that are beneficial for survival can trigger corresponding morphological changes.

    “As anthropologists, we talk about behavioral and morphological change as evolving in lockstep,” said study author Luke Fannin in a statement. “But we found that behavior could be a force of evolution in its own right, with major repercussions for the morphological and dietary trajectory of hominins.”

    Based on the isotopes in the teeth of an early hominin called Australopithecus afarensis, the researchers discovered that humans began feeding on herbaceous grassy plants known as graminoids around 3.8 million years ago. However, about 1.5 million years later, the isotopic ratios in the teeth of some Homo species suddenly changed, indicating a massive increase in consumption of oxygen-depleted waters.

    Tellingly, these isotopic values are indistinguishable from those of fossilized mole-rats, which fed on the bulbs and corms of certain graminoids. The study authors therefore conclude this abrupt switch reflects an increase in the consumption of underground storage organs like tubers, which reflect the oxygen-depleted waters of the surrounding soil.

    The timing of this change also aligns with the expansion of human brains and the development of stone tools that could have been used to dig up these energy-rich root vegetables. All of this suggests that cognitive advances enabled our distant ancestors to adopt a new diet based on readily available, nutrient-rich foods that circumvented the need for dangerous hunting expeditions.

    “We propose that this shift to underground foods was a signal moment in our evolution,” Fannin says. “It created a glut of carbs that were perennial – our ancestors could access them at any time of year to feed themselves and other people.”

    However, while eating tubers may have brought a host of survival advantages, early Homo species lacked the dental hardware required to break down the tough fibers in these subterranean treats. Examining the fossil record, the researchers found that it wasn’t until 1.6 million years ago that human molars evolved into a form more suitable for chewing tubers, bulbs, and corms – some 700,000 years after we began relying on them.

    The fact that we spent so many millennia surviving on foods that our bodies weren’t designed for is a testament to our ancestors’ flexibility, creativity, and adaptability. According to the researchers, this ability to improvise and find solutions at this early juncture in our history may have paved the way for our more recent evolutionary success.

    “One of the burning questions in anthropology is what did hominins do differently that other primates didn’t do?” explained study author Nathaniel Dominy. Noting that prehistoric primates didn’t switch from grasses to tubers, he says that “the ability to exploit [underground] grass tissues may be our secret sauce.”

    “Even now, our global economy turns on a few species of grass – rice, wheat, corn, and barley,” says Dominy. “Our ancestors did something completely unexpected that changed the game for the history of species on Earth.”

    The study is published in the journal Science.


  • Peacocks can shoot lasers from tail feathers, scientists discover

    Scientists have concluded that peacocks’ tail feathers are capable of emitting narrow beams of light. A team highlighted that the colored tail feathers include tiny reflective structures that can amplify light into a laser beam.

Researchers from Florida Polytechnic University and Youngstown State University explored how these structures emit a distinctive signature glow after a special dye is applied to multiple areas of a peacock’s tail.

    They revealed that feathers can emit two distinct frequencies of laser light from multiple regions across their colored eyespots.

    First example of biolaser cavity

Researchers are claiming this as the first example of a biolaser cavity within the animal kingdom.

After dyeing the feathers and energizing them with an external light source, the research team discovered that they emitted narrow beams of yellow-green laser light.

The study investigated the light-emissive properties of dye-infused barbules from Indian Peafowl (Pavo cristatus) tail feathers pumped at high intensities at 532 nm.

    Highly conserved set of laser wavelengths was observed

Researchers revealed that the dye-infused barbules were prepared by repeatedly wetting the eyespot with dye solution and allowing it to dry. Both while wet and after wet/dry cycling, a highly conserved set of laser wavelengths was observed across multiple parts of the same feather as well as across different feather samples.

    The team stressed that the feather was found to require multiple staining cycles before laser emission was observed.

    “The eyespot of a dye-doped peafowl tail feather was found to emit laser light from multiple structural color regions. Regions where the visible reflection bands were outside of the gain region of the dye were also found to emit laser light in some locations,” said researchers in the study.

    “The greatest laser intensity relative to the broad emission curve was found to be emitted from the green color region.”

    Precise microstructures responsible for the lasing

Researchers believe that the study’s findings are an elegant example of how complex biological structures can support the generation of coherent light, although the study does not identify the exact microstructures that are doing the lasing.

    After getting peacock feathers, researchers cut away any excess lengths of barbs and mounted the feathers on an absorptive substrate. They then infused the feathers with common dyes by pipetting the dye solution directly onto them and letting them dry.

The feathers were stained multiple times in some cases. Then, according to the published research, the team pumped the samples with pulses of light and measured any resulting emissions.

The authors were unable to identify the precise microstructures responsible for the lasing; it does not appear to be due to the keratin-coated melanin rods. Co-author Nathan Dawson of Florida Polytechnic University suggested to Science that protein granules or similar small structures inside the feathers might function as a laser cavity, reported Ars Technica.

Some experts claimed that looking for laser light in biomaterials could help identify arrays of regular microstructures within them. In medicine, for example, certain foreign objects—viruses with distinct geometric shapes, perhaps—could be classified and identified based on their ability to lase.

The study was published in the journal Scientific Reports.


  • Large-Scale Simulation Reveals Novel Insights on Turbulence

    Aug. 1, 2025 — Scientists at the University of Stuttgart’s Institute of Aerodynamics and Gas Dynamics (IAG) have produced a novel dataset that will improve the development of turbulence models. With the help of the Hawk supercomputer at the High-Performance Computing Center Stuttgart (HLRS), investigators in the laboratory of Dr. Christoph Wenzel conducted a large-scale direct numerical simulation of a spatially evolving turbulent boundary layer.

    After a certain point in the development of a turbulent flow — for example, as air moves over a wing in flight — the outer region of the turbulent boundary layer (where blue dominates) maintains a persistent, self-similar physical structure. Image credit: IAG, University of Stuttgart.

    Using more than 100 million CPU hours on Hawk, the simulation is unique in that it captures the onset of a canonical, fully-developed turbulent state in a single computational domain. The study also identified with unprecedented clarity an inflection point at which the outer region of the turbulent boundary layer begins to maintain a self-similar structure as it moves toward high Reynolds numbers. The results appear in a new paper published in the Journal of Fluid Mechanics.

    “Our team’s goal is to understand unexplored parameter regimes in turbulent boundary layers,” said Jason Appelbaum, a PhD candidate in the Wenzel Lab and leader of this research. “By running a large-scale simulation that fully resolves the entire development of turbulence from an early to an evolved state, we have generated the first reliable, full-resolution dataset for investigating how high-Reynolds-number effects emerge.”

    Why It’s Difficult to Study Moderate Reynolds Numbers

    During flight, turbulence causes very high shear stress on the surface of an aircraft. The resulting drag can reduce flight performance and fuel efficiency. To predict this effect, aerospace engineers rely on computational models of the turbulent boundary layer, the millimeters-thin region where the surface of the aircraft interacts with free-flowing air.

    For industrial applications, turbulence models do not need to replicate physics down to the finest details; they must only be accurate enough for practical use and be capable of running smoothly on modest computing resources. Before engineers can use such simplified models, however, scientific research using high-performance computing systems is necessary to provide the data on which they are based. This is why the Wenzel Lab has long used HLRS’s supercomputers to run NS3D, direct numerical simulation software it created to investigate fundamental physical properties of turbulent boundary layers at extremely high resolution.

    The top of this figure represents the IAG team’s large-scale simulation, which captured the complete development of a turbulent boundary layer from low to high Reynolds numbers. As a flow moves across a surface, the outer region of the turbulent boundary layer becomes thicker. Beyond a certain point, it retains similar physical properties. Image credit: IAG, University of Stuttgart.

    Scientists in the field of computational fluid dynamics (CFD) use a figure called the Reynolds number to characterize the developmental state of a turbulent boundary layer. The Reynolds number is the ratio of inertial forces to viscous forces in a fluid flow, which governs the local range of turbulent eddy sizes. At low Reynolds numbers, which occur early in the motion of a surface through air, nonlinear convective instabilities responsible for turbulence are quickly damped by viscous action at small scales. With increasing Reynolds number, the turbulent boundary layer becomes thicker. Large, coherent structures emerge, creating a more complex turbulent system that is not simply an extrapolation of trends at low Reynolds numbers, but has its own distinct properties.
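For reference, the Reynolds number discussed here is conventionally defined as (a standard textbook relation, not an equation quoted from the paper)

\[
\mathrm{Re} = \frac{\rho\,U\,L}{\mu} = \frac{U\,L}{\nu},
\]

where \(\rho\) is the fluid density, \(U\) a characteristic velocity, \(L\) a characteristic length scale (for boundary layers, often the boundary-layer or momentum thickness), \(\mu\) the dynamic viscosity, and \(\nu = \mu/\rho\) the kinematic viscosity. Larger values of Re mean that inertial effects dominate viscous damping, permitting a wider range of turbulent eddy sizes.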

    In the past, CFD simulations have generated rich datasets for understanding turbulence at low Reynolds numbers. This is because the computational domain size and the necessary number of simulation time steps involved at this stage are still relatively small. By today’s standards, this means that the simulations are not prohibitively expensive. Laboratory experiments also provide invaluable data for turbulence research. For quantities relevant to the present study, however, they have only focused on the high Reynolds number regime due to physical limitations. Sensors can only be machined so small, and some fundamental physical quantities, such as shear stress, are notoriously difficult to measure in the lab with high accuracy.

    As a result, scientists have accumulated a wealth of simulation data for low Reynolds numbers and reliable experimental data for high Reynolds numbers. What has been missing, however, is a clear picture of what happens in between, as both simulation and experimental methods are of limited use. Appelbaum and his collaborators in the Wenzel Lab set out to attack this problem directly.

    A Sharp Bend

    Using HLRS’s Hawk supercomputer, Appelbaum ran a series of simulations that, when the results were stitched together, replicate the entire evolution of a turbulent boundary layer from low to high Reynolds numbers. Although the “real-life” situation the simulation represented might seem vanishingly small — traveling at Mach 2 for approximately 20 milliseconds — the campaign required large-scale computing power. The team used 1,024 computing nodes (more than 130,000 cores) on Hawk — one-fourth of the entire machine — for hundreds of short runs, each of which lasted 4 to 5 hours. In total, the simulation required more than 30 days of computer runtime.
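As a rough consistency check (back-of-the-envelope arithmetic based on the figures above; the roughly 128 cores per Hawk node is an assumption, not a number stated in the article), the campaign’s scale works out to approximately

\[
1{,}024\ \text{nodes} \times 128\ \text{cores/node} \times 30\ \text{days} \times 24\ \text{h/day} \approx 9.4\times10^{7}\ \text{core-hours},
\]

on the order of the more than 100 million CPU hours cited above once additional runs and pre- and post-processing are included.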

    “Most research groups would not take the risk of spending so much computational time on a problem like this, and might instead look at other interesting research problems that aren’t as expensive,” Appelbaum said. “We’re the weird ones who put all of our eggs in this one basket to investigate a long-standing gap in the research.”

The investment paid off. In their large-scale simulation the investigators focused on (among other factors) the skin friction coefficient, the ratio of the shear stress at a solid surface in a moving fluid to the free-stream dynamic pressure of the flow. It is a key parameter describing the shape of the mean velocity profile and is fundamental in determining the viscous drag.
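In standard notation (a conventional definition consistent with the description above, not an equation reproduced from the paper), the skin friction coefficient can be written as

\[
c_f = \frac{\tau_w}{\tfrac{1}{2}\,\rho_\infty U_\infty^{2}},
\]

where \(\tau_w\) is the wall shear stress and \(\rho_\infty\) and \(U_\infty\) are the free-stream density and velocity.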

While past research produced data reflecting the skin friction coefficient (cf) at low and high Reynolds numbers, a large-scale simulation on HLRS’s Hawk supercomputer for the first time revealed a specific period at moderate Reynolds numbers at which a fundamental change occurs. Image credit: IAG, University of Stuttgart.

    Appelbaum used the results of the simulation to show how the previously separate datasets for low and high Reynolds numbers blend together. Whereas past research could only estimate through interpolation how the datasets might intersect, the IAG team’s results reveal a sharp turn. Notably, they identified a change in skin friction scaling that is linked to the establishment of a fully-developed state in the outer 90% of the boundary layer. This self-similar state is a milestone in the turbulent boundary layer’s development, signaling that scaling behavior continues in a predictable way as it evolves to industrially relevant Reynolds numbers.

    “To understand self-similarity, it helps to think about the aspect ratio of a photograph,” Appelbaum explained. “If I have a rectangular picture where the lengths of the sides have a ratio of 1:2, it doesn’t matter whether the picture is the size of my hand or if I scale it to the size of a bus. The relationships among the elements in the photo remain self-similar, regardless how large it is. Our work confirms that the outer region of the turbulent boundary layer takes on the same kind of self-similarity once the system reaches a specific Reynolds number. Importantly, this state is coupled with the change in scaling behavior of the skin friction coefficient, which experiments have shown to remain in effect until very high Reynolds numbers seen in aerospace applications. This allows us to get an early, but realistic glimpse of the turbulent behavior in this ultimate regime of turbulence.”

    Increased Performance Offers New Opportunities for Research and Engineering

    This new dataset offers a unique resource that will better enable researchers in the computational fluid dynamics community to investigate turbulent boundary layers at moderate Reynolds numbers. For the Wenzel Lab, the next step will be to dive deeper into the physics behind the inflection point they identified. Appelbaum says that the team already has some ideas about this and plans to publish a follow-up paper soon.

    In other ongoing work in the Wenzel Lab, the scientists have been busy porting the NS3D code to GPUs on HLRS’s newest supercomputer, Hunter. With the help of user support staff at HLRS and computing processor manufacturer AMD, they have already verified that the code remains physically accurate and performant using the new, GPU-accelerated system. In the coming months they will be optimizing NS3D to ensure that it takes full advantage of the increased performance that Hunter offers.

“We anticipate being able to simulate larger domains at even higher turbulent states,” Appelbaum said. “More computing performance will also make it more feasible to do studies in which we might run several simulations to investigate the scaling behavior of two or more parameters simultaneously.”

    In work that points toward this future, Tobias Gibis, a member of the Wenzel Lab and co-author of the present work, recently defended his thesis, in which he unified the scaling behavior of heat transfer and pressure gradients in turbulent boundary layers. Appelbaum added, “Building on Christoph and Tobias’s transformative work on the influence of heat transfer and pressure gradients to include Reynolds number effects would undoubtedly have very high scientific value. The support and resources from HLRS are the bedrock for this type of heavy computational work.”

    In the meantime, the team’s dataset at moderate Reynolds numbers will contribute to a pool of wall-bounded turbulent flow data and could aid in the development of more comprehensive, more accurate turbulence models. This will give engineers new capabilities for optimizing aircraft designs for a wider range of operating conditions, and for improving other kinds of machines like fans or automobiles whose efficiency relies on managing effects in turbulent boundary layers.

    Related Publication

    Appelbaum J, Gibis T, Pirozzoli S, Wenzel C. 2025. The onset of outer-layer self-similarity in turbulent boundary layers. J Fluid Mech. 1015: A37.


    Source: Christopher Williams, HLRS


  • ‘Misokinesia’ Phenomenon Could Affect 1 in 3 People, Study Reveals : ScienceAlert

    Noticing somebody fidgeting can be distracting. Vexing. Even excruciating. But why?

    According to a study published in 2021, the stressful sensations caused by seeing others fidget are an incredibly common psychological phenomenon, affecting as many as one in three people.

    Called misokinesia – meaning ‘hatred of movements’ – this strange phenomenon had been little studied by scientists until recent years, but was noted in the context of a related condition, misophonia: a disorder where people become irritated upon hearing certain repetitious sounds.

    Misokinesia is somewhat similar, but the triggers are generally more visual, rather than sound-related, researchers say.


    “[Misokinesia] is defined as a strong negative affective or emotional response to the sight of someone else’s small and repetitive movements, such as seeing someone mindlessly fidgeting with a hand or foot,” a team of researchers, led by first author and psychologist Sumeet Jaswal, then at the University of British Columbia (UBC) in Canada, explained in a study published in 2021.

    “Yet surprisingly, scientific research on the topic is lacking.”


To improve our understanding, Jaswal and fellow researchers conducted what they said was the “first in-depth scientific exploration” of misokinesia – and the results indicate that heightened sensitivity to fidgeting is something a large number of people have to deal with.

    Across a series of experiments involving over 4,100 participants, the researchers measured the prevalence of misokinesia in a cohort of university students and people from the general population, assessing the impacts it had upon them, and exploring why the sensations might manifest.

Misokinesia creates strong emotional responses on seeing small, repetitive movements. (PeopleImages/Getty Images)

    “We found that approximately one-third self-reported some degree of misokinesia sensitivity to the repetitive, fidgeting behaviors of others as encountered in their daily lives,” the researchers explained.

    “These results support the conclusion that misokinesia sensitivity is not a phenomenon restricted to clinical populations, but rather, is a basic and heretofore under-recognized social challenge shared by many in the wider, general population.”

    According to the analysis, misokinesia sometimes goes hand in hand with the sound-sensitivity of misophonia, but not always.

    The phenomenon seems to vary significantly among individuals, with some people reporting only low sensitivity to fidgeting stimuli, while others feel highly affected.

    “They are negatively impacted emotionally and experience reactions such as anger, anxiety, or frustration as well as reduced enjoyment in social situations, work, and learning environments,” explained UBC psychologist Todd Handy.

“Some even pursue fewer social activities because of the condition.”

Handy began researching misokinesia after a partner told him he was a fidgeter and confessed that she felt stress when he, or anybody else for that matter, fidgeted.

    “As a visual cognitive neuroscientist, this really piqued my interest to find out what is happening in the brain,” Handy said.

    So, the million-dollar question stands: Why do we find fidgeting so annoying?

    In the study, the researchers ran tests to see if people’s misokinesia might originate in heightened visual-attentional sensitivities, amounting to an inability to block out distracting events occurring in their visual periphery.

    The results based on early experiments were inconclusive on that front, with the researchers finding no firm evidence that reflexive visual attentional mechanisms substantively contribute to misokinesia sensitivity.

While we’re still only at the outset of exploring where misokinesia may spring from on a cognitive level, the researchers do have some hypothetical leads for future research.

    “One possibility we want to explore is that their ‘mirror neurons’ are at play,” Jaswal said.

    “These neurons activate when we move but they also activate when we see others move… For example, when you see someone get hurt, you may wince as well, as their pain is mirrored in your own brain.”

    By extension, it’s possible that misokinesia-prone people might be unconsciously empathizing with the psychology of fidgeters. And not in a good way.

    “A reason that people fidget is because they’re anxious or nervous so when individuals who suffer from misokinesia see someone fidgeting, they may mirror it and feel anxious or nervous as well,” Jaswal said.

    As to whether that’s what’s really going on here with misokinesia, only further research into the phenomenon will be able to say for sure. A follow-up study conducted by Jaswal in 2024 on 21 volunteers found the condition may be linked to challenges in disengaging from a stimulus, rather than about the initial distraction.

    One thing is certain though. From the results seen here, it’s clear that this unusual phenomenon is much more usual than we realized.

    “To those who are suffering from misokinesia, you are not alone,” Handy said. “Your challenge is common and it’s real.”

    The findings are reported in Scientific Reports.

    An earlier version of this article was published in September 2021.
