- A Ghostly Bridge of Stolen Stars Reveals Galactic Tug-of-War SciTechDaily
- Scientists capture bridge of stray stars being sucked from one galaxy to another Space
- Faint glow reveals galaxies merging in deep space Earth.com
- DECam Captures Elusive Intracluster Light in Galaxy Cluster Abell 3667 Sci.News
- ‘Bridge’ of stray stars reveals active merger of two galaxy clusters Brown University
Category: 7. Science
-
A Ghostly Bridge of Stolen Stars Reveals Galactic Tug-of-War – SciTechDaily
-
Explosive Prebiotic Molecule Could Reveal Clues to Life in Space – SciTechDaily
- Explosive Prebiotic Molecule Could Reveal Clues to Life in Space SciTechDaily
- Scientists have created ‘super alcohol’ – but your nearest pub is in deep space Metro.co.uk
- Strange tetra-alcohol synthesised in simulated interstellar ice Chemistry World
- Chemists Explore ‘Super Alcohol’ That May Point to Cosmic Life University of Mississippi | Ole Miss
- In A Breakthrough, ‘Super Alcohol’ Synthesis Unlocks Clues To Alien Life Origins NDTV
Continue Reading
-
NASA Rovers Keep Getting Stuck, And We Finally Know Why : ScienceAlert
Although humanity is getting better at sending robotic probes out into the Solar System to explore the places no human can tread, we’re still very much on a learning curve.
The first extraterrestrial robotic rover was launched from Earth in 1970. It’s only now, more than half a century later, that scientists have figured out why these marvels of ingenuity and engineering keep getting stuck in the soils of alien worlds.
“In retrospect, the idea is simple: We need to consider not only the gravitational pull on the rover but also the effect of gravity on the sand to get a better picture of how the rover will perform on the Moon,” explains mechanical engineer Dan Negrut of the University of Wisconsin-Madison.
“Our findings underscore the value of using physics-based simulation to analyze rover mobility on granular soil.”
Related: We Will Never Get Tired of This Video of Astronauts Falling Over on The Moon
Making a rover that will operate in an alien environment is more complicated than making one that will work on Earth. We’ve lost more than one Mars mission to giant dust storms that leave drifts of sand on solar panels, preventing the machinery from being able to generate power, for instance.
Gravity is another one. The Solar System bodies on which we have deployed robotic rovers have lower gravity than Earth, and this has an effect on how things move around. Engineers, when designing rovers, have therefore taken into account the effects the target gravitational environment will have.
Nevertheless, rovers still manage to get stuck pretty often, requiring control teams to conduct a series of maneuvers to try and free the poor robot. It’s usually fine, if annoying, although in one notable case it was not: NASA’s Mars rover Spirit got stuck in soft soil in 2009, and there it remains to this day.
Using computer simulations running on a physics-based engine called Project Chrono, Negrut and his colleagues set out to get to the bottom of this recurring problem. Comparing their results with real-world tests on sandy surfaces revealed a discrepancy that pointed right to it.
Previous tests of rover designs in Moon- and Mars-simulated dirt omitted one very, very important detail: sand, also, behaves differently under different gravitational conditions.
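The correction the team identified can be illustrated with a toy Bekker-style pressure–sinkage model (a minimal sketch, not the Project Chrono simulation; the rover mass, contact area, soil stiffness, and the linear gravity scaling below are all assumptions for illustration):

```python
def sinkage(weight_n, area_m2, k_pa_per_m, n=1.0):
    """Bekker-style pressure-sinkage relation: p = k * z**n, so z = (p/k)**(1/n)."""
    pressure = weight_n / area_m2
    return (pressure / k_pa_per_m) ** (1.0 / n)

g_earth, g_moon = 9.81, 1.62     # surface gravity, m/s^2
mass, contact_area = 900.0, 0.1  # hypothetical rover (kg) and wheel contact patch (m^2)
k_earth = 8.0e5                  # hypothetical soil stiffness calibrated on Earth

# Naive approach: scale only the rover's weight for lunar gravity
z_naive = sinkage(mass * g_moon, contact_area, k_earth)

# Corrected approach: the sand's bearing strength also weakens with gravity,
# modeled here as stiffness scaling linearly with g
k_moon = k_earth * (g_moon / g_earth)
z_corrected = sinkage(mass * g_moon, contact_area, k_moon)
```

With n = 1 the corrected sinkage comes out larger by the ratio g_earth/g_moon, roughly six times deeper: ignoring gravity's effect on the sand itself badly underestimates how far the wheels dig in.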
The dust that coats the Moon and Mars is fluffier and squishier than dust on Earth, shifting more easily, and hindering traction – making it far easier for their wheels to get stuck. Think of a vehicle on Earth that has driven into slippery mud, or very loose desert sand.
This eureka moment could be the missing piece of the puzzle that keeps future space exploration rovers out of a dusty jam.
“It’s rewarding that our research is highly relevant in helping to solve many real-world engineering challenges,” Negrut says. “I’m proud of what we’ve accomplished. It’s very difficult as a university lab to put out industrial-strength software that is used by NASA.”
The research has been published in the Journal of Field Robotics.
Continue Reading
-
Sharpest-ever images of the Sun’s surface reveal magnetic ‘stripes’
The sun never sits still, yet until recently our best snapshots blurred its fine threads. Now an image set from the Daniel K. Inouye Solar Telescope on Haleakalā resolves details just 20 kilometers wide, letting researchers watch bright and dark stripes move across the solar surface.
“We investigate the fine-scale structure of the solar surface for the first time with an unprecedented spatial resolution of just about 20 kilometers,” said Dr. David Kuridze, National Solar Observatory.
These tiny stripes, known as photospheric striations, measure less than the length of Manhattan.
Magnetic stripes on the solar surface
The facility uses a 4-meter mirror and a Visible Broadband Imager tuned to the G-band, a slice of blue light that highlights magnetic hotspots.
Two-second exposures stitched into mosaics capture the Sun’s boiling photosphere in scenes spanning 45 arcseconds.
Across each convection cell, hot plasma rises in the center, then cools and sinks at the edges. Along those edges the images reveal parallel bright and dark striations only 12 miles long and a few hundred feet wide.
The darkest lines sit where the magnetic field weakens, absorbing light and shading the view. Brighter lines outline regions where the field strengthens, allowing the lower layers of hotter gas to shine through.
In the sharpest frames, individual striations appear and vanish within a minute, hinting that the underlying magnetic pattern shifts as fast as weather on Earth. That fleeting behavior may help scientists test how turbulence lifts energy toward the corona.
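The angular scales quoted here translate into physical sizes via the small-angle approximation at the Earth–Sun distance (the 0.03-arcsecond figure below is an assumed round number corresponding to the ~20-kilometer detail):

```python
import math

AU_KM = 1.495979e8  # mean Earth-Sun distance, km

def arcsec_to_km(theta_arcsec, distance_km=AU_KM):
    """Length subtended on the Sun by a small angle seen from Earth."""
    return distance_km * math.radians(theta_arcsec / 3600.0)

fov_km = arcsec_to_km(45.0)     # a 45-arcsecond scene spans roughly 33,000 km
detail_km = arcsec_to_km(0.03)  # ~0.03 arcsec is roughly the quoted 20 km detail
```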
Granules and curtains
Solar granules are cell-like tops about 600 miles across. Each one lasts only a few minutes before collapsing and yielding to a fresh granule.
Striations sit on the granule walls and form when sheets of magnetic flux tubes bend under the flow. The curtain description arises from the way the field drapes across the wall, sifting light in alternating bands.
Simulations using radiation-magnetohydrodynamic codes reproduce the same patterns once they include field variations of roughly 100 gauss. That agreement tells researchers they are finally seeing physics that once hid below the resolution line.
Because the field leans sideways, the lines run almost parallel to the solar limb in images taken away from disk center. Closer to disk center they fade, confirming that viewing geometry shapes what we see.
Why these solar stripes matter
Space-weather forecasts depend on knowing where magnetic energy builds up. Large flares usually start when twisted fields snap and reconnect.
“Magnetism is a fundamental phenomenon in the universe, and similar magnetically induced stripes have also been observed in more distant astrophysical objects,” commented Dr. Han Uitenbroek. Tiny striations may act like stress gauges, exposing subtle twists long before a flare blooms.
If the same physics repeats in stellar nurseries or planet-forming disks, measuring it on the Sun provides a local testbed. The data can sharpen star models that feed exoplanet climate codes.
Engineers also keep watch because a strong storm can harm satellites and knock out power lines on Earth. A clearer trigger could stretch warning times from hours to days.
“Wilson depression” explained
When a magnetic flux tube rises, the gas inside becomes thinner, so light escapes from deeper, hotter layers. That drop in the visible-surface height is called the Wilson depression.
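The flux-tube picture rests on pressure balance: the gas-pressure deficit inside the tube is made up by magnetic pressure, B²/(2μ₀). A back-of-the-envelope sketch, assuming a round-number photospheric gas pressure, recovers the kilogauss fields typical of strong flux tubes (the ~100 gauss variations seen in the striations are much gentler):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (T*m/A)

def equipartition_field_tesla(delta_p_pa):
    """Field whose magnetic pressure B**2 / (2 * mu0) balances a gas-pressure deficit."""
    return math.sqrt(2.0 * MU0 * delta_p_pa)

p_photosphere = 1.0e4  # rough photospheric gas pressure in Pa (assumed round number)
b_gauss = equipartition_field_tesla(p_photosphere) * 1e4  # tesla -> gauss, ~1.6 kG
```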
In the new images, each bright stripe marks a spot where the Wilson depression digs up to 30 kilometers below its surroundings, while a neighboring dark stripe sits higher and cooler.
These height steps, though smaller than Mount Everest’s summit, sculpt the Sun’s textured look.
Sharpest-ever view of the Sun’s surface, using the NSF Inouye Solar Telescope, reveals ultra-fine magnetic “stripes,” known as striations, just 20 kilometers wide. Credit: NSF/NSO/AURA
Earlier telescopes blurred those steps into single bright faculae. The recent Inouye view shows a magnetic staircase instead.
Knowing the depth lets modelers set realistic boundary conditions in helioseismology codes that trace waves beneath sunspots. That, in turn, refines estimates of how energy climbs from the convection zone to the outer atmosphere.
Space weather stakes
During Solar Cycle 25 the Sun has already launched multiple coronal mass ejections that merged en route to Earth. Stacked eruptions of that kind can supercharge geomagnetic storms.
Fine-scale magnetic stripes could point to the places where those ejections will erupt. Spotting them early may improve flight reroutes and satellite drag forecasts.
The data also help space agencies protect Artemis lunar hardware and planned Mars missions. Even a brief particle surge can upset delicate electronics.
Rapid-fire imaging now under way will map how stripes evolve through an entire solar rotation, linking them to flare statistics. That project could reveal whether a threshold in stripe density marks an impending eruption.
Solar physics and magnetic stripes
The Inouye team is upgrading its Visible Tunable Filter to capture the same region ten times faster. Faster cadence will freeze the photospheric turbulence in time-lapse detail.
Researchers also plan multi-wavelength campaigns linking Inouye images with Parker Solar Probe plasma data, bridging photosphere to heliosphere. That link will test whether small stripes seed the giant switchbacks the probe registers.
On the modeling side, new simulations down to 4-kilometer grids will hunt for instabilities such as the flute mode that could carve magnetic curtains. The work may explain why stripes sometimes split or merge.
Once those answers land, the next leap may involve an even larger infrared telescope now on the drawing board, one that peers deeper into sunspot roots. Bigger mirrors continue to shrink the gap between theory and observation.
The study is published in The Astrophysical Journal Letters.
—–
Like what you read? Subscribe to our newsletter for engaging articles, exclusive content, and the latest updates.
Check us out on EarthSnap, a free app brought to you by Eric Ralls and Earth.com.
—–
Continue Reading
-
These Tarantulas Have Genitals So Large They Require a New Genus
Scientists have just discovered four tarantula species with genitals so unusually long that they had to be put in a category of their own. Literally.
Their mating appendages are so extreme that researchers couldn’t cram them into any existing spider genus—so they made a new one: Satyrex.
In the world of tarantulas, male genitalia typically scale at about twice the length of the spider’s upper body. But these guys? Their palps—the sperm-delivery limbs—are four times longer than their cephalothorax (that’s head-plus-torso, for non-arachnologists), and almost half the length of their longest legs. In one case, that’s a 2-inch genital limb on a 5.5-inch spider.
“The males of these spiders have the longest palps amongst all known tarantulas,” lead researcher Alireza Zamani said in a statement. Zamani, an arachnologist at the University of Turku in Finland, co-authored the study recently published in ZooKeys.
He believes the adaptation might have evolved to give males a little distance from their partners during sex since female tarantulas are famously cannibalistic.
It’s not just the palps that make these spiders notable. One species, Satyrex ferox, raises its legs and hisses at the slightest disturbance. It gets the “ferox” title (Latin for fierce) not for show, but for its full-on attitude and massive size.
The name Satyrex is derived from the combination of “satyr” and “rex.” Satyrs, the half-goat mischief-makers of Greek mythology, were often portrayed as overly horny and well-endowed. Rex, of course, means king. Combined, it’s a fitting name for the new rulers of the spider penis world.
The newly discovered tarantulas—S. arabicus, S. ferox, S. somalicus, and S. speciosus—were found hiding in rocky crevices and burrows across the Arabian Peninsula and the Horn of Africa. Zamani and his team found S. arabicus in Saudi Arabia, photographed S. ferox in Yemen and Oman, and described the other two in Somaliland.
A fifth spider, previously classified under a different genus (as Monocentropus longimanus), has now been reassigned to the genus Satyrex, thanks to its matching proportions.
In a field where genitalia often holds the key to classification, these spiders didn’t just stand out. They stretched the whole system. As Zamani put it: “At least in tarantula taxonomy, it seems that size really does matter.”
Continue Reading
-
How to see the 2025 Perseid meteor shower in Japan
Want to see shooting stars this summer? The 2025 Perseid meteor shower, one of the brightest and most reliable celestial events of the year, will peak in the early morning hours of Wednesday, August 13, in Japan, with the best viewing time around 2:50 a.m. in Tokyo. Well, if you can see it. Tokyo might not be the neon-anime world some people stereotype it as, but it’s still a very bright city.
Make a trip outside the city and, if the skies are clear and you’re willing to stay up (or get up) in the middle of the night, you’ll be rewarded with dozens of meteors per hour.
Here’s how, when, and where to catch the Perseid meteor shower in or around Tokyo in 2025.
When Is the Best Time to Watch?
Geminid meteor shower over Lake Shoji and Mount Fuji
The peak of the Perseids is expected to hit around 2:50 a.m. in Tokyo on Tuesday night into Wednesday morning (Aug 12 and 13, 2025). That said, the nights before and after (Aug. 11 and 14) will also offer decent chances, with slightly lower meteor activity.
A gibbous moon (the phase just before a full moon, when it appears more than half full) will be visible during the peak. This extra brightness may wash out fainter meteors, so try to keep the moon out of your direct line of sight and focus on the darker parts of the sky to spot the brighter shooting stars.
Viewing Timeline
- Aug 11 (Monday night): ~10-15 meteors per hour
- Aug 12-13 (Tuesday early morning): Peak (~30-40 meteors per hour in dark areas)
- Aug 13-14 (Wednesday early morning): ~20 meteors per hour
Timing Varies Slightly Across Japan
The best time to view the Perseids is just before dawn, when the radiant point (the part of the sky the meteors appear to come from) is highest. This varies slightly depending on your location:
- Tokyo (Eastern Japan): ~2:50 a.m.
- Osaka (Kansai): ~3:10 a.m.
- Sapporo (Hokkaido): ~2:20 a.m.
- Fukuoka (Western Japan): ~3:40 a.m.
- Okinawa (Naha): ~3:50 a.m.
For most of Japan, your best viewing window is between 2 and 4 a.m., depending on location. Who needs sleep on a weekday, anyway?
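The hourly estimates above follow from the standard visual-meteor correction: the zenithal hourly rate (ZHR) is scaled down by the radiant’s altitude and by sky brightness (limiting magnitude). A rough sketch, assuming the Perseids’ typical peak ZHR of about 100 and population index r ≈ 2.2:

```python
import math

def observed_rate(zhr, radiant_alt_deg, limiting_mag, r=2.2):
    """Approximate meteors visible per hour: ZHR scaled by sin(radiant altitude)
    and by sky brightness via the population index r (6.5 mag = ideal dark sky)."""
    return zhr * math.sin(math.radians(radiant_alt_deg)) / r ** (6.5 - limiting_mag)

# Radiant assumed roughly 60 degrees up before dawn
dark_sky = observed_rate(zhr=100, radiant_alt_deg=60, limiting_mag=6.5)  # rural site
moonlit = observed_rate(zhr=100, radiant_alt_deg=60, limiting_mag=4.5)   # moonlit suburb
```

With the gibbous moon cutting the limiting magnitude to around 4.5, the predicted rate drops from roughly 85 to under 20 per hour, which is why keeping the moon out of your line of sight matters.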
Where to See the Perseid Meteor Shower Near Tokyo (Without a Car)
If you’re sticking to public transport, you likely won’t get pitch-black skies, but you can still have a great night under the stars. Think of these as low-effort options that give you a taste of the Perseids without needing a tent, a car or a backup battery for Google Maps.
However, most of these places lose public transport after 11 p.m., so you’ll need to stay out all night. Bring a friend and maybe have a backup plan for getting home.
Tamagawa Riverbed
Just west of central Tokyo, the Tamagawa River runs broad and flat—with surprisingly good sky exposure if you go far enough upriver. Head to Futako-Tamagawa Station, then walk away from the city lights. It won’t be truly dark, but it’s open enough to catch the brighter meteors. Bring a mat, some snacks, and something warm to wear. It’s the easiest option that doesn’t involve staring at a skyscraper.
Koganei Park
This large park in western Tokyo is probably your best bet inside the city limits. It’s big enough that you can get away from streetlights, and the central lawn offers clear views of the sky. It’s not going to compete with the mountains, but on a clear night, you might still spot a dozen shooting stars if you’re patient. Get there early if the weather’s good—others will have the same idea.
Jogashima Island (Miura Peninsula, Kanagawa)
This one’s a little more effort, but totally doable without a car. From Shinagawa, take the Keikyu Line to Misakiguchi Station, then hop on a short bus or taxi to Jogashima. The island is surrounded by the ocean, and the southern coastline, especially, is pretty dark. You’ll get a near-panoramic view of the sky over the water, and the lighthouse adds some nice atmosphere. Just bring a flashlight and try not to tumble off a cliff.
Tsurigasaki Beach (Chiba)
Famous for surfing, this beach near Ichinomiya is also great for sky-watching — especially near the torii gate on the sand. It’s not as remote as other coastal spots, but the open Pacific horizon means fewer buildings and a better shot at clear skies. Take the JR Sotobo Line to Kazusa-Ichinomiya, then grab a taxi or walk. If the tide’s low and the weather’s right, it’s a surprisingly peaceful place to stargaze.
Click here to read more.
- External Link
-
https://gaijinpot.com/
© GaijinPot
Continue Reading
-
Wild New Theory Suggests Gravitational Waves Shaped The Universe : ScienceAlert
Just as ocean waves shape our shores, ripples in space-time may have once set the Universe on an evolutionary path that led to the cosmos as we see it today.
A new theory suggests gravitational waves – rather than hypothetical particles called inflatons – drove the Universe’s early expansion, and the redistribution of matter therein.
“For decades, we have tried to understand the early moments of the Universe using models based on elements we have never observed,” explains the first author of the paper, theoretical astrophysicist Raúl Jiménez of the University of Barcelona.
“What makes this proposal exciting is its simplicity and verifiability. We are not adding speculative elements, but rather demonstrating that gravity and quantum mechanics may be sufficient to explain how the structure of the cosmos came into being.”
Related: Dark Matter May Have Existed Before The Big Bang, Study Finds
We don’t know for certain how the very earliest stages of the Universe unfolded following the Big Bang some 13.8 billion years ago. All scientists can do at this point is come up with theories that fit the physics of the Universe we do observe.
Those theories are pretty good, but there are clear shortcomings. Take the JWST’s discovery of large numbers of massive galaxies earlier in the Universe than cosmologists expected, for example.
The currently accepted timeline of the Universe’s evolution involves a period of rapid expansion, or inflation, just after the Big Bang. From a single, dimensionless point of infinite density – a singularity, the mathematical description of the Universe just before the Big Bang – the Universe rapidly inflated, puffing up with a hot plasma soup that cooled to form matter.
The inflaton is a speculative particle or quantum field that scientists use to explain cosmological inflation and the surprising smoothness of the cosmos. In theory, the particle drives the rapid expansion of the Universe while still allowing for variations in the density of the plasma soup that eventually condense into black holes, galaxies, stars, and all the other bits and bobs of matter scattered throughout the Universe.
Despite our best efforts, however, physicists have found no other evidence that supports the existence of the inflaton. Jiménez and his colleagues wanted to know if there’s another way – if we can explain the early evolution of the Universe using different parameters that rely less on speculative elements.
They started with a very simplified model of the real Universe, called de Sitter space, that is consistent with general relativity and current observations of the expansion of the Universe. Within this framework, quantum fluctuations in space-time – that is, gravitational waves – can be generated by a type of turbulence called tensor perturbations.
Gravitational waves are thought to fill the Universe today. They’re the ripples generated in space-time by massive disruptions. The ones we can detect currently are generated by collisions between massive, dense objects such as neutron stars and black holes, but physicists believe the entire Universe is ringing with a constant background hum of gravitational waves too large for us to be able to detect (yet).
The researchers found that the gravitational waves generated by tensor perturbations in their space-time model could create density variations in the primordial plasma on their own, as well as drive the early expansion of the Universe.
Eventually, these variations would create clumps dense enough to collapse under gravity and form the seeds of the early Universe – the very first stars and galaxies and black holes.
It’s such an elegant solution, and removes the reliance on hypotheticals as the driving force behind the early evolution of the entire Universe, although further work is needed to verify it, of course.
Nevertheless, “Our proposed mechanism could remove the need for a model-dependent scenario: the choice of a scalar field, as the inflaton, to drive inflation,” the researchers write.
Their work has been published in Physical Review Research.
Continue Reading
-
Astronomers Spot the Earliest Confirmed Black Hole at Cosmic Dawn
In 2024, an international team of astronomers launched the CANDELS-Area Prism Epoch of Reionization Survey (CAPERS), a program that would use data from the James Webb Space Telescope (JWST) to identify galaxies at “Cosmic Dawn.” This cosmological period took place less than one billion years after the Big Bang and is when the first galaxies in the Universe formed. In a recent study, the CAPERS team confirmed the existence of a black hole at the center of a galaxy (designated CAPERS-LRD-z9) roughly 13.3 billion light-years away.
This makes the black hole the earliest ever observed by scientists, and presents opportunities to study the evolution of black holes and the structure of the Universe during this early period. The research was led by Anthony J. Taylor, a postdoctoral researcher at the University of Texas at Austin’s Cosmic Frontier Center, and included several members of the CAPERS consortium. The paper detailing their findings was published on August 6th in The Astrophysical Journal Letters.
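The distance and epoch quoted above fit together as simple arithmetic: a light-travel time of 13.3 billion years against a 13.8-billion-year-old Universe means the galaxy is seen as it was roughly 500 million years after the Big Bang:

```python
UNIVERSE_AGE_GYR = 13.8   # current age of the Universe, billions of years
LIGHT_TRAVEL_GYR = 13.3   # light-travel time quoted for CAPERS-LRD-z9

age_at_emission_gyr = UNIVERSE_AGE_GYR - LIGHT_TRAVEL_GYR  # ~0.5 Gyr after the Big Bang
```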
During the 1970s, scientists discovered that most massive galaxies have a central supermassive black hole (SMBH), which explained why their core regions would periodically become bright enough to outshine all the stars in their disks. This led to the term Active Galactic Nuclei (AGN), which describes these bright galactic centers and differentiates them from less bright and active galaxies. With the deployment of Webb, astronomers are finally getting the chance to observe the early ancestors (or “seeds”) of these behemoths and study how they have influenced the evolution of their galaxies.
“Little Red Dot” galaxies appear in large numbers roughly 600 million years after the Big Bang. Credit: NASA/ESA/CSA/STScI/Dale Kocevski (Colby College).
CAPERS-LRD-z9 was first identified by the Public Release IMaging for Extragalactic Research (PRIMER) survey using Webb’s Near-Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI). Like many galaxies identified by Webb, CAPERS-LRD-z9 is part of a new class of galaxies known as “Little Red Dots” that existed within the first 1.5 billion years after the Big Bang, which are very compact, red, and surprisingly bright. While conducting follow-up observations, the CAPERS team identified the tell-tale signs of fast-moving gas using Webb’s Near-Infrared Spectrometer (NIRSpec) to conduct NIRSpec/PRISM spectroscopy.
As gas and dust circle a black hole and accrete onto it, the material is accelerated to relativistic speeds (close to the speed of light). Whereas gas flowing away relative to our instruments is shifted towards the red end of the spectrum, gas moving towards them is shifted to bluer wavelengths. When they examined the spectral signatures coming from CAPERS-LRD-z9, the team detected the presence of both, confirming that they had identified a black hole roughly 13.3 billion light-years away. While astronomers have found a few more distant candidates, those have not yet shown the distinctive spectroscopic signature associated with black holes.
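The red- and blueshifted wings described here follow the ordinary Doppler relation. A minimal sketch (the tracer line and the ±3,000 km/s gas velocity are hypothetical examples; the wavelengths actually recorded are additionally stretched by cosmological redshift):

```python
C_KM_S = 299792.458  # speed of light, km/s

def doppler_shift_nm(rest_nm, v_km_s):
    """Non-relativistic Doppler shift; positive v = receding gas (redshift)."""
    return rest_nm * (1.0 + v_km_s / C_KM_S)

h_beta_rest = 486.13                                # H-beta rest wavelength, nm
red_wing = doppler_shift_nm(h_beta_rest, 3000.0)    # receding side of the flow
blue_wing = doppler_shift_nm(h_beta_rest, -3000.0)  # approaching side
```

Detecting both wings straddling the line simultaneously is the broad-line signature of gas orbiting a compact central mass, rather than gas simply flowing one way.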
“When looking for black holes, this is about as far back as you can practically go. We’re really pushing the boundaries of what current technology can detect,” said Taylor in a UT News release. “The first goal of CAPERS is to confirm and study the most distant galaxies,” added co-author Mark Dickinson, the CAPERS team lead. “JWST spectroscopy is the key to confirming their distances and understanding their physical properties.”
The presence of an SMBH seed at the center of CAPERS-LRD-z9 presents astronomers with a unique opportunity to do that. For one, this galaxy supports the theory that SMBHs are the source of the unexpected brightness of Little Red Dots, which is typically attributed to an abundance of stars. However, this is inconsistent with current cosmological models that suggest these galaxies did not have enough time to form so many stars. Furthermore, black holes shine brightly because they compress the gas and dust they consume, releasing tremendous amounts of light and heat.
The little red dots could represent galaxies that are in an evolutionary phase predating the luminous quasar phase. Credit: NASA/ESA/CSA/ISTA/ETH Zurich/NAOJ
Confirming the existence of an SMBH seed in CAPERS-LRD-z9 helps illustrate this process in very early galaxies. This galaxy could also help explain the distinct red color in LRD galaxies, which could be due to a thick cloud of dust surrounding the black hole – something that has been observed in more recent galaxies. In addition, the size of the black hole (up to 300 million times the mass of our Sun) was an unexpected find, roughly half the mass of all the stars in its disk. This is similar to what astronomers noticed with other SMBH seeds in galaxies that existed less than 1 billion years after the Big Bang.
Therefore, this finding also presents astronomers with an opportunity to study how these black holes could have grown so large so quickly. “This adds to growing evidence that early black holes grew much faster than we thought possible,” said co-author Steven Finkelstein, the director of the Cosmic Frontier Center. “Or they started out far more massive than our models predict.”
Looking ahead, the team hopes to gather more high-resolution data on CAPERS-LRD-z9 to learn more about the role black holes played in the development of galaxies in the early Universe. “This is a good test object for us,” said Taylor. “We haven’t been able to study early black hole evolution until recently, and we are excited to see what we can learn from this unique object.”
Further Reading: UT News, The Astrophysical Journal Letters
Continue Reading
-
Left-Handers See Detail Differently Than Right-Handers
Summary: Researchers have discovered that whether you are right- or left-handed influences which side of your brain processes fine visual details. The new “action asymmetry hypothesis” proposes that brain specialization for high- and low-frequency visual information develops from the everyday way we use our hands.
In right-handers, the left hemisphere processes high-frequency vision; in left-handers, this is reversed. The findings challenge long-standing theories that such asymmetries develop in the womb or are tied directly to language processing.
Key Facts
- Handedness Link: High-frequency visual specialization is reversed in left-handers compared to right-handers.
- Language Separate: Both righties and lefties process high-frequency sounds for language in the left hemisphere.
- Action-Perception Connection: The brain’s perceptual systems may organize according to how each hand is used in everyday tasks.
Source: Cornell University
Imagine hammering a nail into a wall: Your dominant hand swings the hammer while the other holds the nail steady.
In a new theory, Cornell psychology scholars propose that everyday tasks like this are responsible for a fundamental aspect of perception in the brain: why one side is specialized to process high-frequency visual information, and the other low frequencies.
Longstanding research has shown that for most people, the brain’s left hemisphere responds fastest to rapidly changing (high-frequency) input, like hammering, while the right hemisphere processes more static (low-frequency) events, like holding a nail. But scientists lacked an explanation for why this is so.
With their “action asymmetry hypothesis,” the Cornell team explains this phenomenon and demonstrates – for the first time in large studies – that high-frequency visual specialization is reversed in left-handed people.
“We found the same pattern you always find in righties, whose left hemispheres are specialized for high-frequency visual perception – and the exact opposite in lefties,” said Daniel Casasanto, associate professor in the Department of Psychology and College of Human Ecology, and director of the Experience and Cognition Lab.
“These data support our theory that the way perceptual systems are organized in the brain depends on the way we perform actions with our hands.”
Casasanto is the senior author of “Frequency Asymmetries in Vision: the Action Asymmetry Hypothesis,” published June 27 in the Journal of Experimental Psychology: General. The first author is Owen Morgan, M.A. ’23, a doctoral student in the field of psychology.
Theories about the so-called hemispheric asymmetry for visual perception have suggested that it may develop in the womb, or that it could be linked to language, since the left hemisphere processes the high-frequency components of language.
The new study contradicts those theories, since handedness usually does not cause any reversal in fetal development, or in the hemisphere that processes language.
Building on a body of work Casasanto calls the body specificity hypothesis – showing that people’s brains and minds are organized according to specifics of how their bodies interact with the world – Casasanto and Morgan investigated the role of motor action.
They repeated experiments conducted previously to establish hemispheric asymmetry for frequency perception, but with a key difference. They enlisted lefties – subjects routinely left out of prior work that prioritized homogeneous samples.
“In this case,” Casasanto said, “testing lefties and comparing them to the righties is the key to figuring out how perception is actually organized in the brain, and why it’s organized that way.”
A pair of experiments including nearly 2,000 participants – roughly equal numbers of right- and left-handers, and some mixed – confirmed which hemisphere handled high-frequency visual processing.
That was measured by reaction times when pairs of “hierarchically constructed” target shapes were flashed on a screen – for example, a diamond composed of small triangles next to a triangle composed of small squares.
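In visual-field experiments like this, hemispheric specialization is inferred from reaction-time advantages: stimuli flashed in the right visual field reach the left hemisphere first, and vice versa. A sketch with hypothetical reaction times (illustrative numbers, not the study’s data):

```python
from statistics import mean

def visual_field_advantage_ms(rt_left_field, rt_right_field):
    """Positive = faster responses in the right visual field (left hemisphere)."""
    return mean(rt_left_field) - mean(rt_right_field)

# Hypothetical reaction times (ms) for high-frequency, local-detail targets
righty = visual_field_advantage_ms([520, 540, 530], [490, 500, 495])  # positive
lefty = visual_field_advantage_ms([485, 495, 500], [525, 530, 540])   # negative
```

An advantage with opposite sign in left- and right-handers is the reversal pattern the study reports.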
A third experiment confirmed that both righties and lefties used their left hemisphere to process high-frequency sounds in language. That ruled out the possibility that so-called language laterality could explain the hemispheric differences in visual perception.
Why might right- and left-handers process high-frequency vision in different sides of the brain? The authors propose two mechanisms.
First, once a hemisphere becomes responsible for high-frequency action, it may be efficient for the brain to link similar motor, visual, and auditory systems on the same side. Second, the dominant hand may continuously feed high-frequency sights and sounds into the corresponding visual field and ear.
“The fact of this action asymmetry causes asymmetries in the visual and auditory input that we give ourselves,” Casasanto said.
“Then the hemisphere that is used to getting either high- or low-frequency input may become specialized for that kind of information.”
The studies did not find a strong reversal in low-frequency visual processing. The researchers speculate this may be because either hand can perform many low-frequency tasks, such as holding a nail, so specialization is reduced.
After showing that high-frequency visual processing reverses with handedness, providing initial support for the action asymmetry hypothesis, Casasanto plans in future research to investigate whether that is also true for hearing.
The team also plans to test frequency specialization in stroke patients who have lost the use of their dominant hand, to see whether visual perception gets reorganized according to their new habits of hand action.
“That’s our hypothesis: that asymmetries in hand action give rise to asymmetries in perception in vision and audition,” he said.
“The way you perform actions with your hands influences a bunch of different cognitive functions, including language and emotion – and, we know now, visual perception.”
About this handedness and visual processing research news
Author: James Dean
Source: Cornell University
Contact: James Dean – Cornell University
Image: The image is credited to Neuroscience News
Original Research: Closed access.
“Frequency asymmetries in vision: The action asymmetry hypothesis” by Owen Morgan et al. Journal of Experimental Psychology: General
Abstract
Frequency asymmetries in vision: The action asymmetry hypothesis
According to a large body of research, the left and right cerebral hemispheres are specialized for different frequencies in vision and audition, but the cause of this specialization is unknown.
Here, we tested whether hemispheric asymmetries in visual perception can be explained by asymmetries in people’s tendency to perform high- and low-frequency actions with their dominant and nondominant hands, respectively (the action asymmetry hypothesis).
In two large, preregistered, online studies, participants judged low- and high-frequency shapes presented in the left and right visual hemifields.
Overall, the typical hemispheric asymmetry for high versus low visual frequencies, which we found in right handers, was significantly reduced in left handers.
Across experiments, hemispheric asymmetries for high-spatial-frequency stimuli were completely reversed between strong right and left handers.
A third experiment testing dichotic listening suggests that this reversal cannot be explained by differences in language laterality.
These results provide initial support for the action asymmetry hypothesis: Frequency asymmetries in perception may be explained by frequency asymmetries in action.
-
Tom Hanks pays tribute to late astronaut James Lovell, commander of Apollo 13
James Lovell died at 97 years old on Thursday and Tom Hanks is paying tribute to him.
The 69-year-old actor portrayed the former NASA commander in 1995’s “Apollo 13,” which told the story of the ill-fated 1970 lunar mission that endured critical failures before safely returning to Earth.
He wrote on Instagram, “There are people who dare, who dream, and who lead others to the places we would not go on our own. Jim Lovell, who for a long while had gone farther into space and for longer than any other person of our planet, was that kind of guy.”
He continued, “His many voyages around Earth and on to so-very-close to the moon were not made for riches or celebrity, but because such challenges as those are what fuels the course of being alive — and who better than Jim Lovell to make those voyages. On this night of a full moon, he passes on — to the heavens, to the cosmos, to the stars. God speed you, on this next voyage, Jim Lovell.”
Ron Howard, the director of “Apollo 13,” also paid tribute to the former Navy test pilot, saying it had been a “tremendous honor” to know such a “remarkable” man, who took part in four space missions during his life.
Sharing a carousel of photos and videos of the astronaut, including side-by-side photos from his film and the real life events, he wrote, “RIP #CommanderLovell. Navy test pilot, Gemini 7, Gemini 12, Apollo 8 and, of course, Apollo 13.
“Simply knowing Jim has been a tremendous honor. His combination of intellect, courage and commitment to duty made him one of the most remarkable individuals I’ve ever met. His support of our movie-making efforts inspired authenticity and elevated our process in so many ways. Thank you, sir, for your service to our country and to humankind.”
RIP James.