Ancient tools found in Indonesia may shed light on mystery of ‘Hobbit’ humans
Archaeologists have found primitive stone tools on the island of Sulawesi, Indonesia, that date back 1.04 to 1.48 million years.
This marks the oldest known evidence of human habitation in the region.
The study, published in Nature, could help solve the mystery of Homo floresiensis.
These diminutive “hobbit” humans lived on nearby Flores Island until about 50,000 years ago.
The newly discovered tools have sharp edges and were likely used for cutting and scraping.
Excavations revealed seven stone artifacts.
This suggests that early humans may have inhabited Sulawesi around the same time as, or even earlier than, Flores.
Dr. Adam Brumm, co-lead author of the study, said: “We have long suspected that the Homo floresiensis lineage came originally from Sulawesi. This discovery adds further weight to that possibility.”
The seven stones were unearthed at the Calio site alongside animal fossils, including the jawbone of an extinct giant pig.
Although no human remains were found among the fossils, researchers believe that Homo erectus or an early relative may have made the tools.
The study has sparked speculation about how ancient humans reached these isolated islands.
Characteristics of hobbit humans
Brumm noted that “getting to Sulawesi would not have been easy.”
Experts suggest that the discovery highlights gaps in our understanding of how early humans migrated.
Prof. John Shea of Stony Brook University stated: “If hominins reached these islands, they might have survived briefly before going extinct,” noting that only modern humans have clear evidence of successful ocean crossings.
To learn more about this mysterious chapter of human evolution, researchers continue to search Sulawesi for hominin fossils. Brumm said: “There’s a truly fascinating story waiting to be told on that island.”
A study analyzing data from NASA’s Parker Solar Probe has uncovered evidence of a “helicity barrier” in the Sun’s atmosphere.
In 2018, NASA launched the Parker Solar Probe on a trajectory that would eventually have it dive into the Sun’s atmosphere (corona), getting seven times closer to our host star than any other spacecraft so far. In June 2025, the probe completed its 24th close approach to the Sun, whilst equaling its record for the fastest a human-made object has ever traveled, at a zippy 692,000 kilometers per hour (430,000 miles per hour).
The probe is aimed at studying the Sun’s atmosphere and will hopefully shed light on a few long-standing mysteries, such as how the solar wind is accelerated. One puzzle, first discovered in 1939, is that the Sun’s corona is far hotter than the solar surface. And not just by a little.
“The hottest part of the Sun is its core, where temperatures top 27 million °F (15 million °C). The part of the Sun we call its surface – the photosphere – is a relatively cool 10,000 °F (5,500 °C),” NASA explains. “In one of the Sun’s biggest mysteries, the Sun’s outer atmosphere, the corona, gets hotter the farther it stretches from the surface. The corona reaches up to 3.5 million °F (2 million °C) – much, much hotter than the photosphere.”
This is known as the “coronal heating problem”. The basic problem is this: why is the atmosphere far hotter than the surface, when the surface is much closer to the core, where energy is generated through the fusion of hydrogen into helium?
There have been suggestions that the extra heat in the corona is caused by turbulence, or a type of magnetic wave known as “ion cyclotron waves”.
“Both, however, have some problem—turbulence struggles to explain why hydrogen, helium and oxygen in the gas become as hot as they do, while electrons remain surprisingly cold; while the magnetic waves theory could explain this feature, there doesn’t seem to be enough of the waves coming off the sun’s surface to heat up the gas,” Dr Romain Meyrand, author on the new paper, explained in a previous statement.
While both ideas have problems on their own, combined with a “helicity barrier” they show some promise for explaining the coronal heating problem.
“If we imagine plasma heating as occurring a bit like water flowing down a hill, with electrons heated right at the bottom, then the helicity barrier acts like a dam, stopping the flow and diverting its energy into ion cyclotron waves,” Meyrand added. “In this way, the helicity barrier links the two theories and resolves each of their individual problems.”
Essentially, the helicity “barrier” alters turbulent dissipation, changing how fluctuations dissipate and how the plasma is heated. The team has now analyzed data from the Parker Solar Probe, and it appears to show evidence for the helicity barrier.
“The barrier can form only under certain conditions, such as when thermal energy is relatively low compared to magnetic energy. Since fluctuations in the magnetic field are expected to behave differently when the barrier is active versus when it is not, measuring how these fluctuations vary with solar wind conditions relevant to the barrier’s formation—including the thermal-to-magnetic energy ratio—provides a way to test for the barrier’s presence,” the team explains in their paper.
“By analyzing solar wind magnetic field measurements, we find that the fluctuations behave exactly as predicted with changes in solar wind parameters that characterize these conditions. This analysis also allows us to identify specific values for these parameters that are needed for the barrier to form, and we find that these values are common near the Sun.”
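For a rough sense of the “thermal-to-magnetic energy ratio” the authors refer to (often expressed as the plasma beta, the ratio of thermal to magnetic pressure), here is a minimal Python sketch; the input values are illustrative near-Sun placeholders, not measurements from the paper.

```python
import math

# Illustrative near-Sun solar wind values (placeholders, not from the study)
n = 2e9        # proton number density in m^-3 (~2000 per cm^3)
T = 1e6        # proton temperature in kelvin
B = 1e-6       # magnetic field strength in tesla (~1000 nT)

k_B = 1.380649e-23         # Boltzmann constant, J/K
mu_0 = 4 * math.pi * 1e-7  # vacuum permeability, N/A^2

thermal_pressure = n * k_B * T           # proton thermal pressure, Pa
magnetic_pressure = B**2 / (2 * mu_0)    # magnetic pressure, Pa

beta = thermal_pressure / magnetic_pressure
# Values well below 1 mark the low-beta regime in which thermal energy is
# small compared to magnetic energy, the kind of condition described above.
print(f"plasma beta ≈ {beta:.3f}")
```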
Further analysis is necessary, but the approach looks fairly promising for explaining the problem.
“This paper is important as it provides clear evidence for the presence of the helicity barrier, which answers some long-standing questions about coronal heating and solar wind acceleration, such as the temperature signatures seen in the solar atmosphere, and the variability of different solar wind streams,” Dr Christopher Chen, study author and Reader in Space Plasma Physics at Queen Mary University of London, said in a statement.
“This allows us to better understand the fundamental physics of turbulent dissipation, the connection between small-scale physics and the global properties of the heliosphere, and make better predictions for space weather.”
While conducted on our own Sun (we are far from ready to plunge spacecraft into the atmosphere of other stars), the study has implications for other stars, and other parts of the universe, in other collisionless plasmas.
“This result is exciting because, by confirming the presence of the ‘helicity barrier’, we can account for properties of the solar wind that were previously unexplained, including that its protons are typically hotter than its electrons,” said Jack McIntyre, lead author and PhD student from Queen Mary University of London.
“By improving our understanding of turbulent dissipation, it could also have important implications for other systems in astrophysics.”
The study was published in Physical Review X.
An earlier version of this story was published in July 2025.
The air can look clear and still carry a problem. Across the United States, ozone has been linked to lower chances of survival for some trees, and a new analysis finally shows how much risk different species face.
Researchers paired long term forest records with measured exposure to tropospheric ozone to set species specific thresholds for harm. The study spans 88 species and roughly 1.5 million trees, a scale that moves the conversation beyond seedlings and lab chambers.
Study on ozone pollution
Nathan Pavlovic of Sonoma Technology Inc. and Charles Driscoll of Syracuse University brought together forest inventory data and air quality archives to estimate how exposure links to slower growth and lower survival. Their team focused on mature trees observed in place, not seedlings in controlled settings.
Earlier work in the United States leaned heavily on seedling experiments, including a 16 species synthesis that set response curves for biomass loss. That seedling paper became a touchstone, but it could not tell us how older trees respond after decades in the field.
Ozone effects on tree survival
The team used the concept of a critical level to summarize risk, the exposure at which a defined drop in growth or survival appears.
The researchers expressed exposure with “W126,” a cumulative, summertime-weighted metric for ozone exposure that emphasizes higher concentrations during daylight hours, reflecting their greater potential to damage vegetation.
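To show how a W126-style index accumulates, here is a minimal Python sketch, assuming hourly daytime ozone concentrations in ppm; it applies the standard sigmoidal weighting and sums over the season, and it omits the monthly adjustments and three-month running maximum used in the regulatory calculation.

```python
import math

def w126_index(hourly_ozone_ppm):
    """Approximate W126: sum of sigmoidally weighted daytime ozone values.

    hourly_ozone_ppm: iterable of hourly O3 concentrations (ppm) for the
    8 a.m.-8 p.m. window across the growing season.
    Weight = 1 / (1 + 4403 * exp(-126 * C)) emphasizes higher concentrations.
    """
    return sum(c / (1 + 4403 * math.exp(-126 * c)) for c in hourly_ozone_ppm)

# Toy example: 90 days x 12 daytime hours at a flat 0.06 ppm (60 ppb)
season = [0.06] * (90 * 12)
print(f"W126 ≈ {w126_index(season):.1f} ppm-hours")
```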
They modeled growth and 10 year survival separately, which matters because a small shift in survival compounds over a century long rotation. The paper reports species specific W126 levels for a 5 percent drop in growth and a 1 percent drop in survival, allowing managers to see which trees blink first.
The numbers they put forth sit in a wider policy context. The EPA has long evaluated vegetation protection using W126 in its welfare reviews, and its advisory panel, CASAC, has considered thresholds associated with 1 to 2 percent biomass loss in trees when weighing secondary standards.
Ozone impact in west vs east
“Recently (2016-2018), portions of the western United States exceeded O3 CLs (ozone critical levels, the exposure thresholds at which a specific percentage decline in tree growth or survival is expected to occur) for nearly all tree species for both growth and survival,” wrote Pavlovic and colleagues. The clearest pattern appears west of the Rockies.
In the East, the analysis found little evidence of widespread growth impacts at current levels, with survival effects limited to sensitive species and pockets with higher exposure.
That picture is consistent with national monitoring records showing strong declines in extreme ozone across the eastern United States since the 2000s.
Seedlings versus mature trees
Seedling studies provide clean experiments, but they cannot fully replicate heat, drought, soil variation, and competition in mature stands. The earlier 16 species seedling synthesis captured broad sensitivity classes, yet some species that look tolerant in chambers can be more vulnerable in place when ozone stacks with water stress and heat.
The new analysis, by design, keeps those mediating factors in the model to produce field relevant exposure thresholds.
That makes the species list more useful for foresters weighing which coniferous evergreens to plant and where deciduous hardwoods may hold up better under ozone.
What this means for forests and policy
A practical use case is simple. If a county’s summertime W126 sits above a species’ survival threshold, managers can alter planting mixes, accelerate thinning, or shift regeneration toward less sensitive species that still meet ecological goals.
Policy makers face a different question. Secondary ozone standards under the Clean Air Act are meant to protect crops, materials, and ecosystems, but the current form, built around an eight-hour human health metric, does not map neatly to vegetation outcomes that depend on seasonal accumulation.
Internationally, Europe often reports AOT40 exceedance while the United States leans on W126, and scientific bodies favor flux based vegetation metrics that track uptake. That split underscores why an ecosystem specific exposure metric remains important in any standard that intends to protect living landscapes.
Ozone pollution and tree survival
No single metric captures every pathway to damage, and W126 emphasizes summertime peaks that matter a great deal for crops. The longer growing seasons of evergreen conifers can raise cumulative uptake, which may help explain why western forests emerge as more sensitive in the new maps.
Uncertainty also comes from interpolation in places with sparse rural monitors, from wildfire smoke chemistry, and from how drought changes stomatal behavior. Even so, the thresholds offer a practical yardstick that can be updated as monitoring improves and as exposure models evolve.
The broader science keeps moving, including new Earth system model schemes that better represent ozone injury to photosynthesis and water use.
Better process representation should make regional carbon and climate projections more realistic, and help translate exposure reductions into real ecological gains.
The study is published in Journal of Geophysical Research: Atmospheres.
Relationship between diamondoid content and thermal maturity
A substantial amount of research has confirmed that the formation of diamondoids is closely related to the thermal maturation of organic matter. Fang et al.34 proposed that the evolution of diamondoids in petroleum was divided into three main stages, i.e., early generation stage (Ro at 0.8% − 1.0%), high-temperature cracking stage (Ro > 1.0%), and cracking stage. Wei et al.5 suggested that diamondoid compounds underwent stages of generation, enrichment, and destruction with increasing thermal maturity, where they suggested that the generation stage was at Ro < 1.1%, the enrichment stage at Ro ranging 1.1% − 4.0%, and the destruction stage at Ro > 4.0%. Although there are some differences in the criteria for dividing the evolutionary stages in these studies, it is evident that thermal maturity plays a decisive role in the formation and destruction of diamondoids.
Fig. 6
Relationship between diamondoid content in crude oil and its related source rock Ro.
The Gulong shale oil is characterized by in-situ accumulation19,22, which means that the burial depth of shale or the thermal maturity of organic matter controls the content and distribution characteristics of diamondoids in the shale oil. As shown in Fig. 6, the total diamondoid content and adamantane content are closely related to Ro, with correlation coefficients R2 greater than 0.84, indicating that the formation of diamondoids and adamantanes is mainly controlled by thermal maturity. The relationship between the content of diamantanes and Ro is weaker, with a correlation coefficient R2 greater than 0.56, suggesting that the content of diamantanes is not only affected by thermal maturity, but also by other factors.
According to the relationship between diamondoid content and Ro shown in Fig. 6, four stages can be distinguished. (1) Immature to low-mature stage with Ro < 0.8%; diamondoid components are mainly adamantanes with low content, generally less than 60 µg/g, and diamantanes are not detected. We suspect that the initial formation of diamondoids may have occurred during early diagenesis or as a result of biogenic processes. (2) Mature stage with Ro from 0.8 to 1.2%; diamondoid content remains at a low level, still dominated by adamantanes with content less than 100 µg/g, and diamantanes are not detected. This indicates that the maturation process has little effect on the formation of diamondoids. (3) Mature to high-mature stage with Ro from 1.2 to 1.4%; diamondoid content shows a slow increasing trend. The total diamondoid content ranges from 100 to 300 µg/g, adamantane content ranges from 100 to 280 µg/g, and diamantanes appear with content around 15 to 25 µg/g. This implies the beginning of rearrangement and cracking reactions in the shale oil. (4) High-mature stage with Ro > 1.4%; diamondoid content increases rapidly. The total diamondoid content ranges from 300 to 1000 µg/g, adamantane content ranges from 280 to 1000 µg/g, and diamantanes are present with content around 25 to 55 µg/g. This indicates significant rearrangement and cracking reactions in the shale oil.
Relationship between diamondoid content and clay mineral catalysis
Multiple studies have confirmed that diamondoid compounds can be formed from polycyclic hydrocarbons under high-temperature conditions, catalyzed by Lewis acid in clay minerals6,15,35,36. Smectite and other clay minerals play an important role in the formation of diamondoids. Chao37 demonstrated through pyrolysis experiments that the Lewis acidity decreases gradually during the transformation of smectite to illite. Research on the diagenetic evolution of clay minerals in the Gulong shale shows that in the stage with Ro < 1.0%, clay minerals are mainly composed of smectite and illite-smectite mixed layers38. After Ro > 1.0%, clay minerals are dominated by illite-smectite mixed layers and illite, with the mixed layers progressively transforming into illite with increasing thermal maturity. Although the catalytic activity of clay minerals is strong in the early stages of diagenesis and weakens in later stages, their catalytic role remains significant as organic matter begins to undergo rearrangement and cracking reactions37.
At a similar, relatively high thermal maturity stage, shale oil often contains a higher concentration of diamondoids than tight oil (Fig. 6). For example, in the Qijia-Gulong Sag, the tight oil from Well HT1H in the Fuyu oil layer has a Ro of approximately 1.45%, with a total diamondoid content of 270 µg/g, adamantane content of 251 µg/g, and diamantane content of 19 µg/g. In contrast, the shale oil from the adjacent Well GY8HC, which is sourced from the overlying source rock, has a total diamondoid content of 362 µg/g, adamantane content of 339 µg/g, and diamantane content of 24 µg/g. Since tight oil originates from early-generated crude oil in the overlying Qingshankou Formation shale18, it has experienced higher temperatures than its overlying source rock after entering the reservoir. Nevertheless, the content of all types of diamondoids in the shale oil is 1.26 to 1.35 times that in the tight oil. This indicates that under the same thermal evolution conditions, the cracking degree of crude oil in clay-poor reservoirs, in which the clay mineral content is between 4.7% and 13.8%, is significantly lower than that in clay-rich shale, whose clay mineral content is 32.4%−56.1%. This suggests that the catalytic action of clay minerals in shale promotes the cracking of shale oil and the formation of diamondoids. In contrast, tight sandstone reservoirs, which contain fewer clay minerals and have weaker catalytic activity, have lower diamondoid content.
Relationship between diamondoid content and overpressure in oil reservoirs
Song et al.16 argued that under overpressure conditions, the acidic catalytic activity of clay minerals was enhanced. Under high-temperature and high-pressure conditions, the acidic sites on the surface of clay minerals can adsorb and activate lower-grade diamondoids with alkyl substitution in crude oil, making them more susceptible to rearrangement and homologation reactions, thereby generating higher-grade diamondoids. Additionally, the overpressure state increases the intensity of molecular motion in crude oil, leading to more frequent collisions between molecules and accelerating the rate of diamondoid formation. Lin and Wilk6 also proposed a similar argument, suggesting that higher grade diamondoids in crude oils were generated from lower grade diamondoids with alkyl substitution through homologation reactions under higher subsurface temperatures and pressures.
In this case, comparing the diamondoid content of shale oils with similar maturity but different pressure environments during the high-maturity stage also suggests the impact of overpressure on diamondoid formation. For example, a shale oil (Ro 1.31%) from Well SYY1 in the Qijia-Gulong Sag, located 20 m horizontally from a fault, has a lower reservoir pressure coefficient (1.3) due to oil expulsion through the fault; its total diamondoid content is 119 µg/g, adamantane content is 119 µg/g, and diamantanes are absent. In contrast, shale oil from Well GY7 in the area has a similar thermal maturity (Ro 1.35%) but lies far from the fault, at a horizontal distance of 1,004 m; it shows no significant vertical hydrocarbon expulsion, and the reservoir pressure coefficient is 1.5. The shale oil from this well has a total diamondoid content of 281 µg/g, adamantane content of 247 µg/g, and diamantane content of 7 µg/g. The total diamondoid content and adamantane content in Well GY7 are 2.4 and 1.9 times those in Well SYY1, respectively; in particular, the presence of diamantane in Well GY7 strongly suggests that overpressure significantly enhances the formation of higher-grade diamondoids. Measured reservoir pressure coefficients in the Gulong shale are closely related to thermal maturity: the pressure coefficient is 1.0 for Ro < 1.0%, 1.0 to 1.2 for Ro between 1.0% and 1.2%, and 1.2 to 1.6 for Ro > 1.2%. Therefore, it can be expected that overpressure exerts a more significant influence on diamondoid formation when Ro exceeds 1.2%. Notably, the correlation between diamantane content and maturity in shale oil (Fig. 6) is weaker than that observed for adamantanes, implying that diamantane content may be more influenced by reservoir pressure. In shale oils with similar thermal maturity from different areas, variations in through-going faults and formation pressure coefficients lead to significant differences in diamantane content. The reason may be that the increase in pressure changes the structure of clay minerals and enhances their catalytic activity39.
Relationship between diamondoid composition and crude oil maturity
The compositional distribution of diamondoids in shale oil is closely related to their thermal maturity. Chen13 established a relationship between Ro and MDI (Methyl Diamantane Index: 4-MD/(1-MD + 3-MD + 4-MD)) for conventional oil in the Tarim Basin; as shown in Fig. 7, Ro ranges from 0.9% to 2.2%, while MDI ranges from 20% to 75%. In contrast, for the Gulong shale oil, Ro ranges from 1.1% to 1.6%, while MDI mainly ranges between 15% and 70% (Fig. 7). Compared with the conventional oil in the Tarim Basin, the more rapid increase of MDI with increasing thermal maturity in the Gulong shale oil may be attributed to the enhanced clay mineral catalytic effect under the high-temperature and high-pressure conditions present in Gulong shale oil reservoirs, as discussed before.
Fig. 7
Relationship between Ro and MDI in the Tarim crude oil13 and Gulong shale oil.
Dahl et al.1 proposed a calculation formula for evaluating the degree of oil cracking using the absolute content of 4-MD + 3-MD:

CR = [1 − (C0/Cc)] × 100%

Where: C0 is the concentration of (4-+3-) MD in the uncracked sample (µg/g), which is the baseline value of (4-+3-) MD; Cc is the concentration of (4-+3-) MD in a sample of any maturity (µg/g); and CR represents the degree of oil cracking.
The basis for evaluation lies in determining the baseline value of 4-MD + 3-MD in a basin. In this study, the baseline value of (4-+3-) MD of conventional oil, which is 4.6 µg/g, is used as the reference. This value is comparable to the global baseline level of 4 to 5 µg/g for crude oil. For the Gulong shale oil, the concentration of 4-MD + 3-MD is generally in the range of 4 to 9 µg/g. The cracking degree of the Gulong shale oil is estimated to be between 0% and 50% (Fig. 8). Specifically, for the shale oil with Ro between 1.2% and 1.4%, the cracking degree ranges from 0% to 35%, while for the shale oil with Ro > 1.4%, the cracking degree ranges from 8 to 51%.
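A minimal Python sketch of this cracking-degree calculation, using the 4.6 µg/g baseline cited above; the sample concentrations are illustrative rather than measured values from the study.

```python
def cracking_degree(cc_ug_per_g, c0_ug_per_g=4.6):
    """Degree of oil cracking (%) from the (4-+3-)MD concentration.

    cc_ug_per_g: measured (4-MD + 3-MD) concentration in the sample (ug/g)
    c0_ug_per_g: baseline (4-MD + 3-MD) concentration in uncracked oil (ug/g)
    CR = (1 - C0/Cc) * 100; clamped at 0 for samples at or below the baseline.
    """
    return max(0.0, (1.0 - c0_ug_per_g / cc_ug_per_g) * 100.0)

# Illustrative values spanning the 4-9 ug/g range reported for the Gulong shale oil
for cc in (4.6, 6.0, 9.0):
    print(f"Cc = {cc:.1f} ug/g -> cracking degree ≈ {cracking_degree(cc):.0f}%")
# 4.6 -> 0%, 6.0 -> ~23%, 9.0 -> ~49%, consistent with the 0-50% range above
```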
Fig. 8
Relationship between the cracking degree of Gulong shale oil and Ro.
As shown in Fig. 8, shale oil samples with similar maturity show significant differences in cracking degree, implying that shale oil cracking may be impacted by reservoir fluid pressure and clay mineral content, as discussed above. Shale oil reservoirs close to a fault tend to have lower fluid pressure (Section 4.3); as a result, the corresponding shale oils may contain fewer diamondoids and show a lower apparent cracking degree.
Notably, in contrast to hydrocarbon reservoirs in other basins, which may have experienced migration or secondary alteration, the Gulong shale oil is characterized by in-situ accumulation. The oil has not undergone large-scale migration, gas washing, or biodegradation, and its high diamondoid content indicates the characteristics of in-situ cracking and rearrangement to form diamondoids. In contrast, conventional oils in the Daqing Placanticline have relatively low total diamondoid content, but the content of diamantanes is significantly higher than that in shale oil with the same maturity (Table 3), indicating that the diamondoid content is mainly contributed by early oil generation, with some diamantanes originating from late-stage oil generation.
Formation and evolution pattern of diamondoids
Based on the analysis of diamondoid formation processes and the relationship between their relative abundance and key controlling factors, the formation of diamondoids in the Gulong shale oil is divided into four distinct stages (Table 5).
Primary stage (Ro < 0.8%): Thermal degradation of organic matter is weak. Since no significant oil cracking occurs in this stage, we suspect that the diamondoids may derive from the transformation of biomarkers preserved in kerogen40,41,42. The diamondoids are mainly adamantanes, and the content is generally less than 60 µg/g.
Table 5 Formation and evolution pattern of diamondoids in the Gulong shale oil.
Inheritance stage (Ro = 0.8%−1.2%): The thermal degradation of organic matter is enhanced; as a result, diamondoids increase slightly. However, diamondoids remain predominantly adamantanes, with the content less than 100 µg/g.
Generation stage (Ro = 1.2%−1.4%): Oil starts to crack, with the maximum cracking degree reaching up to 35%. The cracking of shale oil positively impacts the formation of diamondoids. Additionally, the catalytic effect of clay minerals combined with overpressure accelerates the formation of diamondoids. This is characterized by a marked increase in the quantity of adamantanes, with the highest content reaching up to 300 µg/g. Diamantanes also begin to appear, but are present in small amounts, with the content less than 25 µg/g.
Enrichment stage (Ro > 1.4%): Oil undergoes significant thermal cracking at this stage, indicated by a rapid rise in diamondoids. For example, the shale oil in the Qijia-Gulong Sag shows a maximum cracking degree of 51% and a minimum density of 0.77 g/cm³ (Table 1). Additionally, the catalytic effect of clay minerals can increase diamondoids by 1.3 times, while overpressure in reservoirs can further boost diamondoids by approximately 2 times. This results in adamantane compounds reaching a maximum content of up to 1000 µg/g and diamantanes reaching a maximum of up to 55 µg/g. The combined influence of these factors promotes the enrichment of diamondoids.
Four new species of tarantula have been discovered – and if they knew the name they were going to get, they might have presented themselves sooner.
The males are so well-endowed that scientists essentially named them the ‘genital king.’
Spiders don’t really have penises, in the traditional sense. Instead, they use arm-like structures called palps to collect sperm from ducts in their abdomen and insert it into the genital opening of a female. It sounds like barely a fraction of the romance or fun that many other animals enjoy, but it gets the job done.
Males of the newly described species boast the longest palps of all known tarantulas. The longest reach up to 5 centimeters (2 inches) – almost as long as the spider’s legs, and 3.85 times the length of its carapace. By comparison, most tarantula species sport palps merely twice as long as their carapace.
The four new species were grouped into a brand new genus, which now also includes a fifth species that was previously described but placed in a different genus.
“Based on both morphological and molecular data, they are so distinct from their closest relatives that we had to establish an entirely new genus to classify them, and we named it Satyrex,” says Alireza Zamani, arachnologist at the University of Turku in Finland.
They’re named after satyrs, male nature spirits from ancient Greek mythology known for their bawdy behavior and prominent packages. The end of it, rex, is Latin for ‘king’, as popularized by the likes of Oedipus and Tyrannosaurus.
A Satyrex ferox male. (Bobby Bok)
The largest of the new species is Satyrex ferox, with its specific name meaning ‘fierce’ thanks to its aggressive nature. The others are S. arabicus and S. somalicus, which are named after the areas they’re found in (the Arabian Peninsula and Somalia, respectively). The fourth is S. speciosus, because of its brighter coloration.
As for why the spiders are proudly packing, it might be a matter of self-preservation.
“We have tentatively suggested that the long palps might allow the male to keep a safer distance during mating and help him avoid being attacked and devoured by the highly aggressive female,” says Zamani.
The research was published in the journal ZooKeys.
A new study, published in the Journal of Social and Personal Relationships, reveals new details about the link between gossip within couples and their overall well-being.
The study, conducted by psychology researchers at UC Riverside, explored the role of gossip in producing positive outcomes in both same-gender and different-gender couples. Contrary to the popular belief that gossiping is negative, the recent study indicated that it can actually strengthen bonds between partners.
For the health of your relationship—gossip
The study’s lead author, Chandler Spahr, stated that everyone gossips, and for good reason, as it could be a sign that a relationship is strong. Spahr noted that this was “the first to examine the dynamics of gossip and well-being with romantic partnerships.”
The research team, led by Spahr, studied 76 same-gender and different-gender couples in Southern California. To capture their conversations, participants wore an Electronically Activated Recorder (EAR), a portable listening device, which recorded about 14% of their chatter throughout the day.
Researchers found that both same-gender and different-gender couples gossiped for about 38 minutes a day individually and 29 minutes a day together. The study found that woman-woman couples gossiped the most.
Interestingly, these couples reported high levels of overall happiness, suggesting that gossiping helps them bond. According to UC Riverside, woman-woman relationships showed the highest level of relationship quality.
The researchers explained that gossip isn’t typically about tearing someone down. Instead, it’s a way to discuss people who are not present. Senior author Megan Robbins described it as the experience of driving home from a party and recapping who was there and what happened, such as asking, “Did you see so-and-so?”
Gossip often gets a bad reputation, but that may be because the chatter is negative. For couples, however, “negatively gossiping with one’s romantic partner on the way home from a party could signal that the couple’s bond is stronger than with their friends at the party, while positively gossiping could prolong fun experiences,” the study’s authors wrote.
The study concluded that whether the gossip is positive or negative, it can signal that the couple’s bond is strong.
Why do people gossip?
The study’s authors suspect that gossip may “reinforce the perception that partners are on the same team,” thereby reinforcing “feelings of connectedness, trust, and other positive relationship qualities, as well as contributing to overall well-being.”
Furthermore, the authors suggested that gossip could be a social regulation tool that helps establish expectations and behaviors that contribute to a harmonious relationship, said UC Riverside.
This study builds on a previous survey Robbins conducted in 2019 that debunked common myths about gossip. The 2019 study found that women do not tear people down more than men, and lower-income individuals do not gossip more than wealthy people. It did, however, find that younger people tend to engage in negative gossip more often than older adults.
These psychology researchers continue to study the function of gossip because, as Robbins stated in a UC Riverside press release, “everyone does it.” If everyone gossips, there must be a deeper reason, as it’s a way for people to bond and reinforce their relationships.
The study was published in the Journal of Social and Personal Relationships.
If you’ve ever tried to take a photo of the moon and stars, you know that astrophotography is incredibly challenging. Although there are tips and tricks for taking astrophotos on your cell phone, to capture the cosmos accurately for research purposes, you’re going to need specialized instruments. Meet the Legacy Survey of Space and Time (LSST) Camera, the largest digital camera in the world, which has found its home in the Andes Mountains.
Chile’s Vera C. Rubin Observatory, named for the astronomer whose research helped to substantiate the existence of dark matter, sits on top of the El Peñón summit. The area is home to several remarkable observatories due to its high altitude and clear skies, including the Southern Astrophysical Research Telescope and the Gemini South Observatory. The Rubin Observatory is the first of its kind, though, thanks to its combined primary/tertiary mirror, speed, computing infrastructure, and remarkable camera.
The Rubin Observatory’s LSST Camera (also known as the LSSTCam) is the largest digital camera ever constructed, with a whopping 3,200 megapixels, or 3.2 gigapixels. With one million pixels to every megapixel, that’s around 3,200,000,000 pixels in each shot. For comparison’s sake, a 4K-enabled camera has just over 8 megapixels, and it would take hundreds of 4K screens just to display one image captured by the LSSTCam. This means that when it records images of our night sky via the 8.4-meter Simonyi Survey Telescope, it’ll do so at an unprecedentedly high definition, giving us new insights into the galaxy.
Inside the LSST Camera
The LSST Camera is the only instrument used to support the Legacy Survey of Space and Time, from which it takes its name. It’s approximately three meters long and 1.65 meters across, but it’s packed with all kinds of components that make it immensely heavy. It weighs roughly 3,000 kilograms (about 6,600 pounds) and offers around a 9.6 square degree field of view.
The camera’s focal plane, the surface where incoming light is focused and recorded, is what allows the camera to capture images at such a high definition. This is achieved through the use of more than 200 charge-coupled devices (CCDs), each equipped with 16 amplifiers that each read out one megapixel. Courtesy of these devices, the entire 3,200 megapixels can be read out in just two seconds.
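A quick back-of-the-envelope check of those readout numbers, sketched in Python using the round figures from the article rather than official specifications:

```python
# Round figures from the article; actual sensor counts differ slightly.
ccds = 200                  # "more than 200" CCDs on the focal plane
amps_per_ccd = 16           # amplifiers per CCD
pixels_per_amp = 1_000_000  # each amplifier reads out ~1 megapixel
readout_seconds = 2.0       # full focal plane read out in ~2 seconds

total_pixels = ccds * amps_per_ccd * pixels_per_amp
print(f"total ≈ {total_pixels / 1e9:.1f} gigapixels")                       # ≈ 3.2
print(f"per-amplifier rate ≈ {pixels_per_amp / readout_seconds / 1e6:.1f} Mpix/s")
print(f"aggregate rate ≈ {total_pixels / readout_seconds / 1e9:.1f} Gpix/s")
```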
Another feature offered by the camera is its six differently colored optical filters. Optical filters are glass discs that can be placed in front of a lens. Each filter allows a clearly defined range of wavelengths to pass through and be captured in images, spanning from ultraviolet light all the way to infrared. Using different optical filters, and cycling through them in different combinations, allows researchers to learn a great deal more about space, because different astronomical objects emit different wavelengths under different conditions. Because of the huge size of the camera, the filters are also large, at 75 centimeters (30 inches) across – so large that they need their own machine to swap them in or out.
What is the LSST Camera used for?
The giant telescope camera’s purpose is to record a time-lapse of the universe to support our understanding of dark matter, how the Solar System was formed, and how the Milky Way is structured. To do this successfully, it needed to be built in a way that could capture a massive volume of ultra-wide images each night in the highest definition possible. This data will be processed and made available for research.
Per the BBC, the United States National Science Foundation and Department of Energy, which jointly funded the observatory, stated that the LSSTCam’s technical specifications make it possible to capture rare and previously undetected astronomical events. The aim is to use these images to record 20 billion galaxies over the next decade, capturing about 1,000 images each night – adding up to millions of images across ten years.
The camera has been built expressly with the aim of recording faint or variable space objects as well as possible. Its wide range enables it to capture a vast area when scanning the sky, while the telescope’s mirrors are designed to catch large volumes of light at once. Similarly, the mirrors themselves were engineered to help ensure that the telescope could move around quickly while producing as few vibrations as possible. In turn, this makes sure that images captured are sharp and focused, making them more useful for advancing our understanding of the skies.
Researchers at IBM and Moderna have successfully used a quantum simulation algorithm to predict the complex secondary structure of a 60-nucleotide-long mRNA sequence, the longest ever simulated on a quantum computer.
Messenger ribonucleic acid (mRNA) is a molecule that carries genetic information from DNA to ribosomes. It directs protein synthesis in cells and is used to create effective vaccines capable of instigating specific immune responses.
It’s widely believed that all the information required for a protein to adopt its correct three-dimensional conformation, through the process known as “folding,” is provided by its amino acid sequence.
Although it’s made up of only a single strand of nucleotides, mRNA has a secondary structure consisting of a series of folds that give a given molecule its specific 3D shape. The number of possible folding permutations increases exponentially with each added nucleotide. This makes the challenge of predicting what shape an mRNA molecule will take intractable at larger scales.
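To get a feel for that exponential blow-up, here is a toy Python sketch that counts non-crossing pairings of n nucleotides (the Motzkin numbers), a deliberately simplified stand-in that ignores base-pairing rules, minimum loop sizes, and pseudoknots.

```python
def motzkin_numbers(n_max):
    """Motzkin numbers M(0..n_max): counts of non-crossing pairings of n points.

    A crude upper-bound-style model for pseudoknot-free secondary structures;
    real RNA folding adds base-pairing and loop-size constraints.
    """
    m = [1, 1]  # M(0), M(1)
    for n in range(1, n_max):
        # M(n+1) = M(n) + sum_{k=0}^{n-1} M(k) * M(n-1-k)
        m.append(m[n] + sum(m[k] * m[n - 1 - k] for k in range(n)))
    return m[: n_max + 1]

counts = motzkin_numbers(60)
for n in (10, 20, 40, 60):
    print(f"n = {n:2d}: ~{counts[n]:.3e} possible pairings")
```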
The IBM and Moderna experiment, outlined in a study first published for the 2024 IEEE International Conference on Quantum Computing and Engineering, demonstrated how quantum computing can be used to augment the traditional methods for making such predictions, which have typically relied on binary, classical computers and artificial intelligence (AI) models such as Google DeepMind’s AlphaFold.
According to a new study published May 9 on the preprint arXiv database, algorithms capable of running on these classical architectures can process mRNA sequences with “hundreds or thousands of nucleotides,” but only by excluding higher complexity features such as “pseudoknots.”
Pseudoknots are complicated twists and shapes in a molecule’s secondary structure that are capable of engaging in more complex internal interactions than ordinary folds. Through their exclusion, the potential accuracy of any protein-folding prediction model is fundamentally limited.
Understanding and predicting even the smallest details of an mRNA molecule’s folds is essential to developing stronger predictions and, as a result, more effective mRNA-based vaccines.
Scientists hope to overcome the limitations inherent in the most powerful supercomputers and AI models by augmenting experiments with quantum technology. The researchers conducted multiple experiments using quantum simulation algorithms that relied on qubits — the quantum equivalent of a computer bit — to model molecules.
Initially using only 80 qubits (out of a possible 156) on the Heron R2 quantum processing unit (QPU), the team employed a conditional value-at-risk-based variational quantum algorithm (CVaR-based VQA) — a quantum optimization algorithm modeled after techniques used to analyze complex problems such as collision avoidance and financial risk assessment — to predict the secondary structure of a 60-nucleotide-long mRNA sequence.
The previous best for a quantum-based simulation model, according to the study, was a 42-nucleotide sequence. The researchers also scaled the experiment by applying recent error-correction techniques to deal with the noise generated by quantum functions.
In the new preprint study, the team provisionally demonstrated the experimental paradigm’s effectiveness in running simulated instances with up to 156 qubits for mRNA sequences of up to 60 nucleotides. They also conducted preliminary research demonstrating the potential to employ up to 354 qubits for the same algorithms in noiseless settings.
Ostensibly, increasing the number of qubits used to run the algorithm, while scaling the algorithms for additional subroutines, should lead to more accurate simulations and the ability to predict longer sequences, they said.
They noted, however, that “these methods necessitate the development of advanced techniques for embedding these problem-specific circuits into the existing quantum hardware,” — indicating that better algorithms and processing architectures will be needed to advance the research.
A hidden tectonic fault in Canada’s Yukon could be gearing up to unleash a major earthquake of at least magnitude 7.5, according to a new study.
The Tintina fault, which runs from northeastern British Columbia through to central Alaska, has been quietly accumulating strain for at least 12,000 years. The fault was previously thought to be relatively benign, but new analysis suggests it’s still very much active.
Worryingly, scientists can’t say when the next major quake will strike – only that it almost certainly will.
“Our findings indicate that the fault is active and continues to accumulate strain,” Dr Theron Finley, lead author of the study published in Geophysical Research Letters, told BBC Science Focus. “And so we anticipate that in the future, it will rupture again.”
Tintina is what’s known as a ‘right-lateral strike-slip fault’ – a type of fault where two blocks of the Earth’s crust slide past each other horizontally. If you stand on one side of the fault and the other side moves to your right during an earthquake, it’s called right-lateral.
Over time, one side of the fault has slipped around 430km (270mi), mostly during the Eocene – a geological epoch that lasted from roughly 56 to 33.9 million years ago – when it’s thought to have been moving as much as 13mm (0.5in) a year.
The Tintina fault extends 1,000km (600mi) from northeastern British Columbia into Alaska. – Credit: National Park Service
Although small earthquakes have occasionally been recorded in the region, the Tintina fault was largely considered dormant.
“There have been a few small earthquakes in the magnitude three to four range detected along or adjacent to the Tintina fault,” Finley said. “But nothing really suggests that it’s capable of larger ruptures.”
That changed when Finley and his team used new technologies to re-examine the fault. Combining satellite surface models with drone-mounted Light Detection and Ranging (LIDAR) data, the researchers were able to see through the dense forest and uncover signs of a seismically active past – and future – in the Yukon.
Scattered across the landscape were fault scarps – long, narrow landforms produced when an earthquake ruptures all the way to the surface. These can stretch for tens or even hundreds of kilometres, though they’re usually only a few metres high and wide.
“In the case of the Tintina fault, the scarps appear as an interesting series of aligned mounds,” Finley said.
By dating these surface features, the researchers found that while the fault has ruptured numerous times over the past 2.6 million years, it hasn’t produced a major earthquake in the last 12,000 years – all while slowly accumulating strain at a rate of 0.2 to 0.8 mm (0.008 to 0.03 in) per year.
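As a rough back-of-the-envelope illustration (not a calculation from the study), those figures imply an accumulated slip deficit of a few metres; a minimal Python sketch:

```python
# Figures quoted above; the conversion to slip deficit is a simple illustration,
# not an analysis from the study.
years_since_last_rupture = 12_000
slip_rate_mm_per_year = (0.2, 0.8)   # low and high ends of the quoted range

low, high = (rate * years_since_last_rupture / 1000 for rate in slip_rate_mm_per_year)
print(f"accumulated slip deficit ≈ {low:.1f} to {high:.1f} metres")
# ≈ 2.4 to 9.6 m of unreleased slip, the scale of displacement released in a large rupture
```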
Fortunately, the region is sparsely populated. But when the fault does rupture, Finley warned that significant landslides, infrastructure damage, and impacts to nearby communities are likely.
“I want to be clear that we don’t have a great sense of how imminent an earthquake is,” he said. “We just know that from our observations, it appears that a long time has elapsed since the last one. But there’s not really a way to tell whether another one is more likely in the coming days and weeks versus thousands of years from now.”
Now that the fault has been confirmed as active, Finley says the next step is to better estimate how often large earthquakes occur there. While this won’t allow researchers to predict exactly when the next rupture will happen, it could provide a more reliable timescale for when one should be expected.
“Earthquakes don’t necessarily occur periodically, but it would give us a better sense of how often we expect large earthquakes,” Finley said. Regardless, when Tintina finally does go, it won’t be a small one.
About our expert
Theron Finley is a surficial geologist at the Yukon Geological Survey. He recently graduated with a PhD from the University of Victoria, Canada, where he conducted research on active faults in western Canada using remote sensing, tectonic geomorphology and paleoseismology.
The human genome is made up of 23 pairs of chromosomes, the biological blueprints that make humans … well, human. But it turns out that some of our DNA — about 8% of it — consists of the remnants of ancient viruses that embedded themselves into our genetic code over the course of human evolution.
These ancient viruses lie in sections of our DNA called transposable elements, or TEs, also known as “jumping genes” due to their ability to copy and paste themselves throughout the genome. TEs, which account for nearly half of our genetic material, were once waved off as “junk” DNA, sequences that appear to have no biological function. Now, a new study offers support for the hypothesis that these ancient viral remnants play a key role in the early stages of human development and may have been implicated in our evolution.
By sequencing TEs, an international team of researchers identified hidden patterns that could be crucial for gene regulation, the process of turning genes on and off. The findings were published July 18 in the journal Science Advances.
“Our genome was sequenced long ago, but the function of many of its parts remain unknown,” study coauthor Dr. Fumitaka Inoue, an associate professor in functional genomics at Kyoto University in Japan, said in a statement. “Transposable elements are thought to play important roles in genome evolution, and their significance is expected to become clearer as research continues to advance.”
There are many benefits to studying how TEs activate gene expression. It could help scientists understand the role that these sequences play in human evolution, reveal possible links between TEs and human diseases, or teach researchers how to target functional TEs in gene therapy, said lead researcher Dr. Xun Chen, a computational biologist and principal investigator at the Shanghai Institute of Immunity and Infection of the Chinese Academy of Sciences.
With more research, “we hope to uncover how TEs, particularly ERVs (endogenous retroviruses, or ancient viral DNA), make us human,” Chen added in an email.
When our primate ancestors were infected with viruses, sequences of viral genetic information would replicate and insert themselves in various locations in the host’s chromosomes.
“Ancient viruses are effective in invading our ancestral genomes, and their remnants become a big part of our genome. Our genome has developed numerous mechanisms to control these ancient viruses, and to eliminate their potential detrimental effects,” said Dr. Lin He, a molecular biologist and the Thomas and Stacey Siebel Distinguished Chair professor in stem cell research at the University of California, Berkeley, in an email.
For the most part, these ancient viruses are inactive and are not a cause of concern, but in recent years, research has shown that some of the transposable elements may play important roles in human diseases. A July 2024 study explored the possibility of silencing certain TEs to make cancer treatment more effective.
“Over the course of evolution, some viruses are degenerated or eliminated, some are largely repressed in expression in normal development and physiology, and some are domesticated to serve the human genome,” said He, who was not involved with the new study. “While perceived as solely harmful, some ancient viruses can become part of us, providing raw materials for genome innovation.”
But because of their repetitive nature, transposable elements are notoriously difficult to study and organize. While TE sequences are categorized into families and subfamilies based on their function and similarity, many have been poorly documented and classified, “which could significantly impact their evolutionary and functional analyses,” Chen said.
Ancient viral impact on human development and evolution
The new study focused on a group of TE sequences called MER11 found within primate genomes. By using a new classification system as well as testing the DNA’s gene activity, researchers identified four previously undiscovered subfamilies.
The most recently integrated sequence, named MER11_G4, was found to have a strong ability to activate gene expression in human stem cells and early-stage neural cells. The finding indicates that this TE subfamily plays a role in early human development and can “dramatically influence how genes respond to developmental signals or environmental cues,” according to a statement from Kyoto University.
The research also suggests that viral TEs had a part in shaping human evolution. By tracing the way the DNA has changed over time, the researchers found that the subfamily had evolved differently within the genomes of different animals, contributing to the biological evolution that resulted in humans, chimpanzees and macaques.
“To understand the evolution of our genome is one way to understand what makes humans unique,” said He. “It will empower us with tools to understand human biology, human genetic diseases, and human evolution.”
Exactly how these TEs were implicated in the evolutionary process is still unclear, Chen said. It is also possible that other TEs that have yet to be identified played distinct roles in the evolutionary process of primates, he added.
“The study offers new insights and potential leverage points for understanding the role of TEs in shaping the evolution of our genomes,” said Dr. Steve Hoffmann, a computational biologist at the Leibniz Institute on Aging in Jena, Germany, who was not involved with the study. The research also “underscores how much more there is to learn from a type of DNA once slandered as a molecular freeloader,” he added in an email.
Hoffmann was the lead researcher of a scientific paper that first documented the nearly complete genome map of the Greenland shark, the longest-living vertebrate in the world, which can live to about 400 years old. The shark’s genome was made up of more than 70% jumping genes, while the human genome is composed of less than 50%. While primate genomes are different from those of a shark, “the study provides further evidence for the potential impact of TEs on genome regulation” and “is a message with relevance for all genome researchers,” Hoffmann said.
By investigating how genomes have evolved, researchers can determine which DNA sequences have remained the same, which have been lost in time and which have emerged most recently.
“Taking these sequences into account is often critical to understanding, e.g., why humans develop diseases that certain animals don’t,” Hoffmann said. “Ultimately, a deeper understanding of genome regulation can aid in the discovery of novel therapies and interventions.”
Taylor Nicioli is a freelance journalist based in New York.