SYDNEY, July 17 (Xinhua) — Australian central bearded dragons that run the fastest are more likely to die in the wild than their slower peers, a study using wearable devices has revealed.
Researchers tracked these lizards in their natural habitats for a year using miniature fitness trackers equipped with accelerometers and temperature sensors, according to a statement on Thursday from the University of Melbourne, which led the study.
The results highlight that understanding real-world behaviors and environments is crucial for predicting how cold-blooded animals like reptiles, amphibians, fish, and invertebrates will cope with climate change.
Bearded dragons adjust their behavior with the seasons, moving between sun and shade to keep their body temperature optimal for key functions, said the study published in the Journal of Animal Ecology under the British Ecological Society.
The study unexpectedly found that dragons with higher running speeds faced greater mortality, likely from increased predation and mating activity, while males had higher survival rates than females.
“Speedy lizards are engaging in riskier behaviors, such as moving around more openly and frequently, making them vulnerable to predators like birds and cats,” said University of Melbourne Research Fellow Kristoffer Wild, the study’s lead author.
The study challenges the idea that speed always benefits survival, revealing that real-world survival relies on complex interactions between an animal’s physiology, behavior, predation risk, and environment, factors often missed in laboratory studies, Wild said. ■
Judd et al. 2024, Science; van der Meer et al. 2025, Earth and Planetary Science Letters
Two new studies offer the most detailed glimpse yet of how Earth’s climate and sea levels have changed during the Phanerozoic — the latest geologic eon covering the time period from 538.8 million years ago to the present.
The first curve reveals that Earth’s temperature has varied more than previously thought over much of the Phanerozoic eon, and also confirms that Earth’s temperature is strongly correlated with carbon dioxide levels in the atmosphere.
The team from Arizona compiled more than 150,000 published data points, while their colleagues at the University of Bristol generated more than 850 model simulations of what Earth’s climate could have looked like at different periods based on continental position and atmospheric composition. Using data assimilation protocols, the researchers combined the different datasets to create an accurate curve of how Earth’s temperature has varied over the past 485 million years.
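In spirit, the assimilation step weights each source of information by how much it can be trusted. The following is a purely illustrative scalar sketch of that idea — the numbers are invented, and the study’s actual protocol is far more sophisticated:

# Purely illustrative sketch of data assimilation: combine a climate-model
# prior with a proxy-based observation, weighting each by its precision
# (inverse variance). All numbers are invented, not from the study.
def assimilate(prior_mean, prior_var, obs_mean, obs_var):
    weight = prior_var / (prior_var + obs_var)   # trust placed in the observation
    mean = prior_mean + weight * (obs_mean - prior_mean)
    var = (1 - weight) * prior_var               # combined estimate is more certain
    return mean, var

# e.g. a model prior of 25 C (std 3) and a proxy estimate of 28 C (std 2) for one time slice
mean, var = assimilate(25.0, 3.0**2, 28.0, 2.0**2)
print(f"assimilated estimate: {mean:.1f} C (variance {var:.2f})")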
The climate curve reveals that temperature varied more widely than previously thought. It starts with a period of major climatic oscillation lasting from approximately 460 to around 420 million years ago, whose coldest pulse was the Hirnantian glaciation. The coldest period in the analyzed timescale is the Karoo glaciation, lasting from approximately 360 to 260 million years ago. But overall, the Phanerozoic was characterized by mild to warm climates, with global mean surface temperatures spanning 52 to 97 degrees Fahrenheit (11 to 36 degrees Celsius). In the warmest periods, global temperatures did not drop below 77 degrees Fahrenheit (25 degrees Celsius). In the last 60 million years, after a peak during the “Cretaceous Hothouse,” Earth started to cool down. The global average temperature today is about 59 degrees Fahrenheit (15 degrees Celsius). The authors also note that the periods of extreme heat were most often linked to elevated levels of the greenhouse gas carbon dioxide in the atmosphere.
The second curve shows how sea levels correlate both with tectonic activity – the closing or opening of oceanic basins and the shifting of continents – and with climate, which determines how much water is trapped in ice caps and glaciers.
“Plate tectonics determines the depth of the oceans. If the ‘bathtub’ becomes shallower, then the water level will rise. Ice caps on continents withhold water from the ocean, but when the ice melts, the ‘bath water level’ will rise,” explains study lead author Dr. Douwe van der Meer, guest researcher at Utrecht University.
To assess sea level changes, the scientists looked at the prevailing sediment type deposited at the time. Claystone typically forms in deeper marine settings, while sandstone is deposited in shallow basins. This preliminary curve was then combined with data derived from fossils and paleogeographic simulations, visualizing the distribution of land and sea during different geological periods.
The scientists were also able to estimate the location and volume of continental ice caps based on Earth’s changing climate over time and the position of the continents in relation to the poles.
Sea levels were relatively low during the first 400 million years, reflecting the cooler climate and low tectonic activity. During the Carboniferous (358-298 million years ago) there were very large sea level variations due to a large ice cap covering a large landmass in the southern hemisphere — called Gondwana by geologists.
During the Cretaceous (145-66 million years ago) the supercontinent of Pangaea was breaking up and the hothouse climate caused the poles to be ice free. These two effects resulted in global sea levels being more than 200 meters higher than they are at present.
In the last 60 million years Earth started to cool down, and around 30 million years ago the first ice sheets started to form at the poles. In the past 2 million years, during the major ice ages, sea levels dropped by up to 100 meters.
The climate study, “A 485-million-year history of Earth’s surface temperature,” was published in the journal Science.
The sea level study, “Phanerozoic orbital-scale glacio-eustatic variability,” was published in the journal Earth and Planetary Science Letters.
Additional material and interviews provided by Utrecht University.
If you’re tired of long, boring workouts and want a different way to shred fat while enhancing power and conditioning, it’s time to grab a kettlebell in one hand and a battle rope in the other. This four-week plan combines two tools that bring the heat: kettlebells and battle ropes. It features four short, intense workouts a week that will leave you sweaty, fitter, and stronger than yesterday.
Here is what this plan is all about. You’ll hit explosive circuits and trisets that push your conditioning and challenge your muscles without beating up your joints. It is a low-impact, high-intensity program designed for individuals who want to stay athletic, maintain a lean physique, and move with ease. Whether you’re training at home or the gym, you only need a couple of kettlebells, a battle rope, and about 30 minutes to start torching fat.
So, if you’re ready to trade traditional cardio for something that works and feels much more fun, this battle rope and kettlebell plan is your new go-to. Let’s get to work.
Why This Battle Rope and Kettlebell Workout Program Works
This four-week program is all about the awesomeness of kettlebells and battle ropes, two tools that deliver enhanced athletic performance and fat loss without trashing your joints. When you imagine high-intensity conditioning, it’s not much of a stretch to think of running sprints and box jumps. But there is none of this here because these two tools deliver low-impact, high-intensity training done right, and here’s why:
Kettlebells build real-world strength, not just the kind that looks good in the mirror. They allow you to carry groceries in one trip, get off the ground quickly, and enhance your deadlifting prowess.
Battle ropes spike your heart rate, allowing you to feel the burn in your lungs without the pounding of traditional cardio. You’ll also build shoulder stability, muscular endurance, and grit in equal measure.
The 4-Week Battle Rope and Kettlebell Workout Plan Overview
This plan is designed to yield maximum return in the shortest time possible. With four 30-minute workouts a week, you’ll hit the sweet spot between intensity and recovery. Each workout combines a targeted blend of kettlebell exercises and battle rope drills to build power and conditioning fast.
You’ll rotate through MetCon and power circuits, as well as full-body shred sessions, each one leaving you better than before. And because it’s only four sessions a week, you’ll have time to recover and be ready to go again.
Workout Format
Duration: 30 minutes.
Rest periods: 60 seconds between exercises, sets, and circuits unless otherwise specified.
Day 1: MetCon Blast (Battle Rope Intervals + Kettlebell Finisher)
Focus: Conditioning, endurance, and calorie burn.
Day 2: Upper Body Strength (Kettlebell Focus)
Focus: Strength, power, shoulder, and core stability.
Day 3: Rest or Active Recovery
Day 4: Lower Body & Core Circuit
Focus: Lower-body and rotational core strength.
Day 5: Rest or Active Recovery
Day 6: Total Body Shred (Kettlebell + Rope Hybrid Circuit)
Focus: Metabolic conditioning, total-body endurance, and mental toughness.
Day 7: Rest and Recharge (Hydrate, move, stretch, and get ready to do it all again)
Battle Rope and Kettlebell Workouts
Each workout will maximize intensity within 30 minutes, utilizing effective exercises with minimal rest periods. You’ll rotate between strength-based circuits, rope-focused intervals, and hybrid sessions that accelerate your results.
Day 1: MetCon Blast
Format: Five battle rope intervals and one kettlebell triset finisher.
A mysterious black hole spotted between two galaxies that are crashing into each other is challenging existing theories on how these powerful cosmic objects are formed.
Researchers behind the study were surprised as black holes are typically found at the centre of galaxies, not floating between them.
The discovery was made using Nasa’s James Webb Space Telescope (JWST), which captured images of two distant galaxies merging in a collision.
Released on Tuesday, the image shows the black hole appearing as a bright glow between the galaxies.
“Finding a black hole that’s not in the nucleus of a massive galaxy is in itself unusual, but what’s even more unusual is the story of how it may have gotten there,” said Dr Pieter van Dokkum, professor of astronomy and physics at Yale University and lead author of the study.
“It likely didn’t just arrive there, but instead it formed there, and pretty recently.
“In other words, we think we’re witnessing the birth of a supermassive black hole, something that has never been seen before.”
Scientists have been studying black holes and how they form for decades as they remain one of the most mysterious objects in the universe and are so powerful that not even light can escape them.
In this latest discovery, researchers believe that the black hole was formed without the usual step of a dying star collapsing.
There are some leading theories on how supermassive black holes found in the centre of galaxies are formed.
One says that they begin as the remnants of massive stars: when a star dies, it explodes and collapses under its own gravity to form a black hole.
The newly formed small black holes then feed on gas and merge with others to become supermassive, a process that can take billions of years.
But this theory does not explain how some black holes appear fully formed in the early universe.
This led scientists to consider the ‘direct collapse’ theory, a rare situation where a dense cloud of gas collapses directly into a black hole, skipping the usual step of a dying star.
This latest discovery by the JWST could be the strongest evidence yet of that process.
“By looking at the data from the Infinity Galaxy, we think we’ve pieced together a story of how this could have happened here,” said Prof van Dokkum.
“Two disk galaxies collide, forming the ring structures of stars that we see. During the collision, the gas within these two galaxies shocks and compresses.
“This compression might just be enough to form a dense knot, which then collapsed into a black hole.
“We can’t say definitively that we have found a direct collapse black hole. But we can say that these new data strengthen the case that we’re seeing a newborn black hole, while eliminating some of the competing explanations.”
The findings are part of a growing list of discoveries made by the telescope since its launch on Christmas Day in 2021.
It is a joint project by Nasa and the European and Canadian space agencies to study the early universe and learn more about the Solar System.
The telescope has already captured detailed images of galaxies forming less than 400 million years after the Big Bang.
It has also provided new clues on the atmospheres of exoplanets, planets that orbit stars outside the Solar System.
Other Nasa telescopes have also made breakthrough discoveries. The TESS space telescope, for example, observed a “super-Earth planet” that has been flashing a repeated signal from 154 light-years away.
The planet, named TOI-1846 b, is almost twice the size of Earth.
It orbits a red dwarf — a type of small, cool star — that is about 40 per cent smaller in size and mass than the Sun.
Scientists are hoping to use the JWST to study the planet’s atmosphere, as its unique instruments would be capable of detecting any possible signs of water vapour, methane, carbon dioxide or other gases.
Back in high school chemistry, I remember waiting with my bench partner for crystals to form on our stick in the cup of blue solution. Other groups around us jumped with joy when their crystals formed, but my group just waited. When the bell rang, everyone left but me. My teacher came over, picked up an unopened bag on the counter and told me, “Crystals can’t grow if the salt is not in the solution.”
To me, this was how science worked: What you expect to happen is clear and concrete. And if it doesn’t happen, you’ve done something wrong.
If only it were that simple.
It took me many years to realize that science is not just some series of activities where you know what will happen at the end. Instead, science is about discovering and generating new knowledge.
Now, I’m a psychologist studying how scientists do science. How do new methods and tools get adopted? How do changes happen in scientific fields, and what hinders changes in the way we do science?
One practice that has fascinated me for many years is replication research, where a research group tries to redo a previous study. Like with the crystals, getting the same result from different teams doesn’t always happen, and when you’re on the team whose crystals don’t grow, you don’t know if the study didn’t work because the theory is wrong, or whether you forgot to put the salt in the solution.
The replication crisis
A May 2025 executive order by President Donald Trump emphasized the “reproducibility crisis” in science. While replicability and reproducibility may sound similar, they’re distinct.
Reproducibility is the ability to use the same data and methods from a study and reproduce the result. In my editorial role at the journal Psychological Science, I conduct computational reproducibility checks where we take the reported data and check that all the results in the paper can be reproduced independently.
But we’re not running the study over again, or collecting new data. While reproducibility is important, research that is incorrect, fallible and sometimes harmful can still be reproducible.
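In practice, a computational reproducibility check amounts to something like the following minimal sketch — load the shared data, rerun the reported calculation, and compare against the printed value. The file name, column, and numbers are hypothetical, and real editorial pipelines are more involved:

import pandas as pd

# Hypothetical reproducibility check: recompute a statistic reported in a
# paper from the authors' shared data and compare with the printed value.
REPORTED_MEAN = 0.42   # value printed in the paper (hypothetical)
TOLERANCE = 0.001      # allow for rounding in the printed value

data = pd.read_csv("study_data.csv")        # authors' shared dataset (hypothetical file)
recomputed = data["response_time"].mean()   # rerun the reported analysis step

if abs(recomputed - REPORTED_MEAN) <= TOLERANCE:
    print(f"Result reproduces: {recomputed:.3f}")
else:
    print(f"Mismatch: recomputed {recomputed:.3f} vs reported {REPORTED_MEAN}")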
By contrast, replication is when an independent team repeats the same process, including collecting new data, to see if they get the same results. When research replicates, the team can be more confident that the results are not a fluke or an error.
Reproducibility and replicability are both important, but have key differences. Open Economics Guide, CC BY
The “replication crisis,” a term coined in psychology in the early 2010s, has spread to many fields, including biology, economics, medicine and computer science. Failures to replicate high-profile studies concern many scientists in these fields.
Why replicate?
Replicability is a core scientific value: Researchers want to be able to find the same result again and again. Many important findings are not published until they are independently replicated.
In research, chance findings can occur. Imagine if one person flipped a coin 10 times and got two heads, then told the world that “coins have a 20% chance of coming up heads.” Even though this is an unlikely outcome – about 4% – it’s possible.
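For the curious, the arithmetic behind that figure: the probability of exactly two heads in 10 fair flips is

P(X = 2) = \binom{10}{2}\left(\tfrac{1}{2}\right)^{10} = \tfrac{45}{1024} \approx 4.4\%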
Replications can correct these chance outcomes, as well as scientific errors, to ensure science is self-correcting.
For example, in the search for the Higgs boson, two experiments at CERN, the European Organization for Nuclear Research — ATLAS and CMS — independently replicated the detection of a particle with the same large mass, leading to the 2013 Nobel Prize in physics.
The ATLAS experiment at the Large Hadron Collider at CERN is one of two that led to the discovery of the Higgs boson. CERN, CC BY
The initial measurements from the two experiments actually estimated slightly different masses for the particle. So while the two teams didn’t find identical results, they evaluated them and determined they were close enough. This variability is a natural part of the scientific process. Just because results are not identical does not mean they are not reliable.
Research centers like CERN have replication built into their process, but this is not feasible for all research. For projects that are relatively low cost, the original team will often replicate their work prior to publication – but doing so does not guarantee that an independent team could get the same results.
Because the results on vaccine efficacy were so clear, replication wasn’t necessary and would have slowed the process of getting the vaccine to people. XKCD, CC BY-NC
When projects are costly, urgent or time-specific, independently replicating them prior to disseminating results is often not feasible. Remember when people across the country were waiting for a COVID-19 vaccine?
The initial Pfizer-BioNTech COVID-19 vaccine took 13 months from the start of the trial to authorization from the Food and Drug Administration. The results of the initial study were so clear and convincing that a replication would have unnecessarily delayed getting the vaccine out to the public, slowing efforts against the spread of the disease.
Since not every study can be replicated prior to publication, it’s important to conduct replications after studies are published. Replications help scientists understand how well research processes are working, identify errors and self-correct. So what’s the process of conducting a replication?
The replication process
Researchers could independently replicate the work of other teams, like at CERN. And that does happen. But when there are only two studies – the original and the replication – it’s hard to know what to do when they disagree. For that reason, large multigroup teams often conduct replications where they are all replicating the same study.
Alternatively, if the purpose is to estimate the replicability of a body of research – for example, cancer biology – each team might replicate a different study, and the focus is on the percentage of studies that replicate across many studies.
These large-scale replication projects have arisen around the world and include ManyLabs, ManyBabies, the Psychological Science Accelerator and others.
Replicators start by learning as much as possible about how the original study was conducted. They can collect details about the study from reading the published paper, discussing the work with its original authors and consulting online materials.
The replicators want to know how the participants were recruited, how the data was collected and with what tools, and how the data was analyzed.
But sometimes, studies may leave out important details, like the questions participants were asked or the brand of equipment used. Replicators have to make these difficult decisions themselves, which can affect the outcome.
Replicators also often explicitly change details of the study. For example, many replication studies are conducted with larger samples – more participants – than the original study, to ensure the results are reliable.
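The reasoning behind those larger samples is standard power analysis. As an illustrative sketch (the effect sizes below are invented, not drawn from any particular study), a replication planning to detect a more modest effect than the one originally reported needs considerably more participants:

from statsmodels.stats.power import TTestIndPower

# Illustrative power analysis: participants needed per group for a
# two-sample t-test at 80% power and alpha = 0.05 (hypothetical effect sizes).
analysis = TTestIndPower()
for d in (0.5, 0.3):  # original's reported effect vs. a more cautious estimate
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size d = {d}: about {n:.0f} participants per group")

With these numbers, the required sample nearly triples (about 64 versus 175 per group).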
Registration and publication
Sadly, replication research is hard to publish: Only 3% of papers in psychology, less than 1% in education and 1.2% in marketing are replications.
If the original study replicates, journals may reject the paper because there is no “new insight.” If it doesn’t replicate, journals may reject the paper because they assume the replicators made a mistake – remember the salt crystals.
Because of these issues, replicators often use registration to strengthen their claims. A preregistration is a public document describing the plan for the study. It is time-stamped to before the study is conducted.
This type of document improves transparency by making changes in the plan detectable to reviewers. Registered reports take this a step further, where the research plan is subject to peer review before conducting the study.
If the journal approves the registration, it commits to publishing the study regardless of the results. Registered reports are ideal for replication research because the reviewers don’t know the results when the journal commits to publishing the paper, so whether the study replicates or not won’t affect whether it gets published.
About 58% of registered reports in psychology are replication studies.
Replication research often uses the highest standards of research practice: large samples and registration. While not all replication research is required to use these practices, those that do contribute greatly to our confidence in scientific results.
Replication research is a useful thermometer to understand if scientific processes are working as intended. Active discussion of the replicability crisis, in both scientific and political spaces, suggests to many researchers that there is room for growth. While no field would expect a replication rate of 100%, new processes among scientists aim to improve the rates from those in the past.
LITLINGTON, England, July 17, 2025 /PRNewswire/ — A recent research article backed by Cheyney Design and Development, a leader in X-ray inspection and imaging technologies, presents a revolutionary perspective on the nature of light. The article, published in Annals of Physics (an Elsevier journal) by Dr. Dhiraj Sinha, a faculty member at Plaksha University, shows that Einstein’s theory of photons has its origins in Maxwell’s electromagnetic fields. It dismantles a century-old scientific belief that photons are not physically linked to the electromagnetic field theory pioneered by the great Scottish physicist James Clerk Maxwell. The research is based on a prior discovery on electromagnetic radiation published in Physical Review Letters, which was funded by Cheyney. It offered a unified theoretical framework on radiation from radio to optical frequencies, using the argument that radiation is generated due to the broken symmetry of the electromagnetic field. It serves as a testimony to the company’s commitment to nurturing transformational scientific discoveries.
Electromagnetic field excitation of electrons (Image: Google Gemini)
The physical nature of light is an intriguing scientific mystery as it behaves like a wave in free space and as a particle when it interacts with matter. The theoretical work on light as an electromagnetic wave by Maxwell in 1865 was empirically validated by Heinrich Hertz in 1887. However, the broad scientific consensus on light was shattered within a decade. Experiments on the photoelectric effect where electrons are generated as light strikes a metal contradicted Maxwell’s theory on light. Albert Einstein in 1905 presented the heuristic argument that light can be considered to be made of particles or photons with energy proportional to its frequency. It could explain the empirical observation that energy of electrons is linearly dependent on the frequency of light in the photoelectric effect. The idea found acceptance and the dual nature of light forms the current basis of our phenomenological interpretation of light.
Dr. Sinha has attempted to transform the century-old belief by showing that Maxwell’s electromagnetic field theory can explain the interaction between light and electrons. The recent article highlights the role of the time-varying magnetic field of light, which generates an electric potential in space. Dr. Sinha argues that an electron is energised by the electric potential of light, which is mathematically defined as dφ/dt, where φ is the magnetic flux of radiation and t is time. It implies that the net energy transfer to an electron of charge e is W = e dφ/dt. Transforming the expression of energy to the frequency domain, or phasor representation, gives the energy of an electron as eφω, where ω is the angular frequency of light. Dr. Sinha argues that this is similar to Einstein’s expression for the energy of a photon, ħω, where ħ is the reduced Planck constant. Thus, light energises electrons according to the Maxwell-Faraday equation of classical electromagnetism. The theoretical framework finds support from experimental observations on magnetic flux quantisation in superconducting loops and two-dimensional electron gas systems. The fact that the quantum nature of light has origins in Maxwell’s electromagnetic fields is revolutionary.
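Written out (in LaTeX notation), the comparison being drawn is:

W = e\,\frac{d\varphi}{dt} \;\longrightarrow\; E = e\varphi\omega \quad (\text{phasor form, } \varphi \propto e^{i\omega t}), \qquad E_{\text{photon}} = \hbar\omega

Equating the two expressions would give \varphi = \hbar/e \approx 6.6\times10^{-16}\,\mathrm{Wb}, a flux on the same scale as the superconducting flux quantum h/2e \approx 2.1\times10^{-15}\,\mathrm{Wb}. That last identification is an illustrative reading of the flux-quantisation remark above, not a result stated in the paper.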
A number of physicists have come out in support of Dr. Sinha. Richard Muller, Professor of Physics at the University of California, Berkeley, and Faculty Senior Scientist at Lawrence Berkeley Laboratory, commented, “The ideas are intriguing and they address the most fundamental of the unexplained issues of quantum physics including the particle/wave duality and the meaning of measurement.” Jorge Hirsch, professor of physics at the University of California, San Diego, wrote a letter of support to the editorial board members. Steven Verrall, former faculty member at the University of Wisconsin-La Crosse, said, “Dr. Sinha provides a new semiclassical approach to modelling quantum systems. I also think that his unique approach may ultimately add valuable insights to the continued development of semiclassical effective field theories in low energy physics.” Lawrence Horwitz, professor emeritus at Tel Aviv University, noted, “This article is indeed a valuable contribution to the theory of photons and electrons.”
Dr. Sinha’s discovery offers a novel theoretical pathway towards developing integrated radio and photonic devices by seamlessly merging the principles of classical electromagnetism into modern photonic devices. It will have far-reaching consequences on technologies like solar cells, lasers and light emitting diodes which exclusively rely on the principles of quantum mechanics. It creates a completely new orbit and transformative pathway towards new radio and photonics technologies.
Dr. Dhiraj Sinha added, “The work started during my doctoral years at the University of Cambridge and early support from Cheyney was critical. It took a transformational turn during my postdoctoral work at Massachusetts Institute of Technology. The empirical results obtained through extensive experiments across a broad spectrum of radio and optical frequencies led to the discovery of the missing theoretical link between the ideas of Einstein and Maxwell.”
Additional Information
Sinha, D. Electrodynamic excitation of electrons. Annals of Physics, 473, 169893 (2025).
Sinha, D. & Amaratunga, G. A. Electromagnetic radiation under explicit symmetry breaking. Physical Review Letters, 114, 147701 (2015).
About Cheyney Design & Development Ltd
Cheyney Design & Development Ltd, Litlington, UK, founded by Richard Parmee, is at the forefront of innovations in X-ray inspection technology. Its patented, cutting-edge technology and advanced stochastic algorithms position it as a technical leader in the X-ray inspection arena. Cheyney is dedicated to supporting early-stage innovations with transformative potential in science and engineering.
A new study from the Hebrew University of Jerusalem reveals that Neanderthals living in two nearby caves in northern Israel butchered their food in noticeably different ways. Despite using the same tools and hunting the same prey, groups in Amud and Kebara caves left behind distinct patterns of cut-marks on animal bones, suggesting that food preparation techniques may have been culturally specific and passed down through generations. These differences cannot be explained by tool type, skill, or available resources, and may reflect practices such as drying or aging meat before butchering. The findings provide rare insight into the social and cultural complexity of Neanderthal communities.
Neanderthals lived in the nearby caves of Amud and Kebara between 50,000 and 60,000 years ago, using the same tools and hunting the same prey. But research led by Anaëlle Jallon from the Institute of Archaeology (supervised by Rivka Rabinovich and Erella Hovers), together with colleagues Lucille Crete and Silvia Bello from the Natural History Museum in London, studied the cut-marks on the remains of their prey and found that the two groups seem to have butchered their food in visibly different ways — differences that can’t be explained by the skill of the butchers or the resources or tools used at each site. These differences could represent distinct cultural food practices, such as drying meat before butchering it.
Did Neanderthals have family recipes? A new study suggests that two groups of Neanderthals living in the caves of Amud and Kebara in northern Israel butchered their food in strikingly different ways, despite living close by and using similar tools and resources. Scientists think they might have been passing down different food preparation practices.
“The subtle differences in cut-mark patterns between Amud and Kebara may reflect local traditions of animal carcass processing,” said Anaëlle Jallon, PhD candidate at the Hebrew University of Jerusalem and lead author of the article in Frontiers in Environmental Archaeology. “Even though Neanderthals at these two sites shared similar living conditions and faced comparable challenges, they seem to have developed distinct butchery strategies, possibly passed down through social learning and cultural traditions.
“These two sites give us a unique opportunity to explore whether Neanderthal butchery techniques were standardized,” explained Jallon. “If butchery techniques varied between sites or time periods, this would imply that factors such as cultural traditions, cooking preferences, or social organization influenced even subsistence-related activities such as butchering.”
Written in the bones
Amud and Kebara are close to each other: only 70 kilometers apart. Neanderthals occupied both caves during the winters between 50,000 and 60,000 years ago, leaving behind burials, stone tools, hearths, and food remains. Both groups used the same flint tools and relied on the same prey for their diet — mostly gazelles and fallow deer. But there are some subtle differences between the two. The Neanderthals living at Kebara seem to have hunted more large prey than those at Amud, and they also seem to have carried more large kills home to butcher them in the cave rather than at the site of the kill.
At Amud, 40% of the animal bones are burned and most are fragmented. This could be caused by deliberate actions like cooking or by later accidental damage. At Kebara, only 9% of the bones are burned; they are less fragmented and are also thought to have been cooked. The bones at Amud also seem to have undergone less carnivore damage than those found at Kebara.
To investigate the differences between food preparation at Kebara and at Amud, the scientists selected a sample of cut-marked bones from contemporaneous layers at the two sites. They examined these macroscopically and microscopically, recording the cut-marks’ different characteristics. Similar patterns of cut-marks might suggest there were no differences in butchery practices, while different patterns might indicate distinct cultural traditions.
The cut-marks were clear and intact, largely unaffected by later damage caused by carnivores or the drying out of the bones. The profiles, angles, and surface widths of these cuts were similar, likely due to the two groups’ similar toolkits. However, the cut-marks found at Amud were more densely packed and less linear in shape than those at Kebara.
Cooking from scratch
The researchers considered several possible explanations for this pattern. It could have been driven by the demands of butchering different prey species or different types of bones — most of the bones at Amud, but not Kebara, are long bones — but when they only looked at the long bones of small ungulates found at both Amud and Kebara, the same differences showed up in the data. Experimental archaeology also suggests this pattern couldn’t be accounted for by less skilled butchers or by butchering more intensively to get as much food as possible. The different patterns of cut-marks are best explained by deliberate butchery choices made by each group.
One possible explanation is that the Neanderthals at Amud were treating meat differently before butchering it: possibly drying their meat or letting it decompose, like modern-day butchers hanging meat before cooking. Decaying meat is harder to process, which would account for the greater intensity and less linear form of the cut-marks. A second possibility is that different group organization — for example, the number of butchers who worked on a given kill — in the two communities of Neanderthals played a role.
However, more research will be needed to investigate these possibilities.
“There are some limitations to consider,” said Jallon. “The bone fragments are sometimes too small to provide a complete picture of the butchery marks left on the carcass. While we have made efforts to correct for biases caused by fragmentation, this may limit our ability to fully interpret the data. Future studies, including more experimental work and comparative analyses, will be crucial for addressing these uncertainties — and maybe one day reconstructing Neanderthals’ recipes.”
A newly developed bionic knee could help people with above-the-knee amputations walk and climb with greater ease than they could with a traditional prosthesis.
The new prosthesis, described July 10 in the journal Science, connects to a user’s leg via a titanium rod attached to their femur and permanently implanted electrodes in their leg muscles. In addition to improving movement capabilities, the prosthesis helped users feel a greater sense of ownership and agency over the prosthetic limb, the researchers said.
“A prosthesis that’s tissue-integrated — anchored to the bone and directly controlled by the nervous system — is not merely a lifeless, separate device, but rather a system that is carefully integrated into human physiology, offering a greater level of prosthetic embodiment,” study co-author Hugh Herr, a professor of media arts and sciences at MIT who develops prostheses that emulate natural limbs and is himself a double amputee, said in a statement. “It’s not simply a tool that the human employs, but rather an integral part of self.”
Whereas conventional prosthetic legs attach to the user’s residual limb with a socket, the new bionic prosthesis interfaces directly with muscle and bones. Doing so allows it to take advantage of a surgical approach to amputations recently developed by Herr and colleagues. In this new approach, surgeons reconnect pairs of muscles that stretch and contract in opposition to each other, such as the residual hamstring and quadriceps muscles, so that they can still communicate with each other. In conventional above-the-knee amputations, these muscles are not reconnected, which can make it more difficult to control a prosthesis.
The new study also introduced a technique to integrate the system into the residual femur at the amputation site. This technique allows for better stability and load bearing than a traditional prosthesis.
“All parts work together to better get information into and out of the body and better interface mechanically with the device,” study co-author Tony Shu, a biomechatronics researcher who performed the research while a graduate student at MIT, said in the statement. “We’re directly loading the skeleton, which is the part of the body that’s supposed to be loaded, as opposed to using sockets, which is uncomfortable and can lead to frequent skin infections.”
In the new study, two people who had previously received traditional above-the-knee amputations underwent surgery to receive both the muscle-connecting procedure and the bone-integrated implant. The study compared these people with seven others who’d had the muscle surgery but not the bone implant and with eight people who’d had neither. All of the study participants used the same powered knee prosthesis, albeit connected in different ways, for tasks including climbing stairs, stepping over obstacles, and bending and straightening the bionic knee.
The people who received the combined system performed better in almost all of the tasks than those who received only the muscle-connecting surgery, the team found. They also performed much better than the people who used traditional prostheses.
The two participants who received both the muscle surgery and the implant also showed greater increases in their sense of ownership, or the feeling that the prosthetic limb was part of their body, and agency, or the ability to intentionally control the device, after completing the tasks in the study.
“No matter how sophisticated you make the AI systems of a robotic prosthesis, it’s still going to feel like a tool to the user, like an external device,” Herr said. “But with this tissue-integrated approach, when you ask the human user what is their body, the more it’s integrated, the more they’re going to say the prosthesis is actually part of self.”
The prosthesis is not yet commercially available. Clinical trials for Food and Drug Administration approval could take about five years, Herr said in the statement.
The largest piece of Mars ever found on Earth was sold for just over $5 million at an auction of rare geological and archaeological objects in New York on Wednesday. But a rare young dinosaur skeleton stole the show when it fetched more than $30 million in a bidding frenzy.
The 54-pound (25-kilogram) rock named NWA 16788 was discovered in the Sahara Desert in Niger by a meteorite hunter in November 2023, after having been blown off the surface of Mars by a massive asteroid strike and traveling 140 million miles (225 million kilometers) to Earth, according to Sotheby’s. The estimated sale price before the auction was $2 million to $4 million.
The identity of the buyer was not immediately disclosed. The final bid was $4.3 million. Adding various fees and costs, the official sale price was about $5.3 million, making it the most valuable meteorite ever sold at auction, Sotheby’s said.
The live bidding was slow, with the auctioneer trying to coax more offers and decreasing the minimum bid increments.
The dinosaur skeleton, on the other hand, sparked a bidding war among six bidders over six minutes. With a pre-auction estimate of $4 million to $6 million, it is one of only four known Ceratosaurus nasicornis skeletons and the only juvenile skeleton of the species, which resembles the Tyrannosaurus rex but is smaller.
Bidding for the skeleton started with a high advance offer of $6 million, then escalated during the live round with bids rising by $500,000 and later by $1 million, before ending at $26 million.
People applauded after the auctioneer gaveled the bidding closed.
The official sale price was $30.5 million, inclusive of fees and costs. That buyer was also not immediately disclosed, but the auction house stated that the buyer plans to loan the skeleton to an institution. It was the third-highest amount paid for a dinosaur at auction. A Stegosaurus skeleton called “Apex” holds the record after it was sold for $44.6 million last year at Sotheby’s.
Parts of the skeleton were discovered in 1996 near Laramie, Wyoming, at the Bone Cabin Quarry, a gold mine for dinosaur bones. Specialists assembled nearly 140 fossil bones, along with some sculpted materials, to recreate the skeleton and mounted it so it’s ready to exhibit, Sotheby’s says. It was acquired last year by Fossilogic, a Utah-based company specializing in fossil preparation and mounting.
It’s more than 6 feet (2 meters) tall and nearly 11 feet (3 meters) long, and is believed to be from the late Jurassic period, about 150 million years ago. Ceratosaurus dinosaurs could grow up to 25 feet (7.6 meters) long, while the T. rex could reach 40 feet (12 meters).
The bidding for the Mars meteorite began with two advance offers of $1.9 million and $2 million. The live bidding proceeded slowly, with increases of $200,000 and $300,000, until it reached $4 million, then continued with $100,000 increments until it reached $4.3 million.
The red, brown and gray meteorite is about 70% larger than the next largest piece of Mars found on Earth and represents nearly 7% of all the Martian material currently on this planet, Sotheby’s says. It measures nearly 15 inches by 11 inches by 6 inches (375 millimeters by 279 millimeters by 152 millimeters).
It was also a rare find. There are only 400 Martian meteorites out of the more than 77,000 officially recognized meteorites found on Earth, the auction house says.
“This Martian meteorite is the largest piece of Mars we have ever found by a long shot,” Cassandra Hatton, vice chairman for science and natural history at Sotheby’s, said in an interview before the auction. “So it’s more than double the size of what we previously thought was the largest piece of Mars.”
It’s not clear exactly when the meteorite was blasted off the surface of Mars, but testing showed it probably happened in recent years, Sotheby’s says.
Hatton said a specialized lab examined a small piece of the red planet remnant and confirmed it was from Mars. It was compared with the distinct chemical composition of Martian meteorites identified after the Viking space probe landed on Mars in 1976, she said.
The examination found that it is an “olivine-microgabbroic shergottite,” a type of Martian rock formed from the slow cooling of Martian magma. It has a coarse-grained texture and contains the minerals pyroxene and olivine, Sotheby’s says.
It also has a glassy surface, likely due to the high heat that burned it when it fell through Earth’s atmosphere, Hatton said. “So that was their first clue that this wasn’t just some big rock on the ground,” she said.
Recently, NASA tested a payload adapter at the Marshall Space Flight Center as part of the preparation for the upcoming Artemis IV mission.
What is it?
The massive, dark circular payload adapter was carefully lowered from Test Stand 4697 to Test Stand 4705 for storage, after successfully completing initial structural tests. The next stage is for flight engineers to run quality checks on the adapter before building the final device.
The payload adapter plays an important role in spacecraft launches, as it connects the spacecraft or satellite to a launch vehicle. Without an adapter, the spacecraft and its rocket can’t interface.
Where is it?
The payload adapter was initially tested and is being stored at the Marshall Space Flight Center in Huntsville, Alabama.
The large payload adapter is moved via crane into storage. (Image credit: NASA/Sam Lott)
Why is it amazing?
The payload adapter is just one piece of equipment that is being tested as part of NASA’s planned Artemis IV mission. This crewed lunar mission will focus on the first lunar space station, Gateway, according to NASA. The international hub will allow astronauts to study both the moon and the planets beyond, especially Mars.
To get the astronauts to Gateway, NASA plans to launch the crew using the Orion spacecraft with an upgraded SLS rocket. Before that happens, all launch materials, from boosters to payload adapters, have to be thoroughly tested and cleared for takeoff.
Want to learn more?
You can read more about the upcoming Artemis IV mission and the Gateway hub on the moon.