It’s full Moon week, which always means a flurry of excitement online and across social media, as stargazers and astrophotographers prepare to catch a glimpse of our natural satellite looking big and beautiful.
Between 9–11 July, the 2025 Buck Moon rises in the southeast, poking its head above the horizon around 10pm.
It’s located in the constellation Sagittarius, which never rises very high in the sky for those of us in the Northern Hemisphere.
That means the Buck Moon will be a low, horizon-grazing full Moon and, incidentally, the year's most distant full Moon from the Sun.
If you get your timing right, and you happen to be in a favourable location, you might be able to spot the International Space Station in the sky on the same night that the Buck Moon rises.
Did you know it’s possible to see the International Space Station with the naked eye?
The ISS is certainly visible without the need for binoculars or a telescope, if you know when and where to look.
In fact, the Space Station’s orbit means it passes over about 90% of the world’s population, so it’s likely flown above your head multiple times without you even knowing.
It circles Earth every 90 minutes, giving astronauts on board 16 sunrises and sunsets every day, and giving us down on the ground the chance to see a bright overhead pass.
The Buck Moon rises this week, 9–11 July, and is visible in the southeast from about 10pm, drifting towards the south and then setting in the southwest in the early hours.
This full Moon in July won’t rise very much above the southern horizon, meaning you’ll need a clear, flat horizon to see it.
Get yourself a view unspoiled by tall buildings, trees and other obstructions.
The further south in the Northern Hemisphere you are, however, the easier it will be to see the Buck Moon rise.
And if you do manage to see the Buck Moon, you may also see the International Space Station on the same night.
The International Space Station is visible from Earth because sunlight reflects off it and bounces back down to the ground.
This means that, in order to see the ISS, you need a good, dark sky.
And because we’re largely reliant on sunlight to give us a view of the ISS, it can normally only be seen just after sunset or just before sunrise.
This Buck Moon rises just after sunset, meaning you may be able to catch a glimpse of it and the Space Station in the same evening.
Each June and December, the Sun, the Earth and the orbit of the Space Station are aligned in such a way that the ISS doesn’t pass through Earth’s shadow, according to the European Space Agency.
This means that, around the summer solstice period, the ISS can be seen up to four times a night, depending on your location and the weather.
What’s more, according to Sky and Telescope, the Space Station can be seen multiple times a night around the 4th July holiday, including a week or so either side.
That could be good news for anyone wanting to try and catch a glimpse of the ISS and the Buck Moon in the same night.
You may even be lucky enough to see – or photograph – it passing in front of the Moon, in an event known as a ‘transit’.
You can input your data into the online Transit Finder and it will calculate when the next lunar transit of the Space Station will occur from your location.
Here is a selection of websites that will help you calculate when the International Space Station is next making a pass over your location.
If you do manage to see or photograph the International Space Station, get in touch by emailing contactus@skyatnightmagazine.com
Nasa has suffered a major setback in a plan to stop apocalyptic asteroids from smashing into Earth.
Astronomers believe boulders ejected when a Nasa spacecraft collided with an asteroid almost three years ago “could complicate” future missions.
The spacecraft, known as the Double Asteroid Redirection Test (DART), hit the asteroid Dimorphos on September 26, 2022.
At the time, Nasa said the spacecraft’s kinetic impact with the asteroid altered its orbit, marking humanity’s first time purposely changing the motion of a celestial object and the first full-scale demonstration of asteroid deflection technology.
Picture taken by a satellite after DART made impact with the asteroid in 2022
ASI/NASA
A team of astronomers at the University of Maryland has now discovered that when DART hit the asteroid, the space rocks it ejected carried three times more momentum than the spacecraft itself.
The ejected boulders then created forces in “unexpected directions” that “could complicate future deflection efforts”, according to experts at the university.
The astronomers used images recorded by DART’s companion spacecraft, which separated from DART 15 days before the impact, to help track the boulders in the aftermath of the event.
As a result of the discovery, Tony Farnham, a research scientist in the university’s Department of Astronomy, believes that more factors may need to be considered when planning missions like Nasa’s DART mission in 2022.
Imagery taken from Nasa’s Hubble Space Telescope in 2022 shows debris blasted from the surface of the asteroid around 11 days after impact
NASA/ESA/STScI/Hubble
“Our research shows that while the direct impact of the DART spacecraft caused this change, the boulders ejected gave an additional kick that was almost as big,” Farnham stated on the university’s website.
“That additional factor changes the physics we need to consider when planning these types of missions.”
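For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python. The spacecraft mass and impact speed used below are approximate published figures for DART (roughly 580 kg at about 6.1 km/s), not numbers from the Maryland study, and the factor of three is simply the finding quoted above.

```python
# Rough scale of the DART impact momentum and the reported boulder contribution.
# Spacecraft figures below are approximate published values, used only for illustration.

DART_MASS_KG = 580.0          # approximate mass of DART at impact
IMPACT_SPEED_M_S = 6.1e3      # approximate impact speed, ~6.1 km/s

spacecraft_momentum = DART_MASS_KG * IMPACT_SPEED_M_S   # kg*m/s
boulder_momentum = 3.0 * spacecraft_momentum             # "three times more momentum"

print(f"Spacecraft momentum: {spacecraft_momentum:.2e} kg*m/s")
print(f"Reported boulder ejecta momentum (~3x): {boulder_momentum:.2e} kg*m/s")
```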
In the weeks following the DART mission in 2022, Nasa Administrator, Bill Nelson, said: “All of us have a responsibility to protect our home planet. After all, it’s the only one we have.
“This mission shows that Nasa is trying to be ready for whatever the universe throws at us. Nasa has proven we are serious as a defender of the planet.
“This is a watershed moment for planetary defence and all of humanity, demonstrating commitment from Nasa’s exceptional team and partners from around the world.”
Nasa confirmed in 2022 the Dimorphos asteroid did not pose any hazard to Earth before or after the controlled collision.
Have you ever wondered what kind of video content would most grab the attention of monkeys?
A new study of long-tailed macaques suggests the monkeys seem to like some of the same kind of content that humans do: videos featuring aggression and individuals they know.
“Humans and macaques are both social animals who have a fundamental need to belong,” said Brad Bushman, co-author of the study and professor of communication at The Ohio State University.
“It’s not surprising that they both would be most interested in the video content that may help them navigate relationships in their groups.”
The study was published online recently in the journal Animal Cognition. It was led by Elisabeth H.M. Sterck, professor of animal behaviour and cognition at Utrecht University in The Netherlands.
Researchers showed two-minute videos to 28 macaques that lived at a primate research center in The Netherlands. Each macaque saw multiple videos over time featuring monkeys in their group or strangers. Each individual video showed monkeys in one of four types of activities: conflict, grooming each other, running, or sitting.
The researchers calculated how much time the monkeys spent looking directly at the screen and their reactions while watching.
Findings showed the macaques paid the most attention to videos featuring conflicts between monkeys. Running was the next most popular type of video. Grooming and sitting attracted the least attention.
It is notable that both macaques and humans seem to be attracted to videos featuring similar content, Bushman said.
“We have plenty of research showing the popularity of violent media with humans. Now we have some evidence that other primates might also be attracted to conflict and aggression in videos,” Bushman said.
“From an evolutionary perspective, this makes sense. Both humans and other animals may be hardwired to pay attention to aggression because that is an adaptive response that increases survival,” he added.
The other significant finding of the study was that the macaques watched videos featuring members of their own group more closely than those involving strangers.
“This indicates that gathering social information on group members is more important than getting information about strangers,” Sterck said.
And seeing familiar faces on the screen isn’t just something that’s attractive to monkeys.
“When we as humans watch movies, we like to see actors we know – we like to see the stars playing in big movies more than we do actors who are not familiar to us,” Bushman said.
Findings also showed that low-ranking and less aggressive macaques paid more attention than others to the videos.
“More dominant individuals can be more confident that aggression will not affect them – they don’t have to pay attention to others as much,” Sterck said.
“Lower-ranking individuals can become an aggression victim and that may be why they pay more attention to what others are doing in the videos.”
In addition, high-strung macaques that were more easily stressed paid less attention to group members than those who did not act as stressed.
“We found that the gathering of social information from the videos differed with dominance rank and behavioral tendencies, which may reflect personality,” Sterck said.
The research involved two separate groups of macaques that live at the Biomedical Primate Research Centre in Rijswijk, The Netherlands.
The “stranger” videos that the macaques viewed featured monkeys from a third, out-of-view group.
In each enclosure, there is a corridor where the macaques are accustomed to participating in cognitive tests. There were four compartments where the monkeys could watch videos on a laptop. The subjects entered the corridor of their own volition, and were isolated from other monkeys of their multi-generational group during the two-minute videos.
“The macaques are very visual animals. Their eyesight is similar to that of humans and they are very interested in watching videos,” Sterck said.
The researchers said the findings showed that humans share tendencies with our monkey relatives, including the attraction to videos with conflict.
“Even this brief exposure to aggressive media captured the attention of macaques in the study,” Bushman said. “When you see this in some of our closest primate relatives, it is easy to see why humans are so interested in violent media.”
Other co-authors of the study, all from Utrecht University, were Sophie Kamp, Ive Rouart, Lisette van den Berg, Dian Zijlmans and Tom Roth.
Earth, its cosmic home the Milky Way, and even the very local region of the universe around us could be situated within a void of low density compared to the rest of the universe.
If so, that would solve one of the most frustrating and lingering problems in cosmology, the so-called “Hubble tension.”
New research suggests that “baryon acoustic oscillations (BAOs)” from the initial moments of the universe (think of them as “the sound of the Big Bang”) seem to support the concept of the local void or “Hubble Bubble.”
The Hubble tension arises from the fact that, when measured using different techniques, the speed at which the universe is expanding (known as the Hubble constant) has different values. One technique measures the Hubble constant using astronomical observations in the local universe, while the other gives its value as an average across the entire universe.
That means if the local universe sits in a low-density “Hubble bubble,” it would be expanding faster than the higher-density wider cosmos, explaining why local observations give a larger Hubble constant value, and thus faster expansion, than the whole-universe average.
“A potential solution to this inconsistency is that our galaxy is close to the center of a large, local void,” research author Indranil Banik of the University of Portsmouth said in a statement. “It would cause matter to be pulled by gravity towards the higher density exterior of the void, leading to the void becoming emptier with time.
“As the void is emptying out, the velocity of objects away from us would be larger than if the void were not there. This, therefore, gives the appearance of a faster local expansion rate.”
There are two main ways to measure the Hubble constant.
For one, scientists observe a “cosmic fossil” called the cosmic microwave background (CMB). The first light that was free to travel the universe, the CMB, is a field of radiation that almost evenly and uniformly fills the entire cosmos.
Scientists can observe the CMB and calculate its evolution using the Lambda Cold Dark Matter model (LCDM), the standard model of cosmology, as a template. From this, they derive the current-day value for the Hubble constant across the universe as a whole, not just locally.
Alternatively, astronomers use observations of type Ia supernovas or variable stars, two examples of objects that astronomers call “standard candles,” to measure distances to their host galaxies. How fast these galaxies are receding is revealed by the change in the wavelengths of light from these bodies, or the “redshift.” The bigger the redshift, the faster a galaxy moves away from Earth. The Hubble constant can be calculated from this.
The problem is that this observation method of the local universe gives a Hubble constant value that is greater than the theoretical value obtained with the LCDM, which considers the universe as a whole. Hence the Hubble tension.
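To make the two routes concrete, here is a minimal sketch of the standard-candle arithmetic: the Hubble constant is simply recession velocity divided by distance. The example galaxy below is hypothetical, and the 67.4 km/s/Mpc figure is the approximate Planck CMB-derived value quoted in the literature.

```python
# Minimal illustration of the Hubble constant from a "standard candle" measurement:
# H0 = recession velocity / distance. The example galaxy below is hypothetical.

distance_mpc = 100.0        # hypothetical distance from standard candles, in megaparsecs
velocity_km_s = 7300.0      # hypothetical recession velocity from redshift, in km/s

h0_local = velocity_km_s / distance_mpc   # km/s per Mpc, the "local" route
h0_cmb = 67.4                             # approximate CMB-derived (Planck) value

print(f"Local (standard candle) H0: {h0_local:.1f} km/s/Mpc")
print(f"CMB-derived H0:             {h0_cmb:.1f} km/s/Mpc")
print(f"Discrepancy: {100 * (h0_local - h0_cmb) / h0_cmb:.1f}%  <- the Hubble tension")
```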
Banik thinks that this discrepancy is a local problem.
“The Hubble tension is largely a local phenomenon, with little evidence that the expansion rate disagrees with expectations in the standard cosmology further back in time,” Banik said. “So, a local solution like a local void is a promising way to go about solving the problem.”
For this local void theory to solve the Hubble tension, Earth and the solar system would have to sit roughly centrally within the low-density Hubble bubble. The Hubble bubble would have to be around 2 billion light-years wide, with a density around 20% lower than the universe’s average matter density.
Indeed, counting the number of galaxies in the local universe does seem to reveal a lower density than neighboring parts of the cosmos.
However, a major stumbling block to this concept is the fact that the existence of such a vast void doesn’t fit well with the LCDM, which suggests matter should be evenly spread in all directions, or “isotropically and homogeneously” distributed through the universe.
New data obtained by Banik shows that the sound of the Big Bang, known as Baryon Acoustic Oscillations or BAOs, actually supports the concept of a local void contrary to the LCDM.
“These sound waves traveled for only a short while before becoming frozen in place once the universe cooled enough for neutral atoms to form,” Banik explained. “They act as a standard ruler, whose angular size we can use to chart the cosmic expansion history.”
Banik argues that a local void slightly distorts the relation between the BAO angular scale and the redshift. This is because velocities induced by a local void and its gravitational effect slightly increase the redshift in addition to that caused by cosmic expansion.
“By considering all available BAO measurements over the last 20 years, we showed that a void model is about one hundred million times more likely than a void-free model with parameters designed to fit the CMB observations taken by the Planck satellite, the so-called homogeneous Planck cosmology,” Banik added.
The next step for Banik and colleagues will be to compare their void model to other models to try to reconstruct the universe’s expansion history.
This could involve the use of “cosmic chronometers,” massive evolving cosmic objects like galaxies that can be aged to determine how the rate of expansion of the universe has changed over time. With galaxies, this can be done by observing stellar populations and seeing what type of stars they possess, with an absence of shorter-lived massive stars indicating a more advanced age.
This age is then compared with the redshift the galaxy’s light has undergone as a result of the expansion of the universe as it traveled to us, revealing the expansion history of the universe over cosmic time.
Perhaps this way, the headache of Hubble tension can be relieved permanently.
The team’s research was presented by Banik on Monday (July 7) at the Royal Astronomical Society National Astronomy Meeting (NAM) 2025 at Durham University in the UK.
Andres Almeida (Host): As wildland fires grow more intense and unpredictable, NASA is helping first responders gain the upper hand. We’re exploring ACERO, the Advanced Capabilities for Emergency Response Operations Project. It’s something that will better protect communities, potentially saving lives. This is Small Steps, Giant Leaps.
[Intro music]
Welcome to Small Steps, Giant Leaps, your podcast from NASA’s Academy of Program/Project & Engineering Leadership, or APPEL. Each episode dives into the lessons learned and experiences from NASA’s technical workforce.
Today we’re talking about ACERO and the team developing an airspace management system to enable drones and other aircraft to safely monitor and suppress wildland fires 24 hours a day. The system will increase aerial response capabilities for responders during low-visibility conditions in a way that is not currently possible. This will better protect our communities. Here to tell us about it is Patrick Hill, chief of Unmanned Aircraft Systems training at NASA’s Langley Research Center in Hampton, Virginia.
Host: Hey, Patrick, thanks for being here.
Patrick Hill: Yeah, thanks for having me.
Host: Can you talk a bit about ACERO, what it is and what is your role with the project?
Hill: Okay, yeah, so I’m a systems engineer and the chief of UAS training at NASA Langley Research Center. Before the flight demonstration, I was involved in payload testing and flight training. We were verifying the functionality of all the individual parts that would go into the larger systems we were testing during the flight demonstration. I was the acting pilot in command for the SuperVolo UAS. The SuperVolo is a hybrid gas-electric, uncrewed aircraft system which takes off vertically and transitions to forward flight. These things combined allow the vehicle to quickly be deployed in some austere environments, while enabling it to stay in the air for much longer than other aircraft.
I would also like to mention that it wasn’t just me working on the project. We had dozens of individuals, from software developers to hardware integrators to systems engineers, airspace and ground crews, who all came together and put in a lot of hours to accomplish the goals of the project.
Host: Yeah, that’s how it often is. It’s such an integrated process, right?
Hill: Yeah, it’s not just me out there. I know I’m a pilot, and we have high egos, but it was definitely not just me out there.
Host: So, can you tell us some advantages of using drones in wildland fire scenarios?
Hill: Sure. Let me start by laying out how things are currently done today. So UAS, or drones, are currently used for aerial ignition, where they would drop small pods, which ignite on the ground and start controlled burns. This manages the larger burns. In some scenarios, you’ll also have aerial ignition from crewed aircraft. In the more severe wildfires, you’ll have crewed aircraft coming in and dropping tons of water or fire retardant. All of these aircraft are usually guided by an air boss who acts as a type of local airspace manager.
An important note is that during most of the burns today, when crewed aircraft are flying, the drones are not, and when the drones are flying, the crewed aircraft are not. Now, I’m not saying that we’re trying to completely replace all the crewed aircraft. That’s not what we’re trying to do here. What the drones are great at is dull, dirty, and dangerous jobs. We don’t really think of wildfires as dull, but there are times where we can orbit our aircraft above a mountain for hours so ground crews can have constant radio communication with each other.
UAS are also utilized to monitor critical information like local weather or the progression of the fire in real time. In the future, we plan on utilizing our systems in a second-shift capability, at night when the crewed aircraft are grounded. Low-flying aircraft and mountains at night don’t really jive together, and that’s a place where the UAS would really thrive.
Host: What types of sensors and technologies are the drones equipped with to aid in wildland fire management?
Hill: So, for the ACERO flight demonstration, the aircraft I was operating was equipped with the Doodle Labs radio, which acted as a key node in a mesh network of ground radios operated out of these PAMS cases we were testing. We also utilized our multirotor aircraft to monitor local weather in real time, equipping an aircraft with a radiosonde, which is the same payload that you would see on a weather balloon, while other crews were flying UAS with cameras to monitor the progression of the burn.
Host: I’d love to talk about the PAMS that you have. Can you talk about the Portable Airspace Management System?
Hill: Sure. So, the Portable Airspace Management System, or PAMS case (because we love our acronyms here at NASA), is essentially a situational awareness gold mine for UAS and wildfire crews.
It allows crews to reserve a block of airspace to operate in. During the flight demonstration, when given the green light from our airspace approver (a role performed in this case by individuals from CAL FIRE), the UAS crews were able to see all active flights. Crews were also notified if an aircraft was going non-conforming, or straying outside of its pre-approved flight area. The case also receives ADS-B [Automatic Dependent Surveillance-Broadcast] In, which allows crews to monitor the local manned traffic in the area.
Host: So, you mentioned CAL FIRE. What has been the response to ACERO from emergency officials?
Hill: So, I don’t claim to be an emergency official myself, but from the individuals that we worked with during the ACERO flight demonstration, they all seemed extremely pleased with the products that we were able to produce.
I believe the PAMS case, and the possibility of its future integration with wildfire crews both on the ground and in the air, was seen as a particularly useful technology that will generate forward momentum for the integration of UAS in the wildfire-fighting airspace.
Host: So, looking ahead, what advancements do you foresee in drone technology that could further aid in wildfire management?
Hill: Yeah, so first we need to be able to assist with controlling the airspace that you’re flying in, which is why we’re testing these PAMS cases.
As technology develops and we get more confidence in our systems, we would see a sharing of the airspace with the crewed side: small UAS delivering equipment to crews in the field, while larger manned aircraft come in and drop their water on locations scouted by a UAS flying at a high altitude, which may also be providing key radio nodes for all crews on the ground.
And then going back to the second-shift capabilities, [it] will allow crews to constantly monitor an evolving fire, or even react to it, when it may not be possible to do otherwise.
Host: Do you find yourselves sharing knowledge across teams? Is that a core component of what you all do?
Hill: Yeah, so sharing the knowledge that we receive is part of NASA’s mission, right? So, everything that we’re doing should be passed forward to the public, which includes the firefighter crews and their teams that are, you know, currently in these environments, working and fighting these fires.
Host: I did like learning the term Air Boss from you. I hadn’t heard that before, so that’s a term from wildfire management, correct?
Hill: Yeah, that’s correct. So, the Air Boss kind of acts as a flying ATC tower, right? And he gives out all the instructions to the aircraft coming in and out of what would actually be a TFR, or temporary flight restriction. And that allows, you know, the continuation of all the aircraft going in and out at the low altitudes that these aircraft need to be at.
Host: With UAS, with these drones, you’re looking to complement, especially at night, right, when it’s less safe to fly, when it’s less safe for a pilot to fly.
Hill: Yeah, so, as it is today, for the most part, we ground all manned aircraft at night because, you know, there are no lights on the mountains, and we don’t want these aircraft flying into a dangerous area. Now, we do have aircraft that have terrain following and things like that, but the UAS are, for lack of a better term, much more replaceable than a manned aircraft.
Host: Can you share any lessons learned from ACERO so far?
Hill: Well, California has some of the best burritos I’ve ever eaten! But seriously, we have learned a lot of great takeaways from our flight demonstration. In particular, we learned a lot about the complex integration of flying multiple UAS in a confined airspace, and what it takes to really do that safely.
A major takeaway for me was that the communication line from the aircraft to the ground systems to everybody else is vital for all crews. It’s not just vital for our crew flying the aircraft; the other crews that are in the area need to know what’s going on with us as well.
Host: Patrick, one more question for you. What was your giant leap?
Hill: Yeah. So, my giant leap started way back when I was in college. My intent was to go to school and become an airline pilot. The way my major worked, I was able to take some classes studying UAS, and I was immediately sucked in. In my second year of school, I took my giant leap, stepping away from my original major to study UAS, and I’ve never looked back. That decision has taken me places that I’d never have gone otherwise, like working for NASA.
Host: Do you think you’ll get your own pilot’s license, just for recreation?
Hill: Yeah, so I actually have 70 hours in a crewed aircraft, which is a stone’s throw away. I should probably have my license already, to be honest.
[Laughter]
But just the way things have worked out, I’ve gained those hours in very short bursts many times. So, I’ve reached the point of taking my test several times and then gone back to restudy. But yes, I think one day in my future, I will be flying myself.
Host: There have to be many pilots there at Langley, correct? Like, recreational?
Hill: Yeah, yeah. So I think most of the people I work with have their own license in some shape or form. We have a lot of glider pilots here, actually, which is not something you see every day or run into, so it’s quite unique for the people that we have. But their skills are some of the best I’ve ever seen.
Host: That’s pretty great. Do you have any closing words for anybody who wants to pursue a career at NASA, or perhaps follow in your footsteps?
Hill: Follow your heart, but do what you’re good at. I loved UAS, and fortunately I had an engineering mindset, which has taken me far in this world. So, if you’re, you know, planning on following in my footsteps, go to school, get your degree.
Host: I bet you relied on mentors too.
Hill: Yeah, oh, yeah. So, I wouldn’t have gotten where I am today without the help of, you know, former classmates, teachers, professors, my old bosses, my current bosses. Everybody has, you know, put a little bit of effort into the person I am today, which, you know, working for NASA is pretty fantastic.
Host: Well, thanks, Patrick, thanks for your time today.
Hill: Yeah, thanks. I really appreciate it. And also, as of this recording date, yesterday was Father’s Day. So, shout out to all the dads out there, especially my dad and my two brothers who recently had kids. Just want to put that out there.
Host: Excellent. Happy Father’s Day to all.
Host: That’s it for this episode of Small Steps, Giant Leaps. For more on Patrick Hill and the topics we discussed today, visit appel.nasa.gov. That’s A-P-P-E-L dot NASA dot gov. And while you’re there, check out our other podcasts like Houston, We Have a Podcast, Curious Universe, and Universo curioso de la NASA. Thanks for listening.
[Outro music]
Outro: Three. Two. One. This is an official NASA podcast.
Planet Earth will spin a little faster on three separate days this summer, starting today. This will technically result in shorter days, but the change will be so minuscule you won’t even notice.
Several milliseconds will be shaved off of the 24 hours it takes for Earth to complete a full rotation — we’re talking even less time than the blink of an eye.
Planet Earth is our timekeeper, but it’s not perfect.
It takes our planet 24 hours — one day — to complete one full rotation on its axis, which breaks down to 86,400 seconds. But Earth’s rotation could change by a millisecond (0.001 seconds) or two every day.
The orbit of the moon can have an effect on how fast the Earth spins around. “Our planet spins quicker when the moon’s position is far to the north or south of Earth’s equator,” according to TimeandDate.com.
“Earthquakes, volcanoes, tidal forces, subterranean geology, and many other mechanisms can cause the planet’s rotation to slow down or speed up, and those micro-adjustments can trend over time,” Popular Mechanics reported.
The 8.9 magnitude earthquake that struck Japan in 2011 accelerated Earth’s rotation, shortening the length of the standard 24-hour day by 1.8 microseconds (0.0018 milliseconds).
These tiny day-to-day fluctuations in the Earth’s spin speed began to be measured in the 1950s with atomic clocks. The measured rotation time, slightly above or below the standard 86,400 seconds, is referred to as the length of day (LOD).
The shortest day recorded was on July 5, 2024, when Earth completed its full rotation 1.66 milliseconds faster than the standard 86,400 seconds.
There are three days this summer when the moon will be around its furthest distance from Earth’s equator, resulting in a minuscule increase in the Earth’s spin speed. The following are predictions from scientists, put in perspective by the short calculation after the list:
July 9: The day is shortened by 1.30 milliseconds
July 22: Earth loses 1.38 milliseconds of the day
Aug. 5: The day is shortened by 1.51 milliseconds
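As a rough sense of how small these changes are, the short sketch below compares each predicted shortening with the length of the standard day and with a typical eye blink; the roughly 300-millisecond blink duration is an assumed ballpark figure, not something from the article.

```python
# How small are these changes? Compare each predicted shortening with the standard
# day and with a typical eye blink (~300 ms, an assumed ballpark figure).

STANDARD_DAY_MS = 86_400 * 1000       # 86,400 seconds expressed in milliseconds
BLINK_MS = 300.0                      # assumed typical blink duration

predicted_shortenings_ms = {"July 9": 1.30, "July 22": 1.38, "Aug. 5": 1.51}

for date, shortening in predicted_shortenings_ms.items():
    fraction_of_day = shortening / STANDARD_DAY_MS
    fraction_of_blink = shortening / BLINK_MS
    print(f"{date}: {shortening} ms  "
          f"(~{fraction_of_day:.1e} of a day, ~{100 * fraction_of_blink:.1f}% of a blink)")
```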
What feels like the shortest day of the year in the Northern Hemisphere is known as the winter solstice, when Earth is tilted away from the sun at its maximum. This results in the fewest daylight hours of the year and occurs in mid-December.
There weren’t always 24 hours in a day. Researchers believe that in the Jurassic Period, it took Earth just 23 hours to make a complete rotation around its axis. Scientists have found that the length of a day on Earth is increasing each century by about 1.7 milliseconds. Over time, that adds up. Experts think that 200 million years from now, there will be 25 hours in a full day.
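That long-term figure can be sanity-checked with a few lines of arithmetic, assuming the quoted 1.7 milliseconds-per-century trend simply continues unchanged:

```python
# Extrapolate the quoted ~1.7 ms-per-century lengthening of the day over 200 million years,
# assuming the trend continues unchanged (a simplification for illustration).

MS_PER_CENTURY = 1.7
YEARS = 200_000_000
centuries = YEARS / 100

added_ms = MS_PER_CENTURY * centuries        # total lengthening in milliseconds
added_hours = added_ms / 1000 / 3600         # convert to hours

print(f"Extra day length after {YEARS:,} years: ~{added_hours:.2f} hours")
# ~0.94 hours, i.e. a day of roughly 25 hours, broadly consistent with the estimate above
```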
Insider Brief
Quantum computers still face a major hurdle on their pathway to practical use cases: their limited ability to correct the arising computational errors. To develop truly reliable quantum computers, researchers must be able to simulate quantum computations using conventional computers to verify their correctness – a vital yet extraordinarily difficult task. Now, in a world-first, researchers from Chalmers University of Technology, the University of Milan, the University of Granada, and the University of Tokyo have unveiled a method for simulating specific types of error-corrected quantum computations – a significant leap forward in the quest for robust quantum technologies.
Quantum computers have the potential to solve complex problems that no supercomputer today can handle. In the foreseeable future, quantum technology’s computing power is expected to revolutionise fundamental ways of solving problems in medicine, energy, encryption, AI, and logistics.
Despite these promises, the technology faces a major challenge: the need for correcting the errors arising in a quantum computation. While conventional computers also experience errors, these can be quickly and reliably corrected using well-established techniques before they can cause problems. In contrast, quantum computers are subject to far more errors, which are additionally harder to detect and correct. Quantum systems are still not fault-tolerant and therefore not yet fully reliable.
To verify the accuracy of a quantum computation, researchers simulate – or mimic – the calculations using conventional computers, like a standard laptop. One particularly important type of quantum computation that researchers are therefore interested in simulating is one that can withstand disturbances and effectively correct errors. However, the immense complexity of quantum computations makes such simulations extremely demanding – so much so that, in some cases, even the world’s best conventional supercomputer would take the age of the universe to reproduce the result.
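To get a feel for why such simulations are so demanding, here is a minimal sketch of how the memory needed just to store a general quantum state grows with the number of qubits, assuming 16 bytes per complex amplitude; bosonic codes such as the GKP code discussed below are harder still, since each mode has many, in principle infinitely many, energy levels.

```python
# Memory needed to store the full state of an n-qubit system on a classical computer.
# An n-qubit state has 2**n complex amplitudes; assume 16 bytes per amplitude (complex128).

BYTES_PER_AMPLITUDE = 16

for n_qubits in (20, 30, 40, 50):
    amplitudes = 2 ** n_qubits
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n_qubits} qubits: {amplitudes:,} amplitudes, ~{gigabytes:,.2f} GB")
```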
Researchers from Chalmers University of Technology, the University of Milan, the University of Granada and the University of Tokyo have now become the first in the world to present a method for accurately simulating a certain type of quantum computation that is particularly suitable for error correction, but which thus far has been very difficult to simulate. The breakthrough tackles a long-standing challenge in quantum research.
“We have discovered a way to simulate a specific type of quantum computation where previous methods have not been effective. This means that we can now simulate quantum computations with an error correction code used for fault tolerance, which is crucial for being able to build better and more robust quantum computers in the future,” says Cameron Calcluth, PhD in Applied Quantum Physics at Chalmers and first author of a study recently published in Physical Review Letters.
The limited ability of quantum computers to correct errors stems from their fundamental building blocks – qubits – which have the potential for immense computational power but are also highly sensitive. The computational power of quantum computers relies on the quantum mechanical phenomenon of superposition, meaning qubits can simultaneously hold the values 1 and 0, as well as all intermediate states, in any combination. The computational capacity increases exponentially with each additional qubit, but the trade-off is their extreme susceptibility to disturbances.
“The slightest noise from the surroundings in the form of vibrations, electromagnetic radiation, or a change in temperature can cause the qubits to miscalculate or even lose their quantum state, their coherence, thereby also losing their capacity to continue calculating,” says Cameron Calcluth.
To address this issue, error correction codes are used to distribute information across multiple subsystems, allowing errors to be detected and corrected without destroying the quantum information. One way is to encode the quantum information of a qubit into the multiple (possibly infinite) energy levels of a vibrating quantum mechanical system. This is called a bosonic code. However, simulating quantum computations with bosonic codes is particularly challenging because of the multiple energy levels, and researchers have been unable to reliably simulate them using conventional computers – until now.
The method developed by the researchers consists of an algorithm capable of simulating quantum computations that use a type of bosonic code known as the Gottesman-Kitaev-Preskill (GKP) code. This code is commonly used in leading implementations of quantum computers.
“The way it stores quantum information makes it easier for quantum computers to correct errors, which in turn makes them less sensitive to noise and disturbances. Due to their deeply quantum mechanical nature, GKP codes have been extremely difficult to simulate using conventional computers. But now we have finally found a unique way to do this much more effectively than with previous methods,” says Giulia Ferrini, Associate Professor of Applied Quantum Physics at Chalmers and co-author of the study.
The researchers managed to use the code in their algorithm by creating a new mathematical tool. Thanks to the new method, researchers can now more reliably test and validate a quantum computer’s calculations.
“This opens up entirely new ways of simulating quantum computations that we have previously been unable to test but are crucial for being able to build stable and scalable quantum computers,” says Giulia Ferrini.
Read the scientific article Classical simulation of circuits with realistic odd-dimensional Gottesman-Kitaev-Preskill states.
When Canadian and European Space Agency leaders reaffirmed their commitment to work together in June, leaders focused their public remarks primarily on shared exploration goals and decades of fruitful partnership.
Many of the stars in the Milky Way galaxy are small, dim red dwarfs—stars much smaller than the sun in both size and mass. TOI-6894, located far away from Earth, is one of them.
Astronomers previously thought a star like this could not have large planets orbiting it, because its mass is only about 20 percent of the sun’s, meaning its planetary system—generated from materials surrounding the star—would not have contained enough mass to form a giant body like Saturn or Jupiter.
But when observing TOI-6894, an international research team detected a clear transit signal—a temporary decrease in a star’s brightness caused by a planet passing across it. This newly discovered planet, named TOI-6894b, blocks 17 percent of the star’s light, indicating the planet is fairly large. The signal was picked up by the Transiting Exoplanet Survey Satellite (TESS), an observation instrument launched by NASA to hunt for planets orbiting stars outside of our solar system.
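In the simplest picture, ignoring limb darkening and other real-world effects, the fraction of starlight blocked during a transit equals the square of the planet-to-star radius ratio, so a 17 percent dip implies a planet whose radius is roughly 40 percent of its star's. A minimal sketch:

```python
# Simplest transit model: blocked light fraction ("depth") = (planet radius / star radius)**2.
# Limb darkening and other real-world effects are ignored in this illustration.
import math

transit_depth = 0.17                              # 17% of the star's light is blocked
radius_ratio = math.sqrt(transit_depth)           # planet radius relative to star radius

print(f"Planet/star radius ratio: {radius_ratio:.2f}")   # ~0.41
```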
This makes TOI-6894 “the lowest mass star known to date to host such a planet,” said Edward Bryant, Astrophysics Prize Fellow at the University of Warwick, in a press statement. The finding appears to upend conventional theory on how planets are formed. “This discovery will be a cornerstone for understanding the extremes of giant planet formation,” Bryant said.
Astronomers at University College London and the University of Warwick, as part of a global collaboration with partners in Chile, the US, and Europe, trawled through the data of about 91,000 red dwarf stars observed by TESS before discovering the planet TOI-6894b. After that, the nature of TOI-6894b was clarified by additional observations made with other telescopes. According to these, TOI-6894b’s radius is slightly larger than Saturn’s, but its mass is only about half that of the ringed giant. Its density is extremely light at only 0.33 g/cm³, indicating that it is an expanding gas planet.
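The quoted density can be checked from the mass and radius figures. The sketch below uses approximate reference values for Saturn and reads "slightly larger than Saturn" as a few percent; it is a consistency check on the numbers above, not a reproduction of the team's analysis.

```python
# Consistency check: density from "radius slightly larger than Saturn's" and
# "mass about half of Saturn's". Saturn reference values are approximate.
import math

SATURN_RADIUS_M = 5.82e7      # approximate mean radius of Saturn
SATURN_MASS_KG = 5.68e26      # approximate mass of Saturn

radius = 1.02 * SATURN_RADIUS_M     # "slightly larger than Saturn's" (assumed 2% larger)
mass = 0.5 * SATURN_MASS_KG         # "about half that of the ringed giant"

volume = (4.0 / 3.0) * math.pi * radius ** 3
density_g_cm3 = (mass / volume) / 1000.0     # kg/m^3 -> g/cm^3

print(f"Implied density: {density_g_cm3:.2f} g/cm^3")   # close to the quoted 0.33
```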
TOI-6894 is nearly 40 percent smaller than the previous record-holder for the smallest star with a planet of this size. This poses a serious challenge to conventional theories of planet formation.
The widely accepted planetary formation model, the “core accretion theory,” proposes that a ring of dust and rocks—known as a protoplanetary disk—forms around a star, and that materials in this disk then gather together to form the cores of planets. After starting out this way, larger gas planets then accrete gases around their cores to become gigantic. But if the mass of the star is small, the mass of its protoplanetary disk tends to be small as well. In such a scenario, the core necessary for the formation of a giant gas planet cannot grow.
Based on this theory, it is estimated that forming TOI-6894b would have required more than 120 Earth masses of solid material. However, the disk surrounding the star TOI-6894 is estimated to have contained only 58 Earth masses at most. This raises the possibility that an alternative planet-formation mechanism exists.
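Put numerically, the mismatch looks like this, simply comparing the two figures quoted above in Earth masses:

```python
# Compare the solid material required by core accretion with what the disk is
# estimated to have contained, both in Earth masses, using the figures quoted above.

required_solids_earth_masses = 120   # "more than 120 Earth masses" of solid material
available_solids_earth_masses = 58   # "only 58 Earth masses at most"

shortfall = required_solids_earth_masses / available_solids_earth_masses
print(f"Required is at least ~{shortfall:.1f}x what the disk could have supplied")
```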