Preparing Robots to Crawl on the Moon

CU Boulder researchers are working with the company Lunar Outpost to develop a digital twin of a rover on the surface of the moon. (Credit: Nico Goda/CU Boulder)

The future of moon exploration may be rolling around a nondescript office on the CU Boulder campus. Here, a robot about as wide as a large pizza scoots forward on three wheels. It uses an arm with a claw at one end to pick up a plastic block from the floor, then set it back down.

To be sure, this windowless office, complete with gray carpeting, is nothing like the moon. And the robot, nicknamed “Armstrong,” wouldn’t last a minute on its frigid surface.

But the scene represents a new vision for space exploration — one in which fleets of robots working in tandem with people crawl across the lunar landscape, building scientific observatories or even human habitats.

The Armstrong robot, top, and its digital twin, bottom. (Credit: Network for Exploration and Space Science)

Xavier O’Keefe operates the robot from a room down the hall. He wears virtual reality goggles that allow him to see through a camera mounted on top of Armstrong.

“It’s impressively immersive,” said O’Keefe, who earned his bachelor’s degree in aerospace engineering sciences from CU Boulder this spring. “The first couple of times I used the VR, the robot was sitting in the corner, and it was really weird to see myself using it.”

He’s part of a team of current and former undergraduate students tackling a tricky question: How can humans on Earth get the training they need to operate robots on the hazardous terrain of the lunar surface? On the moon, gravity is only about one-sixth as strong as it is on our planet. The landscape is pockmarked with craters, some cast in permanent darkness.

In a new study, O’Keefe and fellow CU Boulder alumni Katy McCutchan and Alexis Muniz report that “digital twins,” or hyper-realistic virtual reality environments, could provide a useful proxy for the moon — giving people a chance to get the hang of driving robots without risking damage to multi-million-dollar equipment.

The study is funded by NASA and the Colorado company Lunar Outpost. It is part of a larger research effort led by Jack Burns, astrophysics professor emeritus in the Department of Astrophysical and Planetary Sciences (APS) and the Center for Astrophysics and Space Astronomy (CASA).

For Burns, a co-author of the study, Armstrong and its VR digital twin represent a big leap forward, despite the robot’s humble appearance. Burns is part of a team that has received a grant from NASA to design a futuristic scientific observatory on the moon called FarView — which would be made up of a web of 100,000 antennas stretching over roughly 77 square miles of the lunar surface. Daniel Szafir of the University of North Carolina, Chapel Hill was also a co-author of the new study.

The team’s first hurdle: creating a digital twin for Armstrong to roam around in. To do that, the researchers built a digital replica of their office in a video game engine called Unity — right down to the beige walls and drab carpet.

Next, the team ran an experiment. In 2023 and 2024, they recruited 24 human participants to operate Armstrong while sitting in a room down the hall. Donning VR goggles, the subjects took the robot through a simple task: They picked up and adjusted a plastic block that represented one of the antennas in FarView.

Here is an exclusive Tech Briefs interview with O’Keefe, edited for length and clarity.

Tech Briefs: What was the biggest technical challenge you faced while developing this digital twin?

O’Keefe: It was probably synchronizing the physical robot with the digital robot because it’s pretty easy to just get something into a simulation and get it to move. But to actually get it to move as precisely as the real robot does and to be able to match those characteristics and verify them? That took a lot of work.

Tech Briefs: How’d you wind up doing that?

O’Keefe: We had a couple different ways. We measured the movement speeds of the arm with a Python script and then matched those movement speeds in Unity. Then we did a couple of experiments with the physical rover where we drove it a set distance and made sure it went that same distance, in the same time, in the simulation. And we did the same thing for turning as well to make sure it turned at the same speed.
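The calibration procedure O’Keefe describes — drive the physical rover a set distance, time it, and check that the simulated rover covers the same distance in the same time — can be sketched in a few lines. This is a hypothetical illustration, not the team’s actual script; the numbers, function names, and 5% tolerance are all assumptions.

```python
# Hypothetical calibration check: compare a timed straight-line drive of the
# physical rover against the same maneuver in the digital twin.
# All values below are illustrative, not measured data from the study.

def drive_speed(distance_m: float, elapsed_s: float) -> float:
    """Average linear speed over a timed straight-line drive."""
    return distance_m / elapsed_s

def within_tolerance(physical: float, simulated: float, tol: float = 0.05) -> bool:
    """True if the simulated value is within a fractional tolerance of the
    physically measured value (here, 5% by default)."""
    return abs(simulated - physical) / physical <= tol

# Example: physical rover covers 2.0 m in 8.0 s; the twin covers 2.0 m in 8.2 s.
phys = drive_speed(2.0, 8.0)   # 0.25 m/s
sim = drive_speed(2.0, 8.2)    # ~0.244 m/s
print(within_tolerance(phys, sim))  # prints True: ~2.4% error passes a 5% tolerance
```

The same pattern applies to the turning test: measure angular speed over a fixed rotation and compare against the twin.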

Tech Briefs: The article I read says, “Today, Burns’ team is moving on to a new goal. They’re recreating the much more complex environment of the lunar surface. The researchers are working with Lunar Outpost to build a digital twin of a rover on the moon, but in the same game engine.” My question is: Do you have any updates you can share? How’s that going?

O’Keefe: No major updates. Actually, immediately after that interview, my co-worker and I merged our work together, so the simulation looks a lot more accurate. We’ve got the planets in the night sky. We were able to figure out exactly where the planets would be at a specific date and render that into the engine. So that’s probably the biggest update we’re currently working on — fine-tuning the physics of the rover and also getting the dust clouds that it kicks up to be simulated correctly.
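Figuring out where the planets sit in the sky on a given date is a standard ephemeris problem. The team hasn’t described their method, but as a toy illustration, a circular-orbit approximation gives each planet’s mean ecliptic longitude from two numbers: its longitude at a reference epoch (J2000) and its orbital period. A real pipeline would query a proper ephemeris (for example, JPL’s published kernels) rather than this simplification; the element values below are approximate.

```python
from datetime import datetime, timezone

# Toy ephemeris sketch: mean ecliptic longitude from a circular-orbit
# approximation. Elements are approximate J2000 values; a production
# renderer would use a real ephemeris source instead.
J2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)

PLANETS = {
    # name: (mean longitude at J2000, degrees; orbital period, days)
    "Mars": (355.45, 686.98),
    "Jupiter": (34.40, 4332.59),
}

def mean_longitude(name: str, when: datetime) -> float:
    """Mean ecliptic longitude in degrees at `when`, wrapped to [0, 360)."""
    l0, period = PLANETS[name]
    days = (when - J2000).total_seconds() / 86400.0
    return (l0 + 360.0 * days / period) % 360.0
```

Once a longitude (and latitude) is known for the chosen date, it can be converted to a direction vector and used to place the planet sprite in the Unity sky dome.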

Tech Briefs: Once you have that digital twin, what will your next steps be?

O’Keefe: I’m actually not sure where that goes next. After we’ve got the digital twin done, we might be switching focus to dip back into VR technology. The control interface and the ability to debug stuff with VR is pretty promising. So, we’re thinking about working with that. Or, we might switch over to another one of the Lunar Outpost vehicles and do the same thing with that.

Tech Briefs: Going back to what you just said, the article also says the hardest part is getting the lunar dust just right. How did you overcome that?

O’Keefe: That’s definitely still a work in progress, because we can’t just go up to the moon and see if our stuff looks right. There are a couple of videos online of rovers driving around, so I think we’re going to try and match with that. There’s a decent amount of research out there with models of dust physics, so it’s really going to be just try and verify everything we can with the materials we have, and then, hopefully, at some point we’ll get some real data to verify it that way.
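One lunar-specific behavior any dust model has to capture: with no atmosphere, kicked-up grains don’t billow in clouds the way dust does on Earth — each grain follows a clean ballistic arc under one-sixth gravity. The team’s actual model isn’t public; the sketch below just illustrates that single physical point, with illustrative launch values.

```python
import math

G_MOON = 1.62   # lunar surface gravity, m/s^2
G_EARTH = 9.81  # for comparison

def ejecta_range(speed: float, angle_deg: float, g: float = G_MOON) -> float:
    """Horizontal distance a dust grain travels over flat ground when launched
    at `speed` (m/s) and `angle_deg` above horizontal. In vacuum the grain is
    purely ballistic: range = v^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# A grain kicked up at 1.5 m/s at 45 degrees flies roughly 6x farther on the
# Moon than under Earth gravity (and with no air drag to shorten the arc).
print(round(ejecta_range(1.5, 45.0), 2))          # ~1.39 m on the Moon
print(round(ejecta_range(1.5, 45.0, G_EARTH), 2)) # ~0.23 m on Earth
```

A particle system tuned to arcs like these, rather than Earth-style dust clouds, is one way to sanity-check the simulated rooster tails against the rover footage O’Keefe mentions.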



Transcript

00:00:00 [equipment squeaking] (Xavier O’Keefe) The task was to identify which of the three antennas was misaligned, drive up to it, pick it up, rotate it to be aligned how it should, and then leave.

On the moon there’s going to be, you know, a lot of missions, a lot of things going on, hopefully soon. Potentially deploying as many as 100,000 antennas. And that would be a real pain for human operators to do manually with their hands. And the robot allows for that extra degree of precision that humans may not have.

[squeaking] The big picture idea is to develop a training platform for telerobotics on the moon. So, on the moon there’s a lot of missions that need to be done with robots and the cost of failure is a lot higher. You know, it’s a multi-million or billion dollar mission. So we want operators to be able to train on a very realistic, high-fidelity simulation on Earth

00:00:50 and then, you know, have that experience transfer really well to the moon.

We had two groups of participants: one who just completed the task with the physical rover, the second who completed the task with the digital twin and then went on to do the same task with the physical rover.

Most significant result was the amount of failures that we saw between the groups. So the second group had 85% fewer unrecoverable failures than the first group, and these were when the participants would knock over the antenna.

This was pretty much impossible to undo without an expert operator. And on the moon, this would be something that you couldn’t recover from; this would be a loss of a lot of money, potentially loss of the mission.

00:01:25 So showing that we were able to reduce these failures by 85% is, I think, probably the most promising part of this work.

That’s what I’m hoping will continue to happen here, is that we’ll keep perfecting our platform and keep learning how to simulate these rovers better and keep applying it to real robots that help real companies solve real problems, on the Moon.
