Vandi Verma has been designing and configuring robots for over 20 years. And where some may only dream of reaching new heights in their field, Verma has literally done so. As chief engineer for robotic operations for the Mars 2020 Perseverance rover and deputy section manager for the mobility and robotics section at NASA’s Jet Propulsion Laboratory, she has sent robots to Mars and shaped the AI that powers their “thinking” and driving capabilities. In this interview with McKinsey’s David DeLallo, she describes what it means to test the limits of robotics in one of the highest-stakes environments there is. An edited version of their conversation follows.
David DeLallo: What has been your role in developing and operating the Mars rover, Perseverance?
Vandi Verma: Robotic operations on the rover mission cover everything related to the rover’s navigation, whether that’s us directing its drives on Mars from Earth or using the rover’s self-driving capability. It also includes everything related to the robotic arm, such as how we manipulate things on the surface, how close we get to them, and all the uncertainties involved. Another aspect is sampling: Perseverance has a second robotic system inside the rover, with another arm that does the sample handling. The last aspect of robotic operations is the interface with the Ingenuity helicopter.
We’re also continually improving rover systems during the missions. While we can’t upgrade the hardware while it’s up there, we can upgrade the software. We’ve done three software upgrades since we landed. We add new capabilities, test them, and figure out how we’re going to use them. That’s all part of the work.
Robotics engineers play a lot of different roles during the mission life cycle. I wrote code that’s running on Mars to enable some of the rover’s capabilities and have worked in various other roles along the way. Most of us work on building the robot, and then once we land, we may participate in actually operating it.
David DeLallo: Do you participate in operating the rover?
Vandi Verma: Yes, I still like to do hands-on operations on Mars. Since 2008, between Spirit, Opportunity, Curiosity, and now Perseverance, I have been driving the robots on Mars and manipulating the sampling system and robotic arm on the different robots, as well as doing Ingenuity operations. I learn a lot from sitting behind the controls and actually doing the Mars operations.
David DeLallo: How do the rover’s autonomous operations work from a technical perspective?
Vandi Verma: That’s a great question. People sometimes think that either we are operating the rover or it’s driving autonomously. Those are not actually two separate things. Even when it drives autonomously, we still have to communicate to the robot what we want it to do, so we operate even the autonomous capability.
But in terms of what autonomous capability it does have, one is self-driving. And that’s really a lot of fun to operate, because you’re planning not just a short travel distance but hundreds of meters on Mars. There are some things the robot can do really well autonomously. For instance, it’s very good at detecting geometric hazards. Because we use computer vision, we can create a three-dimensional terrain map that enables it to detect when there’s a big rock, a boulder, or a crater. But what it’s not so good at is detecting texture and things like sand. So when you do autonomous driving, you are giving it some guidance. We create these zones where we say, “We know from the ground [Earth] that this is sand, so don’t drive on it.” But on board, as it’s driving, the rover gets more uncertain about its position; it doesn’t know exactly where it is relative to the patch you flagged, because there’s no GPS on Mars. It is intelligent enough, however, to increase its distance from hazards as that uncertainty grows. So it’s a joint operation between human and robotic capability.
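To make that interplay concrete, here is a minimal sketch of one way such bookkeeping could work, assuming a circular keep-out zone and a simple linear model of position-uncertainty growth. The function names, the slip-rate constant, and the growth model are all illustrative assumptions, not anything from the rover’s flight software.

```python
import math

# Hypothetical sketch: pad a ground-designated keep-out zone by the
# position uncertainty the rover accumulates while dead reckoning.
# The linear growth model and all values are illustrative assumptions.

SLIP_RATE = 0.05  # assumed fractional position error per meter driven


def effective_keepout_radius(base_radius_m: float, distance_driven_m: float) -> float:
    """Inflate the zone by the uncertainty accumulated so far."""
    uncertainty_m = SLIP_RATE * distance_driven_m  # grows with distance driven
    return base_radius_m + uncertainty_m


def is_path_point_safe(point_xy, zone_center_xy, base_radius_m, distance_driven_m):
    """Reject candidate path points inside the inflated keep-out zone."""
    dx = point_xy[0] - zone_center_xy[0]
    dy = point_xy[1] - zone_center_xy[1]
    return math.hypot(dx, dy) > effective_keepout_radius(base_radius_m, distance_driven_m)


# Example: a sand patch flagged from Earth at (40, 12) with a 5 m radius.
# After 200 m of driving, the rover keeps an extra 10 m of margin, so a
# point 13 m from the patch center is no longer considered safe.
print(is_path_point_safe((40.0, 25.0), (40.0, 12.0), 5.0, 200.0))  # False
```

The point is only the shape of the logic: the farther the rover dead-reckons without an absolute position fix, the wider the margin it keeps around anything flagged from Earth.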
David DeLallo: Can you describe where AI comes into play?
Vandi Verma: We use AI in several areas. One is in helping to decide what is complex terrain and determining how conservative or not to be. If you are too conservative, you end up keeping so much margin around the terrain that the robot will not be able to progress. We also use AI to enable the rover to figure out autonomously which rocks are interesting. We take wide-angle images and then have the robot use onboard algorithms to say, “That’s the rock I want to shoot the laser at.” For this, we use a lot of offline techniques to come up with parameters attuned to what is scientifically interesting and to help the robot know that.
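As a rough illustration of that offline-tuning, onboard-selection pattern, the sketch below scores candidate rocks with ground-tuned weights and picks the highest-scoring target. The features, weights, and data structures are invented for the example; they stand in for whatever parameters the science team actually tunes offline.

```python
from dataclasses import dataclass

# Illustrative sketch of onboard target selection: rocks detected in a
# wide-angle image are scored with parameters tuned offline on Earth,
# and the highest-scoring rock becomes the laser target.

@dataclass
class RockCandidate:
    rock_id: str
    size: float        # apparent size, normalized 0..1
    brightness: float  # albedo proxy, normalized 0..1
    roundness: float   # shape cue, normalized 0..1

# Ground-tuned weights, uplinked with the plan (hypothetical values).
WEIGHTS = {"size": 0.5, "brightness": 0.3, "roundness": 0.2}


def score(rock: RockCandidate) -> float:
    """Weighted sum standing in for 'scientifically interesting'."""
    return (WEIGHTS["size"] * rock.size
            + WEIGHTS["brightness"] * rock.brightness
            + WEIGHTS["roundness"] * rock.roundness)


def pick_target(candidates: list[RockCandidate]) -> RockCandidate:
    """Choose the best rock according to the tuned weights."""
    return max(candidates, key=score)


rocks = [RockCandidate("r1", 0.8, 0.2, 0.5), RockCandidate("r2", 0.4, 0.9, 0.7)]
print(pick_target(rocks).rock_id)  # "r2" (score 0.61 beats r1's 0.56)
```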
Another example is onboard planning. The rover has seven instruments, and it doesn’t just do one thing at a time. We might be driving the rover and flying a helicopter on the same day or taking measurements from radar as we drive. So the rover has to figure out, “Do I have the energy to do both of these activities? Should I serialize them? What can I do in parallel? Does something conflict with another?” Or, if as the rover is executing on Mars suddenly something falls, is that an opportunity for it to do something else interesting? That’s an onboard planning capability. We haven’t deployed it yet, but we’re getting very close. When we do, we will have a planner with the capability to continuously look at what the constraints are, get data from the rover, and be able to decide to potentially do something different and optimize the plan based on that information.
David DeLallo: You’ve been developing advanced robotics for more than 20 years. What are some of the biggest changes you’ve seen?
Vandi Verma: It’s been really interesting. When I was in graduate school, it was very exciting because there was so much happening in the field and in laboratories, and you saw the potential. But for self-driving capabilities, say, we hadn’t even had the first DARPA [Defense Advanced Research Projects Agency] Grand Challenge. We did, however, have robots, and we knew this capability had tremendous potential. But we were at the stage where the robot would stop, take an image, create a hazard map, do a lot of thinking, and then figure out, “Well, maybe I should drive this arc, or maybe I should drive that arc.” And then we would get excited about it. But to an observer, it was like, “A two-year-old can do better than that.”
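That stop-and-think cycle maps to a loop like the toy sketch below: image, build a hazard map, score a few candidate steering arcs, and commit to one short step. Every detail here, from the grid hazard map to the arc candidates, is a stand-in invented for illustration.

```python
import math

# Toy version of the early stop-image-think-drive loop: build a hazard
# map from an image, check a few candidate steering arcs against it,
# drive the best arc a short distance, then repeat.

def acquire_hazard_map():
    """Stand-in for stereo imaging plus 3D terrain analysis.

    Returns the set of (x, y) grid cells considered hazardous."""
    return {(2, 0), (3, 0), (3, 1)}  # e.g., a boulder directly ahead


def arc_cells(curvature: float, steps: int = 4):
    """Grid cells a short drive along the given steering arc crosses."""
    cells, heading, x, y = [], 0.0, 0.0, 0.0
    for _ in range(steps):
        heading += curvature
        x += math.cos(heading)
        y += math.sin(heading)
        cells.append((round(x), round(y)))
    return cells


def choose_arc(hazards, candidates=(-0.3, 0.0, 0.3)):
    """Pick the straightest arc whose path avoids all mapped hazards."""
    safe = [c for c in candidates if not any(cell in hazards for cell in arc_cells(c))]
    return min(safe, key=abs) if safe else None  # None: stop and ask Earth


print(choose_arc(acquire_hazard_map()))  # -0.3: straight is blocked, so veer
```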
Now I work with a lot of NASA engineers, and they always optimize the heck out of everything. While there’s always a bit of manual driving to do, we’re getting to the stage with our self-driving capability where we’re willing to let the robot drive, because what it does is actually better than our optimizations: it’s using cumulative information. We can see the image from its point of view at the start of the drive, but once the rover starts moving, it’s collecting far more information on board than we’re ever going to have. That’s where I think the huge change is. The capability and the trust have reached the point where we’ve seen enough correct decisions by the AI over time that we’re now willing to let it take control.
David DeLallo: You raise an interesting point. There has been much discussion about whether humans can trust AI to take care of them. But in the case of space exploration, we have to trust that robots will take care of themselves, too.
Vandi Verma: Right. We have to trust that the robot will not destroy itself or drive off a cliff. That trust is key because we have one robot on Mars. It’s still difficult to get to Mars, so that robot is a very valuable asset. We used to design autonomous capabilities with the idea of “make it better, but do no harm,” restricting the chances that the robot could harm itself. But the robot’s decision-making abilities have improved. So now we’re getting to the point where we’re letting the robot make more decisions, even decisions that could be dangerous if they turned out wrong, because the risk that the robot will make the wrong decision is now so much lower. That has been the biggest step since 2008, when I first started working on the Mars Exploration Rover mission.
And the leap in trust isn’t just because the AI and software have improved. It’s the result of the full-system approach you have to take when thinking about robotics and AI. Curiosity had autonomous driving, but we didn’t use it very much because, while the AI behind the self-driving capability would navigate the rover around big hazards, it wasn’t avoiding the little sharp rocks. And those rocks started to tear the wheels. Adjusting the algorithms to say, “Oh, this is not something you do,” is a much more complex operation. A less complicated solution was to upgrade the wheels to reduce the hazard. We also made the cameras faster and upgraded the computing. So it takes a full-system approach to ask, “Can we make this AI so it’s actually usable to solve the problem we want it to?”
David DeLallo: What are some of the things that frustrate you about AI?
Vandi Verma: There are two parts to that. The first is frustrating in a good way: the AI will always find a loophole. It’s not going to use common sense. If I’m working with a human co-driver, they get what you mean even if it’s not exactly what you said, whereas the AI is going to go with exactly what you specify. The other is that crafting that specification is still a bit of an art and a niche skill. When we’re driving the robot, it’s very intuitive now, thanks to where we’ve gotten in terms of human–machine interfaces. But the AI itself still requires a little bit of fiddling, and I think that’s somewhere we can make a lot of improvement.
David DeLallo: Do you see other business applications for the things you’re developing right now?
Vandi Verma: Yes. We work a lot with industry, academia, and students. NASA is a very collaborative organization, and part of our charter is to expand knowledge, so we have a lot of programs for doing that. In terms of specific examples, Earth has extreme environments that are very analogous to other planetary environments. And then there are things like remote operations. We have a lot of experience operating robots where you really can’t just go and flip a switch or change something; you have to do everything remotely. I think that’s a big area for transfer as we do more robotics and want it to be hands-free. The human–robot interaction experience we’ve built up over decades of doing this continuously will be very valuable as well.
David DeLallo: What are you most looking forward to working on next?
Vandi Verma: That’s such a hard question because there are so many possibilities. We tend to have a lot of ideas in different directions in the initial stages, and then we take the ones that show the most promise and develop them into missions. I’m really excited about us completing the Mars sample return; it has such high potential for actually detecting signs of ancient life. Beyond that, in terms of the missions we’re looking at next: Mars is far, but you can still control the robots with a time delay, and we’ve been doing that for a while. Once you start getting into the outer planets and beyond, though, you really have to be totally autonomous, because the radiation is much more extreme. The rover is not going to survive long enough for you to send communications back and forth, so it has to complete the entire mission autonomously, which is pretty amazing. I’m really excited about that.