Immersed on Mars

 
 
 

Mission control rooms seem to be obsessed with pixels.  We hang huge screens on the walls and surround every operator with as many screens as possible.  The newer the control room, the more pixels it's bound to have.  It's like we want to completely envelop and immerse our operators in data, replacing the world around them with the world that they're monitoring.

 
 
Sorry, this is not the answer.


 
This is how geologists explore the Earth.


While I don't think that more and more monitors are the answer, I believe that immersion is critical for exploration because exploration is fundamentally about presence.  Presence is the elusive essence of being in a place, and it's a very powerful thing.  All of us are endowed with an amazing innate ability to immediately absorb and understand an environment simply by being present in it.  Presence enables us to confidently and accurately form hypotheses and plan actions, and ultimately it empowers discovery.  That is why almost all human exploration has been accomplished via physical presence in the environment.  It's why any geologist will tell you that in order to truly understand the story that an environment on Earth has to tell, you have to go there.

But this is how geologists explore Mars.


We don’t need to actually visit the canyon - let’s just look at some pictures of it instead.
— said no geologist, ever

Consider, then, the challenge faced by the scientists who help to operate the Curiosity Mars Rover.  Physical presence is, for now, beyond our reach.  Instead, scientists have to dig through a torrent of 2D images coming back from our spacecraft and piece together a mental model of its environment.  3D renderings are available, but they're almost always viewed on flat monitors.  There's nothing natural about this interface, and in 2013 we performed a carefully controlled scientific study that showed that many scientists struggle to understand the Martian environment this way.

We were confident that virtual reality was part of the answer to this challenge, so we started building systems based on this technology back in 2012.  The rebirth of VR hadn't really started yet, so it was tough going at first.  The first system we were proud of was built with the Oculus Rift, and the video below summarizes that work.

 
 

Looking back three years later, it looks pretty crude.  The Oculus Rift didn't have any kind of positional tracking, so we had to hack that on with a Vicon Bonita.  We were experimenting with different techniques for reconstructing and rendering the Martian surface - some successful and others... well, let's say it's a good thing we didn't stop here!

We couldn't stop here because even with all of its failings, this system produced dramatic benefits.  With high statistical significance, our study showed that scientists using this head-mounted system more than doubled their ability to estimate the distances of objects in the environment and more than tripled their ability to estimate the angles of objects in the environment, all without any familiarity or training with the technology.  We continued this work with other partners - first with Sony and Project Morpheus (later Playstation VR), and then with Microsoft and the HoloLens, which marked the birth of the OnSight project.  Watch the video below to get the big idea, or have a look at our press release.

 
 

I believe that OnSight is a major innovation in the way that we explore Mars.  It has been successfully deployed to mission operations for the Curiosity rover team and aspects of it are already being integrated into the operations system for the next rover mission.  I think it's also a breakthrough technology for sharing the experience of Mars exploration with the world.  We used OnSight to build a limited-engagement museum exhibit called Destination: Mars at the Kennedy Space Center Visitor Complex in Cape Canaveral, Florida.  See the video below for more about that, or check out its press release.

See you on Mars!

 
 

Natural Dexterous Robot Control

Be sure to watch the videos at the bottom!

If you've ever watched an episode of Doctor Who, you've seen the titular character dashing about in the control room of his craft manically pushing buttons, turning wheels, and fiddling with stuff that looks like it came out of a junkyard (and I'm sure a lot of it did) to precisely navigate time and space.  It's part of the don't-take-this-too-seriously charm of the whole show - there's no attempt to explain what that thing that looks like a gasoline pump actually does - somehow, the Doctor has it all figured out.

This is right after we made ATHLETE climb down off the top of that lander mockup that's at the left edge of the screen.  In high winds.


Some of the control interfaces for complex robots are equally inscrutable.  Take ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer), the gigantic robot shown towering over me to the right.  Side note: I am 6 feet 6 inches (2 meters) tall.  ATHLETE is a behemoth, and if we end up sending it to Mars it will be twice as big.  Depending on its configuration, ATHLETE has around 42 degrees of freedom in its joints, plus dozens of cameras and around seven computers onboard.  I think the Doctor would find it quite a challenge.

We don't rush around a glowing, flashing control console to drive ATHLETE (although that would be pretty cool).  At the most basic level, all control of ATHLETE is accomplished through an intricate command language.  There are hundreds of commands, and some commands have dozens of arguments.  We depend on a variety of tools to help us remember things like what the 4th argument is for the command that moves the robot's limbs, or exactly what the parameter is called that lets you change the rate at which the robot reports the current level in its motors.  We have a few tools that let us visualize how a command is expected to affect the robot.  In the end, though, most of us end up memorizing a lot of the commands and how to use them.  It's more like programming the robot than driving a car, and it takes a very long time to learn.  Similarly, it takes many months for a JPL engineer who is already experienced with robotics to be qualified to help operate the Curiosity Mars Rover.

It's not reasonable to expect astronauts and other explorers in the future to learn a different command language for every robot that they need to control.  I'd argue that it's not really reasonable to expect them to learn any command languages.  We have to make it possible for someone to walk up and immediately take control of a sophisticated robot simply by showing it what they want it to do.  For ATHLETE, we wanted to make controlling the robot as easy as posing its limbs.

zSpace with glasses and stylus.  Tracking sensors are in the top corners.


To make that happen, we decided to incorporate an interface device from a company called zSpace.  It's a screen that tracks your head position and allows our software to display a 3D scene in stereo.  You see those stereo views and enable the screen to track your head position by wearing lightweight, passively polarized glasses.  In addition, zSpace precisely tracks the position and orientation of a stylus that has several buttons on it.  It's straightforward to make it appear that the stylus is casting a ray into the 3D scene, and you can use that ray like a 3D cursor to interact with things in the scene.  I find that zSpace provides a very surgical feeling that's not unlike the experience of controlling a da Vinci surgical robot.
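If you're curious how the ray-cursor works, here's a minimal sketch of the geometry.  This is not our zSpace integration (that lives inside our Unity application) - the scene representation and the names `stylus_ray` and `pick` are purely illustrative - but it captures the idea: turn the tracked stylus pose into a ray, then use that ray to pick objects in the scene.

```python
import numpy as np

def stylus_ray(stylus_position, stylus_rotation):
    """Build a pick ray from a tracked stylus pose.

    stylus_position: (3,) stylus tip position in scene coordinates.
    stylus_rotation: (3, 3) rotation matrix for the stylus orientation.
    Returns the ray origin and a unit direction vector.
    """
    forward = stylus_rotation @ np.array([0.0, 0.0, 1.0])  # assume the stylus points along its local +Z
    return np.asarray(stylus_position), forward / np.linalg.norm(forward)

def pick(scene_objects, origin, direction):
    """Return the nearest object whose bounding sphere the ray passes through."""
    best, best_t = None, np.inf
    for obj in scene_objects:                     # obj: dict with 'center' (3,) and 'radius'
        oc = np.asarray(obj["center"]) - origin
        t = float(np.dot(oc, direction))          # distance along the ray to the closest approach
        if t < 0:
            continue                              # object is behind the stylus
        miss2 = float(np.dot(oc, oc)) - t * t     # squared distance from the ray to the center
        if miss2 <= obj["radius"] ** 2 and t < best_t:
            best, best_t = obj, t
    return best
```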

 
 

As shown in the video above, we built an interface using zSpace that made it possible for a user to simply grab parts of the robot and position them as desired.  It supports both forward-kinematic joint-by-joint control and inverse-kinematic control by positioning the orange versions of the end effectors at the desired location.  Once the robot is appropriately positioned, we can automatically generate commands for ATHLETE to execute.  Suddenly, controlling the basic motion of the robot became something that a person with no previous experience could learn in a few seconds.  I've demonstrated this system to dozens of visitors, and nearly all could confidently use the entire application after simply watching me use it for less than a minute.  There are no commands or arguments to memorize, no menus or keyboard shortcuts -- controlling a robot this way doesn't feel at all like programming.  You simply pose the robot the way you want and then hit a button to make the real rover pose itself the same way.
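For the programmers in the audience, here's a rough sketch of the two control paths described above.  It's a generic Jacobian-transpose iteration written in Python, not the code we actually run against ATHLETE, and `forward_kin`, the `MOVE_LIMB` string, and the argument format are made-up stand-ins rather than ATHLETE's real command dictionary.

```python
import numpy as np

def pose_joint(joint_angles, joint_index, new_angle):
    """Forward-kinematic control: the user grabs a single joint and sets it directly."""
    posed = list(joint_angles)
    posed[joint_index] = new_angle
    return posed

def pose_end_effector(joint_angles, target_pos, forward_kin, step=0.05, iters=500):
    """Inverse-kinematic control: the user drags the (orange) end-effector goal and we
    nudge the joints toward it with a simple Jacobian-transpose iteration."""
    angles = np.array(joint_angles, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    for _ in range(iters):
        err = target - forward_kin(angles)            # how far the foot is from the goal
        if np.linalg.norm(err) < 1e-3:
            break
        jac = np.zeros((3, len(angles)))              # numerical Jacobian, one joint at a time
        for j in range(len(angles)):
            bumped = angles.copy()
            bumped[j] += 1e-4
            jac[:, j] = (forward_kin(bumped) - forward_kin(angles)) / 1e-4
        angles += step * (jac.T @ err)                # Jacobian-transpose update
    return angles

def to_command(limb_id, joint_angles):
    """Serialize the posed limb into a command for the robot to execute
    (illustrative syntax only)."""
    return f"MOVE_LIMB {limb_id} " + " ".join(f"{a:.3f}" for a in joint_angles)
```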

We've since built more interfaces that expose more of the capabilities of the robot, but I don't think anything we've built for ATHLETE since can match this interface's sheer ease of use.

In April of 2014 I appeared in a segment of Man vs. The Universe, a documentary produced by Revelations Entertainment for the Science Channel.  In the clip below, I talk about the ways we might control a spacecraft in the vicinity of an asteroid, one mission that ATHLETE might be involved with.  There are some good shots of the interface in this video and you'll also see Garrett using the interface in a few clips.  He wrote a lot of the code to make it all work.

 
 



Mixed Reality on the Space Station

 
Astronaut Luca Parmitano using HoloLens during a complicated maintenance task in the Aquarius Reef Base, about 3 miles off the coast of Florida and 60 feet underwater.


 

Consider the challenge faced by an astronaut on the space station.  They're living inside the most complicated machine ever built, traveling at about 5 miles per second, 250 miles above the surface.  They're surrounded by equipment needed to keep them alive - all of which they have to maintain - and new scientific experiments are arriving all the time that need their attention.  There's no way that any person could be expected to be an expert in all of these things - after all, there's a person back on Earth who spent years becoming the leading expert on just one of those things!  Those experts do their best to train the astronauts before they launch, but there are so many things to learn that an astronaut often finds themselves struggling to recall a lesson on a piece of equipment from more than a year ago!  When that happens, they have to rely on lengthy, often hard-to-follow procedures for a task.  Procedures get the job done, but they're a slow way to work.

So, my team decided that a mixed reality device like the Microsoft HoloLens could be helpful here.  We launched two of them to the space station in December 2015.  We actually tried to launch two of them six months earlier, but that's another story...  

Our idea is to use the HoloLens to connect astronauts with knowledge and expertise more effectively so that they can get more done onboard the space station.  Initially, we'll be using Microsoft's new version of Skype for HoloLens, which will allow an expert on the ground to see exactly what the astronaut is doing and draw holographic annotations into the environment around the astronaut.  So, instead of saying "on the lower panel, beneath the black handle, push the green button that's third from the right unless it's flashing", an expert can simply say "push this button" and draw a circle around the button.  The astronaut simply sees a circle hovering around the button and knows exactly what to do.  With a capability like this, an expert on Earth can be far more helpful to an astronaut in orbit.

 
 

We tested this out in the summer of 2015 at NEEMO XX, NASA's underwater mission simulation environment.  A crew of "aquanauts" consisting mostly of astronauts lived in the Aquarius Reef Base for several weeks, testing out mission scenarios and technologies that might one day be used onboard the space station or during a mission to Mars.  It went really well, and you can read more about it here.

 

Aquarius Director Roger Garcia leading Luca Parmitano through the maintenance task shown at the top of the page.  Using Skype, Roger could talk to Luca as if on a normal videoconference, but could also draw annotations into Luca's world to make it clear exactly what he needed to do.

 

Our success at NEEMO was exciting, but the real deal came just a few months later when we successfully launched Sidekick to the International Space Station in December 2015 and then used it onboard in February 2016.  Here's a video!

 
 

 

I think this is only the beginning.  Mixed reality will allow us to naturally connect an astronaut with tons of information on the tasks they need to perform.  She might be able to say "Where is the part that I need to fix this?" and be led by a floating arrow right to the storage compartment where that part is stored.  Animated holographic diagrams could be displayed on top of a partially built piece of experimental equipment showing exactly where the next piece should be installed.  She could look right through a panel and see where a cable is routed that needs to be replaced.

 
Scott makes this look good.


 

Controlling (Robot) Arms with your Arms

 
 

Be sure to check out the videos at the end!

When we build a system to control a robot, we're trying to come up with a mapping from the things that a human knows how to do to the things that the robot knows how to do.  Typically, these two sets of things are extremely different (which is often why we built the robot in the first place)!  Interfaces bridge the gap by converting things that a human can do into things that a robot can do.  Good interfaces make the conversion intuitive for a human.  Most interfaces are not good interfaces.

To be fair, sometimes it's really hard to map the disappointingly small number of easily recognizable human actions to the myriad capabilities of a sophisticated robot.  A joystick makes it pretty easy to control a two-degree-of-freedom wheeled robot that can move forward/back and left/right, but what is it like to control a six-degree-of-freedom arm with a joystick?

It's awful.  Seriously, this is never going to be a pleasant experience.  

As evidence, I offer you the Kinova Jaco robotic arm.  The Jaco is an awesome robot.  It's fast, quiet, extremely safe, and reasonably easy to program (although its motion planning API needs some improvement).  It was originally created as an assistive device to be mounted on a wheelchair, but Kinova also sells it to research labs like mine.  We bought a Jaco because it's a reasonable analogue for Robonaut 2 and Valkyrie.

Alright folks, I'm about to level some harsh criticism at the control system for Jaco, a device of noble purpose that has had a positive impact on many lives.  In all fairness to Kinova, they had to build a low-cost, highly durable controller that could be used by a specific population of users with significant disabilities.  While I believe (possibly incorrectly) they could have done better, their options were extremely limited.  I applaud them for using their talents to improve lives.

Everybody got that?

Ok, this interface is not good.

Jaco comes with a three degree-of-freedom joystick (forward/back, left/right, and twist left/twist right) that includes seven additional buttons, only two of which are clearly labeled.  Those three axes and seven buttons combinatorially intertwine to produce a perfect Gordian knot of inoperability.  Here's just a taste: the Jaco robotic arm has ELEVEN operating modes that uniquely map motions of the joystick to different motions of the arm.  Which mode you're in is communicated via four mysterious, unlabeled blue lights (the astute reader will note that 4 < 11).  You can switch between modes with the unlabeled buttons, at least most of the time, and when you can't it is for inscrutable reasons.  The end result of this interface is that even after many hours of controlling Jaco with the joystick and studying the rather lengthy user manual, I often have absolutely no idea what is going to happen when I touch the joystick.  While rather entertaining, that is not, as a rule, a good place to begin when taking control of an expensive machine.

As a result, I'm reduced to randomly pushing the joystick in a few directions to see which way the arm moves in whatever mode I'm in and then, assuming it's a suitable mode, trying to remember that mapping until I have to switch to another mode (sometimes successfully) to accomplish something else.  It's like trying to play a song on a piano that has random notes assigned to the keys and which scrambles the keys around every now and then.  A visitor to my lab has no hope of controlling the arm without a great deal of coaching (though it's fun to watch them try).  Plus, this whole approach depends on being able to directly observe the robot as you control it, something that's hard to do if it is outside the space station and you aren't!

My team decided to build an interface that would make controlling the robot arm as easy as controlling your own arm.  Put simply, we decided to make our arm the joystick.

We've worked with the first-generation Kinect for many years in my lab - we actually started working with it before its release - but this was the first thing we built using the new Kinect.  That meant that the first day was spent mostly in coordinate system conversion hell.  The creators of Unity3D (our graphics engine of choice) made many amazing choices, but for committing the sin of basing Unity on a left-handed coordinate system I hereby sentence them to 4 hours of swapping and negating axes while trying different Euler angle rotation orders, times the number of Unity developers in the world -- a rather unfortunate penance of 912 years.  Garrett has gotten pretty good at this process, which is another way of saying that he's doomed himself to being involved every time we have to do this, which is surprisingly often because every single piece of equipment we work with uses a right-handed coordinate system.
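To give you a concrete taste of that drudgery, here's roughly what one of those conversions looks like.  I'm assuming, purely for illustration, a right-handed, Y-up source frame that differs from Unity's left-handed frame by a mirrored Z axis - the actual axes and quaternion ordering depend on which device you're talking to.

```python
import numpy as np

def position_rh_to_unity(p):
    """Mirror the Z axis to move a point from the right-handed frame into Unity's."""
    x, y, z = p
    return np.array([x, y, -z])

def quaternion_rh_to_unity(q):
    """Convert a rotation quaternion (x, y, z, w) across the same Z mirror.

    Mirroring one axis flips handedness, which negates the rotation angle and
    mirrors the rotation axis; for a Z mirror that works out to negating the
    X and Y components of the quaternion.
    """
    x, y, z, w = q
    return np.array([-x, -y, z, w])

# Both conversions are their own inverse: applying them twice gets you back
# to the original right-handed values.
```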

We also couldn't interface directly with the Kinect's or Jaco's APIs because Unity is based on Mono, which is exactly like .NET except for when it would be a huge pain for it not to be.  We've learned that it's best not to even try to consume a .NET API in Unity, even if it should work.  That way lies hours of rage followed by a dejected retreat to where you should have started - just putting everything behind a web-socketed server.  If it's all on the same network, the latency usually isn't that bad, and this approach has the additional bonus of making it easy to use these devices from any of our lab computers without having to install drivers or even rewire things.
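Here's a bare-bones sketch of that pattern: wrap the native SDK in a tiny server and let Unity (or anything else on the network) consume JSON over a web socket.  The Python `websockets` package and the `read_device_state` placeholder are illustrative choices, not the actual bridge we wrote.

```python
import asyncio
import json
import websockets  # pip install websockets

def read_device_state():
    """Hypothetical stand-in for a call into the native device SDK."""
    return {"joints": [0.0, 0.1, 0.2], "timestamp": 0.0}

async def stream_state(websocket):
    """Push the latest device state to a connected client about 30 times a second."""
    while True:
        await websocket.send(json.dumps(read_device_state()))
        await asyncio.sleep(1 / 30)

async def main():
    # Any client on the lab network can now read the device as plain JSON,
    # with no native drivers installed locally.
    async with websockets.serve(stream_state, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```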

Kinect gives us the 3D location of the joints in your arm, so in theory we could try to use those to directly position the joints of Jaco.  It turns out that's not a good idea -- Jaco is less capable than a human arm and its joints aren't even in the same spots.  Instead, we just tell Jaco to position its hand at the same position and orientation as the human's hand (in its own frame of reference) and ignore what the rest of the arm does to get it there.  This means that the only joint positions we really needed from Kinect to control Jaco were the hand and fingertip.
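In code, that retargeting boils down to something like the sketch below.  The shoulder-relative offset-and-scale mapping and the `move_cartesian` call are hypothetical stand-ins - the real Jaco API and our transform chain are more involved - but the structure is the same: compute a Cartesian goal from the tracked hand and let the arm's own inverse kinematics worry about the joints.

```python
import numpy as np

def hand_to_robot_frame(hand_pos, shoulder_pos, robot_base_offset, scale=1.0):
    """Map the tracked hand into the robot's workspace.

    Working relative to the user's shoulder means it doesn't matter where the
    user stands; the result is then scaled and re-origined into the robot's frame.
    """
    relative = np.asarray(hand_pos) - np.asarray(shoulder_pos)
    return np.asarray(robot_base_offset) + scale * relative

def command_end_effector(arm, target_pos, target_rot):
    """Send a Cartesian goal (position + orientation quaternion) and let the arm's
    own inverse kinematics decide what the joints do.  `arm.move_cartesian` is a
    hypothetical wrapper, not the real Kinova API."""
    arm.move_cartesian(position=target_pos, orientation=target_rot)
```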

We didn't want to stop there, though, because waving your arm around while staring at a rendered arm on a computer screen isn't what you'd call natural (though again, it's entertaining for others to watch).  Like with the joystick, it can be a bit hard to know which way you need to move your arm to trigger the desired motion of the robot arm.  It certainly doesn't feel like the arm on the screen is your arm.

This is really just a problem of perspective, and we solved it with the Oculus Rift head-mounted display.  We grabbed one more bit of information from the Kinect - the user's head position - and used it to control the point of view of the Rift in the Unity3D scene.  Later, we also filled the scene with sensor data acquired from a PrimeSense 3D sensor located near the robot so we could see a live 3D model of the robot's surroundings.
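The viewpoint control itself is simple - something along these lines, where `HeadTrackedCamera` and the smoothing constant are illustrative rather than what our build actually used.

```python
import numpy as np

class HeadTrackedCamera:
    """Drive the virtual viewpoint from tracking data: position from the Kinect
    skeleton's head joint, orientation from the Rift's onboard tracker (the
    original Rift reported orientation only)."""

    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing           # exponential smoothing to tame skeleton jitter
        self.position = np.zeros(3)

    def update(self, kinect_head_pos, rift_orientation_quat):
        target = np.asarray(kinect_head_pos, dtype=float)
        self.position += self.smoothing * (target - self.position)
        return self.position, np.asarray(rift_orientation_quat, dtype=float)
```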

You're probably wondering if we could possibly add any more gadgets into this system (there might be room for a Clapper in here somewhere), but we didn't need to.  We had crossed a threshold into something remarkable.  I have built numerous systems to control different types of robots and there's a delightful and highly addictive *click* when we finally arrive at something that feels right.  With the Rift on, we felt a sense of presence in the robot's environment.  On top of that, in Alex's words, "the robot arm started feeling like an extension of my own arm".  I won't claim that it actually felt like my own arm, because there was no haptic feedback.  Additionally, Jaco can't move as quickly as my arm and so it was often playing "catch up".  Even with these limitations, a visitor to my lab can immediately take control of the arm with zero training and usually pick a block up off a table after a minute or two.  Furthermore, you don't even need to be in the same zipcode as the robot.  Have a look at the video below to see the system in action!

 
 

I've mentioned latency a few times without mentioning the pesky speed-of-light problem that we have to deal with when controlling space robots from Earth.  It's not that I forgot about it -- believe me, this isn't something that people in my line of work forget about.  It's just that we are targeting this work on low latency operational scenarios like an astronaut inside the space station controlling a robot outside the space station.

Controlling a humanoid robot arm with a human arm isn't too big of a stretch, but how far can we take this approach?  Around the same time as our work with the Oculus and Kinect, we performed a series of experiments with the Leap Motion hand tracker.  One of the craziest (and coolest) things we did with it was use it to control a giant six-legged robot called ATHLETE.  You can see what that looks like (and a few more things we did with the Leap Motion at JPL) in the video below.

 

We actually even drove the real robot this way, live from a stage at the Game Developers Conference.

 

Holograms on the Vomit Comet

 

That's me!  Mixed reality in microgravity is every bit as cool as it sounds.

 

One of the projects I led put a pair of Microsoft HoloLens mixed reality devices on the International Space Station.  Before we did that, we had to make sure that the devices were actually going to work in orbit.  The issue is that the HoloLens relies on a bunch of sensors to keep track of where it is at all times, and one of those sensors pays attention to the direction of gravity.  So the problem is not that there's no gravity on the space station (there's about 90% of what there is on Earth), but that the space station is constantly falling.  That's what orbit is -- falling all the time and never hitting the ground because you're moving forward so quickly that you miss it!  We had to make sure that the HoloLens wouldn't lose track of which way it was pointing if it couldn't figure out which way was "down".

Our collaborators at Microsoft did a ton of simulations that all looked very encouraging, but the bottom line was that we didn't feel comfortable launching the HoloLens to the space station until we actually tested how it would perform in microgravity.  There's really only one way to test how a piece of equipment reacts to an (apparent) lack of gravity, and it looks like this.

 
NASA's C-9 "Weightless Wonder" reduced gravity aircraft. &nbsp;Yes, that is exactly&nbsp;how they fly it.

NASA's C-9 "Weightless Wonder" reduced gravity aircraft.  Yes, that is exactly how they fly it.

 

NASA calls it the "Weightless Wonder" but I'm guessing that's just because they decided that the more popular name of the "Vomit Comet" was less publicly acceptable.  And just to get it out of the way, yes, that's an accurate name but no, I personally did not get sick.

Over the space of two days I went on three flights, each consisting of 40 parabolic flight maneuvers.  During each parabola, the pilots first point the nose of the plane at the sky to climb to around 32,000 feet, then cut the engines and let the plane fall like a rock for about 20 seconds, then point the nose of the plane at the ground and gun the engines to pull back into a new climb.  So, every minute is divided roughly in half, with half the time consisting of hypergravity (a bit under 2 gees) and the other half consisting of microgravity (around 0 gees).  This rather unique oscillation is what causes some participants in reduced gravity experiments to part with the contents of their stomach.  It probably also has something to do with the fact that you are gaining and losing 8,000 feet of altitude in the space of about a minute, over and over again.

I know this sounds like the world's largest roller coaster (not an entirely inaccurate description), but the whole point of all of this insanity is those brief spans of microgravity freefall that are quite similar to what the space station experiences constantly.  Unfortunately, all microgravity experiments on the plane have to be designed to fit entirely within those 20-second blips, so we had to move quickly.

 

I didn't see a diagram like this until after the flight.  It's probably just as well!

 

Every minute looked about like this:

0:00 - 0:20: Lie flat on your back while the pilot pulls ~2 gees.  You can hear the engines roaring and it feels like someone is sitting on your chest.  Lifting or turning your head during this time is a bad idea.

0:20 - 0:40: The engines of the plane suddenly go quiet and every part of your body starts to feel very light.  You hear the flight director shout "Push!" You push up from the floor a bit, but not too hard!  If you do anything that's close to a jump you'll fly up and smack into the ceiling! Quickly work through the steps of your test procedure as you float lazily through the air.

0:40 - 1:00: "Feet DOWN!" shouts the flight director as you fall to the floor (sometimes bouncing right back up in the air for a second).  You have just a couple of seconds to get on your back and in your assigned spot before the 2 gees are back on your chest.

Repeat 120 times.

Our experiments were a complete success.  The HoloLens handled microgravity just fine, and we were even able to test out some of the applications that we expect to run onboard once we launch this winter.

Pictures don't do it justice, so here's a video of us that shows what it was like:

 
 

What I Learned from Mars Polar Lander

 
 

My first mission at JPL taught me a lot of things, including that space exploration isn't for everyone.  When I started, my task was to help develop the tools that would control the Mars Polar Lander after it landed.  It was an extremely challenging mission, in part because it was to be an emblem of the new "Faster, Better, Cheaper" approach to space exploration.  There wasn't enough money for anything.  Still, it somehow made it to the launchpad, survived the slightly controlled explosion that we call "launch", and dodged all of the navigational mistakes that had doomed its sister spacecraft, the Mars Climate Orbiter, a few months earlier.  However, as anyone in my business will tell you, a lander's launch and cruise are nothing compared to the trial that follows.

The now fairly famous "seven minutes of terror" of Martian Entry, Descent, and Landing (EDL) could be considered shorthand for the "seven minutes of contemplating if the last 4 years of your life have amounted to anything," the "seven minutes of wondering if you'll have a job tomorrow," and for some, the "seven minutes of considering what a congressional inquiry might be like."  For Mars Polar Lander it could also be called the "seven minutes of terror AND UTTER SILENCE" because the aforementioned scant budget prevented the inclusion of any system that would allow the spacecraft to communicate with us during those seven minutes.  Everything looked perfect on final approach.  The spacecraft went quiet exactly when expected.  We watched the clock for seven minutes.  And then heard nothing.

Just to be clear, we didn't hear "Mayday, mayday, this is Mars Polar Lander, and I'm going down!"  or "Mars Polar Lander here.  I'm sorry guys, but this is farewell."  Just NOTHING.

You might think it was time to give up.  After all, not hearing back from a machine whose top priority (aside from not making a huge hole in the ground) was to phone home is really not a good sign.  But it turns out that there are a lot of things that can go wrong in those seven minutes, and some don't mean the spacecraft is dead.  The spacecraft might have reset right after landing, might have gotten confused about what time it was, might have lost track of which way it was facing, might have suffered a momentary antenna malfunction, and so on.  These faint threads of hope weave into a tangled mess called a fault tree, and at JPL we don't give up until every branch has been tested and ruled out.  So for many days afterward, the team gathered at precisely appointed times as we turned the antennas of the Deep Space Network to the sky in what can only be described as a gesture of hope.

 
 

Most of the news crews were long gone, but I'm afraid they hadn't forgotten us.  Many were busy publishing articles openly criticizing NASA and questioning whether it was really worth the investment of taxpayer money.  There was talk of a special investigative commission in Congress.  Then, just when all hope seemed lost, when we were just about to give up, on the day of the very last communication opportunity, something remarkable happened...

All hope was lost.  We did give up.

I'm sorry, folks - not every story of space exploration ends like Apollo 13, with a radio crackling to life and a room erupting in celebration and tears.  Some stories just end with the tears.


Project Manager Richard Cook informing the media of mission failure.  He stayed.

I wish I could say that at JPL, we all stolidly accept these stakes and simply move on.  Could you calmly accept that four years of struggle and sacrifice had been obliterated in an instant?  That the call of congratulations from Capitol Hill would instead be a call for judgement?  I was fortunate - I had spent a fraction of the time on the mission that many others had, but even for me it was a heavy blow.  Recall that I was building the software to control the spacecraft after it landed, so I had to face that my software was simply never going to be used.  Some did leave.  I stayed because I saw colleagues, who had worked on the mission from the beginning, who did accept the failure and move on.  I also knew that it couldn't always be like this.  If I had to experience a mission failure I was determined to also experience mission success.

It would take five more years.  Like I said, space exploration isn't for everyone.