Natural Dexterous Robot Control

Be sure to watch the videos at the bottom!

If you've ever watched an episode of Dr. Who, you've seen the titular character dashing about in the control room of his craft, manically pushing buttons, turning wheels, and fiddling with stuff that looks like it came out of a junkyard (and I'm sure a lot of it did) to precisely navigate time and space.  It's part of the don't-take-this-too-seriously charm of the whole show - there's no attempt to explain what that thing that looks like a gasoline pump actually does - somehow, the Doctor has it all figured out.

This is right after we made ATHLETE climb down off the top of that lander mockup that's at the left edge of the screen.  In high winds.

Some of the control interfaces for complex robots are equally inscrutable.  Take ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer), the gigantic robot shown towering over me to the right.  Side note: I am 6 feet 6 inches (2 meters) tall.  ATHLETE is a behemoth, and if we end up sending it to Mars it will be twice as big.  Depending on its configuration, ATHLETE has around 42 degrees of freedom in its joints, plus dozens of cameras and around 7 computers onboard.  I think the Doctor would find it quite a challenge.

We don't rush around a glowing, flashing control console to drive ATHLETE (although that would be pretty cool).  At the most basic level, all control of ATHLETE is accomplished through an intricate command language.  There are hundreds of commands, and some commands have dozens of arguments.  We depend on a variety of tools to help us remember things like what the 4th argument is for the command that moves the robot's limbs, or exactly what the parameter is called that lets you change the rate at which the robot reports the current levels in its motors.  We have a few tools that let us visualize how a command is expected to affect the robot.  In the end, though, most of us end up memorizing a lot of the commands and how to use them.  It's more like programming the robot than driving a car, and it takes a very long time to learn.  Similarly, it takes many months for a JPL engineer who is already experienced with robotics to become qualified to help operate the Curiosity Mars Rover.
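To give a rough sense of what operating through a command dictionary feels like, here's a minimal sketch.  The command names, argument lists, and values below are hypothetical stand-ins I made up for illustration - they are not part of ATHLETE's actual command language.

```python
# Hypothetical sketch of a command dictionary and command assembly.
# None of these names come from the real ATHLETE command language.

COMMAND_DICTIONARY = {
    "MOVE_LIMB": ["limb_id", "joint_index", "angle_deg", "rate_deg_per_s",
                  "coordinate_frame", "tolerance_deg"],
    "SET_TELEMETRY_RATE": ["channel_name", "reports_per_second"],
}

def build_command(name, *args):
    """Assemble a command string, checking the argument count against the dictionary."""
    expected = COMMAND_DICTIONARY[name]
    if len(args) != len(expected):
        raise ValueError(f"{name} expects {len(expected)} arguments {expected}, got {len(args)}")
    return f"{name} " + " ".join(str(a) for a in args)

# The operator has to remember, for example, that the 4th argument of MOVE_LIMB is the rate.
print(build_command("MOVE_LIMB", 3, 2, 45.0, 5.0, "BODY", 0.5))
print(build_command("SET_TELEMETRY_RATE", "MOTOR_CURRENT", 10))
```

Multiply that by hundreds of commands and you can see why it takes so long to become fluent.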

It's not reasonable to expect astronauts and other explorers in the future to learn a different command language for every robot they need to control.  I'd argue that it's not really reasonable to expect them to learn any command languages.  We have to make it possible for someone to walk up and immediately take control of a sophisticated robot simply by showing it what they want it to do.  For ATHLETE, we wanted to make controlling the robot as easy as posing its limbs.

zSpace with glasses and stylus.  Tracking sensors are in the top corners.

To make that happen, we decided to incorporate an interface device from a company called zSpace.  It's a screen that tracks your head position and allows our software to display a 3D scene in stereo.  You see those stereo views, and enable the screen to track your head position, by wearing lightweight passively polarized glasses.  In addition, zSpace precisely tracks the position and orientation of a stylus that has several buttons on it.  It's straightforward to make it appear that the stylus is casting a ray into the 3D scene, and you can use that ray like a 3D cursor to interact with things in the scene.  I find that zSpace provides a very surgical feeling that's not unlike the experience of controlling a DaVinci surgical robot.
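The idea behind that 3D cursor is simple: the tracked stylus pose defines a ray, and picking something in the scene is just a ray-object intersection test.  Here's a minimal sketch of that math - it's not zSpace's SDK, and the pose values, axis convention, and "grab handle" sphere are assumptions made up for illustration.

```python
import numpy as np

def stylus_ray(position, orientation):
    """Turn a tracked stylus pose into a pick ray.

    position: 3-vector for the stylus tip; orientation: 3x3 rotation matrix.
    Here the ray points along the stylus's local -Z axis (an arbitrary convention).
    """
    direction = orientation @ np.array([0.0, 0.0, -1.0])
    return position, direction / np.linalg.norm(direction)

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t >= 0 else None

# Made-up stylus pose and a grab handle modeled as a small sphere on a robot joint.
origin, direction = stylus_ray(np.array([0.1, 0.2, 0.5]), np.eye(3))
hit = ray_hits_sphere(origin, direction, center=np.array([0.1, 0.2, -0.3]), radius=0.05)
print("grabbed joint" if hit is not None else "missed")
```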

 
 

As shown in the video above, we built an interface using zSpace that made it possible for a user to simply grab parts of the robot and position them as desired.  It supports both forward-kinematic joint-by-joint control and inverse-kinematic control, where you position the orange versions of the end effectors at the desired locations.  Once the robot is appropriately posed, we can automatically generate commands for ATHLETE to execute.  Suddenly, controlling the basic motion of the robot became something that a person with no previous experience could learn in a few seconds.  I've demonstrated this system to dozens of visitors, and nearly all could confidently use the entire application after simply watching me use it for less than a minute.  There are no commands or arguments to memorize, no menus or keyboard shortcuts -- controlling a robot this way doesn't feel at all like programming.  You simply pose the robot the way you want and then hit a button to make the real rover pose itself the same way.
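If you're curious what "forward-kinematic" versus "inverse-kinematic" means in practice, here's a toy sketch with a two-joint planar limb - the link lengths, solver, and step size are illustrative assumptions, not ATHLETE's actual kinematics.  Forward kinematics computes where the end effector ends up for a given set of joint angles; inverse kinematics searches for joint angles that put the end effector where the user dragged it.

```python
import numpy as np

LINK_LENGTHS = (1.0, 0.8)  # made-up link lengths for a 2-joint planar limb

def forward_kinematics(angles):
    """Joint angles (radians) -> end-effector (x, y).  The joint-by-joint control mode."""
    a1, a2 = angles
    l1, l2 = LINK_LENGTHS
    return np.array([l1 * np.cos(a1) + l2 * np.cos(a1 + a2),
                     l1 * np.sin(a1) + l2 * np.sin(a1 + a2)])

def inverse_kinematics(target, angles=np.zeros(2), step=0.1, iters=500):
    """Iteratively nudge joint angles so the end effector reaches the dragged target."""
    angles = angles.astype(float).copy()
    for _ in range(iters):
        error = target - forward_kinematics(angles)
        if np.linalg.norm(error) < 1e-4:
            break
        # Jacobian-transpose update: push each joint in the direction that shrinks the error.
        a1, a2 = angles
        l1, l2 = LINK_LENGTHS
        J = np.array([
            [-l1 * np.sin(a1) - l2 * np.sin(a1 + a2), -l2 * np.sin(a1 + a2)],
            [ l1 * np.cos(a1) + l2 * np.cos(a1 + a2),  l2 * np.cos(a1 + a2)],
        ])
        angles += step * (J.T @ error)
    return angles

# The user drags the orange end effector to a reachable spot; we solve for joint angles.
target = np.array([1.2, 0.6])
solution = inverse_kinematics(target)
print("joint angles:", np.round(solution, 3), "reach:", np.round(forward_kinematics(solution), 3))
```

The interface hides all of this, of course - you just grab and drag - but that's the computation happening under the hood.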

We've since built more interfaces that expose more of the robot's capabilities, but I don't think anything we've built for ATHLETE since then can match this interface's sheer ease of use.

In April of 2014 I appeared in a segment of Man vs. The Universe, a documentary produced by Revelations Entertainment for the Science Channel.  In the clip below, I talk about the ways we might control a spacecraft in the vicinity of an asteroid, one mission that ATHLETE might be involved with.  There are some good shots of the interface in this video and you'll also see Garrett using the interface in a few clips.  He wrote a lot of the code to make it all work.