Dr. Andrew E. Johnson graduated with Highest Distinction from the University of Kansas in 1991 with a BS in Engineering Physics and a BS in Mathematics. In 1997, he received his Ph.D. from the Robotics Institute at Carnegie Mellon University, where he developed the spin-image surface signature for object recognition and surface matching. Currently, he is a Principal Member of Technical Staff at the Jet Propulsion Laboratory, where he is developing image-based techniques for autonomous navigation and mapping during descent to planets, moons, comets, and asteroids. At JPL, Dr. Johnson has worked on technology development tasks as well as flight projects. For the Mars Exploration Rover Project, he was the lead algorithm developer for the Descent Image Motion Estimation System (DIMES), the first autonomous machine vision system used during planetary landing. Following the successful development and execution of DIMES, he has returned to developing machine vision systems for landing hazard avoidance, pin-point landing, and rover navigation. Part of this work is a collaboration with the University of Southern California and the University of Minnesota on vision-guided safe and precise landing for autonomous helicopters. In 2003, Dr. Johnson received the JPL Lew Allen Award for Excellence for "his groundbreaking contributions in the area of machine vision algorithms for safe and precise landing."
Ph.D. Robotics, Carnegie Mellon University, 1997
M.S. Robotics, Carnegie Mellon University, 1995
B.S. Engineering Physics, University of Kansas (with Highest Distinction), 1991
B.S. Mathematics, University of Kansas (with Highest Distinction, with Honors), 1991
Machine Vision Group, Jet Propulsion Laboratory, Pasadena, CA, 10/97-present
Principal Member of Technical Staff, 10/2004-present
Senior Member of Technical Staff, 10/1997-9/2004
Technology Development
Safe and Precise Planetary Landing: Developed algorithms to process visible imagery for velocity estimation, position estimation, and hazard detection during landing. Developed algorithms to process lidar scans for velocity estimation, position estimation, and hazard detection during landing.
Small Body Navigation: Developed algorithms to process visible imagery to enable station keeping, hazard detection and precision landing. Developed algorithms for multi-resolution 3D modeling of small bodies. Developed velocity estimation and hazard detection algorithms from scanning lidar data.
Surface Rover Navigation: Developed improved Visual Odometry algorithm for future Mars rovers.
Technology Validation
Pin-Point Landing Parachute Drop Tests: Led a team to develop a system for collecting descent imagery during a high-altitude parachute drop test. Used the imagery to validate the performance of pin-point landing algorithms.
Autonomous Helicopter Tests: Implemented hazard detection and landing site tracking software on an autonomous helicopter. Resulted in the first autonomous safe landing of a helicopter in unknown terrain.
Mars Exploration Rover Descent Image Motion Estimation System (DIMES) Helicopter Tests: Designed field tests and verified performance of software with imagery collected.
Mars Science Laboratory Hazard Detection Rocket Sled Tests: Designed tests and verified performance of software with data collected.
Flight Project Implementation
Deep Impact: Encounter Red Team Member, AutoNav Team Member
Mars Science Laboratory: Surface Guidance, Navigation, and Control team member; Entry, Descent, and Landing (EDL) Hazard Detection and Avoidance Lead
Mars Exploration Rover: Software Lead for the Descent Image Motion Estimation System, EDL Team, Spacecraft & Rover Engineering Operations Team, Rover Localization Team.
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 8/91-9/97
Research
Proposed and developed a new point signature (the spin-image) for matching 3D surfaces in arbitrary, unknown position and attitude. Applied the signature to problems in interior modeling and object recognition. The surface matching software is now licensed to multiple companies and research laboratories, where it is applied to robot navigation, interior modeling, and protein matching. Proposed and developed a novel mesh resampling algorithm for geometric compression. Proposed and developed a novel shape-from-shading algorithm for surface reconstruction from side-scan sonar data.
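The spin-image signature referenced above maps each neighbor of an oriented basis point (p, n) to two pose-invariant coordinates, alpha (radial distance from the normal axis through p) and beta (signed height along the normal), and accumulates them into a 2D histogram. A minimal NumPy sketch follows; the bin size and image width are illustrative defaults of mine, and the published formulation's bilinear interpolation and support-angle filtering are omitted here for brevity.

```python
import numpy as np

def spin_image(points, p, n, bin_size=0.1, image_width=10):
    """Accumulate a spin-image histogram for oriented basis point (p, n).

    points: (N, 3) array of surface points
    p:      (3,) basis point position
    n:      (3,) surface normal at p
    """
    n = n / np.linalg.norm(n)
    d = points - p                                   # vectors from basis point
    beta = d @ n                                     # signed height along normal
    # radial distance from the normal axis (clamped for numerical safety)
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))
    # map (alpha, beta) to bin indices; beta bins are centered on the basis point
    i = np.floor(image_width / 2 - beta / bin_size).astype(int)
    j = np.floor(alpha / bin_size).astype(int)
    img = np.zeros((image_width, image_width))
    keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[keep], j[keep]), 1)            # unbuffered accumulation
    return img
```

Because alpha and beta depend only on distances relative to (p, n), the resulting image is invariant to the surface's position and attitude, which is what makes it usable for matching in unknown pose.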
Cambridge Research Center, DEC, Cambridge, MA, 6/95-9/95, 6/96-8/96
Research
Created 3D data registration and integration algorithms for a modeling-from-images virtual reality system. Created a polygonal mesh simplification algorithm for generating concise representations of textured 3D scenes.
Center for Light Microscope Imaging and Biotechnology, Pittsburgh, PA, 6/92-8/92
Research
Created algorithms for detecting filamentary structures in fluorescent microscopy images of cells.
Exxon Production Research, Houston, TX, 5/90-8/90
Research
Designed and built a prototype acoustic source to be used in seismic mapping for oil recovery.
Safe and Precise Planetary Landing
Small Body Navigation
Surface Rover Navigation
3D Modeling
Object Recognition