The Mobility and Robotic Systems Section develops image interpretation algorithms that enable the automation of scientific tasks onboard spacecraft, allowing them to serve as remote robotic assistants for scientists. A main goal of sending robots into space is to acquire scientific data that are transmitted back to Earth for analysis. However, the rapidly increasing amount of data acquired by these robots, compared with the relatively limited transmission capabilities, can prevent the download of all of the data. This problem calls for a way to select, at the remote site, the data most likely to contain the desired science information. This process is called "Autonomous Science Data Processing," or "Onboard Science."
The objective of image processing for onboard science is to autonomously establish whether specified symbolic information is present in an image, using criteria developed by scientists. This processing is done using a set of programs called Symbolic Perception of ObjecTs for Target Extraction and Recognition programs (SPOTTERs) that detect, identify, or recognize objects in an image. The image analysis modality used by a particular SPOTTER (e.g., single images, color or multispectral images, or sequences of images) is defined by the characteristics of the object that it must extract from the image. The Mars Exploration Rovers (MERs) have used Sky, Cloud, and Dust Devil SPOTTERs, while a Rock SPOTTER is being considered for use on the 2011 Mars Science Laboratory (MSL). These applications are described below.
On Mars, clouds are important aeolian events that provide information about winds and seasonal characteristics. But their presence is rare, and best-guess campaigns aimed at imaging them at specific times have a low success rate. To increase the probability of detection, an autonomous Cloud SPOTTER has been developed, enabling the rovers to search for clouds many times during a single day and greatly increasing their likelihood of success. Figure 1 above provides the result of using this extractor on MER, where the original scene is shown along with its summarization as a binary thumbnail image that is automatically returned as part of the acknowledgement of command execution. Running the extractor has a low cost in terms of onboard resources since it discards those images that do not contain clouds, saving storage and bandwidth for the few images that have clouds. This low cost enables searches at times when clouds are not expected to be present, adding discovery to the previous goals of verification and monitoring.
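The source does not describe the Cloud SPOTTER's internals, but its behavior (flag cloudy images, discard clear ones) can be illustrated with a minimal, hypothetical sketch. It assumes clouds appear as bright pixels against the darker sky; the function name, thresholds, and the list-of-lists image representation are all illustrative, not the flight implementation.

```python
# Hypothetical sketch of a threshold-based cloud detector.
# Assumption: clouds show up as pixels brighter than the surrounding sky.
def detect_clouds(image, sky_mask, threshold=180, min_pixels=5):
    """Return a binary cloud mask, or None when the image has no clouds.

    image    -- 2-D list of grayscale intensities (0-255)
    sky_mask -- 2-D list of booleans marking which pixels are sky
    """
    mask = [[image[r][c] > threshold and sky_mask[r][c]
             for c in range(len(image[0]))]
            for r in range(len(image))]
    count = sum(cell for row in mask for cell in row)
    # Discard cloud-free images to save onboard storage and bandwidth.
    return mask if count >= min_pixels else None
```

In this sketch, the returned binary mask plays the role of the summarizing thumbnail, while a `None` result corresponds to discarding the image onboard.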
Of similar interest and rarity in the Martian atmosphere are the phenomena known as dust devils. Figure 2 shows the output of our Dust Devil SPOTTER onboard MER. This software is designed to use image sequences to decide whether there is a dust devil in the scene. As in the Cloud SPOTTER case, a central criterion for the automatic extraction is reliability close to that of an expert on Earth, preventing loss of valuable data. This performance must be achieved for the usual case in which the events are extremely faint and difficult to discern with the naked eye. When in doubt, the extractors are biased to err on the side of caution and preserve the data.
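The use of image sequences suggests change detection between frames from a fixed camera. The following sketch shows that idea in its simplest form; the thresholds and function names are illustrative assumptions, not the actual MER algorithm. Consistent with the stated bias toward caution, borderline cases count as detections.

```python
# Hypothetical sketch: detect moving, faint features (e.g., dust devils)
# by differencing consecutive frames from a stationary camera.
def detect_dust_devil(frames, diff_threshold=15, min_changed=3):
    """Return True if any pair of consecutive frames differs enough.

    frames -- list of 2-D grayscale images (lists of lists, 0-255)
    Biased toward caution: a small number of changed pixels suffices,
    so doubtful cases are preserved rather than discarded.
    """
    for prev, cur in zip(frames, frames[1:]):
        changed = sum(
            abs(cur[r][c] - prev[r][c]) >= diff_threshold
            for r in range(len(cur)) for c in range(len(cur[0])))
        if changed >= min_changed:
            return True
    return False
```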
Beyond extracting specific atmospheric phenomena, it is sometimes valuable to simply identify the sky, enabling its logical separation from the terrain in images. Figure 3 shows the results of our Sky SPOTTER, an extractor low in the symbolic image hierarchy, i.e., its output could be considered a primitive image region that could be further decomposed. For example, higher level extractors, like the Cloud SPOTTER described above and the Rock SPOTTER described below, rely on the quality of the Sky SPOTTER to discriminate sky from ground, enabling them to focus their searches on specific parts of the image.
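One simple way to separate sky from terrain, sketched below under the assumption that the sky occupies the brighter upper portion of the image, is to find the row where mean brightness drops sharply (the horizon) and label everything above it as sky. The names and threshold are hypothetical; the real Sky SPOTTER is not described in the source.

```python
# Hypothetical horizon-based sky/ground segmentation sketch.
def find_horizon_row(image, drop=30):
    """Return the index of the first row whose mean brightness falls
    by at least `drop` relative to the row above it, or len(image)
    when no such drop exists (e.g., an all-sky image)."""
    means = [sum(row) / len(row) for row in image]
    for r in range(1, len(means)):
        if means[r - 1] - means[r] >= drop:
            return r
    return len(image)

def sky_mask(image, drop=30):
    """Label every pixel above the horizon row as sky (True)."""
    horizon = find_horizon_row(image, drop)
    return [[r < horizon for _ in range(len(image[0]))]
            for r in range(len(image))]
```

Such a mask is exactly the kind of primitive region that higher-level extractors can consume to restrict their searches.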
Similarly, segmenting rocks from the background terrain (e.g., sands, dunes) has value in many scenarios. Figure 4 shows the output of our Rock SPOTTER on imagery acquired in JPL's Mars Yard. The Rock SPOTTER uses the output of the Sky SPOTTER to prune out the sky areas of the image (if any) and focus its search solely on the ground. Some variations of the Rock SPOTTER, like the one developed to analyze micro-photography images of Martian soil, do not require the use of the sky detector.
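The pipeline described (mask out the sky, then group rock pixels on the ground) can be sketched with a basic connected-component search. This assumes rocks appear as dark regions against brighter terrain; the thresholds, names, and flood-fill approach are illustrative stand-ins for the actual Rock SPOTTER.

```python
# Hypothetical rock segmentation sketch: dark ground pixels are grouped
# into connected regions; sky pixels (from a sky mask) are skipped.
def find_rocks(image, sky_mask, rock_threshold=90, min_size=2):
    """Return a list of rock regions, each a list of (row, col) pixels.

    A candidate rock pixel is a dark, non-sky pixel; 4-connected
    candidates are grouped by flood fill and tiny groups discarded.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    rocks = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or sky_mask[r][c] or image[r][c] > rock_threshold:
                continue
            # Flood fill one connected dark region.
            stack, region = [(r, c)], []
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and not sky_mask[ny][nx]
                            and image[ny][nx] <= rock_threshold):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(region) >= min_size:
                rocks.append(region)
    return rocks
```

Passing an all-`False` sky mask mimics the variants, such as the micro-photography one, that operate without a sky detector.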
All of these technologies are scalable in terms of timing, memory usage, and operational scenarios. For example, the Cloud and Dust Devil SPOTTERs were designed to operate in real time under the severe engineering constraints of the MERs (small onboard processing power, constrained memory, low bandwidth) while still providing the high hit-to-miss detection ratios established by the scientists. Beyond these goals, these extractors are very robust and can operate on any image of any scene acquired by the MERs without scene-specific tuning. With this precedent, the Cloud and Dust Devil SPOTTERs are already scheduled to be used by the MSL rover.
Despite these successes, some future robotic exploration scenarios call for autonomy that is more complex than the detection of an event. Instead, results from image processing serve as inputs to higher level onboard decision making software for a variety of purposes: targeting, path planning, prioritization, navigation, issuance of science alerts, customized image compression, etc. For example, the Sky and Rock SPOTTERs described above are used by the Onboard Autonomous Science Investigation System (OASIS) whose goal is to act autonomously to preserve evidence of interesting rocks imaged during long rover traverses.
For possible future use on MSL, OASIS sponsored the development of the Rock Crosshair SPOTTER, a modified version of the Rock SPOTTER that serves as the targeting mechanism for the ChemCam, a hybrid optical instrument with a powerful laser. When the ChemCam laser hits a rock, the rock emits a spark that can be analyzed by the instrument's camera. Figure 5 shows the output of the Rock Crosshair SPOTTER indicating likely target locations, where the target parameter has been set to rock size. This type of automated data acquisition might save up to one full day of operations every time the ChemCam is used, by eliminating the need for the rover to remain stationary waiting for operators to select targets for it.
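Since the target parameter in the example is rock size, the targeting step reduces to ranking detected rock regions by size and aiming at the best one's center. The sketch below illustrates that idea only; the function name and the centroid choice are assumptions, not the actual Rock Crosshair SPOTTER logic.

```python
# Hypothetical targeting sketch: with the target parameter set to rock
# size, pick the largest detected rock and aim at its centroid.
def select_target(rocks):
    """Return the (row, col) centroid of the largest rock region as a
    candidate laser aim point, or None when no rocks were detected.

    rocks -- list of regions, each a list of (row, col) pixel tuples
             (e.g., the output of a rock segmentation step)
    """
    if not rocks:
        return None
    biggest = max(rocks, key=len)
    row = sum(p[0] for p in biggest) / len(biggest)
    col = sum(p[1] for p in biggest) / len(biggest)
    return (row, col)
```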