The ability to manage risk is an indispensable part of human and machine intelligence when performing complex tasks, ranging from military operations to space exploration. Although machine intelligence plays increasingly significant roles, in most cases humans are still solely responsible for predicting and coping with risks, while robots simply execute a given plan without explicit awareness of risk. Our vision is to revise the relationship between humans and robots into a cooperative partnership in which both parties share the responsibility of managing risk. This paradigm shift would not only reduce the cognitive workload of human operators but also make a human-robot system significantly safer and more reliable, because 1) robots could respond quickly to contingencies without waiting for instructions from human operators, and 2) humans and robots could compensate for each other's weaknesses. We call this new concept of machine intelligence Risk-aware Human-cooperative Autonomy (RHA). The objective of this proposed project is to perform basic research on both autonomy algorithms and human factors engineering in order to define, study, and realize RHA.
In the proposed RHA concept, the instructions from human operators are not detailed command sequences but high-level goals together with bounds on acceptable risk. RHA is responsible for optimizing the actions of robots to achieve the goals within the risk bounds, while flexibly responding to contingencies. It also continuously informs humans of its risk assessments, and accepts changes to goals and risk bounds when necessary.
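The decision loop sketched above can be illustrated with a minimal example. This is an illustrative sketch only, not the proposal's actual algorithm: the `Plan` structure, candidate plans, and the rule "pick the highest-reward plan whose failure probability stays within the operator's risk bound, and escalate to the human when no plan qualifies" are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    # Illustrative plan representation; names and numbers are hypothetical.
    name: str
    expected_reward: float   # how well the plan achieves the stated goal
    risk: float              # estimated probability of failure

def select_plan(candidates: list[Plan], risk_bound: float) -> Optional[Plan]:
    """Pick the highest-reward plan whose risk stays within the bound.

    Returns None when no candidate satisfies the bound, signalling that
    the human operator should relax the bound or revise the goal.
    """
    feasible = [p for p in candidates if p.risk <= risk_bound]
    if not feasible:
        return None  # escalate the decision to the human operator
    return max(feasible, key=lambda p: p.expected_reward)

candidates = [
    Plan("direct route", expected_reward=10.0, risk=0.15),
    Plan("detour", expected_reward=7.0, risk=0.04),
    Plan("wait and observe", expected_reward=2.0, risk=0.01),
]

# With a 5% risk bound, the risky direct route is rejected and the
# detour is chosen as the best feasible option.
chosen = select_plan(candidates, risk_bound=0.05)
```

Under this toy model, tightening the risk bound shifts the choice toward more conservative plans, and an infeasible bound triggers the human-in-the-loop renegotiation of goals and bounds described above.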