User-Adaptive Assurances for Enhancing Trust

Investigators: Eric W. Frew (PI), Nisar Ahmed, Dale Lawrence, and Brian Argrow 
Sponsor: Center for Unmanned Aircraft Systems

The move from traditional forms of automation to novel autonomy architectures requires new methods to overcome major barriers to producing certifiably dependable autonomous systems. Some barriers stem from the assumption that autonomy must be certified through process-based (standards-based) type certification. Key barriers to extending current certification methods to increasingly sophisticated autonomous systems include: verification and validation in the presence of non-determinism; autonomous perception in complex environments, with the attendant computation and communication over "big data" sets; operational and data security; and the fidelity and role of modeling and simulation. In contrast, people are permitted to operate complex systems through performance-based licensing built on assessment processes that are relatively sparse compared to certification standards. Licensing is partly a matter of necessity, since it would be costly (and probably counterproductive) to examine human performance in all possible scenarios; but it is also made possible by a certain level of trust in (some) human operators.

This project investigates the closed-loop interaction between assured autonomy and user trust, taking a model-based approach to understanding how user trust evolves in systems consisting of a supervising user and an autonomous agent. The approach combines a multivariate model of user trust with a feedback connection between user and agent. The feedback information is termed assurance, and it likewise comprises multiple aspects of the autonomous agent's state. We argue that the closed-loop interaction between user and agent can and should be designed to foster user trust. Developing design principles first requires defining the terms and salient components of these models and providing a logical framework for their interconnection. Although elements such as trust and assurance are essential in a usable autonomous system, they are also nebulous concepts with multiple meanings. Here we provide definitions and structure that enable a systematic study of the problem.
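
To make the closed loop concrete, the sketch below simulates one possible instantiation of these ideas in Python. The project description does not specify the model's mathematical form, so everything here is an illustrative assumption: trust is represented by two components (perceived competence and perceived predictability), the assurance signal is the agent's self-reported success probability, and trust updates by simple exponential smoothing. The class and parameter names are hypothetical.

```python
import random


class AutonomousAgent:
    """Hypothetical agent whose assurance is a self-reported success probability."""

    def __init__(self, competence: float):
        self.competence = competence  # true probability of task success

    def act(self) -> bool:
        # Task outcome: success with probability equal to the true competence.
        return random.random() < self.competence

    def assurance(self) -> float:
        # Assurance signal fed back to the user (assumed honest here).
        return self.competence


class SupervisingUser:
    """Hypothetical user with a two-component (multivariate) trust state."""

    def __init__(self, learning_rate: float = 0.2):
        self.perceived_competence = 0.5     # belief that the agent will succeed
        self.perceived_predictability = 0.5  # belief that assurances match outcomes
        self.lr = learning_rate

    def update(self, outcome: bool, assurance: float) -> None:
        # Exponential smoothing toward the latest evidence (an assumed update rule).
        self.perceived_competence += self.lr * (float(outcome) - self.perceived_competence)
        agreement = 1.0 - abs(float(outcome) - assurance)
        self.perceived_predictability += self.lr * (agreement - self.perceived_predictability)

    @property
    def trust(self) -> float:
        # Collapse the multivariate trust state to a scalar for reporting.
        return 0.5 * (self.perceived_competence + self.perceived_predictability)


# Closed-loop simulation: outcomes and assurances feed back into user trust.
agent = AutonomousAgent(competence=0.8)
user = SupervisingUser()
for step in range(20):
    outcome = agent.act()
    user.update(outcome, agent.assurance())
    print(f"step {step:2d}  outcome={outcome!s:5}  trust={user.trust:.3f}")
```

Under these assumptions, the trust estimate settles at a level determined jointly by the agent's true competence and the accuracy of its assurances, which illustrates the design question at the heart of the project: how the assurance signal should be shaped so that user trust remains well calibrated to the agent's actual state.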