Artificial intelligence (AI) has the potential to alter many aspects of military operations and improve overall operational effectiveness. One particularly complex mission domain for the U.S. Department of Defense (DOD) is air and missile defense (AMD). With the proliferation of more advanced weapons, there is a greater need for warfighters to quickly assess the situation, develop an appropriate course of action, and best utilize their warfare assets to respond. This series of activities requires warfighters to have a high level of trust in the system. However, trust in AI systems is not universally defined, and there are no common criteria for evaluating an AI system's trustworthiness. This thesis studies how the established trust factors in the literature could apply to the AMD domain to enable and enhance trust between human operators and future AI-AMD systems, and how trustworthiness can be designed into future AI-AMD systems. The thesis proposes a framework of trust and human-machine interaction (HMI) in AI-AMD systems. It then proposes a set of trust factors that accounts for the operational and organizational environment, as well as the team dynamics between the operator and the AI-AMD decision aid. Finally, this thesis uses the U.S. military solution-development framework of DOTMLPF-P (Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel, Facilities, and Policy) to develop a strategy for improving calibrated trust between the operator and the AI-AMD system.
Johnson, Bonnie W.
Naval Postgraduate School
Master of Science in Systems Engineering
Systems Engineering (SE)
Approved for public release. Distribution is unlimited.