Technical Challenge 1
Trustworthiness Models & Measures of Trust for ISSA
Human-Computer Interaction
We are examining how trust and trustworthiness for learning-enabled safety assurance are defined and measured across domains, including Human Factors, Formal Methods, Machine Learning and LLMs, and explainable AI. The goal is to develop a theoretical framework of trustworthiness for learning-enabled ISSA systems and to bridge the gap between how designers and engineers understand trustworthiness and how human operators and users understand it.
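As a purely illustrative sketch of what measuring trust could look like in practice, the snippet below blends a hypothetical self-report questionnaire score with an observed reliance rate into a single composite measure. The field names, 7-point scale, and weighting are assumptions for illustration, not the project's actual instruments.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TrustObservation:
    """One operator's interaction record with a learning-enabled ISSA function."""
    survey_ratings: list[float]   # e.g., 7-point Likert items from a trust questionnaire
    accepted_alerts: int          # alerts the operator acted on
    total_alerts: int             # alerts the system issued

def subjective_trust(obs: TrustObservation, scale_max: float = 7.0) -> float:
    """Normalize questionnaire ratings to the range [0, 1]."""
    return mean(obs.survey_ratings) / scale_max

def behavioral_reliance(obs: TrustObservation) -> float:
    """Fraction of issued alerts the operator actually relied on."""
    return obs.accepted_alerts / obs.total_alerts if obs.total_alerts else 0.0

def composite_trust(obs: TrustObservation, w_subjective: float = 0.5) -> float:
    """Weighted blend of self-reported trust and observed reliance.

    The weight is a placeholder; a validated framework would calibrate it
    against the system's demonstrated trustworthiness.
    """
    return w_subjective * subjective_trust(obs) + (1 - w_subjective) * behavioral_reliance(obs)

if __name__ == "__main__":
    obs = TrustObservation(survey_ratings=[6, 5, 6, 7], accepted_alerts=18, total_alerts=20)
    print(f"composite trust score: {composite_trust(obs):.2f}")
```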
Technical Challenge 2
Assured Autonomy Design Methods & Tools for ISSA
Explainable AI
We are conceptualizing methods and tools for measuring and examining the trustworthiness of learning-enabled ISSA systems through formal methods, machine learning, explainability, vision-language models, human-machine interaction/human factors, and human-in-the-loop (HITL) simulation-based studies. The goal is to develop methods, models, and evidence-based best practices to ensure that learning-enabled systems for ISSA are trustworthy, believed, and relied upon.
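One common pattern in assured-autonomy design is a runtime guard that checks a learning-enabled component's output against an explicitly stated safety condition before it reaches the operator. The sketch below is a minimal, hypothetical example of that pattern; the advisory fields and thresholds are assumptions, not values from this project.

```python
from dataclasses import dataclass

# Hypothetical safety envelope for an ISSA advisory; thresholds are illustrative only.
MIN_SEPARATION_M = 500.0
MIN_CONFIDENCE = 0.85

@dataclass
class Advisory:
    """Output of a learning-enabled component proposing a separation maneuver."""
    predicted_separation_m: float  # separation the maneuver is predicted to achieve
    model_confidence: float        # the model's self-reported confidence in [0, 1]

def runtime_guard(advisory: Advisory) -> str:
    """Only pass advisories that satisfy the stated safety condition;
    otherwise fall back to a conservative, human-reviewed path."""
    if (advisory.predicted_separation_m >= MIN_SEPARATION_M
            and advisory.model_confidence >= MIN_CONFIDENCE):
        return "PASS: present advisory to operator"
    return "FALLBACK: route to conservative procedure and flag for human review"

if __name__ == "__main__":
    print(runtime_guard(Advisory(predicted_separation_m=620.0, model_confidence=0.91)))
    print(runtime_guard(Advisory(predicted_separation_m=430.0, model_confidence=0.97)))
```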
Technical Challenge 3
ISSA Integration, Simulation & Evaluation Framework
ML-based Analyses
We are working with our industry partners to integrate, demonstrate, and test these approaches within operationally relevant Advanced Air Mobility (AAM) environments. The goal is to demonstrate and evaluate these approaches within a simulation environment and an associated measurement framework.
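As a rough sketch of what an associated measurement framework might record, the snippet below runs a stand-in for simulated AAM encounters and logs a few candidate safety metrics to CSV for later analysis. The metric names and scenario identifiers are hypothetical; an actual framework would define them with the industry partners.

```python
import csv
import io
import random

# Illustrative metric names only; not the project's actual measurement set.
METRICS = ("min_separation_m", "alert_lead_time_s", "operator_response_time_s")

def run_scenario(seed: int) -> dict[str, float]:
    """Stand-in for one simulated AAM encounter; here we just synthesize numbers."""
    rng = random.Random(seed)
    return {
        "min_separation_m": rng.uniform(300, 900),
        "alert_lead_time_s": rng.uniform(5, 60),
        "operator_response_time_s": rng.uniform(1, 15),
    }

def collect(scenarios: list[str]) -> str:
    """Run each scenario and emit one CSV row per run for the evaluation framework."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=("scenario_id", *METRICS))
    writer.writeheader()
    for i, scenario_id in enumerate(scenarios):
        writer.writerow({"scenario_id": scenario_id, **run_scenario(seed=i)})
    return buffer.getvalue()

if __name__ == "__main__":
    print(collect(["urban_corridor_01", "vertiport_arrival_02"]))
```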
Technical Challenge 4
Safety Case for In-time System-wide Safety Assurance
Formal Methods
We are developing a safety case, supported by data-driven evidence and tested with our industry stakeholders. The goal is to transform the safety case and associated tools into a Safety Case Toolkit that practitioners can use to provide the argument and supporting evidence that their systems are capable of safety assurance.
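To make the idea of a safety-case argument concrete, the minimal sketch below represents claims, sub-claims, and evidence as a tree and checks whether every branch is backed by evidence. The structure is loosely inspired by goal-structuring notations; the claims, evidence labels, and support rule are hypothetical and do not reflect the project's toolkit.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a simple safety-case argument tree."""
    statement: str
    evidence: list[str] = field(default_factory=list)    # references to supporting data/artifacts
    subclaims: list["Claim"] = field(default_factory=list)

def is_supported(claim: Claim) -> bool:
    """A claim is supported if it cites evidence or all of its sub-claims are supported."""
    if claim.evidence:
        return True
    return bool(claim.subclaims) and all(is_supported(c) for c in claim.subclaims)

if __name__ == "__main__":
    case = Claim(
        statement="The learning-enabled ISSA function is acceptably safe in its operating context",
        subclaims=[
            Claim("Hazardous states are detected in time",
                  evidence=["HITL simulation results (placeholder reference)"]),
            Claim("Operators respond appropriately to advisories"),  # no evidence yet
        ],
    )
    print("safety case fully supported:", is_supported(case))
```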