Formal Reasoning for Compositional Systems Engineering (FORCE)

CAV 2026 Workshop
July 25, 2026
Lisbon, Portugal

Compositional Reasoning for Defense and Aerospace Systems

Darren Cofer

Formal methods are being successfully deployed at scale in production environments at 'big tech' companies, but barriers remain to their adoption by defense and aerospace companies developing national security systems. Researchers at Collins Aerospace are working to change this state of affairs. As part of the DARPA PROVERS program, Collins and our research partners are working to improve the safety and security of our systems by dramatically improving the usability, flexibility, and accessibility of formal methods development and verification tools. We are leveraging compositional reasoning tools at the system architecture level, memory-safe programming languages (Rust) augmented with software contract verification, and the formally verified seL4 microkernel, while integrating these formal methods tools and artifacts into our CI/CD pipelines to make them accessible to the defense industry workforce. This presentation will provide an overview of formal methods work at Collins, the development workflow and assurance evidence being produced, and the target demonstration platforms.
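To illustrate the flavor of software contract verification in Rust mentioned above, here is a minimal sketch (the specific contract tools used on PROVERS are not named in this abstract; the pre- and postconditions below are written as runtime assertions, which a contract verifier would instead discharge statically for all possible inputs):

```rust
/// Saturating add for a bounded sensor value (hypothetical example).
/// Precondition: both inputs lie within the legal sensor range.
/// Postcondition: the result also lies within the legal sensor range.
fn bounded_add(a: u32, b: u32) -> u32 {
    const MAX: u32 = 1000;
    // Precondition: a contract tool would require callers to prove this.
    debug_assert!(a <= MAX && b <= MAX);
    // Saturate, then clamp to the domain bound.
    let sum = a.saturating_add(b).min(MAX);
    // Postcondition: a contract tool would prove this for all inputs.
    debug_assert!(sum <= MAX);
    sum
}

fn main() {
    println!("{}", bounded_add(600, 700)); // clamped to the legal range: 1000
}
```

The point of the contract style is that the assertions become machine-checked obligations at every call site, rather than tests on a few inputs.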

Bio: Darren Cofer is a Principal Fellow at Collins Aerospace. He earned his PhD in Electrical and Computer Engineering from The University of Texas at Austin. His area of expertise is developing and applying advanced analysis methods and tools for verification and certification of high-integrity systems. He is currently leading the Collins team on DARPA’s PROVERS program, developing and applying formal methods for verification of safety and security properties in both manned and autonomous air vehicles. He served on RTCA committee SC-205 developing new certification guidance for airborne software (DO-178C) and was one of the developers of the Formal Methods Supplement (DO-333). He is currently a member of SAE committee G-34 developing certification guidance for the use of machine learning technologies onboard aircraft.

Safe Autonomous Systems via Shielding

Bettina Könighofer

Shielding is a prominent method in runtime verification that offers formal safety guarantees for autonomous agents. By using a safety model of the environment, shielding enables the automatic construction of a mechanism that blocks unsafe actions during both training and deployment. In this talk, we explore the main components of the shielding pipeline. We begin by revisiting its central assumption: the availability of a world model that accurately captures the environment. We then discuss the requirements such models must fulfill, along with recent progress in learning discrete world models. Next, we examine the synthesis of shields that provide quantitative safety guarantees in environments that exhibit both probabilistic and adversarial behaviour. We conclude by discussing shield representations that support human-understandable explanations of the shield's decisions.
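The core mechanism — intercepting the agent's proposed action and substituting a safe one when the safety model forbids it — can be sketched as follows (a toy illustration with made-up states and actions; real shields are synthesized automatically from a formal safety model, not hand-coded):

```rust
use std::collections::HashSet;

/// A toy shield: blocks (state, action) pairs the safety model labels unsafe
/// and falls back to an action assumed to be safe in every state.
struct Shield {
    unsafe_pairs: HashSet<(&'static str, &'static str)>, // assumed safety model
    fallback: &'static str,                              // assumed safe action
}

impl Shield {
    /// Return the proposed action if safe, otherwise the safe fallback.
    fn filter(&self, state: &'static str, proposed: &'static str) -> &'static str {
        if self.unsafe_pairs.contains(&(state, proposed)) {
            self.fallback // block the unsafe action at runtime
        } else {
            proposed
        }
    }
}

fn main() {
    let shield = Shield {
        unsafe_pairs: [("near_edge", "forward")].into_iter().collect(),
        fallback: "stop",
    };
    // Every action the learning agent proposes passes through the shield,
    // during both training and deployment.
    println!("{}", shield.filter("near_edge", "forward")); // blocked -> "stop"
    println!("{}", shield.filter("open", "forward"));      // allowed -> "forward"
}
```

In the synthesized setting, the `unsafe_pairs` set is replaced by a strategy computed by game solving or probabilistic model checking over the world model, which is what yields the formal guarantee.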

Bio: Bettina Könighofer is an Assistant Professor of Formal Methods and Machine Learning at Graz University of Technology. Her research focuses on runtime assurance, probabilistic model checking, and reinforcement learning. Her work on shielding was among the first to provide formal safety guarantees for deep reinforcement learning. She received her PhD from TU Graz under the supervision of Prof. Roderick Bloem in 2020. Before joining the faculty in 2023, she led the TrustedAI group at LAMARR Security Research.