Safer complex systems
AI safety | Systems engineering | Functional safety | Safety of the intended functionality
The increasing level of automation and use of artificial intelligence in complex, socio-technical systems requires new approaches to assuring safety.
A holistic perspective on safer complex systems
Multidisciplinary research
To maintain a tolerable level of residual risk despite increasing complexity and autonomy in socio-technical systems, we must significantly adapt and improve our safety assurance capabilities. This requires groundbreaking research spanning a broad range of disciplines, from AI ethics and systems engineering to safety assurance and the theoretical foundations of trustworthy machine learning.
These disciplines must interact to provide meaningful and actionable solutions to real-world problems. Collaborative research is required not only to answer the question of how to engineer safe, AI-based autonomous systems, but also to determine “what is safe enough?” and “how can we argue that this level of safety has been met?”
The work performed by the Assuring Autonomy International Programme at the University of York is an excellent example of world-class multidisciplinary research in this area.
Advanced systems engineering
Safety assurance arguments
Convincing safety assurance arguments for complex, AI-based systems are needed that provide evidence to support the following claims:
- The ethical and technical requirements on the task to be performed as well as the environment in which the system operates are sufficiently well understood.
- Data used to develop, train and argue the safety of the system are a sufficient representation of the task and environment in which the system operates.
- The performance limitations of components are well-understood and do not lead to unacceptable behaviour at the system level.
- The safety of the system is continually ensured through-life despite changes to the environment or the system itself.
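The four claims above form the skeleton of a structured safety case. As a purely illustrative sketch (not GSN-compliant, and not drawn from any published framework), such an argument can be modelled as a tree of claims, where a claim counts as supported only if it has direct evidence or all of its sub-claims are supported. The claim statements below paraphrase the list above; the evidence labels are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a simple safety-argument tree (illustrative only)."""
    statement: str
    evidence: list = field(default_factory=list)   # supporting artefacts (hypothetical labels)
    subclaims: list = field(default_factory=list)

    def is_supported(self) -> bool:
        # Supported if backed by direct evidence, or if every sub-claim is supported.
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

top = Claim(
    "The AI-based system is acceptably safe in its operating environment",
    subclaims=[
        Claim("Task and environment requirements are sufficiently understood",
              evidence=["requirements and ODD review"]),
        Claim("Data sufficiently represent the task and operating environment",
              evidence=["dataset coverage analysis"]),
        Claim("Component performance limitations do not cause unacceptable system behaviour",
              evidence=["error propagation analysis"]),
        Claim("Safety is ensured through-life despite changes to system or environment",
              evidence=[]),   # open claim: no through-life monitoring evidence yet
    ],
)

print(top.is_supported())  # the fourth sub-claim is unsupported, so this prints False
```

The point of the sketch is the gap it exposes: an argument with even one unsupported claim cannot support the top-level safety claim, which is why the through-life assurance claim is often the hardest to close.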
Regulations and standards
Prof. Dr. Simon Burton
I am an experienced engineer, researcher, consultant and manager with a passion for systems engineering and safety assurance. In recent years, I have increasingly focused on assuring the safety of complex, AI-based systems. My current roles include:
- Independent consultant
- Honorary visiting professor, University of York
- Convenor of ISO TC 22/SC 32/WG 14 and project lead ISO PAS 8800 "Road vehicles - Safety and Artificial Intelligence"
Why "Safer Complex Systems"?
I first became interested in the interaction between complexity, uncertainty and our ability to reason about safety whilst I was working on a study for the Royal Academy of Engineering together with my colleagues John McDermid, Philip Garnett and Rob Weaver. In particular, these topics resonated with the challenges that I had encountered in my work on the safety assurance of AI and autonomous systems.
Since the publication of our initial study, which was written in the first months of the COVID-19 pandemic, this topic has increased in importance. As the world digests the disruptive events of the last few years, coupled with the widespread deployment of (and resulting public scepticism concerning) AI technologies, there is more need than ever to develop capabilities for managing the emergent risk of the increasingly complex socio-technical systems upon which so much of our daily lives depend.
The impact of complexity and emergent risk is also of increased interest to the systems safety engineering community. For example, the recent setbacks in the rollout of automated driving systems can be seen as the result of underestimating the complexity of the task and environment in which road vehicles operate and the resulting uncertainty in our ability to reason about their impact on road safety.
I fully support all efforts to define and manage the consequences of the resulting paradigm shift in our approach to systems engineering and safety assurance. I am an active member of the UK Safety Critical Systems Club (SCSC) working group on Safer Complex Systems and support its efforts to illuminate this topic from a broad range of perspectives.
Below you can find more information on safer complex systems in some of my recent publications:
- Burton, Simon, John Alexander McDermid, Philip Garnett, and Rob Weaver. “Safer Complex Systems: An Initial Framework.” (2021).
- Burton, Simon, John Alexander McDermid, Philip Garnett, and Rob Weaver. “Safety, Complexity, and Automated Driving: Holistic Perspectives on Safety Assurance.” Computer 54, no. 8 (2021): 22-32.
- Burton, Simon, and John Alexander McDermid. “Closing the gaps: Complexity and uncertainty in the safety assurance and regulation of automated driving.” (2023).
- Burton, Simon, and Benjamin Herd. “Addressing uncertainty in the safety assurance of machine-learning.” Frontiers in Computer Science 5 (2023).
For a full list of my research publications, see my Google Scholar page.