Safer complex systems

AI safety | Systems engineering | Functional safety | Safety of the intended functionality

The increasing level of automation and use of artificial intelligence in complex, socio-technical systems requires new approaches to assuring safety.

A holistic perspective on safer complex systems

Multidisciplinary research

To maintain a tolerable level of residual risk despite the increasing complexity and autonomy of socio-technical systems, we must significantly adapt and improve our safety assurance capabilities. This requires groundbreaking research covering a broad range of disciplines, from AI ethics and systems engineering to safety assurance and the theoretical foundations of trustworthy machine learning.

These disciplines must interact to provide meaningful and actionable solutions to real-world problems. Collaborative research is required not only to answer the question “how do we engineer safe, AI-based, autonomous systems?” but also to determine “what is safe enough?” and “how can we argue that this level of safety has been met?”

The work performed by the Assuring Autonomy International Programme at the University of York is an excellent example of world-class multidisciplinary research in this area.

Advanced systems engineering

Safety-critical autonomous systems must be engineered to be resilient against internal system faults, performance insufficiencies, changes in their environment and other unforeseen circumstances.
 
The inevitable performance limitations of AI-based components must be well understood so that potential errors can be mitigated either by technical measures within the system or by restricting the system's operating conditions.
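
As a purely illustrative sketch, the Python fragment below shows one way these two kinds of mitigation can be combined: a confidence threshold as a technical measure within the system, and an operational design domain (ODD) check as a restriction of the operating conditions. All names, thresholds and conditions are hypothetical assumptions, not taken from any real system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # the model's self-reported confidence in [0, 1]

@dataclass
class OperatingConditions:
    visibility_m: float  # estimated visibility in metres
    speed_kmh: float     # current vehicle speed

# Hypothetical limits of the ODD within which the perception
# component's performance has been validated.
MIN_VISIBILITY_M = 100.0
MAX_SPEED_KMH = 60.0
MIN_CONFIDENCE = 0.9

def safe_to_act(detection: Detection, conditions: OperatingConditions) -> bool:
    """Gate the AI component's output: act only if the system is inside
    its validated ODD and the detection clears a confidence threshold.
    Otherwise the system should fall back to a safe state, e.g. by
    degrading gracefully or handing over to a human operator."""
    within_odd = (conditions.visibility_m >= MIN_VISIBILITY_M
                  and conditions.speed_kmh <= MAX_SPEED_KMH)
    return within_odd and detection.confidence >= MIN_CONFIDENCE

if __name__ == "__main__":
    det = Detection(label="pedestrian", confidence=0.95)
    cond = OperatingConditions(visibility_m=80.0, speed_kmh=50.0)
    print(safe_to_act(det, cond))  # False: visibility lies outside the ODD
```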
 
In future, the ability of complex, autonomous systems to achieve their goals in open, real-world environments could be further increased by making them anti-fragile, enabling them to adapt, and even improve, when exposed to uncertainty and disruption.

Safety assurance arguments

Convincing safety assurance arguments are needed for complex, AI-based systems: structured sets of claims, supported by evidence, that demonstrate the residual risk associated with the system is understood and acceptable.
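
To make the structure of such an argument concrete, here is a minimal sketch loosely inspired by Goal Structuring Notation (GSN), in which a claim is supported either directly by evidence or by sub-claims. The claims and evidence named below are illustrative assumptions, not a definitive safety case.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a structured assurance argument."""
    statement: str
    evidence: list[str] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim is supported if it cites evidence directly, or if
        # all of its sub-claims are themselves supported.
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

# Illustrative top-level claim with assumed sub-claims and evidence.
argument = Claim(
    statement="Residual risk of the autonomous system is acceptable",
    subclaims=[
        Claim("Hazards in the operating context are identified",
              evidence=["system-theoretic hazard analysis"]),
        Claim("Performance insufficiencies of the ML component are mitigated",
              evidence=["ODD restriction", "runtime monitoring"]),
        Claim("Remaining confidence deficits are explicitly argued over"),
    ],
)

print(argument.is_supported())  # False: one sub-claim still lacks evidence
```

In practice such arguments are far richer (GSN also captures strategies, context and assumptions), but the principle of decomposing a top-level risk-acceptance claim into evidenced sub-claims is the same.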

Regulations and standards

Regulations are required that embody and enforce societal, legal and policy expectations on AI-based autonomous systems, whilst not restricting the potential of innovation for the public good.
 
However, regulatory requirements can be abstract and difficult to interpret within a specific context.
Standards are therefore needed that translate these abstract concepts into measurable technical criteria, supported by best-practice approaches to achieving and evaluating them.
 
These regulations and standards must acknowledge the inherent uncertainty within complex, AI-based systems and its impact on risk. This affects the way safety requirements are expressed, the role of system-theoretic safety analyses, and the extent to which confidence deficits can be accepted in assurance arguments.
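
A simple, purely illustrative calculation shows why this matters. Suppose a standard translates the abstract requirement "the detector shall be sufficiently reliable" into a hypothetical quantitative criterion, a maximum miss rate; statistical confidence then dictates how much failure-free evidence is needed to support the claim. The threshold below is an assumption, not a value from any real standard.

```python
import math

MAX_MISS_RATE = 1e-4  # hypothetical quantitative acceptance criterion

def failure_free_tests_needed(max_rate: float, confidence: float = 0.95) -> int:
    """Number of independent, failure-free test cases needed to claim
    that the true failure rate is below `max_rate` at the given
    statistical confidence, i.e. the smallest n with
    (1 - max_rate) ** n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_rate))

# Demonstrating even this modest criterion requires on the order of
# 30,000 representative test cases without a single observed miss.
print(failure_free_tests_needed(MAX_MISS_RATE))  # 29956
```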

Prof. Dr. Simon Burton

I am an experienced engineer, researcher, consultant and manager with a passion for systems engineering and safety assurance. In recent years, I have increasingly focused on assuring the safety of complex, AI-based systems. My current roles include:

Why "Safer Complex Systems"?

I first became interested in the interaction between complexity, uncertainty and our ability to reason about safety whilst working on a study for the Royal Academy of Engineering together with my colleagues John McDermid, Philip Garnett and Rob Weaver. In particular, these topics resonated with the challenges that I had encountered in my work on the safety assurance of AI and autonomous systems.

Since the publication of our initial study, which was written in the first months of the COVID-19 pandemic, this topic has only increased in importance. As the world digests the disruptive events of the last few years, coupled with the widespread deployment of (and resulting public scepticism concerning) AI technologies, there is more need than ever to develop capabilities for managing the emergent risk of the increasingly complex socio-technical systems upon which so much of our daily lives depends.

The impact of complexity and emergent risk is also of increasing interest to the systems safety engineering community. For example, the recent setbacks in the rollout of automated driving systems can be seen as the result of underestimating the complexity of the task and the environment in which road vehicles operate, and the resulting uncertainty in our ability to reason about their impact on road safety.

I fully support all efforts to define and manage the consequences of the resulting paradigm shift in our approach to systems engineering and safety assurance. I am an active member of the UK Safety Critical Systems Club (SCSC) working group on Safer Complex Systems and support its efforts to illuminate this topic from a broad range of perspectives.

Below you can find more information on safer complex systems in the form of some of my recent publications:

For a full list of my research publications, see my Google Scholar page.