The researchers:
- Dr Peter Popov (Principal Investigator)
- Professor Robin Bloomfield (Principal Investigator)
- Dr Andrey Povyakalo (Co-Investigator)
- Dr David Wright (Co-Investigator)
- Dr Katerina Netkachova (Co-Investigator)
- Dr Kizito Salako (Co-Investigator)
- Dr Vladimir Stankovic (Co-Investigator)
- Professor Bev Littlewood (Co-Investigator)
- Professor Lorenzo Strigini (Co-Investigator)
- Professor Peter Bishop (Co-Investigator)
Research status: Completed
In summary
Research by academics at City, University of London is ensuring better practice and improved safeguarding against the failure of critical computer-based systems.
The academics demonstrate that the risk posed by computer-based systems of varying complexity – from simple protection systems to large distributed critical infrastructures – is at an acceptably low level, by applying the ‘Claims-Arguments-Evidence’ (CAE) method.
This method requires explicit arguments linking evidence to the claims made about the likes of safety and security, encouraging rigour and the use of analytical probabilistic models.
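To make the claim-argument-evidence relationship concrete, the tree below is a minimal illustrative sketch of how a CAE fragment might be represented in code. The class names, fields, and the simple "supported" check are our own assumptions for illustration, not the CAE notation or any tool from the project.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str  # e.g. a test report or an analysis result

@dataclass
class Claim:
    statement: str                        # the claim being asserted
    argument: str = ""                    # why the children support the claim
    subclaims: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim counts as supported here if it rests directly on
        evidence, or if every one of its subclaims is supported."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

# A toy fragment: a top-level claim decomposed over two subclaims.
top = Claim(
    "The protection system's risk is acceptably low",
    argument="Decompose over failure causes",
    subclaims=[
        Claim("Design faults are adequately mitigated",
              evidence=[Evidence("Statistical testing report")]),
        Claim("Cyber-threats are adequately mitigated",
              evidence=[Evidence("Penetration testing report")]),
    ],
)
print(top.is_supported())  # → True
```

The point of the structure is the one the method makes: every claim must be traceable, via an explicit argument, down to evidence, and an unsupported leaf is immediately visible.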
This method has been widely adopted by industry and regulators in the UK and worldwide, including the railway, energy, and autonomous vehicle industries.
What did we explore and how?
Developed over the course of several decades, this research concerns a specific form of ‘assurance case’, which demonstrates that the risk posed by a critical system is acceptably low, using CAE, recently extended with CAE Blocks. Modern CAE-facilitated assurance cases rely not only on expert judgement but also on the rigour of models suitable for quantitative risk assessment.
Using probabilistic models has long been advocated as a way of making assurance cases stronger. Recent advances demonstrated that models are particularly important for computer-based systems of significant complexity when relying on expert judgment is problematic and in the presence of significant uncertainty, e.g., as is typically the case with cyber-threats.
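As one small example of the kind of quantitative argument involved (our illustration, not a model from the project), a standard frequentist bound relates failure-free operation to a reliability claim: after a demand-based system survives n independent demands without failure, the claim that the probability of failure per demand is at most p holds with confidence 1 - alpha whenever (1 - p)^n <= alpha.

```python
import math

def demands_needed(p: float, alpha: float) -> int:
    """Failure-free demands needed to claim pfd <= p at confidence 1 - alpha,
    from (1 - p)^n <= alpha, i.e. n >= ln(alpha) / ln(1 - p)."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p))

def confidence(p: float, n: int) -> float:
    """Confidence that pfd <= p after n independent failure-free demands."""
    return 1.0 - (1.0 - p) ** n

# Claiming pfd <= 10^-3 at 95% confidence needs roughly 3,000 clean demands.
print(demands_needed(1e-3, 0.05))  # → 2995
```

The rapid growth of the required testing effort as the target probability shrinks is one reason expert judgement alone is a weak basis for high-confidence claims, and why explicit probabilistic models matter.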
Additionally, there have been extensions to assurance cases to deal with cyber-security and with security-safety ‘co-engineering’, supported by models of industrial automation, power grids and medical devices, and addressing the assurance gaps in critical applications of machine learning and artificial intelligence.
The team developed sophisticated software tools to support the application of probabilistic modelling, especially in the case of large, complex systems, and worked on quantifying systems' resilience against design faults and cyber-threats.
Benefits and influence of this research
The impact of the research by City academics can be found in its widespread adoption by industry and regulators in the UK and worldwide.
It has:
- reduced the risk of harm from malfunction or intentional sabotage of critical systems (e.g., nuclear, transportation, power supply, defence) through the application of evidence-based arguments
- improved confidence in assurance
- improved understanding of risks in future widely deployed critical systems.
In an era of machine learning (ML) and artificial intelligence (AI), this research is urgently needed and can yield significant savings, not only in research and development but also in preventing losses in industrial deployments that rely on ML and AI.
The CAE Blocks framework has also become a core part of the IAEA Software Dependability Assessment guidelines, affecting nuclear safety worldwide, and earned support from the Centre for the Protection of National Infrastructure (CPNI). CAE has been used to assess the safety of industrial systems and to develop codes of practice for security informed safety, including for the rail industry and for air traffic management for the Civil Aviation Authority.
Impact was also achieved through a partnership with Radiy, a major supplier of equipment to commerce and industry, with more than 70 installations worldwide, including safety protection systems for nuclear plants.
The Royal Academy of Engineering recently published a report informed by the work conducted by the City team.