PhD Studentships on Safety of Autonomous Systems
Intel Labs has set up an international collaborative research institute on the safety of autonomous vehicles (AVs). A City, University of London team from the Centre for Software Reliability (CSR), the Research Centre for Machine Learning (RCML) and the Data Science Institute (DSI) is a key part of the institute with its proposal on “Justifying the safety of autonomous vehicle systems”. The project is led by Prof Robin Bloomfield with Profs Strigini, Garcez, Bishop and Dr Popov.
Our research will address the following questions:
- What is the basis for judging that AVs are acceptable for widespread use? How do we define and manage the trade-offs that arise between safety, efficiency, cost, etc.? How do we evaluate confidence and combine disparate evidence?
- What architecture, training and implementation strategies can be used to manage and reduce risks to passengers and the public during AV development prior to widespread acceptance, and how should their contribution to system safety be assessed?
We will address these research questions from an assurance case perspective, using it to structure and challenge the claims and assumptions both from existing research and concepts (e.g. Responsibility-Sensitive Safety, RSS) and from the solutions developed as part of this project.
We have four PhD posts to progress this work and are seeking applicants now, for the start of studies in the 2019-20 and 2020-21 academic years. The posts come with a stipend and home fees for 3 years, subject to satisfactory performance. The four research topics are:
- Developing, with Intel, an overall assurance case for an autonomous system based on the Claims, Arguments, Evidence (CAE) framework, and extending recent research on combining informal and formal logics, the use of confirmation theory, and the automated search for defeaters.
- Explainability of ML systems: how and whether this can be developed to increase our understanding and assurance of autonomous systems. Devising and evaluating knowledge extraction algorithms in practice for explaining the reasoning of CNNs under exceptional situations. Investigating issues of data efficiency and scalability of knowledge extraction as an effective method for providing assurance of deep networks.
- Combining reasoning under uncertainty with logical models of the assurance of the system. For example, how do we assess ML-based subsystems in AVs when we move from ordinary reliability levels to ultra-high reliability? What forms of testing, now in use or proposed, give what kind and level of confidence? How can AV architectures based on defence in depth and diversity be designed and evaluated? For example, how does the use of ML “ensembles” change the reasoning that AVs are safe enough? Similarly, how does combining sensors (ML-based and non-ML) affect confidence in the system assurance?
- Modelling complex traffic systems, in particular assessing the effects of road instrumentation and of the coexistence of different vehicles (AVs and conventional vehicles). For example, applying Convolutional Neural Networks (CNNs) to decision making at traffic lights, and identifying exceptions, such as reflections following showers that may appear to be obstacles under exceptional light conditions, using data and infrastructure from Intel.
The work will involve a team with experience in an exciting mix of machine learning, safety and trust, and mathematical and statistical approaches, supported by extensive empirical work.
We expect the successful applicants to work as a team with each other and with the academic staff. There will be opportunities for travel, working with ICRI partners, and spending time in Intel Labs.
How to apply
Deadline for applications: 8 March 2020
Your CV and Personal Statement should specify "ICRI SAVE project".