Centre for Software Reliability

PhD Research Projects

Full details of the application process are available online.  On your application form, please state that you wish to be considered for admission to the Centre for Software Reliability.

Some example topics:

Self-adaptive Software Systems

Cristina Gacek

An important behaviour expected of modern software systems is the ability to adapt at run time in order to address fault-tolerance concerns, changes in user needs and the operational environment, and to optimise resource utilisation. I am interested in supervising projects exploring self-adaptation to support green (sustainable) computing, the provision of assurances in the face of adaptation, and the explicit avoidance of incompatibilities being introduced during self-adaptation.
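
By way of illustration (a minimal sketch, not drawn from any particular CSR project): self-adaptive systems are commonly structured as a feedback loop that monitors the running system, analyses the observations, plans a change and executes it, as in the MAPE loop of autonomic computing. In the Python sketch below, all metrics, thresholds and actions are invented.

    # Minimal sketch of a MAPE-style self-adaptation loop.
    # Metrics, thresholds and actions are hypothetical, for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Metrics:
        cpu_load: float      # fraction of capacity in use, in [0, 1]
        response_ms: float   # observed response time in milliseconds

    def monitor() -> Metrics:
        # Collect runtime observations from the managed system (stubbed here).
        return Metrics(cpu_load=0.15, response_ms=120.0)

    def analyse(m: Metrics) -> str:
        # Decide whether the system currently meets its goals.
        if m.response_ms > 250.0:
            return "degraded"
        if m.cpu_load < 0.2:
            return "over-provisioned"   # an opportunity to save energy
        return "ok"

    def plan(state: str) -> list[str]:
        # Choose adaptation actions; a real planner would also have to
        # assure that the change introduces no incompatibilities.
        return {"degraded": ["add_replica"],
                "over-provisioned": ["remove_replica"],
                "ok": []}[state]

    def execute(actions: list[str]) -> None:
        for action in actions:
            print(f"applying adaptation: {action}")

    execute(plan(analyse(monitor())))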

People and Software Engineering

Cristina Gacek

Software development is a human-intensive activity, at times relying on solo undertakings but very often on group work. Although this fact is widely recognised, research efforts in this area are still far from covering the full spectrum of relevant issues. I am interested in supervising projects that explore sociological and/or psychological factors supporting successful group work in a software engineering environment.

Safety Cases and Probabilistic Measures of Dependability of Software-based Systems

Bev Littlewood

CSR has worked for years on the problems of evaluating the dependability of software-based systems. The aim is to obtain trusted probabilistic measures of safety, reliability and security. Much of this work has involved Bayesian approaches to uncertainty, and we see this as continuing. A lot of our work concerns fault-tolerant systems, in which multiple software systems are developed "independently" in the hope that if one fails, the others will succeed. Such systems are widely used in (e.g.) nuclear protection systems, and it is important that justifiable trust can be placed in their correct functioning. The technical problems here are many, and should be of interest to probabilists/statisticians and computer scientists (particularly those with a background in formal methods).
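
One well-known subtlety, shown here as a toy numerical sketch with invented figures: even versions developed independently do not fail independently, because some demands are intrinsically hard for every development team. In the model below (in the spirit of the Eckhardt-Lee and Littlewood-Miller analyses), the probability theta(x) that a version fails varies across demands x, so the expected probability that both versions of a 1-out-of-2 pair fail, E[theta(X)^2], exceeds the naive independence figure E[theta(X)]^2 by the variance of theta(X).

    # Toy illustration: correlated failures in a 1-out-of-2 diverse pair.
    # theta(x) is the probability that an independently developed version
    # fails on demand x. All numbers are invented.

    demands = [
        # (probability the demand occurs, difficulty theta(x))
        (0.70, 1e-6),   # easy, frequent demands
        (0.29, 1e-4),   # moderately hard demands
        (0.01, 1e-2),   # rare but very hard demands
    ]

    # Single-version probability of failure on demand: E[theta(X)]
    pfd_single = sum(p * th for p, th in demands)

    # Both versions of the pair failing together: E[theta(X)^2]
    pfd_pair = sum(p * th * th for p, th in demands)

    # What naive independence would predict: (E[theta(X)])^2
    pfd_independent = pfd_single ** 2

    print(f"single version pfd:       {pfd_single:.3e}")
    print(f"1-out-of-2 pair pfd:      {pfd_pair:.3e}")
    print(f"naive independence claim: {pfd_independent:.3e}")
    # The pair is far better than one version, but much worse than
    # independence would suggest: failures cluster on the hard demands.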

Studies in automation bias

Lorenzo Strigini

When computers advise people in their decisions, paradoxical "over-reliance" situations may arise, in which the computer's help makes people more likely to make mistakes rather than less likely. Theses are available to study, by experiment and mathematical modelling, how these situations arise, so that designers of systems ranging from medical diagnosis aids to collision-alert systems for vehicles can forecast and avoid them. This is an interdisciplinary topic straddling computing and psychology. To learn more about the open research questions, you may be interested in this paper and in our previous work on a medical application.
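
As a toy illustration of the modelling side (all probabilities invented, and far cruder than the models used in actual studies): suppose an unaided human decides correctly with probability h, the computer's advice is correct with probability a, and the human simply follows the advice with probability f. The aided accuracy is then f*a + (1-f)*h, which can fall below h when reliance is heavy and the aid imperfect.

    # Toy model of automation bias (all numbers invented for illustration).
    # h: probability the unaided human decides correctly
    # a: probability the computer's advice is correct
    # f: probability the human simply follows the advice (over-reliance)

    def aided_accuracy(h: float, a: float, f: float) -> float:
        # Human follows the advice with probability f; otherwise decides unaided.
        return f * a + (1.0 - f) * h

    h = 0.95   # a competent unaided operator
    a = 0.90   # a decent but imperfect advisory system
    for f in (0.0, 0.5, 0.9):
        acc = aided_accuracy(h, a, f)
        print(f"follow-rate {f:.1f}: accuracy {acc:.3f} (unaided {h:.3f})")
    # With heavy reliance (f = 0.9), accuracy drops to 0.905, below the
    # unaided 0.95: the "help" has made mistakes more likely, which is
    # the paradox described above.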

Measuring resilience

Lorenzo Strigini

When computer-based systems have critical functions, it is natural to test how well they would resist internal faults, accidental misuse or deliberate attacks by artificially causing these kinds of stress. Many techniques in current use in dependability and security apply this approach: for instance, fault injection, robustness testing, fuzzing, red teaming, penetration testing, mutation testing and dependability benchmarking. But how much should one be reassured if these tests are passed successfully? How difficult is it to create the "right" stresses? These questions require a mix of mathematical and empirical work. If interested in these issues, you may wish to look at discussions of open research questions, for instance in this paper about software robustness or this paper about resilience of complex systems.

Theses in these areas can aim at assessing by experiment the real predictive accuracy of these forms of measurement in specific applications (e.g. software security or robustness to error), and at proving by formal mathematical treatment what conclusions about a specific system can be drawn from these kinds of measurement.
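
As an example of the mathematical side (a standard textbook calculation, with invented numbers rather than anything specific to CSR's results): if a system survives n stress tests drawn at random from a representative distribution, one can bound, with stated confidence, the probability of failing a further such test, since any failure probability p with (1-p)^n <= 1-c is excluded at confidence c.

    import math

    # If n randomly sampled stress tests (fault injections, fuzz inputs, ...)
    # all pass, what failure probability p can we rule out with confidence c?

    def upper_bound(n: int, c: float) -> float:
        # Smallest p excluded at confidence c after n failure-free tests.
        return 1.0 - (1.0 - c) ** (1.0 / n)

    def tests_needed(p: float, c: float) -> int:
        # Failure-free tests needed to claim failure prob. < p at confidence c.
        return math.ceil(math.log(1.0 - c) / math.log(1.0 - p))

    print(upper_bound(1000, 0.99))    # ~4.6e-3: what 1000 clean tests buy
    print(tests_needed(1e-4, 0.99))   # ~46,050 tests for a 10^-4 claim
    # Note the caveat from the text above: this only holds if the injected
    # stresses are sampled from the "right" distribution of faults and attacks.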

Combination of formal proof with statistical testing of software

Lorenzo Strigini

Formal verification and statistical testing are two approaches to achieving confidence that software will perform as required. However, for most complex systems neither method alone is sufficient. Confidence from proofs is limited by the complexity of real-life requirements, of implementations, and of the proofs themselves; confidence from statistical testing depends on assumptions about the system being tested and about the test process, which must themselves be verified.

At the Centre for Software Reliability we have contributed to the state of the art in both these areas and have experience of their limits in concrete applications with critical software.  We are keen to supervise theses that will produce progress in combining the two categories of methods so that they support each other.
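
One common way to frame such a combination, sketched below with invented numbers (CSR's actual treatments are considerably more sophisticated): evidence from verification can be expressed as a prior belief about the probability of failure on demand (pfd), which failure-free statistical testing then updates. With a conjugate Beta(a, b) prior, n failure-free demands yield a Beta(a, b + n) posterior.

    # Minimal Bayesian sketch: combining prior confidence (e.g. from formal
    # verification) with statistical testing. Numbers are illustrative only.

    def posterior_mean_pfd(a: float, b: float, n: int) -> float:
        # Posterior mean of the pfd after n failure-free demands,
        # starting from a Beta(a, b) prior.
        return a / (a + b + n)

    weak_prior = (1.0, 1.0)          # near-ignorance before any evidence
    proof_prior = (1.0, 10_000.0)    # strong prior belief earned by proof

    n = 5_000                        # failure-free statistical tests
    for name, (a, b) in [("weak prior", weak_prior),
                         ("proof prior", proof_prior)]:
        print(f"{name}: posterior mean pfd = {posterior_mean_pfd(a, b, n):.2e}")
    # The same test evidence supports a much stronger dependability claim
    # when combined with prior confidence from verification, and vice versa.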

Effectiveness of alerting systems for security

Lorenzo Strigini

There is a booming business in developing advanced computer-based systems that help security and police forces to recognise criminals or dangerous situations and to prompt intervention. These systems automatically analyse inputs from security cameras or other sensors and alert staff to consider reacting. Systems designed on similar principles alert computer system administrators to cyber intrusions. Designing these systems presents fascinating technical challenges, but the trickiest ones may be the socio-technical challenges, involving how the users react to these systems. What is the risk of the computer causing false arrests, or massive defence operations, through false alarms? Or of it flagging a real danger but being ignored? These risks depend not only on how "clever" the computer is but also on how well it is matched to the abilities and decision-making heuristics of the people using it.
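
The false-alarm problem has a simple probabilistic core (a textbook base-rate calculation, with invented numbers): when the events to be detected are rare, even a highly accurate alerting system produces mostly false alarms, which in turn shapes how operators learn to trust or distrust it.

    # Base-rate effect in alerting systems (all numbers invented).
    # Even a good detector raises mostly false alarms when threats are rare.

    base_rate   = 1e-4   # fraction of observations that are real threats
    sensitivity = 0.99   # P(alert | threat)
    false_alarm = 0.01   # P(alert | no threat)

    p_alert = sensitivity * base_rate + false_alarm * (1.0 - base_rate)
    ppv = sensitivity * base_rate / p_alert   # P(threat | alert), Bayes' rule

    print(f"P(alert) = {p_alert:.4f}")
    print(f"P(real threat | alert) = {ppv:.4f}")
    # Here fewer than 1 alert in 100 is genuine; whether staff still respond
    # appropriately is exactly the socio-technical question raised above.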

This topic, related to our work on automation bias, requires interdisciplinary work at the intersection of system and software engineering with psychology, with possible interesting explorations of legal and organisational aspects, and may suit people with various backgrounds in hard sciences, computing or psychology.