PhD Research Projects
Centre for Software Reliability
Some example topics:
Validation of Cyber Security
Formal Modelling of System Security
Security of large-scale computer systems is a serious problem for government and the business community. The aim of this research would be to develop formal models of large-scale systems, with a view to proving security properties of them. Previous work in the area has focused on the correctness of particular aspects of security, such as formal modelling of protocols or cryptographic security. This work would be aimed at larger-scale properties of connected systems. There are many possible directions of research depending on the background and interests of the student, but the topic would particularly suit students with an interest in cyber security and a reasonable background in mathematical and modelling techniques.
Security-focused Simulation of Large-scale Interconnected Systems
This project would design, develop and explore simulation approaches intended to give insight into how large-scale systems behave in the context of security threats, allowing analysis of cyber attack and defence strategies. This area of research would best suit a candidate with strong programming skills and some experience of, or interest in, simulation-based approaches to validation.
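To give a flavour of what "simulation for security insight" can mean at its simplest, here is a toy sketch: malware spreading over a randomly wired network, with defenders patching clean nodes at some rate. Every name, parameter and modelling choice here is illustrative only, not a description of any existing CSR tool.

```python
# Toy stochastic simulation: malware spread over a random network,
# letting us compare outcomes under different patching (defence) rates.
# All parameters and names are hypothetical, chosen for illustration.
import random

def simulate(n_nodes=200, n_links=600, p_infect=0.3, p_patch=0.1,
             steps=50, seed=42):
    """Return the number of compromised nodes after `steps` rounds."""
    rng = random.Random(seed)
    # Build a random undirected network.
    neighbours = {i: set() for i in range(n_nodes)}
    while sum(len(s) for s in neighbours.values()) < 2 * n_links:
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if a != b:
            neighbours[a].add(b)
            neighbours[b].add(a)
    infected = {0}    # one initially compromised node
    patched = set()   # patched nodes cannot be infected
    for _ in range(steps):
        # Attack phase: each infected node tries to compromise neighbours.
        newly_infected = set()
        for node in infected:
            for nb in neighbours[node]:
                if nb not in patched and rng.random() < p_infect:
                    newly_infected.add(nb)
        infected |= newly_infected
        # Defence phase: patch a random fraction of still-clean nodes.
        for node in range(n_nodes):
            if node not in infected and rng.random() < p_patch:
                patched.add(node)
    return len(infected)

low_defence = simulate(p_patch=0.01)
high_defence = simulate(p_patch=0.30)
```

Even this crude model lets one ask attack/defence questions quantitatively, e.g. how the final compromise count varies with the patching rate; a research-grade simulation would of course need validated topology, threat and defender models.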
Heterogeneous systems -- systems of software/hardware with multiple components that naturally work in different domains
- Specifying and using properties of Heterogeneous Systems: Techniques for specifying and using properties of single-domain systems have been studied extensively, and have been shown to be very useful in practical applications, for both software and hardware. Systems involving components that are naturally described in different domains are less well studied. Developing an appropriate framework for specifying properties of components in divergent domains, translating properties between multiple domains, finding the right projections to give the essential information to be used by connected components, and developing correctness models for such systems is of both theoretical and practical interest. This research could be undertaken in a number of different ways, ranging from pure theory through to very pragmatic studies showing results on significant examples, depending on the interest of the student(s).
- Cooperating proof tools for Heterogeneous systems: There has been much research on, and great success in applying, automatic proof tools such as model checkers. More recently, the use of SAT solvers has been shown to be a very useful enhancement of these tools. The extension of SAT into richer underlying domains through SMT (satisfiability modulo theories) is currently an active area of research. One of the properties of a heterogeneous system is that there is no single domain of reasoning that is ideal for all components. Extending the idea of SMT to allow multiple different theories to be supported would be a significant contribution to tool support for reasoning about such systems. This work could be undertaken at the theory level, the tool-building level or the application level, depending on the background and interests of the student(s).
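For readers unfamiliar with the underlying question, the following sketch shows the bare decision problem a SAT solver answers, via exhaustive search. Real solvers (and the SMT solvers layered on top of them, which this project would extend to multiple theories) use vastly more sophisticated techniques; this toy and its names are purely illustrative.

```python
# Brute-force SAT check over clauses in the DIMACS convention:
# literal k means "variable k is true", -k means "variable k is false".
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment as a dict {var: bool}, or None."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A formula is satisfied if every clause has some true literal.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) -- satisfiable
model = brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3)
# (x1) and (not x1) -- unsatisfiable
no_model = brute_force_sat([[1], [-1]], 1)
```

An SMT solver asks the same satisfiability question but over formulas whose atoms live in richer theories (arithmetic, arrays, bit-vectors); supporting several such theories cooperating in one heterogeneous-system proof is the research direction described above.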
Verification of Security Hardware with Specification Obfuscation
Hardware verification, using both testing and formal approaches, is a well-established and ongoing area of EE/CS research. New approaches to security have resulted in classes of hardware that are difficult to verify using standard approaches. An interesting area of research, either for an EE student interested in verification and proof or for a CS student with an interest in hardware, is to develop and analyse techniques for verifying devices/IP blocks that serve a security function requiring some parts of the specification to be obfuscated. There is industrial interest in this problem and a high likelihood that a student undertaking this research would have collaborative contact with industrial design teams.
Large Scale Software Systems
Large-scale software systems are generally created by composing various subsystems and services, often crossing organisational boundaries. I am interested in supervising PhD projects addressing issues encountered in such endeavours. Relevant topics include, but are not limited to, means for avoiding incompatible system compositions at design time (including explicit consideration of architectural mismatches), as well as detecting such incompatibilities at run time and thus triggering a reconfiguration.
Self-adaptive Software Systems
An important behaviour expected of modern software systems is the ability to adapt themselves at run time in order to address fault-tolerance concerns, changes in user needs and the operational environment, and to optimise resource utilisation. I am interested in supervising projects exploring self-adaptation to support green (sustainable) computing, the provision of assurances in the face of adaptation, and the explicit avoidance of incompatibilities being introduced during self-adaptation.
People and Software Engineering
Software development is a human-intensive activity, at times relying on solo undertakings but very often carried out in groups. Although this fact is widely recognised, research efforts in this area are still far from covering the full spectrum of relevant issues. I am interested in supervising projects that explore sociological and/or psychological factors supporting successful group work in a software engineering environment.
Safety Cases and Probabilistic Measures of Dependability of Software-based Systems
CSR has worked for years on the problems of evaluating the dependability of software-based systems. The aim is to obtain trusted probabilistic measures of safety, reliability and security. Much of this work has involved Bayesian approaches to uncertainty, and we see this as continuing. A lot of our work concerns fault-tolerant systems, in which multiple software systems are developed "independently" in the hope that if one fails, others will succeed. Such systems are widely used in (e.g.) nuclear protection systems, and it is important that justifiable trust can be placed in their correct functioning. The technical problems here are many, and should be of interest to probabilists/statisticians and computer scientists (particularly those with a background in formal methods).
Studies in automation bias
When computers advise people in their decisions, paradoxical "over-reliance" situations may arise in which the computer's help makes people more likely, rather than less likely, to make mistakes. Theses are available to study, by experiment and mathematical modelling, how these situations arise, so that designers of systems from medical diagnosis aids to collision-alert systems for vehicles can forecast these situations and avoid creating them. This is an interdisciplinary topic straddling computing and psychology. To learn more about the open research questions, you may be interested in this paper and in our previous work on a medical application.
Assessing systems by injecting faults and attacks
When computer-based systems have critical functions, it is natural to test how well they would resist internal faults, accidental misuse or deliberate attacks by artificially causing these kinds of stress. Many techniques in current use in dependability and security apply this approach: for instance, fault injection, robustness testing, fuzzing, red teaming, penetration testing, mutation testing and dependability benchmarking. But how much should one be reassured if these tests are passed successfully? How difficult is it to create the "right" stresses? These questions require a mix of mathematical and empirical work. If interested in these issues, you may wish to look at discussions of open research questions, for instance in this paper about software robustness or this paper about resilience of complex systems.
Theses in these areas can aim at assessing by experiment the real predictive accuracy of these forms of measurement in specific applications (e.g. software security or robustness to error), and at proving by formal mathematical treatment what conclusions about a specific system can be drawn from these kinds of measurement.
Combination of formal proof with statistical testing of software
Formal verification and statistical testing are two approaches to achieving confidence that software will perform as required. However, for most complex systems, neither method alone is sufficient. Confidence from proofs is limited by the complexity of real-life requirements, of implementations, and of the proofs themselves; confidence from statistical testing depends on assumptions about the system being tested and the test process, which must themselves be verified.
At the Centre for Software Reliability we have contributed to the state of the art in both these areas and have experience of their limits in concrete applications with critical software. We are keen to supervise theses that will produce progress in combining the two categories of methods so that they support each other.
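As a concrete illustration of the statistical-testing side (a textbook calculation, not CSR's specific method): if a system passes n independent tests drawn from its operational profile with no failures, then with confidence c its probability of failure per demand is below the bound p_max satisfying (1 - p_max)^n = 1 - c. This assumes test independence and a representative operational profile, which is exactly the kind of assumption the paragraph above says must be verified.

```python
# Upper confidence bound on per-demand failure probability after
# n independent, failure-free tests: solve (1 - p)**n = 1 - c for p.
# Assumes tests are drawn independently from the operational profile.

def failure_prob_bound(n_tests, confidence):
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

# The classic rule of thumb: roughly 4600 failure-free tests are needed
# to claim failure probability below about 1e-3 at 99% confidence.
bound = failure_prob_bound(4600, 0.99)
```

The limits of this approach are visible in the formula itself: demonstrating very small failure probabilities (e.g. 1e-7) requires infeasibly many tests, which is one motivation for combining statistical evidence with formal proof.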
Effectiveness of alerting systems for security
There is a booming business in developing advanced computer-based systems that help security and police forces to recognise criminals or dangerous situations and prompt intervention. These systems automatically analyse inputs from security cameras or other sensors and alert staff to consider reacting. Systems designed on similar principles alert computer system administrators to cyber intrusion. Designing these systems presents fascinating technical challenges, but the trickiest ones may be the socio-technical challenges, involving how users react to these systems. What is the risk of the computer causing false arrests or massive defence operations through false alarms? Or of it flagging a real danger but being ignored? These risks depend not only on how "clever" the computer is but also on how well it is matched to the abilities and decision-making heuristics of the people using it.
This topic, related to our work on automation bias, requires interdisciplinary work at the intersection of system and software engineering with psychology, with possible interesting explorations of legal and organisational aspects, and may suit people with various backgrounds in hard sciences, computing or psychology.