City Perspectives: The Bradley Manning trial, data leaks and online privacy
By Professor Kevin Jones, Professor of Security and Dependability and Head of Computer Science
Information security: Bradley Manning and Edward Snowden
The Bradley Manning and Edward Snowden cases demonstrate instances where individuals were given legitimate access to privileged information but did not have permission to share that information with others outside their organisation, or to physically remove it from the workplace.
If you look at what is usually encompassed in the protection of employer information, it involves preventing people from being in places where they can access things you don't want them to access. In these specific instances, however, someone legitimately had the right to access the data but not the right to use it in the manner that they did. The lesson here is that policies need to be set up so that people have access only to what their employers determine is necessary. There should also be ways of managing what they can do with the data once they have it.
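The distinction drawn here - the right to access data versus the right to do something with it - can be sketched as a minimal permission model. The roles, actions and permission table below are invented for illustration, not any real organisation's policy:

```python
# Minimal sketch of least-privilege access control: a role may carry the
# right to *read* sensitive material without the right to *export* it.
# Roles and actions here are hypothetical examples.

PERMISSIONS = {
    "analyst": {"read"},               # may view sensitive material
    "archivist": {"read", "export"},   # may also copy it off-site
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read"))    # True: an analyst may read
print(is_allowed("analyst", "export"))  # False: but may not remove the data
```

The default-deny behaviour (an unknown role or action grants nothing) is the point: permissions are enumerated, not assumed.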
Employers and data leaks from employees
The way that employers can protect themselves from data leaks is no different in the digital space than in the physical space. You very carefully vet the employees you allow to have access to sensitive information. There is nothing fundamentally different about the digital space when you are an employee working within an organisation. Of course, Snowden and Manning were probably vetted to the highest existing standards, so there is clearly a problem. The only other thing the employer in sensitive situations can do is to monitor the behaviour of employees. Anomalous behaviour should be noticeable enough to be picked up. You could install technical and non-technical systems in which people reasonably monitor each other, so that when something just doesn't look right it is queried. The Manning and Snowden cases are classic because they demonstrate that national security organisations themselves - whose basic purpose is to monitor other people - perhaps need systems of closer internal monitoring to make sure people are not doing things they shouldn't be doing.
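The behavioural monitoring described above can be approximated very simply: compare each employee's activity against the group baseline and flag outliers for a human to query. The access counts below are invented for illustration, and with so small a sample the threshold must be modest:

```python
# Sketch of flagging anomalous behaviour from document-access counts.
# All figures are hypothetical; real systems use far richer signals.
from statistics import mean, stdev

daily_accesses = {            # documents opened per employee today
    "alice": 12, "bob": 9, "carol": 11, "dave": 240, "erin": 10,
}

def flag_anomalies(counts: dict, threshold: float = 1.5) -> list:
    """Return users whose count exceeds mean + threshold * stdev.
    (With only five samples, a z-score can never reach 3, so a
    lower threshold is used for this toy example.)"""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    return [u for u, c in counts.items() if c > mu + threshold * sigma]

print(flag_anomalies(daily_accesses))  # ['dave'] stands out for review
```

The output is a shortlist for human judgement, not an accusation - which matches the point that anomalies should be queried, not automatically acted on.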
Individual computer users and online privacy
With regard to the ordinary individual protecting their online privacy and the size of their digital footprint, there is always a trade-off: the more effort you put into protecting it, the smaller your footprint will be. It is possible to leave no digital trace whatsoever, but it requires a lot of effort, and the question is whether that effort is justified by the returns. Most people don't actually care what information they leave online, provided it is not usable by people who intend to harm them or who harbour criminal intentions. For most people there is a point of diminishing returns; it would not be worth the effort of aiming for complete anonymity. Complete anonymity would mean that you could not, for example, perform a Google search, or even log in to websites as you naturally would; you would have to conduct your online interactions through anonymisers and redirectors. This is entirely possible, and the technology to support it is available, but it would carry a cost. For the large majority of internet users it is probably not worth the effort because, actually, most of us want Amazon and Google and everybody else to know something about us. We don't want people to have unauthorised access to some of that information, but ultimately what we like to read or the kind of groceries we buy is probably not sufficiently sensitive that we would go to the effort of making sure nobody could find it.
Data analysed by state security agencies
On the matter of discovering information, the world's major government security agencies have considerable computing resources, but it is impossible to completely analyse all of the data available in the entirety of cyberspace. However, security organisations can target parts of the problem quite well. It is possible to partition a very large space in such a way that we can identify particular groups coming from particular areas with particular topics of interest, and these can be analysed quite thoroughly. So it's really a matter of how we apply the available resources to the problem at hand and reduce it to manageable chunks. If people are worried about anyone, anywhere, ever typing the wrong phrase and being targeted by a security agency, they are worrying about an unlikely scenario.
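The partitioning described here - narrowing a vast data space to particular groups, regions and topics before any deep analysis - can be sketched as a cheap filtering stage. The messages, regions and keywords below are invented purely for illustration:

```python
# Sketch of reducing a large stream to a manageable chunk by filtering
# on region and topic before any expensive analysis runs.
# All data below is invented for illustration.

messages = [
    {"region": "region-a", "text": "shipment schedule confirmed"},
    {"region": "region-b", "text": "weekend football results"},
    {"region": "region-a", "text": "holiday photos attached"},
    {"region": "region-c", "text": "shipment route changed"},
]

def partition(stream, regions, keywords):
    """Keep only messages from watched regions mentioning watched topics."""
    return [m for m in stream
            if m["region"] in regions
            and any(k in m["text"] for k in keywords)]

subset = partition(messages, regions={"region-a"}, keywords={"shipment"})
print(len(subset))  # only 1 of 4 messages survives for deeper analysis
```

The economics are the argument: exhaustive analysis of everything is infeasible, but the surviving fraction after such a filter is small enough to examine thoroughly.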
The Big Sky Theory in aviation posits that two randomly flying bodies are unlikely to collide, given that three-dimensional space is vast in comparison to those bodies; collisions are therefore unlikely except at or near an airport, where traffic concentrates. It's the same with information security. If you focus on areas where there are groups of people who are likely to be targets - both topic areas and physical geographical areas - there is a possibility that you can exhaustively analyse the information coming into those spaces. However, as a fraction of all the data in the totality of cyberspace, that is probably quite small, so it requires intelligence - in the classic sense rather than in the military sense - to work out the optimum place to deploy one's resources to gain the best possible results. Very good results can be obtained in carefully targeted ways and reasonable results in much broader ways - but not as in 'we-know-everything-anywhere-from-anyone-at-any-time'. That's just not feasible.