Power, surveillance and digital media

Yesterday I was teaching some of my students about Foucault, power and surveillance. These themes have never been more relevant to everyday life. The expansion of digital communications has created innumerable opportunities for the exercise of power through monitoring human activity, creating new kinds of vulnerabilities. This is especially the case for children and young people, whose lives are increasingly being played out online, warts and all.

Take Paris Brown, a 17-year-old appointed in 2013 as the UK’s first youth police and crime commissioner. Her remit was to represent young people’s views to the police in Kent, and she invited them to use social media to do so. But social media came back to bite her. The tabloid press dredged up offensive posts from her Twitter account, including ill-advised racist, homophobic and violent comments, probably written whilst drunk. Her reputation was trashed, and a few days later she resigned.

Taken literally, the tweets are lewd and unpleasant. Thinking about the context, however, it looks like this was just an adolescent seeking attention, perhaps showing off to her friends, expressing anger and confusion in a clumsy and foolish way, and pushing social boundaries to see what would happen. So – normal teenager stuff. When my generation was growing up, you could say and do stupid things to get a reaction, cause a bit of outrage, and it was rarely recorded. That has all changed.

I also talked to my students about the UK government’s monitoring of communications through GCHQ. Afterwards, the question came up: is this sort of surveillance really such a bad thing? One student pointed out that GCHQ came out of Alan Turing’s work at Bletchley Park, including cracking the Enigma code during World War II, which helped defeat the Nazis. GCHQ’s current work involves foiling terrorist plots, saving lives. What’s wrong with that?

Clearly it is too simplistic to suggest that surveillance systems are driven by malice, like a bunch of Bond villains trawling people’s emails in a secret underground lair. Surveillance is more rational than that: the state is threatened by actions such as terrorism, and the production of knowledge is a crucial way of exercising power to regulate these threatening actions.

But in any kind of rationality, there is always an irrationality. The power exercised by GCHQ doesn’t just block terrorism. It helps to produce terrorism as a definable thing – a set of ideas and subjectivities that can be monitored, documented and regulated.

Mass surveillance also has unintended consequences, like the unpleasant side effects of a medical treatment. Storing all electronic communication in the name of counter-terrorism compromises the privacy of entire populations. That changes the nature of social life, in ways that may be hard to perceive but which are nonetheless pervasive. Autonomy is inevitably curtailed. An email, for instance, might look like communication between two people, but it isn’t. Other people can examine it, log it, store it. It could be used in a court of law at a later date in some way that is impossible to foresee.

We don’t have to look hard to find examples of such powers being used abusively. I imagine many of those who helped gather information for the East German Stasi believed that they were doing good, protecting their state from dangerous ideologies. The power they exercised no doubt enabled certain things, protected certain values – but it also crushed people and ideas that didn’t fit with the dominant view. It is all too easy for power to slip into violence.

Foucault poses the question of how to let power flow whilst avoiding it solidifying into authoritarian forms of domination. There are no easy answers. But we have to at least keep asking the question. It may well be that many of those working in surveillance wrestle with this on a daily basis. However, if you believe Edward Snowden’s description of America’s National Security Agency, the employees there were not questioning what they were doing nearly enough, if at all – and that is when power becomes dangerous.

Alan Turing’s groundbreaking role in surveillance may have helped to win WWII, but look what happened to him: suicide, following persecution for his sexuality. The state monitored his private activities, criminalised him and subjected him to enforced chemical castration. Government interference in the most intimate of matters caused him irreparable harm. It is an unfortunate irony that the machines he dreamt up are now being used to insert surveillance ever deeper into people’s lives.

Digital surveillance and digital memory

Over the last week or so, the mainstream media here in the UK have been filled with shock-horror stories about communications surveillance by western intelligence agencies. I’m a bit baffled by all the fuss to be honest. Monitoring, storing and sharing information are intrinsic to the very nature of digital communication, so it’s no surprise to find governments tapping into that for their own ends. What else did we expect?

The key thing is memory. Computers are often characterized as calculating machines, but to hold the results of those calculations computers need memory. And as computers have increased in calculating power while shrinking in size and cost, their capacity to remember has grown with them. Media archaeologist Wolfgang Ernst argues that digital technologies are characterized by processes of micro-memory. Even when humans experience computers as working in real time, packets of data are in fact being rapidly shuttled in and out of little chunks of memory with names like ‘buffers’ and ‘caches’. Across a longer time frame, digital devices store data in ROM chips, flash memory, hard drives and so on. Digital technologies are therefore intrinsically archival. Their basic architecture involves the capacity to hold onto information.
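Ernst’s point about micro-memory can be illustrated with a toy sketch in plain Python (purely illustrative, not a model of any real device or protocol): even a simple ‘real-time’ stream copy passes through an intermediate buffer that briefly holds every byte on its way through.

```python
import io

# A toy illustration: streaming data from one place to another still
# means the data sits, chunk by chunk, in an intermediate buffer --
# a micro-memory in Ernst's sense.
source = io.BytesIO(b"hello, this message passes through memory")
buffer_size = 8  # copy in small chunks, as a buffer or cache would

copied = bytearray()
while True:
    chunk = source.read(buffer_size)  # each chunk is briefly held in RAM...
    if not chunk:
        break
    copied.extend(chunk)              # ...and retained at the destination

print(bytes(copied))
```

The copy is exact, and at no point did the data exist *outside* memory: every stage of the transfer is itself an act of remembering.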

Diagram of a video decoder, used for the playback of MPEG-2 videos. Note the many blocks of memory involved.

If digital machines are constantly remembering, and we use them for communication, then it is inevitable that what we communicate is going to be remembered – whether for a few milliseconds, a few hours, for 30 days (as with GCHQ) or much longer. Since digital communication requires the networking of machines, the possibilities for circulating this stored data are endless, both legal and illegal, from routine monitoring and data harvesting to snooping, hacking and data theft.

Digital memory is nothing new. This Prophet VS synthesizer from 1986 had 96 waveforms stored in its ROM, plus space for a further 32 user-edited waveforms. The key differences between this and modern devices are that (a) waveform data aren’t personal and (b) the machine’s networking abilities are very limited (it uses MIDI sample dumps, which are pretty clunky).

Just think of all the personal data passing through phone companies, internet service providers, online shops, online banks. Now think of all the people along the way who could access it, whether legally or otherwise – call centre workers, IT technicians, communications engineers, account managers. It’s inconceivable that such complex pipelines, carrying such massive flows, would not be leaky. In the case of companies such as Google, Facebook and Amazon, their very business models depend on harvesting customers’ data and selling it on to advertisers. When you use Google’s search engine, they are gathering data from you just as much as you are from them. Intelligence services holding emails for 30 days is just the tip of the iceberg.

Through this little socket go all my phone calls, emails, tweets, blog posts, web pages, web searches…

Of course there are all sorts of measures that can be used to restrict this information storage and circulation. GCHQ seem to be placing a lot of faith in laws and their employees’ obedience to them, claiming that they never actually access any of the stored data without a warrant. (This raises all sorts of questions: do they really think their employees always comply with this? If it is machines that have snooped on you, not humans, has your privacy been invaded or not? And if the machines are part of the organization, doesn’t the information stored within them count as knowledge within the organization?) As well as data protection laws, there are also restrictive technological systems such as secure servers, encryption systems, firewalls and so on. But all such protective measures involve choking the processes of remembering and exchange that are the lifeblood of digital communication. The failure of such measures, either occasionally or routinely, therefore seems inevitable.
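The tension between machine storage and warranted human access can be put in the form of a toy sketch. Everything here is invented for illustration (the names `intercept` and `read`, the warrant flag – none of it describes any real agency’s systems): the machine remembers unconditionally, and the legal safeguard only gates who may look at what is already remembered.

```python
# Toy model, invented for illustration only.
stored = {}

def intercept(msg_id, content):
    """The machine remembers everything it carries -- no warrant involved."""
    stored[msg_id] = content

def read(msg_id, has_warrant=False):
    """Humans may look only with a warrant -- but the data is held either way."""
    if not has_warrant:
        raise PermissionError("warrant required")
    return stored[msg_id]

intercept("msg-1", "an ordinary private email")
print("msg-1" in stored)  # prints: True -- held by the organization, warrant or not
```

The sketch makes the earlier question concrete: whether or not `read` is ever called, the organization’s machines hold the data, so in what sense is it not already the organization’s knowledge?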

So I’m skeptical about the value of building stronger regulatory systems to maintain privacy in digital communications. The capacity to siphon off and hold onto information is so intrinsic to digital technologies that looking to regulation seems almost like a denial of how these systems operate. Given the vastness and complexity of the infrastructures involved, effective regulation would probably involve swathes of new law, policing, enforcement, new technologies, heavy-handed discipline, and surveillance of the watchers themselves – which surely just returns us to the original problem.

This isn’t to say that regulation and restriction can’t be useful in many situations. The firewall in my home router is staying turned on. But we’re likely to be disappointed if we place all of our faith in restrictive, technocratic laws and systems to protect our privacy. Instead, I suggest we need to accept that information storage and exchange are intrinsic to digital communication technologies, and act accordingly.

Diagram of a circuit-level firewall. A useful tool, but we can’t trust such devices to protect our privacy. Image Copyright Cisco Systems Inc.

The example of peer-to-peer file sharing of music offers a useful point of comparison. It’s a case in which an enormous amount of digital data has been shared without permission, and there has been a lot of consternation about the illegality and possible negative effects of that sharing. So it has some similarities with the current debate about surveillance. Record companies and some recording artists have made a big fuss about file sharing – wringing their hands, arguing for more enforcement of copyright laws and the shutting down of web services, trying to implement copy protection systems and so on. Yet people continue to share music illegally on a massive scale. Restrictive policing has had some success, with networks such as Napster being sued and forced to close, internet service providers sending out warning letters and limiting bandwidth for ‘persistent offenders’, and the prosecution of a small number of individuals. But these efforts amount to damage limitation, and are hardly inspiring. Is this the kind of thing we want more of? Meanwhile, musicians are beginning to accept the reality of file sharing – whether they like it or not – and find new ways to make a living that depend less on the sale of recordings.

Similarly, whether we like it or not, I think we need to accept that, with networked digital communication technology, monitoring and sharing of personal data is an ever-present possibility. We can’t enjoy easy access to the internet without allowing the internet access to us. In most cases, it’s highly unlikely that anyone will give a toss about the banal stuff we chuck through these channels on a daily basis, so most of it will probably disappear without trace. But we might do well to maintain an awareness that interception is probably quite easy for anyone who is sufficiently motivated – whether a government agency, a global corporation or an individual, whether acting inside or outside the law. And the chances are we would never know it had happened.


If we really want to keep information out of these networks, it’s simple enough to do: we just need to make sure we don’t include it in any form of electronic communication. Don’t tweet it, don’t email it, don’t put it on Facebook (even with their privacy settings enabled) and don’t talk about it on the phone. Maybe coping with the reality of digital communication will involve dividing our information roughly into two categories. On the one hand, we’ll have information that we are willing to allow (albeit grudgingly) into the digital domain to be copied, stored, sold, exchanged, scooped or snooped; stuff that might be personal, but ultimately isn’t all that sensitive or critical. On the other hand, we’ll have information that we absolutely do want kept private, which we will learn to keep out of the digital domain entirely. Perhaps in years to come, older forms of communication such as letters and face-to-face conversations will be re-valued for the degree of privacy they afford, a bit like the way that the ritual of playing vinyl records has become more precious in a world full of MP3s.