Tanya Kant is a doctoral researcher in Media and Cultural Studies in the School of Media, Film and Music at the University of Sussex. Her current research focuses on personalisation on popular online platforms. She is currently conducting interviews with users of tracker blocking tools, especially Ghostery, about their experiences of data tracking, online privacy and tracker blocking. If you are interested in participating in interviews, please contact Tanya Kant directly via email (tk44[at]sussex.ac.uk) or Twitter. This week, Tanya asks whether resistance to online data tracking is futile.
In his new book Networks of Outrage and Hope, Manuel Castells celebrates the Internet’s capacity to bring together otherwise disconnected social subjects, creating a de-centralised, many-to-many network of communication that supports and encourages emerging social movements. Like many other scholars and popular writers, Castells emphasises the communitarian opportunities for social-political action that the web affords. As a ‘horizontal’, deinstitutionalised form of communication technology, the Internet has been widely championed for its capacity to mobilise and publicise even the most marginal activist collectives, facilitating the uprising of citizens compelled to challenge, resist and even overthrow dominant societal norms and institutions (2012). Less dramatically, the Internet also opens up new spaces for collective activist debates – as evidenced by the existence of this blog.
However, though the net’s accommodation of socio-political activism is of course important, it is only one component of the multi-faceted, multi-functional plethora of platforms that most users utilise and enjoy. Platforms such as social networks, blogging sites or search engines can be – and are – used to support activist movements and social change, but they also serve other far more banal uses: personal messages between friends, day-to-day news and entertainment consumption, lifestyle applications and status updates that express everyday frustrations rather than stark national tensions.
Yet, as recent revelations concerning both state and commercial surveillance have highlighted, even web users’ most individuated, ephemeral web movements are being monitored and mined; our day-to-day online trajectories are harvested for data that is archived, profiled and used (amongst other things) to generate profit. Furthermore, as this article will explore, there is a rising proliferation of ‘self-help tools’ available to users who are interested in protecting their personal data trail: tools that track the first and third party companies and organisations interested in following our online movements, and that block the data signals these parties use to track us. This raises the question: if our individual data trails are mined, tracked and used for profit – but can also be protected and controlled – should we be considering them as sites of resistance? As well as considering the web within the context of collective activist debates, should we be considering protecting our everyday online movements as a form of individualised activism?
The personal data economy
At present, the amount of digital data that we collectively generate is staggering; as Viktor Mayer-Schonberger and Kenneth Cukier highlight, the globe’s digital data stores double in size approximately every three years, and currently amount to around 1,200 exabytes (Mayer-Schonberger and Cukier, 2013: 9). Somewhat unsurprisingly then, a user’s individual data trail – comprised of a number of identifying signals such as browsing histories, cookie aggregates, log-in details, IP address and other data – can be extensive, and can be tracked by a number of companies and other organisations interested in what that user is doing with the web.
As some users are aware and as scholars such as Helen Nissenbaum (2010) emphasise, our online movements and articulations may be personal, but they are not necessarily private; data collection is an established practice for popular cost-free platforms like Facebook and Google. After all, personal data is what keeps these services afloat – the new documentary Terms and Conditions May Apply states that every Google user is worth around $500 per year to the company. Google and other platforms persistently assert (in their Terms of Service and in courts of law) that though we may generate our data trails, we do not necessarily own them; the data is not just ‘yours’, but ‘theirs’ too.
Furthermore, though many users are aware of the often ambiguous data collection strategies executed by these prominent first parties, it is the increasingly opaque data aggregation by third party trackers – advertisers, marketing specialists, data analysts and audience research agencies – that remains relatively under-publicised. As Nick Nikiforakis et al (2013) and Jonathan Mayer (2011) highlight, these third party trackers use a number of techniques, such as cookie aggregation and browser fingerprinting, to extract data from a user’s web movements, despite the fact that the user may never have actually visited that third party’s site. By using these relatively covert and legally ambiguous data mining techniques, third parties can build robust and coherent user profiles and user datasets that can be sold to interested parties or used to generate revenue through targeted advertising and other marketing practices.
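To illustrate the principle behind browser fingerprinting (a simplified sketch, not any specific tracker’s actual code), consider how a handful of individually unremarkable browser attributes can be combined into a single, stable identifier that requires no cookie at all. The attribute names and values below are hypothetical:

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Combine browser attributes into one identifying hash.

    Each attribute alone is shared by many browsers, but the
    combination is often nearly unique across millions of visitors.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attribute set a fingerprinting script might collect:
visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0",
    "screen": "1366x768x24",
    "timezone_offset": "-60",
    "installed_fonts": "Arial,Calibri,Comic Sans MS,Verdana",
    "plugins": "Flash 11.8,Java 7u25",
}

fingerprint = browser_fingerprint(visitor)
print(fingerprint)  # a stable ID that persists even if cookies are deleted
```

Because the identifier is recomputed from the browser’s own characteristics on every visit, clearing cookies does nothing to reset it – which is precisely what makes fingerprinting more covert than cookie-based tracking.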
Tracking the trackers: finding ‘a window to the invisible web’
As Andrew McStay points out, the data tracking practices currently in operation are often ambiguous and nonnegotiable (2012). Though users may be forced to accept the existence of identifying tags like cookies if they want to continue using their favourite sites, the idea that users truly ‘consent’ to data tracking is problematised by the striking lack of concise information, understanding and transparency that is needed for users to enter an informed agreement with their platform of choice. As a result, even if a user has supposedly ‘consented’, many data collection policies actively deny individuals the control or agency to effectively opt out of such data mining. He states:
‘Consent is not passive, but rather requires that people do something. This means that people must be informed and able to conceive an educated opinion so as to express will…. To be devoid of understanding is to be unable to give proper consent’ (2012: 600).
Furthermore, as scholars such as Lev Manovich and Christian Fuchs argue, surplus value extraction from user-generated data, as well as the lack of user access to their own data files, can lead to user exploitation and the creation of a new type of ‘digital divide’ in the form of ‘a hierarchy of “data-classes”’ (Manovich, 2011) in which disempowered users constitute a kind of digital ‘lower class’. These writers highlight the tensions – between user control and platform ownership – that surround our online data tracking, revealing that even if users themselves consider their online trajectories to be apolitical, everyday and innocuous, the trail they leave behind has become an active site of contention.
Fortunately for concerned users, there are a number of ‘self-help tools’ available to those interested in protecting and controlling their data trails. Browser add-ons and software applications such as DoNotTrackMe, Tor, AdBlock Plus and Ghostery provide a number of ways of blocking trackers or preventing the consequences of data mining. For example, privacy specialists Ghostery offer a downloadable browser extension that enables individual web users to detect and block over 1,200 listed trackers. As the Ghostery site states, the add-on is designed to help even the most technophobic user ‘become a web detective’ by equipping them with a free and easy-to-use tool that will alert them to any ad providers, data analysts, marketing companies and web publishers that are invisibly monitoring their visits to websites. Not only does the add-on detect unwelcome third parties, it can ‘block’ them too – by preventing the tracker from following your data trail on either just the website you are viewing or across the web in its entirety. Tracker-blocking tools such as Ghostery help to rectify the lack of knowledge and control surrounding ambiguous and opaque data collection strategies, by informing users about who is tracking them and empowering users with the tools to protect their data.
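The core mechanism behind list-based blockers like Ghostery can be sketched simply: the extension inspects each outgoing request and suppresses those bound for known tracker domains. The sketch below (with hypothetical domain names; real tools ship curated lists of thousands of trackers and far richer matching rules) shows the basic decision:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real extensions maintain curated, regularly
# updated lists of tracker domains.
TRACKER_DOMAINS = {"ads.example-tracker.com", "stats.example-analytics.net"}

def should_block(request_url: str, page_url: str) -> bool:
    """Block a request if it is third-party and its host is on the blocklist."""
    req_host = urlparse(request_url).hostname or ""
    page_host = urlparse(page_url).hostname or ""
    third_party = req_host != page_host
    listed = any(req_host == d or req_host.endswith("." + d)
                 for d in TRACKER_DOMAINS)
    return third_party and listed

# A tracking pixel embedded in a news page is blocked...
print(should_block("http://ads.example-tracker.com/pixel.gif",
                   "http://news.example.org/story"))  # True
# ...but the site's own first-party content is left alone.
print(should_block("http://news.example.org/logo.png",
                   "http://news.example.org/story"))  # False
```

The approach’s limitation follows directly from the design: only trackers that appear on the list are ever blocked, which is one reason no such tool can promise complete protection.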
However, as Nikiforakis et al (2013) and Mayer recognise, there are a number of problems with the self-help tools currently available that undermine their effectiveness. The most common and widely publicised issue is that no self-help tool can offer 100% protection against tracking. For example, some ad blockers stop advertising from appearing on the sites you visit but don’t stop you from being tracked altogether, whilst other tools protect you from certain types of data mining (cookies) but not others (social plugins). Another problem is that the use of some blockers may result in a loss of functionality on some sites. Most worrying, however, is Nikiforakis et al.’s finding that some blocking practices actually help third party trackers to identify users – they state:
‘Over 800,000 users, who are currently utilizing user-agent-spoofing extensions are more fingerprintable than users who do not attempt to hide their browser’s identity, and challenge the advice given by prior research on the use of such extensions as a way of increasing one’s privacy’ (2013).
In other words, some attempts at resisting personal data tracking are ironically assisting the very companies that use fingerprinting. All of which raises the question: what can be done to protect our data trails? Is resistance really futile?
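The paradox Nikiforakis et al identify can be illustrated with a simplified sketch (an assumed detection heuristic for illustration, not the paper’s actual code): a spoofing extension typically rewrites the HTTP User-Agent header but forgets to alter the platform that JavaScript reports, and that contradiction is itself a rare, identifying signal.

```python
def looks_spoofed(http_user_agent: str, js_platform: str) -> bool:
    """Flag browsers whose HTTP User-Agent contradicts the platform
    reported by JavaScript (navigator.platform)."""
    pairs = {"Windows": "Win", "Macintosh": "Mac", "Linux": "Linux"}
    for ua_token, platform_prefix in pairs.items():
        if ua_token in http_user_agent:
            # Consistent browsers report a matching JS platform.
            return not js_platform.startswith(platform_prefix)
    return False

# A spoofing extension claims to be a Mac in the HTTP header,
# but navigator.platform still says Windows:
print(looks_spoofed("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8)", "Win32"))  # True
# An unmodified browser shows no mismatch:
print(looks_spoofed("Mozilla/5.0 (Windows NT 6.1; rv:24.0)", "Win32"))  # False
```

Because only a small minority of browsers exhibit such a mismatch, the attempt to hide makes the user stand out more, not less – the technical heart of why resistance via spoofing can backfire.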
Taking back control: data trails as sites of resistance
As already outlined, there are those who argue that if users enjoy cost-free services like search engines, entertainment sites, social media and video-hosting platforms, then users shouldn’t be resisting tracking at all – it is user data, rather than user income, that funds these free-to-use services. This may be true, but given the invisible and ethically ambivalent tactics of tracking companies, it is understandable that users try not only to inform themselves about the complex tracking tactics currently in operation but also to take back control of the data that they generate. As scholars such as Andrew McStay and Sundar & Marathe argue, ideas such as user consent, user agency and user control can only truly operate within a robust framework of user understanding – in other words, an individual needs to know what they are consenting to and who has control of their data if they are to be afforded any true sense of agency.
Are self-help tools such as Ghostery enough to give users the power and understanding they need in order to take back control of their data trails? Despite the technological problems that Mayer highlights, these tools at least offer some kind of individualised resistance to data mining practices that users are often involuntarily implicated in. Resistances to the commercial data tracking industry – those responsible for personalising and shaping our web experience (for better or worse) – may be small, but they are not futile. Better understanding and control can equate to a greater sense of user agency – an attribute that, as Sundar & Marathe argue, may not necessarily be detrimental to the cost-free web we all enjoy.
Castells, Manuel (2012) Networks of Outrage and Hope: Social Movements in the Internet Age, Cambridge: Polity Press
Fuchs, Christian (2012) ‘The political economy of privacy on Facebook’, Television and New Media, 13 (2), 139-159
McStay, Andrew (2012) ‘I consent: an analysis of the Cookie Directive and its implications for UK behavioural advertising’, New Media & Society, 15 (4), 596-611
Manovich, Lev (2011) ‘Trending: The Promises and the Challenges of Big Social Data’ (accessed 17th April 2013)
Mayer, Jonathan (2011) ‘Tracking the trackers: self help tools’, The Centre for Internet and Society Blog (accessed 15th September 2013)
Mayer-Schonberger, Viktor and Cukier, Kenneth (2013) Big Data: A Revolution that Will Transform How we Live, Work and Think, London: John Murray
Nikiforakis, Nick, Kapravelos, Alexandros, Joosen, Wouter, Kruegel, Christopher, Piessens, Frank and Vigna, Giovanni (2013) ‘Cookieless Monster: Exploring the Ecosystem of Web-based Device Fingerprinting’, SP ’13: Proceedings of the 2013 IEEE Symposium on Security and Privacy, 541-555
Nissenbaum, Helen Fay (2010) Privacy in Context: Technology, Policy, and the Integrity of Social Life, Stanford: Stanford Law Books
Sundar, Shyam S. & Marathe, Sampada S. (2010) ‘Personalization versus Customization: The Importance of Agency, Privacy and Power Usage’, Human Communication Research, 36, pp 298-322