6

Conclusions: Encrypted communications as a site of social, political and technical controversy

We now conclude this journey among experiments in ‘concealing for freedom’. During this journey, we have explored the choices made in technology and governance that lead both to a variety of configurations of encrypted tools and to a diversity of intended publics and action repertoires (Tilly 1985) for these tools. We have also witnessed the attempts to categorise and make sense of the ‘mess of messengers’ that seek to respond to the challenges associated with the increasing variety and complexity of the field.

In the final pages of this book, we seek to draw some conclusions about encrypted communications as a site of social, political and technical controversy today. Encrypted messaging tools remain at the centre of a powerful double narrative, with, on the one hand, a strong positive discourse around empowerment and better protection of fundamental civil liberties and, on the other, an equally strong critical discourse shaped by allegations concerning the technology’s links to (and fostering of) terrorism. Furthermore, we can see two ‘turns’ in the ecosystem of online communication: the cryptographic turn, which has seen Internet companies implement a number of cryptography-based organisational and technical responses aimed at restoring user trust in their cloud-based services, and the ‘opportunistic turn’, a progressive move by the crypto community towards making encryption seamless, requiring almost no effort from users, who no longer need to actively control most of the messaging tool’s operations.

Issues related to encryption and its adoption in messaging systems are inextricably entangled with issues of standardisation (both formal and informal), the political economy of software development and adoption, and the consequences of choices about technical architectures. This concluding chapter will offer some reflections on these different aspects as informed by our fieldwork, and will then tie the different ways in which political effects can be achieved through technological choices to broader contemporary political concerns related to privacy, in particular how these choices interact with recent supra-national legal instruments such as the General Data Protection Regulation (GDPR). Finally, we will comment on the implications of our study and of cognate research for the development of social studies of encryption and for its interactions with Internet governance research, in particular work inspired by STS.

Internet rights and freedoms ‘by architecture’

Throughout its chapters and its various stories about the development and use of encrypted messaging systems, this book has addressed the question of the relationship between different kinds of technological architecture – most notably those that support the concealing of metadata, data or communications – and Internet freedoms and fundamental rights.

The relationship between human rights and Internet protocols is starting to become an issue in a few arenas, both political and technical; for example, the IRTF and its Human Rights Protocol Considerations research group (which will be further discussed below). As Stéphane Bortzmeyer (2019) aptly contends, the idea progressively taking hold in such arenas is that

the Internet is not just an object of consumption, which customers would simply want to be fast, cheap and reliable, as they would a car or the electrical grid. We do business, politics, we talk, we work, we get distracted, we date: the Internet is not a tool that we use, it is a space where our activities unfold.

It is, to paraphrase Carl Schmitt (2003), the nomos of the twenty-first century, a normative universe where fundamental rights, as constrained or enabled by the platforms and protocols of the Internet, are in many cases just as important for people as the guarantees provided by governments.

The Internet as a multifaceted public space intersects with a pre-existing human rights framework. Human rights are formalised in texts such as the 1948 Universal Declaration of Human Rights (UDHR), where they are claimed to be universal, indivisible and inalienable. Despite such claims, it is clear that human rights are not absolute, as they may be in conflict with one another – in fact, they usually are. For example, the right to freedom of expression may conflict with the right not to be insulted or harassed, and freedom of expression may conflict with the right to privacy, if we want to prevent the publication of personal data. Historically, it has been the task of the legal system to determine the balance between such rights. In the networked age, the question is whether the technical space of the Internet, including its rules, limits and capabilities, has an influence on human rights, or whether it transforms human rights; and, if the latter, what concrete policy measures are needed as a consequence.

In 2012, the co-inventor of the Internet and Google evangelist Vint Cerf put forward the proposition that ‘Internet access is not a human right’, arguing that ‘technology is an enabler of rights, not a right itself’, as a human right ‘must be among the things we as humans need in order to lead healthy, meaningful lives, like freedom from torture or freedom of conscience’ and so ‘it is a mistake to place any particular technology in this exalted category, since over time we will end up valuing the wrong things’ (Cerf 2012). Nonetheless, some countries have made Internet access a basic right – Finland, for example – and this sentiment has been echoed by other entities, such as the Constitutional Council in France. Relatedly, it has also been argued that, while Internet access per se may not be a human right, the empowerment such access can provide probably is. As Tim Berners-Lee and Harry Halpin point out, considering the ability to access a particular technical infrastructure as a human right may be less important than defining as a new kind of right the ensemble of social capabilities that the Internet engenders (Berners-Lee and Halpin 2012).

Thus, Internet rights and freedoms may be promoted or enforced ‘by architecture’, including by ‘technology-embedded’ proposals around network neutrality and encryption. Data protection as an Internet right extends the notion of privacy to the digital age, fundamentally reshapes it and puts it into tension with new, similarly transformed networked forms of the right to free expression. This book has unveiled different ways in which technical developments of encrypted secure messaging systems, and of the associated governance models, construct Internet rights and freedoms, and are in turn shaped by them. The makers of ‘concealing for freedom’ technologies, their users and their regulators, operate within arenas of social, political and technical controversy ranging from standardisation and the political economy of software development to choices of technical architecture and business models. The following pages draw conclusions about each of these aspects.

On (de-)centralisation: Choices of architecture as (a substitute for) politics

This book has provided thorough empirical evidence that, in the field of secure messaging as in other fields of protocol and software development, the choice of more or less centralised technical architectures is a context-based compromise, and not the result of choices between abstract models that might have intrinsically better or worse qualities. Decentralised architectures are a suitable solution in particular situations, but they are not always the ideal one: just as centralised architectures can, in some cases, have useful and rights-preserving qualities and, in others, be highly problematic, so decentralised architectures come with trade-offs of their own. The case studies we have examined in Chapters 2 to 4 illustrate, via concrete cases, the extent to which technical decisions contribute to enacting particular configurations of governance and repertoires of action.

Many Internet protocols have a client/server architecture. This means that the machines that communicate are not equivalent. On one side is a server, permanently on and waiting for connections; on the other is a client, which connects when it has something to ask. This is a logical mode of operation when the two communicating parties play distinct roles, as is the case on the Web: when visiting a website, a user is a reader, and the entity which manages the website produces the content to be read. Yet not all uses of the Internet fit into this model. Sometimes one wants to exchange messages with an acquaintance. The communication, in this case, is not one-way, from a writer to a reader, but between peers. In this case, the machines of two humans communicate directly, something the Internet allows via peer-to-peer architecture.

So why go through an intermediary when it is not always strictly necessary? Usually, as Chapters 2 and 3 in particular have shown, it is because the intermediary serves a variety of purposes that range from technical functioning and optimisation to business models, and to organisational and governance forms providing different degrees of control. An example is the storage of messages in case the correspondent is absent and their machine switched off. The Simple Mail Transfer Protocol (SMTP), which is the basis of the sending and relaying functions in email services, does not provide for messages to be sent directly from Alice’s machine to Bob’s. Alice’s software sends the message to an SMTP server, which then transmits it to the SMTP server that Bob uses, from which Bob will retrieve it via yet another protocol, probably the Internet Message Access Protocol (IMAP). The consequence of this architecture is that Alice and Bob now depend on third parties, the managers of their respective SMTP servers. These managers can stop the service, limit it, block some messages (the fight against spam always causes collateral damage), and, if Alice and Bob do not use encrypted email protocols, read what passes through their servers. In practice, this may not happen, but the possibility exists, and it is technically very simple to archive all the messages being transmitted.
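To make this relay chain concrete, the following is a minimal sketch, in Python, of what Alice’s and Bob’s client software does; the server names, accounts and credentials are hypothetical placeholders, and a real deployment would add error handling and secure credential storage. The point is simply that neither client ever talks to the other machine directly: each hands the message to, or fetches it from, an intermediary server.

    import imaplib
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.net"
    msg["Subject"] = "Hello"
    msg.set_content("See you tomorrow.")

    # Alice's client hands the message to *her provider's* SMTP server,
    # which then relays it to the SMTP server used by Bob's provider.
    with smtplib.SMTP("smtp.example.org", 587) as smtp:      # hypothetical relay
        smtp.starttls()
        smtp.login("alice", "alice-password")                 # hypothetical credentials
        smtp.send_message(msg)

    # Later, Bob's client retrieves the stored copy from *his provider's*
    # IMAP server: a second intermediary that holds the message at rest.
    with imaplib.IMAP4_SSL("imap.example.net") as imap:
        imap.login("bob", "bob-password")
        imap.select("INBOX")
        _, message_ids = imap.search(None, "ALL")
        print(f"{len(message_ids[0].split())} messages held by Bob's provider")

Every line of this exchange passes through, and can be stored by, the two providers; unless Alice and Bob add end-to-end encryption on top, the operators of these servers can read it.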

Since passing through an intermediate server has consequences, the question inevitably arises: which server to use? A personal machine that we install and manage ourselves, which is the closest a user can get to complete decentralisation? A personal machine run by a friend who knows and takes care of everything? A server or a cluster of servers run by a local collective, as in federated networks? Servers run by actors that mostly operate on centralised architectures, such as a public body, or a Silicon Valley platform like Gmail that is now able to extract information from a sizable portion of the world’s email? The choice is far from obvious, both for developers when they are faced with technical choices, and for users who have to pick one or two communication tools out of many, based on often obscure criteria.

From the point of view of privacy and freedom of expression, centralised architectures are frequently criticised for posing the greatest threats. However, as Chapter 1 has shown, it is not always possible to individualise these problems: even if one does not use Gmail, correspondents who do use it will still expose to Google the exchanges they have with one’s non-Gmail address.

Still, in principle any third party that controls a server has the ability to abuse its power, and machines that are managed by a particular individual, local company or even public administration in federated networks may not, ultimately, be safer than the giant centralised actors of Silicon Valley, as they will simply have fewer resources to solve security and privacy-related issues. A machine run by a well-intentioned but overworked and not necessarily competent amateur can present high risks, not because the amateur is untrustworthy, but because it can be relatively easy to successfully attack the system. At the same time, it should be pointed out that professional servers are not necessarily safer: a number of recent hacks of very large companies have shown that they ultimately present perhaps larger targets. Servers managed by public bodies are an option, but even an administration that is well-meaning at a particular point in time may eventually evolve into one that violates rights and liberties. In this regard, federated systems, which allow servers to be easily interchanged, have some benefits in terms of sustainability. For example, Mastodon, the decentralised microblogging service (à la Twitter), is made up of hundreds of independently managed servers, some of which are administered by an individual (thus, their future is uncertain if this individual abandons this role), some by associations and yet others by companies of different sizes.

What are the implications of decentralised and peer-to-peer architectures for Internet rights and freedoms? First, as we have seen in Chapter 4, we should recall that these terms do not designate a particular protocol, but a family of highly diverse protocols. The most well-known peer-to-peer application in recent Internet history has been for the exchange of media files (e.g. music, video), but peer-to-peer is a very general architecture. And despite being accompanied by a rhetoric of openness and freedom, decentralised architectures also have their problems from the standpoint of Internet rights and freedoms. In the era of Google and Facebook as dominant, centralised, totalising platforms that seek to exert control over all user interactions, it has often been easy to present peer-to-peer as the ideal solution to all problems, including censorship. But as this book has shown in its analysis of the field of secure messaging, the situation is far more complicated than that.

First, peer-to-peer networks have no central certification authority for content; thus, they are vulnerable to various forms of attacks, ranging from ‘fake data’ to ‘fake users’. It should be remembered that at one time, rights-holders circulated fake MP3s on peer-to-peer networks, with promising names and disappointing content, which lured users and eventually led to their being identified by their act of downloading. An attacker can also, relatively easily, corrupt the data being shared, or at the very least the routing that leads to it. Furthermore, in terms of net neutrality, because the peer-to-peer protocols that account for a good deal of Internet traffic are often identifiable within a particular network, an ISP may be tempted to limit their traffic. Many peer-to-peer protocols do not hide the IP address of users; for example, in the popular BitTorrent file-sharing protocol, if you find a peer who has the file you are interested in, and you contact them, this peer will learn your IP address (unless you disguise it by using a VPN). This can be used by rights-holding individuals or organisations as a basis for issuing threatening letters or for initiating legal proceedings, as has been the case with the HADOPI1 in France (Arnold et al. 2014). There are peer-to-peer networks that deploy protection against this leak of personal information, such as Freenet, but they remain rarely used by the public at large.

Another danger specific to peer-to-peer networks is ‘fake users’, also called Sybil attacks: if verifying an identity can be done without needing something expensive or difficult to obtain, nothing prevents an attacker from creating millions of identities and thus subverting systems. It is in order to combat this type of attack that different systems and platforms have resorted to other ways of verifying identity. Bitcoin uses ‘proof of work’, a form of cryptographic proof in which one party to a transaction proves to the other parties that a particular amount of computational resources has been dedicated to a specific objective. Organisations like the CAcert certification authority (Tänzer 2014), or informal groups like users of the Pretty Good Privacy (PGP) encryption program, use certifications created during physical meetings, which include verifying a user’s national identity documents. There is currently no general solution to the problem of Sybil attacks, especially if any solution is required to be both ecologically sustainable – which is not the case for the proof-of-work mechanism – and fully peer-to-peer – which is not the case for conventional enrolment systems, as they require a privileged actor to approve a participant’s entry. Solutions based on social connections, such as the one proposed by PGP, pose problems for privacy, since they expose the social graph of the participants – the list of their correspondents. In this field of alternative methods for identity verification, experiments aimed at allowing gossip in social networks to help verify identity,2 while remaining privacy-preserving, are under way;3 but the road is far from linear.
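To give a sense of why proof of work raises the cost of creating ‘fake users’, the following is a deliberately simplified sketch in Python; it is not Bitcoin’s actual implementation, and the difficulty parameter is an arbitrary illustration. Producing each identity claim requires a brute-force search, while checking it takes a single hash.

    import hashlib
    import itertools

    DIFFICULTY_BITS = 16  # illustrative: roughly 65,000 hash attempts on average

    def proof_of_work(claim: bytes) -> int:
        """Search for a nonce such that SHA-256(claim || nonce) falls below a target:
        costly to produce, cheap for anyone to verify."""
        target = 1 << (256 - DIFFICULTY_BITS)
        for nonce in itertools.count():
            digest = hashlib.sha256(claim + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(claim: bytes, nonce: int) -> bool:
        digest = hashlib.sha256(claim + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

    nonce = proof_of_work(b"identity: some-new-participant")
    assert verify(b"identity: some-new-participant", nonce)

An attacker wishing to flood a network with millions of identities must repeat this search for each of them; that asymmetry is what the mechanism relies on, and it is also the source of the ecological cost noted above.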

As the most recent generation of secure messaging tools develops and, more broadly, the blockchain takes hold as an often more politically acceptable decentralised technology – even as we learn more and more about its governance and technical flaws – the relationship between different architectural models for networking and communication technologies, the choices made and the roads not taken, is likely to remain a controversial issue. Attempts to deploy decentralised technologies and communities within particular territories are currently being piloted.4 More traditionally political arenas are involved in the decentralisation debate: in early 2019, political officials in several European countries declared an intention to actively favour decentralised technologies, blockchain in particular, via legislation. French President Emmanuel Macron, for example, referred to blockchain as a possible way to bring transparency and traceability into the agricultural industry to ensure better quality, and argued that Europe should adopt a ‘homogeneous policy’ in this regard. Within Europe at least, it is therefore likely that further proposals to use blockchain or decentralised technologies for such purposes will see the light in the coming months and years.

The success of the blockchain is perhaps still greater in research projects and in imaginaries of intermediary-less futures than in fully working applications, and we can recognise, as in previous ‘expectation cycles’ related to distributed architectures and a number of other technologies (see e.g. Borup et al. 2006; Brown and Michael 2003), an interplay between potential and hype. Yet the blockchain and its myriad variants have some specific features that will be very interesting to observe in the coming years. In particular, it seems to be the first decentralised networking technology to be widely accepted – ‘celebrated’ would be more accurate – by national and supra-national institutions, despite its first widespread application, Bitcoin, having been born with the explicitly stated goal of making each and every institution obsolete, and despite its birth, development and functionality being subject to several controversies (Musiani and Méadel 2016).

Thanks to the role that local and distributed technologies, relying on networks such as TOR, have played in rallying and organising social movements and grassroots resistance tactics, decentralised architectures are increasingly seen as technologies of empowerment and liberation. Yet they do not escape a powerful double narrative, fuelled by previous narratives depicting peer-to-peer as an allegedly ‘empowering-yet-illegal’ technology. On the one hand, the discourse around empowerment and better protection of fundamental civil liberties is very strong; on the other hand, several projects that have sought to merge decentralisation and encryption to improve protection against surveillance have needed to defend themselves from allegations of use by terrorists and other unsavoury publics (a defence some projects are technically unable to mount).

This dialectic is taking place in the broader context of discussions about civil liberties and governance by infrastructure (Musiani et al. 2016), some of them particularly related to encryption (or the breaking of it), such as the Apple vs FBI case or WhatsApp’s move, in April 2016, to encryption by default (Schulze 2017). Indeed, both the (re-)distribution and the (re-)decentralisation of networks are closely linked – much more so than a few years ago, and in particular after the Snowden revelations – to discussions of surveillance and privacy, and find themselves frequently associated with discussions about encryption and its practical implementations. The next section will discuss this relationship and its challenges.

On encryption: Strategies of concealment as integrity (and power) tools

As we have seen throughout the book, encryption is a controversial issue. As the case studies unfolded, we could see how the different developer teams – and, often, the users of the systems they develop – cope with the fact that debates over encryption policy are framed as pitting security at the cost of civil liberties against new technological freedoms that may pose security risks. While technologists, including several developers and security trainers interviewed for this book, hold that without encryption the right to privacy remains purely theoretical given the ease of spying on digital communications, encryption is frequently framed in political debates as a mechanism that allows criminals to conceal the content of their communications from the judicial system and police.

Our research allows us to point out several ways in which this debate should be nuanced, as we have examined the ‘making of’ encryption and its interplay with Internet rights and freedoms. We have observed how cryptography is not only used to conceal information, i.e. to make data confidential, but also to provide checks on the integrity and authentication of data, even data that is public. For example, in order to verify that data has not been modified either by accident or with ill intent, we have seen hash functions – functions that shrink data to a small code that can be checked independently – being used to check the integrity of the data. We have also seen cryptographic techniques that pair private secrets, called private keys, with public information, called public keys, to both authenticate and encrypt data. With public keys and hash functions, digital signatures can be created to make sure that we know the identity of the entity that originated particular data. This approach, widely used in secure messaging, is useful in a variety of scenarios, including e-signature schemes that can help reduce bureaucracy or prevent the spread of false information. Thus comes the dilemma that, in the field of secure messaging, has been exemplified by the FBI vs Apple case (Schulze 2017): if a government implements a policy to reduce the scope of encryption so that its police or intelligence services can ‘read’ digital messages, there is a clear danger that the government would also inadvertently prevent other uses of encryption, damaging the ability of users to place any trust in the circulation and processing of data by third parties, with wide-ranging negative economic consequences.
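As a minimal illustration of these two uses – integrity checking via a hash, and origin authentication via a digital signature – the following Python sketch relies on the standard hashlib module and the third-party cryptography package; Ed25519 here simply stands in for whichever signature scheme a given messenger or e-signature system actually employs.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    document = b"public announcement: the assembly meets on Tuesday at 10:00"

    # Integrity: the hash 'shrinks' the document to a short code that anyone
    # can recompute and compare against a copy published elsewhere.
    digest = hashlib.sha256(document).hexdigest()

    # Authentication: a signature ties the document to the holder of a private key,
    # so its origin can be checked by anyone holding the corresponding public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    signature = private_key.sign(document)

    public_key.verify(signature, document)              # passes: document unmodified
    try:
        public_key.verify(signature, document + b"!")    # fails: document tampered with
    except InvalidSignature:
        print("signature check failed: data was modified or does not come from this key")

Note that nothing in this sketch conceals the document: it is the integrity and origin of public data that are being protected, which is precisely the point made above.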

For this reason, and acknowledging that developer teams’ respective strategies in terms of architectural choice or standardisation vary greatly, we can recognise a common trend in the different stories of development we have examined: strong resistance to making the encryption of data illegal, for any reason. Actors in the secure messaging field share the belief that cryptography needs to be legal, as digital technologies not only increase the possibilities of surveillance, but do so in a fundamentally asymmetric manner that is incompatible with democracy (see also Bortzmeyer 2019): platforms gather a lot of information about us, but these platforms are opaque to citizens, and the same issue holds true of various state intelligence agencies; furthermore, this data processing happens en masse.

Our observation of the development processes of secure messaging tools as ‘situated practices’ reveals the limits of presenting encryption primarily as a mechanism that will prevent states and police forces from conducting their investigations properly. As we followed developers in their endeavours, we could see that, for them, end-to-end encryption – where the endpoints are the users, and no entity in the middle has the ability to decrypt the message – actually, if only partially, restores the social norms around communication that were expected prior to the advent of digital communication: messages verifiably come from a particular sender, can only be read by a particular recipient, and any interference with them is detectable.

With a revival post-Snowden (e.g. Barr 2016), but with roots in long-standing academic and political debates (Rivest 1998; Soghoian 2010), one heavily debated, controversial policy option concerning encryption has been, and remains, its ‘selective weakening’: allowing both the use of encryption and its possible breaking in specific cases – for example, the idea of a ‘backdoor’ allowing encrypted messages to be decrypted in the course of a terrorism investigation. From the standpoint of technologists, this arrangement is highly problematic because, regardless of the choice of technical architecture, the mathematical algorithms that form the core of encryption cannot work in some cases but not in others: either they make it possible for data to be accessed by anyone who has the key, or they have a flaw, which is exploitable by anyone who is aware of it – an argument made, most prominently, by the Keys Under Doormats report (Abelson et al. 2015). In proposals for ‘backdoors’, one decryption key would be the legitimate receiver’s key, but another key would exist, controlling a ‘middlebox’ that decrypts the message and re-encrypts it to the intended recipient. This second key is held in ‘key escrow’, meaning that it is stored by a third party such as the government and perhaps only available in special circumstances. However, such a solution would, for clear enough reasons, not work in open-source and free software projects (including those examined in Chapters 3 and 4), where review of the code would reveal the backdoor. It would be more realistic in contexts such as those described in Chapter 2, where the user does not have control over the software she is using, and the software she is given may include the backdoor from the start.
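A minimal sketch of the key-escrow logic may help to show where the risk lies. The example below, in Python, uses symmetric keys from the third-party cryptography package as stand-ins for the public-key ‘middlebox’ arrangement described above; the names and flow are illustrative assumptions, not a description of any deployed scheme.

    from cryptography.fernet import Fernet

    # A one-off key encrypts the actual message.
    message_key = Fernet.generate_key()
    ciphertext = Fernet(message_key).encrypt(b"meet at the usual place at 18:00")

    # The message key is then wrapped twice: once for the intended recipient,
    # and once under an escrowed key held by a third party (e.g. a state agency).
    recipient_key = Fernet.generate_key()   # stands in for the recipient's key
    escrow_key = Fernet.generate_key()      # stands in for the escrowed key
    wrapped_for_recipient = Fernet(recipient_key).encrypt(message_key)
    wrapped_for_escrow = Fernet(escrow_key).encrypt(message_key)

    # Whoever holds (or steals, or is compelled to hand over) the escrow key
    # can recover every escrowed message key, and therefore every message.
    recovered_key = Fernet(escrow_key).decrypt(wrapped_for_escrow)
    print(Fernet(recovered_key).decrypt(ciphertext))

The escrowed key thus becomes a single point of failure for every communication passing through the scheme, which is precisely the vulnerability the Keys Under Doormats report describes.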

As one of the developers we interviewed pointed out, though, ‘Obviously, any real threat to the state will not use the latter tools, but [will instead] search for software without backdoors. But this method can work with the honest citizen who, unlike the terrorist, trusts proprietary software, and who communicate with each other through Silicon Valley platforms’. Thus, from the standpoint of this developer and others among our interviewees, the purpose of anti-encryption campaigns for ‘backdoors’ is not, or not primarily, about finding solutions allowing authorities to decrypt for anti-terrorism purposes, but a way to pressure Silicon Valley and other technical actors to include backdoors in their communication software, in order to enable and/or sustain systems of mass surveillance.5 In response to this scenario, the European Commission has recognised in its recent cybersecurity strategy (European Commission 2017) that encryption allows fundamental freedoms to be exercised, and digital freedoms organisations such as European Digital Rights (EDRi) have argued that as such, encryption should be recognised as a tool for countering the arbitrariness of state governments and dominant private actors (EDRi 2017).

Developers of the variety of secure messaging systems examined in this book are coping with several levels of complexity. Their use of modern cryptography and, most often, formally verified protocols has the aim of reducing security problems (e.g. those linked to backdoors) and mistakes. They also predominantly work on the assumption that the cryptography they develop should not need to be understood by decision-makers in order to be deployed; as one of our interviewees put it, ‘no policy-maker should have to know the difference between the Decisional Diffie-Hellman and Computational Diffie-Hellman property’. They also mostly share an understanding that privacy is ‘hard to get right’, due to the varieties of social contexts involved, which we examined in Chapter 1. They are aware that in order for notions of privacy to be meaningful and applicable to development processes, it is necessary to carefully define the threat model and run simulations to see various empirical ways to measure phenomena such as proximity or unlinkability. Finally, several developers are aware that in order to rule out the possibility of backdoors in their or others’ software, the algorithms and protocols using cryptography need to be formally verified or audited by external parties – an aspect that, as the reader will recall from Chapter 5, was also included by the EFF in earlier versions of its evaluation grids. However, for the reasons mentioned above, privacy is harder to actually verify, and does not fit within formal verification frameworks; thus, it is complicated to determine exactly what kinds of privacy are being discussed and whether a given system can support them.

While developers do not expect policymakers to have cryptographic knowledge, they operate within national and supra-national contexts where encryption, and the ability to compromise it, is an intensely political issue if not an outright proxy for power – and, post-Snowden, increasingly so. In 2013, a few months after Snowden’s revelations, the NSA was revealed to have discreetly lobbied for the US National Institute of Standards and Technology (NIST) to include a weakened, possibly deliberately flawed algorithm in a 2006 cryptography standard (Greenemeier 2013). And in 2016, a financial industry group proposed a protocol called eTLS, omitting from it the forward secrecy feature which had been incorporated into the latest version of the Transport Layer Security (TLS) protocol. The European Telecommunications Standards Institute (ETSI) released eTLS, rebaptised ETS to minimise ambiguity, as a standard in the autumn of 2018, to great controversy (Leyden 2019) and steadfast opposition from the Internet Engineering Task Force (IETF).

Standardising bodies such as the IETF and its parallel organisation, the Internet Research Task Force (IRTF), have shown, in the above controversy as well as others, that they are well positioned, via entities such as the IRTF-chartered Crypto Forum Research Group,6 to issue authoritative advice about the safety of cryptographic algorithms. However, there are other important sources of authoritative advice. In Europe, the promotion of best practices in the use of public cryptographic algorithms, their verification and the generation of cryptographic standards was until recently undertaken by the European Union Agency for Cybersecurity (ENISA, which still uses this abbreviation, in reference to its original name, the European Network and Information Security Agency); however, this function is now being devolved to the nation-states, which may entail risks (as discussed above, pressures to introduce backdoors, or uneven levels of cryptographic knowledge within different national contexts).

Furthermore, several debates are taking place at the national level in European countries on end-to-end encryption and on how to strike a balance between the protection of digital rights and law enforcement. In 2017, the French Digital Council (CNNum) issued advice on encryption, reaffirming the usefulness and necessity of encryption technologies in light of the repeated attempts by the Ministry of Interior to challenge their use due to their potential exploitation by terrorists and criminals (CNNum 2017). In 2019, new and worrying signals came from Germany, where, after more than twenty years of unequivocal support for strong encryption (Herpig and Heumann 2019), a law is being examined that would force chat app providers to hand over, on demand, end-to-end encrypted conversations in plain text; this inclusion of Internet services providing encryption software would expand German law, which currently ‘merely’ allows communications to be gathered from a suspect’s device itself (Chapman 2019). In the United Kingdom, within the debates concerning the Investigatory Powers Act,7 the government issued a revised version of the bill that ‘clarifies the government’s position on encryption, making it clear that companies can only be asked to remove encryption that they themselves have applied, and only where it is practicable for them to do so’ (Carey 2016).

In response to the controversy around encryption in Germany, Roman Flepp, from Threema, a Swiss-based end-to-end encrypted instant messaging platform popular among German-speaking users, asserted: ‘Under no circumstances are we willing to make any compromises in this regard’ (quoted in Chapman 2019). However, as this book has shown, the road to ‘no compromises’ is in practice, for developers of secure messaging tools, paved with compromises – some of which relate to technical choices, others to the user publics they target and still others to the broader geopolitical scenarios and debates they operate within. Such debates will no doubt continue as encryption remains a matter of intense public concern, one in which technologists are actively involved, in both actions and words.

On standardisation: Setting (open) norms as solidarity and transparency

The stories of the development of secure messaging tools told by this book are also revealing about how the field is currently approaching issues of standardisation, a process that is simultaneously technical and political; there is an ‘intimate connection between standards and power’, a power that ‘lies in [standards’] very subtlety’ (Busch 2011). Standards describe the specifications for code, and this code may then be independently implemented in conformity with the specification under various licensing options, ranging from open source to proprietary. Geoffrey Bowker and Susan Leigh Star long ago noted that standards play an important role in the making of public policy and, more broadly, of social order: ‘standards and classifications, however imbricated in our lives, are ordinarily invisible […yet] what are these categories? Who makes them, and who may change them? When and why do they become visible? How do they spread?’ (Bowker and Star 1999: 94). A number of case study analyses of competing standards in information technology have helped shed light on these processes, including the birth of the QWERTY keyboard (David 1985), the VHS vs Betamax controversy (Besen and Farrell 1994) and, more recently, debates about the respective merits of two different Internet protocols – IPv4 vs IPv6 (DeNardis 2009).

Technical standards take shape at once in material forms, in social and economic interactions, and in their intended or inferred use. Several standards are developed (and recognised as such) intentionally, resulting from regulatory action or from voluntary adoption. Such formally endowed standards are developed by dedicated organisations, such as the International Organization for Standardization (ISO) or, for the Web, the World Wide Web Consortium (W3C), with the standards taking the form of documents that describe objects, their properties and the extent to which they can be put under strain or stress without breaking or being compromised. One example is the IETF’s Request for Comments, or RFC, whose evolution over time contributes to making visible the transformations of the IETF, including changes in both its organisational forms and practices (see Braman 2016).

For ‘ordinary’ users, the meaning and pervasiveness of standards in their everyday lives, as operative in a myriad of different situations, often escape understanding (Star and Lampland 2009). The publication of a standard is only the beginning of the norm-making process – and indeed, as the Signal case examined in Chapter 2 has shown, sometimes it is not even a necessary step. The adoption of something close to a ‘standard’ may occur either de facto or accidentally, with seemingly minor decisions and actions becoming crucial for the development of a field in a particular direction. In some instances, what determines the adoption of an object, process or protocol as a standard is its ability to circulate and gain recognition: factors such as popular demand, perceived quality and the credibility of its developers become crucial for the standard’s success. The spread of standards through social, technical and institutional media is a multifaceted process; mechanisms of standardisation are economic, social and technical – with different degrees of intentionality – alongside the ‘official’ institutional practices of certification and harmonisation (Loconto and Busch 2010).

The issue of open standards seems particularly salient for encrypted messaging and its potential for ‘concealing for freedom’. Open standards are produced by standardising bodies that allow anyone to participate in the process. This openness is seen as maximising both transparency and the collective intelligence gained through the ‘wisdom of the crowds’, which is argued to be particularly useful for standards around security and cryptography (Simcoe 2006). Open standards bodies have produced most of the technical standards that form the core of the Internet, like TCP/IP, HTML and TLS. As we began to see at the end of the previous section, however, standard-setting bodies can be subject to lobbying and wider agenda-setting strategies that can, in some instances, even be characterised as manipulation. As we saw in Chapter 2, the perception of standards bodies as possibly fragile in the face of lobbying is the reason why secure messaging developers increasingly emphasise the institutionalisation of standardising bodies and their progressive distancing from coding communities, which often creates an environment that is less suitable for experimental and unfinished projects (as is the case for several ‘young’ systems in the secure messaging field).

However, our developer interviewees also emphasise that they perceive differences between standardisation bodies, with several national and closed standards organisations being seen as more prone to manipulation, whereas open standards bodies are seen as having more rigorous processes. One of our respondents referred to the contentious standardisation of the Office Open XML file formats in the late 2000s, ultimately designated as an ‘open’ standard by the International Organization for Standardization – ISO, a standards body composed solely of representatives of nation-states – following pressure from Microsoft (see also Ryan 2008).

Unlike standards bodies comprising only commercial organisations (e.g. ECMA, the European Computer Manufacturers Association), open standards bodies usually employ a multi-stakeholder model, including public institutions, private companies, universities and individuals. Open standards bodies for the Internet include the IETF and, for the Web, the W3C. A key aspect addressed by such bodies is patent disclosure. Open standards, particularly those with a royalty-free licensing policy like those produced by the W3C, are perceived by our interviewees to be highly important due to their prohibition of patent licensing fees. Bodies like the IETF require patent disclosures, and have been protagonists in several controversies (e.g. the ETS controversy mentioned above) in their efforts to prevent patented elements from becoming part of open standards.

When it comes to the code underlying the systems we analyse in this book, we can see a consensus among several developers – predominantly those involved in federated and decentralised secure messaging solutions, but also including some involved with centralised components, as we will see later with Signal. Open source, our interviewees argue, should be supported both as a technological principle and as a matter of policy, with some interviewees also suggesting that policies should go even further and foster the adoption of free software (as ‘a matter of liberty, not price’8) as a political programme. This would include guaranteeing four fundamental ‘user freedoms’, according to which the user should be able to: run the program in question as they wish, for any purpose; study how the program works, and potentially change it; redistribute copies; and distribute copies of their modified versions to others. This political programme is meant to go a step further than ensuring code is ‘open source’ and ‘open access’, although it assumes open access to code is a necessary precondition for freedom.

The free software movement originally came out of Richard Stallman’s acknowledgment that the practice of sharing software, with its hacker culture heritage (Coleman 2013), was being increasingly ‘enclosed’ by commercial ventures. In order to create a legally binding form of resistance to these new enclosures, Stallman created the GPL (General Public License9), which postulated that the copyright on a given piece of software is allocated by default to the developer and that the developer can license their software to an unlimited number of people. The GPL requires that all derivative works also use the GPL, thus preserving for posterity the aforementioned four fundamental user freedoms and enabling the software commons to grow virally. The GPL has proved to be a successful license and software methodology: GNU/Linux, which depends entirely on the GPL, underpins most of the Internet’s infrastructure today, and even Google’s Android is based on a free software kernel, although Google outsources vital components to its proprietary cloud.

However, the ‘virality’ of the GPL can pose problems when there is a need to integrate with commercial software, as is often the case in public administration, a point that our interviewees are well aware of. Weaker open-source licenses, which leave open the possibility of private actors copying a piece of software and making a proprietary version of it, again carry the risk of enclosure – as has been explored in recent work, this kind of ‘weak’ open licensing is an important motivation for many companies to fund open-source projects, as it opens the way to eventually fork parts of them into proprietary versions (O’Neil et al. 2020; Birkinbine 2020).

An interesting path for software designed in the public interest – which has, in the field of secure messaging, a notable precedent in Signal, one of its most widespread protocols and applications – seems to be a ‘dual license’ policy, with server-side software being published under the Affero GPL (AGPL10), while client-side software uses the GPL v3.0. The AGPL prevents free software from being made available as a service over a network, such as a web service, without the source code being released. If someone wants to use this software in combination with commercial software, then the creators of the AGPL-licensed software retain the right to grant a non-exclusive and non-transferable ‘dual license’ for its use. This license can be granted for free to public administrations, while for-profit companies can be charged. This is the business model originally used by Signal, and it is an interesting attempt to preserve ‘the best of both worlds’, as one of our developer interviewees put it: allowing integration with commercial software on a case-by-case basis, while keeping the code itself free for the commons.

Encrypting, decentralising and standardising online communications in the GDPR era

So far, this chapter has drawn from our fieldwork with secure messaging developers and users to analyse the different ways in which political effects can be achieved through technological choices at the crossroads of encryption and decentralisation of digital networks, as well as the attempts to build standards in the field. This section will tie these dynamics to broader political concerns related to privacy in our present time, in particular with the advent of the General Data Protection Regulation (GDPR) as the primary and most recent supra-national legal instrument aimed at securing privacy-related Internet freedoms at the European level.

In 2009, European Consumer Commissioner Meglena Kuneva acknowledged that personal data was well on its way to becoming ‘the new oil of the Internet’. One decade on, the inevitable resource wars for the control of mass personal data began, with data extractivism – the strategy of the capture and refinement of personal data to sell on the world market – becoming the primary business model of Internet giants such as Google and Facebook. Public policy, meanwhile, has struggled to fully understand the value of personal data and to translate it, and a set of safeguards for it, into law. The GDPR is the most recent and comprehensive attempt by the European Commission to use legal means to ensure the ‘right to a private life’ in the sphere of digital data, which prompts the question of whether the GDPR provides an effective answer to the erosion of data sovereignty and integrity by technological platforms. While this book cannot provide a full answer to this question, it has further demonstrated that for data protection to be effective, it must also, if not primarily, be defended technologically.

As Laura DeNardis eloquently puts it, ‘cyberspace now completely and often imperceptibly permeates offline spaces, blurring boundaries between material and virtual worlds’ (DeNardis 2020: 3): the ubiquity of the Internet has caused the world to be enveloped in a single, data-driven, technosocial system. Faced with this rapid and all-encompassing set of evolutions, policy has frequently treated emerging technologies primarily as hostile instruments threatening a status quo, thereby creating a need for legal reform (see Elkin-Koren’s 2005 enlightening analysis of peer-to-peer), with, in particular, a frequent emphasis on decentralised and/or encryption technologies as enablers of crime and terrorism. In this conjuncture, questions of data integrity, protection and control require a new approach, in which, rather than attempting to mandate laws that apply to the underlying technology, policymakers would encourage the adoption of technologies that contribute to maintaining the society they wish for. Putting rights-preserving technologies, such as ‘concealing for freedom’ systems, in the hands of local state actors and citizens may be the last and most effective bastion against data extractivism, especially if done in concert with the development of policies around data protection. In this sense, government-mandated organisations (such as the previously mentioned French Digital Council) have produced detailed advice on encryption technologies and have persuasively analysed why such technologies should not be banned, but encouraged. These documents establish a sort of ‘philosophy’ that, whether or not it is followed in actual regulation, is interesting to assess as it takes shape.

Of course, recent (and less recent) scholarship has also analysed a number of less-than-encouraging examples of policymakers fostering the adoption of specific technologies as a tool for (co-)shaping society. Sociotechnical systems like the Chinese ‘social credit system’, where a single government-run platform is used to make decisions regarding nearly all aspects of citizens’ lives, from education to mobility, are run in a privacy-invasive manner that disregards the fundamental rights of individuals, placing the data sovereignty of the nation over that of the individual (Chen and Cheung 2017). And, in the Western world, the inability of the United States to effectively regulate Silicon Valley platforms was a crucial shortcoming, eventually leading to the construction of extensive surveillance practices by private technical companies (Bauman et al. 2014). But precisely because technology is such an important arrangement of power – because a variety of actors seek to achieve their Internet policy objectives via choices of technical architecture and infrastructure, and specific uses of them – the building of a rights-preserving technological alternative, supported by laws that are both aware of the state of the art of technology and co-evolve with it, seems to be an interesting way to build novel technosocial systems that preserve data integrity and sovereignty, both inside traditional national borders and across them.

The ability to guarantee fundamental human rights via technology – to embed rights into technology – is gaining momentum, both in the realm of law (e.g. Article 25 of the GDPR, mandating data protection by design and by default) and in the realm of technology development. In this latter regard, the work of the IRTF’s Human Rights Protocol Considerations Research Group is especially interesting (see also ten Oever 2021). This research group, basing its work explicitly on the UN Human Rights Charter, attempts to undertake technical reviews of protocols so as to determine whether they are compliant with human rights. However, a shortcoming of this group’s work is that the review happens after the protocol has been designed, which restricts the possibility of modifying existing protocols. One encryption-related illustration of this has been the group’s support for the implementation of Encrypted SNIs (Server Name Indication, the indication of the one or more domain names one is connecting to via an IP address) in TLS 1.3, in order to prevent network operators from ‘breaking’ encryption to enable mass surveillance and censorship.

Given that standards bodies like the IETF and W3C have no legally binding regulatory power at the nation-state level (unlike, for example, the ISO), their primary role is to make recommendations, supported by the self-regulation of the industry itself. Internet governance scholars have extensively analysed the historical gap between the merely ‘soft’ power of self-regulating standards bodies like the IETF and W3C and the fact that they created, respectively, most of the core standards of the Internet and the Web. They have also assessed the fairly limited effectiveness of bodies such as the ISO and the International Telecommunication Union (ITU), which are more closely linked to international organisations, in particular the UN system, but whose standards (and the procedures used to attain them) have much less endorsement from Internet technologists (see Mueller 2012; more broadly Harcourt et al. 2020).

However, the more open, ‘multi-stakeholder’ standards bodies, such as the IETF and W3C, are not without pressure from specific groups of actors. A recent controversy illustrating this focused on the standardisation of Encrypted Media Extensions for Digital Rights Management (DRM) tools, pushed by Silicon Valley giants (Netflix and Google in particular). This took place in the face of protests by civil society groups, such as the Electronic Frontier Foundation, and objections from political figures like European Parliament representatives Julia Reda and Lucy Anderson, who were concerned about how DRM violated ‘fair use’ rights in Europe, as well as fundamental human rights (Reda and Anderson 2017). Tim Berners-Lee himself, the inventor of the Web, eventually yielded to powerful lobbying and supported the DRM standard, arguably to maintain the relevance of the W3C in the face of the Web’s slow monopolisation (McCarthy 2017).

European standardisation bodies (both national and supra-national) have also at times proved unreliable in their defence of users’ fundamental rights: for example, during the aforementioned controversy over encrypted SNIs, the European Telecommunications Standards Institute (ETSI) ‘forked’ the IETF standard TLS 1.3 in order to create a ‘backdoor’ explicitly rejected by the IETF, removing the encrypted SNIs supported by human rights activists and enforcing cryptography that would allow the monitoring of encrypted traffic by a ‘middlebox’ between a user and a website. While ETSI justified this choice as useful for quality-of-service management and monitoring within enterprises, its implications for making mass surveillance easier were emphasised by commentators (Leyden 2019). Unlike other entities that are part of the Internet governance galaxy, such as the Internet Governance Forum – an arena for discussion with no decision-making power – standardisation bodies are by design not meant to include civil society at large in their discussion and consensus-making procedures. Thus, the risk exists that the governance of online privacy and surveillance, including encryption issues – and in particular the making of standards for this area – will become increasingly untethered from human rights. This could override traditional forms of sovereignty and the human rights protection system of ‘checks and balances’, shifting towards a ‘state of exception’ in which the private sector would hold excessive and largely unsupervised power.

To summarise, standardisation bodies are under pressure, as they are often not multi-stakeholder, and their actions can cause collateral damage in the absence of safeguards. For all these reasons, it will be increasingly difficult to maintain human rights as an important component of standards-related discussions, and there is a pressing risk that standardisation will happen without taking them into account.

The GDPR in the multi-architectural world of online communications

While the GDPR is the first global privacy framework that extends the rights of European citizens into technological platforms in other national jurisdictions – and for this reason has been hailed as a historic legislative achievement (Gérot and Maxwell 2020) – it was created in reference to a particular type of architectural configuration: one that is centralised and server-based, and that remains the dominant model in the contemporary landscape of Internet-based services.

The General Data Protection Regulation is based on the core concept of informed consent: in order to maintain users’ autonomy over their data, users must be informed both about what personal data is being collected by a particular web service, called the data controller, and about how this data is being processed by possible third parties, called data processors. For example, when using a service such as an online newspaper, the user must be informed of their rights and then ‘opt in’ to the usage of their data and its distribution to ad networks (e.g. Google’s advertisements), which are data processors. A citizen is thus, ideally, granted rights and control over their data in order for their digital autonomy to be maintained. However, the General Data Protection Regulation has been controversial because of what its enforcement has led to in practice. As recent studies have shown (e.g. Utz et al. 2019; Herrle and Hirsch 2019), citizens are cognitively overwhelmed by the scope and scale of the processing of their personal data, and a gap often forms between the ground-breaking goals of the General Data Protection Regulation and its implementation. Users are confronted by endless email approvals and online forms that claim to give a data controller consent, often phrased in inscrutable ways and often presenting a ‘take it or leave it’ approach, leaving citizens with the impression of being forced to comply with a package of limitations in order to use the contemporary Web.

While it is out of scope for this book to comment extensively on the GDPR, we will highlight here one particular aspect: the extent to which one core hypothesis that underpinned its creation may affect its effectiveness when it comes to the tools for secure communications we analyse. For, as a regulation, the GDPR is fundamentally based on the paradigm of centralised architecture: the data controllers are assumed to be large, centralised servers under the control of external entities in the ‘Cloud’, with any processing of data being undertaken by a limited number of identifiable data processors, assumed to be other external third-party ‘cloud’ servers. Further, the assumption is both that these data processors are known in advance by the data controller and that their identity can be communicated to the user before the personal data is transferred to the data controller.

These assumptions seem problematic, though, in light of the great variety of architectural and encryption arrangements we have examined. First, GDPR Article 24 requires controllers to implement systems capable of demonstrating that the processing of personal data is performed in accordance with the GDPR. For server-side ‘cloud’-based infrastructure, this seems difficult if not impossible, as neither the user nor any legal body has insight into data processing on the server side, which is opaque by design. The flows between data processors are likewise unknown to the user, and processing is often done in new and unforeseen ways via shadowy networks of data processors whose complexity is difficult to grasp, much less enumerate – not least because it is nearly impossible to prevent the copying of data. Second, although concrete measures such as data minimisation and pseudonymisation are discussed in Article 25 (centred on privacy ‘by design’ and ‘by default’, as discussed above), it is unclear how compliance with them can be assessed in a centralised architecture. Finally, the GDPR hinges not only on the promise that the server-side data controller will be transparent, but also on the assumption that consent to these increasingly opaque dataflows can be meaningfully given by the user. Attempting to place these constraints on current centralised cloud-based services appears quixotic, as the data protection regime assumes the centralised platforms’ good faith – while in many cases such platforms’ business models are based primarily on data extractivism. It is likely for this very reason that centralised platforms like Facebook and Google would rather be fined by the European Commission, or by specific European countries – no matter how substantial the fines – than comply with the substance of the GDPR, which would require an in-depth ‘re-architecturing’ of their technologies.
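By way of illustration of the measures mentioned in Article 25, the minimal sketch below (in Python; the key, field names and record are hypothetical and chosen purely for exposition) shows one common way pseudonymisation and data minimisation can look in code: a keyed hash replaces the direct identifier, and only the fields required for a stated purpose are passed on to a processor. The difficulty highlighted above is precisely that, in an opaque centralised ‘cloud’, neither users nor regulators can easily verify that anything of this kind is actually happening.

    import hashlib
    import hmac

    # Hypothetical secret held by the data controller, stored separately from the data.
    PSEUDONYM_KEY = b"controller-held-secret"

    def pseudonymise(identifier):
        # A keyed hash replaces the direct identifier; without the key,
        # a processor cannot reverse or recompute the pseudonym.
        return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    def minimise(record, purpose_fields):
        # Data minimisation: pass on only the fields the stated purpose requires.
        return {k: v for k, v in record.items() if k in purpose_fields}

    record = {"email": "alice@example.org", "city": "Lyon", "age": 34}
    shared_with_processor = minimise(record, {"city"})
    shared_with_processor["user"] = pseudonymise(record["email"])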

What is in the GDPR, then, for more decentralised and federated architectures? It has been suggested that a move away from centralised platforms towards alternatives built on the blockchain as an architectural principle could ‘re-architect’ a new generation of platforms more apt to secure users’ rights. However, scholars have argued that, in its current state, blockchain technology is not mature enough to be compatible with data protection regulations (Halpin and Piekarska 2017). Institutions have mobilised on the issue as well: the European Union Blockchain Observatory and Forum has remarked that the EU’s courts and data protection authorities have so far highlighted three main tensions between distributed ledger technologies and the EU’s new data protection rules, namely the difficulty of identifying the obligations of data controllers and processors; disagreements about when personal data should be considered anonymised; and the difficulty of exercising new data subject rights, including the right to be forgotten and the possibility of erasing certain data. The last tension arises because personal data shared on a blockchain is, by design, resistant to modification or erasure (even at the legal request of a data subject), cryptographically intertwined with the entire chain, and public by default. As a solution, the authors propose four rule-of-thumb principles: those designing systems for public use should ‘start with the big picture’ and decide whether a blockchain is needed and really meets their data needs; they should avoid storing personal data on the blockchain, using data obfuscation and encryption techniques instead; if blockchain cannot be avoided, they should favour private, permissioned blockchain networks; and they should be transparent with users (EU Blockchain Observatory 2018).
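The second of these rules of thumb – keeping personal data off the chain – can be illustrated with a minimal, hypothetical sketch (in Python; the store and function names are ours, for exposition only): personal data remains in a conventional store that the controller can erase, while only an opaque, salted commitment is written to the immutable ledger. Whether such a commitment ceases to be ‘personal data’ in the GDPR’s sense once the off-chain record is erased remains a contested legal question, which is part of the tension the Observatory describes.

    import hashlib
    import os

    # Conventional, erasable store under the controller's authority (hypothetical).
    off_chain_store = {}

    def register(user_id, personal_data):
        salt = os.urandom(32)
        off_chain_store[user_id] = (salt, personal_data)
        # Only this opaque commitment is written to the immutable ledger.
        return hashlib.sha256(salt + personal_data).hexdigest()

    def erase(user_id):
        # Deleting the data and its salt leaves the on-chain hash practically
        # unlinkable to the person it once referred to.
        del off_chain_store[user_id]

    on_chain_commitment = register("alice", b"alice@example.org")
    erase("alice")   # the ledger keeps the commitment, but the personal data is gone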

Beyond the blockchain, decentralised networks that use end-to-end encryption and privacy-enhancing technologies, whether federated or P2P, are likely to comply with the GDPR more easily, and to be superior to many blockchain-based applications (from an environmental standpoint, for instance: proof of work requires substantial computing resources, and thus energy). However, the challenges such technologies face, both at the development stage and in relation to large-scale adoption by users, are numerous and have been discussed at length in this book, especially in Chapters 3 and 4. From a theoretical standpoint, in a P2P network the user herself is the data controller, and freely chooses her own data processors and with whom to share data; in a federated network, the data controller is the entity that runs the server. If end-to-end encryption is used in these models, the personal data are hidden from the data controller in the federated setting and from non-recipient peers in the P2P setting, so that uses of the data not explicitly authorised by the user are rendered technically impossible. Privacy-enhancing technologies, as we have seen throughout the more empirically oriented chapters of this book, can then be used to limit the ability of third parties to determine even the metadata of a communication, rendering the data anonymised by default and thus impossible to process without consent.
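A minimal sketch can make this point tangible: the example below uses the PyNaCl library (our choice for illustration only; the tools studied in this book rely on considerably more elaborate protocols) to show that, when encryption is performed end to end, the federated server or relaying peer only ever handles ciphertext it cannot read, let alone process.

    from nacl.public import Box, PrivateKey   # requires the PyNaCl package

    # Each user generates a keypair on their own device; private keys never leave it.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts a message for Bob using her private key and Bob's public key.
    sending_box = Box(alice_key, bob_key.public_key)
    ciphertext = sending_box.encrypt(b"draft of the article attached")

    # The server (federated setting) or intermediate peers (P2P setting) only ever
    # see `ciphertext`: they can store and forward it, but cannot read or process it.
    relayed = bytes(ciphertext)

    # Only Bob, holding his private key, can decrypt.
    receiving_box = Box(bob_key, alice_key.public_key)
    assert receiving_box.decrypt(relayed) == b"draft of the article attached"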

Decentralised and encrypted protocols are potentially capable of providing a robust technological solution for data protection, one that is compatible with recent legislation. A marriage between these two dynamics has been identified, by the NEXTLEAP project and beyond, as a promising way forward in the quest to ‘conceal for freedom’. However, as this book – and other previous and current work – has shown, ‘reclaiming the Internet’ (Aigrain 2010; Musiani and Méadel 2016) by fostering the coexistence of a variety of architectural, economic and governance arrangements, is a far-from-linear process: one that will include a good deal of experimentation with assemblages of human and non-human actors, rearrangements in power balances and attempts to place Internet rights and freedoms at the core of the technical development and standardisation processes.

Social studies of encryption within Internet governance research: Moving forward

In the final months of writing this book, two book-length contributions were published that are closely connected with the present work. Linda Monsees’ Crypto-Politics: Encryption and Democratic Practices in the Digital Era (2019) explores the post-Snowden debates on digital encryption in Germany and the United States, showing how discussions about the value of privacy and the legitimacy of surveillance practices have closely merged with controversies around encryption technologies, making encryption a subject of technopolitical contestation within multiple expert circles. Philip Di Salvo’s Digital Whistleblowing Platforms for Journalism: Encrypting Leaks (2020) delves into whistleblowing platforms, an increasingly important phenomenon for journalism in the post-Snowden era as safer means of communicating with whistleblowers and obtaining leaks; Di Salvo explores the potential of, and need for, encryption for journalistic purposes, together with the perils of surveillance.

Together with the kind of work we propose in this book, with our investigation of the development and user appropriation of encrypted secure messaging tools, these recent works reaffirm the importance of encryption as an issue worthy of investigation via methods and concepts derived from STS. With this book, we have followed developers as they interact with other stakeholders and with the technical artefacts they design – with a core common objective of creating tools that ‘conceal for freedom’, while differing in their intended technical architectures, their targeted user publics and the underlying values and business models. Together, these stories flesh out the experience of encryption in the variety of secure messaging protocols and tools existing today, and its implications for the ‘making of’ digital liberties. Collectively, our book and these other recent works show how encryption takes shape both in visions for online freedoms and in very concrete, and diverse, sets of implementations – and how this is a core issue of Internet governance.

Internet governance, as a recent and increasingly dynamic body of work suggests, is as much about the work of institutions, and about legislative processes, as it is about the ‘mundane practices’ and the agency of technology designers, developers, hackers, maintainers and users as they interact, in a distributed fashion, with technologies, rules and regulations, leading to both intended and unintended consequences with systemic effects (Epstein, Katzenbach and Musiani 2016). STS approaches such as those we have adopted in this book can help in empirically analysing the diverse forms of decision-making and coordination activities that take place beyond formal and well-defined boundaries (van Eeten and Mueller 2013).

Such an approach is especially relevant at a time when online surveillance and privacy, and the technological and legal means to limit the former and protect the latter, are being identified (e.g. by Mueller and Badiei 2020) as the pre-eminent Internet governance-related issue of the last decade. It is an issue that was catalysed by the Snowden revelations, but that has its roots in longstanding debates about personal data, identity on the Internet and cryptology. Arguably, the era ushered in by the Snowden revelations is one in which the world took full measure of the extent of the United States’ de facto global authority ‘by infrastructure’ over the Internet and became aware of the depth of the US government’s ‘dangerous liaisons’ with private intermediaries (Musiani 2013). This opened up a major crisis of legitimacy for the US’s continued role as the foremost actor in Internet governance and arguably contributed – even if the process was, slowly but surely, already underway before Snowden – to the so-called ‘IANA transition’, the process through which the US relinquished its control of the Domain Name System root, leading to substantial reforms in the accountability mechanisms through which the Internet Corporation for Assigned Names and Numbers (ICANN) manages it.

In parallel, recent years have also witnessed the rise of new ‘superpowers’ in Internet governance, most notably Russia and China (see e.g. Litvinenko 2020 and Negro 2017), whose predominant strategy has been to achieve ‘digital sovereignty’. This is the idea that states should reassert their authority over the Internet and protect their nation’s self-determination in the digital sphere, not by means of supranational alliances or international instruments, but by increasing their independence and autonomy at various technical, economic and political levels.

In this multi-faceted contemporary Internet governance scenario, encryption is becoming a central issue. As scholars of encryption grounded in the social sciences, we examine the variety of ways in which journalists and activists choose and use encrypted tools to communicate with their sources, and we explore the numerous arenas where technopolitical controversies about encryption happen. In the present work, we have analysed secure messaging protocols and applications as they are developed and appropriated by different groups of pioneer users, and have shed light on the manifold ways in which the ‘making, governing and using’ of encryption in online communications unfolds; in doing so, we have contributed to articulating and solidifying the study of the ‘mundane practices’ of Internet governance. At the same time, we have explored how these practices cannot do without a constant entwining with the institutional arenas in which political narratives and agendas about encryption take shape – an aspect which has been the focus of this concluding chapter. Indeed, institutions of Internet governance can and should be analysed with the help of the STS toolbox, understanding their authority not as a fait accompli, but as the result of their ability to renegotiate and reconfigure themselves in moments of controversy and destabilisation, in order to maintain momentum and legitimacy (see Flyverbom 2011 and Pohle 2016).

We also contribute to conceptualising encryption as a fully interdisciplinary subject of study – one that is as much the prerogative of the social sciences as of computer science and legal studies – by unveiling the ‘informal’ dimensions of the power arrangements surrounding it. Informal they may be, but they are no less crucial than institutional debates and decision-making processes in co-shaping our rights and freedoms – as citizens of online communications, and as stakeholders in how the Internet is governed today and will be governed tomorrow.

Notes

1 The HADOPI law (a quasi-acronym, in French, of Haute Autorité pour la Diffusion des Œuvres et la Protection des droits d’auteur sur Internet), or ‘Creation and Internet’ law, was introduced in 2009 in France. It mandated the so-called ‘graduated response’ or ‘three-strikes’ procedure, which could eventually allow a user’s Internet connection to be terminated in cases of repeated offence (even though, because of widespread controversy and the eventual abrogation of this provision, enforcement in practice never reached this final and drastic step).

2 See Chapter 3 and, in particular, the discussions concerning the implementation of Briar’s group chat.

3 One being ClaimChain, a protocol developed by the NEXTLEAP project.

4 Examples are the DECODE project, which has a decentralisation dimension in addition to an open-source one, and in France, the Territoire Apprenant Contributif (TAC) experiment in Plaine Commune, Ile-de-France, which aims to propose an innovative appropriation of digital technologies by the territory to test new economic and social models, through the implementation of a new distributed network architecture, https://recherchecontributive.org.

5 Recent history is again useful here, in recalling an infamous case of the subversion of a standards body: the Dual EC pseudorandom number generator (used to generate keys), ratified by the US standards agency NIST, had a backdoor inserted into it by the NSA. Dual EC was deployed in Juniper routers, where the backdoor was later exploited by a still-unknown actor to compromise these routers and install a further backdoor of their own to decrypt network traffic (Checkoway et al. 2016).

7 An Act of the Parliament of the United Kingdom (passed in 2016) that sets out and expands the electronic surveillance powers of the country’s intelligence services and law enforcement agencies.