
Introduction

(F)or a time, I operated part of the US National Security Agency’s global system of mass surveillance. In June 2013 I worked with journalists to reveal that system to a scandalised world. Without encryption I could not have written the story of how it all happened […] and got the manuscript safely across borders that I myself can’t cross (Snowden 2019a).

Thus wrote Edward Snowden, the former NSA contractor turned world’s most famous whistleblower for digital liberties, on 15 October 2019, shortly after the damning history of his revelations, and the process that led to them, was published under the title Permanent Record (Snowden 2019a, 2019b). According to Snowden, the current debates around encryption have fundamental implications for our individual liberties and collective presence on the Internet, and attempts to undermine encryption amount to no less than making Internet users ‘vulnerable by design’. Encryption has become one of the core battlegrounds of Internet governance.

Indeed, as can hardly be disputed anymore, Snowden’s 2013 revelations have been a landmark event in the development of the field of secure communications. Encryption of communications on a large scale and in a usable manner has become a matter of public concern, with a new cryptographic imaginary taking hold, one which sees encryption as a necessary precondition for the formation of networked publics (Myers West 2018). Alongside turning encryption into a fully-fledged political issue, the Snowden revelations catalysed longstanding debates within the field of secure messaging protocols. The cryptography community (in particular, academic and free software collectives) renewed their efforts to create next-generation secure messaging protocols in order to overcome the limits of existing protocols, such as PGP (Pretty Good Privacy) and OTR (Off-the-Record Messaging). Protocols are a vital part of the Internet’s functioning, providing its conceptual models as well as the set of specifications that explain how data should be regrouped into packets, addressed, transmitted, routed and received; as Laura DeNardis made clear in Protocol Politics, the selection and adoption of particular protocols carries important political and economic implications, as well as technical ones (DeNardis 2009).

With recent events such as the introduction of end-to-end encryption in WhatsApp, the most popular instant messaging platform, billions of users started protecting their communications by default and on an everyday basis, often without realising it. While the mantra ‘I have nothing to hide’ is still widespread among Internet users, interest in ways to secure and preserve online communications with means such as encryption is increasing, and this has important socio-technical consequences. While these consequences apply particularly to those whose lives depend on an accurate appreciation of the risks related to their own profession or political context, they are also ever more relevant to the ‘ordinary citizen’.

In response to the increasingly widespread understanding of security in online communications as an important social and political issue, as well as a technical one, encrypted secure messaging is a vibrant field in the making. Developers remain in flux about how to implement security and privacy properties – despite a number of novel projects seeing the light of day – while users have not yet converged on a single application. For example, there is still debate about cryptographic properties such as forward and future secrecy, group messaging and non-repudiation. Furthermore, there is no clear standard to adopt for these properties. In terms of privacy, work is still immature; even the most popular secure messaging applications, such as Signal, expose users’ metadata via the requirement to associate users with their phone number or, in the case of Wire, leak social graphs via a centralised contact book.

For all these reasons, next-generation secure messaging appears unstandardised and fragmented, leading to a state where secure messaging users exist in dozens of ‘silos’, unable to inter-operate (Sparrow and Halpin 2015). The ‘silo effect’ is considered among the most important obstacles to the adoption of secure messaging apps:

The common trend of creating new secure communication tools and assessing the usability of these tools is a significant obstacle to adoption due to creating fragmented user bases. Also, to reach their communication partners, participants needed to use tools that are interoperable (Abu-Salma et al. 2017a: 137).

This is in stark contrast to the model used for email, where any email service can openly communicate with another in a federated fashion, agreeing upon collective standards of operation. Thus, developers of modern secure messaging protocols are facing a number of trade-offs between various design issues, including security and privacy properties, the introduction of group support features, the degree of decentralisation of the application, standardisation attempts and choices related to the licenses under which they release their software. In the meantime, vibrant discussions are happening around adapting open federated protocols (e.g. XMPP) to integrate the most recent security features.1 New initiatives, based on decentralised, federated or peer-to-peer protocols, are emerging, yet still suffer from a number of limitations related to usability and scalability.

All these dynamics are relevant to this book’s examination of developers’ actions and their interactions with other stakeholders (users, security trainers, standardising bodies and funding organisations, for instance) and with the technical artefacts they develop. These actors share a core common objective of creating tools that ‘conceal for freedom’, while differing in their intended technical architectures, their targeted user publics and their underlying values and business models.

As next-generation encryption is shaping the ways in which we can securely communicate, exchange and store content on the Internet, it is important to unveil the very recent, and sometimes less recent history of these protocols and their key applications, to understand how the opportunities and constraints they provide to Internet users came about, and how both developer communities and institutions are working towards making them available for the largest possible audience. Efforts towards this goal are built upon interwoven stories of technical development, of architectural choices, of community-building and of Internet governance and politics. This book is about these stories – exploring the experience of encryption in the variety of secure messaging protocols and tools existing today, and the implications of these endeavours for the ‘making of’ civil liberties on the Internet.

This book intends to provide two main empirical and theoretical contributions: firstly, it seeks to enrich a social sciences-informed understanding of encryption, including examining how its different solutions are created, developed, enacted and governed, and what this diverse experience of encryption, operating across many different sites, means for online civil liberties; secondly, it wishes to contribute to the understanding of the social and political implications of particular design choices when it comes to the technical architecture of digital networks, in particular their degree of (de-)centralisation.2

The first part of this introduction will discuss both of these perspectives. A second part will introduce our case studies, our methodology and our approach to fieldwork. In its final section, in order to facilitate navigation through the following chapters, we situate our case studies within a genealogy of the fundamental protocols of the encrypted messaging field, and we introduce relevant concepts and definitions that will be used in the following chapters, such as end-to-end encryption, centralisation, federation and peer-to-peer/decentralisation.

For a social sciences perspective on encryption

For quite a long time, ‘encryption’ as a research subject has mostly been the prerogative of computer scientists, with more ‘social’ issues concerning it often being confined to debates about usable security (i.e. discussions taking place within the computer scientist community, and based on survey-type studies that aim to find ways of making encrypted tools easier to use). Since the Snowden revelations, encrypted communications, and the goals of privacy and security they seek to enhance, have become a matter of widespread public debate; it is therefore important that the social sciences take up the challenge of investigating in depth how encrypted messaging tools are conceived and developed, how they are taken up by different user profiles – sometimes in unintended or unforeseen ways – how they inspire and are inspired by different imaginaries, and how they eventually become the target of governance. With this volume, and the three-year investigation it is based upon, we seek to contribute what is, to our knowledge, the first book-length endeavour focused on this subject that is grounded primarily in science and technology studies (STS) and more specifically in the sociology of innovation processes and technical development. First and foremost, our work exists in dialogue with contributions that have recently sought to apply some central concepts in the social sciences, in particular STS, to the study of encryption.

Encryption is a matter of competing imaginaries and of the visions, designs and implementations they co-shape, as Sarah Myers West has recently argued (2018). People think about encryption through cyphers (that transpose letters of an alphabet), and through codes (that replace words) in different social, cultural and political contexts. As Myers West notes, encryption has built its different meanings in the realm of national security and secrecy and in that of democratic systems, in each of which it enables private communication and makes it possible to avoid surveillance and potential social or political sanctions. Myers West’s STS-driven investigation into encryption imaginaries is especially resonant with our research, as it illustrates how similar technologies may acquire different meanings and roles in different cultural settings. A particularly important insight is that these technologies should be understood not only in a technical sense but in the specific social, cultural and political contexts in which they are used. Myers West’s conclusions are important to emphasise, as this book will provide numerous examples of how technologies (and technologists) do not determine universal solutions when it comes to the role and impact of encryption, with sociocultural contexts of use being paramount. The historical dimension of these sociocultural contexts and their evolution over time should also be taken into account, as Isadora Hellegren points out in her work – grounded in both discourse theory and an STS sensibility – that sees discourse as a contextual, structuring and performative process of meaning-making (Hellegren 2017). 
The multifaceted meaning of encryption evolves not only across communities of developers and users, but also across time, and understanding how various actors have constructed specific understandings of freedom with regard to technologies like encryption is significant to Internet historians, hackers, programmers and policymakers, as all of these actors are involved in constructing the form, function and meaning of Internet freedom, in particular when it comes to its relation to the state.

Encryption, and the debates around it, are the result of multiple public spheres and expert circles, embedded in broader Internet-and-society questions such as the control of networked media, surveillance and the protection of personal data. Linda Monsees’ recent Crypto-Politics (2019) is an important contribution to the study of these aspects and more broadly to the development of a social sciences perspective on encryption, even if her primary focus is not the making of encryption for a specific use, but rather discourses on encryption as a whole as they unfold in traditional and less traditional political arenas. Monsees uses discourse analysis methods to examine post-Snowden debates related to encryption in both the United States and Germany and describes the landscape of media and specialist discussions as trying to make sense of today’s ‘diffuse security’ – a context where ‘security practices disperse multiple insecurities and threat images’ (Monsees 2019: 5). Monsees develops the notion of ‘publicness’ to convey the idea that political controversies on encryption are often located outside established political institutions (although those that unfold in more traditional political arenas should not be neglected). In a way reminiscent of previous work on the performative role of controversies revolving around complex and open-ended sociotechnical phenomena, she concludes, echoing Hellegren, that ‘encryption controversies entail specific ideas relating not only to what ‘security’ means but also how these conceptions rely on specific ideas about citizenship, statehood and privacy’ (Monsees 2019: 10).

Beyond the dialogue with this nascent social science scholarship on encryption, our work further seeks to build on, and contribute to, the research of several scholars at the crossroads of media studies, sociology of technology and computer science. They have brought their conceptual and methodological lenses to the study of networked communication technologies and their implications for privacy, security and Internet governance.

The technical development of applications and protocols, and the set of choices involved in this development, critically contributes to making sense of what digital freedoms are, how they should be preserved and who are their adversaries. Anthropologist Gabriella Coleman has paved the way for scholars seeking to explore these questions. In particular, she has examined the role of hacker culture,3 exploring what hackers mean by freedom and how they enact it as a form of self-determination that considers unrestricted access to knowledge a necessary precondition for the evolution of their ‘technical art’ (Coleman 2005). Furthermore, Coleman’s work is of particular relevance for a social sciences-based study of encryption. Together with Alex Golub, Coleman has defined ‘crypto-freedom’ as a particular form of hacker practice, grounded in an understanding of freedom that positions the state as the main adversary in the battle for online privacy. This practice is derived from the particular historical and cultural context of liberalism in the United States and grounded in the belief that this freedom should primarily be preserved and fostered on the Internet through the development and use of encryption technology (Coleman and Golub 2008).

Since the Snowden revelations, several authors have tackled the question of what it means to be online as an individual, a citizen and a consumer in a world that is now broadly aware of the extent to which we are being surveilled. Within this realm, they have focused on the issue of the role played by the technical development of the architectures and infrastructures of online communication.

As explored by the work of Stefania Milan and colleagues (e.g. Milan and van der Velden 2018), we are bearing witness to an increasing variety of ‘data activism’ practices – a set of socio-technical tactics, resistances and mobilisations that adopt a critical approach towards datafication, mass data collection and pervasive surveillance. Data activism, as Milan conceptualises it, can be understood as a contemporary evolution of phenomena such as those analysed by Coleman (as well as Milan herself in previous works, e.g. 2013), like radical tech activism and hacktivism. Data activism is meant to represent the ‘next step’ in these forms of activism. It is both designed for the digital and shaped by the digital, inasmuch as it ‘explicitly engages with the new forms (that) information and knowledge take today as well as their modes of production, challenging dominant understandings of datafication’ (Milan and van der Velden 2018). Interestingly, the proponents of the concept point out that, given that datafication and the uses of ICTs for different political purposes are so widespread and pervasive, data activism might progressively acquire an appeal for more diverse communities of concerned citizens. This might extend beyond previous forms of tech activist engagement that seemed (self-)restricted to a niche of experts and technologists (ibid.). Broadening the frame of, and interest in, tech activism is a widely discussed concern in a number of the most recent encryption projects, including some we will analyse more closely as this book unfolds.

Also relevant to our research, and key to understanding several of the dynamics described in the following chapters, is work that explores what it means to be a ‘digital citizen’ of the post-Snowden Internet. The conceptual lens of ‘data justice’ has been proposed by Arne Hintz, Lina Dencik and Karin Wahl-Jorgensen (2019) to illustrate that not only is citizenship and the possibility of citizen agency in today’s Internet profoundly shaped by phenomena such as massive data collection and commodification, but also that user rights and practices concerning online privacy and surveillance are today conceived of in highly individualised terms. According to these authors, these individualising dynamics engender or at least maintain a context of inequality, as they transfer the responsibility to ‘engage and negotiate citizenship in a digital age onto individuals’ (Gangneux 2019).

As our case studies will illustrate, this issue is important in relation to encryption technologies and their mass adoption because the target audience of secure messaging applications either born or substantially developed post-Snowden is far from being limited to tech-savvy and activist groups: several projects are aimed at widespread use. This is a major change in the field: for a long time, a majority of the technical crypto community acknowledged that greater user-friendliness and usability could realise in practice their desire for large-scale adoption, while simultaneously treating ease of use and comfort as secondary to the soundness of the technology. Scholars have previously suggested that the comparatively little attention given to this issue by developers implies a ‘forced responsibilisation’ of users, as it places all the burden of ‘getting up to speed’ on them: it is up to users to acquire the competencies to compensate for the technical artefact’s lack of usability. It has been further argued that this is to the detriment of the development of resilient collective digital security strategies (Kazansky 2015) and that it ‘delegates’ technical matters to ‘progressive techies’ despite a widespread societal desire to develop technologies for social justice (Aouragh et al. 2015).

Finally, we have found a particularly useful set of concepts within STS-focused studies of Internet governance (see e.g. Epstein, Katzenbach and Musiani 2016). Complementary to the predominantly institutional approaches that set the agenda for Internet Governance (IG) research in its early days – and which remain one of its preeminent features – STS approaches invite a consideration of the agency of technology designers, policymakers and users as they interact, in a distributed fashion, with technologies, rules and regulations. These, in turn, lead to consequences with systemic effects that may, at times, be unintended. Following these approaches, social and political ordering is understood as a set of ongoing and contested processes, which translates into a growing attention to the mundane practices of all those involved in providing and maintaining, hacking and undermining, developing and testing, or merely using the network of networks (Musiani 2015). Conceptually, STS-informed IG research relies on understanding governance as a normative ‘system of systems’ and it acknowledges the agency, often discrete yet pervasive, of both human and non-human actors and infrastructures. Particular attention is paid to the processes by which norms – technical or otherwise – are created, negotiated, put to the test and re-aligned, and also how they raise conflicts. These processes are understood to be as important as the stabilised norms themselves, if not more so. These are conceptual contributions that will be particularly relevant in the central chapters of the book, where we introduce the variety of architectural models that different projects choose to adopt initially and which they subsequently have to ‘tinker with’ due to factors including early user adoption, changes in the developers’ teams and efforts to find suitable business models.

STS also provides crucial conceptual resources to enable the understanding of encryption as a site of controversy and contestation, in particular with the notion of sociotechnical controversy. On the one hand, since the very early days of the Internet, being on and managing the network of networks has been about exercising control over particular functions that provide specific actors with the power and opportunity to act to their advantage. On the other hand, there is very rarely a single way to implement these functions or a single actor capable of controlling them. Thus, the Internet is controversial and contested, both a target and an instrument of governance, and the object of interest of a myriad of actors: from the most powerful and centralised to the ‘simple’ Internet user (Epstein, Katzenbach and Musiani 2016). Infrastructural and architectural arrangements, the development and implementation of particular protocols, can be understood as a fundamental place to exercise economic and political power, as we have examined elsewhere (e.g. DeNardis and Musiani 2016).

The Internet exhibits an increasing number of sites of contestation. These include the interconnection agreements between Internet service providers (Meier-Hahn 2015), the debate around net neutrality (Marsden 2017), the use of deep-packet-inspection (Mueller, Kuehn and Santoso 2012), the deployment of content filtering technologies (Deibert and Crete-Nishihata 2012), ubiquitous surveillance measures and the use of DNS for regulatory aims (DeNardis and Hackl 2015), prediction of people’s online behaviours via algorithmic governance (Ziewitz 2016) and the shaping of the visibility and hierarchies of search engine results (Mager 2012). Furthermore, contentious politics, activism and citizen-led protest are often embedded in the Internet and its applications. This is illustrated, for example, by Milan and ten Oever’s (2017) work on civil society engagement within ICANN (which operates the Internet’s Domain Name System), aimed at ‘encoding’ human rights into Internet infrastructures. Another illustration is our previous research on the shaping and use of citizen- and activist-oriented mobile and Web applications, and how the design of these tools shapes citizen participation and citizen-state interaction (Ermoshina 2016). Among all these examples, encryption technologies for online communications are becoming one of the core sites of ‘governance by architecture’, rife with controversies that concern, in turn, the development of these technologies, their implementation, their (sometimes surprising) appropriation by users and the attempts to regulate them.

Interestingly in this regard, Snowden’s disclosures of 2013, and the subsequent widening of mass-surveillance-related debates, not only revealed a need for further legal reform of intelligence and surveillance systems but have also highlighted ‘a variety of changing practices, policies and discourses that can (…) be related to post-Snowden contentions’ (Pohle and Van Audenhove 2017: 2–3). A number of sociotechnical controversies can be related to this. A notable example was the FBI vs Apple controversy that spanned 2015 and 2016, when Apple Inc. received several orders from district courts in the United States to assist ongoing criminal investigations by extracting data from iPhones with extensive cryptographic security protections. Apple itself could not break this encryption unless it wrote specific new software to enable authorities to bypass such barriers. This debate notably questioned whether – and, if so, to what extent – judicial or governmental authorities could compel technical manufacturers to provide assistance in unlocking devices protected by encryption systems (see also Schulze 2017). Controversies such as this have contributed to unveiling facets of the experience of encryption in today’s Internet and suggest that the most pressing issues of our time related to encryption may be not only legal and technical, but also social.

For a social sciences perspective on (de-)centralisation

As previously mentioned, the second core empirical and theoretical contribution of this book is intended to be the understanding of the social and political implications of particular design choices when it comes to the technical architecture of digital networks, in particular their degree of decentralisation (or lack of it). This issue, which has permeated the debates of both scholars and practitioners of networked technologies since their early days, in fact goes even further back in time. Indeed, the struggle between centralised, profit-driven systems and decentralised, user-controlled, user-innovated architectures is one that appears in many infrastructures and has been a longstanding concern of sociologists and historians of technology (see e.g. Edwards et al. 2007). In the early days of the telephone, people living in rural areas sometimes put together their own phone networks using barbed-wire fences as transmission lines; they were shared, distributed resources (Fischer 1987). The grand battle between trucking and rail in the United States was a duel between huge centralised systems (the railways) and independent entrepreneurs, often individuals, who used trucks to offer point-to-point (rather than station-to-station) delivery, and ultimately won by turning a publicly funded infrastructure (state and federal roads) towards a private purpose. Off-grid solar power installations as an alternative means of producing electricity outside standard circuits, and institutional resistance to these, are another case in point (Turner 2010). It is of little surprise, then, that the tension between different degrees of centralisation of technical architecture may also be found in the Internet, and in the different protocols and applications populating it – a tension that has been evident since its beginnings.

Indeed, in the history of Internet-based services, the concurrent push towards different types of design choice, in particular design based on peer-to-peer versus client/server architecture,4 has for quite some time been a source of compromises and tension – including social, technical, political, economic and legal tensions (see Musiani 2015b). In a client/server architecture, having information stored not on a user’s machine but a separate server, possibly one managed by a third party, greatly complicates resistance to censorship because it provides an obvious control point (DeNardis 2014) for authorities. On the other hand, without such a server, communication between users who are not permanently digitally connected becomes much more difficult. This is why core Internet services such as email usually resort to intermediaries that are able to ensure the ongoing functioning of the service but can also potentially stop it, limit it, block it, and read what passes through the servers they depend on (on the liability of Internet intermediaries for these actions see Riordan 2016).

Efforts aimed at developing decentralised systems date back to the early Internet (Minar and Hedlund 2001). They were generally built as ad-hoc strategic responses to specific threats of shutdown. The file-sharing system BitTorrent, for example, was developed as a response to the shutdown of Napster, in order to make legal prosecution for breach of IP in file sharing networks much more complicated (Izal et al. 2004). Napster had a peer-to-peer component in file transfer, but its search index was run by a small group of people. This implied that the system was fully dependent on these individuals, technically as well as legally. As a consequence, such systems can be effectively neutralised by turning off servers, seizing the equipment, or targeting the people in charge of them (Ku 2016). In contrast, decentralised systems do not have any central servers, and the functioning of the system involves many peers (people, and the computing resources at their disposal) who do not, or may not, even know each other. If any particular node is unavailable to the system, it continues to run regardless. Thus, the logic behind the creation of a decentralised system is usually to be resilient to targeting by authorities, and such systems are accordingly often deemed by technical and political actors in need of such resilience to be superior to proprietary, closed, more centralised systems, because they value long-term robustness over cost-effective commercial expedience (Oram 2001).

Decentralised systems have been subject to ‘waves’ of interest in the last twenty years, starting with the early 2000s file sharing frenzy and the hailing of peer-to-peer as a ‘disruptive technology’ (Oram 2001). In recent years there has been an even greater interest in and uptake of decentralisation. In the process, the motivations for adopting decentralised technologies have broadened from a particular strategy of opposition to specific companies or pieces of regulation, to proposals of an alternative ‘vision’ of what corporate, legal and state institutions should be. Two main dynamics have driven this tendency: the first is the emergence of blockchain technology (in particular with Bitcoin, as a response to the 2008 financial crisis; see Campbell-Verduyn 2017 and Brunton 2019), and the second is Edward Snowden’s revelations of mass surveillance operations facilitated by a number of telecommunications companies and Silicon Valley giants, on behalf of the US National Security Agency (see Pohle and Van Audenhove 2017). These events greatly raised the general public’s awareness of the surveillance-based, and personal data-based, business models of near-monopoly tech companies, and their ‘dangerous liaisons’ with state security agencies (Musiani 2013).

As a consequence of these dynamics, ‘both decentralisation and the notion of authority took on broader meaning and decentralisation became a technical, political, economic and social aim in and of itself, reaching outside the ‘hacker’ circles of the early p2p systems’ (Brekke and Isakiidis 2019). However, this larger appreciation of decentralisation as a principle and a vision is not devoid of side effects; most notably, decentralisation has often become an objective in and of itself, with little understanding of intent or assessment of actual effects. As information studies scholar and Internet pioneer Philip Agre said in 2003, ‘architecture is politics, but should not be understood as a substitute for politics’; decentralised protocols are too readily assumed, because of their technical qualities, to bring about decentralised political, social and economic outcomes. By choosing to structure the central part of this book around the different degrees of centralisation of the technical architecture for the examined secure messaging systems, we intend to take Agre’s lesson seriously, and assess in detail the extent to which economic, social, legal (and last but not least technical) factors complicate the picture of a linear ‘translation’ from a peer-to-peer technical architecture into a successful decentralised socioeconomic system, or from a centralised technical model to a top-down sociopolitical structure.

Case study selection

The core of this book is an in-depth understanding of three different end-to-end encrypted mail and messaging applications. After a preliminary survey of thirty cases of encrypted messaging applications,5 we selected three of them for more detailed study, based on their underlying protocols and on the relative accessibility of their developer communities for interviews and observations. The three applications originally selected were Signal (a centralised end-to-end encrypted instant messaging application), LEAP/Pixelated (a federated end-to-end encrypted asynchronous messaging protocol and client) and Briar (a peer-to-peer end-to-end encrypted messaging application for resilient communication using network-layer protection, such as Tor hidden services).

These three cases offered the possibility to address various research questions, such as the motivations behind:

– particular architectural choices (centralised, federated or peer-to-peer);

– specific choices of licensing, UI/UX design, relations between the underlying protocols and the application level;6

– solutions to privacy properties (such as metadata protection);

– design choices for group communication;

– various approaches to security properties (forward and future secrecy; server-side archives; cryptographic deniability; ephemeral or disappearing messaging).

However, when in September 2016 we started our fieldwork on the three selected projects, we quickly understood that these projects could hardly be treated as discrete given their connections with other initiatives in the field of encrypted messaging and email. In fact, the underlying protocols used by these three projects gave birth to a number of client-side implementations, forked or actively interacted with various applications in the field. Thus, we decided to ‘follow’ the three projects as they grew and transformed and use them as our ‘threads of Ariadne’, respecting the loops and knots that these threads were naturally forming on their way. During our fieldwork we had the opportunity to meet and talk with a large number of professionals, ranging from cryptographers to UI/UX designers, trainers and users, about the protocols and tools we focus on here (in discussions both recorded and off the record).

The research work subtending this book was and is thus an attempt to tell the complex story of protocols and communities, and eventually, to approach questions of governance of encryption protocols. Through a comparative analysis of centralised and decentralised protocols and their implementations, we address here several important and broader questions: What are the architectural patterns of successful centralised and decentralised systems? How do developers go about building scalable and high-performing privacy-preserving decentralised architectures? Can decentralisation help deliver privacy and anonymity? What are the motivations, values and characteristics of user communities that lead to the success or failure of certain (de-)centralised systems?

A note on methodology

Grounded in the different strands of literature from STS and the other disciplines we introduced above, our approach can be described as a multi-sited ethnography, inasmuch as we have undertaken research in, and between, several online and offline locations as part of our study, and we have also explicitly conceived specific technical protocols and systems as ‘part of a larger context that exceeds the boundaries of the field site’ (Muir 2011: 1015). See also a reflection on the method by its first proponent, George Marcus, who points out that multi-sited ethnography has been ‘most creative, critical, and interesting where it has been involved with the [STS] study of distributed knowledge systems’ (Marcus 2012: 27).

We analyse the development of the architectures and the interfaces of messaging apps as ‘meeting points’ between the intentional goals of developers and the needs of users (Oudshoorn and Pinch 2005). In doing so, we aim to provide a fieldwork-driven explanation of emerging systems and communities of practice through ‘analytical thick descriptions’ (for a recent treatment of the concept, first introduced by anthropologist Clifford Geertz, see Ponterotto 2006) of events, artefacts and organisations. In particular, we pay attention to moments of crisis, debate and controversy – to try and understand the life of a technical artefact, from its creation to its appropriation and reconfigurations by users, to its becoming a subject of public debate, of governance, of lobbying. The primary methodology to achieve this goal has been to observe, for relatively prolonged periods of time, specific case-study groups or communities, while on the side conducting in-depth interviews with their members and reading appropriate documentation such as release notes and accounts of working sessions.

Just as we seek a nuanced understanding of developers’ motivations and the representations they have of users and their needs, in the tradition of ‘user studies’ developed within STS (see Oudshoorn and Pinch 2005 or, in the French tradition, Jouët 2000), we understand users not as a homogeneous and passive group, but as active participants in innovation who co-shape technologies. In software development, this is possible via routes such as bug reporting, pull requests on code, mailing list comments and in-person contact between users and developers.

Among our interview subjects, developers were mostly selected on the basis of pre-existing personal relationships that the NEXTLEAP research team members had with the cryptographic research community. For projects with which research team members did not have any previous personal connections, we also reached out to developers via the GitLab and GitHub pages of the projects (e.g. Ricochet, Conversations). In contrast, user studies tended to be conducted with individuals selected through their attendance at training events in their local environments (both high-risk, in the case of Ukraine and Russia, and low-risk, in the case of France, Germany, Austria and the United Kingdom).7 Our selection of events and conferences to attend was driven primarily by our interest in speaking to users from high-risk contexts. Indeed, while the inclusion of high-risk individuals was important for the project (as Chapter 1 will show), due to the level of repression these users face in their native environment, it would have been difficult if not impossible to interview them there, since they would be unable to speak openly. This was the case for users from Egypt, Turkey, Kenya and Iran, whom we interviewed in March 2017 at the Internet Freedom Festival and at RightsCon. All interviews were conducted between autumn 2016 and the late summer of 2018, with 63 interviews in total. At several of these gatherings and training events, we also asked our respondents to take a moment to provide us with a drawing or a graphical representation of what/who they considered to be their security threat or adversary when it comes to digital communications. These images proved to be a very fruitful addition to our fieldwork materials, as Chapter 1, in particular, will show.

Throughout our research, and due to the sensitive nature of our subject vis-à-vis online civil liberties, it was important to keep questioning the modalities of our access to fieldwork, and to be continually reflexive about our core assumptions and the ethical guidelines we as researchers were working with – questions that have been identified as particularly relevant when issues of secrecy, privacy and security are at play (Barbosa and Milan 2019; De Goede et al. 2019; see also the Appendix to this book).

Being the ‘useful sociologist’ in a team of technologists

The fieldwork behind this book was a journey through disciplines, fields of expertise, methodologies and communities – an experience of professional and sometimes personal transformation for us as researchers, that led to a redefinition of our role as sociologists within a tech-oriented project. The NEXTLEAP team was a consortium of six partners: four were research teams based at academic institutions (three in computer science, one – the authors of this book – in sociology), one was a research centre whose activities include a mix of philosophy-inspired action research and outreach to civil society and institutions, and one was a software development firm. Thus, the team was far from being exclusively composed of academics, and STS academics were a definite minority.

We, the authors of this book, come from what is often considered as ‘the temple of actor-network theory’:8 the Center for Sociology of Innovation, based at the MINES ParisTech school of engineering in Paris. We therefore initially approached our research on secure messaging as a challenge for STS. Our initial academic desire was to target the work of technical actors – developers and protocol designers – and the encryption protocols themselves. However, after we delivered the first state-of-the-art description of end-to-end encrypted messaging applications and protocols, we faced a clear demand from other members of the research team: ‘focus on users’. The following months of fieldwork were marked by several interactions between us and the technologists in the team, and ultimately led to an important redesign of our interview guide. At times we were also asked to revise our research questions and priorities to include a clearer focus on insights from user-oriented interviews. Eventually, we came to understand how we looked to the rest of the team: when software development cycles include a non-technical role, this role is usually that of the ‘usability researcher’, a place that had been, somehow, automatically reserved for us.

Thus, the story behind this book is also a story of two STS researchers defining their own role within an interdisciplinary team, not without a certain amount of tension. The attempts to convey to the technologists what our STS approach was, and how it could actually be useful for them and their development process, despite not being straightforward user studies, eventually led us to develop our own ‘circumvention techniques’ to make collaborative work possible.

Embodying the ‘useful sociologist’ in a team of technologists, we found ourselves between fields, something that is also reflected in our previous publications related to the project. We found ourselves presenting our work at conferences on usable security, publishing our work in computer science journals focusing on usability studies, side-by-side with usability researchers. However, we maintained a connection with the STS world through a thin but, we believe, robust thread of reflexivity, as we developed a critical attitude to our own usability alter-ego and managed to use it as a key that helped us penetrate the otherwise quite hermetic world of crypto protocol design. While, by consortium design, our interactions and collaborations with our primarily technical colleagues were frequent and close, this book is most definitely, in retrospect, a way to tell the story of our three-year investigation that is intimately our own, departing, when necessary, from the compromises needed throughout the project to produce results and publications driven by the conventions of both computer science and the social sciences.

The first step in this process was perhaps the Autocrypt9 gathering in Berlin, in December 2016, where Ksenia was invited to talk about ‘user needs and desires’ for encrypted email and messaging and develop ‘use-cases’ that could potentially be helpful in the design of the new specifications for email encryption. This event became the true starting point for our fieldwork – a gathering of some of the most advanced cryptographers and developers, working on projects such as Matrix.org, Enigmail, Wire, Secure Scuttlebutt, LEAP, Riseup and others. At the end of the day, the rest of the NEXTLEAP research team’s ‘hunt for the user’ had finally brought us to the heart of ongoing development and protocol design work.

Thus, this book is an occasion to reflect upon our work as an example of embedded real-time STS, where researchers are active participants in technoscientific work involving multiple stakeholders and contingencies. Throughout our case studies, we describe several examples of how our research fits into protocol design work, and occasionally reflect upon tensions within the research team. Ultimately, the desire to ‘meet the user’, that shows through the work of our technical colleagues, tells us something about the evolution of encrypted messaging towards a more human-centric design, reflected in recent developments among certain Internet governance standardisation bodies, such as the IETF (Internet Engineering Task Force) and its research branch IRTF (Internet Research Task Force), that seek to include human rights considerations in protocol design (see ten Oever 2021).

Before we present the structure of the book, and in order to facilitate navigation through the chapters that follow, the last part of our introduction will present a genealogy of the fundamental protocols in the encrypted messaging field; in it, we introduce relevant concepts and definitions that will be used in the following chapters.

Encrypted messaging protocols: The short genealogy of a ‘feedback loop’

The most recent generation of tools for secure communications (appearing in the 2010s, in particular post-Snowden) marked the rise of encrypted secure messaging ‘for the masses’.10 Yet despite this recent success vis-à-vis the general public, encrypted messaging is an unstandardised and fragmented field, as developers remain in a state of flux about how to implement security and privacy properties. In particular, developers face a number of trade-offs between various design issues, including security and privacy properties, the introduction of group support features, the degree of decentralisation of the application, and standardisation and licensing attempts. To attempt an initial systematisation of this complicated landscape, our first step was an in-depth survey of thirty email and chat applications offering end-to-end encryption.11 As part of this, we analysed their architectural and protocol choices, as well as their interfaces and business models (see also Ermoshina, Musiani and Halpin 2016). As an introduction and contextualisation of our research subject, this section provides a historical look at the development of email and secure messaging protocols in the light of cryptographic and usability problems that these protocols have sought to solve, while introducing relevant concepts and definitions that will be used in the following chapters. This brief, non-linear history of protocols and applications that have recently built the field of secure messaging is also a first look at an issue which we will further develop in the coming chapters: the technical and organisational choices made by protocol designers and app developers are, in fact, enacting various forms of freedom, both for users and developers.

End-to-end encrypted messaging is increasingly prominent, with its adoption by large proprietary applications such as WhatsApp and Facebook Messenger. In ‘end-to-end’ encrypted messaging, neither the server that hosts messages for a user nor any third-party adversary that intercepts the data while a message is en route can read the message content, due to the use of encryption. The ‘end’ in ‘end-to-end’ encryption therefore refers to the ‘endpoint’, which in the case of messaging is the client device of the user rather than the server.
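The endpoint logic can be illustrated with a deliberately minimal sketch (our own toy example, not any real messenger’s code): encryption and decryption happen only on the two client devices, so a server that merely relays the message sees nothing but ciphertext. The XOR ‘cipher’ and the fixed pre-shared key below are placeholders; real messengers use authenticated ciphers and key-agreement protocols.

```python
# Toy pre-shared secret, known only to the two endpoint devices;
# the relaying server never holds it. Fixed here for reproducibility.
shared_key = bytes(range(1, 33))

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR stands in for a real cipher (illustration only); XOR with the
    # same key is its own inverse, so this function also decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"hello Bob"
ciphertext = toy_encrypt(shared_key, plaintext)          # on Alice's device
assert ciphertext != plaintext                           # the server relays only this
assert toy_encrypt(shared_key, ciphertext) == plaintext  # on Bob's device
```

The server in this sketch could store or forward `ciphertext` indefinitely without ever learning `plaintext` – precisely the property at stake in the ‘untrusted server’ debates.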

Since the Snowden revelations, the academic cryptographic community has, with renewed impetus, sought to rigorously engage with the ‘untrusted server’ problem. This is an issue that until recently had felt, as suggested by Phillip Rogaway, ‘almost intentionally pushed aside’, although it is perhaps ‘the most fundamental privacy problem in cryptography: how can parties communicate in such a way that nobody knows who said what’ (Rogaway 2015). The so-called ‘new generation secure messaging protocols’, such as Signal’s Double Ratchet, seek to address these issues and provide a remedy for the security and privacy flaws identified in older protocols, such as OTR or PGP. The success of the Signal Protocol, widely forked and adopted by many secure messaging projects, has catalysed debates within cryptographic communities, and led to the revival and renewal of older protocols, creating a ‘feedback loop’ effect. Thus, the genealogy presented here challenges linear histories of secure messaging protocols, as it unveils dynamic iterations between recent and older protocols, as well as between what were considered ‘synchronous’ and ‘asynchronous’ protocols.
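For readers unfamiliar with the ratchet idea that gives the Double Ratchet its name, the following simplified sketch (our illustration, not Signal’s actual code; the real protocol also mixes fresh Diffie–Hellman outputs into the chain) shows its symmetric half: each message gets its own key derived from a chain key, and the chain key is then hashed forward and the old one discarded, so compromising the current state does not expose past messages (forward secrecy).

```python
import hmac, hashlib

def kdf(chain_key: bytes, label: bytes) -> bytes:
    # Derive a new key from the current chain key (HMAC-SHA256 as a KDF).
    return hmac.new(chain_key, label, hashlib.sha256).digest()

chain_key = b"\x01" * 32   # placeholder for a negotiated shared secret
message_keys = []
for _ in range(3):
    message_keys.append(kdf(chain_key, b"message"))  # key for one message
    chain_key = kdf(chain_key, b"chain")             # ratchet forward
    # the previous chain key is now discarded: an attacker who steals the
    # current chain_key cannot recompute earlier message keys

assert len(set(message_keys)) == 3                   # one distinct key per message
```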

Email encryption

Historically, email has been considered a form of asynchronous messaging, where a user does not have to be online to receive the message, while chat is considered to be a form of synchronous messaging, where a user has to be online to receive the message. However, these distinctions are increasingly blurring now that popular chat protocols generally support asynchronous messaging, and the email protocol is more and more often used for instant messaging, with the rapidly evolving galaxy of chat-over-email projects (see Chapter 4). Email is now considered a possible infrastructural solution for projects seeking not only privacy and security but also interoperability and some resistance to censorship.

Email is based on standardised and open protocols that allow interoperability between different email servers, so that, for example, a Microsoft server can send email to a Google server. SMTP (Simple Mail Transfer Protocol) is the protocol originally used for transferring email and as such is one of the oldest standards for asynchronous messaging, first defined in 1982 by the IETF12 (Unger et al. 2015); by default, it includes no provision for content confidentiality. Classically, as the NSA’s PRISM program has eloquently revealed, email is sent unencrypted and so the server has full access to the content of email messages. The federated nature of email infrastructure makes this even more problematic, as trust in service providers within decentralised systems is hard to guarantee. Thus, developers have for a long time made clear their intention to progress towards usable methods of end-to-end encryption for email.

To add end-to-end encryption capabilities to email, the PGP (Pretty Good Privacy) protocol was created in 1991 by Phil Zimmermann, as an ambitious technosocial attempt to ‘preserve democracy’ and let people ‘take privacy in their hands’.13 Due both to pressure from the US government and patent claims by RSA Corporation, Zimmermann pushed PGP to become an IETF standard. The OpenPGP set of standards was finally defined in 1997, to allow the open implementation of PGP. GPG (GnuPG) is a free software implementation of the OpenPGP standards, developed by the Free Software Foundation in 1999 and compliant with the OpenPGP standard specifications; it serves as the free software foundation for most modern PGP-enabled applications.

OpenPGP is implemented in both desktop and mobile email apps, including Outlook, Apple Mail and Thunderbird through plug-ins. An alternative standard for encrypted email, called S/MIME,14 is also supported via plug-ins by most major email clients. The main difference between the two lies in key management: S/MIME requires the installation of certificates provisioned by centralised certificate authorities, whereas PGP is based on a decentralised ‘Web of Trust’ between users who accept and sign each other’s keys, thereby delegating the complexity of key management to the end-user. Thus, while S/MIME has been adopted by some large, centralised institutions, it has been much less frequently adopted by the general public.
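The ‘Web of Trust’ logic can be sketched as a graph problem: each user signs the keys they have verified, and a key is (loosely) trusted if a chain of signatures leads to it from one’s own key. The sketch below is our own toy reduction, with invented names; real PGP trust calculation also weighs owner-trust levels and path lengths.

```python
from collections import deque

# Hypothetical signature graph: signatures[a] is the set of keys that
# key `a` has verified and signed (all names are illustrative).
signatures = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": set(),
    "mallory": set(),
}

def reachable(start: str, target: str) -> bool:
    """Is `target` reachable from `start` along signed keys? (A toy
    web-of-trust check via breadth-first search.)"""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in signatures.get(queue.popleft(), ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

assert reachable("alice", "carol")        # trusted via bob's signature
assert not reachable("alice", "mallory")  # no signature path: untrusted
```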

OpenPGP and S/MIME also work on mobile devices, through apps such as PGPMail, but as OpenPGP binds the key to a particular device, there have often been concerns about how to securely transport long-term private key material between devices; mobile adoption of encrypted email is therefore considered to be low among users and problematic in terms of security. Although the challenges of using PGP on mobile platforms are well known, mobile PGP has not been subject to usability studies in the same manner that PGP itself has. S/MIME has been tested in some usability studies and in general demonstrates better usability than PGP, insofar as key management does not have to be maintained by the end-user, but users still have trouble understanding the interface.

In general, PGP was considered to have poor usability, as users could not understand key management and judge how cryptographic keys establish trust relationships, or even understand the interface. These problems extend to security: if an adversary compromises a user’s private key, this allows all encrypted messages to be read. While there has been a resurgence of interest in OpenPGP since 2013, it has not been deployed to any great extent by ordinary users due to the aforementioned issues. In 2015, the IETF reopened the OpenPGP Working Group15 in order to allow the fundamental algorithms to be upgraded to more modern cryptographic primitives.

The underlying PGP protocol presents a number of well-recognised flaws. First, PGP tends to allow any and every combination of uses of encryption and signatures based on user preference, but does not offer authentication of the headers (i.e. the ‘to’ and ‘from’ fields). This allows messages to be surreptitiously forwarded or otherwise redirected. In general, PGP has been considered an open standard that has serious problems in terms of both security and usability, and this prompted the rise of a generation of competing technologies such as Off-the-Record Messaging. However, recently, the email community has been quite active and has developed a number of initiatives that have sought to renew PGP and to ‘make email encryption great again’, as Delta Chat’s lead developer and former NEXTLEAP member, Holger Krekel, summarised during a conversation with us. The result is that, since 2016, the PGP community has experienced a definite revival through the introduction of new solutions such as Autocrypt, which automates both secret and public key management by adding an Autocrypt-specific mail header to outgoing mails which contains, among other information, the sender’s public key.16
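The Autocrypt mechanism just described can be sketched with Python’s standard email library: the sender’s public key travels, base64-encoded, in an ‘Autocrypt’ header of every outgoing mail, so a receiving client can start encrypting without any manual key exchange. The key bytes below are a placeholder, not real OpenPGP material, and the addresses are invented.

```python
import base64
from email.message import EmailMessage

# Placeholder standing in for the sender's real OpenPGP public key.
public_key = b"...openpgp public key bytes..."

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
# The header attributes (addr, keydata) follow the Autocrypt specification.
msg["Autocrypt"] = f"addr=alice@example.org; keydata={base64.b64encode(public_key).decode()}"
msg.set_content("Hello Bob")

# A receiving client can recover the key with no user interaction.
keydata = msg["Autocrypt"].split("keydata=")[1]
assert base64.b64decode(keydata) == public_key
```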

Instant messaging encryption

Unlike email, which started as a high-latency and asynchronous messaging system, chat protocols began as low-latency synchronous messaging, although recently the line has become increasingly blurred as many chat protocols allow asynchronous message delivery. The user patterns of chat apps have also become increasingly varied in recent years, blurring the distinction between social media and messaging platforms (‘messaging is the new social media […] families use WhatsApp groups instead of Facebook’; Balive 2015).

The development of encryption for chat apps corresponded to changes in the contexts of usage (notably the move from desktop to smartphone technologies) and the spread of mobile Internet. The first encryption protocol for instant messaging, called OTR (2004), presumed a synchronous setting with both contacts being online at the same time. The birth of the second important end-to-end encryption protocol (Axolotl, started in 2013, now called the Signal Protocol) corresponded to the rise of instant messengers (in 2013, chat apps surpassed short message services in global message volume for the first time; see eMarketer 2015) and to the Snowden revelations.

The most widely used standardised chat protocol is called XMPP (Extensible Messaging and Presence Protocol), and it became an IETF standard in 2004. XMPP is a federated standard that ‘provides a technology for the asynchronous, end-to-end exchange of structured data […] among a distributed network of globally addressable, presence-aware clients and servers’ (Borisov et al. 2004). There are many implementations of the XMPP specifications, with the XMPP Foundation giving a list of 35 clients, 12 servers and 15 libraries using the XMPP protocol.17 XMPP traffic and content are not encrypted by default, although transport-layer encryption using TLS has been built into the core. In addition, according to the XMPP Foundation, a team of developers is working on an upgrade of the standard to support end-to-end encryption.18
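For illustration, this is roughly what the ‘structured data’ exchanged over XMPP looks like: a small XML ‘stanza’, assembled by hand below with Python’s standard library. A real client would use an XMPP library over a TLS-secured stream, and the addresses here are invented.

```python
import xml.etree.ElementTree as ET

# A minimal XMPP chat message stanza, built by hand for illustration.
msg = ET.Element("message", {
    "from": "alice@example.org/phone",
    "to": "bob@example.net",
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello"

stanza = ET.tostring(msg, encoding="unicode")
# e.g. <message from="…" to="…" type="chat"><body>Hello</body></message>
```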

The OTR (Off-the-Record) protocol, released in 2004, is an extension to XMPP that provides end-to-end encryption. In their paper with the iconic title ‘Off-the-Record Communication, or, Why Not to Use PGP’, the creators of OTR, Borisov, Goldberg and Brewer, describe their protocol as a security upgrade of PGP, at least insofar as it does not have long-term public keys that can be compromised. OTR also provides deniable authentication for users, unlike PGP messages, which can later be ‘used as a verifiable record of the communication event and the identities of the participants’ (Borisov et al. 2004). The first OTR implementation was a plug-in for GAIM, a popular Linux IM client. At the present moment it is said to be used by 14 instant messaging clients,19 including earlier versions of Cryptocat (an in-browser JavaScript client), Jitsi and ChatSecure (an XMPP client for Android and iOS). However, the first versions of OTR were designed for synchronous messaging between two people, and so did not work for group messaging or asynchronous messaging.20

The Signal Protocol, a non-federated protocol developed in 2013 by Open Whisper Systems, is said to have evolved in response to the flaws and limits of OTR. Moxie Marlinspike, the co-author of Signal, was inspired by some features of OTR, such as the idea of ephemeral key exchange (Marlinspike 2013), but added further security features such as future and forward secrecy, as well as support for asynchronous and group messaging, going a step further than OTR by allowing clients to be offline. The Signal Protocol is used in mobile messaging applications such as the eponymous Signal (formerly TextSecure and RedPhone) and WhatsApp, while its forks are used in Wire and Riot. Silent Circle, a Washington, DC-based encrypted communications firm founded in 2011, has used a version of the Signal Protocol since 2015 in its Silent Phone. In 2016 Facebook announced the implementation of the Signal Protocol for Facebook Messenger.21

Regardless of these multiple re-usages and forks, the Signal Protocol remains unstandardised, as we will explore in Chapter 2. However, a first step towards ‘standardisation’ of parts of the Signal Protocol has begun with the creation of OMEMO (a recursive acronym for ‘OMEMO Multi-End Message and Object Encryption’). OMEMO is an encrypted extension of the XMPP protocol, developed in 2015, that effectively copies the Signal Protocol and adapts it to XMPP. It was presented to the XMPP Standards Foundation in 2015 but is still in its experimental phase.22 OMEMO builds upon the work of the Signal Protocol, responding to the flaws of both OTR and PGP: OTR’s ‘inter-client mobility problems’ and OpenPGP’s absence of forward secrecy and vulnerability to so-called replay attacks. The software implementations of OMEMO are growing, and include Conversations, an open-source application for Android that counts over 50,000 downloads via Google Play, and an unknown number of installs via F-Droid.

Network-level anonymity

End-to-end encryption does not usually allow a user to be anonymous to the server or a third party without additional network-level encryption. Thus, network-level initiatives, such as P2P routing services or anonymous remailers, which can add supplementary privacy properties to end-to-end encrypted messaging, are worth mentioning here. Metadata protection and traffic obfuscation are still areas of active research, stimulating experiments with standards and architectures (e.g. Vuvuzela’s usage of ‘noise’ to obfuscate metadata, discussed in Van den Hooff et al. 2015). There seem to be no functional standards on this level yet; however, some solutions, such as Tor or I2P, tend to serve as references or de facto standards for different projects.

The Tor hidden service protocol offers a platform to develop decentralised and encrypted instant messenger servers. Tor (the name is derived from the acronym of the original software project, ‘The Onion Router’) is software that directs Internet traffic through a globally dispersed network of volunteer-run relays that conceal a user’s location and usage, essentially making it more difficult for a third party to trace Internet activity to a particular user. Tor’s success and fame mostly rely on its ability to provide privacy and anonymity to vulnerable Internet users. It is used as a default by projects such as the Tor Messenger, Pond and Ricochet. Another example is the decentralised and end-to-end encrypted mobile messenger Briar, which relies on the Tor network when available, but can also work over Bluetooth in emergency off-the-grid situations. Briar is described in more detail in Chapter 3.
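The onion-routing principle behind Tor can be conveyed with a toy sketch (ours, with XOR standing in for real layered encryption): the client wraps the message in one layer per relay, and each relay peels off exactly one layer, so no single relay sees both who is talking and what is said.

```python
def layer(key: int, data: bytes) -> bytes:
    # XOR stands in for real encryption (toy illustration only);
    # applying the same key twice removes the layer.
    return bytes(b ^ key for b in data)

relay_keys = [0x11, 0x22, 0x44]     # one key per relay in the circuit
plaintext = b"meet at noon"

onion = plaintext
for key in reversed(relay_keys):    # client wraps the innermost layer first
    onion = layer(key, onion)
assert onion != plaintext           # what enters the network is opaque

for key in relay_keys:              # each relay peels exactly one layer
    onion = layer(key, onion)
assert onion == plaintext           # only the final hop sees the plaintext
```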

Tor only provides anonymity for network addresses, not for metadata such as the sender, recipient and time of a message, which in the case of email are kept in the message header or can be deduced by the server. Historically, work has been carried out on high-latency anonymous remailers to fix these transport metadata leaks in federated messaging; these fall under three types: Cypherpunk Anonymous Remailer, Mixmaster and Mixminion. The last of these is not currently active, according to the project’s website.23

A number of experimental network-level tools, while not guaranteeing anonymity, provide some level of encryption. ZeroTier One is an end-to-end encrypted, peer-to-peer virtual network that provides static network addresses which remain stable even if the user changes physical WiFi/networks. CJDNS implements a virtual IPv6 network in which all packets are encrypted to the final recipient, using public key cryptography for network address allocation and a distributed hash table for routing.24

A fragmented yet vibrant field

As we observed in previous mapping research on P2P services (Méadel and Musiani 2015), part of the reason why there is such great diversity and complexity in this field is the relatively short lifespan of several projects. While our mapping of thirty end-to-end encrypted messaging and email apps covered only projects that are currently active (with one exception, Pond, ‘in stasis’, albeit not deactivated), our preliminary research revealed countless others that, after two or three years of pre-beta phase, and sometimes less, stopped development with no evident explanation. While in more than a few cases, the motives behind this are primarily related to technical experiments that did not deliver as hoped or expected, a number of additional factors may also be responsible, including the failure to develop an economic model, the internal governance of FOSS development groups, and the inability to rally a critical mass of users around the app (possibly due to a lack of ease-of-use, as discussed below).

Despite the prevalence of free and open-source software projects, proprietary software is not absent in this landscape, revealing both a potentially fruitful ‘business-to-business’ market for end-to-end encryption and a lack of open-source and standards adoption by mainstream applications. Open source itself is multi-layered and sometimes hybrid, with the code on the client side being open source and the server side being proprietary. Perhaps unsurprisingly, proprietary features are more prominent in applications aimed at business-to-business use. Free and open-source software is predominant for tools adopted or designed to be used by activists and tech-savvy users. The transparency of code and encryption protocols used by open-source software is aimed not only at improving the project, but also at producing communities of peer reviewers, experts, beta-testers and advanced users who participate in a collective reflection on the future of privacy-enhancing technologies.

The target audience of these applications is far from limited to tech-savvy and activist groups; several projects are intended for widespread use, and user-friendliness appears to be the main issue standing between this wish and its realisation. A look at the visual aspects of the applications – the design of interfaces, for example, or of the diagrams and graphics that explain how the applications work – also reveals the different publics they target and how developers perceive them. Systems oriented towards the general public use ‘politically neutral’ imagery, resorting to the classical ‘Alice and Bob’ models25 while stressing that their tools are for ‘everyone’ (e.g. for ‘sharing holiday photos’), whereas tools meant for companies emphasise security in both visuals and words. Other narratives refer to fictional anarchist leaders or real-life activists (e.g. ‘Nestor Makhno’ or ‘Vera Zassulitch’), figures likely to resonate with the target audience. Interestingly, where user feedback is visible on the App Store or Google Play, it shows that end-to-end encryption is sometimes perceived as problematic because both sender and receiver have to install the app for encryption to take place, which complicates usage.

Data storage remains an open problem for several secure messaging projects. Indeed, despite guarantees of ‘no personal data collection’, some projects still store key data on their servers (such as usage statistics, device information, keys, usernames or friend relations). Developers tend to explain such practices by reference to technical requirements (e.g. offering a better user experience based on the usage statistics collected), but many remain keen to seek alternatives that involve minimal data storage and stronger forms of decentralisation.

A related issue is the powerful ‘double’ narrative around end-to-end encryption. On the one hand, it is associated with a strong discourse on empowerment and the better protection of fundamental civil liberties; on the other, several projects show a parallel desire, or need, to defend themselves from associations with criminal activity and allegations that ‘encryption is used by jihadists’ (Sanger and Perlroth 2015). Such narratives are fuelled by previous and current ones about decentralised technologies and peer-to-peer, with their history of allegedly ‘empowering-yet-illegal’ tools. These issues in turn connect to the broader context of discussions about governance by infrastructure and civil liberties (Musiani et al. 2016), some of them particularly related to encryption (or the breaking of it), such as the Apple vs FBI case26 and WhatsApp’s adoption of encryption by default in April 2016. Thus, the present research hints at something that we will address in the following chapters – something a large majority of the projects need to take into account, and indeed are already taking into account: architecture is politics, but not a substitute for politics (Agre 2003).

Given the range of debates, controversies and emergent publics around encryption and secure messaging, we hope to have begun to show why a social perspective is necessary for the design and refinement of technical protocols. This includes the need to understand whether, and how, users grasp and value the various security properties of the protocols. For example, how do users understand what a ‘key’ is, or what ‘forward secrecy’ means? Protocol designers often make assumptions about whether ‘ordinary users’ can understand the security and privacy properties of their protocols. For example, almost all protocols from PGP to Signal use methods such as ‘out-of-band fingerprint verification’ to determine whether the recipient of a message really is who the sender thinks they are. Our research shows that users rarely actually use these techniques to verify the identity of their contacts.
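For readers unfamiliar with the mechanism, out-of-band fingerprint verification can be illustrated with a minimal sketch: each party derives a short digest of a public key, and the two digests are compared over a separate channel (in person, over the phone). The key material and the `fingerprint` helper below are purely illustrative, not drawn from any particular protocol.

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group the hex digits so the string is easier to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

# Alice computes the fingerprint of the key her app claims belongs to Bob;
# Bob reads out the fingerprint of his own key over a separate
# ('out-of-band') channel, and the two compare them.
key_alice_sees = b"illustrative public key bytes for Bob"
key_bob_holds = b"illustrative public key bytes for Bob"

verified = fingerprint(key_alice_sees) == fingerprint(key_bob_holds)
# If the fingerprints differ, the key may have been substituted in transit
# (a 'man-in-the-middle'); if they match, the channel is authenticated.
```

Signal’s ‘safety numbers’ and PGP’s key fingerprints are real-world variants of this idea; it is precisely this manual comparison step that, as our research shows, users rarely perform.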

Another example that has been debated in the technical community is deniable authentication. While a protocol may be technically deniable, would this cryptographic deniability stand the test of society, for example in a court of law? The answers to such questions influence the kinds of protocols that the research community can design. Lastly, why do only some protocols enable decentralisation via open standards? Do only specific groups of users (the tech-savvy and activists) have a strong preference for peer-to-peer or federated solutions over centralised services? This book hopes to address these and other related questions.

Structure of the book

Over the course of the next six chapters, the overall objective of the book is to provide an analytical portrait of the field of encrypted secure messaging, in order to explore the experience of encryption in today’s variety of secure messaging protocols and tools, and their implications for the making of digital liberties.

The field of encrypted messaging offers many solutions designed to conceal, obfuscate and disguise private communications and other online activities. These solutions are tailored to protect against specific ‘adversaries’. The security and privacy features worked into different protocols offer various degrees of protection and let users conceal different parts of their online identities. To illustrate this, Chapter 1 shows how instruments such as ‘threat modelling’ and risk assessment are deployed during the development of tools, in order both to identify from whom a user needs to hide and to assess the likelihood of a threat being realised: it becomes important not only to know whom to conceal from, but also to evaluate the chances of actually ‘meeting’ this adversary. In fact, users perceive themselves as having not a single identity but a set of profiles or personas: daughter, journalist, lover, activist, colleague… Each role, according to users, may require a specific form of digital self-care or a specific set of tools, and each persona implies a specific pattern of online behaviour, thus creating what is called ‘security by compartmentalisation’. A consequence, as we argue in Chapter 1, is that when applied to online privacy and security, risk is a relational and socially defined concept, as it greatly depends on the user’s social graphs and communicative contexts.

Chapter 2 moves into the more empirically focused part of the book by examining the case of a centralised application – Signal – and its underlying eponymous protocol. The Signal protocol is now considered to embody ‘best practice’ in the field of encrypted messaging and has become a trend-setter for other projects in terms of privacy and security features. Currently, Signal is centralised, as a single server mediates the setup of the protocol in its most widespread deployments. A new means of ‘quasi-standardisation’, or ‘standardisation by running code’, is being practised around this protocol: a quasi-standard is defined as ‘something that works’ and is iterated and redeployed by others. Centralisation has allowed Signal developers to update the protocol rapidly in response to research and bugs, and to limit concerns about the technical competence of third-party developers who might adapt the protocol to applications other than the in-house one. However, the ‘bottleneck’ of centralisation also means that difficulties and tensions have arisen around some attempts to reimplement the Signal protocol, due to the lack of clear guidelines and specifications for doing so. Chapter 2 discusses centralisation as a ‘control by design’ model – in particular, control over changes in the protocol, so as to respond quickly to technical challenges. Here the chapter draws on two other cases alongside Signal: Telegram and Wire.

Chapter 3 examines peer-to-peer based secure messaging applications. Particular populations of users, especially those living in ‘high-risk’ environments, show interest in these decentralised systems, as they see an alignment between their own favoured political and economic models – based on the principles of horizontal connections, mutual assistance, self-governance and participation – and the technical architecture of distributed networks. Peer-to-peer, as in its earlier incarnations, also promises less control by both governments and private corporations. However, peer-to-peer encrypted messaging faces a number of technical challenges, including a ‘vicious circle’ in which adoption barriers and dependency on a critical mass of users reinforce each other; the difficulty of managing users’ reputations and identities (identities are unique, but users usually find them hard to memorise in the form in which they are presented); and the placing of trust in the client (which presents many advantages censorship-wise but may create risks for users living in authoritarian regimes, where the main threat model remains physical pressure and device seizure). By discussing, in particular, the case of the application Briar and its underlying protocol, this chapter analyses both the potential and the challenges of decentralised architectures as applied to encrypted messaging.

Chapter 4 completes the analysis of different architectural choices and their impact on the configuration of encryption tools by examining systems based on federative models. When it comes to communities of developers debating online, the tensions between centralisation and more distributed architectural forms, such as federation, go hand-in-hand with debates over standards. Federation can help alleviate and distribute the very high degree of personal responsibility held by a centralised service provider and favours the freedom of users to choose between different solutions. On the other hand, it can present problems in terms of security, as it is harder to audit all the different implementations of a federated protocol and ensure correct updates. Drawing on the examples of Conversations (and its underlying OMEMO protocol), Matrix.org, and LEAP/Pixelated, Chapter 4 retraces the debates on federation in encrypted messaging, and analyses how federation takes shape in these debates as both an infrastructural configuration and a social experiment, in each case seeking a compromise between more distributed architectures and high levels of security.

Given the great variety of encrypted messaging solutions, how is one – a user, an NGO, a professional – to make sense of them? Chapter 5 tells the story of one such attempt. Classifications and categorisations are, to put it in Bowker and Star’s (1999) words, ‘powerful technologies’, whose architecture is simultaneously informatic and moral and which can become relatively invisible as they progressively stabilise while at the same time not losing their power. Thus, categorisation systems should be acknowledged as a significant site of political, ethical and cultural work. This chapter examines such work as it relates to encrypted messaging tools, by examining, in particular, one of the most prominent initiatives in this regard: the Electronic Frontier Foundation’s 2014 release of the Secure Messaging Scorecard (SMS). A particular focus is on the debates it sparked, and its subsequent re-evaluation and evolutions. We show how the different versions of the SMS, as they move from an approach centred on the tools and their technical features to one that gives priority to users and their contexts of use, actively participate in the co-shaping of specific definitions of privacy, security and encryption that put users at the core of the categorisation system and entrust them with new responsibilities.

Finally, the concluding chapter of the book ties together insights from the previous chapters to reflect on encryption as a site of social, political and technical controversy. Issues related to encryption and its adoption in messaging systems are inextricably entangled with issues of standardisation (formal/informal), the political economy of software development and adoption, and choices of technical architecture. This concluding chapter will offer some reflections on these different aspects as they have been informed by our fieldwork, and will then tie the diverse ways in which political effects can be achieved through technological choices to broader political concerns related to privacy, examining in particular how they can interact with recent supra-national legal instruments such as the General Data Protection Regulation (GDPR). Finally, we will comment on the implications of our study and of cognate research for the development of social studies of encryption and for its interactions with Internet governance research, particularly research of STS inspiration.

We now move into the first chapter of this book, written from the standpoint of users and security trainers – encrypted messaging experts working in an intermediary zone between developers and users. We will follow them as they deploy various strategies to make sense of the complex and moving ecosystem of end-to-end encrypted messengers.

Notes

1 For example, future secrecy, which we will address in detail later in the book.

2 Following up on previous work by the authors (e.g. Musiani 2015b).

3 Scholarly and non-scholarly debates about the term ‘hacker’ are ongoing. For the purpose of this book, and given that several projects and tools examined in it do not necessarily refer to an imaginary of subversion, we follow Hellegren’s (2017) definition of a hacker as a member of a community that unites with like-minded members ‘in their practices of developing and modifying internet-specific technologies’.

4 Many Internet protocols are client/server, which means that the machines that communicate are not equivalent: one is a server, permanently on and waiting for connections; the others are clients, which connect when they have something to ask. This is a logical mode of operation in the case of the Web, for example: when you visit a website, you are a reader, and the entity that manages the website produces the content you seek to read. But not all uses of the Internet fit this model: the direct sending of messages, or file exchanges, involve not a one-way communication but a peer-to-peer one, with two machines or two humans communicating directly.

5 This work is available as deliverable 3.1 of the NEXTLEAP project.

6 The three cases develop both a user-facing client application and a protocol that can potentially be separated from it (Signal protocol, LEAP and Bramble respectively).

7 We will come back to the high-risk/low-risk distinction, its opportunities and shortcomings, in Chapter 1.

8 We frequently get this type of remark from STS conference attendees.

9 Autocrypt will be extensively presented in Chapter 4 as an example of a federated-architecture encrypted messaging system.

10 A previous version of this section was published as Ermoshina, K., F. Musiani & H. Halpin, ‘End-to-end encrypted messaging protocols: An overview’, in Franco Bagnoli et al., eds, Internet Science. Third International Conference, INSCI 2016, Florence, Italy, September 12–14, 2016, Proceedings, Springer, pp. 244–254.

11 The thirty applications we examined are: Briar, Caliopen, ChatSecure, CoverMe, CryptoCat, Equalit.ie, GData, i2P, Jitsi, Mailpile, Mailvelope, ParanoiaWorks, Patchwork, Pidgin, Pixelated, Pond, Protonmail, qTOX, Ricochet, Scramble, Signal, SilentCircle, SureSpot, Teem/SwellRT, Telegram, Threema, Tor Messenger, Vuvuzela, Wickr, Wire.

20 The latest version of OTR (v4) aims at supporting asynchronous messaging and out-of-order delivery.

25 ‘Alice and Bob’ are fictional characters used as placeholder names in cryptology and other computer science/engineering literature to lay out a scenario where there are several participants in a thought experiment or a model.

26 The Apple vs FBI dispute was a landmark controversy about whether courts can compel communication technology manufacturers to assist in unlocking phones whose data are encrypted. The controversy was spurred by orders issued by US District Courts to Apple in 2015 and 2016.
