Interview selection process and ethical guidelines
Interview subjects who were also developers were selected on the basis of pre-existing personal relationships between NEXTLEAP research team members and the cryptographic research community. Although this selection method does introduce some bias, we believe it is countered by the large number of interviews we have undertaken; the relatively small size of the global developer community might also be considered in mitigation. We also reached out to some developers via the GitLab and GitHub pages of projects to which we had no personal connections (e.g. Ricochet, Conversations).
In contrast, user studies were undertaken with individuals who were selected more by chance. Some attended training events in their local environments (both high-risk, in the case of Ukraine and Russia, and low-risk, in the case of France, Germany, Austria and the United Kingdom). Others attended conferences at pre-selected venues that we judged likely to attract high-risk users from areas where the level of repression would make it difficult, if not impossible, to interview them locally, or would prevent them from speaking openly in their native environment. This was the case for users from Egypt, Turkey, Kenya and Iran, whose interviews took place in March 2017 at the Internet Freedom Festival and at RightsCon. A total of 54 interviews were completed in a first phase of fieldwork between autumn 2016 and spring 2017. We interviewed developers (17), experts from NGOs focused on privacy and security, such as EFF, Tactical Tech and Privacy International (5), and everyday users (32). We interviewed developers from LEAP and Pixelated (PGP), ChatSecure (OTR), and the Signal protocol and its implementations and forks (including Wire and Conversations (OMEMO)), as well as developers from Tor, Briar and Ricochet, which use their own custom protocols.
Within user groups we distinguish between high-risk users (14) and users (including researchers and students) from low-risk countries (18). The developers were all from the USA/Western Europe, while the high-risk users included users from Ukraine, Russia, Egypt, Lebanon, Kenya and Iran. Some high-risk users, due to the conditions in their country, had left (4) or maintained dual residency (2) between their high-risk environment and a low-risk one. The ‘users’ category also includes a subset (18) of security trainers, that is, users involved in organising seminars on security and in disseminating privacy-enhancing technologies, practices and knowledge. We interviewed trainers from both high-risk (9) and low-risk (9) countries.
A second round of interviews (28) took place in 2018. It focused in particular on aspects of project governance, and accordingly mostly developers (14) and corporate users (8) were interviewed, in addition to other users living in high-risk environments (6).
A specific protocol was developed to protect the privacy of our respondents: if they wished to complete the interview online, we let users and developers suggest a communication tool of their choice. These tools ranged from PGP to Signal, Jitsi Meet, Wire and WhatsApp. If an ‘in person’ interview was preferred, it was recorded with an audio recorder isolated from the Internet. We used a dedicated encrypted hard drive to store the interviews. Before each interview we asked our respondents to carefully read two user-consent forms related to the study and to ask any questions they might have regarding their privacy, their rights and our methodology. The two forms were written in collaboration with UCL usability researchers and were based on the European General Data Protection Regulation. The documents comprised an Information Sheet and an Informed Consent Form. The first explained the purpose of the interview, described the research project and clearly stated the sources of funding for the project. It also provided information about the length of the interview, as well as about the researcher, including her email, full name, academic affiliation and the address of the research institution. The second form described the data processing procedures and the period and conditions of data storage; it emphasised the interviewees’ right to demand, at any moment, the withdrawal of their data from the research. A copy of each document was given to the interviewee. Different forms were used for users and developers.
Additional measures were taken to ensure enhanced privacy for our interviewees. For instance, the name of the interviewee was not mentioned during the recording. We also adapted some questions, if interviewees asked for this, to remove any elements of context (such as the country or the city, or the precise social movement or affinity group a user was involved in). We respected the right of our interviewees to refuse to answer a specific question; in practice, however, our questions were specifically designed to focus on digital tools, with no biographical questions included. The names of both developers and users are mentioned in this book only when the interviewee gave us permission to do so; otherwise, interviewees remain anonymised and we instead use qualifying labels such as ‘lead developer of…’ or ‘high-risk user from…’.