14

Hacking satellites

Jan-H. Passoth, Geoffrey C. Bowker, Nina Klimburg-Witjes and Godert-Jan van Manen

This is not a regular paper, but an experiment in collaboration and conversation, turned into an experimental paper as a way of giving and balancing voices. It is based on a conversation that started two years ago when we, a group of scholars and professionals in STS, computer science, critical security studies, practical politics, hacking and IT security, speculated about the possibility of hacking satellites, a core component of contemporary security infrastructures. Why not admit it here? It was some kind of challenge from the social scientists, asking the hacker whether he could do it. ‘Sure’, he said, ‘hacking a satellite is not a big deal, it has been done before and I am pretty sure I can do it.’

Two years later, at the workshop on ‘Sensor Publics: On the Politics of Sensing and Data Infrastructures’ at the Technical University of Munich, organized by some of us and filled with meaning by others (and the event from which the idea of this whole book emerged), participants were treated to a real-time experiment in hacking a satellite. And indeed: we witnessed an impressive and very nerdy presentation of the steps necessary and the security issues exploited in each step, constantly switching back and forth between fancy slides and a black-and-green command-line window. Much later the presenter admitted: ‘I showed how to sniff that; which – from the hacker point of view – is not really attacking or hacking the satellite, but just listening in to what is already relayed there.’

But when he fired up that SSH terminal and hooked up and configured the network interface of (at least what could have been) a satellite to his Linux machine, he seemed quite serious that what we were witnessing was a live satellite hacking event.
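To give a sense of how mundane such passive ‘listening in’ is at the level of code, the following is a minimal sketch (our own illustration, assuming nothing about the actual demonstration) of reading raw frames from a network interface on a Linux machine in Python. The interface name ‘eth0’ is a hypothetical placeholder, and without further protocol decoding nothing satellite-specific is revealed.

```python
import socket

# Minimal sketch of passive 'listening in' on a Linux machine: read raw
# frames from a network interface. This is sniffing, not attacking.
# 'eth0' is a placeholder interface name; running this requires root
# privileges.
ETH_P_ALL = 0x0003  # ask the kernel for frames of every protocol

sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
sniffer.bind(("eth0", 0))

while True:
    frame, _ = sniffer.recvfrom(65535)
    print(frame[:32].hex())  # dump the first bytes of each raw frame
```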

After the talk, the four of us started an experiment in extended conversations about sociotechnical security infrastructures. Our aim was (at least) twofold: first, we wanted to explore novel ways of listening to, discussing with and engaging people inside and outside of academia – yet explicitly not in the sense of extracting knowledge and information, which is almost always at risk of patronizing or exploiting the expert engineer, but as a form of mutual exchange of perspectives, questions and issues. Or, as one of us said in a moment of disciplinary identity crisis: ‘we are all trained in very specific fields. And of course, you get that specific expertise that makes you the person you are in a way.’

Second, we aimed at experimenting with and developing novel formats for integrating these engagements into an academic publication while remaining sensitive to the different work logics as well as the different disciplinary logics of crediting (academic) work, and to the challenges these pose for traditional processes of academic peer review.

For us as writers and you as readers this is a challenge since ‘if you are really stubborn – like this table is straight – well, you might think it is straight… if you turn it like this [moves his hands quickly sideways], it (replaceable with any form of knowledge) looks different.’ That is exactly what we did (and you might do) with this text: turn it like this, sideways or upside down. This text is the preliminary result of this experiment. Over the course of our conversations, which took place, among other places, at a university in Germany, in a restaurant in Amsterdam, on board several trains across Europe, in Long Beach, California, and in a bar in New Orleans, via Skype, email, drawings and sketches, phone calls and text messages, we almost forgot ‘who is speaking on whose behalf and who is the useful idiot’ (see Stengers 2005). We recorded and transcribed our conversations and decided to treat ourselves as authors symmetrically by citing all of us, and none of us specifically.

Going back to a long tradition of experiments with dialogue and conversation in anthropology after ‘writing culture’ (Clifford and Marcus 2010), we believe that although fieldwork, encounters and especially interviews are never really symmetrical, at least the text itself can try to infra-reflexively (Latour 1988; Passoth and Rowland 2013) introduce various symmetries and asymmetries. All of us will speak in what follows, and all of us will be spoken about – from different angles and shifting positions, but we will intentionally appear only as ‘we’ – and in italics.

Infrastructural legacies

Back to hacking satellites – and to a conversation about (IT) security that turned out to be all about passion, protection and trust, less about technical fixes than about constant attention, responsibilities and care. Of course, the most obvious thing to discuss was this: if what we saw was not an actual hack, was it all just a big show? Something ‘to scare those social scientists a little bit’? Could we have seen a real satellite hack if we had all just dared to run one from an official university IP address?

The answer to that is as simple as it is boring: yes, of course, there have been quite a number of instances in the last decades – ‘out there since the 90s, and also, the last one where someone in China took over two satellites’. But the reason why this is possible in the first place is far less boring, and it has to do with some interesting characteristics of many, if not all, large-scale infrastructures: what looks like a rigid security regime from the outside (or, in the case of satellites, from below) is quite often a patched-together, partially upgraded, selectively maintained arrangement of old and new technologies, practices and organizations. Infrastructure ‘is not only [the things?] that we have just built, but it is also something that would go ten years back, or 50 years back. And it is a whole complex of – or a whole arrangement of – systems that are maybe too big to shut down. And [that] we have to deal with an existing infrastructure already, and not just rethink our way of communicating with the sky from scratch, and we cannot start with a blank slate.’ Our technical world is built on such legacies. Marisa Cohn has argued very convincingly that despite our modern fascination with innovation and shiny new toys, such a progressivist account of technology and technological change is not very helpful when it comes to dealing with large-scale systems (Cohn 2013), nor even when it comes to rapidly updated software (Cohn 2019). Satellites are a very good example of such legacy systems, as ‘these systems were designed to be put up and to run for 30, 40 years. They are way, way over their expiration date, but they still run’.

A lot of them have been in orbit for quite a long time, and even if a security issue is discovered and someone bothers to write a patch for it, it remains extremely complicated to update them from the ground. Satellites are not Android phones; there is no update guarantee. In fact, it is not even reasonable to think of something like a regular update and patching cycle: a satellite, in orbit for more than just a few years, is basically a very simple (and old) computer with quite a specific set of hardware.

‘I mean, if some programmer in Fortran makes the operating system for satellites – Fortran is a really specific programming language, and also very old – and makes a mistake, then yeah, satellites would come crashing down, and no one would know that these lines of code accidentally made the satellite go left instead of right.’ In the regular software world, there are protocols, modules and pieces of code that are so widely spread across the globe that once a bug is detected and a patch released, that patch – at least in theory – can be applied to all kinds of systems like a cure.

But in the case of satellites? It is not that there are only a few of them, but compared to regular home computers or widely used embedded systems, most of them are pretty unique. And they last. Once up in orbit, they continue to work until they break (or fall) down or become space debris. Nevertheless, they are one backbone of contemporary telecommunications and security infrastructures (Witjes and Olbrich 2017). They are meant to last, but they are not well cared for.

Patching with passion, adding containments, circles of trust

Such infrastructures are legacy systems that require not only maintenance to prevent them from breaking down, but also a lot of care to make sure that they do not turn into security risks. How could one care for them? By comparing the case of satellites with others, we collectively identified three forms of care – patching with passion, adding containments and circles of trust – that are embedded in open source software practices, secure data centre management and cybersecurity/network security practices. Patching is a very passionate activity. Auditing and testing pieces of code is part and parcel of the work in small- and large-scale open source communities, but it is also often voluntary, unpaid and honorary work. ‘People would do the checking like voluntarily?’, we wondered, ‘just because they can?’ Well, yes and no – ‘from the open source world, in practice, we know that this is not going to happen (…), there are times that we thought that people are auditing these codes, but it never happened.’ Just because the source code is out in the open does not make it more secure, and a free license does not automatically spark interest – and of course: ‘The sky is filled with proprietary software.’

As Chris Kelty has argued in his account of the history and practices of free and open software (2008), these practices are based on a reorientation of the practices and relations of power that drive the design, circulation and maintenance of software. What is true for new software projects fuelled with the thrill of new beginnings is even more true for the tedious task of updating and the pesky job of searching for and fixing bugs. Patching requires ‘arts of noticing’ (Tsing 2015: 17–19), the kind of mixture of attentiveness, responsibility, competence and responsiveness that Tronto and Fisher (1990) identified as core elements of an ethics of care.

While it might seem strange, maybe even inappropriate, to use such concepts and sensibilities of care in a field so dominantly (and correctly) associated with ‘white boys with cherry coke’ (thanks to Noortje Marres for this wonderful image), they allow us not only to counter today’s progressivist technology narrative (in line with Marisa Cohn’s work), but also to identify and highlight some of the mostly hidden and far less accounted-for practices and sensibilities that keep today’s digital infrastructures running and prevent them from – in the case of satellites even literally – crashing. Such practices are attentive in that they start with a recognition ‘of a need and that there is a need that (needs) to be cared about’ (Tronto 1993: 127) – a need for others, human and non-human. They are rooted in a felt responsibility, leading both to a pressing obligation and to an understanding that ‘something we did or did not do has contributed to the needs for care, and so we must care’ (Tronto 1993: 132). They also require those involved not only to feel responsible, but also to be able to care – patching a bug in a piece of old Fortran code requires a very specific competence. And they require those who care to be responsive, to care when needed, not only when there is time. Patching and caring for security is therefore a form of response-ability – an ability to respond.

And as in other fields of practice requiring care, quite often the attempt to organize it more thoroughly paradoxically results in having less time, less space and fewer options for caring. ‘If you are like – let’s say I am the Dutch government and I think it is really important to […] have a certain amount of time for security and justice […] you could also think as a government and say “We need to do something about it and fix this code before someone else fixes it for us.”’ Instead of relying on attentiveness, responsibility, competence and responsiveness, security is often organized and institutionalized very strictly, and the way that is done is by adding levels and more levels of containment. The way highly protected data centres are managed is a good example of this (see also Chapter 13 by Taylor and Velkova, this volume).

To seal off part of the data handling and computing power delivered by a huge data centre used by many different actors, so that this part is highly protected, the most common option is to ‘build a black box. So, in the data centre, there is a black box. (…) Everything that comes out of the black box must be encrypted. And if you cut one line, it would go onto the other line, and another one, should the last one fail. And if someone were to put their head into the black box, to see what is there, how it is functioning, police should be immediately alarmed.’

The need to care for security is thus handled by securitization, turned into an extraordinary but still banal loop of analysing and managing risks and thereby creating ‘new security risks by solving old ones’. Edwards has reconstructed this ‘politics of containment’ approach in his account of the Cold War history of information technology and has highlighted very convincingly that once the construction of such ‘closed worlds’ (Edwards 1996) starts, there is no real limit on how far down to the micro level and how far up to the global level it can scale. ‘[…] If somebody has to go in, they have to go through multiple security measures; (here should have been some detailed information about the different security measures, which we unfortunately cannot disclose due to the sensitivity of this particular data centre) so there are multiple authentication factors. […] And then, [once you are] in that black box, you have three different cages; depending on your authorization, you can go into one, or two, or three cages. And then in those cages, you have […] with computers, and of course, backup power, and all that.’ And at each level, more and more security measures add up: from passwords to physical tokens, from biometric details to time-restricted access, from contracts and non-disclosure agreements to military-grade vouching and screening procedures.
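As a toy model (our own sketch, assuming nothing about the actual facility beyond what is quoted above), the logic of ‘levels and more levels of containment’ can be written out as follows: each layer demands additional factors, and access accumulates only layer by layer, with no skipping.

```python
# Toy model of layered containment. The layers and factors below are
# illustrative placeholders, not the actual measures of the data centre
# discussed above (which cannot be disclosed).
LAYERS = [
    ("site perimeter", {"badge"}),
    ("black box door", {"badge", "pin"}),
    ("cage 1", {"badge", "pin", "biometric"}),
    ("cage 2", {"badge", "pin", "biometric", "escort"}),
    ("cage 3", {"badge", "pin", "biometric", "escort", "time-window"}),
]

def deepest_layer(presented: set) -> str:
    """Return the innermost layer reachable with the factors presented."""
    reached = "outside"
    for layer, required in LAYERS:
        if required <= presented:   # all required factors are present
            reached = layer
        else:
            break                   # containment: no skipping layers
    return reached

print(deepest_layer({"badge", "pin", "biometric"}))  # prints: cage 1
```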

A ‘layered trust model security’, one of us noticed; based on a ‘hierarchy of mistrust’, another replied. But containment has side effects – in fact, effects that often directly work against the way security is achieved. A closed world is also a world with restricted access, on purpose.

But by protecting one part of an information infrastructure formally – for example by locking up a data centre in national security containers equipped with cages and sealed black boxes – another part of the information infrastructure – its protocols, the bits and pieces of firewall software packages – is sealed off too, and thereby effectively barred from maintenance, patching and care. On the level of bugs and vulnerabilities, such containments create interesting conundrums: what to do with a detected vulnerability? Publishing it very quickly and openly increases the chances of a rapid fix, but it also increases security risks as long as the vulnerability is not fully understood. A common problem only one actor knows about is only an issue for that actor. As long as no one else knows, it can even be used as an advantage or a weapon. But it also creates a new risk: if one actor found a vulnerability, chances might go up that someone else – someone careless or with criminal intentions – might also find it. So it might be better to fix it quickly – and to fix it quickly, it might make sense to tell others – at least others one can trust.

But who might be trusted, and why? Again: those who care, those with that mixture of attentiveness, responsibility, competence and responsiveness. The formal hierarchy of mistrust is countered with informal circles of trust based, again, on an infrastructure of services, technologies and vouching practices. The circles have their own platforms and their own (trusted) communication channels, a system of closed worlds to bridge the closed worlds of organized containment, based on a simple question: ‘We (…) have a big problem. (…) Does anybody see where this can come from?’

Those who already trust each other help each other to identify those who might care and might be trustworthy: to join the circle (the platform, the mailing list, the professional secret network), someone already on the list acts as a sponsor, and at least two other people on that list need to vouch for someone they trust, based on previous experiences with that person: ‘if they screw up, this means I am screwed’.
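Spelled out as a toy rule (our own rendering, not the actual code of any such platform), the admission logic amounts to something like this:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Candidate:
    name: str
    sponsor: Optional[str] = None                    # existing member who proposes them
    vouches: Set[str] = field(default_factory=set)   # members vouching for them

def may_join(candidate: Candidate, members: Set[str]) -> bool:
    """Hypothetical admission rule distilled from the conversation: a sponsor
    who is already on the list, plus at least two other members who vouch for
    the candidate based on previous experience."""
    if candidate.sponsor not in members:
        return False
    independent_vouches = candidate.vouches & (members - {candidate.sponsor})
    return len(independent_vouches) >= 2

# Example: sponsored by one member, vouched for by two others.
members = {"alice", "bob", "carol"}
print(may_join(Candidate("dave", sponsor="alice", vouches={"bob", "carol"}), members))  # True
```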

Caring and response-abilities

Security infrastructures are relational, just like you would expect from infrastructures: one’s devices are another’s work, one’s solutions are another’s problems – to paraphrase Star’s aphorism (1999: 180). What looks like a regime of securitization (Burgess and Balzacq 2010; Buzan, Wæver and Wilde 1997) or a dataveillance architecture from one angle (Amoore and De Goede 2005; Dijck 2014) turns out to be a messy patchwork of standards, exploits, firewalls, log files, careless users and annoying script kiddies from the other. In principle, this should come as no surprise to scholars in STS or critical security studies, but in practice, such sensibilities for symmetry or multiplicity are far from being standard practice. Whether this is a result of a certain preference for critique, an effect of packaging the bits and pieces of contemporary security infrastructure into official machineries and decorating them with uniformed humans, clean interfaces and maps, or just a matter of conceptual ancestry (Foucault, we can hear you) is not important. But the lack of responses from those involved, and of accounts of the work they provide to build, maintain and, well, care for security infrastructures, leaves us with a lack of accountability and response-abilities (Kenney 2019) – a lack of ‘cultivation through which we render each other capable, that cultivation of the capacity to respond’ (Haraway and Kenney 2015: 230–31). Can we foster this capacity? To whom or to whose interests, issues, standards or requirements do we (as security engineers, as hackers, as scholars studying security, as citizens…) respond – and how?

Our conversation began with hacking satellites; it led us to infrastructure, legacies and care. Infrastructures such as satellite communication networks are not only systems and technical as well as institutional legacies that require maintenance to prevent them from breaking down. They first and foremost need attention, shared responsibilities, rare and specific competencies and responsiveness – an ability, availability and readiness to respond. Satellites cannot be updated and keep running on 1980s protocols; p2p protocols can be turned against their users by hackers and security experts alike; the massive investment in protecting server racks in physical data centres creates the need to keep up a constant routine of security checks for all those involved. Such care cannot be delegated to additional (tech) components or ‘standard’ politics. Care is instead delegated to those informal networks of trust – I know who I need to call at company X – or to an army of ‘human sensors’ (see also the Visual Vignette by Mayer and Iblis Shah, this volume). This is also (or even more so) a practical, organizational or even political question of how to manage the various response-abilities: to whom or to whose interests, issues, standards or requirements do we (as security engineers, as hackers, as scholars studying security, as citizens…) respond – and how? How to open ‘up possibilities for different kinds of responses’ (Schrader 2010: 299)? Engaging in this conversation was exciting, challenging, time-consuming, fun and sometimes annoying, leaving each of us, at one point or another, wondering how many languages a group of four can actually speak.

Creating an inclusive, mutually respectful conversation among those who seldom cross paths, and practising the translation of the different meanings of responsibility across different communities, might be a first step towards taking care.
