Expanding technosecurity culture: On wild cards, imagination and disaster prevention
A society that sees its relation to the future in terms of prevention and organizes itself accordingly will always fear the worst, and its hopes will be galvanized by the thought that maybe things won’t turn out so bad after all in the end.
The systematized imagination of rather unlikely but highly devastating disaster scenarios – i.e. wild cards – is currently seeing a boom. In this chapter I want to consider the role of the imagination in the discourses and practices of current preventive and ‘premediated’ security research and policy, analysing the extent to which it is a response to new media-related epistemological and societal conditions. In doing so I shall look first at the roles of future scenarios and imagination in the strategic approach to nuclear war. I shall then explore the current condition of our technosecurity culture in which these highly speculative approaches – which come across more as literary processes than as classical scientific methods – appear attractive and indicate a profound shift in contemporary regimes of knowledge.
Keywords: technosecurity culture, imagination, premediation, possibility, future
“Mommy, Daddy, the synsects stung me!” Julie ran into the house in a fluster. Martin, who just sat down to deal with the administrative stuff for his organic farm, looked over at his eleven-year-old daughter. On her face and all over her arms were red marks that looked like mosquito bites. “What happened?” Julie had just been inspecting the rabbit hutches. Apparently, a swarm of these synsects had flown at her and attacked her. (Peperhove 2012: 72)
This horror story about a sudden attack by a swarm of sensing devices – in this case artificial insects – is not a science fiction invention. It is one of several disaster scenarios devised by the EU security research project FESTOS in order to identify ‘potential future threats posed by new technologies in the hands of organized crime and terrorists’ (Auffermann and Hauptman 2012). Other scenarios address situations such as blackmail using hacked DNA data, the destruction of nanotech products using radio signals, and terrorist manipulation of people’s behaviour through the release of biological viruses. While this might sound highly bizarre, there is a systematic aspect to it nonetheless. When developing the scenarios – presented in the form of short stories in order to assess ‘[t]he dark side of new technologies’ (Peperhove 2012) – ‘special emphasis is placed explicitly on scenarios which, although considered not very likely to occur, are expected to have major impacts if they do occur – so-called wild cards’ (FESTOS 2012).
The scenarios technique is a favoured approach in current security research, disaster management and technology assessment (see Grunwald 2012; Kaufmann 2011; Wright et al. 2008), as is the concept of wild cards (Steinmüller and Steinmüller 2004; DeWar 2015; Hiltunen 2006). The latter originates from the field of futurology: the term ‘wild card’ was coined by John Petersen, director of the Arlington Institute (a think tank), in his book Out of the Blue – How to Anticipate Big Future Surprises (2000). Other popular science studies have elaborated the issue further, among them Wild Cards: When Unlikely Things Happen by futurologists and science fiction authors (!) Angela and Karl-Heinz Steinmüller, which calls for an exploration of the unlikely (see also DeWar 2015; Mendonça 2004).
The scenarios technique – just like the Monte Carlo method or simulation1 – is a key approach used in security research and derives from the impressive range of methods used in cybernetics (specifically, in operations research). The methods and ideas used in the scenarios technique have been used especially in military planning games for nuclear first strikes (Ghamari-Tabrizi 2000; Pias 2008).
As part of the EU’s Seventh Framework Programme, funding was given to six Foresight projects whose task was to engage in the proverbial blue sky thinking and, in some cases, to conduct systematic research on wild cards.2
The idea of the scenarios method gained fresh momentum especially after 9/11, when an utterly unforeseen event demonstrated the vulnerability of western systems to low-tech attacks. Drawing lessons from the attack, the 9/11 Commission report called on the security services to deploy imagination as a matter of routine. The report of the British Intelligence and Security Committee, which investigated the bomb attacks in London in 2005, urged its readers to accommodate the unknown in their thinking: original and imaginative approaches are needed, it said, to make the work of the secret services more effective and to detect and understand terrorist acts as well as future terrorist strategies (De Goede 2008: 156).
The idea is to preempt the worst scenarios in order to prevent them from happening (see, among others, Daase and Kessler 2007; Mythen and Walklate 2008). This idea is not new, but in the ‘war on terror’ it is acquiring its own unique dynamic as ‘post 9/11 imagination’, as Marieke de Goede (2008) calls it. In the face of unusual but effective and inventive low-tech attacks, the systematized imagination and the preemption of possible scenarios appears to be becoming even more attractive. Other phenomena, though, are also providing an impetus to the security policy notion of strategically deployed imagination. After a brief respite at the end of the Cold War, in which the threat of a nuclear war between the superpowers receded, new specters appeared, such as the problem of ‘failed states’ or of nuclear terrorism and the possibility that weapons of mass destruction could get into the hands of criminals. US security advisor Graham Allison and his Russian colleague Andrej Kokoshin depicted a potential scenario in the following way:
Consider this hypothetical, […] a crude nuclear weapon constructed from stolen materials explodes in Red Square. A fifteen kiloton blast would instantaneously destroy the Kremlin, Saint Basil’s Cathedral, the ministries of foreign affairs and defense, the Tretyakov Gallery, and tens of thousands of individual lives. In Washington, an equivalent explosion near the White House would completely destroy that building, […] and all of their occupants. (Allison and Kokoshin 2002: 35)
The systematized imagination of rather unlikely but highly devastating disaster scenarios – i.e. wild cards – is becoming increasingly popular. In the following I want to explore the role of the imagination in the discourses and practices of today’s preventive and ‘premediated’3 security research and policy in the context of (new) media-related epistemological and societal conditions. In doing so I shall look first at the roles of future scenarios and imagination, especially in the strategic approach to nuclear war. I shall then explore the current condition of our technosecurity culture (Weber 2016) in which these highly speculative approaches – which come across more as literary processes than as classical scientific methods – increasingly appear attractive.
‘Thinking about the unthinkable’: Knowledge production in conditions of great uncertainty
Professionalized, scenario-based future gazing during the Cold War era can be read as a response by the nuclear strategists of the time to a completely new situation involving huge uncertainty: our entry into the nuclear age and the possibility of humanity’s total annihilation.4 No one had any experience in conducting a nuclear war, and no one had any idea what the right way was to deal with this situation militarily and politically. The classical range of methods used by the military – as well as those used by ‘defense intellectuals’ (Cohn 1987) – obviously no longer seemed adequate in this situation. Indeed, the latter even departed – at least implicitly – from previously accepted classical scientific criteria of objectivity and the reproducibility of experiments or strategies. The criterion of reproducibility had become obsolete in the face of the totality of nuclear war. It was against this background that the use of scenarios (initially on paper or as a board game, later on in the form of computer simulation) to preempt potential war situations offered a means of exploring new strategies for new situations. Traditional notions of scientific rigor were relinquished in favour of generating evidence by means of the imagination.
Hermann Kahn, defense intellectual and expert at US think tank RAND, was paradigmatic of this attitude. He writes euphorically of the significance of the imagination:
Is there a danger of bringing too much imagination to these problems? Do we risk losing ourselves in a maze of bizarre improbabilities? […] It has usually been lack of imagination […] that caused unfortunate decisions and missed opportunities. (Kahn 1963: 3, quoted in Ghamari-Tabrizi 2005: 146)
Similarly, media theorist Claus Pias points out that think tanks, scenario-based imagination and computer simulation need to be understood as a response to a nuclear threat that can no longer be handled analytically or dealt with on the basis of experiments or prior experience:
What computer simulation was for the development of the hydrogen bomb, the scenario is for conceiving possible futures in the context of the nuclear threat. This is because their reality eludes not only analytic categories derived on the basis of past wars but also precludes experimentation with a war that would have devastating consequences. (Pias 2009: 13)
One possible future could therefore be winning a nuclear war – something Kahn assumed in his book On Thermonuclear War (1960). He rehearsed every scenario imaginable (and unimaginable) of a first or second strike nuclear war, regardless of any considerations of probability or likelihood (Kaplan 1983; Ghamari-Tabrizi 2005; Pias 2008). Being uninterested in moral issues but extremely interested in strategic futurological issues, he reckoned with the deaths of hundreds of millions of people and devised survival strategies and biopolitical measures for the post-nuclear age. Kahn’s second book, written in 1962 to counter criticisms of his first, explicitly bore the title Thinking about the Unthinkable. No matter how convincing (or otherwise) Kahn’s ideas may have been, they did achieve one thing at least: his use of scenario thinking rendered the monstrosities of nuclear deterrence thinkable and debatable in terms of different strategies, concepts and practical options.
What makes scenario techniques so attractive in today’s security research, though? And what about imagining wild cards – things that are unlikely to happen but would have dire consequences, such as the wild swarms of cyber insects mentioned above, tele-operated nanoproducts or terrorist-induced viral infections? Can this be seen as a way of addressing a similar set of problems as those to which Kahn and other defense intellectuals sought to find answers in the nuclear age with their first and second strike scenarios?
Fixation on the future and technology-centred security
One significant reason why the scenarios method has proved so appealing has to do with the way societies in the global North see themselves. Zukunft als Katastrophe (Future as catastrophe, Horn 2014) is a fairly apt description of one dominant strand of this self-perception. A widespread feeling of uncertainty or indeed threat seems to be predominant. The search for safety and security in the face of violence, illness and death has taken center stage in our thinking, in our perceptions and, accordingly, in our security debates.5 But where does the feeling of threat come from? Its roots no longer lie (primarily) in the nuclear threat. On the political stage many like to argue that it stems from the experience of 9/11, but in surveillance and critical security studies most scholars agree that the trend towards an all-pervasive preventive security policy set in much earlier. Many theorists point to globalization, to the neoliberalization of today’s societies and the greater individualization this brings with it, and to the digitalization of the last few decades as central reasons why people’s fears – not only of terrorist attacks – are multiplying.
As far back as 1985, prominent science and technology studies (STS) scholar Donna Haraway identified the emergence of a New World Order, of high-tech societies and of techno-scientific cultures whose societal, political, technical, epistemic and normative foundations are undergoing radical transformation. These societies are characterized by a greatly accelerated and intensified hybridization of human and machine, of organic and non-organic, of science and technology. Hybrids such as the Oncomouse and intelligent software, she argued, can no longer be categorized within the traditional humanistic order. Haraway describes (the era of) the technosciences as a new episteme in which the linear causal logic of the Newtonian era has been replaced by a non-linear, multiple techno-rationality. At the same time, she noted, a new globalized political world order is being configured: a biotechnological power with new geo-strategies, technologies of the self, and logics of production and consumption (see also Weber 2003). Shortly after the publication of Haraway’s Cyborg Manifesto (1985), in 1986 Ulrich Beck’s theory of an emergent ‘risk society’ created a furor. According to him, potential technologically induced threats such as nuclear catastrophe and global warming are no longer predictable or calculable. An ever expanding sense of threat was similarly identified in the 1990s by British sociologist Anthony Giddens, who drew attention to the fact that the societies of the global North are increasingly concerned about their future (or futures) and are thereby generating a growing sense of danger.
In December 2003 Javier Solana – Secretary-General of NATO until 1999 and thereafter High Representative of the EU for Common Foreign and Security Policy – presented a strategy paper on the European security doctrine in which he outlined the new situation in the following way: the number of corrupt or ‘rogue’ states is increasing – as, too, is poverty. This is accompanied by a growing number of regional conflicts, corruption, criminality and migratory movements. A further factor of insecurity, he asserts, is Europe’s major dependence on energy imports. The main threats are therefore a global, unscrupulous form of terrorism – consisting in part of fundamentalists prepared to use violence – as well as the spread of weapons of mass destruction, organized crime and growing flows of migrants generated by failed states and by global warming. He characterizes the difference between these and previous threats as follows:
Our traditional concept of self-defence – up to and including the Cold War – was based on the threat of invasion. With the new threats, the first line of defence will often be abroad. The new threats are dynamic. The risks of proliferation grow over time; […] This implies that we should be ready to act before a crisis occurs. Conflict prevention and threat prevention cannot be started too early. In contrast to the massive visible threat in the Cold War, none of the new threats is purely military; nor can any be tackled by purely military means. Each requires a mixture of instruments. (Solana 2003)
Solana goes on to emphasize that proactive policies are needed to counteract ‘new and ever-changing threats’.
The key difference compared with the threats of the Cold War is the dynamic nature of the new threats and their spread to civilian spheres of society, necessitating preventive action and massive investment in security measures, infrastructures and technologies – a development which at this point in time has already long been underway and has indeed accelerated and intensified. And it is especially sensing technologies which play an important role in this development.
As a politician, Solana maintains a measured approach, one that focuses less on potential technologically induced problems; yet in this respect, too, the future – our world – appears to be under threat and in great danger (Horn 2014), and these threats are (perceived to be) increasingly incalculable in terms of their dynamics and globality.
The constant discursive manifestation of likely and above all ‘possibilistic’ (see Clarke 1999) – that is, unlikely but (technically) possible – risks goes hand in hand with a conjuring up of ubiquitous dangers, further fueling the sense of threat. Pat O’Malley described this development long before 9/11 in the following terms: ‘the structural demand for knowledge relating to risk becomes insatiable. As well because the accumulation of such knowledge adds awareness to new sources of risk, the risk-knowledge process gains its own internal momentum’ (O’Malley 1999: 139).
The Cold War’s defense intellectuals found themselves facing a new threat (nuclear war, first or second strike) which could no longer be dealt with by conventional means. At the same time, the threat was (relatively) concrete and came with a clearly identifiable opponent: the Eastern Bloc, the Soviet Union. Today’s security strategists, by contrast, are working with dynamic, multifaceted and yet very vague threats. Wild cards are a part of this possibilistic risk management which attempts to do justice to all possible (imaginable) threats. Security discourse is rapidly meandering, multiplying and spreading – which generates very real threats in itself. As a result of increased funding for security research and technologies, for example, the number of laboratories working with dangerous pathogens has grown rapidly, along with the danger that manipulated organisms could be released accidentally or be stolen from their high-level security zones (Kaufmann 2011). Security researchers’ imagination of new threats and of ways of dealing with them thus generates yet more new threats. The expansion of the security zone in general and the focus on wild cards with their potential, possibilistic scenarios of unlikely threats (Clarke 1999) serve to fuel people’s sense of threat, legitimizing the extension of security measures and generally driving the security spiral ever onward.
The more risks that are identified and are classified as unlimited, the more plausible demands appear for comprehensive, maximum preventive measures (Amoore and de Goede 2008; Kaufmann 2011). Such demands, however, usually give rise to rather unimaginative proposals and measures involving high-tech surveillance and enhanced security.
This logic pays barely any heed to the political, social and economic causes of insecurity (such as poverty, inequality, colonialism, etc.) which feed terrorism, organized crime and mass migration. Instead, technology – in the form of databases and simulations along with sensing technologies such as (smart) video surveillance or biometrics – is viewed deterministically as the primary if not sole solution (Marx 2001; Aas, Gundhus and Lomell 2009).
This is readily apparent to any observer of the German security research programme. The need for scenario-oriented security research (though not primarily wild cards) is explained thus: ‘Scenarios research avoids isolated solutions. It enables application-based systems innovations from which practical security products and services can be developed that match the needs of end users and are compatible with a free society.’ (Bundesministerium für Bildung und Forschung/German Federal Ministry of Education and Research 2014).
The emphasis on scenarios makes it possible to determine societally relevant threats in a normative way. At the same time, these threats are configured as systems innovations in a technical sense: the technological fix is thus already embedded within the research programme itself.
This gives rise to a convergence between security and surveillance. Very soon – in practice and not just in the scenarios – every area of society is placed under surveillance. Profiles are searched and produced in the realm of business, in politics, in the military and in everyday life. Sensing technologies such as CCTV, RFID chips, drones and scanners are used to search for terrorists, to monitor sporting events and cash machines – but also one’s own employees. As an essential and yet contested value in modern societies, security is interpreted and implemented primarily by means of technology.
Referring to the development of the military, Armand Mattelart has coined the term ‘techno-security’ to draw attention to the ‘globalization of surveillance’ since 9/11 which, he argues, has increasingly been characterized by the ‘techno-fetishism’ of current military strategies such as the technology-driven ‘revolution in military affairs’. For Mattelart, techno-security means an ‘exclusively technological approach to intelligence gathering, at the expense of human intelligence’ (Mattelart 2010: 138). The current military logic of modern network-centred high-tech warfare can be described as a logic of targeting, identifying and pursuing. A complex digital network of computers and sensors is designed to provide a comprehensive overview of the battle arena in real time. This idea is based on the premise that military success can be engineered by information sovereignty, technological superiority and the close interlinking of intelligence, command center(s) and weapons technology. Surprisingly, this strategic military logic is also found in the realm of ‘civilian security’ as part of democratically legitimated security policy. A paradigmatic example of this is DAS, the new ‘domain awareness system’ used by the New York police and developed in cooperation with Microsoft. Not only does it gather images from 3000 surveillance cameras, 1600 radiation detectors and more than a hundred stationary and mobile license plate scanners in real time; it also feeds police radio and emergency calls into huge databases run by crime and terrorism combat units and compares suspects’ data. It also makes it possible to track the movements of people or vehicles over long distances in real time and to reconstruct such movements over the previous weeks. A densely woven system of multiple sensors has been constructed to ensure that nothing that happens in public space goes undocumented. 
Of course, one could argue that all this is a delayed response to the trauma of 9/11 – it is happening in New York, after all – and that a similar situation would be inconceivable in Europe due to data protection legislation. However, the military logic of C4 (command, control, computers, communication), which is based on ISR – intelligence, surveillance and reconnaissance – is found increasingly in the civilian domain as well. We need only recall the 2012 Olympic Games in London: more than 13,000 British soldiers were deployed or were on stand-by there, along with aircraft carriers, ground-to-air missiles and unmanned drones. Data protection laws and basic rights were temporarily suspended, as when peaceful demonstrators were briefly detained to prevent them from entering the Olympic zone for the duration of the Games (Boyle and Haggerty 2012; Graham 2012). Things we had for a long time only witnessed at G8 summits are becoming the norm at all large-scale events. After the recent terror attacks in Europe – e.g. in Paris, Nice, Berlin and Brussels – these developments are accelerating further in the EU.
For a number of reasons, it seems sensible to me to conceive of security nowadays in terms of security culture (Daase 2012). This has, in part, to do with the way military logic has expanded to encompass the civilian realm, with the growing perception of threats from various quarters, and with society’s preoccupation with the biopolitical value of security (of life and limb). This ‘security culture’ approach not only sheds greater light on institutional actors such as the military and the police but also enables a more comprehensive understanding of security regimes in everyday culture. Culture here is understood as a varied and dynamic socio-cultural practice involving many heterogeneous agents and actants. Regrettably, science and technology studies approaches have until recently remained marginal in surveillance and critical security studies (i.a. Aas et al. 2009). And yet it seems crucial to understand technosecurity as a complex sociotechnical practice with heterogeneous human and non-human actors. Accordingly, the actors of techno-security culture include not only police forces, secret services and think tanks but also algorithms, social media, military doctrines and software engineers. By conceiving of security in terms of techno-security it also becomes possible to ask why imagination plays such a central role in the context of security, how new (surveillance) technologies impact upon our thinking, our perceptions, our behaviour and our techno-imaginations, and what effects new epistemologies and ontologies have on the configuration of society. In this context, technology and media are interpreted not just as a specialized tool (of control) but as discourse, praxis and artifact.
They are inscribed with scripts (Akrich 1992), with instructions for action that are linked to visions and epistemic paradigms, to values and norms; and these scripts convey categorizations and standardizations (Bowker and Star 1999) while also enabling ‘social sorting’ to occur (Lyon 2003). To mention just three examples: Geof Givens and his co-authors have pointed out that face recognition software may have a gendered, racist or age-based bias if certain age groups or skin colours are more easily recognized than others (Givens et al. 2004); Torin Monahan (2009) described the discriminatory impacts of US electronic benefit transfer systems especially for female recipients of state welfare payments; and Bowker et al. (2009) drew attention to the fact that while the social network analysis used in police work serves to gather huge amounts of data, it is primarily a quantitative approach that tends to favour form over content and to ignore lifeworld practices and meanings.
Thus, technologies are not mere tools but are also reifications of categories, habits, ways of thinking, and imaginations which exert impacts in the form of power relations. One of the questions raised in this context is how certain modes of imagination drive forward technologies of securitization; another is whether technologies themselves influence imaginative practices and, if so, how: does the media/control logic of the database perhaps drive what has been termed ‘datification’ and data retention? This is plausible, given that the more data sets a database contains, the more valuable it is considered to be (Manovich 2001; Gugerli 2009). Computer simulations (Bogard 2012), the technology of scenario planning, data mining and worst case imagination are all used to exert a measure of control over uncertainty and insecurity and unforeseeable risks (de Goede 2008; Salter 2008; Kaufmann 2011; Hempel et al. 2011). The logic of preemption and prevention prefers the imagination to the power of the factual – suspicion becomes more important than evidence (Salter 2008: 243). The logic of prevention is one of risk assessment – an assessment of as many potential dangers as possible – but not one of specific dangers emanating from specific actors. Whereas the logic of averting specific dangers follows a linear means-purpose relation, the logic of risk is necessarily vague, unclear and open-ended, and thrives on imagining eventualities.
In this sense, then, imagination and the development of (im)possible scenarios based on automated processes of recombination constitute today’s epistemological foundation for risk management. Automated and semi-autonomous technologies of preventive, predictive analysis, of real-time monitoring and individualized targeting are regarded as appropriate means to combat unforeseeable risks – fueling, in turn, the illusion of and yearning for technological superiority (see, among others, Bigo and Jeandesboz 2009; Graham 2006), something that ironically can flip over into its very opposite. As secret services and their big data collections expand, for example, so too does the practice of whistleblowing. Furthermore, even critics nominally loyal to the services are beginning to wonder out loud whether they possess far too much data to handle and are thus rendered incapable of acting (Möchel 2014).
Driven by the desire for ever more knowledge and information, security actors are developing complex networks intended to gather all kinds of available data from different (generally linked) sources. Preventive analysis is supposed to make the non-calculable calculable. At the same time, datification – the constantly expanding social media network, the multimedia interaction between individuals and things – enables the collection of huge amounts of data, which in turn can be scanned for patterns and used to produce profiles (Grusin 2010).
In the everyday work of security agencies, this practice often seems to lead to a rather banal and indeed bureaucratized imagination: scenario testing as a recombination of known scenarios – in the hope of thus preempting possible terrorist acts, catastrophes or even pandemics. And the more data, profiles and behaviour patterns are stored in their databases, the better prepared they feel for future disasters – or not, as the case may be, since at the same time it is obvious that this process can never be brought to an end:
Security is less about reacting to, controlling or prosecuting crime than addressing the conditions precedent to it. The logic of security dictates earlier and earlier interventions to reduce opportunity, to target harden and to increase surveillance even before the commission of crime is a distant prospect. (Zedner 2007: 265).
One effect of this bureaucratic imagination is a data collection zeal of unprecedented dimensions. The NSA affair is surely the best example of this; others include data retention which, up until recently, was widespread in the EU as well, and the growing expansion of digital border security systems. One might also think of the US-VISIT programme, in which foreigners are photographed upon entry to the United States and their biometric fingerprints taken by transportation security officers. These data are stored in a database that can be accessed by 30,000 employees of various US government agencies (Homeland Security 2009). A similar system, Eurodac, has long been in place in the EU, collecting the fingerprints of those seeking asylum in Europe.
The idea of managing risks by means of surveillance and data monitoring arose during the 1990s (if not before) and has been extended ever further since 9/11. What it involves is not primarily following up on a specific suspect or suspicion but rather preventively ‘securing’ security. This preventive logic of surveillance and criminal prosecution – and thus also imprisonment – is less about averting specific threats than about prevention and premediation and about minimizing risks and costs. The characteristic phenomena associated with this logic include data retention, predictive policing, and working with prior incriminating circumstances, such as the use of certain words (see the case of Andrej Holm)6 or the fact of being resident for a long period of time in a country not considered to be a tourist destination, such as Yemen or Syria. In 2016, a 31-year-old in France was jailed for two years because he regularly visited so-called jihadi websites, downloaded a plan of a major building in Paris and made a mocking remark about the building on social media (Pany 2016).
Systematized imagination and high-tech build-up
Further analysis is required of our understanding of security as perpetual, as an idea that sees anything and everything as a threat and thus drives imagination to ever more dizzying heights while driving forward the strategic logic of a ubiquitous worst case scenario. We need a theoretical approach to technosecurity that facilitates analysis of the politics of knowledge, technoimaginations, and the values and norms implemented in technologies as well as in technical infrastructures; at the same time, we need to examine how the effects of current software and automated decision-making facilitate ‘power through the algorithm’ (Lash 2007) in all manner of surveillance discourses and practices. Up to now, studies that analyse the logic and consequences of, say, biometric or datamining software have been rare. Such studies would be helpful for looking more closely at the effects of security’s sociotechnologies and gaining a better understanding of technosecurity governance in the twenty-first century. One question that arises is the extent to which certain techno-logics drive our perceptions of the world as being everywhere and at all times at risk – thereby activating calls for technology-centred maximum security.
At the same time, wild cards seem to be the expression of a deep-seated uncertainty regarding what exactly the truly relevant threats actually are. Since we can never be completely sure whether they will be linked to the (un)disrupted flow of goods, to flood prevention or to terrorist attacks at an airport, we invent a few wild cards just to be on the safe side. The effects recall the aporia in which the Cold War warriors of the 1950s became caught up:
Obsessed with preparedness, they sometimes did not scruple about overstating the threat for which preparation was necessary. They practised psychological warfare on their own people. Strategists like Kahn and Wohlstetter […] were not responsible for starting the arms race, but the more they speculated on the unknown terrors of the future, the faster the race was run. (Menand 2005)
I am grateful to the reviewers as well as to Katrin M. Kämpf for critical comments and helpful remarks on an earlier version of this essay. Many thanks to Kathleen Cross for her very thoughtful translation of most of this paper.