6
Frame Against the Grain: Asymmetries, Interference, and the Politics of EU Comparison
Tereza Stöckelová
Since it is only worth comparing the incommensurable, comparing the commensurable is a task for accountants, not anthropologists (Viveiros de Castro 2004: 14).
Eduardo Viveiros de Castro may have been too narrow-minded about accountants (since 2008, we have learnt the hard way that their practice is much more creative than previously imagined), but he surely was right about anthropologists. Comparing the apparently commensurable not only offers very little insight beyond confirming what has already been assumed, but it also politically aligns social science analysis with the dominant orderings of reality. (In)comparability is not a pre-given feature of the phenomena we investigate, but a result of framing(s) enacted by various actors – researchers included. To paraphrase Bruno Latour’s principle of (ir)reducibility (1988: 158), nothing is, by itself, either comparable or incomparable to anything else. It has to be made so.
Since (in)comparability is not ‘out there’, a framing that makes specific comparisons appear reasonable and excludes others is by no means politically and epistemologically innocent. Implicated in power relations, it is a practice that can stabilise, strengthen, or subvert. And, as we will see, it (re)shapes the realities concerned. It is in this sense that I use the notions of frame and framing here, freely inspired by a variety of social scientists (e.g. Goffman 1974; Callon 1998). These scholars have deployed them to analyse the apparent paradox between the (often hidden) constructed-ness of things, persons, and issues (which are not given by nature, but are socially, materially, and discursively enacted) and their relative stability and effectuality. Quantification is surely today’s favoured strategy for imposing frames of comparison as if they were natural, thus rendering them effectively invisible (Porter 1995). The vivid social life of university rankings, which apparently make all institutions across the globe easily comparable at a single glance, is just one example (for evidence from the Netherlands, see de Rijcke et al., this volume). Such framing and comparative efforts – serving as a tool with which to govern academia in managerial and bureaucratic modes – have been elaborately analysed and criticised by social scientists (e.g. Strathern 2000; Shore 2008). There are, however, other practices implicated in enacting frames and units of comparison in which academics (including social scientists) massively partake. Among these are the entrenched geopolitical orderings that I attend to closely in this chapter.
This chapter is certainly not an argument against comparing or comparative research. Rather, it is a call to pay attention to hidden framings and asymmetries of comparison, and for a reflexive, or diffractive (Haraway 1992), discussion of the effects of ‘making (in)comparable’, in which our social science research practices prominently participate.
From this perspective, I reflect on two interrelated research projects I was involved in between 2006 and 2010. Focused on changing academic cultures and practices, both of these projects had strong comparative elements. The first project, Knowledge, Institutions and Gender: An East-West Comparative Study (KNOWING) (2006–2008), was funded by the EU’s Sixth Framework Programme (FP6), and involved five research teams from Austria, the Czech Republic, Finland, Slovakia, and the UK. Each team carried out qualitative research including interviews, focus group discussions, and participant observation in two academic institutions in the social sciences (most often but not solely sociology), and in the biosciences (molecular biology, biochemistry, organic chemistry). The primary aim of the project was to ‘examine the production of knowledge contexts and cultures, including the role of gender, from an “East-West” perspective and identify structural and institutionalised practices and procedures, including standards of excellence, that hinder and/or promote the equal participation of women in science’ (KNOWING Proposal, Annex I 2005: 4).
The second project, Articulations of Science Policies in Research Practice and the Academic Path (2009–2010), funded by the Grant Agency of the Academy of Sciences of the Czech Republic (GAAV), was a follow-up to KNOWING in the Czech Republic and focused on two selected topics (academic paths, and assessment and accountability). It drew upon KNOWING data and on group interviews conducted with researchers in different disciplines and types of academic institutions. Alongside our interest in investigating the topics in more detail, we also saw the potential to follow the ongoing changes in Czech science policy and their ‘translations’ into variable institutional and regional contexts. Our study was carried out amidst the most intense academic protests in Czech history – against cuts in the research budget of the Academy of Sciences, against increased public funding for industrial research and innovation, and against the introduction of a research evaluation framework that ties public funding to strictly quantitative criteria for assessing research performance. This timing is important, as it introduced a special dynamic into the relationship between the research participants and us as researchers, something that I will discuss further on in the text.
Both projects thus intended to make multiple comparisons: between European countries, between ‘East and West’, between selected academic disciplines, and between different types of research institutions. At the same time, the investigated realities already involved a number of framings, such as assessment exercises comparing the ‘research performance’ of teams (in an institute) or research organisations (in a country). Here I look into how, and with what (collateral) effects, we as researchers practised comparisons in these two projects: what was taken as the frame of comparison? How did researchers’ framings interfere with those of research participants? How were the frames reflected, taken into account, or made an object of enquiry sui generis? What epistemic and geopolitical asymmetries were embedded in our practices of comparison, and how may they have become destabilised over the course of the two projects? While we cannot stop framing, I will argue in the conclusion that there are alternatives to how we compare, and that we should try to frame (in EU projects) ‘against the grain’. I suggest that we should be more courageous and challenging in relation to epistemologically and politically established framings.
Making Units
Composed of research teams from different countries, the consortia of EU-funded projects addressing societal challenges imply comparisons between EU member states. This corresponds to the European idea of ‘identity in diversity’ (of cultures, people, policies, and so on) that has to be investigated, understood, and constantly managed and harmonised.1 These comparisons do not simply represent realities ‘out there’ – they contribute to their enactment. In an analysis of the performative effects of the Eurobarometers, John Law observed that
these statistical methods are creating a homogeneous European collective space containing isomorphic individuals which is then re-stratified into sub-spaces or sub-populations (for instance, 27 country distributions of opinion) and, so, re-creating the nation state in a particular mode (2009: 248).
What are the assumptions and effects of multi-member-state EU projects (including qualitative ones), and how do they shape the realities they study?
Let us look closely at the KNOWING project. Although we were interested in the disciplinary differences between the social sciences and the biosciences (and in comparisons along gender lines), what moved strongly to the forefront during the project were comparisons between ‘each country’s distinct epistemological culture and practice’ (KNOWING Proposal, Annex I 2005: 13). As in many other social science projects carried out in the European Commission (EC) framework (Godfroy 2010), each team in this project investigated the research landscape and institutions in its own member state and in its native language (except for one German researcher in the Czech team who had been living in Prague for many years and who investigated – mostly in English – Czech and foreign researchers at the bioscience institute). When we met for a consortium-wide workshop, members of each team spoke mainly about (and from) the perspective of their own ‘national’ data. While discussions among the Czech team back in Prague turned on differences and similarities between the domestic fieldwork sites we investigated, in the consortium debates ‘national reality’ tended to come to the forefront and become homogeneous. Most of the time, it was ‘national reports’ that were elaborated through the work packages and exchanged before consortium meetings. In essence, they became the basic elements of our collective debates.2 They were the durable and mobile inscriptions that we could always easily refer to throughout the whole project.
In each country, there are surely distinct evaluation systems as well as funding schemes, agencies, and specific (language) audiences (particularly in the social sciences) that call for national comparisons and the identification of similarities and differences between nation-states. However, the strong comparative logic of EU member states – as projected into the format of ‘national’ research teams and funding for domestic fieldwork – tends to make certain kinds of phenomena less visible and researchable. In quantitative surveys such as the Eurobarometer in Law’s (2009) example, nation-states are in most cases enacted as internally homogeneous units of comparison. In qualitative studies such as KNOWING, epistemic asymmetries (and possibly huge heterogeneities inside a member state) may be made invisible, as researchers – constrained by the project budget and by their own capacity, stretched across multiple project obligations – tend to carry out their fieldwork in the area where they live and work. That is also where research institutions able to successfully apply for EU funding are situated.3 However, in consortium debates and international publications, the results are then often taken as representative of the country or national culture as such.
The working assumption of the research process at that point was that EU member states are relatively stable entities with distinct research cultures which can be reasonably compared, and which are faced with (and need to negotiate) the ‘European discourse’ of excellence. And by means of our own research, we contributed to strengthening the stabilisation of these entities. Here is an illustrative quotation from the collective monograph published from the KNOWING project:
Already a first analysis of our material shows that the excellence discourse has reached all the countries investigated and that national and European discourses are closely intertwined (although in the UK, national excellence discourses are perhaps less explicitly entangled with European ones). Yet the way this concept becomes operationalised, filled with meaning and transformed into practice differs in interesting ways. These variations might be seen as linked to the different histories of national research systems, to the imagined place a country/institution holds on a more global research map and in particular to when and how research assessment exercises have started to be integrated (Felt and Stöckelová 2009: 76).
How, then, are such comparative conclusions arrived at, and backed up?
Juggling Comparability
In our research practice, the actual material for comparison did not involve primary data. These we did not share – due to language barriers, privacy protection, and the potential epistemological and ethical difficulties of working with ethnographic data generated by someone else in a different field site. We felt this could not be seriously tackled in a three-year project with a limited budget (there were, for example, no resources for translation). Additionally, the proposition that at least some non-ethnographic data could be generated and exchanged for a comparative analysis was strongly resisted by some teams in the consortium. There was a proposal in the first work package of the project to distribute a ‘life course questionnaire’ (LCQ) as a standardised tool with supposedly ‘identical’ questions to researchers in the academic institutions under study. The questionnaires would be processed statistically across the five countries prior to the participant observation phase of the fieldwork, and a sort of comparative baseline between the countries would be established. The ultimate point of disagreement was over the obligatory use of a Likert scale in the questionnaire, which caused a major conflict within the consortium. Some of us were wary of creating an impression of easy and objective comparability packaged in statistics, and argued, for example, that
[a]lthough we seem to be quite aware of contingencies, contexts or cultural specificities in the construction and structuring of epistemic communities that we study we do not assume these same contingencies when constructing our methodology […] [There are] different cultures of expressing discontent or dissatisfaction. How are we then going to interpret the measurements: Are people in institutions in one country more critical/satisfied/hesitating or are the conditions in the institutions more/less satisfactory? (Internal consortium communication 2006)
Others believed that
it is necessary for all teams to have a common basis to work on. We do not share the idea that quantitative analysis of Likert-scaled questions only makes sense if all conditions are equal. We rather think it can be quite interesting when keeping in mind these unequal conditions. We also do not want to risk giving up the original idea of the project to have it commonly conducted, and fear this could happen if we now start using different methods. We are aware of the fact, that there is a need to consider nationally differing and distorting variables in interpreting the LCQ […] [W]e think that comparisons of raw data and first analyses can be possible – or should not be made impossible right from the beginning at least. We think it would be a pity if we did away with possible ways of comparison this early, especially as it does not seem too time-consuming or extensive to add such questions (Internal consortium communication 2006).
Though all the teams were determined to generate the LCQ data, the conflict over the questionnaire’s content and the form of its actual implementation (e.g. by personal interview vs. by post) was so polarising within the consortium that it could only be resolved through the voting procedure stipulated in the official contract, and not by negotiation. The margin of the vote on the question of whether ‘it is obligatory for each partner in the consortium carrying out the research to use the Likert-scale item questions’ was very narrow – 4:3 in favour of each partner being free to include, or not, the Likert-scale items. What is interesting about the whole controversy is that all the different positions on comparability and incomparability were argued with reference to nation-state specifics. The nation-state teams were reassembled as sites of epistemic autonomy in the consortium – they voted and expressed positions.4 At the same time, the nation-states were reinforced as the units of in/comparability.
The failed attempt at comparing ‘raw data’ did not, however, prevent us from making any comparisons at all. In the later stages of the project, what was compared most often were the ‘claims’ made about national research practices, cultures, and policies on the basis of our fieldwork (but often also of our other experiences as academics, committee board members, and so on). These claims were formulated in national work package reports, in consortium discussions when commenting on each other’s chapter drafts (Felt 2009: 35), or in various informal conversations and cross-cutting relations established throughout the project.
Comparability (and the limits to it) was gradually built up through a series of exchanges in the consortium during the lifespan of the project. At the same time, a certain ambivalence about the nature of our comparative efforts remained across the teams. In our collective monograph, we say that ‘[p]erhaps comparison, then, may reside not in the comparing of data, or results or findings, but in consideration of what questions it even made sense to ask in the first place’ (Molyneux-Hodgson 2009: iii). Equally, we suggest it resides in ‘the capturing of important similarities and differences among the countries participating in the study’ (Felt 2009: 35). The key point for my argument is that, in the process of (hopefully) making sense and travelling well beyond the consortium, our claims first of all re-enact the very existence of the countries as substantially homogeneous units that can then be compared.
Sian Lazar distinguishes between the ‘representative form of comparison’ (comparing samples in more or less strict statistical terms), and ‘disjunctive comparison’, which involves setting ‘two groups (or cultures, societies) alongside one another and see[ing] what comes out of an examination of their similarities and differences’ (2012: 352). What happens in qualitative EU projects is surely not a representative statistical comparison. But it is not simply a disjunctive comparison either, as it is not arbitrary but speaks to the (pre-)existing EU political realities and the image of a ‘harmonised Europe’. Though applicants might deny subscribing to this agenda, they know very well how the application has to be phrased in order to get funding. This is what we promised as an ‘EU added value’, a category required in the application:
The added value of this project lies in its comparative design. By pairing new member states with established EU member states, the project will benefit from both prior experience and new outlooks. Further, this comparative collaborative framework maximises the usefulness of the project results on a European scale by incorporating varied contexts. The recommendations produced from the project will serve to better harmonise standards of scientific excellence throughout Europe, thus contributing to existing debates on scientific excellence (KNOWING Proposal, Annex I 2005: 20).
The uses and effects of knowledge codetermined by such ‘harmonised’ coordinates are only partly in the hands of researchers.
Rewriting Asymmetries
The above quote points to the fact that the frame of comparison used in the research project corresponds to a specific EU political imagination, and it also suggests that the units compared are not equal. The proposal distinguished between ‘new’ member states and ‘established’ ones, and this had implications for our comparative practice. Even though we reflected in our collective monograph that ‘[t]he original intent to somehow bring “East” and “West” into a form of relation – by anticipating difference between the contexts and cultures implied by these geopolitically influenced words – was found in the end to offer little that was meaningful’ (Molyneux-Hodgson 2009: ii), the position of one of the countries counted among the established in the original proposal (namely the UK) remained distinctive in many respects. In essence, it was the most established amongst the established. First, of the science policies introduced at different times across Europe, the most ‘advanced’ were those of the UK (e.g. the nationwide Research Assessment Exercise). Second, the UK (as well as Germany and the United States) was often referred to by our research participants as a comparative benchmark in terms of research quality and a desired mobility destination. Third, a great deal of the Science and Technology Studies (STS) literature that we worked with was concerned with the UK, was written by UK researchers, and was, obviously, in English. And last, but not least, consortium meetings took place in English, which, in principle, gave an advantage to native speakers, who were able to express themselves more easily and in a more nuanced way. Anglo-Saxon realities were thus omnipresent.
This had at least two interrelated effects on the comparisons that were made. On the one hand, the UK – the research team as well as the researchers under study – compared itself less to other European countries, and when it referred to other countries at all, it was most often to the US. The UK played out as a rather self-contained case. On the other hand, it was difficult for the other research teams to avoid direct or indirect, explicit or implicit, comparisons to the UK.5
Susan Meriläinen et al. (2008) analysed a similar dynamic when they traced the peer review process of a paper they submitted to the journal Organization, which today declares itself to be ‘theory-driven, international in scope and vision, open, reflective, imaginative and critical, interdisciplinary, facilitating exchange amongst scholars from a wide range of current disciplinary bases and perspective’ (Organization 2013). Meriläinen et al. interpreted their experience of this process as an example of ‘hegemonic academic practice’, as the journal’s reviewers called on them to use UK data as a benchmark for Finnish data, and the data on male managers as a benchmark for female managers (2008: 591). They also stated that ‘[w]hile Britishness became the norm, Finnishness was reduced to a deviation from the norm’ (ibid.: 591).
Although I have not experienced any explicit pressure from journals such as that described by Meriläinen and her colleagues, negotiating Anglo-Saxon realities has been a near-constant process, as most of the (STS) literature we worked with in the project drew upon research and on technical, cultural, and natural references from those geopolitical parts of the world. Furthermore, it must be emphasised that most of the time I actively and happily participated in re-enacting these comparative asymmetries, even if I tried to make some difference to (and with) them. To give an example from a policymaking context, our team made use of the asymmetry between the Czech Republic and the UK when we invited our UK colleague in the project consortium to speak at a conference (on science policy) that we organised in the Senate of the Parliament. We did not present the UK simply as an advanced case to be followed and ‘caught up with’; instead we tried to ‘problematize the idea that Western European countries have found an ideal science policy that can be mechanically transferred to the Czech Republic’ (quote from the Science Policy and Science in Action Conference Invitation). However, as far as we understood them, all the questions asked after our UK colleague’s presentation in the Senate seemed to proceed from the assumption that she must be trying to justify the UK Research Assessment Exercise (RAE)/Research Excellence Framework (REF) system as a positive model that would help the local academic community argue against the current version of research assessment in the Czech Republic.
Academics on the ‘periphery’ do not often subvert; on the contrary, they try to capitalise on the hegemonic configuration. As Meriläinen et al. observed,
[i]n general, scholars from peripheral countries such as Finland are seduced to marginalize themselves in international fora so that they may gain benefits domestically (getting articles accepted in high impact journals improves their position at home, e.g. in applying for academic jobs). They are forced to opt in core-periphery relations if they want to stay in the “game” in the periphery (2008: 594; for a similar argument see also Aalbers 2004: 320–321).
I was, of course, interested in publishing in English in UK-based journals, and in packaging my arguments for the Anglophone ‘model reader’. Besides my interest in reaching the wider STS community, I also knew that doing so would count in the institutional assessment of my work. Somewhat paradoxically, a key argument of one of my articles was a critique of the understanding (largely shared by science policies and science studies) of scientific knowledge and objects as ‘immutable mobiles’ (Latour 1987) and of the problematic consequences this has for the social sciences in non-Anglophone countries – for the local relevance of the knowledge they produce, and for their contribution to the performance of globally converging societies (Stöckelová 2012). And to round out the paradox, I later received a special financial bonus from my research institute for having published in a journal with such a high impact factor, as the publication increased the public funding of the institute – calculated according to a research assessment methodology that I critically analyse in that same article. The attempt to disturb the asymmetry in evaluating and valuing knowledge was at the same time incorporated into its operation.
The special and repeated efforts needed to displace and diffract asymmetries can hardly be overestimated. As noted above, in the KNOWING project we investigated two research institutions in each country – one in the biosciences, and one in the social sciences. The epistemic and policy landscape in which our study was conducted was strongly unbalanced in favour of the biosciences. On the one hand, a great majority of STS concepts (such as the notion of ‘lab ethnography’) were developed on the basis of empirical material drawn from the natural and biosciences (Garforth 2012). On the other hand, it is also these disciplines and types of research that serve as a more or less explicit model of and for policy (Garforth and Stöckelová 2012). It took almost permanent effort and reflection for us to overcome this uneven condition when developing our understanding of the studied research practices, cultures, and policies, and to avoid talking about the social sciences as an exception or deviation from a ‘standard’. Paradoxically, the fact that we all came from the social sciences did not help much. On the contrary, it may have created a feeling that we all understood our field already, and that we did not need to spend as much time discussing our social science data. The political and epistemic economy worked in favour of the asymmetries embedded in the comparison. I will note in the conclusion below that going against these asymmetries can be an effective methodological strategy.
Interfering with the Researched
The complexities of comparing data generated in the context of different disciplines made themselves very apparent in the follow-up to KNOWING, carried out in the Czech Republic in 2009–2010. In this project, ‘Articulations of Science Policies in Research Practice and the Academic Path’, we carried out additional group interviews with researchers in different disciplines and institutional contexts. What was extraordinary about the project was its timing: it coincided with the introduction of major changes in the country’s research assessment system. Experiments with a new assessment system had begun in 2004, when basic measures were introduced to quantitatively evaluate research outputs. These did not draw much attention from ordinary researchers, as they had no immediate consequences. In 2009, however, institutional funding for research organisations and universities began to be closely tied to evaluation scores, at the same time as overall cuts were being made in the public budget for research. To make a long story short, this resulted in substantial cuts in the budget of the Academy of Sciences (which, unlike the universities, receives public money solely for doing research) and a consequent increase in tension between the institutions and disciplines inside the Academy, which now had to compete with each other for diminishing resources. In this atmosphere, there were heated debates (in public as well as in academic spaces) about the adequacy of the evaluation procedure and its criteria. Among the questions often raised were which types of output (academic, extra-academic), which types of institution (research institutes, universities), and which disciplines (social sciences and humanities, natural sciences, technical disciplines) are comparable or commensurable, and what the appropriate levels are on which to make comparisons and rankings (disciplines, institutions, research teams, or individuals).
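To make the logic of this commensuration concrete, the following minimal sketch (in Python) illustrates how a points-based assessment can translate heterogeneous outputs into a single score, and how public funding can then be distributed in proportion to that score. The point weights, institutions, output counts, and budget are hypothetical illustrations of the general mechanism, not the actual Czech methodology, which was considerably more elaborate and contested.

```python
# Toy illustration of a points-based research assessment: heterogeneous
# outputs are commensurated into a single point total per institution,
# and institutional funding is distributed in proportion to that total.
# All weights, counts, and the budget are hypothetical.

POINT_WEIGHTS = {
    "journal_article": 30,   # hypothetical points per output type
    "book": 40,
    "applied_result": 10,
}

institutions = {
    "Institute A (biosciences)": {"journal_article": 120, "applied_result": 15},
    "Institute B (social sciences)": {"journal_article": 25, "book": 30},
}

def score(outputs: dict) -> int:
    """Commensurate heterogeneous outputs into a single point total."""
    return sum(POINT_WEIGHTS[kind] * count for kind, count in outputs.items())

BUDGET = 100_000_000  # total public funding to distribute (hypothetical)

scores = {name: score(outputs) for name, outputs in institutions.items()}
total = sum(scores.values())

for name, points in sorted(scores.items(), key=lambda kv: -kv[1]):
    share = points / total
    print(f"{name}: {points} points -> {share:.1%} of budget = {share * BUDGET:,.0f}")
```

Note how the single scalar score erases any qualitative difference between, say, a monograph and an applied result: whoever sets the weights sets the frame of comparison, which is precisely what the controversy described below was about.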
We set out to study how, and with what effects, unprecedented quantitative comparisons of entities characterised by these multiple differences were constructed.6 We were interested in the processes of commensuration and of ‘making things comparable’. In our analysis, we showed how the national evaluation was originally an initiative of a group of bioscientists who strove to establish their superiority in the research system in terms of professional accountability, but who later started to lose control over the whole process as industrial lobbies, bureaucratic logic, and managerial accountability asserted themselves (Linková and Stöckelová 2012). During the fieldwork, our study was perceived by the research participants as interfering rather directly in the controversy. Most of the researchers we approached with a request for an interview had strong opinions on these issues, and they wanted to share their opinions and make them heard through our research. While the national evaluation tried to impose universal commensurability and ranking, its opponents denounced the hidden asymmetries in the seemingly impartial evaluation criteria, and argued either for a different frame of comparison (one more favourable to them and their institutions and disciplines), or for acknowledging the incommensurability of the compared entities – for instance, noting the incommensurable nature of the outputs of different disciplines, their epistemic genres, and their societal roles. Those in the latter group actually saw our ‘comparative research’ project as a possible means of showing these entities to be incomparable – and of making them so.
The situation of the public and policy controversy might be specific to an extent, but it illuminates an important issue concerning comparative studies. Research never starts on un(infra)structured grounds, and it inevitably interferes with the existing frames of comparison and the (in)commensurabilities practised by various actors in the field (not least by Viveiros de Castro’s accountants). These interferences may remain invisible if the frames of comparison and their asymmetries are settled and practised more or less consensually in the studied field, and if the researcher does not set out to deliberately unsettle the framings but instead goes along with them. In this case, she interferes by effectively strengthening them. I argue that an empiricist approach to comparison – which prescribes that only what is ‘objectively’ comparable can be legitimately compared – does exactly this. It strengthens the dominant frames of comparison by respecting them – as in the case of anthropological ‘cross-cultural comparisons’ – when practised as a comparison of ‘the others’ (solely) among themselves, thereby re-enacting the West as coherent, unique, and indeed incomparable.7
Comparing Against the Grain
Comparing boxing and computer programming, Robert Schmidt (2008) makes a case against a kind of comparison which ‘simply emphasizes the links and commonalities between the objects of comparison’ (2008: 340), and argues for an experimental approach to comparison. He insists it is the latter that can bring about a desirable epistemological rupture and unexpected insight. In a similar vein, I have argued in this chapter against the epistemologically naïve notion of comparability dwelling ‘out there’ as a limit to what we can and should compare as researchers.
My point is, however, more political. The trap of the notion of comparability awaiting the researcher ‘out there’ is not only epistemological. It concerns what reality is, and how it will change and develop. Indeed, there exist recognised units and frames of comparison and in/comparabilities in the reality we investigate. However, they are not given but practised by various actors and inscribed into infrastructures, architectures, and imaginations. Social research can, and should, study these comparative practices and arrangements, while it cannot itself avoid engaging in and with them (be it in a critical or affirmative mode). With researchers’ contributions, they can be practised differently, or not. I do not argue that as researchers, we should always stand in a subversive relationship to the framings practised by the actors we study. There might be minor or subaltern frames of comparison we decide to reinforce and make visible. Also, strategically sharing a dominant frame may help to make certain points that it would otherwise be hard to hear. But it seems crucial to remain aware of the performativity of our comparative undertakings. What would the elements of cultivating such awareness be?
The first issue is timing. Awareness should influence the ways we design our research projects. What are the key frames and units of comparison present in the field we are about to enter? Do we want to strengthen or question them? Are there lateral ways of formulating and researching our themes that would not only open new intellectual horizons, but would also deploy new or hitherto marginal realities? And in the context of (qualitative) multinational EU projects, an explicit task should be to examine how we can unsettle the entrenched research design in which national teams study the (homogenised) reality of their own nation/member state. Certainly, there are things that cannot be planned for or avoided in advance. We can only learn about them from the responses, requests, and traces we leave in the field. Nevertheless, we should learn.
It is remarkable that although the KNOWING project was an explicitly feminist one, it still generated rather limited reflection within (and after) the project on asymmetries and inequalities. While the proposal stated that
[r]esearch conducted from a feminist perspective is characterized by a critique of social inequalities (including but not limited to gender), a research design that provides space for the exploration of women’s everyday experiences and knowledges, is sensitive to and tries to minimise the power differentials between researcher and research participant, and is motivated by the desire to create positive social change (KNOWING Proposal, Annex I 2005: 22),
we were little prepared or equipped to handle the power differentials within the project consortium, let alone those in the researched reality. If we were pushed to reflection at some point, it was only due to a conflict (which is not a bad thing in principle). However, severe conflicts may threaten a project as a whole, erode mutual trust, and needlessly use up a lot of energy. Thinking in terms of the EC newspeak of ‘work packages’, it would be useful to include ‘reflection’ alongside ‘management’ for the duration of a project. In the busy schedule of other work package meetings, milestones, and deliverables, there is indeed very limited time-space and energy for such reflection – even when the willingness and interest are there.
The second issue concerns scale. In the social sciences, the gravity of the performativity issue will rarely be linked to a single project. Rather, it is the multiple, recurring execution of projects that creates powerful machinery for the reproduction of specific frames, units, and asymmetries. This raises questions not only for a single project, but for research and disciplinary communities. In this chapter, I have sketched the contours of what we create (intentionally or not) by engaging in projects and reproducing the arrangement whereby national/member state research teams investigate their national/member state realities. Is this what our disciplines wish to (and should) contribute to Europe?
I would argue that one of the key intellectual missions and socio-political roles of the social sciences has historically been to open established black boxes. However, the black boxes of the nation/member states are reproduced rather than opened up by the usual arrangements of the ‘societal challenges pillar’ of EU-funded research. Such arrangements not only re-enact black boxes, but they also deaden the empirical sensibility of research to complex realities that escape established categories. I am neither arguing for any easy cosmopolitanism, as if Europe were – or should necessarily be – a smoothly shared, common socio-material-discursive space, nor for switching to an alternative standard for European social research. On the contrary, I insist that as much as the European cosmopolitics of composing a shared world (Latour 1999) needs to be experimental, the social research contributing to it needs to be so as well. In my view, more space and resources should be dedicated to unexpected comparisons and experimental research designs.
Admittedly, such research could not drive in the ‘fast lane’ of academic production (Vostal 2015); it would be slower and stumbling, thus coming into conflict with the current standards and measures of ‘excellence’ in an ever-growth-oriented academia. As STS has repeatedly shown (especially in relation to the (non-social) sciences), the epistemic content and the organisational process of research cannot be separated (e.g. Latour 1987). It can thus hardly be overstated that ‘research design’ issues are not simply methodological; they simultaneously concern multiple facets of politics, including academic politics.
As a research community (always incoherent and multi-vocal, of course), we also have to find a way to translate these debates into messages to sponsors and funders (such as the European Commission) about the implications and limits of current research arrangements. The reason is that the actually funded projects are responses to expectations – explicit or implicit – inscribed in calls and evaluation criteria. However, in the context of Europe, the wording of the Vilnius Declaration – a recent, and so far rather unique, message from the social sciences and humanities to policymakers – conceals the performativity of social research. It talks about the social sciences and humanities as ‘indispensable [sic] in generating knowledge about the dynamic changes in human values, identities and citizenship that transform our societies’, and about ‘realigning science with ongoing changes in the ways in which society operates’ (Horizons for Social Sciences and Humanities 2013; my emphasis). Here, science is supposed to catch up with (a single) pre-existing (though changing) society, and it asks for the resources to do so.
I believe we need a more reciprocal understanding of the relation between the social sciences and the realities they study, and a more performative take on knowledge and knowing. For the sake of what there is, and what can be, social research should strive to create investigative frictions and make comparisons that go ‘against the grain’ of prevailing notions, rather than polish (however inadvertently) existing dominant realities. I am sure this would not be to the detriment of intellectual creativity.
Acknowledgements
This chapter has been inspired and shaped by many debates with many people. I would specifically like to thank Lisa Garforth, Marcela Linková, Katja Mayer, Morgan Meyer, Iris Wallenburg, and the editors of this volume for their invaluable comments on earlier versions of this chapter. Robin Cassling and Jennifer Tomomitsu helped me greatly with fine-tuning my English. The writing of this chapter was supported by grant no. P404/11/0127 of the Czech Science Foundation.
Notes
1 This idea may be undergoing change now (in the ‘crisis’) with differences between the ‘North’ and ‘South’ of Europe appearing unbridgeable and escalating into conflicts. We have yet to see if and how this change will translate from economic policies to research ones.
2 Thanks to Lisa Garforth for drawing my attention to the significance of ‘national reports’ in this context.
3 Across the KNOWING consortium, there was indeed only one case study carried out outside the area of a researcher’s residence.
4 In fact, the consultant to the project based in the UK voted differently from the UK partner team carrying out the research, while the consultant based in the Czech Republic abstained from voting.
5 Lisa Garforth (a UK colleague from KNOWING) provided an interesting complementary perspective on the issue of asymmetry when she commented on a draft of this chapter. While she agreed that the UK context kept asserting itself as a particularly vivid reality and standard for comparison in the project, she also pointed to another side of this privileged position. She noted that ‘UK teams have no “private” research findings (in principle everything is available in its first language to the whole team) or non-common language at meetings which occasionally we found a bit problematic; there can be no “asides” in a native language just for colleagues, for example; everything is potentially hearable by everybody’ (2014). And she added that ‘we were also constantly aware of being the least “European” team with the least experience of EC funding systems, reporting systems, even Euro-English language (e.g. the comfortable use of “scientific” to mean what we would call “academic” in EC speak, which I think was also familiar and comfortable to most of the researchers but we never internalised it). For us, this meant that some version of “Europe” or “European research” was being encountered as a relative novelty, especially via the EC’s language and systems’ (written feedback on the draft of the chapter, 2014).
6 Different comparative ‘remarks’ concerning the value and quality of different academic disciplines and institutions in the Czech Republic had been in the air for several years, but had never been translated into an official and quantitative evaluation system.
7 For an analysis of practices of the modern/non-modern incommensurability, see Latour (1993); for a nuanced critique of comparative approaches in anthropology, see Gingrich and Fox (2000).
Bibliography
Aalbers, M. B., ‘Creative Destruction through the Anglo-American Hegemony: a Non-Anglo-American View on Publications, Referees and Language’, Area, 36.3 (2004), 319–322
Callon, M., ‘An Essay on Framing and Overflowing: Economic Externalities Revisited by Sociology’ in M. Callon, ed., The Laws of the Markets (Oxford: Blackwell, 1998), pp. 244–269
De Rijcke et al., ‘Comparing Comparisons: On Rankings and Accounting in Hospitals and Universities’, this volume
Felt, U., ‘Introduction: Knowing and Living in Academic Research’, in U. Felt, ed., Knowing and Living in Academic Research: Convergence and Heterogeneity in Research Cultures in the European Context (Prague: Institute of Sociology of the Academy of Sciences of the Czech Republic, 2009), pp. 17–40
Felt, U., and T. Stöckelová, ‘Modes of Ordering and Boundaries that Matter in Academic Knowledge Production’, in U. Felt, ed., Knowing and Living in Academic Research: Convergence and Heterogeneity in Research Cultures in the European Context (Prague: Institute of Sociology of the Academy of Sciences of the Czech Republic, 2009), pp. 41–124
Garforth, L., ‘In/Visibilities of Research: Seeing and Knowing in STS’, Science, Technology, & Human Values, 37 (2012), 264–285
Garforth, L., and T. Stöckelová, ‘Science Policy and STS from Other Epistemic Places’, Science, Technology, & Human Values, 37 (2012), 226–240
Gingrich, A., and R. G. Fox, eds., Anthropology, by Comparison (London and New York: Routledge, 2000)
Godfroy, A. -S., ‘International Comparisons in Science Studies: What and Why do we Compare?’, Innovation: The European Journal of Social Science Research, 23 (2010), 37–48
Goffman, E., Frame Analysis: An Essay on the Organization of Experience (New York: Harper and Row, 1974)
Haraway, D., ‘The Promises of Monsters: A Regenerative Politics for Inappropriate/d Others’, in L. Grossberg, C. Nelson, and P. A. Treichler, eds., Cultural Studies (New York: Routledge, 1992), pp. 295–337
Latour, B., Science in Action: How to Follow Scientists and Engineers through Society (Milton Keynes: Open University Press, 1987)
——The Pasteurization of France (Cambridge, MA: Harvard University Press, 1988)
——We have Never been Modern (Cambridge, MA: Harvard University Press, 1993)
——Politiques de la nature: comment faire entrer les sciences en démocratie (Paris: La Découverte, 1999)
Law, J., ‘Seeing like a Survey’, Cultural Sociology, 3 (2009), 239–256
Lazar, S., ‘Disjunctive Comparison: Citizenship and Trade Unionism in Bolivia and Argentina’, Journal of the Royal Anthropological Institute (N.S.), 18 (2012), 349–368
Linková, M., and T. Stöckelová, ‘Public Accountability and the Politicization of Science: The Peculiar Journey of Czech Research Assessment’, Science & Public Policy, 39 (2012), 618–629
Meriläinen, S., J. Tienari, R. Thomas, and A. Davies, ‘Hegemonic Academic Practices: Experiences of Publishing from the Periphery’, Organization, 15 (2008), 584–597
Molyneux-Hodgson, S., ‘Preface: The Contexts of Knowing’, in U. Felt, ed., Knowing and Living in Academic Research: Convergence and Heterogeneity in Research Cultures in the European Context (Prague: Institute of Sociology of the Academy of Sciences of the Czech Republic, 2009), pp. i–iii
Organization, <http://org.sagepub.com/> [accessed September 2014]
Porter, T., Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton, NJ: Princeton University Press, 1995)
Schmidt, R., ‘Gaining Insight from Incomparability: Exploratory Comparison in Studies of Social Practices’, Comparative Sociology, 7 (2008), 338–361
Shore, C., ‘Audit Culture and Illiberal Governance: Universities and the Politics of Accountability’, Anthropological Theory, 8 (2008), 278–298
Stöckelová, T., ‘Immutable Mobiles Derailed: STS and the Epistemic Geopolitics of Research Assessment’, Science, Technology, & Human Values, 37 (2012), 286–311
Strathern, M., ed., Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy (London and New York: Routledge, 2000)
Vilnius Declaration – Horizons for Social Sciences and Humanities (2013), <http://horizons.mruni.eu/> [accessed 18 January 2014]
Viveiros de Castro, E., ‘Perspectival Anthropology and the Method of Controlled Equivocation’, Tipití: Journal of the Society for the Anthropology of Lowland South America, 2 (2004), article 1 <http://digitalcommons.trinity.edu/tipiti/vol2/iss1/1> [accessed 18 January 2014]
Vostal, F., ‘Academic Life in the Fast Lane: The Experience of Time and Speed in British Academia’, Time & Society (2015), 71–95