Many of the issues that form interesting topics of research are located at the intersection of different disciplines. For example, I am trained in science and technology studies (STS), but much of my research on medical innovation brings together topics that also motivate medical researchers and public policy scholars. As a result, there is much to gain from writing for diverse and interdisciplinary audiences. I am therefore glad to share some thoughts on how to make your insights interesting beyond the borders of your own field:
State a clear problem
This point may seem self-evident, but it is important to keep in mind that the problems that motivate research in your discipline are not necessarily those that interest a wider audience. I have therefore found it useful to think about my research in terms of broader public problems. For example, the question of how genetic technologies affect health care access is one for which I found receptive audiences in various disciplines.
Don’t be afraid of theory
My second suggestion may seem counterintuitive, since particular theoretical traditions or controversies are often very discipline-specific. Nevertheless, it helps to clearly locate your own perspective in a particular intellectual tradition, and doing so can support your attempt to bring novel insights to a different field. Of course, readers of (for example) medical journals are probably not interested in a detailed exegesis of a particular school of thought – but present your materials through a broader framework, and they may just begin to think differently about other examples that are more or less similar to yours.
Know your strengths
When publishing in other disciplines, we know more than our readers about certain aspects of what we are writing about, while at the same time our intended audience is more knowledgeable about other aspects of the problem. One of the things I thus find most challenging is to be taken seriously in what I have to say, while avoiding being ‘exposed’ as a clueless outsider. I therefore try to strike the right balance between trust in the expertise of my audience and trust in my own. For example, in my prize-winning paper I try to avoid questioning health policy scholars’ expertise on the intricacies of health policy making, but I do think I have something helpful to say about the particularities of novel technologies for health care access.
Erik Aarden is a postdoc at the Department of Science and Technology Studies of the University of Vienna, Austria. He obtained his PhD from Maastricht University, the Netherlands, in 2010 with a study of the integration of genetic diagnostics in three European health care systems and has since continued (mostly comparative) research on the intersection between biomedicine, political institutions and social justice. He has previously been a postdoc at RWTH Aachen University, Germany and a Marie Curie fellow in Maastricht and at Harvard University, US. An article based on his doctoral research was recently awarded the Critical Policy Studies Early Career Stage Researcher Prize.
Since the beginning of this century, many governments have invested heavily in biomedical research, in the hope of both improving population health and stimulating economic growth. Central to these investments has been the establishment of new research infrastructures that are supposed to contribute to cutting-edge biological research and to turning research results into both clinically and commercially viable products. For example, in the United States, the National Institutes of Health launched the National Center for the Advancement of Translational Science (NCATS) in 2012 “so that new treatments and cures for diseases can be delivered to patients faster”. Along similar lines, various institutions in government, academia and private industry in the Netherlands established a national Health Research Infrastructure (Health-RI) last year. But how do policy visions of the medical and economic promise of research match up with the often mundane and highly specific research done within these infrastructures? To what extent can research infrastructures make promises come true, and what are the risks in expecting them to do so?
In a recently published article in Science and Public Policy, I explored these questions in relation to one specific case: the Singapore Tissue Network (STN). At the beginning of this century, the government of Singapore was especially assertive in sketching out a strategy for biomedical research as a domain of social and economic promise. As in many other places, the establishment of new research infrastructures was an important element of this strategy. The STN was set up as part of a first batch of new institutes and infrastructures a little more than fifteen years ago. Yet this particular facility did not exist for long: it was shut down in 2011. This raises the question of why the high hopes associated with this particular institution did not materialize. The official version of events is that researchers did not use the facility, yet that left me with the question of why they did not use it. To find out, I spent five weeks in Singapore in 2013 talking to researchers, administrators and policy-makers about the fate of the STN.
In my article I describe how the STN closed because different actors had very distinct understandings of the repository’s usefulness. In brief, I found that policymakers projected usefulness onto the Singapore Tissue Network, whereas researchers believed that usefulness was produced in the way the repository stored and operated its collection. This difference between projection and production manifested in various ways, ranging from how tissues of interest were identified and collected, to how centrally stored tissues were made available to researchers, to how tissue was supposed to contribute to the aims of advancing medical knowledge and stimulating Singapore’s economy.
An example may help in clarifying the point. For the Singapore government, which is present in this story through its main research funding body, the Agency for Science, Technology and Research (A*STAR), one of the most important functions of the STN was to stimulate collaboration between researchers. Collaboration was considered an important ingredient in making research economically valuable. The funding agency therefore tried to stimulate collaboration by only giving users of the samples access to important data about those samples if they worked together with the colleagues who had contributed the tissue samples to the STN. Yet scientists were used to working with vastly different routines for exchanging data. They would not simply hand over data they had worked to collect, and with which patients had entrusted them, to anyone who came calling. Instead, collaboration with and trust in other researchers was a precondition for providing data to colleagues, not a result of it. Due to this and similar gaps in perspectives on how a central tissue repository might be useful, researchers saw fairly little use in the STN and indeed ended up not using the services it offered. As a consequence, A*STAR did not consider it to be very useful either and decided to close the STN.
So what can we learn from this episode of an infrastructure not delivering on its promise, particularly in light of the ongoing and rapid establishment of similar infrastructures around the world? In my article, I explain that this is not simply a case of policymakers misunderstanding science, or of researchers rejecting any purposes for their work other than advancing science. In fact, biomedical research and development in Singapore continue to this day, with clear economic objectives. What I do suggest is that researchers and policymakers had different expectations, and different timeframes for these expectations, which made the STN redundant for both. An important lesson we may draw from this episode for similar initiatives elsewhere is the central role of trust and communication in making investments in research infrastructures (not) work. Perhaps a more explicit discussion of the different expectations and different ways of working around Singapore’s tissue repository would have resulted in a longer and more successful existence for the STN.
Erik Aarden is postdoctoral university assistant with the STS Department at the University of Vienna. In his research and teaching he is interested in the relations between science and technology, socio-political orders and implications for distributive justice, seen through a comparative lens and with a focus on biomedicine.
Probably very few people in our Western society can imagine a life without medicine as it is established today. Doctors, medical knowledge and techniques like X-rays or ultrasound play a huge role in how we experience our own and others’ bodies. Medicine has made it possible for people to grow very old and to equip them with artificial body parts. Many commercials remind us to donate blood, get vaccinated against tick-borne diseases and stay healthy in general. As part of the neoliberal capitalist ideology, our bodies have become products: we have a duty to keep them running and to maintain them, for example through preventive check-ups.
Part of this reality is rooted at the Josephinum. Its foundation marks the birth of modern medicine in Austria, and it essentially shaped the notion of the human body we hold today. Established in 1784 by Emperor Joseph II (then called the Imperial and Royal Academy of Medicine and Surgery), it was and still is part of Vienna’s Medical University and is best known for its remarkable collection of wax models of skinned human bodies, as well as medical instruments and a library. As the accompanying booklet states, “the history of medicine culminates at the Josephinum”, in the sense of showing what the first insights into the human body looked like. It also claims to be the “physical embodiment of the Medical University’s cultural heritage”.
Back in the 18th century its foundation was part of a massive revolution of the healthcare system, accompanied by the establishment of a new general hospital in Vienna. One of Joseph II’s main ambitions was to bring academic physicians and army doctors – knowledge and practice – together, so that patients would receive much better treatment. The new hospital also housed the first department for female health (Frauengesundheit). Back then this meant nothing like today’s gender medicine: it was all about birth, and it was the first institution providing women the opportunity to abort. The reduction of women to their ability to bear children was the prevailing notion of gender at the time: during the 18th century the conception of ‘the’ human body shifted from a ‘one-sex model’, in which the male was seen as the norm and the female as the abnormal, to a ‘two-sex model’, in which the female body was no longer a deviation from the male but became an entity of its own. Claudia Honegger described this as the genesis of a female special anthropology. Every organ and every bone – her whole body – was interpreted as shaped just for giving birth. It was her duty, and had to be every woman’s highest goal, to do so. But although men had now determined women’s purpose, and the complementarity of sex and gender arose, women still remained inferior: somehow an unfinished or deficient man. This model is still present in sociobiological arguments, which try to explain social circumstances through the physical conditions of human bodies, when in fact the social biases of the scientists are attributed to the bodies under investigation.
At the Josephinum, two out of a total of 16 white full-body wax models represent female bodies. One is standing and the other is lying down and pregnant – the Medici Venus. A similar ratio is found in today’s anatomy books for medical students, such as Gray’s Anatomy. Illustrations of very different areas or organs of the human body are depicted – completely unrelated – with male genitalia. But the bias is noticeable not only in the images but also in the descriptions. Some organs assigned to be female, especially the reproductive system, are described in relation to the male with vocabulary like “smaller” or “less developed”, and not as organs in their own right. Wittingly or unwittingly, “the human body” is male – then and now.
A visit to the Josephinum illustrates many topics of the classes I attended during the Complementary Study Programme (EC) Science-Technology-Society at the Department of Science and Technology Studies. It shows, for example, what Ludwik Fleck called the “tenacity of systems of opinion” (Beharrungstendenz von Meinungssystemen): although countless studies have shown otherwise, science seems to have trouble seeing variations – it recognizes only what fits into common beliefs, presuppositions or ‘prae-ideas’. Everything divergent – if it is seen at all – is reinterpreted or appears strange and abnormal. More specifically, the exhibition also exemplifies that not just gender but also sex has to be understood as anything but stable or objective. Sex is “the sum of socially agreed biological criteria for classifying persons as females or males“ and is therefore basal for conceptions of gender. Like gender, sex is, contrary to popular belief, the result of complex social negotiation processes.
Overall, the exhibition and its comparison to today make it comprehensible how scientific knowledge is a social process. Thanks to feminism and the recognition of feminist epistemology in the social sciences, it is now possible to make visible how deeply patriarchy is seated in seemingly impartial ‘facts’ and scientific knowledge. Furthermore, society and the social norms of various social groups shape not only the self-conceptions and identities of individuals, but also their physical bodies.
Although these findings have not yet gained a foothold in the natural sciences, their research or their results, one can begin to imagine how much power they contain. Realized on a broader social level, they open the gates to more precise, more profound sciences and therefore also to a less discriminatory society.
West/Zimmerman 1987, 127; 2nd session of UK Geschlecht (Vienna, summer term 2015/16).
The moment is finally here: the new edition of the Handbook of Science and Technology Studies is finished and available. To say it in Latour’s terms, the Handbook is now “ready made” – but how was it in the making? How was this important representation of our discipline co-produced by its numerous authors and editors, by the politics and business of academic publishing, and, last but not least, by the material constraints of putting whole fields of study and their intersections into roughly 1200 pages? This post looks back at the sometimes messy processes in the creation of the latest edition of the Handbook and the attempts to keep the mess under control: stories from the Handbook back office. And to make it more playful, and in order to advertise, I have hidden the titles of several chapters in this post. How many can you find? (Hint: the table of contents can be viewed here.)
The story of the new edition of the Handbook began with an outreach: the call for abstracts for chapters. This bottom-up approach from the STS community provided the editors with what they described in the introduction as the seeds for the landscape that this volume was about to become. Between the abstracts and the finished version, many choices were made in shaping this landscape. What needs to be included (which chapters, sections, historical approaches and strands of research)? How much can be included (e.g., chapter lengths, images, bibliographic references)? Thinking of STS as a transdisciplinary environment of research, those choices aimed to open up the field rather than limit it. Of course, doing environmental justice to such a large field is nearly impossible while balancing the limited resources at hand – time, space in the future book, money – with the aim of making the topics comprehensible for imagined future readers. However, structural inequality still persists, in the sense that non-Euro-American authors, and arguably certain less mainstream perspectives, are still lacking. The editors reflect on this issue in their introduction.
My work for the Handbook began when the first full drafts of the chapters arrived and we started the initial revision process. Finding and securing peer reviewers for each chapter was a long process. Fellow scholars do reviews without being paid, often in their spare time on top of their regular work. Consequently (and very understandably), many of the people we asked politely declined. For us, this meant we had to approach three to four times as many people in order to achieve our aim of around three reviewers per chapter.
After the reviews came in, it was up to the editors to rethink the documents we had received – the draft chapters and their respective reviews – in order to develop the STS Handbook. This included cropping, shortening and reformulating content in a sensible and sensitive way. This was often about politics: since a Handbook chapter also tells a story about the development of a certain branch of research, authors sometimes got defensive about their contribution, resulting in excessive self-referencing or in leaving out rival strands of research. Here the editors often wrote long emails or talked personally with the authors about these issues.
Retrospectively, with the review process my most important job began: the surveillance and regulation of (laboratory) practices of academic writing. Coordinating an international project with numerous actors involved is no easy task. Timelines and plans were set for all the diverse steps of a handbook, from the first draft to the peer review process to the final proofreading. However, deadlines given to contributors were often seen more as a recommendation than as a fixed commitment or obligation. This PhD comic describes the situation quite well:
The result: deadlines passed, but the inbox remained empty. Clearly, if we wanted to stay on schedule, we needed to reframe our science communication. But how do you get people to do their work? The solution: a tight deadline-reminder regime! Consequently, one of the most work- and time-intensive tasks for me was the sheer number of reminders I had to write and send out via email. As in any project, there was a steep learning curve in disciplining our subjects – I mean our colleagues. In early phases (e.g., during peer review) we would only send out reminder emails after a deadline had passed; in later stages we switched to also reminding authors ahead of a coming deadline. While writing multiple reminders, I also had to learn how to deal with my own inhibitions. How could I be friendly and respectful, but authoritative at the same time, when requesting overdue work? Adjusting the tone was especially difficult given my position as a graduate student doing a ‘merely’ administrative job while telling (often well-known) senior scholars what to do. In most cases, it was enough to switch from “Please get back to us by [date]” to a more decisive version like: “Due to our tight schedule we cannot manage any further delays. We do expect to receive the chapter within the next three days. This means Sunday, [date] at the very latest. Thank you for your understanding and collaboration”. More frustrating than writing endless emails into a seemingly non-responding void were the few authors who never responded to me but corresponded exclusively with the editors, which often caused further delays. Yet, in general, the spamming technique plus the increasingly authoritative language worked surprisingly well. Foucault would have been proud of the way this resulted in self-discipline.
However, even this regime did not always prevent bottlenecks. When the first final versions of the chapters were handed in for typesetting, there was an excellent opportunity to research disasters from an STS perspective, as the majority of chapters arrived at once, leaving us very little time to go through them before an important deadline with MIT Press. In order not to miss this (already extended) deadline, we and some friendly helpers spent around two weeks going through all the chapters, correcting mistakes and standardizing the bibliography entries, as well as bringing back ordering systems that had obviously been dismissed as oppressive by some authors (such as alphabetical order in the reference list). At this point the Handbook began to age – as did we, including some new grey hairs – and the socio-material constitution of the later life of the chapters began to look like an actual book (in pdf form).
A few months later, we got the chapters back for the final proofreading. Once more, a number of helpers assisted us in going through all the chapters again. At this point it became clear that gender and (in)equity in the scientific workforce also impact us as a field, since we noticed that the majority of our volunteering helpers were female. They contributed to the last re-configurations, the finishing touch of the Handbook, and to our main reading session, including a dinner (thankfully paid for by the lead editor) with discussions on the co-production of knowledge and food.
Looking back at the scientometrics of all the intellectual and practical contributions to the STS Handbook: there were more than 121 authors, around the same number of reviewers, circa 30 people involved in the editing and proofreading processes, and over 6000 emails sent and received. And this does not even include the many other actors contributing to what the Handbook is now, e.g. typesetters, citation software, and managing personnel at 4S or MIT Press. Thank you all for your time and work, and for sharing all the moments of joy, despair, frustration, thoughtfulness and creative engagement. It was a wonderful and valuable experience.
In the end, the Handbook was co-produced by a whole community, through formal and informal work and every interaction in between. It is not only a representation but also a materialization of this community, and the process of the Handbook’s creation showed its messiness, structures, hierarchies and politics. Now it is out there, so please do what STS does best: Discuss it! De-construct it! Re-construct it! Teach with it! Criticize it! Use it as a pillow while studying!
After all, I claim: One does not need a laboratory to raise a discipline, one just needs to produce a Handbook.
Victoria Neumann is currently finishing the Master programme ‘Science-Technology-Society’ at the University of Vienna. Apart from working for the Handbook, she is interested in biomedicine, time, and critical data studies.
Is the problem of contemporary academia really about acceleration – the continuous need to squeeze ever more elements into a finite amount of time? And, if so, would the proposed solution be simply to slow down, as many contemporary writers suggest? Acceleration and, more generally, a “culture of speed” have become defining characteristics of contemporary societies and modern life, a trend echoed by their recent prominence in academic debates. In particular, young scholars report a feeling of growing pressure, coupled with a worrisome degree of alienation, when facing the discrepancy between how they imagined science to be and how science expects them to perform in order to succeed.
One can certainly find specific elements in the contemporary academic research system that support the conclusion that speed is a major problem. Nonetheless, I would argue that focusing too much on acceleration might cause us to overlook a more complex phenomenon at work. Indeed, the feeling of acceleration might actually be understood as the outcome of a gradual process of reconfiguring the temporal infrastructures of academic work and life on multiple levels. A good example of avoiding such a normative dualism of fast versus slow is Filip Vostal’s Accelerating Academia. So, if acceleration is not the core problem, then deceleration is certainly not the solution. From where, then, does this strong feeling of time pressure and haste in contemporary academia emerge?
How time gets made in academia
In moving away from conceptualising time as a straightforward physical entity, we must shift our attention to the places and moments where time gets made. What is needed is a careful investigation of the key sites in academia that create binding temporal requirements and regulations, thus imposing specific rhythms that standardise as well as homogenise academic time. In short, we have to study what Rinderspacher calls “time generators”. More concretely, this means looking into the academic system’s multiple recent reforms – i.e. in funding structures, assessment exercises, accountability procedures, curricula or career paths – as all of these do important temporal reordering work. Indeed, I might suggest, as I have done in a recent book chapter, that for any problem academia encounters the appropriate response seems to be the establishment of a new time generator.
The challenge to boost quality in research led to a competitive distribution of funding via the project, subsequently putting time limits on what we can think and research, creating a new iron cage of project bureaucracy. This projectification of academic work has also generated a whole new category of researchers who, as Oili-Helena Ylijoki points out, temporarily join academic institutions as project collaborators and “sell their labour” through the commodity of “project time”.
Concern over quality at individual, collective and institutional levels brought a flurry of assessment exercises. Academic education has become increasingly structured through stressing what should be taught per time unit, and careers have become temporalised according to the paradigm of excellence and selectivity. The counting of publications per time unit, along with the valuing of journals expressed through the tallying of average numbers of citations per time unit (the impact factor), are yet further examples of how time gets interwoven into academic valuing and living practices. We thus encounter a bewildering multiplicity of ever-new time structures permeating academic lives.
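The impact factor mentioned above is quite literally an average per time unit. As a minimal sketch of the standard two-year calculation (the function name and the numbers are illustrative inventions, not real journal data):

```python
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_in_year / citable_items_prev_two_years

# A journal whose 2021-2022 output (150 citable items) drew
# 600 citations in 2023 would have a 2023 impact factor of 4.0.
print(impact_factor(600, 150))
```

The point of the sketch is how thoroughly time is baked into the metric: both numerator and denominator are defined by fixed yearly windows, so the indicator rewards steady output per time unit.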
Unintended consequences and temporal inconsistencies
These new temporalities do not leave core academic work untouched. We can see shifts in how we attribute value to both the manner in which we work and the outcomes we produce. We observe changes in academic lifestyles, affecting who remains in academia and who leaves. Furthermore, we can trace impacts on researchers’ ability or willingness to take the time to engage beyond their field, to do support work or to collaborate beyond the pragmatic and formal level. Or we might speculate that an unintended consequence of these temporal reorderings is the so-called reproducibility crisis: the inability of researchers to reproduce others’ and – even more troubling – their own experimental data.
This bewildering variety of temporalities tacitly governing academia pushes and pulls researchers in many different directions at once. In this context, the key question is how academics manage to create coherence between these different, often competing temporal structures with their values and attendant demands. Often they do not quite manage, and this leads to a deep feeling of asynchronicity. The rhythms of reporting and assessing, of lives and careers in research, and of projects and publications no longer seem to fit together. This creates ruptures and tensions from which arises the constant demand on academics to work on repairing inconsistencies. The feeling of acceleration can then be understood as a failure to synchronise adequately, and as the lasting feeling of “not being in/on time”. On the one hand, the different temporal rhythms and their non-alignment create the feeling of constant demands to meet all kinds of deadlines. On the other hand, it expresses a deep struggle to embrace these new temporal imaginations, performances and demands. The latter becomes palpable in the oscillation between academic nostalgia, expressed in partly nostalgic recollections of a better, “slower” past, and quite radical rejections of that past as inadequate and inefficient.
Time and power
These observations, however, draw our attention to the deep entwinement of the control of time and the exercise of power. Controlling researchers’ temporal resources and being able to regulate their rhythms of work, defining the duration of research activities as well as the length of a researcher’s stay at an institution, and prescribing the speed of production as well as the rhythm of evaluations, are all expressions of power. Therefore, questions of inclusion in and exclusion from the academic system (a factor often underestimated in debates on gender and academia) must be seen through the lens of time and the introduction of ever-new time generators. Being able both to coordinate one’s time within institutional and departmental time structures and to synchronise with other actors vital to one’s work becomes fundamental for access to opportunities and recognition. This ability allows for decision-making at appropriate moments and thus, in the end, for successfully surviving in academia.
What to conclude?
Contemporary researchers are confronted with many different temporal structures and must develop the capacity to fold them together in ways that appear to fit their expectations of a good academic life. However, this demands substantive work, and it is highly questionable whether the growing temporalisation of academia will actually produce the desired effects. More attention thus needs to be devoted to the ways different academic times come together to form a “timescape”, a term coined by Barbara Adam. Consider the analogy to landscapes: we appreciate the attention devoted to the spatial arrangement of their different elements in ways found sustainable and attractive by their inhabitants, cherish the know-how of landscape architects, and acknowledge the work this involves. Analogously, more care should be devoted to how different times come together to form academic timescapes – how they form a scape worth inhabiting that allows creative work to unfold. This also means engaging in deeper reflection on the necessity of ever-new time generators; ultimately, they may create as many problems as they promise to solve. In short, we face a need to “retime research and higher education”, as I have recently argued, and to acknowledge the work that needs to be done to make a timescape worth inhabiting and to open up space for creative work. Finally, as is done for landscapes, academic institutions would need to take the time to reflect on and thoroughly care for the academic timescapes they create – perhaps a new task for academic leadership.
Note: This piece originally appeared on the LSE Societal Impact blog. The original post can be viewed here.
Ulrike Felt is Professor of Science and Technology Studies and currently Dean of the Faculty of Social Sciences at the University of Vienna. Her research interests span a number of themes, including issues of science and democracy, questions of responsible research and innovation, and the analysis of changing cultures of academic knowledge production. Understanding temporal structures in science and society, as well as the importance of future-making practices, is her keen interest across the above-mentioned issues.
The metric tide is in. The use of quantitative indicators for the evaluation of the productivity and quality of the work of individual researchers, groups and institutions has become ubiquitous in contemporary research. Knowing which indicators are used and how they represent one’s work has become key for junior and senior academics alike.
Proponents of the use of metrics in research evaluation hope that it will provide a more objective and comparable way of assessing quality, one that is less vulnerable to the biases and problems often ascribed to qualitative peer-review based approaches.
However, critical research in science and technology studies and elsewhere increasingly points to considerable negative effects of the impending dominance of quantitative assessment. In fact, indicator systems might serve as an infrastructure fueling hyper-competition, with all its problematic social and epistemic effects. They create incentives for researchers to orient their work to where high metric impact might be expected, thus potentially fostering mainstreaming at the expense of epistemic diversity, and prioritizing delivery over discovery.
Over recent years, many important initiatives have pushed for a more responsible use of metrics in research evaluation. The DORA declaration, the Leiden manifesto and the Metric Tide report are just the most prominent examples of discussions in academia and in the institutions that govern it. The recommendations of these initiatives have mostly focused on the actors that seem to have the most bearing on the processes of concern: academic institutions, the professional communities providing the methods and data that metrics build on, and evaluators.
But what about individual researchers? What is their responsibility in dealing with indicators in their everyday research practices? Twenty years ago, when the metric tide was still but a trickle, the eminent anthropologist Marilyn Strathern (1997) wrote: “Auditors are not aliens: they are a version of ourselves” (p. 319). Even today, it would be simplistic and wrong to assume that researchers are merely victims of bureaucratic auditors imposing indicators on them.
Don’t we all strategically use the metric representations of our work that we see as advantageous for whichever goals we are currently pursuing? Do metric logics structure the way we present ourselves in our profiles on academic social networks, and how we look at others’ portfolios? Isn’t there a secret joy in watching one’s citation scores and performance metrics grow? To what extent do we individually play along with logics that, in our more reflexive moments, we might criticize as a collective phenomenon? Is this a problem? If so, should finding ethical ways of dealing with indicators not be part and parcel of being a responsible researcher today?
These are the core questions of a recent debate, “Implicated in the Indicator Game?”, which we edited for the journal Engaging Science, Technology, and Society. This debate gathers essays from a cast of junior and senior scholars in science and technology studies (STS). STS is an interesting context for discussing these wider questions, because scholars in this field have contributed particularly strongly to the critical discourse on indicators. Still, in their own careers and institutional practices, they often have to decide how to play the indicator game – for not playing it seldom seems a viable option.
In one essay in this collection, Ruth Müller asks, quoting an informant from her own fieldwork: “Do you think that the structure of a scientific career is such that it tends to make you forget why you’re doing the science?”. Diagnosing a loss of meaning in running to fulfill quantitative indicators, she points to aspects of work in science and technology studies that are indispensable for quality but hard to express in indicators – interdisciplinary engagement with the sciences and engineering being the most important example for STS.
So, what can individual researchers and institutions do? Our collection contains many different answers to this question. All agree, however, that ignoring or boycotting indicators cannot be the solution. As Alan Irwin reminds us, the questions of accountability that indicators are supposed to answer will not go away. They need to be answered in different terms, by offering and celebrating new, non-reductive concepts of the quality of research in different fields. For individual researchers, this calls for the confidence to stand up for the quality of those aspects of their work that cannot be well expressed in metrics, and to recognize these qualities in others’ work.
As an outcome of our debate, we offer the concept of evaluative inquiry as a starting point for a more responsible way of dealing with indicators. In a nutshell, evaluative inquiries may present research work numerically, verbally, and/or visually – but they aim to do so in ways that do justice to the complexity of actual practice and its engagements, rather than reducing it for the sake of standardization. They also do not jump to a reductive understanding of what counts in an assessment (such as publications), but aim to produce and represent the multiple meanings and purposes of researchers’ work. They are processual in the sense that the choice of criteria, and of whether or not certain indicators make sense, cannot be fully specified in advance but needs to be negotiated in the process of evaluating.
Of course, this all sounds nice in theory. But it will require researchers to engage in these practices rather than in hunting for metric satisfaction. And it will require institutional actors to engage in more substantive discourses about the quality of research.
Strathern, M. (1997). ‘Improving ratings’: Audit in the British university system. European Review, 5(3), 305–321.
Maximilian Fochler is assistant professor and head of the Department of Science and Technology Studies at the University of Vienna, Austria. His main current research interests are forms of knowledge production at the interface of science and other societal domains (such as the economy), as well as the impact of new forms of governing science on academic knowledge production. He has also published on the relations between technosciences and their publics, as well as on publics’ engagement with science.
Sarah de Rijcke is associate professor and deputy director at the Centre for Science and Technology Studies (CWTS) of Leiden University, the Netherlands. She leads a research group that focuses on a) developing a theoretical framework on the politics of contemporary research governance; b) gaining a deep empirical understanding of how formal and informal evaluation practices are re-shaping academic knowledge production; c) contributing to shaping contemporary debates on responsible research evaluation and metrics uses (including policy implications).
The other day I discussed the difficulties of living and working in academia with a very successful former professor of mine. When it came to his own career, he made an interesting confession: “It was pure chance that I ended up doing what I am doing now,” he said. “After I graduated from high school, I tried out several jobs and studies until I found my place. These early years, I always leave them out of my CV.” This made me wonder about the role of CVs in academic practice and careers.
In June, the United States Food and Drug Administration approved a new weight-loss device: AspireAssist. The device is surgically inserted into the abdomen and allows patients who have failed to lose weight by other means to drain ingested food from the stomach. After eating, users go to the toilet, plug a tubing set into a tube that leads to the stomach, and “aspirate” (or, less prosaically, pump) up to 30% of their meal into the toilet.
“And what can I do here?” people ask me curiously, one after another, eyeballing a mountain bike standing upright in front of a computer screen. I am in the lecture room of the Department of Science and Technology Studies (STS), University of Vienna, which is filled with people, technological objects, and other installations about RFID chips, artificial intelligence, and visions of reproductive medicine and self-driving cars. It is a Friday night in April 2016, the so-called “Lange Nacht der Forschung” (i.e. Long Night of Research). This nationwide biannual science communication event invites diverse publics to interactively explore current research at more than 250 institutions. With its interactive installations, the STS department aimed to spark discussions about how technologies affect and shape society, bodies, everyday lives, and futures. While that only partially explains the bike standing in the room, read on to learn how challenges in planning my installation contributed to its realization.
by Pouya Sepehr, Maresa Barbara Wolkenstein, Helene Sorgner and Marilen Hennebach
New technologies have given governments an unprecedented means to access personal information. In order to ensure that all people can seek information and express themselves freely, there must be reasonable checks and balances on governments’ ability to access, collect, and store individuals’ data. Both security and freedom can be protected, but only through balanced laws and policies that uphold human rights. Surveillance happens at many levels: it can be the eavesdropping programmes of foreign and local governments, it can be commercial corporations operating on a global scale, it can be more or less institutionalised, and it has many different aspects, ranging from self-censorship to pleasure, from activism to fatalism. The question, though, is not so much whether we mind but rather how and when we mind.
The revelation of NSA documents by Edward Snowden in 2013 brought otherwise secret intelligence activities into the light of global attention. It was shocking for many to realise that mass surveillance technologies target civilian communication, including social media platforms. In fact, the era of mass communication has become the era of mass surveillance, and hence the question of personal freedom of expression has gained a technological dimension. The revelations have also shown that national security agencies have strong ties with giant tech companies that willingly cooperate in providing access to information, proving that even civilians have “nowhere to hide” anymore.