Genomic privacy: The point of no return for “anonymity”

by Kaya Akyüz

Should genomic data in genealogy or personal genomics databases be used simply to catch a criminal, or for similar purposes? Picture by Thierry Ehrmann (Flickr/CC BY 2.0)

Recently, I learned that a genetic genealogy platform had been used by the US justice system to find a serial killer. My genome was one of the hundreds of thousands on this platform that allowed the investigators to spot the killer, and as users of the platform, we learned about it only after the press reported on the issue. Should genomic data in genealogy or personal genomics databases be used simply to catch a criminal, or for similar purposes?

We are giving out data all the time, willingly or unwillingly, knowingly or without even noticing. Our faces and license plates are recognized by cameras on the street, our consumption habits are recorded, our GPS signals are tracked, and much more. Sometimes this is for “the public good”, as in the case of surveillance for security, and often it is for commercial purposes. However, most of these different forms of data resemble each other in that they originate from an individual: an individual's behavior, preferences, environment or social network. One other form of data that is increasingly produced in large amounts is genomic data, held in the numerous databases that function as biobanks and repositories for genetics research. The genomic data landscape also includes direct-to-consumer (DTC) genomics companies (e.g. 23andMe), to which millions of users send their spit to learn more about their traits, ancestry, health risks and biological relatives.

Everyone has a unique genome, and when sequenced, it can be converted into a text of approximately 3 billion characters written in a four-letter alphabet: G, C, T, A. The genome as data is different from other types of personal data because it does not simply originate from an individual: bits and pieces shared by many different people come together through reproduction. While all humans carry almost the same genome sequence, the remaining minuscule differences increase as one moves from the closest relatives to more distant ones, ultimately encompassing each and every one of the 7.5 billion people on earth. This is how genomic information allows an individual, e.g. an adoptee, to identify unknown biological relatives; however, it also means that an individual can never be anonymous unless all (close) biological relatives of that individual anonymize their genomic data. I will illustrate this with the controversial case mentioned at the beginning and explain why there is a need for urgent societal discussion about what “personal” data means, and who is to decide, when we consider the human genome.

Along with genome editing of the human germline using CRISPR-Cas9 technology, the most controversial biotechnological application of 2018 is probably the identification of the Golden State Killer, who is claimed to be responsible for 13 murders and numerous rapes in California in the 1970s and 1980s. Although DNA had been collected from the crime scenes, the perpetrator had not been identified because his profile was not in the databases. With the help of the genetic genealogist Barbara Rae-Venter, named one of the ten people who mattered in 2018, the US justice system opened the door to a debatable practice. The investigators produced a genome profile of the Golden State Killer from decades-old crime scene material and then uploaded it to an “open access” genealogy platform: GEDmatch. Customers of different genealogy companies (e.g. 23andMe, AncestryDNA, FTDNA and others) have been uploading their raw genomic data to GEDmatch in order to identify biological relatives in the databases of other companies, without needing to buy their services. The authorities used this opportunity to identify the closest relatives of the suspected perpetrator and, through constructed family trees and common ancestors, tracked the suspect down to a person who happens to be a former police officer: Joseph James DeAngelo. And this is only the beginning. According to Nature and the New York Times, numerous cases are underway with a similar approach. A cottage industry is rapidly emerging between genetic genealogy and criminal investigations, without any chance to publicly discuss what this means for the ordinary individual, her genomic data, privacy and anonymity, especially considering that this system is amenable to many uses beyond identifying serial killers. As long as there is a biological sample and the identity of its owner is to be found, the system visualized below would serve the purpose.

The steps involved in identifying the killer. Infographic by the author, Kaya Akyüz.

How was the suspected Golden State Killer identified using an open genealogy platform? With a simplified visualization of the process, I show here how shared ancestral genomic variations (marked with colors) make it possible to find links between an unidentified individual (via tissue left behind, in this case at a crime scene, but possibly on a door handle, a glass, etc.) and self-identifying individuals on an open genomic database, without the knowledge of either.
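The matching logic sketched in the visualization can be illustrated in a few lines of code. The data below are entirely made up, and real genetic genealogy compares hundreds of thousands of markers and searches for long shared chromosome segments rather than simple counts; this toy sketch only shows the principle that an unidentified profile can be ranked against self-identified database profiles by the fraction of variants they share.

```python
# Toy illustration (hypothetical marker names and variants): linking an
# unidentified sample to self-identified profiles by shared genomic variants.

CRIME_SCENE = {"rs001": "A", "rs002": "G", "rs003": "T", "rs004": "C"}

DATABASE = {
    "user_smith":  {"rs001": "A", "rs002": "G", "rs003": "T", "rs004": "C"},
    "user_jones":  {"rs001": "A", "rs002": "G", "rs003": "C", "rs004": "C"},
    "user_miller": {"rs001": "C", "rs002": "T", "rs003": "C", "rs004": "A"},
}

def shared_fraction(profile_a, profile_b):
    """Fraction of overlapping markers carrying identical variants."""
    common = set(profile_a) & set(profile_b)
    if not common:
        return 0.0
    matches = sum(profile_a[m] == profile_b[m] for m in common)
    return matches / len(common)

# Rank database users by similarity to the unidentified sample; the closest
# matches point to likely relatives, from whom a family tree can be built.
ranked = sorted(DATABASE,
                key=lambda u: shared_fraction(CRIME_SCENE, DATABASE[u]),
                reverse=True)
for user in ranked:
    print(user, shared_fraction(CRIME_SCENE, DATABASE[user]))
```

The key point the sketch makes concrete: the crime-scene profile never needs to be in the database itself; it only needs relatives who are, which is exactly why one person's upload affects the anonymity of many others.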

The identification of a notorious criminal does not leave much room to discuss the ethical considerations around the use of ordinary individuals' genomic data; besides, who would (dare to) be against a system that makes it possible to track down a serial killer? If it were not for GEDmatch's lax policies regarding the use of its users' genomic data, the Golden State Killer would not yet have been identified. After all, there are numerous databases that are bigger than GEDmatch, but a similar search in these would generally necessitate a court order due to the policies of these public or private institutions. As a person who has had his genome data on GEDmatch for many years for genealogical purposes, I find it ethically unacceptable that law enforcement agencies uploaded crime scene material, used hundreds of thousands of profiles to find the criminal's relatives, and followed links through their lives, all without the users of the platform being informed.

The controversy seems to have become a breaking point in our understanding of genome data and anonymity. Science reports that a database covering only 2% of the American adult population (only about four times the size of GEDmatch) would allow 90% of Americans to be identified, even those with no genomic data in any database (60% are already identifiable through GEDmatch). This means that uploading one's own genomic data to such a database under a real name or similar identifiers is no longer a merely personal decision, unless the individual accepts being a “genetic informant” in searches that may lead to the stigmatization of innocent individuals, undermine legal protections against discrimination, or even out the identities of distant relatives, who may be a sperm donor, an undercover agent, an “anonymized” participant in a biomedical study, or just a person who drank from a glass in a restaurant while being followed by a stalker.

Waiting for individuals to anonymize their genomic data might seem to be a way out of this problematic path. However, this means that those who decide not to anonymize their genomes decide at the same time not to allow their relatives to anonymize themselves. If we are concerned about our anonymity, we have to push for global regulations obliging all platforms, companies and biobanks to remove identifiers attached to genomic data, such as first and last names or year and place of birth, at least in a way that protects anonymity unless there is a court order. Otherwise, this is the end of anonymity as we know it. The post-genomic future holds numerous risks along with opportunities, and we have to be aware that seemingly personal decisions are made on others' behalf, often unbeknownst even to the person making the decision.
So the question is, is it “my genome, my decision” or “our genome, our decision”?

Kaya Akyüz is a PhD student and uni:docs fellow at the Department of Science and Technology Studies of the University of Vienna. He completed his bachelor's and master's studies in Molecular Biology and Genetics at Bogaziçi University; his current research is on the dynamics of making and unmaking a new field in science through the case of genopolitics, an emerging research field at the intersection of political science and genetics.

Why should we study ignorance today?

by Paul Trauttmansdorff

‘Is it a right to remain ignorant?’, Hobbes asks Calvin in the comic strip ‘Refusing to find out’ (by Bill Watterson)

In what ways do ignorance and non-knowledge shape social and political action? This was, roughly, the overarching topic that brought together different scholars at a two-day workshop at the University of Vienna in November 2018, where they discussed related issues such as risk, ignorance, contingency and secrecy in social and political life. Bringing the debate to a public setting, sociologist Matthias Gross from the University of Jena lectured in the “Old Chapel” of the University of Vienna on the question of what role non-knowledge plays in evidence-based politics. The panelists, Ulrike Felt, Head of the Department of Science and Technology Studies (University of Vienna), and Stefan Böschen from the Humanity and Technology Center at RWTH Aachen University, as well as moderator and co-organizer Katharina Paul (University of Vienna), were tasked with elaborating on the various forms of ignorance and their related practices. Why, then, should we study ignorance today?

Probably one of the most notorious examples of how ignorance affects politics is Donald Rumsfeld's statement at a press conference in 2002, at which he pointed to the realm of what cannot be known about weapons of mass destruction manufactured by the Iraqi regime. He thus mobilized ignorance: “There are known knowns […], we also know there are known unknowns; […] But there are also unknown unknowns – the ones we don’t know we don’t know”. Some may call this a way of circumventing a lie, but the public performance of ignorance certainly served a strategic purpose. As studies on ignorance emphasize, strategic ignorance is not about conspiracy theory, but about how ignorance can be characterized as a productive/destructive force in itself; it can be exploited, nurtured, and performed in different forms.

Rumsfeld provides a fine example of the increasing salience that ignorance and non-knowledge have acquired over the past two decades. He even contributed to labelling the growing field of ignorance studies “Rumsfeldian”, which not only sounds unfortunate, but probably also conceals its interdisciplinary and theoretically diverse character (McGoey, 2012, p. 7). At the panel, Gross presented the issue in quite a broad way, underscoring the essential importance of non-knowledge in structuring our societies. Among the examples he pointed to were the role of the secret (for Georg Simmel one of the great achievements of humankind); lack of knowledge(s); fake news and disinformation; the unequal distribution of knowledge across divisions of labor; and the conscious rejection of knowledge (“ignorance is bliss”). Gross did not aim to present a comprehensive typology or clear-cut definition of non-knowledge, which, in fact, are classic controversies at any ignorance studies conference (Gross, 2007, p. 743). Rather, he used his keynote speech to demonstrate the general need to find better ways of acknowledging and registering not-knowing in contemporary society and political governance.

In their responses to Gross, both speakers, Ulrike Felt and Stefan Böschen, drew on a variety of examples of the making of knowledge/non-knowledge in our increasingly complex, techno-scientific worlds. In Böschen's account, the Intergovernmental Panel on Climate Change (IPCC), as a hub for producing scientific and political expertise on climate change, is confronted with the challenge of selecting from hundreds of thousands of articles for a single special report. Böschen wondered how the panel deals with the production of non-knowledge that this selection entails. How transparent and well-reasoned should the sorting according to criteria and indicators be? Does this selection process undermine the institutional authority to constitute knowledge, e.g. given that climate science sceptics attack it on that basis?

Science and Technology Studies (STS) has a long tradition of exploring questions about the messy and complex processes involved in the establishment and institutionalization of scientific facts. Scholars have criticized the “black-boxing” of knowledge production, which always entails powerful acts that designate authority over knowledge and implicate decisions about what is (un)worthy to know. The making of knowledge, and its flipside, ignorance, is not necessarily a rational undertaking, but is often charged with controversy and underpinned by values, norms, interests and politics. An interesting question is how to draw on this body of work when it comes to climate change politics. Ava Kofman, in a New York Times article, recently portrayed STS scholar Bruno Latour within the current politicization of climate science, a political moment that actually reveals the role of the “all-too-human networks” needed to support and stabilize scientific knowledge. For Latour, the point is that the problem of denialism will not be solved by presenting ever more data, because what should count as valuable knowledge and what should be ignored cannot be determined by scientific facts alone. Or, in other words, scientific knowledge and political order are entangled and co-produced (Jasanoff, 2004).

For Ulrike Felt, it is also big data projects and the build-up of large-scale digital infrastructures in policy areas such as the health sector (“digital health”) that call for much broader reflection on what she calls “translation steps”. These include the various layers implicit in knowledge-making, from information gathering to acts of responsibility. Today, the need for collecting, storing, and processing (personal) data seems to be presupposed, and it is often far from clear who is entitled to decide on translating information into knowledge, and knowledge into responsible action. Normative visions and social imaginations are inscribed into digital infrastructures, but hardly ever (publicly) debated. It seems that the dominant imperative of big data collection has grown faster than our capacity to formulate societal and democratic ideas about what kind of knowledge we actually want to gain, who should act upon it, or what kinds of responsibilities could emerge from it. Felt's plea for better reflection on what has been set in motion confirms the worries about these relatively new aspects of not-knowing that come along with big data systems.

Surprisingly, the current “crisis of evidence-based politics” was largely absent from the panel discussion. As one participant put it, what can be said about the visible forms of ignorance that we observe in politics today? The rise of political figures like Trump has fueled much debate about “post-facticity”, but it could also lead us to think more deeply about the relation between political government and “radical ignorance”, to paraphrase sociologist William Davies. As Davies argues in another New York Times article, America's president, but also the Brexiteers, deeply resent the very idea of political governance based on technical and complex facts to solve “prosaic problems”. Governmental issues such as the multilateral governance of climate change, or Brexit as a “soft” separation of the UK from the EU, rely on technical expertise and officials to subsume the unknown, a vision that nationalists across the globe reject. This type of radical ignorance aims at disrupting the link between expert knowledge and political governance as a technocratic, regulatory and often global affair, and corresponds instead to reactionary claims to reassert (state) sovereignty and nativist appeals to the “nation”.

Issues of knowledge and ignorance are thus entangled not only with questions of what power consists of, but also with the question of how political government can or should be envisioned. As relational rather than stable categories, they become rearticulated and renegotiated in political decision-making and social struggle. Studying the practices of knowledge/ignorance means looking into what is deemed worth knowing and what is not. And it can also lead us to explore the ways in which political government, democratic authority, and social responsibilities are imagined.


Gross, M. (2007). The Unknown in Process: Dynamic Connections of Ignorance, Non-Knowledge and Related Concepts. Current Sociology, 55, 742-759.

Jasanoff, S. (2004). States of Knowledge. The Co-Production of Science and Social Order. London: Routledge.

McGoey, L. (2012). Strategic unknowns: towards a sociology of ignorance. Economy and Society, 41(1), 1-16.

Paul Trauttmansdorff is a PhD candidate at the Department of Science and Technology Studies at the University of Vienna. His current project examines digital technologies and socio-technical systems in the European border regime and is situated at the intersection of STS and Critical Security Studies.

Is there a Place in Space for Art?

by Nina Witjes & Michael Clormann

Orbital Reflector, co-produced and presented by Trevor Paglen and the Nevada Museum of Art, 2017 (© Trevor Paglen)

The image of “Starman”, the astronaut dummy floating through outer space in his cherry-red Tesla car, was all over the media earlier this year. The image was taken by a camera on the car's dashboard, capturing the dummy, the Earth, and a sign that reads “Don't Panic”. Both the car and the dummy are a powerful symbol of the commercial claim to space in the age of “New Space”. New Space, a term widely adopted within the aerospace industry, signifies fundamental changes in how we use and relate to outer space: not only does its goal of commercializing outer space pose a challenging technological and regulatory endeavor, it also introduces new structures, practices and organizational forms of exploration, exploitation, and excitement – in short, a new techno-politics of orbits and outer space. When Starman was put into orbit by SpaceX's new Falcon Heavy rocket, the impact on the global space community was profound; start-ups, media and government actors alike indulged in an enthusiastic discourse on the promises of commercial space exploration: for space tourism, science, business and, eventually, becoming multi-planetary.

Recently, art has claimed a place in space, too. Trevor Paglen, an artist/activist/researcher known for his work on surveillance and secret intelligence sites, has announced plans to launch a small satellite able to transform into a highly reflective sculpture once it reaches low earth orbit. As soon as crowdfunding allows, the “art satellite” is supposed to launch. The project, a collaboration with the Nevada Museum of Art, states that “[a]s the twenty-first century unfolds and gives rise to unsettled global tensions, Orbital Reflector encourages all of us to look up at the night sky with a renewed sense of wonder, to consider our place in the universe, and to reimagine how we live together on this planet.”

While it is always a good idea to wonder about humanity and the great and not-so-great things we have done on and with planet Earth, this project – literally – reflects the wrong way. That is, it misses the opportunity to draw attention to many serious issues, in particular that of waste in space. In an interview with PBS, the artist stated that “when I look at infrastructures, and I look at the kind of political stuff that’s built into our environments, I try to imagine, what would the opposite of that be? Could we imagine if space was for art? What would that be? And then I’m kind of ridiculous enough where like, OK, let’s get busy, let’s do that.”

This partly mirrors the attitude of New Space actors like Elon Musk, Jeff Bezos and others: if we can imagine it, let's do it – and think about the consequences later, if at all (remember: a car in space can be considered media-effective space junk). At the same time, it reveals the ambivalence with which the space sector approaches its legacy: the remnants of decades of spaceflight activities have left an ever-growing and, by now, dangerously dense pile of rocket components and defunct satellites in earth's orbits. This so-called “space debris”, which threatens the space infrastructures around it, is what many in the aerospace sector now call Paglen's art project, too.

Not unlike the framing of climate change and marine debris as socio-material risks of global impact, the worst-case scenario concerning space debris predicts a future in which the planet's orbits become permanently impenetrable to astronomical observation as well as to any form of space travel leaving or orbiting the planet – that is, if no countermeasures are taken. Imagined as a cascading phenomenon of colliding, shattering and thus self-multiplying debris fragments, this scenario evokes immediacy through the identification of a point of no return, again not unlike the one associated with climate change.

From an STS perspective, we agree with Paglen that one of the main reasons why space policymakers are still slow to respond to the growing threat of space debris is that it has been largely invisible, as it is “easy to forget these all-but-invisible activities” taking place in outer space – out of sight, out of mind. However, it is hard to imagine that the Orbital Reflector will change the way we think about space as a place by “making visible the invisible”, in terms of either responsibility or sustainability. Here's why:

At the time Paglen began working on the project, concrete fears of space debris had surfaced in public perception through two major and highly visible events in outer space: in 2007, China deliberately destroyed its “Fengyun-1C” satellite in low orbit, an event heavily criticized as having unnecessarily released large amounts of small fragments of space debris. Two years later, we witnessed the first ever accidental collision of two communication satellites, Cosmos 2251 and Iridium 33, causing over 140,000 pieces of space debris in total. This year, the Chinese space lab Tiangong-1 became a matter of international security concern: from the point when the station was declared obsolete and defunct by the China National Space Administration (CNSA), the school-bus-sized station's uncontrolled descent became the stuff of nightmares for many, as its re-entry into the earth's atmosphere could only be vaguely predicted. The Orbital Reflector, with its reflective artwork deployed, will be double its size.

Instead of making the invisible visible, the Orbital Reflector (like any other shiny satellite, floating car or just the usual clutter in outer space) might be part of the problem: it stands in the way of science gaining a clear view of the universe, as such objects limit a telescope's ability to accurately image the cosmos and measure its stars.

Jonathan McDowell, a researcher at the Harvard-Smithsonian Center for Astrophysics, recently told Gizmodo that launching bright satellites with no function other than art, fun or prestige into orbit is “the space equivalent of someone putting a neon advertising billboard right outside your bedroom window”. Paglen recently responded to the criticism, asking why it would be any more of a problem for stargazers than any of the other hundreds (soon to be thousands) of satellites due to launch every year. Well, it is exactly this kind of thinking about responsibility that has led to severe environmental issues. Regarding the role of art in space and the fact that the Orbital Reflector will revolve around the Earth without any specific scientific or military goal, Paglen asks his critics why “we (are) offended by a sculpture in space, but we’re not offending (sic!) by nuclear missile targeting devices or mass surveillance devices, or satellites with nuclear engines that have a potential to fall to earth and scatter radioactive waste all over the place?” But is this really the case? After all, the social sciences have long been concerned with how to make infrastructures visible and how to deal with the sociotechnical vulnerabilities of any techno-society. In particular, researchers in science and technology studies and critical security studies have shown an emerging interest in questions of surveillance from, and the increasing militarization of, outer space, as well as the risks these and their byproducts pose for the sustainability of earthly and space infrastructures. Dismissing the Orbital Reflector as a bright idea should not be understood as a rejection of art concerned with and located in space, but of the claim that if the military can launch satellites, art should, too.

At the same time, critical debates about the role of art in space are not necessarily in favor of, or naive about, governmental space technologies. There are civil society projects, too, that use satellite images for monitoring human rights violations and war atrocities – often operating on a shoestring budget. Many of them would probably be happy to see the 1.3 million dollars estimated for the construction and launch of Paglen's activist art project put toward their activities. Instead, its contribution will be to shed light on places that are currently in the dark – not metaphorically, in terms of human rights, but just because it's nighttime.

Space has become a place where sustainability is increasingly negotiated as an issue of security, as billions of people around the world rely on space systems to facilitate their daily lives, from navigation to environmental services, from science to communication, crisis response and banking, from intelligence to education. Space debris poses the question of how we want to live with our material leftovers revolving “above” us. Another bright and shiny useless satellite in orbit does not provide a good answer.

Nina Witjes is a university assistant (post-doc) at the Department of Science and Technology Studies at the University of Vienna. Her work is situated at the intersection of STS and International Relations, with a special focus on space programs and security.

Michael Clormann is a doctoral candidate / research associate at the Friedrich Schiedel Endowed Chair of Sociology of Science and the Munich Center for Technology in Society at the Technical University of Munich.

How to weave societal responsibility into the fabric of universities

by Ulrike Felt, Maximilian Fochler, Andreas Richter, Renée Schroeder, Lisa Sigl

This text is the result of an interdisciplinary collaboration of researchers in the life sciences and science and technology studies (STS) at the Research Platform Responsible Research and Innovation in Academic Practice at the University of Vienna. A shorter version of this blogpost was published as an opinion paper in Times Higher Education.

The understanding that science creates essential resources for our knowledge societies, both for economic growth and for responding to changing grand societal challenges, is deeply rooted in diagnoses of our time. The idea has gained traction that science should be done in ways that respond to societal needs and concerns. However, opinions are divided on how to achieve societal responsibility, particularly within a bottom-up, curiosity-driven research environment. Universities as institutions have remained strangely silent in this debate, which has so far been led mostly by other actors in funding and research policy. This is surprising, since universities, as institutions that combine research, education, and caring for “the long-term health” of our knowledge base, are uniquely positioned to take the lead in this debate.

Responsibility is a different concept from accountability

Over the past decades, most academic institutions have been re-organised to weave new forms of accountability towards society into academic lives. Scientists are now entangled in tightly knit systems that evaluate their performance and aim to demonstrate their worth in exchange for public funding. These organizational changes have been framed, mostly by policy makers and research managers, as urgently needed transparency and control. However, the emerging audit culture and the prominent role of metrics increasingly meet with concern. They have been criticised for narrowing down visions of the social value of scientific work, for spurring hypercompetition, and for leading to research governance by data instead of by smart ideas; they have even been associated with increased mental health problems among academics.

This makes it relevant to note that accountability is different from responsibility. Accountability often limits universities’ contribution to society to a few productivity indicators, and potentially overlooks important core responsibilities of universities towards society. But what are these core responsibilities? And how could societal responsibility be woven into the fabric of universities in sustainable and meaningful ways?

It is time to re-articulate the public value of universities!

In this essay, we present a view from within the university, and a call to re-articulate the public value of universities, along with what we would define as their three core responsibilities. First, universities are responsible for research, i.e. for producing new knowledge. Second, they are responsible for educating new generations, not only of academic scientists, but also of highly qualified staff in private and public institutions. And third, this combination of research and teaching implies a responsibility to care for the body of knowledge that societies can tap into to solve societal problems, both today and in the future.

To meet these quite demanding responsibilities, universities have developed and cultivated know-how and skills that warrant broad public support. This unique position, however, is also a mandate to cultivate a debate about the responsibilities of universities towards society, and about how societal concerns and values are represented across the different facets of academic life.

We should not mistake societal responsibility for applicability or marketability

In our perception, this debate is currently pursued in ways that are too narrow. Mostly, the public value of universities is located “downstream,” e.g. in potential future applications of knowledge in the form of marketable products, in the presence of universities' expertise in public debate, in a start-up that a graduate may launch, or in fostering economic growth through highly skilled human resources. Many so-called “third mission” initiatives are examples of this delegation of responsibility to a later uptake and commercialization of knowledge. This is unfortunate, since it implicitly equates responsibility with a narrow notion of innovation that focuses on marketability.

Universities need a broader notion of responsibility that focuses on diverse forms of societal relevance instead of marketability. Indeed, the study of innovation trajectories has shown that knowledge often benefits society in indirect, complex and unforeseeable ways. Remembering this is particularly important at a time characterized by increasing short-term orientation.

One advantage of such a broad approach to societal responsibility is that it makes visible that even basic researchers are often already motivated or guided by societal concerns – though not in terms of direct and short-term applications. They may feel that their research is relevant to society, for example by contributing to the knowledge base for a certain policy area or for a present or potential future matter of societal concern, such as coping with climate change. This explains why calls for protecting scientific freedom against societal interventions often argue that science is most valuable and relevant to society when it is curiosity-driven.

Cultivating a knowledge ecology for creating better futures

Responsibility therefore needs to be understood as cultivating and caring for a knowledge ecology. Using the notion of a knowledge ecology instead of a knowledge system points to the importance of allowing a breadth and diversity of forms of knowledge, with different temporalities (immediate responses and long-term knowledge concerns) and diverse relations to society, its needs and concerns. This notion should also make us aware of the importance of thinking in terms of the sustainability of knowledge. Knowledge ecologies should thus harbour the capacity to address today’s challenges as well as to be prepared for new kinds of problems in the future that we cannot anticipate today.

Being academic scientists ourselves, we often sense that scientific communities have not yet developed a language for fully grasping and communicating the entanglements of their research fields with society. In everyday academic life, it often even seems difficult to engage in debates about societal relevance. Too many other layers of concern, such as acquiring funding, competing in evaluations and securing continuous employment, seem more pressing. All this has to happen in a work environment in which researchers are subjected to multiple and partly contradictory temporal logics, often leaving individual researchers with a lack of time. As an unintended consequence, even though many researchers may be sensitive to societal concerns, universities often do not provide the conditions for those concerns to enter the processes of planning and carrying out research, or the education of new generations of scientists.

Universities have a mandate here to incentivise reflection on societal relevance in scientific communities, to reflect on how institutional framework conditions shape scientific practices, and to create opportunities and offer time and space to consider societal issues in scientific practices. In the long run, this should allow universities to participate in creating and caring for a knowledge ecology capable of addressing societal issues and challenges. This perspective on universities as core to societies’ capacity to care for a balanced knowledge ecology is also an opportunity for universities to renew their self-understanding and to gain value and gravity in knowledge societies. In the short term, too, universities, the knowledge they create and their standing in public debates will profit from allowing their researchers to reflect on, articulate and communicate the societal relevance of their research.

But how can universities contribute to creating conditions that allow societal concerns to be considered in academic practices? Let’s consider three first steps.

Universities form the next generations and their attitudes towards societal relevance

First, as higher education institutions, universities should develop ideas on how to build reflection on societal issues into curricula and bring it to the classroom. It is also in teaching environments that universities cultivate an ideal of what kinds of knowledge are worth being taught under current conditions. Currently, how students learn to ask questions is often narrowed down to highly specialised skills and knowledge, while the ability to reflect on the broader societal meaning often falls short. A study of junior life scientists even suggests that they tend to unlearn considering societal concerns until they reach more independent positions as group leaders.

Reflecting on societal concerns, and on one’s own expertise within society, is thus a core competence that needs to be fostered. As academic scientists are key multipliers, universities have a responsibility to build such reflections into curricula and classes. While a variety of inspiring tools and practices are readily available to do so, universities should be aware that there is no one-size-fits-all approach, and that dimensions of societal relevance are as diverse and situated as scientific fields themselves. The ways in which reflections are built into study programmes (e.g. Master’s or PhD programmes) should ideally build on a bottom-up debate in scientific communities. Rather than being an extra task that students and lecturers need to fulfil, societal responsibility should become part of the idea of good scientific practice that is transmitted at universities. This should not be felt as limiting an open, curiosity-driven approach to knowledge generation, but as broadening the basis of what drives this curiosity.

This reflective capacity is particularly important in times in which research and its contexts are rapidly changing. For example, the rise of ‘big data’ poses fundamental challenges to how scientific fields produce knowledge and how they relate to society. Reforming teaching programmes is thus one key to weaving societal responsibility into the spaces in which new knowledge is envisioned and produced. This should become an essential component of academic socialization.

Pick candidates who also care about societal issues

As hiring institutions, universities are key gatekeepers. Appointment procedures for professors are thus crucial moments that have a long-term impact on how far societal responsibility will grow into future scientific work. As a second step to weaving societal concerns into academic practice, universities should therefore move beyond the accountability rationale in evaluating future professors. In practice, this could mean that reflective capacity becomes an important part of the overall qualitative assessment of candidates, taking into account how far the respective scientists have learned to reflect on their research field in its societal context, and how far they are able to pass this on to the next generation of scientists. Further, reflecting on societal relevance has an impact on how researchers take decisions in research. This could not only contribute to broadening the notion of excellence, but also bring back into focus the role of academic researchers as educators and multipliers who form the next generation of scientists and highly qualified professionals.

Move towards a plural culture of scientific quality

Third, universities should incentivise reflections on societal responsibility across the spheres of research and teaching. Ideally, this would lead to more open negotiations about guiding values within scientific communities, and about more appropriate ways of evaluating and valuing scientific work in societal contexts. However, reflecting on societal relevance is most meaningful when closely tied to the respective scientific field, and we need to remain aware that in different fields, potentially very different societal concerns and values matter.

One of the most challenging tasks for universities is to consider this diversity and to accommodate the plurality of knowledges and values that adds up to the public value of the academic world. Cultivating this plurality means to allow, but also to value and nurture, a diversity of scientific quality criteria across fields, and to experiment with forms of evaluation that go beyond narrow concepts of accountability. Most importantly, universities should avoid subjecting the value of societal responsibility to yet another standardized metric, as this might invite a tick-box mentality towards responsibility and hinder researchers from weaving societal responsibility into the fabric of their work in more conscious and meaningful ways.

We need cultural and institutional changes that make it possible to relate questions of societal responsibility more intuitively to dimensions of excellence, in the sense of synergy, not trade-off. While many funding institutions currently seem to handle excellence and responsibility as separate, if not mutually exclusive, universities should challenge this dichotomy and suggest instead that excellence and responsibility are best seen as symbiotic capacities that allow universities to make a meaningful contribution to societal development.

Where to go from here?

We have just offered a few examples of how universities can reorganise academic life with governance mechanisms that allow scientists to reflect on and consider societal issues more explicitly. From their unique position within knowledge societies, universities may want to take a lead in organising exchange with other stakeholders to create societal capacities to sustain a broad enough knowledge ecology to respond to societal challenges, both today and in the future.

In doing so, it is essential to consider that societal issues do not only matter once knowledge leaves the university; they should traverse lab spaces, desktops and the minds of researchers. Far from narrowing down academic freedom and curiosity, societal engagement fosters the capacity to reflect on the entanglements between science and society and allows scientists to become academic citizens in a fuller sense.


Authors (alphabetical order):

All authors of this paper are members of the Research Platform Responsible Research and Innovation (RRI) in Academic Practice at the University of Vienna.

Ulrike Felt is Professor of Science and Technology Studies and Head of the Research Platform. She is, amongst others, president of the European Association for the Study of Science and Technology and a board member of the Journal of Responsible Innovation, and has published on temporalities of academic life and the challenge of making RRI work in academic environments.

Maximilian Fochler is Assistant Professor and Head of the Department of Science and Technology Studies. He is author of several papers on changing research cultures in the life sciences in academia and biotech companies, with a particular focus on different forms of valuation and evaluation.

Andreas Richter is Professor of Ecosystem Science and Head of the Division of Terrestrial Ecosystem Research. His research interests range from carbon use efficiency of microbial communities to the effect of climate change on soil processes and carbon storage.

Renée Schroeder is Professor of Biochemistry at the University of Vienna. For her research in the field of RNA biology she was awarded amongst others the Special Honor Award “For Women in Science” and the distinguished “Wittgenstein Award”.

Lisa Sigl is postdoctoral researcher at the Research Platform Responsible Research and Innovation in Academic Practice at the University of Vienna. She has published on the changing governance of life science research, with a focus on labour conditions and infrastructures for commercialization.

Responsibility in social sciences

by Kaya Akyüz

Catherine Bliss’s book “Social by Nature” is at the core of a currently on-going debate (Picture provided by K. Akyüz)

Recently, a controversy among social scientists unfolded on Twitter about the book Social by Nature: The Promise and Peril of Sociogenomics by the STS scholar Catherine Bliss. A review of the book was published in Nature under the title CRISPR’s Willing Executioners, alluding to Daniel Goldhagen’s book Hitler’s Willing Executioners: Ordinary Germans and the Holocaust. Bliss’s endorsement of the review on Twitter raised sensitivity among some of the individuals interviewed for the study, who are mainly social scientists working at the intersection of genomics and their disciplines. At the center of the controversy, however, are not just the arguments of the book, but social science practices.

The critique came mainly from Jeremy Freese, a sociology professor at Stanford and the interviewee #9 mentioned in the book, in the form of a tweet thread started on January 16, 2018, which listed a plethora of “errors” in the book [i] and was retweeted and responded to by many, including other scientists who were interviewed for the book. Freese’s response on Twitter raises many questions, some of which are exemplified by a heated exchange on a sociology discussion board. From the perspective of a social scientist, I share the view that Freese could have selected a better venue and form for his critique of a social scientific work. Nevertheless, Freese defends the tweet thread by stating that the interviewees should have been included in the publishing process. The aim of this blog post, however, is to discuss “responsibility” in social science and STS research through this case. The controversy raises the question: should we incorporate our interview partners in the (pre-)publishing process as part of ‘good’ scientific practice, or is this controversy a special case because the interview partners were themselves social scientists?

Freese thinks social science genomics researchers are likened to Nazis in Nature, which makes the urgency of a rapid “self-defense” through tweets understandable. But is Bliss, the author of the review, the editorial staff of Nature or Stanford University Press the main target here? Freese criticizes these actors and points out the problems from his perspective as a research subject, but he barely scratches the surface of what seems to be a web of entrenched issues in the contemporary publishing system and knowledge production. A mutual critique among social scientists in the traditional form of commentaries and responses in journals would have contributed more to opening up the underlying issues. I would have liked to read more of a sociologist’s perspective on the network of actors (e.g. the author and collaborators, funders of the research, reviewers of the book, and the editorial team of the publishing house) that made such “errors” publishable, and on what this book means for the field of “social genomics”, rather than reading only the nitty-gritty details and a few systematic issues without allusion to the practices in science that lead to such controversies. If we are not able to achieve meaningful mutual critique even within the social sciences, as in the Bliss-Freese case, how could we move forward?

There is another layer of the controversy, exemplified by tweets (1, 2, 3), that says more about STS than about Bliss’s book; according to Freese, some STS scholars believe themselves to be “morally superior” (e.g. to social science genomics researchers) and feel a “strong incentive to twist things that scientists say in order to make them look bad.” Although Freese’s target here is a specific, but unidentified, “vein” of STS, if this is to be taken as a valid argument, it should apply symmetrically to the broader community of social scientists, of whom Freese is also a part. Nonetheless, the quote made me consider the controversy as an alarm to view our scientific practice through a critical lens and imagine things “we” can do as social scientists.

We have to think about being more reflexive and about how to achieve mutual respect between researchers and research subjects without forcing ourselves into normativity or losing our critical capacity. We have to re-think what we owe to our research subjects, who have devoted their time and made our research possible. All this may be difficult to achieve under conditions in which we have to be fast in publishing, getting new grants, finding jobs and publishing further, but we have to slow down, even stop for a second, and think about the consequences. We have to realize the risks of “bad publicity” and think about the not-so-immediate consequences. We have to be active when we feel our ideas and research are misused. Succumbing to the publish-or-perish system can save our careers in the short term, but in the long run, continuous controversies like this could start a new wave of science wars, in which we all lose as members of science and society. With this blog post, I kindly invite contributions from readers on practical aspects of responsibility in the (social) sciences, going beyond the rhetorical meaning of the term.

[i] I roughly categorized the mentioned “errors” as: wrong affiliations of scientists mentioned (e.g. 1, 2, 3, 4), misspelled names (of a scientist, of a company, of a profession, a term), wrong names (of a scientist’s expertise, of the leader of a program, of a funding agency), non-existing collaborations (e.g. 1, 2), confusion of a graduate student and his advisor/committee member, wrong years (of a first special issue, of a first partnership, of the introduction of a term, of a cohort’s incorporation of genetic data), wrong transcriptions of terms in interviews (estimate existence of equations instead of estimating systems of equations, no hypothesis testing instead of null hypothesis testing, a number of factorial partners instead of number of sexual partners, Freese’s own words), wrong court case details, wrong quoting of a scientist’s televised talk, disputed figures, existence of things claimed non-existent (epigenetic studies by social genomics researchers, incompleteness of Bliss’ listing of “the only full courses” with Freese’s own teaching as an example), “non-existent” terms (genetic methodological validity), interview quotes that are deemed by Freese to be “weird or dumb” or “not making sense,” and many interpretive disagreements, which I won’t mention further. Among these, Freese also claims numerous times that the quotations from interviews/empirical material contradict (e.g. 1, 2, 3, 4) or fail to support the analysis that follows.

Kaya Akyüz is a PhD student and uni:docs fellow at the Department of Science and Technology Studies, University of Vienna. Having finished his bachelor’s and master’s studies in Molecular Biology and Genetics at Boğaziçi University, he now researches the dynamics of making and unmaking a new field in science through the case of genopolitics, an emerging research field at the intersection of political science and genetics.

How did the polar bear get on the front page?

by Dorothea Born

When I started my PhD on visual climate change communication in popular science magazines between 1992 and 2012, polar bears were already everywhere: on the cover of Time Magazine, in the WWF’s online shop, where you could buy a polar-bear adoption kit, or on Greenpeace’s advertisements to save the Arctic.

And while other scholars of visual climate change communication argued that it was time to move beyond polar bears, I became more and more intrigued by their ubiquity. I started to wonder: how did the polar bear actually become this “poster child” for climate change? Have polar bears and climate change always been connected? And how are polar bears linked to (pop)cultural meanings, which might explain their success? These were the guiding questions for my research, which has recently been published in the form of the article “Bearing Witness? Polar Bears as Icons for Climate Change Communication in National Geographic”.

Uncovering the icon’s history

Investigating the visual climate change discourses of National Geographic, I had come across quite a lot of polar bear images, mainly after 2005. But, interestingly, I also found some articles, published from the late 1990s until the early 2000s, that were primarily concerned with polar bears and not with climate change. As with all articles in National Geographic, texts, images and captions worked together to show the daily routines of these charismatic animals: swimming, play-fighting, cuddling with their offspring. All these images staged the polar bears as ‘one of us’, a visual stylistic strategy that I have called “anthropomorphization”: depicting wild animals as having human features, like showing emotions, caring for their cubs, playing in the snow. These anthropomorphized pictures proved to be highly important for the polar bears’ later iconic function.

An example of an anthropomorphized polar bear – this one looks a bit sad or maybe just tired? Credit: Norbert Rosing/National Geographic Creative (1998).

Connecting polar bears to climate change

While these pictures of anthropomorphized polar bears were published between 1998 and 2004, in articles where climate change was not the focus, over the course of these articles climate change was increasingly linked to polar bears. First only as a side note; then, in 2000, as one possible factor threatening the polar bears’ survival; ultimately, in 2005, becoming a major concern. With this, the visual language changed. Polar bears were put in the context of their Arctic environment; the close-ups of family idylls were exchanged for images that depicted the bears as blending with their snowy surroundings. After 2005, articles were less about polar bears and more about climate change and its consequences for the bears, which were also depicted visually: dead polar bears, polar bear cubs running away from male bears threatening to eat them, and, yes, also polar bears seemingly lost on a drifting ice floe.

An example from the transition phase: The background becomes more important and the theme of sheltering and protecting emerges. Credit: Paul Nicklen/National Geographic Creative (2005).

So the visual language that had already seemed so iconic and well established when I started my research in 2012 was actually a more recent development, the outcome of a process of iconization through which the bears were established as icons of climate change. Within this process, the earlier phase of anthropomorphic depictions was important for the bears’ eventual establishment as climate change icons. These images allow viewers to identify with the cuddly bears, and thus to feel pity for their fate in a warming world. In a next phase, linking the polar bear to climate change and establishing the bear as representative of the threatened Arctic environment served to prepare the last stage of polar bear images, in which the lost bear on the ice floe emerges as the icon of climate change.

Iconography of the Polar Bear

This identification also builds on a longer (pop)cultural tradition: the “iconography of the teddy bear”. The teddy bear is named after former US president Theodore Roosevelt, who, as the story goes, spared a bear on a hunt because he pitied the animal. This event not only led to the invention of a profitable stuffed toy but also marked a change in our relation to nature: nature became something not to be feared or exploited but to be pitied and protected (watch Jon Mooallem’s wonderful TED talk for more about this).

Images and imaginations of polar bears build on this history of the teddy bear, and polar bears figured in (pop)culture long before climate change became a hot topic. Yet today the bears are so intrinsically linked to this issue that it seems impossible to think of them without climate change. Coca-Cola, for example, used animated polar bears in its advertisements, and later started, together with the World Wide Fund for Nature, an “Arctic Home” campaign in which you could buy stuffed polar bears. Another example is Lars, the little polar bear, who happily splashed through my childhood without giving a thought to global warming but is now used in children’s education to explain climate change.

The icon of the polar bear enables personal identification by evoking emotional consternation through the display of individual suffering. The icon is meant as a stand-in for humanity; the drifting ice floe becomes a reference to spaceship Earth. Thus polar bear images can serve to raise awareness of global climate change. Yet these images do not make the wider causes or circumstances of climate change visible and do not foster a more complex understanding of the issue’s entanglement with global capitalism. Thus, even though they are undeniably fascinating creatures, it might indeed be time to move beyond polar bears.

The – now iconic – shot of the polar bears, seemingly lost on a drifting ice floe. Credit: Paul Nicklen/National Geographic Creative (2007).

Dorothea Born is a doctoral student at the Department of Science and Technology Studies. Currently she is a guest researcher at the Department of Media, Cognition and Communication at Copenhagen University funded by the Marietta Blau grant of the Austrian Agency for International Cooperation in Education and Research (OeAD). Her research interests gravitate around climate change communication in visual cultures, popular science magazines and conceptions of nature.

Some suggestions for interdisciplinary writing

by Erik Aarden

Picture of the author, by Michaela Schmidt

Many of the issues that form interesting topics of research are located at the intersection of different disciplines. For example, I am trained in science and technology studies (STS), but much of my research on medical innovation brings together topics that also motivate medical researchers and public policy scholars. As a result, there is much to gain from writing for diverse and interdisciplinary audiences. I am therefore glad to share some thoughts on how to make one’s insights interesting beyond the borders of one’s own field:

State a clear problem

This point may seem self-evident, but it is important to keep in mind that the problems that motivate research in your discipline are not necessarily those that interest a wider audience. I have therefore found it useful to think about my research in terms of broader public problems. For example, the question of how genetic technologies affect health care access is one for which I found receptive audiences in various disciplines.

Don’t be afraid of theory

My second suggestion may seem counterintuitive, since particular theoretical traditions or controversies are often very discipline-specific. Nevertheless, it helps to clearly locate your own perspective in a particular intellectual tradition and it can support your attempt to bring novel insights to a different field. Of course, readers of (for example) medical journals are probably not interested in a detailed exegesis on a particular school of thought – but present your materials through a broader framework, and they may just begin to think differently about other examples that are more or less similar to yours.

Know your strengths

When publishing in other disciplines, we know more about certain aspects of the things we are writing about than our readers, while at the same time our intended audience is more knowledgeable about other aspects of the problem. One of the things I thus find most challenging is being taken seriously in terms of what I have to say while avoiding being ‘exposed’ as a clueless outsider. I therefore try to strike the right balance between trust in the expertise of my audience and trust in my own. For example, in my prize-winning paper, I avoid questioning health policy scholars’ expertise on the intricacies of health policy making, but do think I have something helpful to say about the particularities of novel technologies for health care access.

Note: This blogpost was originally published on the Taylor & Francis Author Services blog. I was invited to share some ideas on successful writing on the occasion of winning the Critical Policy Studies Early Stage Career Researcher Prize. You may find the original post here.

Erik Aarden is a postdoc at the Department of Science and Technology Studies of the University of Vienna, Austria. He obtained his PhD from Maastricht University, the Netherlands in 2010 with a study of the integration of genetic diagnostics in three European health care systems and has since continued (mostly comparative) research on the intersection between biomedicine, political institutions and social justice. He has previously been a postdoc at RWTH Aachen University, Germany and a Marie Curie fellow in Maastricht and at Harvard University, US. An article on the basis of his doctoral research was recently awarded the Critical Policy Studies Early Career Stage Researcher Prize.

Expectations and the challenge of making research infrastructures work

by Erik Aarden

The building that formerly housed the Singapore Tissue Network at the Biopolis research campus. Picture by Erik Aarden.

Since the beginning of this century, many governments have invested heavily in biomedical research, in the hope of both improving population health and stimulating economic growth. Central to these investments has been the establishment of new research infrastructures that are supposed to contribute to cutting-edge biological research and to turning research results into both clinically and commercially viable products. For example, in the United States, the National Institutes of Health launched the National Center for the Advancement of Translational Science (NCATS) in 2012 “so that new treatments and cures for diseases can be delivered to patients faster”. Along similar lines, various institutions in government, academia and private industry in the Netherlands established a national Health Research Infrastructure (Health-RI) last year. But how do policy visions of the medical and economic promise of research match up with the often mundane and highly specific research done within these infrastructures? To what extent can research infrastructures make promises come true, and what are the risks in expecting them to do so?

In a recently published article in Science and Public Policy, I explored these questions in relation to one specific case: the Singapore Tissue Network (STN). The government of Singapore was especially assertive in sketching out a strategy for biomedical research as a domain of social and economic promise at the beginning of this century. As in many other places, the establishment of new research infrastructures was an important element of this strategy. The STN was set up as part of a first batch of new institutes and infrastructures a little more than fifteen years ago. Yet this particular facility did not exist for long: it was shut down in 2011. This raises the question why the high hopes associated with this particular institution did not materialize. The official version of events is that researchers did not use the facility, yet that left me with the question of why they did not use it. To find out, I spent five weeks in Singapore in 2013 talking to researchers, administrators and policy-makers about the fate of the STN.

In my article I describe how the STN closed because different actors had very distinct understandings of the repository’s usefulness. In brief, I found that policymakers projected usefulness onto the Singapore Tissue Network, whereas researchers believed that usefulness was produced in the way the repository stored and operated its collection. This difference between projection and production manifested in various ways, ranging from how tissues of interest were identified and collected, how centrally stored tissues were made available to researchers, to how tissue was supposed to contribute to the aims of advancing medical knowledge and stimulating Singapore’s economy.

An example may help in clarifying the point. For the Singapore government, which is present in this story through its main research funding body, the Agency for Science, Technology and Research (A*STAR), one of the most important functions of the STN was to stimulate collaboration between researchers. Collaboration was considered an important ingredient in making research economically valuable. The funding agency therefore tried to stimulate collaboration by only giving users of the samples access to important data about those samples if they worked together with the colleagues who had contributed the tissue samples to the STN. Yet scientists were used to working with vastly different routines for exchanging data. They would not just give data they had worked to collect, and with which they had been entrusted by patients, to anyone who came calling. Instead, collaboration with and trust in other researchers was a precondition for providing data to colleagues, not a result of it. Due to this and similar gaps in perspectives on how a central tissue repository might be useful, researchers saw fairly little use in the STN and indeed ended up not using the services it offered. As a consequence, A*STAR did not consider it to be very useful either and decided to close the STN.

So what can we learn from this episode of an infrastructure not delivering on its promise, particularly in light of the ongoing and rapid establishment of similar infrastructures around the world? In my article, I explain that this is not simply a case of policymakers misunderstanding science, or of researchers rejecting any purpose for their work other than advancing science. In fact, biomedical research and development in Singapore continue to this day, with clear economic objectives. What I do suggest is that researchers and policymakers had different expectations, and different timeframes for those expectations, which made the STN redundant for both. An important lesson we may draw from this episode for similar initiatives elsewhere is the central role of trust and communication in making investments in research infrastructures (not) work. Perhaps a more explicit discussion of the different expectations of and ways of working around Singapore's tissue repository would have resulted in a longer and more successful existence for the STN.

Erik Aarden is postdoctoral university assistant with the STS Department at the University of Vienna. In his research and teaching he is interested in the relations between science and technology, socio-political orders and implications for distributive justice, seen through a comparative lens and with a focus on biomedicine.

The Josephinum Museum or: Society’s norms put in wax

by Kathrin Gusenbauer

Venus medici by Josephinum – Collections of the Medical University of Vienna, taken from

Probably very few people in our Western society can imagine a life without medicine as it is practiced today. Doctors, medical knowledge and techniques like X-rays or ultrasound play a huge role in how we experience our own and others' bodies. Medicine has made it possible for people to grow very old and to equip them with artificial body parts. Many commercials remind us to donate blood, vaccinate against ticks and stay healthy in general. As part of the neoliberal capitalist ideology, our bodies have become products: we have a duty to keep them running and to maintain them, for example through preventive check-ups.

Part of this reality is rooted at the Josephinum. Its foundation marks the birth of modern medicine in Austria, as it essentially shaped the notion of the human body we hold today. Established in 1784 (then called the Imperial and Royal Academy of Medicine and Surgery) by Emperor Joseph II, it was and still is part of Vienna's Medical University and is best known for its remarkable collection of wax models of skinned human bodies, as well as medical instruments and a library. As the accompanying booklet states, „the history of medicine culminates at the Josephinum“, showing what the first insights into the human body looked like. The museum also claims to be the „physical embodiment of the Medical University's cultural heritage“.

Back in the 18th century, its foundation was part of a massive revolution of the healthcare system, accompanied by the establishment of a new general hospital in Vienna. One of the main ambitions of Joseph II was to bring academic physicians and army doctors – knowledge and practice – together, so that patients would receive much better treatment. This new hospital also housed the first department for female health (Frauengesundheit). Back then this meant nothing like today's gender medicine – it was all about birth, and it was the first institution providing women the opportunity to abort. The reduction of women to their ability to bear children was the contemporary notion of gender at the time: during the 18th century, the conception of 'the' human body changed from a 'one-sex model', in which the male was seen as the norm and the female as the abnormal, to a 'two-sex model', in which the female body was no longer a deviation from the male but an entity of its own. Claudia Honegger described this as the genesis of a female special anthropology. Every organ and every bone – her whole body – was interpreted as shaped just for giving birth. It was her duty, and had to be every woman's highest goal, to do so. But although women's assignment had now been determined and the complementarity of sex and gender arose, women still remained inferior and somehow an unfinished or deficient man. This model is still present in sociobiological arguments, which try to explain social circumstances through the physical conditions of human bodies, when in fact the social biases of the scientists are attributed to the investigated bodies.

At the Josephinum, only two out of a total of 16 white whole-body wax models represent female bodies. One is standing and the other is lying down and pregnant – the Medici Venus. A similar ratio is found in today's anatomy books for students of medicine, such as Gray's Anatomy. Illustrations of very different areas or organs of the human body are depicted – even where completely unrelated – with male genitalia. But the bias is noticeable not only in the images but also in the descriptions. Some organs assigned to be female, especially the reproductive system, are described in relation to the male with vocabulary like "smaller" or "less developed", and not as organs in their own right. Wittingly or unwittingly, "the human body" is male – then and now.

Lower half of right sympathetic cord. (Testut after Hirschfeld.) (Figure 849), taken from Henry Gray's "Anatomy of the Human Body", American 20th Edition, 1918.

A visit to the Josephinum illustrates many topics of the classes I attended during the Complementary Study Programme (EC) Science-Technology-Society at the Department of Science and Technology Studies. It shows, for example, what Ludwik Fleck called the "tenacity of systems of opinion" (Beharrungstendenz von Meinungssystemen): although countless studies have shown otherwise, science seems to have trouble seeing variation – it recognizes only what fits into common beliefs, presuppositions or 'prae-ideas'. Everything divergent – if it is seen at all – is reinterpreted or appears strange and abnormal. More specifically, the exhibition also exemplifies that not just gender but also sex has to be understood as nothing stable or objective. It is "the sum of socially agreed biological criteria for classifying persons as females or males"[1] and therefore foundational for conceptions of gender. Like gender, sex is, contrary to popular belief, the result of complex social negotiation processes.

Overall, the exhibition and its comparison to today make it comprehensible how scientific knowledge is a social process. Thanks to feminism and the recognition of feminist epistemology in the social sciences, it is now possible to make visible how patriarchy is deeply seated in seemingly impartial 'facts' and scientific knowledge. Furthermore, society and the social norms of various social groups shape not only the conceptions or identities of individuals, but also their physical bodies.

Although these findings have not yet gained a foothold in the natural sciences, their research or their results, one can begin to imagine how much power they contain. Realized on a broader social level, they open up gates to more precise, more profound sciences and therefore also to a less discriminatory society.

[1] West/Zimmerman 1987, 127; 2nd session of UK Geschlecht (Vienna, summer term 2015/16).

Kathrin Gusenbauer completed the STS Department’s complementary study program “Science Technology and Society” as part of her Bachelor’s programme of Historiography. This blog post was inspired by the joint excursion of the courses “Science, Technology, Gender” and “Bodies, Knowledge, Society” to the Josephinum.

Making and doing a new Handbook for STS

by Victoria Neumann

Source: Department of Science and Technology Studies, University of Vienna

The moment is finally here: the new edition of the Handbook of Science and Technology Studies is finished and available. To say it in Latour's terms, the Handbook is now "ready made" – but how was it in the making? How was this important representation of our discipline co-produced by the numerous authors, the editors, the politics and business of academic publishing and, last but not least, the material constraints of putting whole fields of study and their intersections into roughly 1200 pages? This post looks back at the sometimes messy processes in the creation of the latest edition of the Handbook and the attempts to keep the mess under control: stories from the Handbook back office. And to make it more playful, and in order to advertise, I have hidden the titles of several chapters in this post. How many can you find? (Hint: the table of contents can be viewed here.)

The story of the new edition of the Handbook began with an outreach: the call for abstracts of chapters. This bottom-up approach from the STS community provided the editors with what they described in the introduction as the seeds for the landscape that this volume was about to become. Between the abstracts and the finished version, many choices were made in shaping this landscape. What needs to be included (what chapters, sections, historical approaches and strands of research)? How much can be included (e.g., chapter lengths, images, bibliographic references)? Thinking of STS as a transdisciplinary environment of research, those choices were an effort to open up the field rather than limit it. Of course, doing environmental justice to such a large field is nearly impossible while balancing the limited resources at hand – time, space in the future book, money – and the aim of making the topics comprehensible for imagined future readers. However, structural inequality still persists, in the sense that non-Euro-American authors, and arguably certain less mainstream perspectives, are still lacking. The editors reflect on this issue in their introduction.

My work for the Handbook began when the first full drafts of the chapters arrived and we started the initial revision process. Finding and recruiting peer reviewers for each chapter was a long process. Fellow scholars do reviews without being paid, often in their spare time on top of their regular work. Consequently (and very understandably), many of the people we asked politely declined. For us, this meant we had to ask around three to four times as many people as needed in order to achieve our aim of around three reviewers per chapter.

After the reviews, it was up to the editors to rethink the documents we had received – the draft chapters and their respective reviews – in order to develop the STS Handbook. This included cropping, shortening and reformulating content in a sensible and sensitive way. This was often about politics: as a Handbook chapter also tells a story about the development of a certain branch of research, authors sometimes got defensive about their contribution, resulting in excessive self-referencing, or left out rival strands of research. Here the editors often wrote long emails or talked personally with the authors about these issues.

Retrospectively, with the review process my most important job began: the surveillance and regulation of the (laboratory) practices of academic writing. Coordinating an international project with numerous actors involved is not an easy task. Timelines and plans were set for all the diverse steps of a handbook, from the first draft through the peer review process to the final proofreading. However, deadlines given to contributors were often seen as a recommendation rather than a fixed commitment or obligation. This PhD comic describes the situation quite well:


Academic Deadlines
Source: “Piled Higher and Deeper” by Jorge Cham

The result: deadlines passed, but the inbox remained empty. Clearly, if we wanted to stay on schedule, we needed to reframe our science communication. But how do you get people to do their work? The solution: a tight deadline reminder regime! Consequently, one of the most work- and time-intensive tasks for me was the sheer number of reminders I had to write and send out via email. As in any project, there was a steep learning curve in disciplining our subjects – I mean our colleagues. In early phases (e.g., during peer review) we would only send out reminder emails after a deadline had passed; in later stages we switched to also reminding the authors ahead of a coming deadline. While writing multiple reminders, I also had to learn how to deal with my own inhibitions. How could I be friendly and respectful, but authoritative at the same time, when requesting overdue work? Adjusting the tone was especially difficult given my position as a graduate student doing a 'merely' administrative job while telling (often well-known) senior scholars what to do. In most cases, it was enough to switch from "Please get back to us by [date]" to a more decisive version like: "Due to our tight schedule we cannot manage any further delays. We do expect to receive the chapter within the next three days. This means Sunday, [date] at the very latest. Thank you for your understanding and collaboration". More frustrating to me than writing endless emails into a seemingly non-responding void were the few authors who never responded to me but corresponded exclusively with the editors, which often caused further delays. Yet, in general, the spamming technique plus the increasingly authoritative language worked surprisingly well. Foucault would have been proud of the way this resulted in self-discipline.

However, even this regime did not always prevent bottlenecks. When the first final versions of the chapters were handed in for typesetting, this was an excellent point to research disasters from an STS perspective, as the majority of chapters came in at once, leaving us very little time to go through them before an important deadline with MIT Press. In order not to miss this (already extended) deadline, we and some friendly helpers spent around two weeks going through all chapters, correcting mistakes, standardizing the bibliography entries, and bringing back ordering systems that had obviously been dismissed as oppressive by some authors (such as alphabetical order in the reference list). At this point the Handbook began to age – as did we, including some new grey hairs – and the socio-material constitution of the later life of the chapters began to look like an actual book (in PDF form).

A few months later, we got the chapters back for the final proofreading. Once more, a number of helpers assisted us in going through all the chapters again. At this point it became clear that gender and (in)equity in the scientific workforce also impact us as a field, since we noticed that the majority of our volunteering helpers were female. They contributed to the last re-configurations – the finishing touch of the Handbook – and, during our main reading session, to discussions over dinner (thankfully paid for by the lead editor) on the co-production of knowledge and food.

Looking back at the scientometrics of all the intellectual and practical contributions to the STS Handbook: we had more than 121 authors, around the same number of reviewers, circa 30 people involved in the editing and proofreading processes, and over 6000 emails sent and received. And this does not even include the many other actors contributing to what the Handbook is now, e.g. typesetters, citation software, and the managing personnel at the 4S or MIT Press. Thank you all for your time and work, and for sharing all the moments of joy, despair, frustration, thoughtfulness and creative engagement. It was a wonderful and valuable experience.

In the end, the Handbook was co-produced by a whole community, full of formal and informal work and every interaction in between. It is not only a representation but also a materialization of this community, and the process of its creation showed the community's messiness, structures, hierarchies and politics. Now it is out there, so please do what STS does best: Discuss it! De-construct it! Re-construct it! Teach with it! Criticize it! Use it as a pillow while studying!


After all, I claim: one does not need a laboratory to raise a discipline; one just needs to produce a Handbook.

Victoria Neumann is currently finishing the Master program ‘Science-Society-Technology’ at the University of Vienna. Apart from working for the Handbook, she is interested in biomedicine, time, and critical data studies.