
On the limits of openness II: from open access to open data

In ‘On the limits of openness I’ (see below), I argued that in order to gain an appreciation of what the humanities can become in an era of digital media technology, we would be better advised to turn for assistance, not to computing science, but to the writers, poets, historians, literary critics, theorists and philosophers of the humanities. Let me explain what I mean.

Thirty years ago the philosopher Jean-François Lyotard was able to show how science, lacking the resources to legitimate itself as true, had, since its beginnings with Plato, relied for its legitimacy on precisely the kind of knowledge it did not even consider to be knowledge: non-scientific narrative knowledge. Specifically, science legitimated itself by producing a discourse called philosophy. It was philosophy’s role to generate a discourse of legitimation for science. Lyotard proceeded to define as modern any science that legitimated itself in this way by means of a metadiscourse which explicitly appealed to a grand narrative of some sort: the life of the spirit, the Enlightenment, progress, modernity, the emancipation of humanity, the realisation of the Idea, and so on.

What makes Lyotard’s analysis so significant with respect to the emergence of the digital humanities and the computational turn is that his intention was not to position philosophy as being able to tell us as much, if not more, about science than science itself. It was rather to emphasize that, in a process of transformation that had been taking place since at least the end of the 1950s, such long-standing metanarratives of legitimation had now themselves become obsolete.

So what happens to science when the philosophical metanarratives that legitimate it are no longer credible? Lyotard’s answer, at least in part, was that science was increasing its connection to society, especially to the instrumentality and functionality of society (as opposed to a notion of, say, ‘public service’). Science was doing so by helping to legitimate the power of states, companies and multinational corporations by optimizing the relationship ‘between input and output’, between what is put into the social system and what is got out of it, in order to get more from less. ‘Performativity’, in other words.

It is at this point that we return directly to the subject of computers and computing. For Lyotard, writing in 1979, technological transformations in research and the transmission of acquired learning in the most highly developed societies, including the widespread use of computers and databases and the ‘miniaturization and commercialization of machines’, were already in the process of exteriorizing knowledge in relation to the ‘knower’. Lyotard saw this general transformation and exteriorization as leading to a major alteration in the status and nature of knowledge: away from a concern with ‘the true, the just, or the beautiful, etc.’, with ideals, with knowledge as an end in itself, and precisely toward a concern with improving the social system’s performance, its efficiency.  So much so that, for Lyotard:

The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements.

(Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Manchester: Manchester University Press, 1986) p.4)

Scroll forward thirty years and we do indeed find many discourses in the sciences today taken up with exteriorizing knowledge and information in order to achieve ‘the best possible performance’ by eliminating delays and inefficiencies and solving technical problems. So we have John Houghton’s 2009 study showing that the open access academic publishing model championed most vociferously in the sciences, whereby peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, without the need to pay subscriptions either to publish or to (pay-per-)view it, is actually the most cost-effective mechanism for scholarly publishing. Others have detailed at length the increases open access publishing and the related software make possible in the amount of research material that can be published, searched and stored, the number of people who can access it, the impact of that material, the range of its distribution, and the speed and ease of reporting and information retrieval, leading to what one of the leaders of the open access movement, Peter Suber, has described as ‘better metrics’.

One highly influential open access publisher, the Public Library of Science (PLoS), is, with their PLoS Currents: Influenza website, even experimenting with publishing scientific research online before it has undergone in-depth peer review. PLoS are justifying this experiment on the grounds that it enables ideas, results and data to be disseminated as rapidly as possible.  But they are far from alone in making such an argument. Along with full, finished, peer-reviewed texts, more and more researchers in the sciences are making the email, blog, website or paper in which an idea is first expressed openly available online, together with any drafts, working papers, beta, pre-print or grey literature that have been produced and circulated to garner comments from peers and interested parties.  Like PLoS, these scientists perceive doing so as a way of disseminating their research earlier and faster, and therefore increasing its visibility, use, impact, citation count and so on. They also regard it as a means of breaking down much of the culture of secrecy that surrounds scientific research, and as helping to build networks and communities around their work by in effect saying to others, both inside and outside the academy, ‘it’s not finished, come and help us with it!’ Such crowd-sourcing opportunities are in turn held as leading to further increases in their work’s visibility, use, impact, citation counts, prestige and so on, thus optimizing the ratio between minimal input and maximum output still further.

Nor is it just the research literature itself that is being rendered accessible by scientists in this way. Even the data that is created in the course of scientific research is being made freely and openly available for others to use, analyse and build upon. Known as Open Data, this initiative is motivated by more than an  awareness that data is the main research output in many fields.  In the words of another of the leading advocates for open access, Alma Swan, publishing data online on an open basis bestows it with a ‘vastly increased utility’: digital data sets are ‘easily passed around’; they are ‘more easily reused’; and they contain more ‘opportunities for educational and commercial exploitation’. 

Some academic publishers are viewing the linking of their journals to the underlying data as another of their ‘value-added’ services to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to help ward off the threat of disintermediation posed by the development of digital technology, which makes it possible for academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). In fact a 2009 JISC open science report identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’.

In a further move in this direction, all seven PLoS journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as ‘trade secrets’, these metrics measure which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, notes, ‘Star’ ratings, blog coverage, and so on.

PLoS has positioned this programme as enabling science scholars to assess ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’, and they encourage readers to carry out their own analyses of this open data. Yet it is difficult not to see article-level metrics as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’, as Lyotard puts it; quantities, furthermore, that are produced more to be exchanged, marketed and sold – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ – than for their ‘use-value’.

The requirement to have visibility, to show up in the metrics, to be measurable, nowadays encourages researchers to publish a lot and frequently. So much so that the peer-reviewed academic journal article has been positioned by some as having now assumed ‘a single central value, not that of bringing something new to the field but that of assessing the person’s research, with a view to hiring, promotion, funding, and, more and more, avoiding termination.’ In such circumstances, as Lyotard makes clear, ‘[i]t is not hard to visualize learning circulating along the same lines as money, instead of for its “educational” value or political (administrative, diplomatic, military) importance’. To the extent that it is even possible to say that, just as money has become a source of virtual value and speculation in the era of American-led neoliberal global finance capital, so too have education, research and publication. And we all know what happened when money became virtual.
