Some recent and not-so-recent publications

'Culture and the University as White, Male, Liberal Humanist, Public Space'

Experimental Publishing Compendium

Combinatorial Books: Gathering Flowers (book series)

'How To Be A Pirate: An Interview with Alexandra Elbakyan and Gary Hall' by Holger Briel.

'Experimenting With Copyright Licences' (blog post for the COPIM project - part of the documentation for the first book coming out of the Combinatorial Books pilot)

Review of 'Bitstreams: The Future of Digital Literary Heritage' by Matthew Kirschenbaum

Contribution to 'Archipiélago Crítico. ¡Formado está! ¡Naveguémoslo!' (invited talk: in Spanish translation with English subtitles)

How to Practise the Culture-led Re-Commoning of Cities (printable poster), Partisan Social Club, adjusted by Gary Hall

'Writing Against Elitism with A Stubborn Fury' (podcast)

'The Uberfication of the University - with Gary Hall' (podcast)

'"La modernidad fue un "blip" en el sistema": sobre teorías y disrupciones con Gary Hall' ['"Modernity was a "blip" in the system": on theories and disruptions with Gary Hall']' (press interview in Colombia)

'Combinatorial Books - Gathering Flowers', with Janneke Adema and Gabriela Méndez Cota - Part 1; Part 2; Part 3 (blog post)

Open Access

Most of Gary's work is freely available to read and download, either here in Media Gifts, in Coventry University's online repository PURE here, or in Humanities Commons here

Radical Open Access

Radical Open Access Virtual Book Stand

'"Communists of Knowledge"? A case for the implementation of "radical open access" in the humanities and social sciences' (an MA dissertation about the ROAC by Ellie Masterman). 

Wednesday, December 1, 2010

On the limits of openness III: open government

The global financial crisis that began in 2008 has only served to add further urgency to the belief of many in the UK that the government should relinquish its copyright on all local, regional and national data collected with taxpayers' money - most vociferously that relating to Parliamentary expenses and the salaries and bonuses of the highest paid employees in the City of London - and make it freely and openly available to the public by publishing it online, where it can be searched, mined, mapped, graphed, cross-tabulated, visualized, audited, interpreted, analysed and assessed using software tools. The Guardian newspaper in the UK has even gone so far as to establish a 'Free Our Data' campaign to this end.

From a liberal democratic perspective, freeing publicly funded and acquired data like this, whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics, for example), helps society to perform more efficiently. It does so not least by virtue of its ability to play a key role in increasing citizen participation and involvement in democracy, and indeed government, as access to information such as that needed to intervene in public policy is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves.

But neoliberals also support making the data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the 'best possible input/output equation' (Lyotard). In this respect it is of a piece with the emphasis placed by neoliberalism's audit culture on accountability, transparency, evaluation, measurement and centralised data management: for instance, in Higher Education regarding the impact of research on society and the economy, league tables, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers') money, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competitiveness and accountability (over MPs' expenses payments for second homes, moat cleaning, duck islands, trouser presses and the like).

Some libertarians have even gone so far as to argue that there is no need at all to make difficult policy decisions about what data and information it is right to publish online and what to keep secret. (Since Prince Harry is funded from the public purse, do the public have the right to access data regarding his blood group and DNA, so it can be determined once and for all that his father is Prince Charles and not James Hewitt?) Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. In Shaping Things, his non-fiction book on the future of design, Sterling advocates retaining all data and information, 'the known, the unknown known, and the unknown unknown', in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it.

Yet to have participated in the shift away from questions of truth, justice and what, in The Inhuman, Lyotard places under the headings of 'heterogeneity, dissensus, event… the unharmonizable', and toward a concern with performativity, measurement and optimising the relation between input and output, one doesn't need to be a practising data journalist, or to have actively contributed to the movements for open access, open data or open government, at all. If you are one of the 1.3 million-plus people who have purchased a Kindle, and helped sales of digital books outpace those of hardbacks on Amazon's US website, then you have already signed a licence agreement allowing the online book retailer - but not academic researchers or the public - to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the 23 million in the UK and 500 million worldwide who use the password-protected Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies with profile data relating to yourself, your family, friends, colleagues and peers they can use in investigations. Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. And if this shift from open access to Google seems somewhat far-fetched, it's worth remembering that 'Google has moved to establish, embellish, or replace many core university services such as library databases, search interfaces, and e-mail servers'; and that in fact universities gave birth to Google, Google's PageRank algorithm being little more 'than an expansion of what is known as citation analysis'.
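To see why PageRank can be described as an expansion of citation analysis, consider a minimal, purely illustrative sketch: power iteration over a toy 'citation' graph, set beside the raw citation counts bibliometrics has long used. The graph, the damping factor and every name here are assumptions made for the example, not a description of Google's actual system.

```python
# Toy illustration only: PageRank as weighted citation counting.
# 'links' maps each paper to the papers it cites; the graph is invented.

links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

papers = sorted(links)
rank = {p: 1.0 / len(papers) for p in papers}
damping = 0.85  # conventional damping factor from the PageRank literature

for _ in range(50):  # iterate until the scores settle
    new_rank = {p: (1.0 - damping) / len(papers) for p in papers}
    for p, cited in links.items():
        for q in cited:
            # each paper passes its own standing on to the papers it cites
            new_rank[q] += damping * rank[p] / len(cited)
    rank = new_rank

# Plain citation analysis: simply count how often each paper is cited.
counts = {p: sum(p in cited for cited in links.values()) for p in papers}
print(rank)
print(counts)
```

On this toy graph both measures agree that C matters most; PageRank's 'expansion' is simply that a citation from a highly ranked paper counts for more than one from an obscure one.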

Obviously, no matter how exciting and enjoyable such activities may be, you don't have to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking. (Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences these companies provide.) Nevertheless, even if we wanted to, refusing to take part in this transformation of knowledge and learning into quantities of data, and in the shift away from questions of what is just and right toward a concern with optimizing the system's performance, is simply not an option for most of us. It's not something that can be opted out of by declining to take out a Tesco Club Card, refusing to look for research using Google Scholar, or committing social networking 'suicide' and reading print-on-paper books instead.

For one thing, the process of capturing data by means not just of the internet, but of a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all-pervasive that it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. It's regularly said that there are approximately four million CCTV cameras in the UK – one for every 14 people, more than in any other country (and that's without even mentioning means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition). Yet no one really knows how many CCTV cameras are actually in operation in Britain today. (In fact the above statistic is reputed to have been based merely 'on a dubious extrapolation from the number of cameras in London's Putney High Street in 2002'.)

For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect. It’s not a matter of positively contributing free labour to the likes of Flickr and YouTube, for instance; or of refusing to do so. Nor is it a case of the separation between work and non-work being harder to maintain nowadays. (Is it work or leisure when you’re writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood:

In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. 

(Gilles Deleuze and Felix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (London: Athlone, 1988) p.492)

So as the above two examples show, this transformation of knowledge and information into quantities of data is not something that can actually be opted out of, since it’s not something that is necessarily opted into.

But there is a further and related reason all this data capturing, storing and mining cannot be simply opted out of or resisted via facilities such as Google Dashboard,  which allows people to see all the data Google has about them, or by reporting objectionable content,  as it’s possible to do in the case of Google Street View providing you’re knowledgeable enough. This is that too often such notions of refusal and active resistance (like their counterparts to do with ideas of privacy, civil rights and liberties)  have their basis in a conception of the autonomous, fully-conscious, rational, self-identical and self-present individual humanist subject that these changes in media and technology may be in the process of helping to reconfigure. As a result, they risk overlooking the possibility that computers, databases, archives,  servers, blogs, microblogs, RSS feeds, image and video-sharing, social networking and ‘the cloud’ are not just being used to change the status and nature of knowledge; they may be involved in the constitution of a very different form of human subject too. 

Wednesday, November 24, 2010

On the limits of openness II: from open access to open data

In ‘On the limits of openness I’ (see below), I argued that in order to gain an appreciation of what the humanities can become in an era of digital media technology, we would be better advised turning for assistance, not to computing science, but to the writers, poets, historians, literary critics, theorists and philosophers of the humanities. Let me explain what I mean.

Thirty years ago the philosopher Jean-François Lyotard was able to show how science, lacking the resources to legitimate itself as true, had, since its beginnings with Plato, relied for its legitimacy on precisely the kind of knowledge it did not even consider to be knowledge: non-scientific narrative knowledge. Specifically, science legitimated itself by producing a discourse called philosophy. It was philosophy’s role to generate a discourse of legitimation for science. Lyotard proceeded to define as modern any science that legitimated itself in this way by means of a metadiscourse which explicitly appealed to a grand narrative of some sort: the life of the spirit, the Enlightenment, progress, modernity, the emancipation of humanity, the realisation of the Idea, and so on.

What makes Lyotard’s analysis so significant with respect to the emergence of the digital humanities and the computational turn is that his intention was not to position philosophy as being able to tell us as much, if not more, about science than science itself. It was rather to emphasize that, in a process of transformation that had been taking place since at least the end of the 1950s, such long-standing metanarratives of legitimation had now themselves become obsolete.

So what happens to science when the philosophical metanarratives that legitimate it are no longer credible?   Lyotard’s answer, at least in part, was that science was increasing its connection to society, especially the instrumentality and functionality of society (as opposed to a notion of, say, ‘public service’). Science was doing so by helping to legitimate the power of States, companies and multinational corporations by optimizing the relationship ‘between input and output’, between what is put into the social system and what is got out of it, in order to get more from less. ‘Performativity’, in other words.

It is at this point that we return directly to the subject of computers and computing. For Lyotard, writing in 1979, technological transformations in research and the transmission of acquired learning in the most highly developed societies, including the widespread use of computers and databases and the ‘miniaturization and commercialization of machines’, were already in the process of exteriorizing knowledge in relation to the ‘knower’. Lyotard saw this general transformation and exteriorization as leading to a major alteration in the status and nature of knowledge: away from a concern with ‘the true, the just, or the beautiful, etc.’, with ideals, with knowledge as an end in itself, and precisely toward a concern with improving the social system’s performance, its efficiency.  So much so that, for Lyotard:

The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The 'producers' and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as 'knowledge' statements.

(Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Manchester: Manchester University Press, 1986) p.4)

Scroll forward 30 years and we do indeed find a lot of discourses in the sciences today taken up with exteriorizing knowledge and information in order to achieve 'the best possible performance' by eliminating delays and inefficiencies and solving technical problems. So we have John Houghton's 2009 study showing that the open access academic publishing model championed most vociferously in the sciences, whereby peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, without the need to pay subscriptions either to publish or to (pay-per-)view it, is actually the most cost-effective mechanism for scholarly publishing. Others have detailed at length the increases open access publishing and the related software make possible in the amount of research material that can be published, searched and stored, the number of people who can access it, the impact of that material, the range of its distribution, and the speed and ease of reporting and information retrieval, leading to what one of the leaders of the open access movement, Peter Suber, has described as 'better metrics'.

One highly influential open access publisher, the Public Library of Science (PLoS), is, with its PLoS Currents: Influenza website, even experimenting with publishing scientific research online before it has undergone in-depth peer review. PLoS justifies this experiment on the grounds that it enables ideas, results and data to be disseminated as rapidly as possible. But it is far from alone in making such an argument. Along with full, finished, peer-reviewed texts, more and more researchers in the sciences are making the email, blog, website or paper in which an idea is first expressed openly available online, together with any drafts, working papers, beta, pre-print or grey literature that have been produced and circulated to garner comments from peers and interested parties. Like PLoS, these scientists perceive doing so as a way of disseminating their research earlier and faster, and therefore increasing its visibility, use, impact, citation count and so on. They also regard it as a means of breaking down much of the culture of secrecy that surrounds scientific research, and as helping to build networks and communities around their work by in effect saying to others, both inside and outside the academy, 'it's not finished, come and help us with it!' Such crowd-sourcing opportunities are in turn seen as leading to further increases in their work's visibility, use, impact, citation counts, prestige and so on, thus optimizing the ratio between minimal input and maximum output still further.

Nor is it just the research literature itself that is being rendered accessible by scientists in this way. Even the data that is created in the course of scientific research is being made freely and openly available for others to use, analyse and build upon. Known as Open Data, this initiative is motivated by more than an awareness that data is the main research output in many fields. In the words of another of the leading advocates for open access, Alma Swan, publishing data online on an open basis endows it with a 'vastly increased utility': digital data sets are 'easily passed around'; they are 'more easily reused'; and they contain more 'opportunities for educational and commercial exploitation'.

Some academic publishers are viewing the linking of their journals to the underlying data as another of their ‘value-added’ services to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to help ward off the threat of disintermediation posed by the development of digital technology, which makes it possible for academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). In fact a 2009 JISC open science report identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’.

In a further move in this direction, all seven PLoS journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as 'trade secrets', these metrics measure which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, notes, 'Star' ratings, blog coverage, etc.

PLoS has positioned this programme as enabling science scholars to assess 'research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published', and it encourages readers to carry out their own analyses of this open data. Yet it is difficult not to see article-level metrics as also being part of the wider process of transforming knowledge and learning into 'quantities of information', as Lyotard puts it; quantities, furthermore, that are produced more to be exchanged, marketed and sold – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of 'quality' and 'impact' – than for their 'use-value'.

The requirement to have visibility, to show up in the metrics, to be measurable, nowadays encourages researchers to publish a lot and frequently. So much so that the peer-reviewed academic journal article has been positioned by some as having now assumed 'a single central value, not that of bringing something new to the field but that of assessing the person's research, with a view to hiring, promotion, funding, and, more and more, avoiding termination.' In such circumstances, as Lyotard makes clear, '[i]t is not hard to visualize learning circulating along the same lines as money, instead of for its "educational" value or political (administrative, diplomatic, military) importance'. To the extent that it is even possible to say that, just as money has become a source of virtual value and speculation in the era of American-led neoliberal global finance capital, so too have education, research and publication. And we all know what happened when money became virtual.

Friday, November 19, 2010

On the limits of openness I: the digital humanities and the computational turn to data-driven scholarship

The digital humanities can be broadly understood as embracing all those scholarly activities in the humanities that involve writing about digital media and technology, and being engaged in processes of digital media production, practice and analysis. For example, developing new media theory, creating interactive electronic archives and literature, building online databases and wikis, producing virtual art galleries and museums, or exploring how various technologies reshape teaching and research. Yet this field - or, better, constellation of fields - is neither unified nor self-identical. If anything, the digital humanities comprise a wide range of often conflicting attitudes, approaches and practices that are being negotiated and employed in a variety of different contexts.

In what follows my interest is not so much with the ongoing debate as to how precisely the digital humanities are to be defined and understood, but with an aspect of this emergent movement that appears to be becoming increasingly dominant. So much so that for some it is rapidly coming to stand in for, or be equated with, the digital humanities as a whole. This is the so-called 'computational turn' in the humanities.

The latter phrase has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields, including interactive information visualisation, statistical data analysis, science visualization, image processing, network analysis, and the management, manipulation and mining of data, are being increasingly used to produce new ways of approaching and understanding texts in the humanities.  Indeed, thanks to increases in computer processing power and its affordability over the last few years, together with the sheer amount of cultural material that is now available in digital form, number-crunching software is being applied to millions of humanities texts in this way.

Before going any further I want to make it clear that it is not my intention to equate this computational turn with the digital humanities per se. Even if the latter is sometimes known as Humanities Computing - or as a transition between the so-called ‘traditional humanities’ and Humanities Computing  - what is coming to be called the digital humanities and this computational turn in the humanities are not one and the same thing as far as I am concerned.

In fact, far from equating the digital humanities with the computational turn, I want to insist on the importance of maintaining a difference between them, certainly for any understanding of what the humanities can become in an era of digital media technology. For, to date (and I acknowledge it is still relatively early days), the traffic in this computational turn has been rather one-way. As the phrase suggests, it has primarily been about exploring what direct practical uses computer science can be to the humanities in terms of performing computations on sets and flows of data that are often so large that, in the words of the Digging Into Data Challenge, 'they can be processed only using computing resources and computational methods'. In the main the concern has been with either digitizing 'born analog' humanities texts and artifacts, or gathering together 'born digital' humanities texts and artifacts – videos, websites, games, photography, sound recordings, 3D data - and then taking complex and often extremely large-scale data analysis techniques from computer science and related fields and applying them to these humanities texts and artifacts. So we have the likes of Dan Cohen and Fred Gibbs' text mining of 'the 1,681,161 books that were published in English in the UK in the long nineteenth century' (according to Google at least); Lev Manovich and the Software Studies Initiative's use of 'software to analyze and visualize... 4535 Time magazine covers... 1074790 manga pages, and 1100+ 20th century feature films'; or Stefanie Posavec's Literary Organism, which visualizes the structure of Part One of On the Road as a tree.
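To give a sense of what such 'number-crunching' looks like at its very simplest, here is a minimal, purely illustrative sketch of the basic operation underlying projects like these: counting a term's occurrences, year by year, across a directory of digitized plain-text books. The directory layout and filename convention are assumptions invented for the example; real projects work at vastly larger scales and with far more sophisticated methods.

```python
# Minimal text-mining sketch: term frequency by year over a corpus of
# plain-text files. Assumes hypothetical filenames like '1845_title.txt'.

import re
from collections import Counter
from pathlib import Path

def term_frequencies(corpus_dir: str, term: str) -> Counter:
    counts = Counter()
    pattern = re.compile(rf"\b{re.escape(term.lower())}\b")
    for path in Path(corpus_dir).glob("*.txt"):
        year = path.stem.split("_")[0]  # publication year from the filename
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts[year] += len(pattern.findall(text))
    return counts

# e.g. chart the fortunes of a keyword across the long nineteenth century:
# print(term_frequencies("corpus/", "progress"))
```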

Yet just as interesting as what computer science has to offer the humanities, I believe, is the question of what the humanities - in both their digital and 'traditional' guises (assuming they can be distinguished in this way, which is by no means certain) - have to offer computer science; and, beyond that, what the humanities themselves can bring to the understanding of computing and the shaping of the digital. Do the humanities really need to draw quite so heavily on computer science to develop a sense of what they can be in an era of digital media technology? Along with a computational turn in the humanities, might we not also benefit from a humanities turn in our understanding of the computational and the digital?

To be sure, one of the interesting things about computer science is that, as Mark Poster pointed out some time ago, it was the first case where 'a scientific field was established that focuses on a machine' rather than on an aspect of nature or culture. Yet more interesting still is the way Poster was able to demonstrate that the relation to this machine in computer science is actually one of misrecognition, with the computer occupying 'the position of the imaginary' and being 'inscribed with transcendent status'. This misidentification on the part of computer science has significant implications for our response to the computational turn. It suggests computer science is not all that well equipped to understand itself and its own founding object, let alone help the humanities with their relation to computing and the digital.

In fact, counter-intuitive though it may seem, if what we are seeking is an appreciation of what the humanities can become in an era of digital media technology and data-driven scholarship, we would be better advised to look for assistance elsewhere than primarily to computer science and engineering, science and technology, or even science in general. I almost hesitate to say it in the present political climate - although it is important to do so for precisely this reason - but we would be better off turning to the writers, poets, historians, literary critics, theorists and philosophers of the humanities right from the start.

Wednesday, November 10, 2010

‘Follow the money’: the political economy of open access in the humanities

(The following is a slightly revised version of a lecture given at the 1st Conference on Open Access Scholarly Publishing held by The Open Access Scholarly Publishers Association and the DOAJ/Lund University Libraries, Lund, Sweden, September 14-16, 2009.)


I’ve been asked to talk about the challenges and opportunities of publishing open access journals in the humanities; and to talk about the experiences of Open Humanities Press (OHP) in particular. I’m going to start by focusing on OHP, just to set the stage for some broader comments on publishing OA in the humanities.

Open Humanities Press was founded by Sigi Jöttkandt, David Ottina, Paul Ashton and myself as the first open-access publishing 'house' explicitly dedicated to critical and cultural theory. It was launched in May 2008 by an international group of scholars including Alain Badiou, Jonathan Culler, Stephen Greenblatt and Gayatri Spivak, in response to a perceived crisis whereby academic publishers in the area have cut back on the number of research-led titles they bring out, in order to focus on readers, introductions, textbooks and reference works, while libraries are finding it difficult to afford the research that is published, both books and journals. In this respect OHP's mission is a deceptively simple one: it's to make leading works of critical and cultural thought freely and permanently available, on a worldwide basis, by publishing them open access.

In the first instance OHP consisted of a collective of already existing, high-quality, independent or ‘scholar published’ open access journals in philosophy, cultural studies, literary criticism and political theory. These include Culture Machine, Fibreculture, Vectors and Film-Philosophy. ‘Our feeling was that there were quite a few excellent open access peer-reviewed journals, but they weren't getting recognition because they were a bit isolated. By collecting the journals under a single banner we hoped to show both the humanities and the open access communities that there’s actually quite a bit of significant OA activity in the humanities’ (OHP Steering Group). 

OHP's initial intention was to establish a reputation with its journals, before proceeding to tackle the more difficult problem of publishing book-length material open access: difficult because of issues of economics and prestige I'll come on to in a moment. However, things have developed much faster than we anticipated and, almost by popular demand, we now have an OHP monograph project, run in collaboration with the University of Michigan Library's Scholarly Publishing Office, UC-Irvine, UCLA Library, and the Public Knowledge Project, headed by John Willinsky at Stanford University. The idea of the monograph project is to move forward both open access publishing in the humanities, and the open access publishing of humanities monographs. And we're launching our monograph project with five high-profile book series: New Metaphysics, eds Bruno Latour and Graham Harman; Unidentified Theoretical Objects, ed. Wlad Godzich; Critical Climate Change, eds Tom Cohen and Claire Colebrook; Global Conversations, ed. Ngugi wa Thiong'o; Liquid Books, eds Gary Hall and Clare Birchall.

So that’s Open Humanities Press.

Now, the first thing I want to say concerning the funding of open access in the humanities is that we need to recognize that the humanities have developed a very different set of professional cultures to those of Science, Technology and Medicine (the STMs), which have tended to dominate the discussion around OA so far. What works in the STMs doesn't necessarily work in the humanities:

a.    open access in the humanities continues to be dogged by the perception that online publication is somehow less 'credible' than print, and that it lacks rigorous standards of quality control. This leads to open access journals being regarded as less trustworthy and desirable places to publish, and as too professionally risky, for early career scholars especially;

b.    open access in the humanities is also caught uncomfortably between two stools. On the one hand, all the talk of publishing services, platforms and tools is just too geeky for many humanities scholars. On the other, compared to the likes of tactical media, mash-ups, internet piracy and hacktivism, open access is too tame, too institutional, managerial and bureaucratic for many of those in the humanities concerned with new media;

In the humanities, the message about the importance of open access also often comes from university managers and administrators; and, because there’s something of a clash of ideology between managers and academics, OA is not perceived as being radical or cool in the way peer-to-peer file sharing or even open source is;

c.    one of the main models of funding open access in the sciences, author-side fees, is also not easily transferable to the humanities. Authors in the humanities are simply not used to paying to have their work published – even if it is just a matter of covering the cost of its production and processing and calling them 'publication' or 'processing' fees – and associate doing so with vanity publishing. At present they're also less likely to obtain the grants, from either funding bodies or their institutions, necessary to cover the cost of publishing author-pays. That the humanities receive only a fraction of the amount of government funding the STMs do only compounds the problem. As does the fact that higher rejection rates in the humanities, as compared to the STMs, mean any grants would have to be significantly larger. And that's just to publish journal articles. Publishing books author-pays would be more expensive still.

So OHP, like most of the journals it encompasses, operates at the moment on a zero revenue, zero expenses basis. Any funding comes indirectly: via our institutions paying our salaries as academics. We're simply using some of the time we're given to conduct research to create open access publishing opportunities for others. Of course some academics may be given reduced teaching or administrative loads by their institutions for setting things like this up; others may have PhD students or graduate assistants they can get to do some of the work. (Another indirect source of funding worth mentioning occurs via our institutions sometimes paying for the hosting of content - my thanks to Marta Brunner for this point.) But I suspect most are just donating their time and energy to open access as a service to the profession because they believe in it. What's more, as Sigi Jöttkandt has pointed out, 'this largely volunteer effort is the norm rather than the exception' when it comes to sustainable no-fee journal publishing in many humanities fields, in both OA and non-OA sectors.

Operating on a zero revenue, zero expenses basis like this can be a significant source of strength to many independent humanities journals and their publishers.  It makes it easier for them to publish highly specialised, experimental, inter- or trans-disciplinary research; research that does not always fit into the kind of neat disciplinary categories and divisions with which for-profit publishers like to order their lists, but which may nevertheless help to push scholarship in exciting new directions. It also makes it easier for such journals to publish research which, in challenging established disciplines, styles and frameworks, may fall between the different stools represented by the various academic departments, learned societies, scholarly associations, and research councils, but which may nevertheless help to push a field in exciting new directions and generate important new areas of inquiry.

Yet it can also be a potential weakness. It opens up many such scholar-published journals to being positioned as functioning on an amateur, shoe-string basis, almost as cottage industries. Compared to a journal produced by, say, a large, for-profit, corporately owned press, they're far more vulnerable to being accused of being unable to sustain high academic standards in terms of their production, editing, copy-editing, proofing and peer reviewing processes. They're also more vulnerable to being accused of being unable to maintain consistently high academic standards in terms of their long-term sustainability, the marketing and distribution services they can offer, their ability to be picked up by prestige-endowing indexes, and all the other add-on features they can provide such as journal archiving, contents alerts, word searches, discussion forums, etc. As I noted in Digitize This Book!, while this also applies to 'independent' print journals, it is especially the case with regard to online-only journals, the vast majority of which are 'still considered too new and unfamiliar to have gained the level of institutional recognition required for them to be thought of as being "established" and "of known quality".'

It’s precisely this perception of open access in the humanities that OHP is designed to counter by directly addressing these issues to ensure OA publishing, in certain areas of the humanities at least, meets ‘the levels of professionalism our peers expect from publications they associate with academic “quality”’.

I want to emphasize two points here:

1.    first, open access, as it’s been championed in the STMs, can’t simply be rolled out unproblematically into the humanities; and any attempt to do so is likely to face a number of significant challenges, as we’ve seen;  

2.    second, any attempt to develop OA in the humanities also needs to recognise that the humanities, in turn, are going to have an impact on open access.  So, contrary to the impression that’s given by most writing on this subject, it’s not just the humanities that are going to be fundamentally transformed by this process, via the development of OA journals and publishers such as OHP; open access is likely to undergo a significant transformation, too.

For instance, to my mind the open access movement quite simply has to place more emphasis on books than it has done to date. If it doesn’t, then its impact on the humanities will prove negligible, since it’s books published with esteemed international presses, rather than articles in high-ranking journals, that are still the ‘gold standard’ in many humanities fields.

But the humanities also have a long tradition of exposing and subverting many of the assumptions on which OA, as it’s been championed in the STMs, is based, including those associated with notions of writing, the text, the work and the author – to the point where the humanities and the sciences may actually be incommensurable in many respects.

Now radical differences of this sort often get played down at OA events such as this. Sure, we can have what Richard Poynder refers to as 'bad tempered wrangles' over relatively 'minor issues' such as 'metadata, copyright, and distributed versus central archives'. But in the main the emphasis in the OA movement is on presenting a more or less unified front in the face of criticisms from governments, publishers, lobbyists and so forth, lest we provide them with further ammunition to attack open access, dilute our message, or otherwise distract ourselves from what we're all supposed to agree is the main task at hand: the achievement of universal, free, online access to research. (Poynder, for example, speaks in terms of 'working together for the common good'.) However, I'd maintain it's important not to see the presence of such differences and conflicts as a purely negative thing - as it might be perceived, say, by those working in the liberal tradition, with its 'rationalist belief in the availability of a universal consensus based on reason'.

In fact, if one of the impulses behind open access is to make knowledge and research – and with it society – more open and democratic, then I’d argue the existence of such dissensus will actually help in achieving this ambition. As the political philosopher Chantal Mouffe has shown, far from placing democracy at risk, a certain degree of difference and confrontation constitutes the very possibility of its existence. For Mouffe, ‘a well functioning democracy calls for a clash of legitimate democratic political positions’.

Speaking of metadata, this is one of the reasons why, in contrast to many in the OA community, I've maintained 'that standards for preparing metadata should be generated in a plurality of different ways and places. Rather than adhering to the fantasy of having one single, fully integrated global archive... I'd argue instead for a multiplicity of different and at times conflicting and even incommensurable open-access archives, journals, databases and other publishing experiments.' So I don't see it as a problem or failing that, because there are so many multi-format information materials, no one efficient means of searching across them all has yet been developed. For me, the fantasy of having one place to search for scholarship and research, such as a fully integrated, indexed and linked Global Archive, must remain precisely that: 'a (totalizing and totalitarian) fantasy.'

None of which is to imply there can no longer be an OA community. It's just to acknowledge that difference and conflict are what make a community, and indeed the common, possible. We thus need to think the nature of community, of being together and holding something in common, a little differently. As the philosopher Jean-Luc Nancy asks:

What is a community? It is neither a macro-organism nor a big family... The common, having-in-common or being in common, excludes from itself interior unity, subsistence, and presence in and by itself. Being with, being together, and even being ‘united’ are precisely not a matter of being ‘one’. Of communities that are at one with themselves, there are only dead ones.

(Jean-Luc Nancy, A Finite Thinking, edited by Simon Sparks (Stanford, California: Stanford University Press, 2003) p.285)

To provide you with another example of how the humanities may come to shape OA: I'd argue that the willingness of the humanities to critically interrogate many of the assumptions on which OA is currently based can help the OA community to avoid the fate anticipated by the philosopher Jean-François Lyotard. In his 1979 book The Postmodern Condition, Lyotard contended that the widespread use of computers and databases, in exteriorizing knowledge in relation to the 'knower', was producing a major alteration in the status and nature of knowledge, away from questions of what is socially just and scientifically true and toward a concern simply with 'optimizing the system's performance'. Thirty years later and a lot of OA conferences and debates are indeed taken up with showing how the externalisation of knowledge in online journals and archives can be used to make the existing system of academic research and publication much more efficient. So we have John Houghton's study showing that OA is actually the most cost-effective mechanism for scholarly publishing; while others have discussed at length the increases open access and related software make possible - in the amount of material that can be published and stored, the number of people who can have access to it, the impact of that material, the range of distribution, the speed and ease of reporting and information retrieval - leading to what Peter Suber earlier called 'better metrics', reductions in staffing, production and reproduction costs, etc.

(Incidentally, I wonder if this doesn’t partly explain why quite a few people associated with OA have a somewhat grumpy, ‘dogmatic’ public persona:

I mean, if they moralistically believe they already know the optimum way to achieve universal open access, and thus maximize the performance of the existing system of research – be it via interoperable institutional repositories or whatever - then presumably they can often only act negatively, to correct the delays, errors and inefficiencies they perceive in the ideas of others.)

Now the humanities could help prevent the OA movement from becoming even more moralistically and dogmatically obsessed with maximising performance, solving technical problems and eliminating inefficiencies than it already is, I think. (The attempt to avoid slipping into such technical discourse is just one reason why, elsewhere, I haven’t gone into the practical, ‘nuts and bolts’ of publishing open access.) At the same time, the humanities could help the OA community to grow, precisely by forcing scholars to confront issues of politics and social justice, in the manner of much humanities scholarship –  as doing so would be a really powerful way of encouraging more researchers in the humanities to actually publish open access. (Certainly, few of the arguments we currently use to persuade the humanities to publish OA have been particularly effective. So perhaps it’s time to try a different approach.)

For example, many humanities disciplines like to think of themselves as being politically engaged. Yet the humanities have something of a blind spot of their own when it comes to the politics of the academic publishing industries which actually make them possible – especially as those industries have become increasingly consolidated and profit-intensive in recent years.

In an article on the political economy of academic journal publishing in general, and that of cultural studies in particular, Ted Striphas provides the example of Taylor and Francis/Informa. Their list features over 60 cultural studies journals, among them some of the most highly respected in the field including Cultural Studies, Continuum: Journal of Media and Cultural Studies, Communication and Critical/Cultural Studies, Inter-Asia Cultural Studies, Feminist Media Studies, and Parallax. Yet many cultural studies scholars would be shocked to learn that one of Informa’s subsidiaries was recently working for the US Army to assess how well it ‘had achieved its goal of “battlefield digitization”.’ The US Air Force, meanwhile, used the same subsidiary to help improve its management systems for U-2 spy planes.    

Which is not to say there’s something inherently immoral about the armed forces – just that scholars may want to be critically informed about their publishers’ financial links and connections; especially if those scholars are publishing research, say, criticising military intervention in Iraq or Afghanistan.

I realise it's unfair to single cultural studies out like this; it's not the only humanities field to have such a blind spot. What makes cultural studies' naivety so noteworthy is the way it prides itself on being a 'serious' political project, as Stuart Hall puts it. According to Hall, the political cultural studies intellectual has a responsibility to 'know more' than those on the other side. Indeed, it's precisely this political aspect that singles cultural studies out from other fields of thought, for Hall, and helps to establish the difference of its identity as cultural studies: the fact that 'there is something at stake in cultural studies in a way that I think, and hope, is not exactly true of many other very important intellectual and critical practices', he writes. But if so, then as far as Striphas is concerned, this injunction has to include knowing more about 'the formidable network of social, economic, legal, and infrastructural linkages to the publishing industry that sustains' cultural studies and its politically engaged intellectuals, and shapes the conditions in which their knowledge and research 'can – and increasingly cannot – circulate'. To this end Striphas stresses the importance of always scratching below the surface to discover 'just who the corporate parents and siblings' of those academic journals we publish in are, and what other activities they are involved with.

As someone who identifies with cultural studies to a large extent, I've long found it significant that cultural studies intellectuals, who otherwise appear so keen to wear their political commitment on their sleeves, are noticeably less keen when it comes to interrogating their own politico-institutional practices. The relative lack of interest the majority of the field have shown to date in making their own research available OA is a case in point. And, certainly, I think highlighting the politics of their publishing practices would be an effective way of persuading many in the humanities – and cultural studies in particular - to engage with open access.

1.    For one thing, it’d mean OA wouldn’t appear so tame, so institutional, managerial and bureaucratic;

2.    For another, scratching below the surface like this would offer an additional means of tackling the problem whereby OA scholar-published journals, operating independently of the profit-intensive conglomerates, are often regarded in the humanities as less desirable places to publish.

We’ve already seen how OHP is specifically designed to address this issue. But could we not level the playing field even further, simply by asking where the money is coming from to fund the more ‘professionally run’ journals, not to mention what other activities their parent companies are connected to? Would doing so not have the effect of turning the very financial independence of many small-scale journal publishers, from a potential weakness, into a source of strength and credibility? Not least because it means they’re far less likely to be owned by a publisher whose parent company is involved in activities that many academics, if they knew about them, would not feel comfortable about continuing to donate their time and labour to support.

This is why I want to suggest that we, as a community of academics, authors, editors, publishers, librarians and so on, establish an initiative whereby all academic editors and publishers are asked to make freely available, on an annual basis, details of both their sources of income and funding, and all the sources of financial income and support pertaining to the journals they run. Furthermore, as part of this initiative, I suggest we set up an equivalent directory to the Directory of Open Access Journals (here at Lund)  - only in this case documenting all these various sources of income and support, together with information as to who the owners of the different academic journals in our respective fields are and, just as importantly, the other divisions, subsidiaries and activities of their various organisations, companies, and associations.  
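To make the shape of such a directory a little more concrete, here is a minimal sketch of what a single record in it might hold. Every field name, and the example entry itself, is hypothetical, invented purely to illustrate the kind of information the initiative would collect.

```python
# Hypothetical sketch of one entry in the proposed disclosure directory.
# All fields and values are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class JournalDisclosure:
    title: str
    publisher: str
    parent_company: str
    income_sources: list[str] = field(default_factory=list)
    other_subsidiaries: list[str] = field(default_factory=list)
    year_reported: int = 2009

example = JournalDisclosure(
    title="Example Journal of Cultural Theory",
    publisher="Example Press",
    parent_company="Example Holdings plc",
    income_sources=["library subscriptions", "advertising"],
    other_subsidiaries=["defence consultancy division"],
)
```

Even a flat, self-reported record of this kind would let scholars search a field's journals by parent company or income source – precisely the 'scratching below the surface' Striphas calls for.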

I should stress I’m not suggesting that all corporately owned journals are the politically co-opted tools of global capitalism, while the smaller independent journals or those published on a non-profit basis by learned societies, scholarly associations and university presses somehow escape all this. Despite the possible implications of the word ‘full’, it’s not my intention to imply that anyone can be sufficiently outside of the forces of global capital to be politically and ethically ‘pure’ in this respect. None of this has emerged out of a sense of moralism on my part. Some of my best friends are editors of journals published and owned by corporate presses.

(Again, Marta Brunner makes an interesting and important point here, to the effect that: ‘many of us who work in public universities are already implicated by the ties of our institutions (e.g. to the military, to defence labs) that pay our salaries and therefore would also be paying for our open access publishing, to a certain extent, given... the volunteer economy of humanities-based OA’ - Marta Brunner, personal correspondence.)

Nevertheless, such an ‘Open Scholarship Full Disclosure Initiative’ would be of great assistance, I believe, in furnishing researchers, in all areas, with the knowledge to make responsible political decisions as to whom they wish to publish and work with. For instance, as a result of the information obtained some scholars may take a decision not to subscribe to, publish in, edit, peer review manuscripts or otherwise work for journals owned by multinationals involved in supporting the military; or that have particularly high library subscription charges;  or that refuse to endorse, as a bare minimum, the self-archiving by authors of the refereed and accepted final drafts of their articles in institutional open access repositories. (Or they may of course decide that none of these issues are of a particular concern to them and continue with their editorial and peer-review activities as before.)

But I also believe it'll go a long way toward encouraging those in the humanities to become more aware of their interdependence as scholars with the publishing industry, and of the need to become more politically involved in it; and consequently to see online journals – and OA journals especially - as attractive and desirable places to publish their work.

At the very least, I’m convinced such an initiative would encourage both the editors and publishers of journals, and the owners of journal publishers and their subsidiaries, to behave more responsibly in political terms. What’s more, it’d be capable of having an impact even if the editors and publishers of those journals produced by the large, international, for-profit presses refused to play ball and provide full disclosure themselves:

a.    because such an initiative would raise awareness of the politics of journal funding and ownership more generally;

b.    because those editors and publishers who don't provide full disclosure would risk appearing as if they have something to hide - especially since this initiative taps into current public discourse around freedom of information and open data;

c.    but it would also hopefully have the effect of encouraging more scholars to research where the funding of such journals comes from, who their parent companies, institutions and organisations are, and what other activities they are involved in and connected to; and to make the results widely known and easily accessible.

It’s also worth emphasising that such an initiative would not require a huge amount of time and effort. After all, ‘Reed Elsevier, Springer, Wiley-Blackwell, and Taylor & Francis/Informa... publish about 6,000 journals between them’.   So to cover 6,000 journals, or somewhere between a quarter and a fifth of all peer-reviewed journals, we only need to research and disclose details of four corporations!  

Wednesday, October 13, 2010

Affirmative media theory and the post-9/11 world (part 2)

(The following is a slightly revised version of a text first published on 21 September, 2010, by the Creative Research Centre at Montclair State University. Part 1 of 'Affirmative Media Theory and the Post-9/11 World', again first published by the Creative Research Centre, is available below.)

 

To be sure, there's something seductive about the thought of producing the kind of big idea or constructive theoretical discourse that is able to capture and explain how the world has changed and become a different place after 9/11. Let's take just the most frequently rehearsed of those examples with which we are regularly confronted: that the awful events at the World Trade Center and Pentagon on that day in 2001 are connected to the 'war on terror', the 'axis of evil', the 'clash of civilizations', the introduction of the PATRIOT Act, the wars in Afghanistan and Iraq, the abuses in Abu Ghraib, indefinite detention at Guantanamo Bay, the so-called 'global economic crisis' that began in 2008, the election of Barack Hussein Obama in 2008, the continuing debate over the place of Muslims in US society - even the 'return to the Real' after the apparent triumph of (postmodern theories of) the society of the spectacle, the simulacrum and the hyper-real.

Yet when it comes to deciding how to respond to events and narratives of this sort – which we must, no matter how much and how often they are framed as being ‘self-evident’ – do we not also need to ask: why do big ideas and constructive theoretical discourses appear so compelling and refreshing at the moment, in these circumstances in particular? What exactly is the nature of this sense of frustration and fatigue with thinkers and theories – let’s not call them deconstructive – whose serious understanding of, and strenuous engagement with, antagonism, ambiguity, difference, hospitality, responsibility, singularity and openness, renders them wary of too easily dividing history into moments, movements, trends or turns, and cautious of creating strong, reconstructive, thirst-quenching philosophies of their own? From where does the desire spring for what are positioned, by way of contrast, as enabling and empowering systems of thought? Why here? Why now? And, yes, what is the effectivity of such ideas and discourses? What do they do? How can we be sure, for instance, that they don’t function primarily to replicate the forces of neoliberal capitalist globalisation?

To repeat: none of this is to claim big ideas and ‘constructive, explanatory’ discourses aren’t capable of being extremely interesting and important. Of course they are (especially in the hands of philosophers as consistently creative, challenging and sophisticated as Badiou, Hardt and Negri, Stiegler and Žižek). Yet how are we to decide if the idea of the post-9/11 world, persuasive though it may be, is viable, ‘capable of functioning successfully’, of being ‘able to live’ with the ‘enigma that is our life’, if this overarching concept is so easily incorporated – in these ‘particular circumstances’ especially – into inhospitable, violent, controlling discourses or totalizing theoretical explanations (or posturing displays of male power and intellect)?

Let me raise just a few of the most obvious issues that would need to be rigorously and patiently worked through:

How is the use of the ‘post’ in this prepositional phrase to be understood? Is it referring to that which comes afterwards in a linear process of historical progression? Is the post meant to indicate some sort of fundamental fracture, boundary or dividing line designed to separate the pre-9/11 world from what came afterwards? Or is the post being used here to draw attention to that which, in an odd, paradoxical way comes not just after but before, too, just as ‘post’ is positioned before ‘9/11 world’ in the phrase ‘post-9/11 world’? In other words, does post-9/11 mean a certain world has come to an end, or is it more accurate to think of 9/11 and what has happened since as a part of that world, as that world in the nascent state?  Is the concept of the post-9/11 world referring to the coming of a new world, or the process of rewriting some of the features of the old? 

What is meant by ‘9/11’? Whose 9/11? Which 9/11? Arundhati Roy, writing in September 2002, is able to locate a number of places around the world for which the 11th of September has long held significance:

Twenty-nine years ago, in Chile, on the 11th of September, General Pinochet overthrew the democratically elected government of Salvador Allende in a CIA-backed coup...

On the 11th September 1922, ignoring Arab outrage, the British government proclaimed a mandate in Palestine, a follow up to the 1917 Balfour Declaration [which]... promised European Zionists a national home for Jewish people...

It was on the 11th September 1990 that George W. Bush Sr., then President of the US, made a speech to a joint session of Congress announcing his Government’s decision to go to war against Iraq.

(Arundhati Roy, ‘Come September’, in The Algebra of Infinite Justice (London: Flamingo, 2002), pp. 280, 283, 288-289.)

Of course your website indicates that by 9/11 you mean the terrible attacks on the World Trade Center in New York in 2001. I have no wish to detract from the pain and suffering associated with those events. The question arises nonetheless: on what basis can we take the decision to single out and privilege those tragic events over and above the others Roy identifies that also took place on 9/11? How can we do so, and how can we speak of what you refer to as a ‘post-9/11 generation’, without being complicit in those processes by which the attacks in New York have already been appropriated by a range of social, political, economic, ideological, cultural and aesthetic discourses for reasons to do with security, surveillance, biopolitics, justifying the wars in Afghanistan and Iraq and so on (discourses which can make the experience of writing about 9/11 fraught, to say the least)?

This is not to imply a decision to privilege 9/11/01 can’t be made. It’s merely to point out that such questions need to be addressed if this decision is to be taken responsibly, and if the implications of doing so for the ways in which we teach and write and act are to be assumed and endured.

As for the last part of this phrase (you’ll have gathered there’s nothing ‘inherently’ viable about this concept for me), is it possible to begin to creatively think and imagine using the idea of a post-9/11 ‘world’ without universalizing a singularly US set of events? After all, even the formulation 9/11, with its echo of 911, seems very North American: in the UK we often tend to refer to September 11.

Yes, the Twin Towers were a symbol of World Finance Capital. Yes, the attacks on them were mediated around the world in ‘real time’. Yes, an article in Le Monde published the next day declared ‘We are all Americans! We are all New Yorkers’. (Has the phrase ‘post-9/11 world’ been chosen deliberately to draw attention to American-led neoliberal globalisation? It’s certainly difficult to propose alternatives to either with regard to the world’s social imaginary without risking being made to appear fanatical or extremist.) Nevertheless, on what basis can we justify totalizing or globalizing these specific events in this manner? And how can we do so without inscribing 9/11 in the logic of evaluation inherent to neoliberalism’s audit culture (‘in the sense that the Holocaust’s singularity and horror would “equal” that of 9/11’ perhaps, but that of Hurricane Katrina or the Deepwater Horizon oil rig explosion would not); or participating in the way 9/11 has often been made to overshadow other world historical events in the mythic imaginary: the dropping of the atomic bomb on Hiroshima on 6 August, 1945; Nixon’s decoupling of the US dollar from the gold standard in 1971 (which can be seen as one of the roots of the current economic crisis); the gas disaster at the Union Carbide factory in Bhopal on December 2-3, 1984; the 1999 alter-globalisation protests in Seattle; the 2003 invasion of Iraq, recently described by the ex-head of MI5 in the UK as having ‘radicalised a whole generation of young people... who saw our involvement in Iraq... as being an attack on Islam’ – and that’s to name only those events that come most readily to mind?

Even if we confine ourselves to acts of non-state terrorism, there’s the Oklahoma City bombing of 19/4, 1995; the Madrid bombing of 3/11, 2004; London 7/7; and the attacks in Mumbai of November 2008.  Why would we not try to creatively think and imagine using the concept of a post-2-3/12 world? A post-19/4 world? A post-7/7 world?

Whose post-9/11 world is this exactly? Who wants this post-9/11 world?