
Wednesday, Nov 24, 2010

On the limits of openness II: from open access to open data

In ‘On the limits of openness I’ (see below), I argued that in order to gain an appreciation of what the humanities can become in an era of digital media technology, we would be better advised to turn for assistance, not to computing science, but to the writers, poets, historians, literary critics, theorists and philosophers of the humanities. Let me explain what I mean.

Thirty years ago the philosopher Jean-François Lyotard was able to show how science, lacking the resources to legitimate itself as true, had, since its beginnings with Plato, relied for its legitimacy on precisely the kind of knowledge it did not even consider to be knowledge: non-scientific narrative knowledge. Specifically, science legitimated itself by producing a discourse called philosophy. It was philosophy’s role to generate a discourse of legitimation for science. Lyotard proceeded to define as modern any science that legitimated itself in this way by means of a metadiscourse which explicitly appealed to a grand narrative of some sort: the life of the spirit, the Enlightenment, progress, modernity, the emancipation of humanity, the realisation of the Idea, and so on.

What makes Lyotard’s analysis so significant with respect to the emergence of the digital humanities and the computational turn is that his intention was not to position philosophy as being able to tell us as much about science as science itself can, if not more. It was rather to emphasize that, in a process of transformation that had been taking place since at least the end of the 1950s, such long-standing metanarratives of legitimation had themselves now become obsolete.

So what happens to science when the philosophical metanarratives that legitimate it are no longer credible?   Lyotard’s answer, at least in part, was that science was increasing its connection to society, especially the instrumentality and functionality of society (as opposed to a notion of, say, ‘public service’). Science was doing so by helping to legitimate the power of States, companies and multinational corporations by optimizing the relationship ‘between input and output’, between what is put into the social system and what is got out of it, in order to get more from less. ‘Performativity’, in other words.

It is at this point that we return directly to the subject of computers and computing. For Lyotard, writing in 1979, technological transformations in research and the transmission of acquired learning in the most highly developed societies, including the widespread use of computers and databases and the ‘miniaturization and commercialization of machines’, were already in the process of exteriorizing knowledge in relation to the ‘knower’. Lyotard saw this general transformation and exteriorization as leading to a major alteration in the status and nature of knowledge: away from a concern with ‘the true, the just, or the beautiful, etc.’, with ideals, with knowledge as an end in itself, and precisely toward a concern with improving the social system’s performance, its efficiency.  So much so that, for Lyotard:

The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements.

(Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Manchester: Manchester University Press, 1986) p.4)

Scroll forward 30 years and we do indeed find a lot of discourse in the sciences today taken up with exteriorizing knowledge and information in order to achieve ‘the best possible performance’ by eliminating delays and inefficiencies and solving technical problems. So we have John Houghton’s 2009 study showing that the open access academic publishing model championed most vociferously in the sciences, whereby peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, without the need to pay subscriptions either to publish or to (pay per) view it, is actually the most cost effective mechanism for scholarly publishing. Others have detailed at length the increases open access publishing and the related software make possible in the amount of research material that can be published, searched and stored, the number of people who can access it, the impact of that material, the range of its distribution, and the speed and ease of reporting and information retrieval, leading to what one of the leaders of the open access movement, Peter Suber, has described as ‘better metrics’.

One highly influential open access publisher, the Public Library of Science (PLoS), is, with their PLoS Currents: Influenza website, even experimenting with publishing scientific research online before it has undergone in-depth peer review. PLoS are justifying this experiment on the grounds that it enables ideas, results and data to be disseminated as rapidly as possible. But they are far from alone in making such an argument. Along with full, finished, peer-reviewed texts, more and more researchers in the sciences are making the email, blog, website or paper in which an idea is first expressed openly available online, together with any drafts, working papers, beta, pre-print or grey literature that have been produced and circulated to garner comments from peers and interested parties. Like PLoS, these scientists perceive doing so as a way of disseminating their research earlier and faster, and therefore increasing its visibility, use, impact, citation count and so on. They also regard it as a means of breaking down much of the culture of secrecy that surrounds scientific research, and as helping to build networks and communities around their work by in effect saying to others, both inside and outside the academy, ‘it’s not finished, come and help us with it!’ Such crowd-sourcing opportunities are in turn held up as leading to further increases in their work’s visibility, use, impact, citation counts, prestige and so on, thus optimizing the ratio between minimal input and maximum output still further.

Nor is it just the research literature itself that is being rendered accessible by scientists in this way. Even the data that is created in the course of scientific research is being made freely and openly available for others to use, analyse and build upon. Known as Open Data, this initiative is motivated by more than an awareness that data is the main research output in many fields. In the words of another of the leading advocates for open access, Alma Swan, publishing data online on an open basis bestows on it a ‘vastly increased utility’: digital data sets are ‘easily passed around’; they are ‘more easily reused’; and they contain more ‘opportunities for educational and commercial exploitation’.

Some academic publishers are viewing the linking of their journals to the underlying data as another of their ‘value-added’ services to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to help ward off the threat of disintermediation posed by the development of digital technology, which makes it possible for academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). In fact a 2009 JISC open science report identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’.

In a further move in this direction, all seven PLoS journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as ‘trade secrets’, these metrics measure which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, notes, ‘Star’ ratings, blog coverage, etc.

PLoS has positioned this programme as enabling science scholars to assess ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’, and they encourage readers to carry out their own analyses of this open data. Yet it is difficult not to see article-level metrics as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’, as Lyotard puts it; quantities, furthermore, that are produced more to be exchanged, marketed and sold – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ – than for their ‘use-value’.
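As a purely illustrative sketch of what such article-level rankings involve, consider the following Python fragment. The records, DOIs and field names here are invented for the example; they are not PLoS's actual metrics schema or API:

```python
# Hypothetical article-level metrics records. The DOIs, field names and
# figures are invented for illustration, not PLoS's actual data.
articles = [
    {"doi": "10.0000/demo.0001", "views": 5400, "citations": 12,
     "bookmarks": 31, "blog_posts": 2},
    {"doi": "10.0000/demo.0002", "views": 910, "citations": 48,
     "bookmarks": 5, "blog_posts": 0},
]

def rank_by(metric, records):
    """Order articles by a single indicator -- the kind of league table
    that turns scholarly work into 'quantities of information'."""
    return sorted(records, key=lambda r: r[metric], reverse=True)

# The 'best' article depends entirely on which indicator is chosen:
most_viewed = rank_by("views", articles)[0]["doi"]
most_cited = rank_by("citations", articles)[0]["doi"]
```

Note that in this toy example the two rankings disagree: which article comes out on top depends entirely on the indicator selected, which is one reason such metrics lend themselves to being exchanged and marketed rather than simply ‘used’.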

The requirement to have visibility, to show up in the metrics, to be measurable, nowadays encourages researchers to publish a lot and frequently. So much so that the peer-reviewed academic journal article has been positioned by some as having now assumed ‘a single central value, not that of bringing something new to the field but that of assessing the person’s research, with a view to hiring, promotion, funding, and, more and more, avoiding termination.’  In such circumstances, as Lyotard makes clear, ‘[i]t is not hard to visualize learning circulating along the same lines as money, instead of for its “educational” value or political (administrative, diplomatic, military) importance’. To the extent that it is even possible to say that, just as money has become a source of virtual value and speculation in the era of American-led neoliberal global finance capital, so too has education, research and publication. And we all know what happened when money became virtual.

Friday, Nov 19, 2010

On the limits of openness I: the digital humanities and the computational turn to data-driven scholarship

The digital humanities can be broadly understood as embracing all those scholarly activities in the humanities that involve writing about digital media and technology, and being engaged in processes of digital media production, practice and analysis: for example, developing new media theory, creating interactive electronic archives and literature, building online databases and wikis, producing virtual art galleries and museums, or exploring how various technologies reshape teaching and research. Yet this field - or, better, constellation of fields - is neither unified nor self-identical. If anything, the digital humanities are made up of a wide range of often conflicting attitudes, approaches and practices that are being negotiated and employed in a variety of different contexts.

In what follows my interest is not so much with the ongoing debate as to how precisely the digital humanities are to be defined and understood, but with an aspect of this emergent movement that appears to be becoming increasingly dominant. So much so that for some it is rapidly coming to stand in for, or be equated with, the digital humanities as a whole. This is the so-called ‘computational turn’ in the humanities.

The latter phrase has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields, including interactive information visualisation, statistical data analysis, science visualization, image processing, network analysis, and the management, manipulation and mining of data, are being increasingly used to produce new ways of approaching and understanding texts in the humanities.  Indeed, thanks to increases in computer processing power and its affordability over the last few years, together with the sheer amount of cultural material that is now available in digital form, number-crunching software is being applied to millions of humanities texts in this way.

Before going any further I want to make it clear that it is not my intention to equate this computational turn with the digital humanities per se. Even if the latter is sometimes known as Humanities Computing - or as a transition between the so-called ‘traditional humanities’ and Humanities Computing  - what is coming to be called the digital humanities and this computational turn in the humanities are not one and the same thing as far as I am concerned.

In fact, far from equating the digital humanities with the computational turn, I want to insist on the importance of maintaining a difference between them, certainly for any understanding of what the humanities can become in an era of digital media technology. For, to date (and I acknowledge it is still relatively early days), the traffic in this computational turn has been rather one-way. As the phrase suggests, it has primarily been about exploring what direct practical uses computer science can be to the humanities in terms of performing computations on sets and flows of data that are often so large that, in the words of the Digging Into Data Challenge, ‘they can be processed only using computing resources and computational methods’. In the main the concern has been with either digitizing ‘born analog’ humanities texts and artifacts, or gathering together ‘born digital’ humanities texts and artifacts - videos, websites, games, photography, sound recordings, 3D data - and then taking complex and often extremely large-scale data analysis techniques from computing science and related fields and applying them to these humanities texts and artifacts. So we have the likes of Dan Cohen and Fred Gibbs’s text mining of ‘the 1,681,161 books that were published in English in the UK in the long nineteenth century’ (according to Google at least); Lev Manovich and the Software Studies Initiative’s use of ‘software to analyze and visualize... 4535 Time magazine covers... 1074790 manga pages, and 1100+ 20th century feature films’; or Stefanie Posavec’s Literary Organism, which visualizes the structure of Part One of On the Road as a tree.
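To give a minimal, purely illustrative sense of the basic operation involved in such text mining, the following Python sketch counts word frequencies across a toy ‘corpus’ of two sentences. The corpus and the tokenization are my own stand-ins, not the actual methods used in the projects just mentioned:

```python
from collections import Counter
import re

def top_terms(texts, n=3):
    """Tokenize a corpus and return its n most frequent words: a toy
    version of the number-crunching applied to digitized texts."""
    counts = Counter()
    for text in texts:
        # Lowercase and split on runs of letters (and apostrophes).
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts.most_common(n)

# A stand-in corpus of two sentences rather than 1.6 million books.
corpus = [
    "The humanities turn to data, and data turns on the humanities.",
    "Data-driven scholarship counts words; words count for scholarship.",
]
```

Scaled up from two sentences to millions of volumes, this is essentially the move by which texts are rendered computable, whatever else the individual projects then do with the results.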

Yet just as interesting as what computer science has to offer the humanities, I believe, is the question of what the humanities - in both their digital and ‘traditional’ guises (assuming they can be distinguished in this way, which is by no means certain) - have to offer computer science; and, beyond that, what the humanities themselves can bring to the understanding of computing and the shaping of the digital. Do the humanities really need to draw quite so heavily on computer science to develop a sense of what they can be in an era of digital media technology? Along with a computational turn in the humanities, might we not also benefit from a humanities turn in our understanding of the computational and the digital?

To be sure, one of the interesting things about computer science is that, as Mark Poster pointed out some time ago, it was the first case where ‘a scientific field was established that focuses on a machine’ rather than on an aspect of nature or culture. Yet more interesting still is the way Poster was able to demonstrate that the relation to this machine in computer science is actually one of misrecognition, with the computer occupying ‘the position of the imaginary’ and being ‘inscribed with transcendent status’. This misidentification on the part of computer science has significant implications for our response to the computational turn. It suggests computer science is not all that well equipped to understand itself and its own founding object, let alone help the humanities with their relation to computing and the digital.

In fact, counter-intuitive though it may seem, if what we are seeking is an appreciation of what the humanities can become in an era of digital media technology and data-driven scholarship, we would be better advised to look for assistance elsewhere than primarily to computing science and engineering, science and technology, or even science in general. I almost hesitate to say it in the present political climate - although it is important to do so for precisely this reason - but we would be better off turning to the writers, poets, historians, literary critics, theorists and philosophers of the humanities right from the start.

Wednesday, Nov 10, 2010

‘Follow the money’: the political economy of open access in the humanities

(The following is a slightly revised version of a lecture given at the 1st Conference on Open Access Scholarly Publishing held by The Open Access Scholarly Publishers Association and the DOAJ/Lund University Libraries, Lund, Sweden, September 14-16, 2009.)


I’ve been asked to talk about the challenges and opportunities of publishing open access journals in the humanities; and to talk about the experiences of Open Humanities Press (OHP) in particular. I’m going to start by focusing on OHP, just to set the stage for some broader comments on publishing OA in the humanities.

Open Humanities Press was founded by Sigi Jöttkandt, David Ottina, Paul Ashton and myself as the first open-access publishing ‘house’ explicitly dedicated to critical and cultural theory. It was launched in May 2008 by an international group of scholars including Alain Badiou, Jonathan Culler, Stephen Greenblatt and Gayatri Spivak, in response to a perceived crisis whereby academic publishers in the area have cut back on the number of research-led titles they bring out, in order to focus on readers, introductions, text books and reference works; and libraries are finding it difficult to afford the research that is published, both books and journals. In this respect OHP’s mission is a deceptively simple one: to make leading works of critical and cultural thought freely and permanently available, on a worldwide basis, by publishing them open access.

In the first instance OHP consisted of a collective of already existing, high-quality, independent or ‘scholar published’ open access journals in philosophy, cultural studies, literary criticism and political theory. These include Culture Machine, Fibreculture, Vectors and Film-Philosophy. ‘Our feeling was that there were quite a few excellent open access peer-reviewed journals, but they weren't getting recognition because they were a bit isolated. By collecting the journals under a single banner we hoped to show both the humanities and the open access communities that there’s actually quite a bit of significant OA activity in the humanities’ (OHP Steering Group). 

OHP’s initial intention was to establish a reputation with its journals, before proceeding to tackle the more difficult problem of publishing book-length material open access: difficult because of issues of economics and prestige I’ll come on to in a moment. However, things have developed much faster than we anticipated and, almost by popular demand, we now have an OHP monograph project, run in collaboration with the University of Michigan Library’s Scholarly Publishing Office, UC-Irvine, UCLA Library, and the Public Knowledge Project, headed by John Willinsky at Stanford University. The idea of the monograph project is to move forward both open access publishing in the humanities, and the open access publishing of humanities monographs. And we’re launching our monograph project with five high-profile book series: New Metaphysics, eds Bruno Latour and Graham Harman; Unidentified Theoretical Objects, ed. Wlad Godzich; Critical Climate Change, eds Tom Cohen and Claire Colebrook; Global Conversations, ed. Ngugi wa Thiong'o; Liquid Books, eds Gary Hall and Clare Birchall.

So that’s Open Humanities Press.

Now, the first thing I want to say concerning the funding of open access in the humanities is that we need to recognize that the humanities have developed a very different set of professional cultures from the Science, Technology and Medicine fields (STMs), which have tended to dominate the discussion around OA so far. What works in the STMs doesn’t necessarily work in the humanities:

a.    open-access in the humanities continues to be dogged by the perception that online publication is somehow less ‘credible’ than print, and that it lacks rigorous standards of quality control. This leads to open access journals being regarded as less trustworthy and desirable places to publish; and as too professionally risky, for early career scholars especially;

b.    open access in the humanities is also caught uncomfortably between two stools. On the one hand, all the talk of publishing services, platforms and tools is just too geeky for many humanities scholars. On the other, compared to the likes of tactical media, mash-ups, internet piracy and hacktivism, open access is too tame, too institutional, managerial and bureaucratic for many of those in the humanities concerned with new media;

In the humanities, the message about the importance of open access also often comes from university managers and administrators; and, because there’s something of a clash of ideology between managers and academics, OA is not perceived as being radical or cool in the way peer-to-peer file sharing or even open source is;

c. one of the main models of funding open access in the sciences, author-side fees, is also not easily transferable to the humanities. Authors in the humanities are simply not used to paying to have their work published - even if it is a matter of just covering the cost of its production and processing and calling them ‘publication’ or ‘processing’ fees - and they associate doing so with vanity publishing. At present they’re also less likely to obtain the grants from either funding bodies or their institutions necessary to cover the cost of publishing author-pays. That the humanities receive only a fraction of the amount of government funding the STMs do only compounds the problem. As does the fact that higher rejection rates in the humanities, as compared to the STMs, mean any grants would have to be significantly larger. And that’s just to publish journal articles. Publishing books author-pays would be more expensive still.

So OHP, like most of the journals it encompasses, operates at the moment on a zero revenue, zero expenses basis. Any funding comes indirectly: via our institutions paying our salaries as academics. We’re simply using some of the time we’re given to conduct research to create open access publishing opportunities for others. Of course some academics may be given reduced teaching or administrative loads by their institutions for setting things like this up; others may have PhD students or graduate assistants they can get to do some of the work. (Another indirect source of funding worth mentioning occurs via our institutions sometimes paying for the hosting of content - my thanks to Marta Brunner for this point.) But I suspect most are just donating their time and energy to open access as a service to the profession because they believe in it. What’s more, as Sigi Jöttkandt has pointed out, ‘this largely volunteer effort is the norm rather than the exception’ when it comes to sustainable no-fee journal publishing in many humanities fields, in both OA and non-OA sectors.

Operating on a zero revenue, zero expenses basis like this can be a significant source of strength to many independent humanities journals and their publishers.  It makes it easier for them to publish highly specialised, experimental, inter- or trans-disciplinary research; research that does not always fit into the kind of neat disciplinary categories and divisions with which for-profit publishers like to order their lists, but which may nevertheless help to push scholarship in exciting new directions. It also makes it easier for such journals to publish research which, in challenging established disciplines, styles and frameworks, may fall between the different stools represented by the various academic departments, learned societies, scholarly associations, and research councils, but which may nevertheless help to push a field in exciting new directions and generate important new areas of inquiry.

Yet it can also be a potential weakness. It opens up many such scholar-published journals to being positioned as functioning on an amateur, shoe-string basis, almost as cottage industries. Compared to a journal produced by, say, a large, for-profit, corporately owned press, they’re far more vulnerable to being accused of being unable to sustain high academic standards in terms of their production, editing, copy-editing, proofing and peer reviewing processes. They’re also more vulnerable to being accused of being unable to maintain consistently high academic standards in terms of their long-term sustainability, the marketing and distribution services they can offer, their ability to be picked up by prestige-endowing indexes, and all the other add-on features they can provide such as journal archiving, contents alerts, word searches, discussion forums, etc. As I noted in Digitize This Book!, while this also applies to ‘independent’ print journals, it is especially the case with regard to online-only journals, the vast majority of which are ‘still considered too new and unfamiliar to have gained the level of institutional recognition required for them to be thought of as being “established” and “of known quality”.’

It’s precisely this perception of open access in the humanities that OHP is designed to counter by directly addressing these issues to ensure OA publishing, in certain areas of the humanities at least, meets ‘the levels of professionalism our peers expect from publications they associate with academic “quality”’.

I want to emphasize two points here:

1.    first, open access, as it’s been championed in the STMs, can’t simply be rolled out unproblematically into the humanities; and any attempt to do so is likely to face a number of significant challenges, as we’ve seen;  

2.    second, any attempt to develop OA in the humanities also needs to recognise that the humanities, in turn, are going to have an impact on open access.  So, contrary to the impression that’s given by most writing on this subject, it’s not just the humanities that are going to be fundamentally transformed by this process, via the development of OA journals and publishers such as OHP; open access is likely to undergo a significant transformation, too.

For instance, to my mind the open access movement quite simply has to place more emphasis on books than it has done to date. If it doesn’t, then its impact on the humanities will prove negligible, since it’s books published with esteemed international presses, rather than articles in high-ranking journals, that are still the ‘gold standard’ in many humanities fields.

But the humanities also have a long tradition of exposing and subverting many of the assumptions on which OA, as it’s been championed in the STMs, is based, including those associated with notions of writing, the text, the work and the author – to the point where the humanities and the sciences may actually be incommensurable in many respects.

Now radical differences of this sort often get played down at OA events such as this. Sure, we can have what Richard Poynder refers to as ‘bad tempered wrangles’ over relatively ‘minor issues’ such as ‘metadata, copyright, and distributed versus central archives’. But in the main the emphasis in the OA movement is on presenting a more or less unified front in the face of criticisms from governments, publishers, lobbyists and so forth, lest we provide them with further ammunition to attack open access, dilute our message, or otherwise distract ourselves from what we’re all supposed to agree is the main task at hand: the achievement of universal, free, online access to research. (Poynder, for example, speaks in terms of ‘working together for the common good’.) However, I’d maintain it’s important not to see the presence of such differences and conflicts as a purely negative thing - as it might be perceived, say, by those working in the liberal tradition, with its ‘rationalist belief in the availability of a universal consensus based on reason’.

In fact, if one of the impulses behind open access is to make knowledge and research – and with it society – more open and democratic, then I’d argue the existence of such dissensus will actually help in achieving this ambition. As the political philosopher Chantal Mouffe has shown, far from placing democracy at risk, a certain degree of difference and confrontation constitutes the very possibility of its existence. For Mouffe, ‘a well functioning democracy calls for a clash of legitimate democratic political positions’.

Speaking of metadata, this is one of the reasons why, in contrast to many in the OA community, I’ve maintained ‘that standards for preparing metadata should be generated in a plurality of different ways and places. Rather than adhering to the fantasy of having one single, fully integrated global archive... I’d argue instead for a multiplicity of different and at times conflicting and even incommensurable open-access archives, journals, databases and other publishing experiments.’ So I don’t see it as a problem or failing that, with so many multi-format information materials in existence, no one efficient means of searching across them all has yet been developed. For me, the fantasy of having one place to search for scholarship and research, such as a fully integrated, indexed and linked Global Archive, must remain precisely that: ‘a (totalizing and totalitarian) fantasy.’

None of which is to imply there can no longer be an OA community. It’s just to acknowledge that difference and conflict are what makes a community, and indeed the common, possible. We thus need to think the nature of community, of being together and holding something in common, a little differently. As the philosopher Jean-Luc Nancy asks:

What is a community? It is neither a macro-organism nor a big family... The common, having-in-common or being in common, excludes from itself interior unity, subsistence, and presence in and by itself. Being with, being together, and even being ‘united’ are precisely not a matter of being ‘one’. Of communities that are at one with themselves, there are only dead ones.

(Jean-Luc Nancy, A Finite Thinking, edited by Simon Sparks (Stanford, California: Stanford University Press, 2003) p.285)

To provide you with another example of how the humanities may come to shape OA: I’d argue that the willingness of the humanities to critically interrogate many of the assumptions on which OA is currently based can help the OA community to avoid that fate anticipated by the philosopher Jean-François Lyotard. In his 1979 book The Postmodern Condition, Lyotard contended that the widespread use of computers and databases, in exteriorizing knowledge in relation to the ‘knower’, was producing a major alteration in the status and nature of knowledge, away from questions of what is socially just and scientifically true and toward a concern simply with ‘optimizing the system’s performance’. Thirty years later, a lot of OA conferences and debates are indeed taken up with showing how the externalisation of knowledge in online journals and archives can be used to make the existing system of academic research and publication much more efficient. So we have John Houghton’s study showing that OA is actually the most cost-effective mechanism for scholarly publishing; while others have discussed at length the increases open access and related software make possible – in the amount of material that can be published and stored, the number of people who can have access to it, the impact of that material, the range of distribution, the speed and ease of reporting and information retrieval – leading to what Peter Suber earlier called ‘better metrics’, reductions in staffing, production and reproduction costs, and so on.

(Incidentally, I wonder if this doesn’t partly explain why quite a few people associated with OA have a somewhat grumpy, ‘dogmatic’ public persona:

I mean, if they moralistically believe they already know the optimum way to achieve universal open access, and thus maximize the performance of the existing system of research – be it via interoperable institutional repositories or whatever - then presumably they can often only act negatively, to correct the delays, errors and inefficiencies they perceive in the ideas of others.)

Now the humanities could, I think, help prevent the OA movement from becoming even more moralistically and dogmatically obsessed with maximising performance, solving technical problems and eliminating inefficiencies than it already is. (The attempt to avoid slipping into such technical discourse is just one reason why, elsewhere, I haven’t gone into the practical, ‘nuts and bolts’ side of publishing open access.) At the same time, the humanities could help the OA community to grow, precisely by forcing scholars to confront issues of politics and social justice, in the manner of much humanities scholarship – as doing so would be a really powerful way of encouraging more researchers in the humanities to actually publish open access. (Certainly, few of the arguments we currently use to persuade the humanities to publish OA have been particularly effective. So perhaps it’s time to try a different approach.)

For example, many humanities disciplines like to think of themselves as being politically engaged. Yet the humanities have something of a blind spot of their own when it comes to the politics of the academic publishing industries which actually make them possible – especially as those industries have become increasingly consolidated and profit-intensive in recent years.

In an article on the political economy of academic journal publishing in general, and that of cultural studies in particular, Ted Striphas provides the example of Taylor & Francis/Informa. Its list features over 60 cultural studies journals, among them some of the most highly respected in the field, including Cultural Studies, Continuum: Journal of Media and Cultural Studies, Communication and Critical/Cultural Studies, Inter-Asia Cultural Studies, Feminist Media Studies, and Parallax. Yet many cultural studies scholars would be shocked to learn that one of Informa’s subsidiaries was recently working for the US Army to assess how well it ‘had achieved its goal of “battlefield digitization”.’ The US Air Force, meanwhile, used the same subsidiary to help improve its management systems for U-2 spy planes.

Which is not to say there’s something inherently immoral about the armed forces – just that scholars may want to be critically informed about their publishers’ financial links and connections; especially if those scholars are publishing research, say, criticising military intervention in Iraq or Afghanistan.

I realise it’s unfair to single cultural studies out like this; it’s not the only humanities field to have such a blind spot. What makes cultural studies’ naivety so noteworthy is the way it prides itself on being a ‘serious’ political project, as Stuart Hall puts it. According to Hall, the political cultural studies intellectual has a responsibility to ‘know more’ than those on the other side. Indeed, it’s precisely this political aspect that singles cultural studies out from other fields of thought, for Hall, and helps to establish the difference of its identity as cultural studies: the fact that ‘there is something at stake in cultural studies in a way that I think, and hope, is not exactly true of many other very important intellectual and critical practices’, he writes. But if so, then as far as Striphas is concerned, this injunction has to include knowing more about ‘the formidable network of social, economic, legal, and infrastructural linkages to the publishing industry that sustains’ cultural studies and its politically engaged intellectuals, and shapes the conditions in which their knowledge and research ‘can – and increasingly cannot – circulate’. To this end Striphas stresses the importance of always scratching below the surface to discover ‘just who the corporate parents and siblings’ of those academic journals we publish in are, and what other activities they are involved with.

As someone who identifies with cultural studies to a large extent, I’ve long found it significant that cultural studies intellectuals, who otherwise appear so keen to wear their political commitment on their sleeves, are noticeably less keen when it comes to interrogating their own politico-institutional practices. The relative lack of interest the majority of the field have shown to date in making their own research available open access is a case in point. And, certainly, I think highlighting the politics of their publishing practices would be an effective way of persuading many in the humanities – and cultural studies in particular – to engage with open access.

1.    For one thing, it’d mean OA wouldn’t appear so tame, so institutional, managerial and bureaucratic;

2.    For another, scratching below the surface like this would offer an additional means of tackling the problem whereby scholar-published OA journals, operating independently of the profit-intensive conglomerates, are often regarded in the humanities as less desirable places to publish.

We’ve already seen how OHP is specifically designed to address this issue. But could we not level the playing field even further, simply by asking where the money is coming from to fund the more ‘professionally run’ journals, not to mention what other activities their parent companies are connected to? Would doing so not have the effect of turning the very financial independence of many small-scale journal publishers, from a potential weakness, into a source of strength and credibility? Not least because it means they’re far less likely to be owned by a publisher whose parent company is involved in activities that many academics, if they knew about them, would not feel comfortable about continuing to donate their time and labour to support.

This is why I want to suggest that we, as a community of academics, authors, editors, publishers, librarians and so on, establish an initiative whereby all academic editors and publishers are asked to make freely available, on an annual basis, details of both their sources of income and funding, and all the sources of financial income and support pertaining to the journals they run. Furthermore, as part of this initiative, I suggest we set up an equivalent directory to the Directory of Open Access Journals (here at Lund)  - only in this case documenting all these various sources of income and support, together with information as to who the owners of the different academic journals in our respective fields are and, just as importantly, the other divisions, subsidiaries and activities of their various organisations, companies, and associations.  

I should stress I’m not suggesting that all corporately owned journals are the politically co-opted tools of global capitalism, while the smaller independent journals or those published on a non-profit basis by learned societies, scholarly associations and university presses somehow escape all this. Despite the possible implications of the word ‘full’, it’s not my intention to imply that anyone can be sufficiently outside of the forces of global capital to be politically and ethically ‘pure’ in this respect. None of this has emerged out of a sense of moralism on my part. Some of my best friends are editors of journals published and owned by corporate presses.

(Again, Marta Brunner makes an interesting and important point here, to the effect that: ‘many of us who work in public universities are already implicated by the ties of our institutions (e.g. to the military, to defence labs) that pay our salaries and therefore would also be paying for our open access publishing, to a certain extent, given... the volunteer economy of humanities-based OA’ - Marta Brunner, personal correspondence.)

Nevertheless, such an ‘Open Scholarship Full Disclosure Initiative’ would be of great assistance, I believe, in furnishing researchers, in all areas, with the knowledge to make responsible political decisions as to whom they wish to publish and work with. For instance, as a result of the information obtained some scholars may take a decision not to subscribe to, publish in, edit, peer review manuscripts for or otherwise work for journals owned by multinationals involved in supporting the military; or that have particularly high library subscription charges; or that refuse to endorse, as a bare minimum, the self-archiving by authors of the refereed and accepted final drafts of their articles in institutional open access repositories. (Or they may of course decide that none of these issues are of particular concern to them and continue with their editorial and peer-review activities as before.)

But I also believe it’ll go a long way toward encouraging those in the humanities to become more aware of their dependence as scholars on the publishing industry, and the need to become more politically involved in it; and consequently to see online journals – and OA journals especially – as attractive and desirable places to publish their work.

At the very least, I’m convinced such an initiative would encourage both the editors and publishers of journals, and the owners of journal publishers and their subsidiaries, to behave more responsibly in political terms. What’s more, it’d be capable of having an impact even if the editors and publishers of those journals produced by the large, international, for-profit presses refused to play ball and provide full disclosure themselves:

a.    because such an initiative would raise awareness of the politics of journal funding and ownership more generally;

b.    because those editors and publishers who don't provide full disclosure would risk appearing as if they have something to hide - especially since this initiative taps into current public discourse around freedom of information and open data;

c.    because it would also, hopefully, have the effect of encouraging more scholars to research where the funding of such journals comes from, who their parent companies, institutions and organisations are, and what other activities they are involved in and connected to; and to make the results widely known and easily accessible.

It’s also worth emphasising that such an initiative would not require a huge amount of time and effort. After all, ‘Reed Elsevier, Springer, Wiley-Blackwell, and Taylor & Francis/Informa... publish about 6,000 journals between them’.   So to cover 6,000 journals, or somewhere between a quarter and a fifth of all peer-reviewed journals, we only need to research and disclose details of four corporations!  

Wednesday
Oct132010

Affirmative media theory and the post-9/11 world (part 2)

(The following is a slightly revised version of a text first published on 21 September, 2010, by the Creative Research Centre at Montclair State University. Part 1 of 'Affirmative Media Theory and the Post-9/11 World', again first published by the Creative Research Centre, is available below.)

 

To be sure, there’s something seductive about the thought of producing the kind of big idea or constructive theoretical discourse that is able to capture and explain how the world has changed and become a different place after 9/11. Let’s take just the most frequently rehearsed of those examples with which we are regularly confronted: that the awful events at the World Trade Center and Pentagon on that day in 2001 are connected to the ‘war on terror’, the ‘axis of evil’, the ‘clash of civilizations’, the introduction of the PATRIOT Act, the wars in Afghanistan and Iraq, the abuses in Abu Ghraib, indefinite detention at Guantanamo Bay, the so-called ‘global economic crisis’ that began in 2008, the election of Barack Hussein Obama in 2008, the continuing debate over the place of Muslims in US society – even the ‘return to the Real’ after the apparent triumph of (postmodern theories of) the society of the spectacle, the simulacrum and the hyper-real.

Yet when it comes to deciding how to respond to events and narratives of this sort – which we must, no matter how much and how often they are framed as being ‘self-evident’ – do we not also need to ask: why do big ideas and constructive theoretical discourses appear so compelling and refreshing at the moment, in these circumstances in particular? What exactly is the nature of this sense of frustration and fatigue with thinkers and theories – let’s not call them deconstructive – whose serious understanding of, and strenuous engagement with, antagonism, ambiguity, difference, hospitality, responsibility, singularity and openness, renders them wary of too easily dividing history into moments, movements, trends or turns, and cautious of creating strong, reconstructive, thirst-quenching philosophies of their own? From where does the desire spring for what are positioned, by way of contrast, as enabling and empowering systems of thought? Why here? Why now? And, yes, what is the effectivity of such ideas and discourses? What do they do? How can we be sure, for instance, that they don’t function primarily to replicate the forces of neoliberal capitalist globalisation?

To repeat: none of this is to claim big ideas and ‘constructive, explanatory’ discourses aren’t capable of being extremely interesting and important. Of course they are (especially in the hands of philosophers as consistently creative, challenging and sophisticated as Badiou, Hardt and Negri, Stiegler and Žižek). Yet how are we to decide if the idea of the post-9/11 world, persuasive though it may be, is viable, ‘capable of functioning successfully’, of being ‘able to live’ with the ‘enigma that is our life’, if this overarching concept is so easily incorporated – in these ‘particular circumstances’ especially – into inhospitable, violent, controlling discourses or totalizing theoretical explanations (or posturing displays of male power and intellect)?

Let me raise just a few of the most obvious issues that would need to be rigorously and patiently worked through:

How is the use of the ‘post’ in this prepositional phrase to be understood? Is it referring to that which comes afterwards in a linear process of historical progression? Is the post meant to indicate some sort of fundamental fracture, boundary or dividing line designed to separate the pre-9/11 world from what came afterwards? Or is the post being used here to draw attention to that which, in an odd, paradoxical way comes not just after but before, too, just as ‘post’ is positioned before ‘9/11 world’ in the phrase ‘post-9/11 world’? In other words, does post-9/11 mean a certain world has come to an end, or is it more accurate to think of 9/11 and what has happened since as a part of that world, as that world in the nascent state?  Is the concept of the post-9/11 world referring to the coming of a new world, or the process of rewriting some of the features of the old? 

What is meant by ‘9/11’? Whose 9/11? Which 9/11? Arundhati Roy, writing in September 2002, is able to locate a number of places around the world for which the 11th of September has long held significance:

Twenty-nine years ago, in Chile, on the 11th of September, General Pinochet overthrew the democratically elected government of Salvador Allende in a CIA-backed coup...

On the 11th September 1922, ignoring Arab outrage, the British government proclaimed a mandate in Palestine, a follow up to the 1917 Balfour Declaration [which]... promised European Zionists a national home for Jewish people...

It was on the 11th September 1990 that George W. Bush Sr., then President of the US, made a speech to a joint session of Congress announcing his Government’s decision to go to war against Iraq.

(Arundhati Roy, ‘Come September’, The Algebra of Infinite Justice (London: Flamingo, 2002) pp. 280, 283, 288-289.)

Of course your website indicates that by 9/11 you mean the terrible attacks on the World Trade Center in New York in 2001. I have no wish to detract from the pain and suffering associated with those events. The question arises nonetheless: on what basis can we take the decision to single out and privilege those tragic events over and above the others Roy identifies that also took place on 9/11? How can we do so, and how can we speak of what you refer to as a ‘post-9/11 generation’, without being complicit in those processes by which the attacks in New York have already been appropriated by a range of social, political, economic, ideological, cultural and aesthetic discourses for reasons to do with security, surveillance, biopolitics, justifying the wars in Afghanistan and Iraq and so on (discourses which can make the experience of writing about 9/11 fraught, to say the least)?

This is not to imply a decision to privilege 9/11/01 can’t be made. It’s merely to point out that such questions need to be addressed if this decision is to be taken responsibly and the implications of doing so for the ways in which we teach and write and act assumed and endured.

As for the last part of this phrase (you’ll have gathered there’s nothing ‘inherently’ viable about this concept for me), is it possible to begin to creatively think and imagine using the idea of a post-9/11 ‘world’ without universalizing a singularly US set of events? After all, even the formulation 9/11, with its echo of 911, seems very North American: in the UK we often tend to refer to September 11.

Yes, the Twin Towers were a symbol of World Finance Capital. Yes, the attacks on them were mediated around the world in ‘real time’. Yes, an article in Le Monde published the next day declared ‘We are all Americans! We are all New Yorkers’. (Has the phrase ‘post-9/11 world’ been chosen deliberately to draw attention to American-led neoliberal globalisation? It’s certainly difficult to propose alternatives to either with regard to the world’s social imaginary without risking being made to appear fanatical or extremist.) Nevertheless, on what basis can we justify totalizing or globalizing these specific events in this manner? And how can we do so without inscribing 9/11 in the logic of evaluation inherent to neoliberalism’s audit culture (‘in the sense that the Holocaust’s singularity and horror would “equal” that of 9/11’ perhaps, but that of Hurricane Katrina or the Deepwater Horizon oil rig explosion would not); or participating in the way 9/11 has often been made to overshadow other world historical events in the mythic imaginary: the dropping of the atomic bomb on Hiroshima on 6 August, 1945; Nixon’s decoupling of the US dollar from the gold standard in 1971 (which can be seen as one of the roots of the current economic crisis); the gas disaster at the Union Carbide factory in Bhopal on December 2-3, 1984; the 1999 alter-globalisation protests in Seattle; the 2003 invasion of Iraq, recently described by the ex-head of MI5 in the UK as having ‘radicalised a whole generation of young people... who saw our involvement in Iraq... as being an attack on Islam’ – and that’s to name only those events that come most readily to mind?

Even if we confine ourselves to acts of non-state terrorism, there’s the Oklahoma City bombing of 19/4, 1995; the Madrid bombing of 3/11, 2004; London 7/7; and the attacks in Mumbai of November 2008.  Why would we not try to creatively think and imagine using the concept of a post-2-3/12 world? A post-19/4 world? A post-7/7 world?

Whose post-9/11 world is this exactly? Who wants this post-9/11 world?


Sunday
Sep192010

Affirmative media theory and the post-9/11 world (part 1)

(The following is a slightly revised version of a text first published on 2 September, 2010, by the Creative Research Centre at Montclair State University. Part II of 'Affirmative Media Theory and the Post-9/11 World', again first published by the Creative Research Centre, is now available above.)

 

Thank you for the invitation to contribute to your born-digital, dynamic, nimble, open-source, collaborative space at Montclair State University. I’m very happy to join the conversation of your Creative Research Centre and take part in your symposium, ‘The Uses of the Imagination in the Post-9/11 World’.

You’ve asked me to address ‘the inherent viability of the concept of the “post-9/11 world”’ and explain what this ‘over-arching concept’ means to me.  Perhaps you’ll forgive me, then, if I begin by telling you a little about my own research. This currently involves a series of born-digital, open, dynamic, collaborative projects I’m provisionally calling ‘media gifts’. Operating at the intersections of art, theory and new media, these gifts employ digital media to actualise critical and cultural theory. As such, their primary focus is not on building a picture of the world by establishing what something is and how it exists, before proclaiming, say, that we’ve moved from the closed spaces of disciplinary societies to the more spirit- or gas-like forces of the societies of control, as Gilles Deleuze would have it.

Instead, the projects I’ve been working on over the last few years – which include a ‘liquid book’, a series of internet television programmes, and an experiment that investigates some of the implications of internet piracy through the creation of an actual ‘pirate’ text – are instances of media and mediation that endeavour to produce the effects they name or things of which they speak.


The reason I wanted to start with these projects is because they function for me as a means of thinking through what it means to ‘do philosophy’ and ‘do media theory’ in the current theoretico-political climate.  I see them as a way of practicing an affirmative media theory or philosophy in which analysis and critique are not abandoned but take more creative, inventive and imaginative forms. The different projects in the series thus each in their own way experiment with the potential new media technologies hold for making affective, singular interventions in the here and now.


The possibility of philosophy today 

Having said that, I want to make it clear I’m not positioning the affirmative media theory I’m endeavouring to practice with these media gifts in a relation of contrast to earlier, supposedly less affirmative, theoretical paradigms.

 

(A desire to avoid positioning the affirmative media philosophy I’m attempting to practice in a relation of contrast to previous theoretical paradigms is one of the reasons I’ve taken the decision not to explicitly relate the media gifts series to the so-called affective turn. For an example of the latter, see Richard Grusin’s recent book on affect and mediality after 9/11, where he writes:

one of the attractions of affect theory is that it provides an alternative model of the human subject and its motivations to the post-structuralist psychoanalytic models favoured by most contemporary cultural and media theorists. Affectivity helps shift the focus from representation to mediation, deploying an ontological model that refuses the dualism built into the concept of representation. Affectivity entails an ontology of multiplicity that refuses what Bruno Latour has characterized as the modern divide, variously understood in terms of such fundamental oppositions as those between human and non-human, mind and the world, culture and nature, or civilization and savagery. Drawing on varieties of what Nigel Thrift calls ‘non-representational theory’, I concern myself with the things that mediation does rather than what media mean or represent.  

(Richard Grusin, Premediation: Affect and Mediality After 9/11 (Palgrave Macmillan: Basingstoke, 2010) p.7)

Another of my reasons for not relating the media gifts series to affect theory lies with the fact that, as I have already intimated, I’m not so interested in developing ontologies or ontological models of understanding the world.

Still another is that, just as such affect theory attempts to do away with oppositions and dualisms, so it simultaneously (and often unconsciously and unwittingly) seems to repeat and reinforce them – in the case of the passage from Grusin above, most obviously between before and after 9/11, between representational and non-representational theory, and between post-structuralist psychoanalytic models and affect theory itself. And that’s without even mentioning the way Grusin’s book is constantly concerned with providing a representation of the logics and practices of mediation after 9/11; and with explaining what things such as the global credit crunch mean in this context in a manner it’s frequently difficult to differentiate from the kind of cultural and media theory he positions his book as representing an alternative to:

remediation no longer operates within the binary logic of reality versus mediation, concerning itself instead with mobility, connectivity, and flow. The real is no longer that which is free from mediation, but that which is thoroughly enmeshed with networks of social, technical, aesthetic, political, cultural, or economic mediation. The real is defined not in terms of representational accuracy, but in terms of liquidity or mobility. In this sense the credit crisis of 2008 was a crisis precisely of the real – as the problem of capital that didn’t move, of credit that didn’t flow, was seen as both the cause and consequence of the financial crisis. In the hypermediated post-capitalism of the twenty-first century, wealth is not representation but mobility.
(Richard Grusin, ibid., p.3))

 

In a discussion with Alain Badiou that took place in New York in 2006, Simon Critchley constructs a narrative of this latter kind when describing the ‘overwhelmingly conceptually creative and also enabling and empowering’ nature of the former’s system of thought.  For Critchley, the current situation of theory is characterised, on the one hand, by ‘a sense of frustration and fatigue with a whole range of theoretical paradigms: paradigms having been exhausted, paradigms having been led into a cul-de-sac, of making promises that they didn’t keep or simply giving some apocalyptic elucidation to our sense of imprisonment’; and, on the other, by a ‘tremendous thirst for a constructive, explanatory and empowering theoretical discourse’. It’s a thirst that Badiou’s philosophy apparently goes some way toward quenching. It’s ‘refreshing’, Critchley declares.

This desire for constructive, explanatory and empowering theoretical discourses of the kind offered not just by Badiou, I would propose, but in their different ways by Michael Hardt and Antonio Negri, Bernard Stiegler, Slavoj Žižek, and others, too, is of course understandable. I can’t help wondering, though, if such discourses aren’t also a manifestation, to some degree at least, of what Germaine Greer has characterized as male display (although the books Greer is thinking of are Malcolm Gladwell’s Outliers and Levitt and Dubner’s Freakonomics, rather than Badiou’s Being and Event or volumes by the likes of Nicolas Bourriaud and Marc Augé that put forward theories of the altermodern and supermodernity):

 

Every week, either by snail mail or e-mail, I get a book that explains everything. Without exception, they are all written by men... There is no answer to everything, and only a deluded male would spend his life trying to find it. The most deluded think they have actually found it. ... Brandishing the ‘big idea’ is a bookish version of male display, and as such a product of the same mind-set as that behind the manuscripts that litter my desk. To explain is in some sense to control. Proselytizing has always been a male preserve. ... I would hope that fewer women have so far featured in the big-ideas landscape because, by and large, they are more interested in understanding than explaining, in describing rather than accounting for. Giving credence to a big idea is a way of permitting ourselves to skirt strenuous engagement with the enigma that is our life.

(Germaine Greer, in Germaine Greer, Andrew Lycett and John Douglas, ‘The Week in Books: The Male Desire for Explanation; the Real Quantum of Solace; and Merchandising Fiction’, The Guardian, 1 November, 2008)


Still, as I say, I can recognise the appeal of enabling and empowering theoretical discourses to a certain extent. It’s a different aspect of the current situation of theory as it’s glossed by Critchley I’m particularly concerned with here.

Critchley – who is himself the author of The Ethics of Deconstruction and co-author of Deconstruction and Pragmatism – is careful to name no names as to which exhausted theoretical paradigms he has in mind. But given that a ‘certain discourse, let’s call it deconstructive’, Critchley suggests, is also explicitly placed in a relation of contrast to Badiou’s ‘very different’ creative, constructive philosophy, I wonder if deconstruction is not at least part of what he is referring to?  If so, then I have to say I find it difficult to recognise deconstruction, and the philosophy of Jacques Derrida especially (with which the term deconstruction is most closely associated, and which is very important for me), in any description that opposes it to that which is conceptually creative, enabling, explanatory and empowering. Derrida’s thought is all of these things – although in a different way to Badiou’s philosophical system, it’s true.  The interest of Derrida and deconstruction lies with systems – including what Badiou, in the same discussion with Critchley, refers to as ‘the classical field of philosophy’ – but also with what destabilizes, disrupts, escapes, exceeds, interrupts and undoes systems. And this would apply to Badiou’s own system of thought (‘and this is a system’, Critchley points out). This doesn’t mean deconstruction can be positioned as ‘melancholic’, though, and contrasted to construction and ‘reconstruction’, as Critchley and Badiou would have it.

For all his interest in radical politics, theatre, poetry, cinema, mathematics, psychoanalysis and the question of love, there’s an intriguing return to philosophy, and with it a certain disciplinarity, evident in Badiou’s work (as opposed to the interdisciplinarity associated with cultural studies, say, or the trans-disciplinarity of your CRC). Badiou refers to this as being very much a philosophical decision on his part:

And finally my philosophical decision – there is always something like a decision in philosophy, there is not always continuity: you have to decide something and my decision was very simple and very clear. It was that philosophy was possible. It’s a very simple sentence, but in the context it was something new. Philosophy is possible in the sense that we can do something which is in the classical tradition of philosophy and nevertheless in our contemporary experience. There is in my condition no contradiction between our world, our concrete experiences, an idea of radical politics for example, a new form of art, new experiences in love, and the new mathematics. There is no contradiction between our world and something in the philosophical field that is finally not in rupture but assumes a continuity with the philosophical tradition from Plato to today.

And we can take one further step, something like that. So we have not to begin by melancholic considerations about the state of affairs of philosophy: deconstruction, end of philosophy, end of metaphysics, and so on. This vision of the history of thinking is not mine.  And so I have proposed – in Being and Event in fact – a new constructive way for philosophical concepts and something like a reconstruction – against deconstruction – of the classical field of philosophy itself.

(Alain Badiou, ‘“Ours Is Not A Terrible Situation”: Alain Badiou and Simon Critchley at Labyrinth Books’, New York, March 6, 2006)


Yet what kind of decision is actually being taken here? What is it based on or grounded in? How philosophical is this decision by Badiou? Couldn’t it be said that any decision to the effect that philosophy is possible, that a ‘reconstruction – against deconstruction – of the classical field of philosophy’ is possible, has to be taken by Badiou in advance of philosophy; and that his decision in favour of a ‘new constructive way for philosophical concepts’ therefore takes Badiou outside or beyond philosophy at precisely the moment he is claiming to have returned to or defended it? As such, doesn’t any such decision do violence not just to deconstruction but also to the classical tradition of philosophy?

These are questions that Derrida and deconstruction can help with. For Derrida’s philosophy is nothing if it is not a thinking of the impossible decision. As someone else associated with deconstruction, J. Hillis Miller, puts it:


Responsibility... must be, if it is to exist at all, always excessive, always impossible to discharge. Otherwise it will risk being the repetition of a program of understanding and action already in place… My responsibility in each reading is to decide and to act, but I must do so in a situation where the grounds of decision are impossible to know. As Kierkegaard somewhere says, ‘The moment of decision is madness’. The action, in this case, often takes the form of teaching or writing that cannot claim to ground itself on pre-existing knowledge or established tradition but is what Derrida calls ‘l’invention de l’autre’ [the invention of the other].

(J. Hillis Miller, in J. Hillis Miller and Manuel Asensi, Black Holes: J. Hillis Miller; or, Toward Boustrophedonic Reading (Stanford, California: Stanford University Press, 1999), p. 491)


From this perspective, what’s so helpful about Derrida’s thought is not that it disavows the possibility of taking a decision in favour of a reconstruction of the classical field of philosophy; it’s that Derrida enables us to understand how any such decision necessarily involves a moment of madness. This is important, because once we appreciate that the decision is the invention of the other, of the other in us, we can endeavour to assume, or better, endure ‘in a passion’, rather than simply act out, the implications of this realisation for the way we teach, write and act, in an effort to make the impossible decisions that confront us – including those concerning philosophy – as responsibly as possible.


The concept of the post-9/11 world

Why am I raising all this here, in response to your invitation to address ‘the inherent viability of the concept of the post-9/11 world’? I’m doing so because if Critchley is right and the current situation of theory is characterised by a thirst for constructive, explanatory and empowering theoretical discourses then, as I say, I can understand this. I can also appreciate that the concept of the ‘post-9/11 world’ may be of service in this context (including, perhaps, in terms of what Badiou refers to as the political name or poetic event). And, of course, it has already been adopted by some as a new means of historical periodisation. But as far as practising a creative, affirmative media theory or philosophy is concerned, it seems to me that deciding whether what you are referring to as the ‘over-arching’ concept of the post-9/11 world is ‘viable’ or not – in the sense in which my dictionary defines viable: as ‘being capable of functioning successfully, practicable’, as being ‘able to live in particular circumstances’ – is just such an impossible decision.