Monday, Oct 23, 2023

Creative AI - Thinking Outside The Box 

I'm currently experimenting - more or less playfully/piratically - with the concept of artificial creative intelligence collaboratively generated by Mark Amerika and GPT-2. In My Life as an Artificial Creative Intelligence, this is defined by Amerika/GPT-2 as ‘a human being who can think outside of the box’.
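To give a concrete sense of what such collaborative generation involves at a technical level, here is a minimal sketch of a human/GPT-2 call-and-response loop, written with the Hugging Face transformers library. The prompt and sampling settings are my own illustrative assumptions, not Amerika's actual working method or code.

```python
# A sketch of the human/GPT-2 call-and-response loop, using the
# Hugging Face transformers library. Illustrative only: the prompt
# and sampling settings are assumptions, not Amerika's setup.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the machine's riffing reproducible

prompt = "An artificial creative intelligence is"  # hypothetical opening line

# GPT-2 continues the human-written prompt; the human then curates,
# edits and re-prompts with the output, and the loop repeats.
for riff in generator(prompt, max_new_tokens=40,
                      num_return_sequences=3, do_sample=True):
    print(riff["generated_text"])
    print("---")
```

The point of the loop is that neither party writes alone: the human supplies and curates prompts, the model supplies continuations, and authorship sits somewhere in the relay between them.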

For me, such artificial creative intelligence (ACI) needs to include thinking outside of the masked black box that ontologically separates the human, its thought-processes and philosophies, from the nonhuman: be it plants, animals, the planet, the ... or indeed technologies such as generative AI.

The approach to AI of ACI is thus very different from that promoted by the various institutes for human-centered, -compatible or -inspired AI that have been established around the world, as well as from that put forward in recent work looking to ‘unmask’ the algorithmic biases of AI in order to safeguard the human (work which also functions to keep the human separate from its co-constitutive relation with the nonhuman whilst simultaneously maintaining the human's position at the heart of the world).

A snapshot illustration of such creative thinking can be provided with the help of two recent accounts of AI art. The first comes from a paper on ‘AI Art and Its Impact on Artists’ written by members of the Distributed AI Research Institute in collaboration with a number of artists. In this paper the human is set up by Harry H. Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers and Timnit Gebru in a traditional hierarchical dichotomy with the nonhuman machine that is artificial intelligence, through the robotic insistence that multimodal generative AI systems do not have agency and ‘are not artists’. Art is portrayed as a ‘uniquely human activity’. It is connected ‘specifically to human culture and experience’: those continually evolving ‘customs, beliefs, meanings, and habits, including those habits of aesthetic production, supplied by the larger culture’.

Declarations of human exceptionalism of this kind should perhaps come as no surprise. Certainly not when ‘AI Art and Its Impact on Artists’ derives its understanding of art and aesthetics in the age of AI in part from liberal, humanist figures who were writing in the first few decades of the 20th century: namely, the American philosopher and reformer of liberal arts education John Dewey, and the Englishman Clive Bell, a representative of Bloomsbury Group liberalism.

To be fair, Jiang et al. also refer in this context to several publications by contemporary scholars of Chinese, Japanese and Africana philosophy (although it is noticeable that the majority of these scholars are themselves located in Western nations). Still, liberal humanism holds its values to be universal (rather than, say, pluriversal), so nothing changes as a result: most philosophers of art and aesthetics argue that nonhuman entities are unable to be truly creative, according to Jiang et al. On this view, artists use ‘external materials or the body’ to make their lived experience present to an audience in an ‘intensified form’ through the development of a personal style that is authentic to them. It is an experience that is ‘unique to each human by virtue of the different cultural environments that furnish the broader set of habits, dispositions towards action, that enabled the development of anything called a personal style through how an individual took up those habits and deployed them intelligently’. Consequently, art cannot be performed by artifacts. Generative AI technologies ‘lack the kinds of experiences and cultural inheritances that structure every creative act’.

The second account of artificially intelligent art can be found in Joanna Zylinska’s book for Open Humanities Press, AI Art. It shows how human artists can be conceived more imaginatively – and politically – as themselves ‘having always been technical, and thus also, to some extent, artificially intelligent’. This is because technology, far from being external, is at the ‘heart of the heart’ of the human, its ‘“body and soul”’, in a relation of what Derrida and Stiegler term originary technicity or originary prostheticity. Or as Zylinska has it: ‘humans are quintessentially technical beings, in the sense that we have emerged with technology and through our relationship to it, from flint stones used as tools and weapons to genetic and cultural algorithms’. She even goes as far as to argue that the ethical choices we think we make as a result of deliberation on our part consist primarily of physical responses performed by ‘an "algorithm" of DNA, hormones and other chemicals’ that drives us to behave in particular ways.

How can this second ‘human-as-machine’ approach to artificially intelligent art be positioned as the more political of the two? (Doing so seems rather counter-intuitive given the critically engaged nature of the work of DAIR, Gebru et al.) Quite simply because, in destabilising the belief that art and culture stem from the creativity of self-identical, non-technological human individuals, and in opening up to an expanded notion of agency and intelligence that is not delimited by anthropocentrism (and so is not decided in advance: say, as that which is recognised by humans as agency and ‘intelligence’), such ACI presents an opportunity even more radical – in a non-liberal, non-neoliberal, non-moralistic sense – than the one Jiang et al. point to in ‘AI Art and Its Impact on Artists’.

Rooted as the latter is in the ‘argument that art is a uniquely human endeavor’, Jiang et al. advocate for new ‘sector and industry specific’ auditing, reporting and transparency proposals to be introduced for the effective regulation and governance of large-scale generative AI tools based on the appropriation of free labour without consent. (One idea often proposed is to devise either a legal or a technological means whereby artists can opt out of having their work used for training commercial machine learning systems like this. Alternatives involve incorporating a watermark or tags into AI-generated output for the purpose of distinguishing it from human-generated content. Some intellectual property experts have even suggested the introduction of a new legal framework, termed ‘learnright’, complete with laws designed to oversee the manner in which AI utilises content for self-training.) The aim is to orient these tools, together with the people and organisations that build them, toward the goal of enhancing human creativity rather than trying to ‘supplant it’. When it comes to the impact of AI on small-scale artists especially, the dangers of the latter, supplanting approach include loss of market share, income, credit and compensation, along with labour displacement and reputational damage, not to mention plagiarism and copyright infringement, at least as these are conventionally conceived by late-stage capitalism’s consumer culture. It is a list of earnings-related harms in keeping with their presentation of independent artists today – especially those who are neither financially self-sufficient nor able to support their practice by taking on other kinds of day jobs – as highly competitive microentrepreneurs. Witness the interest attributed to them by Jiang et al. in trading ‘tutorials, tools, and resources’, and in gaining sufficient visibility on social media platforms to be able to ‘build an audience and sell their work’.
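To make the technological side of these proposals concrete, here is a minimal sketch of the tagging idea, assuming Python and the Pillow imaging library. The file names and metadata fields are hypothetical illustrations, not a scheme actually specified by Jiang et al.

```python
# A sketch of the 'tag the output' proposal: writing a provenance
# label into a PNG's metadata with Pillow. Field names here are
# illustrative assumptions, not any endorsed standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("generated.png")  # hypothetical AI-generated image

meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")  # assumed model name

image.save("generated-tagged.png", pnginfo=meta)

# Reading the tag back to tell machine output from human work:
tagged = Image.open("generated-tagged.png")
print(tagged.text.get("ai-generated"))  # -> "true"
```

A tag of this kind is trivially stripped when the file is re-encoded or screenshotted, which is one reason such labelling proposals are often accompanied by calls for more robust, pixel-level watermarking and for platform-side enforcement.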

According to Demis Hassabis, chief executive of Google’s AI unit, we ought to approach the dangers posed by artificial intelligence with the same level of seriousness as we do the climate crisis, instituting a regulatory framework overseen initially by a body akin to the Intergovernmental Panel on Climate Change (IPCC), and subsequently, for the long term, by an organisation resembling the International Atomic Energy Agency (IAEA). Yet it is typical of those behind Big Tech to call for the regulation of the anticipated or hypothetical dangers that will be posed by large-scale foundational AI models at some point in the future, such as their ability to circumvent our control or render humanity extinct, rather than for actions that address the risks they represent to society right now. Obviously, the position of Google, Amazon, Microsoft et al. as the dominant businesses in the AI market – the latter both in its own right and as an investor in the capped-profit OpenAI – would be impacted far more if governments were to seriously adopt the latter approach rather than leaving it to voluntary self-regulation on their part. They would also be subject to greater competition and challenge if it wasn’t just Big Tech that was presented as having the computing power, money and technical expertise to deal with such existential concerns: if AI engines and their datasets had to be made available on an open source basis that makes it easier for a diverse range of smaller (and even non-profit) entities to be part of the AI ecosystem, for instance, and thus provide alternative visions of the future for both AI and the human.

Nevertheless, to convey a sense of the radical political potential of artificially creative intelligence, let us return to the example of the environmental crisis I provided previously in relation to Naomi Klein’s critique of the architects of generative AI. As we saw there, our romantic and extractive attitude toward the environment, which presents it – much as Jiang et al. do the work of artists in the face of AI – as either passive background to be protected or freely accessible Lockean resource to be exploited for wealth and profit, is underpinned by a modernist ontology based on the separation of human from nonhuman, culture from nature, alive from non-alive. It is this very ontology and the associated liberal, humanist values – which in their neoliberal form frequently include an emphasis on auditing, transparency and reporting, as we have seen – that artificial creative intelligence can help us to move beyond with its ability to think outside of the box.