Introduction

Artificial intelligence (AI) has become increasingly important in recent decades, coming to the fore in speech recognition, machine translation, medical diagnosis, video surveillance, gaming, assisted driving and so on. Artistic applications, though less numerous than those in other fields, have always offered a creative alternative use of the medium. Initially, however, artworks supported the idea of machines that can think and act, and then gradually explored the relationship with algorithms not necessarily conceived in human likeness. The latter trend has become particularly evident since the 2010s, in response to the widespread use of AI and the ethical questions it has raised. The different conceptions that AI has acquired have been reflected in the many ways in which artists have referred to its agency, consequently defining human-machine interaction and giving a concrete form to the algorithm. In what follows, three sections will summarise how intelligent software has been conceived and employed: as a projection of human cognition (i.e. a mirrored intelligence); as an embodied entity recalling human facets (i.e. an intelligent mirror); and as a computational agent with its own internal dynamics (i.e. an algorithmic process). These perspectives will be contextualised within the Western, particularly Anglo-Saxon, context in which AI was theorised and largely developed. I will define a possible art-historical path from the technical origins of the medium to the present, leaving aside the technical perspectives of the other cultures to which AI has spread (Hui 2016). Consequently, the analysis will be based on the human-machine dichotomy, reflecting the anthropocentric bias that originally shaped the concept of artificial intelligence. I will argue that authors have gradually relativised this dualism over the decades, opening the dialogue to autonomous computational agents and new ecosystems.

Various texts have addressed the relationship between AI and Western art: the chapters published by Manovich and Arielli discussed aesthetic issues related to AI, particularly in the visual arts (Arielli 2021; Manovich 2022a; Manovich 2022b; Arielli 2022; Manovich 2023); the issue of The Drama Review in which various authors addressed contemporary theatre applications (Morrison, Nyong’o & Roach 2019); the book by Pizzo, Lombardo and Damiano that looks at the impact of algorithms on interactive storytelling (2024); texts by Monteverdi and Dixon that began to contextualise the applications of AI in digital and multimedia performance (Dixon 2007; Monteverdi 2020; Monteverdi in press); and Birringer’s brief discussion of interactive dynamics (2008). None of these contributions, however, offers a historical overview. This paper will build on insights from these texts and attempt to reconstruct the intricate path that has led technological innovations and their narratives to meet in multimedia works. I will consider applications of ‘computer-generated art’, where pieces result ‘from a computer program being left to run by itself’ (Boden & Edmonds 2019: 34), thus not restricting the field to robotics, video games, virtual realities and other specific implementations. Hence, the concept of artificial intelligence, associated with computation and largely explored in cybernetics and computer science, will not be applied to ‘generative art’ in general, which here refers to autonomous systems not necessarily related to digital processing; to ‘computer-assisted art’, since in that case humans would theoretically be able to achieve similar results without machine assistance; or to ‘live media’, as this implies real-time processing not strictly related to generative outputs (Boden & Edmonds 2019; Galanter 2016).

Furthermore, AI agents will be considered specifically in their scenic and interactive role. The paper will deal with digital performance, thus also including live performing arts and gallery installations, inasmuch as the conjunction of computer technologies with live performance constitutes a central aspect of either the form or the content of the piece (Dixon 2007: x). Nevertheless, I will first discuss some non-live examples, given the scarcity of early cases before the widespread employment of AI. At the beginning of each section, a historical contextualisation will be given to show the overall background. The analysis will underline how the recontextualisation of features attributed to AI and the progressive emancipation from the human-centred perspective have led to a mutual relationality with the machine that is increasingly projected onto new ecologies of the medium.

Mirrored Intelligence: 1940s–1970s

The term ‘artificial intelligence’ was first used in the proposal for the Dartmouth Summer Research Project, attended by various scientists in 1956 to investigate the cognitive potential of machines. The meetings were based on ‘the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’ (McCarthy et al., 2006 [1955]: 12). This approach merged with the cybernetic perspective developed at the Macy Conferences, which proposed the application of new technologies to emulate the functions of living beings according to mathematical models (Wiener 1948). Data transmission, at first mediated by electrical impulses and then generally conceived as binary information (Shannon & Weaver 1949), was intended not only to create humanoid machines or human cyborgs, but also to study human behaviour through the unpredictable responses of automated systems. However, the analogy between machines and living beings proved to be far from straightforward and involved several speculations put forward by scientists and technicians. Consider, for example, the ‘electronic tortoises’ that Grey Walter built in the 1940s and 1950s. They resembled today’s robotic vacuum cleaners, equipped with wheels and motors to move and an electrical contact to detect collisions with objects encountered along the way. On top of the shell, which contained circuits and mechanics, there was also a rotating photoelectric cell: a kind of eye that searched for light sources and directed the robot’s movement towards them. These were very simple devices; nevertheless, the assumption that both the human brain and the circuits were based on electrical flows, feedback dynamics and autonomous behaviour led the engineer to attribute wilful purposes to the machine, such as speculation, discretion and self-recognition (Hayward 2001).

Both the analogy to human thought and the emulation of certain complex behaviours were in fact arbitrary attributions and, as such, related to cultural factors. At the same time, the decoupling of the thought-matter correlation led to the process of ‘disembodiment’, whereby information was ‘conceptualized as an entity separate from the material forms in which it is thought to be embedded’ (Hayles 1999: 2). Disembodiment served as a conceptual tool to promote cybernetics and AI studies, as when ‘information loses its body, equating humans and computers is especially easy, for the materiality in which the thinking mind is instantiated appears incidental to its essential nature’ (Hayles 1999: 2). This narrative was born of the belief that the machine could be moved by the same chemical and informational elements employed by humans. Until the 1980s, however, the possibility of replicating artificial neural connections inspired by the structure of the brain (widely used in today’s neural networks, discussed further below) was shelved due to technical and conceptual difficulties (Minsky & Papert 1969). Instead, the so-called ‘symbolic representation’, based on the mathematical description of human logical-cognitive abilities, was pursued. This model implies that ‘intelligence in natural and artificial systems is associated with the capability of storing and manipulating the information in terms of abstract “symbols” (representing, in many cases, some mental proxy associated with external physical objects) and on the capability of executing mental operations and calculations over such symbols’ (Lieto 2021: 4). Thinking by objects means describing to the machine what those objects represent and, consequently, defining the operations by which to interact with them. Such accurate descriptions referred to psychological – not neurophysiological – aspects and left little room for the generative and learning ability of the machine.
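To make the symbolic approach more concrete, a minimal sketch follows. The blocks-world facts and the single rule are hypothetical and purely illustrative; they are not drawn from any system cited here, but show how, in this paradigm, ‘intelligence’ amounts to the explicit manipulation of hand-coded symbols and if-then rules rather than to learning from data.

```python
# Illustrative sketch of the symbolic approach: knowledge is hand-coded as
# explicit facts and if-then rules, and 'reasoning' is the mechanical
# application of those rules over symbols. The blocks-world domain and the
# single rule are hypothetical, not taken from any system discussed above.

facts = {("on", "B", "A"), ("on", "A", "table"),
         ("on", "C", "table"), ("clear", "B"), ("clear", "C")}

def stacking_rule(facts):
    """If X and Y are both clear, derive the new symbol ('can_stack', X, Y)."""
    clear = [t[1] for t in facts if t[0] == "clear"]
    return {("can_stack", x, y) for x in clear for y in clear if x != y}

derived = stacking_rule(facts)
print(sorted(derived))  # [('can_stack', 'B', 'C'), ('can_stack', 'C', 'B')]
```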

Artworks: from human to software

The artistic field (in the United States as well as in Europe) moved in parallel with this context, becoming one of the many areas where the human-machine relationship was explored. Experimentation lay within a structuralist view, starting from the axiom that ‘creative thinking’ resides in ‘the educated guess or the hunch [that] includes controlled randomness in otherwise orderly thinking’ (McCarthy et al., 2006 [1955]: 14). In the algorithmic works produced in those years, a dichotomy was discernible between those focused on ‘interaction and response, translation across sensory-kinetic modes’ and those ‘generative, exploring aesthetics of permutation’ (Whitelaw 1998). The first case involved the translation of an input into an output (e.g. sound into light), where the randomness came from human action rather than machine processing. The second case, more akin to the present discussion, involved data reorganised by algorithms according to probabilistic rules: in musical compositions such as Lejaren Hiller’s Illiac Suite (1957) for string quartet, in which the author encoded certain musical parameters (pitch, rhythm, and dynamics) and assigned rules based on, for example, 16th-century counterpoint to create melodic lines (Hiller 1959); in textual works such as Theo Lutz’s Stochastische Texte (1959), in which the software, on the basis of a set of given words, rearranged sequences of logical-grammatical functions according to a certain probability of occurrence (Bajohr 2020); and in visual works such as Georg Nees’s Schotter (1968), in which the author placed a row of 12 squares at the top of the image and had the algorithm generate the rotation and position of the squares in subsequent rows according to an increasing disorder (Harmon 2021).
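The principle of increasing disorder attributed to Schotter can be rendered as a short sketch. This is not Nees’s original program but a hypothetical reconstruction of the idea: each successive row of squares receives a larger random perturbation of rotation and position.

```python
# Hypothetical reconstruction of a Schotter-like process: the amount of random
# rotation and displacement applied to each square grows row by row, so the
# grid drifts from order into disorder.
import random

def schotter_like(columns=12, rows=22, size=1.0, seed=0):
    random.seed(seed)
    squares = []
    for row in range(rows):
        disorder = row / rows  # 0 at the top (ordered), close to 1 at the bottom
        for col in range(columns):
            angle = random.uniform(-45, 45) * disorder        # rotation in degrees
            dx = random.uniform(-0.5, 0.5) * size * disorder  # horizontal jitter
            dy = random.uniform(-0.5, 0.5) * size * disorder  # vertical jitter
            squares.append((col * size + dx, row * size + dy, angle))
    return squares

print(schotter_like()[:3])   # first row: almost perfectly aligned squares
print(schotter_like()[-3:])  # last row: strongly rotated and displaced squares
```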

In all these fixed works, the author selected the material to assemble a univocal form, with the generative outputs produced before the exhibition. The random processing was strictly controlled according to so-called ‘generative aesthetics’:

Generative aesthetics… implies a combination of all operations, rules and theorems which can be used deliberately to produce aesthetic states (both distributions and configurations) when applied to a set of material elements. Hence generative aesthetics is analogous to generative grammar, in so far as it helps to formulate the principles of a grammatical schema–realizations of an aesthetic structure. (Bense 1971 [1965]: 57)

The creative process remained anchored in a combinatorial approach with known parameters or styles (Boden 2009: 24–25), whereby machine cognition was not yet taken into account. Therefore, the focus of the so-called generative artworks was not so much on the relationship between biological and artificial beings, but on the exploration of open form at a structural level. The analogy to intelligence arose when machines began to relate to humans in a continuous or contingent way – that is, as a consequence of agency. However, the first attempts were mainly based on deceiving users’ perception by exploiting the social meanings they might project onto actions, appearances and affects (Ekbia 2015; Hofstadter 1995; Natale 2021). Take, for example, the well-known case of ELIZA, a chatbot programmed by Joseph Weizenbaum in 1966 that simulated a Rogerian therapist with whom it was possible to communicate through a command line interface. ELIZA’s script reformulated what users wrote by identifying the syntactic function of words and changing their order to generate questions (Weizenbaum 1966). People, unaware of chatting with a software program, reacted as if they were talking to a human (McCorduck 2004). The bot thus acquired a dramatic function as a fictitious simulation of a living being (Pizzo 2011). Although ELIZA made no artistic claim, the interaction with artificial performers would be particularly explored in the following decades (see Blue Bloodshot Flowers and Prosthetic Head discussed later). Human-machine feedback progressively moved away from closed systems towards multifaceted relationships (Bateson 1972).
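The keyword-and-reflection mechanism described above can be sketched schematically. The patterns below are invented and far cruder than Weizenbaum’s original script (which was not written in Python); they only show how a syntactic reshuffling of the user’s own words can produce the appearance of a listening interlocutor.

```python
# Schematic, invented sketch of an ELIZA-style exchange: match a keyword
# pattern, swap pronouns, and turn the user's statement back into a question.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first- and second-person words so the statement can be mirrored."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(sentence):
    match = re.match(r"i feel (.*)", sentence, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"my (.*)", sentence, re.IGNORECASE)
    if match:
        return f"Tell me more about your {reflect(match.group(1))}."
    return "Please go on."

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by your computer?
```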

Another notable case was AARON, one of the first algorithms to be regarded as symbiotic with the human creative process. The software, based on a wide range of if-then rules, was conceived by painter Harold Cohen and continuously developed from its birth in 1973 until Cohen’s death in 2016 (McCorduck 1991; Sundararajan 2021). AARON was able to compose increasingly refined pictures, from the definition of shapes to the use of colours, which it handled from the 1990s onwards. Cohen himself referred to AARON as his ‘other half’, also stating:

AARON exists; it generates objects that hold their own more than adequately, in human terms, in any gathering of similar, but human-produced, objects, and it does so with a stylistic consistency that reveals an identity as clearly as any human artist’s does… It constitutes an existence proof of the power of machines to do some of the things we had assumed required thought, and which we still suppose would require thought – and creativity, and self-awareness – of a human being. (Cohen 1995: 158)

Here the author speaks explicitly about software intelligence and does so by recontextualising it in light of the specific aesthetic meanings it is capable of producing. In contrast with the works of the 1950s and 1960s mentioned above, AARON implied an ongoing and contingent human-machine interaction. Therefore, it was also prone to the attribution of agency as manifested in a ‘reflexive choice… constituted within relationships as they unfold across space and time’ (Burkitt 2015: 15). Both AARON and ELIZA focused not so much on the specific result achieved by the machine, but on the machine itself and its ability to communicate by sharing perceivable outputs. Notably, the software was acquiring a prominence connected not only to computational processing but also to the relationships that processing could establish with humans and, in general, with the ecosystem to which it related. In this regard, AARON and ELIZA emphasised two approaches that would become particularly important in the years to come, depending on whether the relationship between machine and human (i.e. author, audience and/or actor) occurred before the staging (as in AARON) or during the play (as in ELIZA).

Intelligent Mirrors: 1980s–2000s

Following initial growth, AI started to reveal technical limitations, which led in the 1980s to funding cuts during the period known as ‘AI winter’ (McCorduck 2004). Symbolic algorithms, indeed, proved capable of solving tasks with only a few objects and simple instances – e.g. ordering geometric figures in a defined space (Winograd 1972) – which turned out to be problematic when complexity increased. To solve this impasse, ‘expert systems’ capable of handling broader reasoning in narrow areas of expertise were considered (Buchanan, Sutherland, & Feigenbaum 1969). However, even these systems became obsolete, being unable to handle uncertainty and learn from previous experiences. It was not until the mid-1980s that the discussion resumed, when the ‘connectionist model’ emerged by recovering an architecture shelved in the 1970s (Medler 1998). This provided a mathematical simplification of biological neural networks and shifted the focus from defined symbols and logic structures to distributed computing across many interconnected units called ‘neurons’ (Hu & Hwang 2002). The artificial neural networks developed at that time could learn from large datasets to achieve a given target. The new approach involved ‘probability rather than Boolean logic, machine learning rather than hand-coding, and experimental results rather than philosophical claims’ (Russell & Norvig 2021: 42), also manifesting an increasing ability to provide original outputs (e.g. in image, sound and text generation).
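The shift from hand-coded rules to learning from examples can be illustrated with a toy sketch. The single artificial ‘neuron’ below, which learns the logical OR function from a four-item dataset, is a deliberately minimal stand-in for the far larger networks discussed here and is not drawn from any of the cited systems.

```python
# Toy illustration of the connectionist principle: instead of rewriting rules,
# a simple 'neuron' adjusts its weights from examples towards a given target
# (here, the logical OR function).
import random

random.seed(1)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # inputs -> target
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.1  # learning rate

for _ in range(20):                      # repeated exposure to the dataset
    for (x1, x2), target in data:
        out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        error = target - out             # feedback from the target
        w[0] += lr * error * x1          # weights are adjusted, not rules rewritten
        w[1] += lr * error * x2
        b += lr * error

print([(x, 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0) for x, _ in data])
# after training, the outputs match the OR targets for all four inputs
```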

Neural networks have been studied and developed since the 1980s but only achieved their first notable successes around the turn of the millennium, a period also marked by high-profile AI milestones such as Deep Blue’s chess victory over Kasparov in 1997. Optimised and widespread use of such models came only after the 2010s (see next section). At the conceptual level, however, the network as a mutual interconnection between different entities took a central role in thinking about computer-generated systems. Especially from the 1980s, scientists and technicians moved away from intelligence mediated by information flows and began to think not only about the internal properties of systems, but also about the contextual relationships between them. According to Humberto Maturana and Francisco Varela, for example, a system could be considered alive if capable of re-configuring its own elements through continuous feedback from the environment (Maturana & Varela 1980; Varela, Thompson & Rosch 1991); in The Society of Mind, Marvin Minsky argued that different mindless parts, called ‘agents’, can form an intelligent system through mutual interaction (1986); Hubert Dreyfus critiqued the notion of AI according to the tenets of embodied cognition (Dreyfus 1992); Donna Haraway argued that the cyborg overcomes the human/animal-machine dualism, prominent in Western society, towards new hybrid relationships (Haraway 1991). Software became an entity between social renewal and technological development, according to an imagery shaped not only by academic but also literary and cinematic reflections (Cave, Dihal & Dillon 2020).

Artworks: from software to embodiment

Despite the technical developments, live plays with AI remained sporadic and embryonic in those years. This presumably occurred because of the low performance of commonly used processors, the lack of functional graphical interfaces, and the ever-present difficulty of coding. Instead, authors explored audio-visual media, which were more accessible to non-experts and also related to the ongoing reflection on mass media (Dixon 2007; Lehmann 2006). Since the late 1980s, object-oriented software has also been consistently developed (Castagna 1997: 40). Such environments are based on programming units called ‘objects’ and were employed for assisted composition and choreography (e.g. Max/MSP, Isadora, TouchDesigner) or for creating virtual environments (e.g. Unity, Unreal Engine, Adobe Flash). In parallel, hardware platforms for controlling analogue devices (e.g. Arduino, Raspberry Pi) emerged. The relationship with code was thus accompanied by predefined frameworks and graphical interfaces (Mancuso 2018; Manovich 2001), which often enabled real-time processing and expanded the possibilities of exchange between interacting participants (Birringer 2008).

AI technologies entered this context contingently and opened up new creative potential through the extemporaneous interactivity inherent in digital processing (Monteverdi in press). Interactivity in regard to networked or neural systems can indeed be traced back to the 1980s, albeit not as an applied model but as a theoretical reference. In his installation Very Nervous System (1983), for example, David Rokeby proposed the live translation of the participants’ movements into sound. The title suggested an analogy between the software and a ‘simplified fragment of human perception, judgment, and expression mechanisms’ (Rokeby 2019: 90) that was able to perceive movements and select an output from a range of possibilities. The reference to neural networks was even more explicit in Tod Machover’s The Brain Opera (1996), which implied a similarity to the principles described in Marvin Minsky’s aforementioned The Society of Mind (Orth 1997). Some sections of the work used material provided beforehand or in real time by the audience. These inputs represented the mindless agents that were connected (and thus made ‘intelligent’, to use Minsky’s terminology) by the performance.

In addition to using AI as a conceptual reference – which would also persist in the years to come (Befera 2021; Otto 2019) – authors employed AI as an interactive computational tool engaging with human beings. This was the case in Susan Broadhurst’s Blue Bloodshot Flowers (2001), a performance based on the interaction between an actress and Jeremiah, a computer-graphics AI avatar head projected on a screen. The virtual character was able to see what was in front of it through cameras, recognise the background and foreground, estimate the size and velocity of the figures, and react accordingly through facial expressions – e.g. by showing boredom when nothing was happening (Bowden, Kaewtrakulpong & Lewin 2002). In the first part, the performer related to the avatar according to a predefined script; in the second, the audience was allowed to interact with the virtual character. The performance thus emerged from the ‘enhancement and reconfiguration of an aesthetic creative potential which consists of the interaction and reaction with a physical body, not an abandonment of that body’ (Broadhurst 2002: 162). The virtual head expressed a specific embodiment by confronting the performer on stage. However, the system was not capable of learning, so the algorithm was not particularly creative and remained largely bound to predefined reactions.

Stelarc went a small step further in his Prosthetic Head (2002), which was similarly conceived as a responsive 3D head. The avatar resembled the artist and also produced verbal outputs. It reacted to questions written by the participants via a chat interface, which in most cases was accessible at the installation venue. The character was provided with lip-syncing, speech synthesis and facial expressions performed in real time, so that the responses seemed realistic. It was also able to compose poems and songs extemporaneously. The software was based on ALICE, an Artificial Intelligence Markup Language (AIML) chatbot that extended and outperformed ELIZA’s stimulus-response architecture. Indeed, ALICE was able to recognise patterns in dialogues and improve its communication skills through supervised learning, in which a botmaster ‘monitors the robot’s conversations and creates new AIML content to make the responses more appropriate, accurate, believable, “human,” or whatever the botmaster intends’ (Wallace 2008: 182). Unlike Jeremiah, the head showed the ability to learn from participants’ inputs, although this learning was mediated by a human being. Moreover, the fact that the features of the head, even the skin texture, recalled those of Stelarc reflected the author’s transhuman perspective, in which the body is an ‘inadequate evolutionary architecture that requires additional instrumentation to navigate unexpected temporal and spatial expansions of its operation’ (Denejkina & Stelarc 2015). The head can thus be seen as an extension of the author: in this, a strong similarity can be observed with AARON, which also learned from data manually provided by Cohen, though it had no body and did not take users’ inputs into account.
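The stimulus-response architecture Wallace describes can be sketched schematically. Real AIML is an XML format and ALICE’s knowledge base was far larger; the invented categories below only illustrate the principle of botmaster-authored pattern-template pairs that can be extended after reviewing conversation logs.

```python
# Invented, schematic sketch of an AIML-style chatbot: hand-authored
# 'categories' map input patterns to response templates, and the botmaster
# can add new categories after reviewing unmatched inputs.

categories = {
    "HELLO *": "Hi there. What would you like to ask the head?",
    "WHAT IS YOUR NAME": "I am a prosthetic head.",
}

def match(pattern, sentence):
    """Very crude wildcard matching: '*' absorbs the rest of the sentence."""
    p, s = pattern.split(), sentence.upper().split()
    for i, token in enumerate(p):
        if token == "*":
            return True
        if i >= len(s) or token != s[i]:
            return False
    return len(p) == len(s)

def respond(sentence):
    for pattern, template in categories.items():
        if match(pattern, sentence):
            return template
    return None  # unmatched inputs would be logged for the botmaster to review

print(respond("hello head"))                        # matches "HELLO *"
categories["DO YOU DREAM *"] = "Only in pixels."    # botmaster adds a category
print(respond("Do you dream of electric sheep"))    # -> Only in pixels.
```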

These two cases involved interfaces that were no longer merely text-based (as in ELIZA) and through which the audience could perceive the processing and even interact with it. Still, the outputs had strong anthropomorphic connotations: whether an extension or an alterity, AI was always based on expectations of human behaviour. How Long Does the Subject Linger on the Edge of the Volume… (2005) by Marc Downie, Paul Kaiser and Shelley Eshkar, choreographed by Trisha Brown, on the other hand, featured geometric figures projected onto a transparent scrim in the foreground, superimposed on the dancers in the background. The piece employed a motion capture system based on cameras and markers on the performers’ bodies to make movements and positions detectable. Extemporaneous outputs, called ‘thinking images’ by Downie and Kaiser themselves, were generated during the performance: as the software had ‘its own structures and its own intentions, we set it free to figure things out on its own over the duration of the dance’ (Downie & Kaiser 2005). These figures were based on triangles that moved from right to left on the screen, changing their shape and direction to follow the dancers as they deviated from the given trajectory. The algorithm had a prior memory of the choreography, and the projected figures appeared to continuously improve as the play progressed (Downie 2005). Learning was thus based on domain-specific knowledge which, however, did not involve long-term data storage, but rather a memory limited to the unfolding of the piece.

The symbolic models on which the software in these works was programmed left little room for the virtual characters’ autonomy, as they encoded precisely defined instructions and parameters to be processed. One could say that, although a certain agency was observable, the algorithms did not so much propose new content as respond to given inputs. The human-machine relationship thus turned out to be mostly fictional. After all, authors were still experimenting with the possibilities of AI at this time, when the large-scale use of intelligent algorithms, their actual ability to learn and their significant influence on the social fabric were still to come. This limited interaction was also accompanied by an anthropomorphic perspective expressed in human-like graphical and robotic entities that extended the anthropocentric analogy between human and artificial cognition. How Long… instead provided a more abstract manifestation of AI that somewhat overshadowed the reference to human beings. The performance thus moved in the direction of a newer aesthetic in which an increasing autonomy of machines could be observed, both in terms of action and representation. As we will see in the next section, the increasing abstraction of outputs brought to the fore not so much the manifestation of the software as the dynamics by which this manifestation occurred, so that the algorithmic processing gradually emerged.

Algorithms: 2010s–Present

Since the 2000s, the connectionist model has been substantially developed thanks to increasing computing power and the advent of the World Wide Web. Cloud services made it possible to delocalise the storage of data and transfer it worldwide in real time, with the possibility of relying on infrastructures managed by large companies. Since 2010, deep learning has also emerged, providing neural networks with multiple layers of increasing abstraction and complexity – e.g. from the recognition of lines and edges to the reconstruction of an entire image (Goodfellow, Bengio & Courville 2016). Such learning depends on the size and type of the database processed: the larger the amount of data, the higher the accuracy of the result. These years witnessed the famous victories of AlphaGo and Watson against the world champions in Go and Jeopardy! respectively, which, together with the aforementioned victory of Deep Blue against Kasparov, made the capabilities of AI systems widely known. Since then, implementations have proliferated in numerous fields (military, medical, automotive, word processing, gaming, etc.), while a few years later some AI models were made available to the general public via online applications (e.g. the well-known text-to-image applications DALL-E and Midjourney, and the chatbots ChatGPT and Bard).

It will not be possible to discuss in detail the many models of neural networks that are employed in digital performance (Anantrasirichai & Bull 2022). It will suffice to note that they do not aim to emulate the cognitive abilities of humans but, from a more technical perspective, to replace human intelligence in specific tasks. On a broader level, optimising their performance has also led to a change in the context in which AI works:

While we were unsuccessfully pursuing the inscription of producing AI into the world, we were actually modifying (re-ontologizing) the world to fit reproductive, engineering AI… We envelope micro-environments around simple robots to fit and exploit their limited capacities and still deliver the desired output… Nowadays, enveloping the environment into an AI-friendly infosphere has started to pervade all aspects of reality. It is happening daily everywhere, whether in the house, the office, or the street. (Floridi 2023: 25–26)

In addition to changes in environments and habits that generally tend towards mechanisation and quantification (Manovich 2001), the ubiquitous envelope has led to humans becoming part of the processing: for example, when they become complements to the software’s mechanical actions; are replaced in their jobs without alternatives; unwittingly provide large amounts of data; or become customers whose decisions are predicted and manipulated (Amoore 2020). These aspects somehow reverse the perspective of AI as a tool for assistance and instead lead to discrimination and subjugation, raising critical questions for law, ethics, politics and social habits in general.

Artworks: from embodiment to relationship

In the artistic field, such issues might be challenged by aesthetic insights that question pre-existing imagery and explore new creative possibilities. Consider, for example, the interface. In social media, users’ engagement is categorised and targeted on the basis of their quantified behaviour, and the interface is used for marketing purposes and possibly for the devious pursuit of power (Crawford 2021; Natale 2021). In contrast, the artistic aim of the interface is not to control the audience, but rather to translate one content into another and to foster relationships and communication. Moreover, extemporaneity and interactivity often aim to overcome the technocentric perspective of Western culture (Hui 2021): by reinstating ‘the values of singular experiences and unrepeatable acts’ against ‘abundance and overproduction of goods’ (Berghaus 2005: 260) or by making tangible the problems bound up with the employment of algorithms (Otto 2019). Socio-political aspects might be taken into account implicitly – e.g. in the use of applications such as the aforementioned DALL-E and ChatGPT, pre-trained on unknown data, or of software and libraries run by large companies, such as Google’s Colab and Keras – or explicitly – e.g. in the use of well-known platforms or of human-machine relations as dramaturgical content.

Insofar as contemporary approaches view AI and enactment ‘as a process rather than as a datum’ (Pizzo 2021: 98), they have more prominently regarded software action beyond its concrete manifestation as a staged object. Authors began to ‘explore other ideas about what AI can do for art’ (Manovich 2022a: 65) and ‘to understand not only how [algorithms] work, but how we work with them’ (Dorsen 2012). The notion of intelligence has increasingly become a legacy of the past, and the focus on processing now regards not just computation (as in the 1950s and 1960s), but also openness to the alterity and specificity of AI. Ultrachunk (2018), for example, is an improvisational duet between singer Jennifer Walshe and her AI-generated double created by Memo Akten (Akten 2018). The musician provided audio-video recordings of improvisation sessions over a year, which were processed by the neural network Granma MagNet to generate new content. During the performance, the resulting character was projected onto a screen and its vocal outputs interacted with Walshe’s improvisation. AI thus became an instrument that managed another instrument (the performer’s vocal cords and body) towards new digital outcomes. In contrast with commercial AI, the morphing effect of the images drew on a database well known to the user (the singer herself), thus establishing an intrinsic relationship between physical/virtual corporeality and performing attitudes. The result differed from both the anthropomorphic avatars in Blue Bloodshot Flowers and Prosthetic Head: from the former because it represented the action of the singer herself through the algorithm’s management; from the latter because it involved no prior shaping but was generated entirely through AI learning.

As in Ultrachunk, the extensive data and program refinement required for training have meant that the relationship between author/performer and machine is increasingly symbiotic, especially when the database is built from scratch. The audience thus sees the result of a long-term collaboration which is somehow reflected in the extemporaneous performative result. This dynamic is very different from the one used in AARON, as learning is nowadays autonomous and potentially free from the author’s control. AI also mediates and defines interaction with other entities or realities, categorising the content coming from this otherness and transforming it into something else. Metabolo (2023) by Valerie Tameau in collaboration with Sineglossa involved the interaction between the dancer (Tameau herself) and a marine ecosystem in North Carolina (Lavanderia a Vapore 2023). The movements of the fauna were tracked via underwater webcams, transmitted to YouTube, stored as motion data and processed by the algorithm to produce sounds that modified an audio track. Tameau moved to these sounds in the guise of the marine deity Mami Wata (from the tradition of equatorial West Africa from which she descends), also governing an abstract figure projected in the background through motion capture. In this way, she proposed a multi-species relationship anchored in a spiritual context and mediated by AI processing. As in How Long…, the dancer moved together with generative elements, but she did so not according to a predefined script, but to the ever-changing ecosystem. Hence, it was not the AI that followed the human: if anything, it was the human that followed the musical outputs generated by the AI, which in turn depended on the marine fauna on which the concept of the piece was focussed.

The scenic environment is also becoming more and more tailored to processing. This is the case in Cat Royale (2023) by Blast Theory, in which a physical location was specifically designed to provide maximum comfort for three cats according to AI mediation (Blast Theory 2023). The animals stayed in the space for 12 days while their actions were observed and recorded. In parallel, a robotic arm acted according to their measured degree of happiness by giving them food and making them play. The event left several interpretations open: the dominance of humans over animals, the self-reliance of cats in the absence of humans, or the comparison between the algorithmic monitoring of cats and that of humans in other contexts (such as social media). As Cat Royale was conceived as a performative experiment involving cats and robots, the stage was not structured as a neutral space where AI outputs appeared. Rather, the space itself was designed specifically to enable AI to perform, thus being essential for its processing and activation. In this regard, it recalled the envelope principle mentioned above, as animals and artificial entities were placed in an environment that both influenced and enabled their intermingled behaviour. At the same time, it referred to a constantly monitored space of which its inhabitants were unaware, recalling systems of algorithmic surveillance.

The ethical background implied in Cat Royale can also be observed in the other pieces mentioned in this section: implicitly in Akten’s Granma MagNet, which generates AI-processed images of a kind now very common in commercial tools; explicitly in Metabolo, where pre-trained motion capture is used to suggest ecological and spiritual meanings. Insofar as extemporaneous interactions are involved, enactment implies less control by the author and a more valuable performative relationship between physical and artificial agents. This unpredictable background, though typical of improvisation and interactive systems, is increasingly related to AI processing, as it depends on Big Data, predictive settings and statistical analysis. AI is thus conceived as a medium interconnected with the contemporary social fabric, as it can express the implications of data processing (e.g. predictive models developed in social media) on the one hand, and a more general sensitivity to the relationships within complex ecosystems involving other cultures, species and computational beings (e.g. African culture and marine fauna) on the other. The authors design the software not only to make data perceivable but also to allow the digital entity to develop its own behaviour, expressiveness and social function, reflecting the intertwining between the aesthetics and the techno-political implications of our time.

Conclusions

The overview presented throughout the article shows some prominent phases of the application of AI in multimedia artworks and performances. The three sections highlight different facets of AI arising from concepts and narratives, which in turn are reflected in the staging. While the idea of intelligent software implied closed systems and focused on univocal results, the involvement of open ecosystems referred to the interaction between human and non-human agents within ever-changing possibilities. More recently, however, AI seems to be conceived no longer as mirrored intelligence – the human-centred correlation between information and cognition – nor as an intelligent mirror – identified in a perceivable form dependent on human action – but as an information process with its own status of existence. Note that these categories are not to be understood as watertight compartments but as gradually and continuously overlapping, revealing new perspectives as they emerge. Indeed, machine intelligence and its anthropomorphic form are still being considered in both art and theoretical reflection.

Two different models have been considered: the symbolic one, based on the evaluation of a set of well-described if-then rules, and the connectionist one, which works through statistical learning towards given targets. These macro-categories made it possible to frame two important trends in computer-generated art, depending on whether databases and learning are involved in the creative process. In theatre, AI has provided the ability to associate semantic content from different domains (e.g. text and image, movement and sound) and to re-signify it in light of the staged interaction and the technical properties of the medium – based on computational evaluations and the autonomous selection of results. Regardless of the different functions assigned to the algorithm, AI has generally become a tool for, first and foremost, creating connections between contents as well as between environments, living beings and artificial entities.

Finally, the article shows how the ethical aspect has become increasingly important, especially in recent decades. This is because of the socio-political implications of AI when it comes to humans exposed to technological acceleration, the adaptation of contexts to digital processing and the economic interests of large corporations. Performance art offers the opportunity to reflect on such issues and bring hidden dynamics or uncommon applications to light. At the same time, due to the uniqueness of representation given by the generativity of the results, it opens up new relational possibilities that engage the audience in an ever-changing openness to multiplicity.

Competing Interests

The author has no competing interests to declare.

Author Information

Luca Befera is a PhD candidate at the University of Turin, where he studies the aesthetics of intermedia performances, with a focus on interactive dynamics and digital mediation. He is currently working on the influence of artificial intelligence in the creative process and staging. His research also addresses the role of artworks in fostering social communication and new human-machine relationships. The analysis of the pieces is often combined with field research: he has collaborated with Alexander Schubert, Valerie Tameau, the collective Fronte Vacuo and the research centre HER: She Loves Data, among others. In his previous training in musicology at the University of Pavia, he investigated the influence of digital syntax and devices on contemporary sound-based approaches (i.e. post-spectral and electronic dance music). He occasionally cultivates his creative interests in the production of multimedia installations.

References

Akten, Memo 2018 Ultrachunk. Available at https://www.memo.tv/works/ultrachunk/ [Last accessed 23 October 2023].

Amoore, Louise 2020 Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press. DOI:  http://doi.org/10.1215/9781478009276

Anantrasirichai, Nantheera and David Bull 2022 Artificial Intelligence in the Creative Industries: A Review. Artificial Intelligence Review, 55: 589–656. DOI:  http://doi.org/10.1007/s10462-021-10039-7

Arielli, Emanuele 2021 ‘Even an AI Could Do That,’ in Artificial Aesthetics: A Critical Guide to AI, Media and Design, ed. Lev Manovich and Emanuele Arielli (http://manovich.net).

Arielli, Emanuele 2022 ‘Techno-Animism and the Pygmalion Effect,’ in Artificial Aesthetics: A Critical Guide to AI, Media and Design, ed. Lev Manovich and Emanuele Arielli (http://manovich.net).

Bajohr, Hannes 2020 ‘Algorithmic Empathy: On Two Paradigms of Digital Generative Literature and the Need for a Critique of AI Works,’ in BMCCT Working Papers Vol. 4, ed. Mario Wimmer, Markus Krajewski, and Antonia von Schöning (Basel: Universität Basel). DOI:  http://doi.org/10.12685/bmcct.2020.004

Bateson, Gregory 1972 Steps to an Ecology of Mind. San Francisco: Chandler Publishing.

Befera, Luca 2021 Web-Based Form as Expression of Networked Sociality in the Community-Based Piano Piece ‘Wiki-Piano.Net.’ Organised Sound, 26(3): 354–67. DOI:  http://doi.org/10.1017/S1355771821000443

Bense, Max 1971 [1965] ‘The Projects of Generative Aesthetics,’ in Cybernetics, Art and Ideas, ed. Jasia Reichardt (London: Studio Vista) 57–60.

Berghaus, Günter 2005 Avant-Garde Performance: Live Events and Electronic Technologies. New York: Palgrave Macmillan. DOI:  http://doi.org/10.1007/978-1-137-09358-5

Birringer, Johannes 2008 Performance, Technology and Science. New York: PAJ.

Blast Theory 2023 Cat Royale. Available at https://www.blasttheory.co.uk/projects/cat-royale [Last accessed 23 October 2023].

Boden, Margaret A 2009 Computer Models of Creativity. AI Magazine, 30(3): 23–34. DOI:  http://doi.org/10.1609/aimag.v30i3.2254

Boden, Margaret A. and Ernest A. Edmonds 2019 ‘A Taxonomy of Computer Art,’ in From Fingers to Digits: An Artificial Aesthetic, ed. Margaret A. Boden and Ernest A. Edmonds (Cambridge: MIT Press) 23–60. DOI:  http://doi.org/10.7551/mitpress/8817.001.0001

Bowden, Richard, Pakorn Kaewtrakulpong, and Martin Lewin 2002 Jeremiah: The Face of Computer Vision. In: 2nd International Symposium on Smart Graphics, Hawthorne, NY in June 2002, pp. 124–128. DOI:  http://doi.org/10.1145/569005.569023

Broadhurst, Susan 2002 Blue Bloodshot Flowers: Interaction, Reaction and Performance. Digital Creativity, 13(3): 157–63. DOI:  http://doi.org/10.1076/digc.13.3.157.7340

Buchanan, Bruce G., Georgia L. Sutherland, and Edward A. Feigenbaum 1969 ‘Heuristic DENDRAL: A Program for Generating Explanatory Hypotheses in Organic Chemistry,’ in Machine Intelligence, Volume 4, ed. Bernard Meltzer and Donald Michie (Edinburgh: Edinburgh University Press) 209–254.

Burkitt, Ian 2015 Relational Agency: Relational Sociology, Agency and Interaction. European Journal of Social Theory, 19(3): 1–18. DOI:  http://doi.org/10.1177/1368431015591426

Castagna, Giuseppe 1997 Object-Oriented Programming: A Unified Foundation. Boston: Birkhauser. DOI:  http://doi.org/10.1007/978-1-4612-4138-6

Cave, Stephen, Kanta Dihal and Sarah Dillon eds. 2020 AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Oxford: Oxford University Press. DOI:  http://doi.org/10.1093/oso/9780198846666.001.0001

Cohen, Harold 1995 The Further Exploits of AARON, Painter. Stanford Humanities Review, 4(2):141–158.

Crawford, Kate 2021 Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. London: Yale University Press. DOI:  http://doi.org/10.12987/9780300252392

Denejkina, Anna and Stelarc 2015 The Body Is Obsolete: Stelarc’s Radical Experiments with Alternate Human Forms. The Kernel, 26 July. Available at https://opus.lib.uts.edu.au/handle/10453/43306 [Last accessed 24 October 2023].

Dixon, Steve 2007 Digital Performance: A History of New Media in Theater, Dance, Performance Art and Installation. Cambridge: MIT Press. DOI:  http://doi.org/10.7551/mitpress/2429.001.0001

Dorsen, Annie 2012 On Algorithmic Theatre. The Theater Magazine 42(2). DOI:  http://doi.org/10.1215/01610775-1507811

Downie, Marc 2005 Choreographing the Extended Agent: Performance Graphics for Dance Theater. Unpublished thesis (PhD), Massachusetts Institute of Technology.

Downie, Marc and Paul Kaiser 2005 How Long… Available at http://openendedgroup.com/artworks/howlong.html [Last accessed 23 October 2023].

Ekbia, Hamid R 2015 ‘Heteronomous Humans and Autonomous Agents: Toward Artificial Relational Intelligence,’ in Beyond Artificial Intelligence: Topics in Intelligent Engineering and Informatics, ed. Jan Romportl, Eva Zackova, and Jozef Kelemen (Cham: Springer) 63–77. DOI:  http://doi.org/10.1007/978-3-319-09668-1_5

Floridi, Luciano 2023 The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford: Oxford University press. DOI:  http://doi.org/10.1093/oso/9780198883098.001.0001

Galanter, Philip 2016 ‘Generative Art Theory,’ in A Companion to Digital Art, ed. Christiane Paul (Hoboken: Wiley-Blackwell) 146–180. DOI:  http://doi.org/10.1002/9781118475249.ch5

Goodfellow, Ian, Yoshua Bengio, and Aaron Courville 2016 Deep Learning. Cambridge: MIT Press.

Haraway, Donna 1991 Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge.

Harmon, Caleb 2021 Generative Art. Honors Theses, Ouachita Baptist University.

Hayles, Katherine N 1999 How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: The University of Chicago Press. DOI:  http://doi.org/10.7208/chicago/9780226321394.001.0001

Hayward, Rhodri 2001 The Tortoise and the Love-Machine: Grey Walter and the Politics of Electroencephalography. Science in Context, 14(4): 615–641. DOI:  http://doi.org/10.1017/S0269889701000278

Hiller, Lejaren A 1959 Computer Music. Scientific American, 201(6): 109–121. DOI:  http://doi.org/10.1038/scientificamerican1259-109

Hofstadter, Douglas. 1995. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.

Hu, Yu Hen and Jenq-Neng Hwang 2002 Handbook of Neural Network Signal Processing. London: CRC Press.

Hui, Yuk 2016 The Question Concerning Technology in China: An Essay in Cosmotechnics. Falmouth: Urbanomic.

Hui, Yuk 2021 Art and Cosmotechnics. Minneapolis: University of Minnesota Press.

Lavanderia a Vapore 2023 Metabolo. Available at https://www.lavanderiaavapore.eu/events/metabolo/ [Last accessed 23 October 2023].

Lehmann, Hans-Thies 2006 Postdramatic Theatre. London: Routledge. DOI:  http://doi.org/10.4324/9780203088104

Lieto, Antonio 2021 Cognitive Design for Artificial Minds. New York: Routledge. DOI:  http://doi.org/10.4324/9781315460536

Mancuso, Marco 2018 Arte, Tecnologia, Scienza. Milan: Mimesis.

Manovich, Lev 2001 The Language of New Media. Cambridge: MIT Press. DOI:  http://doi.org/10.22230/cjc.2002v27n1a1280

Manovich, Lev 2022a ‘AI and Myths of Creativity,’ in Artificial Aesthetics: A Critical Guide to AI, Media and Design, ed. Lev Manovich and Emanuele Arielli (http://manovich.net). DOI:  http://doi.org/10.1002/ad.2814

Manovich, Lev 2022b ‘Who Is an Artist in AI Era?,’ in Artificial Aesthetics: A Critical Guide to AI, Media and Design, ed. Lev Manovich and Emanuele Arielli (http://manovich.net).

Manovich, Lev 2023 ‘AI Image and Generative Media,’ in Artificial Aesthetics: A Critical Guide to AI, Media and Design, ed. Lev Manovich and Emanuele Arielli (http://manovich.net).

Maturana, Humberto R. and Francisco J. Varela 1980 Autopoiesis and Cognition: The Realization of the Living. Dordrecht: D. Reidel. DOI:  http://doi.org/10.1007/978-94-009-8947-4

McCarthy, John, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon 2006 [1955] A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. AI Magazine, 27(4): 12–14. DOI:  http://doi.org/10.1609/aimag.v27i4.1904

McCorduck, Pamela 1991 AARON’S CODE: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen. New York: W. H. Freeman and Company.

McCorduck, Pamela 2004 Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick: A K Peters. DOI:  http://doi.org/10.1201/9780429258985

Medler, David A 1998 A Brief History of Connectionism. Neural Computing Surveys, 1(2): 18–73.

Minsky, Marvin Lee 1986 The Society of Mind. New York: Simon & Schuster.

Minsky, Marvin Lee and Seymour Papert 1969 Perceptrons: An Introduction to Computational Geometry. Cambridge: MIT Press.

Monteverdi, Anna Maria 2020 Leggere Uno Spettacolo Multimediale: La Nuova Scena Tra Videomapping, Interaction Design e Intelligenza Artificiale. Rome: Dino Audino.

Monteverdi, Anna Maria in press Dal Teatro degli automi ibridi di Woody e Steina Vasulka alla Danza assistita dall’Intelligenza Artificiale di Kamilia Kard.

Morrison, Elise, Tavia Nyong’o, and Joseph Roach eds. 2019 The Drama Review, 63 (4). Cambridge: Cambridge University Press.

Natale, Simone 2021 Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. New York: Oxford University Press. DOI:  http://doi.org/10.1093/oso/9780190080365.001.0001

Orth, Maggie 1997 Interface to Architecture: Integrating Technology into the Environment in the Brain Opera. In: Designing Interactive Systems, Amsterdam, The Netherlands, 18–20 August 1997, pp. 265–275.

Otto, Ulf 2019 Theatres of Control: The Performance of Algorithms and the Question of Governance. The Drama Review, 63(4): 121–38. DOI:  http://doi.org/10.1162/dram_a_00879

Pizzo, Antonio 2011 Attori e Personaggi Virtuali. Acting Archives Review, 1(1): 83–118.

Pizzo, Antonio 2021 Performing/Watching Artificial Intelligence On Stage. Skenè: Journal of Theatre and Drama Studies, 7(1): 91–110. DOI:  http://doi.org/10.13136/sjtds.v7i1.308

Pizzo, Antonio, Vincenzo Lombardo, and Rossana Damiano 2024 Interactive Storytelling: A Cross-Media Approach to Writing, Producing and Editing with AI. New York: Routledge. DOI:  http://doi.org/10.4324/9781003335627

Rokeby, David 2019 Perspectives on Algorithmic Performance through the Lens of Interactive Art. The Drama Review, 63(4): 88–98. DOI:  http://doi.org/10.1162/dram_a_00876

Russell, Stuart and Peter Norvig 2021 Artificial Intelligence: A Modern Approach. Harlow: Pearson Education Limited.

Shannon, Claude and Warren Weaver 1949 The Mathematical Theory of Communication. Urbana: University of Illinois Press.

Sundararajan, Louise 2021 Harold Cohen and AARON: Collaborations in the Last Six Years (2010–2016) of a Creative Life. Leonardo, 54(4): 412–17. DOI:  http://doi.org/10.1162/leon_a_01906

Varela, Francisco J., Evan Thompson, and Eleanor Rosch 1991 The Embodied Mind: Cognitive Science and Human Experience. Cambridge: The MIT Press. DOI:  http://doi.org/10.7551/mitpress/6730.001.0001

Wallace, Richard S 2008 ‘The Anatomy of A.L.I.C.E,’ in Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, ed. Robert Epstein, Gary Roberts, and Grace Beber (Dordrecht: Springer) 181–210. DOI:  http://doi.org/10.1007/978-1-4020-6710-5_13

Weizenbaum, Joseph 1966 ELIZA: A Computer Program for the Study of Natural Language Communication between Man and Machine. Communications of the ACM, 9(1): 36–45. DOI:  http://doi.org/10.1145/365153.365168

Whitelaw, Mitchell 1998 1968/1998: Rethinking a Systems Aesthetic. ANAT Newsletter, 33, May. Available at https://mtchl.net/assets/1968-1998-Systems-Aesthetic.pdf [Last accessed 24 October 2023].

Wiener, Norbert 1948 Cybernetics: Or, Control and Communication in the Animal and the Machine. Cambridge: MIT Press.

Winograd, Terry 1972 Understanding Natural Language. Cognitive Psychology, 3(1): 1–191. DOI:  http://doi.org/10.1016/0010-0285(72)90002-3