The role of documentation in collections and archives
One of the earliest studies of documentation, Suzanne Briet’s What is Documentation? (1951), suggested that documents can be described as primary and secondary. By primary, the librarian and information studies expert referred to initial documents; by secondary, she meant documents produced from the initial documents. Briet then moved on to distinguish between life itself and documents (Briet, 2006, 10). Noting that a document should not be read in isolation, because each document is contextual rather than formed purely by the remains of an isolated event, she pointed out that documents interconnect with each other to form a matrix or network of signs. This, she suggested, can lead to the production of more documents (10). The content of documentation, she inferred, is ‘inter-documentary’ (16).
Crucially, as Briet’s text anticipated, documentation nowadays plays a strategic role in cultural production, so much so that, as the curator and media theorist Annet Dekker and Gabriella Giannachi have shown in Documentation as Art, over time several documents created about performance and new media art have become works of art (Dekker and Giannachi 2023). This is because these kinds of ephemeral artworks often exist primarily as documentation drafted at the time when the artwork was created and/or acquired, and subsequently reiterated every time the artwork is activated within the museum and/or archive. Hence performance and new media artworks persist through documentation, and documentation in turn, over time, often becomes the artwork. The expanding types of records and documents, and the resulting quantity of documentation (at the point of an acquisition, during activations, by a museum, the artist, the audience, etc.), form over time what could be described as the ecosystem of the artwork. This ecosystem, we hope to show, would greatly benefit from the inclusion of AI, not in substitution for, but in addition to, the work done by existing stakeholders.
Documentation is far from neutral. The philosopher Jacques Derrida’s Archive Fever (1995) traced the word archive to Arkhe (Derrida 1996, 1; original emphasis). For him, the archive is an entity with a point of origin, as well as the ordering principle that is consequently produced by it. Hence, the documented artwork placed in the archive, or in the collection, persists not only as an artwork but also as the organizing principle of the archives and collections that underpin it. In this sense, documentation is key not only to the production of knowledge and the conservation of artworks but also to the preservation of the authority of the archive. For Derrida, the archive is a place for storage, conservation, and memory creation. Thus, he stated, ‘the technical structure of the archiving archive determines the structure of the archivable content even in its very coming into existence and in its relationship to the future’ (Derrida 1996, 17; original emphasis). In other words, the technologies and related practices of what Derrida calls the ‘archiving archive’ shape, and so also define, both present and future archived materials. For Derrida, ‘archivization produces as much as it records the event’ (17, original emphasis). Not only is the archive a tool for conservation and a mechanism for dissemination; it is also an ordering system for the production of knowledge. The documentation of an artwork in a collection or an archive is consequently shaped by the organizing apparatus used in generating it. Over time, this shapes the artwork’s existence. This shows how archives and collections are active, dynamic spaces in which artworks are reformed through documentation.
The introduction of performance and new media art forms within the museum has led to major changes in documentation, conservation, and exhibition practices. Depending on where it is placed within the museum, documentation can be a historic record (in the archive), an exhibit (in the collection), and a mode of engagement (on social media) (see Giannachi and Westerman 2018). A key method for the conservation of artworks within the museum is their re-enactment or re-interpretation through different media and contexts. This raises questions as to the ontology of these re-enactments, or re-interpretations, and so also as to their relationship to the ‘original’ artworks. This capacity of re-enactments or re-interpretations to be both original and reproduction reveals a fundamental aspect of artistic production in the late 20th and early 21st centuries, namely the fact that the production process itself has taken over sites traditionally used purely for the conservation of works. This, in turn, raises questions as to who should manage access to the large quantity of data produced in this process, and what role AI could play within it.
Documentation is usually carried out by a range of stakeholders, including artists, curators, conservators, and researchers, to track the defining characteristics of what constitutes a given artwork and to preserve these characteristics over time. The field has become especially important for museums and archives since performance and media works started to be acquired. Since the project Matters in Media Art (2005),1 collection documentation has generally been carried out through templates (condition reports, etc.). Among them are the Identity Report, which illustrates the production history of an artwork, and the Iteration Report, which captures the artwork at a moment in time (Brost 2018). These reports intend to capture what the conservator Pip Laurenson described as allographic works, whose ‘identity’ is defined by specific properties derived from the artist’s instructions (2006). By identity, Laurenson refers to ‘everything that must be preserved in order to avoid the loss of something of value in the work of art’ (Ibid.). Questions asked of artworks throughout the documentation process to establish their identity may include the assessment of their significant properties, the parameters of acceptable change, risk, and the expertise necessary to support the work (Laurenson 2016: 76). In the establishment of an artwork’s identity, different perspectives are therefore brought together which need managing over time. These have been critiqued (Hölling 2016 and Castriota 2021) as forms of stabilisation of what is ultimately often unfixed and unstable and continues to change in the museum (Van de Vall et al. 2011; Marcal and Gordon 2023).
To conclude, while documentation has always played a key role in museums and archives for the conservation of ‘original’ artworks, it has over time, and especially since the introduction of performance and new media artworks in museum and archive collections, also become a key practice for the activation, and so also the re-interpretation, of these works. This illustrates that, as Briet had anticipated, documentation has come to play a key role in cultural production, and it has transformed the sites in which documentation is placed, including both collections and archives, into dynamic production sites. While this may have always been the case for collections, the changing role of documentation explains why, as far as performance and new media artworks are concerned, collection carers are now less focussed on the conservation of the ‘original’ artwork than on establishing its identity to safeguard its potential future activation, so much so that the prefix re-, which previously described the work of re-enactment, re-interpretation, and re-activation in the museum, now appears less frequently than in the past. Hence activations have come to be treated as simulacra forming part of the ecosystem of an artwork. As archives have become key players in cultural production, and the digital documentation of their collections has started to increase exponentially, it has become clear that AI can play a significant role in helping conservators and archivists to manage the ongoing production process of documents and records that relate to artworks in the archive, or in the collection, whether as a mechanism for conservation, production, or presentation.
AI and the archive
As the digital humanities researchers Giovanni Colavizza et al. suggested, the digital transformation has been increasingly turning archives into data. It is in this context that AI could prove useful to scale traditional record-keeping activities and to experiment with novel ways to capture, organise, and access documentation (2021). Using the Records Continuum model (Upward 2005), which shows that documents are in a ‘constant “state of becoming”’ (Colavizza et al. 2021: 13), Colavizza et al. found that automation, organisation, access, and the creation of novel kinds of archives are likely to be the prevailing directions, alongside theoretical and professional positioning papers, that will define the field in years to come (Ibid.: 1). Others who used the Records Continuum, such as the information studies researcher Richard Marciano et al., pointed out that records may have different values for different users, which suggests that different users may end up accessing the same records for a range of purposes and contexts (2018: 194).
Since archival organisations are likely to become more and more reliant on AI, it is crucial to analyse what obstacles, advantages, and possible solutions have thus far been identified that could make the use of AI more reliable and accurate. One key obstacle is ‘confirmation bias’, which captures how AI tends to focus purely on what is already known rather than making explicit the gaps in the data (Winters 2019: 20). In their study of the GLAM sector, the digital humanities researcher Lise Jaillant and the computer scientist Annalina Caputo also pointed out that while AI can make archives more accessible, it tends to create ethical challenges, especially in relation to bias (Jaillant and Caputo 2022: 833), and cited the Google DeepMind research scientist Tolga Bolukbasi et al. (2016), who described how the use of AI can amplify existing biases that are already present in data. Although thus far the value of AI has come from its capacity to process large amounts of data very rapidly (Jaillant 2022b: 14), its accuracy in relation to provenance is poor, which causes major problems in the field of art documentation. However, as Colavizza et al. show, the ‘organise’ and ‘pluralise’ sections of the Records Continuum model work especially well with AI (2021: 14). This suggests that AI archives will continue to be strategic for mapping new epistemological and presentation strategies (Giannachi 2016). Among the other obstacles identified by current research is mistrust in technology, which makes it difficult to fully implement AI tools in several areas, including the GLAM sector. For Jaillant and the digital cultural heritage researcher Arran Rees, fostering trust requires knowledge of what is being treated by the algorithms, and so interdisciplinary collaboration among record creators, archivists, researchers, and other users in the sector is paramount (Jaillant and Rees 2023: 582).
Dekker suggested that it is increasingly evident that the best strategy to conserve artworks is to use a ‘network of care’ (2018), which could, as Dekker and Giannachi argued, lead to ‘generative preservation’ as a way to enable further iterations of existing artworks as part of the conservation process (2023). In this context it is key that AI should not replace, but rather become part of, the network of care. And if AI is to be part of the network of care of an artwork, it should become part of its ecosystem of stakeholders, which in the case of works created with AI would involve the documentation of the work’s creation, conservation, and presentation over time, including any possible re-interpretations, re-activations, and re-enactments. Crucial in this context is the recognition that the ecosystem of an artwork and the ecosystem of its documentation and archive may differ depending on whether the artwork is preserved by the artists, a library, a museum, or all of the above. This suggests that to operate archivally, within the context of art documentation, AI needs to acknowledge not only the provenance and identity of an artwork, but also its iterative development within a wider ecosystem of stakeholders. The resulting archive would then be future-oriented, dynamic, and generative, as well as past-oriented, forming a continuously expanding documentation repository and production centre.
The dynamic archive
The BRAID (Bridging Responsible AI Divides) and AHRC-funded project ‘Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI’, which involved an interdisciplinary team including staff in the humanities and computer science from the Universities of Nottingham and Exeter, as well as staff from The National Archives, investigated the creation of dynamic archives by using as case studies a series of artworks that used AI (2024). The project aimed to develop an insight into what might constitute responsible use of AI in the context of creative AI and involved examining the ethical and moral tensions arising between the concepts of creativity, authenticity, and responsibility. The team found at its first workshop that a dynamic archive should be editable and facilitate annotation; continue to grow over time; reconstitute itself in accordance with keywords set by whoever may wish to consult it; make visible different time-based versions of itself; be used curatorially and creatively as a live (generative) archive; and provide the context and live, iterative documentation of a work, including an artwork generated by AI.
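The workshop requirements listed above can be read as a loose specification. Purely as an illustrative sketch, and not as a description of the project’s actual prototypes, they might be modelled as a minimal data structure in which records carry stakeholder annotations and time-based versions, and the archive reconstitutes itself around a consulter’s keywords; all class names, fields, and matching logic below are our own assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Annotation:
    """A stakeholder's contribution to a record (editable, annotatable)."""
    stakeholder: str   # e.g. artist, conservator, audience member
    text: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Record:
    """One document in the archive, with time-based versions and keywords."""
    title: str
    keywords: set[str]
    versions: list[str] = field(default_factory=list)
    annotations: list[Annotation] = field(default_factory=list)

    def annotate(self, stakeholder: str, text: str) -> None:
        # Any stakeholder in the ecosystem can add to the record over time.
        self.annotations.append(Annotation(stakeholder, text))

class DynamicArchive:
    """Grows over time and reconstitutes itself around a consulter's keywords."""
    def __init__(self) -> None:
        self.records: list[Record] = []

    def add(self, record: Record) -> None:
        self.records.append(record)

    def reconstitute(self, keywords: set[str]) -> list[Record]:
        # Return only records matching the consulter's interests, so that
        # each consultation produces a different 'view' of the archive.
        return [r for r in self.records if r.keywords & keywords]
```

Even this toy model makes visible why governance questions arise: nothing in it yet records who may edit, whose annotations are trusted, or how versions are authenticated, which is precisely where the challenges discussed below emerge.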
As we have seen, there are currently several barriers to the generation of such an archive that are additional to those identified by Winters (2019) and Jaillant (2022a) to do with bias, by Colavizza et al. (2019) to do with provenance, and by Jaillant and Rees (2023) to do with trust. The problem around bias is in fact so severe that the Deputy Director, Archiving and Data Services at the Internet Archive, Thomas Padilla, suggested that eliminating bias is not an option, as attempting to do so risks introducing more bias; rather, bias ought to be managed (Padilla 2019). Further challenges identified by the Head of Emergent Technologies at The National Archives, John Moore, a co-investigator in the BRAID/AHRC-funded project, had to do with notions of authenticity and privacy (in Miles et al. 2024). Other challenges that were identified included overreliance on big tech storage, ‘take down’ requests, and the establishment of organizational rights to content, as well as the retention of content irrespective of knowledge of its current or perceived future value (Ibid.). Offering wider accessibility to archives, including the possibility that stakeholders in an ecosystem could annotate documentation, raised further questions about trust, transparency, and privacy, especially as these stakeholders may change over time. In fact, the possible requirement to manage multiple perspectives is one that the latest AI systems (especially large language models) have begun to recognise (e.g. Kharchenko et al. 2024), mainly motivated by concerns about how such systems could be used in answering questions of a social and political nature. Finally, just as it is crucial to maintain accurate records of a work’s provenance, it is critical that the same attention is devoted to any metadata subsequently added to it.
As a strategy for the identification of the design requisites of a dynamic archive, the above-mentioned project team developed a series of speculative designs for prototypes based on the modelling of various artworks’ ecosystems, which were then matched to bespoke archives for each of the works analysed during the project (Drury et al. 2024b). This was in recognition of the fact that the structure of a dynamic archive must allow for an element of temporality, so that the archive can be adapted and reconstituted to reflect both changing values and changing stakeholders. The team used concentric and linear models simultaneously to visualise the works’ ecosystems and found that, compared to the concentric circles model, the linear trajectories framework previously developed by the computer scientist Steve Benford et al. (2009) more closely resembles the nature of interaction within a dynamic archive as a series of progressive events observed or even produced by various stakeholders, rooted in an ‘original’ instance (located in the archive) documenting the use and reuse of artefacts at different stages (Drury et al. 2024a and Miles et al. 2024). The project team speculated that a hybrid model based on a fusion of the concentric and trajectory models may be better suited to capture some of the features of a dynamic archive, namely its temporality, inclusivity, and adaptability (Farina et al. 2024).
The trajectories framework had already been applied in an archival context by Giannachi and Benford when they developed the CloudPad tool, through which participants could annotate videos recorded as part of a documentation of Blast Theory’s Rider Spoke (2007) with their own recollections of the work (Giannachi et al. 2011). The documentation, carried out by an ethnographer from the University of Nottingham, Peter Tolmie, in collaboration with staff from the Ludwig Boltzmann Institute in Linz, had recorded participants from the front and rear while they cycled around the city as part of their experience of Rider Spoke, and related the recordings to the GPS-tracking of the work so as to sync visual and audio media to the locations in which the work had, so to speak, taken place. At the time the archive was created, AI was not available, and so it was onerous for the team to analyse the burgeoning number of documents produced by CloudPad users, even in the context of a controlled test. However, what this early form of a dynamic archive showed was the appetite that participants in the work had for engaging with its documentation, as well as the desire of complete strangers who had not experienced Rider Spoke to further annotate and engage with the work’s documentation. With this in mind, we looked into what kind of contribution ChatGPT 3.5 and 4 could offer to the field, exploring what it currently states documentation is, how it states documentation can be used, and what research might be necessary to establish the responsible and trustworthy use of AI in the context of two case studies, Agent Ruby and Cat Royale.
Agent Ruby
Agent Ruby was an interactive multiuser work developed by the Bay Area artist Lynn Hershman Leeson between 1998 and 2002, consisting of an artificially intelligent Web agent with a female persona capable of holding conversations with users and searching the internet to improve her knowledge. Originally, Agent Ruby was designed to have a four-part life cycle formed by the website, breeding stations, mood swings, voice recognition and dynamic processing of events (in Tromble 2005: 94). Agent Ruby was also meant to be downloaded to Palm handheld computers from the web. The vision had been for Ruby to develop speech synthesis and voice recognition and ultimately understand spoken language, and for her to be connected to the internet so as to be able to incorporate current affairs into her conversation. Interestingly, Hershman Leeson stated that Agent Ruby was not pre-programmed, and so she would not know how Agent Ruby was going to respond to specific questions (Hershman Leeson 2014).
When the work was shown in 2013 at the San Francisco Museum of Modern Art (SFMOMA) in the exhibition Lynn Hershman Leeson: The Agent Ruby Files, curated by Rudolf Frieling, the audience records covering 12 years of the work were also exhibited. This was because the then Director of Collections and Conservation, Jill Sterrett, had used the work as ‘the way for [SFMOMA] to reimagine what […] documentation could be’, since after that moment SFMOMA pulled back from the notion of the artwork as ‘object- and material-based alone’ and foregrounded the idea that artworks like Agent Ruby are ‘activity-based and that there were actions around them’ (in Giannachi and Westerman 2018: 40–41). Crucial in this decision is the identification of the audience as a key stakeholder in the documentation and exhibition of the work. Essential, too, is the fact that SFMOMA, which had originally commissioned Agent Ruby, subsequently archived the work, which was thereafter preserved but no longer allowed to grow online. This raises interesting questions as to what current audiences engaging with Agent Ruby make of a work that still appears to be interactive but is no longer able to search the internet.
We prompted ChatGPT to state how it would build on the existing documentation of Agent Ruby. It replied that it would preserve existing documentation but also incorporate new elements ‘to reflect contemporary themes and technologies’. Thus, it suggested digitising and archiving all existing documentation relating to the original work and ensuring it is stored in an accessible format. It also suggested creating an interactive platform dedicated to the work and then putting the user documentation online so that it could be searchable. Moreover, it suggested continuing to train Ruby to respond to a broader range of enquiries and user interactions. It called Agent Ruby a project and suggested that its documentation ought to capture the work’s contemporary relevance, community engagement, and impacts, and offer a contextual analysis of how the field evolved, one that included AI and possible partnerships with AI researchers to explore new applications and advancements in conversational AI technology. Finally, it suggested that the work should foster community engagement and include a range of educational resources, including lesson plans and curated readings, so that the work could be treated as a ‘dynamic and evolving project that continues to adapt to technologies and user expectations’. However, ChatGPT did not seem to take into consideration that the artwork had become part of SFMOMA’s collection, so that any update would require a collaboration with both the artist and the museum. Moreover, as a documentation tool ChatGPT was not able to offer a more precise or detailed documentation of the artwork, but it was able to speculate as to what the artwork might become by drawing on research about other works by the artist, some of which distinctively included lesson plans and curated readings.
Cat Royale
Cat Royale was produced by the Brighton-based company Blast Theory in 2023 as part of a wider research project, the Trustworthy Autonomous Systems Hub, funded by UKRI. The work explored the impact of AI on humans and animals, offering key insights into the design of multispecies worlds and showing that humans should be part of any AI ecosystem (Schneiders et al. 2024). For it, three cats lived inside an environment created by the artists. Blast Theory indicated that the AI made sure that every need of the cats was catered for, so that they had food, drink, air conditioning, and many hours of play provided by a robot arm controlled by the AI, which over time learnt which of the 500 play activities it provided the cats liked best. The artists followed a complex ethical procedure that ensured the comfort and safety of the cats thanks to advice provided by animal welfare organisations that had been involved in the design of the project, including staff from the RSPCA who supervised the work throughout its twelve days.
The work has so far been documented by the audience, mainly through social media, by Blast Theory, and by the interdisciplinary research team, largely to develop papers. Most of Blast Theory’s documentation is formed by video clips and social media prompts, and most audience comments were responses to these prompts. Available documentation includes iterative design documents, 8 hours of footage of the cats from 8 cameras, ethnographic field notes and live video of the work, minutes from the debriefing meetings, and ethical papers, as well as a ‘crisis communications plan’.
As far as Cat Royale is concerned, ChatGPT noted that, since the work used AI, documenting it would imply capturing both the live performance and the interactions facilitated by the AI technology. It also suggested that the work consisted of a mixture of performance, gaming elements, and AI technology. It stated that player conversations and interactions with participants could be analysed and participant feedback obtained, without recognising that on this occasion the ‘players’ were not human. Nor did it pick up on the fact that Cat Royale was part of a larger research project which had made key findings in terms of using AI in art (Schneiders et al. 2023; Schneiders et al. 2024; Benford et al. 2024). It did, however, suggest that the documentation should include the conceptual framework, live performance, AI interactions, technological development, audience experience, historical context, and legacy and impact. This is an example of how the lack of available and accessible online information can lead AI to hallucinate and generate misinformation. As in the case of Hershman Leeson’s work, the AI was able to speculate about how to expand the work but was unable to provide current information about it, as key publications had appeared after its last update.
Discussion
In this section we discuss ChatGPT’s responses to prompts about documentation in relation to our key case studies. It is clear from the responses to our prompts that the potential of AI in documentation is proportionate to what has already been documented and is available to it. This means that the use of AI for contextualisation, for audience documentation, for documenting the large quantities of data pertaining to different iterations of a work, and for the documentation of change (key parameters that current stakeholders in documentation struggle with due to the time-consuming nature of the work), as well as the use of AI in the generation of ‘live’ dynamic archives, could all be built on, as ChatGPT illustrated, provided the AI has access to existing documentation. This may seem an easy proposition, but in fact, due to matters to do with IP, data protection, and privacy, most museum documentation is not published online.
AI could process documentation specifically in relation to different stakeholders, or networks of care, paying attention to the individual actors and groups interacting with each other and with the AI both at a specific moment in time and over time. Moreover, AI could compare records from different museums and illustrate what a work could become if different conservation parameters were applied. When AI is both the creator and the documenter of an artwork, so that the documentation may be both a historic record and a specific iteration of the artwork, additional complexity emerges that could lead to the production of historically deep inter-documents. In this sense, AI could generate documents, artworks, and archives, and keep them connected to each other in a complex rhizomatic structure. This shows that AI could be extremely useful for the documentation, generation, and preservation of intangible cultural heritage, but only insofar as it is cared for by an ecosystem that makes explicit and visible the repercussions of these activities, including its own biases. More specifically, when AI is both a creator and documenter of a work, it must highlight what priorities of responsible innovation were used (Jirotka et al. 2017). Hence, the design process should be inclusive of different stakeholders and should anticipate the risks and challenges associated with the use of huge volumes of data, including their provenance, the reliability of their metadata, privacy, IP, EDI, and user feedback.
As ChatGPT pointed out, there is no doubt that the field of art documentation is facing challenges related to keeping up with new technology, data protection, IP, privacy and security, maintaining standards and consistency, and addressing the changing nature of art, including art and archives created by AI. We therefore prompted ChatGPT to state what it can do for the field of documentation. It answered that it could help to make sense of multitudes of documents, create new archival forms, and operate as a time machine. It not only identified the importance of using different disciplinary perspectives and forms of data capture but also referred to collaborative practices involving artists, technologists, archivists, and researchers, who could be involved in iterative prototyping to develop documentation tools and methodologies tailored to the specific needs of each artwork. Finally, it suggested the use of image recognition, data analysis, 3D modelling, and natural language processing that could search catalogues and datasets to offer a better understanding of the artworks, concluding that AI ‘cannot fully replace the knowledge and experience of trained professionals, such as art historians, conservators, and curators’, so that AI would be ‘a complementary tool to human experience in art documentation’.
We also prompted ChatGPT to state what it thought a dynamic archive was, and it described it as an evolving and interactive repository of different types of resources facilitating personalisation (or customisation), agility, contextualisation, version control, automated indexing, collaboration, accessibility, and feedback: an empowering ‘user-centric’ definition that was not too far from the one we had started with in our first BRAID/AHRC-funded workshop. Even though Agent Ruby and Cat Royale were realised at different moments in time using different technologies, ChatGPT dealt with both of them in the same way, suggesting how they could be built on for future audiences. Interestingly, its approach was documentary, archival, and generative, in the sense that, to use Derrida’s expression, ChatGPT utilised the works and their documentation as mechanisms for the production of further artworks. In other words, AI treated documentation as a strategy for cultural production over which it seemed to maintain authority, though it made clear that its work could not replace the knowledge and experience brought by existing professionals in the field.
While AI appears to behave like an authoritative archive, it can only use knowledge made available to it, which means there is a risk that it may produce biased and inaccurate answers. Key, then, is the role played by stakeholders, who can provide the AI with the knowledge needed to create reliable and trustworthy dynamic archives. When identifying stakeholders for complex artworks like Agent Ruby or Cat Royale, it is therefore crucial that several parties are represented, including the artists but also others involved in the work’s design and production, such as performers, the funders or organisations that acquired the works and are preserving their legacy, industrial partners, and invested researchers, as well as their respective audiences. This suggests that the key stakeholders of a dynamic archive’s ecosystem may be the work’s creators, funders, copyright-holders, and the end-users who experienced the work. Critical, however, is the fact that these ecosystems should make room for unpredictable factors that might emerge over time. In this sense, ecosystems must remain open and be capable of dealing with risk and governance and of anticipating change.
The value of what AI can bring to the field of documentation is likely to be proportionate to its inclusivity, and so consideration for EDI is critical here.2 AI’s capacity to process data and offer different and possibly diverse views of an event, including, for example, that of the cats in Cat Royale, depends on what the ecosystems involved in managing these archival and documentary generative processes ultimately provide to the AI. Additionally, AI needs to be accountable, and so it must learn to cite the provenance of its findings.3 This is currently, alongside bias, one of AI’s principal limitations; in fact, bias may have to do precisely with the lack of accountability for the provenance of its sources. We have seen that AI hallucinates: when it does not know something, it simply makes it up on the basis of what is the most likely answer. Instead, a healthy AI ought to remember where the documents it processes stem from, so that it can avoid meshing everything into one summative, biased, autocratic view. In conclusion, AI works very well at the level of inter-document generation, but it cannot and should not make up documents without making that process explicit. This is why, ultimately, a healthy and deep AI ecosystem should be invested not only in a trustworthy but also an ethical, accessible, sustainable, human-in-the-loop directed AI, so that it can itself remain engaged in ethics, accessibility, sustainability, transparency, and EDI.
Notes
- Matters in Media Art (2005) http://mattersinmediaart.org. [^]
- For this see also The Ethical Data Initiative, https://ethicaldatainitiative.org/2024/05/22/the-ethical-data-initiative-shaping-a-responsible-data-landscape/. [^]
- See also Novelli et al. 2024 for a recent discussion of accountability as it relates to AI systems. [^]
Acknowledgements
We gratefully acknowledge BRAID and the AHRC who funded this project (BRAID/AHRC OPP18206).
Competing interests
The authors have no competing interests to declare.
Author Information
Gabriella Giannachi is Professor in Performance and New Media at the University of Exeter, UK, where she directs the Centre for Intermedia and Creative Technology. She has published several books on new media and documentation including Performing Presence: Between the Live and the Simulated, co-authored with Nick Kaye (2011); Performing Mixed Reality, co-authored with Steve Benford (2011); Archaeologies of Presence, co-edited with Michael Shanks and Nick Kaye (2012); Archive Everything (2016, trans. Italian 2021, repr. 2023); Histories of Performance Documentation, co-edited with Jonah Westerman (2017, trans. Chinese 2025); Documentation as Art: Expanded Digital Practices, co-edited with Annet Dekker (2022); and Technologies of the Self-Portrait (2022, trans. Italian 2023). Giannachi is currently working on several funded projects on the use of AI for documentation.
Steve Benford is the Dunford Professor of Computer Science at the University of Nottingham, where he co-founded the Mixed Reality Laboratory. He is a UKRI-funded Turing AI World Leading Research Fellow exploring ‘Somabotics: Creatively Embodying Artificial Intelligence’. He also directs the EPSRC-funded Horizon Centre for Doctoral Training. Steve’s research explores artistic applications of digital technologies through performance-led methods that engage artists in creating, touring and studying unique interactive experiences.
Lydia Farina is Assistant Professor in the Department of Philosophy at the University of Nottingham, which she joined in 2019. Her current research focuses on the philosophy of mind and metaphysics; more specifically, she is looking into the nature of emotion, responsibility and AI, and human/AI interaction. She has published on AI in several journals and edited collections, including ‘Algorithmic processing and AI bias’ (in Monti and Albano 2025) and ‘Artificial Intelligence Systems, Responsibility and Agential Self-Awareness’ (in Müller 2021). She is the Principal Investigator of ‘Responsible use of AI in the creation, archiving, reactivation and conservation of artworks and their archives’, funded by AHRC/BRAID. She was Principal Investigator of ‘Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI’, part of the BRAID Scoping to Embed Responsible AI in Context programme, funded by the AHRC.
References
Benford, Steve, Giannachi, Gabriella, Koleva, Boriana, and Rodden, Tom (2009) ‘From Interaction to Trajectories: Designing Coherent Journeys Through User Experiences’, Proceedings ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2009), Boston, MA, April 5–9, 2009, ACM Press, http://doi.org/10.1145/1518701.1518812
Benford, Steve, Mancini, Clara, Chamberlain, Alan, Schneiders, Eike, Castle-Green, Simon, Fischer, Joel, Kucukyilmaz, Ayse, Salimbeni, Guido, Ngo, Victor, Barnard, Pepita, and Adams, Matt (2024) ‘Charting Ethical Tensions in Multispecies Technology Research through Beneficiary-Epistemology Space’, in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 1–15, http://doi.org/10.1145/3613904.3641994
Bolukbasi, Tolga, Chang, Kai-Wei, Zou, James, Saligrama, Venkatesh, and Kalai, Adam (2016) ‘Man is to computer programmer as woman is to homemaker? Debiasing word embeddings’, in Proceedings of the 30th International Conference on Neural Information Processing Systems, Red Hook, NY: Curran Associates Inc., 4356–4364, http://doi.org/10.48550/arXiv.1607.06520
Briet, Suzanne (2006 [1951]) ‘What is Documentation?’, trans. Ronald E. Day and Laurent Martinet with Hermina Anghelescu, Lanham, MD: Scarecrow Press.
Brost, Amy (2018) ‘Documenting an Artwork’, MoMA (11–15 June 2018), https://vimeo.com/287112070/7ce7a968e6.
Castriota, Brian (2021) ‘Object Trouble: Constructing and Performing Artwork Identity in the Museum’, ArtMatters International Journal for Technical Art History, 12–22.
Colavizza, Giovanni, Blanke, Tobias, Jeurgens, Charles, and Nordegraaf, Julia (2021) ‘Archives and AI: An Overview of Current Debates and Future Perspectives’, ACM Journal on Computing and Cultural Heritage, 15:1, 1–15, https://arxiv.org/abs/2105.01117
Dekker, Annet (2018) Collecting and Conserving Net Art. Moving Beyond Conventional Methods. London: Routledge.
Dekker, Annet, and Giannachi, Gabriella (eds) (2023) Documentation as Art, London and New York: Routledge.
Derrida, Jacques (1996 [1995]) Archive Fever, trans. E. Prenowitz, Chicago: The University of Chicago Press.
Drury, Megan, Miles, Oliver, Brundell, Pat, Farina, Lydia, Webb, Helena, Giannachi, Gabriella, Benford, Steve, Moore, John, Jordan, Spencer, Perez-Vallejos, Elvira, Stahl, Bernd, and Vear, Craig (2024a), ‘Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI’, BRAID/AHRC OPP18206, 2, Workshop Report 2.
Drury, Megan, Miles, Oliver, Brundell, Pat, Farina, Lydia, Webb, Helena, Giannachi, Gabriella, Benford, Steve, Moore, John, Jordan, Spencer, Perez-Vallejos, Elvira, Stahl, Bernd, and Vear, Craig (2024b), ‘Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI’, BRAID/AHRC OPP18206, 3, Workshop Report 3.
Farina, Lydia, Webb, Helena, Giannachi, Gabriella, Benford, Steve, Moore, John, Stahl, Bernd, Perez-Vallejos, Elvira, Jordan, Spencer, Vear, Craig, Drury, Megan, Miles, Oliver, and Brundell, Pat (2024) ‘Structuring a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI’, BRAID/AHRC OPP18206, 4.
Giannachi, Gabriella (2016) Archive Everything, Cambridge, Mass.: The MIT Press.
Giannachi, Gabriella, Lowood, Henry, Rowland, Duncan, Benford, Steve, and Price, Dominic (2011) ‘Cloudpad – a cloud-based documentation and archiving tool for mixed reality artworks,’ in ADHO Stanford.
Giannachi, Gabriella, and Westerman, Jonah (2018) Histories of Performance Documentation, London and New York: Routledge.
Hershman Leeson, Lynn (2014) ‘Robot Dialogue: Hershman and Agent Ruby’, https://www.youtube.com/watch?v=XLOgMgCNC_w
Hölling, Hanna (2016) ‘The aesthetics of change: on the relative durations of the impermanent and critical thinking in conservation’, in Erma Hermens and Frances Robertson (eds) Authenticity in Transition: Changing Practices in Art Making and Conservation, London: Archetype Publications, 13–24.
Jaillant, Lise (2022a) ‘How can we make born-digital and digitised archives more accessible? Identifying obstacles and solutions’, Archival Science, 22, 417–36.
Jaillant, Lise (2022b) (ed.) Archives, Access and Artificial Intelligence, Bielefeld: Bielefeld University Press.
Jaillant, Lise, and Caputo, Annalina (2022) ‘Unlocking digital archives: cross-disciplinary perspectives on AI and born-digital data’, AI and Society, 37, 823–835.
Jaillant, Lise, and Rees, Arran (2023) ‘Applying AI to digital archives: trust, collaboration and shared professional ethics’, Digital Scholarship in the Humanities, 38: 571–585.
Jirotka, Marina, Grimpe, Barbara, Stahl, Bernd, Eden, Grace, and Hartswood, Mark (2017) ‘Responsible research and innovation in the digital age’, Communications of the ACM, 60:5, 62–68, http://doi.org/10.1145/3064940
Kharchenko, Julia, Roosta, Tanya, Chadha, Aman, and Shah, Chirag (2024) ‘How Well do LLMs Represent Values Across Cultures? Empirical Analysis of LLM Responses Based on Hofstede Cultural Dimensions’, https://arxiv.org/html/2406.14805v1#bib
Laurenson, Pip (2006), ‘Authenticity, Change and Loss in the Conservation of Time-Based Media Installations’, Tate Papers, 6, https://www.tate.org.uk/research/publications/tate-papers/06/authenticity-change-and-loss-conservation-of-time-based-media-installations.
Laurenson, Pip (2016 [2013]) ‘Old Media, New Media? Significant Difference and the Conservation of Software-based Art’, in Beryl Graham (ed.) New Collecting: Exhibiting and Audiences after New Media Art, London: Routledge, 73–96.
Marçal, Helia, and Gordon, Rebecca (2023) ‘Affirming future(s): towards a posthumanist conservation in practice’, in Daigle, Christine and Hayler, Matthew (eds) Posthumanism in Practice, London: Bloomsbury, 165–178.
Marciano, Richard, Lemieux, Victoria, Hedges, Mark, Esteva, Maria, Underwood, William, Kurtz, Michael, and Conrad, Mark (2018) ‘Archival Records and Training in the Age of Big Data’ in J. Percell, L. C. Sarin, P. T. Jaeger, and J.C. Carlo Bertot (eds) Advances in Librarianship, Bradford: Emerald Publishing Limited, 44:179–99.
Miles, Oliver, Farina, Lydia, Webb, Helena, Giannachi, Gabriella, Benford, Steve, Moore, John, Jordan, Spencer, Perez-Vallejos, Elvira, and Vear, Craig (2024), ‘Creating a Dynamic Archive of Responsible Ecosystems in the Context of Creative AI’, BRAID/AHRC OPP18206, 1, Workshop Report 1.
Novelli, Claudio, Taddeo, Mariarosaria, and Floridi, Luciano (2024) ‘Accountability in artificial intelligence: what it is and how it works’. AI & Society, 39, 1871–1882. http://doi.org/10.1007/s00146-023-01635-y
Padilla, Thomas (2019) ‘Responsible Operations. Data Science, Machine Learning, and AI in Libraries’, Dublin, OH: OCLC Research.
Schneiders, Eike, Benford, Steve, Chamberlain, Alan, Mancini, Clara, Castle-Green, Simon, Ngo, Victor, Row Farr, Ju, Adams, Matt, Tandavanitj, Nick, and Fischer, Joel (2024) ‘Designing Multispecies Worlds for Robots, Cats, and Humans’, in Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), May 11–16, 2024, Honolulu, HI, USA. New York, NY: ACM, 1–16, http://doi.org/10.1145/3613904.3642115
Schneiders, Eike, Chamberlain, Alan, Fischer, Joel, Benford, Steve, Castle-Green, Simon, Ngo, Victor, Kucukyilmaz, Ayse, Barnard, Pepita, Row Farr, Ju, Adams, Matt, Tandavanitj, Nick, Devlin, Kate, Mancini, Clara, and Mills, Daniel (2023) ‘TAS for Cats: An Artist-led Exploration of Trustworthy Autonomous Systems for Companion Animals’, ACM, http://doi.org/10.1145/3597512.3597517
Tromble, Meredith (ed.) (2005) The Art and Films of Lynn Hershman Leeson, Berkeley: University of California Press.
Upward, Franklyn Herbert (2005) ‘The records continuum’ in McKemmish, Sue, Piggott, Michael, Reed, Barbara, Upward, Franklyn Herbert (Eds) Archives: Recordkeeping in society, Wagga Wagga, NSW, Australia: Centre for Information Studies, 197–222.
Van de Vall, Renée, Hölling, Hanna, Scholte, Tatiana, and Stigter, Sanneke (2011) ‘Reflections on a biographical approach to contemporary art conservation’, in Bridgland, Janet (ed.) Preprints of the ICOM-CC 16th Triennial Conference, Lisbon: Critério, 1–7.
Winters, Jane, and Prescott, Andrew (2019) ‘Negotiating the Born-Digital: A Problem of Search’, Archives and Manuscripts, 47:3, 391–403, http://doi.org/10.1080/01576895.2019.1640753.