e_motions in process

Abstract

This research project maps virtual emotions. Rauch uses 3D-surface capturing devices to scan facial expressions in (stuffed) animals and humans, which she then sculpts in 3D virtual space with the Phantom Arm/SensAble FreeForm device. The results are rapidform-printed objects and 3D animations of morphing faces and gestures.

Building on her research into consciousness studies and emotions, she has developed a new artwork that reveals characteristic aspects of human emotions (laughing, crying, frowning, sneering, etc.) and that utilises new technology, in particular digital scanning devices and special-effects animation software. The proposal is to use a 3D high-resolution laser scanner to capture animal faces and, using the data of these faces, animate and then combine them with human emotional facial expressions. The morphing of the human and animal facial data is not merely a layering of the different scans: by applying an algorithmic programme to the data, crucial landmarks in the animal face are merged so that they match those of the human. The results are morphings of the physical characteristics of animals with the emotional characteristics of the human face in 3D.
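
The landmark-merging step can be pictured with a short sketch. The following Python fragment is a minimal illustration under stated assumptions, not the project’s actual programme: it assumes the two scans have already been annotated with matched landmark sets, and all names and parameters here are hypothetical. The first function blends the matched landmarks; the second drags the scan’s vertices along with the landmark displacements.

```python
import numpy as np

def morph_landmarks(animal_lm, human_lm, t):
    """Blend two matched (L, 3) landmark arrays.

    t = 0.0 returns the pure animal geometry, t = 1.0 the pure human one.
    """
    return (1.0 - t) * animal_lm + t * human_lm

def warp_mesh(vertices, src_lm, dst_lm, sigma=5.0):
    """Drag scan vertices along with the landmark displacements.

    vertices: (V, 3) scan mesh points; src_lm, dst_lm: (L, 3) matched
    landmarks before and after blending. Each vertex moves by a
    normalised Gaussian-weighted blend of the per-landmark
    displacements, so it follows its nearest landmarks most closely.
    sigma is in the scan's units and is a tuning assumption.
    """
    # (V, L) distances from every vertex to every source landmark
    d = np.linalg.norm(vertices[:, None, :] - src_lm[None, :, :], axis=2)
    w = np.exp(-(d / sigma) ** 2)               # Gaussian falloff
    w /= w.sum(axis=1, keepdims=True) + 1e-9    # normalise per vertex
    return vertices + w @ (dst_lm - src_lm)     # apply blended displacement

# Hypothetical usage: pull a fox scan halfway towards a human expression.
# fox_vertices (V, 3), fox_lm and human_lm (L, 3) are assumed inputs.
#   target_lm = morph_landmarks(fox_lm, human_lm, t=0.5)
#   hybrid = warp_mesh(fox_vertices, fox_lm, target_lm)
```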

The focus of this interdisciplinary research project is a collaborative practice that brings together researchers from UCL in London and researchers at OCAD University’s data and information visualization lab. Rauch uses Darwin’s metatheory of the continuity of species, together with other theories on evolution and internal physiology (Ekman et al.), to re-examine previous and new theories with the use of new technologies, including the SensAble FreeForm Device, an interface that allows for haptic feedback from digital data.

Keywords

interdisciplinary research, 3D-surface capturing, animated facial expressions, evolution of emotions and feelings, technologically transformed realities

How to Cite

Rauch, B., 2011. e_motions in process. Body, Space & Technology, 10(1). DOI: http://doi.org/10.16995/bst.93

The project ‘Mapping Virtual Emotions: 3D-surface capturing of animated facial expressions in animals and humans’ adopts an interdisciplinary and practical approach to explore the study of human emotions, and in particular how we project those emotions onto animals. The animal and human faces were captured at two sites, in different stages. I started with the portable laser scanner from the SCIRIA research unit, University of the Arts London, to get data from stuffed animals; additional photogrammetric scanning was then undertaken at the Molecular Medicine Unit, UCL, to capture the human face and its expressions. As suggested in my original proposal, I have studied theories of emotions through the work of Damasio, Darwin, Ekman and LeDoux. Paul Ekman’s work was particularly relevant to the study of human facial expressions, while I returned to Darwin (and Ekman’s commentary on Darwin’s 1872 book ‘The Expression of the Emotions in Man and Animals’) for details on animal facial expressions.1

To stress the importance of emotion research and facial expressions for an understanding of conscious reality, I introduce here the paper ‘Facial expression form and function’ by Susskind and Anderson (2008), in which the authors argue for an evolutionary approach to understanding human facial expression. For example, they show that expressions of fear and/or disgust alter the biomechanical properties of the face, such that fear increases sensory exposure while disgust decreases it. This has been observed across species, namely in cows, chimpanzees and human beings.


Image: fox capture, animation stills, artwork and copyright B. Rauch, 2009.

I had the opportunity to exhibit my work in the ‘Digital and Physical Surfaces’ exhibition in the Triangle Project Space at Chelsea College of Art & Design, University of the Arts London, in 2007. The work was an installation entitled ‘Virtual Emotions’. I presented a monitor piece on a trestle table which showed an animation of a human face morphing in and out of emotional expressions. The intention was to encourage visitors to feel several emotions themselves while watching the person on screen. There were seven archive boxes in which visitors could file their own handwritten stories, one for each of Ekman’s seven universal human emotions (anger, fear, disgust, sadness, happiness, etc.). To the right of the table a 3D monitor displayed a fox’s head changing randomly to express different emotions, clearly showing signs of anger, disgust and so on. The scene was programmed so that the fox’s face appeared to be projected at some distance from the physical screen, which encouraged visitors to walk around the sculpture and attempt to look at the fox from different angles.

As human beings we are equipped to read any human facial expression. Despite cultural differences, emotion research over the last four decades shows that for most emotions a cross-cultural understanding is preserved. Expressions for basic emotions such as fear, anger, disgust, sadness or enjoyment are not culture-specific. Ekman (1998, 1999) explains in more detail that these basic emotions are expressed universally by all humans, regardless of culture, race, sex or ethnicity.2 It is a psychological fact that loss brings about sadness and threat triggers fear. Ekman employs the term basic to differentiate one emotion from another, in contrast to a position that considers ‘emotions as fundamentally the same, differing only in terms of intensity or pleasantness’ (Ekman, 1999). A second meaning of basic points to the evolutionary aspect that Ekman is interested in. A third meaning indicates that emotions have evolved to handle fundamental life tasks, among which Ekman lists achievement, loss and frustration.

Charles Darwin’s focus in ‘The Expression of the Emotions in Man and Animals’ was on emotions in other primates. His study included close observation of animals and humans. That emotions are observable in other primates is a defining characteristic of emotion; it is possible that some emotions are unique to humans, though there is no convincing evidence that this is the case. Naturally, our linguistic capacity to express and describe emotions in words changes many aspects of emotional experience.

Emotions, as mentioned above, are considered to have evolved to deal with fundamental life tasks, including life-threatening situations. For that reason emotions ought to begin quickly; often they happen before we become aware that they have begun, which allows us to respond rapidly in an emergency. Ekman (1999) emphasises that emotions happen to us: they are unbidden and usually not chosen by us.

Furthermore, Ekman (1999) emphasises that emotions regulate the way we think, which is evident in memories, imagery and expectations. Emotions are personal and subjective, and how each emotion feels is at the centre of what an emotion is. Ekman (1999) stresses that the use of questionnaires is a well-known problem: when filling in a questionnaire, people are not experiencing the emotion at that moment but merely trying to remember what it felt like.

As human beings we have to deal with fundamental life tasks, and Ekman (1999) explains that this influences how we respond to the events which mark the emotions. Often we involuntarily signal the emotion to others with a facial expression or other body language. Animals and humans alike express emotions not only in their faces, though the face is considered a marker for emotions: general posture, hand position, sequences of reactions and the voice all play crucial parts in the expression of a situation. Some expressions are a series of movements, such as the head moving down, back, forward or to the side, and hands can be added: the hand might cover part of the sad face to express shame, or it might cover the expression of enjoyment in the face to indicate coyness (Ekman, 1993).

Ekman introduces an understanding of emotions as families of emotions so as to differentiate, for example, the many shades of anger-related emotions. The English language can reflect these subtle differences and scales: we differentiate between emotions such as irritation, agitation, annoyance, grouchiness, unease, worry, shock, fright and horror. With the descriptive use of language and the self-reflection inherent to human beings, emotion in humans appears to be a more complex experience than that of non-human primates. David Matsumoto (2007: 43) illustrates this assumption with the example of moral disgust, what he terms an interpersonal version of disgust. While in the animal world a nasty object would trigger vomiting, a human being can be disgusted by others as people and react with an outburst of extreme feelings.

Matsumoto adds that humans can also feign emotions: we can lie and express something that we do not feel. Ekman (2003: 225) suggests that one take a test in reading faces; a catalogue of photos of faces and clear instructions on how to read them is appended to his book ‘Emotions Revealed: Understanding Faces and Feelings’. Subtle differences in muscle contractions around the eye, for example, can tell a true smile from a false one (the famous Duchenne smile; Darwin, 1998: 200).


Image: partners, animation stills, artwork and copyright B. Rauch, 2009.

Reading and understanding the expression on a human face is usually straightforward. One puts oneself in the position of the other and feels what the person making a particular face feels (Hansen, 2004: 158). Making a face might even generate the experience of the particular emotion expressed. This is more difficult in human-animal interaction and even more complicated in human-computer interaction. It is, however, not impossible, as Derrick de Kerckhove explains, if we consider the computer an expanded biofeedback system that can instruct and teach us how to adapt ourselves to new perceptions. This idea refers to Marshall McLuhan’s notion of the extensions of man, and de Kerckhove also draws on Bergson’s distinction ‘between perception as a virtual action of the body on things and affection as a real action of the body on itself’ (Hansen, 2004: 195 and footnote 84, p. 311). Furthermore, de Kerckhove discusses touch, a tactile modality, as we ‘[see] with the entire body’ (Hansen, 2004: 232). I would like to add that we see and feel with the entire body.3

Hansen elaborates on the shift from the visual to the affective and haptic. Exploring de Kerckhove’s argument about the disembodiment of visual experience in Virtual Reality, Hansen (2004) engages the facialization of the entire body as an imagization of affection. In Hansen’s terms, Virtual Reality is not simply the product of advances in technology and developments in computer graphics; rather, he insists that the experience of VR is grounded in the biological potential of human beings and is to be understood as a body-brain achievement. In that sense VR is not technologically but biologically grounded. This new digital Virtual Reality is an adaptation to newly acquired technological extensions provided by New Media. (In the same vein, this is further elaborated in my PhD thesis (Rauch, 2005), where, referring to Revonsuo (2006), I argued for the dreaming brain to be understood as a natural virtual reality model.)


Image: animation axes, artwork and copyright B. Rauch, 2009.

My work series ‘Virtual Emotions’ attempts to visualise an evolution of emotions on a scale that ranges from the abstract via animal emotion to the hybrid human body. The virtual digital face seems to suggest an image that does not refer to the Real, in Lev Manovich’s understanding of the term; the new media image has changed our understanding of what an image is: we zoom, we click, we are the active users of the digital image. Furthermore, Manovich describes the new image as process, because the image can no longer be restricted to the level of surface appearance (Hansen, 2004: 10). The image must be extended to encompass the entire process by which information is made perceivable through embodied experience. Hansen explains the digitization of a facial image as interfacing with the digital, and uses the digital face to explain affect as interface.

Gabriele Buzzi (2007), in ‘Expression and Dévisage: the face’s signification from art to reality’, describes the face as the most analogical part of the body. She explains how difficult it is to recreate it digitally. This is probably the same challenge that artists have felt for centuries when trying to depict expressions in the human face. Franz Xaver Messerschmidt and Charles Le Brun are two notable artists in this respect. Messerschmidt’s ‘Grimacing head No. 13 “Der Speyer” (lost)’ and Le Brun’s hybrid heads depict animal expressions in humans. Le Brun’s drawings return me to the evolutionary account of emotional expressions. The drawings date from the 17th century and yet they are not unlike my recently-generated computer graphics.

Without doubt, emotions are evolving as they are influenced by culture, context and behaviour; Matsumoto (2007) elucidates these three influences on human emotion. Western and Eastern societies have changed with the use of new technologies. Will our ability to read facial expressions slowly change with the new communication systems? Might people soon no longer be able to read facial expressions? Perhaps the loss of the ability to read an emotion would bring with it the loss of the experience itself.


Image: ‘3D Virtual Emotion’, animation still. Artwork and copyright B. Rauch, 2009.

My work series on virtual emotions, including works such as ‘emotional degrees’ and ‘interFaced’, manifests a return to the tangible in synthetic imagery. In addition to creating digital objects I aim to merge and morph data in mixed realities, at a time when 3D scanning devices have become more available and affordable for institutions, labs and researchers. Previously, an object existed either inside the computer or outside it; now the data can be merged, introducing new realities. This adds to building new realities for experience and emotional involvement. Researchers are more concerned with the data as it derives from actual, real objects and subjects; the obsession with the real proves enduring.

An additional new method uses holographic imagery, in either still images or short animations. I hope to visualize through critical experimentation what evolution has selected and accommodated for human emotional expressive behaviour.

At OCAD University I have access to the services and leading-edge HALORAIL camera and holographic systems of the Photon League, a not-for-profit, artist-led facility in Toronto. With these technologies I can experiment with techniques to explore and display the synthetic human-mammal expressions of emotion and gesture.

A new case study about synthetic emotions can be described briefly as a construction of the seven universal emotions performed in stages. With the holographic device I captured individual scenes that I hope to use as sketches for further study. The prototype hologram uses channels to present a series of emotional gestures that appear to occupy the same dimensional space. The transitions from image to image (emotion to emotion) are currently abrupt.
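
One plausible way to soften those cuts, sketched below purely as an assumption rather than as part of the holographic toolchain, is to generate intermediate poses between two captured emotions with an ease-in/ease-out curve before rendering them to channels. The sketch assumes each emotion is stored as a matched set of 3D landmarks; the function names are illustrative.

```python
import numpy as np

def smoothstep(t):
    """Ease-in/ease-out curve with zero slope at t = 0 and t = 1."""
    return t * t * (3.0 - 2.0 * t)

def transition_frames(pose_a, pose_b, n_frames):
    """Intermediate landmark poses easing from pose_a into pose_b.

    pose_a, pose_b: (N, 3) arrays of matched facial landmarks for two
    of the seven captured emotions; returns a list of n_frames poses.
    """
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - smoothstep(t)) * pose_a + smoothstep(t) * pose_b
            for t in ts]
```

Because smoothstep has zero slope at both ends, each emotion would appear to settle briefly before easing into the next, rather than cutting abruptly from channel to channel.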

Modern holography, depending on the subject, is able to replicate objects in a way that is virtually indistinguishable from the original. Digital holography has added the dimension of true colour to the mix of visual cues to the brain that make up reality. A study of human response to holography shows there is a direct correlation between what is seen, the number of visual cues, and the viewers’ response.

The RAIL Project (mentioned above) aims to combine 3D holography with SensAble haptic technology. This marriage of haptics and holography draws on recent developments in both fields to create a synthetic experience for the user, in which animated holographic scenes share the same dimensional space as interactive, haptically driven CG imagery.

One question that remains is whether the digital object becomes more real with new interfaces and more affordable output possibilities. It remains to be seen what hybrid technologies will come from the many synthetic experiences being developed: as people demand more, what is at stake is a heightened sense of synthetic reality.

Conclusion

The work series on facial expressions and gestural emotions uses data from diverse constructed realities. Using the above 3D technologies, researchers in the lab also study the expressions on viewers’ and subjects’ faces. Compared to current work at nanoscale levels, this work encourages ambiguity and the blurring of realities. While the latest research in nanotechnology has opened yet another insight into the world of real matter, constructivism teaches us that we cannot experience reality as it is; what we can learn is precisely what reality is not. When we observe nature, we hypothesise and continuously correct those hypotheses against our new experiences. Yet even if we try to find rules and order in nature, we can predict and calculate but we still cannot know the real.

In Leonardo (Vol. 42, Nos. 1 and 2, 2009), we find a recurring debate on nanoscale research. ‘Truth and Beauty at the Nanoscale’ and ‘Fact and Fantasy in Nanotech Imagery’ discuss the level of accuracy and truth in the imagery that this technology can offer. On the one hand, we have a tactile perspective on nanoscale particles, since we can now touch representations of the data with haptic devices; on the other, we have to accept that these images are interpretations of data. Often the software simplifies for aesthetic reasons, so we are confronted with images that are not representations of an external reality. I suggest that nanotechnology adds to an understanding of constructed, remediated and augmented realities that have no clear borders.

In ‘Media Dopplers’, Chad Scoville (2009) posits his concept of the outformation age, in which the speed of information has surpassed the level of human interactivity. Information and decisions are happening at nanoscale levels, outside of our constructed understanding of reality. Scoville touches briefly on the bizarre banality of reality television as a reflection of a failed capitalist society and a western debt class. Cultural and critical production are left to machines and software agents. ‘AI is already here, and it doesn’t look like it’s supposed to’, he argues. Search engines are more responsive to semantic inquiry; these virtu-real entities, according to Scoville, are in ‘varied states of consciousness’. The images of ourselves that we place on YouTube, Facebook, etc. are our media dopplers. He compares the act of mediating oneself to a form of time travel in which one has already been cloned, and he suggests further that we might well already live ‘in a network which is the product of this process’: ‘In the sense of the media doppler, the infinite cloning loop of extraneated superspace, informatic control mechanisms bot themselves towards complete urbanity of virtualism, … [t]he copies outnumber the originals … it is just computers talking to each other to produce more silicon’.

Rauch became interested in this kind of time travel, and in the space where the clone looks like its copy; yet at some point she returned to the actual world with the desire to produce some of these synthetic creatures, with their emotional facial expressions and features from both worlds. What reality do they belong to? And what does realism stand for in the postmodern condition?


Image: ‘interFaced’, printed objects, Z-Corporation Rapidform, size 20 x 25 x 20 cm, artwork and copyright B. Rauch, 2009.

Further Research

A new application of this research will be pursued in the e_Motion Research Project, during which Rauch will work with autistic artists. The research studies human emotions and empathy as exhibited in the facial expressions and body gestures of autistic people, captures these within a 3D environment, and manipulates the captured expressions with haptic devices to produce 2D and 3D representations (e.g. digital animation, holographic images and rapidform-processed sculptural works) for artistic and scientific analysis. The aim is to create and evaluate integrated software/hardware and collaborative paradigms as they relate to a real-world situation of emotion study.

References:

Damasio, Antonio R. (1994) Descartes’ Error: Emotion, Reason and the Human Brain. London: Papermac, Macmillan.

Damasio, Antonio R. (1999) The Feeling of What Happens. London: Heinemann.

Darwin, Charles (1872/1998) The Expression of the Emotions in Man and Animals, with introduction and afterword by editor Paul Ekman. London: Harper Collins Publishers.

Ekman, Paul (1993) ‘Facial Expression and Emotion’, American Psychologist 48 (4): 384-392.

Ekman, Paul (ed.) (1998) The Expression of the Emotions in Man and Animals. London: Harper Collins Publishers.

Ekman, Paul (1999) ‘Basic Emotions’, in T. Dalgleish and M. Power (eds.) Handbook of Cognition and Emotion. Sussex, UK: John Wiley & Sons, Ltd.

Ekman, Paul (2003) Emotions Revealed: Understanding Faces and Feelings. NY: Times Books Henry Holt and Company, LLC.

Ekman, P. & Rosenberg, E.L. (eds) (1997) What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS) (Series in Affective Science). New York/ Oxford: Oxford University Press.

Hansen, Mark B. N. (2004) New Philosophy for New Media. Cambridge, MA: MIT Press.

LeDoux, Joseph E. (1996) The Emotional Brain. New York: Simon and Schuster.

Manovich, Lev (2001) The Language of New Media. Cambridge, MA: MIT Press.

Matsumoto, David (2007) ‘Playing catch with emotions’, Journal of Intercultural Communication 10: 39-49.

McLuhan, Marshall (1964) Understanding Media: the extensions of man. New York: Mentor.

Rauch, Barbara (2005) Ph.D. Natural and Digital Virtual Realities: a practice-based exploration of dreaming and online virtual environments. London: University of the Arts London.

Rauch, B. & Harrison, D. (2006) ‘A Merging of Mindsets Through Collision and Collusion’, Technoetic Arts: a journal of Speculative Research 5 (1): 55–65.

Rauch, Barbara (2007) Digital and Physical Surfaces: presentation of practice based research. Catalogue. London: University of the Arts London.

Revonsuo, Antti (2006) Inner Presence: Consciousness as a Biological Phenomenon. Cambridge, MA: MIT Press.

Susskind, J.M. & Anderson, A.K. (2008) ‘Facial expression form and function’, Communicative & Integrative Biology 1 (2): 148-149.

Websites

Buzzi, Gabriele (2007) ‘Expression and Dévisage: the face’s signification from art to reality’. Available at http://www.vjtheory.net/web_texts/text_buzzi.htm [accessed 14 November 2010]

Charles Le Brun, Paris 1619-1690. Available at http://www.charleslebrun.com [accessed 14 November 2010]

Franz Xaver Messerschmidt, 1736–1783. Available at http://www.limmat.ch/schmid/fxm/ [accessed 14 November 2010]

Scoville, Chad (2009) ‘Media Dopplers: The Outformation Age’, in Arthur and Marilouise Kroker (eds.) Resetting Theory, rt009, CTheory. Available at www.ctheory.net/articles.aspx?id=614 [published 24 September 2009; accessed 14 November 2010]

Dr Barbara Rauch is an artist, researcher and academic in a full-time academic/research position. She is director of the Data Visualization Lab and leader of the e_Motion Research Project at the Digital Media Research and Innovation Institute. She held a two-year AHRC research grant, ‘The Personalised Surface within Fine Art Digital Printmaking’ (together with Prof P. Coldwell, FADE, University of the Arts London). As co-applicant and co-investigator she conducted several case studies with an emphasis on three-dimensional prints and screen-based works (1 May 2007-2009). The project was completed with an exhibition and symposium at the ICA, London, in November 2008 and a final conference held at the V&A in London on 3 April 2009. As acting director at SCIRIA, Rauch led the ‘Virtual eMotions’ research group, which investigated emotions and in particular human facial expressions. The project was a continuation of an AHRC-funded research project, ‘Mapping Virtual Emotions: 3D-surface capturing of animated facial expressions in animals and humans’, completed in June 2007. In August 2009 Rauch was recruited as a key member of OCAD University’s Digital Futures Initiative, an innovative research and curriculum development initiative that brings together ICT, art and design.
