Video 1 


1. Introduction

Music for Brainwaves focuses on sound, space, physiological data (EEG), and perception. From a compositional point of view, and in contrast to Lucier’s Music for Solo Performer, where EEG triggers physical movements on percussion instruments, Music for Brainwaves explores a sound continuum that is likewise based on EEG data but is generated by an algorithm. The sound continuum is based on Xenakis’s algorithm Gendy3, originally adapted by Nick Collins for the SuperCollider software and then ported to the Max/MSP software by Stephen Lumenta (Di Nunzio, 2013). The algorithm produces a chaotic continuous sound that is, metaphorically and aesthetically, intended to reflect what the sound of the firing neurons might be.

Music for Brainwaves is based on an interface employed to collect physiological data from a performer. The algorithm then processes the collected data, and the resulting sound is projected into a (resonant) performance space; the result is heard by the performer, and thus included in a neurofeedback loop. Section 1 introduces the definition of the hyperbiological space and how the performer primarily perceives it. The section then moves to the definitions of EEG, Brain-Computer Interfaces (BCI), Brain-Computer Musical Interfaces (BCMI), and neurofeedback. Section 2 introduces the origin of the piece, influenced by Alvin Lucier’s Music for Solo Performer. It shows how, from the 1960s onward, such devices aroused the curiosity of other composers, up to the present day. The section ends with the ‘raison d’être’ of the piece. Section 3 presents the development of the performance, based on improvisation and body postures, and then moves to the Gendy3 algorithm. The section also introduces the ex-NSA Teufelsberg listening station in Berlin (Figure 1), where the reception of the hyperbiological space happened through neurofeedback, and how this may relate to ancient sacred architectures in terms of resonance and perception. Section 4 offers a conclusion by discussing the development of the hyperbiological space and by providing future research plans in the field.

Figure 1 

Ex-NSA Teufelsberg Listening Station, Berlin (Forcucci).

1.1. Hyperbiological Space

The hyperbiological space is an augmented peripersonal space, which is defined as ‘the space immediately surrounding our bodies’ (Rizzolatti et al., in Holmes and Spence, 2004: 94), and is proposed as a conceptual relationship of space, sound, body, and physiological data (EEG). The relationship within the performer/computer network delineates the hyperbiological space, where physiological data processed by the computer are sent as sonic information to the performance space, where they are, in turn, received and processed by the performer. The performer’s reception of the sound closes the loop, causing the corresponding modulation of physiology known as neurofeedback. Vernon proposes neurofeedback as:

A sophisticated form of biofeedback based on specific aspects of cortical activity. It requires the individual to learn to modify some aspect of his/her cortical activity. This may include learning to change the amplitude, frequency and/or coherence of distinct electrophysiological components of one’s own brain (2005: 347).

Accordingly, considerations in higher-dimensional spaces are not discussed here, nor are mathematical models proposed. However, research in such areas is useful in understanding abstracted dimensions related to a space where biological information is included. In particular, Lewis Carroll’s ideas about higher-dimensional spaces emerge from the great nineteenth-century German mathematician Georg Bernhard Riemann, who demonstrated that these universes obey their own inner logic (Kaku, 1995: 22, 23). The space sensed through neurofeedback by the performer includes the idea of the ‘higher-dimensional space’. Such a sensed space appears when the performer hears, in a very resonant space, the sonic result created by his own EEG. The hyperbiological space is a complex, dynamic, subjective internal and external space in a closed loop provided by neurofeedback, and is thus a cognitive-architectural space. Roy Ascott proposes the term ‘cyberception’ to explain the relationship between our selves and the mediated world, and how we are augmented:

Cyberception involves a convergence of conceptual and perceptual processes in which the connectivity of telematic networks plays a formative role (…) It redefines our individual body, just as it connects all our bodies into a planetary whole (2003: 320, 321).

Although the experience of augmentation of the self in Music for Brainwaves does not involve telematic mediation through the Internet as proposed by Ascott, a network does exist in the relationship among body, physiological data, sound, and space through neurofeedback.

1.2. EEG, BCI, BCMI and Neurofeedback


Human EEG (electroencephalogram) measurements, and in particular the alpha wave, were first recorded by the German neurologist Hans Berger in 1924 (Millet, 2001: 522). EEG refers to the procedure of measuring the electrical activity of the firing neurons in the brain. The data are then filtered to obtain different frequencies, which have different functions. Without proper processing and filtering, EEG data appear essentially random.

BCI (Brain-Computer Interfaces)

Brain-Computer Interfaces (BCI) monitor the electrical activity of the brain (EEG). In the last decade, they have developed as extensions of the body used to control computers, prostheses, or wheelchairs. BCI have appeared in many domains beyond neuroscience and medicine, such as video games, media art, and music, among other fields. The next decade could unlock other opportunities, according to Lance et al.:

Based on advances in sensor technologies, analysis algorithms, artificial intelligence, multi-aspect sensing of the brain, behaviour, and environment through pervasive technologies, and computing algorithms will be capable of collecting and analysing brain data for extended time periods and are expected to become prevalent in many aspects of daily life (2012: 13).

BCMI (Brain-Computer Music Interfacing System)

As an extension of the BCI, the BCMI is an interface that is specifically directed toward the production of music. The idea is directly inspired by Alvin Lucier’s piece Music for Solo Performer, and has since evolved according to the power of computers and the availability of affordable devices (Miranda, 2014: 1–27). For the present work, the IBVA (Interactive Brainwaves Analyser System) BCI was used. The IBVA, developed by Masahiro Kahata, was one of the earliest affordable systems. It filters the raw EEG data into four frequency bands named alpha, beta, delta and theta, ranging from 2 Hz to 45 Hz (Miranda, 2014: 202). Music for Brainwaves focuses on the alpha frequency, as in Alvin Lucier’s Music for Solo Performer.


Neurofeedback

Budzynski et al. summarise the principle of neurofeedback as follows:

By watching and listening to real-time multimedia representations of its own electrical activity (EEG), the brain can improve its functionality and even its structure (Budzynski et al., 2009: xxi).

2. Background

The major influence on Music for Brainwaves came from Alvin Lucier’s piece, Music for Solo Performer. The points of convergence lie in the ideas of control and of energy data (EEG), present as a main component in Music for Brainwaves. The control of the body through a meditative state releases the energy data transferred to an algorithm, as proposed by Lucier:

Dewan described to me this phenomenon that had to do with visualization, that by putting yourself in a non-visual state, it would be called a meditative state now, you could release the potential of the alpha that is in your head. It’s a very small amount, but it would become perceptible, at least to an amplifier (…) Actually, it doesn’t sound like anything because it’s ten hertz and below audibility; it isn’t a sound idea, it’s a control of energy idea (1995: 48, 50).

During the same period, composers and artists were collaborating with engineers to integrate new technology and thus discover new tools for new forms of music and art. A noteworthy example is a series of performances in New York in 1966 called 9 Evenings: Art, Theatre and Engineering. These events were developed by artists and engineers and ‘endeavoured to reassess a legendary series of ten experimental performances that were presented at New York’s 69th Regiment Armory on East 25th Street in October, 1966’ (Garwood, 2007: 36). Among them was the performance of Cage’s Variations VII:

Cage was performing an unscored work for the first time, attempting a live broadcast of all the sounds in the world at once. Variations VII, like other Cage compositions, departed from art-making as a purely pictorial process and moved it toward the spatial, experiential, and conceptual. This particular work highlighted the soup of invisible frequencies in the realm of immediate experience (Garwood, 2007: 40).

Music for Brainwaves relates to Cage’s work Variations VII through the ‘soup of invisible frequencies in the realm of immediate experience’ described by Garwood, since the performance brings invisible frequencies, as EEG from the performer’s body, into the performance space. Consequently, Music for Brainwaves attempts to bring brainwaves into the musical realm, which includes the notion of making an invisible activity perceptible. David Rosenboom, one of the composers most closely allied with brainwave music, Richard Teitelbaum, with his pieces Organ Music and In Tune of 1968 (Branden, 2011: 132), and, later, Pauline Oliveros explored EEG for musical purposes around the same period. Rosenboom describes the first experiments with EEG technology, from the 1930s:

While listening to his own alpha rhythm presented through a loudspeaker, Adrian tried to correlate the subjective impression of hearing the alpha come and go with the activity of his eyes (Adrian and Mathews, 1934). Inevitably, artists with an experimental bent would come to apply this – and subsequent developments in brain science – to both artistic production and research in artistic perception (1990: 48).

Improvements in technology, and subsequent developments in brain science, have led to more affordable devices; advances in neuroscience research over the last decade have fed a growing interest in, and a rediscovery of, music produced with EEG. Multiple directions have emerged. Among them, notable examples may be found in the work of:

‘Wellenfeld’ quartet

The quartet includes Rudolf, Joke Lanz, GX Jupitter-Larsen and Mike Dando. The sound and the procedures developed in this performance are close to those of Music for Brainwaves:

The performance took place without prior tests or rehearsals. The performer develops and gains control over his own brainwave patterns during the performance, by listening to the sonic results of his mental activity (Eb.Er et al., 2014).


Stelarc

In our correspondence, Stelarc wrote the following about his use of EEG:

My use of amplified body signals and sounds, including EEG – both for sound and control purposes – occurred primarily from 1972 to approximately 1986. These were amplified live as part of body installation performances (and from 1980 with my Third Hand).

I do not have a music background although I’ve always used sound in my performances. So there are no scores as such. The performances began when the body was switched on and they ended when it was switched off. The sounds varied through partly physiological control (control of breathing, state of relaxation and tension and muscle tension) and partly through body fatigue. So a cacophony of sound was generated that varied in complexity and density depending on what sounds were switched on and off and were happening separately or synchronously.

Furthermore, in a 2001 interview with Linz, Stelarc mentioned:

In the late ‘60s there was a lot of interest in biofeedback mechanisms (…) For me there was a desire to make sound as part of a body’s motion. It wasn’t a case of making the piece more dramatic through the use of sound, but rather that I’d always taken a multisensory approach to art. The premise of amplifying the body sounds was to articulate what’s happening inside the body, and the possibility of monitoring these signals enabled a kind of structural relationship (2001).

Music for Brainwaves relies, in terms of sonic aesthetics and improvisation, on the performance piece Wellenfeld by GX Jupitter-Larsen. Music for Brainwaves also relates to Stelarc’s views with its aim ‘to articulate what’s happening inside the body’, where ‘the possibility of monitoring these signals enabled a kind of structural relationship’. Music for Brainwaves is influenced structurally by action music, where ‘the composer prioritizes exploration of performative actions as opposed to investigation of particular sonic parameters in the creation of this music’ (Kojs, 2009: 286). As such, the work addresses the process of making decisions about artistic systems, and therefore how the body replaces the art object in determined environments, as proposed by LaBelle:

Cage addressed the very act of making decisions, the artist being understood not so much as the maker of objects but as an individual in the act of making decisions as to what, how, and where art take place and the systems by which to initiate its production (…) The body literally comes to replace the art object, for it pushes up into the realm of form to such a degree as to explode definition and the literal lines of material presence (2006: 54, 55).

Following LaBelle’s claims, Music for Brainwaves includes not only the body as a work of art, but also the decisions from the composer articulated within the development of the performance through the following procedures and decisions:

  1. Find a very resonant space;
  2. Wear the EEG device and start the algorithm;
  3. Sit for as long as you feel it necessary;
  4. Then lie on the floor, for as long as you feel it necessary;
  5. Sit again for as long as you feel it necessary;
  6. Take a text and read it mentally;
  7. End the performance or start again from the beginning if you feel it necessary.

The dramaturgy of the piece, or its ‘raison d’être’, resides in the idea of investigating the potential of a form of music generated directly by the inner experience of the body. How, in this context, can the audience perceive the movement and the influence of the brainwaves on the sound without relying on visual information? A direct relation between actions and reactions and their visualisation would add a too-predictable flavour to the composition, and probably a distraction from it. Instead, creating a focus on listening, in particular on the relation between sound and (very resonant) space rather than on the visual aspect of the work, demands a greater participation from the audience; when the influence of the brainwaves is approached as a change in the sonic cloud, as artefacts in the sound, it can lead to dedication rather than distraction, because the attention of the audience is directed towards few movements and events. During the development phase of the project, trials included other performers, such as a cellist and a dancer. The former was very convincing in terms of musical interactions and gestural presence towards the audience; the latter provided an interesting formal perspective, given the contrast between a moving dancer and an immobile EEG performer. Both ideas, and their visual contributions, were, however, abandoned in order to concentrate on a purely (hyperbiological) spatial relation among an architectural resonant space, sound, and physiological data (EEG) from the performer.

3. Composition

Music for Brainwaves is based on the Gendy3 algorithm by Iannis Xenakis, implemented in the software Max/MSP. The performance relies on four different pre-defined parameter settings of Gendy3, which structure the movements of the composition; each movement lasts three minutes.

3.1. Performance

Actions for the performance are pre-determined yet improvised in length, according to the performance space and the audience. Improvisation gives a more flexible range of possibilities; the actions (i.e. sitting, lying on the floor, and reading a text) provide:

  • Different sonic gestures;
  • Different states of consciousness, however limited by the duration of the performance.

The algorithm is based on aleatory procedures, and thus the results are not always predictable; they are mainly changes in the frequency, amplitude, and timbre of the sound. The differences in brainwave activity modulate the sonic cloud from the Gendy3 algorithm in the resonating space, and the resulting sonic artefacts alter the sonic continuum. No action has a direct effect on the sound; there is always a delay. Along the same lines, Birringer describes the sensation of mediations in Prehn’s work, as follows:

Signals generated through electro-physiological monitoring of vital data (…) Prehn strives to concentrate not on semiotic processes of sense-making but on the immediate physical and emotional experience of the endo-movement, so to speak, the movements inside the body. For Prehn, such experiences are transcendental, ecstatic. They even resemble the hypnagogic trance states one might experience in a ‘ritualistic or liturgical’ context (…) The performer’s own immediate experience of multiple, simultaneous, fluid ‘phantoms’ of self or of the body’s signs in motion: the immediate sensations of mediations (2008: 31, 33).

The points claimed by Birringer and relevant to Music for Brainwaves are:

  • The interaction with the mediated environment, with ‘signals generated through electro-physiological monitoring of vital data’;
  • ‘The immediate physical and emotional experience of the endo-movement, so to speak, the movements inside the body’;
  • ‘The hypnagogic trance states one might experience in a “ritualistic or liturgical” context’;
  • ‘The performer’s own immediate experience of multiple, simultaneous, fluid “phantoms” of self or of the body’s signs in motion: the immediate sensations of mediations’.
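The indirect, delayed modulation described above, where no action has an immediate effect on the sound, can be sketched in code. The mapping below is purely hypothetical; the piece’s actual EEG-to-Gendy3 routing is not specified in this text. A smoothed alpha-band power reading scales a single synthesis parameter, and the smoothing itself is what produces the delay:

```python
# Hypothetical sketch only: the piece's actual EEG-to-Gendy3 mapping is not
# documented here. A smoothed alpha-band power reading scales a synthesis
# parameter; the exponential moving average makes every response lag behind
# the raw EEG, so no single action affects the sound immediately.
def make_modulator(smoothing=0.95, base=0.05, depth=0.2):
    level = 0.0
    def modulate(alpha_power):
        nonlocal level
        # Exponential moving average of the incoming band power.
        level = smoothing * level + (1 - smoothing) * alpha_power
        # Map the smoothed level onto a (hypothetical) parameter range.
        return base + depth * level
    return modulate
```

Called once per analysis frame, a sustained rise in alpha power only gradually pushes the returned parameter from `base` toward `base + depth`.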

3.2. Xenakis’s Gendy3 Algorithm

The choice of the Gendy3 algorithm is based on the quest for a sonic aesthetic that develops a chaotic continuum, metaphorically intended to reflect the activity of firing neurons. The idea of the performance also relies mostly on the improvisation of pre-defined bodily gestures (e.g. put on the EEG device and start the algorithm; sit for as long as you feel it necessary; lie on the floor for as long as you feel it necessary; sit again for as long as you feel it necessary; take a text and read it silently; end the performance or start again from the beginning if you feel it necessary), which unfold according to pre-defined parameters derived from Gendy3. Within these pre-defined parameters, the responsibility for generating numbers inside the algorithm is left to the computer:

Gendy makes sound by repeating an initial waveform and then distorting that waveform in time and amplitude. Thus the synthesis algorithm computes each new waveform by applying stochastic variations to the previous waveform (Roads, 1996: 342).
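The mechanism Roads describes, a waveform whose breakpoints are perturbed stochastically on every repetition, can be sketched as follows. This is a deliberately simplified illustration, not Xenakis’s Gendy3 nor the Collins/Lumenta ports, which use richer probability distributions and boundary behaviours:

```python
import random

def gendy(n_samples, n_breakpoints=12, amp_step=0.1, dur_step=2,
          min_dur=4, max_dur=40, seed=None):
    """Simplified dynamic stochastic synthesis: one cyclic waveform whose
    breakpoint amplitudes and durations random-walk on each repetition."""
    rng = random.Random(seed)
    amps = [rng.uniform(-1.0, 1.0) for _ in range(n_breakpoints)]
    durs = [rng.randint(min_dur, max_dur) for _ in range(n_breakpoints)]
    out = []
    while len(out) < n_samples:
        for i in range(n_breakpoints):
            # Stochastic variation of this breakpoint in amplitude and
            # duration, clipped so the waveform stays bounded.
            amps[i] = min(1.0, max(-1.0, amps[i] + rng.uniform(-amp_step, amp_step)))
            durs[i] = min(max_dur, max(min_dur, durs[i] + rng.randint(-dur_step, dur_step)))
            a0, a1 = amps[i - 1], amps[i]   # index i - 1 wraps: the cycle is closed
            # Linear interpolation from the previous breakpoint to this one.
            out.extend(a0 + (a1 - a0) * k / durs[i] for k in range(durs[i]))
    return out[:n_samples]
```

Because the breakpoints drift on every cycle, both the period and the shape of the waveform evolve continuously, which is what produces the algorithm’s characteristic unstable pitch and timbre.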

Gendy3 is based on probabilities and on stochastics, which studies and formulates the law of large numbers. According to Xenakis:

‘Stochastics’ studies and formulates the law of large numbers (…) the laws of rare events, the different aleatory procedures, etc. As a result of the impasse in serial music, as well as other causes, I originated in 1954 a music constructed from the principle of indeterminism; two years later I named it ‘Stochastic Music.’ The laws of the calculus of probabilities entered composition through music necessity (1992: 8).

The term used by most people is ‘the study of probability along the time dimension’. Stochastic music is defined by Serra as follows:

Stochastic music emerged in the years 1953–55, when Iannis Xenakis introduced the theory of probability in music composition. (…) Then Xenakis decided to generalize the use of probabilities in music composition (…) In the 1960s, Xenakis started to use the computer to automate and accelerate the many stochastic operations that were needed, entrusting the computer with important compositional decisions that are usually left to the composer (1993: 237).

Serra describes the algorithm itself as follows:

The program is based on an extensive use of stochastic laws. This creates a homogeneous composition in which the microstructure and macrostructure are conceived through the same perspective, i.e. filling sonic space with sound material and structuring this space are accomplished with similar means (1993: 255).

In this sense, ‘filling sonic space with sound material and structuring this space are accomplished with similar means’ has a strong metaphorical connotation with Music for Brainwaves: filling the physical performance space by using the sounds emerging from Gendy3 as a strong sonic impulse into the resonant space, to trigger the hyperbiological space, which in turn leads to neurofeedback.

3.3. Ex-NSA Teufelsberg Listening Radome, Berlin

During the research process, different architectural configurations (e.g. apartment, artist’s atelier, theatre, and university laboratories) were tested, but none of them was resonant enough to sense the hyperbiological space, that is, to perceive the relation between energy data (EEG), sound, and space through the whole body as neurofeedback.

They sounded ‘dry’; a specific element was lacking from the neurofeedback, meaning that the sensation of the embodiment of the relation between sound and space was absent. A church or a cathedral was the first logical choice because of the reverberant acoustic properties of such architecture and the musical past linked to the idea of a composer creating a piece with a specific building in mind. However, an unexpected solution (in terms of acoustic properties) appeared in May 2014: the ex-NSA Teufelsberg listening station in Berlin. The first impression during the performance in this space was the embodiment of the relation of sound and space through the physical sensing of the neurofeedback. Here, the Teufelsberg listening radome is approached for its exceptional properties of resonance. Merill and Schmidt present the former listening station as follows:

The field station was then used till 1990 by the U.S. Army & U.S. Air Force Intelligence together with the NSA for tapping and interfering with the radio communication of the Eastern Bloc (2009: 23).

Music for Brainwaves was recorded in the almost spherical radome, a sphere used to protect and conceal radar antennas, on top of the highest derelict tower. Cox and Ings define the sonic properties of the Teufelsberg radome:

Teufelsberg, on the outskirts of Berlin (…) a disused military facility contains ‘radomes’ – spheres used to protect and conceal radar antennas. The highest radome is on the sixth storey of a derelict tower. Jump onto the concrete plinth in the centre of the room, and any sound you make is focused back towards you. Sway to the right so the focal point is at your left ear, and the amplification afforded by the curved walls lets you whisper into your own ear (2014).

The acoustics of the place, the highest radome on the sixth storey mentioned by Cox and Ings, are so reverberant that it is difficult even to communicate with one another. When the sound of Music for Brainwaves was sent into the room, the neurofeedback became something unique that I experienced only in that particular location: the timbre of the sounds changed, in contrast to all of the other places where the piece had been played, not only because of the resonant acoustics of the place, but also because of how it affected the amplification of my ‘reaction’ in the neurofeedback loop. The unusual resonance of the space to the sound, created by the relation between the energy data (EEG), its transformation by the Gendy3 algorithm, and its propagation in the highly resonant space, augmented my bodily perception, as an embodiment. However, to be fully understood, the embodiment of the relation of sound, space, and EEG data must be experienced, as an inner experience.

The process emerging from Music for Brainwaves is obtained through physiological data extracted and processed by modern technology. The field of archaeoacoustics investigates the sonic properties of ancient architecture and provides a good deal of insight (Kolar, 2013). Resonant phenomena similar to the ones experienced at the ex-NSA Teufelsberg listening station in Berlin seem already to have been discovered in ancient caves from the Neolithic period, for example on Malta, as mentioned by Eneix:

Researchers detected the presence of a strong double resonance frequency at 70 Hz and 114 Hz inside a 5,000-year-old mortuary temple on the Mediterranean island of Malta. The Ħal Saflieni Hypogeum is an underground complex created in the Neolithic (New Stone Age) period as a depository for bones and a shrine for ritual use. A chamber known as ‘The Oracle Room’ has a fabled reputation for exceptional sound behaviour (…) resonant frequencies can have a physical effect on human brain activity (2014).

In addition, those particular resonant frequencies are found also in other sacred locations around the world:

Special sound is associated with the sacred: from prehistoric caves in France and Spain to musical stone temples in India; from protected Aztec codexes in Mexico to Eleusinian Mysteries and sanctuaries in Greece to sacred Elamite valleys in Iran. It was human nature to isolate these hyper-acoustic places from mundane daily life and to place high importance to them because abnormal sound behavior implied a divine presence (2014).

Cook, Pajot and Leuchter suggest that further research should be conducted in order better to understand the links between resonance and emotional processing:

Previous archaeoacoustic investigations of prehistoric, megalithic structures have identified acoustic resonances at frequencies of 95–120 Hz, particularly near 110–112 Hz, all representing pitches in the human vocal range (…) We evaluated the possibility that tones at these frequencies might specifically affect regional brain activity (…) These intriguing pilot findings suggest that the acoustic properties of ancient structures may influence human brain function, and suggest that a wider study of these interactions should be undertaken (2008: 95).

Such pilot studies are of interest for future investigation in order to explore potential relationships between resonances, frequencies, and neurofeedback.

4. Achievements and Conclusions

The development of Music for Brainwaves as a performance explored issues emerging from a performer’s body emitting physiological data as EEG, transformed into sound, sent into a resonant space, and received back as neurofeedback by the same performer. The process leads to the term hyperbiological space, which relates to an augmented peripersonal space (augmented by physiological data through sound in space) and to the performer’s impression of sensing the relation of sound and space in the first instance, since ‘the composer, wired-up in various ways, would become the performer of and primary listener to the sounds produced’ (Branden, 2011: 132). The sensed space appeared in particular when sounds were sent into the resonant space of the ex-NSA Teufelsberg listening station; there was a clear impression of the embodiment of the sensed space. It has been shown that such experiences with resonating spaces have existed since at least the Neolithic period and led ‘to isolat[ing] these hyper-acoustic places from mundane daily life and to plac[ing] high importance to them because abnormal sound behavior implied a divine presence’ (Eneix, 2014). Further research must be focused on very resonant spaces, since the most active sensation of neurofeedback, at least for the performer, is experienced in such spaces. How can the inner experience of sensing be explored and transferred to the audience?