for robotic piano (RHEA)
Phipps Hall, University of Huddersfield, UK. Made as part of a workshop there run by Prof. Peter Ablinger and Prof. Winfried Ritsch, 19-25 October 2015
Tempi assigned to the individual keys
Hyperbodies are buildings and environments which can continuously change shape and content. The mutations of such buildings depend on the input coming from their user as well as from the surroundings of the building itself (the weather, people moving etc.). This interaction between user and building is determined by a data flow which the hyperbody uses and converts into a hypersurface structure, which then alters our perception of space in and around the hyperbody. (Oosterhuis, 2003).
Hyperbodies was composed with MIDI data controlling the pitch, velocity, and duration of each note. The process behind the work was to use a single short rhythmic sequence. This rhythmic template is repeated and assigned to all 88 keys of the piano, with each key also assigned its own independent playback tempo. The tempi were determined by first assigning each key a tempo in sequential order, from 20bpm on the lowest key through to 194bpm on the highest in 2bpm increments (see table above). The order was then manually shuffled, like a deck of cards, so that there was a relatively even spread of differing speeds, with slower tempi in the high register and faster tempi in the low register. I then listened back to all keys playing at once, chose which notes to turn off, and determined what the temporal construction would be. The process was akin to beginning with a canvas of white noise and then deciding which spectra to take away, like subtractive synthesis.
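As a rough illustration, the tempo-assignment process can be sketched in a few lines of Python. All names here are my own, and `random.shuffle` (seeded for reproducibility) stands in for the manual, by-hand shuffle used in the piece; this is a sketch of the idea, not the original MIDI realisation:

```python
import random

NUM_KEYS = 88
LOWEST_MIDI_NOTE = 21  # A0 on a standard 88-key piano

# Step 1: sequential tempi in 2bpm increments, 20bpm on the lowest
# key through to 194bpm on the highest (20 + 87 * 2 = 194).
tempi = [20 + 2 * i for i in range(NUM_KEYS)]

# Step 2: shuffle "like a deck of cards" to spread the speeds evenly
# across the registers. (A stand-in for the manual shuffle.)
rng = random.Random(0)
rng.shuffle(tempi)

# Map each MIDI key number to its own independent playback tempo.
key_tempo = {LOWEST_MIDI_NOTE + i: bpm for i, bpm in enumerate(tempi)}

def onsets(template, bpm, repeats):
    """Onset times in seconds for `repeats` cycles of a rhythmic
    template, given as a list of inter-onset intervals in beats."""
    beat = 60.0 / bpm
    t, times = 0.0, []
    for _ in range(repeats):
        for ioi in template:
            times.append(t)
            t += ioi * beat
    return times

# Every key repeats the same template, but at its own tempo, so the
# 88 streams drift in and out of phase with one another.
pattern = [1, 0.5, 0.5]  # a hypothetical template, in beats
schedule = {key: onsets(pattern, bpm, repeats=4)
            for key, bpm in key_tempo.items()}
```

Because each key cycles the same template at a different rate, the streams align only momentarily, which is what produces the masking and swarm-like textures described below.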
The original rhythmic sequence can be distinguished, but at times its pattern is masked as the multiple repeating patterns play at once, giving the impression of a swarm of pulsating textures in a suspended stasis. The precision of the machine means that fast swarming clusters of notes sound uniformly together, but I also broke up this uniformity by embedding irregular subtleties and inconsistencies into the MIDI notes, introducing a more “handmade” quality to the texture rather than a quantised, uniform system. This was achieved by an imprecise method of lining up MIDI notes by hand. For instance, many of the glissando lines are not perfectly in time or lined up geometrically at regular intervals, but rather have subtle pauses or glitches, or may even sound unfinished or incomplete. To me this represents an unstable line with an uncertain course, appearing to decide its direction in real time and able to change at any point. This is obviously a constructed idea in the piece, because all activity is pre-programmed, but this representation of inconsistency is what gives the impression. The changing speeds of the glissando lines give a sense of flux that punctuates the piece, and the additional notes and pitched sequences surrounding the repeated patterns break up its regularity. The sequence played in the low register creates a muddiness, masking the clarity of the rhythm and becoming a rumbling, pulsating sustained tone.
Hyperbodies takes some of its cues, in its handling of temporal layers, from Conlon Nancarrow’s multi-tempo approach. I was interested in how sheer speed, force, and complex temporal layering open up new questions: what is gesture without the body? And what is heard when the music transitions beyond the realm of human performability? Musicologist Rolf Inge Godøy proposes that gestural imagery is “our mental capacity for imagining gestures without seeing them or actually carrying them out, meaning that we can recall and re-experience or even invent new gestures through our ‘inner eye’ and inner sense of movement and effort.” (Godøy, 2003, p. 55).
At times it may seem as if the performance ventures beyond the listener’s capacity for inner gestural imagery, with tempi ranging up to around 300 bpm. It gives the impression of a hyperactive virtuosity, as explained by Eric Drott in his article Conlon Nancarrow and the Technological Sublime (2004). The sounds of the piano can then be thought of as a multi-dimensional, “hyper-real” gestural-sonic sensorial experience. Lines are perceived as visual mechanical gestures of the robotic parts moving in sequence during a glissando. At these speeds, the digitised line becomes moving swarms of rapidly occurring note clusters. The work establishes plausible pianistic gestures close to a performer’s ability, but then transcends them into the hyper-real, something beyond our grasp to relate to as real performed experience. There is a superhuman frenzy that overloads the listener’s cognitive capacities (Drott, 2004, pp. 534-535).
See below Peter Ablinger’s ‘Speaking Piano’, a piece which uses the same robotic piano: