
Technology and Timbre
An autoethnography on the influence of electronics on the composer’s orchestration practice

by Jorge Ramos
Research-Creation Series | Timbre and Orchestration Resource

Published: June 14th, 2021


The initial purpose of Moving Sources (2019) was to pave the way for the development of a future larger-scale work—Point of Departure for symphony orchestra—to be written using the recently released computer-assisted orchestration software, Orchidea.[1] It is therefore a “study” that facilitated experimentation with and evaluation of computer-assisted processes.

Since this work was written for a composition competition, there were some external constraints that defined its duration and forces:

  • The work’s duration had to be between 5 and 10 minutes.

  • It had to be written for a classical orchestra.

  • It was not possible to use any electronic media.

  • If selected, the premiere would take place at Pavilhão Centro de Portugal in Coimbra, which is a very large and reverberant pavilion (Figure 1).

 

Figure 1: Photograph of the premiere, November 24, 2019 at Pavilhão Centro de Portugal in Coimbra, conducted by Jan Wierzba. © Orquestra Clássica do Centro.

 

Despite these constraints, there was ample opportunity to explore the influence of spatialization and electronic timbre-blending techniques on orchestration. Moving Sources explores the relationship between instrumental orchestration and electronics primarily by means of spectral analysis and subsequent electronic-informed timbre-blending techniques such as filtering, reverberation, granular synthesis, pitch freezing, envelope generators, noise, delays and spatialization.

The compositional process borrowed from traditional spectral procedures, hence the use of manual spectral analysis as established by spectral composers such as Tristan Murail (b. 1947) and Gérard Grisey (1946-1998). My point of departure was to translate the wind chimes [2] as a physical instrument, with their sonic properties, into the instrumental domain, in this case the classical orchestra (Bjørn, 2018). All of the compositional material was extrapolated from an audio sample of wind chimes, which then served as the basis for laying out a harmonic progression and achieving the auditory sensation of fluid harmonic mutation between all the selected partials.[3] To do so, I first conducted a spectral analysis [4] of the audio sample with SPEAR v0.8.0 for macOS [5] in order to extract only the strongest harmonic partials present in the given sample (Figure 2).
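SPEAR actually performs time-varying sinusoidal modelling, tracking each partial across the duration of the sample; still, the core idea of keeping only the strongest partials can be illustrated with a minimal single-frame sketch in Python. The file name, threshold and partial count below are my own assumptions, not data from the piece:

```python
# Minimal sketch of partial extraction: find the strongest spectral peaks of
# a wind-chime sample with a single FFT frame. SPEAR does this per frame and
# links the peaks into partial tracks; this sketch shows only one frame.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

rate, audio = wavfile.read("wind_chimes.wav")   # hypothetical file name
audio = audio.astype(np.float64)
if audio.ndim > 1:                              # fold stereo to mono
    audio = audio.mean(axis=1)

window = np.hanning(len(audio))
spectrum = np.abs(np.fft.rfft(audio * window))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

# Keep only peaks stronger than 10% of the loudest partial (arbitrary cut).
peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.1)
strongest = sorted(peaks, key=lambda i: spectrum[i], reverse=True)[:12]
for p in sorted(strongest):
    print(f"{freqs[p]:8.1f} Hz  relative amplitude {spectrum[p] / spectrum.max():.2f}")
```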

 
 

Figure 2: ‘Raw’ spectral analysis of the original wind-chimes sample in SPEAR.

After reducing its spectral content to only the strongest partials, I then used the same software to resynthesize the resulting spectrum back into an audible format (Figure 3).
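In the same hedged spirit, the resynthesis step can be sketched as plain additive synthesis: each retained partial becomes a decaying sine wave, and their sum is written back out as audio. The partial list and decay rate below are illustrative, not the actual analysis data:

```python
# Additive-resynthesis sketch: rebuild audio from a reduced set of partials,
# analogous to SPEAR's resynthesis of the pruned spectrum.
import numpy as np
from scipy.io import wavfile

rate, dur = 44100, 4.0
t = np.linspace(0.0, dur, int(rate * dur), endpoint=False)

partials = [(523.0, 1.0), (1244.0, 0.6), (2110.0, 0.35)]   # (Hz, rel. amp)
out = np.zeros_like(t)
for freq, amp in partials:
    env = np.exp(-2.5 * t)                 # simple exponential decay
    out += amp * env * np.sin(2 * np.pi * freq * t)

out /= np.abs(out).max()                   # normalize before writing
wavfile.write("resynthesis_sketch.wav", rate, (out * 32767).astype(np.int16))
```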

 
 

Figure 3: Spectral resynthesis of the remaining harmonic partials in SPEAR.

Initially I intended to extract only pitch information (Figure 4), but I decided to also use in the piece the arching melodic and harmonic contour of the harmonic partials’ evolution in time (x-axis). This process was realized intuitively, without restricting myself to an exact translation of the analysis output; otherwise, the piece would have had the exact same duration and structural proportions as the given audio file. This approach was essential for driving the development of pitch material throughout the piece. The conceptual aspect of this orchestration approach, however, was based on my personal experience with the wind chimes.

 

Figure 4: End result of the spectral analysis process in SPEAR.

 

On their own, wind chimes are instruments that play by themselves (with the natural stimulus of the wind) in a constant and almost generative ambient style. This characteristic informed my orchestration practice; I wanted the orchestra to portray this movement. To do so, I worked with the sound of the orchestra as a multidimensional and malleable form in which the sound source could move from side to side and front to back. This expansion of orchestration into the spatial domain (just as in electronic music) is a critical aspect of my research. The following figure illustrates both the compositional and orchestrational approaches used (Figure 5). In terms of orchestration, the aforementioned characteristic of the wind chimes, together with the acoustics of the room, was intuitively taken into account.

 

Figure 5: Composition and orchestration scheme used for the development of Moving Sources.
[Top Arrows] (Composition) Wind chimes → Recording → Spectral Analysis (+ manual frequency to pitch matching) → Score.
[Bottom Arrows] (Orchestration) Wind chimes → Behavioural & Site-responsive-informed orchestration → Score
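The “manual frequency to pitch matching” step named in Figure 5 amounts to rounding each extracted partial to the nearest tempered pitch. A small sketch of that conversion (the input frequencies are illustrative, not the piece’s data):

```python
# Convert partial frequencies (Hz) to the nearest equal-tempered pitch name,
# reporting the deviation in cents, as in manual frequency-to-pitch matching.
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_pitch(freq_hz: float, a4: float = 440.0) -> str:
    midi = 69 + 12 * math.log2(freq_hz / a4)   # 69 = MIDI number of A4
    nearest = round(midi)
    cents = (midi - nearest) * 100
    return f"{NOTE_NAMES[nearest % 12]}{nearest // 12 - 1} ({cents:+.0f} cents)"

for f in (523.3, 1244.5, 2110.0):              # illustrative partials
    print(f"{f:7.1f} Hz -> {freq_to_pitch(f)}")
```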

 

Spatialization as a compositional and orchestration parameter is not new; it is found throughout the history of folk and liturgical music. This principle was often used in polychoral compositions (for two or more choirs) by many Renaissance composers such as Giovanni Gabrieli (1554/57-1612).

With regard to orchestration, I assigned most of the strings to mimic the reverberation and the pitch-freezing effect found in the original recording of the wind chimes. Furthermore, the aural distinction between the foreground and background presence of sound layers was impossible to obtain through spectral analysis, since SPEAR is not perceptually aware of sonic depth. Hence, the strings played three essential roles throughout the piece:

  1. Due to their "voice-like" tone quality, they acted as a cohesive element between all of the different sections (brass, woodwinds, strings and percussion). This meant that I could combine every non-string instrument with a string instrument to achieve a more blended sonority, as seen in measures 103-112 [Bassoon 1, Horn in F 1, Violoncellos]. Moreover, the adoption of techniques such as harmonics, con sordino, sul ponticello, sul tasto, "noisy" bowing pressure and general dynamics allowed me to shape the resulting timbre in a way akin to the use of envelope generators, noise generators and filters in electronic music.

  2. The strings' disposition covered almost the entire stage (Figure 6), which meant that I was able to shift the sound spatially within the string group alone, for example, crossfading from Violins I (on the left) to Violins II (on the right), as sketched after this list. This gave me control over the stereo image of my orchestration decisions while simultaneously shaping the resulting timbral quality of the chosen orchestration solution. This technique was extrapolated from the tradition of multi-channel speaker diffusion with electronic spatialization tools and its impact on the resulting timbre.[6]

  3. The strings are the only section of the orchestra that gives the composer the ability to obtain a constant and uninterrupted stream of sound for a considerable amount of time (with the use of divisi and nonsynchronous bowing techniques).
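The crossfade mentioned in the second point can be sketched electronically as an equal-power pan law: as Violins I fade out and Violins II fade in, the combined level stays roughly constant, which is what makes the motion read as one travelling source rather than two separate dynamic changes. A minimal sketch, with the curve resolution chosen arbitrarily:

```python
# Equal-power crossfade: gain curves whose squared sum is constant, the
# electronic analogue of opposing hairpins between Violins I and II.
import numpy as np

def equal_power_crossfade(n_steps: int):
    x = np.linspace(0.0, 1.0, n_steps)
    gain_vln1 = np.cos(x * np.pi / 2)   # fades out (stage left)
    gain_vln2 = np.sin(x * np.pi / 2)   # fades in (stage right)
    return gain_vln1, gain_vln2

g1, g2 = equal_power_crossfade(5)
for a, b in zip(g1, g2):
    print(f"Vln I {a:.2f}   Vln II {b:.2f}   summed power {a*a + b*b:.2f}")
```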

 

Figure 6: Disposition of the instruments on stage

 

However, I soon realized that this approach of focusing on the string section alone was too two-dimensional and lacked depth perspective. Therefore, I embraced the clarinets as an extension (or another dimension) of the string section, as opposed to the other, more timbrally distinct woodwinds, brass and percussion.

Since they are located further back in the center of the stage, I could now extend the initial stereo spatialization idea of the strings to a fuller instrumental rendition of a multi-channel electronic setup.

This approach allowed me to transition from a bidimensional axis (left ⟷ right) to a multidimensional axis (left ⟷ right and front ⟷ back), as observed between measures 73 and 77. The sound trajectory progressively moves from the foreground (strings) to the background (woodwinds) and then settles on just the brass section, slightly to the left side of the background sound image.
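One loose way to picture this passage is as a point moving across a two-axis stage plane, with the orchestration at each moment “snapping” to the nearest instrumental group. The coordinates and group list below are my own illustrative stand-ins, not a literal map of the seating in Figure 6:

```python
# Hypothetical two-axis (left-right, front-back) trajectory mapped to the
# nearest instrumental group, sketching the motion described in mm. 73-77.
groups = {
    "Violins I (front left)":   (-1.0,  1.0),
    "Violins II (front right)": ( 1.0,  1.0),
    "Clarinets (back centre)":  ( 0.0, -0.5),
    "Brass (back left)":        (-0.5, -1.0),
}

trajectory = [(-1.0, 1.0), (0.0, 0.2), (0.0, -0.5), (-0.5, -1.0)]

def nearest_group(x: float, y: float) -> str:
    return min(groups, key=lambda g: (groups[g][0] - x) ** 2 + (groups[g][1] - y) ** 2)

for x, y in trajectory:
    print(f"({x:+.1f}, {y:+.1f}) -> {nearest_group(x, y)}")
```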

Moreover, the decision to use the clarinets as a subsidiary section to the strings was based on my personal judgment of the timbral similarities between the two sections. In my opinion, every other combination of the strings with the other woodwinds, percussion or brass was too easily distinguishable in terms of both localization and timbral identity.

Additionally, this use of spatialization was only perceivable in the medium-high and high registers (for example, violins and trumpets), since low frequencies take more time and space to develop and, as such, are more difficult to localize. Therefore, the perception of location was not ideal when using low-pitched instruments (for example, basses or bassoons).
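The physics behind this is simple to check: wavelength equals the speed of sound divided by frequency, and low notes have wavelengths far larger than the roughly 0.17 m spacing of our ears, so interaural localization cues weaken:

```python
# Wavelengths of a low and a high orchestral pitch; localization cues fade
# when the wavelength dwarfs the ~0.17 m interaural distance.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

for label, freq in [("double bass low E (41.2 Hz)", 41.2),
                    ("violin open E (659.3 Hz)", 659.3)]:
    print(f"{label}: wavelength {SPEED_OF_SOUND / freq:.2f} m")
```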

Naturally, this source-recognition effect was all the more evident due to the large reverberation of the space, a pavilion. This was both a constraint and an asset for timbral blend. Depending on my approach in each section, the effect of reverberation was two-fold: a limitation on certain timbral combinations when writing fast passages, and a tool to shroud the timbre in a “cluster-like” cloud of sound, as observed in the measures highlighted below (Figure 7). Compositionally, I found rhythmically active sections too blurry to be of any interest by themselves, but they were ideal to mask (or cloud) the subsequent sections.
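The blurring effect of the hall can be imitated with the classic building block of digital reverberation, the feedback comb filter: each attack is smeared into a decaying train of echoes, so rapid attacks overlap into the “cluster-like” cloud described above. A minimal sketch (delay times and decay are illustrative, not measurements of the pavilion):

```python
# Parallel feedback comb filters, the core of a Schroeder-style reverb:
# two staccato impulses 50 ms apart remain distinct dry, but overlap wet.
import numpy as np

def parallel_combs(x, rate, delays_ms=(29.7, 37.1, 41.1, 43.7), decay=0.85):
    out = np.zeros_like(x)
    for d_ms in delays_ms:
        d = max(1, int(rate * d_ms / 1000))
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = x[n] + (decay * y[n - d] if n >= d else 0.0)
        out += y
    return out / len(delays_ms)

rate = 8000
dry = np.zeros(rate)                      # one second of "score"
dry[0] = dry[int(0.05 * rate)] = 1.0      # two attacks, 50 ms apart
wet = parallel_combs(dry, rate)
print("dry attacks at samples:", np.flatnonzero(dry).tolist())
print("wet tail still ringing at 0.5 s:", bool(np.abs(wet[rate // 2:]).max() > 0.05))
```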

 

Figure 7: Measures 35-41 of the string section
[Camera 2: 2:04 - 2:24]

 

Conversely, some orchestration decisions were made with the goal of producing a more sharply piercing timbre, such as the combination of the brass and string instruments playing sul ponticello (Figure 8). The staccatissimo sul ponticello in the strings and the closed ‘wah wah’ sordino in the trumpets produce a piercing timbre that allows the audience to immediately recognize its exact spatial source. In electronic terms, this timbre was inspired by passing a series of electronically generated impulses through a high-pass filter.[7]
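That electronic reference is easy to reproduce: a train of impulses pushed through a high-pass filter keeps only the sharp transient content, much like the staccatissimo sul ponticello attacks. The cutoff and impulse rate below are my own assumptions:

```python
# Impulse train through a 4th-order Butterworth high-pass filter: only the
# bright, piercing transient content survives above the cutoff.
import numpy as np
from scipy.signal import butter, lfilter

rate = 44100
x = np.zeros(rate)                   # one second
x[::rate // 8] = 1.0                 # eight impulses per second

b, a = butter(4, 2000 / (rate / 2), btype="highpass")   # 2 kHz cutoff
y = lfilter(b, a, x)
print(f"peak level after filtering: {np.abs(y).max():.2f}")
```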

 

Figure 8: Measures 59-64, example of the combination between the trumpets and sul ponticello violas and double basses
[Camera 2: 3:26 - 3:45]

 

It was not my intention to design another orchestral piece enthralled by the impressive individual capabilities of each instrument. I tried to adopt a different approach to sonic orchestration, in which instruments played a target role inside a controlled sound mass. The only exception occurs in measure 93, when a cello emerges from the string sound mass with a solo line; but even then, it lasts only a few bars before it begins to dissolve into the other cellos and, subsequently, into the rest of the strings. This exception was free from conceptual constraints and/or rules, and only materialized because I wanted to include a foreign element in the structure of this musical composition.

In my opinion, approaching instrumental orchestration similarly to electronic composition frees me from the creative weight imposed by the historical and traditional background of each orchestral instrument and its respective orchestration clichés (for example, using high-pitched melodic instruments for solo lines). I must also acknowledge that electronic music has its own normative practices; however, I believe that through electronic-informed timbral blending techniques I was able both to expand my timbral palette and to improve the audience’s experience. Further simple examples of these techniques can be observed below (Figures 9, 10 and 11):

 

Figure 9: Measures 51-58, dynamic example of a delay effect. The quintuplet first heard in the Clarinet in Bb is systematically delayed by one beat from the triggering impulse. This, however, is acknowledged both as a repetition in music composition and as a texture in traditional orchestration.
[Camera 2: 2:57 - 3:26]
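As an electronic analogue of Figure 9, a simple delay line repeats the motif one beat later at a reduced level, each repetition feeding the next. The motif values, beat length and feedback amount here are illustrative:

```python
# One-beat delay with feedback: each echo of the "quintuplet" arrives one
# beat after the previous one, progressively attenuated.
import numpy as np

def one_beat_delay(x, beat_samples, feedback=0.6, repeats=3):
    out = np.zeros(len(x) + beat_samples * repeats)
    out[: len(x)] += x
    for k in range(1, repeats + 1):
        out[k * beat_samples : k * beat_samples + len(x)] += (feedback ** k) * x
    return out

motif = np.array([1.0, 0.8, 0.9, 0.7, 0.85])   # stand-in for the quintuplet
print(np.round(one_beat_delay(motif, beat_samples=5), 2))
```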

 
 

Figure 10: Measures 78-87, example of harmonic content "filtering." The use of ‘wah wah’ mutes allows the players to freely shape the emitted sound by varying the filtering of the higher partials. This resembles a low-pass filter. [8]
[Camera 2: 4:17 - 4:54]
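The hand-operated ‘wah wah’ mute of Figure 10 corresponds electronically to a low-pass filter whose cutoff sweeps up and down over time. A crude block-by-block sketch (the sweep range, block count and source tone are my own assumptions; real filter sweeps interpolate coefficients more smoothly):

```python
# Swept low-pass filter over a partial-rich square wave: the opening and
# closing cutoff imitates the player's hand over the mute.
import numpy as np
from scipy.signal import butter, lfilter

rate = 44100
t = np.arange(rate) / rate
tone = np.sign(np.sin(2 * np.pi * 220 * t))      # square wave, many partials

blocks, out = np.array_split(tone, 50), []
for i, blk in enumerate(blocks):
    # cutoff sweeps 400 Hz -> 4 kHz -> 400 Hz over the second
    cutoff = 400 + 3600 * (0.5 - 0.5 * np.cos(2 * np.pi * i / 50))
    b, a = butter(2, cutoff / (rate / 2), btype="lowpass")
    out.append(lfilter(b, a, blk))
out = np.concatenate(out)
```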

 
 

Figure 11: Measures 12-23, example of ‘pitch bending’ in the Violins.
[Camera 2: 0:43 - 1:28]
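Electronically, the string pitch bend of Figure 11 is a glissando: an oscillator whose frequency glides continuously between two pitches. The start and end pitches here are illustrative; note that the phase must be obtained by integrating the instantaneous frequency:

```python
# Linear pitch glide from A4 toward B4; phase is the running integral of the
# time-varying frequency, otherwise the sweep would be distorted.
import numpy as np

rate, dur = 44100, 2.0
t = np.arange(int(rate * dur)) / rate
f0, f1 = 440.0, 493.9                          # A4 up to B4
freq = f0 + (f1 - f0) * (t / dur)              # instantaneous frequency
phase = 2 * np.pi * np.cumsum(freq) / rate     # integrate for phase
bend = np.sin(phase)
```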

 

In addition, while applying simple electronic processes to instrumental lines (as in the above examples), I also began to layer them into more complex combinations in which multiple processes are used simultaneously (Figure 12). This approach is closely related to modular synthesis as practiced with physical analog modular synthesizers (Moog, Buchla and others) and visual programming language software such as Cycling ’74 Max/MSP and Native Instruments Reaktor, among others. In these environments, each event plays a two-fold role: sound emitter and event trigger.
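A hypothetical sketch of that two-fold role: as in a modular patch, each event both plays its sound and fires the next event, producing the kind of cascade notated in Figure 12. The event names echo that figure, but the structure is my own illustration:

```python
# Each Event is both a sound emitter and an event trigger: firing it prints
# its "sound" and then fires whatever it is patched into, as in a modular
# synthesizer trigger chain.
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    on_finish: list = field(default_factory=list)   # events this one triggers

    def fire(self, depth: int = 0) -> None:
        print("  " * depth + f"{self.name} sounds")
        for nxt in self.on_finish:
            nxt.fire(depth + 1)

english_horn = Event("English Horn (delayed, filtered)")
piccolo = Event("Piccolo impulse", on_finish=[english_horn])
low_winds = Event("Low woodwinds impulse", on_finish=[piccolo])
low_winds.fire()
```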

 

Figure 12: Measures 59-60. The last impulse of the lower woodwinds triggers an impulse in the higher woodwinds. This impulse is progressively delayed and filtered, crossfading from the Piccolo to the English Horn.
[Camera 2: 3:26 - 3:33]

 

Despite the conscious selection and application of most of these techniques, many of my decisions were made intuitively. Moreover, these techniques borrowed from electronics are not necessarily new, as they have been used by many composers (such as Tristan Murail, Luciano Berio, György Ligeti, Edgard Varèse, and others). Even so, it was essential to familiarize myself with traditional electronic-music-informed orchestration processes before moving on to other systems.

Finally, through this work I became aware of how the use of computer software (manual spectral analysis in SPEAR) impacted my creative practice. For instance, it helped me acknowledge the blurry boundary between timbral composition and dynamic orchestration, a key concept in computer-assisted orchestration. Hence, this piece was a study that allowed for an initial exploration of ideas but led me to identify a much broader scope for experimentation in future works, serving as a major step towards the use of modern computer-assisted orchestration software in the creation of Point of Departure.

Recording

Camera 1

 
 

Camera 2

 
 

Acknowledgements

Thank you to Gilbert Nouno, an ACTOR member and my supervisor at the Royal College of Music.

Bibliography
