Workshop: New Instruments

[Saturday @ 1:45pm – 5:00pm, Holden Chapel]

Habeen Chang

“Rhythm Games: Musical Instruments or Controllers Without Control?”


Abstract

How much control over the production of sound must an object provide to be considered a “musical instrument”? Traditional acoustic instruments such as the piano offer full physical control of sound production. DJ turntables and mixers, while not allowing direct modification of the recorded music, give the performer some agency to start, stop, and transform the sounds at will; these DJ tools, too, have been studied as instruments (Dahl et al. 2010). But, at the extreme, can the mere presence of an interface, even one with little-to-no bearing on the music, permit us to call the object a “musical instrument” and to study it as such?

Rhythm games offer a starting point for an answer. Sometimes referred to more precisely as rhythm action or rhythm-matching video games, rhythm games encompass an immense variety of controllers and interfaces intimately tied to musical gameplay, even though the player does not trigger most of the audio events (Austin 2016). From the keyboard of beatmania to the foot pads of Dance Dance Revolution to the analog knobs of Sound Voltex, the ongoing creation of such controllers provides a ludic playground in which input charters (mappers) can craft a gestural realization of a song, bound by the interface specific to that game, which is then delivered for a player to perform (Magnusson 2019, Moseley 2016).

The loss of agency could be seen as prohibitive to traditional ideas of musical performance, but this loss allows other aspects of musicality to shine through. By freeing these controllers from having their interfaces forced into a one-to-one mapping with a musical parameter, rhythm games allow us to experience and perform music through novel gestures: a fast sequence of unordered pitches can be mapped to a single continuous flowing gesture with the affect of being in flight (Dogantan-Dack 2011). Playing a rhythm game is performing music analysis, uncovering musical knowledge in the process, and every unique rhythm game controller, like a unique musical instrument, allows for different analyses and different knowledge (Rehding 2016).

Biography

Alex Habeen Chang is a graduate student working towards a Master of Arts in Music Theory at Boston University. He earned a Bachelor of Arts in Mathematics at Rice University in 2020 and is consequently interested in music research that draws on mathematical and computational techniques. He is also actively involved in the ludomusicology community: an avid rhythm game player himself, he focuses his research primarily on rhythm games.

Alyssa Aska, Klaus Lang, Pablo Abelardo Mariña Montalvo, and Martin Ritter

“FRESCOBALDI2: Enharmonic enhancement for keyboard instruments”


Abstract

The development of musical instruments has been heavily dependent on the technological capabilities of their time period. Keyboard instruments are no exception, having undergone many developments from the earliest organs in antiquity to the sophisticated 88-key synthesisers we are familiar with now. Traditional keyboard instruments have a limited set of producible pitches because, for the most part, they cannot adjust their tuning while playing, as some other instrument families can. They were therefore usually restricted to whatever fixed system they were tuned in, each of which had to make concessions to certain intervals or in certain keys (Ripin 1989). Because performers desired to play some of these intervals and expand the possibilities of the keyboard, keyboards with enharmonic capabilities, such as split keys, were developed as early as the 15th century (Barbieri 2008). These keyboards did not become as ubiquitous as the 12-note-per-octave keyboard we are familiar with. Digital technology has made the development of enharmonic and microtonal keyboards much more feasible in the last several decades, allowing for wide dissemination; however, many such devices are developed for specific, personal use, or are restrictive in cost and performability (Georg Vogel 2023, Roli 2023).

The authors of this paper have developed a prototype enharmonic extension controller, the Frescobaldi2, that demonstrates a successful proof of concept. The prototype attaches to a MIDI organ, resting on the upper half of the manual and replicating the placement of the split-key systems familiar to many in the 16th century; through custom-built but transportable software and hardware, it allows the performer to play pitches that are not present on the MIDI keyboard. While the first iteration of this device is designed to reproduce the capability of performing all pure thirds in an octave, as in some of the initial split-key systems, the idea behind the Frescobaldi2 is to eventually enable complete customization and expandability of the device, leading to a robust enharmonic extension for MIDI keyboards without the need to purchase an additional, specialised keyboard.
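The retuning idea behind such an extension can be sketched in miniature. The calculation below is not the authors’ software, only an illustrative example: a pure major third (the ratio 5:4, roughly 386.31 cents) is about 13.7 cents narrower than the equal-tempered third of a standard MIDI keyboard (400 cents), and a MIDI device can sound the pure interval by retuning a note with a pitch-bend message. The function names and the assumption of the common ±2-semitone pitch-bend range are hypothetical.

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

def pitch_bend_value(cents_offset, bend_range=200.0):
    """14-bit MIDI pitch-bend value (centre = 8192) for a given offset
    in cents, assuming the synth's bend range is ±bend_range cents."""
    return 8192 + round(cents_offset / bend_range * 8192)

pure_third = cents(5 / 4)        # ≈ 386.31 cents
offset = pure_third - 400        # ≈ -13.69 cents below equal temperament
print(round(pure_third, 2), pitch_bend_value(offset))
```

In practice each retuned key would be routed to its own MIDI channel so that the pitch bend affects only that note, which is one reason dedicated hardware or software is needed rather than the keyboard alone.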

Biographies

Organon is a research group within the University of Music and Dramatic Arts Graz; its four members are Alyssa Aska, Klaus Lang, Pablo Abelardo Mariña Montalvo, and Martin Ritter. The purpose of Organon is to provide explanatory materials on contemporary organ music, aimed specifically at composers. These materials include texts, images, short video tutorials, concert recordings, and a list of works. They demonstrate the general structure and operation of the organ as well as general playing techniques; contemporary playing techniques are presented and explained in greater depth. In addition, the group continuously pursues ways to expand current compositional tools for the organ by researching playing techniques and by extending the instruments and interfaces themselves.

Dan Freeman

“Future Instruments”


Abstract

Between 2000 and 2005 a technological revolution took place in music performance and production. The development of powerful and stable laptops running innovative software such as Ableton Live made the laptop the dominant live instrument in many popular genres. The sonic possibilities of the laptop are essentially limitless, fulfilling the vision of early 20th-century futurists such as Luigi Russolo, who predicted that in the coming century music would expand beyond the limited orchestral timbres of his time and “conquer the infinite variety of noise sounds.”

The early 21st-century digitalization of live performance has, however, presented artists and instrument designers with challenges, and this presentation explores three of them. First, since a laptop essentially consists of four sonic tools (instruments, samplers, sequencers, and audio effects processing), how can we create an interface that seamlessly integrates them? Second, a laptop, like other electrophones of the past century, breaks with thousands of years of traditional instrument building in that the sound-generating capacity of the instrument has nothing to do with the physical construction of its controller. This disconnection poses a challenge: future instruments should ideally control the four components of the laptop while physiologically enabling the development of virtuosity. Lastly, how can future instruments be environmentally sustainable, so that they neither contribute to the “landfill economy” nor rely on increasingly unstable supply chains? Can they also be readily accessible to the areas of the Global South that have historically had much greater difficulty acquiring cutting-edge music technology? As a Latino who has spent years living and teaching electronic music production in Central and South America, I find this last point personally very relevant.
This presentation will also include a performance on a new instrument I am currently developing, known as “Blinky.” Blinky is a prototype digital live-performance interface that strives to integrate an instrument, sampler, sequencer, and audio effects processor into a coherent whole, moving laptop performance away from reliance on the visual while exploring virtuosity and greater use of the body.

Biography

Dan Freeman is a producer/bassist and music technologist based in Brooklyn, New York. He is an Assistant Professor at Berklee College of Music as well as on the faculty of The Juilliard School and New York University’s Clive Davis Institute of Recorded Music. 

Born in Boston into a family of musicians stretching back generations in his mother’s native Nicaragua, he attended Harvard College, graduating in 1997 with an A.B. in History. After graduation he moved to New York City, where he worked for over fifteen years as a session bassist, performing in a myriad of venues ranging from underground Brooklyn loft parties to Broadway pit orchestras and Carnegie Hall.

Since 2005 he has specialized in integrating laptops running Ableton Live with acoustic instruments, which has led to performances and workshops at venues, festivals, and universities around the world. In 2018 he designed “Blinky,” a prototype instrument for digital live performance that combines a sampler, a sequencer, audio effects processing, and a grid interface.

Sabe

“How do the voice, the body, and no-input mixing interconnect in a live audiovisual performance?”


Abstract

In a performance, gesture has a fundamental role. When we talk about a musical gesture, we immediately think of the body. The hand of a singer slowly rising as she sings a crescendo gives the audience a visual cue of the direction or intensity of the sound. The posture, the movements, and the involvement of the body are in service of the ear. However, a performer cannot rely on a simple spatiotemporal movement alone to carry the meaning of a piece; it is personal musical expression that enhances the musical gesture. Along with bodily manipulation of instruments, the use of emblems and affect displays are some of the tools that can increase meaning. In that sense, visual support is another effective way to reinforce a musical gesture: there is an undeniable affinity between the morphology of sound and that of video. To convey a meaningful performance with intent, extramusical components joined with gesture are essential.

In this presentation, Sabe will demonstrate how she uses her textured voice, the unpredictability of no-input mixing, and video in a creative-research approach to push the boundaries of musical gesture in live performance.

Biography

Sabe is an audiovisual performance artist from Canada. She uses her voice and a no-input mixing board as compositional and live tools to create textures and synchronise them with her own textured videos.

After graduating from Concordia University with a major in photography, she studied several aspects of tonal music, including guitar and multiple vocal techniques. She fell in love with digital music and enrolled in the D.E.S.S. in digital music at the Université de Montréal. It is through sound textures that Sabe finds great freedom of artistic expression. Her studies with the audiovisual artist Myriam Boucher since 2020 were a decisive moment in her artistic approach, opening a world of possibilities in which she joins her two passions, music and the visual.

Sabe draws on her personal experiences to speak to various societal ills. In her performances, she connects sound and image to create intimate and abstract pieces in which she wants viewers to recognize themselves through their own life experiences. Many have described her creations as contrasting, raw, and destabilizing.

Alexander Ishov (aka Sasha) and Theocharis Papatrechas

“PrismaSonus: Unmasking and Choreographing Hidden Nuances in Flute Playing via Microphone Placement”


Abstract

PrismaSonus is an artistic research initiative exploring the relationship between microphone placement, technique, and perception. By placing sensors inside the body of the instrument, this project highlights timbres and microtechniques that are otherwise hidden to performers and listeners. PrismaSonus examines how technology, technique, notation, and listening interact and converge in electroacoustic collaborations, proposing novel ways of creating immersive auditory experiences.

This presentation features the research element of Morphés for Flute and Electronics, presented at Thursday evening’s concert. Project creators Alexander Ishov and Theocharis Papatrechas will:

  • Demonstrate and analyze the musical and acoustical findings of PrismaSonus using pre-recorded material and live demonstrations;
  • Discuss methods for describing and notating electroacoustic flute techniques;
  • Share musical applications of technical research;
  • Present their collaborative model as a potential option for other performer-composer duos.

In June 2023, the team will expand the live performance version presented at this conference into Morphés II, occupying the 32 loudspeakers of the Experimental Theater at UC San Diego’s Music Department. Morphés III will be an online version of the project, mixed binaurally for headphones. Later in 2023, PrismaSonus will unveil an interactive database of electronically mediated flute techniques, a modern resource for flutists and composers taking up electroacoustic collaboration.

Biographies

Alexander “Sasha” Ishov is an innovative flutist specializing in 20th and 21st century music, devoted to the co-creation of new acoustic and electroacoustic works, as well as showcasing repertoire that challenges perceptions of the historical canon. As a committed researcher, he explores the intersection between interface design, pedagogy, and electronics, engaging with issues of design in the practice room, the classroom, and the concert hall. He is currently a DMA Candidate in Contemporary Music Performance at the University of California San Diego, and holds the Aspen Contemporary Ensemble flute fellowship position.

Theocharis Papatrechas is a composer and sound artist interested in utilizing technology to expand sonic possibilities and engage listeners in immersive artistic experiences. Having obtained a Ph.D. in Music from UC San Diego, he currently holds the position of Postdoctoral Scholar at the Qualcomm Institute of CalIT2. Converging research on environmental science, acoustic ecology, and audio arts through data-driven sonification, his work investigates the impacts of extreme events on the acoustic signatures of underwater and remote terrestrial environments.

Luciano Azzigotti and Jack Adler-McKean

“Electroacoustic resonators in Incursion”


Abstract

Resonances in brass instruments are created not by blowing streams of air through the instrument, as might be logically inferred, but rather by using an air stream to vibrate the lips, which in turn resonate the air already present in the instrument; these resonances then radiate outwards via the bell. In Incursion, a work for solo extended tuba resulting from a long-term collaboration between composer Luciano Azzigotti and tubist Jack Adler-McKean, a feedback loop system provides an additional source of resonation for the air contained within the instrument, which, because it does not rely on the lips, can radiate outwards via both the bell and the mouthpiece.

Biographies

The work of the Swiss-based Argentinian composer Luciano Azzigotti is characterized by the invention of systems where a specific type of relationship between instruments, beings and the environment can collide. He looks to reinterpret the basis of vibration as an immanent force and the musical object as an enactive experience by incorporating amplification and multimodal mediation as necessary materials in his writing for instruments, using dynamic notation, video, and spatial processes. He was awarded the Juan Carlos Paz Composition National Prize 2017, and has received commissions and awards from Fondo Nacional de las Artes, Ministry of Culture Argentina, Centro Cultural de España, Art Mentor Lucerne, Cultural Promotion Council Buenos Aires, CALQ, Fundación Telefónica, Melos/Gandini, and Colon Theatre. He is currently teaching Multimedia Composition at Haute école de musique de Genève and UNTREF (Buenos Aires), and is a PhD candidate at the Hochschule der Künste Bern / Musikhochschule Freiburg.

Jack Adler-McKean promotes the tuba family through collaborations with ensembles, composers, and academic institutions. Recent projects include performances with Klangforum Wien, music theater productions at the Luzerner Theater, collaborations on new solo works with George Lewis and Sarah Nemstov, premières at the Darmstädter Ferienkurse (2018 scholarship prize winner), and recitals in Buenos Aires and New York, as well as leading tuba masterclasses in Jihlava, giving composition seminars in Malmö, and writing reviews for TEMPO and Music and Letters. His first book, The Playing Techniques of the Tuba, was published by Bärenreiter in 2020; other research outputs include conference presentations in Vienna, Bloomington, and Paris, contributions to the Historical Brass Society Journal and Handbook of Wind Instruments, collaborations with the Musikinstrumentenmuseum der Universität Leipzig, and curation of the Contemporary Music for Tuba collection for Edition Gravis. Jack was recently awarded his PhD from the Royal Northern College of Music (supported by the AHRC).