
Advanced Practice Report
Advanced Practice: Interactive Frag Rap aka ‘FE!N MACHINE’
Introduction
For my live advanced practice performance, I set out to create a seamless system within the non-linear functionality of the DAW Ableton Live that reimagines how ad-libs can be overdubbed in a studio environment. This system is demonstrated in action to a live audience, accompanied by a corresponding video element that is modulated live using the virtual instrument plugin Visual Synthesiser (henceforth, VS).
Performance Material – Planning
My thorough reading of ‘Neon Screams’ (Kit Mackintosh, 2021)1 was the main influence on the direction of my project. I wanted to capture the poetic way in which he personifies the evolution of vocal delivery in the contemporary urban-music world, whilst doing justice to how that resonates with me as both a musician and an autistic individual. Neon Screams goes to great lengths to explain the compositional and percussive value that ad-libs provide to post-modern genres such as rage and drill, and how these evolve and distort our interpretation of language and genre across the digital landscape of hip-hop’s subgenres. I used the non-linear functionality of Ableton to create a session that let me capture the actualisation of these ideas as a visual experience, one which could humanise my relationship with frag rap as an autistic individual.
Quite often, the personifications Kit Mackintosh uses to describe the hysterical nature of hip-hop vocal delivery can equally explain the way autistic people connect with this genre, with many finding its characteristics stimulating and regulatory. The music that inspired the ad-lib-driven direction of my performance was ‘Low’ by SZA2 (as mentioned in my proposal) and ‘Topia Twins’ by Travis Scott3, both of which feature performances accompanied by signature ad-libs from other established artists, providing unique replay value and ultimately changing how the songs are experienced by the listener.
Wanting to replicate this experience in an interactive Ableton Live session, I knew there would be a substantial amount of material to collect in order to create a flexible experience demonstrating how signature ad-libs from other vocalists can lend additional connotations to a performance. I also knew that carrying out the performance effectively would require meticulous file management, so that I could quickly and accurately select, arm, and disarm virtual drum instruments loaded with samples from specific artists.
With all this in mind, it was clear what I needed to collect to fully realise the potential of my project:
- Isolated vocal one-shots.
- Music to perform/overdub to.
- Visuals to emphasise the delivery of the one-shots.
- An Ableton session processing effects for ad-libs and visuals while using relatively low CPU.
Performance – Technology development and delivery
Preparation
To collect the samples, which would then be organised by artist, I turned to the stem-extraction feature introduced in the FL Studio 21.2 update, a key tool for separating vocal performances from their backing tracks so that I could export clean one-shots of the iconic ad-libs often heard in popular hip-hop tracks. Within FL Studio I balanced each one-shot centrally in the stereo field and gain-staged each to -6 dB before bringing them into my Ableton session for routing.
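The gain-staging step can be illustrated with a short script. This is a sketch of the idea only, not part of my actual workflow (which happened inside FL Studio), and the filenames are hypothetical:

```python
import numpy as np
import soundfile as sf

# Normalise a one-shot so its peak sits at -6 dBFS, leaving headroom
# for downstream effects and bus summing.
TARGET_DB = -6.0
target_peak = 10 ** (TARGET_DB / 20)          # ~0.501 linear

audio, sr = sf.read("adlib_raw.wav")          # hypothetical input file
peak = np.max(np.abs(audio))
if peak > 0:
    audio = audio * (target_peak / peak)      # scale peak to -6 dBFS

sf.write("adlib_gainstaged.wav", audio, sr)
```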
Within Ableton, I organised a Drum Rack template in which each rack held its own bank of 16 one-shots from an assigned artist. These drum racks would also have certain effects assigned to the knobs on my AKAI MPK Mini Plus, which would be used to modulate unique effects live during my performance (a minimal sketch of this control mapping follows the list below). These modulations would enable:
- Transposing the sample – Drum Rack parameter
- Panning the sample – Drum Rack parameter
- Adding reverb – input track
- Adding delay – input track
- A vibrato LFO – Drum Rack parameter
- A tremolo LFO – Drum Rack parameter
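For illustration, the following sketch shows how knob movements arrive as MIDI control-change messages and map onto the parameters above. The CC numbers are my own assumptions for demonstration; the real mapping was done with Ableton’s MIDI Map mode rather than code:

```python
import mido

# Hypothetical CC numbers for the six knobs (illustrative only).
CC_MAP = {
    70: "Drum Rack: transpose",
    71: "Drum Rack: pan",
    72: "Input track: reverb send",
    73: "Input track: delay send",
    74: "Drum Rack: vibrato LFO",
    75: "Drum Rack: tremolo LFO",
}

with mido.open_input() as port:              # default MIDI input device
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_MAP:
            value = msg.value / 127.0        # normalise 0-127 to 0.0-1.0
            print(f"{CC_MAP[msg.control]} -> {value:.2f}")
```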
To ensure that the application of time-based effects (reverb/delay) would not overload the CPU, I set up an effects bus monitoring the input of my array of drum racks. This track also carried the sends for the reverb and delay effects dedicated to the sample bus, as well as final plugins such as compressors and limiters. I ended up with a group of 9 separate drum racks, each equipped with 16 unique one-shots, meaning the FX bus handled 144 unique one-shots in total. A dedicated FX bus was essential for keeping the number of plugins used for sample modulation to a minimum, which in turn kept spare processing power for the video effects being generated alongside the many MIDI controls being processed.
For my visual effects to react to the session, I needed a dedicated FX bus for my visual synthesiser, into which the sample bus and any other audio signals were routed before being sent to the master. This bus also provided additional gain before the VS track, which had an allocated ceiling so that the audio signal would not clip.
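The overall routing described above can be summarised as a simple graph. This is just a compact restatement of the architecture for clarity; the names are mine, not labels from the actual session:

```python
# Signal flow of the session, expressed as a routing table:
# each source feeds the buses listed against it.
ROUTING = {
    "drum_rack_1..9": ["sample_bus"],                 # 9 racks x 16 one-shots
    "sample_bus":     ["reverb_send", "delay_send", "vs_fx_bus"],
    "reverb_send":    ["vs_fx_bus"],
    "delay_send":     ["vs_fx_bus"],
    "backing_tracks": ["vs_fx_bus"],                  # scene-launched clips
    "vs_fx_bus":      ["master"],                     # gain + limiter ceiling
}

for source, destinations in ROUTING.items():
    print(f"{source} -> {', '.join(destinations)}")
```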
To provide music to perform my virtual instruments over, I imported some currently relevant hip-hop tracks suited to the array of performers available on my prebuilt drum racks. Since I was not using these tracks for any commercial purpose, I did not consider there to be any ethical issues with using them for demonstration. I assigned each track/clip its own scene with a global tempo change matching the song being played, which ensured that any time-based effects, such as delay feedback, stayed in sync with whichever clip was being played.
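Scene-based tempo changes matter because synced delay times are derived from the project BPM. The following is my own illustration of the underlying arithmetic, not code from the session:

```python
def delay_ms(bpm: float, division: float = 1 / 4) -> float:
    """Delay time in milliseconds for a note division at a given tempo.

    division is a fraction of a whole note, e.g. 1/4 for a quarter
    note or 3/16 for a dotted eighth.
    """
    whole_note_ms = 4 * 60_000 / bpm   # four beats to a whole note
    return whole_note_ms * division

# A quarter-note delay shifts with tempo, hence the per-scene BPM change:
print(delay_ms(140))   # ~428.6 ms
print(delay_ms(148))   # ~405.4 ms
```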
These tracks were also grouped into a bus which sent their signal to the VS FX bus. Most of the reactions from the VS FX bus were based on the backing music for my virtual ad-lib instrument: within the modulation settings of the virtual instrument, the displayed visual values change based on both the audio signals being received and the MIDI controllers being operated, which also interact with the visual synthesiser instrument. Visual Synthesiser is a VST plugin by Imaginando that allows you to create dynamic visual effects through thorough audio routing within any given DAW (digital audio workstation). The non-linear nature of Ableton’s functionality allows for greater control over the interactive aspects of the virtual instrument.
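As a rough illustration of the kind of audio-reactive mapping involved, the sketch below follows the RMS envelope of an incoming signal and turns it into a 0–1 ‘brightness’ value. This is my own sketch of the concept, assuming a hypothetical bounce of the VS bus as input; it is not Imaginando’s implementation:

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("vs_bus_input.wav")    # hypothetical bounce of the VS bus
if audio.ndim > 1:
    audio = audio.mean(axis=1)             # fold stereo down to mono

window = int(sr * 0.02)                    # 20 ms analysis windows
for i in range(len(audio) // window):
    chunk = audio[i * window:(i + 1) * window]
    rms = np.sqrt(np.mean(chunk ** 2))     # envelope follower
    brightness = min(1.0, rms * 4.0)       # scale and clamp to 0-1
    # In VS this value would modulate a layer parameter; here we print it.
    print(f"{i * 0.02:6.2f}s  brightness={brightness:.2f}")
```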
By feeding the MIDI controller data into the visual synthesiser as well, the visuals react whenever I touch the controller, giving my audience greater clarity around my sample triggering and creating the performance’s value in relation to them. With audio samples triggered alongside a visual reaction to those triggers, the audience has an easier time differentiating between the sounds within the songs I am performing over and the audio cues I am actively playing, which clarifies my interaction with the performance. This was also the main method of humanising the autistic traits that were a central theme behind the project’s realisation (more on this in the development log).
Before routing the group buses for the samples and the music into the visual synthesiser bus, I made sure there was plenty of audio headroom to work with, reducing the chances of unwanted distortion occurring before the master fader. Using limiters both to make up gain after the VS bus and to put a ceiling on the signal entering the master fader was a safe way of ensuring a healthy amount of loudness whilst reducing the risk of clicks, pops, and unwanted compression that could have occurred without these safeguards.
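The headroom reasoning is simple arithmetic, sketched below as my own illustration (not part of the session):

```python
# A full-scale signal peaks at 1.0 linear (0 dBFS). Two such signals
# summed can peak at 2.0 (+6 dBFS) and clip. Gain-staging each source
# to -6 dBFS keeps their sum at roughly the 0 dBFS ceiling.
peak_minus_6 = 10 ** (-6 / 20)      # ~0.501 linear
print(peak_minus_6 + peak_minus_6)  # ~1.002 -> right at the ceiling
```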
Technical rehearsal
The performance took place at the Talent House, in a spacious room provided by the United in Development movement. The room could hold around 50 audience members, with a stage for my equipment set-up and a projected display for my visual content. With a table provided for support, I set myself and my equipment to the appropriate side of the stage to ensure the audience had a clear view of the light show accompanying the audio.
My personal equipment on the table consisted of a MacBook Air, an AKAI MPK Mini Plus, and a 50 GB HDD, all provided by me. The room allocated for my performance came with a pair of EVC-1082 PA speakers driven by a QSC GX3 power amplifier, with a USB-C-to-HDMI converter provided by UEL so that visual content could be projected on stage. Before the start of the show, each performer, myself included, went through a thorough line check to ensure all the listed equipment was operating properly, including all video content being displayed and all auditory content being brought into the Allen & Heath GR4 rack mixer, which served as our interface for the evening.
For my performance, the necessary checks for my equipment were as follows:
- That my hard drive (containing all the projects and samples) was recognised and operating.
- That my Ableton session was compatible with Ableton on my MacBook Air (the session was built on a considerably more powerful Windows machine).
- That I could display my visual effects on a separate monitor, giving me visual feedback during my performance.
- That any Visual Synthesiser presets made on my PC had successfully transferred to my Ableton session. This was especially important because presets from another device will only load if they were saved within an Ableton session containing the software as a VST plugin instrument, so this was a crucial check to make before leaving home.
- That the MIDI controller was interacting as intended with both Ableton and my VS instrument (a minimal script version of this check is sketched after this list).
- That the audio interface was behaving as expected with my hardware and my Ableton session.
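As a small aside, the MIDI visibility check can also be done from a script; the actual check happened inside Ableton’s preferences, so this is just an illustrative alternative:

```python
import mido

# List the MIDI devices visible to the operating system; the AKAI
# MPK Mini Plus should appear here if it is connected correctly.
print(mido.get_input_names())
print(mido.get_output_names())
```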
The performance
During the interval, while I was setting up my Mac with the audio and video outputs, the hard drive containing all the files for the session was not being recognised by my MacBook Air. To make sure I did not lose my cool on stage, I regulated myself with deep, slow breathing. Although the hard drive eventually mounted successfully, I had lost too much time to give the audience the presentation I had organised, which meant I went into my performance having given the audience less context than I would have liked. It also meant that, whilst I had initially intended to play 3 songs for my overdubbing performance, I had to cut the set to 2 songs for the sake of keeping to the performance schedule. I therefore performed a live overdub to Travis Scott’s “FE!N” (2023)4 and “Hot” (2019)5 by Young Thug, Gunna and Travis Scott, both tracks assigned to scenes within my Ableton session so that my time-based effects stayed correct as I performed across tracks of different tempos. Although I had practised most on these tracks, I feel I could have anticipated the appropriate sounds better throughout the performance; nevertheless, the technical side ran seamlessly, which left me satisfied with the event overall.
Development Log – Context
Many autistic people will experience symptoms of ‘echolalia’ throughout their lives. Traditional definitions describe the condition as the often ‘meaningless’ repetition of words or phrases, but internal relationships with echolalia vary amongst autistic people, with many using their symptoms as a method of learning in the way that suits them best. Echolalia is often associated with abilities sometimes thought synonymous with autism, such as perfect pitch identification, amongst other distinctive musical traits. It is estimated that most autistic people (between 75% and 80%) exhibit echolalia (Xie, 2023)6, shaping the way they learn and communicate through repetitive language. For me personally, using noises as an autistic individual to convey or connote a feeling or theme adds immense value to what music provides me as a sensory-seeking experience, and I was inspired to explore ways in which I could convey that sentiment in a public space.
Other works of notable mention that encouraged me to humanise my experiences of echolalia include ‘Speaking for Ourselves’ (Michael B. Bakan, 2018)7. Unlike much unethical research related to autism, Bakan’s book goes out of its way to discuss musicality with atypical youths, involving them in the conversation rather than studying them as objects from afar, an approach often criticised for the tone-deaf findings and observations made about the neurodivergent community. Built on case studies and conversations with a plethora of young autistic individuals, the book works actively against stigmatisation by humanising autistic thought and ways of being, and by explaining how these children can find connection through discovering their own musicality and using it to gain confidence.
Autistic individuals who value regulatory ‘stimming’8 behaviour may have a strong affinity for the urban contemporary genres that rely on ad-libs for performance effect. The percussive value of frag rap can be very appealing to the autistic ear and can provide emotional-regulation value for those neurodivergent listeners who embrace its frantic nature. For my live performance, I wanted to humanise what listening to these percussive one-shots does for me as a sensory-seeking experience, which I aimed to do by automating my visual effects to respond to the incoming audio signals, emphasising how that information is received by the joyful autistic listener.
Personal Development
Overall, embarking on this project has been a thrilling personal induction into the non-linear functionality of the Ableton Live software, whilst also giving me confidence in my ability to perform live and to engage with young autistic people who have yet to discover their affinity with music.
Picking up new DAW software can be daunting at first, but learning Ableton’s functionality has been an eye-opening process that has strengthened my mixing and post-production techniques. It has become clear during this endeavour that Ableton is the optimal DAW for my live performance ventures, as well as for building innovative musical systems for interaction with neurodivergent children through VS and the various sampling tools available in the software.
Reference List
- Mackintosh, Kit, and Simon Reynolds. Neon Screams: How Drill, Trap and Bashment Made Music New Again. New edition. London: Repeater Books, 2021. [Accessed 25 April 2024]
- SZA (2022). ‘Low’, SOS. Top Dawg Entertainment. Available at: https://www.youtube.com/watch?v=Z-T_O_vl-8Y [Accessed 17 December 2023]
- Travis Scott (Ft. Rob49 & 21 Savage) (2023). ‘TOPIA TWINS’. Cactus Jack Records. Available at: https://www.youtube.com/watch?v=J4nvbKBuEBU [Accessed 30 April 2024]
- Travis Scott (Ft. Playboi Carti) (2023). ‘FE!N’. Cactus Jack Records. Available at: https://genius.com/Travis-scott-fe-n-lyrics [Accessed 30 April 2024]
- Young Thug (Ft. Gunna & Travis Scott) (2019). ‘Hot (Remix)’. Warner Music Group. Available at: https://genius.com/Young-thug-hot-remix-lyrics [Accessed 30 April 2024]
- Xie, Fan, Esther Pascual, and Todd Oakley. ‘Functional Echolalia in Autism Speech: Verbal Formulae and Repeated Prior Utterances as Communicative and Cognitive Strategies’. Frontiers in Psychology 14 (23 February 2023): 1010615. https://doi.org/10.3389/fpsyg.2023.1010615 [Accessed 25 April 2024]
- Bakan, Michael B., Mara Chasar, and Graeme Gibson. Speaking for Ourselves: Conversations on Life, Music, and Autism. Illustrated edition. New York, NY: Oxford University Press, 2018. [Accessed 25 April 2024]
- Mitchell, Kristin, WebMD Editorial Contributor, and Shelly Shepard. ‘What You Need to Know About Stimming and Autism’. WebMD. Available at: https://www.webmd.com/brain/autism/what-you-need-to-know-about-stimming-and-autism [Accessed 25 April 2024]









