Latency, Reamping and Revoicing

I’ve been cursed, I think.  My ears have been trained to notice and focus on something that most people learn to compensate for and it gets in the way of my music making.  Let me start at the beginning.

To me, the latency that really matters, when you are recording or playing a musical instrument, is the time between your articulation of the gesture that is supposed to cause the note to play and when the note actually speaks.  In most real instruments, the delay between plucking the string, or hitting the piano key or drum head, and actually hearing the sound it makes is negligible.  It’s so short a period of time, you don’t even notice it.  The gesture and the sound are inseparable.  This is a good thing, because it allows you to focus on what you play, how you articulate your notes and on the rhythm and feel of the music you are making.  The latency doesn’t get in the way.

In a DAW, at least with my “in the box” workflow, this isn’t the case.  If you are using amp simulations, for example, there is a noticeable delay between when you pick the string and when you hear the note in your headphones.  The delay is due to the computer having to record your note through an input buffer, which adds its own delay, then process it through the amp simulator’s mathematics, which adds another delay, then play it back to you through an output buffer, adding a third delay.  The cumulative delay means that the sounds don’t happen when you intend them to happen and you start having to compensate, playing ahead of the beat, so that the moment the note speaks is actually on the beat.  I hate having to compensate.
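
To put some rough numbers on it, here is a small sketch of the arithmetic.  The buffer sizes, sample rate and plug-in look-ahead are illustrative figures I have picked for the example, not measurements of any particular interface or amp simulator.

```python
# Rough round-trip latency estimate for an "in the box" monitoring chain.
# All figures below are illustrative assumptions, not measurements of any
# particular audio interface or plug-in.

SAMPLE_RATE = 44_100     # samples per second
INPUT_BUFFER = 256       # samples buffered on the way into the computer
OUTPUT_BUFFER = 256      # samples buffered on the way back out
PLUGIN_LOOKAHEAD = 64    # extra samples an amp simulator might need for its maths

def to_ms(samples: int, rate: int = SAMPLE_RATE) -> float:
    """Convert a number of buffered samples into milliseconds of delay."""
    return 1000.0 * samples / rate

round_trip = to_ms(INPUT_BUFFER) + to_ms(PLUGIN_LOOKAHEAD) + to_ms(OUTPUT_BUFFER)
print(f"input buffer:  {to_ms(INPUT_BUFFER):.1f} ms")
print(f"amp simulator: {to_ms(PLUGIN_LOOKAHEAD):.1f} ms")
print(f"output buffer: {to_ms(OUTPUT_BUFFER):.1f} ms")
print(f"round trip:   ~{round_trip:.1f} ms")
```

With those made-up but plausible figures, the round trip comes out at roughly 13 milliseconds, which is already enough for a sensitive player to feel.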

I get this with drum pads that drive a drum sound and with a USB keyboard that is driving a virtual instrument synthesiser.  Because there is computer processing work to do, in order to make the note speak, I have to anticipate when the sound needs to be audible and play forward of the beat, so that it does.  What this does, for me, is make me focus on the delay compensation, so I lose attention to the feel and rhythm, the articulation and subtleties of playing that are actually supposed to come from my fingers and brain.  I get a robotic, spasmodic performance, which is so unlike a live performance, it annoys me greatly.

Now I realise that players of wooden pianos are well acquainted with learning how to compensate for the time it takes the hammers to travel toward the strings before they’re struck, and I probably could train myself to get over the latency fixation and get back to thinking about bends and vibrato, attack and how much intensity and sustain I want to impart to each note, like I would when I am not recording, but that’s going to take some time.  It will take even longer, in my case, because I spent literally years of my life in digital audio research, worrying about things like perfect lip synchronisation to moving pictures and how to bring a track into exact alignment with a guide track.  I was trained to notice and fixate on these small delays.

So my problem boils down to how to eliminate or minimise latency.  I could buy a super fast computer, but my belief is that the faster the computer, the more you try to do with it, so you add more sophisticated processing, in-line, while recording, and the latency creeps back in.  You know this intuitively, because you are striving to get the best sound you can in your headphones while you are recording, so that it makes you play as if you were in a real environment, not inhabiting a minimalist, stripped down soundscape designed to save computational cycles.  Half the fun of playing is in how great it sounds.  When you strip that away, to avoid latency, so that all you hear is a pristine version of your instrument, some of the fun is lost and you can ultimately hear that in the performance.  When the sound you make is boring, you play boringly.  Who wants to listen to a record that consists of boring performances?  No audience known to man, that’s who.

Here’s the other solution, which I am just now beginning to explore.  It takes the load off the computer and ensures that the computer’s latency doesn’t get in the way of your playing.  If I split the guitar signal (with a Boss LS-2, for example, but any decent DI box with a thru output could work), so that I send a clean, direct injected feed to the DAW and the other half to a real guitar amplifier, then take a feed from the amp back to the DAW (via USB), I record two tracks for every guitar performance I make.  One track is clean, so I can run it through amp simulators (my favourite is S-Gear, but I have many) in the mix and dial in any sort of guitar sound I want.  That’s what I call “re-amping in the box”.  I also have the feed from the guitar amplifier recorded and, because my amp is a modelling amp (a Fender Mustang III v1), it’s a simulation too, so I can dial in lots of different sounds and they sound great in my headphones while I record.  In fact, I can monitor the track in one ear and slip the headphones off the other ear and listen to the sound made by the speaker, which sounds much, much better to me.  I could actually record a third track at the same time, by miking up the amplifier speaker, but my recording room is not that exciting and the soundproofing is not that good.  It might be worthwhile in a high end studio, with a great room acoustic, but not in my studio.

Why I like this way of recording is that, sometimes, I like to play with the acoustic feedback that I get between amp and guitar.  Those sounds have a bit of an accidental, serendipitous nature about them and, if you were to play through an amp simulator exclusively, you might never decide to take your playing in a direction where the acoustic feedback becomes part of the performance.  You’d never suspect it was possible.  With a live amp, though, you can have those happy discoveries and the performance you capture has your playing frozen to digits, played the way it only would be with acoustic feedback happening.  That’s a very flexible starting point for re-amping and final mixing, especially if you have recorded the feedback from your amp too.

The other thing that recording a clean feed, while listening to a live guitar amp sound, gives you is the ability to replay the clean feed back out of the DAW and into the guitar amp, recording the resultant sound via the USB feed and/or a microphone in front of the guitar amp speaker.  This is what is traditionally called re-amping.  It’s using the clean performance you captured in the DAW to drive a real amp, at mix down time, so you can change the guitar sound after the fact.

Unfortunately, the signal presented to the input of a guitar amp by a DAW is not the same as the signal presented by a guitar.  To fix this, John Cuniberti patented a technique for messing up the signal just enough, with some passive filtering, to make it closely resemble the signal that comes out of a guitar.  You can buy a Reamp box from Radial Engineering in Canada.  If you are going to re-amp externally, this is the box you need.  It will transform the clean guitar track, recorded in the DAW, into a signal indistinguishable from one coming directly from a guitar, so the amp will respond as if a real guitar player were playing into it, not a recording of a guitarist.  If you re-amp in the box, however, using amp simulators, you’re ok, because the signal recorded to disk is the same as the one that presents at the input of the amp simulator anyway.  All the necessary pre-compensations are already part of the amp simulator.
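
To make the mismatch concrete, here is a small sketch of the level difference alone.  The dBu figures are assumed ballpark values for line level and guitar pickup level, not the specification of the Reamp box, and the impedance and isolation aspects the passive box also handles are not modelled at all.

```python
# Illustrative comparison of line level vs. instrument level, to show why a
# DAW output shouldn't drive a guitar amp input directly.  The dBu figures
# are assumed ballpark values, not the specification of any Reamp box.

def dbu_to_volts(dbu: float) -> float:
    """Convert a dBu level to RMS volts (0 dBu = 0.775 V RMS)."""
    return 0.775 * 10 ** (dbu / 20.0)

LINE_LEVEL_DBU = 4.0      # assumed nominal pro line level from an interface output
PICKUP_LEVEL_DBU = -20.0  # assumed ballpark level a guitar pickup delivers

print(f"line level:         {dbu_to_volts(LINE_LEVEL_DBU):.3f} V RMS")
print(f"instrument level:   {dbu_to_volts(PICKUP_LEVEL_DBU):.3f} V RMS")
print(f"attenuation needed: about {LINE_LEVEL_DBU - PICKUP_LEVEL_DBU:.0f} dB")
```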

This seems to be the way forward, for me.  I will be able to hear myself play in good sync with the bed of the track, so I can get a better performance, without worrying about latency.  The bonus is that my playing can be more inspired, because I will hear my own playing through a great guitar sound.

Solving the keyboard problem takes me down a similar path.  If I use a humble home keyboard, such as the ones Yamaha makes for students, I can hear the sound of a piano, or various other instruments, including drum sounds, while I record my track, but take a MIDI feed, via USB, into the DAW.  I can also record the sound of the keyboard, if I want, as a guide track.  That way, I can monitor the bed track already captured in the DAW through headphones, but play along, while overdubbing, with a low latency sound source (the keyboard).  Having captured the MIDI notes of my performance, I can then use virtual synthesisers or drum machines, in the mix, to re-voice the MIDI to other sample-based sounds, or to drive virtual analogue synthesisers.  The point is that, while I won’t hear the final sound when I track, I will hear an acceptable, low latency keyboard sound, which lets me concentrate on the performance.  I can then sweeten the sound after recording the MIDI, simply by running the captured MIDI into a virtual synthesiser or sampler.
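
To illustrate what re-voicing the captured MIDI amounts to, here is a minimal sketch in Python, using the mido library.  The DAW normally does this routing for you; the sketch just makes the point that the performance data and the sound source are separate things.  The file name and the virtual instrument port name are hypothetical placeholders.

```python
# A minimal sketch of re-voicing a captured MIDI performance by playing it
# back into a different software instrument.  The file name and port name
# are hypothetical placeholders, not real devices on any particular system.
import mido

PERFORMANCE = "keyboard_take_01.mid"       # hypothetical captured take
SYNTH_PORT = "My Virtual Synth MIDI In"    # hypothetical virtual instrument port

midi_file = mido.MidiFile(PERFORMANCE)

with mido.open_output(SYNTH_PORT) as synth:
    # MidiFile.play() yields the messages in real time, preserving the timing
    # and dynamics of the original performance; only the sound source changes.
    for message in midi_file.play():
        synth.send(message)
```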

For vocals, I might have to do a similar trick of monitoring my voice directly off the mic and adding some hardware based compression, gating, EQ and reverb, so that I don’t burden iZotope Nectar with the processing needed to make a good sound in my headphones.  That one remains to be seen.

I realise this is a bit of an extreme kludge, to solve a problem that doesn’t bother most people, but it’s a problem that stops me from making music, so that is a very bad thing.  In that sense, any solution that works, however inelegant, is a good solution.

The bonus, at mix down time, is that I have a great deal of flexibility in the sounds I choose to re-amp or re-voice.  I can turn anything into anything else, provided I have captured a good performance.  If I don’t like the piano sound I first chose, I can choose another, or even substitute the piano with some other, more experimental, sampled sound.  I’m not locked in.  It’s the same with drum sounds.  If I find a snare drum sound I like better, or which sits better in the mix, I can run the recorded MIDI into the sampler with the better sound and, voila, the snare is replaced.  It does mean that I have to be more careful with the DNU (do not use) tracks and remember to mute them in the mix, but that seems a small price to pay.
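
The same idea can be expressed as a tiny script, because the captured MIDI is just data.  The sketch below, again in Python with the mido library, re-points the snare hits at a different note number, so that whatever sample is mapped there plays instead.  The note numbers assume the General MIDI drum map and the file names are hypothetical; in practice, I would normally just swap the kit or sampler inside the DAW.

```python
# A sketch of "snare replaced": re-route the snare hits in a captured drum
# MIDI take to a different sampler note, so a different snare sample plays.
# Note numbers assume the General MIDI drum map; file names are hypothetical.
import mido

ORIGINAL_SNARE = 38     # GM acoustic snare, the sound I no longer want
REPLACEMENT_SNARE = 40  # GM electric snare, assumed to hold the preferred sample

source = mido.MidiFile("drum_take.mid")
replaced = mido.MidiFile(ticks_per_beat=source.ticks_per_beat)

for track in source.tracks:
    new_track = mido.MidiTrack()
    for msg in track:
        if msg.type in ("note_on", "note_off") and msg.note == ORIGINAL_SNARE:
            msg = msg.copy(note=REPLACEMENT_SNARE)  # re-point the hit at the new sample
        new_track.append(msg)
    replaced.tracks.append(new_track)

replaced.save("drum_take_snare_replaced.mid")
```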

If you’re struggling to get a good performance down, due to latency, or because you can’t make the sound of your own playing or singing good enough in your own headphones, give these re-amping and re-voicing ideas a try.

