What is latency and how can I minimize it?
This article will share all the essential things you should know about reducing latency to better record and perform music!
Latency is the time it takes for an analog signal (such as a guitar or microphone) to be converted to a digital signal, processed by a computer, then converted back to analog and delivered to your speakers. This delay can get annoying, especially in live scenarios, or when recording in the studio and you need to hear yourself play.
THE QUICK VERSION…
Use a lower buffer size (128 samples) when recording audio, and a higher buffer size (512) for mixing or producing with MIDI instruments.
Ultimately, latency is a balance and a compromise of quality and speed, measured in milliseconds. There are a ton of variables in the production process that can cause latency, and we’ll explore all of them.
Let’s define some terms:
Buffering – the process of grouping samples into batches for processing.
Buffer Size – the number of samples in one batch.
Audio Cycle – the processing of one audio buffer.
Latency – the time duration of the buffer.
It’s generally well accepted that most humans can’t discern audio intervals of less than about 10 milliseconds, or 0.01 seconds. In other words, two sounds played 10 milliseconds apart sound simultaneous.
Latency can be calculated with some simple algebra, and from the same equation we can work out the required buffer size:
Latency = BufferSize / SampleRate
BufferSize = Latency x SampleRate
BufferSize = 0.01 x 44100 = 441
At a sample rate of 44.1 kHz, 10 ms is 441 samples. Since some sound cards only support buffer sizes that are powers of 2, this is often rounded up to 512 samples (about 12 ms) or down to 256 samples (about 6 ms), depending on what your computer is capable of.
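The formulas above can be sketched in a few lines of Python (the function names are mine, just for illustration):

```python
# Latency <-> buffer-size math from the formulas above.

def latency_ms(buffer_size: int, sample_rate: int) -> float:
    """One buffer's worth of audio, in milliseconds."""
    return buffer_size / sample_rate * 1000

def buffer_for(latency_s: float, sample_rate: int) -> int:
    """Samples needed to cover a target latency in seconds."""
    return round(latency_s * sample_rate)

print(buffer_for(0.01, 44100))           # 441 samples for 10 ms at 44.1 kHz
print(round(latency_ms(512, 44100), 1))  # 11.6 ms
print(round(latency_ms(256, 44100), 1))  # 5.8 ms
```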
LET’S GO DEEPER… The sections below summarize years of experience with audio, condensed into a few short reads.
A good analogy for latency: most people know just enough about their car to drive from point A to point B. Likewise, most people know just enough about computers to get their work done.
But a race car driver knows significantly more about how a car works than the average person. You too, can learn how your tools work, to better use them to your advantage.
Click through the categories below to learn more.
+ 1. How is latency created?
To get better results with latency, we need to understand that latency is a balance of quality and speed. Again, there are a ton of variables in the production process that can cause latency. So first let’s explain the different types of latency, so we know how to tackle the problem.
Input Latency: When recording an instrument such as guitar, your computer has to convert this audio from analog to digital.
Processing Latency: This audio is then processed with FX on it, which is a HUGE variable depending on how many FX are layered (since each one can delay your signal chain), especially on your master channel.
Output Latency: The time it takes for your computer to convert digital signal back to analog out to your speakers.
These 3 processes combined are called round-trip audio latency. This can often take the most time because of the signal path: we’re going in and back out. And no, longer wires don’t cause latency, don’t even think about it… We can see the input and output latency listed in milliseconds in Ableton’s Audio Preferences.
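As a rough sketch, the three legs simply add up. The millisecond figures below are made-up example numbers, not measurements from any particular interface:

```python
# Toy model of round-trip latency: input + processing + output.
# These values are illustrative, not measured.

input_latency_ms = 5.8       # A/D conversion + input buffer
processing_latency_ms = 2.0  # FX on the track (varies wildly per plugin)
output_latency_ms = 5.8      # output buffer + D/A conversion

round_trip_ms = input_latency_ms + processing_latency_ms + output_latency_ms
print(f"round trip: {round_trip_ms:.1f} ms")  # round trip: 13.6 ms
```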
If you are playing a MIDI controller, there is essentially no latency in transferring the note data to your computer, because it’s already digital. MIDI controllers have negligible input latency.
Latency can occur when the MIDI instrument generates its sound, such as a bass wobble in the soft synth Serum. There’s additional processing latency in any FX after the bass, such as distortion or reverb.
And finally, the output of the MIDI instrument will have a certain amount of latency: the time it takes for your computer to convert the digital signal back to analog and out to your speakers. But MIDI instruments often have less total latency than audio, because input audio latency is not part of the equation.
So with all that said, HOW DO WE REDUCE LATENCY?
As I said before, latency is a balance of quality and speed. We could just say, let’s not use FX… and then we wouldn’t have any latency right? Not quite because you’ll still have the input and output latency to deal with. So what can we do?
If you’re just starting out, get an audio interface.
This piece of gear is much better designed to get high-performance audio in and out of your computer. A small, affordable USB audio interface costs around $150 (or less if you buy used).
How much do you think the digital/analog converter built into your laptop costs to make? Most likely, just a few dollars, or even less. You get what you pay for.
But what kind of interface should you get? USB 2.0, USB 3.0, or Thunderbolt? What’s your budget? USB 2.0 interfaces are the most affordable, but the speed of USB 2.0 is limiting. You can still get decent low latency results with USB 2.0 interfaces if you have only about 4 inputs and 4 outputs. The cheapest is 2 channels in and 2 channels out.
But when you want to do multi-track recording, such as tracking drums or a full band, you need a heavy duty interface such as a thunderbolt interface. This is because the more channels that are recorded at the same time, the more data is being sent to your computer. Because of this, we don’t see any 16 channel USB 2.0 interfaces out there (since USB is slower) but 16 channels is totally doable with a Thunderbolt or USB 3.0 interface.
+ 2. The Right Settings: Monitoring, Buffer Size, Freeze Tracks
Monitoring Audio From Your Interface.
The simplest way to get zero latency (ideal for recording new audio: guitar, vocals, etc.) is to switch on monitoring mode on your interface. Usually there is a switch or knob on your interface for direct monitoring, though not all interfaces have this feature.
When in monitoring mode, audio is still sent to your DAW to record, but you can mute the output of your DAW and still hear the direct output of your interface. The audio that you hear bypasses the computer.
Set Your Buffer.
When you’re producing a song with a ton of MIDI and audio tracks, plus a ton of FX or plugins, your computer may struggle to keep up, and you may hear dropouts, pops, and clicks. Increasing your buffer to a higher setting will work your computer less, but will increase your latency. To do this, go to Ableton Live’s Preferences and click on the Audio tab.
(For PC Users) If you’re an on-the-go producer with a Windows laptop and don’t want to lug around an interface, there is a sound driver called ASIO4ALL which can give you better latency with the built-in sound card on your laptop.
In Live’s preferences, pick ASIO as the driver type, then ASIO4ALL v2 as the Audio Device.
But you’ll get better results with an audio interface since it’s designed to do what we’re wanting. (i.e. a cordless drill vs. a screwdriver)
If you’re on a PC using ASIO4ALL or the ASIO driver for your interface, change your buffer size by clicking Hardware Setup.
Once the ASIO4ALL panel is open, move the buffer slider to the right to increase buffer size and to the left to decrease.
ASIO4ALL Panel to change Buffer Size (typically used without an interface), 512 is default
ASIO Panel to change Buffer Size (when using drivers from an audio interface)
(For MAC Users)
To change your buffer, simply go to your audio preferences and change your buffer size from the pull down menu of buffer size options.
You can reduce latency by decreasing your buffer size, but that will work your computer harder. To reduce the load on your computer, you can freeze or flatten audio tracks that take a significant amount of CPU percentage.
Remember, muting a track will not disable it, but you can select all the devices on a track, group them into an audio effect rack, then disable that rack.
In a live performance scenario, it’s a balance of MIDI instruments and flattened backing tracks. The more tracks you freeze down to plain audio, the less workload your computer has, and thus the lower you can set your buffer. However, you still want some MIDI instruments with quality sound to play live. It’s a balancing act.
+ 3. Speed, What type of Computer should I buy?
You’re spending time thinking about, “Can my computer do this?” instead of thinking, “Does this sound go with this other sound?”
If you want low latency, get a fast computer with a good processor. When it comes to audio latency, the main thing we’re worried about in a computer is the processor.
You can have a really nice interface, but USB 2.0, USB 3.0, or Thunderbolt is really just the speed of the pipe to your computer. The speed in which your interface can convert analog to digital and back is still dependent on the speed of your computer’s processor.
IF YOU’RE TRYING TO PRODUCE FULL SONGS ON A MACBOOK AIR, YOU WILL HAVE PROBLEMS. It’s called an AIR for a reason. It’s lightweight. It’s designed for LIGHT workloads, NOT THE HEAVY LIFTING that music production demands.
I also work in IT, and my honest opinion from experience in the field is that most PC laptops with AMD processors are mediocre at best. An Intel i7 is what you want. You get what you pay for, and that’s especially true with laptops. The “Best Buy special” might not be such a great deal after all.
Finding quality laptops for music production can be difficult. I’ve found that portability AND power are hard to come by. It also seems like the thinner they get, the less powerful they get. Keep in mind, a tablet PC is convenient on the go, but not a good tool for the studio. AMD Ryzen processors in custom Desktops are nice, however AMD processors in laptops are just not very good. Again, it goes back to “you get what you pay for.”
+ 4. Speed, What Parts should my Computer have?
Computer processors have 2 main properties that affect performance: number of cores and clock speed.
Clock speed is how many processing cycles your computer can perform per second, measured in GHz (billions of cycles per second). Clock speed affects latency more than the number of cores, because it measures how fast your computer can process things.
However, when a computer processes a set of instructions (like playing a chord in the synth plugin Serum), one core will process how serum makes sound and do the number crunching, while another core will be crunching numbers to make your operating system do things like display graphics.
When all cores are busy processing things, it takes a certain amount of time for the next core to become available to process a new set of instructions, therefore creating latency. If possible, get a computer that has a higher number of cores and a high clock speed.
Usually a good laptop, Mac or PC, will have an Intel i7 processor with at least 4 cores and close to 3 GHz of clock speed.
You can get higher clock speed on desktops. However, higher clock speed requires more power, which is hard to do when running on battery. My desktop is a 4 core Intel i7 at 4GHz. Most people aren’t asking for a high performance laptop. They just want something that is lightweight, decent battery life, and is fast enough to surf the web and do lightweight work. But we as music producers aren’t most people. We need something powerful that can deliver. What about AMD processors? Their Ryzen line of processors are pretty good. Most other AMD processors aren’t that great.
Hard Drives vs Solid State Storage
Another variable that doesn’t have much of an impact on real-time latency, but certainly affects the overall speed of your computer (booting up, performing tasks, general responsiveness), is having a Solid State Drive, also called flash storage. A hard drive is the long-term memory of your computer; it’s where all your files are stored. Spinning-disk hard drives were the standard for many years, and new laptops may still come with them because they boast that they can hold more, but they aren’t as fast.
Solid state drives are at least 4 times as fast as spinning disk drives, and they’re more reliable because they have no moving parts. If you want a fast computer, consider reinstalling your operating system with a solid state drive. Preferably a 500GB SSD or higher, so you have enough storage space.
How much RAM?
What about RAM? RAM doesn’t affect audio latency much. It’s the short term memory of a computer. If you put more RAM in a computer, it just means you have the potential to run more programs (or plugins) simultaneously. The cheap computers have 4GB of RAM or less. 4GB is enough to run your computer’s operating system and browse the web. That’s about it.
8GB of RAM is recommended for Ableton Live and is enough in most cases.
If you’re using a significant amount of 3rd party plugins or using multiple instances of a plugin that uses a significant amount of RAM such as the plugin Omnisphere, you might want to consider getting 16GB of RAM. 32GB of RAM is definitely overkill.
Computer Parts Summary
Ultimately, computer speed comes down to this: on a slower computer, the more layers your song has, the sooner you’ll have to increase your audio buffer size to avoid pops and clicks.
On my desktop I can run at least 20 instances of Omnisphere simultaneously before I start noticing dropouts. Under my normal music production workflow, I don’t even worry about the CPU workload of my desktop. I write my songs, duplicate the layers I need, and mute layers as needed. As I go, if I have an old idea I’m no longer using, I’ll save the project as a new version so I can delete the old layer; if I ever want it back, I can go to the older version and get it. But at that point I’m deleting old layers because I want to clean up how my project looks, not out of necessity because the CPU meter in the corner of Ableton is dying.
On my laptop, I could freeze the tracks that take the most CPU so they’re just audio tracks, freeing up CPU for something else, but that can kill my workflow. I should be able to freely produce and lay down as many ideas as I want without freezing them at a point in time. Once they’re frozen, you’re somewhat committed to the idea. Sure, you can unfreeze them and re-edit, but it gets in the way of the workflow. You end up thinking, “Can my computer do this?” instead of, “Does this sound go with this other sound?”
Music production should be limitless creativity. The power of your computer should not limit how or what you create. If it is, buy or build yourself a nicer computer. If you want to truly create great ideas, give yourself the tools to do so.
+ 5. Picking the Buffer Size Explained, and Delay Compensation
With all this information out of the way, now what? If audio is being processed by a computer, it will always have a certain amount of latency. Even with an extremely fast computer and the greatest sound card/interface, you’ll still have at least a small amount of latency.
So what is an acceptable latency delay time? Drums have a very fast attack. (Attack is the initial uptick in volume intensity of a sound before it decays.) Because of that fast attack, any latency is very noticeable when playing drums. So an acceptable amount of latency lets you play a beat on a drum pad, or with mic’ed acoustic drums, accurately enough that the computer’s delay has no effect on your playing.
Again, acceptable latency is when the delay time of your computer has NO effect on your instrument performance.
Another example: I should be able to play a MIDI keyboard with a piano instrument pulled up in Ableton in the exact same manner I play a real piano. The accuracy of my note timing shouldn’t be affected. Now, my timing isn’t that great in general, but I shouldn’t play any worse on the computer.
Another common latency problem: you play along to a click consistently in time, but what gets recorded is late every time. The good news is that whether it’s recorded MIDI or audio, you can always select what you’ve recorded and drag it left (backwards in time) to line it up with the click.
You can calibrate Ableton to correct for this recording delay. In Live’s Audio preferences, there’s a Driver Error Compensation value you can change.
However, it’s a common mistake to change this number to a negative value and think that is going to affect the real time latency.
This parameter has NO effect at all on the real-time audio delay of your computer. All it does is the math to determine where on the grid to plot what you’ve recorded, relative to what’s already there. If you make this setting negative, it will plot your recorded MIDI or audio earlier in time to compensate for that delay.
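A minimal sketch of what this compensation does, in my own simplified model (an illustration, not Ableton’s actual code): it only shifts where recorded events land on the grid, never the real-time delay you hear.

```python
# Illustration of recording-offset compensation: recorded event times are
# shifted on the timeline; a negative value plots them earlier.

def compensate(recorded_times_ms, compensation_ms):
    """Return recorded event times shifted by the compensation value."""
    return [t + compensation_ms for t in recorded_times_ms]

# Notes played exactly on the beat but recorded ~12 ms late:
recorded = [12.0, 512.0, 1012.0]
print(compensate(recorded, -12.0))  # [0.0, 500.0, 1000.0]
```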
So what are some specific numbers to shoot for as far as acceptable real time latency? It depends on what you’re doing.
When you’re mixing a track or drawing notes in a piano roll, a longer amount of latency time is acceptable. All you need is what you hear through your speakers or headphones to resemble what you see on your screen. If you turn a knob by clicking on it, you should be able to recognize the change it made to the audio in a timely fashion. You would need to hear the real time changes of audio at the speed in which you’re turning that knob. That way you know when you’ve turned that knob’s parameter “just right”. In this phase of music production, a larger amount of latency is acceptable.
For me, around 50 ms (or even more) is totally acceptable. That’s roughly the delay of the largest buffer size, 2048 samples, meaning my computer buffers 2,048 samples of audio before it plays a sound (about 46 ms at 44.1 kHz).
When recording audio or playing something live, we need much less latency because we need to hear the action of ourselves playing that instrument. So for me, I’ve found that 12-15 milliseconds of delay is acceptable. The delay doesn’t affect my playing much. Obviously even lower latency is preferred, if possible.
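Putting those two targets together, a small sketch (my own helper, using the thresholds from this article) can pick the largest power-of-two buffer that stays under a given latency budget:

```python
# Pick a power-of-two buffer for a target latency budget.
# Thresholds from this article: ~15 ms to record, ~50 ms to mix.

def largest_buffer_under(target_ms: float, sample_rate: int) -> int:
    """Largest power-of-two buffer whose latency stays under target_ms."""
    buf = 32
    while (buf * 2) / sample_rate * 1000 <= target_ms:
        buf *= 2
    return buf

print(largest_buffer_under(15.0, 44100))  # 512  (~11.6 ms) for recording
print(largest_buffer_under(50.0, 44100))  # 2048 (~46.4 ms) for mixing
```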
+ 6. Plugin Latency
Placing effects on a track, especially 3rd party plugins can create latency. In Ableton, you can hover over the title of the device box that represents a plugin and the latency in milliseconds of that specific device will be shown in the lower left corner of the screen.
You can also group all the devices on a specific track into a single device rack and hover over that rack to see the total latency for the track. I’ve usually found that any plugin with a dry/wet knob shows 0 ms of latency, because some of the dry, unprocessed signal is still coming through.
By default, Ableton also has a special setting called plugin delay compensation. This means that if a single audio track has a bunch of latency due to the FX on it, all tracks are delayed to match, so that certain tracks won’t arrive at the speakers at different times than others.
However if you want delayed arrival, you can click the D button next to your I/O settings on the bottom right side to reveal a setting where you can delay the amount of time before specific tracks should play. This setting lets you have a negative value, but has no effect on real time latency, only on playback of pre-recorded audio.
Don’t worry, you’re not going to get the combined latency of all your tracks (track 1, plus track 2, plus track 3). That’s because all of these processes are done in parallel, aka simultaneously. This parallelism is also why having a higher processor core count is important.
The latency time listed in Ableton’s audio preferences is a bit confusing, because it does not account for plugin latency. It only shows the input and output latency of your sound card/interface.
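A toy sketch of how the compensation plays out (my simplification, with made-up track numbers): the project’s plugin latency is the worst chain, not the sum, and every other track gets padded to match so everything stays aligned.

```python
# Plugin delay compensation as a max, not a sum (illustrative numbers).

track_plugin_latency_ms = {"drums": 0.0, "vocals": 3.5, "master bus": 60.0}

worst = max(track_plugin_latency_ms.values())
padding = {name: worst - ms for name, ms in track_plugin_latency_ms.items()}

print(worst)    # 60.0 -- everything is delayed to the slowest chain
print(padding)  # {'drums': 60.0, 'vocals': 56.5, 'master bus': 0.0}
```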
When dealing with plugin latency, I’ve found that processing on the master affects latency the most. For example, I tried out iZotope Ozone. While it’s a great “one stop shop” for mastering, it adds 200-300+ milliseconds of latency, which is unacceptable if I’m also trying to perform an element live in the same project.
Personally, I’ve had the most success with the low latency plugins made by the company Waves. Their plugins have latency in the single digit milliseconds.
+ 7. Analog gear and Offloading Plugins
You could just say screw it, I want zero latency. I’m going to avoid computers entirely. I don’t want any processing time. I’m going 100% analog. You can totally do that. Guitar pedals are great tools to enhance sound and have no latency because it’s still in the analog realm.
There are even digital pedals that process sound and have no noticeable latency (less than a millisecond). But one thing that you lose when in the outboard gear world is the editability of it all. With digital FX in a DAW, you can automate the values to gradually swell or abruptly end all by drawing lines for the parameters to move to over time. Sure you can record yourself manually turning the knob on your reverb pedal, but it’s a fixed movement.
There is also a hybrid solution with analog hardware emulation plugins made by UAD. I have a Thunderbolt UAD apollo audio interface. Yes it has a ton of inputs and outputs, but it also has audio processors in the hardware. These plugins made by UAD are designed so that the number crunching on how they affect your sound is done on the processors built into UAD hardware interfaces, not on your computer processor.
You can run these analog-emulation plugins inside the UAD console, which is just a visual mixer for the audio interface, and route the sound back out before it even enters Ableton, all with roughly 1 ms of delay. Yes, there is what’s called the DSP limit: a cap on the number of UAD plugins your device can process at a time. It depends on how many UAD plugins you’re simultaneously running (certain plugins take more processing power than others) and on the number of processors your UAD device has (mine has 4).
This concept is like having a mini DAW on a mini computer in front of your DAW. The human ear can’t even recognize the delay. There are a few limitations, though: you can change parameters on these plugins in real time, but you can’t easily automate parameters inside the UAD console.
One more thing to know: Ableton always records the audio on a track pre-FX, meaning before any FX inside Ableton. In this DAW-before-your-DAW scenario, Ableton simply records the audio arriving at its input, then processes that track’s FX in Ableton. Say I have a clean guitar input, put distortion on it in the UAD console, record it in Ableton, then put reverb on it in Ableton: I can change the amount of reverb later, but the amount of distortion is fixed.
It’s usually best to non-destructively record so you have full control of fx parameters even after you’ve recorded. For that reason, I run all my UAD plugins inside of Ableton and not in the UAD console. That way I can automate them and have the freedom after recording.
When running UAD plugins in Ableton, the audio processing is still done on your UAD device, so your computer isn’t taxed. But because the audio now passes through Ableton before going out to your speakers, I lose the ultra-low-latency capability of the UAD plugins. With the size of my Ableton projects, I usually keep my buffer at 128 samples, which gives me about 12 ms of latency. It’s not ultra-low, but 12 ms of total latency is very doable for my workflow.
+ 8. What is Sample Rate, and How Can it Affect Latency?
Also, for those who like to tinker, it may sound counterintuitive, but you can get less latency by increasing your sample rate in Ableton’s audio preferences. The default sample rate in most DAWs is 44,100 Hz, meaning your computer records a snippet of audio 44,100 times every second. This is the standard, and it’s also the lowest common sample rate.
The reason for the 44,100 number is that the highest frequency or pitch the human ear can hear is about 20,000 Hz (think the upper part of a cymbal crash), a sound that oscillates, with peaks and dips, 20,000 times a second. To get an accurate reading of that high frequency, you need to capture or sample the audio at at least twice the rate of the original sound. There is some overhead for high-frequency filtering in analog-to-digital converters, but 44.1 kHz is a standard.
But so are 48 kHz, 88.2 kHz, 96 kHz, and even 192 kHz. So why record at such high sample rates, when the highest thing we can even hear is 20 kHz? Because more information is recorded per second. You can significantly slow down a piece of audio and not notice much quality degradation.
When you slow down audio, fewer samples per second are playing than were originally recorded. And because higher sample rates record more samples per second, the buffer (bucket of samples) you set in Ableton’s audio preferences, say 512 samples, fills up faster. Therefore you experience less latency.
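A quick sketch of that effect, reusing the latency formula from earlier: the same 512-sample buffer gets shorter, in milliseconds, as the sample rate goes up.

```python
# Same buffer, higher sample rate -> the buffer fills faster -> less
# latency (at the cost of more CPU work per second).

def latency_ms(buffer_size: int, sample_rate: int) -> float:
    return buffer_size / sample_rate * 1000

for rate in (44100, 48000, 88200, 96000):
    print(rate, round(latency_ms(512, rate), 1))
# 44100 11.6
# 48000 10.7
# 88200 5.8
# 96000 5.3
```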
Another advantage to a higher sample rate is, since there is more information, plugins that generate sound can play in much higher resolution. Therefore certain plugins and their automated parameters may sound fuller and much less “steppy.”
However, the downside is that you’re asking your computer to do more work faster. Because of this, in my own experience, I could have fewer MIDI or audio tracks with processing playing simultaneously before I would start hearing dropouts, pops, and clicks. My CPU meter in Ableton would spike far more often, and I’d have to start freezing tracks to free up CPU again.
For this reason, after enough experimenting, I found it best to keep my sample rate at the default 44,100. I don’t drastically slow down much of the audio I record so I’m not really missing anything I would gain by having higher quality recordings from higher sample rates. But go try it yourself!
+ 9. Conclusion
- Latency is the time it takes between when you play something, your computer processes it, and then sound actually comes out of your speakers.
- Latency is a balance; a trade off for quality vs speed.
- There are three main legs to round-trip latency for an audio track: input latency to convert the signal from analog to digital, processing time in audio effects on that sound, and output latency to convert the digital signal back to analog.
- MIDI tracks only have two legs of latency: processing time and output latency.
- Having the right tools such as an audio interface is important, but having a decent computer is even more important.
- You can adjust settings such as buffer size to reduce the workload of your computer, but that is a trade off on the real time responsiveness of your computer.
- You can freeze or flatten tracks or layers in your song to limit the workload of your computer so that you can have a shorter buffer. But in general, it’s more freeing to have a faster computer that doesn’t require you to freeze and flatten tracks all the time.
- Again, to get the best results as a music producer, the power of your computer should not limit how or what you create. If it is, then buy yourself a nicer computer and a better audio card/interface. If you want to easily create great ideas, give yourself the right tools to do so.
Now you have a decent amount of knowledge on the tools and settings available, and how to tweak them to your advantage.