One of the surest ways to elevate your video quality is to elevate the audio quality. That can mean upgrading your recording gear or setup, and it can mean cleaning up your sound in post-production. Here we're going to show you some super effective ways to do the latter using the two most important audio-editing tools: EQ & compression.
Before we start, a quick note: If you clicked on this article wondering what EQ and compression are, no problem — start with this primer, then come back here.
But if you already know how equalizers and compressors work, the next step is to start actually using them on your audio. It can feel super intimidating: why are there so many tools, and why do some of them do almost the same thing, just slightly differently? (I’m looking at you, dynamic equalizers.) How is a self-taught amateur supposed to know where to start?
There are plenty of YouTube tutorials out there, but they tend to be geared towards showing you someone else’s process instead of helping you develop your own. So we rounded up a few pros from the podcast world to give you tips on how to approach audio editing and make the most of the tools at your disposal, so that your project comes out sounding as polished and professional as it looks.
Choose Your Tools
Your first task is just to figure out what kinds of audio compressors and equalizers you want to work with. In case you skipped our primer above and you’re asking, "Wait, what does a compressor do again?": it’s the tool that compresses the dynamic range of your audio signal, making the quiet parts a little louder and the loud parts a little quieter. The equalizer, meanwhile, turns individual frequencies in your audio up or down, allowing you to highlight or remove specific sounds or tones.
Christian Koons, whose credits include work on The Dinner Party Download and Song Exploder, recommends starting out with an equalizer that includes a visual component. That way, you can see what frequencies are included in a particular noise — a person talking, a car idling outside — and start manipulating them from there.
“On the spectrum of sound you have the low end, all the way on the left, starting at around 20 Hertz, and then all the way at the right there’s 20,000 Hertz,” he explains. “With the EQ plugin I use, when you play sound through it, you can see where on the spectrum of sound things are happening, like a huge, low rumble happening below someone’s voice. I've found it super helpful with dealing with different complex audios, with weird noises here and there.”
Compressors are a bit simpler: there’s basically one kind, but many different settings to adjust within it. We’ll talk more about those settings below.
Christian Dueñas, an editor and producer at Maximum Fun, uses a multiband equalizer, which allows him to grab pre-set sections of frequency to push up or down. (Descript comes with a multiband compressor, which is like a multiband equalizer and a compressor in one.)
Dueñas likes a 20-band equalizer in part because it separates sound out into three basic categories: “The ones on the left side are gonna be your bass-y notes,” he explains. That’s where a truck rumbling outside or air conditioner noise might live. “The ones in the middle are voices, and then the ones up top are super high, usually, things that are beyond our voices.” Thinking of the bands this way means he can see more easily how to turn down that AC noise without removing the bottom notes from someone’s voice.
Though sometimes he wants to do that, too. “A lot of people are a little too close to their mics,” he says. “So a bass-y sound overwhelms your ear.” Dueñas recommends focusing on lower frequencies first — any background hums, a too-deep voice, even the vibration of something hitting the table your mic was sitting on. “A lot of times I've found that notching out like the bottom two or three bands of a twenty band EQ will help immensely with stuff like table hits or like popped P’s,” he says.
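Dueñas’s trick of notching out the bottom bands is, in essence, high-pass filtering. As a rough illustration of the idea (not what any particular multiband EQ plugin does internally), here’s a minimal second-order high-pass filter in Python using the widely cited Audio EQ Cookbook coefficients; the 120 Hz cutoff is an arbitrary choice for demonstration:

```python
import math

def highpass_biquad(samples, cutoff_hz, sample_rate, q=0.707):
    """Second-order (biquad) high-pass filter, Audio EQ Cookbook style.

    Attenuates content below cutoff_hz -- a rough stand-in for
    pulling down the bottom bands of a multiband EQ."""
    w0 = 2 * math.pi * cutoff_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0 = (1 + cos_w0) / 2
    b1 = -(1 + cos_w0)
    b2 = (1 + cos_w0) / 2
    a0 = 1 + alpha
    a1 = -2 * cos_w0
    a2 = 1 - alpha
    # Run the standard biquad difference equation, normalizing by a0.
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

Run a 50 Hz rumble through this with a 120 Hz cutoff and it loses roughly 15 dB, while a 1 kHz tone, squarely in voice territory, passes essentially untouched.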
That said, he also cautions that “Everybody's voice is different,” which means that “Not everybody's gonna need the same EQing. I’ve found that like a lot of men, or people with lower voices, you kind of wanna sweeten them a little bit [i.e., boost some of the high frequencies] to get them more sonorous.”
EQ First, Compression Last
Technically, you can apply EQ and compression in either order, but Brandon McFarland, an engineer and composer at Vox, prefers to use an equalizer for most of his audio editing and add compression only at the very end. “I'm kind of old school,” he says. “I was taught by older engineers to pretty much use compression as like a finalizing thing for vocals.” He typically applies a compressor lightly, and only to the louder end of the volume spectrum. “When a person's talking, I want the compressor to push down all the really loud peaks just a little bit, maybe like two dBs,” he explains.
To do that, he looks for the highest volume points (peaks) that the person’s voice hits when they’re recording. “I gauge where that peak is and then start to compress it there,” he says. And unless there’s a substantial quietness problem, he’ll just manually raise the volume on any too-soft words, so that he doesn’t have a compressor running on the quiet end of everything that’s been said, crunching it upwards. “You can't go wrong if you just go in and manually raise it for two seconds that it needs to be raised,” he says. “Instead of trying to dial in the threshold and ratio of a compressor that's gonna affect other words.”
Brandon typically sets the threshold of his compressor around -20 dB; his compression ratio is just under 2:1, which means that for every two decibels the sound goes over the threshold, it’ll be turned down by one. Together, threshold and ratio determine how much gain reduction he’ll have on the track: basically, the difference between the peaks of the unedited audio and the peaks of the compressed audio.
That ratio is a far cry from a limiter, which is a compressor set to such a high ratio that it basically shuts down anything above its threshold level. But as long as you set your audio levels appropriately when recording, you shouldn’t need a limiter; they’re more useful for the super-loud peaks that sometimes show up in music production.
Last but not least, Brandon sets attack and release times: how fast the compressor clamps down on the sound when it crosses the threshold, and then how long it takes to let go once the sound has quieted down again. “Attack and release for vocals when someone's talking should be relatively loose,” he says; a rapid-onset compressor can sound unnatural to the ear. He typically sets his attack to 50 milliseconds “unless someone’s a really fast talker,” with the release around a hundred milliseconds. “A hundred milliseconds, that's enough time for someone to take a breath, or take a natural pause, and then you don’t hear it,” he explains. “It kind of like…floats off.”
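Under the hood, attack and release are often implemented as a one-pole envelope follower that tracks the signal’s level with two different time constants. A minimal sketch, assuming that common design (real compressors vary in the details):

```python
import math

def envelope(samples, sample_rate, attack_s=0.050, release_s=0.100):
    """One-pole envelope follower with separate attack and release.

    The time constants control how quickly the tracked level rises
    toward a loud signal (attack) and eases back after it (release) --
    the same knobs Brandon sets to roughly 50 ms and 100 ms."""
    atk = math.exp(-1.0 / (attack_s * sample_rate))
    rel = math.exp(-1.0 / (release_s * sample_rate))
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        # Use the attack coefficient while the level is rising,
        # the release coefficient while it is falling.
        coef = atk if level > env else rel
        env = coef * env + (1.0 - coef) * level
        out.append(env)
    return env, out
```

Feed it a burst of full-scale signal followed by silence and you can watch the envelope ease up over tens of milliseconds and drift back down even more slowly, rather than snapping, which is what keeps the compression from sounding unnatural.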
Mixing audio can get pretty complicated pretty quickly. So don’t get too wrapped up in the super-technical, or in trying to turn yourself into a professional engineer overnight. Use the equalizer to turn down any unwanted non-vocal noises, and maybe to finesse someone’s s’s and p’s. Try Brandon’s technique for compressing just the top of the audio — especially because whatever platform you publish to will compress for you if your track is too loud.
And from there, experiment to find out what works for you! Many audio engineers are self-taught, and they all recommend developing your ear as your most important tool. After all, it’s your project — it’s gotta sound good to you before it sounds good to anyone else.