How To Mix Music: Make Your Recordings Sound Like A Hit Song

Written by
Jay LeBoeuf
10 min read

What makes professional recordings sound great? Often, the secret sauce is cooked up by an audio mixing engineer, who blends individual tracks into a cohesive whole that balances the full spectrum of sonic frequencies.


What Is Audio Mixing?

Audio mixing is the process of blending and balancing all of the individually recorded tracks that make up a song. A single song might include vocals, guitar, keyboards, bass, drums, saxophone, violin, and the footfalls of the muppets who live upstairs.

Each of these instruments gets recorded with its own dedicated microphone. During the mix process, an audio mixing engineer will balance the volume level of the various recorded tracks. They will also add effects like EQ and compression, and they will pan tracks left and right so the mix sounds good on stereo speakers. This process helps produce a mix that sounds fantastic over headphones or on your home stereo.

A Classic Audio Mixing Workflow

The first step in learning how to mix songs is studying the workflow that audio engineers use to transform rough tracks into final mixes. While different engineers swear by different processes, here is one step-by-step approach to follow as you learn how to mix audio.

  1. Launch your software. Nearly all of today’s audio mixing is done with a digital audio workstation (DAW) such as Pro Tools, Logic Pro X, Steinberg Cubase, Ableton Live, or FL Studio. Choosing one of these software programs is a matter of personal preference. While each of these tools has many overlapping capabilities, they also have niches where they excel. Pro Tools, for example, is great for recording live instruments, while Ableton and FL Studio are well-regarded for working with loops, and Logic is considered a jack-of-all-trades program (but it only works on Apple products).
  2. Prepare your session. Within your DAW, arrange all of the raw audio tracks you want to mix. Enable the features that let you adjust the volume and stereo panning of each track. Many engineers like to color-code various tracks to stay organized. For instance, they might assign rhythm section instruments (like drums and bass) the color red, while assigning harmonic instruments (like piano and guitar) the color green, and melody instruments (like lead vocals) the color blue. Pick whatever colors you like.
  3. Volume balance. Throughout your mixing process, you will want to keep individual tracks’ volume within an optimal range. This is known as gain staging. You want to make sure that no track is too loud, which will cause unwanted distortion. But that’s not all. You also want to make sure that no track is too soft. That’s because on a really quiet track, background noise (like electrical hiss) might be as loud as the music you were trying to record. When a mastering engineer or a home listener tries to turn up the overall volume, they are going to hear that unwanted noise (which is sometimes called a noise floor). By setting your levels loud enough in the early stages of mixing, you won’t end up accidentally amplifying the noise floor down the line. Many mixing engineers handle gain staging by using software plugins for their DAW, but you can also do it yourself by monitoring the gain knob on your recording interface and mixer. The final element of volume balance is muting tracks that aren’t currently playing. Most DAW software can be programmed to automatically mute and unmute tracks at specific moments during playback.
  4. Compression. Adding compression is one of the integral steps to mixing a song, and it’s actually related to the volume balancing you did in the prior step. Compression keeps volume levels within a fixed range by taking the loudest sounds and softening them. This actually tricks our ears into thinking that soft sounds are louder than they really are. As a rule of thumb, compression is nearly ubiquitous in pop music (from electronic music to hip-hop to country), but it is rarely used in classical music recordings, which tend to emphasize dramatic shifts in volume dynamics. All digital audio workstations have a built-in compressor, but you can also use numerous software plugins if you prefer the sounds that they provide.
  5. EQ. EQ stands for equalization, and it describes the process of isolating specific frequencies and making them louder or softer. Mixing engineers apply different EQ settings to different instruments as they mix a song. Some engineers divide instruments into multiple tracks (such as giving each piece of a drum kit its own track) to tweak EQ with greater precision. Individual EQ settings help your overall mix because the prominent frequencies in one instrument, like a bass guitar, might be quite different from the prominent frequencies in another instrument, like female vocals. Boosting or lowering certain frequencies is different from boosting or lowering overall volume. For instance, you can keep the middle and higher frequencies of a bass, which will help listeners hear precise notes, but you can cut the lowest frequencies, which might sound like someone moving furniture across a wood floor.
  6. Effects. You can add all sorts of effects to individual tracks. Occasionally you might add effects to an entire audio mix, such as during the final mastering stage, but most sound engineers apply different effect levels to different tracks. There are DAW settings and plugins that produce vibrato and chorus (where the pitch of instruments gets slightly altered), tremolo (where the volume pulses from high to low and back to high), phasing (which sounds a bit like a rotating speaker), flanging (which is supposed to sound like audio tape slowing down and speeding up but mostly sounds like a jet plane taking off), and ring modulation (which sounds like a bright ringing bell but a little goes a long way). You can also add distortion, shift the pitch of your sounds, and do all sorts of things that sound fantastic in moderation and frantic in excess.
  7. Panning and surround sound. Panning is where you move the sound of an instrument toward the right speaker or the left speaker. This makes it sound like the music is surrounding the listener, instead of all coming from the same place. If you have two guitars on a track, pan one slightly left and one slightly right. You can also pan keyboards and bass according to your personal taste. Drum kits and lead vocals often aren’t panned much at all—they usually sit in the center of the mix. When it comes to classical music, many audio mixing engineers pan instruments to match a typical symphony orchestra layout. Today’s digital mixing engineers may also have a chance to mix in surround sound, which positions sounds on all sides of a listener—front, back, left, right, up, and down. This comes through in full effect when the sound plays back on a surround sound system with multiple speakers. Yet even when the listener uses a set of stereo headphones, a good mix engineer can create the illusion of three-dimensional sound.
  8. Volume automation. You don’t have to keep individual track levels at the same volume throughout a song. You might want to boost the guitar player on their big solo or lower the French horn player on that slightly squeaky note. You can do this in a DAW by programming automated volume fades at various points in the song. In the early stages of mixing (called a rough mix), you might not bother with volume automation, but as you get near the finish line, work it in. Remember that most DAW software can also automate muting so that individual tracks turn on and off at preset times.
  9. Space. You can create the impression of “space” in your mix by using three types of mixing tools: reverb, echo, and delay. Reverb creates the illusion of sound recorded in a room that has a natural echo. Nearly all rooms have at least some reverberation, whether you are in a small studio or a large cathedral. Echo and delay are related effects that replicate recorded sounds more precisely than reverb does. The band U2 and their guitarist The Edge are known for playing with delay and echo, like in the song “Where the Streets Have No Name.” When used in moderation, this kind of effect can work well on all sorts of instruments. Delay, echo, and reverb tend to be the last effects you put on an audio mix.
  10. Final checks. In many professional scenarios, you will end up passing off your final mix to a mastering engineer, who will add some finishing touches that mostly relate to volume and compression. Or you can use an AI-assisted service like LANDR.com to automate the process. Before you do that, double-check to make sure your gain staging is solid throughout (with nothing distorting and no hiss from your noise floor). Ensure that you’ve panned all the tracks you intended to (this is an easy step to forget) and that your selected effects are tastefully applied.
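If you're curious what's happening under the hood when a DAW compresses a track (step 4 above), the core idea fits in a few lines of Python. This is a simplified sketch, not how any particular DAW or plugin implements it: the threshold and ratio values are arbitrary examples, and real compressors also smooth their gain changes with attack and release times, which this sketch omits.

```python
import math

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Very simplified downward compressor.

    Samples are floats in [-1.0, 1.0]. Any sample whose level is above
    the threshold gets its excess reduced by the ratio; quieter samples
    pass through unchanged.
    """
    out = []
    for s in samples:
        if s == 0.0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))  # sample level in dBFS
        if level_db > threshold_db:
            # Only the part above the threshold is divided by the ratio.
            excess = level_db - threshold_db
            new_db = threshold_db + excess / ratio
            out.append(math.copysign(10 ** (new_db / 20), s))
        else:
            out.append(s)
    return out

# A loud peak (0.9) gets pulled down; a quiet sample (0.05) passes through.
loud, quiet = compress([0.9, 0.05])
```

Notice that the loud sample is softened while the quiet one is untouched, which is exactly why a compressed track can then be turned up as a whole: the quiet parts end up sounding louder relative to the peaks.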

8 Key Tips for Mixing at Home

After reading this article, you’ll probably want to dive right into mixing your own music. (Or maybe you’ll want to impulsively buy a bunch of audio gear, but then when that gear arrives you’ll want to start mixing.) Keep the following tips in mind whenever you mix a song.

  1. Pan your instruments. Create a full sonic landscape by panning specific instruments to the left and right. In most cases, subtle pans sound better than hard pans (where a sound is sent entirely to one side of the mix).
  2. Name your tracks and color code them. If you have a lot of tracks in your mix, you need to stay organized. Start your mixing session with this crucial step so you never get confused about which track is which.
  3. Compression can be great; too much is bad. Compression, particularly sidechain compression, can make a track sound loud and hard to ignore. But a lot of music benefits from volume dynamics where one part of a song is quiet and another is loud. Compression can dull that effect, so don’t overdo it.
  4. Try high-pass filters and low-pass filters. These filters are types of EQ that cut a wide range of frequencies. For instance, if you set a high-pass filter at 30 Hz, all sounds at a lower frequency than 30 Hz will be cut out and everything higher than that frequency will “pass through.” This helps you cut rumbling sounds from a mix. Likewise, a low-pass filter cuts ultra-high frequencies and can remove hiss from a mix.
  5. Use reverb the right way. A little reverb creates the image of a room where music is recorded, whether that’s a small studio or a large hall. Too much reverb, on the other hand, will muddy your sound and make it appear distant.
  6. Trim off the noisy ends of tracks. A lot of audio recordings start and end with a bit of noise or hiss when the musician isn’t playing. Cut those stray sections so they don’t add unwanted noise to your final mix.
  7. Emphasize the full range of sonic frequencies. As you mix a song, make sure you’re accenting the full sonic spectrum. If everything comes from the same register, individual instruments won’t pop out of the mix, and the final product can sound a bit bland. Counteract this by selectively boosting frequencies in the high, middle, and low ends of the audio spectrum.
  8. Test your mix on multiple sets of speakers. As much as the audio mixing community would like to imagine that everyone listens to our mixes on open-backed headphones or high-end stereo systems, the reality is a bit less glamorous. A lot of people listen to music in their cars, on low-grade Bluetooth headphones, or from their tinny laptop speakers. With this in mind, you should check your mixes on a wide array of speakers to make sure they sound acceptable in different formats.
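For the curious, the subtle-versus-hard panning in tip 1 comes down to how much of a signal you send to each speaker. One common approach is the constant-power pan law, sketched below in Python. This is an illustration of the general idea only; DAWs differ in which pan law they use, and the function name here is our own.

```python
import math

def pan(sample, position):
    """Constant-power pan.

    position: -1.0 = hard left, 0.0 = center, 1.0 = hard right.
    Uses the common sin/cos pan law so perceived loudness stays roughly
    constant as a sound moves across the stereo field.
    """
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# Center: equal level in both speakers (about -3 dB each).
l, r = pan(1.0, 0.0)
```

With a hard pan (`position=-1.0` or `1.0`), the entire signal goes to one speaker and the other gets nothing, which is why hard pans sound so extreme compared to subtle ones.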
Written by
Jay LeBoeuf

Jay leads Business Development at Descript. He's been a founder, a music technology leader, an engineer, a Stanford University lecturer, and a drummer.
