Audio Engineering: Mix & Master With Your Ears

Here’s my advice on mastering, which I learned the “old school” way, before fancy-schmancy plugins, by following a few guiding rules:


Audio should sound natural and balanced.  If you’ve compressed or EQ’d voices to the point where they no longer sound like the people you’ve recorded, then you went off the rails somewhere.  Go back to the beginning and start again; your audio will be better for it, and you’ll be a more adept engineer for the experience.


EQ can help mastering by balancing different voices (e.g., deep bass male voices vs. high female voices, studio vs. mid-range-heavy telephone voices).  Even if you electronically level voice tracks the same (e.g., normalizing to the same RMS dB value), you’ll still have to deal with the perceived loudness differences described by the Fletcher-Munson equal-loudness curves.  So, balancing out the tracks with EQ as much as you can for each voice helps in the mastering stage when you are trying to find an overall volume balance among the tracks.  For example, cut some bass in male voices and cut some high end from female voices to get a better tonal balance.  Accentuate the midrange on studio tracks and bring down the midrange on telephone or Skype voices.  But remember, trust your ears.
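The RMS-leveling step mentioned above can be sketched in a few lines of NumPy.  This is a toy illustration, not a production tool: the sine-wave “voices,” the function names, and the -20dB target are my own choices.

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal in dBFS (full scale = 1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def normalize_rms(x, target_db=-20.0):
    """Apply a single gain so the track's RMS lands on target_db."""
    gain_db = target_db - rms_db(x)
    return x * 10 ** (gain_db / 20)

# Two stand-in tracks: a low tone for a deep voice, a 1 kHz tone for a bright one.
sr = 44100
t = np.arange(sr) / sr
low = 0.3 * np.sin(2 * np.pi * 120 * t)
high = 0.1 * np.sin(2 * np.pi * 1000 * t)

low_n = normalize_rms(low)
high_n = normalize_rms(high)
# Both now measure -20 dB RMS, yet the 1 kHz tone will still SOUND louder:
# the ear is more sensitive in the midrange (Fletcher-Munson), which is why
# EQ, not just level matching, is needed.
```

The point of the sketch is the last comment: identical meter readings do not mean identical perceived loudness.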


In the mixing stage, knock down transient spikes and compress each track to help get loudness parity.  (Here’s where you’ll realize how well or how poorly you paid attention to track levels while recording.)  After compression, it helps to normalize each track to 0dB so you can achieve baseline loudness parity across the tracks.  You can apply additional compression as needed to the quieter tracks to reach parity.  Remember, trust your ears.
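A minimal sketch of the knock-down-and-normalize idea, assuming NumPy and a deliberately naive sample-wise compressor (real compressors add attack and release times; the threshold, ratio, and fake track here are illustrative only):

```python
import numpy as np

def peak_normalize(x, target_db=0.0):
    """Scale so the loudest sample peaks at target_db dBFS."""
    peak = np.max(np.abs(x))
    return x * (10 ** (target_db / 20) / peak)

def compress(x, threshold_db=-12.0, ratio=4.0):
    """Naive hard-knee compressor applied per sample (no attack/release),
    just to show how gain reduction tames transient spikes."""
    thresh = 10 ** (threshold_db / 20)
    mag = np.abs(x)
    over = mag > thresh
    out = x.copy()
    # Above the threshold, shrink the overshoot by the ratio.
    out[over] = np.sign(x[over]) * thresh * (mag[over] / thresh) ** (1 / ratio)
    return out

track = np.array([0.05, 0.9, 0.1, -0.7, 0.02])  # spiky fake track
tamed = compress(track)            # transient spikes knocked down
leveled = peak_normalize(tamed)    # then brought back up to 0 dBFS
```

Compress first, then normalize: the order matters, because normalizing a spiky track only pushes the spike to 0dB while the speech stays quiet.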

NOTE:  When compressing during the mastering stage, be mindful that the levels of telephone or internet-recorded tracks get pushed down, which diminishes their perceived loudness.  Because they are mostly mid-range in frequency, it doesn’t take much to make them sound quieter when their overall volume is reduced.  (This is why it is good to EQ such tracks to add more bass and top end; it really helps in the mastering stage.)  You hear this effect a lot on radio stations, for example, that compress their outgoing or master signal.  The fix is to bias the volume upwards on telephone tracks so they remain relatively even with, say, a studio track.


Now that your tracks are more closely balanced, play them back while watching each level on your multi-track mixer.  Every DAW has one.  Here’s where you can throw out the idea that each track has to sit at the same fader level or RMS level across the board, whether that’s 0dB or -3dB (as some people like to use for headroom).  During playback, adjust your tracks so they peak close to the same upper limit, perhaps no more than -6dB.  If you’ve used compression to level off your tracks, this should be relatively easy because there won’t be any wild dynamic-range spikes.  If your meter shows an RMS zone (often a different color band within the display), you can try to balance the RMS zones across the tracks using the track faders so that the center of the RMS band is roughly equal from track to track.  Don’t worry that your track faders will end up in different positions.  You’re trying to balance what you hear against what you read on the meters, not get hung up on some arbitrary number on a scale.  Remember, trust your ears.
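The fader-balancing step is simple arithmetic: measure each track’s peak in dBFS and compute the fader move that brings it to a shared ceiling.  A NumPy sketch with toy tracks (the -6dB ceiling comes from the advice above; the track data and function names are mine):

```python
import numpy as np

def peak_db(x):
    """Peak level in dBFS (full scale = 1.0)."""
    return 20 * np.log10(np.max(np.abs(x)))

def fader_adjust(x, ceiling_db=-6.0):
    """Fader move (in dB) so this track peaks at the shared ceiling."""
    return ceiling_db - peak_db(x)

host = np.array([0.4, -0.6, 0.3])    # already peaks fairly hot
guest = np.array([0.1, 0.25, -0.2])  # peaks low

host_adj = fader_adjust(host)    # a small cut
guest_adj = fader_adjust(guest)  # a larger boost
# The two faders end up in different positions -- and that's fine,
# because both tracks now peak at the same -6 dBFS ceiling.
```

This is exactly the “different fader positions” point: the number on the fader is arbitrary; the matched peak ceiling is what you’re after.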

NOTE: Music engineers often talk about tracks or sounds not sitting well in a mix.  There are a lot of technical reasons for this, and the music folks masterfully understand how frequencies, dynamics, tonal properties, and volume levels work together to enhance or cancel out sounds.  One tried-and-true way to get a well-balanced mix is for the engineer to adjust tracks on a mixer (pull up the volume here, lower it there) so that the final mix sounds balanced and the way it should.  Even with just two voice tracks, podcast engineers can benefit from using a mixer to adjust their tracks as described above.  Add in sound design (i.e., music scoring, SFX), and a mixer is even more useful for podcasters.  It should be noted that using the envelope tool to adjust volume within tracks is an alternative way to mix your sounds.


If you’ve done all this voice and track loudness balancing during the mixing stage, your mastering will be easy and effective.  When you mix down your tracks into a mono or stereo master track, you’ll quickly see in the waveform how well you’ve balanced your tracks.  Of course, there will always be differences among tracks, but you’re looking for a balance that is natural and not distracting (i.e., not one track loud and another soft).  So, at this stage it’s helpful to apply compression (if necessary) just to give some added uniformity to the overall loudness.  It doesn’t take much compression at this point to balance a mixed track.  If you have to apply a lot of compression during mastering, you’re not mixing effectively.
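Under the hood, a mixdown is just summing gain-adjusted tracks, and the sum is also where peaks can push past 0dB even when no single track clips.  A NumPy sketch with toy tracks and arbitrary fader gains of my choosing:

```python
import numpy as np

def mixdown(tracks, gains_db):
    """Sum tracks into one mono master, each scaled by its fader gain in dB."""
    return sum(t * 10 ** (g / 20) for t, g in zip(tracks, gains_db))

a = np.array([0.3, -0.2, 0.25])
b = np.array([0.1, 0.3, -0.15])

master = mixdown([a, b], [0.0, -3.0])   # track b pulled down 3 dB
clipped = np.any(np.abs(master) > 1.0)  # summed peaks can exceed 0 dBFS
```

If `clipped` comes back true on your real mixdown, that is the cue to go back and rebalance the tracks rather than squash the master.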


Now that your audio is balanced, you can meaningfully apply the LUFS/LKFS perceived-loudness standards and let the chips fall where they may.  Sometimes it’s surprising how much these perceived-loudness tools recommend cutting or boosting the track to meet the targets: -19 LUFS (mono) or -16 LUFS (stereo).
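Applying a loudness target boils down to one gain change.  The sketch below assumes a loudness meter has already given you an integrated LUFS reading; a real measurement requires K-weighting and gating per ITU-R BS.1770, which is omitted here, and the -23.5 LUFS reading is invented for the example:

```python
import numpy as np

def gain_to_target(measured_lufs, target_lufs):
    """Linear gain factor that moves a track from its measured integrated
    loudness to the target.  Assumes the LUFS reading came from a real
    meter (K-weighting and gating are not implemented here)."""
    return 10 ** ((target_lufs - measured_lufs) / 20)

# A mono mix measured at -23.5 LUFS, aiming for the -19 LUFS mono target:
g = gain_to_target(-23.5, -19.0)   # a 4.5 dB boost
master = np.array([0.2, -0.35, 0.1]) * g
```

Since integrated loudness is a single number per track, hitting the target is one multiply; the hard part is the measurement, which you should leave to the meter.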


Be mindful of your monitor volume, whether it’s your desktop speakers or headphones.  There’s yet another standard for this that you can find on the internet, but a basic rule is to always use the same monitor volume setting; at least you’ll be consistent across your projects.  For example, I’ve calibrated my system to a monitor standard, and on my computer I know that “40” is the setting on the volume indicator.  On my hardware, I made a physical mark on the dial that indicates the standard setting.  Whenever I work on a project, I just make sure my computer or hardware volume is set to this mark.


Remember, mastering is all about perceived loudness and balance.  Sometimes an audio track, even after mastering for perceived loudness, will show indications of clipping.  Distortion from digital clipping is bad, when you can actually hear it.  Most DAWs have a clipping indicator, such as Audacity’s red markings on the track when the signal exceeds 0dB.  But not all such indications are audible distortion; they’re sometimes more like theoretical indications of a problem.  So, if you can’t hear any hot-signal distortion, don’t worry about it too much.  You want loud to be loud, right?  But if you can hear distortion, then you might want to tame or fix the audio if you can.
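Checking whether clipping indications are isolated blips or something worse is easy to do numerically.  A small NumPy sketch (the sample values are made up; a DAW’s red marker fires on the same condition):

```python
import numpy as np

def count_clipped(x, limit=1.0):
    """Count samples at or beyond full scale (0 dBFS)."""
    return int(np.sum(np.abs(x) >= limit))

audio = np.array([0.5, 1.0, -1.2, 0.9, 1.0])
n = count_clipped(audio)
# A few isolated full-scale samples may be inaudible "theoretical" clips;
# long runs of them are what you actually hear as distortion.
```

As the article says: if the counter fires but your ears hear nothing, don’t lose sleep over it.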


Good mastering really starts with good recording.  On most audio recorders, you will not readily hear hot signals or clipping in your headphones, unless it is a really big problem.  Also, during recording it is really difficult to balance track signals by ear, as people speak sometimes loudly, sometimes softly, sometimes all at once.  The best you can do while recording is to trust your track record-level indicators and aim for the best balance and parity possible while in the field.  (A good rule is to record on the lower side of 0dB, not the hotter side, with signal peaks between -12dB and -6dB.)  If your field track levels are relatively in the same ballpark, then things like dynamic range and ambient room noise will be similar and easier to process across tracks during mixing and mastering.  It’s always a bit of a train wreck when you have to bring up a track’s volume by 20dB just to match another.  By the way, always trust your ears.
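The -12dB to -6dB recording window can be checked programmatically as well.  A NumPy sketch (the function names and the toy take are mine; the window boundaries come from the rule above):

```python
import numpy as np

def peak_dbfs(x):
    """Peak level of a take in dBFS (full scale = 1.0)."""
    return 20 * np.log10(np.max(np.abs(x)))

def in_safe_zone(x, lo=-12.0, hi=-6.0):
    """True if the take peaks inside the suggested -12..-6 dBFS window."""
    return lo <= peak_dbfs(x) <= hi

take = np.array([0.05, 0.3, -0.28, 0.1])  # peaks around -10.5 dBFS
ok = in_safe_zone(take)
```

A take peaking near -10 dBFS leaves headroom for transients while staying well clear of the 20dB rescue boosts the article warns about.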


Can’t stress this enough.