
2024 | Book

Record, Mix and Master

A Beginner’s Guide to Audio Production


About this book

This textbook is a practical guide to achieving professional-level audio productions using digital audio workstations. It contains 27 chapters divided into three sections, with specially devised diagrams and audio examples throughout. Aimed at students of all levels of experience and written in an easy-to-understand way, this book simplifies complex jargon, widening its appeal to non-academic creatives, and is designed to accelerate the learning of professional audio processes and tools (software and hardware). The reader can work through the book from beginning to end or dip into a relevant section whenever required, enabling it to serve as both a step-by-step guide and an ongoing reference manual. The book is also a useful aid for lecturers and teachers of audio production, recording, mixing and mastering engineering.

Table of Contents

Frontmatter

Record

Frontmatter
1. An Introduction to How Sound Works
Abstract
Sound is the result of the energy that is created when air molecules are vibrated. When a speaker cone moves forwards and backwards, a person speaks, or a guitar string vibrates, it causes air molecules to move in sympathy, which in turn causes our eardrums to vibrate. The vibrations in our eardrums are transmitted through the middle ear bones to the inner ear, where they are converted into electrical signals that are interpreted by our brains as the sounds we hear.
Simon Duggal
2. Speakers
Abstract
Studio speakers, also referred to as monitors (not to be confused with your computer screen), are designed specifically for recording studios, film studios, home and project studios and other critical listening environments where accurately reproduced sound is crucial. They are designed to give an honest representation of what is going on in your audio material.
Simon Duggal
3. Digital Audio Workstation
Abstract
A digital audio workstation or DAW is software designed to enable multitrack audio and MIDI recording, editing and mixing on your computer.
There are several DAWs available, and although they may operate slightly differently, they all have the same core functions and features. Whichever DAW you use, the recording, editing and mixing processes will be pretty much the same.
Simon Duggal
4. Digital
Abstract
When an analog audio signal is recorded into a digital audio workstation, it has to be converted into digital bits—a binary sequence of zeros and ones. This is done by the A/D converter in the audio interface. The incoming analog signal is sampled at precise and regular intervals, tens of thousands of times per second. This is known as the sample rate.
Simon Duggal
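To make the sampling idea concrete, here is a minimal NumPy sketch (an illustration, not material from the book) that evaluates a 440 Hz sine wave at 48,000 regularly spaced instants per second, which is essentially what the A/D converter does to the incoming analog signal. The frequency and sample rate are arbitrary illustrative values.

```python
import numpy as np

SAMPLE_RATE = 48_000   # samples per second (48 kHz), an assumed illustrative rate
DURATION = 1.0         # seconds of audio to generate
FREQUENCY = 440.0      # Hz; an arbitrary test tone

# The instants at which the signal is measured: 0, 1/48000, 2/48000, ...
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# The "analog" waveform captured only at those discrete instants
samples = np.sin(2 * np.pi * FREQUENCY * t)

print(f"{len(samples)} samples represent {DURATION:.0f} s of audio at {SAMPLE_RATE} Hz")
```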
5. Hardware
Abstract
An audio interface is a hardware device that connects to your computer to give you much better sound quality and connection capabilities than the computer’s own sound card. An audio interface will allow you to connect professional microphones and instruments to your computer, and output high-quality sound to your studio speakers and headphones.
Simon Duggal
6. Gain Staging
Abstract
Every piece of analogue hardware—a preamp, an amplifier, a compressor or an equalizer, for example—has an ideal operating level at which its best sound is achieved, and at which it has a good signal-to-noise ratio. Gain staging refers to setting ideal input and output levels for each piece of equipment in an analog recording chain. For example, an analog chain could be a microphone going into a preamp, followed by a compressor and then into the line inputs of an audio interface. The preamp input gain would be set to achieve the desired sound which may be clean, driven, distorted or anything in between. Its output level would then be set so the compressor receives the right amount of gain for its input. The output gain of the compressor would then be set so that the input of the audio interface receives the correct amount of gain.
Simon Duggal
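As a rough illustration of the idea (the stage names and dB figures below are hypothetical, not values from the book), gain changes along an analog chain simply add up in decibels, so each stage's output level sets the next stage's input level:

```python
# Hypothetical chain: microphone -> preamp -> compressor -> interface line input.
# Gain values are illustrative only; real settings depend on the gear and the source.
stages = [
    ("preamp",        +40.0),  # bring the mic signal up to line level
    ("compressor",     -3.0),  # gain reduction while compressing
    ("make-up gain",   +2.0),  # restore level at the compressor output
]

level_db = -50.0               # assumed level of the raw microphone signal
for name, gain_db in stages:
    level_db += gain_db        # dB changes add along the chain
    print(f"after {name:<12}: {level_db:+.1f} dB")
```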
7. Microphones
Abstract
There are a few different types of microphones. Understanding the differences between them will help you to choose the right type for a particular purpose.
Simon Duggal
8. Phase
Abstract
When recording with more than one microphone, a couple of mics on an acoustic guitar, or several microphones on a drum kit, for example, it’s important to be aware of potential phase issues.
Signals entering the preamp can become out of phase when the sound captured by one microphone is overlapped with a delayed signal of the same sound captured by another microphone which is further away from the source. The signal from the second microphone arrives at the input of the preamp slightly later than that of the first microphone. This causes peaks and dips in the frequency response, resulting in comb filtering—a hollow and unnatural sound. Phase issues are most noticeable on low frequencies. A signal that’s out of phase will sound thinner than when it is in phase.
Simon Duggal
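The comb-filtering effect described above can be reproduced numerically. In the sketch below (an illustration, not the book's example), a second "microphone" signal is simply the first delayed by about one millisecond; summing the two cancels energy at regular frequency intervals, with the first notch at 1 / (2 × delay):

```python
import numpy as np

SAMPLE_RATE = 48_000
delay_s = 0.001                               # assumed ~1 ms later arrival (~34 cm further away)
delay_samples = int(round(delay_s * SAMPLE_RATE))

rng = np.random.default_rng(0)
source = rng.standard_normal(SAMPLE_RATE)     # broadband test signal standing in for the instrument

mic_1 = source
mic_2 = np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])
summed = mic_1 + mic_2                        # what reaches the preamps when both mics are mixed

# Delay-and-sum cancellation: the first comb-filter notch sits at 1 / (2 * delay)
print(f"first notch near {1 / (2 * delay_s):.0f} Hz")
```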
9. Room Acoustics
Abstract
In critical listening environments, it is imperative that you get a truthful representation of your audio material through the speakers.
Sound reflecting off the walls, ceiling, floor and furniture, arriving at the listening spot within approximately 20 milliseconds of the direct sound from the speakers, can interfere with what you hear. Peaks and dips are created in the frequency response as a result of the reflected sound interfering with the sound from the speakers. This is known as speaker boundary interference response (SBIR).
Simon Duggal
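The arithmetic behind an SBIR dip is straightforward. Assuming (for illustration only) a speaker placed 0.86 m in front of the wall behind it, the reflection travels the extra distance to the wall and back, arrives a few milliseconds late, and cancels the direct sound where the two are half a cycle apart:

```python
SPEED_OF_SOUND = 343.0          # m/s in air at room temperature

distance_to_wall = 0.86         # metres; hypothetical speaker-to-wall distance

extra_path = 2 * distance_to_wall            # the reflection goes to the wall and back
delay = extra_path / SPEED_OF_SOUND          # how much later the reflection arrives
first_notch = 1 / (2 * delay)                # frequency where it cancels the direct sound

print(f"reflection arrives {delay * 1000:.1f} ms late; first dip near {first_notch:.0f} Hz")
```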
10. Recording Tips
Abstract
When recording anything with a microphone, it’s a good idea to leave plenty of headroom to allow for those unexpected loud sounds. For example, if a singer gets too close to the microphone or momentarily sings louder than expected, that extra headroom will prevent the input signal from clipping. 6 dB of headroom should be enough.
Simon Duggal
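In digital terms, 6 dB of headroom simply means aiming the loudest expected peaks around −6 dBFS, which corresponds to roughly half of full-scale amplitude. A quick sketch of the arithmetic (values assumed for illustration):

```python
headroom_db = 6.0
peak_target_dbfs = 0.0 - headroom_db               # keep peaks around -6 dBFS
peak_target_linear = 10 ** (peak_target_dbfs / 20) # ~0.5 of full-scale amplitude

print(f"target peak: {peak_target_dbfs:.0f} dBFS "
      f"(about {peak_target_linear:.2f} x full scale)")
```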

Mix

Frontmatter
11. Equalisers
Abstract
Equalisers give you control over the tone of a sound. They are much like the bass and treble controls on your Hi-Fi except they cover a much wider frequency range, usually from around 20 Hz up to 40 kHz depending on which equaliser you use.
They can be used to subtly improve a sound, for example, by making a dull sound a little brighter, a harsh sound a little softer or a weak sound heavier, or to change the character of a sound in order to make it work differently or fit better within a recording. For example, if you want to give a clean vocal recording a telephonic effect, you will need to reduce the loudness of frequencies that are not present when listening through a telephone receiver. Or, if you want to make the vocal stand out in the mix, you may need to boost frequencies that give voices more definition or presence.
Simon Duggal
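As one way of hearing the telephone example in practice, the sketch below band-passes a signal between roughly 300 Hz and 3.4 kHz, a commonly quoted telephone bandwidth (an assumed figure, not one taken from the book), using SciPy in place of a DAW equaliser:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48_000

# Keep only the assumed telephone band (~300 Hz to ~3.4 kHz); everything else is rolled off.
sos = butter(4, [300, 3400], btype="bandpass", fs=SAMPLE_RATE, output="sos")

def telephone_effect(vocal: np.ndarray) -> np.ndarray:
    """Band-pass a mono vocal to mimic a telephone-style EQ treatment."""
    return sosfilt(sos, vocal)

# Stand-in for a clean vocal recording: one second of noise
vocal = np.random.default_rng(0).standard_normal(SAMPLE_RATE)
processed = telephone_effect(vocal)
```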
12. Dynamics
Abstract
Compressors are used to obtain a more consistent level by reducing loud parts of the audio material without squashing the peaks, thereby decreasing the difference between the quietest and loudest parts of the signal.
When used correctly, compressors can make instruments and voices sound solid, tight and more powerful by compacting the energy contained within the sound. On static sounds such as programmed drums, for example, a compressor can be used to change the envelope—the attack, decay, sustain and release of the sound.
Simon Duggal
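A compressor's basic behaviour can be sketched as a static gain computer: levels above a threshold are reduced according to a ratio. The snippet below is a simplified illustration (threshold and ratio values are arbitrary, and attack, decay and release behaviour is deliberately left out):

```python
import numpy as np

def compress(level_db: np.ndarray, threshold_db: float = -18.0, ratio: float = 4.0) -> np.ndarray:
    """Reduce any level above the threshold by the ratio (static gain computer only)."""
    over = np.maximum(level_db - threshold_db, 0.0)   # how far above the threshold, in dB
    gain_reduction = over - over / ratio              # dB of reduction to apply
    return level_db - gain_reduction

# A part peaking at -6 dB with a -18 dB threshold and 4:1 ratio comes out at -15 dB:
# 9 dB of gain reduction, narrowing the gap between the quietest and loudest moments.
print(compress(np.array([-30.0, -18.0, -6.0])))       # -> [-30. -18. -15.]
```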
13. Effects
Abstract
Reverb is created when multiple, fast, complex echoes are merged together. The resulting sound is a type of ambience that the listener hears as one effect.
In recording and mixing scenarios, reverb is used to recreate the natural ambiences of different rooms and spaces without having to physically record in those spaces. Reverbs can also be used to deliberately create weird, unnatural and crazy spaces. Some reverbs are created using digital algorithms whilst convolution reverbs use impulse responses—samples of actual physical spaces.
Simon Duggal
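Convolution reverb is, at its core, a single operation: the dry signal is convolved with the impulse response of a space and blended back in. The sketch below uses SciPy and a synthetic decaying-noise "impulse response" as a stand-in for a real room measurement (all values are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry: np.ndarray, impulse_response: np.ndarray, wet_mix: float = 0.3) -> np.ndarray:
    """Blend the dry signal with itself convolved against a room's impulse response."""
    wet = fftconvolve(dry, impulse_response)[: len(dry)]
    return (1.0 - wet_mix) * dry + wet_mix * wet

SAMPLE_RATE = 48_000
dry = np.zeros(SAMPLE_RATE)
dry[0] = 1.0                                   # a single click as the dry signal

# Fake impulse response: half a second of exponentially decaying noise
rng = np.random.default_rng(0)
length = SAMPLE_RATE // 2
fake_ir = rng.standard_normal(length) * np.exp(-np.linspace(0.0, 8.0, length))

wet_signal = convolution_reverb(dry, fake_ir)
```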
14. Subgroups
Abstract
There are times when you need to route the outputs of several channels simultaneously to the input of another channel.
A subgroup is simply an auxiliary channel that is configured to allow the outputs of other channels to be routed to its inputs, much as all of your channels end up going into one stereo master channel.
Simon Duggal
15. Monitoring in Mono
Abstract
The human brain finds it difficult to pinpoint the location of lower frequencies. Lower frequencies are omnidirectional—they travel in all directions—whereas higher frequencies travel directly towards the ear (see Chap. 1: An Introduction to How Sound Works—Sound Dispersion). For this reason, it is common practice to mix lower frequency elements such as kick drums and bass guitar to the centre mono part of the stereo field.
Simon Duggal
16. Mid/Side Processing
Abstract
A stereo audio file has two channels, left and right. When these channels play through left and right speakers at an equal level, a phantom centre between the two speakers is created. The phantom centre contains identical information that is present in both the left and right channels—this is the mid part of the signal. Removing this information would leave only the information that differs between the left and right channels—the side information. So, the side information is accessed by subtracting the mid information from the whole signal and vice versa.
Simon Duggal
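The abstract's description translates directly into two sum-and-difference formulas. Here is a minimal sketch using the common convention of halving the sum and difference (an assumption, since scaling conventions vary between tools):

```python
import numpy as np

def encode_mid_side(left: np.ndarray, right: np.ndarray):
    """Mid holds what the two channels share; side holds what differs between them."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def decode_mid_side(mid: np.ndarray, side: np.ndarray):
    """Recover the original left and right channels exactly."""
    return mid + side, mid - side

# Round trip on random stereo material to confirm nothing is lost
rng = np.random.default_rng(0)
left, right = rng.standard_normal(1_000), rng.standard_normal(1_000)
mid, side = encode_mid_side(left, right)
left_2, right_2 = decode_mid_side(mid, side)
assert np.allclose(left, left_2) and np.allclose(right, right_2)
```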
17. Transients
Abstract
A transient is the initial attack of a sound—the pluck of a guitar string before the note sustains or the moment a stick makes contact with a drum before the skin resonates.
Transients are very short, have a high amplitude and have no tonal information. They usually contain higher frequencies than the harmonic content.
Simon Duggal
18. Panning
Abstract
Panning refers to the panoramic placement of sounds in the stereo field from left to right.
Each channel in a DAW has a rotary pan knob. Some DAWs have single pan knobs that allow you to move the position of a track from left to right. Others have separate left and right pan knobs for each channel, giving you individual control of the left and right sides of the stereo audio channel.
Panning is straightforward. Decide where you want your instrument or voice to be placed in the stereo field and use the pan knob to point it there.
Simon Duggal
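Under the hood, a single pan knob is usually implemented as a pan law that scales the left and right outputs. The sketch below shows one common choice, an equal-power (sine/cosine) law; this is an assumption about a typical implementation, not a description of any particular DAW:

```python
import numpy as np

def equal_power_pan(mono: np.ndarray, position: float):
    """Pan a mono signal; position runs from -1.0 (hard left) to +1.0 (hard right)."""
    angle = (position + 1.0) * np.pi / 4.0        # map -1..+1 onto 0..pi/2
    return mono * np.cos(angle), mono * np.sin(angle)

# Dead centre sends ~0.707 of the signal to each speaker, keeping perceived
# loudness roughly constant as the sound is moved across the stereo field.
left, right = equal_power_pan(np.ones(4), position=0.0)
print(round(float(left[0]), 3), round(float(right[0]), 3))   # 0.707 0.707
```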
19. Plosives
Abstract
When a vocalist sings into a microphone, bursts of low-frequency energy are created whenever words that begin with the letters b or p are sounded. This causes annoying low-frequency thumps on the recording which have to be edited or equalised out. These thumps are called plosives.
In many cases, a high pass filter can be used to roll off any unnecessary low-frequency content, such as rumble, or the singer accidentally knocking the microphone stand. This usually reduces plosives to some extent too.
Simon Duggal
20. Zero Crossing and Crossfades
Abstract
Zero crossing is the point at which a digital audio wave has zero amplitude. From this point, the signal will either rise or fall in amplitude.
When an audio segment is selected on the timeline to be looped or cut and pasted with another segment, its start and end points should be at zero crossing; otherwise, there will be a jump in amplitude at the loop or edit point which will result in an unwanted click or pop. To loop a selection or edit two sections together smoothly, the audio waveform will have to be zoomed in to sample level.
Simon Duggal
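Many DAWs offer a "snap to zero crossing" option for exactly this reason. A rough sketch of the underlying search (the function name and behaviour are illustrative, not a feature of any particular DAW):

```python
import numpy as np

def nearest_zero_crossing(audio: np.ndarray, edit_point: int) -> int:
    """Return the sample index of the zero crossing closest to the intended edit point."""
    signs = np.sign(audio)
    crossings = np.where(np.diff(signs) != 0)[0]   # samples just before each sign change
    if crossings.size == 0:
        return edit_point                          # no crossing found; keep the original point
    return int(crossings[np.argmin(np.abs(crossings - edit_point))])

# Snap a cut intended at sample 1000 to the nearest zero crossing of a 110 Hz tone
SAMPLE_RATE = 48_000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = np.sin(2 * np.pi * 110 * t)
print(nearest_zero_crossing(audio, 1000))
```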
21. Mixing Tips
Abstract
Mixing is the process of balancing each recorded part so that its loudness, tone, panoramic position and effect levels blend sonically and musically with each other. The parts are then exported or ‘bounced’ as a stereo audio file ready for mastering (see Chap. 22: What is Mastering?). The finished master can then be converted into formats that are compatible with consumer playback devices such as CD and MP3 players.
Each mix engineer has his or her own particular style and approach to mixing. However, they all have one thing in common—they know how and when to correctly use all of the audio tools at their disposal.
Simon Duggal

Master

Frontmatter
22. What is Mastering?
Abstract
Mastering is the final stage of the audio production process after recording and mixing have taken place. Professional mastering ensures your track sounds sonically good enough to compete with commercial tracks, and sounds consistent on a variety of playback systems.
Simon Duggal
23. Prepare Your Track for Mastering
Abstract
Bounce your mix down as an interleaved WAV or AIFF file at the same sample rate and bit depth as your session. If your session was recorded at 48 kHz and 24 bits, bounce your mix down at 48 kHz and 24 bits. Ensure that the master output levels leave plenty of headroom. A mastering engineer might recommend −6 dB of headroom. If your master fader meters are constantly hitting close to 0 dBFS, that won't leave the mastering engineer with much headroom and will make it more likely that clipping will occur when dynamic processing or EQ is used.
Simon Duggal
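A quick way to check the headroom of a bounced mix is to measure its peak in dBFS before sending it off. A minimal sketch, assuming a floating-point mix where full scale is 1.0 and using a stand-in test signal that peaks at about −12 dBFS:

```python
import numpy as np

def peak_dbfs(mix: np.ndarray) -> float:
    """Peak level of a floating-point mix (full scale = 1.0), in dBFS."""
    peak = float(np.max(np.abs(mix)))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Stand-in for a bounced mix: a sine wave peaking at 0.25, i.e. about -12 dBFS
t = np.arange(48_000) / 48_000
mix = 0.25 * np.sin(2 * np.pi * 220 * t)

level = peak_dbfs(mix)
print(f"peak {level:.1f} dBFS -> {'plenty of headroom' if level <= -6.0 else 'leave more headroom'}")
```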
24. Mastering Tools
Abstract
Compression (see Chap. 12: Dynamics) is used in the mastering process to reduce dynamic range and increase energy in the track. Overdoing it with compression can squash the life out of a track, so it's important to use it subtly. Typically, gain reduction of 1 to 4 dB should be enough. Multiband compressors (see Chap. 12: Dynamics) can be useful at this stage if you need to compress different bands of frequencies separately. They split the signal into several frequency bands, each of which can be compressed independently.
Simon Duggal
25. Dither
Abstract
Dithering is the process of adding some very low-level white noise (hiss) to digital audio during bit depth reduction. Why would we want to add noise to our great recording?
Consumer devices, such as Hi-Fi systems, personal MP3 players and car stereos, play back audio at 44.1 kHz and 16 bits. The professional standard for recording audio is either 24 or 32 bits and often at higher sample rates. This means that at the final stage of mastering, the bit depth and sample rate of the audio material will have to be reduced to be compatible with consumer devices.
Simon Duggal
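Below is a bare-bones sketch of bit-depth reduction with dither, assuming floating-point audio at full scale 1.0 being reduced to 16-bit integers; the triangular (TPDF) noise shape and ±1 LSB level are common choices, not necessarily what the book recommends:

```python
import numpy as np

def dither_to_16_bit(audio: np.ndarray, seed: int = 0) -> np.ndarray:
    """Reduce floating-point audio (full scale = 1.0) to 16-bit integers with TPDF dither."""
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0                                   # one 16-bit quantisation step
    noise = rng.triangular(-lsb, 0.0, lsb, size=audio.shape)
    dithered = np.clip(audio + noise, -1.0, 1.0 - lsb)    # keep within int16 range
    return np.round(dithered * 32768.0).astype(np.int16)

# Quantise a very quiet tone; the added noise randomises the rounding error so the
# quantisation distortion is replaced by a low, steady hiss.
t = np.arange(48_000) / 48_000
quiet_tone = 0.0005 * np.sin(2 * np.pi * 1000 * t)
print(dither_to_16_bit(quiet_tone)[:8])
```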
26. Metering: Peak, RMS and LUFS
Abstract
The peak of an audio signal is its loudest point; the peak value is a momentary measure of that level.
Root mean square (RMS) and loudness units relative to full scale (LUFS) are measurements of an average signal level based on how our hearing perceives loudness.
RMS is an accurate representation of the average loudness of your mixes, though LUFS is considered a more accurate way of measuring perceived loudness. LUFS is the standard by which online streaming services such as YouTube and Spotify measure the perceived loudness of tracks. This is not to force you to listen at any particular volume, but rather to ensure a relatively consistent level across different tracks.
Simon Duggal
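Peak and RMS are simple enough to compute directly; LUFS additionally applies the frequency weighting and gating defined in ITU-R BS.1770, which is beyond this sketch. A minimal illustration, assuming floating-point audio with full scale 1.0:

```python
import numpy as np

def peak_db(audio: np.ndarray) -> float:
    """Momentary peak: the single loudest sample, in dBFS."""
    return 20 * np.log10(float(np.max(np.abs(audio))))

def rms_db(audio: np.ndarray) -> float:
    """Root mean square: an average level closer to how loud the material actually feels."""
    return 20 * np.log10(float(np.sqrt(np.mean(audio ** 2))))

# A full-scale sine wave peaks at 0 dBFS but averages about -3 dB RMS
t = np.arange(48_000) / 48_000
sine = np.sin(2 * np.pi * 440 * t)
print(f"peak {peak_db(sine):.1f} dB, RMS {rms_db(sine):.1f} dB")
```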
27. Mastering Your Song: Things to Consider
Abstract
The most important aspect of mastering is accurate monitoring. Professional mastering studios have dedicated high-end speakers that are precise in both the frequency and time domains. This means that they are capable of delivering a flat frequency response, and perhaps more importantly, all frequencies reach the mastering engineer’s ears at the same time. The speakers are carefully positioned in an acoustically treated room to ensure that SBIR does not interfere with the direct sound.
Simon Duggal
Backmatter
Metadata
Title
Record, Mix and Master
Written by
Simon Duggal
Copyright Year
2024
Electronic ISBN
978-3-031-40067-4
Print ISBN
978-3-031-40066-7
DOI
https://doi.org/10.1007/978-3-031-40067-4