Lesson 3


Introduction To Pro Tools


Learning About Pro Tools

Pro Tools is a software program that allows you to record, edit, mix, and master digital audio and MIDI files on your Mac or PC. The program uses nonlinear hard disk recording (unlike tape recorders, Pro Tools lets you jump immediately to any spot in a recording without rewinding or fast-forwarding), nondestructive digital editing (an editing mode in which the original recorded material is never altered), and works hand-in-hand with another software program called DAE.

Digidesign’s Audio Engine (DAE)

DAE (Digidesign Audio Engine) launches in the background of Pro Tools the minute you start the program. It is the engine under the hood of the Pro Tools user interface. It is there and you really don’t need to worry about it or interact with it. Sort of like the engine on your car; you just start your car and off you go.


Getting Started
 

Installing Pro Tools

If you own a copy of Pro Tools with documentation, please be sure to read all the documentation on how to best install your Pro Tools software and hardware.

Install your Pro Tools software (and hardware if needed).




For Pro Tools LE users, the installation and setup procedures are covered in the Quick Start Guide that came with your Digidesign software. Follow the first four chapters of the guide for specifics on installation procedures.

Once your hardware and software are installed, it's time to connect the rest of your gear.

This course assumes that you are using Pro Tools 10. If you are using a later version, the concepts are essentially the same.


Making Your Connections

We will discuss how to configure your MIDI system in Lesson 9 of the course. Make sure that you have followed set-up procedures for your Pro Tools audio and software as outlined by Digidesign.

Here are two examples of a simple audio set-up using a system without Digidesign hardware.


[Image]
An example of a simple PC set-up


[Image]
PC set-up with mixing board added.


Understanding Signal Flow

Now that your studio is set up, you need to understand how audio signals travel through your studio gear. This is called signal flow. Signal flow is a very important concept: it will help you understand how your studio works, and it will also help you correct any problems that may occur in your studio.

When you sing, you create an acoustical sound wave that pushes air.

Sound is simply a vibration in the air. Sound waves travel outward in all directions from the source of the sound. Our ears pick up these waves, and our brain interprets the compressions in the air as sound.

One complete vibration, from crest through trough and back again, spans a single wavelength.
Frequency is a measure of how many vibrations occur within one second. It is measured in hertz (abbreviated Hz) and corresponds directly to pitch. In music, a pure sine wave of 440 Hz is the note A in the fourth octave of the piano.
Amplitude is a measure of the amount of energy in a waveform, or the amount of air being moved by the wave. Amplitude is indicated by the height of the crest and the depth of the trough of a waveform, as represented in the graph below. Amplitude corresponds to loudness and is measured in decibels (dB).

As a wave, sound has two main components: frequency and amplitude. The higher the frequency, the higher the pitch. A frequency ratio of 2:1 is called an octave.
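If you like to see these relationships as numbers, here is a short Python sketch. Python is used for illustration only; the 440 Hz value and the 2:1 octave ratio come from the text above.

```python
import math

def sine_sample(freq_hz, amplitude, t_seconds):
    """Value of a pure sine wave of the given frequency and amplitude at time t."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t_seconds)

# A4 = 440 Hz; one octave is a 2:1 frequency ratio.
a4 = 440.0
a5 = a4 * 2    # one octave up: 880 Hz
a3 = a4 / 2    # one octave down: 220 Hz

# One complete vibration of a 440 Hz wave takes 1/440 of a second.
cycle_seconds = 1 / a4
print(a5, a3, round(cycle_seconds * 1000, 3), "ms per cycle")
```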

[Image: graph of a waveform showing crest and trough]

If your hearing is working correctly, you should be able to hear frequencies from 20 Hz to 20,000 Hz (20 kHz).

Below is a guide to some of the more common frequencies in the music world.

Low bass (20 to 80 Hz) includes the first two octaves. These low frequencies are associated with power and are typified by explosions, thunder, and the lowest notes of the organ, bass, tuba, and other instruments. Too much low bass results in a muddy sound.

Upper bass (80 to 320 Hz) includes the third and fourth octaves. Rhythm and support instruments such as the drum kit, cello, trombone, and bass use this range to provide a fullness or stable anchor to music. Too much upper bass results in a boomy sound.

Mid-range (320 to 2,560 Hz) includes the fifth through seventh octaves. Much of the richness of instrumental sounds occur in this range, but if over-emphasized a tinny, fatiguing sound can be the result.

Upper mid-range (2,560 to 5,120 Hz) is the eighth octave. Our ear is very particular about sound in this range, which contributes much to the intelligibility of speech, the clarity of music, and the definition or "presence" of a sound. Too much upper mid-range is abrasive.

Treble (5,120 to 20,000 Hz) includes the ninth and tenth octaves. Frequencies in this range contribute to the brilliance or "air" of a sound, but can also emphasize noise.

Sounds below 20 Hz are infrasonic; sounds above 20 kHz are ultrasonic. There is much debate on how frequencies in these ranges affect hearing.
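As a quick self-check, the band boundaries above can be written as a small Python function. The names and cutoffs come straight from this guide; other references draw the lines slightly differently.

```python
def frequency_band(hz):
    """Rough band name for a frequency, using the cutoffs from the guide above."""
    if hz < 20:
        return "infrasonic"
    if hz < 80:
        return "low bass"
    if hz < 320:
        return "upper bass"
    if hz < 2560:
        return "mid-range"
    if hz < 5120:
        return "upper mid-range"
    if hz <= 20000:
        return "treble"
    return "ultrasonic"

print(frequency_band(440))    # the A-440 sine wave falls in the mid-range
```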

If you sing into a microphone, the mic will detect the sound wave and transform (transduce) that acoustical sound wave into electrical current. The electrical current is an “electrical picture” of your voice’s sound wave.

The electrical current travels through the microphone cable, then into a mic preamplifier. The mic preamp amplifies the current to a higher level called “line level”. Line level is the standard level for all recording equipment. From the mic preamp, the current is then directed to an input on your mixer or audio interface.

When you are recording in Pro Tools, the audio current must be converted to digital data (remember the 0s and 1s!), so that your computer can record the sound wave data and store it on your hard drive. The analog-to-digital conversion usually takes place on your computer or other device containing an A/D (analog to digital) converter.

Here is a simple explanation on how analog waveforms are converted to digital:

[Image]
An analog waveform now represented by voltage.



[Image]
The first step in digitizing an audio waveform is to slice it up into moments in time; a time sample is taken at each vertical dotted line.

[Image]
This is what the sample looks like after it has been digitized. Notice that we still have the basic shape of the waveform, but the smoothness has been lost.


[Image]
The goal is to represent the signal with a string of numbers that represent measurements of each sample.

[Image: quantization of the sampled values]

Next, the signal is quantized: a computer can't store a smooth, continuous value, so each sample must be rounded to the nearest step on a fixed scale and expressed in binary. The more quantization levels, the more accurate the picture. A grainy photo would represent a low quantization level; a clear photo represents a high quantization level. A CD has 65,536 possible levels that it uses when measuring an audio signal.
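The sample-then-quantize process described above can be sketched in a few lines of Python. This is only an illustration of the idea, not how Pro Tools or any real converter implements it; the 44.1 kHz rate and 65,536 levels are the CD values from the text.

```python
import math

SAMPLE_RATE = 44100      # samples per second (CD rate)
BIT_DEPTH = 16           # CD bit depth: 2**16 = 65,536 levels
LEVELS = 2 ** BIT_DEPTH

def sample_and_quantize(freq_hz, n_samples):
    """Sample a pure sine wave, rounding each sample to the nearest of 65,536 levels."""
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                             # moment in time of this slice
        value = math.sin(2 * math.pi * freq_hz * t)     # "analog" value in [-1, 1]
        level = round((value + 1) / 2 * (LEVELS - 1))   # nearest integer step
        out.append(level)
    return out

print(LEVELS)                        # 65536 possible levels
print(sample_and_quantize(440, 4))   # first four digitized samples of an A-440 wave
```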

Your signal will now be processed by the DAE engine and loaded as a track, or it may be routed through a bus in the Pro Tools virtual mixer.

Once you have your signal in Pro Tools and on the hard drive, it is edited, processed, sent back out through the Pro Tools mixer, converted back to an analog current, and sent to your speakers.

[Image]
Now that all those digits are stored on your hard drive, we have to turn them back into a sound wave so you can hear it. This is the data being turned back into voltage.

[Image]
This is what is sent out of your system. There are other things that have to be done to the wave, but I just want to give you a basic idea of the process.

Anti-Aliasing

Some sounds carry frequencies that our ears can’t detect. For example, the ringing of a crash cymbal contains frequencies above 20 kHz that a human ear can’t detect.

If we are sampling at a rate of 44.1 kHz (the CD sample rate), those high frequencies cannot be captured correctly. Instead, each one folds back into the audible range as a new, false frequency called an alias, or false identity. These alias frequencies distort the sound if they are left in; the cymbal will sound distorted. Imagine all of the sounds that carry frequencies we can't hear: sampled at 44.1 kHz without precautions, there would be a lot of distortion on the samples.

So, the solution is to filter out any frequencies above 20 kHz (the limit of our hearing range) so they don't get aliased. This is called anti-aliasing.

A low pass filter, which allows only frequencies below a certain cutoff to pass through, is inserted into the signal path before the sound is sampled and digitally converted.

The low pass filter can't cut off completely right at 20 kHz; it needs a margin of a couple of thousand hertz to roll off all the way. A 44.1 kHz sample rate can capture frequencies up to half that rate, 22.05 kHz (the Nyquist limit), which leaves about 2 kHz above our 20 kHz hearing limit for the filter to finish its work. We won't miss the sound of the filtered-out frequencies because our ears can't detect them anyway. Now the sampled sound will not be aliased and, more importantly, it won't be distorted. The sound will be clean.
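The fold-back behavior of an unfiltered alias follows a standard formula: a frequency above the Nyquist limit reappears mirrored below it. Here is a small Python sketch; the 25 kHz cymbal overtone is a hypothetical example, not a measured value.

```python
def aliased_frequency(f_hz, sample_rate_hz=44100):
    """Frequency actually captured when f_hz is sampled with no anti-aliasing filter.
    Frequencies above the Nyquist limit (half the sample rate) fold back below it."""
    nyquist = sample_rate_hz / 2
    f = f_hz % sample_rate_hz      # sampling cannot distinguish f from f +/- sample rate
    if f > nyquist:
        f = sample_rate_hz - f     # mirror back below the Nyquist limit
    return f

# A hypothetical 25 kHz cymbal overtone, sampled at 44.1 kHz with no filter,
# shows up as a false 19.1 kHz tone mixed into the recording:
print(aliased_frequency(25000))    # 19100.0
```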

Pro Tools Sound and Memory Requirements

Sound files and audio samples take up a lot of space on your hard drive. The higher the sample rate and bit depth, the more space! Many recording programs now allow sample rates well above 44.1 kHz. This is great, but remember that it takes up a lot of room on your hard drive, and if you want to burn your music to CD, higher sample rates will have to be converted back down to 44.1 kHz (more on that later).

Let’s look at an example of how much hard drive space audio files eat up!

24 tracks of audio, a 5-minute song, 24-bit depth, and a 44.1 kHz sample rate work out to about 7,620,480,000 bits.

In simpler terms, about 908 MB, or almost 1 GB on your hard drive.
You can do the math for about 10 songs, or a CD's worth of music. I hope you have a large hard drive!
I will discuss some memory saving tips for getting that number down later in the course.
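The arithmetic behind that 908 MB figure is easy to reproduce: tracks × seconds × sample rate × bit depth gives bits, and dividing by 8 gives bytes.

```python
def audio_size_bytes(tracks, minutes, bit_depth, sample_rate):
    """Uncompressed size of a multitrack recording, in bytes."""
    samples_per_track = minutes * 60 * sample_rate   # one sample per tick of the clock
    bits = tracks * samples_per_track * bit_depth    # every sample costs bit_depth bits
    return bits // 8                                 # 8 bits per byte

size = audio_size_bytes(tracks=24, minutes=5, bit_depth=24, sample_rate=44100)
print(size)                        # 952560000 bytes (7,620,480,000 bits)
print(round(size / (1024 ** 2)))   # about 908 MB
```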

Wow! That is a lot to digest, especially since all of this happens in the blink (or two) of your eye.

Gain Structure

A gain structure is a flow chart that shows you how your audio signal is routed before it gets onto your hard drive and into Pro Tools. This includes any amplifier (or attenuator) that affects the level of your audio signal. Any time we pass an audio signal through a piece of equipment, we are adding noise to the signal. Let's look at the gain structure of a standard mixer set-up.

Plug the microphone into a mixer, and the signal passes through a mic pre-amp that is controlled by a “trim” or “gain” knob.

The signal now passes through a mixer channel controlled by a fader.

The signal may now be routed through the master fader, a sub-mix bus, an aux bus, or a control room/headphone mix. The audio signal really can travel a long way before it comes back out again.
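Because each gain stage is expressed in decibels, the net gain of a chain is simply the sum of the stages. Here is a tiny sketch with made-up numbers; the +40/-6/-3 dB values are hypothetical, not taken from any specific device.

```python
def overall_gain_db(stages):
    """Net gain through a chain of stages, each expressed in dB."""
    return sum(stages)

# Hypothetical chain: mic preamp +40 dB, channel fader -6 dB, master fader -3 dB
print(overall_gain_db([40, -6, -3]), "dB")   # 31 dB
```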

Understanding how signals flow in your studio will help you get optimal recording levels through all of your gear.


The rest of the gain stages can be controlled in the Pro Tools mixer. If you are using an M-Box or other hardware audio interface, the input gain stages are controlled by the rotary knobs or faders on the device.


[Image]

Take the time to find out how you can monitor your input gain stages by using one of the methods mentioned above. Each software installation will be different. Make sure that you understand how to control input gain in your software/hardware configuration.


Pro Tools Project 3:

Describe the gain structure of your recording set-up to the class. Include any microphones or other input devices that feed into your Pro Tools set-up. Also tell us how you control your input gain.




Recap: Lesson 3

This week we spent some time learning basic audio concepts. You should have a better understanding of:

DAE
Basic Signal Flow
Waveforms
Amplitude
Frequency
Analog to Digital Conversions
Anti-Aliasing
Memory Requirements for Audio
Gain Structure
