Danielle Jardine – San Francisco Recording Connection

Lesson 3 – Sample Rate, Nyquist Theorem and Bit Depth (Part 2)
Posted on 2013-09-27 by Danielle Jardine

Introduction to the Basics of Digital Audio, continued with Sample Rate, Nyquist Theorem and Bit Depth

[Image: illustration of mixing and sound waves]

The sample rate is how many samples per second you take of your sound wave. The guiding principle is that you need to take at least twice as many samples per second as the highest frequency you want to record. This is known as the Nyquist Theorem, which states that for the full bandwidth to be used (or encoded), the sample rate must be at least twice as high as the highest frequency to be recorded. The computer only accurately represents frequencies up to half of the sample rate, and the human ear cannot perceive frequencies above 20kHz. Because of these two facts, the recording industry's standard sampling rate has been set at 44.1kHz, which is 44,100 samples per second. At a sample rate of 44.1kHz, the highest frequency that can be accurately captured is 22.05kHz.

Another reason to set the sample rate according to the Nyquist Theorem is so that frequencies are captured accurately and do not produce 'alias' frequencies. Aliasing adds frequencies to the signal that were not originally there, which the human ear hears as harmonic distortion. To eliminate aliasing, a low-pass filter can be placed before the sampling process; this removes any frequencies above the Nyquist Frequency.

As stated above, the sample rate is the rate at which samples of the analog signal are taken in order to be turned into digital form. A sample is a snapshot of a sound wave's instantaneous amplitude, and this sampling happens x number of times per second (x = sample rate). Each sample is held until the next sample is taken; during this time the sample is processed and assigned a numerical value made up of 1's and 0's. This is known as Quantization. Quantization is used to convert the voltage levels of a continuous analog signal into binary digits (bits) so that the audio data can be stored in digital form; it is the representation of amplitude in the digital sampling process.

Bits, or Bit Depth, is how we define the number of values that can be assigned to each sample during Quantization. To be stored in a computer or storage device, every sample must be assigned a numerical value. These values are the digital expression of the instantaneous amplitude of the signal at the moment the sample was taken. Binary digits (bits) determine the range of possible numbers the computer can use to store and read this information.
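To make the aliasing idea concrete, here is a minimal sketch in plain Python showing how a frequency above the Nyquist limit folds back into the audible range (the function name and example numbers are just for illustration):

# A minimal sketch: where does a tone land after sampling at a given rate?
# Assumes an ideal sampler with no anti-aliasing (low-pass) filter in front.

def alias_frequency(freq_hz, sample_rate_hz):
    """Return the frequency actually captured when freq_hz is sampled at sample_rate_hz."""
    nyquist = sample_rate_hz / 2.0
    # Fold the frequency back into the 0..Nyquist range (it mirrors around the Nyquist point).
    folded = freq_hz % sample_rate_hz
    if folded > nyquist:
        folded = sample_rate_hz - folded
    return folded

print(alias_frequency(18000, 44100))   # 18000.0 -> below Nyquist, captured correctly
print(alias_frequency(30000, 44100))   # 14100.0 -> above Nyquist, aliases down into the audible band

Coming back to bit depth: each added bit doubles the number of values available to describe a sample: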
- 1 bit = 1 or 0
- 2 bits = 00, 01, 10, or 11
- As the number of bits increases, so does the possible range of numbers
- 8 bits = 2 to the 8th power = 256 possible number combinations
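As a rough sketch in plain Python (the function names are just for illustration), here is how the number of bits translates into quantization levels, and how a sample's amplitude gets snapped to the nearest available level:

# How many distinct values does a given bit depth allow, and what does
# quantizing an amplitude to that grid look like? (Illustrative only.)

def quantization_levels(bits):
    return 2 ** bits

def quantize(amplitude, bits):
    """Snap an amplitude in the range -1.0..1.0 to the nearest step of a bits-deep grid."""
    levels = quantization_levels(bits)
    step = 2.0 / (levels - 1)          # distance between neighbouring levels
    return round(amplitude / step) * step

print(quantization_levels(8))    # 256
print(quantization_levels(16))   # 65536
print(quantize(0.3337, 8))       # about 0.3373 -- the 8-bit grid cannot hit the exact value
print(quantize(0.3337, 16))      # about 0.33370 -- a 16-bit grid gets much closer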

The most common Bit Depth is CD quality, which is recorded at 16 bits (nnnnnnnn nnnnnnnn) = 65,536 possible combinations, though 24-bit and 32-bit are also used in recording. The greater the number of bits used to store each sample, the better the potential signal-to-noise ratio. The higher the bit depth (the more numbers assigned per sample), the higher the quality of information per sample, and thus the higher the quality of the playback.
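A common rule of thumb is that each extra bit adds roughly 6 dB of potential dynamic range; here is a quick sketch of that arithmetic in plain Python (the helper name is just for illustration):

# Rule of thumb: every bit adds about 6.02 dB of potential dynamic range.
import math

def dynamic_range_db(bits):
    """Approximate dynamic range of an ideal converter with the given bit depth."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 32):
    print(bits, "bits ->", round(dynamic_range_db(bits), 1), "dB")
# 16 bits -> 96.3 dB, 24 bits -> 144.5 dB, 32 bits -> 192.7 dB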
Distortion is something to be aware of, and it can appear in several different ways:
a) quantization error = white noise: random error added to the signal because each sample's amplitude can only be approximated by the available numeric values (this is distinct from the aliasing described above)
b) clipping = when the amplitude of the incoming signal is greater than the maximum amplitude that can be expressed numerically (see the sketch below)
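Here is a small sketch in plain Python (hypothetical values, just to illustrate both failure modes) of what clipping and quantization error look like numerically for a 16-bit converter:

# Two ways a digital recording can distort, illustrated for a 16-bit converter.

MAX_16BIT = 32767          # largest positive value a signed 16-bit sample can hold
MIN_16BIT = -32768

def convert_sample(analog_amplitude):
    """Map an amplitude in -1.0..1.0 to a 16-bit integer, clipping anything outside that range."""
    raw = round(analog_amplitude * MAX_16BIT)
    # Clipping: the signal is louder than the largest value we can write down.
    return max(MIN_16BIT, min(MAX_16BIT, raw))

print(convert_sample(0.5))     # 16384 -- fits comfortably
print(convert_sample(1.3))     # 32767 -- flattened to the ceiling: audible clipping
print(convert_sample(-1.3))    # -32768 -- same thing on the negative side

# Quantization error: the stored value can only approximate the true amplitude.
stored = convert_sample(0.333333)
print(stored / MAX_16BIT)      # about 0.33332 -- the small difference is quantization error, heard as a noise floor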
