Jordan Hinton – Raleigh Recording Connection

Chapter One – Posted on 2015-07-06 by Jordan Hinton

In chapter one I learned that sounds are vibrations. They are waves travelling through a medium, disrupting the air molecules around them. These waves move back and forth as a waveform. The forward movement is referred to as compression and the backward movement is referred to as rarefaction. The amplitude of a wave is its loudness. Amplitude is measured in decibels.

Frequency is how many cycles a sound wave completes per second, measured in Hertz (cycles per second). I learned about the range of frequencies that a human can hear, 20Hz-20kHz. I can only hear up to about 16kHz.

I learned about the parts of the ear that play a role in hearing and perceiving sound. Different frequencies are picked up at different points along the cochlea. Tiny bones in the middle ear transmit and amplify the sound vibrations, and the cochlea converts them into signals that are sent to your brain.

I have attached the notes that I took on chapter one.

 

My first meeting with my mentor went well. I am looking forward to working hands-on in the studio and applying what I learn.

 

 

Sound is made possible by the vibration of an object displacing the air molecules around it. Key point to remember: sounds are the vibrations of an object.

When a guitar string is plucked, the string vibrates back and forth and displaces the air around it. On an acoustic guitar, the sound hole and body capture these sound waves and amplify the sound (increase its volume). The wood of the guitar is carefully selected to resonate (produce a deep, full, reverberating "echoing" sound).

Sound-pressure waves are how sound is perceived. The vibration and displacement of air molecules is strongest and purest at the source; the farther away you get from the source, the less volume and strength the sound has.
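To put a rough number on that drop-off, here's a quick Python sketch of my own (not from the chapter) using the standard inverse-square rule of thumb for a point source in open space, where the level falls about 6 dB every time the distance doubles:

```python
import math

def spl_at_distance(spl_ref_db, ref_distance_m, distance_m):
    """Estimate SPL at a new distance from a point source in open space.

    Inverse-square rule of thumb: the level drops about 6 dB for every
    doubling of distance. Real rooms add reflections, so this is only rough.
    """
    return spl_ref_db - 20 * math.log10(distance_m / ref_distance_m)

# A source measured at 100 dB SPL from 1 meter away
for d in (1, 2, 4, 8):
    print(f"{d} m: {spl_at_distance(100, 1, d):.1f} dB SPL")
# 1 m: 100.0, 2 m: 94.0, 4 m: 88.0, 8 m: 81.9
```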

Rarefaction is when air molecules transfer energy to the molecules in front of them and then readjust, leaving lower-pressure space behind their original position. This alternation of compression and rarefaction is how sound travels.

***Movie 1.2 would not play on my computer on 6/15/15.

The above is possible because air molecules have weight and density. There are billions and billions of air molecules all around us, each with space between them. What we are doing with a sound source is pushing these molecules closer together. The molecules try to space themselves out evenly, so when you create sound through compression and rarefaction, they eventually return to their natural spacing and the energy of the vibration is gone. Atmospheric pressure is the natural air pressure (the natural density of air molecules) around us at all times.

A waveform is the graphic representation of the amplitude of a sound-pressure wave over a period of time: normal, to compression (high pressure), to rarefaction (low pressure), back to normal.

Sound waves repeat in cycles over time, so we call them periodic. Manipulating a waveform will change the way it sounds.

Amplitude is the measurement of loudness: the intensity of pressure or voltage above or below the horizontal center line of the waveform graph, which represents normal pressure/silence.

Amplitude can be viewed from an acoustic standpoint (sound pressure level) or an electrical standpoint (voltage).

Amplitude can be measured (in decibel "dB" units) in several different ways: peak amplitude (the maximum value of the signal) and peak-to-peak amplitude (the change between peak and trough, the trough being the lowest value in the waveform).

RMS, or root mean square, amplitude was developed as a way of determining a meaningful level of a waveform over time.
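To make the three measurements concrete for myself, here's a small NumPy sketch (my own example, not from the chapter) that computes peak, peak-to-peak, and RMS amplitude for a plain sine wave:

```python
import numpy as np

# One second of a 100 Hz sine wave sampled at 48 kHz, with a peak of 1.0
sr = 48000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 100 * t)

peak = np.max(np.abs(signal))                    # maximum value of the signal
peak_to_peak = np.max(signal) - np.min(signal)   # peak minus trough
rms = np.sqrt(np.mean(signal ** 2))              # root mean square over time

print(f"peak: {peak:.3f}")                  # ~1.000
print(f"peak-to-peak: {peak_to_peak:.3f}")  # ~2.000
print(f"RMS: {rms:.3f}")                    # ~0.707 (peak / sqrt(2) for a sine)
```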

A decibel is not a fixed value; it's a measurement compared to a designated reference value, which is why there are different decibel scales. Different scales are used in audio engineering, and the difference between them is the reference value. The ratio between the reference point and the measured value is denoted in bels (a decibel is one tenth of a bel).
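Here's how I understand the math behind that, as a small sketch of my own: for amplitude quantities like pressure or voltage, decibels are 20 times the base-10 logarithm of the measured value over the reference value (dB SPL uses 20 micropascals, the approximate threshold of hearing, as its reference):

```python
import math

def ratio_to_db(measured, reference):
    """Convert a ratio of two amplitudes (pressure or voltage) to decibels."""
    return 20 * math.log10(measured / reference)

P_REF_SPL = 20e-6  # pascals, the reference value for the dB SPL scale

print(ratio_to_db(0.2, P_REF_SPL))  # a 0.2 Pa sound pressure is 80 dB SPL
print(ratio_to_db(2.0, 1.0))        # doubling any amplitude adds about 6 dB
```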

The dB SPL scale is the most important and frequently used in audio engineering. For mixing, the optimal monitoring levels are between 85 and 95 dB SPL. When music is mixed too loudly it can affect how the mix translates to other devices; when it is mixed too softly, it isn't loud enough to reveal the subtleties of its frequency balance.

Frequency is how often a cycle (compression + rarefaction) occurs in a second, measured in Hz (cycles per second), and it is what we perceive as pitch.

Human beings have a frequency range of 20Hz – 20,000Hz. It's not uncommon for someone aged 25 or older to be unable to hear above 15,000Hz due to the gradual hearing loss that occurs as we get older. I can't hear above 16,000Hz.
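A tiny sketch of my own to tie frequency, period, and the hearing range together:

```python
def frequency_hz(cycles, seconds=1.0):
    """Frequency = number of complete cycles per second (Hz)."""
    return cycles / seconds

f = frequency_hz(440)          # 440 complete cycles in one second (concert A)
period_ms = 1000 / f           # the period is the inverse of the frequency
audible = 20 <= f <= 20000     # nominal human hearing range

print(f"{f:.0f} Hz, period {period_ms:.2f} ms, audible: {audible}")
# 440 Hz, period 2.27 ms, audible: True
```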

What makes the same note/pitch (frequency) sound different when comparing a piano to a violin, or other instruments? Timbre. Timbre is the harmonic and frequency fingerprint that differentiates one instrument from another. An instrument has a fundamental frequency (the most prominent one heard) but contains a mixture of other frequencies that make up its sound. Overtones, harmonics, and the materials used to build the instrument all contribute to the instrument's unique sound.

In the example of the violin, the way it is played (with a bow, with rosin), the bridge, and other accessories all contribute to the sound waves created and give the instrument its own specific timbre.
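One way I can picture timbre in code: build a tone from a fundamental plus a few harmonics at different strengths. The harmonic "recipe" below is invented purely for illustration and doesn't describe any real instrument:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
fundamental = 220.0  # one second of a tone at A3

# Each (harmonic number, relative level) pair is a made-up recipe;
# a real instrument's particular mix of overtones is its timbre.
recipe = [(1, 1.0), (2, 0.5), (3, 0.3), (4, 0.15)]

tone = sum(level * np.sin(2 * np.pi * fundamental * n * t) for n, level in recipe)
tone /= np.max(np.abs(tone))   # normalize so the waveform peaks at 1.0

# A plain 220 Hz sine and this tone share the same pitch (fundamental),
# but the added harmonics change how it sounds.
```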

Reflection explains how sound behaves when it comes into contact with objects or surfaces that impede its forward progress. Sound waves behave like light in the sense that certain materials reflect sound more efficiently than others.

There are three behaviors: reflection, absorption, and diffusion. Reflection: most sound is reflected, and the reflection is almost as loud as the incoming sound. Absorption: absorbing power is determined by the material used; some materials absorb sound. Diffusion: some materials scatter sound unevenly, depending on the desired effect.

Phase is a waveform's position in time (the position of the waveform at any given moment).

Phase shift is when one waveform is delayed relative to another and the two are mixed together. Think of two microphones held up to an acoustic guitar for recording: the sound waves reach each microphone at slightly different times/phases, causing a phase shift when the signals are combined.
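To see how much phase shift a small position difference causes, here's a sketch of my own using the speed of sound (about 343 m/s at room temperature):

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at room temperature

def phase_shift_degrees(extra_distance_m, frequency_hz):
    """Phase offset between two mics when one is farther from the source."""
    delay_s = extra_distance_m / SPEED_OF_SOUND
    return (delay_s * frequency_hz * 360) % 360

# One mic about 0.17 m (roughly 7 inches) farther from the guitar
for f in (100, 500, 1000):
    print(f"{f} Hz: {phase_shift_degrees(0.17, f):.0f} degrees")
# 100 Hz: 18, 500 Hz: 89, 1000 Hz: 178
# Near 180 degrees the two signals partially cancel when mixed together.
```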

Envelope is a waveform characteristic that plays an important role in giving an instrument its distinct sound. The information below puts the definition in context.

Sine waves are pure tones, but most instruments do not produce pure sine waves, and a sine wave is not the only type of waveform that can be created.

There are two categories of waveforms: simple and complex. 

Simple waveforms have some type of uniform shape that repeats. There are four basic types of simple waveforms: sine, square, triangle, and sawtooth. Each can play the same fundamental note, but the tone changes because the various simple waveforms differ in their frequency content.
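Here's a quick sketch (my own, not from the chapter) that generates the four simple shapes at the same fundamental note with NumPy:

```python
import numpy as np

sr = 44100
f = 220.0                        # the same fundamental note for every shape
t = np.arange(sr) / sr
phase = (f * t) % 1.0            # position within each cycle, from 0 to 1

sine = np.sin(2 * np.pi * f * t)
square = np.sign(sine)                   # jumps between -1 and +1
saw = 2 * phase - 1                      # ramps up, then resets
triangle = 2 * np.abs(saw) - 1           # ramps up and back down

# All four repeat 220 times per second (same pitch), but their different
# harmonic content makes each one sound noticeably different.
```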

An oscillator is a circuit that, among other things, converts direct current into alternating current. An oscillator functions the same way every time it's turned on, which is why its waveform will not stray from the simple shape it is producing. When a machine produces the vibrations, the amplitude and frequencies stay the same; with live instruments, the constantly changing air around us, how the instrument is being played, and the mood of the player all affect the waveform and produce subtle differences even when the same notes are played.

Complex waveforms are the beauty of music. They are what produce the emotional response in the listener. The key is to translate a message that can be received and loved. (You can change the frequencies of waveforms playing the same note and add them together for a different tone/sound.)

An envelope describes how the amplitude of a note varies over its duration. The four characteristics of an envelope are attack, decay, sustain, and release.

Attack is the time it takes for a sound to build up to its full amplitude.  Decay is the time taken for the subsequent run down from the attack level to the designated sustain level.  Sustain is the main duration of a note occurring after the attack and decay until the note is released by the musician.  Release is the amount of time it takes after the musician has stopped playing for a sound to return to silence.
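As a sketch of my own, here's how those four stages can be strung together into an amplitude envelope and applied to a tone (the specific times and levels are just example values):

```python
import numpy as np

def adsr(attack, decay, sustain_level, sustain, release, sr=44100):
    """Build an ADSR amplitude envelope (times in seconds, levels from 0 to 1)."""
    a = np.linspace(0, 1, int(attack * sr), endpoint=False)             # build up to full amplitude
    d = np.linspace(1, sustain_level, int(decay * sr), endpoint=False)  # fall to the sustain level
    s = np.full(int(sustain * sr), sustain_level)                       # hold while the note is played
    r = np.linspace(sustain_level, 0, int(release * sr))                # fade back to silence
    return np.concatenate([a, d, s, r])

sr = 44100
env = adsr(attack=0.02, decay=0.1, sustain_level=0.6, sustain=0.5, release=0.3)
t = np.arange(len(env)) / sr
note = env * np.sin(2 * np.pi * 440 * t)   # shape a 440 Hz tone with the envelope
```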

 

Hearing –

There are several small organs responsible for our hearing. 

Pinna – the outer part of the ear and the first stop for a sound wave on its way to being heard. Pinna is Latin for feather. The outer ear is important: it helps us localize sound and actually filters out certain frequencies because of its shape. The way the pinna helps localize sound is directly related to its shape, which produces phase cancellations in certain circumstances. These phase cancellations, together with the amplitude/volume of a sound, tell us where the source is located. The shape of the ear also amplifies and directs a sound source through the external ear canal to the tympanic membrane (eardrum).

Sound Localization is the process of determining where a sound is located and many factors are involved such as distance, direction, time, and proximity. 

Tympanic Membrane (Eardrum) – a thin membrane at the end of the ear canal that vibrates along with the sound-wave vibrations. If the eardrum is damaged or ruptured, hearing loss can occur. The eardrum marks the start of the middle ear; after the eardrum come the smallest bones in the human body: the malleus, the incus, and the stapes.

The malleus, incus, and stapes amplify the vibrations coming from the eardrum/tympanic membrane using a lever-like action. The malleus rocks back and forth with the vibrations of the eardrum; because of its shape, it causes the incus to push on the stapes, which is connected to the cochlea.

 

The cochlea is a snail-shaped organ in the inner ear. The cochlea is coiled up, is filled with fluid, and is lined inside with tiny reed-like fibers connected to hair cells. Different sets of hair cells respond to different frequencies. Sound travels through the ear canal into the cochlea, and based on the frequencies being transmitted, the hair cells vibrate and send impulses through nerve endings to the brain, where they are interpreted.

Hair cells inside the ear die over time; when you experience a random ringing in your ear that seems to come out of nowhere, that is the sound of hair cells dying. Fortunately we have thousands of them. Humans lose hearing in the upper frequency range first. High-SPL (sound pressure level) situations can speed up hearing loss. The ringing in your ears after a concert is a sign that you have put a lot of stress on the hair cells in your ears, and being in many situations like this can result in premature hearing loss. As an audio engineer, your hearing is extremely valuable; our ears are the primary asset in our field of work.

There are clinics that make custom earplugs that reduce the decibel level of sound while still allowing you to hear clearly. These are very valuable to an audio engineer, and many use them religiously.

Hearing loss is related to two main factors: the decibel level of a sound, and the duration of exposure to it.
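As a rough illustration of how those two factors trade off, here's a sketch based on the NIOSH guideline I've seen cited (85 dBA for 8 hours, with every additional 3 dB cutting the safe time in half). I'm quoting that guideline from memory, so treat the numbers as approximate:

```python
def safe_exposure_hours(level_dba, ref_level=85.0, ref_hours=8.0, exchange_db=3.0):
    """Rough safe daily exposure time: each +3 dB halves the allowed duration."""
    return ref_hours / (2 ** ((level_dba - ref_level) / exchange_db))

for level in (85, 88, 94, 100):
    print(f"{level} dBA: about {safe_exposure_hours(level):.2f} hours")
# 85 dBA: 8.00, 88 dBA: 4.00, 94 dBA: 1.00, 100 dBA: 0.25
```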

 

Psychoacoustics is the scientific study of sound perception.

In 1933, Harvey Fletcher and Wilden A. Munson (Fletcher-Munson Curve) discovered through experimentation at Bell Laboratories that human beings perceive certain frequencies as being softer or louder than they actually are, at certain decibel levels.  This means that the frequency and decibel level of a sound have an impact on how loud our brains perceive the sound.

Our hearing is most sensitive in the 2-7 kHz range. Appropriate mixing levels are between 85 and 95 dB SPL. This is because our ears have a more accurate frequency response at higher dB levels, and 85-95 dB SPL is about as loud as we can safely go without causing damage to our hearing.

Masking is when a louder sound dominates surrounding sounds and your brain focuses on the dominant sound. This is important in controlling the movement and feel of a song.

Acoustic beats occur when two sounds with slightly different frequencies interact with each other.
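A minimal sketch of my own: mixing two sine waves a few hertz apart produces a slow pulsing (the beat) at the difference between the two frequencies.

```python
import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr            # two seconds of audio

a = np.sin(2 * np.pi * 440 * t)       # 440 Hz
b = np.sin(2 * np.pi * 444 * t)       # 444 Hz, slightly sharp
mix = a + b

beat_frequency = abs(444 - 440)       # the mix swells and fades 4 times per second
print(f"beat frequency: {beat_frequency} Hz")
```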

One of the first reproductions of sound dates back to 1877, when Thomas Edison recorded a version of "Mary Had a Little Lamb." Some reports have surfaced about an even earlier recording, from around 1860, of a French voice singing "Au Clair de la Lune."

 

Until the late 1950s, monaural or monophonic sound reproduction (listening on one speaker) was the standard. With mono it is hard to perceive the spatial elements and depth of a performance.

Stereophonic sound reproduction consists of listening to a recorded performance on two speakers.  Through multitrack recording we are able to assign sounds to different places within the stereo image.

Stereo image is defined as the perception of depth, space, and the ability to locate individual elements of a mix within a two-channel stereo recording. Stereo was invented not long after sound reproduction itself but wasn't adopted as an industry standard for almost 80 years.
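One common way to place a sound within the stereo image is panning. The sketch below is my own example using the widely used constant-power (sine/cosine) pan law; the signal and settings are made up for illustration:

```python
import numpy as np

def pan_mono_to_stereo(mono, position):
    """Place a mono signal in the stereo image.

    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    A constant-power (sine/cosine) pan law keeps the perceived loudness
    roughly even as the sound moves across the image.
    """
    angle = (position + 1) * np.pi / 4      # map -1..1 onto 0..pi/2
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=0)

sr = 44100
t = np.arange(sr) / sr
guitar = np.sin(2 * np.pi * 196 * t)                 # stand-in for a recorded part
stereo = pan_mono_to_stereo(guitar, position=-0.5)   # place it a bit left of center
```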

 
