In this part, I will present some information on sound recording. The content and depth are tailored to what you will need to know for the next parts of this series on “The Sound of Music”. They will cover how recorded (stored) music can be damaged, the problems this damage causes during reproduction, and how I deal with it when dusting.
Firstly, I would like to mention a term which came up in the mid-1950s, the time when stereophony was introduced to the market, emphasising the quality of music recording, storage and reproduction.
High Fidelity, HiFi or hi-fi
Fidelity? You may wonder what the quality of sound reproduction has to do with being faithful to your husband, wife, or sexual partner. The word originates from the Latin fidēlis, meaning “faithful or loyal”. It also means “accuracy in details” or “exactness”. Could one say… a truthful recording?
Initially, manufacturers introduced this term, giving the buyer a way to distinguish between sound equipment with high-quality reproduction and those with lesser quality. Later it was used by home stereo listeners.
The term was coined in the English-speaking world. However, in the mid-1960s the German standard DIN 45500 established the minimum requirements equipment had to meet in order to carry this predicate. It included test procedures, explanations of terms, terminals, standardisation of inputs and outputs, units of measure, markings, expressions and labelling.
It was withdrawn in 1999 because most equipment outperformed those requirements by far, due to progress in semiconductor technology. Still, most equipment continued to follow parts of the standard, such as input and output signal strengths, labelling and connectors.
Ideally, high-fidelity equipment causes minimal amounts of noise and sound distortion and provides an accurate frequency response. I will come back to these terms in a minute, before I begin explaining the recording process.
Audible sound ranges from approximately 30 Hz to about 16 kHz. We know there are harmonic frequencies outside the hearing range which, through superimposition and interference, change the shape of sound waves within the hearing range. For this reason, modern equipment processes sound in the range from 10 Hz up to 30 or even 40 kHz.
All audio equipment transforms or stores sound waves, often called the signal. A signal is an amount of information, in our case sound information. A microphone, for example, transforms the acoustic signal (air pressure variation) into an electrical signal.
Equipment with a linear transfer function would treat all signals the same, independently of their frequency; there would be no distortion. This means there is a direct proportion between the signal at the input of the device and the processed signal at the output, at least in the range from 10 Hz to 30 kHz. Unfortunately, this is not the case.
I am showing you an idealised power gain curve of an amplifier. A real curve is not as horizontal and has many smaller (less than 3 dB) ups and downs.
On its vertical axis, the graph displays the reduction of power in dB; horizontally, it shows the frequency on a logarithmic scale.
The power-frequency line is linear from about 40 Hz to about 4 kHz. It is difficult to determine the exact points where linearity ends. For practical reasons, the endpoints of linearity are defined as the frequencies where the power has dropped by 3 dB.
The frequencies where this happens are called roll-off frequencies. This means that without any other changes to the equipment (settings, for example), but purely caused by the variation of the signal’s frequency, the output signal strength has dropped by 3 dB. In picture-1 this is at 10 Hz at the low end and 30 kHz at the high end.
The range of linearity, as defined in the above way, is expressed as: “bandwidth from 10Hz to 30kHz”.
The graph also shows different roll-off characteristics at high frequencies (where the curve is flatter) and at low frequencies; this difference is typical.
To express the steepness of the roll-off curve numerically (how quickly the curve drops off), one measures the drop in decibels per octave. Remember, doubling or halving the frequency is equivalent to one octave. I have marked in the graph the distance between 10 Hz and 20 Hz, which is one octave. The power drops in this range by 2 dB. The roll-off would therefore be about 2 dB/oct.
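The decibel arithmetic above can be sketched in a few lines of Python. This is my own illustration of the standard formulas, not something taken from any measurement:

```python
import math

def db_change(p_ref, p):
    """Power change in decibels relative to a reference power."""
    return 10 * math.log10(p / p_ref)

def rolloff_db_per_octave(f1, f2, db_at_f1, db_at_f2):
    """Steepness of a roll-off: dB change per octave (one octave = doubling of frequency)."""
    octaves = math.log2(f2 / f1)
    return (db_at_f2 - db_at_f1) / octaves

# Half the power corresponds to roughly -3 dB, the usual roll-off point:
print(round(db_change(1.0, 0.5), 2))             # -3.01

# One octave between 10 Hz and 20 Hz, with a 2 dB drop at the low end:
print(rolloff_db_per_octave(10, 20, -2.0, 0.0))  # 2.0 dB/oct
```

The same two functions cover both ideas in this section: the -3 dB definition of the roll-off frequencies, and the dB-per-octave steepness of the curve beyond them.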
Is it easy to design products with good linearity? Fussing about all this indicates that it is not. Achieving it needs dedication. Improvement beyond acceptable requirements would not only increase the cost of manufacture exponentially but also would make the product more vulnerable to other influences, like changes in temperature, shock or vibration.
Therefore, compromises of this kind have to be accepted, accounted for and considered. The means determine the end. Horses for courses. What are you recording? A rock concert or a violin quartet?
Let me state
“The intrinsic law of sound recording and playback.”
What you put in is the maximum you can get out.
Realistically, it is always less. This sounds logical; however, I know some people who sincerely believe they can get out more.
The signal-to-noise ratio is an expression you will hear from now on quite regularly. Since it expresses a ratio, its unit of measure is the decibel, dB. It is often abbreviated SNR, and it compares a signal level with the level of background noise.
The graph shows that the most significant section of background noise is above 1 kHz, at a considerable level of 12 dB, which is hard to compensate for on a recording without seriously compromising the sound signal.
Where does noise come from? Naturally, there is some surrounding sound at the time of the recording, even when all caution is taken to minimise it. Most rooms have echoes, which in most cases are undesired.
Often they are sounds we are so used to that we don’t notice them at the time of recording, like the sound of the wind caused by air-conditioning systems. Later, you can hear it as a faint whistle all through your recording. You have to take the blame, grovel, and hope they agree to do it again.
Electrical equipment, and especially cables, act like antennae and absorb electromagnetic waves from the environment. They are everywhere around us. Also, there is white noise, which you can hear on a radio if you tune it between stations. The quality of a signal is indicated by its signal-to-noise ratio (SNR). One wants to keep this ratio as high as possible.
All electrical equipment produces noise, due to its design. Here it is expressed in the ratio between the maximum sound level and the noise that comes with it. For example, a HiFi amplifier would have an SNR of 89 to 92 dB. This means the sound signal in this amplifier is about 32,000 times stronger than the background noise (6 dB means one doubling of amplitude, and 90 dB corresponds to about 15 doublings, 2^15 ≈ 32,768).
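To see how an SNR figure in dB translates into a linear ratio, here is a small sketch using the usual decibel definitions (the 90 dB value is just the example figure from above):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels, computed from two power levels."""
    return 10 * math.log10(signal_power / noise_power)

def amplitude_ratio(db):
    """Linear amplitude ratio for a dB value: every 6 dB is roughly one doubling."""
    return 10 ** (db / 20)

# 90 dB SNR expressed as a linear amplitude ratio:
print(round(amplitude_ratio(90)))                 # 31623
print(round(math.log2(amplitude_ratio(90)), 1))   # 14.9, i.e. about 15 doublings
```

Note the factor of 20 (not 10) when converting dB to an amplitude ratio; the factor 10 applies to power ratios.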
These chapters got quite long. Still, we had to cover those topics. They apply to all acoustic equipment and processes, including microphones and speakers. I will talk about them in their individual chapters.
In German, the person who works in this field is called “Tonmeister”, directly translated ‘sound-master’ = sound engineer. These are people who have studied electronics and specialised in acoustics. Often they have studied music as well, or are at least musicians with some level of proficiency. Why? Because they have to be able to read music, and to communicate efficiently with musicians, one needs to be an insider.
I found this excellent picture on the internet, what a fantastic job. When you click on it, you will be taken to its source.
It shows what I am about to describe. It also shows, in the right-hand corner, the production of a single record, which I will talk about in more detail.
Sound recording starts with the instruments, where they are located in reference to each other and the microphones. With a smaller group of instruments, not always but often, each instrument will have its own microphone, like in picture-3. If the group is larger, there would be one dedicated microphone for each type of instrument.
This has its reasons. Instruments create their sounds differently (rubbing or plucking a string, blowing a horn, or drumming), and the different design styles of pickup systems inside microphones suit one style of instrument better than another.
The disadvantage is that the recording is taken before the sounds can interfere with each other. In this case, the sound engineer has to arrange the blending, the interference of the instruments, during but mainly after the recording.
If this is not well done, the recording sounds dull and not “together”. Most unfortunately, some sound masters are not this masterly, and they generate an echo electronically on top of the original sound. Just writing about it makes me cringe.
For the recording of large orchestras and choirs, there may be two or four microphones suspended above them for the stereo effect and another two or four to take the acoustic characteristics of the recording venue.
In this case, the blending (sound interference) of instruments intended by the composer is picked up. The sound engineer can add room characteristics, especially at the end of a musical phrase, just before and into a pause. This makes the music crisper during playback.
And no matter what, the first step is to mechanically transform the sound pressure waves into an equivalent electrical wave, the signal, as we have seen in part 1.
Why into an electric system? Good question. The Edison style recording system pneumatically amplified (through the horn) the pressure waves and they were directly “scratched” onto the recording surface. Today, electricity is a simpler way within our known scope of technology to transport, amplify and store any signal information.
Inside every microphone is an air-pressure-to-electricity transducer. Depending on their nature (style of transducer) and design (casing and arrangement of components), they are quite non-linear in their frequency response. The electrical signal strength of such transducers is very tiny, in the range of millivolts (a “normal” battery has 1.5 volts; milli means 1/1000).
Nowadays, microphones include small amplifiers. These correct any non-linearity caused by the transducers; they increase the signal strength to optimise the signal-to-noise ratio during transport over the cable, and they standardise the signal so it matches the input requirements of any amplifier.
The amplifier is the next component in the recording process. Here, power is added to the signal; the signal strength is increased. Often this increase is called the power gain from signal input to output, again a ratio. And its unit of measure? Correct, the decibel. I told you, this unit sneaks around everywhere in music. A reasonable amplifier provides an input-output power ratio of 90 dB.
As we already know, the main criteria determining the quality of an amplifier are its linearity within its bandwidth, the roll-off frequencies determining that bandwidth, and the signal-to-noise ratio, SNR, it adds to the signal.
In simple recording systems, the mixer is built in. Better systems have a separate mixer with many inputs, and thus channels, and generally a stereo output for storage. In a recording studio, where the original music is stored in a multi-channel format, the mixing desk can have 20 to 40 channels. They are recorded individually (especially when the signals are stored digitally) or grouped.
Since mixers operate at a lower power level, it is simpler to keep their noise low, so they have a better SNR and better linearity than power equipment. Also, many storage systems do not need a power amplifier.
Mixing prior to power amplification keeps the design of the amplifier simple, which makes it easier to achieve higher quality.
Once all the sound processing is done, one needs to store the result somewhere, often called “mass storage”. There are two significantly different types of storage, analogue and digital.
An analogue stored signal is proportional to the wave created by an instrument or device. It may be transformed into another energy system or amplified; still, it follows the original signal. It also means the signal shape is continuous.
In sound terms: the wave signal is proportional to the sound wave created by the musical instrument. It may be electrical, recorded magnetically, or amplified, and I repeat: still, it follows the original signal.
Records and tapes record and store the signal in its analogue form. In the picture, you can see the groove and the needle (stylus) and the jagged sides of the groove, caused by the left and right channel of the stereo sound signal.
A digital signal is an analogue signal that has been transformed via an algorithm, a mathematical procedure. During digital encoding, the signal contour is converted into ‘0s’ and ‘1s’.
The picture shows the markings on a CD. There is no apparent correlation between the digital signal and the analogue signal. It needs to be converted back into analogue form before it can be made audible.
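A minimal sketch of this encoding step, using CD-style values (44.1 kHz sampling rate, 16-bit resolution) purely for illustration; real digital audio also involves filtering and error correction, which I am leaving out:

```python
import math

SAMPLE_RATE = 44100               # CD sampling rate, samples per second
BITS = 16                         # CD resolution per sample
MAX_LEVEL = 2 ** (BITS - 1) - 1   # 32767, the largest 16-bit sample value

def sample_sine(freq_hz, n_samples):
    """Sample a pure tone and quantise it to 16-bit integers, as CD audio does."""
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                          # time of the n-th sample
        value = math.sin(2 * math.pi * freq_hz * t)  # continuous, analogue shape
        samples.append(round(value * MAX_LEVEL))     # discrete steps: the '0s and 1s'
    return samples

print(sample_sine(440, 5))  # first five samples of a 440 Hz tone
```

Each printed integer is one measurement of the wave; stored as binary numbers, they no longer resemble the original contour until a digital-to-analogue converter turns them back into a smooth signal.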
CDs, computers, memory sticks, mobile phones and mp3 players store the information digitally.
Here is the basic schematic of a CD player. In the middle, you can see the digital-to-analogue converter that makes the sound information audible.
To summarise for today:
Parameters to consider for real, HiFi recording are:
- Linearity and bandwidth of the equipment
- Signal-to-noise ratio of the signal
- Number and placement of microphones
- Mixing and preparation of the recording for storage
- Recording media.
In the next part, I will talk about the production of records and how this affects sound reproduction.
PS: A thought on Analogue versus Digital
Albert E. used to say: “I don’t know what kind of weapons will be used in WW3. However, I know for sure, WW4 will be fought with sticks and stones.”
This appendix is about the storage of information, of the world’s history and knowledge, for future generations.
Since we began to write, we have used some sort of letters, hieroglyphs or symbols, visible to the naked eye.
Following generations, who found tablets, scrolls, bamboo shingles and papyrus had something to look at, which would make them wonder what it could mean. Some fired clay tablets survived almost 5000 years.
And with brain power alone, they would eventually be able to decipher it. Music on a record can be seen with a loupe.
It is within the capacity of a pre-electricity civilisation to come up with a method to make it audible again.
Since there is a direct relationship between the information and its appearance on the medium, the information is stored in an analogue form.
Modern information storage is mostly digital. It requires an algorithm processed by a computer to convert, store and read the information, and the same technology to make it readable and audible again. All processes rely on electricity. How many years would it take for a post-WW4 civilisation to reach this point again? Since the end of the Neolithic age, 8000 years have passed.
Digital storage methods all have a surprisingly short use-by date.
Ordinary CDs start getting corrupted within 5 years. They sell long-life CDs with a guaranteed life of 99 years. Why 99? This has not been proven, yet.
Information on semiconductor storage (memory sticks) starts getting corrupted after 5 years. Magnetic storage on hard-drive technology lasts 2 to 5 years if left alone. Data can only be stored reliably when it is rewritten in two-year cycles. Click on the link for more info.
Another essential point: “No electricity, no rewriting, loss of data!”
When I studied, 50 years ago, we already talked about and did experiments on holographic storage of data. I have not heard much about the development or release of products utilising this technology. This does not mean they don’t exist.
Follow the above link for more detailed information; they say 50 to 100 years before degradation starts. The optical devices to write and read require laser technology.
If we want to leave something behind for future generations, someone has to put a thinking cap on, quick smart. Recommended reading: A Canticle for Leibowitz (this is a link)
05 February 2015