Wednesday, 31 October 2012

Lecture 2: Digital Sound Processing

Digital Sound Processing

Digital Sound Processing System in Stages:
  • Signal In
  • Band Limiting
  • Analogue to Digital Conversion
  • Digital Signal Processing Operations
  • Digital to Analogue Conversion
  • Smoothing
  • Signal Out
Signal Processing is an area of electrical and systems engineering, together with applied mathematics, which analyses and performs operations on signals in either discrete or continuous time in order to extract useful information from those signals.

Signals are analogue or digital, and electrically represent a variation of a physical quantity in time or space.

Electronic Filters

Electronic filters are electronic circuits which perform signal processing functions. There are two main reasons for this:
  • Remove unwanted frequency components from the signal
  • Enhance wanted frequency components
Electronic filters can be:

  • Passive (a component that consumes but does not produce energy, incapable of power gain) or Active (a type of analogue electrical filter, recognised by the use of one or more active components, such as voltage amplifiers or buffers)
  • Analogue or digital
  • High Pass Filter- a device that passes high frequencies and reduces the amplitude of frequencies below its cut-off point
  • Low pass filter- passes low frequency signals but reduces the amplitude of signals above its cut-off frequency; it is the opposite of a high pass filter and is also known as a high-cut filter or a treble cut filter when used in audio applications
  • A band pass filter- a combination of high and low pass filters, passing only frequencies within a specific range.
  • A band stop filter/band rejection filter- passes most frequencies in their original state but attenuates those within a specific range
  • Discrete time or continuous time
  • Linear or non-linear
  • Infinite Impulse Response (IIR)- filters whose impulse response continues over an infinite length of time, or Finite Impulse Response (FIR)- filters whose impulse response settles to zero in a finite time
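To make the Finite Impulse Response idea concrete, here is a small sketch in Python (my own illustration, not from the lecture) of a moving-average FIR low-pass filter, whose impulse response is finite by construction:

```python
def fir_moving_average(samples, taps=4):
    """A minimal FIR low-pass filter: each output sample is the mean
    of the last `taps` input samples."""
    out = []
    for n in range(len(samples)):
        window = samples[max(0, n - taps + 1):n + 1]
        out.append(sum(window) / len(window))
    return out

# An impulse goes in, and the response settles to zero after `taps`
# samples: this finite response is exactly what "FIR" refers to.
impulse = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
response = fir_moving_average(impulse)
```

An IIR filter, by contrast, feeds its own output back, so a single impulse can keep producing (ever smaller) output values indefinitely.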
Pitch

Pitch is a perceptual concept which allows the human ear to order sounds on a frequency-related scale. High and low pitches are compared in relation to musical melodies, which require sound with a frequency that is clear and stable enough to be heard as something more substantial than plain noise. Pitch is considered a major auditory attribute of musical tones, along with duration, loudness and timbre.

Digital Signal Processing System Requirements:

  • Input and output filtering
  • Conversion from analogue to digital and vice versa
  • Digital processing unit
Why choose Digital Signal Processing?

1. Precision

Although theoretically digital signal processing is limited only by the conversion processes at input and output (analogue to digital and digital to analogue), in practice the word length (number of bits) and sampling rate (sampling frequency) limit the achievable precision. Ever increasing operating speeds and word lengths are widening the areas of application.

2. Robustness

Logic level noise margins benefit digital systems, making them less susceptible to electrical noise and component tolerance variations than analogue systems. Vitally, the adjustments needed in complex systems to compensate for electrical drift and component ageing are virtually eliminated.

Electrical noise is a random fluctuation in an electrical signal. There is a large scope for noise generated by electronic devices as there are various different effects which can lead to it being produced.

Component tolerance variations may be specified as a number of things:

  • Factor/Percentage from the nominal value
  • Maximum deviation from the nominal value
  • Explicit range of allowed values
  • Implied by the numeric accuracy of the nominal value
Electrical drift is when an undesired progressive frequency change occurs. The main causes are the ageing of components and changes in the environment. It can happen in either direction: the frequency can increase or decrease.


3. Flexibility

The programmability of digital signal processing allows processing operations to be expanded and upgraded without significant hardware changes. It is possible for a user to construct a practical system with suitable characteristics, such as time-varying behaviour, in order to allow adaptation.


Sound Card Architecture



  • Spatial anti-aliasing is the technique that helps minimise aliasing when representing a high-resolution image at a lower resolution than its original state. This means that if you are attempting to reduce a picture's file size and display it, the detrimental effects on the picture will be as limited as possible. It is used in many applications, including digital photography and computer graphics. Anti-aliasing is often applied prior to converting from analogue to digital in order to remove the out-of-band component of the signal
Sampled Data Reconstruction Filters

  • The input of Analogue to Digital Conversion requires a low-pass analogue electronic filter named the "anti-aliasing filter" as described by the sampling theorem.
  • The sampled input signal must be bandlimited to prevent aliasing, which is where waves of a higher frequency are recorded as a lower frequency
  • Likewise, a low-pass filter is required in Digital to Analogue Conversion, in this case to smooth the output and remove the high-frequency images introduced by the conversion
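The aliasing effect described above can be demonstrated numerically. The sketch below (my own illustration; the 30 kHz tone and 44.1 kHz rate are chosen for convenience) shows that samples of a tone above the Nyquist limit are indistinguishable, up to sign, from samples of a lower alias frequency:

```python
import math

fs = 44100              # sampling rate in Hz (as used by sound cards)
f_high = 30000          # a tone above the Nyquist limit (fs / 2 = 22050 Hz)
f_alias = fs - f_high   # the lower frequency it aliases to: 14100 Hz

# Sample both tones at the same sampling instants.
high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(10)]
alias = [math.sin(2 * math.pi * f_alias * n / fs) for n in range(10)]

# The sample magnitudes coincide (only the sign flips, since
# sin(2*pi*n - x) = -sin(x)), so once sampled, the 30 kHz tone is
# indistinguishable from a 14.1 kHz tone: it has been aliased.
aliased = all(abs(abs(h) - abs(a)) < 1e-9 for h, a in zip(high, alias))
```

This is exactly why the anti-aliasing filter must remove such components before the converter ever sees them.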
Implementation

  • The ideal filter's impulse response extends infinitely in time, in both directions, so a practical filter is required; the trade-off is that a practical filter has a non-flat frequency response
  • Some systems have both an anti-aliasing filter and a reconstruction filter. They are often identically designed, as input and output are both sampled at the same frequency, 44.1 kHz.
  • Both attempt to block sounds above 22 kHz and, as far as possible, pass sounds below 20 kHz.
  • Theoretically, the output of Digital to Analogue conversion is a series of impulses, but it is better described as a series of stair steps.
  • The low pass reconstruction filter evens out the gaps between the metaphorical stairs, removing the harmonics above the required limit
Sound Cards

  • A poorer quality sound card can result in limitations in sampling rates; this can be particularly clear in devices such as Notebook PCs
  • Most modern sound cards have a 16 bit word length, meaning they can represent 2^16 (65,536) different signal levels within the input voltage range
  • Quantisation step size can be worked out by dividing the voltage range by the number of levels.
  • Q = 10/65536 ≈ 0.15 mV
  • The above is a calculation for a range of 10V
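The step-size calculation above can be checked with a quick sketch in Python (using the 10 V range and 16 bit word length from the example):

```python
word_length = 16
levels = 2 ** word_length      # 65536 distinct signal levels
voltage_range = 10.0           # the 10 V range from the example

# Quantisation step size: voltage range divided by the number of levels.
q = voltage_range / levels     # ~0.000153 V, i.e. roughly 0.15 mV
```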





Thursday, 25 October 2012

Lab 1: Sound Questions

Welcome to my second blog on the topic of "Sound" for my university module "Audio, Image and Video Processing". My first blog introduced the major principles of sound and was largely theoretical. In this blog, I will attempt to showcase some practical examples of what was covered in the first lecture in question and answer form. For usability purposes, please find the questions noted in black, whereas the answers will be noted in red.


In a recording room an acoustic wave was measured to have a frequency of 1KHz. What would its wavelength in cm be?

As explained in my first blog, there is a formula for working out wavelength. The above diagram shows the three way calculation method between Velocity, Frequency and Wavelength. Velocity is worked out by multiplying frequency and wavelength and hence is stationed above those in the diagram. This means in turn, working out wavelength and frequency would be done by dividing velocity by the other two elements.

In this particular calculation, the wavelength is the required product. The information in the question highlights that the frequency is 1KHz or 1000Hz depending on scaling. For the purpose of calculating, I am going to refer to it in Hz form. The question also specifies that the sound is travelling through a room, more specifically through air. The speed of sound in air is roughly 333 metres per second. Therefore, to acquire the wavelength, you would divide 333 by 1000, which tells you the wavelength is roughly one third of a metre. I understand this may seem quite contrived for a simple calculation, but I was aiming to cover all the particulars of working out wavelength. Noted below is a more concise version of the calculation.

Velocity = 333 m/s Frequency = 1000Hz (1KHz) Wavelength = ?

Wavelength = Velocity / Frequency
Wavelength = 333 / 1000
Wavelength = 0.33 m (33 cm)
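As a quick sanity check of the arithmetic above, in Python (using the same rounded 333 m/s figure for the speed of sound in air):

```python
velocity = 333.0     # rough speed of sound in air, m/s
frequency = 1000.0   # the 1 kHz tone from the question

wavelength = velocity / frequency   # 0.333 m, roughly a third of a metre
```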

If a violinist is tuning to concert pitch in the usual manner to a tuning fork what is the likely wavelength of the sound from the violinist if she is playing an A note along with sound from the pitch fork?

Again, working out the wavelength involves the equation where velocity is divided by the frequency, like the above calculation the velocity of sound in air is roughly 333 metres per second, but this question would require more research, so I visited a website http://liutaiomottola.com/formulae/freqtab.htm to find out more.

This website was able to tell me both the frequency and the wavelength of the A note, based on a velocity of sound of 340.29 metres per second, which, although close to the rough 333 metres per second used above, gives a slightly different answer.


Going by these conditions, and with concert pitch A (A4) at a frequency of 440Hz, the formula for calculating the wavelength of the A note would be as follows:


Wavelength = Velocity / Frequency

Wavelength = 340.29 / 440
Wavelength = 0.77 metres


How long does it take a short 1KHz pulse of sound to travel 20m versus a 10Hz pulse?


They will both take the same amount of time to travel, as Hz or KHz is the unit used to measure frequency and has no effect on the speed of sound. At roughly 333 m/s, each pulse takes 20 / 333 ≈ 0.06 seconds to cover the 20m.


Why are decibels used in the measurement of relative loudness of acoustic waves?


Sound is measured in decibels mainly so that as large a range of values as possible can be represented, discussed and graphed. Decibels measure the sound intensity level and are calculated using a logarithm, as the human ear perceives loudness on a roughly logarithmic scale.
 
If an acoustic wave travelling along a work bench has a wavelength of 3.33m, what will its frequency be? Why do you suppose that it is easier for this type of wave to travel through solid materials?

Upon the presumption that the bench is made of steel the calculation would be as follows:

Frequency = Velocity / Wavelength
Frequency = 5000/3.33
Frequency = 1501.50 Hz

Sound travels quicker through solid materials as the molecules are closer together than they are in other mediums.

Sketch a sine wave accurately of amplitude 10, frequency 20Hz. Your sketch should show two complete cycles of wave. What is the duration of one cycle? What is the relationship between the frequency and the duration of one cycle?
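I have not drawn the sketch here, but the numerical side of the question can be shown in Python (my own sketch; the 101 sample points are an arbitrary choice). The duration of one cycle, the period, is the reciprocal of the frequency, so a 20Hz wave completes one cycle every 0.05 seconds:

```python
import math

amplitude = 10.0
frequency = 20.0           # Hz
period = 1.0 / frequency   # duration of one cycle: 0.05 seconds

# Sample two complete cycles at 101 points, as the sketch would show.
duration = 2 * period
points = [(n * duration / 100,
           amplitude * math.sin(2 * math.pi * frequency * n * duration / 100))
          for n in range(101)]

# The wave returns to zero at the end of each complete cycle.
t_end, y_end = points[-1]
```

Frequency and cycle duration are reciprocals of each other: doubling the frequency halves the duration of one cycle.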

Research the topic “Standing Waves”. Write a detailed note explaining the term and give an example of this that occurs in real life.

A standing wave, also known as a stationary wave, is a wave which remains in a constant position. This type of wave can occur in one of two situations: firstly, if the wave is moving in the opposite direction from the medium, and secondly, if two waves travelling in opposite directions interfere with each other.

If you were looking for a suitable comparison of the two, the first occurrence (moving medium) would be seen often in river rapids, whereas the second occurrence (opposite directions intersecting) would be more likely to occur in open ocean waves.

What is meant by terms constructive and destructive interference?

Constructive Interference is when two waves interfere with each other and the shape of the medium is determined by the amplitudes of the separate waves. The shape of the resulting wave is the sum of the amplitudes of the waves which intersect. At areas where the waves do not intersect, they take the shape of the original wave.

Destructive interference is the opposite: when two waves meet out of phase, they cancel each other out. The shape is determined by subtracting one wave's amplitude from the other's.
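Both cases can be illustrated with a short Python sketch (my own example; the eight sample points and unit amplitudes are arbitrary choices): adding two equal waves in phase doubles the peak amplitude, while adding them in antiphase cancels it:

```python
import math

def superpose(phase_shift, samples=8):
    """Add a sine wave to a phase-shifted copy of itself at a few points."""
    return [math.sin(2 * math.pi * n / samples) +
            math.sin(2 * math.pi * n / samples + phase_shift)
            for n in range(samples)]

constructive = superpose(0.0)        # in phase: amplitudes add
destructive = superpose(math.pi)     # in antiphase: amplitudes cancel

peak_constructive = max(abs(v) for v in constructive)   # doubles to ~2
peak_destructive = max(abs(v) for v in destructive)     # cancels to ~0
```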

What aspect of an acoustic wave determines its loudness?

Amplitude determines how loud the sound from an acoustic wave is. The higher the amplitude, the louder the sound.

Does sound travel under water? If so what effect does the water have? 

Yes. It travels quicker than in air due to the molecules being more compact under water. In water, sound travels at roughly 1500 m/s, whereas in air it is only 333 m/s.

Thursday, 18 October 2012

Lecture 1: Introduction to Sound

Blog One: Sound

Welcome to my blog on Image, Audio and Video Processing. This blog is completed as coursework for my second year university module on the subject, which I am undertaking at the University of the West of Scotland. In this series of blogs I intend to cover various aspects of the subject, starting with this week's blog on Sound.

Sound is transmitted as a wave through a type of medium - normally air, water or metal.  There are two main categories of waves which I will go into detail about- transverse waves and longitudinal waves. Waves will be categorised accordingly depending on the direction of the displacement against the direction of the wave.

Sound Waves- Types:


The best example of a transverse wave is a disturbance in water, creating ripples. In a transverse wave the vibrations of the water molecules are at right angles to the direction of travel of the wave, moving out from the disturbance.

Longitudinal waves occur if the vibration is parallel to the direction of the motion. The sound wave moves out from the area of disturbance and the individual air molecules move parallel in conjunction with the direction of the wave. The molecules pass energy to the molecules beside them, however as the energy is passed, the molecules remain mainly in the same position.

Compression of a sound wave results in an increase in the density of the medium. The opposite of this is rarefaction, where density is decreased. The process can easily be represented graphically by using a spring as an example. See below to see the difference as a direct result of compression or rarefaction.

Sound Calculations

There are three main components in calculations relating to sound, velocity, wavelength and frequency.

Velocity is essentially the speed of sound, although the terminology is slightly different as speed only describes the pace that an object is moving at, whereas velocity also specifies the direction of the movement. Velocity differs for each of the previously mentioned mediums. In air, velocity is roughly 333 metres per second, in steel it travels just under 5000 metres per second and in water it is 1500 metres per second.

Wavelength is the distance between two successive identical points on a wave. To simplify this, sound is a succession of waves, all of which have the same shape, and wavelength is the distance taken for an individual wave to be completed.

Frequency is the number of vibrations per second; to simplify this, it is the number of cycles (individual waves) that are completed each second. It is measured in Hertz (Hz) or KiloHertz (KHz).

In layman's terms you could say that you were calculating Speed (Velocity), Distance (Wavelength) and Time (Frequency), but it is more complex than that!

If you have two of these elements, you can work out the third, as shown in the diagram below:


Velocity equals frequency multiplied by wavelength.
Frequency equals velocity divided by wavelength.
Wavelength equals velocity divided by frequency.

An example of a calculation is as follows:

Work out the wavelength of a 1KHz tone going through water.

Velocity in water = 1500 metres per second
Wavelength = Velocity / Frequency
= 1500 / 1000
= 1.5 metres
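The three rearrangements can be wrapped up as small helper functions; a sketch in Python, reusing the worked example above:

```python
def velocity(frequency, wavelength):
    """Velocity = frequency x wavelength."""
    return frequency * wavelength

def frequency(velocity, wavelength):
    """Frequency = velocity / wavelength."""
    return velocity / wavelength

def wavelength(velocity, frequency):
    """Wavelength = velocity / frequency."""
    return velocity / frequency

# The worked example: a 1 kHz tone travelling through water at 1500 m/s.
water_wavelength = wavelength(1500.0, 1000.0)   # 1.5 metres
```

Given any two of the three quantities, the third follows from the matching function.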

Sound Particulars


In sound, a standing wave, also known as a stationary wave is a wave which remains in a constant position. This constant position is achieved either because the wave and the medium are travelling in separate directions or it can occur when two waves going in opposing directions interfere with each other.

In a wave, a node is a point of minimum amplitude and an anti-node is a point of maximum amplitude; for the fundamental mode, the nodes are at either side and the anti-node is in the middle. A standing wave can be supported in a room with nodes situated at opposing walls.

As an example of the difference between the two, a standing wave with a moving medium could occur in fast-flowing river rapids or tidal currents, whereas a standing wave which consists of two interfering waves would be more likely to occur in open ocean waves.

A sound wave produces a basic tone, which is known as the fundamental tone. A harmonic is an integer multiple of the fundamental tone. For example, if a sound consisted of components at 500Hz and 2KHz, the 2KHz component would be the fourth harmonic of the 500Hz fundamental.

In simplistic terms, the amplitude is the distance between zero and the highest point of the wave. This is calculated in one of three ways - the energy involved, the distance travelled by the air molecules, or the pressure difference between the compression and rarefaction. Although it can be expressed and understood as the distance between zero and the highest point of the wave, by definition it is the vertical distance between the extrema of the curve and the equilibrium value.

Sound Measurement

The best method to represent sound intensity (power) level is through decibels, as they can represent a large range of values. The level is defined through a logarithm of the power ratio: decibels = 10 log10(P/P0), where P0 is a reference power.

This formula demonstrates that multiplying the sound intensity by ten results in an additional 10 decibels of sound. If there was a three times increase over the reference power, the level would be 10log10(3*P0/P0) = 10log10(3), roughly 4.8 decibels.
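The logarithmic relationship can be checked numerically with a short sketch (P0 here is an assumed reference power of 1):

```python
import math

def intensity_level_db(power, reference_power=1.0):
    """Sound intensity level in decibels: 10 * log10(P / P0)."""
    return 10 * math.log10(power / reference_power)

tenfold = intensity_level_db(10.0)    # a tenfold power increase adds 10 dB
threefold = intensity_level_db(3.0)   # a threefold increase adds ~4.8 dB
```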

In order to make this more practical and easier to relate to, below I have included a diagram which on the left hand side states which sounds occur at certain levels of decibels and on the right hand side depicts how tolerable the human ear is to those sounds. It would appear a live rock band is the loudest common sound (measured in decibels) that a human is able to tolerate, occurring at 130 dB. Between 150 and 160 decibels, sound is no longer tolerable by a human, causing pain and placing a great burden on the ears.



The Inverse Square Law states that a physical quantity or intensity is inversely proportional to the square of the distance from its source (1/R squared). Below is an example of measuring the difference between the sound intensity at two separate distances.


  • If the distance was two metres, the sound intensity in the air would be 1/2 squared, which is 1/4
  • If the distance was four metres, the sound intensity in the air would be 1/4 squared, which is 1/16
  • So comparing the two, sound at two metres from its origin is four times more intense than sound at four metres from its origin
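The bullet points above can be reproduced with a quick sketch:

```python
def relative_intensity(distance):
    """Inverse Square Law: intensity falls off as 1 / R squared."""
    return 1.0 / distance ** 2

at_two_metres = relative_intensity(2)     # 1/4
at_four_metres = relative_intensity(4)    # 1/16

# Sound at two metres is four times as intense as at four metres.
ratio = at_two_metres / at_four_metres
```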
An echo is defined as the perceived reflection of sound from a surface. This arrives at the listener some time after the initial, direct sound. Echoes can occur from walls, in tunnels and in wells, for example. The time delay can be worked out as the extra distance travelled divided by the speed of sound. Echo effects first appeared in the music industry in the 1950s and have evolved significantly, with desired echo effects in music today often being achieved by electronic or digital circuitry.
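The time-delay calculation can be sketched as follows (the 10 m wall is a hypothetical example of my own):

```python
speed_of_sound = 333.0   # m/s in air, the rough figure used in this blog

def echo_delay(extra_distance):
    """Echo time delay: extra path length divided by the speed of sound."""
    return extra_distance / speed_of_sound

# A hypothetical wall 10 m away: the echo travels 20 m extra (there and back).
delay = echo_delay(20.0)   # ~0.06 seconds
```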

Reverberation is the process which allows copious echoes to build up and transmit before decaying upon the sound being absorbed by the medium creating the echo, i.e. walls or air. Reverberation can occur as a single reflection or as multiple reflections.

Sound graphs can depict time history, showing amplitude against time, and spectrum, showing amplitude against frequency.