
Signals, noise, digital errors ... all that stuff.

(SORRY, THIS POST CURRENTLY "INCOMPLETE", WILL FINISH AS/WHEN I'VE TIME, and then it will still need "tidying up"!).

(Like my other "guides", this is attempting something not otherwise found on the Internet (there would be no point in duplicating what's already been done!). So, it's pretty much an "experiment" ..... ).

Receiving any digital tv is very much a "practical thing"! (First, decide if it's possible where you live, then buy an aerial or dish (maybe with motor), cable, receiver, find the channels, maybe program a CAM/card, if using a computer then configure receiver card, PVR software, etc ..... ).

For achieving which, any knowledge of signals, coding, error correction, etc is ...... entirely irrelevant!

But, it's still nice to know (very roughly, in a qualitative way!) how such things work, if only to satisfy intellectual curiosity. Unfortunately, the exact details are highly mathematical, and a "challenge" even to professional engineers! In limited cases, where some details are accessible - eg, details of various error correction systems as described on websites - they're still difficult to follow, needing far too much time and effort to get anywhere.

But, I think it's possible to describe the main results, in a mostly qualitative understandable way, without "too much" maths ....... (I'll include some fairly simple equations, but only where these are quite easy to follow, and - importantly - help with understanding the principles involved!)

(((What follows is - fairly closely! - a "walk through" the history of Bell Labs, where Communication Theory, Transistors, Comsats, Codes, Noise Theory, Video Compression, etc, were either entirely or at least partially invented .........
www.att.com/attlabs/reputation/timeline/ ))).

(Some history. The USA is a big place (!), and at first transcontinental phone circuits were very difficult to make, requiring very thick cables with special loading inductors, and even then they were "only just" usable.
This all changed quite suddenly, when one day Bell Labs researcher Harold Black just jotted down the basic negative feedback equation on an old newspaper, while travelling on the Hudson Ferry! This one invention completely revolutionised telecoms, sound recording, broadcasting, electronics ..... see:


Once long distance telephony was possible, then the main remaining problem became electrical noise, which had to be investigated!).

ELECTRICAL NOISE (the enemy of all tv signals!).

Electrical noise comes from random molecular movement, the "basic chaotic state" of matter, and it increases with temperature. As part of telephone systems research (at Bell Labs), this was originally measured by Johnson, so it's sometimes called Johnson Noise. It has a "flat" frequency spectrum, so, as you "go up" through the AM, FM, UHF and satellite broadcast bands, it keeps a fairly constant level.

Later on (at Bell Labs) Harry Nyquist derived this result from first principles, and found a formula! As in the kinetic theory of gases: just as a gas has a pressure, the "electron gas" produces a noise power which depends on temperature:

Available noise power density (n) = kT watts per hertz. T = absolute temperature, k = Boltzmann's constant.

Practically speaking, this power will always be "manifested" in the resistance of various circuit components, in circuit configurations with specific bandwidths, which gives us the very useful "noise golden formula":
RMS noise voltage = square root of (4kTBR). R = circuit device resistance, B = circuit bandwidth.

(for more, with some numerical examples, see:

www.cisco.com/en/US/products/hw/cable/ps2209/products_white_paper0900aecd800fc94c.shtml ).

(Wherever electronic systems noise is discussed, you'll see this "golden formula"! It might look slightly different - depending whether we're discussing spectral density, total noise power, or the voltage across a resistor - but these are basically just different ways of writing down the same formula).

What does this mean, practically speaking?

Let's take a Freeview receiver. The aerial cable feeds into a 1st stage radio frequency amplifier, having an input impedance of 75 Ohms. That's deliberate, because the input must "match" the cable impedance (or you get standing waves!), and this 75 Ohms is actually the input resistance of the rf amplifier transistor! UHF tv bandwidth is 8MHz, and putting these figures into our above "golden formula" gives a particular average noise voltage. Note that this noise is generated inside the receiver, it doesn't come from anywhere outside! And, it limits the receiver sensitivity. The incoming signal - from an aerial - must be strong enough to "overcome" this receiver generated noise!
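As a quick numerical check, here's the "golden formula" in a few lines of Python (a sketch only; the 290 K room temperature figure is my assumption, not something fixed by the text):

```python
import math

k = 1.380649e-23  # Boltzmann's constant, J/K

def johnson_noise_rms(T, B, R):
    """RMS thermal noise voltage across a resistance: sqrt(4kTBR)."""
    return math.sqrt(4 * k * T * B * R)

# Freeview front-end example from the text: 75 Ohm input, 8 MHz bandwidth,
# assumed room temperature of about 290 K
v_noise = johnson_noise_rms(T=290.0, B=8e6, R=75.0)
print(f"{v_noise * 1e6:.2f} microvolts")  # about 3.1 uV
```

So around 3 microvolts of noise, generated entirely inside the receiver - any wanted signal arriving from the aerial has to be comfortably bigger than that.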


Now, very much to the point, is that this "Johnson" (or "Nyquist", same thing!) noise is "white". In other words, as Johnson discovered by measuring (and Nyquist later worked out theoretically), it has a "flat" frequency spectrum. On top of that, its amplitude follows the so-called "Gaussian distribution" - because it's the sum of a vast number of independent electron movements (the "central limit theorem" at work!) - which is why engineers call it "additive white Gaussian noise" (for completeness only - there's lots of maths here (!) - see: http://en.wikipedia.org/wiki/Normal_distribution ).

Just "describing" the maths, this distribution actually arises from adding together all the contributions made by lots of non-connected tiny oscillators, all vibrating away independently (in this case, "jiggling electrons"!). The "Gaussian" property means that, looking at how the noise level varies with time - for example, on an oscilloscope screen - what you see is a very roughly constant average level, but with occasional "spikes" added, and now and again a really big spike (see: www.omatrix.com/spt/EX_AWGN.GIF ). It's these spikes that give huge problems when receiving digital signals, they're why we have to use error correction systems!
If you're watching analogue tv, then a single spike just causes a tiny "dot" on the picture, which you won't even notice! Even with lots of spikes - bad interference - the picture becomes "speckled", but remains watchable. However, with digital tv, sent as "bits", just 1 small spike is enough to change a "1" into a "0", turning binary numbers into complete gibberish (then the picture can't possibly be decoded!). With compression, this problem obviously becomes worse, as then large areas of the picture use just a single numerical value to "fill in" a single brightness/colour!


From above, using the noise "golden formula", the noise power level actually falls as we reduce the system (here, effectively the tv receiver) bandwidth! So, if we do that, although the noise spikes are still there, they will have a smaller size and be less of a problem. So, what we want to do is reduce the system (receiver) bandwidth as far as possible! Right?

Er, well, that's not quite how it works out! What we want to receive is a digital signal, or "square" pulses. But, unfortunately, electrical signals having sharp pulses also contain very high frequencies (Fourier theory), and hence require a high bandwidth! So, it's "swings and roundabouts". As we reduce bandwidth, noise goes down, and there's less chance of a "1" becoming a "0" (or vice versa), but on the other hand the received digital pulses also become "less sharp" as well. As the reduced bandwidth "removes" high frequencies, the pulses actually become "more rounded". Then - even without added on noise spikes - it becomes more difficult for the tv receiver to decide whether it's getting a "1" or a "0"!

So, at some particular bandwidth figure, there will be a "best compromise" - the optimum receiver bandwidth - that gives the best possible "tradeoff" between reducing Johnson noise level and not "rounding off" the digital pulses by too much! At which, bit errors due to both Johnson noise and pulse distortion combined will be reduced to the minimum possible number. And that's what we want to go for, in specifying and designing any digital tv receiver.

(In actual fact, a "matched filter" is a specific electrical network concept mathematically derived, not just a trial-and-error adjustment! But, describing it as a "best compromise" is entirely true, and exactly why we wish to use one! See: http://en.wikipedia.org/wiki/Matched_filter ).

The "goodness" of digital reception is often described - intuitively - in terms of an "eye diagram". What a matched filter does is "open up" the eyes - as far as possible - so making decoding errors as unlikely as possible! (the "eye diagram" is a visual display of the received digital signal - usually on an oscilloscope - directly showing how easy (or difficult!) the reception conditions are).

(For eye diagram "quickie" explanation, see: www.complextoreal.com/chapters/eye.pdf .
For a really excellent account of eye diagrams, see:
www.bertscope.com/Literature/Primers/Eye_Anatomy.pdf ,
both these take a while to load, but worth waiting for, "a picture is worth a thousand words "!).

Well, ok then, so we've got a matched filter digital tv receiver (the matching actually being done in the intermediate frequency amplifier stages). The optimum filter characteristic is fixed, for any particular digital modulation system (eg, QPSK, QAM, COFDM, ATSC, or whatever!), so normally the fixed IF bandwidth wouldn't be "user switchable" - as it might be for older analogue receivers - because there's no way you can further improve something already optimised!
Hence, bit reception errors are now as low as they possibly can be! Unfortunately, we've still got that damn Gaussian noise, arising from devices inside the receiver. And - bad news - no matter how much the transmitting power is increased - to improve signal to noise ratio - there are always still enough noise "spikes" to completely wreck any digital tv pictures. So, what can we do?

SHANNON'S FORMULA (information theory).

(((Historical note. Once the amplifier/noise problem for transcontinental telephone lines had been solved, the next big problem was bit corruption in Telex systems, which slowly replaced "cables" (operators tap-tapping away on morse keys!). Telex signals were sent down ordinary phone lines, "underneath" telephone audio, and until quite recently many businesses had Telex terminals (now superseded by Internet email).

See: http://en.wikipedia.org/wiki/Telegraphy#Telex .

Shannon's work on Information Theory took place in this context - ie of Telex systems - but has since been universally applied to all digital telecoms (and many other things!).

See: http://cm.bell-labs.com/cm/ms/what/shannonday/work.html .
also: www.lucent.com/minds/infotheory/ (gentle intro).
also: http://en.wikipedia.org/wiki/Information_theory (much maths!).
also, Shannon's original "groundbreaking" paper at:
www.essrl.wustl.edu/~jao/itrg/shannon.pdf ))).

OK, then, without any further fuss, here's that very famous formula:

C = B log2 (1 + S/nB).

(C = maximum signalling rate in bits/sec, S = signal power, n = Gaussian noise power density as explained above, B = system bandwidth).

This formula is extremely simple, but very widely misunderstood! What it says is, basically:

"For any communication channel - (including digital tv) - when signalling at a rate below C bits/sec, in Gaussian noise, with a signal to noise power ratio of S/nB, then - in theory! - it's possible to devise an error correcting code which - if complicated enough! - can lower the received bit error rate to at or below any particular desired figure" (phew!).

(So, although there's no way to stop those nasty noise "spikes" from being added to received digital tv signals - either from unwanted radio interference, or more usually from device noise actually inside the receiver - nevertheless, we should still be able to "get round them" by using error correction!
But, since we always have to use error correction anyway - no matter how large the transmitted power - we might as well take full advantage of it, and actually reduce the power, relying much more on error correction. After all, why not reduce the electricity bill, if you can? So - for example - digital terrestrial covers roughly the same area as analogue did before, but at maybe 10-20 dB lower transmitter output power. And this is even more important for satellite, where - since just a few solar cells have to power transponders covering whole continents! - we must use the lowest possible transmission power.
This principle also applies to digital storage media - eg DVD etc - where use of error correction allows a lower signal to noise ratio, which for example permits the use of worse - but much cheaper! - laser optics).
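To put some (entirely hypothetical) numbers on Shannon's formula, here's a Python sketch for an 8 MHz-wide channel at a few different signal to noise ratios - notice how much signalling capacity survives even at quite modest SNR, which is exactly why reducing transmitter power and leaning on error correction is a workable trade:

```python
import math

def shannon_capacity(B, snr):
    """Shannon's limit: C = B * log2(1 + S/N), in bits per second.
    B = bandwidth in Hz, snr = signal-to-noise ratio (plain power ratio)."""
    return B * math.log2(1 + snr)

B = 8e6  # an 8 MHz channel, as for UHF tv
for snr_db in (20, 10, 0):
    snr = 10 ** (snr_db / 10)          # convert dB to a plain power ratio
    c = shannon_capacity(B, snr)
    print(f"SNR {snr_db:3d} dB -> C = {c / 1e6:.1f} Mbit/s")
```

(At 0 dB - signal power merely equal to noise power! - the theoretical limit is still 8 Mbit/s; real systems must of course run well below these figures, as explained next.)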

Of course, "It ain't quite that simple"! Shannon's formula only gives an upper limit, and the actual error rate for any particular modulation system depends strongly on that system (digital telecom books are full of such error functions!). And, Shannon gave no general method for constructing such codes, other people had to do that!

(Also, as the actual signalling rate R approaches theoretical maximum C, the received bit error rate very rapidly increases, so the error correction code has to become increasingly - and impractically! - complicated, to still keep corrected errors below the desired rate!).

See: http://cis.stvincent.edu/carlsond/cs321/Ch3examples.html .
also: http://en.wikipedia.org/wiki/Shannon-Hartley_theorem .

For any particular modulation system (eg, PCM), the actual error function "looks something like":

(a fixed term) x (an exponential function, depending on C-R);

where: C is as above, and R is the actual signalling rate. So, as R rises to approach C the exponential term hugely increases in value, and becomes enormous when R=C (ie, it's not possible to signal "reliably" at or above Shannon's maximum rate!).

So, actual "real-world" digital systems must function "well below" the Shannon limit! Especially in broadcasting applications, where, as you move away from the transmitter, received power falls, so S/nB also falls - remember n is usually fixed by the receiver - and the value of C also falls!

For simple digital modulation systems (eg, again PCM) it's possible to calculate the error function values directly. For more complex systems (eg, cofdm) the maths involved can't be solved directly, and computer simulations are used to show error rates under "typical" reception conditions.
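As an illustration of such an error function, here's the textbook result for BPSK (one of the simplest modulation systems, used here purely because its error probability has a neat closed form - it isn't one of the tv systems above): Pb = 0.5 erfc(sqrt(Eb/N0)). The "waterfall" shape - errors falling away extremely fast as the signal gets stronger - is the point:

```python
import math

def ber_bpsk(ebn0_db):
    """Bit error probability for ideal coherent BPSK in Gaussian noise:
    Pb = 0.5 * erfc(sqrt(Eb/N0)), with Eb/N0 given in dB."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

for db in (0, 4, 8, 10, 12):
    print(f"Eb/N0 = {db:2d} dB -> Pb = {ber_bpsk(db):.2e}")
```

Run that, and you'll see the error rate drop by several orders of magnitude over just a few dB - the same steepness that (in reverse!) gives the "digital cliff" discussed further down.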

(As an example, here's the actual error function derived for "simple" baseband PCM:
www.dip.ee.uct.ac.za/~nicolls/lectures/eee482f/04_chancap_2up.pdf .
(In fact, that's "energy per bit", so error probability is some inverse function of that!).
Most real-life modulation systems are much more complicated! Here are the error functions for a few "simple" (!) systems:


To start with, how can we tell if a particular bit's been corrupted (ie, changed from 1 to 0, or vice versa)? Well, if you see something like: "this sentunce has a phew spalling mishtakes", it's obvious what it should be, because only certain English words are "allowed" (ie, those with correct spellings!). So, we can easily correct it. But, suppose the corruption gets worse, to give: "pish slobance nas i fug shillung shrimpbakes", then we can see that's obviously nonsense, but not what it should be, so we can't reconstruct the original message.

Error "detection and correction" codes are very like that! There's a set of "allowed" code words. With slight corruption, received words are still quite close to actual codewords, and we choose the "closest allowed" words to reconstruct the original message. With slightly worse errors still, we can tell we've got rubbish, but we can't correct it.

But, how does that work with bits (binary digits)? Well, inventing an "extremely silly" example - for illustration only - suppose the code is that you send each bit 4 times. So, "1" becomes "1111", and "0" becomes "0000". Then, after added noise spikes, if the digital receiver gets "1101", then that's closest to "1". And, if it gets 0100, that's closest to "0". (Note that we're just sending 4 copies of each bit, NOT a binary number!). Of course, if we get something like 1001 or 0110, it could go either way, in which case we've obviously got reception errors but can't correct (and with very high noise levels, you might actually "correct" into the wrong bit!).
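The silly repetition code above takes only a few lines of Python (a sketch; returning "?" for an exact tie is just this sketch's way of flagging "error detected but not correctable"):

```python
def encode_repeat(bits, n=4):
    """Send each bit n times: '1' -> '1111', '0' -> '0000'."""
    return "".join(b * n for b in bits)

def decode_repeat(received, n=4):
    """Majority vote on each group of n received bits.
    An exact tie means we can detect an error but not correct it."""
    out = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        ones = block.count("1")
        if ones * 2 == n:
            out.append("?")          # eg '1001': could go either way!
        else:
            out.append("1" if ones * 2 > n else "0")
    return "".join(out)

print(decode_repeat("1101"))  # -> 1   (closest to 1111)
print(decode_repeat("0100"))  # -> 0   (closest to 0000)
print(decode_repeat("1001"))  # -> ?   (detectable, but uncorrectable)
```

(Hugely wasteful, of course - 4 times the bandwidth for 1-bit-per-block correction - which is exactly why the cleverer codes below were invented!)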

This introduces the very important concept of CODE DISTANCE. Only certain "codewords" are allowed, and we can correct received errors because they're "closest to" known allowed codewords. But, as noise level increases, the number of reception errors goes up, and what we receive gets "further away from" allowed codewords. Until, at a certain point, we can still tell that we're getting errors, but it's now impossible to correct them (at which point, the digital reception system "falls over").

ALGEBRAIC AND CONVOLUTIONAL CODES (this is where it gets complicated!).

Basically, there's 2 different types of error correcting code:

a) Convolutional. Used for "first line of defence" general correction of small-ish noise spikes.

b) Algebraic. Used for "second line of defence" correction of fewer but bigger noise spikes.

Being used in Gaussian noise conditions - see above - digital tv systems suffer from both kinds of spikes (small and big), hence both these code types are used together. Often, they're called "inner" (convolutional) and "outer" (algebraic) coding.

((( Historical note.
A very large noise spike is quite similar - in its undesirable effect - to "dropout" in magnetic tape and disc systems, due to occasional flaws in the magnetic coating. So, algebraic codes were first used in early 1950s/60s computer systems to cope with this problem. More particularly, these were cyclic redundancy codes, chosen because the decoding logic can very easily be constructed with very simple binary logic gates (important if you're using entirely discrete components, ie individual transistors!). This method is still used in computer disc drives, you might be familiar with the "CRC checksum failure" message.
Convolutional codes can cope with far more frequent - but much lower amplitude - noise spikes. They were first generally used to receive tv images from space probes. The "Martian Face" was actually due to missing pixels, with early crude error correction!))).

Note that - upon decoding - these two code types work slightly differently! De-convolution is "probabilistic", and always tries to give the best overall result, even in extreme noise conditions (although then it fails badly!). Whereas, algebraic error correction is deterministic, ie it can correct only up to so many single bit errors, after which it just "gives up"!

CONVOLUTIONAL CODES (also called Viterbi and Trellis codes).

This is so called from its similarity to the signal processing operation of convolution (especially digital version). However, the more usual meaning of “convolution” - as “complicated folding” or “tangled up” - also gives a good intuitive idea of what’s happening.

Unfortunately, it’s easy to give a “clear and straightforward” account of simple convolutional coding - complete with diagrams - but still leave the reader with absolutely no idea about what’s really happening (lots of websites do exactly that!). So, I’ll mainly use analogies, and give a brief description of what actually happens, then rely on “good” website links for those wishing to follow-up in more detail.

OK, then. Imagine that - for some reason! - you’re sending a jigsaw (digital tv signal) down a long narrow metal tube (digital transmission channel), and only one piece at a time will fit. At the far end, the pieces are received, and have to be re-assembled to form the original picture, with no clue as to what the picture looks like! Also, the tube is dirty inside (noise spikes), and the pieces are received slightly discoloured and misshapen, but still “very like” their original condition (low level noise spikes). Then, it’s still possible to re-assemble the jigsaw to form a picture, even though the individual pieces have been “knocked around” a bit.

Here’s what actually happens. The digital bitstream goes through a shift register n bits long (length n chosen for “convenience”). The register locations go to binary (XOR) adders - in various different combinations - and a commutator (or multiplexer) then selects all the adder outputs successively (so, the coder output bitrate is (number of adders) x (input bitrate)).
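That shift-register-and-adders arrangement can be sketched in Python (a toy example: a 2-stage register and two XOR adders, the classic "rate 1/2, constraint length 3, generators (7,5) octal" code found in most textbooks - the bitstring in/out format is just for illustration):

```python
def conv_encode(bits):
    """Toy rate-1/2 convolutional encoder, constraint length 3.
    Adder 1 taps input + both register stages (generator 111),
    adder 2 taps input + last stage only (generator 101).
    Each input bit produces TWO output bits, so the output
    bitrate is 2x the input bitrate."""
    s1 = s2 = 0                      # the two shift register stages
    out = []
    for b in (int(c) for c in bits):
        out.append(b ^ s1 ^ s2)      # adder 1 (commutator reads this first)
        out.append(b ^ s2)           # adder 2 (then this)
        s1, s2 = b, s1               # shift the register along
    return "".join(map(str, out))

print(conv_encode("1011"))  # -> 11100001
```

Notice each output bit depends not just on the current input bit but on the previous ones still sitting in the register - that's the "tangling up" which lets the decoder spread (and so survive) individual bit errors.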

That’s quite clear? (ha!).

For decoding, exactly the reverse process takes place (so what’s the point?). Well, during transmission noise spikes have been added to the bitstream, causing some bits to be received in error. So, on “reversing” this process, you have to select the “most likely” reverse route (because some bits were corrupted, the exact original reverse route no longer exists!). This involves using a “decision tree”, which when shown as a diagram looks very like a trellis fence (hence “trellis coding”, or more accurately, “trellis decoding”). You know, the sort of fence you see in gardens, with climbing roses and runner beans growing up them!
Because this gets complicated, what’s normally done is to use special decoding algorithms, invented by Viterbi (hence, “Viterbi decoding”). These are normally implemented in hardware, as dedicated chips.
(The “quality” reading (or Bit Error Rate) - as given on digital receivers - refers to the error rate detected by the Viterbi decoder).
There’s another (helpful?) way of thinking about this, “error slicing”! Without convolution, (small) noise spikes are big enough to corrupt individual bits. But with it, these spikes are “shared out” between ALL bits, so become much smaller, and it’s then much easier to decide whether “1” or “0” was sent, despite the added noise.
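For the brave, here's a minimal hard-decision Viterbi decoder in Python for the same sort of toy rate-1/2 code just described (generators 111 and 101, a 4-state trellis) - a sketch only, since real decoders live in dedicated chips, use "soft" decisions, and much longer registers:

```python
def viterbi_decode(received, nbits):
    """Hard-decision Viterbi decoding for a toy rate-1/2 code
    (generators 111 and 101): find the path through the 4-state
    trellis whose re-encoded output differs from the received
    bits in the fewest places (Hamming metric)."""
    INF = float("inf")
    # survivor metric and decoded-bit history per register state (s1, s2)
    metrics = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
    paths = {s: [] for s in metrics}
    for i in range(nbits):
        r1, r2 = int(received[2 * i]), int(received[2 * i + 1])
        new_metrics = {s: INF for s in metrics}
        new_paths = {}
        for (s1, s2), m in metrics.items():
            if m == INF:
                continue                               # unreachable state
            for b in (0, 1):                           # try both input bits
                o1, o2 = b ^ s1 ^ s2, b ^ s2           # what encoder would send
                cost = m + (o1 != r1) + (o2 != r2)     # add Hamming distance
                nxt = (b, s1)                          # shift register moves on
                if cost < new_metrics[nxt]:            # keep best survivor only
                    new_metrics[nxt] = cost
                    new_paths[nxt] = paths[(s1, s2)] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)
    return "".join(map(str, paths[best]))

# "11100001" encodes 1011 under this code; corrupt one bit (a noise spike!)
print(viterbi_decode("11000001", 4))  # -> 1011, the error is "absorbed"
```

The "most likely reverse route" idea from the text is the `min(...)` at each trellis step: every possible transmitted path is scored against what was actually received, and only the cheapest survivor into each state is kept.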

OK, after all that, here’s some links with descriptions, pics, etc:

http://www.ee.unb.ca/tervo/ee4253/convolution1.htm (intro!).

http://scholar.lib.vt.edu/theses/available/etd-71897-15815/unrestricted/chap2.pdf (more detailed!).

http://en.wikipedia.org/wiki/Convolutional_code (also good).

http://en.wikipedia.org/wiki/Viterbi_algorithm .

(Nowadays, older style Viterbi convolutional codes are slowly being replaced with newer more powerful Turbo Codes. See: http://en.wikipedia.org/wiki/Turbo_codes)

BIT INTERLEAVING. (this comes between convolutional and algebraic coding)

As said, algebraic error correcting codes are the “2nd line of defence”, correcting fewer but bigger noise errors. However, per codeblock (see below) they’re limited to correcting just a few bits – far less than convolution codes can do – so a really big noise “spike” can exceed the correcting power of such codes. For which reason, bits are interleaved (scrambled, re-distributed); then a really big noise spike - affecting lots of consecutive bits - only affects just a few bits per block (after de-interleaving), which can still be corrected.
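Here's a quick Python sketch of that interleaving idea (a simple rows-and-columns table, purely for illustration - real broadcast systems use fancier convolutional interleavers, but the principle is identical):

```python
def interleave(bits, rows=4, cols=8):
    """Write bits into a rows x cols table row-by-row,
    then read them out column-by-column for transmission."""
    assert len(bits) == rows * cols
    table = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return "".join(table[r][c] for c in range(cols) for r in range(rows))

def deinterleave(bits, rows=4, cols=8):
    return interleave(bits, cols, rows)   # the inverse is just the transpose

data = "0" * 32                 # 4 blocks of 8 bits, all zeros for clarity
sent = interleave(data)
# one big noise spike wipes out 4 CONSECUTIVE transmitted bits:
hit = sent[:8] + "1111" + sent[12:]
back = deinterleave(hit)
blocks = [back[i:i + 8] for i in range(0, 32, 8)]
print(blocks)   # each 8-bit block now contains just 1 flipped bit
```

So a burst that would have swamped one codeblock ends up as a single, easily-corrected error in each of four blocks.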

ALGEBRAIC CODES (also called block codes).

(Continuing the jigsaw analogy, we've now re-assembled the almost complete picture from the pieces received, but it now turns out that there's a few pieces missing - due to dropping out through a small hole in the pipe (big noise spikes!) - so we now have to replace these few pieces!).

Whereas convolutional coding is continuous, algebraic coding breaks up the bitstream into "blocks", then adds error correction bits to each block. The more powerful and complicated the code used, the more correction bits are added, and the longer the blocks become. Also, whereas convolutional codes are probabilistic - ie, the "most likely path found" through the decoding trellis depends on exactly which decoding algorithm is used - algebraic codes are deterministic, and always give the same error correction results from the same input.

The simplest type of algebraic coding (error detection only, no correction!) is the parity checksum. For example, when data comes in "bytes" - 8 bit binary words - as it usually does, then there's either an odd or even number of "1"s per byte, and adding a 9th bit - a "1" for each odd byte, a "0" for each even one - makes every 9-bit word "even parity" (as - for example - in simple RS232 serial links).
It's possible to extend this principle - to combinations of bits within the byte - so that error correction becomes possible.
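The parity trick is tiny in Python (a sketch of the RS232-style "even parity" scheme just described):

```python
def add_even_parity(byte):
    """Append a 9th bit so the total number of 1s is even."""
    return byte + ("1" if byte.count("1") % 2 else "0")

def parity_ok(word):
    """Any single flipped bit makes the 1s count odd - detected!
    (But we can't tell WHICH bit flipped, so no correction.)"""
    return word.count("1") % 2 == 0

w = add_even_parity("01100010")   # three 1s -> parity bit is '1'
print(w, parity_ok(w))            # 011000101 True
bad = "1" + w[1:]                 # a noise spike flips the first bit
print(parity_ok(bad))             # False - error detected
```

(Note the obvious weakness: TWO flipped bits restore even parity, and the error sails through undetected - hence the cleverer codes below.)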
The earliest and simplest – and easiest to follow! – such correction codes are the Hamming Codes, relying on parity bits (note, Hamming also worked at Bell Labs!)


www.ee.unb.ca/tervo/ee4253/hamming.htm (good 1st intro).
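And here's the smallest "real" Hamming code - Hamming(7,4), 4 data bits plus 3 parity bits - sketched in Python, showing how combining parity checks pins down WHICH bit flipped (so it can be corrected, not just detected):

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit word; positions 1, 2, 4 are parity,
    each covering the standard overlapping sets of positions."""
    d1, d2, d3, d4 = (int(b) for b in d)
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return "".join(map(str, (p1, p2, d1, p3, d2, d3, d4)))

def hamming74_decode(word):
    """Recompute the three checks; read together (the 'syndrome') they
    spell out the position of a single flipped bit, or 0 if clean."""
    b = [int(x) for x in word]           # b[0] is position 1, etc
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]       # checks positions 1,3,5,7
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]       # checks positions 2,3,6,7
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]       # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        b[pos - 1] ^= 1                  # flip the corrupted bit back!
    return "".join(str(b[i]) for i in (2, 4, 5, 6))   # the 4 data bits

code = hamming74_encode("1011")          # -> 0110011
bad = code[:5] + str(1 - int(code[5])) + code[6:]     # flip bit 6
print(hamming74_decode(bad))             # -> 1011 (corrected!)
```

(The "syndrome" here is effectively a little matrix calculation done by hand - the generator/correction matrices mentioned above.)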

Also, more generally:

(Note that the coding/decoding process often uses "generator"/"correction" matrices, matrix algebra being fundamental to most coding theory!).

Hamming codes are easiest to understand, so are often given as introductory examples, but are limited! So, the earliest algebraic codes generally used were cyclic redundancy ones, which are excellent at detecting errors, but can't usually correct them. (For "formatting", computers can write a "known pattern" to all hard disc blocks, and read these back to detect bad disc sections, but can't normally error correct; instead the computer just marks bad blocks as "don't use" in a lookup table. Normally, you don't want to try correcting disc read errors, as just a single wrong bit can cause the computer to crash!).

CRC codes use polynomial division, I'll let Wiki explain that:
http://en.wikipedia.org/wiki/Cyclic_redundancy_check .

(Basically, the data in a block is treated as one long binary number (a "polynomial"), and divided - using a special modulo-2 long division - by a pre-agreed check number (the "generator"); the left-over "remainder" is then appended to the data before writing, and on reading back the division is repeated. If the remainder no longer comes out right, it's clear there's been a read error. But, this isn't used at all for digital tv, so I'm digressing!).
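To make the "polynomial division" idea concrete, here's a toy CRC in Python (modulo-2 long division by a tiny 4-bit generator, x^3 + x + 1 - a teaching example, far shorter than the 16- or 32-bit generators used in practice):

```python
def crc_remainder(bits, poly="1011"):
    """Modulo-2 long division of the message (padded with zeros) by
    the generator polynomial: XOR replaces subtraction, and the last
    len(poly)-1 bits left over are the remainder (the CRC)."""
    n = len(poly) - 1
    work = [int(b) for b in bits + "0" * n]
    gen = [int(b) for b in poly]
    for i in range(len(bits)):
        if work[i]:                       # leading 1: "divide" here
            for j in range(len(gen)):
                work[i + j] ^= gen[j]
    return "".join(map(str, work[-n:]))

msg = "11010011101100"
sent = msg + crc_remainder(msg)           # append the CRC before "writing"
print(crc_remainder(sent))                # -> 000: division exact, no error
bad = sent[:3] + ("0" if sent[3] == "1" else "1") + sent[4:]
print(crc_remainder(bad))                 # non-zero: read error detected!
```

(Notice there's no correction here, just detection - exactly as described above for disc drives.)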

Hamming codes are also very inefficient, requiring an extra bit for each parity check, and can only correct one bit per block, not very useful! So, for the first digital recording media (digital audio, stationary head), more sophisticated codes were used, depending on the particular recording format.

Nowadays, from the CD onwards, the Reed Solomon error correction codes have been used for all consumer digital formats (both media storage and broadcast). This is because they have a number of advantages, being basically a "DIY error code kit", where you can specify exactly the degree of error detection and correction you want in any particular digital system.
(Also, because Reed Solomon codes work on multi-bit symbols (bytes) rather than single bits, a burst of errors inside one byte counts as just a single symbol error, so good burst resistance is effectively "built in" - although broadcast systems still add separate interleaving as well).

see: http://en.wikipedia.org/wiki/Reed-Solomon_error_correction .

also: www.4i2i.com/reed_solomon_codes.htm .

(These widely used codes don't use the simple parity check system - as in Hamming Codes - but instead more complex Galois Fields ("fields" are a type of abstract mathematical structure). For info on which, use google to search, you should get maths tutorial websites, etc. You don't need a maths degree - just simple algebra - but some learning is required!).


As was said above, because there's no way to stop noise spikes causing digital reception errors - no matter how high the transmitter power - we instead "go the other way", and rely very heavily on error correcting codes, (paradoxically) taking advantage of these to actually REDUCE the transmission power, by quite a bit!
And - as was also said - as we move further away from the transmitter, and S/nB becomes smaller, the error correction has to work "increasingly hard", ie, more wrong bits need correcting!
Error correction can only cope with so many wrong bits (!), most easily at a point where we're still well inside Shannon's limit. As this limit is approached, the "error function" value increases enormously (also see above), and error codes must get increasingly complex to cope. After a certain point, they become too unwieldy - you're adding huge numbers of error correction bits - and it's no longer "worth bothering".
A "practical limit" is about 1 in 1000 bit errors after the convolutional decoder. Much more, and the error rate then "shoots up", and the needed codes become "too complicated". So, both the convolutional and algebraic codes are designed to fail at around this point.
Which gives us the "digital cliff". The difference between the error codes still working - and then failing - is an extremely narrow power margin (remember, S/nB !), a difference of maybe only 2-3 dB!
Which can give some quite strange results. For example, just a "slightly different" aerial can make all the difference between getting and not getting digital terrestrial. Or, you might get it at night but not in daytime. Or - with digital satellite - light rainfall can stop reception, or even just clouds in the sky!


OK, then, so we've got multiple "baseband" digital bitstreams ..... 011001010000111110101, and so on .... representing many different digital tv channels. Somehow, these have to all be simultaneously broadcast. The best way to do it is via "modulation", ie, putting the bitstreams onto several different frequency "carrier" radio transmissions.
Traditionally, the older analogue modulation systems were classified in terms of bandwidth used, signal to noise, etc. While these same characteristics also apply to the newer digital systems (the same maths governs both!), we tend not to concentrate on the spectra, but instead on "digital characteristics".

Modulation overview:
http://en.wikipedia.org/wiki/Modulation .
http://robotics.eecs.berkeley.edu/~sastry/ee20/modulation/modulation.html .

Generally, analogue sound requires only a small bandwidth, and may be easily transmitted on AM frequencies. The other extreme is digital tv, which needs huge bandwidths, and - at the very least - requires UHF frequencies, or (even better) satellite C or Ku band.


The simplest possible digital modulation scheme would be just switching a single tone (sine wave) on and off! However, there's some snags with this. What to do for long strings of "1"s or "0"s? And, you've got to be able to recover a "data rate clock" for demodulation, and most receivers use automatic gain control so can't cope with "no signal". So, it isn't very practical.

Frequency Shift Keying (FSK).

This is the simplest "feasible" method. 2 different frequencies are sent, representing either "1" or "0". The transmitter needs only a coder and two oscillators with a switch, and the receiver just two tone filters with a decoder. There's a continuous signal, so the AGC isn't affected. Simple technology, and it has been used for many years (still is) for sending telegraphy over short waves. Later on, for the first telephone line modems.
http://en.wikipedia.org/wiki/Frequency-shift_keying .
http://en.wikipedia.org/wiki/Radioteletype .

Telephone modem development.

The receiver filters are limited by how quickly they can "settle down" each time the frequency changes. So, to get faster signalling, you have to a) use more frequencies, and/or b) also use different send levels (amplitudes). As integrated circuits became available, these techniques became economic and were increasingly used:
http://en.wikipedia.org/wiki/Modem .
http://inventors.about.com/library/inventors/blmodem.htm .

The highest speed telephone line modems use multiple techniques, full QAM with adaptive filtering, but with 3.5 kHz bandwidth the top rate is limited to about 50 kbit/s.

Phase Shift Keying (PSK).

In this, the carrier amplitude/frequency is kept constant (which has some advantages), but the phase is changed. Earlier systems are usually limited to QPSK, where each 90 deg phase shift represents two consecutive data bits (higher rates would require smaller shifts, and become impractical).
This system is used for DVB-S, where the constant level helps offset low transmission power and poor signal to noise. It isn't really suitable for telephone lines, which are amplitude equalised at the expense of poor phase characteristics.
http://en.wikipedia.org/wiki/Phase-shift_keying (loadsa maths, but also good pictures!).
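The "four phases, two bits each" idea reduces to a lookup table. The Gray-coded phase assignment below is a common textbook choice, not necessarily the exact DVB-S mapping:

```python
import numpy as np

# QPSK sketch: each pair of bits selects one of four carrier phases.
# This Gray mapping (adjacent phases differ by one bit) is illustrative.
phase_map = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}  # degrees

def qpsk_symbols(bits):
    """Map an even-length bit stream to complex unit-amplitude symbols."""
    pairs = zip(bits[0::2], bits[1::2])
    return [np.exp(1j * np.deg2rad(phase_map[p])) for p in pairs]

syms = qpsk_symbols([0, 0, 1, 1])
# Every symbol sits on the unit circle: the amplitude never changes,
# only the phase does -- which is why the scheme tolerates the
# non-linear, power-starved satellite transponder so well.
```

The Gray mapping is deliberate: if noise pushes a symbol into an *adjacent* phase quadrant, only one of the two bits is wrong.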

Quadrature Amplitude Modulation (QAM).

With analogue telephone line modems, using just a single carrier frequency, faster signalling is obtained if you also amplitude modulate the signal, using - say - 3 or 4 different levels to represent pairs of bits (amplitude shift keying, ASK). This will work providing the line isn't too noisy (if it is, then the lowest send level is - obviously - in most danger of being corrupted by occasional noise spikes. In which case, you might want to use a scheme which puts the least important information - ie, the least significant bits in binary numbers - onto these levels!).

As said, the phase response of analogue phone lines is usually poor. However, when the phase response is better, it becomes possible to combine multiple amplitudes (ASK) and multiple phases (PSK) on a single carrier frequency, to give a higher signalling rate. Then, you've got "quadrature amplitude modulation" ... QAM!

The sorts of distribution methods suitable for QAM especially include coaxial cable tv systems, where send levels are relatively high, so there are no significant noise problems, hence little danger of noise spikes corrupting the lower send amplitudes.

www.physics.udel.edu/~watson/student_projects/scen167/thosguys/qam.html .

http://www.blondertongue.com/QAM-Transmodulator/QAM_defined.php .

http://en.wikipedia.org/wiki/Quadrature_amplitude_modulation .
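Putting amplitudes and phases together, a 16-QAM constellation (the sort of thing DVB-C uses) is just a 4-by-4 grid of points in the complex plane. The level values and bit assignment below are illustrative, not any broadcast standard's exact mapping:

```python
# 16-QAM sketch: four bits per symbol -- two bits choose the in-phase
# (real) level and two choose the quadrature (imaginary) level.
levels = [-3, -1, 1, 3]   # illustrative amplitude levels

def qam16_symbol(b3, b2, b1, b0):
    """Map 4 bits to one point of a 16-point constellation."""
    return complex(levels[2 * b3 + b2], levels[2 * b1 + b0])

# Generate the full constellation from all 16 possible nibbles.
constellation = {
    qam16_symbol(*[(n >> s) & 1 for s in (3, 2, 1, 0)]) for n in range(16)
}
# 16 distinct points, but only 3 distinct amplitudes (ring radii):
# the scheme mixes amplitude AND phase, exactly as described above.
```

The inner points (lowest amplitudes) are the ones most at risk from noise spikes, which is the concern raised earlier about where to put the least significant bits.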


This bit gets a bit political and controversial, but explains the background to the original ATSC and COFDM systems (Rolf can always remove this bit, if he wants!).
During the 1970s, Japanese consumer electronics manufacturers pretty much closed down competing USA and European manufacturers (using the "surprise tactic" of manufacturing tv sets which worked reliably, and didn't keep on breaking down every few weeks).
During the 1980s, this became a matter of huge concern. Then panic, when NHK started peddling its Hi-Vision around the world: it looked as if Japan would soon "own" all tv broadcasting and manufacturing.
In the USA, the reaction was protectionism, slapping on an import tax (which didn't work). In Europe, the different approach was a "technical fix". The original PAL patents were running out, but inventing a new broadcasting standard would force foreign importers to continue paying royalties (effectively an import tax) for the foreseeable future. Thus was born MAC (multiplexed analogue component).
(MAC was developed at IBA research dept, but was fairly pointless! A similar "improved picture" could be derived from the existing PAL signal, very simply, by putting a comb filter inside existing PAL sets. Some manufacturers did, but couldn't sell these slightly more expensive sets!).
MAC was available, and was very quickly promoted to "official satellite tv standard" by the EU, as a form of protectionism (similar remarks apply to NICAM728, which was used as "patent protection", although a simpler system could easily have been used for stereo tv sound).
When - a little later - the ATSC and COFDM transmission systems were developed, it was still very much in this protectionist context, rather than a purely technical one, and this explains some of their characteristics.
(There, I've "said my piece", now wait for the flak .....).

(When Murdoch's Sky satellite tv first began analogue broadcasting, standard PAL on FM modulation was a very sensible decision, giving fine pictures! However, strictly speaking this was illegal, since MAC was by then the official EU standard ....).


By the late 1980s, Japanese manufacturers were aggressively promoting MUSE, and the USA government responded by promoting an "open competition" between domestic manufacturers, to instead develop a USA "home grown" HDTV system.

UNFINISHED (obviously!).