On funding the BBC

It looks like we're going to see another debate on the future of the BBC before too long. I happen to believe that the BBC plays two important roles in UK television, so I'm not in favour of anything that guts it; I do, however, believe that both roles could be better served by a change to the BBC's funding.

So, firstly, what do I think the BBC does that's important?

  1. It takes risks. The BBC can do a programme like Sherlock, which could easily have been a complete disaster, because it doesn't have to worry about making a profit on every slot.
  2. It produces programming for minority interests. The BBC can broadcast the Paralympic Games knowing that the majority of people would prefer to watch football or mainstream athletics, because its funding method means that it can do the right thing anyway.

Further, having broadcast risky or minority programmes, the BBC sometimes ends up demonstrating to commercial broadcasters that their intuitions on what's potentially profitable are wrong; thus, it also stops the commercial channels descending into wall-to-wall dross, because they know that their viewers can always switch to the BBC.

Given these priorities, the BBC's funding mechanism needs to give it incentives to take risks rather than play it safe, and to worry about providing something for everyone rather than something for the few. As always when I present a problem, I have an idea to solve it.

To understand my suggestion, you first need to understand the audience measurement concepts of reach and audience size.

Audience size is really simple - just count everyone who watches a programme, or a channel, or a group of channels. There are some slight complexities here, such as deciding when someone counts as having watched a programme, but it's otherwise not hard.

Reach is a much more complex measurement; you have to count the number of unique viewers. The important thing about reach is that you can't just appeal to the same viewers again and again to increase your reach - you have to bring in new people each time.

A very simple example; imagine two TV stations, 1 and 2, and 6 people, A to F, and the following audience figures:

Time of day  | Station 1 viewers | Station 2 viewers
6pm to 7pm   | A, B, C           | D
7pm to 8pm   | B, C, D           | E, F
8pm to 9pm   | C, D              | A, B
9pm to 10pm  | A, B, D           | C, F

By audience size (the most commonly quoted rating), station 1 consistently equals or beats station 2; with the exception of the 8pm to 9pm timeslot, it has more viewers in every timeslot. However, on a reach rating, station 2 beats station 1 - it reaches 100% of the audience over that evening, whereas station 1 keeps attracting the same viewers again and again to get a 67% reach.
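The difference between the two measurements is easy to compute from the example figures. Here's a small sketch (the variable names are mine) that works out both ratings directly from the viewer sets above:

```python
# Audience size vs reach for the two hypothetical stations above.
# Viewer sets per timeslot are taken straight from the example table.
station1 = [{"A", "B", "C"}, {"B", "C", "D"}, {"C", "D"}, {"A", "B", "D"}]
station2 = [{"D"}, {"E", "F"}, {"A", "B"}, {"C", "F"}]
population = 6  # viewers A to F

def audience_sizes(slots):
    """Audience size: a simple head-count per timeslot."""
    return [len(s) for s in slots]

def reach(slots, population):
    """Reach: the fraction of unique viewers seen across all timeslots."""
    unique = set().union(*slots)
    return len(unique) / population

print(audience_sizes(station1))             # [3, 3, 2, 3]
print(audience_sizes(station2))             # [1, 2, 2, 2]
print(f"{reach(station1, population):.0%}") # 67%
print(f"{reach(station2, population):.0%}") # 100%
```

Note how station 1 wins or draws every per-slot head-count, yet only ever reaches four of the six viewers.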

With my terms explained (albeit badly), I can explain my proposal. I intend to reuse the BARB reach measurement for TV, and the RAJAR reach measurement for radio, as they're already taken anyway for the benefit of commercial broadcasters. I'm ignoring the BBC's online services completely, for these purposes; I believe that there's enough competition online, and it's easy enough to find, that the BBC is not needed as an aid to competition.

The BBC would have to face two measurements in my world. First, it would be measured on the reach of its television services alone; it must reach 99% of households with a TV licence. This allows for a small number of law-abiding households that don't watch the BBC on philosophical grounds, but still forces the BBC to provide something for everyone. If the BBC fails to achieve this, it has to "buy" ratings from the commercial and state broadcasters until it has the 99% figure; note that because it's buying historic ratings, it has to buy programming that helps it achieve its reach goal, thus giving commercial broadcasters an incentive to find a minority interest that the BBC does not service.

The second measurement includes both TV and radio services, and controls the BBC's future funding. For each of the two figures, you end up with three outcomes:

  1. Reach increased compared to last year.
  2. Reach the same as last year (within error bounds).
  3. Reach decreased compared to last year.

If either reach figure decreases, the BBC is limited to an inflation-only rise in income; if both decrease, the BBC's revenue is not increased at all. This gives the BBC a strong incentive to avoid losing reach - lose reach in one medium, and you're limited to an inflation-only rise in income. Lose reach in both media, and you're making cuts.

If one figure rises, and the other stays the same, the BBC is permitted a small increase over and above inflation - say the lower of 5% or the inflation rate. If the BBC can make both reach figures rise, it's permitted a larger increase - say the lower of 10% or four times the inflation rate. In both cases, the tie to inflation ensures that the BBC never grows rapidly; but by tying increases to reach, the BBC is prevented from growing at all unless it can appeal to more of the population than before.
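The whole funding rule is simple enough to sketch in a few lines. This is just my own illustration of the proposal (the function name and the "both figures unchanged means inflation-only" case are my assumptions; the text above doesn't spell that case out):

```python
def permitted_rise(tv_delta, radio_delta, inflation):
    """Permitted annual income rise under the proposed scheme (a sketch).

    tv_delta / radio_delta: +1 if that reach figure rose, 0 if unchanged
    (within error bounds), -1 if it fell. inflation is a fraction,
    e.g. 0.02 for 2%. Returns the permitted rise as a fraction.
    """
    deltas = (tv_delta, radio_delta)
    if all(d < 0 for d in deltas):
        return 0.0                       # reach fell in both media: no rise
    if any(d < 0 for d in deltas):
        return inflation                 # fell in one medium: inflation only
    if all(d > 0 for d in deltas):
        return inflation + min(0.10, 4 * inflation)  # both rose
    if any(d > 0 for d in deltas):
        return inflation + min(0.05, inflation)      # one rose, one flat
    return inflation                     # both flat: assumed inflation only
```

So with 2% inflation, losing reach everywhere freezes the budget, holding steady in one medium while growing the other permits 4%, and growing both permits 10%.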

Obviously, this is just an outline, and thus rather incomplete - but hopefully my thinking is clear.


OFDM - a technology miracle I've only just started to understand

In an attempt to stave off the winter blues, I've been trying to get a detailed understanding of technologies I depend on for everyday life. Most recently, I've been trying to get a handle on OFDM - the underlying modulation scheme in ADSL and in DVB-T.

To understand why OFDM is such an amazing invention, you first have to understand a little about digital transmission of data. At heart, digital modulation schemes are all simply ways of converting a digital signal (ones and zeros) into an analogue signal that can both be transmitted easily, and then decoded easily at the other end.

There are three basic classes of digital modulation:

  1. Amplitude modulation. In these, you're looking at different signal levels to get different bit patterns - for example, you might have a rule that says "loud signal is a 1, quiet signal is a 0". The advantage here is simplicity - the transmitter and receiver for these schemes are easy to implement (a Morse code transmitter and receiver form an amplitude modulation scheme). The disadvantage is that it's very prone to disruption by noise or imperfections in the transmission channel, so you struggle to approach the Shannon limit.
  2. Frequency modulation. In these, you look at the frequencies of the signal to determine what bit pattern is being transmitted - for example, you might say that "2kHz (high) pitch is a 1, 200Hz (low) pitch is a 0". DTMF in touch-tone phones is an example of a frequency modulation scheme. Again, these systems are simple (although not quite as simple to implement as amplitude systems), but they're still easily disrupted by some classes of noise.
  3. Phase modulation. In these schemes, you represent data by a phase shift - this is much harder to decode, but has the advantage of being much more noise resistant than the other two classes.

There's a particular combination of phase and amplitude modulation that's commonly used for its efficiency and noise resistance, called quadrature amplitude modulation (QAM). In QAM, you take two carriers at the same frequency, but 90° out of phase with each other. You control the amplitude of each carrier at the same time, using the pair of amplitudes to select a point in a constellation - each of these points represents a different bit pattern.

By changing the amplitudes of two waves that are 90° apart in phase, and combining them, we effectively shift both the phase and the amplitude of the signal at the same time. QAM is thus a mixture of phase and amplitude modulation, getting some of the advantages of both, at the expense of a more complex receiver.

So, how does this all tie into OFDM? Well, firstly, you need to recall some basic trigonometry; QAM uses two carriers 90° out of phase with each other. Another way of looking at this is that QAM uses the sine and cosine waves at a given frequency and phase as its carrier pair. The first step of a QAM demodulator separates the received signal into the cosine and the sine wave at that frequency.
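That separation step works because sine and cosine at the same frequency are orthogonal: multiply the received signal by each reference carrier and average over a whole number of cycles, and each amplitude drops out on its own. A toy sketch with numpy (the frequencies and amplitudes are arbitrary values of my choosing):

```python
import numpy as np

# Recovering the two amplitudes of one QAM symbol: a minimal sketch.
# We transmit I*cos(wt) - Q*sin(wt); mixing with cos (or -sin) and
# averaging over complete carrier cycles isolates each amplitude.
fc = 1000.0                     # carrier frequency, Hz (arbitrary)
fs = 100_000.0                  # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)  # one symbol: exactly 10 carrier cycles

I, Q = 0.7, -0.3                # the pair of amplitudes we want to send
signal = I * np.cos(2 * np.pi * fc * t) - Q * np.sin(2 * np.pi * fc * t)

# Demodulation: mix with the two reference carriers and average.
# mean(cos^2) = mean(sin^2) = 1/2 and mean(sin*cos) = 0 over whole cycles.
i_hat = 2 * np.mean(signal * np.cos(2 * np.pi * fc * t))
q_hat = -2 * np.mean(signal * np.sin(2 * np.pi * fc * t))
print(round(i_hat, 3), round(q_hat, 3))  # 0.7 -0.3
```

A real receiver also has to recover the carrier's frequency and phase before it can do this mixing, which is a large part of why QAM receivers are complex.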

The other bit of information you need to make OFDM make sense is that signal impairments (noise, attenuation, multipath interference and so on) don't affect all frequencies equally. For example, attenuation in a length of copper wire goes up as the frequency of the signal goes up; electronic equipment emits noise at various frequencies, but concentrated around some specific (equipment-dependent) frequencies - such as 33MHz for a PC with a PCI bus.

So, finally, onto OFDM; the basic idea is that rather than try and use the entire channel bandwidth to transmit a single high bitrate signal (as is done in Ethernet, for example), we'll split the channel into lots of narrow channels, and transmit a low bitrate signal on each of them. We can then recombine all the low bitrate signals together into the original high bitrate signal at the other end.

This sounds expensive, at first sight; 802.11a/g use OFDM with relatively few carriers (52), yet some applications, like DVB-T, can use thousands of carriers. If we had to have a full-blown QAM receiver for each carrier, it would be impossible to afford; here's where the miracle comes in. There's a mathematical trick, called the Fourier transform, which can split your incoming signal by frequency, giving you the amplitude of the sine wave and cosine wave at each frequency. Coincidentally, the first step of a QAM decoder is to split your signal into sine and cosine waves at each frequency. It's this trick that makes OFDM economic to implement; instead of having a QAM decoder for every frequency in the OFDM signal, we perform a Fourier transform on the incoming signal (which we can do efficiently using an algorithm called the FFT), and then all that remains is to quantise each of the amplitudes that comes out, and reconstruct the original digital signal.
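You can see the whole trick in a few lines of numpy. This is a toy round trip of my own construction (no channel noise, simple 4-QAM, 52 carriers as in 802.11a/g), but the key point survives: one inverse FFT builds the combined signal, and one forward FFT replaces the per-carrier QAM decoders:

```python
import numpy as np

# A toy OFDM round trip: map bits onto subcarriers, build the time-domain
# signal with an inverse FFT, then recover the bits with a forward FFT.
rng = np.random.default_rng(0)
n_carriers = 52                 # as in 802.11a/g
bits = rng.integers(0, 2, size=2 * n_carriers)

# 4-QAM mapping: each pair of bits picks one of four constellation points.
symbols = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)

# Transmitter: one QAM symbol per subcarrier, combined by an inverse FFT.
tx = np.fft.ifft(symbols)

# (A real channel would add noise, attenuation and echoes here.)

# Receiver: a single FFT separates every subcarrier at once - this is the
# step that would otherwise need one QAM demodulator per carrier.
rx_symbols = np.fft.fft(tx)

# Quantise each recovered constellation point back into bits.
rx_bits = np.empty_like(bits)
rx_bits[0::2] = (rx_symbols.real > 0).astype(int)
rx_bits[1::2] = (rx_symbols.imag > 0).astype(int)
assert np.array_equal(bits, rx_bits)
```

Real systems add a cyclic prefix, pilot carriers and error correction on top of this skeleton, but the FFT is still doing the heavy lifting.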

From this point, we can do all sorts of tricks; ADSL, for example, uses a different QAM constellation at each frequency to get the maximum data rate out of a single line. DVB-T uses the same QAM constellation in each sub-band, but uses a different subset of the available OFDM carriers for data in each symbol, so narrow band interference doesn't wipe out too much of the wanted data.

Further, because OFDM transforms a high symbol-rate (a symbol is one transmitted bit-pattern) wideband signal into a collection of low symbol-rate narrow-band signals, it makes it possible to play tricks that rely on each symbol lasting a relatively long time; for example, DVB-T can operate in a "single frequency network" mode, where several synchronised transmitters all send the same signal. Because the symbols are relatively slow, and all contain identical data, a receiver can work out what the original digital signal was, and correctly ignore the interfering transmitters.

All in all, OFDM is amazingly simple for what it actually achieves. There are other systems that do similar things (such as CDMA, used in mobile telephones and GPS), but they aren't as simple to explain.

Next on my list of interesting technologies to get the hang of is MIMO, as used in 802.11n wireless networks. This one's even more of a brain-strainer, as it somehow manages to use multiple aerials, all tuned to the same frequency, to get more data rate than could be achieved with a single perfect aerial.