2010/12/31

On Agile, TDD, and other Software Engineering processes

I've been looking around the job market recently, and one thing that's become much more visible since I last did so is the requirement to have experience in some form of "Agile" software engineering process. While I'm not going to argue that the requirement is pointless, I do wonder how much of it is from companies who don't understand what they really want.

To understand where I'm coming from, you must first understand what an engineering process actually is. As a competent engineer, you develop patterns of work that help you produce good results every time. An engineering process is what you have when you look at a pattern of work, and write it down with explanations of why you do something. For example, test-driven development (TDD) is based on the observation that, when working with a pre-existing system, the safest way to improve things is to write tests for the functionality that you want. As long as those tests keep passing, you know the system works; as soon as they fail, you know that you've broken something important. TDD takes that further - you always write tests first for everything, even new features. You know that the new tests will fail, and you write just enough code to get the new tests passing; because you've got full test coverage, you can safely engage in quite major rework of the code, knowing that the test suite will alert you to bugs.
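To make the red/green rhythm concrete, here's a minimal Python sketch; the slugify helper and its tests are entirely made up for illustration - the point is that the tests exist before the code, and the code does only enough to satisfy them.

```python
import re
import unittest


def slugify(title):
    # Just enough code to make the tests below pass: lower-case the title,
    # replace runs of non-alphanumeric characters with single hyphens, and
    # strip any hyphens left at the ends.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


class TestSlugify(unittest.TestCase):
    # Under TDD these tests are written before slugify exists; they fail
    # ("red"), then the minimal implementation above makes them pass
    # ("green"), and only then do we refactor.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("On Agile and TDD"), "on-agile-and-tdd")

    def test_punctuation_is_dropped(self):
        self.assertEqual(slugify("Capex, opex & cashflow!"), "capex-opex-cashflow")


if __name__ == "__main__":
    unittest.main()
```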

Another technique is pair programming. This relies on the observation that it's easier to spot other people making mistakes than it is to stop yourself making similar mistakes. So, put two programmers at one computer; have one of them write code, the other chiming in with observations and questions. Swap roles frequently, so that neither programmer gets bored.

The commonality here is that a good engineer will use whichever process is appropriate for the work they're doing; if I'm enhancing an existing feature, I will do TDD, as I can't risk breaking the existing feature, and it's easier to produce a clean, focused feature, rather than a mudball of loosely related features, if I'm having to think about how it will be used as I write each test case. Similarly, if I'm deep in a complicated problem, I'll pair-program with another programmer who's as competent as I am; they force me to explain things that I think are obvious, and, as it turns out, sometimes what I've assumed is obvious is not only not obvious, but is also key to solving the problem.

It should at this point be obvious why I think that asking for experience of Agile processes isn't necessarily going to get companies what they're after. Agile processes aren't helpful in themselves - they provide nice shorthands for good engineering practice, but you can follow them for years without ever doing a decent bit of engineering. For example, if you're doing TDD, you can write useless test cases, or write code that does a little more than the test case is supposed to cover. Before you know it, you're claiming to do TDD, but you're actually back in big-ball-of-mud disaster programming territory.

Similarly, pair programming works because the members of the pair are of similar levels of competence - if you pair up with programmers who are considerably worse than you, you don't benefit from it. However, it's possible to do mentoring (pair a good programmer and a bad one together, so that the bad one can learn from the good one), and call it pair programming; from the outside, they look the same.

Practically, you don't normally want to stick with one Agile process slavishly; the core principles underlying all Agile processes are good (short release cycles, adapt during development rather than trying to predict everything up front, don't waste time on something you might never need), but which detailed process is best depends strongly on what you're doing today. A good engineer knows enough to insist on pair programming when they're deep in a complicated problem, TDD when they're trying not to break an existing feature, Scrum-style sprints when appropriate - whatever parts of Agile will improve things today. A bad engineer will do TDD badly, so that it shows no gains, will use Scrum-style sprints to avoid facing up to the hard problems, will treat pair programming as an excuse to goof off, and will generally "have experience in Agile processes", yet show no gain from them.

What employers should really be looking for is good, adaptable engineers; these people will adjust to whatever process you have in place, will look to change it where it doesn't work, and won't hide behind buzzwords. Asking for "Agile processes" is no longer a way to catch engineers who keep up with the profession - it's now a way of catching people who know what the buzzwords are.

Having said that, I don't know what today's version of "Agile processes" should be; you need something that's new on the scene, that good engineers will be exposing themselves to and learning about, and that isn't yet well-known enough to encourage bad engineers to try and buzzword bingo their way past the HR stage.

2010/11/20

Economics of the telecoms industry

I seem to be in a ranty mood at the moment. Today's rant, however, is not negative - it's an education attempt. In particular, I've dealt with one person too many who doesn't seem to understand how the telecoms industry works in an economic sense, and thus why the price they pay their ISP for a home connection isn't comparable to the price a business pays for "similar" connectivity. On the way, I hope to convince people that different ISPs using the same wholesale suppliers can nonetheless offer radically different levels of service.

To begin with, a quick guide to important terminology:

Capex:
Capex (short for capital expenditure) is the money you have to spend up-front to buy equipment that you'll continue to use. For example, £20,000 on a new car is capex.
Opex:
Opex (short for operational expenditure) is the money you have to spend to keep something going. Using the car as an example again, your insurance costs are opex, as are your fuel costs.
Time value of money:
Time value of money is a useful tool for making capex and opex comparable. The normal way to use it is to calculate the present value of your opex cashflow; this gives you the amount of money you'd need up front to do everything from capital, without needing future cash for opex (or, alternatively, without needing to build opex into your pricing scheme).
Cost of money:
Cost of money is another tool for making opex and capex comparable; whereas time value of money converts opex to capex, cost of money converts capex to opex, by working out how much interest you could have earned (safely) if you didn't spend the money now.
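As a rough sketch of how these two conversions work in practice (the discount rate and the car-related cashflows below are invented purely for illustration):

```python
def present_value(annual_opex, years, discount_rate):
    """Convert a stream of equal annual opex payments into today's money.

    This is the lump sum you'd need up front to fund the opex, assuming you
    can earn `discount_rate` a year on money you haven't spent yet.
    """
    return sum(annual_opex / (1 + discount_rate) ** year
               for year in range(1, years + 1))


def annual_cost_of_money(capex, discount_rate):
    """Convert capex into an equivalent yearly cost: the (safe) interest you
    forgo by spending the money now instead of investing it."""
    return capex * discount_rate


# Illustrative numbers only: a £20,000 car (capex), £2,500/year to run it (opex).
print(round(present_value(2500, years=10, discount_rate=0.05), 2))  # opex as a lump sum
print(annual_cost_of_money(20000, discount_rate=0.05))              # capex as a yearly cost
```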

So, with this in place, how does the telecoms industry stack up economically? Well, firstly, there are three activities a telco engages in:

  1. Building and owning a telecoms network, whether a small office-sized one, or a big nationwide network.
  2. Buying access to other telcos' networks.
  3. Selling access to their own network.

Of these, the first is dominated by capex; depending on where you need to dig, and who you ask to do the digging, the cost of digging up the roads so that you can run your cables underneath them runs at anything from £20 per metre for some rural areas where no-one's bothered if your trenches aren't neatly filled in afterwards, to nearly £2,000 per metre for parts of London. In comparison, the remaining costs of running cable are cheap - ducting (so that you can run new cable later, or replace cables that fail) is around £3 per metre, expensive optical fibre is around £0.50 per metre (for 4-core fibre, enough to run two or four connections), while traditional phone cable is a mere £0.14 per metre. Even the coaxial cable used for cable TV and broadband is £0.32 per metre.

Once you've got your cables in the ground, you need to put things on the end of them to make them do good things. Using Hardware.com's prices on Cisco gear, and looking at silly kit (plugging everyone into a Cisco 7600 router, and letting it sort things out), you can get gigabit optical ports at around £1,000/port for 40km reach, including the cost of 4x10 gigabit backhaul ports from the router to the rest of your network.

Note that all of this is capex; given that your central switching points (phone exchanges, for example) are usually kilometres away from the average customer, you can see that the cost of setting up your network is almost all in building the cabling out in the first place; high quality fibre everywhere can be done retail for £4,000 per kilometre needed (complete with ducting), while your digging works cost you a huge amount more; even at £20 per metre, you're looking at £20,000 per kilometre. The cost of hardware to drive your link falls into the noise.

So, onto the opex of your network. You'll obviously need people to do things like physically move connections around, but most of your ongoing cost is power consumption. Again, this isn't necessarily huge; Cisco offer routers at 50W per port for 10 gigabit use, or 1.2kWh per day. At current retail prices, you'd be hard pressed to spend more than 50p/day on electricity to run the Cisco router, even allowing for the air conditioning it needs. Putting that together, and assuming that the typical customer needs £10/month of human attention, a 10 gigabit link has opex costs of around £40/month, including its share of the 10 gig backhaul to other parts of the country.

When you compare this to the capex costs of building your network, you can quickly see that the basis of the telecoms business is raising huge sums of capital, spending them on building a network, then hoping to make enough money selling access to that network that you can pay off your capex, and spend a while raking in the profits before you have to go round the upgrade loop again; your opex costs are noise compared to the money you've had to spend on capex; assuming your network survives ten years, your opex is going to be under £5,000 per port, while your capex for a typical port is going to be over £25,000. Given normal inputs to a time value of money calculation, you can work out that a network has to survive 20 years without change before your opex becomes significant.
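Here's the back-of-envelope version of that comparison in Python, using the figures from earlier in the post; the 1km route length, and treating the route as dedicated to one customer, are illustrative assumptions rather than real network planning:

```python
# Rough unit costs from the discussion above (all assumptions, not quotes).
DIG_PER_METRE = 20.0      # cheap rural digging; parts of London are nearer £2,000/m
DUCT_PER_METRE = 3.0
FIBRE_PER_METRE = 0.50
PORT_HARDWARE = 1000.0    # gigabit optical port, including a share of backhaul
OPEX_PER_MONTH = 40.0     # power plus human attention per customer
LIFETIME_YEARS = 10

def capex_per_port(route_metres):
    """Up-front cost of connecting one customer over a dedicated route."""
    per_metre = DIG_PER_METRE + DUCT_PER_METRE + FIBRE_PER_METRE
    return route_metres * per_metre + PORT_HARDWARE

def lifetime_opex():
    """Running costs over the assumed life of the network."""
    return OPEX_PER_MONTH * 12 * LIFETIME_YEARS

print(capex_per_port(1000))   # 24500.0 - the same ballpark as the per-port figure above
print(lifetime_opex())        # 4800.0 - the "under £5,000 per port" above
```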

So, how do you make money on this? Answer: you sell connections to people; you start by charging some fixed quantity per user, to cover the bare minimum of opex and ensure that no matter how the customer uses the connection, you don't lose money on opex. Then, you add a charge for usage; there are three basic models:

  1. Pay as you use of a high capacity link.
  2. Pay per unit available capacity.
  3. Percentile-based charging of a high capacity link.

The first is the familiar charge per second for phone calls; in this model, adapted for data connections, I pay you per byte transferred. You set the price per byte as high as you think I'll pay, so that you can pay off your capex, make a profit, and prepare for the next round of capex on network upgrades. You may also offer a variable price for usage (as my ISP, Andrews & Arnold, do), in order to encourage users to shift heavy use to times when it doesn't affect your network as much. This is also where peak and off-peak phone charges came from; if you used the phone at a time when the existing network was near capacity, the telco charged you more, in order to encourage you to shift as much usage as possible to off-peak, where there was lots of spare capacity, and hence allow the telco to delay upgrades.

The second is also simple. I pay you for a link with a given communications capacity, and I get that capacity whenever I use it; paying for unlimited phone calls is an example, as is an unlimited Internet connection. In this model, the telco is playing a complex game; if it sets the price for the capacity too low, people will use enough capacity on the "unlimited" link that it has to bring forward a high-capex network upgrade. If it sets the price too high, people will go to its competitors. A median position, used especially by consumer telcos, is to offer "unlimited with fair use", where you will be asked to reduce your usage or disconnect if you use enough that a network upgrade is needed to cope with you. This position can cause a lot of grief; people don't like to be told that, actually, this good deal for their usage level isn't for them, and that they're "excessive" users.

The third option (percentile billing) is the most common option used in telco to telco links. In a percentile billing world, there is a high capacity link that neither end expects to see fully utilised. Instead, the current utilisation is measured repeatedly (e.g. once per second). The highest measurements - the fraction above the chosen percentile - are discarded, and payment is based on the highest measurement that remains. A very common version of this is monthly 95th percentile; as used by ISPs, you measure once every second. You sort your month's measurements, and discard the highest 5% (e.g. in September, a month with 30 days, you have 2,592,000 seconds; you discard your highest 129,600 readings to get your 95th percentile). You then charge for the highest remaining measurement. For a simplified example, imagine that I measured a day's usage, and charged you at the 75th percentile. In February, you used 5 units a day for the first week, 1 unit a day for the next 20 days, then 50 units on the last day. The 75th percentile of 28 periods involves discarding the highest 7 measurements, so I discard the 50, and 6 of the 5s, leaving a peak measurement of 5 units. I thus charge you for 5 units/day for the entire month. Had you been able to keep the last day at 1 unit, your bill would have fallen to just 1 unit/day; you can thus see how percentile billing avoids charging for rare peaks, but doesn't let a user get away with lots of heavy use cheaply.
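A minimal Python sketch of that February example - sort the daily readings, throw away the top 25%, and bill the highest reading left:

```python
def percentile_bill(measurements, percentile):
    """Return the billable peak: sort the readings, discard the highest
    (100 - percentile)% of them, and charge for the largest that remains."""
    readings = sorted(measurements)
    keep = int(len(readings) * percentile / 100)   # e.g. 21 of 28 at the 75th
    return readings[keep - 1]

# February from the example: 5 units/day for a week, 1 unit/day for 20 days,
# then a 50-unit spike on the last day.
february = [5] * 7 + [1] * 20 + [50]
print(percentile_bill(february, 75))              # 5 - the 50-unit spike is discarded
print(percentile_bill([5] * 7 + [1] * 21, 75))    # 1 - keep the last day quiet and the bill drops
```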

I hope this has piqued some interest; as you can see, running a telco, especially at consumer prices, is much more akin to running a mortgage bank than a shop.

2010/11/07

A dangerous "benefits trap"

I'm feeling compelled to write this post, because I don't see any evidence that the current Chancellor understands this trap, and I see otherwise intelligent friends not understanding just why this is a dangerous trap for a government to fall into; often, they're letting themselves be blinded by an ideological view.

The trap in question is one where increasing my taxable income decreases my net income; there are two ways for this to happen. The first is obvious - if the tax rate applicable (once you combine income tax, national insurance, and any other taxes paid on income) is above 100%, increasing your pay decreases your net income. The second is more subtle; if you are paid income-linked benefits, and the increase in net pay is offset by a greater decrease in your income-linked benefits, you lose out.

Why is this so bad? There are two reasons:

  1. You set things up such that someone could earn more, and pay more tax, but be worse off than if they'd earned less. This results in things like someone refusing a pay rise that takes them into higher-rate tax, until it's enough to make up for the loss of child benefit; the government is thus losing out on tax revenue, and still paying out in benefits.
  2. You encourage people to depend on benefits rather than earned income, because they're better off that way - this is bad enough when people are depending on benefits because they value their free time above their possible earnings, but it becomes utterly crazy when they stay on benefits because getting off them would mean working hard for less money in the bank.

I have set up an example spreadsheet on Google Documents to illustrate this. My hypothetical benefit is £1,000/month (housing costs, say), paid to people who earn £12,000 per annum or less. To set up the trap, we've excluded people who earn £24,000 per annum or more from claiming the benefit at all, and we've decided that you lose benefit linearly as you earn more (e.g. people on £18,000pa get £500/month benefit). I've also simplified taxation - instead of multiple taxes on income, I've got a single rate and a personal allowance. We could add higher rate tax, and more personal taxes, but it doesn't seem worthwhile for a simple example.

You will notice that in my simple example, anyone who earns between £12,000 and £28,000/year is worse off than someone earning less than them. A rational actor will handle this by refusing to accept any job where they're paid in that range; the result is that (for example), instead of taking a job at £24,000, and paying a net £354.17/month in tax, someone sane will take a job at £12,000, and receive a net £895.83/month in benefits after tax is allowed for. It's obvious how this is bad for the government.
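If you'd rather not open the spreadsheet, here's a quick Python version of the same sums; the 25% flat rate and £7,000 personal allowance are my guesses at the spreadsheet's inputs, chosen because they reproduce the £354.17 and £895.83 monthly figures quoted above:

```python
def benefit_per_year(gross):
    """£1,000/month below £12,000, tapering linearly to nothing at £24,000."""
    if gross <= 12000:
        return 12000.0
    if gross >= 24000:
        return 0.0
    return 12000.0 * (24000 - gross) / 12000


def tax_per_year(gross, rate=0.25, allowance=7000):
    # Assumed single rate and personal allowance (see note above); they happen
    # to reproduce the quoted £354.17/month tax and £895.83/month net benefit.
    return max(0.0, gross - allowance) * rate


def net_per_year(gross):
    return gross - tax_per_year(gross) + benefit_per_year(gross)


for gross in (12000, 18000, 24000, 28000, 30000):
    print(gross, round(net_per_year(gross), 2))
# £12,000 gross nets £22,750; every gross salary up to £28,000 nets no more than that.
```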

It's also not all doom-and-gloom; in this case, by increasing the upper threshold from £24,000 a year to £28,000 a year, people earning between £12k and £28k simply don't take home more; increase the upper benefit threshold a bit further (say to £30k), and although you don't take home much more as you increase your pay, you do take home more money, and thus escape the trap. You can also escape the trap by limiting benefits; this has other, deeper, social implications, as it can result in the benefit not helping the people it's supposed to assist.

So, how do governments end up in this trap accidentally? The usual route is to link the level of a benefit to something different to the thresholds for being permitted to claim the benefit; for example, I could have tied the benefit level in my example to rents in London. When it started, I was paying £300/month maximum benefit; at that point, there was no trap. As rents rise, I don't review my thresholds, nor do I limit the benefit level; once rents exceed a certain threshold (around £750/month in my example), the trap appears. Capping the benefit just forces people out of an area completely, and creates "ghettos" of poor people kept out of the way of the rich, making it easier for the genuinely rich to be ignorant of what it's like living in a world where £10,000 isn't just the cost of a good night out, but is a significant sum of money.

2010/10/30

On funding the BBC

It looks like we're going to see another debate on the future of the BBC in the not too distant future. I happen to believe that the BBC plays two important roles in UK television, so I'm not in favour of anything that guts it; I do, however, believe that those two roles could be improved by a change to the BBC's funding.

So, firstly, what do I think the BBC does that's important?

  1. It takes risks. The BBC can do a programme like Sherlock, which could easily have been a complete disaster, because it doesn't have to worry about making a profit on every slot.
  2. It produces programming for minority interests. The BBC can broadcast the Paralympic Games knowing that the majority of people would prefer to watch football or mainstream athletics, because its funding method means that it can do the right thing anyway.

Further, having broadcast risky or minority programmes, the BBC sometimes ends up demonstrating to commercial broadcasters that their intuitions on what's potentially profitable are wrong; thus, it also stops the commercial channels descending into wall-to-wall dross, because they know that their viewers can always switch to the BBC.

Given these priorities, the BBC's funding mechanism needs to give them incentives to take risks rather than play it safe, and to worry about providing something for everyone rather than something for the few. As always when I present a problem, I have an idea to solve it.

To understand my suggestion, you first need to understand the audience measurement concepts of reach and audience size.

Audience size is really simple - just count everyone who watches a programme, or a channel, or a group of channels. There are some slight complexities here, such as deciding when someone counts as having watched a programme, but it's otherwise not hard.

Reach is a much more complex measurement; you have to count the number of unique viewers. The important thing about reach is that you can't just appeal to the same viewers again and again to increase your reach - you have to bring in new people each time.

A very simple example; imagine two TV stations, 1 and 2, and 6 people, A to F, and the following audience figures:

Time of day  | Station 1 viewers | Station 2 viewers
6pm to 7pm   | A, B, C           | D
7pm to 8pm   | B, C, D           | E, F
8pm to 9pm   | C, D              | A, B
9pm to 10pm  | A, B, D           | C, F

By audience size (the most commonly quoted rating), station 1 consistently equals or beats station 2; with the exception of the 8pm to 9pm timeslot, it has more viewers in every timeslot. However, on a reach rating, station 2 beats station 1 - it reaches 100% of the audience over that evening, whereas station 1 keeps attracting the same viewers again and again to get a 67% reach.
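A few lines of Python make the difference between the two measures obvious (the schedule is just the table above, with six viewers in total):

```python
# The evening's viewing figures from the table above.
schedule = {
    "Station 1": [{"A", "B", "C"}, {"B", "C", "D"}, {"C", "D"}, {"A", "B", "D"}],
    "Station 2": [{"D"}, {"E", "F"}, {"A", "B"}, {"C", "F"}],
}
population = 6  # viewers A to F

for station, slots in schedule.items():
    audience_size = sum(len(slot) for slot in slots)   # viewer-slots, counting repeats
    reach = len(set().union(*slots)) / population      # unique viewers only
    print(station, audience_size, f"{reach:.0%}")
# Station 1: 11 viewer-slots, but only 67% reach.
# Station 2:  7 viewer-slots, yet 100% reach.
```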

With my terms explained (albeit badly), I can explain my proposal. I intend to reuse the BARB reach measurement for TV, and the RAJAR reach measurement for radio, as they're already taken anyway for the benefit of commercial broadcasters. I'm ignoring the BBC's online services completely, for these purposes; I believe that there's enough competition online, and it's easy enough to find, that the BBC is not needed as an aid to competition.

The BBC would have to face two measurements in my world. First, it would be measured on the reach of its television services alone; it must reach 99% of households with a TV licence. This allows for a small number of law-abiding households that don't watch the BBC on philosophical grounds, but still forces the BBC to provide something for everyone. If the BBC fails to achieve this, it has to "buy" ratings from the commercial and state broadcasters until it has the 99% figure; note that because it's buying historic ratings, it has to buy programming that helps it achieve its reach goal, thus giving commercial broadcasters an incentive to find a minority interest that the BBC does not service.

The second measurement includes both TV and radio services, and controls the BBC's future funding. For each of the two figures, you end up with three outcomes:

  1. Reach increased compared to last year.
  2. Reach the same as last year (within error bounds).
  3. Reach decreased compared to last year.

If either reach figure decreases, the BBC is limited to an inflation-only rise in income; if both decrease, the BBC's revenue is not increased at all. This gives the BBC a strong incentive to avoid losing reach - lose reach in one medium, and you're limited to an inflation-only rise in income. Lose reach in both media, and you're making cuts.

If one figure rises, and the other stays the same, the BBC is permitted a small increase over and above inflation - say the lower of 5% or the inflation rate. If the BBC can make both reach figures rise, it's permitted a larger increase - say the lower of 10% or four times the inflation rate. In both cases, the tie to inflation ensures that the BBC never grows rapidly; but by tying increases to reach, the BBC is prevented from growing at all unless it can appeal to more of the population than before.
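To show how mechanical the rule is, here's a sketch of it as a Python function; the 0.5% "within error bounds" margin is my own invented figure, and since the post doesn't say what happens if both figures stay flat, I've assumed an inflation-only rise there:

```python
def licence_fee_rise(tv_reach_delta, radio_reach_delta, inflation, margin=0.005):
    """Permitted annual rise in BBC income, given year-on-year changes in reach
    (as fractions of the population) and the inflation rate."""
    def classify(delta):
        if delta > margin:
            return "up"
        if delta < -margin:
            return "down"
        return "flat"

    tv, radio = classify(tv_reach_delta), classify(radio_reach_delta)
    if tv == "down" and radio == "down":
        return 0.0                                    # both fell: no rise at all
    if "down" in (tv, radio):
        return inflation                              # one fell: inflation only
    if tv == "up" and radio == "up":
        return inflation + min(0.10, 4 * inflation)   # both rose: the larger bonus
    if "up" in (tv, radio):
        return inflation + min(0.05, inflation)       # one rose, one flat
    return inflation                                  # both flat (my assumption): inflation only


print(licence_fee_rise(0.01, 0.0, inflation=0.03))    # 0.06, i.e. a 6% rise
```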

Obviously, this is just an outline, and thus rather incomplete - but hopefully my thinking is clear.

2010/10/17

OFDM - a technology miracle I've only just started to understand

In an attempt to stave off the winter blues, I've been trying to get a detailed understanding of technologies I depend on for everyday life. Most recently, I've been trying to get a handle on OFDM - the underlying modulation scheme in ADSL and in DVB-T.

To understand why OFDM is such an amazing invention, you first have to understand a little about digital transmission of data. At heart, digital modulation schemes are all simply ways of converting the digital signal (ones and zeros) to an analogue signal that can both be transmitted easily, and then decoded easily at the other end.

There are three basic classes of digital modulation:

  1. Amplitude modulation. In these, you're looking at different signal levels to get different bit patterns - for example, you might have a rule that says "loud signal is a 1, quiet signal is a 0". The advantage here is simplicity - the transmitter and receiver for these modulation schemes are easy to implement (a Morse code transmitter and receiver form a simple amplitude modulation scheme). The disadvantage is that it's very prone to being disrupted by noise or imperfections in the transmission channel, so you struggle to approach the Shannon limit.
  2. Frequency modulation. In these, you look at the frequencies of the signal to determine what bit pattern is being transmitted - for example, you might say that "2kHz (high) pitch is a 1, 200Hz (low) pitch is a 0". DTMF in touch-tone phones is an example of a frequency modulation scheme. Again, these systems are simple (although not quite as simple to implement as amplitude systems), but they're still easily disrupted by some classes of noise.
  3. Phase modulation. In these schemes, you represent data by a phase shift - this is much harder to decode, but has the advantage of being much more noise resistant than the other two classes.

There's a particular combination of phase and amplitude modulation that's commonly used for its efficiency and noise resistance, called quadrature amplitude modulation (QAM). In QAM, you take two carriers at the same frequency, but 90° out of phase with each other. You control the amplitude of each carrier at the same time, using the pair of amplitudes to select a point in a constellation - each of these points represents a different bit pattern.

By changing the amplitude of two waves that are 90° apart in phase, and combining them, we effectively shift both the phase and the amplitude of the signal at the same time; QAM is thus a mixture of phase and amplitude modulation, getting some of the advantages of both, at the expense of a more complex receiver.

So, how does this all tie into OFDM? Well, firstly, you need to recall some basic trigonometry; QAM uses two carriers 90° out of phase with each other. Another way of looking at this is that QAM uses the sine and cosine waves at a given frequency and phase as its carrier pair. The first step of a QAM demodulator separates out the received signal into the cosine and the sine wave at that frequency.

The other bit of information you need to make OFDM make sense is to realise that signal impairments (noise, attenuation, multipath interference etc) don't affect all frequencies equally. For example, attenuation in a length of copper wire goes up as the frequency of the signal goes up; electronic equipment emits noise at various frequencies, but concentrated around some specific (equipment-dependent) frequencies - such as 33MHz for a PC with a PCI bus.

So, finally, onto OFDM; the basic idea is that rather than try and use the entire channel bandwidth to transmit a single high bitrate signal (as is done in Ethernet, for example), we'll split the channel into lots of narrow channels, and transmit a low bitrate signal on each of them. We can then recombine all the low bitrate signals together into the original high bitrate signal at the other end.

This sounds expensive, at first sight; 802.11a/g use OFDM with relatively few carriers (52), yet some applications, like DVB-T, can use thousands of carriers. If we had to have a full-blown QAM receiver for each carrier, it would be impossible to afford; here's where the miracle comes in. There's a mathematical trick, called the Fourier transform, which can split your incoming signal by frequency, giving you the amplitude of the sine wave and cosine wave at each frequency. Co-incidentally, the first step of a QAM decoder is to split your signal into sine and cosine waves at each frequency. It's this trick that makes OFDM economic to implement; instead of having a QAM decoder for every frequency in the OFDM signal, we perform a Fourier transform on the incoming signal (which we can do efficiently using an algorithm called the FFT), and then all that remains is to quantise each of the amplitudes that comes out, and reconstruct the original digital signal.
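Here's a toy end-to-end OFDM link in Python (using numpy), just to show the FFT trick in action; the constellation is QPSK, the carrier count of 52 is borrowed from 802.11a/g, and real systems add cyclic prefixes, pilot carriers and channel equalisation that I've left out entirely:

```python
import numpy as np

N = 52                                    # sub-carriers, as in 802.11a/g
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N)     # 2 bits per QPSK symbol

# QPSK mapping: each bit pair picks one of four constellation points.
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

# Transmitter: the inverse FFT turns one symbol per carrier into a time-domain
# waveform. Receiver: the FFT splits the (noisy) waveform back into the
# sine/cosine amplitudes on each carrier - the front-end of every per-carrier
# QAM demodulator, all done in a single transform.
tx = np.fft.ifft(symbols)
rx = tx + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
received = np.fft.fft(rx)

# Quantise each carrier's amplitudes back to bits.
recovered = np.empty(2 * N, dtype=int)
recovered[0::2] = received.real > 0
recovered[1::2] = received.imag > 0

assert np.array_equal(recovered, bits)
print("all", 2 * N, "bits recovered")
```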

From this point, we can do all sorts of tricks; ADSL, for example, uses a different QAM constellation at each frequency to get the maximum data rate out of a single line. DVB-T uses the same QAM constellation in each sub-band, but uses a different subset of the available OFDM carriers for data in each symbol, so narrow band interference doesn't wipe out too much of the wanted data.

Further, because OFDM transforms a high symbol-rate (a symbol being the signal for one bit-pattern) wideband signal into a collection of low symbol-rate narrow-band signals, each symbol lasts a comparatively long time, and that makes further tricks possible; for example, DVB-T can operate in a "single frequency network" mode, where several synchronised transmitters all send the same signal. Because the symbols are relatively slow, and every transmitter is sending identical data, the copies arriving from more distant transmitters look like ordinary multipath echoes; the receiver can still work out what the original digital signal was, rather than being swamped by interference.

All in all, OFDM is amazingly simple for what it actually achieves. There are other systems that do similar things (such as CDMA, used in mobile telephones and GPS), but they aren't as simple to explain.

Next on my list of interesting technologies to get the hang of is MIMO, as used in 802.11n wireless networks. This one's even more of a brain-strainer, as it somehow manages to use multiple aerials, all tuned to the same frequency, to get more data rate than could be achieved with a single perfect aerial.

2010/07/27

Rant: On asking questions of your engineers

So, this is a pure and simple rant. Feel free to ignore it if you prefer positive thoughts.

When you are asking me a question about something, and you have a definite idea of how you expect me to answer, ask me a specific question that will elicit the answer you want. Don't ask "how well does it perform?" when what you want to know is "how many packets per second can it push?" or "how many videos can it play at once?".

If you can't work out how to phrase the question in a way that makes the type of answer you want obvious, be prepared to have to go back and forth; I'm a software engineer, not a telepath. When I respond to your question with a question of my own, it's because I'm not clear on what you want to know; I'm not trying to shove you away.

Don't get upset when I answer a question with an apparently evasive answer like "well, it's like this if you're thinking of UK Freeview, but like this if you're thinking of New Zealand Freeview, and like this if you're thinking of US cable". I'm not aiming to be deliberately irritating - I want to give you the most accurate answer I can, so that you can avoid accidentally misleading people.

I will lapse into jargon - if you understand it, it's clear as day. If you don't, say so, tell me which bits of jargon are confusing you, and I will try and explain it better. Again, I'm not trying to confuse, I'm trying to help.

When you ask me a question, give me context; don't just ask the question you think you want answered, but include details of why you want it answered. Sometimes, I'll come up with a solution that doesn't need an answer to the question you thought of; sometimes I'll find a way to give you an answer that isn't what you wanted, but that solves your problem perfectly.

Make it clear when you want a detailed answer, and when you want a high level overview; "Can we do this?" is likely to get a "Yes, but there are these problems you need to be aware of when you try and do it" or a "No, but if we did this instead, you'd get a similar result" - I expect you to know enough about why you want to do this to know if my caveats or suggestions are helpful. "I'm filling in a checklist for a bid - can I tick this box?" will get you a "Yes", a "yes, but you have to understand it this way", a "no, but if you pretended they meant this instead you could", or a plain, "no, you can't".

Finally, remember that I'm a person, too, and that means I want to help you if I can - don't cut me off or get annoyed when I offer an answer that isn't what you want; instead, work with me to get yourself an answer that's possible, and that you can live with.

2010/05/25

Thinking rationally about big numbers

One difficulty you see in both politics and everyday life is in comprehending the meaning of numbers; things like "doubling your risk", "as many as 2 million British residents affected", or even "just 50p a day" get thrown around, and you are expected to somehow understand what these numbers mean to you, and whether they should or should not influence your behaviour. I'm not going to look at how these numbers get abused; I recommend Ben Goldacre's Bad Science and Mark Chu-Carroll's Good Math, Bad Math for that.

Instead, I'm going to discuss techniques you can use to help turn overwhelming numbers that you can't get a good grip on into numbers you can understand intuitively. What do I mean by intuitive? In this case, it means that you've got them into the range of numbers that you encounter every day and that you can reason about - this means small fractions (nothing much less than 0.05, or one twentieth), and small numbers (nothing more than a few dozen). It means bringing numbers that refer to big groups down to talking about the group of people you know personally. It means that you've got numbers where your instinctive feel for what a number means actually works properly.

So, let's start with an example; in their manifesto, the BNP claim that there are 300,000 to 500,000 third world immigrants to Britain every year. This is too big a number for me to comprehend instinctively, so my first reaction is to shrink it. The population of the UK is around 60 million, so the BNP's number is less than 1% of the population.

This, however, is still an awkward number to think about; I don't normally think about 1% of a person. So, let's look at this a different way; in a typical week, how many people do you interact with who you'll still recognise a week later (including things like the helpful checkout assistant at the supermarket, whose name you'll never remember, but who you'll recognise by sight)? I reckon there's under 50 of those in a week for me; so, the BNP statistic reduces down to half a person extra in my week who I might recognise in future. Even thinking about the number of people I interact with in a year who I'd still recognise later, I struggle to get to 100 people, which reduces the BNP statistic to "less than one person a year in the size of group I understand".

Once you reduce the scary big number to that, it's not so scary - you can now get into a more meaningful consideration of the number - is an extra person a year in the group you interact with something the country can absorb? Are they being absorbed into the general ebb and flow of British citizenry, or are you seeing Third World ghettos appear in your neighbourhood?

For a different example, imagine newspaper headlines telling you that getting up at 6am and going to bed at 6pm halves your chances of dying from a heart attack before you're 60. It sounds like you should change your habits to match the study; a 50% reduction is huge. However, on further reading, it's not nearly as impressive as it sounds: 94,000 people die of a heart attack in the UK each year, in a population of 60 million, which works out to around 0.16% of the population, or 1 person in every 600, each year. This includes people who die of heart attacks while over 60, which biases the analysis a little, but we'll continue with the known-faulty figure. A useful way to think of this is that in a typical group of (say) 10 people - yourself and your closest friends and family - you'd expect to get through 60 years before one of the group had a heart attack. If you make the group a bit bigger (say 30 people), you now only expect to get through 20 years before someone has a heart attack.

A halving of risk in this case doubles the time to the expected heart attack, so our big group now has 40 years between heart attacks, not 20. And, of course, this figure is based on a known-bad assumption; we assumed that under-60s were as likely to have a heart attack as over-60s. However, we now have a more useful way to think about the headline claim - it means that a group of 30 people who follow the advice go from losing a member to a heart attack every 20 years, to losing one every 40 years. Thinking about it this way, you may decide you prefer not to worry about making drastic lifestyle changes.
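The arithmetic behind those numbers is short enough to sketch in Python; the expected-wait formula is a crude approximation (one over the group's yearly rate), which is all the precision this kind of sanity check needs:

```python
deaths_per_year = 94_000            # UK heart attack deaths, as quoted above
population = 60_000_000
rate = deaths_per_year / population           # ~0.16% per person per year

def years_to_first_case(group_size, yearly_rate):
    """Rough expected wait before a group of this size sees its first case."""
    return 1 / (group_size * yearly_rate)

print(round(years_to_first_case(10, rate)))       # ~64 years for 10 people
print(round(years_to_first_case(30, rate)))       # ~21 years for 30 people
print(round(years_to_first_case(30, rate / 2)))   # halve the risk: ~43 years
```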

Two examples, two different sets of reasoning; what's the link? In both cases, I started with a number I couldn't intuitively handle; either too large, or too abstract. I used external statistics to convert it from a raw figure to a probability or proportion I could apply to a group. Finally, I applied the probability to a group that I could picture in my head (all the people I deal with in a week, a group of my closest friends), instead of to an abstract group that I can't really think about. Once I'd done this, I had a number I could reason about rationally, and where my intuition about what it meant matched cold logic, instead of a scary number that I couldn't handle.

2010/05/14

On freeloaders in the benefits system, and reducing the offence they cause

One complaint that appears in the popular press from time to time is that the welfare state is too generous to undeserving people, while not being generous enough to hard-working citizens.

It's an unfortunate reality that no matter how careful you are in designing your welfare state, you're guaranteed one of three outcomes:

  1. Some people freeload and get away with it, because there's nothing in the system to stop them.
  2. Some deserving people don't get the help they need, because the system says no.
  3. The system costs a huge amount to administer, and is much more expensive overall than a system that permits freeloading - you can easily end up spending hundreds of pounds on administration for each pound of freeloading prevented.

I hope that people would agree with me that the third option is absolutely insane - there comes a point where pragmatism says that you shouldn't spend hundreds of pounds just to guarantee that a few pennies don't get given to people who don't need them. This leaves us with just two choices; I would personally prefer that deserving people don't miss out, which means that I have to tolerate freeloaders.

Given this state of affairs, what can be done to ensure that freeloaders don't upset people? First, we need an understanding of why freeloaders upset people; I believe that, in large part, people get upset because they work hard "for what they have", while freeloaders appear to get more by not working at all. We therefore need to ensure that freeloading isn't going to leave you better off than working.

Of course, I have an idea to solve this; simply put, we have plenty of benefits that could easily be paid to everyone, regardless of need, with those who don't need them paying them back in taxes. For example, the highest rate of Jobseeker's Allowance is £65.45 per week per person (around £3,500 per year). The basic rate of income tax is currently 20%, but by removing the personal allowance completely (replacing it with what used to be JSA), you claim back £1,500 from basic rate tax payers; you could then increase the basic rate to claim back the remaining £2,000, or accept that by pushing more people into the higher rate tax band, you're getting more income tax. Further, because people continue to get the same rate of benefits whether they work or not, you encourage people to work who currently don't - whether because they're scared of losing their benefits, or because they can't stick at a job for more than a few months at a time - and thus to pay taxes.
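As a minimal sketch of why this removes the trap (the 25% rate and the complete removal of the personal allowance are illustrative assumptions, not a worked-out tax policy):

```python
UNIVERSAL_PAYMENT = 3500   # roughly £65.45/week of JSA, now paid to everyone
RATE = 0.25                # basic rate nudged up to claw the payment back

def net_income(gross):
    # No personal allowance, so tax applies from the first pound earned,
    # and the flat payment arrives regardless of how much you work.
    return gross * (1 - RATE) + UNIVERSAL_PAYMENT

for gross in (0, 4000, 12000, 24000, 40000):
    print(gross, net_income(gross))
# Net income rises strictly with gross pay: doing more work always pays.
```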

As a second upside, you reduce the amount of administration needed; if you look at all the means-tested benefits, and turn those that can fairly be paid to everyone into universal payments reclaimed via tax, you reduce the number of people you need to administer the benefits system. There will still be exceptional cases (disability benefits for one), but they're fewer and further between.

There are, of course, downsides. For one, you need to be good at catching tax dodgers; if people hide from the tax system in the black market, the maths stops working out. You still have a social problem; while you're now always better off working for a living, people will still resent carrying those who could work but don't. I do, however, believe that this system will lead to less resentment, and even, possibly, more employment overall (it becomes possible to work for a couple of hours a week, without losing out on your "bennies").

2010/05/01

A proposal for Parliamentary reform in the UK

Anyone in the UK will be aware that we've got a general election coming up, and thus that our parliament has been dissolved until after the election; one of the many things we were promised that hasn't materialised is full reform of the upper chamber (the House of Lords). This, then, is my modest proposal for reforming Parliament to better reflect the modern UK.

First, a brief look at what we have now. Parliament is split into two houses: the Commons, made up of elected MPs, and the Lords, who are appointed for life (in theory by the monarch, in practice on the advice of the government of the day). New legislation can be proposed in either house; it's debated and amended by the house in which it was proposed. Once it's approved, it's sent to the other house, where the process is repeated by a different group of people; eventually, both houses approve the legislation without amendment, and it becomes law.

There is a tie-break process in place (the Parliament Acts) to ensure that the Lords cannot completely block legislation. If the Commons passes the same bill in two successive sessions, without accepting amendments from the Lords, the Lords are not given the chance to reject or amend the legislation again.

By and large, this setup works. The Lords act to balance the Commons' short-term thinking, but cannot hold back a Commons that is determined to pass a particular piece of legislation. The back and forth process between the two houses ensures that most legislation gets discussed properly, and that it's possible for Commons MPs (who are under pressure from voters) to say "we tried to get your knee-jerk legislation in place, but it got watered down by the Lords".

There are, however, two issues that I see with it:

  1. The Lords is often reactionary and disconnected from the populace; this leads to media discontent with the Lords. I'm not entirely sure that this is an issue worth solving, as it's more about perception than reality.
  2. By and large, the flow of legislation is entirely one way; the statute book grows, but does not normally shrink. Ignorance of the law is no defence in English law, so as time goes on, it becomes harder to remain a law-abiding citizen.

Naturally, I have proposals for fixing both of these; I will start with the second, as I see this as a bigger issue.

Firstly, we need to think about why the statute book keeps growing; there are many causes for this:

  • Statute starts to cover more and more things that were previously not a problem. For example, there are no 17th century laws regulating driving on the motorway, or mobile telephony.
  • Statutes get introduced to cover things that are a concern at the time, but which are no longer relevant; for example, we still have statute covering the militias, which have long been replaced by the modern Army.
  • Statutes are introduced to cover a very specific problem; after a while, we work out that the specific problem is just one example of a more general problem, we legislate to cover the more general problem, but the specific legislation remains.
  • Statute covers something that, at the time of introduction, is believed to be a problem. As time goes by, and society adjusts, we stop enforcing the statute, as we discover that there is no problem; however, the statute remains, and can suddenly be enforced at any time, surprising people when it is.

It's clear that some of this can't be avoided - we need new legislation to handle problems that never existed before, and it's not surprising that general legislation overtakes specific legislation as the true scope of a problem becomes clear. However, some of it is simple waste; once we have the general legislation, we don't need the specific legislation. We don't need legislation that refers to a time gone by, and that's now irrelevant. We should lose legislation that's no longer enforced.

So, on to my modest proposal to fix this: all legislation should have a built-in expiry date. If it's not renewed by the expiry date, it comes off the statute books, and, if it's still needed, it must be reintroduced. In order to stop important legislation falling off the books, I would put a duty on the Lords to review all legislation that's approaching expiry.

Obviously, there are details to work out; what expiry date should be put on legislation? If it's too short, the Lords wastes time renewing legislation again and again; if it's too long, it doesn't keep the statute book clear. I would suggest that the ideal unit of time is the lifetime of a parliament (typically 5 years, although it can be shorter). By making the expiry the end of a parliament, we give the Lords time to renew legislation before it expires. In order to keep statute changes flowing through, I would suggest setting the expiry based on the vote that passed the legislation, according to the table below:

Size of majority in Commons | Size of majority in Lords | Expiry of legislation
50% or more of the MPs who voted | 50% or more of the members who voted | End of the next parliament (circa 5 years)
75% or more of the MPs who voted, and at least 50% of all MPs voting | 50% or more of the members who voted | End of the parliament after next (circa 10 years)
50% or more of the MPs who voted | 75% or more of the members who voted, and at least 50% of all members voting | End of the parliament after next (circa 10 years)
75% or more of the MPs who voted, and at least 50% of all MPs voting | 75% or more of the members who voted, and at least 50% of all members voting | End of the 4th parliament after this one (circa 20 years)

This ensures that legislation which MPs can be persuaded to care about lasts longer than legislation that doesn't particularly appeal to their sense of duty; for example, murder laws would almost certainly get renewed for close to 20 years at a time, as would a new bill protecting people from unlawful killing, while "special interest" legislation is unlikely to last much more than 5 years at a time.

The renewal process would work much like the process for introducing new legislation; someone brings the bill for renewal before the house, it's debated, voted on, and eventually, passes; the size of majority affects when it next comes up for renewal, just like new legislation.

I would expect this to gradually reduce the size of the statute book; because many laws are non-controversial (and thus will get renewed without question), it reduces the time available to produce new laws. Laws that should be repealed will face fresh scrutiny on a regular basis, and, with any luck, the resulting reduction in rate of change of legislation will enable any citizen of the UK to memorise all the statutes that affect them.

So, onto the less important reform; making the Lords democratically accountable. There are 646 seats in the House of Commons. I would make the Lords the same size as the Commons, but, instead of electing each seat using first past the post, I would group Commons seats into 38 groups, each of which returns 16 members to the Lords. These members would be elected by the constituents of the 17 seats they represent using single transferable vote; a system in which electors mark their preferences in order, and which aims to elect the least objectionable of the candidates. Elected members survive 4 parliaments, and each constituency would be expected to elect in groups of 4 at a time at the same time as they elect their Commons MP.

This results in members of the Lords being able to take a much more long-term view than Commons MPs; they can expect to be in office for 20 years at a time, compared to the 5 years of a Commons MP. It also reduces the turnover, resulting in a lot of continuity in the Lords. STV also tends to prevent tactical voting - you can afford to put the virtually unelectable candidate you agree with before the very electable candidate you could just about live with, knowing that if you're right, and they don't get elected, your vote still helps prevent the candidate you couldn't live with from winning.

This leaves 38 seats empty, as against the Commons. I would increase this by 2, to get 2 more Lords than Commons. Again, these 40 would survive 4 parliaments at a time, and again, I would replace them in groups of 10, so that there's a slow turnover. However, these 40 would be appointed by the incoming Commons administration; their role is to ensure that there are people in the Lords who can act as a link between the administration, including past administrations whose laws are up for renewal, and the Lords, to explain the rationale behind Government bills in Lords debate.

As you may have gathered from this ramble, I do have an interest in politics. I would be interested to hear other people's views on reforming UK politics - including explanations of why it's not needed.

2010/03/27

How a 1980s telecoms compromise helped set the Bluray video format.

I've mentioned the ATM committee's weird decisions before, when talking about ADSL; this post is meant to make you think about how seemingly small committee decisions can have long term impact.

To recap; in the 1980s, telecoms engineers were setting standards for "fast" data links (45MBit/s and above), to be used to carry voice, fax and data; the decision was made to use small cells, so that even slow links could use the same standard. The resulting standard, ATM, has a 48 byte payload and a 5 byte header on every cell.

There's only one standard set worth considering if you're interested in serious video; the MPEG standards. Both DVD and Bluray are built on MPEG; DVD uses MPEG-2 exclusively, carrying video and audio in an MPEG-2 program stream, extended to add timecode to packets, resulting in the "VOB" file format. This has limitations when it comes to seeking; when an optical disk player seeks, it moves the read head to a location that's approximately right, then reads the disk until it finds the timecode it's after, then either moves the heads again (to a better estimate of the correct place), or resumes playback. Because program stream packets are variable-length, the player can end up reading a significant amount of data, only to discover that the timecode tells it that it's badly off, and it has to seek again.

Bluray escapes this by using MPEG-2 transport streams as the container, but with a 4-byte timecode added to each transport packet. Transport stream packets are fixed in length at 188 bytes, so a Bluray player never needs to read more than 192 bytes after seeking before it can decide whether it needs to seek again to a better guess at the correct location, or whether it can just read on to the right point.

188 bytes is a rather unusual number; the header is 4 bytes long, and the payload length of 184 bytes is neither a nice number for humans, nor is it a nice number for computers to deal with. So, why did the MPEG-2 committee choose 188 bytes for transport stream packet size? It all comes back round to ATM; when MPEG-2 was being designed, ATM was the telecommunications networking technology of choice, and it was considered important that you should be able to easily carry MPEG-2 transport packets in ATM.

There are two sensible ATM Adaptation Layers for MPEG-2; AAL1, meant for constant bit rate services (such as carrying a multiplex from BBC headquarters to the Freeview transmission sites around the country), and AAL5, for services that can cope with a long delay. AAL1 takes 1 byte from every cell's payload, leaving 47 bytes for the user; 47 times 4 is 188, which is where MPEG-2 gets 188 bytes from. It also works well for AAL5; AAL5 uses 8 bytes from every group of up to 1366 cells, leaving 40 bytes in that cell and 48 bytes in the remaining cells in the group for user data; two 188 byte packets plus 8 bytes of AAL5 overhead fit precisely in 8 cells with no padding.
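The arithmetic is easy to check; here's the whole of it in a few lines of Python:

```python
ATM_PAYLOAD = 48
AAL1_PAYLOAD = ATM_PAYLOAD - 1     # AAL1 takes 1 byte of each cell's payload
TS_PACKET = 188                    # MPEG-2 transport stream packet

# One transport packet fills exactly four AAL1 cells.
assert TS_PACKET == 4 * AAL1_PAYLOAD

# AAL5 adds an 8-byte trailer to the group of cells carrying a frame;
# two transport packets plus that trailer fill exactly eight cells.
assert 2 * TS_PACKET + 8 == 8 * ATM_PAYLOAD

print("188-byte packets fit AAL1 four cells at a time, and AAL5 eight cells per pair")
```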

So, to interoperate well with ATM, the MPEG-2 guys chose 188 bytes for their fixed-packet-length container. MPEG has been wildly successful, so the Bluray guys didn't want to design their own container format. As a result, a compromise between France (who wanted a 32 byte payload) and the USA (who wanted a 64 byte payload) has influenced the design of the most modern consumer video format to date. Next time you're compromising on a technical issue, think hard; your compromise may live on longer than you expected, and affect more people than you thought it would.

2010/03/24

Ada Lovelace Day - Valerie Aurora

So, it's Ada Lovelace Day. The day on which geeks everywhere are asked to point out geeky women who've inspired them. I would love for this to be an irrelevance, and for no-one to care that much; unfortunately, the geek community is still male-dominated. I've therefore picked Valerie Aurora as a geek whose work inspires me, and who happens to be female.

Why do I find Val's work inspiring? In part, it's inspiring because she tackles important problems; not necessarily interesting problems to the general population, but problems that need to be dealt with now. However, there are lots of programmers who do that; what makes this one special?

The difference in Val's work is the clarity of explanation she produces. Most programmers struggle to explain their work to other programmers in the same field; if you read any of the publications linked on her homepage, you'll note that she writes clearly and concisely in English, not just in the usual set of programming languages. Combine that with the high technical standards maintained in the publications and in Val's work on things like the Linux kernel, and I'm impressed.

To top all this off, she's written a very good document aimed at encouraging women to break out of unfair social constraints: HOWTO negotiate your salary. Often, all it takes to break a bad cycle is someone to notice it, and point it out to the people involved; this document is a perfect example of how to make a difference.

It's often said (with good cause) that when members of a previously underrepresented group enter a field, they have to be not just good, but noticeably above average. Val's work fits this stereotype, and sets a standard that I hope to be able to meet one day.

2010/03/14

DIY garden replacement

People who know me well know that about 3 years ago, my wife and I bought a house in need of a lot of TLC. This week's decision has been that we should redo the garden before hay fever season makes it horrendous to go outside.

We've got a lot ahead of us. The grass isn't appropriate for a lawn, there are stones everywhere, and Rachael wants a vegetable patch. The current plan of campaign is to use two weekends to do it.

Weekend 1 plans

  • Hire a sod cutter and remove the entire top surface of the garden.
  • Tear down the remains of the shed.
  • Place new turf over the lawn area.
  • Cover the area that's going to be vegetable garden with plastic sheeting.
  • Cover the area that'll be used to extend the patio with plastic sheeting.
  • Cover the area that'll be used for a new shed with plastic sheeting.

Weekend 2 plans

  • Buy a new shed, cement, and a taller pole for the satellite dish.
  • Cement in the tall pole.
  • Move the satellite dish and recable.
  • Lay the shed foundations.
  • Put up the new shed (just for storage space, no need for power/light).
  • Lay extra patio.
  • Start putting in raised beds (large) for the vegetable garden.

As you can see, this is a fairly hefty work plan; I hope we're up to it...

2010/03/06

Calculating the number of VoIP channels you can fit on an ADSL line

A question that occasionally comes up in the #A&A IRC channel is "how many VoIP channels can I fit on one ADSL line?" The answer is quite simple to calculate, and you can do similar arithmetic for other realtime traffic:

We start with some general knowledge; people who read my previous post on ADSL overheads will be aware that ADSL sync rates are the number of ATM cell bits you can fit down the line. ATM cells are 53 bytes long, so you divide your sync rate by 424 (bits per cell) to get the cell rate. So, all we need to know is how many cells per second a single VoIP call uses. The table below details everything that goes into a single 20 millisecond packet of G.711 (a-law or µ-law) VoIP:

Bytes   Reason
    8   AAL5 trailer
    2   PPP framing for PPPoA
   20   IPv4 header
    8   UDP header
   12   RTP header
  160   G.711 voice data

This adds up to 210 bytes; cells carry 48 bytes each, and can only be used for one packet, so we need 5 cells, with 30 bytes padding. There are 50 of these packets in a second of VoIP, so each call needs 250 cells/second, or 106 kbit/s of sync rate.

Because we have 30 bytes of padding, we can deduce that the extra 20 bytes overhead of PPPoE won't hurt us; nor will the extra 20 bytes header size of IPv6. However, if we use both IPv6 and PPPoE, we will need one more cell per packet, making it 6 cells for one packet, or 300 cells/second, or 127.2 kbit/s of sync rate per call.

We know we need a little spare capacity for signalling and monitoring, but from this, we can deduce that a 448kbit/s ADSL upstream (IP Stream Standard) can support 4 calls, or 3 if you're using both PPPoE and IPv6. An 832kbit/s upstream supports 7 calls, or 6 if you're using both PPPoE and IPv6. The 2Mbit/s I get on Annex M supports 15 calls with PPPoE and IPv6, or 18 calls if you're using either IPv4 or PPPoA.
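
If you want to redo this arithmetic for other sync rates or encapsulations, here's a minimal sketch of the calculation in Python. The header sizes are the ones from the table above; treat the exact figures as assumptions to adjust for your own setup.

    import math

    CELL_BITS = 53 * 8        # an ATM cell is 53 bytes = 424 bits
    PACKETS_PER_SECOND = 50   # one packet per 20ms of G.711 audio
    VOICE_BYTES = 160         # 20ms of G.711 at 64kbit/s

    def cells_per_packet(header_bytes):
        # AAL5 trailer (8 bytes) + headers + voice data, packed into 48 byte cells
        return math.ceil((8 + header_bytes + VOICE_BYTES) / 48)

    def calls_per_line(sync_kbit, header_bytes):
        cell_rate = sync_kbit * 1000 / CELL_BITS
        return int(cell_rate // (cells_per_packet(header_bytes) * PACKETS_PER_SECOND))

    # PPPoA + IPv4: 2 (PPP) + 20 (IPv4) + 8 (UDP) + 12 (RTP) = 42 bytes of headers
    # PPPoE + IPv6: 42 + 20 (PPPoE) + 20 (larger IPv6 header) = 82 bytes of headers
    for sync in (448, 832, 2000):
        print(sync, "kbit/s:", calls_per_line(sync, 42), "calls (PPPoA/IPv4),",
              calls_per_line(sync, 82), "calls (PPPoE and IPv6)")

Run against the three sync rates above, this reproduces the figures in the previous paragraph.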

2010/02/26

On PPPoA, PPPoE, ATM and ADSL

This blog entry is based on my Usenet posting, which has been archived by Google.

The question raised was "what is the difference between PPPoE and PPPoA on ADSL? Is there any benefit from choosing one or the other?". This post goes into the details of how PPPoE, PPPoA and ATM interrelate on UK ADSL lines.

Anyone who's looked at the options offered by their ADSL modem will have seen a huge variety of options; there's VC Mux, LLC, PPPoA, RFC1483/2684, PPPoE, CLIP and often others. These are all ways of carrying your Internet traffic in ATM. I'm only going to consider PPPoE and PPPoA encapsulation, as those are the only two options AAISP offer.

On a BT Wholesale line, you get to choose an encapsulation method (either PPPoA or PPPoE); the other end can autodetect what encapsulation you're using, by looking at the incoming ATM cells.

For PPPoA, the incoming ATM cells contain PPP directly. When the PPP link is not yet up, these will always begin C0-21.

For PPPoE, the incoming ATM cells contain a 6 byte Ethernet over ATM header, a 14 byte Ethernet header, then the PPP. The Ethernet over ATM header always begins AA-AA-03.
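
As an illustration of that autodetection (this is not how any real BRAS implements it, just a sketch of the two signatures described above):

    def guess_encapsulation(aal5_payload: bytes) -> str:
        # Bridged Ethernet over ATM starts with the LLC header AA-AA-03
        if aal5_payload[:3] == bytes.fromhex("aaaa03"):
            return "PPPoE (Ethernet over ATM)"
        # Raw PPP starts with the LCP protocol field C0-21 while the link comes up
        if aal5_payload[:2] == bytes.fromhex("c021"):
            return "PPPoA"
        return "unknown"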

Incidentally, this means that PPPoE is always going to be slightly slower than PPPoA - when you're unlucky, it will use one ATM cell more than PPPoA for a packet, costing you some speed. Eagle-eyed readers will also recognise that this makes Be Retail's Ethernet over ATM services use slightly more overhead than PPPoA services; there are 20 bytes of overhead for Be, against the 2 bytes of PPP overhead on PPPoA.

There are some differences in how this all works between BT 20CN and 21CN:

In 20CN, your DSLAM port is an ATM device - it knows nothing about PPP, IP, Ethernet etc, and just transfers ATM cells to the BRAS, using BT's ATM cloud. The BRAS does the encapsulation autodetection, handles the PPP to L2TP translation, and passes L2TP to AAISP.

In 21CN, the ATM cloud is replaced with an Ethernet cloud, running with jumbo frames. Your MSAN port does the autodetection, removes the ATM cell framing, adds Ethernet framing if you're doing PPPoA, and sends the resulting PPPoE frame (which can be oversized - allowing a full 1500 byte MTU) to the BRAS, which then does the PPP to L2TP translation and passes the data to AAISP.

There is never a performance gain on BT ADSL from using PPPoE; in 20CN, it's just one more block of bytes for the BRAS to strip. In 21CN, you save the MSAN from adding a block of bytes to each packet, but it can do that without measurable impact at incredible speeds (easily into the gigabits per second per ADSL port), and you lose data rate on your line carrying those extra bytes.

Techy geeky stuff

First, a quick walkthrough of ADSL layers:

  • At the bottom-most layer, you have a broadband analogue signal, changing at a rate of 4 kBaud; this is split into bins of 4.3125kHz, each of which encodes up to 15 bits. Bins 0 to 5 are reserved for voice. The remaining bins vary depending on the standard (there's a rough capacity sketch after this list).
    • Upstream is bins 6 to 31 in Annex A (for all the supported standards, ADSL1, ADSL2 and ADSL2+)
    • Upstream is bins 6 to 63 in Annex M (ADSL2+ only)
    • Downstream is bins 32 to 255 in ADSL1 and ADSL2
    • Downstream is bins 32 to 511 in ADSL2+ Annex A.
    • Downstream is bins 64 to 511 in ADSL2+ Annex M.
  • The next layer is ADSL frames, which include error correction and sync framing. This level is normally invisible to the user.
  • The uppermost layer I'm going to consider is the user frame level. There are two modes permitted in ADSL1, and three in ADSL2 and ADSL2+: STM, ATM and PTM. STM uses the SDH frame structure; PTM carries higher-level protocol frames directly, typically Ethernet frames. ATM is the mode used in the UK (and most deployments worldwide); ATM cells are packed into ADSL frames, but all you see is a stream of ATM cells.
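
As a rough illustration of what those bin allocations imply, here's a back-of-the-envelope ceiling on raw bit rate. This is an upper bound only: it assumes every bin carries the full 15 bits, and ignores framing, error correction and the fact that real lines never achieve this.

    def raw_ceiling_mbit(first_bin, last_bin, bits_per_bin=15, symbols_per_second=4000):
        # (number of bins) x (bits per bin per symbol) x (DMT symbols per second)
        return (last_bin - first_bin + 1) * bits_per_bin * symbols_per_second / 1e6

    print(raw_ceiling_mbit(6, 31))    # Annex A upstream ceiling, ~1.56 Mbit/s
    print(raw_ceiling_mbit(32, 511))  # ADSL2+ Annex A downstream ceiling, ~28.8 Mbit/s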

An ATM cell is 53 bytes long, 5 bytes header, 48 bytes payload. A 48 byte MTU is too small, so we use an encapsulation called AAL5, which takes 8 bytes from the last cell in a set, and lets you have up to 65535 bytes of payload. So, 100 bytes of payload in AAL5 is carried in 3 cells; the first two cells have 48 bytes each of the payload, and the last cell has 4 bytes of payload, 36 bytes of padding, and 8 bytes AAL5 trailer. For a second example, if you had 96 bytes of payload (a disaster case), you get 3 cells. Two cells contain 48 bytes each of payload, and the last cell contains 40 bytes of padding, and 8 bytes of AAL5 trailer.
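
A quick sketch of that packing rule, if you want to try other payload sizes; it just mirrors the arithmetic above:

    import math

    def aal5_cells(payload_bytes):
        # payload plus the 8 byte AAL5 trailer, rounded up to whole 48 byte cells
        return math.ceil((payload_bytes + 8) / 48)

    def aal5_padding(payload_bytes):
        # padding needed to fill out the final cell
        return aal5_cells(payload_bytes) * 48 - payload_bytes - 8

    print(aal5_cells(100), aal5_padding(100))  # 3 cells, 36 bytes of padding
    print(aal5_cells(96), aal5_padding(96))    # 3 cells, 40 bytes of padding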

PPP adds a 2 byte header to each IP frame - so, to send 1500 bytes of IP data, you send 1502 bytes of PPP. If you're doing PPPoA, this 1502 bytes becomes 31 cells containing 48 bytes of payload, and a cell containing 14 bytes payload, 26 bytes padding, and the 8 byte AAL5 trailer. Your sync speed is measured in kilobits of cells per second, so an 832kbit/s sync speed is around 1,900 cells per second (832,000 bits divided by the 424 bits in a 53 byte cell).

If you add PPPoE into the mix, the PPPoE header adds another 6 bytes, and the Ethernet header adds a further 14 bytes, for 20 bytes of overhead. When doing Ethernet over ATM, you normally drop the Ethernet checksum, but add 10 bytes of headers, bringing the total to 30 bytes of overhead on top of PPP. To send 1500 bytes of IP in this setup, you need to send 1532 bytes of payload. This becomes 31 cells carrying 48 bytes of payload each, a cell carrying 44 bytes of payload and 4 bytes of padding, and a cell carrying 40 bytes of padding and 8 bytes of AAL5 trailer.

As you can see, there are whole ranges of packet sizes where PPPoA takes up one fewer cell on the wire than PPPoE. 9 to 38 bytes of IP (never normally seen, as IPv4 is 20 bytes overhead, and TCP adds another 20) are 1 cell in PPPoA, 2 cells in PPPoE. You then get 18 packet sizes (39 to 56 bytes) where the cell counts are the same. 57 to 86 bytes of IP take up 2 cells in PPPoA, 3 cells in PPPoE, and the cycle continues. Thus, depending on your packet size distribution, you may see the same speeds with PPPoA and PPPoE, or slightly higher speeds with PPPoA.
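
If you want to see where those boundaries fall for yourself, here's a small sketch using the overheads described above: 2 bytes of PPP for PPPoA, and 2 + 6 + 14 + 10 = 32 bytes of PPP, PPPoE, Ethernet and Ethernet over ATM headers for PPPoE, plus the 8 byte AAL5 trailer in both cases.

    import math

    def cells(ip_bytes, encap_overhead):
        # one IP packet: encapsulation headers + AAL5 trailer, in 48 byte cells
        return math.ceil((ip_bytes + encap_overhead + 8) / 48)

    for n in (8, 9, 38, 39, 56, 57, 86, 87):
        print(n, "bytes of IP:", cells(n, 2), "cells in PPPoA,", cells(n, 32), "cells in PPPoE")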

This is also where the faulty advice to use a 1478 byte MTU with PPPoA comes from - when doing bulk transfers, most of your packets are MTU-sized. A 1478 byte MTU, with the 10 bytes of PPP and AAL5 overhead, works out to precisely 31 cells, so you don't "waste" 26 bytes of padding out of every 1536 bytes of cell capacity; the downside is a slight increase in the number of packets sent, so more is lost to TCP/IP headers. Assuming IPv4, a minimum size IP and TCP header is 40 bytes, so this can end up costing you more in extra IP headers than it saves in ATM padding.

Finally, if you're wondering why 48 bytes - there are good technical reasons to make the payload size a power of 2. The Americans wanted 64 bytes, because that would be more efficient for data, and still good enough for voice with echo cancellers in a US-wide network. The French wanted 32 bytes, because that would allow them to have a France-wide network without echo cancellers for voice. They refused to agree, so management types compromised on 48 bytes, a pain for *everyone*.

Aren't committees wonderful?

Edited 2010-04-27 - the 1478 byte MTU advice is correct for bulk transfers, but not for some mixed size transfers. In a bulk transfer, you get one padded ATM cell per 1500 byte packet; my error in reasoning was in not noticing that you don't also get one extra IP packet per 1500 bytes transferred if you do 1478 byte MTU. In practice, this means that for small transfers, 1500 byte MTU is more efficient - for larger transfers, 1478 byte MTU is more efficient; I leave calculating the point at which the extra IP headers outweigh the extra ATM cells to the reader. Thanks to Simon Arlott for pointing this out to me.
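
For anyone who does want to find that crossover, here's a rough sketch of the calculation. It assumes PPPoA, a plain 40 byte TCP/IPv4 header on every packet, every packet except the last filled to the MTU, and it ignores ACKs and retransmissions - all simplifying assumptions.

    import math

    def cells_for_packet(ip_bytes):
        # PPPoA: IP packet + 2 bytes of PPP + 8 byte AAL5 trailer, in 48 byte cells
        return math.ceil((ip_bytes + 10) / 48)

    def cells_for_transfer(data_bytes, mtu, headers=40):
        # every packet carries (mtu - headers) bytes of data, except possibly the last
        per_packet = mtu - headers
        full, rest = divmod(data_bytes, per_packet)
        total = full * cells_for_packet(mtu)
        if rest:
            total += cells_for_packet(rest + headers)
        return total

    for size in (1_450, 10_000, 1_000_000):
        print(size, "bytes:", cells_for_transfer(size, 1500), "cells at 1500 MTU,",
              cells_for_transfer(size, 1478), "cells at 1478 MTU")

With these assumptions, transfers that fit in a single packet tend to favour the 1500 byte MTU, long transfers favour 1478, and there's a band of sizes in between where the two come out level.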

2010/02/20

Subtle discrimination

Note: this post was edited on 26th February 2010.

This blog post has been triggered by a discussion on LWN.net (a site whose editorial content is of a very high standard - if you have an interest in FOSS, I'd highly recommend subscribing - I comment on there as "farnz").

The particular subthread that set me off was one on the (relatively) low participation of women in Free Software; it began with a commenter suggesting that whatever was ongoing was not discriminatory, as all project work happens over the Internet, and it's therefore impossible to even guess whether someone's male or female.

We know that women are generally underrepresented in programming compared to the general population, but the numbers we have suggest that around 20% of programmers in the proprietary software world are women, whereas only around 3% of programmers in Free Software are women. It's clear from these figures that something is putting women off contributing to open source; the remaining question is what?

One argument is that women just aren't interested in programming, and those that are aren't interested in open source. This is a possibility, but before we can accept it as an explanation, we need to understand why women aren't interested; as yet, I have not seen anyone try to do this in terms that don't either beg the question, or assume a skills difference without evidence.

Another argument is that the people in charge of open source projects are openly and actively discriminating against women; in particular, if they notice contributions from a woman, they'll either actively campaign against her, or simply ignore her contributions until she gives up. As this requires an active anti-female conspiracy, and yet there are some known female contributors, who are not protesting sexist behaviour by the people in charge, it's a weak argument.

The third class of argument is the one I wish to focus on, as it's often ignored. It takes as an axiom that different groups of people have different views on what's appropriate behaviour for other people, and that those views tend to splinter down lines like gender and race. For example, in the culture I live in, men who argue vehemently against a view they disagree with are being "strong" and "decisive"; women who do the same are being "pushy" and "opinionated". English hacker culture respects people who stand up to authority and say "no, you're wrong, and this is why"; other cultures simply do not permit such behaviour, and actively cast out people who stand up to authority in such a crass fashion; you must instead influence authorities into recognising their mistakes themselves.

Following on from this, people learn to stay away from communities where they believe that there will be a culture clash; for example, women learn to not get involved if they believe that they'll be forced into arguing vehemently against something, as past experience tells them that if they behave equally, they'll be seen as pushy and opinionated. This applies even if the community wouldn't actually treat them that way; the perceived risk is much higher than the perceived gain.

At this point, there's a strong risk of a vicious cycle. The community is behaving in a way that puts off a group; there aren't enough people from that group to change the way the community behaves, and the existing community members don't see a problem. Sometimes, this vicious cycle is exactly what you want, because the group you're putting off cannot contribute in any useful way. More often, it's not actually helpful, as you lose useful contributors.

So, how does a community avoid getting caught in the vicious cycle? The first problem is finding out that you even have a problem; the people being put off won't speak up, as they've learnt that the consequences of doing so are bad. People outside the group being put off have to make a conscious effort to notice when the community is heading in a bad direction and speak up; note that you will be attacked for speaking up, as you are going to be telling people that they're behaving unacceptably, when they don't think that they are.

The good news is that in the long term, speaking up helps. By making it clear that the offputting behaviour is unacceptable, you create an environment in which some of the people who would have been deterred from contributing feel able to speak up if they're feeling that they're being discriminated against. In turn, this lets them join the community, and over time, their presence will shape the community into being welcoming of members of their group, too, not just the existing people.


Footnotes:

An explanation that begs the question is one that assumes the very thing it's supposed to explain; for example, "most women who could contribute to open source don't do so because there aren't enough female role models". There aren't enough female role models because not many women contribute, so this explanation begs the question.

An explanation that assumes a skills difference without evidence is one that asserts that the lack of contributions from a group can be entirely explained away by the relatively high standards of the community, without first showing that the group does not normally meet those standards. There are two common variations on this theme:

  1. This community wants higher quality contributions than whatever other community has a more even statistical record. We know that some people are better than others at this activity, ergo the bias is just because we want high quality contributions; when the group which is feeling discriminated against raises its game, the balance will even out.
  2. This community doesn't pay (or pays less) for contributions, whereas the other one pays well - that's the reason for the bias.

The first of these needs evidence that the skills difference is really present; it's objectively measurable, so you could back it with something like "the proportion of women among 1st class software engineering degrees, as against women among software engineering degrees of any class, matches this" - if 3% of firsts in SE go to women, but 20% of SE degrees of any class do, then the open source versus proprietary figure has an explanation.

The second isn't an explanation; all it says is that money can persuade people to do things that are otherwise going to make them unhappy.

Edit on 26th February 2010:

A friend of mine has pointed out that I've been rather one-sided in this post. I've talked about ways in which men unintentionally deter women from participating, but I've omitted discussion of ways in which women end up excluded by their own social expectations.

In my culture, men and women are expected to resolve conflict in very different ways; men typically have it out as a big, big argument, after which everything is settled. Women typically acquiesce in the short term, but apply gradual pressure to individuals (often seen as manipulative "nagging" and "whining") to bring the final result round to something they can live with.

I would argue that the "male" way of resolving conflict is more appropriate for a group project; it results in arguments being resolved in the group as a whole, not in private discussions. Further, the "female" route leads to a perception that there's no point arguing against women who speak up; they'll get their way in the end, so you might as well give in now.

There's no FOSS-only solution to this; as a man, all I can do is avoid being suspicious of women's motives when they break this cultural model. Everyone needs to be aware of when their cultural conditioning is pushing them into bad habits or behaviours, and we need more general cultural change, so that we don't expect people to behave differently (as programmers) depending on gender, race, or other irrelevances.

2010/02/14

Market abuse; or, extending control from one market to another

As a geek with an interest in broadcasting, I see an awful lot of people justifying market abuse by Apple, Microsoft and Sky, amongst others. I thought I would jot down my personal views on what is and is not market abuse, and why I don't consider it acceptable.

So, what is market abuse? Market abuse is where you use your power in one market to unreasonably influence another market. In this circumstance, unreasonable means that a competitor who's only playing in the market you're trying to influence cannot possibly match your influence, without first having power in the market you're using.

It's easier to explain by examples of abuse, and not abuse:

  • Abuse: Tying your web browser in with your OS, and forbidding resellers of your OS from removing your browser or adding another browser.
  • Not abuse: Giving your web browser away for free, and permitting unlimited distribution of copies without a fee.
  • Abuse: Tying reception of your TV channels with your TV receiver box.
  • Not abuse: Tying reception of your TV channels with your encryption card.
  • Abuse: Tying your mobile phone to a specific provider, and requiring people who want your phone to take out service from your chosen provider.
  • Not abuse: Permitting mobile providers to subsidise your phone and apply a subsidy lock.

There's a thread running through all of these; when you are engaging in market abuse, there are two separate products being sold. One is worth buying (or getting for free) on its merits, the other is not, so you tie the two together, forcing me to take the poor quality product if I want the good product. But note that there's a slight twist; it's not market abuse if I can get equivalent products elsewhere in the market without the tie.

So, offering me your MP3 player tied to your music player software isn't abuse; I can easily buy other MP3 players that aren't tied to your music player. If I can't get an equivalent to your phone elsewhere in the market (e.g. because your application store means that network effects compel me to have your phone or lose out), it's abuse.

Why do I consider market abuse bad? Simply put, it's because I'm compelled to accept a worse overall experience. Either I accept the poor quality product with the good product, or I lose out on both; it gets really bad when network effects mean that I must accept one of the two products and lose out.

How do companies avoid market abuse? By not tying two products together unreasonably; sell them both separately, and let your immediate customers make decisions for themselves. If you're moving into a new market, don't get tempted to do anything that couldn't be done by a competitor in the new market with enough money. In particular, don't tie your existing product together with your new product, unless a competitor could tie your existing product together with their existing product.

2010/02/07

Science, pseudo-science and non-science

As someone who holds strong opinions, I'm often found arguing with people; because my education is scientific, the debate is often around a subject where "scientific evidence" is expected to carry the day. Most of the time, the claimed "scientific evidence" is nothing of the sort - often, it's nonsense using "sciency" words to seem right.

It's quite simple to tell science from non-science, and not that much harder to distinguish pseudo-science from science. If you're planning to argue using "scientific evidence", please make sure it's genuinely scientific before you start.

Non-scientific claims are based on authority, not on testable evidence. Examples include "my doctor tells me that this drug will suppress the symptoms of my hayfever", or "the Bible tells me that God loves everyone". Non-scientific claims don't have to be false; they can be a useful shortcut to avoid going over basic knowledge again and again, or you could be in a realm that science cannot speak on.

Scientific claims have two properties. Firstly, they are based on testable evidence; secondly, when the claim has been tested, it has been found to be "not false".

Pseudo-scientific claims are very similar to scientific claims. The difference is that where a scientific claim has been found to be not false, a pseudo-scientific claim has been found to be true.

The distinction between found not false and found to be true is subtle, and bears further examination; it's about the way in which you test the relationship between your claim and your evidence. When you find a claim "not false", you start with your claim, and think of a way to gather evidence that the claim is false. You then try to gather that evidence; the claim is found not false when your evidence does not contradict the claim.

In contrast, when you find a claim "true", you start with the claim, and try and gather as much evidence as you can of the claim being true. The claim is found to be true if the evidence does indeed agree with your claim.

This does not sound like a huge difference; however, in practice, people tend to find the evidence they set out to gather, unless that evidence genuinely does not exist. What's more, for many of the claims we wish to evaluate, there are many reasons why the claim could appear true.

For example, earlier in the year, I suffered from a bout of flu, lasting about 4 days in total; during that time, I took Tamiflu. Now, I could claim that Tamiflu cured my illness, and point to the fact that I had flu, I took a course of Tamiflu, and I was then healthy again as evidence. This would be evidence that backs up the claim that "Tamiflu cures flu" in a pseudo-scientific fashion, and would be clearly flawed - would I have recovered without Tamiflu?

Against that, I might claim that my camera "sees" infrared light; the test would be to get a high intensity IR source, and if it didn't show up in a photo taken by my camera, I'd know my claim was wrong. If it shows up, then that reinforces the claim, and it's up to me to try another way of disproving it; as I struggle to find a way to disprove it that works, it becomes less and less likely that my claim is false.

The challenge, then, for people presenting "scientific" claims, is to show that they are backed by evidence, and that the evidence has been gathered by trying to prove the claim wrong. In an ideal world, you describe why you gathered the evidence you did, and how I could gather it myself, so that I can check your claim. I can then be confident in your claim, knowing that if I start to disagree, I can check it.

Ground Rules

Welcome to my blog. This is a place for me to post my opinions and thoughts, and to invite feedback from anyone who happens to be interested.

I do not intend to cause offence; I will try and explain my views clearly and rationally. In return, I would like anyone who comments to do the same. All comments are moderated; if I edit or redact part of a comment before letting it through, I will clearly indicate the change. I will also clearly indicate any changes made to blog posts after I publish them.

I am English, socially liberal, fiscally conservative, white, male, middle-class and university-educated; I try to see other points of view, but you may have to explain things that you would expect to be obvious. If it looks like I don't get it because I've not been there, tell me what I'm missing.

If you think I'm wrong on an objective fact, please provide references to back your point of view up. Web references are best, as I can check them at the computer, but I can always go and look in the local library for a book if needed. In particular, if you are claiming that something is illegal in the UK, please cite an act that I can look up on the Statute Law website; similarly, if you are claiming that an organisation prohibits or permits something, please give me something that I can use to go and check exactly what they claim.

For example, if you wish to claim that Ofcom (a UK regulatory body) requires telephone providers to route 112 to the emergency services, saying "Ofcom requires them to route 112 to the 999 call centre" isn't helpful. Saying "Ofcom's general condition 4 requires them to route 112 to the 999 call centre" allows me to check what the regulation actually is, and is therefore a better way of saying it. There are bonus points if you provide a link to your reference on the web, so that I can easily find it - in this example, something like "Ofcom's general condition 4 requires...".

I hope to learn from this experience, and I hope you enjoy reading my blog.