2011/08/24

Making car insurance affordable

Motor insurance prices have been in the news recently, as the cost of legally required third party insurance has been rising. The industry's claim is that injury claims are the biggest cause of the rise.

It seems to me that there's a pair of simple fixes, which could be funded by a further tax on motor insurance.

First, you cannot claim for injury after an accident unless the accident was reported to the police within 24 hours; if you are incapacitated by the accident, this time is extended until 24 hours after you become medically capable of reporting it. This ties into the existing requirement to report injury accidents to the police (Road Traffic Act section 170), and simply requires the injured party to make a report as well if they might later choose to claim. In theory, therefore, this is no burden on the authorities, as they are dealing with these reports anyway.

Additionally, an injury claim is only valid if two things apply:

  1. You make arrangements to see an NHS doctor within 14 days of the accident - being taken to hospital by emergency ambulance counts here, as does making an appointment to see your GP.
  2. An NHS doctor, at risk of losing their licence to practise if found to be lying, agrees that your injury is such that compensation is appropriate.

In other words, make it cheap to dismiss injury claims that aren't backed by medical evidence. Put the onus on claimants to seek medical advice - if you really think you're injured, you should be seeing a GP anyway, and we should have mechanisms in place to deal with you if you're abusing NHS services (a topic for another rant).

2011/07/28

On the cost of regulations

I've been pointed at the Telegraph article on Steve Hilton's advice to the UK government, and one thing stands out in the article: Mr Hilton is able to count the cost of regulation, but not the benefits.

It's trivially obvious that (to take one of Mr Hilton's examples) consumer rights laws cost some businesses money; what's not discussed is the benefits of those laws, and the cost to society of not having them. Let me lead you through an example, using an expensive consumer product that many people have: a television.

You can buy televisions at many different prices, each with different features; just looking at 24" screens, I can find a cheap TV at £129, or I can spend over £1,000 on a similar looking set that's the same physical size. It's when I start to look at the differences between the sets that the sheer diversity of product becomes clear; my cheap set has one HDMI input, and a DVB-T tuner. It must be powered from a 230V nominal 50Hz AC source (e.g. household mains). My expensive set has multiple HDMI inputs, a DVB-T2 tuner as well as the DVB-T tuner, the ability to be powered directly from a car "cigarette lighter" socket (e.g. for in-caravan use) as well as from AC between 75V and 250V nominal, at frequencies between 50Hz and 400Hz nominal (making it suitable for powering from a typical luxury yacht power socket). It has the ability to record TV programmes to an attached USB HDD, and to play them back later; generally, it is of much higher specification than the cheap set.

So far, this sounds reasonable; the expensive set is a much more desirable piece of equipment. Yes, I'd have to pay more for it, but I'll get what I'm paying for. Now imagine removing consumer rights laws - let's take the law that says that goods must be fit for the purpose they are sold for. Without that law, a dishonest trader could buy the £129 TV, put appropriate connectors on the case, and sell the TV for (say) £800. They'd make a hefty profit on every set they sold, and because consumer rights laws don't exist, nothing stops them conning a few customers, changing their trading name (so all the noise about how crooked they are doesn't impact sales), and continuing to rip people off.

Who gains in this situation, and who loses? Obviously, the dishonest trader gains. The manufacturer of the cheap TV set gains. Consumers who get ripped off lose. Less obviously, honest traders lose out; because I can't be confident that a set is genuinely what it's claimed to be, I am less likely to buy an expensive set that does everything I want, preferring to buy a cheaper set, and easier-to-verify adapters to get the added functionality.

Worse, if you say that I should rely on well-written contracts to protect me, you get into a situation where every purchase I make is slowed down, as I get the retailer to read, analyse, and eventually agree to the contract terms I insist on to make the sale. The costs to all retailers and consumers of having to deal with individual contracts, each with their own quirks, are high; the benefit of consumer rights laws is that we have a baseline that I, as a consumer, am happy to accept and that the retailer cannot attempt to wriggle out of. Businesses thus need only pay the cost of understanding the laws once; consumers accept that they can trust businesses, and rely on the law protecting them from the few bad apples. Neither side pays day-in, day-out for the cost of protecting themselves against the rogues in the market - instead, the legal apparatus puts as much of that cost as possible on the rogues. Further, the nature of regulation means that the net cost to legitimate businesses of consumer protection is lower than it would be if each transaction attempted to include the required terms in an individual contract.

In short, even regulations that cost money in the short term don't necessarily cost money in the long run; often, the regulations exist so that honest businesses can survive in the market, even in the presence of crooks. The trouble is that the cost of regulation is obvious up-front; the benefits are not only sometimes taken for granted, but often are spread across all of society.

2011/02/01

IPv4 run-out has started - prepare for IPv6

So, checking technical blogs and tweets this morning, I learn that APNIC have triggered IANA IPv4 exhaustion. What does this mean for the non-technical user? Well, in the short term, nothing - the Regional Internet Registries (RIRs) like RIPE and ARIN still have stocks of IPv4 addresses.

In the medium term, it means you have to move to IPv6 soon. Given the rate at which IANA ran out, you have about a year from now before IPv4 is simply unavailable to you, and services will have to be IPv6 enabled or else. If you're buying network-enabled kit that you expect to keep using in 12 months' time, make sure it's IPv6 ready. If it's not, talk to your salesman, and tell them that the reason you're delaying the purchase is that you want IPv6 support.

As a product developer, I'm not seeing any pressure from the field to get IPv6 into our Internet enabled devices; it's simply not something that impinges on people who buy equipment. You need to change this now. Within the lifetime of anything you buy today, IPv4 will run out, and you will need your equipment to be IPv6 enabled if it's going to continue working.

Please, put pressure on sales teams to IPv6 enable everything - it won't happen until you do. If you don't, don't be surprised when you're rebuying everything in a year or two, simply because IPv4-only kit is no longer usable for the task you bought it for.

2011/01/29

Why allocating /48s for end users in IPv6 is a good idea.

There are people out there already worrying that assigning /48s to end users in IPv6 is going to cause problems in the long term, matching the existing IPv4 problems with address shortages. I'm going to try and present a few ways to understand just why it's not going to happen that way.

Firstly, we'll need to think about the world population. Current figures show that we're at around 7 billion people. Taking the worst-case model the UN is prepared to consider, we're unlikely to reach more than 35 billion people worldwide before 2100. Against that, we have assigned a single /3 for unicast, and kept 5 /3 blocks in reserve.

A quick bit of maths shows us that we have 2^45 /48s to assign before we have to use up more of the reserved address space (a /48 leaves 48 - 3 = 45 bits of a /3 free for numbering blocks). This is (roughly) 35,000 billion blocks to use. We have already determined that we're not going to have more than 35 billion people any time soon; so, let's assume that there are 3,500 billion people on Earth, or 500 times the current population. That's still enough /48s for each person to use an average of 10. So, one /48 at home (65,000 individual networks, of which a "typical" home might have two WiFi networks, one "server" network and a wired network). One /48 in the office (again, 65,000 individual networks in the office). Three /48s on the mobile network (one for each handset, plus one for your mobile broadband dongle). We're still only using 5 of the 10 we can allow after a 500 times population growth. Assume that ISP overheads (running routers and the like) cost a typical user another /48, and we're still within a safety margin.

Note also that we haven't yet permitted the use of the reserved /3s. If we have population growth well beyond that which we currently believe the planet can sustain, and we use more blocks than I have considered (I've assumed one connection at work, three mobile connections, one at home), we still have room to expand into. And it gets better: if the UN's worst-case projection is vaguely accurate, and we stabilise at under 70 billion people, we can each fill around 500 /48s before we have to use some of the reserves.
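
To check the arithmetic, here's a rough calculation (a sketch; the population figures are the illustrative ones used above, not predictions):

    # /48s available in the single /3 currently assigned for global unicast
    slash48s_per_slash3 = 2 ** 45              # 48 - 3 = 45 bits' worth of /48 blocks
    print(f"{slash48s_per_slash3:.3e}")        # ~3.5e13, i.e. 35,000 billion

    print(round(slash48s_per_slash3 / 3.5e12))   # ~10 each at 3,500 billion people
    print(round(slash48s_per_slash3 / 70e9))     # ~500 each at 70 billion people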

In short, big numbers are hard. It's all too easy to see that the IPv6 address is only 96 bits longer than the IPv4 address, but hard to get a handle on just how much extra space that represents.

2011/01/27

How MIMO works - and why it's amazing

Long term readers may recall my mentioning in my OFDM post that I was going to try and understand MIMO. I think I've made sense of it - and it's completely and utterly mindblowing that the processing power needed to do this exists in something I can buy for under £100.

Described simply, OFDM and related channel coding technologies produce signals of very clearly defined form, which can be arranged such that instead of interfering destructively, they interfere constructively. If I transmit two or more OFDM signals on the same frequency on different antennae (in different places), you can use the information from two or more receive antennae to reconstruct not just one of the original signals, but both of them. You do this by relying on the fact that each signal has taken a different path to get to each of your receive antennae; in pseudomath terms, if antenna 1 transmits signal 1 and antenna 2 transmits signal 2, you receive "(path loss for path 1) * (signal 1) + (path loss for path 2) * (signal 2)" at your first antenna, and "(path loss for path 3) * (signal 1) + (path loss for path 4) * (signal 2)" at your second antenna. The OFDM signals are set up to include pilot tones, so that you can calculate values for each of the four path loss terms, and thus adjust and subtract to get back the original two signals.
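
In matrix terms, the receiver is solving a small simultaneous-equation problem: two unknowns (the transmitted signals) and two measurements (one per receive antenna). A minimal numerical sketch, with made-up channel coefficients standing in for the pilot-tone estimates:

    import numpy as np

    # H[i][j] is the path loss from transmit antenna j to receive antenna i.
    # In a real receiver these complex coefficients are estimated from the pilot tones.
    H = np.array([[0.8 + 0.1j, 0.3 - 0.2j],
                  [0.2 + 0.4j, 0.9 - 0.1j]])

    s = np.array([1 + 1j, -1 + 1j])    # the two transmitted symbols, one per antenna
    r = H @ s                          # what the two receive antennae actually measure

    # "Adjust and subtract" amounts to inverting the channel (a zero-forcing receiver):
    recovered = np.linalg.solve(H, r)
    print(np.allclose(recovered, s))   # True - both original signals come back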

The neat thing is that you don't have to slow down either signal to make this possible; an OFDM signal from antenna 1 at 150MBit/s can still be decoded, even as you add a second OFDM signal to antenna 2 at 150MBit/s.

So, that's how MIMO gives you more speed. How does it give you more range as well? This is a tradeoff - recall that Shannon's limit stops you going infinitely fast over a given channel. However, if I need to communicate between A and B at 1MBit/s, in a MIMO world I can send two OFDM streams at 0.5MBit/s each (and gain the extra range that the lower per-stream rate allows). Alternatively, I can send two 1MBit/s streams, each of which is 50% ECC data, making dropouts easy to correct. Lots of options here, all of which can help.
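
To put rough numbers on that tradeoff: Shannon's limit, C = B * log2(1 + SNR), can be turned around to give the minimum signal-to-noise ratio a rate demands, and a lower required SNR translates into more range. A sketch, assuming a purely illustrative 1MHz channel:

    import math

    def required_snr_db(rate_bps, bandwidth_hz):
        """Minimum SNR (in dB) that Shannon's limit allows for a given rate and bandwidth."""
        snr_linear = 2 ** (rate_bps / bandwidth_hz) - 1
        return 10 * math.log10(snr_linear)

    B = 1e6  # illustrative 1MHz channel
    print(required_snr_db(1e6, B))    # 0.0 dB needed for a single 1MBit/s stream
    print(required_snr_db(0.5e6, B))  # about -3.8 dB for each of two 0.5MBit/s streams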

2010/12/31

On Agile, TDD, and other Software Engineering processes

I've been looking around the job market recently, and one thing that's become much more visible since I last did so is the requirement to have experience in some form of "Agile" software engineering process. While I'm not going to argue that the requirement is pointless, I do wonder how much of it is from companies who don't understand what they really want.

To understand where I'm coming from, you must first understand what an engineering process actually is. As a competent engineer, you develop patterns of work that help you produce good results every time. An engineering process is what you have when you look at a pattern of work, and write it down with explanations of why you do something. For example, test-driven development (TDD) is based on the observation that, when working with a pre-existing system, the safest way to improve things is to write tests for the functionality that you want. As long as those tests keep passing, you know the system works; as soon as they fail, you know that you've broken something important. TDD takes that further - you always write tests first for everything, even new features. You know that the new tests will fail, and you write just enough code to get the new tests passing; because you've got full test coverage, you can safely engage in quite major rework of the code, knowing that the test suite will alert you to bugs.
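
As a trivial illustration of that rhythm (a sketch in Python; the add_vat function and its 20% rate are made up for the example):

    import unittest

    # Step 1: write the test first. At this point add_vat doesn't exist, so the test fails.
    class TestAddVat(unittest.TestCase):
        def test_standard_rate(self):
            self.assertEqual(add_vat(100), 120)

    # Step 2: write just enough code to make the test pass, then rework freely
    # with the test suite as a safety net.
    def add_vat(net_price):
        return net_price * 1.2

    if __name__ == "__main__":
        unittest.main()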

Another technique is pair programming. This relies on the observation that it's easier to spot other people making mistakes than it is to stop yourself making similar mistakes. So, put two programmers at one computer; have one of them write code, the other chiming in with observations and questions. Swap roles frequently, so that neither programmer gets bored.

The commonality here is that a good engineer will use whichever process is appropriate for the work they're doing; if I'm enhancing an existing feature, I will do TDD, as I can't risk breaking the existing feature, and it's easier to produce a better feature rather than a mudball of related features if I'm having to think about how it will be used as I write a test case. Similarly, if I'm deep in a complicated problem, I'll pair-program with another programmer who's as competent as I am; they force me to explain things that I think are obvious, and, as it turns out, sometimes what I've assumed is obvious is not only not obvious, but is also key to solving the problem.

It should at this point be obvious why I think that asking for experience of Agile processes isn't necessarily going to get companies what they're after. An Agile process isn't in itself helpful - Agile processes provide nice shorthands for good engineering practice, but you can follow them for years without ever doing a decent bit of engineering. For example, if you're doing TDD, you can write useless test cases, or write code that does a little bit more than the area the test case is supposed to cover. Before you know it, you're claiming to do TDD, but you're actually back in big ball of mud disaster programming territory.

Similarly, pair programming works because the members of the pair are of similar levels of competence - if you pair up with programmers who are considerably worse than you, you don't benefit from it. However, it's possible to do mentoring (pair a good programmer and a bad one together, so that the bad one can learn from the good one), and call it pair programming; from the outside, they look the same.

Practically, you don't normally want to stick with one Agile process slavishly; the core principles underlying all Agile processes are good (short release cycles, don't try and predict when you can adapt during the development process, don't waste time on something you might never need), but which detailed process is best depends strongly on what you're doing today. A good engineer knows enough to insist on pair programming when they're in a complicated problem, TDD when they're trying not to break an existing feature, Scrum-style sprints when appropriate - whatever parts of Agile will improve things today. A bad engineer will do TDD badly, so that it doesn't show gains, will use Scrum-style sprints to avoid facing up to the hard problems, will treat pair programming as an excuse to goof off, and will generally "have experience in Agile processes", yet not show any gain from them.

What employers should really be looking for is good, adaptable engineers; these people will adjust to whatever process you have in place, will look to change it where it doesn't work, and won't hide behind buzzwords. Asking for "Agile processes" is no longer a way to catch engineers who keep up with the profession - it's now a way of catching people who know what the buzzwords are.

Having said that, I don't know what today's version of "Agile processes" should be; you need something that's new on the scene, that good engineers will be exposing themselves to and learning about, and that isn't yet well-known enough to encourage bad engineers to try and buzzword bingo their way past the HR stage.

2010/11/20

Economics of the telecoms industry

I seem to be in a ranty mood at the moment. Today's rant, however, is not negative - it's an education attempt. In particular, I've dealt with one person too many who doesn't seem to understand how the telecoms industry works in an economic sense, and thus why the price they pay their ISP for a home connection isn't comparable to the price a business pays for "similar" connectivity. On the way, I hope to convince people that different ISPs using the same wholesale suppliers can nonetheless offer radically different levels of service.

To begin with, a quick guide to important terminology:

Capex:
Capex (short for capital expenditure) is the money you have to spend up-front to buy equipment that you'll continue to use. For example, £20,000 on a new car is capex.
Opex:
Opex (short for operational expenditure) is the money you have to spend to keep something going. Using the car as an example again, your insurance costs are opex, as are your fuel costs.
Time value of money:
Time value of money is a useful tool for making capex and opex comparable. The normal way to use it is to calculate the present value of your opex cashflow; this gives you the amount of money you'd need up front to do everything from capital, without supplying future cash for opex (or, alternatively, without needing to allow for opex in your pricing scheme). See the sketch after this list.
Cost of money:
Cost of money is another tool for making opex and capex comparable; whereas time value of money converts opex to capex, cost of money converts capex to opex, by working out how much interest you could have earned (safely) if you didn't spend the money now.
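
As a rough illustration of turning an opex stream into a present value (a sketch; the 5% discount rate and £500/year figure are arbitrary assumptions):

    def present_value(annual_opex, discount_rate, years):
        """Capital you'd need today to fund a fixed annual opex bill for the given number of years."""
        return sum(annual_opex / (1 + discount_rate) ** year
                   for year in range(1, years + 1))

    # e.g. £500/year of opex for 10 years at a 5% discount rate
    print(round(present_value(500, 0.05, 10)))   # 3861 - roughly £3,900 of capital up front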

So, with this in place, how does the telecoms industry stack up economically? Well, firstly, there are three activities a telco engages in:

  1. Building and owning a telecoms network, whether a small office-sized one, or a big nationwide network.
  2. Buying access to other telcos' networks.
  3. Selling access to their own network.

Of these, the first is dominated by capex; depending on where you need to dig, and who you ask to do the digging, the cost of digging up the roads so that you can run your cables underneath them runs at anything from £20 per metre for some rural areas where no-one's bothered if your trenches aren't neatly filled in afterwards, to nearly £2,000 per metre for parts of London. In comparison, the remaining costs of running cable are cheap - ducting (so that you can run new cable later, or replace cables that fail) is around £3 per metre, expensive optical fibre is around £0.50 per metre (for 4-core fibre, enough to run two or four connections), while traditional phone cable is a mere £0.14 per metre. Even the coaxial cable used for cable TV and broadband is £0.32 per metre.

Once you've got your cables in the ground, you need to put things on the end of them to make them do good things. Using Hardware.com's prices on Cisco gear, and looking at silly kit (plugging everyone into a Cisco 7600 router, and letting it sort things out), you can get gigabit optical ports at around £1,000/port for 40km reach, including the cost of 4x10 gigabit backhaul ports from the router to the rest of your network.

Note that all of this is capex; given that your central switching points (phone exchanges, for example) are usually kilometres away from the average customer, you can see that the cost of setting up your network is almost all in building the cabling out in the first place; high quality fibre everywhere can be done retail for £4,000 per kilometre needed (complete with ducting), while your digging works cost you a huge amount more; even at £20 per metre, you're looking at £20,000 per kilometre. The cost of hardware to drive your link falls into the noise.
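
Putting those per-metre figures together for a kilometre of new route makes the point that digging dominates (a sketch using the rough retail prices quoted above):

    # Rough capex per kilometre of new route, using the per-metre figures above.
    metres = 1000
    digging_rural  = 20   * metres   # £20/m where no-one minds rough reinstatement
    digging_london = 2000 * metres   # up to ~£2,000/m in parts of London
    ducting        = 3    * metres
    fibre_4core    = 0.50 * metres

    print(f"cabling (duct + fibre) per km: £{ducting + fibre_4core:,.0f}")   # £3,500
    print(f"rural digging per km:          £{digging_rural:,.0f}")           # £20,000
    print(f"London digging per km:         £{digging_london:,.0f}")          # £2,000,000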

So, on to the opex of your network. You'll obviously need people to do things like physically move connections around, but most of your ongoing cost is power consumption. Again, this isn't necessarily huge; Cisco offer routers at 50W per port for 10 gigabit use, or 1.2kWh per day. At current retail prices, you'd be hard pressed to spend more than 50p/day on electricity to run the Cisco router, even allowing for needed air conditioning. Framing that number differently, assuming that the typical customer needs £10/month of human attention, a 10 gigabit link has opex costs of around £40/month, including the 10 gig to other parts of the country.

When you compare this to the capex costs of building your network, you can quickly see that the basis of the telecoms business is raising huge sums of capital, spending them on building a network, then hoping to make enough money selling access to that network to pay off your capex and spend a while raking in the profits before you have to go round the upgrade loop again. Your opex costs are noise compared to the money you've had to spend on capex: assuming your network survives ten years, your opex is going to be under £5,000 per port, while your capex for a typical port is going to be over £25,000. Given normal inputs to a time value of money calculation, you can work out that a network has to survive 20 years without change before your opex becomes significant.
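
The ten-year comparison, using the rough figures above (a sketch; £40/month opex and £25,000 capex per port are this post's estimates, not quotes):

    monthly_opex_per_port = 40        # £/month, from the estimate above
    capex_per_port        = 25_000    # £, rough build-out cost per port

    years = 10
    total_opex = monthly_opex_per_port * 12 * years
    print(f"10-year opex per port: £{total_opex:,}")     # £4,800
    print(f"capex per port:        £{capex_per_port:,}")
    print(f"opex share of lifetime cost: {total_opex / (total_opex + capex_per_port):.0%}")  # ~16%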

So, how do you make money on this? Answer: you sell connections to people; you start by charging some fixed quantity per user, to cover the bare minimum of opex and ensure that no matter how the customer uses the connection, you don't lose money on opex. Then, you add a charge for usage; there are three basic models:

  1. Pay as you use of a high capacity link.
  2. Pay per unit available capacity.
  3. Percentile-based charging of a high capacity link.

The first is the familiar charge per second for phone calls; in this model, adapted for data connections, I pay you per byte transferred. You set the price per byte as high as you think I'll pay, so that you can pay off your capex, make a profit, and prepare for the next round of capex on network upgrades. You may also offer a variable price for usage (as my ISP, Andrews & Arnold, do), in order to encourage users to shift heavy use to times when it doesn't affect your network as much. This is also where peak and off-peak phone charges came from; if you used the phone at a time when the existing network was near capacity, the telco charged you more in order to encourage you to shift as much usage as possible to off-peak, where there was lots of spare capacity, and hence allow the telco to delay upgrades.

The second is also simple. I pay you for a link with a given communications capacity, and I get that capacity whenever I use it; paying for unlimited phone calls is an example, as is an unlimited Internet connection. In this model, the telco is playing a complex game; if it sets the price for the capacity too low, people will use enough capacity on the "unlimited" link that it has to bring forward a high-capex network upgrade. If it sets the price too high, people will go to its competitors. A middle position, used especially by consumer telcos, is to offer "unlimited with fair use", where you will be asked to reduce your usage or be disconnected if you use enough that a network upgrade is needed to cope with you. This position can cause a lot of grief; people don't like to be told that, actually, this good deal for their usage level isn't for them, and that they're "excessive" users.

The third option (percentile billing) is the most common option used in telco-to-telco links. In a percentile billing world, there is a high capacity link that neither end expects to see fully utilised. Instead, the current utilisation is measured repeatedly (e.g. once per second). The highest measurements are discarded, and payment is based on the highest measurement that remains. A very common version of this is monthly 95th percentile, as used by ISPs: you measure once every second, sort your month's measurements, and discard the highest 5% (e.g. in September, a month with 30 days, you have 2,592,000 seconds; you discard your highest 129,600 readings to get your 95th percentile). You then charge for the highest remaining measurement. For a simplified example, imagine that I measured a day's usage, and charged you 75th percentile. In February, you used 5 units a day for the first week, 1 unit a day for the next 20 days, then 50 units on the last day. 75th percentile of 28 periods involves discarding the highest 7 measurements, so I discard the 50 and six of the 5s, leaving a highest remaining measurement of 5 units. I thus charge you for 5 units/day for the entire month. Had you been able to keep the last day at 1 unit, your bill would have fallen to just 1 unit/day; you can thus see how percentile billing avoids charging for rare peaks, but doesn't let a user get away with lots of heavy use cheaply.
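
The February example above, worked through in code (a minimal sketch; real systems sample every few seconds or minutes rather than once a day):

    def percentile_bill(samples, percentile):
        """Discard the top (100 - percentile)% of samples and bill on the highest one left."""
        discard = int(len(samples) * (100 - percentile) / 100)
        kept = sorted(samples)[:len(samples) - discard]
        return kept[-1]

    # February: 5 units/day for a week, 1 unit/day for 20 days, then a 50-unit spike on the last day
    february = [5] * 7 + [1] * 20 + [50]
    print(percentile_bill(february, 75))   # 5 - the one-day spike is discarded, the sustained 5s are not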

I hope this has piqued some interest; as you can see, running a telco, especially at consumer prices, is much more akin to running a mortgage bank than a shop.