This blog entry is based on my Usenet posting, which has been archived by Google.
The question raised was "what is the difference between PPPoE and PPPoA on ADSL? Is there any benefit from choosing one or the other?". This post goes into the details of how PPPoE, PPPoA and ATM are interrelated on UK ADSL providers' networks.
Anyone who's looked at the settings offered by their ADSL modem will have seen a huge variety of options: VC Mux, LLC, PPPoA, RFC1483/2684, PPPoE, CLIP and often others. These are all ways of carrying your Internet traffic in ATM. I'm only going to consider PPPoE and PPPoA encapsulation, as those are the only two options AAISP offer.
On a BT Wholesale line, you get to choose an encapsulation method (either PPPoA or PPPoE); the other end can autodetect what encapsulation you're using, by looking at the incoming ATM cells.
For PPPoA, the incoming ATM cells contain PPP directly. When the PPP link is not yet up, these will always begin C0-21 (the PPP LCP protocol number).
For PPPoE, the incoming ATM cells contain a 10 byte Ethernet over ATM header, a 14 byte Ethernet header and a 6 byte PPPoE header, then the PPP. The Ethernet over ATM header always begins AA-AA-03.
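If you fancy seeing that autodetection in code, here's a toy sketch: look at the first bytes of a reassembled AAL5 payload and decide which encapsulation it is. The function name is mine, and real BRAS logic is rather more involved - this just shows the byte patterns involved.

```python
# A toy version of the encapsulation autodetection: inspect the first
# bytes of a reassembled AAL5 payload. detect_encapsulation is an
# illustrative name, not anything from real BRAS software.

def detect_encapsulation(payload: bytes) -> str:
    if payload[:2] == b"\xc0\x21":      # PPP LCP straight away: PPPoA
        return "PPPoA"
    if payload[:3] == b"\xaa\xaa\x03":  # Ethernet over ATM header: PPPoE
        return "PPPoE"
    return "unknown"

print(detect_encapsulation(b"\xc0\x21\x01\x01"))      # PPPoA
print(detect_encapsulation(b"\xaa\xaa\x03\x00\x80"))  # PPPoE
```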
Incidentally, this means that PPPoE is always going to be slightly slower than PPPoA - when you're unlucky, a packet takes one ATM cell more than it would with PPPoA, costing you some speed. Eagle-eyed readers will also recognise that this makes Be Retail's Ethernet over ATM services use slightly more overhead than PPPoA services; there are 20 bytes of overhead for Be, compared to the 2 bytes of PPP overhead.
There are some differences in how this all works between BT 20CN and 21CN:
In 20CN, your DSLAM port is an ATM device - it knows nothing about PPP, IP, Ethernet etc, and just transfers ATM cells to the BRAS, using BT's ATM cloud. The BRAS does the encapsulation autodetection, handles the PPP to L2TP translation, and passes L2TP to AAISP.
In 21CN, the ATM cloud is replaced with an Ethernet cloud, running with jumbo frames. Your MSAN port does the autodetection, removes the ATM cell framing, adds Ethernet framing if you're doing PPPoA, and sends the resulting PPPoE frame (which can be oversized - allowing a full 1500 byte MTU) to the BRAS, which then translates PPP to L2TP and passes the data to AAISP.
There is never a performance gain on BT ADSL from using PPPoE: in 20CN, it's just one more block of bytes for the BRAS to strip. In 21CN, you save the MSAN adding a block of bytes to each packet, but it can do that without measurable impact (easily keeping up with speeds far beyond any ADSL port), and you lose data rate on your line carrying those bytes.
Techy geeky stuff
First, a quick walkthrough of ADSL layers:
- At the bottom-most layer, you have a broadband analogue signal, changing at a rate of 4 kBaud; this is split into bins of 4.3125kHz, each of which encodes up to 15 bits per bin. Bins 0 to 5 are reserved for voice. The remaining bins vary depending on the standard.
- Upstream is bins 6 to 31 in Annex A (for all the supported standards, ADSL1, ADSL2 and ADSL2+)
- Upstream is bins 6 to 63 in Annex M (ADSL2+ only)
- Downstream is bins 32 to 255 in ADSL1 and ADSL2
- Downstream is bins 32 to 511 in ADSL2+ Annex A.
- Downstream is bins 64 to 511 in ADSL2+ Annex M.
- The next layer is ADSL frames, which include error correction and sync framing. This level is normally invisible to the user.
- The uppermost layer I'm going to consider is the user frame level. There are two modes permitted in ADSL1, and three in ADSL2 and ADSL2+: STM, ATM and PTM. STM uses the SDH frame structure; PTM carries higher-level protocol frames directly, typically Ethernet frames. ATM is the mode used in the UK (and most deployments worldwide); ATM cells are packed into ADSL frames, but all you see is a stream of ATM cells.
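As a sanity check on those bin numbers, you can work out the raw bit-rate ceiling each standard implies: number of bins, times 15 bits per bin, times the roughly 4,000 symbols per second mentioned above. This is strictly back-of-envelope - real lines sync well below it once framing, error correction and line conditions are accounted for:

```python
# Back-of-envelope bit-rate ceilings from the bin ranges above:
# bins x 15 bits per bin x 4,000 symbols per second. Real lines
# sync far below this.

SYMBOLS_PER_SEC = 4000
BITS_PER_BIN = 15

def raw_ceiling(first_bin: int, last_bin: int) -> int:
    """Raw bit-rate ceiling, in bits per second, for a range of bins."""
    return (last_bin - first_bin + 1) * BITS_PER_BIN * SYMBOLS_PER_SEC

print(raw_ceiling(32, 255) / 1e6)  # ADSL1/ADSL2 downstream: 13.44 Mbit/s
print(raw_ceiling(32, 511) / 1e6)  # ADSL2+ Annex A downstream: 28.8 Mbit/s
```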
An ATM cell is 53 bytes long, 5 bytes header, 48 bytes payload. A 48 byte MTU is too small, so we use an encapsulation called AAL5, which takes 8 bytes from the last cell in a set, and lets you have up to 65535 bytes of payload. So, 100 bytes of payload in AAL5 is carried in 3 cells; the first two cells have 48 bytes each of the payload, and the last cell has 4 bytes of payload, 36 bytes of padding, and 8 bytes AAL5 trailer. For a second example, if you had 96 bytes of payload (a disaster case), you get 3 cells. Two cells contain 48 bytes each of payload, and the last cell contains 40 bytes of padding, and 8 bytes of AAL5 trailer.
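The AAL5 packing arithmetic is easy to reproduce; here's a small sketch (the function names are mine) that gives the cell count and padding for both examples above:

```python
# AAL5 packing: how many cells a payload needs, and how much padding
# ends up before the 8 byte trailer.

CELL_PAYLOAD = 48
AAL5_TRAILER = 8

def aal5_cells(payload: int) -> int:
    """ATM cells needed to carry `payload` bytes in AAL5."""
    return -(-(payload + AAL5_TRAILER) // CELL_PAYLOAD)  # ceiling division

def aal5_padding(payload: int) -> int:
    """Padding bytes inserted before the AAL5 trailer."""
    return aal5_cells(payload) * CELL_PAYLOAD - payload - AAL5_TRAILER

print(aal5_cells(100), aal5_padding(100))  # 3 cells, 36 bytes of padding
print(aal5_cells(96), aal5_padding(96))    # 3 cells, 40 bytes of padding
```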
PPP adds a 2 byte header to each IP frame - so, to send 1500 bytes of IP data, you send 1502 bytes of PPP. If you're doing PPPoA, these 1502 bytes become 31 cells containing 48 bytes of payload each, plus a cell containing 14 bytes of payload, 26 bytes of padding, and the 8 byte AAL5 trailer. Your sync speed is measured in kilobits of cells per second, so an 832kbit/s sync speed is around 1,960 cells per second (832kbit/s divided by 53 bytes × 8 bits per cell).
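That arithmetic, in code (the precise cell rate comes out just under 2,000 per second):

```python
# 1500 bytes of IP over PPPoA: 2 bytes of PPP, then AAL5.
ppp_bytes = 1500 + 2
cells = -(-(ppp_bytes + 8) // 48)  # ceiling division: payload + AAL5 trailer
print(cells)  # 32

# Sync speed counts the whole 53 byte cell, header included:
cells_per_second = 832_000 / (53 * 8)
print(round(cells_per_second))  # 1962
```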
If you add PPPoE into the mix, the PPPoE header adds another 6 bytes, and the Ethernet header a further 14, for 20 bytes of overhead. When doing Ethernet over ATM, you normally drop the Ethernet checksum, but add 10 bytes of headers, bringing the total to 30 bytes of overhead on top of the PPP. To send 1500 bytes of IP in this setup, you need to send 1532 bytes of payload. This becomes 31 cells carrying 48 bytes of payload each, a cell carrying 44 bytes of payload and 4 bytes of padding, and a cell carrying 40 bytes of padding and 8 bytes of AAL5 overhead.
As you can see, there are whole ranges of packet sizes where PPPoA takes up one fewer cell on the wire than PPPoE. 9 to 38 bytes of IP (never normally seen, as IPv4 is 20 bytes of overhead, and TCP adds another 20) are 1 cell in PPPoA, 2 cells in PPPoE. You then get 18 sizes (39 to 56 bytes) where the cell counts are the same. 57 to 86 bytes of IP take up 2 cells in PPPoA, 3 cells in PPPoE, and the cycle continues. Thus, depending on your packet size distribution, you may see the same speeds with PPPoA and PPPoE, or slightly better with PPPoA.
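You can explore these ranges yourself with a couple of lines of Python. The 32 bytes of PPPoE-side overhead below is the 2 PPP + 6 PPPoE + 14 Ethernet + 10 Ethernet over ATM bytes from above; AAL5 then adds its 8 byte trailer:

```python
# Cells per IP packet under each encapsulation: PPPoA adds 2 bytes of
# overhead to the IP packet, PPPoE adds 32, and AAL5 adds 8 either way.

def cells(payload: int) -> int:
    return -(-(payload + 8) // 48)  # ceiling division

for ip_bytes in (8, 20, 40, 60, 80, 1500):
    pppoa = cells(ip_bytes + 2)
    pppoe = cells(ip_bytes + 32)
    print(ip_bytes, pppoa, pppoe)
```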
This is also where the faulty advice to use a 1478 byte MTU with PPPoA comes from - when doing bulk transfers, most of your packets are MTU-sized. A 1478 byte MTU, with the 10 bytes of PPP and AAL5 overhead, works out to precisely 31 cells, so you don't "waste" 26 bytes in every 1536 bytes of payload on padding; the downside is a slight increase in the number of packets sent, so more is lost to TCP/IP headers. Assuming IPv4, a minimum size IP and TCP header is 40 bytes, so this can end up costing you more in extra IP headers than it saves in ATM padding.
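To put numbers on that trade-off, here's a sketch comparing a single full-sized packet at each MTU - wire bytes (53 per cell) against useful TCP payload - assuming PPPoA and minimal 40 byte IPv4+TCP headers:

```python
# One full-sized PPPoA packet at each MTU: wire bytes versus the
# fraction of them that is useful TCP payload (40 byte IPv4+TCP
# headers assumed).

def wire_bytes(ip_bytes: int) -> int:
    cells = -(-(ip_bytes + 2 + 8) // 48)  # PPP + AAL5 overhead
    return cells * 53

for mtu in (1500, 1478):
    useful = mtu - 40
    print(mtu, wire_bytes(mtu), round(useful / wire_bytes(mtu), 4))
# Per full packet, 1478 is the more efficient; the extra headers only
# bite once you count the extra packets a whole transfer needs.
```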
Finally, if you're wondering why 48 bytes - there are good technical reasons to make the payload size a power of 2. The Americans wanted 64 bytes, because that would be more efficient for data, and still good enough for voice with echo cancellers in a US-wide network. The French wanted 32 bytes, because that would allow them to have a France-wide network without echo cancellers for voice. They refused to agree, so management types compromised on 48 bytes, a pain for *everyone*.
Aren't committees wonderful?
Edited 2010-04-27 - the 1478 byte MTU advice is correct for bulk transfers, but not for some mixed size transfers. In a bulk transfer, you get one padded ATM cell per 1500 byte packet; my error in reasoning was in not noticing that you don't also get one extra IP packet per 1500 bytes transferred if you do 1478 byte MTU. In practice, this means that for small transfers, 1500 byte MTU is more efficient - for larger transfers, 1478 byte MTU is more efficient; I leave calculating the point at which the extra IP headers outweigh the extra ATM cells to the reader. Thanks to Simon Arlott for pointing this out to me.
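For the curious, here's one way to set up that calculation - a sketch under the same assumptions as above (PPPoA, minimal 40 byte IPv4+TCP headers, full-sized packets except the last, no retransmits):

```python
# Total ATM cells to move a given amount of TCP payload at each MTU.
# Small transfers favour 1500; large bulk transfers favour 1478.

def cells_for_transfer(total_payload: int, mtu: int) -> int:
    per_packet = mtu - 40                    # TCP payload per full packet
    full, rest = divmod(total_payload, per_packet)
    cells = full * -(-(mtu + 10) // 48)      # +2 PPP, +8 AAL5 per packet
    if rest:
        cells += -(-(rest + 40 + 10) // 48)  # short final packet
    return cells

for size in (1460, 10_000, 100_000):
    print(size, cells_for_transfer(size, 1500), cells_for_transfer(size, 1478))
```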