a full-coverage Customer Access Network for Broadband (Internet) and Telephone. The Coalition can only achieve its "cheaper, sooner, more affordable" promise by delivering a fraction of the NBN Co plan. Voters need to read 'the Fine Print'.
In this piece, I want to lay out why, in the 10 years since TransACT rolled out an FTTN covering half of the ACT, it remains the only large-scale attempt at taking on Telstra (and TransACT was not able to proceed as planned onto the other half):
FTTN 'Nodes' (cabinets), especially the MSAN (Multi Service Access Node) favoured by Mr Turnbull, are remote, in-field telephone (POTS) and DSLAM terminations: exactly the RIM/CMUX "pair-gain" solution Telstra has deployed extensively. In short, they are the last thing you'd do if you had a choice:
- it only makes economic sense for the 'incumbent' (Telstra),
- it costs a lot more, in running fibre and the electronics in the node, than DSLAMs in the Exchange, and
- the operational and maintenance costs of remote nodes are much, much higher.
If you're a customer, the Gungahlin Experiment says "stay away!" But Telstra, not you, decides how you're connected.
Ownership and Access
In the Telco world, natural monopolies like the copper Customer Access Network are often referred to as "our birthright". That makes sense of a lot of decisions and behaviour.
Telstra has had to be dragged kicking and screaming by the independent regulator, the ACCC, to share access to its infrastructure. It charges commercial rates for access rather than giving it away, and it hates doing even that with a passion. The ULL (Unbundled Local Loop) agreements allow other Telcos to use Telstra's copper, but their margins are "thin": they make very little money, because Telstra sets both the Wholesale prices they pay and the Retail prices they have to compete against.
We know from the multi-billion-dollar write-offs of the HFC Cable TV networks that Telstra would rather destroy gobs of shareholder capital than share assets or co-operate.
Telstra happily moves customer circuits from direct exchange connection onto its pair-gain systems, making other Telcos' DSLAMs inaccessible, without penalty: they own the lines, so they can do what they like, when they like.
Other Telcos cannot do the same thing: go out into the field, install a node and move all the lines in the area onto it. They don't own the lines.
Even if they did install a node and moved a very small number of clients onto it, we know from the HFC rollout that Telstra would, within a matter of weeks, have installed a pair-gain system (a RIM/CMUX) in front of the new node, rendering it useless.
The ACCC can do nothing about this... It had to stand by and let the HFC networks get rolled down both sides of the same streets while more than half the country was left uncovered.
Only Telstra can install remote nodes with internal DSLAMs and not be denied access.
CapEx
Everything that you need in an Exchange, you need in a node:
- intrusion-proof building, security, alarms, monitoring, test equipment, ...
- primary and secondary power-supplies
- cooling, ventilation and environment protection (floods, fire, termites, vermin, hail, vandals, ...)
- up-links, distribution frames for customer lines, patch panels, line-filters to separate phone/DSL signals, line-cards for phone and ADSL, and different line equalisations (for echo and noise cancellation)
All this, in something that will fit on the back of a ute, and can be knocked over and destroyed by that same ute!
These days, Telcos are not a law unto themselves as they once were: they have to lodge Development Proposals with Council, the same as everyone else, for each and every node. Approval is not automatic.
Getting permission to build a node is the first hurdle; then come construction, seeking power connection (and paying for it), running fibre, laying foundations, joining into existing customer lines and finally transferring them. Surprisingly, the cost of constructing a node doesn't vary much with size: the overheads are that big.
Remember that these nodes, sitting on the side of a road or on a pole, are expected to last 30 or more years with minimal maintenance. The cabinets must be toughened to withstand misadventure, deliberate vandalism and normal environmental conditions, all with "minimal" (read: no) maintenance.
That sort of over-engineering comes with a BIG price-tag.
Compare the cost of even 50 nodes against one exchange building: the nodes are likely 5-10 times more expensive, and they still need to connect to that exchange. If my high-range estimates are right and we need 300+ nodes per subscriber exchange, the additional cost will be crippling.
They are also workplaces for the technicians that install and support them: all relevant OH&S and IR laws apply. If a worker is injured because of something outside the employer's control, like a wandering dog or falling tree, the employer is still liable.
If you have any other choice, you don't put DSLAMs in nodes somewhere on the streets: exposed and prone to damage and failure.
OpEx and Maintenance
Does the power to run many nodes add up to much more than an exchange?
Yes, in three ways:
- Each node has small, inefficient versions of equipment that is shared at the Exchange (power supplies, ...). For reliable operation, I'd expect 2 or 3 "hot-swap" 240V power-supplies. They all operate at low output, ironically well away from their maximum efficiency.
- There is a lot of replicated electronics that aren't there in an exchange.
- All the links in a node, uplinks and customer lines, are active: they consume power, and their supporting electronics uses more again. The 'P' in GPON, the NBN Co technology, stands for "Passive": no power is used on links that aren't provisioned. GPON also shares many connections on the one head-end; DSL technologies are not shared and use around 10 times more power per customer.
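To put a rough number on that last point, here is a back-of-the-envelope sketch in Python. Every wattage below is an illustrative assumption of mine, not a vendor figure: a DSL port in the node is assumed to draw about a watt, the node carries a fixed overhead for power supplies, cooling and uplinks, and a single GPON OLT port of a few watts is shared across a 32-way split.

```python
# Back-of-the-envelope power per customer: FTTN node DSL vs shared GPON.
# Every wattage here is an illustrative assumption, not a vendor figure.

NODE_PORTS = 150             # customers on one FTTN node
WATTS_PER_DSL_PORT = 1.2     # assumed line-card/line-driver power per DSL port
NODE_OVERHEAD_WATTS = 120    # assumed power supplies, cooling, uplink optics

GPON_SPLIT = 32              # customers sharing one GPON OLT port
WATTS_PER_OLT_PORT = 6.0     # assumed OLT port power, shared by the whole split

dsl_w = (NODE_PORTS * WATTS_PER_DSL_PORT + NODE_OVERHEAD_WATTS) / NODE_PORTS
gpon_w = WATTS_PER_OLT_PORT / GPON_SPLIT

print(f"FTTN node: ~{dsl_w:.2f} W per customer")    # ~2.0 W
print(f"GPON OLT:  ~{gpon_w:.2f} W per customer")   # ~0.19 W
print(f"Ratio:     ~{dsl_w / gpon_w:.0f}x")         # ~11x
```

The exact ratio depends entirely on those assumed figures; the structural point is that GPON shares its powered electronics across many customers while every DSL port is dedicated.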
But the extra power consumed is really only a side-issue.
The engineering "gotcha" with all remotely installed equipment is not inquisitive cockatoos, floods, frost and spiders: they cannot be economically "climate controlled". We know from Accelerated Ageing Testing that failure rates of electronics radically increases with temperature: when ambient temperature is well above 20° Celsius, semiconductor, wiring and board reliability plummets. The usual heuristic is "lifetime halves with each 10°C rise in temperature". Outside in the sun on a hot day, the electronics cook: you can see your dollars melting away...
How do you get 20-30 years of service life out of all the electronics in a node? (If one piece fails, the node fails and everyone on it loses service.)
There are two parts to Climate Control: Temperature and Humidity.
Too dry, or too wet to the point of condensation forming, damages electronics. Too much water beading for too long and you've got mildew and mould: not just OH&S hazards, but toxic to electronics... Nodes in a desert won't fare any better: capacitors dry out, and static electricity builds up when the air is too dry. Static discharge, even at levels a technician doesn't notice, is lethal to semiconductors.
Yeah, you really want to put expensive, sensitive electronics in a secure, climate controlled room if you can: let's call it "an exchange"! Nodes located in the field are a really bad idea on so many levels...
If there are 50-300 nodes within an exchange service area, how many kilometres must a technician drive to visit them all? They are at least 0.8-1.6km apart.
With 50 nodes, 2km apart, that's well over 100km driving around them.
With 300 nodes 1km apart, that's enough driving in an urban area to take you all day.
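A rough sketch of that driving overhead, assuming an average urban driving speed of 40km/h (the speed and the node-to-node route are my assumptions; it ignores the trip back to the exchange or depot):

```python
# Rough driving overhead for a technician visiting every node in an exchange area.
# The 40 km/h average urban speed is an illustrative assumption.

def patrol(nodes: int, spacing_km: float, avg_speed_kmh: float = 40.0):
    distance_km = nodes * spacing_km        # node-to-node legs only
    return distance_km, distance_km / avg_speed_kmh

for nodes, spacing in ((50, 2.0), (300, 1.0)):
    km, hours = patrol(nodes, spacing)
    print(f"{nodes} nodes, {spacing} km apart: ~{km:.0f} km, ~{hours:.1f} h driving")
# 50 nodes, 2 km apart: ~100 km (~2.5 h);  300 nodes, 1 km apart: ~300 km (~7.5 h)
```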
This is the Operational problem with nodes: they are scattered all over the countryside, in places chosen for their network suitability, not road access. There is a considerable overhead (read 'waste of work time') in servicing anything located outside an exchange, let alone hundreds of little beige boxes dotted around the countryside.
Line faults, customer connections and line upgrades arrive randomly: they cannot be grouped and scheduled to minimise technician driving time.
Instead of a Tech rolling up to an exchange and fixing half-a-dozen faults in comfort and safety in a morning, they will be out in all weathers, spending two to three times more time driving than doing what they are paid for: fixing things. That's really bad business.
There are two other times you'll send technicians around to every node:
- software/firmware upgrades
- hardware updates, such as installing faster fibre uplinks.
Doing that for every node in Australia will be a massive undertaking, on the same scale as the initial rollout, not something "quick and easy".
Did I say this equipment is expected to last 30 years with little maintenance?
If you don't house equipment in climate controlled conditions, you have to under-rate components and over-engineer systems to get anywhere close to your design life goal.
Performance and Upgrade
How much performance (bandwidth) can you expect over your average 40-year-old copper phone cable?
It was designed for voice, and 28.8kbps was pretty reliable on good phone services, though 2.4-9.6kbps may be all that's possible on rural and remote phone lines. [But those lines cannot get FTTN services anyway: they are way beyond 1.6km, and have always been covered by Fixed Wireless or Satellite.]
That 28.8kbps has been pushed to 25Mbps at 800m, by a lot of cleverness and more than a little luck.
For high-bandwidth transmission systems (not customer lines) there is, for any type of cable, a constant that describes it: the "Distance-Bandwidth Product". You might get 100Mbps with repeaters every 10km, 200Mbps with repeaters every 5km, and 1,000Mbps with repeaters every 1km...
For carefully designed and specially manufactured cable, this is the best that is possible.
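A minimal sketch of that trade-off, assuming the 1,000Mbps·km product implied by the figures above (the customer loop, as the next paragraph explains, falls far short of this idealised constant):

```python
# Distance-bandwidth product: for a given cable, rate x repeater-spacing is
# roughly constant. 1000 Mbps.km matches the example figures above.

PRODUCT_MBPS_KM = 1000.0

def max_rate_mbps(repeater_spacing_km: float) -> float:
    return PRODUCT_MBPS_KM / repeater_spacing_km

for spacing in (10.0, 5.0, 1.0, 0.8):
    print(f"repeaters every {spacing:>4} km -> ~{max_rate_mbps(spacing):.0f} Mbps")
# 10 km -> 100 Mbps, 5 km -> 200 Mbps, 1 km -> 1000 Mbps, 0.8 km -> 1250 Mbps
```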
The simple unshielded conductors in the customer access loop aren't that well built: electrically they look more like capacitors, and their performance falls off very quickly with distance, leading to some counter-intuitive results: ADSL2 is faster than VDSL2 from 800m to 1600m.
At extreme distances, the lower frequency ADSL1 is faster than everything.
It'll get 512kbps - 1.5Mbps well past when ADSL2 has stopped completely.
The statements that "people have achieved 10Gbps in the laboratory" might be true, but they are just noise. Right now you can buy a specially manufactured "Thunderbolt" cable that does 10Gbps: over 2-5m!
There is a physical limit for the low-spec cable already in use, and as it gets older it becomes noisier, less reliable and less able to support higher line speeds: it wears out. The other little wrinkle is that to overcome line noise that increases with frequency, you have to boost transmit power. This increases mutual interference (cross-talk) and reduces achievable speeds on all links. Not what you want.
One day we might have a DSL capable of 1Gbps: but it won't go as far as your gate!
What's the point of a 5-10m copper lead-in and a 2-port node that costs $25,000?
It can never make sense, technically or economically, to run 1Gbps DSL.
If you want 100Mbps services, ever, the message is clearcut: start with fibre.
As I've alluded to above, when the uplink gets congested the service is unusable for everyone on the node. What can be done?
This isn't academic: exactly this problem has plagued those forced into the Gungahlin Experiment when Telstra finally provided a reasonable number of DSL ports per RIM/CMUX.
Let's consider a fully populated 150-port ADSL2 node, the sort of thing Telstra proposed in 2005 to provide a minimum of 12Mbps and a maximum of 20-24Mbps.
Because all services are priced on speed, most customers will initially use the cheapest, and slowest, connection rate, and over time they will upgrade.
150 ports at 6Mbps each is an aggregate of under 1Gbps: providentially, SFPs (the Ethernet optical transceivers) are cheap. The service is not "over-subscribed" and we'll have happy users, if the upstream links are also properly dimensioned.
What happens when people start to upgrade to 12Mbps services and exceed the capacity of the uplink?
Most of the time, nobody notices anything, because not everyone is demanding full speed together.
It is only during "busy hour", say 6PM-9PM, that congestion will show up. In the way of these things, it starts gently with the odd "glitch", then worsens. Surprisingly quickly, nobody will be getting usable speed: this is an artefact of the Internet's TCP protocol, whose rule is: if at first you don't succeed, try again, relentlessly! It is a computer, after all.
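One rough way to see why congestion only bites at busy hour is to treat each of the 150 subscribers as independently "active" at full speed with some probability, and ask how often more than an uplink's worth of them are active at once. The activity probabilities below are illustrative assumptions, and real traffic is burstier than this:

```python
# Statistical multiplexing sketch: 150 users on a 1 Gbps uplink, each synced
# at 12 Mbps but only actively pulling data some fraction of the time.
from math import comb

PORTS, RATE_MBPS, UPLINK_MBPS = 150, 12.0, 1000.0
MAX_SIMULTANEOUS = int(UPLINK_MBPS // RATE_MBPS)   # 83 users fit before congestion

def prob_congested(p_active: float) -> float:
    """P(more than MAX_SIMULTANEOUS of the PORTS users are active at once)."""
    return sum(comb(PORTS, k) * p_active**k * (1 - p_active)**(PORTS - k)
               for k in range(MAX_SIMULTANEOUS + 1, PORTS + 1))

for p in (0.3, 0.5, 0.6, 0.7):
    print(f"{p:.0%} of users active: P(uplink congested) ~ {prob_congested(p):.2f}")
# 30%: ~0.00, 50%: ~0.08, 60%: ~0.86, 70%: ~1.00
```

Off-peak, the uplink is effectively never full; once busy-hour activity climbs, congestion stops being an occasional glitch and becomes the normal state.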
There are two options at this point:
- install another 1Gbps uplink, or
- increase the speed of the uplink to 10Gbps, the next available.
(Higher speed SFPs are still "premium priced": 10Gbps might be 15-20 times the cost of 1Gbps.)
Only, it is never so easy...
- How many extra fibres did you pull to the node for upgrade, remembering that bundle of fibres is being shared between quite a few nodes?
- Does your node even allow you to add another 1Gbps interface?
- It might support two interfaces, but not with both sharing traffic: the second acting only as a 'fail-over standby'.
- Whilst the 10Gbps SFP will plug in and the electronics will operate, it won't move more than the board can handle...
- Unless you originally bought the much more expensive electronics to support 10Gbps (unlikely, because we are trying to do this on the cheap), installing a faster interface will buy you nothing...
- Why would this situation arise? In 10 years' time, availability will be better and prices will have fallen... Technicians will plug in the faster SFP and won't expect it not to work.
But wait, it gets worse, much worse...
- Our 150 customers won't just be limited to 12Mbps (2Gbps aggregate); ADSL2+ reliably delivers 20-24Mbps for those closer in. Let's upgrade 33% of ports to 20Mbps and another 33% to 15Mbps...
- We can realistically have aggregate latent demand for 2.5Gbps...
- We can saturate the link just with the fastest users.
- Busy hour is going to be most of the day.
- But we aren't just selling ADSL2 services, we're selling VDSL2 at a minimum of 25Mbps and a maximum of 80Mbps.
- 12% of ports at 80Mbps will load the link to almost 150% (about 1.44Gbps).
- 25% of users will get 25-30Mbps, adding another 1Gbps.
- And the rest, 63%, will get 12-15Mbps, another 1Gbps.
- We end up roughly four times over-subscribed on the Customer Access Network (a rough tally is sketched after this list). That's really, really bad for Reliability and Performance.
- Would you believe it can get worse?
- Mr Turnbull proudly says "those that want it, say a business, can have Fibre Optic connections, if they can and will pay". Oops.
- That's 100Mbps on a single link.
- 10 of those on a single node, and it is saturated.
- But even if there are only 2-3 business buying 100Mbps on that node,
- they won't be happy, because they get to share a congested link with everyone else, and
- the ADSL/VDSL customers will be very unhappy when one of the businesses decides to move a few GB of files around after hours: residential busy hour falls in the morning and evening, exactly when businesses aren't working and SysAdmins like to do the jobs that would upset workers during the day.
- All these scenarios end in tears...
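Here is the rough tally promised above, worked as a small Python sketch. The speed mixes follow the scenarios in the list (taking the lower end of each range), and a 1Gbps uplink is assumed:

```python
# Rough tally of latent busy-hour demand on a 150-port node with a 1 Gbps uplink.
# The speed mixes are the illustrative scenarios from the list above.

UPLINK_MBPS = 1000.0
PORTS = 150

scenarios = {
    "launch: everyone on 6 Mbps":        {6: 1.00},
    "ADSL2+ upgrades (12/15/20 Mbps)":   {12: 0.34, 15: 0.33, 20: 0.33},
    "VDSL2 mix (12/25/80 Mbps)":         {12: 0.63, 25: 0.25, 80: 0.12},
}

for name, mix in scenarios.items():
    demand_mbps = sum(rate * share * PORTS for rate, share in mix.items())
    print(f"{name}: ~{demand_mbps / 1000:.1f} Gbps latent demand "
          f"({demand_mbps / UPLINK_MBPS:.1f}x the uplink)")
# ~0.9 Gbps (0.9x), ~2.3 Gbps (2.3x), ~3.5 Gbps (3.5x)
```

Taking the top of each speed range instead pushes the VDSL2 scenario to roughly 4x the uplink, the over-subscription figure quoted above.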
Wrap Up
Why did we want to install all those thousands of nodes around the countryside?
Because we didn't design the telephone network to handle broadband, and we're trying to cheat and get it cheaply with what we already have.
The quick summary of the above is: You want anything but nodes in the field for wide-scale, 30-year life equipment to provide Telco-grade reliability.
There is one last "gotcha": Death by Success.
I briefly mentioned the mutual-interference (far-end cross-talk) of DSL services and that it can limit speeds.
The original 4-wire 2Mbps PCM systems (Pulse Code Modulation, pre-full-digital) carried 30 voice calls, but were only specified as "use one per cable".
The designers of DSL systems have pushed the envelope somewhat, but this is a fundamental and unresolvable problem: if you have large numbers of DSL services all on the same large cable, they will all suffer interference.
This is exactly the Telstra problem (with 2,400- and 800-pair cables) described by Richard Chirgwin in June, which was misinterpreted by Malcolm Turnbull as "cable faults".
The problem arises because all the DSL modems and receivers operate on identical frequencies, using identical coding methods and hopping frequencies in the same way.
If there is one modem operating in a cabinet with 149 other identical modems blasting away at the same power level, that's a pretty cacophonous environment: the one guarantee you can make is that NO service will be able to achieve its full speed.
Just how bad is this problem? With our cheap, worn-out cabling, I think we'll start to plumb the depths of that answer.
The worst case may be: only 50% of ports on a node can be activated. Pretty cool technology, eh?!
There is an answer: dig some new trenches and run brand-new cables, away from the working ones.
While you're replacing the failed copper, just install fibre.
And in case you were wondering, Fibre Optic doesn't have this problem. It comes with immunity to crosstalk.
The Time Division Multiplexed uplink of GPON, like GSM mobile phones, does have a failure mode: it can go wrong if a transmitter fails "on", but that's a limited outage and quickly fixed (turn it off).