Showing posts with label cable TV. Show all posts

Saturday, 4 May 2013

NBN: Multicast - is it the secret sauce for Broadband take-up?

A good piece by Dom Wright about on-demand TV in Australia links iTunes, Spotify, Netflix and other streaming services to the pirating wars of 10 years ago. When the content gatekeepers dropped the price barriers and gave us "just works, anywhere", on-line paid usage very predictably soared. Wright argues that he'd like an "all you can eat, infinite choice" (low) fixed-price rental service here.

See my previous piece on this topic as well.

Pirating is driven by a simple equation: the retail cost, in dollars, of an item is high compared to its consumer utility, and the cost, in money, time and potential legal action, of bypassing the technology is low. People are not motivated to bypass barriers for things that provide higher utility than the sticker price - they'll happily pay for stuff that is perceived as "value for money".

If people are stealing your content in preference to buying it, you've got two problems: you're charging more than the middle of the market is willing to pay, you're above their price-point, and you haven't bothered to implement good technical security measures, the equivalent of not putting a door on a bank-vault.

Which is why TV licences were dropped here. Australia followed the UK in charging consumers a yearly fee to watch TV. There was a compliance service and people were fined. Around the introduction of colour TV, licences were scrapped following community outcry. The consumer value placed on the service was much lower than the licence fee, and the technology bypass was "too easy": turn your set on.

In all these discussions there is a critical party omitted: the talent.
The way the music/recording business has evolved, the creators of material get the smallest share of the pie. Even if successful, most never make a cent from a recording contract. Which is why concerts and direct sales are important to them: it's where they are allowed to make money.

There are two questions that Dom Wright doesn't explore:
  • Are there technological enablers or barriers in the business?
  • What is the consumer price-point for broadcast TV?
    • We know from Netflix and iTunes that it's a small amount, above the delivery costs, so attractive to content owners/resellers as well.
For 10-15 million TV sets and 2-5 million mobile devices to simultaneously live-stream using TCP requires a massive head-end infrastructure and multiple very high bandwidth pipes. E.g. multiple data centres of 1000 servers per million streams. It's not that much, say $100M to setup with a life of 3-4 years and 20%/year to operate/maintain.

The economic killer is the bandwidth: even at 4Mbps/stream, 20 million streams is 80 terabits/sec, or 800 of the fastest ethernet links (100Gbps) available. Two transceivers (SFP/GBICs) are needed per link, plus the high-end kit to plug them into. Guess $25-50,000 per link to set up, then line-rental and volume charges on top. $50M to start and $100M+/year to run, maybe.

A low-ball guess is $250M setup and $150-$250M/year to operate a service to 20M subscribers. More if they want HD. Add marketing costs, content licensing fees and profit margins and it's $750-$1,500M/year. Or $50-$75/user/year, but only if you get everyone to sign on to your service.
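Those back-of-envelope numbers are easy to sanity-check. A minimal sketch in Python, using the guesses from the text (the dollar figures are rough assumptions from the paragraphs above, not sourced data):

```python
# Rough unicast-streaming cost model, using the text's guesses.
STREAMS = 20_000_000          # simultaneous TV + mobile streams
RATE_MBPS = 4                 # per-stream video rate

total_tbps = STREAMS * RATE_MBPS / 1_000_000   # Mbps -> Tbps
links_100g = total_tbps * 1000 / 100           # 100Gbps links needed

# Guessed annual all-in cost: midpoint of the $750-$1,500M/year range.
all_in_m_per_year = 1_000
per_user_per_year = all_in_m_per_year * 1_000_000 / STREAMS

print(total_tbps)             # 80.0 Tbps aggregate
print(int(links_100g))        # 800 links
print(per_user_per_year)      # 50.0 dollars/user/year at the midpoint
```

At the $1,000M/year midpoint and 20M subscribers, the per-user figure lands at the bottom of the $50-$75/user/year range quoted above.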

The sting-in-the-tail is that you have to overbuild your server farm and links. What's worse than nobody signing up? Everybody signs on and your facilities can't cope. As soon as you can't deliver, you're hosed and it'll take a very long time, if ever, to recover. Think "Click Frenzy": they were slammed in their first year and learnt from the experience, but people didn't come back in droves after that first bad impression.

TCP links are point-to-point, with packets flowing both ways. For each data packet sent to you, your machine sends one back. At 4Mbps, that's around 400 packets/sec each way. For a 3-hour video, that's around 8.6M packets processed at each end. For 20M streams: about 170,000,000,000,000 packets - 170 trillion.
And that's on a normal day.
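The packet arithmetic can be replayed in a few lines of Python. The ~1250-byte packet size is an assumption, inferred from the "400 packets/sec at 4Mbps" figure above:

```python
# TCP packet-count estimate for unicast streaming (text's assumptions).
RATE_BPS = 4_000_000          # 4 Mbps per stream
PACKET_BYTES = 1250           # assumed: implied by "~400 packets/sec"

pkts_per_sec = RATE_BPS / (PACKET_BYTES * 8)   # data packets, one way
movie_secs = 3 * 3600                          # a 3-hour video
pkts_per_end = 2 * pkts_per_sec * movie_secs   # data + ACKs, each end
total = 20_000_000 * pkts_per_end              # across all streams

print(pkts_per_sec)           # 400.0
print(int(pkts_per_end))      # 8640000 per end, i.e. ~8.6M
print(total)                  # ~1.7e14, i.e. ~170 trillion
```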

If we adopt a simple-minded approach to live streaming, we end up doing two things:
  • perpetuating the current situation where very high barriers to entry create a very small marketplace. Would it be more of the same: FOXTEL and nobody, or FOXTEL v NetFlix v iTunes?
  • limiting consumer choice, with consumers paying much more than they need to, both because of the high delivery costs and because the small marketplace reduces competition.
The talent, the performers and creatives, are still left with no power and almost none of the revenue. The people we need cultivated and encouraged are again dudded by the hordes of Important Fatcats who can't possibly survive on less than $1 million/year, "it's a self-image thing".

There's a simple technology that comes for free with the NBN "bit-stream" design: multicast.

It works identically to radio & TV broadcast, but over the Internet. There are distinct channels, and on every channel there is one transmitter and many receivers. Everybody gets the same stream of packets and has to cope with "noise", data errors and drop-outs themselves.

With the Internet, to reach more people, you don't need higher and higher-power transmitters. The network switches copy the packets as needed, acting like perfect local amplifiers for radio & TV.

Not only does the cost of setting up as a broadcaster fall to a few thousand dollars, the consumer charges are low. I'd expect $5/mth for every channel would do it. As the Coalition point out, 12Mbps will stream a couple of StdDef TV channels or a HiDef channel. If you have a phone service and an NTD, you've got 12Mbps. For an extra $5/mth, why wouldn't you get multicast TV?
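A trivial check of that channel budget (the ~5Mbps StdDef and ~10Mbps HiDef rates are illustrative assumptions on my part, not broadcaster specifications):

```python
# How many channels fit in a 12 Mbps multicast allocation?
LINK_MBPS = 12
SD_MBPS, HD_MBPS = 5, 10      # assumed per-channel rates

sd_channels = LINK_MBPS // SD_MBPS
hd_channels = LINK_MBPS // HD_MBPS
print(sd_channels)            # 2 StdDef channels
print(hd_channels)            # 1 HiDef channel
```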

To set yourself up as an Internet broadcaster over multicast, what do you need?
  • Access to RSP's multicast channels, presumably free or cheap.
  • One server, even in a home. Probably three in different locations would be good.
  • One uplink into the network to handle your 3-8Mbps. Domestic quality links will probably do.
  • More likely, you'll pay a small fee ($5-50) to a commercial provider to upload your content ahead of time, then they'll stream it everywhere at a scheduled time.
This is like Youtube: anyone can upload short content for free, longer programs cost money.
Then "servers in the cloud" send it out...

Multicast is exactly like Broadcast, you trade flexibility for price and scheduling of programs.
For people who are willing to trade program cost for time of viewing, this works exceedingly well. 
In these days of cheap PVR's, it's a boon: you get "watch on demand" for the price of broadcast.

There is a significant and valuable market segment that still needs to be catered for:
  • people who want more than the scheduled service and are willing to pay.
    • you might only broadcast StdDef programs and charge for HighDef, 3-D and others.
    • Sports fans will pay a lot to see events live.
      • and even more to access multiple, special views
    • or if I love a movie, program or series, I might want to buy it and add it to my iTunes/whomever account, then and there.
    • There is also a well-established market for "niche" content, be it First-Run movies, Concerts or "Adult Content".
Like broadcast satellite TV, multicast packets can be encrypted, ensuring that only those with the program key can view it. It's relatively easy and low-cost to securely distribute program keys only to paid-up subscribers.
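The key-distribution idea can be sketched in toy form: encrypt the stream once under a program key, then send each subscriber that program key wrapped under their own key. This shows the structure only, using a home-made XOR keystream for illustration - it is not the real DVB conditional-access cryptography and is not secure:

```python
# Toy sketch of broadcast conditional access. One program key encrypts the
# multicast stream for everyone; each paid-up subscriber receives the
# program key wrapped under their own subscriber key.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt by XOR with a SHA-256 counter keystream (toy cipher)."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

# Broadcaster side: one encryption of the stream, regardless of audience size.
program_key = secrets.token_bytes(32)
packet = b"multicast video payload ..."
ciphertext = keystream_xor(program_key, packet)

# Per-subscriber: wrap the program key under the subscriber's key.
# This is a tiny unicast message, cheap to send to each paid-up viewer.
subscriber_key = secrets.token_bytes(32)
wrapped = keystream_xor(subscriber_key, program_key)

# Subscriber side: unwrap the program key, then decrypt the shared stream.
recovered_key = keystream_xor(subscriber_key, wrapped)
plaintext = keystream_xor(recovered_key, ciphertext)
assert plaintext == packet
```

The point is the asymmetry of cost: the heavy lifting (stream encryption) happens once, while the per-subscriber work is a single small key-wrap message.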

As I've outlined previously, Free-to-Air broadcasters, both commercial and funded, can leverage multicast:
  • accurate, real-time program viewing data ("ratings"), including time-shifting, and
  • personalised advertising streams
    • and "click-to-buy" or "click-for-info" on adverts, directly linked to advertisers.
    • The pizza delivery guys would love this!
This is the disruptive business model a universal broadband system can bring: a cheap multicast system for scheduled programs, accessible to micro-vendors with a simple "upsell" capability included.

What we don't know is the market price-points, but evidence from iTunes and Amazon suggest $1/item is one and under $10/month another.

We don't need to reinvent the Big Studio and near-monopoly Premium Services again on the Internet, we can do something different and for the first time allow near direct access to artists, performers and entertainers by the mass-market. 

We could enable many talented people to earn a living practicing their craft in this way, just as Amazon direct publishing is allowing a slew of writers to earn real money, $1-$2 a time. The publishers also benefit from this: authors get to hone their craft, they improve their work with market feedback and establish a following which the publisher can then leverage with the skills, resources and marketing channels at their disposal. Lower risk, higher sales, what's not to like?

This is a powerful and worthy shift in our society.

Sunday, 21 April 2013

NBN: Free to Air over Multicast - Is this the Killer App?

I'm wondering if the Free-to-Air (FTA) stations might combine to become the video RSP and drive the take-up of NBN services.
NBN Co's multi-cast capability would allow the RSP to sell multi-cast video for $5/mth.
Would NBN Co come to the party and sell a "video-only" plan for $5/mth wholesale as a loss-leader?

This allows Broadcasters to service niche markets, not just with content, but new technologies like 3-D and 4K, without sacrificing broadcast bandwidth.

Multi-cast will work on both DSL & Fibre NBN's, with higher bandwidths needed for simultaneous downloads. By trading scheduled broadcast times, with time-shifted viewing, for on-demand viewing, there are significant technical benefits for broadcasters, RSP's and network operators.


These are some of the hidden technical benefits:
  • Multicast minimises the broadcaster and NBN switch bandwidth needed, more like broadcast radio/TV transmitters than current "on-demand" video, which uses a fresh TCP/IP connection per viewer.
    • Anyone with a simple uplink can afford to broadcast a channel to unlimited numbers.
    • NBN Co reduces duplicated traffic by a factor of thousands, allowing consumers to do more with their subscription fees.
  • The TCP/IP "drop-dead" of a congested uplink is removed.
    • If the video feed is on a separate service, there is no contention with data. The NTD's are supplied with four service outlets.
      • This means families will never say "Who's killed the TV? Stop that!"
    • Multicast streams rely on "Forward Error Correction"; unlike TCP, they don't know or care if an individual receiver has lost data. Because fibre is nearly error-free, this is a very good match.
  • Simple and effective copyright and limited use enforcement, based on encryption, can be implemented without intensive per-user computations: it's thousands/millions of times cheaper for broadcasters and more energy efficient by the same margin.
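The "Forward Error Correction" point deserves a concrete sketch. A minimal scheme sends one XOR parity packet per group of data packets, letting a receiver rebuild any single lost packet without asking the sender for a retransmission. Real broadcast FEC codes (e.g. Reed-Solomon) are far more capable; this shows only the principle:

```python
# Minimal FEC sketch: each group of equal-length data packets carries one
# XOR parity packet, so any single lost packet can be rebuilt locally.
from functools import reduce

def xor_packets(packets):
    """Byte-wise XOR of a list of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def make_group(packets):
    """Sender: append one parity packet to a group of data packets."""
    return list(packets) + [xor_packets(packets)]

def recover(group, lost_index):
    """Receiver: rebuild the packet at lost_index from the survivors."""
    survivors = [p for i, p in enumerate(group) if i != lost_index]
    return xor_packets(survivors)

data = [b"pkt-one!", b"pkt-two!", b"pkt-tre!"]   # equal-length packets
group = make_group(data)
assert recover(group, 1) == b"pkt-two!"           # lost packet rebuilt
```

Because the receiver repairs losses itself, the sender never needs per-viewer state - which is exactly why this suits one-to-millions multicast and TCP's per-connection retransmission does not.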
It took until the mid-1970's for 90% of Australian residences to have a telephone, I guess 75 years. Around 25 years after the introduction of the mobile phone, Australia has ~125% take-up of the whole population.

This says two things:
  • People love to talk, the business driver O.T.C. understood so well, and
  • Australians are Early Adopters: collectively we experiment & use new things, such as leading the world in Internet usage in the mid 1990's when dial-up was king.
We've only seen a few big changes in Television in this country since B&W was introduced in 1956: Colour in 1975 and Digital (DVB-T) in the 2000's. Now, the phase-out of Analogue, freeing up VHF spectrum for the voracious appetite of mobile phones. Cable TV, rolled out past only 2.5M premises in 1994-1997, was an expensive bust here, quite unlike the Rivers of Gold it was everywhere else.

TransACT offered voice, data and video over a VDSL-FTTN: exactly the Coalition NBN's offering, but a decade earlier and an ultimate commercial failure.

The thing that really impressed me about their video offering was that it included not just the few local FTA stations but others not normally available, like the BBC World Service. It wasn't enough for me to buy it, or to sign on for FOXTEL or whatever service they sold: I don't claim to be a representative consumer.

FTA stations in the digital TV era, with multiple streams per broadcaster, are seeing a resurgence in interest and are competing effectively against Cable TV. We only have the three commercial networks because they can charge for advertising and it provides businesses a net positive benefit: it's expensive but makes business sense to advertise on TV. If the business benefit reduces, we the consumers lose at least one FTA network. The economics are that simple and harsh.

The huge number of channels on Cable TV in the USA plays a major role in fostering diverse areas:
  • Entertainers, interviewers and comedians, have a way to establish themselves and be seen. The major networks know the up-and-coming stars and shows from their small, niche offerings on Cable.
  • Surprisingly, the democratic process is informed by Cable TV through the "Public Access" system: the US Government supplies free access to everyone on Cable. Special interest groups just have to use it.
  • There's also a whole lot of niche material out there, including Televangelists, Shopping Networks and Adult channels and a bunch of plain rubbish answering the question, "Where do those rejected before the first cut of Talent Shows go?".
Universal Cable TV offers a valuable cultural, democratic and commercial medium, albeit with a lot of dross coming along for the ride. With NBN Co multi-cast, we can have that.

The problem I have with FTA digital TV is programming clashes: whilst the PVR I have can record 2 shows at once, often enough the five broadcasters schedule more than 2 programs I like up against one another, often without an 'encore' allowing me to catch-up later.

These clashes happen for a very simple reason: ratings.

Independent agencies make a nice living measuring what people watch, allowing FTA broadcasters to charge a premium for "eyeballs". They then publish "ratings", broken into fine detail for advertisers to consider in placing their "messages".

The world of Internet multi-cast TV offers a slew of benefits and opportunities for both sides of the equation: producers and consumers.
  • Consumers get a whole lot more choice without forced conflict.
    • There has to be a 'contra' for unlimited free content:
      • data collection for ratings agencies with selected upload via the network, answering "what got watched in full or part, when?".
      • automatic, timed deletion of first-release content.
      • locally injected advertising, including some "can't skip this".
        • Geographical and Demographic appropriate advertising serves both sides better.
        • That's the deal FTA broadcasters make with both consumers and producers: if you watch Ads, we'll give you interesting viewing for free.
        • While the TV might play the advert, viewers aren't forced to watch. People are inventive and adaptable: onerous conditions will be overcome with inventive solutions, or be simply rejected by viewers.
      • targeted advertising: Google for TV. As simple as "don't need to see gender-specific ads" to "compile a specific profile on me".
      • Low repetition: I tend to skip adverts I've seen before, but take time to watch well-made, interesting adverts, especially those in a series. Also, I will back up and watch Ads when something catches my eye. Advertising is part of the entertainment & information stream: it doesn't have to be forced down my throat against my will.
  • Whilst PVRs can record unlimited content, they aren't built to respect copyright.
    • There's no technological reason that Multicast IPTV can't include restrictions on viewing rights:
      • How long can the copyright content be held? A day, a week, a month or forever?
      • Is there a way to convert the FTA copyright to a sale? Like iTunes, can I buy some content and add it to my permanent library, to download and play when I like?
      • Can I purchase limited-play or permanent rights to a program I enjoy?
      • Can I upgrade to a high-bandwidth version of the content?
    • With the VCR, copyright owners found that anything broadcast may replace sales of recent releases and increase sales on the "long tail" as older releases get renewed interest.
    • Programs such as "Glee" are found to increase sales on iTunes, it's not all one way.
As a consumer, I'd like not to be caught in the crossfire of Ratings Season, forced to choose between the best shows only then, and left to cast about for something vaguely interesting to watch the rest of the time.

Currently I can only accept or reject the Free Broadcasters offer: Watch the Ads in return for interesting content. With an unrestricted PVR, I can change the deal: Watch the program later and you can skip the Ads. Nobody benefits from pretending the deal hasn't changed.

The Internet, with intelligent devices and a real-time backchannel, can offer much more appealing and fine-grained deals than a simple "take it or leave it", allowing consumers to trade-up or down to their price-point and producers to maximise their revenue. The Free Market does allow a win-win game, the power of real choice.

By 2021, the scheduled finish of the Fibre rollout, we could have 90% take-up of the NBN, if we cared to focus on targeted TV broadcast. From that base, what further services might evolve?

We know from the many surprising and successful services/devices that have evolved since the "Dot Boom & Bust" of 2000 that the Internet fundamentally changes everything, and people will continue to surprise and delight us with new ideas and offerings. New widespread technologies generate new opportunities galore.

To get to multicast IPTV requires a conscious decision by different players and their co-operation:
  • The Free to Air broadcasters need to co-operate and offer a single FTA multi-cast channel with multiple rates (SD, HD, 3-D, 4K) and some encryption for limited-use copyright material.
  • NBN Co needs to offer a video/multi-cast only entry-level service, well below AVC current rates.
    • In return for this loss-leader, NBN Co gets to minimise duplicate traffic flows and maximise the multi-cast capability of their equipment.
    • Consumers who want to simultaneously download multiple high-definition streams must upgrade access to higher rates at normal AVC rates.
  • The Ratings Agencies need to co-operate with PVR software producers on real-time data collection standards and codes of conduct.
  • PVR manufacturers and broadcasters need to co-operate on technical standards for respecting copyright and enforcing limited-use rights.
  • PVR software producers, broadcasters and Advertising Resellers, like Google, to provide targeted advertising and associated advertisement channels. PVRs can store a single copy of an Ad and insert appropriately, allowing per-market specific advertising and lower bandwidth broadcasts.
  • Paid Content Providers, like iTunes and You-tube, to co-operate with PVR software producers and broadcasters on providing permanent consumer content libraries, which allows upselling consumers with higher-quality variants of their existing content.
There will be 9M households capable of connecting to multicast IPTV in 2021. At $250-$1,000 per Intelligent PVR (call it $500 on average), that's $4.5 billion in initial sales and a $500M/yr on-going new business.

This is a multi-billion dollar industry for which Australia could serve as the global test-bed, without any Government regulation or intervention.

Will the commercial players that can make it happen, do so?

Monday, 18 June 2012

NBN: Will Apple's Next Big Thing "Break the Internet" as we know it?

Will Apple, in 2013, release its next Game Changer for Television following on from the iPod, iPhone, and iPad?
If they do, will that break the Internet as we know it when 50-250 million people try to stream a World Cup final?

Nobody can supply terabit server links, let alone afford them. To reinvent watching TV, Apple has to reinvent its distribution over the Internet.

The surprising thing is we were first on the cusp of wide-scale "Video-on-Demand" in 1993.
Can we, twenty years later, get there this time?


Walter Isaacson in his HBR piece, "The Real Leadership Lessons of Steve Jobs" says:
In looking for industries or categories ripe for disruption, Jobs always asked who was making products more complicated than they should be. In 2001 portable music players ... , leading to the iPod and the iTunes Store. Mobile phones were next. ... At the end of his career he was setting his sights on the television industry, which had made it almost impossible for people to click on a simple device to watch what they wanted when they wanted.
Even when he was dying, Jobs set his sights on disrupting more industries. He had a vision for turning textbooks into artistic creations that anyone with a Mac could fashion and craft—something that Apple announced in January 2012. He also dreamed of producing magical tools for digital photography and ways to make television simple and personal. Those, no doubt, will come as well.
This doesn't just pose a problem that can be solved by running fibre to every home, or Who can afford the Plan at home, it's much bigger:
  • On-demand, or interactive TV, delivered over the general Internet cannot be done from One Big Datacentre, it just doesn't scale.
  • Streaming TV over IP links to G3/G4 mobile devices with individual connections doesn't scale either: it chokes at the radio-link, the backhaul/distribution links or the head-end.
The simple-minded network demands will drown both the NBN and Turnbull's opportunistic pseudo-NBN.

In their "How will the Internet Scale?" whitepaper, Content Delivery Network (CDN) provider Akamai begins with:
Consider a viewing audience of 50 million simultaneous viewers around the world for an event such as a World Cup playoff game. An encoding rate of 2 Mbps is required to provide TV-like quality for the delivery of the game over IP. Thus, the bandwidth requirements for this single event are 100 Tbps. If there were more viewers or if DVD (at ~5 Mbps) or high definition (HD) (at ~10 Mbps) quality were required, then the bandwidth requirements would be even larger.
Is there any hope that such traffic levels be supported by the Internet?
And adds:
Because of the centralized CDN’s limited deployment footprint, servers are often far from end users. As such, distance-induced latency will ultimately limit throughput, meaning that overall quality will suffer. In addition, network congestion and capacity problems further impact throughput, and these problems, coupled with the greater distance between server and end user, create additional opportunities for packet loss to occur, further reducing quality. For a live stream, this will result in a poor quality stream, and for on-demand content, such as a movie download, it essentially removes the on-demand nature of the content, as the download will take longer than the time required to view the content. Ultimately, “quality” will be defined by end users using two simple criteria—does it look good, and is it on-demand/immediate?
Concluding, unsurprisingly, that their "hammer" can crack this "nut":
This could be done by deploying 20 servers (each capable of delivering 1 Gbps) in each of 5,000 locations within edge networks. Additional capacity can be added by deploying into PCs and set-top boxes. Ultimately, a distributed server deployment into thousands of locations means that Akamai can achieve the 100 Tbps goal, whereas the centralized model, with dozens of locations, cannot.
Akamai notes that Verisign acquired Kontiki Peer-to-Peer (P2P) software in 2006 to address this problem. If the Internet distribution channels were 'flat', P2P might work, but they are hierarchical and asymmetrical: in reality they are small networks in ISP server-rooms with long point-to-point links back to the premises. Akamai's view is that P2P networks require a CDN-style Control Layer to work well enough.

In their "State of the Internet" [use archives] report for Q1, 2011, Akamai cites these speeds:
... research has shown that the term broadband has varying definitions across the globe – Canadian regulators are targeting 5 Mbps download speeds, whereas the European Commission believes citizens need download rates of 30 Mbps, while peak speeds of at least 12 Mbps are the goal of Australia's National Broadband Network. As such, we believe that redefining the definition of broadband within the report to 4 Mbps would be too United States-centric, and we will not be doing so at this time.
As the quantity of HD-quality media increases over time, and the consumption of that media increases, end users are likely to require ever-increasing amounts of bandwidth. A connection speed of 2 Mbps is arguably sufficient for standard-definition TV-quality content, and 5 Mbps for standard-definition DVD-quality video content, while Blu-ray (1080p) video content has a maximum video bit rate of 40 Mbps, according to the Blu-ray FAQ.
There are multiple challenges inherent for wide-scale Television delivery over the Internet:
  • Will the notional customer line-access rate even support the streaming rate?
  • Can the customer achieve sustained sufficient download rates from their ISP for either streaming or load-and-play use?
    • Will the service work when they want it - Busy Hour?
  • Multiple technical factors influence the sustained download rates:
    • Links need to be characterised by a triplet {speed, latency, error-rate} not 'speed'.
    • local loop congestion
    • ISP backhaul congestion
    • backbone capacity
    • End-End latency from player to head-end
    • Link Quality and total packet loss
  • Can the backbone, backhaul and distribution networks support full Busy Hour demand?
    • Telcos already know that "surprises" like the Japanese Earthquake/Tsunami, which are not unlike a co-ordinated Distributed Denial of Service attack, can bring an under-dimensioned network down in minutes...
    • With hundreds of millions of native Video devices spread through the Internet, these "surprise" events will trigger storms, the like of which we haven't seen before.
  • Can ISP networks and servers sustain full Busy Hour demand?
  • Can ISP's and the various lower-level networks support multiple topologies and technical solutions?

CISCO, in their "Visual Networking Index 2011-2016" (VNI) report, have a more nuanced and detailed model with exponential growth (Compound Annual Growth Rate, or CAGR). They also flag distribution of video as a major growth challenge for ISP's and backbone providers.

CISCO writes these headlines in its Executive Summary:
Global IP traffic has increased eightfold over the past 5 years, and will increase nearly fourfold over the next 5 years. Overall, IP traffic will grow at a compound annual growth rate (CAGR) of 29 percent from 2011 to 2016.
In 2016, the gigabyte equivalent of all movies ever made will cross the global Internet every 3 minutes.
The number of devices connected to IP networks will be nearly three times as high as the global population in 2016. There will be nearly three networked devices per capita in 2016, up from one networked device per capita in 2011. Driven in part by the increase in devices and the capabilities of those devices, IP traffic per capita will reach 15 gigabytes per capita in 2016, up from 4 gigabytes per capita in 2011.
A growing amount of Internet traffic is originating with non-PC devices. In 2011, only 6 percent of consumer Internet traffic originated with non-PC devices, but by 2016 the non-PC share of consumer Internet traffic will grow to 19 percent. PC-originated traffic will grow at a CAGR of 26 percent, while TVs, tablets, smartphones, and machine-to-machine (M2M) modules will have traffic growth rates of 77 percent, 129 percent, 119 percent, and 86 percent, respectively.
Busy-hour traffic is growing more rapidly than average traffic. Busy-hour traffic will increase nearly fivefold by 2016, while average traffic will increase nearly fourfold. Busy-hour Internet traffic will reach 720 Tbps in 2016, the equivalent of 600 million people streaming a high-definition video continuously. 
Global Internet Video Highlights
It would take over 6 million years to watch the amount of video that will cross global IP networks each month in 2016. Every second, 1.2 million minutes of video content will cross the network in 2016.
Globally, Internet video traffic will be 54 percent of all consumer Internet traffic in 2016, up from 51 percent in 2011. This does not include the amount of video exchanged through P2P file sharing. The sum of all forms of video (TV, video on demand [VoD], Internet, and P2P) will continue to be approximately 86 percent of global consumer traffic by 2016. [emphasis added]
Internet video to TV doubled in 2011. Internet video to TV will continue to grow at a rapid pace, increasing sixfold by 2016. Internet video to TV will be 11 percent of consumer Internet video traffic in 2016, up from 8 percent in 2011.
Video-on-demand traffic will triple by 2016. The amount of VoD traffic in 2016 will be equivalent to 4 billion DVDs per month.
High-definition video-on-demand surpassed standard-definition VoD by the end of 2011. By 2016, high-definition Internet video will comprise 79 percent of VoD.
In their modelling, CISCO use considerably lower video rates than Akamai (~1Mbps), with an expectation of a 7%/year reduction in required bandwidth (halving bandwidth every 10 years). But I didn't notice a concomitant demand for increased definition and frame-rate - which will drive video bandwidth demand upwards much faster than encoding improvements drive it down.

Perhaps we'll stay at around 4 Mbps...
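That 7%/year figure compounds roughly as CISCO describes; a quick check:

```python
# 7%/year encoding-efficiency gain compounds to roughly a halving of
# required bandwidth every ten years.
rate = 4.0                        # Mbps today
for year in range(10):
    rate *= 1 - 0.07              # 7% less bandwidth needed each year

print(round(rate, 2))             # ~1.94 Mbps after a decade
print(round(0.93 ** 10, 3))       # 0.484 - just under half
```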

Neither CISCO nor Akamai model for a "Disruptive Event", like Apple rolling-out a Video iPod...

History shows previous attempts at wide-scale "Video on Demand" have foundered.
In 1993 Oracle, as documented by Fortune Magazine, tried to build a centralised video service (4Mbps) based around the nCube massively parallel processor. An SGI system was estimated at $2,000/user which was 10-times cheaper than an IBM mainframe. A longer, more financially focussed history corroborates the story.

What isn't said in the stories is that the processing model had the remote-control commanding the server, so the database needed to pause, rewind, and slow/fast-forward the streams of every TV. There was no local buffering device to reduce the server problem to "mere streaming". Probably because consumer hard-disks were ~100MB (200 secs @ 4Mbps) at the time and probably not able to stream at full rate. A local server with 4GB storage would've been an uneconomic $5-10,000.
 He (Larry Ellison, CEO) says the nCube2 computer, made up of as many as 8,192 microprocessors, will be able to deliver video on demand to 30,000 users simultaneously by early 1994, at a capital cost of $600 per viewer. The next-generation nCube3, due in early 1995, will pack 65,000 microprocessors into a box the size of a walk-in closet and will handle 150,000 concurrent users at $300 apiece. 
Why did these attempts by large, highly-motivated, well-funded, technically-savvy companies with a track-record of success fail with very large pots of gold waiting for the first to crack the problem?

I surmise it was the aggregate head-end bandwidth demands: 30,000 users at 4Mbps is 120Gbps and the per-premise cost of the network installation.

Even with current technologies, building a reliable, replicated head-end with that capacity is a stretch, albeit not that hard with 10Gbps ethernet now available. Using the then-current, and well known, 100-channel cable TV systems, distributing via coax or fibre to 100,000 premises was possible. But as we know from the NBN roll-out, "premises passed" is not nearly the same as "premises connected". Consumers take time to enrol in new services, as is well explained by Rogers' "Diffusion of Innovations" theory.

The business model would've assumed an over-subscription rate, i.e. at Busy Hour only a fraction of subscribers would be accessing Video-on-Demand content. Thus a single central facility could've supported a town of 1 million people, with one-in-four houses connected [75,000] and a Busy-Hour viewing rate of 40%.
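A sketch of that dimensioning, using the figures quoted above (the premises count is my assumption of roughly 3.3 people per household):

```python
# Over-subscription model: town of 1M people, 1-in-4 premises connected,
# 40% of subscribers watching at Busy Hour.
population = 1_000_000
premises = 300_000                   # assumed households (~3.3 people each)
connected = premises // 4            # 75,000 subscribers
busy_hour_rate = 0.40

concurrent = int(connected * busy_hour_rate)
print(concurrent)                    # 30,000 simultaneous streams
print(concurrent * 4 / 1_000)        # 120.0 Gbps head-end demand at 4 Mbps
```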

If Apple trots out a "Game-Changer for Television", with on-demand delivery over the Internet, the current growth projections of Cisco and Akamai will be wildly pessimistic.

New networks like the NBN will be radically under-dimensioned by 2015 - or at least the ISPs, their interconnects and backhauls will be...

The GPON fabric of the NBN can handle 2.488Gbps aggregate downstream per fibre tree, and 5-10Mbps per household is well within the access speed of even the slowest offered service, 12/1Mbps.
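As a per-fibre sanity check, assuming GPON's standard 2.488Gbps downstream rate (ITU-T G.984) and a 1:32 split per fibre tree - both figures as I understand the NBN design, not confirmed from NBN Co documents:

```python
# Streams per GPON fibre tree: even worst-case all-at-once unicast viewing
# uses a fraction of the tree's downstream capacity.
gpon_down_mbps = 2488        # ITU-T G.984 downstream rate
premises_per_tree = 32       # assumed 1:32 split

for stream_mbps in (5, 10):                        # SD and HD unicast streams
    worst_case = premises_per_tree * stream_mbps   # every premises streaming
    headroom = gpon_down_mbps // stream_mbps       # streams the tree could carry
    print(stream_mbps, worst_case, headroom)
```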

But how are the VLANs organised on the NBN Layer-2 delivery system? VLAN IDs are 12-bit, limited to 4,096.
I haven't read the detail of how many distinct services can be streamed simultaneously per fibre and per Fibre Distribution Area. A 50% household take-up means

When I've talked to Network Engineers about the problem of streaming video over the Internet, they've agreed with my initial reaction:
  • Dimensioning the head-end or server-room of any sizeable network for a central distribution model is expensive and technically challenging,
  • Designing a complete network for live-streaming/download to every end-point of 4-8Mbps sustained (in Busy Hour) is very expensive.
  • Isn't this exactly the problem that multicast was designed for?
The NBN's Layer-2 VLAN-in-VLAN solution should be trivially capable of dedicating one VLAN, with its 4,096 sub-'channels', to video multicast, able to be split out by the fibre Network Termination Unit (NTU) - not unlike the system TransACT built in the ACT.

Users' behaviour, i.e. their use of video services, can be controlled via pricing:
  • The equivalent of "Free-to-Air" channels can be multicast and included in the cost of all packages, and
  • Video-on-Demand can be priced at normal per GB pricing, plus the Service Provider subscription fee.
As now with Free-to-Air, viewers can program PVRs to timeshift programs very affordably.

In answer to the implied Akamai question at the start:
  • What server and network resources/bandwidth do you need to stream a live event (in SD, HD and 3-D) to anyone and everyone that wants to watch it?
With multicast, under 20Mbps, because you let the network multiply the traffic at the last possible point.
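That claim can be made concrete with a toy two-level distribution tree; the exchange and viewer counts below are illustrative assumptions, not NBN figures:

```python
# Why multicast moves the load off the head-end: the network replicates
# the stream at the last possible point, so the head-end sends one copy
# regardless of audience size.
STREAM_MBPS = 20            # one high-quality channel, as above
EXCHANGES = 100             # hypothetical aggregation points
VIEWERS_PER_EXCHANGE = 300  # hypothetical viewers behind each exchange

def unicast_headend_mbps():
    # unicast: one copy per viewer leaves the head-end
    return STREAM_MBPS * EXCHANGES * VIEWERS_PER_EXCHANGE

def multicast_headend_mbps():
    # multicast: one copy total; each exchange replicates it
    # onto its own downstream ports
    return STREAM_MBPS

print(unicast_headend_mbps())   # 600000 Mbps (600 Gbps) from the server room
print(multicast_headend_mbps()) # 20 Mbps, whatever the audience size
```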

Otherwise, it sure looks like a Data Tsunami that will drown even the NBN.

Sunday, 3 May 2009

Telco Engineering vs Network Engineering

For many years I had the uneasy feeling that the Telco Engineers I worked with, and for, over most of a decade at O.T.C. did not actually understand Computer Data Networks.

Here I attempt to formalise and explain that thought.
The implications/ramifications won't be examined in this piece.


History

Telephone/Telco Engineering dates back to between ~1880, with the formation of the first phone companies, and 1915, with the first US transcontinental (long-distance) calls. Dial phones were introduced in the US circa 1919, but automatic exchanges had been invented well before. The profession of Telco Engineering has around 125 years of tradition and practice.

As a profession, they have been very good at doing what they do - creating reliable point-to-point voice communication. They have extended into high-availability point-to-point digital services.

Modern Computer Data Network Engineering dates from around 1982 with cheap mini-computers and the IEEE 802.3 standard for Ethernet. Cheap PCs, affordable interface cards and UTP (10Base-T), standardised in 1990, set the stage for current LANs - desktop and server. This gives the modern discipline around a 25-year history.

The Universal Network Glue, IP (Internet Protocol), dates from 1969 with the interconnection of the first two ARPANET hosts. Early Telco Data Networks go back to Telegraph (Morse code) and Telex, and ended with X.25.

For completeness: the World Wide Web, created in 1989 by Tim Berners-Lee and Robert Cailliau at CERN, and now popularly referred to as 'The Internet', leveraged PCs, Ethernet and TCP/IP.

1996, the birth of the modern widespread Internet, was marked by Microsoft abandoning its proprietary MSN network & protocols and adopting "Internet Everywhere".

Differences in Networks and Approach.

Telco networks started with patent wars, bleeding-edge technology and on-going & increasing requirements for large capital investments. That 'sunk cost' became huge as phone access was rolled out almost universally, at least in the 'First' and 'Second' world, providing a considerable barrier-to-entry for new players. After some decades, incumbents could easily kill new competitors by under-pricing them - they had paid for their networks and, with great Free Cash Flow, could upgrade & extend their cable plant solely from operating revenues.

To compete in the Telco world required huge investments in cable plant and switching equipment, and extensive, preferably full, network coverage. "Metcalfe's Law" states the value of a network increases with the square of the number of connections. A provider with slightly better coverage in an area, all else being equal, quickly gained an economic advantage, achieving better cash flow & profits through either higher charges or more subscribers. This could pay for faster expansion of the network, increasing their advantage: a virtuous circle.

Subscribers could not, even if they wanted, install & run their own cable plant & subscriber equipment. Initially, it was too expensive and patent-protected, then precluded by technical compliance requirements, then by legislation and regulation.

Telco operating principles became:
  • effective monopolies per region
  • hub-and-spoke design
  • extensive overbuild to cater for projected demand
  • high-availability, high-cost central equipment and interconnects
  • simple subscriber equipment and complex exchanges and transmission systems
  • 'Premium Pricing' model ("what the market will bear", vs "cost plus")
Telco networks are a classic Cost Accounting study: almost all costs are Fixed and Indirect, often dominated by Financing costs. There are almost no Direct or Variable costs.

E.g. the only marginal cost of any phone call is the cost of electricity: ~1 watt per phone, around 20 milli-cents per hour. Capturing & processing billing data is 100-1000 times more expensive.
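A quick check of that figure, assuming a retail electricity price of about 20c/kWh (my assumption, roughly the tariff of the day):

```python
# The marginal electricity cost of keeping one phone line powered.
watts = 1.0                 # per-phone power draw
cents_per_kwh = 20.0        # assumed retail electricity price

cents_per_hour = (watts / 1000.0) * cents_per_kwh
print(round(cents_per_hour, 4))   # 0.02 cents/hour, i.e. 20 milli-cents
```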

Telephony and Data Networks differ in almost all details, "Let me count the ways":



Factor                  | Telco                                      | Data
------------------------|--------------------------------------------|-------------------------------------------
connection              | Circuit                                    | Packets
speed                   | fixed                                      | variable
noise                   | ignored                                    | error-correction
echoes                  | cancellation                               | n/a
multiplexing            | external systems                           | inherent
Model                   | Central switching                          | Distributed switching
Topology                | Hub-and-spoke                              | Buses, self-healing loops, distributed equipment
High-Availability       | component redundancy                       | whole-switch duplication
congestion              | no circuit, call fails                     | slower transfers
lost data               | noise & dropouts                           | retransmit
switch design           | non-blocking, continuous connection        | queues, dropped packets, retransmit
variable delay, jitter  | highly sensitive                           | tolerant, retransmit
multiple connections    | more links                                 | increase link speed
Encryption              | External, expensive                        | Embedded, extensible
Intelligibility & 'QoS' | good, guaranteed                           | variable, no general QoS
Local-loop scalability  | Rebuild, reinstall                         | In-place link upgrade
Upgrade                 | Long-range forecasting, initial overbuild  | In-place upgrade, incorporating technology advances
Equipment source        | Specialised, expensive                     | Generic, commodity
Financing               | Large CapEx hidden in monthly rental       | Prepaid install, customer-owned
Billing                 | Post-paid, unlimited credit, itemised bills | Prepaid with limits
Capacity                | megabit range                              | HD-video capable, tens of gigabits
Multicast               | Single-cast only                           | Multicast capable

What must be made clear:
there are some services that IP Data Networks do not currently deliver as well as the Telco networks: those requiring low latency, low jitter and low noise, i.e. a guaranteed (high) Quality of Service. Even the traditional consumers of these services, radio and TV, are changing their work practices and moving to in-house IP networks or general Internet delivery.

Until 1999/2000, overseas telephony dominated the international trunks. Since then, direct Internet traffic has kept growing exponentially (what doubling period?) and now swamps all other service demands.

As an example of the capacity differences: the ~10M landline services (at 32kbps) and 21M mobile services (at 9.6kbps) represent a maximum of ~500Gbps demand - possible in a single, albeit large, router. The usual demand is around 2-5% of the maximum (10-25Gbps) - now well within the capability of low-cost routers.
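Recomputing those figures from the service counts quoted above:

```python
# Maximum aggregate voice demand, from the figures in the text.
landline_gbps = 10e6 * 32e3 / 1e9     # 10M landlines @ 32 kbps = 320 Gbps
mobile_gbps = 21e6 * 9.6e3 / 1e9      # 21M mobiles @ 9.6 kbps ~ 202 Gbps
maximum = landline_gbps + mobile_gbps

print(round(maximum))                 # ~522 Gbps: the "~500 Gbps" ceiling
print(round(maximum * 0.02), round(maximum * 0.05))  # usual 2-5% demand
```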

Telco Engineering applied to Public Data Networks.

Large modern corporate Data Networks are "Pure Internet" networks. The Department of Defence network, for example, covers most parts of Australia and extends overseas. It provides for 100,000+ desktops, a telephone network larger than many carriers', audio & video broadcast, and secure services. It competes in size, service range and complexity with normal "common carrier" (Telco) networks.

If 'traditional' Telco Engineering approaches were more cost-effective or provided better availability & reliability, then they would be in use. The usual argument for not delivering "Pure Internet" (commodity links/equipment, symmetrical upload/download) to households is population density. Defence faces the same distribution problems across its many, large campuses - and still runs "Pure Internet". When cabling/trenching costs dominate, it still makes sense to run small copper or optical-fibre cables with switching systems distributed through the network.

The costs of running individual copper or optical fibre from a central exchange to households increase dramatically over a commodity Ethernet/Internet solution:
  • total physical copper or fibre required is 10-100 fold more.
  • large cables (eg 200-pair) are expensive to buy, install, join and repair/maintain
  • many joints, each a failure point, are required to each customer premises, versus a single clear run to a local access point
  • duct sizes near the centre get very large, compounding the costs, complexity and maintenance/upgrade problems
  • with copper, cross-talk & interference problems compound with increasing circuits
  • generic, commodity equipment cannot be used at either the Central Office or the subscriber end
  • link speeds are fixed unless a major equipment upgrade is performed.
The TransACT and HFC cable TV networks utilised this 'network-embedded switching equipment' approach, resulting in per-house-passed costs of under $2,000 vs the $5,500 of the proposed NBN.

Perhaps the most convincing argument is what the Telcos now use for their backbone networks: Pure Internet. Many of their new service offerings are managed services derived from this internal IP network.

Or ask what companies are best positioned to offer "Triple Play" (TV, Data/Internet, Phone).
People with high-bandwidth backbones and upgradeable local-loops.

Saturday, 2 May 2009

NBN - Powerplay or 'for real'?

During the week I had a fault on my phone line and got to talk to the Telstra tech afterwards.

His view was naturally Telstra-centric, but contained wisdom & insight.
It would make sense to have just one local access loop with just one maintenance organisation, to which all Telcos have equal access. And it couldn't be owned by any one or two commercial players - that distorts the market.


He described an asymmetry: new Telco entrants can build their own infrastructure and deny access to all others, but Telstra is obliged to provide access to 'their' copper network to everyone. He cited the sad case of a pensioner needing a phone and waiting quite some time (and paying a big fee) while Telstra dug in a cable that the developer should've installed... All the time there was physical cable to the premises owned by another Telco, but inaccessible.

The 1994/5 HFC cable TV rollout by Optus & Telstra (80-90% duplication) shows the insanity of Telecomms commercial arrangements and regulation in Australia. Behaviour that you wouldn't tolerate in school children.

That it was never a commercial decision is shown by the subsequent huge write-offs by both players. If they'd been ordered to "play nice" and construct a single infrastructure with bilateral access, the face of Australian telecomms would be fundamentally different today. Cable TV would be a real force - possibly covering 80% of houses and making real profits.

The Telco regulators have allowed the same pattern to be repeated with mobile phone operators. In the USA & Europe, operators allow competitors access to their networks and make good income from it. It's called "roaming" and is supported by all the standards, hardware and handsets...

Everyone benefits, it's a positive-sum game. It doesn't stop operators extending their networks when they know they'll make more money by building their own infrastructure. [They have hard data on their customer call patterns.] It also means new entrants are 'born global' and have time to build-out their network and manage their cash-flow and CapEx. It encourages and enables real competition, which again benefits everyone.

Only it doesn't happen in Australia. There are 4 or more independent networks, everywhere, and no roaming agreements. It's not about commerce or service, but sheer bloody mindedness. All these Telco's have roaming arrangements with overseas operators. They have both the technical and commercial knowledge to do local roaming. [There is some for mobile wireless internet.] Everybody loses - it's a negative-sum game.

Regulators should not allow "Coverage" to be a marketing differentiator.
Failure to cross-connect/access should lead to heavy fines and eventual revocation of a Telco license. Telco licenses, like banking licenses, are granted so that commercial entities primarily provide a public service. In return, a limited monopoly is granted.

It's all about providing public services, not about the profits of licensees.

Back to the NBN.

What's the long-term impact on Telstra if Kevin Rudd & Co forcefully construct "The One True Local Access Network"? If I were Rudd and was forced down this path, I'd deny Telstra access for forcing the issue in the first place.

Everybody loses, comms prices are high, services are limited and national economic competitiveness declines.

There is a 'critical point' of market share at which Telstra cannot continue to maintain and operate a parallel, full-coverage network - even with more limited services. As their market share declines, their profits reduce, putting pressure on maintenance, operations and customer service. This leads to more limited offerings, poorer service and unhappy customers, leading to yet smaller market share, decreasing profits and an inevitable "death spiral" that can only be broken by massive capital injection or embracing the NBN.

Telstra embracing the enemy seems very unlikely. Over several decades, the Telstra Board and Management have demonstrated they will not work with others, sometimes even after ACCC action and court directions. They have proven to be obstinate and recalcitrant.

There is also the competitive services problem: if the NBN provides desirable services that Telstra cannot deliver, then an increasing number of people leave Telstra, leading to the 'death spiral' by another route. Telstra can buy market share by dropping prices, but at the price of longevity - they destroy their profits and the ability to fund growth & new services.

The Federal Government and its regulators must know these things.

Are they taking Telstra on head-on with the intention of putting it out of business? (It can only be a 'take no prisoners' struggle to be won by the deepest pockets or a change of political direction.) The last thing Australia needs is the NBN being privatised before it is the dominant player. Without unlimited financial backing, Telstra would win, even if mortally wounded itself.

Or is it just a Power Play to force the Telstra Board and Management to wise up and 'play nice'?

We'll only know in hindsight.
Australia shouldn't have to bear either the massive cost of duplicating the local access loop or of Telstra failing.


Update 1. Sunday May 3, 2009. Senator Kate Lundy points to this piece by Richard Alston (Minister for Communications etc 1996-2003). Alston notes that Telstra aggressively duplicated the Optus cable TV roll-out. He doesn't mention that, as the responsible Minister, he could've acted to prevent or change that.


Update 2. Tuesday May 12, 2009. The Australian reports :
The federal Government will offer Telstra the chance to buy up to 49 per cent of its national broadband network, if it agrees to voluntarily hive off its wholesale arm.
Telstra undertakes "structural separation" into Wholesale & Retail arms and, in return for its current fibre network, gets 20% of NBN Co (the National Broadband company).

With both a new CEO and a new Chair (now Donald McGauchie is gone), Telstra may be able to resile from its "never, ever separate" position.

That values Telstra's 'Fibre Network' at $8.6Bn and allows them another $12.5Bn investment.
After the original $4.7Bn for NBN 1.0, the Govt. needs an additional $12.4Bn - half public, half private.

Guesstimates for NBN components:
  • Domestic subs, 9M @ 100Mbps ($1,500/house): $13.5Bn
  • Exchanges, 1,000 @ ($500/sub + routers/uplink): $5-8Bn
  • Backhaul/Interstate upgrades (10+Tbps scale): $5Bn
  • International cables - 10Tbps (1Mbps/household): $5-$10Bn
  • Peering & ISP interconnects (20-100): $2Bn
  • Rural/Remote radio/Satellite, 1M @ 12Mbps: $5-10Bn
  • Content provider feed network: $2Bn
  • High bandwidth subs: Business, Schools, Hospitals, Govt: $5Bn
  • Network Operations, Maintenance & Test spares/depots/equip, Training: $5Bn
  • Billing, Call record & traffic analysis, Line management & other IT systems: $2Bn
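Totalling the low and high ends of those guesstimates (all figures in $Bn, as listed above):

```python
# Low/high totals of the NBN component guesstimates, in $Bn.
components = [
    (13.5, 13.5),   # domestic subscribers
    (5, 8),         # exchanges
    (5, 5),         # backhaul/interstate upgrades
    (5, 10),        # international cables
    (2, 2),         # peering & ISP interconnects
    (5, 10),        # rural/remote radio & satellite
    (2, 2),         # content provider feed network
    (5, 5),         # high-bandwidth subscribers
    (5, 5),         # operations, maintenance & training
    (2, 2),         # billing & other IT systems
]
low = sum(lo for lo, hi in components)
high = sum(hi for lo, hi in components)
print(low, high)    # 49.5 to 62.5 $Bn, above the Government's $43Bn figure
```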

Update 3. Friday May 15, 2009. Stephen Bartholomeusz in "Business Spectator" reports:
Lindsay Tanner has confirmed what was suspected. In arriving at its estimate that the cost of the revised national fibre-to-the-premises broadband network would cost $43 billion, the Rudd government essentially dreamed up a big number and then added to it. ....

The evaluation that should have preceded the commitment will now occur, with the government commissioning an "implementation study", which one assumes will consider the complex economic issues involved and come to a conclusion whether the NBN is a viable commercial proposition.
If Telstra agrees to "structural separation" & buys in, the economics change.
If not, there's enough money in the bucket to do this thing "right".