Monday, February 12, 2007

These (h)IP(s) don't lie

Back to the usual week: everyone seems to have a comment about problems with IPTV. "This internet won't scale," says Google's TV chief, and "IPTV/VoD: Cutting off the air supply" was published on El Reg.

I suppose it is true that there are issues around how ISPs have built their networks: contention ratios are changing dramatically in order to accommodate the ever-increasing bandwidth utilization by consumers.

But how can these perceived problems be mitigated? My humble views:

- Don't broadcast over your network: that is an easy one to work out. It might sound "cool" and "cutting edge", but the pre-allocated/reserved bandwidth will bring your company to bankruptcy - just look at how cable companies have struggled to recover their investment (whilst satellite companies seem to do 'ok-ish'). A more scalable model is to use a cheaper medium to stream the broadcast products - airwaves, a combination with satellite, or 3G for convergence.
- Cache, cache and cache; these days storage is cheap, so try the following:

  1. Use HTTP caching and/or peer-to-peer caching for inter-ISP content (i.e. cache your YouTube, MySpace, Joost traffic and all the rest) with cheap commodity software. That will not only reduce the long-term cost of transit data, it will also improve your customers' experience (remember that as internet video becomes ubiquitous, more and more people will "hit" the most popular videos at the same time). There is an overall benefit with this mechanism, as ALL internet content delivered to your consumers gains from having this layer built (be careful with the technicalities: transparent caching is the best for user experience, but in some circumstances it can cause glitches). A rough sketch of the idea appears just after this list.
  2. For internal/local content, use a multi-tiered storage architecture. First, in your data centre/head end, create capabilities for near-real-time access and backup - cheap IDE disks and slower media give you massive storage capacity at low cost, and a SAN will always help; it is already part of the ISP's infrastructure for databases, so expanding its use also allows for consolidation of operations. Additionally, cache at the points of aggregation (on the "edge", closer to the consumer, so you also save transit bandwidth across aggregation links). This cache should be smart: to help in real time, it needs visibility of the viewing patterns as they happen (i.e. in order to cache a piece of content requested by a viewer for future use, the cache requires information about past behaviour related to similar pieces of content, for example other episodes of the same series). It is likely that a lot of this information can be carried as metadata in the stream, dynamically generated by the content server located in the data centre; as content gradually ages and viewing patterns change, the metadata tells the edge cache what to do with the content. This looks good for a patent: using a bit of Business Intelligence, the content server can decide, perhaps via the content management system, which metadata tags to generate at the beginning of the transmission. A second sketch, showing a metadata-driven edge cache, also appears after the list.
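
To make the first point a bit more concrete, here is a minimal sketch (in Python, with made-up URLs and a made-up capacity figure) of the basic logic an ISP-side HTTP cache applies: serve popular objects from local storage and only use the transit link on a miss. It is an illustration of the principle, not a stand-in for real cache software.

```python
# Minimal sketch of an ISP-side HTTP cache for popular video objects.
# URLs, object sizes and the capacity figure are illustrative only.
from collections import OrderedDict

class VideoCache:
    """Least-recently-used cache keyed by URL, bounded by total bytes."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()  # url -> size in bytes

    def fetch(self, url, size_bytes):
        if url in self.store:
            self.store.move_to_end(url)  # refresh recency on a hit
            return "HIT (served locally, no transit bandwidth used)"
        # Miss: the object is pulled over the transit link, then a copy is kept.
        while self.used + size_bytes > self.capacity and self.store:
            _, evicted_size = self.store.popitem(last=False)  # drop least recent
            self.used -= evicted_size
        self.store[url] = size_bytes
        self.used += size_bytes
        return "MISS (fetched over transit, now cached for the next viewer)"

if __name__ == "__main__":
    cache = VideoCache(capacity_bytes=2_000_000_000)  # ~2 GB, example figure
    clip = "http://example-video-site/clip-of-the-week.flv"  # hypothetical URL
    print(cache.fetch(clip, 50_000_000))  # first viewer pays the transit cost
    print(cache.fetch(clip, 50_000_000))  # every later viewer is a local hit
```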
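
And for the second point, a small sketch of how metadata travelling with a stream could drive the edge cache's decisions. The field names (series_id, age_days, keep_days, prefetch_next) and the actions are invented for illustration; in a real deployment they would be defined by the content management system in the head end.

```python
# Sketch of an edge cache acting on metadata generated by the content server.
# All field names and thresholds here are assumptions made for the example.

def edge_cache_decision(metadata, series_watched_here):
    """Return the actions an edge cache might take for one stream's metadata."""
    actions = []

    # Expire content once it is older than the head end says it is worth keeping.
    if metadata["age_days"] > metadata["keep_days"]:
        actions.append("evict")
        return actions

    actions.append("keep")

    # If viewers behind this edge have been watching the same series,
    # pre-fetch the next episode during off-peak hours.
    if metadata["series_id"] in series_watched_here and metadata.get("prefetch_next"):
        actions.append("prefetch episode %d off-peak" % (metadata["episode"] + 1))

    return actions

if __name__ == "__main__":
    stream_metadata = {
        "series_id": "drama-series-42",  # hypothetical identifiers and values
        "episode": 3,
        "age_days": 2,
        "keep_days": 30,
        "prefetch_next": True,
    }
    print(edge_cache_decision(stream_metadata, series_watched_here={"drama-series-42"}))
    # -> ['keep', 'prefetch episode 4 off-peak']
```
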
Prices will definitely rise if ISPs turn a blind eye to how the overall network infrastructure needs to cater for these new requirements. The centralised deployment model might work if you own the infrastructure, but in general, transmitting as little data as possible leaves you extra bandwidth to transmit other data.

Hope to read more views!