Monday, August 20, 2007

Of hares, tortoises and convergence

It was an interesting article by Alex Cameron, "IPTV/VoD The Hare & The Tortoise", though one of his statements is a bit parochial - "The average consumer is way ahead of any industry professional out there today." I agree that there is a significant group of consumers who are way ahead of the industry professionals - basically the technophiles - but the fact that most people have an iPod doesn't mean that the average consumer could have developed it, nor that they are skilled at doing much more than playing their songs in sequence (some "average" users I've found can't do more than upload songs and play them in order).

In the convergent world, the questions are out, and have been out for quite a few years now. Is the long tail going to be 'the next big thing'? It depends on who the "Apple" is that designs it and gets it to the masses easily.
Is user-generated content the next big thing? It depends on the YouTube that makes it appealing - and when Enterprise 2.0 finds out that a lot of its personnel spend their time downloading free parody clips from the net and blocks the content to save on operational costs, we'll see how much of the YouTube model really works outside the boring office hours!
Is the problem the middleman? I doubt it ... one middleman in audio yesterday (Sony/EMI/etc.) was replaced by another one (iTunes & Apple). Why? Because the "middleman" has the intelligence to get the content to the consumer (and not only mainstream content - think about the long tail and targeted advertising).
Is the problem the network? Not necessarily - it depends on who is delivering the technology. Satellite for broadcast combined with wireline/wireless for VoD works OK, but so does fibre as bandwidth increases significantly over the ADSL/ADSL2+/CMTS world ... both solutions would work in the long term. Are they economically viable? Well ... let's go back and see how much had to be written off by the NTL enterprise ... digging is not cheap, and digging doesn't give you instant access to a significant share of the population to make the ROI visible in a reasonable time ... with a satellite-wireless/wireline combo you can appeal to a mass market faster (with the obvious initial cost) using simple packages, from 'entry' SD to 'full' HD/VoD/Internet ... the obvious disadvantage? Well, satellites are single points of failure, so you need to launch more than one and it will cost you more money.

So are all the questions answered? Not really - the market can easily shift, and competing (and economically viable) options can co-exist.

For the moral of this story, read "The Doe and the Lion".

Friday, March 23, 2007

Could IPTV kill my baked potatoes?

Based on a recent set of studies, your baked potatoes will be history by 2015 and will be replaced by baked yams.

On a serious note, how come people still take seriously the studies/analysis/research that use the word 'kill' for one technology against another? It is true that some emerging technologies of the past (Betamax/VHS) had head-to-head competition, but others (ADSL/cable/satellite) have been coexisting in 'harmony' for a while - with no sign of one 'killing' the other any time soon - especially with the big mass of consumers fluctuating from one service to the other.

In the same way that it has been said again and again that e-mail would kill snail mail - but not yet, as snail mail has bounced back (1, 2) - it is difficult to see IPTV ("TV over IP") killing TiVO ("end-user device"). It is true that TiVO is not only user equipment - there is infrastructure and services behind it - but IPTV is more of an end-to-end framework (OK, I am oversimplifying), with many flavours currently competing (and probably coexisting in the future) - from streaming to peer-to-peer.

Thursday, March 22, 2007

YAPD (yet another P2P diagram)

OK, here are a couple of diagrams that try to give a simple description of the magic of peer-to-peer. In the first image, the ISP ignores the p2p traffic; in the second, it implements a layer of caching near the edge/access network (CMTS/UMTS/ADSL/FTTH). A toy code sketch follows each set of steps:

(1) - end-user accesses the metadata repository to find content of interest;
(2) - then completes (if any) the access lifecycle (purchase, DRM key generation, etc.);
(3) - the peer-to-peer protocol then kicks in, trying to discover peers and find the appropriate "segments" amongst near ones (moving across to peers further and further away);
(4) - peers start delivering segments to the requestor (until enough segments have been collected to create a buffer and 'play' the content);
(5) - if peers don't have the segments, it is necessary to access the peer-2-peer data centre.
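
Here is a minimal sketch of those five steps in Python, with in-memory dicts standing in for the metadata repository, the peers and the data centre (all names and figures are invented for illustration - this is not any real p2p protocol):

    BUFFER_SEGMENTS = 2          # segments to collect before we can 'play'

    metadata_repo = {"movie-42": ["seg-0", "seg-1", "seg-2", "seg-3"]}

    # Peers listed nearest-first; each holds only some of the segments.
    peers = [
        {"name": "peer-near", "segments": {"seg-0": b"aa", "seg-2": b"cc"}},
        {"name": "peer-far",  "segments": {"seg-1": b"bb"}},
    ]

    data_centre = {"seg-0": b"aa", "seg-1": b"bb",
                   "seg-2": b"cc", "seg-3": b"dd"}

    def fetch(content_id):
        segment_ids = metadata_repo[content_id]      # (1) metadata lookup
        # (2) the access lifecycle (purchase, DRM keys) would happen here
        buffered, playing = [], False
        for seg_id in segment_ids:
            data = None
            for peer in peers:                       # (3) nearest peers first
                data = peer["segments"].get(seg_id)  # (4) pull from a peer
                if data is not None:
                    print(f"{seg_id} served by {peer['name']}")
                    break
            if data is None:                         # (5) no peer has it:
                data = data_centre[seg_id]           #     hit the data centre
                print(f"{seg_id} served by the p2p data centre")
            buffered.append(data)
            if not playing and len(buffered) >= BUFFER_SEGMENTS:
                playing = True
                print("buffer full enough - playback starts")

    fetch("movie-42")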

In this second diagram, steps 1 to 4 remain pretty much the same; the difference now is:

(5) - the ISP has a caching layer near the edge/access that reduces the number of times:
  • segments have to be retrieved from remote peers
  • content/segments have to be retrieved from the p2p data centre
(6) - if neither the peers nor the edge cache hold the content and/or segments, it is necessary to access the peer-2-peer data centre.
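
Extending the toy sketch above, a hypothetical edge cache slots in between the peers and the data centre (again, nothing here is a real caching product - it just shows where the cache lookup and the cache fill sit in the flow):

    edge_cache = {}   # seg_id -> bytes, held near the CMTS/DSLAM

    def fetch_segment_cached(seg_id):
        for peer in peers:                    # (3)-(4) try nearby peers first
            data = peer["segments"].get(seg_id)
            if data is not None:
                return data
        data = edge_cache.get(seg_id)         # (5) try the ISP edge cache:
        if data is not None:                  #     a hit saves a round trip
            return data                       #     across the transit links
        data = data_centre[seg_id]            # (6) last resort: data centre
        edge_cache[seg_id] = data             # fill the cache so the next
        return data                           # viewer is served locally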

It is a very high-level description - but as common sense would tell you: the closer the content is to the edge, the faster the access from the end-user's perspective.

There are also benefits for the ISP (e.g. the ability to reduce transit bandwidth, to mediate if there is profit share involved, etc.).

Thursday, March 01, 2007

Joost :)

I have applied and got Joost™ - will keep you posted! (The installation ran smoothly and it looks gorgeous, but neither the office nor the hotel LAN allows p2p to run, so I will not see content until I get home later today.)

Monday, February 12, 2007

These (h)IP(s) don't lie

Back to the usual week: everyone seems to have a comment about problems with IPTV. "This internet won't scale," says Google's TV chief, and "IPTV/VoD: Cutting off the air supply" was published on El Reg.

I suppose it is true that there are issues around how ISPs have built their networks; contention ratios are changing dramatically in order to accommodate the ever-increasing bandwidth utilization by consumers.

But, how can these perceived problems be mitigated? My humble views:

- Don't broadcast over your network: that is an easy one to work out. It might sound "cool" and "cutting edge", but the preallocated/reserved bandwidth will bring your company to bankruptcy - just look at how cable companies have struggled to recover investment (whilst satellite companies seem to do 'ok-ish'). A more scalable model is to use a cheap(er) medium to stream the broadcast products - air waves, a combination with satellite, 3G for convergence (some back-of-envelope numbers follow this list).
- Cache, cache and cache; these days storage is cheap, so try the following:

  1. Use HTTP caching and/or peer-2-peer caching for inter-ISP content (i.e. cache your YouTubes, MySpaces, Joosts and all the rest) based on cheap commodity software. That will not only reduce the long-term costs of transit data, but will also improve your customers' experience (remember that as internet video becomes ubiquitous, more and more people will "hit" the peak of the popular videos at the same time). There is an overall benefit with this mechanism, as ALL internet content to your consumers will benefit from having this layer built (be careful with the technicalities - transparent caching is the best for user experience but in some circumstances can cause glitches).
  2. For internal/local content, use a multi-tiered storage architecture. First, in your data centre/head end, create capabilities for near-real-time/backup storage - cheap IDE and slower media that give you massive storage capacity at low cost. A SAN will always help, and it is already part of the ISP's infrastructure for databases, so expanding its use will also allow for consolidation of operations. Additionally, cache at the points of aggregation (on the "edge", closer to the consumer, so you also save transit bandwidth across aggregation links). This cache should be smart: in order to help in real time, it needs visibility of the viewing patterns as they happen (i.e. in order to cache a piece of content requested by a viewer for future use, the cache requires information about past behaviour related to similar pieces of content, for example other episodes of the same series). It is likely that a lot of this information can be carried as part of the metadata for the stream, dynamically generated by the content server located in the data centre - as content gradually ages and viewing patterns change, the metadata will tell the edge cache what to do with the content. (This looks good for a patent: using a bit of Business Intelligence, the content server can decide, perhaps via the content management system, which metadata tags to generate at the beginning of the transmission.) A sketch of this idea follows the list as well.
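
First, on the 'don't broadcast' point, some back-of-envelope arithmetic shows why unicasting a 'broadcast' channel over reserved bandwidth gets ugly fast (the bitrate and viewer count are invented for illustration):

    viewers = 1_000_000       # concurrent viewers of one popular channel
    sd_stream_mbps = 2        # assumed bitrate of a single SD stream

    unicast_gbps = viewers * sd_stream_mbps / 1000
    print(f"unicast over the ISP network: {unicast_gbps:,.0f} Gbps reserved")
    # -> 2,000 Gbps crossing the core at the same time, for one channel.
    # Over satellite/air waves the same channel costs the same whether one
    # person or a million people tune in; the wireline network then only
    # carries the genuinely on-demand (VoD) traffic.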
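
And here is the promised sketch of point 2's metadata-driven edge cache - the tag name (predicted_repeat_views) and the thresholds are hypothetical, invented purely to make the idea concrete:

    class EdgeCache:
        # Toy edge cache driven by metadata hints from the head end.

        def __init__(self, capacity_items=1000):
            self.capacity = capacity_items
            self.store = {}   # content_id -> (data, metadata)

        def consider(self, content_id, data, metadata):
            # Keep only content that the content server predicts will be
            # re-watched (e.g. an episode of a series being followed).
            if metadata.get("predicted_repeat_views", 0) < 10:
                return
            if len(self.store) >= self.capacity:
                self._evict_least_popular()
            self.store[content_id] = (data, metadata)

        def refresh(self, content_id, metadata):
            # As content ages and viewing patterns change, refreshed
            # metadata tells the edge what to do with what it holds.
            if content_id in self.store:
                data, _ = self.store[content_id]
                self.store[content_id] = (data, metadata)

        def _evict_least_popular(self):
            victim = min(self.store, key=lambda cid:
                         self.store[cid][1].get("predicted_repeat_views", 0))
            del self.store[victim]

    cache = EdgeCache()
    cache.consider("series-1-ep-3", b"...", {"predicted_repeat_views": 250})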
Prices will definitely rise if ISPs keep turning a blind eye to how the overall network infrastructure needs to cater for the new requirements. The centralised deployment model might work if you own the infrastructure, but in general, transmitting as little data as possible will leave you extra bandwidth to transmit other data.

Hope to read more views!

Monday, January 22, 2007

Joost it?

Interesting new articles have expanded my views since November. I think that, commercially, Joost is an extremely viable (cost-effective, easy-to-deploy) product, but (yes, there is a but) it has 2 potential problems:
  • A telco doesn't like it: introducing packet loss or filtering is easy these days - get a DPI box, tweak parameters in your cache appliance (1, 2) to increase the latency or to never deliver some packets (a toy illustration follows this list). So unless the telcos become infrastructure (like roads that provide the best QoS to *anyone* wanting to use them), Joost could be out of this world (like the guy who wrote the article), or it could be Just crap ...
  • A telco likes it too much: in this case, the telco allows Joost to work - maybe through syndication - and deploys infrastructure for caching (1, 2) to enhance the user experience. Here the telco would need good BSS/OSS systems to monitor and manage network deployments (one thing is talking about the "consumer experience" in isolation; a different animal is seeing how 1 million consumers receive good QoS).
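The toy illustration for the first bullet, assuming a deliberately crude model in which every induced packet loss also costs a retransmission round-trip (the loss rates and the model itself are invented, just to show the shape of the effect):

    import random
    random.seed(1)

    def effective_mbps(link_mbps, induced_loss, packets=10_000):
        delivered = sum(1 for _ in range(packets)
                        if random.random() > induced_loss)
        # Crude assumption: goodput falls roughly with the square of the
        # delivery rate, since each loss also costs a retransmit round-trip.
        return link_mbps * (delivered / packets) ** 2

    for loss in (0.00, 0.02, 0.05, 0.10):
        print(f"{loss:4.0%} induced loss -> "
              f"{effective_mbps(8, loss):4.2f} Mbps effective")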
After using SkypeIn and SkypeOut on 3 different networks in the last 24 hours (home cable at 2 Mbps, hotel shared wLAN at 8 Mbps and the office at 100 Mbps), the peer-to-peer experience was reasonable - but none of my 8 calls was 100% satisfactory. People could understand me or I could understand them, but packets were lost, echo was introduced, etc. Since the 3 networks are not 'aware' of this particular use, it is a good example of how an un-managed telco environment would operate.

Could anyone with access write more about what Joost is offering to partners - or is it really Joost it?

Wednesday, January 03, 2007

Converge!

Uhmmm, I was chatting to one of my mates and she told me she now spends lots of time on Stickam; I had never heard of it (so is word-of-mouth advertising not working anymore?), but it looks just like a more "converged" version of MySpace! I do remember the good old days when CU-SeeMe and NetMeeting came along to start the "video social networking"; now, with bandwidth being a commodity and the web maturing ("2.0"), it is easy to see sites like Stickam "stick" out more (cheesy!) and acquire more and more subscribers.

Still, the problem is how to make the business model work! I find the way Google ads are introduced into MySpace annoying; sometimes links move around because of the dynamics of the advertising ... not good! (Before you go on - "http://www.myspace.com/joshcruz" is not me ... so no jokes!) For advertising to work out, it has to be less intrusive ... how? Well, that is the pot of gold at the end of the rainbow - no one is going to publish any ideas anywhere!

I have some other ideas I need to structure, so I will leave them for new posts.