Is broadcast technology being driven by raised consumer expectations – or is it the other way around, with all of us watching more OTT services because technological advances are enhancing delivery and quality?
As much as operators would like to be ahead of the game, it seems likely that consumer demand for anytime, anywhere, any device viewing is setting the agenda. Last year, 12.6 billion hours of content were viewed using OTT services, more than double that of the year before.
Yet, one thing is certain: this demand is powering a continuous string of innovations, especially around streaming. The most recent of these embraces artificial intelligence (AI) to analyse hundreds of thousands of assets before making its recommendations. This development has been shown to save operators around 30 per cent of content delivery costs, while also improving the quality of delivery.
NO PLACE FOR COMPROMISE
Many operators are now finding that ‘one size fits all’ streaming reduces the quality of the viewing experience, especially when bandwidths are low. In such a fast-moving competitive environment, they can’t afford this compromise.
At first, adaptive streaming looked like the answer: the same media file is encoded at a number of different bitrates, producing multiple representations at different quality levels. As the quality of the internet connection varies, the player can switch between these representations to provide a smoother viewing experience.
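To make the idea concrete, here is a minimal sketch of player-side bitrate selection. The ladder values and safety margin are illustrative assumptions, not figures from the article, and real players use more sophisticated heuristics.

```python
# Minimal sketch of player-side adaptive bitrate selection. The ladder values
# and the safety margin are illustrative assumptions, not figures from the article.

LADDER_KBPS = [400, 800, 1600, 3000, 6000]  # same media file encoded at several bitrates

def pick_representation(measured_throughput_kbps: float, margin: float = 0.8) -> int:
    """Return the highest-bitrate representation the current connection can sustain."""
    usable = measured_throughput_kbps * margin      # keep headroom for throughput swings
    candidates = [b for b in LADDER_KBPS if b <= usable]
    return candidates[-1] if candidates else LADDER_KBPS[0]

# As connection quality varies, the player re-evaluates before each chunk request.
for throughput in (5000, 1200, 700, 2500):
    print(f"{throughput} kbps measured -> {pick_representation(throughput)} kbps representation")
```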
The downside is that the bitrate doesn't match the complexity of the content. For the simpler half of the content the bitrate will be too high, and for the more complex half it will be too low, so quality is never fully optimised.
SMARTER USE OF BITRATES
Recognising this shortfall, developers have been working on streaming that adjusts bitrates based on the complexity of the content rather than just the internet connection. The result is content adaptive streaming, which uses AI to compute all the necessary information, such as motion estimation, to make intelligent allocation decisions. Using a variable bitrate to reach constant quality allows bits to be saved when complexity drops, on slow scenes for example, and fewer profiles to be used on easier content.
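A simplified sketch of the underlying idea, per-scene bitrate allocation driven by a complexity score, is shown below. The scores, target and bounds are invented for illustration rather than taken from any real encoder.

```python
# Minimal sketch of per-scene bitrate allocation for roughly constant quality.
# In a real encoder the complexity score would come from analysis such as motion
# estimation; the scores, target and bounds here are invented for illustration.

scenes = [
    {"name": "talking head",  "complexity": 0.2},
    {"name": "slow pan",      "complexity": 0.4},
    {"name": "sports action", "complexity": 0.9},
]

TARGET_KBPS = 3000            # what a fixed-bitrate encode would spend everywhere
MIN_KBPS, MAX_KBPS = 800, 6000

def allocate(complexity: float) -> int:
    """Scale the bitrate with scene complexity instead of spending the same everywhere."""
    kbps = TARGET_KBPS * (0.5 + complexity)   # simple linear model, an assumption
    return int(min(MAX_KBPS, max(MIN_KBPS, kbps)))

for scene in scenes:
    print(f"{scene['name']}: {allocate(scene['complexity'])} kbps")
```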
The other difference between adaptive and content adaptive streaming is the chunking. The traditional approach is to keep chunks at fixed lengths. The ecosystem usually requires chunks to start with an I-frame so that profile switches can occur between chunks, but with fixed-size chunks this implies arbitrary I-frame placement. A scene cut just before a chunking point therefore results in a major compression inefficiency: the new scene is effectively encoded twice, once at the cut itself and again at the I-frame forced at the start of the next chunk.
Advertising segments, moreover, are typically not aligned with the chunking period, which can lead to inaccuracies when using fixed-size chunks. For dynamic advertising insertion, the delivery ecosystem is already aligning chunks on ad borders, and so is already using variable chunk sizes.
Content adaptive streaming combines a scene-cut detection algorithm in the video encoder with rules that keep chunk sizes reasonable and minimise drift, preparing the asset for more efficient packaging. This not only brings cost savings through reduced traffic, storage and other overheads, but also improves the quality of experience for the consumer.
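The sketch below illustrates one way such dynamic chunking rules could work, snapping chunk boundaries to detected scene cuts when they fall within an allowed window and otherwise cutting at the nominal length. The timestamps and window sizes are assumptions for illustration, not ATEME's actual algorithm.

```python
# Minimal sketch of dynamic chunking: snap each chunk boundary to a detected scene
# cut when one falls inside an allowed window, otherwise cut at the nominal length.
# Scene-cut timestamps and the window sizes are illustrative assumptions.

NOMINAL, MIN_LEN, MAX_LEN = 4.0, 2.0, 6.0       # chunk durations in seconds

def chunk_boundaries(duration: float, scene_cuts: list[float]) -> list[float]:
    boundaries, start = [], 0.0
    while start < duration:
        target = start + NOMINAL
        # Prefer a scene cut within [start + MIN_LEN, start + MAX_LEN], closest to
        # the nominal target so the chunking grid does not drift too far.
        window = [c for c in scene_cuts if start + MIN_LEN <= c <= start + MAX_LEN]
        cut = min(window, key=lambda c: abs(c - target)) if window else min(target, duration)
        boundaries.append(min(cut, duration))
        start = boundaries[-1]
    return boundaries

# Every chunk now starts on a scene cut where possible, avoiding duplicated I-frames.
print(chunk_boundaries(20.0, scene_cuts=[3.7, 9.1, 13.2, 18.5]))
```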
The good news is that new content adaptive streaming solutions have been developed with interoperability in mind, so individual parameters such as dynamic chunking can be turned on and off. Operators also have the option to use the specific resolutions they want, even if these appear to be suboptimal to the system.
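As a purely hypothetical illustration of what such configurability might look like, the parameter names below are invented for this sketch and are not taken from any vendor's API.

```python
# Hypothetical configuration sketch: the parameter names are invented for
# illustration and are not ATEME's actual API; they simply show how individual
# features might be toggled or constrained by an operator.
encoder_config = {
    "content_adaptive_bitrate": True,
    "dynamic_chunking": False,                      # e.g. keep fixed chunks for a legacy CDN
    "forced_resolutions": [1920, 1280, 960, 640],   # operator-chosen ladder, even if "suboptimal"
}
```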
Any advance that saves operator costs at such a level is always welcome. Whether this is just the beginning of the use of AI within the industry remains to be seen. What is clear, though, is that content adaptive streaming is not just an upgrade of what went before, but a whole new way of thinking about streaming.
Written by Jérôme Vieron, PhD, director of Research & Innovation for ATEME