
Customer feedback on 4K UHD tells us that more pixels alone will not satisfy market expectations for a next-generation broadcasting format. Having better pixels with a higher dynamic range is therefore a hot topic of discussion.
While standard dynamic range (SDR) and high dynamic range (HDR) production workflows in the film and post-production industries are relatively well known, simultaneous SDR/HDR processes for live production are still to be established. This is due to specific challenges of live production, such as the current limitation to 10-bit signals and the related requirement to generate a native HDR opto-electronic transfer function (OETF) at the source, rather than converting between different OETFs, in order to avoid banding errors.
4K UHD expectations and reality
With the introduction of 4K UHD, image resolution has reached a level that exceeds human viewing capabilities in many practical viewing conditions. Additionally, in many typical broadcast applications, like sports productions, the motion blur caused by the relatively long exposure time of each frame is the main limitation on perceived image sharpness. And since motion blur is format independent, a larger pixel count does not improve the perceived resolution or sharpness if the exposure time cannot be reduced.
This is why high frame rate (HFR) acquisition is included in the UHD recommendations. But where 4K acquisition at today's frame rates already requires 8x the bandwidth of current HD acquisition, 4K HFR at 100 or 120fps would require 16x the bandwidth of the HD formats used today. Besides these extremely large bandwidth requirements, the shorter exposure time in combination with the smaller pixels sharply reduces the sensitivity of the camera, in some cases to a level where the camera is no longer usable. These challenges rule out implementing 4K with HFR in the near future, although it remains an option for later implementation.
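As a rough sanity check of these figures, the sketch below compares raw pixel rates, assuming the HD baseline is 1080i50 (1920x1080 with 25 full frames per second of payload); bit depth, blanking and chroma subsampling are ignored for simplicity.

    # Back-of-the-envelope pixel-rate comparison. Assumes an interlaced
    # 1080i50 HD baseline carrying 25 full frames per second of payload.
    def pixel_rate(width, height, frames_per_second):
        """Raw pixel throughput in pixels per second."""
        return width * height * frames_per_second

    hd_1080i50 = pixel_rate(1920, 1080, 25)      # today's HD baseline
    uhd_2160p50 = pixel_rate(3840, 2160, 50)     # 4K UHD at today's frame rate
    uhd_2160p100 = pixel_rate(3840, 2160, 100)   # 4K UHD with HFR

    print(uhd_2160p50 / hd_1080i50)    # 8.0  -> the 8x figure above
    print(uhd_2160p100 / hd_1080i50)   # 16.0 -> the 16x figure for HFR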
HDR and its advantages for image quality
Since the high pixel count of the 4K UHD image alone in many cases does not deliver the image quality improvements users expect, HDR becomes an important topic for next-generation acquisition and distribution formats. HDR allows image reproduction that is much closer to reality, because its contrast ratio approaches the conditions found in real life. HDR also delivers more dependable results under the difficult shooting conditions found at many outside broadcast productions. For field productions, this is possibly the most important improvement delivered by any of the new 4K UHD features. A further primary advantage of HDR is that it is fully format independent and does not need any specific viewing conditions to show its benefits.
With HDR, a larger contrast range is captured, processed and transmitted to the display. It is important to note that squeezing this larger contrast range into a given signal range brings the signal closer to its quantisation limits.
The quantisation of the signal inside the imagers and the processing inside the camera are done with a bit depth far above the 10 bits used for production; the latest CMOS broadcast cameras provide signal processing inside the camera head with up to 34-bit accuracy. This helps to avoid unnecessary banding artefacts on the camera side, but once the signal is converted and output with a particular OETF, it is limited to the maximum 10-bit depth supported by current broadcast interface standards.
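To see why the headroom shrinks, consider a minimal sketch. It assumes a purely logarithmic transfer characteristic, which real OETFs such as PQ and HLG only approximate, and illustrative stop counts: the 1,023 steps of a 10-bit signal are shared across the captured stops of dynamic range, so the more stops are squeezed in, the fewer code values remain per stop and the greater the risk of visible banding.

    # Minimal sketch: with a purely logarithmic transfer characteristic
    # (a simplification; PQ and HLG are not purely logarithmic), the 10-bit
    # code values are spread evenly across the captured stops, so a wider
    # dynamic range leaves fewer codes per stop.
    def codes_per_stop(bit_depth, stops_of_dynamic_range):
        total_codes = 2 ** bit_depth - 1
        return total_codes / stops_of_dynamic_range

    print(codes_per_stop(10, 6))    # ~170 codes/stop for an SDR-like 6 stops
    print(codes_per_stop(10, 14))   # ~73 codes/stop for an HDR-like 14 stops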
Which OETF to use
With SMPTE ST 2084, known as perceptual quantiser (PQ), and hybrid log-gamma (HLG), there are two different transfer functions standardised and in use for live HDR applications worldwide (see Figure 1). It is not likely that only one of the two OETFs will be used for all applications and in every place around the world. This means that both transfer functions need to be supported in live production, and generating a native OETF directly from the camera system helps to provide enough headroom for the conversions that remain unavoidable.
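For reference, the sketch below implements the two OETFs from their published definitions, PQ as specified in SMPTE ST 2084 and HLG as specified in ITU-R BT.2100; the final 10-bit full-range quantisation is purely illustrative, since broadcast interfaces normally use narrow-range coding.

    import math

    # PQ (SMPTE ST 2084): maps display luminance, normalised to a
    # 10,000 cd/m2 peak, to a non-linear signal value in [0, 1].
    def pq_oetf(y):  # y = luminance / 10000.0, in [0, 1]
        m1, m2 = 2610 / 16384, 2523 / 4096 * 128
        c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
        y_m1 = y ** m1
        return ((c1 + c2 * y_m1) / (1 + c3 * y_m1)) ** m2

    # HLG (ITU-R BT.2100): maps scene-linear light e in [0, 1] to a
    # signal value in [0, 1]; square root below 1/12, logarithmic above.
    def hlg_oetf(e):
        a, b, c = 0.17883277, 0.28466892, 0.55991073
        return math.sqrt(3 * e) if e <= 1 / 12 else a * math.log(12 * e - b) + c

    # Illustrative full-range 10-bit quantisation of an 18% grey scene level.
    print(round(hlg_oetf(0.18) * 1023))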
By Klaus Weber, senior product marketing manager, Cameras, Grass Valley