Ian Dormer
Ian Dormer was born in Zimbabwe and has been in the TV business since the 1980s, having served in various positions at the SABC, M-Net and SuperSport. Ian currently works and resides in New Zealand.

IBC 2019: It’s a record

SCREEN AFRICA EXCLUSIVE:

It’s a place where cutting-edge innovations and creative ideas are shared and relationships are formed. Each year IBC continues to grow, and this year attendance hit a record number! The world’s media, entertainment and technology industry once again gathered in Amsterdam in droves, and the total attendance figure of 56,390 included a record number of next-generation (18–35) attendees, demonstrating the vital role that the show plays in the broadcast and entertainment industry.

IBC CEO Michael Crimp was delighted to see audience growth in key target areas, “particularly in welcoming more young people, senior-level executives and overseas visitors,” he said. “While this gives us a focus to build on next year, our metrics for success also include crucial elements like quality of experience, audience engagement and IBC’s influence on the industry, and our conversations with exhibitors and attendees tell us that these have all improved on 2018.”

This year’s show was indeed jam-packed with the technology and trends of tomorrow and perhaps the biggest highlight for most was the first-ever IBC Esports Showcase live tournament. I think it highlighted just how gripping and entertaining Esports can be and why the media and broadcast industry should be getting involved. Esports is an incredibly fast-growing movement and IBC attendees saw it first-hand, with two professional teams from ESL’s network of National Championships across Europe going head-to-head in the classic FPS multiplayer Counter-Strike: Global Offensive.

The broadcast, media and entertainment industry’s sense of social responsibility is stronger than ever. Movements championing diversity and inclusivity are gathering momentum and there is a conscious increase in company initiatives making a positive impact in the workplace and community. To reflect this, and its commitment to driving change in the industry, IBC has for the first time recognised social responsibility as part of its prestigious awards programme with a stand-alone award: the Social Impact Award. Competition for this award was so intense that the judges awarded not one, but three trophies: to Turkish broadcaster TRT for its World Citizen programme; to Sagar Vani, an Indian omni-channel citizen engagement platform; and finally to Chouette Films, an initiative of the University of London’s School of Oriental and African Studies, whose aim is to produce academic and informative content with the smallest of environmental footprints.

The conference sessions provided attendees with much food for thought. Google’s Android TV and Roku were singled out as two of the most transformative technologies at IBC by Accedo’s Fredrik Andersson in a What Caught My Eye conference session on innovation. A big topic of conversation on the main IBC stage was change: changing monetisation models, changing consumer habits, even changing content expectations (have we reached Peak Content yet?). Something that never changes, though, is the ever-spectacular Big Screen Events programme. As always, the IBC Big Screen, which is equipped with Dolby Vision and Dolby Atmos, delivered a stunning programme of events and screenings. An exclusive cinematic screening of Game of Thrones’ epic Season 8 battle episode drew a huge crowd, as did a session on the stories behind the edit and the music of the Elton John biopic, Rocketman.

The IBC2019 exhibition featured 1,700 exhibitors across 15 halls, offering attendees the opportunity to discover all the latest trends and technologies at their own pace. In the post-production environment, Adobe used the show to unveil Auto Reframe, a new feature for its Premiere Pro video editing software that is powered by the company’s Adobe Sensei artificial intelligence (AI) and machine learning (ML) platform. Auto Reframe automatically reframes and reformats video content so that the same project can be published in different aspect ratios, from square to vertical to cinematic 16:9 versions. Avid used the opening day of IBC to announce that its Media Composer video editing software will now offer native support for Apple’s ProRes RAW camera codec, and will support ProRes playback and encoding on Windows.
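Under the hood, this kind of reframing comes down to computing a new crop window for each target aspect ratio and keeping the detected subject inside it. Here is a minimal sketch of that geometry – my own illustration, not Adobe’s implementation, and the subject position is assumed to come from an upstream detection step:

```python
def reframe_crop(src_w, src_h, target_ratio, subject_cx):
    """Compute a crop window of the target aspect ratio, centred on the
    detected subject where possible (e.g. 9/16 for vertical video)."""
    crop_h = src_h
    crop_w = int(round(crop_h * target_ratio))
    if crop_w > src_w:  # target is wider than the source: fit to width instead
        crop_w, crop_h = src_w, int(round(src_w / target_ratio))
    # Centre on the subject, then clamp so the window stays inside the frame
    x = min(max(int(subject_cx - crop_w / 2), 0), src_w - crop_w)
    y = (src_h - crop_h) // 2
    return x, y, crop_w, crop_h

# A 16:9 UHD frame reframed to vertical 9:16, subject slightly left of centre
print(reframe_crop(3840, 2160, 9 / 16, subject_cx=1600))  # (992, 0, 1215, 2160)
```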

The exponential growth in video consumption worldwide is a challenge, as consumer demands and expectations increase. Taking notice of the market trends, Nikon opportunely unveiled its all-in mirrorless moviemaking set-up: the Nikon Z6 Essential Movie Kit, built around the video-friendly 24.5MP full-frame 4K Nikon Z6 body. Comprising filmmaking essentials such as the Atomos Ninja V monitor, SmallRig camera cage and spare batteries, Nikon describes the Movie Kit as “providing the pure essentials to get rolling quickly, with all the core tools to make high-quality movies,” while “leaving filmmakers free to customise further components to suit their personal preferences.” The package will cost aspirant filmmakers around US$3800.

6K was a buzzword often dropped into conversations at IBC, and Blackmagic Design used the occasion to announce the Blackmagic Pocket Cinema Camera 6K, a new handheld digital film camera with a full Super 35 size 6K HDR image sensor. There are no surprises when it comes to Blackmagic Design: the Aussie company continues to impress with their amazing technology, and of great interest was the release of their ATEM Mini, a new low-cost live production switcher specifically designed to allow live streaming to YouTube and business presentations via Skype.

As always, Sony showed off their prowess in the industry, unveiling a whole new range of products, solutions and services. The highlight for many a DoP had to be the new PXW-FX9 XDCAM camera, featuring Sony’s newly-developed Exmor R 6K full-frame sensor and Fast Hybrid Auto Focus system. Building upon the success of the PXW-FS7 and PXW-FS7M2, and inheriting its colour science from the VENICE digital motion picture camera, the new camera offers greater creative freedom to capture stunning images and represents the ultimate tool of choice for documentaries, music videos, drama productions and event shooting. Also of interest to shooters was the launch of Sony’s new full-frame E-mount FE C 16-35mm T3.1 G cinema lens, an ideal match for large-format cameras such as the PXW-FX9 and VENICE, where the wide-angle zoom combines advanced optical performance, operability and intelligent shooting functions. For the audiophiles, an impressive third generation of the DWX digital wireless microphone system was great to see, with the compact DWT-B30 bodypack transmitter catching my eye.

For an industry that is constantly changing, being able to experience all the latest tech and hear about the challenges and opportunities facing the industry from key industry players all in one place is invaluable. IBC Director Imran Sroya commented: “IBC continues to succeed because we work hard to present the most knowledgeable speakers, the most topical sessions and the technology of tomorrow, providing a global meeting point that enables industry professionals to get together and share vital information about all aspects of media, entertainment and technology.”

It really is the place to be and the place to meet, and I am already looking forward to seeing what IBC has up its sleeve for next year!

A whole lot of firsts for Rugby World Cup

SCREEN AFRICA EXCLUSIVE:

The first sports event broadcast of a Rugby Union international was radio coverage between England and Wales from Twickenham in the United Kingdom, back in January 1927. Some 40 years later, the first-ever rugby match was broadcast on television in colour. It was a highly charged third test between England and the New Zealand All Blacks in 1967, also at Twickenham.

The idea of a Rugby World Cup had been suggested on numerous occasions going back to the 1950s, but met with opposition from most unions until 1985 – when it was finally agreed upon, seeing the inaugural tournament played in 1987. It is now the third-largest sports event in the world, after the summer Olympics and the Football World Cup, and this year sees the ninth Rugby World Cup take place in Japan, with a whole host of firsts when it comes to broadcast technology.

Rugby has been played in Japan since at least 1866, when the Yokohama Football Club was founded. It’s fitting, therefore, that Yokohama, which borders Tokyo, will host this year’s final. Japan is set to break new ground as host of the first tournament to be held in Asia, and International Games Broadcast Services (IGBS), a joint venture between HBS and IMG Media, has been appointed host broadcaster. The decision to appoint a specialised host broadcaster for the first time reflects World Rugby’s commitment to the highest standards of ground-breaking technical production and consistency between tournaments.

Also, for the first time, all 48 matches of a Rugby World Cup will be produced in multiple formats. The UHD standard is 4K SDR 2160p/59.94, while the HD standards are 1080p/59.94 and 1080i/59.94. In addition to the traditional World Feed, Rights Holding Broadcasters (RHBs) will have access to uninterrupted live feeds to complement their studio operations, plus access to action clips during the match to enhance their analysis and programming. Dedicated ENG crews will provide content from around the country, the tournament and the competing teams. All the live and ENG content will be available via the World Rugby Media Server, which is being supplied by EVS. Logged rushes plus some post-produced features will all be available for RHBs’ programming, be it a traditional broadcast or online offering, either at the International Broadcast Centre or remotely at their home studios.
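To give a sense of what the multi-format side of such an operation involves, here is a minimal sketch of deriving the 1080p/59.94 HD feed from the 2160p/59.94 UHD master using FFmpeg – the filenames and codec settings are illustrative, not IGBS’s actual workflow:

```python
import subprocess

# Downconvert the UHD master to the 1080p/59.94 HD delivery format
# (illustrative filenames and a stand-in mezzanine codec, not the
# tournament's real specification)
subprocess.run([
    "ffmpeg", "-i", "uhd_world_feed_2160p5994.mxf",
    "-vf", "scale=1920:1080:flags=lanczos",  # downscale 2160p -> 1080p
    "-r", "60000/1001",                      # keep the 59.94 frame rate
    "-c:v", "libx264", "-b:v", "50M",        # stand-in codec and bitrate
    "-c:a", "copy",
    "hd_world_feed_1080p5994.mp4",
], check=True)
```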

IGBS is also introducing another first, the Match Day Preview Show, which will look ahead to the next day’s games. This, combined with the live matches and the daily highlights show, will offer broadcasters access to all-day programming. In a world that now needs to feed social media, World Rugby has introduced a specific content package to promote the event on a variety of social media platforms. The social media content production team will enable rights holders to simply populate their own streams, websites and apps with high-quality content, with short-form content, infographics and 360° virtual reality (VR) clips all made available.

Production teams have been drawn from France, the UK, South Africa, New Zealand and Australia by the host broadcaster in order to maintain the highest standards throughout the six-week tournament. A number of first-time innovations have been introduced to enhance the coverage this year. Depending on the rating of the match, there will be 23, 28 or 32 cameras, as well as corner-flag cameras, and Spidercam will be operational at 34 of the 48 matches. Hawk-Eye will be providing facilities for the Television Match Official (TMO) and for Citing and Head Injury Assessment.

NHK, Japan’s sole public broadcaster, will provide 8K Super Hi-Vision coverage to the domestic market, with unprecedented free-to-air coverage offering an opportunity to use rugby’s biggest event to reach the widest possible audience. Under the IGBS umbrella, NHK will use nine cameras together with some host 4K cameras up-converted to 8K. It plans to broadcast 31 of the 48 matches in 8K, and will use Japanese UHD graphics created on-site, as well as augmented graphics in conjunction with Spidercam, to add to the 8K spectacle.

All the broadcast title and programme graphics will be run by Alston Elliot, who will also provide the official data throughout the tournament. Alston Elliot is a graphics production company that specialises in televised sports graphics and data systems, and also serves as a technology partner to broadcasters, if required, by supplying turnkey graphics systems and custom output software. Their turnkey services have been adopted for football competitions such as the English Premier League, FA Cup, Europa League and FA Women’s Super League. Other sports they supply are golf, motorsports, athletics, tennis, hockey, fishing and, of course, IPL cricket. Their technical innovations for rugby include scrum analysis, play patterns, try origins, team trends, ruck analysis, tackle analysis and field position analysis.

The company’s graphics creation workflows are based mainly on Vizrt and ChyronHego software and they have come a long way since they started out in the UK back in 1992. The company now has offices in South Africa, India and also Australia, where they recently designed and supplied a ground-breaking broadcast graphics package for Augmented Reality on Spidercam for the National Rugby League.

One of the major challenges facing the broadcasters is a bit of a strange one. Remarkably, there are four varieties of local power in Japan – 200V/60Hz, 100V/50Hz, 200V/50Hz and 100V/60Hz – depending on the stadium location. These challenges have been overcome, however, with the appointment of Aggreko, a UK-based company that will provide critical power systems and distribution for all broadcasts at the various stadiums, as well as backup systems for the 12 venues across Japan.

Meanwhile, world lighting leader Signify has installed its connected lighting system Interact Sports at the Toyota Stadium in Aichi, Japan. It’s the first outdoor stadium in Japan to install connected LED pitch lighting in combination with high-performance Philips ArenaVision LEDs. This new lighting meets the stringent broadcast standards for flicker-free Ultra-HD 4K television and super slow-motion action replays. People at home will clearly see every detail and emotion on the pitch in a tournament that is bound to provide us with the best that sport broadcasting has to offer and the most exciting rugby we have seen…ever!


It’s IBC time again!

September is nearly upon us; where does the time go? It’s a month of changes: for us in the southern hemisphere it’s the announcement of spring; for our friends up north, the start of autumn; and, more importantly, the start of the world’s most influential media, entertainment and technology show – IBC2019.

The IBC theme this year is ‘See it differently’, an apt theme considering that broadcasting is going through an immense amount of disruption, thanks largely to new technology and changing consumer viewing habits.

This year’s show is slightly different, in that IBC has decided to align the dates of the exhibition and conference so that both will now take place from Friday, 13 September through to Tuesday, 17 September 2019. Until now, the IBC Conference always started a day earlier than the Exhibition, on the preceding Thursday. Over the five-day conference, 1,700 delegates and guests from across the globe will hear from an outstanding line-up of 300-plus speakers, enjoy fantastic networking opportunities and be inspired to embrace the changes in our industry together. The exhibition expects to draw a crowd of over 55,000 attendees.

This year’s IBC conference programme features some of the foremost thought-leaders, innovators and policy-makers in their fields and covers a wide breadth of topics. The programme will explore new strategies, business disruptors and future technological progress, and will hopefully reveal the future roadmap of the industry.

Top of my list from the conference sessions is a peek into business disruptor YouTube and its approach to content creation and channel monetisation, with a keynote speech from Cecile Frot-Coutaz, Head of YouTube EMEA. YouTube has largely been about user-generated content shot on mobile phones, but large viewing figures and big sponsorship deals have seen some YouTube stars become increasingly professional in their approach to video production, creating competition for traditional broadcasters. The desire for instant-access content on YouTube, as well as on a growing number of other platforms, can be an opportunity for broadcasters, giving them a new outlet and way of engaging their audience.

Another not-to-miss from the conference programme will be the Global Gamechangers session on Friday 13 September. Here, you’ll have the opportunity to meet the pioneers of creativity and innovation and be inspired and informed by the greatest creative, innovative and future-facing talents making headlines across the global stage. The Global Gamechangers will debate the future of the industry as they consider where revenues will come from, the creative challenges facing content makers and how broadcasters can remain relevant in a future dominated by digital media.

Always fully booked in advance, the IBC2019 Big Screen Programme focuses on how innovation in technology is allowing us to bring stories to life like never before. This year you will be able to gain insights from creative and technical deep dives, hearing from everyone from cinematographers to colourists involved in the production of hits like Toy Story 4 and Game of Thrones. A world-class forum where creativity meets tech, the Big Screen programme allows us to hear from the talent behind the camera on everything from cinema and big-event programming to boxset dramas, with an in-depth look into the tech bringing this content to our screens.

Outside the IBC2019 conference doors lies the exhibition itself – over 50,000 square metres of exhibition space, over 1,700 exhibitors and over 55,000 attendees made up of innovators, key decision-makers and press – where you’ll get the opportunity to discover adjacent technologies and sectors, and catch up with the latest developments in broadcast and how they can fit into your future media plans.

One of the major areas of interest will be Artificial Intelligence. Is AI still hype or is it really the next big thing? At last year’s IBC, the Future Zone looked at how Augmented Reality (AR) and Virtual Reality (VR) had already had an impact on the broadcast, media and entertainment industries. This year, with both technologies having dramatically improved, there’s a wider look at existing projects and new ways that AR and VR can impact broadcasting, for creators and viewers alike. From the creation of virtual objects in a TV studio, to complete virtual sets, AR is already a big part of many broadcasts. Looking to the future, many believe that high-speed 5G mobile networks will create new opportunities for AR and VR, creating new ways of telling stories and delivering immersive narrative experiences.

Other hot topics will no doubt be Cloud Production, Cyber Security, High Dynamic Range (HDR) and one of my personal favourites, Esports. Esports is already a billion-dollar industry and all signs point to it growing rapidly over the coming years. With this potential comes fresh challenges, such as how to create interesting stories from in-game streaming. Esports was introduced at IBC2018, but the focus will be much larger at this year’s show. For the first time, IBC2019 is hosting the IBC Esports Showcase designed to give attendees an insight into this growing area. From managing the complexity of production to delivering an Esports broadcast, the Esports Showcase will host a live Esports tournament to demonstrate the techniques, trends and technologies required to bring this exciting new form of entertainment to life.

Something that has been in the headlines – be it good or bad – is the implementation of Mobile 5G networks. 5G networks are starting to be switched on across Europe, with plans to rapidly expand coverage. Offering broadband-like speeds, 5G is a revolutionary new type of mobile network that makes high-speed internet access possible for mobile devices. For broadcasters, 5G can offer a complete portable transmission solution, even delivering 4K video streams. For consumers, 5G can be used for streaming high-capacity content, such as with Barcelona Football Club, which has used 5G to embed wireless 360-degree cameras throughout the Nou Camp stadium, streaming the video to home fans using VR headsets. I have no doubt that 5G will be bigger than ever and a talking-point long after IBC finishes.

IBC organisers say that this event is the world’s most influential media, entertainment and technology show – and they aren’t wrong. It’s set to be a goodie, and offers everything from new product launches to opportunities to engage with customers old and new and to meet up with your broadcast colleagues as well as all the industry leaders. IBC 2019 is heading to Amsterdam, and – as they say in Dutch – “zie je daar.”

An increased need to test and measure

SCREEN AFRICA EXCLUSIVE:

In the good old days of analogue, broadcast test equipment was traditionally an elementary electronic device, such as a waveform monitor, vectorscope or audio level meter (in post-production) and spectrum analysers and field-strength meters (in broadcast transmission).

Traditionally, it was a task for broadcast engineers to ensure that our workflow signals were within spec and met all standards. In today’s post-production and broadcast environment, we work with multiple digital signals. The ever-evolving complexity of production workflows – streaming video, UHD, 4K and high frame rates – makes test and measurement even more important in the broadcast workplace. As a result, the equipment has changed from a hardware-centric environment to software-based systems, with more and more intelligence incorporated into the product. The engineer sitting at a bench in the workshop has been largely replaced by software tools like automated quality control (QC) to meet the needs and workload of multi-platform delivery.

The original use of test and measurement equipment like the waveform monitor (WFM) and vectorscope was to line up videotape-based equipment and check a few basic parameters of the recorded video, including black level, peak white, colour phase, noise, colour gamut and timing. Videotape technology could suffer from alignment issues, head clogs and other problems which impacted the video quality. Although tape was very reliable, each record operation still needed at least a start, middle and end check. QC was easy back then, and the general process was to spot-check a process or watch a screen – glancing momentarily at the waveform monitor and back to the CRT monitor, with an eagle eye watching for ill-timed videotape dropouts.

The move to file-based workflows eliminated a number of those parameters and, with them, the need to test for analogue-based faults. However, due to the complexities of digital workflows, the number of tests and measurements that need to be performed to ensure that the content delivered to the consumer is of a suitable quality has increased. The sheer scale of the number of measurements that must be performed has naturally led to software-based digital test equipment and automated quality control systems.

It was the EBU (European Broadcasting Union) that recognised the need for quality control (QC) in file-based broadcast workflows back in 2010, commenting that “broadcasters moving to file-based production facilities have to consider how to implement and use automated quality control (QC) systems. Manual quality control is simply not adequate anymore.”

In most countries or regions where content will be shown, there are regulatory requirements covering several aspects of produced content: audio loudness levels should comply with the CALM Act in the USA or EBU R128 in Europe, for example. Closed captions or subtitles must be present, sometimes in multiple languages and formats, while in the UK and Japan you must test content to ensure the absence of flashing patterns which may trigger PSE (photosensitive epilepsy) in susceptible viewers.
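Loudness compliance, at least, is straightforward to check in software. Here is a minimal sketch using FFmpeg’s ebur128 filter against the EBU R128 integrated-loudness target of -23 LUFS – the filename and the ±0.5 LU tolerance are illustrative:

```python
import re
import subprocess

def integrated_loudness(path):
    """Measure integrated programme loudness (LUFS) with FFmpeg's ebur128 filter."""
    result = subprocess.run(
        ["ffmpeg", "-nostats", "-i", path,
         "-filter_complex", "ebur128", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # The filter logs running values and a final summary to stderr;
    # the last "I: ... LUFS" figure is the integrated loudness
    matches = re.findall(r"I:\s+(-?\d+\.\d+)\s+LUFS", result.stderr)
    return float(matches[-1])

loudness = integrated_loudness("programme.mxf")
print("PASS" if abs(loudness - (-23.0)) <= 0.5 else "FAIL", loudness, "LUFS")
```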

A human carrying out a QC test can verify audio quality and language, check for visual video artefacts and make the call on whether to fail or pass the content. What humans can’t see, though, is the mass of ancillary data (in digital form) that makes the digital media file valid. Automated QC uses computers and software to check technical parameters that can’t readily (or at all) be examined by a human, and can augment the work of expert QC viewers by alerting them to issues that need human review. Automated QC systems are now the only practical way to validate that a file is correctly constructed according to the requirements of the target platform – including resolution, format, bitrates and file syntax – a task that, at today’s volumes, is beyond any manual process.

The multi-platform, multi-screen media world of today offers content owners and content distributors a host of opportunities to develop substantial new revenue streams. With the large and varied amount of data being produced, automated QC systems need broad functionality to cope. Most systems have been developed with wide file format support covering everything used in broadcast and post-production, as well as support for streaming and network sources, while some even offer RAW file support.

Systems will primarily do a container check to ‘recognise’ the file format and evaluate how many video and audio streams there are, the bitrate, the start timecode and the duration. After that, the system checks the video codec, frame size and frame rate, as well as the frame aspect and pixel aspect ratios. It then checks that all video and audio levels conform to the required standards, and the more intelligent systems will automatically correct chroma, black and RGB gamut levels if they are outside the limits (and will even correct PSE flashing errors). Audio problems such as clipping are easily observable in the decoded stream, and QC systems can determine whether loudness limits, peak limits, instantaneous peaks and true peak value limits have been exceeded, as well as long-term loudness over the span of the content. Other types of checkable baseband audio flaws include silence due to audio dropouts.
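That container check stage is essentially a metadata probe. Here is a minimal sketch using ffprobe, one common tool for the job – the field names follow ffprobe’s JSON output, and the filename is illustrative:

```python
import json
import subprocess

def probe(path):
    """Return container and per-stream metadata as parsed ffprobe JSON."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

info = probe("delivery.mxf")
video = [s for s in info["streams"] if s["codec_type"] == "video"][0]
audio = [s for s in info["streams"] if s["codec_type"] == "audio"]

# The values a QC system would compare against the delivery specification
print(video["codec_name"], f'{video["width"]}x{video["height"]}',
      video.get("r_frame_rate"), f'{len(audio)} audio stream(s)',
      info["format"].get("bit_rate"), info["format"].get("duration"))
```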

Automated QC systems are not foolproof, however, and human intervention is required from time to time. A correctly set-up and administered system running automation will get over 90 percent of file-based work done. The balance of the process will rely on human input, because there are creative interpretations in both audio and video that an automated system may fail to identify through misinterpretation. I have had content ‘failed’ because a close-up shot of a zebra was misinterpreted by a machine as an excessive moiré pattern. A 5.1 surround soundscape mix in a scene shot at the height of cicada breeding season was rejected because the audio track contained continuous DC electrical buzz. My submissions required human intervention and, once passed, I was assured that the system had now learned what a close-up of a zebra looked like and what a cicada sounded like!

The media industry has been revolutionised by the adoption of digital file-based workflows – and having an understanding of the functions that make up file-based workflows, and of what needs to be tested, is essential for knowing how to implement quality control effectively. Broadcasting success relies on quality and, most importantly, on consistency in the processes that produce quality. It’s what broadcasting has been about since day one, and – thanks to some automation, artificial intelligence and a whole lot of human chutzpah – it always will be.

Artificially Intelligent Media

SCREEN AFRICA EXCLUSIVE:

There is no doubt that artificial intelligence (AI) will touch every aspect of business across all industries in the years ahead. In broadcasting and media, it is already having a profound effect. The technology is widely being used to analyse and understand video content, speeding up processes like searching and logging, for example. AI is now developing into an intelligent video creation tool, being able to film and edit complete productions thanks to machine learning algorithms.

Media, in general, holds large amounts of unstructured data, which has traditionally required humans to understand it. Tasks like content management, processing, interpretation and quality checking all take a lot of time and effort. However, current AI and machine learning (ML) algorithms have reached a level of accuracy close to human capabilities, which means many labour-intensive processes can now be taken over by AI instead.

All major cloud providers are offering varying forms of AI to assist with post-production. From shot logging and speech-to-text, to scene and object identification, AI augments human logging, providing richer metadata for each scene and shot. Some post-production software integrates directly with cloud AI for a seamless in-application experience.
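Conceptually, the augmentation is simple: run one or more AI passes over a clip and merge the results into the logging record. A minimal sketch follows – the two AI helpers are hypothetical stand-ins for whatever cloud service would actually be called:

```python
def transcribe(clip_path):
    """Hypothetical stand-in for a cloud speech-to-text call; a real
    integration would send the clip's audio to a vendor API."""
    return [{"start": 0.0, "end": 2.1, "text": "(placeholder transcript)"}]

def detect_labels(clip_path):
    """Hypothetical stand-in for a cloud scene/object-detection call."""
    return ["exterior", "daytime", "crowd"]

def log_clip(clip_path):
    """Combine the AI passes into the kind of rich per-clip metadata
    record a logger would otherwise have to type by hand."""
    return {
        "clip": clip_path,
        "transcript": transcribe(clip_path),
        "labels": detect_labels(clip_path),
    }

print(log_clip("interview_take3.mov"))
```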

Over the past few months, most major post-production edit software has included some form of AI in its platform. Blackmagic Design’s DaVinci Resolve, for example, introduced the DaVinci Neural Engine, which uses deep neural networks, machine learning and artificial intelligence to power new features like speed warp motion estimation for retiming, super scale for up-scaling footage, and auto colour and colour matching, as well as tackling repetitive, time-consuming chores like sorting clips into bins based on who is in the shot.

Avid’s new AI tools are available through Avid | AI, which is also part of the Avid | On Demand cloud services. Avid | AI is a set of cloud services (a combination of Avid-developed tools and tools from Microsoft Cognitive Services) that utilise machine learning, including facial and scene recognition and text and audio analysis. Also released recently was Avid | Transformation, a new suite of automated services including auto-transcoding, watermarking and content repackaging for delivery to any device, anywhere.

Adobe has also updated its video editing applications with useful new features for both After Effects and Premiere Pro users, and some really cool Adobe Sensei AI integration specifically for Premiere Pro. First and foremost, the new Colour Match feature leverages the Adobe Sensei AI to automatically apply the colour grade of one shot to another. This feature comes complete with Face Detection, so Premiere can match skin tones where necessary, and a new split-view allows you to see the results of your colour grade as you go – either as an interactive slider, or as a side-by-side comparison.

In addition to Colour Match and Split View, Adobe has used its Sensei AI to make some audio improvements as well. Auto-ducking will automatically turn down your music when dialogue or sound effects are present, generating keyframes right on the audio track so you can easily override the automatic ducking, or simply adjust individual keyframes as needed.

Adobe After Effects, meanwhile, rolled out a new feature that can automatically remove objects from a video. While Adobe Photoshop has long offered a tool that can conceal areas of a still image with a content-aware fill, the software giant said the ability to do so across multiple frames was made possible by improvements to its machine learning platform, Adobe Sensei. The feature is the latest example of how artificial intelligence is transforming the video production process, making professional content quicker and easier to produce at scale. The new tool is able to track a discrete object across a given clip, remove it and fill the space it occupied with pixels that blend with the surrounding imagery. Adobe suggests that it can be used for anything from removing anachronistic giveaways within a period piece to erasing a stray boom mic.

AI has suddenly become one of the most important technologies and the most in-demand tool for the video creation market owing to its ability to sense, reason, act and adapt. The general popularity of automation (in various business practices) is another contributing factor. But do we think that AI will ever replace human input?

There are many applications that are starting to hint that it is possible. An early example has to be GoPro’s QuickStories, a quirky piece of software that copies the latest footage from your camera to your phone and – using advanced algorithms – automatically edits it into an awesome video.  Another intriguing piece of kit is SOLOSHOT3. Described as ‘your robot cameraman’, SOLOSHOT3 is a 4K camera on a tripod that automatically tracks a subject wearing a tag, keeping them perfectly in frame and in focus whilst recording the action. SOLOSHOT3 can quickly produce an edited and shareable video of highlights using its automated editing tools and post the video online – with no human intervention required.

The BBC’s Research and Development arm has been experimenting with how machine learning and AI could be used both to automate live production and search the broadcaster’s rather large archives. Their experimenting resulted in a documentary, screened late last year, made entirely by algorithms – and while it wasn’t the best bit of television ever made, it was a pioneering achievement from a machine learning perspective.

In Tel Aviv, Israel, a company called Minute has developed a deep learning AI video optimisation tool that automatically generates highlights from full-length videos. Minute’s AI-powered deep learning technology analyses video content to identify peak moments, allowing the system to automatically generate teasers from any video content with simple, seamless integration. Whilst pessimists claim this kind of application could one day replace humans altogether, the developers at Minute believe that their technology complements, rather than replaces, content creators and storytellers.

Most organisations today are exploring how they can best leverage and embrace these new technologies. This technology is also proving to be a boon for video editors and production teams. It enables professionals to focus more on artistic aspects rather than editing, which is considered a rather boring and mechanical task by many. Learning how AI technologies can help the entire production chain by improving quality and efficiency should benefit everyone. New things shouldn’t frighten us, they should excite. Two decades ago, we were all worried about non-linear editing – and look what happened to that concern!

Is OTT over-subscribed?

SCREEN AFRICA EXCLUSIVE:

To say that OTT (over-the-top) technologies have disrupted the world entertainment landscape would be a major understatement. Subscription-based, on-demand OTT platforms have risen in popularity over the past couple of years and are fast displacing traditional TV programming as the preferred medium of entertainment for many. But the huge number of options has also made streaming TV an expensive and complicated mess. With even more streaming services set to launch this year, are we ‘over-subscribed’? And will the newcomers be forced to adopt ad-supported streaming models?

It’s hard to imagine that the OTT space could get even more crowded than it already is, but that’s what 2019 is about to usher in. A number of new streaming services will arrive on the scene, competing with current big players like Netflix, Amazon Prime, Hulu, Showtime, HBO Now and YouTube Premium. Late last year AT&T, the American telecom giant, completed its purchase of Time Warner and immediately announced an October 2019 launch of its own OTT service, which will rely heavily on content from WarnerMedia and the Warner Brothers library. AT&T doesn’t see the platform becoming another Netflix, but rather a huge warehouse of quality content available for purchase. There are no details on the model to be used, nor its pricing, as yet.

Last month Apple announced Apple TV Plus, a brand-new streaming service that, according to its CEO, Tim Cook, “is unlike anything that’s come before.” Apple TV Plus will offer exclusive shows, movies and documentaries and will be ad-free from the start. It will be available in 100-plus countries through a section of the Apple TV app from September 2019 on smart TVs (surprisingly, even on Samsung smart TVs), macOS and iOS.

Not to be outdone, Disney has also announced the launch of its upcoming OTT channel, Disney+. The channel will feature a second live-action Star Wars series, currently in development, and programming from other Disney brands such as Pixar, Marvel and Lucasfilm – and don’t forget, they also own National Geographic. Not only is National Geographic’s content attractive, they have renewed relevance with younger generations of consumers, with 100 million Instagram followers! Disney has assured potential viewers that their new service will be cheap – with some touting the figures of $6 to $8 monthly.

Even Discovery, Inc. executives have said they’re considering a direct-to-consumer offering, especially now that the company has 17 networks in its portfolio following its recent merger with Scripps Networks Interactive. So far, though, the executives have said they’ve only considered the options – which could include bundling a number of brands, like HGTV, Food Network and TLC, into one channel. In theory, it could cost as little as $5 to $8 per month, said president and CEO David Zaslav.

Cutting-edge technologies are enabling OTT players to gather, analyse and generate insights from vast volumes of digital data about user viewing patterns, mostly thanks to artificial intelligence and machine learning. This not only helps players streamline the way they curate and recommend content to their users, but also enables them to create original content that is in sync with the viewing preferences of different audience demographics.

Armed with research data, BBC Studios – the commercial arm of the BBC – and ITV, the UK’s biggest commercial broadcaster, recently launched Britbox, a service that offers high-quality British TV to North American audiences for $6.99 a month. Britbox has 4,000 hours of content on it at present, which makes it the largest collection of British content available to US and Canadian customers. It has some archive classics but also brand-new shows that are available within hours of UK transmission. Essentially, the Britbox model is all about offering subsets of passionate fans the kind of high-quality content that they simply can’t get anywhere else. The less the content resembles what is available elsewhere (e.g. Netflix), the more consumers will be inclined to pay for it, according to the BBC.

Low-cost services may be the key to ongoing OTT success, despite survey results indicating that consumers will subscribe to an average of only 2.25 streaming services, potentially leaving many players out in the cold. As a growing number of digital publishers and broadcasters enter the OTT space, it’s becoming increasingly clear that a subscription-based business similar to that of incumbents like Netflix and Hulu would not be able to sustain newcomers.

These newcomers will need to rely on ad-supported streaming models (AVOD), and this presents a tremendous opportunity for advertisers. AVOD models offer ‘free’ premium programming as a viable consumer alternative to a subscription service. However, there are inherent complexities in maintaining consistent revenue. OTT advertising is in its infancy when compared to broadcast television, and brands and ad agencies have been slow to ditch the traditional cost measurements (such as CPM) that rely on gross impressions to assess cost-effectiveness and profitability.
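CPM itself is simple arithmetic – cost, or revenue, per thousand impressions – which is exactly why it measures reach rather than effectiveness. A quick worked example (the figures are purely illustrative):

```python
def cpm_revenue(impressions: int, cpm_usd: float) -> float:
    """Revenue under a CPM model: the rate is paid per thousand impressions."""
    return impressions / 1000 * cpm_usd

# 2.5 million non-skippable pre-roll impressions sold at a $20 CPM
print(cpm_revenue(2_500_000, 20.0))  # 50000.0 USD
```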

For the AVOD service provider, this means success is directly tied to scale—more viewers, more revenue. But building, maintaining and growing a large and sustainable audience can be an expensive marketing proposition.  On the plus side, the OTT consumer trend is not lost on advertisers looking to reach segmented audiences. The ability to deliver non-skippable pre-roll ads that can be hyper-targeted and localised with back-end performance metrics is very attractive to advertisers, but – for the viewer – will this model simply be over-the-top?


Strangler Application Pattern approach to Media Asset Management

SCREEN AFRICA EXCLUSIVE:

Back in 2004, a Chicago-based software designer, Martin Fowler, visited the rainforests of Queensland on the east coast of Australia. He was intrigued by the huge strangler vines, which seed themselves in the upper branches of fig trees and gradually work their way down the tree until they root in the soil. Over many years they grow, strangling and killing the tree that was their host. Equating this to rewriting critical systems software – where the chosen route is gradually to create a new system around the edges of the old, letting it grow slowly over several years until the old system is no longer used – he coined the metaphor ‘Strangler Application Pattern.’

The strangler pattern is commonly used in technology fields these days, and the term is best understood as incrementally migrating a legacy system by gradually replacing specific pieces of functionality with new applications and services. As features from the legacy system are replaced, the new system eventually covers all of the old system’s features, ‘strangling’ the old system and allowing you to decommission it. There are hundreds of media asset management (MAM) systems out there and, as technology develops, some organisations are adopting artificial intelligence (AI) within the strangler pattern, letting new AI-driven services slowly take over legacy MAM functions until nothing is left but the new system. Furthermore, many MAM systems now allow third-party AI applications to let users train the machine to think like them – with the end goal of eventually replacing the user in order to cut costs.
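In software terms, the pattern usually shows up as a routing facade: every request goes through one front door, and features are redirected to the new system one at a time until the legacy system serves nothing. Here is a minimal sketch – the class and method names are my own illustration, not any particular MAM’s API:

```python
class LegacyMAM:
    def search(self, query):
        return f"legacy results for {query!r}"

class NewMAM:
    def search(self, query):
        return f"new-system results for {query!r}"

class StranglerFacade:
    """Route calls to the new system feature by feature; anything not yet
    migrated falls through to the legacy system. As the migrated set grows,
    the legacy system is gradually 'strangled' and can be retired."""

    def __init__(self):
        self.legacy, self.new = LegacyMAM(), NewMAM()
        self.migrated = {"search"}  # features already moved across

    def __getattr__(self, name):
        target = self.new if name in self.migrated else self.legacy
        return getattr(target, name)

mam = StranglerFacade()
print(mam.search("lion close-up"))  # served by the new system
```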

I have been fortunate enough to be involved in the beta testing of software being designed as a potential plug-in to many existing MAM systems, with the aim of utilising AI to do all the hard graft in terms of logging and metadata creation – by having the system learn to do what the operator does.

The ultimate objective of these initiatives is to provide a completely unmanned media asset manager: a system with no human involvement at all – a typical strangler pattern in practice. The software was originally developed to count and identify the sex of salmon swimming through a canal leading to an upstream river breeding area. It not only extracts enough metadata from a camera clip to make presumptions about season, GPS location and time of day, but also ‘visualises’ the clip to identify the type of subject (human, animal, insect, etc.), identify the species itself (e.g. lion, leopard) and even, possibly, its gender. Another method used is audio sampling, which further assists the system in logging the shot.
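The metadata side of that inference is easy to picture. Here is a minimal sketch of deriving season and time of day from a clip’s creation timestamp and GPS latitude – a simplified stand-in for what the beta system does, with illustrative values:

```python
from datetime import datetime

def infer_context(creation_time_iso, latitude):
    """Infer season and rough time of day from camera metadata alone;
    the hemisphere (from latitude) decides which season cycle applies."""
    t = datetime.fromisoformat(creation_time_iso)
    southern = latitude < 0
    seasons = ["summer", "autumn", "winter", "spring"] if southern else \
              ["winter", "spring", "summer", "autumn"]
    season = seasons[(t.month % 12) // 3]  # Dec-Feb, Mar-May, Jun-Aug, Sep-Nov
    time_of_day = ("night", "morning", "afternoon", "evening")[t.hour // 6]
    return season, time_of_day

# e.g. a clip shot at a southern-African watering hole
print(infer_context("2019-07-14T16:20:00", latitude=-24.0))  # ('winter', 'afternoon')
```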

While all of these things can easily be done by human eyes and ears, the advantage of having artificial intelligence perform the tasks is phenomenal when it comes to speed. Though it is still at a very rough beta stage, this system – in a test conducted using footage that I supplied – was able to identify all the species of animals around a watering hole and, from the camera’s metadata, told me where it was filmed, the time of day and the season of the year. There were 27 shots, the logging took less than a second and, even at this early stage of development, it was 100% accurate.

The beauty of AI is that the system continually learns from any errors it generates. When human intervention overrides incorrect information, the system remembers the changes – and even improves on them.

This developmental software is not unique at all. Third-party plug-ins are being used in many asset management solutions, such as Squarebox Systems’ CatDV, one of the pioneering media asset management solutions out there. The CatDV developers have started integrating video and image analysis options from AI vendors into their suite of systems and, through these integrations, CatDV is offering a range of advanced capabilities (a sketch of how such results feed back into a MAM follows the list), including:

  • Speech-to-text, to automatically create transcripts and time-based metadata;
  • Place analysis, including identification of buildings and locations without using GPS tagged shots;
  • Object and scene detection, e.g., daytime shots or shots of specific animals;
  • Sentiment analysis, for finding and retrieving all content that expresses a certain emotion or sentiment (e.g., “find me the happy shots”);
  • Logo detection, to identify when certain brands appear in shots;
  • Text recognition, to enable on-screen text to be extracted from video; and
  • People recognition, for identifying people, including executives and celebrities.
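However the analysis is sourced, the results come back as time-coded detections that the MAM turns into searchable, time-based metadata. Here is a minimal sketch of that last step – the detection and marker formats are hypothetical, not CatDV’s actual API:

```python
# Hypothetical shapes throughout: the vendor response and marker format
# are illustrative, not any specific product's data model.
detections = [  # what an AI vendor's label-detection pass might return
    {"label": "lion", "start": 12.4, "end": 19.8, "confidence": 0.93},
    {"label": "sunset", "start": 0.0, "end": 31.0, "confidence": 0.88},
]

def to_markers(detections, min_confidence=0.85):
    """Convert AI detections into the time-based markers a MAM can index,
    dropping low-confidence hits so loggers only review likely matches."""
    return [
        {"tc_in": d["start"], "tc_out": d["end"], "tag": d["label"]}
        for d in detections if d["confidence"] >= min_confidence
    ]

for marker in to_markers(detections):
    print(marker)
```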

Another example is Avid | AI Media Analytics, which provides a framework that automates content indexing – facial detection, scene recognition and speech-to-text conversion – by using Microsoft Cognitive Services (Azure Video Indexer) to run machine learning algorithms over content, creating a library of rich metadata that can readily be searched.

Avid have also teamed up with Finnish start-up Valossa – which grew out of one of Europe’s leading computer science and AI labs, at the University of Oulu – and integrated its software into Avid | MediaCentral. The comprehensive audio/visual recognition solution from this partnership can detect and identify people based on their age, speech patterns, sounds, emotions, colours and dialogue.

Dave Clack, CEO of Squarebox Systems, sums it up: “An AI-powered MAM solution offers a way forward. A great approach is to add to the MAM’s existing logging, tagging and search functions through integrations with best-of-breed AI platforms and cognitive engines, such as those from Google, Microsoft, Amazon and IBM, as well as a host of smaller, niche providers. These AI vendors and AI aggregators enable media asset managers to leverage AI analysis tools for speech recognition and video/image analysis, with the flexibility to be deployed either in the cloud or in hybrid on-premises/cloud environments.”

With the media and entertainment industry slowly being transformed by artificial intelligence, the future is bright for AI-powered MAM. In the right hands, AI becomes the key that unlocks the next generation of MAM technologies, and – just like the strangler vine found in the forests of Queensland – artificial intelligence is slowly, ever-so-slowly growing in many broadcast applications, and in some faster than you think.

From Absconsion to Obsolescence: current challenges facing the rental equipment industry

SCREEN AFRICA EXCLUSIVE:

Camera rental houses have been in business since around the 1920s. Scouring through an archived 1924 edition of American Cinematographer magazine, I found numerous ads for ‘equipment for rent’. Even back in those days there were DoPs who owned their own cameras but couldn’t afford lenses, lights or hefty equipment like cranes and dollies, and so – through necessity – the rental industry was born.

The unique specialty of any good rental house is the ability to stay on top of technology and changing trends. In the mid-1990s, there were really only two video camera choices: the Sony Betacam SP (analogue) and Sony Digital Betacam (digital). Nowadays the industry is awash with formats, codecs, cameras and hundreds of different types of lenses, offering TV producers more choices than ever.

The first big shift came when, after years of development, the ATSC (Advanced Television Systems Committee) established 18 different categories of high definition. Sony and Panasonic took different paths – towards 1080i and 720p respectively – and both manufacturers began developing cameras and technologies to handle their chosen solutions. Meanwhile, TV networks, production companies and cinematographers also had to choose, because cameras couldn’t do both, creating a format battle that challenged rental houses by forcing them to support more formats. What’s more, when cameras used to cost six figures, rental companies had a monopoly – but with the increased use of ‘prosumer’ cameras for at least portions of TV shows, that is no longer the case.

Travis Boult, of Camera Hire in Adelaide, Australia, maintains that “rental facilities have to offer lots of choices, but choices that make sense in the bigger financial picture”. Even the smaller rental companies generally carry selections like the ARRI Alexa, the Phantom, RED, Sony FS7, FS5 and F55, Canon Cinema EOS C300 and C500, as well as the Canon 5D Mark II, the Sony a7S Mark II and various SD card-based POV cameras. Each rental house has had to decide which video cameras to purchase and support.

As the number of formats and, more recently, codecs change rapidly, rental houses must be cautious about amortising technology that may be obsolete before it’s paid for. Although changing technology and time would seem to strike a blow against the existence of rental houses, Dave Kenig of Panavision in the USA says they are, in fact, doing better than ever. “Today there are probably more camera rental houses than ever due to the proliferation of digital equipment. Many of the older survivors have now transitioned over to digital cameras, while many new smaller houses have appeared.”

Meanwhile, Stacey Keppler of Zootee Studios in Cape Town feels technology changes are not felt as severely in South Africa as in the States or European markets, because the South African market seems only to rent what they know. “For example, the Panasonic Varicam seems hugely underrated here, or even the newish EVA1 gives you 10-bit 4:2:2 and dual native ISO onto an SD card, which is really great and certainly better than Sony’s FS5 – but we haven’t bought one yet, as people would rather rent an older Sony FS7, which is tried and tested amongst their peers, than ‘experiment’ on other camera formats,” says Keppler. “This is peachy for rental companies though, because we don’t have to stock every camera under the sun. So local filmmakers and content creators being sceptical of what they don’t know actually makes stocking our inventory a lot easier.”

Jenny Balee van Vlerken of Bangkok Video Services agrees, maintaining that technology shifts are a double-edged sword. “While it’s good to offer your customers a choice, the problem is that you can’t own everything and it doesn’t make financial sense to have every grade of camera in your rental house.”

Many, if not all, rental companies no longer have a monopoly on the cameras used in film and television productions, and so lenses and accessories have become the emphasis.  For Camera Hire in Adelaide, lenses are a big investment. “Digital cameras have a shelf life of about three years,” says Boult, “but lenses will be around for twenty years plus.”

Similarly, Rule Boston Camera, a rental facility based in Massachusetts, USA, has a full line of accessories and has concentrated its inventory on building out this part of the market. “We have an eye towards what people need, be it lighting or dollies, jibs and so on,” says general manager Brian Malcolm. “Lenses in particular are a great investment, as their price has gone up — not down — and there are more choices out there.”  According to Malcolm, ten years ago, almost everyone rented a complete camera package. “Now, every other job is to accessorise someone’s C300 or Epic,” he says. “DPs are being hired because of their talent, but also on the basis of their personal camera package. And they come to us for the accessories.”

Embracing change is usually the way to keep the doors open and the company flourishing – and, while rental houses have demonstrated nimbleness in adjusting to industry changes, at the same time their executives realise that their business has become more precarious and changeable than ever before. Which brings us on to an unfortunate challenge that rental houses of today have to cope with: theft.

Theft in the rental industry has, unfortunately, become an unavoidable issue worldwide. American and Canadian rental companies have formed a trade organisation called the Production Equipment Rental Group, which runs a collective database to try to track and recover missing equipment being offered for resale throughout the world. Last year’s listings were valued at more than $20 million for small rental houses alone. Most thefts fall into one of three categories: direct break-ins at rental facilities; theft from production vehicles; and, perhaps the largest category, fraud, where customers with fake ID have rented gear and disappeared. Some of the thefts are clearly done by professionals who know exactly what they are after, while others appear to be random crimes of opportunity.

Insurance for rental equipment is different in each country, and for Stacey Keppler of Zootee Studios, it’s a tricky subject in South Africa. “There is essentially only one underwriter that will insure smaller camera rental companies and it would be ideal if there was some competition in the market, or even if insurance excesses weren’t so high.” Keppler continues, “The threat of absconsion holds us back a lot. It is seen as a ‘trade risk’ and not covered by the insurance underwriter. ‘Absconsion’ is essentially when we hand over our equipment willingly to someone who then does not return it. Ever. Yes, one would think this is just plain theft – but to the insurance underwriter, it is not. These scammers have different ways of deceiving us and, as a result, our registration protocol has become more and more elaborate which frustrates honest customers. Every year these scammers get smarter which forces us to do deeper background checks into our new customers. But the fact is, every new customer is a risk – and this shouldn’t be the case”.

As a matter of interest, do you know which camera is the most-rented world-wide? The humble Sony a7S Mark II mirrorless camera tops the list, while ARRI’s Alexa takes a close second spot!

Brainstorming for Better On-Air Graphics

SCREEN AFRICA EXCLUSIVE:

Integrated media organisation and recognised leader in global entertainment, World Wrestling Entertainment (WWE), decided during the course of last year that it was time to up their creative game and develop new and innovative visual content to enhance the production values of their flagship product, WrestleMania. They embarked on an ambitious project to improve their on-air graphics presentation, by incorporating high-end Augmented Reality (AR) content and technology to enhance their already action-packed productions in new and exciting ways. The powerhouse of smack-downs and take-downs threw out many questions, searched high and low for the perfect solution and ended up in Spain to find the answer.

World Wrestling Entertainment consists of a portfolio of businesses that create and deliver original content 52 weeks a year to global audiences, reaching more than 800 million homes around the world in 25 languages. WWE’s events, especially WrestleMania, are some of the world’s most visually stunning when it comes to graphics production value, which could – arguably – be called more realistic than the wrestling action itself.

For WrestleMania 34 in 2018, WWE wanted to include high-end AR content and technology to enhance both production values and the WWE Superstars’ presence. After research and deliberation, they chose Spanish company Brainstorm to help them in their quest. Jointly headquartered in Madrid and Valencia,  Brainstorm is a global company that provides real-time 3D graphics solutions for broadcast, feature film production and corporate presentations. Brainstorm’s flagship product, eStudio, is unique in the market due to its sophistication, open architecture and versatility, enabling both design and real-time playout of virtual studios and 3D graphics, as well as the easy creation of customised products and applications.

WWE chose InfinitySet, Brainstorm’s virtual set solution, because of its ability to render realistic content with convincing reflections and transparencies in real-time. WWE especially liked the quality of InfinitySet’s rendering with PBR materials and advanced shaders like refractions, which they could add to their existing models. WrestleMania was phase one of an ambitious project aimed to create new ways to impact global audiences by using advanced imagery mixed with live entertainment. The AR graphics package they developed for WrestleMania included substantial amounts of glass, as well as other semi-transparent and reflective materials. They also needed an engine that could render particles well, which was something InfinitySet was very capable of.

In the virtual environment, WWE required a powerful graphics solution to create content that convincingly mixes real life with virtual elements – the goal being a composite so seamlessly married together that one cannot discern the real from the computer-generated. The toolset available in InfinitySet allowed WWE to accomplish this objective better than any of the other solutions they researched in the market. Especially useful were the depth-of-field/focus feature and the ability to adjust the virtual contact shadows and reflections to achieve very realistic results. InfinitySet also allowed the WWE production team to create a wide range of content, from on-camera wraparounds to be inserted into long-format shows, to short, self-contained pieces.
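To make the idea of a seamlessly married composite a little more concrete, here is a minimal sketch – in Python, with NumPy and OpenCV – of the kind of image operations involved: the rendered virtual element is softened to match the camera’s depth of field, given a soft contact shadow, and then blended over the live frame with a standard ‘over’ operation. Every function name and parameter here is an illustrative assumption; this is not Brainstorm’s actual pipeline.

```python
# Illustrative compositing sketch only – not InfinitySet's real pipeline.
import cv2
import numpy as np

def composite_virtual_element(live_frame, element_rgb, element_alpha,
                              defocus_sigma=2.0, shadow_offset=(8, 12),
                              shadow_strength=0.4):
    """Blend a rendered virtual element over a live camera frame.

    live_frame:    HxWx3 float32 image in [0, 1] from the studio camera
    element_rgb:   HxWx3 float32 render of the virtual object
    element_alpha: HxW   float32 matte of the virtual object in [0, 1]
    """
    # Soften the render slightly so it matches the camera's depth of field.
    if defocus_sigma > 0:
        element_rgb = cv2.GaussianBlur(element_rgb, (0, 0), defocus_sigma)
        element_alpha = cv2.GaussianBlur(element_alpha, (0, 0), defocus_sigma)

    # Fake a soft contact shadow: shift the matte, blur it, then use it
    # to darken the live frame where the shadow would fall.
    dy, dx = shadow_offset
    shadow = np.roll(element_alpha, shift=(dy, dx), axis=(0, 1))
    shadow = cv2.GaussianBlur(shadow, (0, 0), 6.0) * shadow_strength
    shadowed = live_frame * (1.0 - shadow[..., None])

    # Standard "over" operation: virtual element over the shadowed frame.
    a = element_alpha[..., None]
    return element_rgb * a + shadowed * (1.0 - a)
```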

INSIDE THE BOX

The complete setup comprises three different studios: a multi-render Virtual Studio, a smaller AR Studio and a portable AR system. The Virtual Studio includes three cameras, each with its own InfinitySet Player renderer (with Unreal Engine plugins), all controlled from the InfinitySet Controller via a touchscreen in the control room, with keying handled by Blackmagic Ultimatte 12 chroma keyers. For the live video signals, InfinitySet integrates with three Ross Furio robotic camera systems on curved rails, two of them sharing the same track with collision detection. The setup also includes an OnDemand license to manage the playout of data-driven AR graphics.
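As a rough way to visualise how these pieces fit together, the sketch below describes the multi-render Virtual Studio as a declarative configuration: three camera channels, each with its own renderer, plus the rail and keyer assignments. Every field name is a hypothetical illustration for the reader’s benefit, not InfinitySet’s real configuration format.

```python
# Hypothetical, simplified description of the three-channel Virtual Studio.
# Field names are illustrative; this is not InfinitySet's real config format.
virtual_studio = {
    "controller": {"ui": "touchscreen", "location": "control_room"},
    "channels": [
        {"camera": f"furio_{i}",            # Ross Furio robotic camera
         "renderer": "infinityset_player",  # one render engine per camera
         "engine_plugin": "unreal",
         "keyer": "ultimatte_12"}           # external chroma keyer
        for i in (1, 2, 3)
    ],
    # Two of the three robotic cameras share one curved rail, so the
    # motion system must guard against them colliding.
    "rails": [
        {"track": "curved_a", "cameras": ["furio_1", "furio_2"],
         "collision_detection": True},
        {"track": "curved_b", "cameras": ["furio_3"]},
    ],
    "playout": {"license": "on_demand", "graphics": "data_driven_ar"},
}
```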

The AR Studio, meanwhile, is a compact version of the multi-render one, and relies on a single jib-mounted camera with a Mo-Sys StarTracker and an InfinitySet + Track license. The AR Studio receives only video from the camera, and all the keying required is done using InfinitySet’s internal chroma keyer. This smaller studio, suitable for more compact events, allows the creation of AR content with simpler setups and requires fewer resources to install, drive and derig.
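For readers who have never looked under the hood of a software chroma keyer, the toy example below shows the core principle in a few lines of Python with NumPy: derive an alpha matte from how strongly each pixel’s green channel dominates its red and blue, then use that matte to composite in a new background. Real keyers – whether InfinitySet’s internal one or hardware like the Ultimatte – add spill suppression, edge processing and much more; this is only the underlying idea.

```python
# Toy green-screen chroma key – the principle only; real keyers do far more.
import numpy as np

def green_screen_key(frame, background, threshold=0.15, softness=0.1):
    """frame, background: HxWx3 float32 images with values in [0, 1]."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]

    # How strongly green dominates the other two channels at each pixel.
    green_dominance = g - np.maximum(r, b)

    # Soft-threshold into an alpha matte: 1 keeps the foreground,
    # 0 keys the pixel out entirely.
    alpha = 1.0 - np.clip((green_dominance - threshold) / softness, 0.0, 1.0)
    alpha = alpha[..., None]

    return frame * alpha + background * (1.0 - alpha)
```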

Finally, the Portable AR system is a custom-made road case with a redundant InfinitySet with tracking and internal chroma keyer for live AR productions on the road. This system will deliver advanced AR content to a wider range of WWE productions and special events held throughout the year. The kit was designed to be sent anywhere in the world and requires minimal installation: operators just need to take it to the event, open the lid, plug in the power – and they instantly have a turnkey AR system with redundancy to ensure reliable operation on remote sites. InfinitySet is used to create content that airs across WWE’s many media platforms, including their award-winning direct-to-consumer WWE Network, as well as their various digital outlets and through their broadcast partners.

Brainstorm Multimedia is a relatively young company, having started out in 1993 as a provider of 3D graphics services to broadcasters. These services were based on software previously developed by the company’s founder, Ricardo Montesa, later commercialised as the eStudio suite – a 3D graphics and virtual studio solution currently regarded as the industry’s fastest 3D real-time rendering engine. Today, Brainstorm’s product portfolio covers a wide range of solutions, from news clients like CNBC and sports affiliates within the European Football League, to financial and election graphics, weather presentation and film pre-visualisation, as well as branding applications like those of the WWE.

Brainstorm is currently immersed in an ambitious technological partnership with Avid to provide eStudio’s sophisticated and highly intuitive real-time 3D rendering technology embedded in AMG (Avid Motion Graphics), Avid’s new line of broadcast graphics products. Through this agreement, Brainstorm provides Avid, one of the most influential companies in the broadcast sector worldwide, with the core technology behind eStudio’s 3D real-time rendering engine.

Virtually real – studios from the future

SCREEN AFRICA EXCLUSIVE:

Advanced design and production tools that enable broadcasters to create virtual objects that appear as if they’re actually ‘in the studio’ have been available for a few years now, but have only become a mainstream feature of major TV networks over the last 12 months. Political elections, weather updates, news coverage and major sporting events – from the Olympic Games to the FIFA World Cup – have all made augmented reality graphics one of the broadcast industry’s hottest trends, and there is a wide range of solutions to enhance the viewing experience.

Broadcasters are increasingly adopting augmented reality (AR) graphics for enhanced storytelling, allowing for better interaction between presenters and graphics objects (or even remote locations) to get the story across. For example, the BBC’s coverage of the 2018 FIFA World Cup took advantage of the latest augmented reality tools to enhance its Match of the Day broadcasts, featuring AR graphics that helped experts and hosts tell the story of each match with stats and team news. Denmark’s TV 2 built a new studio for coverage of the Tour de France, featuring on-air personalities, virtual backgrounds and a large touchscreen tabletop that displayed 3D content to aid the analysis of the race. In the US, Fox Sports has just completed building a massive new state-of-the-art, multi-purpose augmented reality studio set to become the new home of the 2019 NASCAR Racing Hub and the 2019 Daytona 500.

AR gives broadcasters the tools they need to tell a complex story in a very visual way, with the presenter driving the narrative by presenting visually-engaging representations of data. With a few years of experimentation behind them, broadcasters now have a much better idea of where AR makes sense, how to use it and what kind of AR elements are effective.

Recently, The Weather Channel in the US showed off its augmented reality broadcast prowess, taking viewers inside a virtual version of Hurricane Florence. Guided by a host surrounded by ‘virtual peril,’ the network used the approach to show residents in the path of the hurricane why they should evacuate their homes. The insert went viral, and the internet couldn’t stop talking about how immersive the presentation was and the impact it had on viewers. Although nothing like the kind of immersion one experiences while wearing an AR headset, or even using an AR app on a smartphone, what The Weather Channel’s technique achieves is getting viewers accustomed to consuming content in the context of immersive environments.

Moreover, the growing maturity of the industry has helped graphics vendors develop better solutions. As the technology develops, costs decrease – and we’re now seeing more realistic, even hyperrealistic, graphics produced by more cost-effective render engines that are bridging the gaps and removing barriers to entry. Denmark’s TV 2 used 3D graphics software from Vizrt for the visual content of its Tour de France coverage. The Viz Engine platform produced the studio’s virtual backgrounds, AR content and touchscreen board, while Viz Pilot enabled the production team to create 3D content using templates designed for journalists. Viz Virtual Studio gave TV 2’s producers the ability to tell stories easily without worrying about the limitations of physical studio facilities.

Avid has also moved into the market, offering a full solution for virtual studios, camera tracking, augmented reality and video wall control – enabling broadcasters to work within a single unified workflow. Maestro | Designer is a tool for real-time graphics creation which seamlessly integrates with Maestro | AR, a suite of augmented reality tools, which – when paired with the new Maestro | Engine real-time graphics and video rendering platform – gives broadcasters the power to manage their entire production process simply and easily.

The race to perfect virtual environments and AR production solutions has largely centred on game engines like the Unreal Engine from Epic Games – the technology behind the mega-popular online video game Fortnite. The Unreal Engine is what Fox Sports is using to power the graphic elements in its new NASCAR studio. This is because graphics packages tied to game engines deliver better animation and higher-quality content, and create a level of realism unrivalled by any broadcast character generator.
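In broad strokes, every engine-driven AR system runs the same per-frame loop: read the tracked camera’s pose and lens data, mirror them on a virtual camera inside the engine, render the graphics layer, and composite it over the live video. The skeleton below sketches that loop in Python; every object and method is a placeholder standing in for a real tracking feed and render engine, not an actual Unreal Engine or vendor API.

```python
# Skeleton of a tracked-camera AR render loop. All objects and methods
# are placeholders, not a real tracking or engine API.

def run_ar_loop(tracker, engine, video_in, video_out):
    while True:
        frame = video_in.capture()    # live frame from the studio camera
        pose = tracker.read_pose()    # camera position/rotation (e.g. from a tracker)
        lens = tracker.read_lens()    # zoom/focus, for matching the field of view

        # The virtual camera must mirror the real one exactly, or the
        # AR graphics will slide against the live image.
        engine.set_virtual_camera(pose, lens)
        graphics, alpha = engine.render()  # AR layer plus its alpha matte

        # Composite the AR layer over the live frame and play it out.
        video_out.send(frame * (1.0 - alpha) + graphics * alpha)
```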

It’s no surprise, then, that virtual studio and AR solution providers like Ross Video and Vizrt are currently integrating the Unreal Engine into their products. XPression is Ross Video’s line of real-time motion graphics systems, clip servers, workflow tools and software applications that power its augmented reality offerings. The Canadian company has impressed the audiences and marketing partners of NBC Sunday Night Football, Eurosport, BBC World, Google, YouTube, Space London and China’s Esports powerhouse VSPN with its AR solutions, virtual studios, real-time motion graphics and robotic camera systems.

Recently, CBS announced some big plans for its Super Bowl LIII broadcast in February, including 8K cameras and the use of augmented reality. The network will have 115 cameras at the game, with “multiple” 8K cameras intended for “dramatic close-up views” of the action: a first for any US network. In addition to these Ultra HD 8K shots, CBS plans to use augmented reality graphics as a major part of its Super Bowl feed. Four cameras will be used to present live AR images, with a total of 14 cameras being used as part of its virtual graphics strategy.

After decades of simply presenting in front of a green screen, the likes of TV news presenters and weathercasters, sports broadcasters and documentary makers now have incredible new tools and technologies that enhance storytelling and permit them to engage more deeply with viewers. As TV audiences become more distracted (and distractible), broadcasters should look to use every device available to them to secure the viewer’s attention and loyalty. Augmented reality is an ever-evolving and exciting way for TV stations to meet the audience’s desire for differentiated big-screen experiences – and to do so without breaking the bank.
