SCREEN AFRICA EXCLUSIVE:
There is no doubt that artificial intelligence (AI) will touch every aspect of business across all industries in the years ahead. In broadcasting and media, it is already having a profound effect. The technology is already widely used to analyse and understand video content, speeding up processes such as searching and logging. AI is now also developing into an intelligent video creation tool, capable of filming and editing complete productions thanks to machine learning algorithms.
Media organisations, in general, hold large amounts of unstructured data that has traditionally required humans to interpret. Tasks like content management, processing, interpretation and quality checking all take a lot of time and effort. However, current AI and machine learning (ML) algorithms have reached a level of accuracy close to human capability, which means many of these labour-intensive processes can now be handed over to AI.
All major cloud providers are offering varying forms of AI to assist with post-production. From shot logging and speech-to-text, to scene and object identification, AI augments human logging, providing richer metadata for each scene and shot. Some post-production software integrates directly with cloud AI for a seamless in-application experience.
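To make the idea of AI-augmented logging concrete, here is a minimal sketch of how per-shot metadata might be enriched. The label and transcript inputs stand in for the output of a cloud vision and speech-to-text service; the field names and function are hypothetical illustrations, not any vendor's actual schema or API.

```python
# Illustrative sketch: merging AI-generated scene labels and transcript
# lines into a per-shot log. Timestamps are in seconds; a label or
# dialogue line belongs to a shot if it falls in [start, end).

def enrich_shot_log(shots, labels, transcript):
    """Attach labels and dialogue whose timestamps fall inside each shot."""
    enriched = []
    for shot in shots:
        start, end = shot["start"], shot["end"]
        enriched.append({
            **shot,
            "labels": sorted({l["name"] for l in labels
                              if start <= l["time"] < end}),
            "dialogue": [t["text"] for t in transcript
                         if start <= t["time"] < end],
        })
    return enriched

shots = [{"id": 1, "start": 0.0, "end": 4.0},
         {"id": 2, "start": 4.0, "end": 9.0}]
labels = [{"time": 1.2, "name": "car"}, {"time": 5.5, "name": "beach"},
          {"time": 6.1, "name": "car"}]
transcript = [{"time": 4.8, "text": "Let's head to the coast."}]

log = enrich_shot_log(shots, labels, transcript)
```

The richer the automatically generated labels and transcripts, the more searchable the resulting shot log becomes, which is the main benefit editors see from these services.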
Over the past few months, most major post-production editing platforms have added some form of AI. Blackmagic Design's DaVinci Resolve, for example, introduced the DaVinci Neural Engine, which uses deep neural networks, machine learning and artificial intelligence to power new features such as speed warp motion estimation for retiming, super scale for up-scaling footage, and auto colour and colour matching, and to tackle repetitive, time-consuming chores such as sorting clips into bins based on who appears in the shot.
Avid’s new AI tools are available through Avid | AI, which is also part of the Avid | On Demand cloud services. Avid | AI is a set of cloud services (a combination of Avid-developed tools and tools from Microsoft Cognitive Services) that utilise machine learning, including facial and scene recognition and text and audio analysis. Also released recently was Avid | Transformation, a new suite of automated services including auto-transcoding, watermarking and content repackaging for delivery to any device, anywhere.
Adobe has also updated its video editing applications with useful new features for both After Effects and Premiere Pro users, and some really cool Adobe Sensei AI integration specifically for Premiere Pro. First and foremost, the new Colour Match feature leverages Adobe Sensei to automatically apply the colour grade of one shot to another. This feature comes complete with Face Detection, so Premiere can match skin tones where necessary, and a new split view allows you to see the results of your colour grade as you go – either as an interactive slider, or as a side-by-side comparison.
In addition to Colour Match and the split view, Adobe has used its Sensei AI to make some audio improvements as well. Auto-ducking will automatically turn down your music when dialogue or sound effects are present, generating keyframes right on the audio track so you can easily override the automatic ducking, or simply adjust individual keyframes as needed.
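The underlying idea of auto-ducking – generate gain keyframes that dip the music under dialogue and restore it afterwards – can be sketched in a few lines. The duck level and fade time below are arbitrary assumptions for illustration, not Adobe's actual defaults, and the keyframe format is invented.

```python
def ducking_keyframes(dialog_spans, duck_db=-12.0, fade=0.25):
    """Given (start, end) dialogue intervals in seconds, emit
    (time, gain_dB) keyframes that fade the music down just before
    dialogue begins and back up just after it ends."""
    keys = []
    for start, end in dialog_spans:
        keys += [(round(start - fade, 3), 0.0),   # music at full level
                 (round(start, 3), duck_db),      # ducked under dialogue
                 (round(end, 3), duck_db),        # hold the duck
                 (round(end + fade, 3), 0.0)]     # restore full level
    return keys

# one stretch of dialogue from 2.0s to 5.0s
kf = ducking_keyframes([(2.0, 5.0)])
```

Because the output is ordinary keyframes on the track, an editor can drag any of them to override the automatic result, which is exactly the workflow the feature is designed around.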
Adobe After Effects, meanwhile, rolled out a new feature that can automatically remove objects from a video. While Adobe Photoshop has long offered a tool that can conceal areas of a still image with a camouflage fill, the software giant said the ability to do so across multiple frames was made possible by improvements to its machine learning platform, Adobe Sensei. The feature is the latest example of how artificial intelligence is transforming the video production process, making professional content quicker and easier to produce at scale. The new tool is able to track a discrete object across a given clip, remove it and fill the space it occupied with pixels that blend with the surrounding imagery. Adobe suggests that it can be used for anything from removing anachronistic giveaways within a period piece to erasing a stray boom mic.
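The core trick behind removing an object across frames – borrow the background from moments when the object isn't there – can be shown with a deliberately tiny toy. This is not Adobe's method: real tools combine motion tracking with learned inpainting, whereas this sketch simply takes, for each masked pixel, the median of that pixel across frames where it is unobstructed.

```python
import statistics

def temporal_fill(frames, masks):
    """Toy content fill. Each frame is a 2-D grid of grey values; each
    mask flags pixels covered by the unwanted object (1 = masked).
    A masked pixel is replaced by the median of the same pixel taken
    from frames where it is NOT masked."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[row[:] for row in f] for f in frames]  # deep copy
    for y in range(h):
        for x in range(w):
            clean = [f[y][x] for f, m in zip(frames, masks) if not m[y][x]]
            if clean:  # only fill if some frame shows the background
                for f_idx, m in enumerate(masks):
                    if m[y][x]:
                        out[f_idx][y][x] = statistics.median(clean)
    return out

# a stray object (value 200) covers one pixel in the middle frame only
frames = [[[10, 10], [10, 10]],
          [[200, 10], [10, 10]],
          [[10, 10], [10, 10]]]
masks = [[[0, 0], [0, 0]],
         [[1, 0], [0, 0]],
         [[0, 0], [0, 0]]]
filled = temporal_fill(frames, masks)
```

The toy only works when the camera and background are static; handling motion is precisely why the production-grade feature needed tracking and machine learning.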
AI has suddenly become one of the most important technologies, and the most in-demand tool, in the video creation market, owing to its ability to sense, reason, act and adapt. The general popularity of automation in various business practices is another contributing factor. But do we think that AI will ever replace human input?
There are many applications that are starting to hint that it is possible. An early example has to be GoPro’s QuickStories, a quirky piece of software that copies the latest footage from your camera to your phone and – using advanced algorithms – automatically edits it into an awesome video. Another intriguing piece of kit is SOLOSHOT3. Described as ‘your robot cameraman’, SOLOSHOT3 is a 4K camera on a tripod that automatically tracks a subject wearing a tag, keeping them perfectly in frame and in focus whilst recording the action. SOLOSHOT3 can quickly produce an edited and shareable video of highlights using its automated editing tools and post the video online – with no human intervention required.
The BBC’s Research and Development arm has been experimenting with how machine learning and AI could be used both to automate live production and to search the broadcaster’s vast archives. These experiments resulted in a documentary, screened late last year, made entirely by algorithms – and while it wasn’t the best bit of television ever made, it was a pioneering achievement from a machine learning perspective.
In Tel Aviv, Israel, a company called Minute has developed a deep learning video optimisation tool that automatically generates highlights from full-length videos. The technology analyses video content to identify peak moments, allowing the system to automatically generate teasers from any video content with simple, seamless integration. Whilst pessimists claim this kind of application could one day replace humans altogether, the developers at Minute believe that their technology complements, rather than replaces, content creators and storytellers.
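At its simplest, picking "peak moments" means scoring each second of footage and selecting the highest-scoring stretches. The sketch below is a bare-bones illustration of that idea under invented assumptions (per-second interest scores such as crowd noise or motion energy); it is not Minute's actual algorithm, which is deep-learning based.

```python
def peak_moments(scores, window=3, top_n=2):
    """Slide a window over per-second interest scores and return the
    start times (in seconds) of the highest-scoring non-overlapping
    windows -- a crude stand-in for highlight selection."""
    sums = [(sum(scores[i:i + window]), i)
            for i in range(len(scores) - window + 1)]
    sums.sort(key=lambda t: (-t[0], t[1]))  # best windows first
    picked = []
    for total, start in sums:
        # keep only windows that don't overlap ones already picked
        if all(abs(start - p) >= window for p in picked):
            picked.append(start)
        if len(picked) == top_n:
            break
    return sorted(picked)

# hypothetical per-second excitement scores for a short match recording
scores = [1, 1, 8, 9, 7, 1, 1, 1, 6, 7, 8, 1]
highs = peak_moments(scores)
```

A production system replaces the hand-made scores with learned ones, but the selection step – rank candidate clips, drop overlaps, keep the best – remains recognisably the same shape.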
Most organisations today are exploring how they can best leverage and embrace these new technologies. AI is also proving to be a boon for video editors and production teams, enabling professionals to focus more on the artistic aspects of their work rather than on editing tasks that many consider boring and mechanical. Learning how AI can help the entire production chain by improving quality and efficiency should benefit everyone. New things shouldn’t frighten us, they should excite us. Two decades ago, we were all worried about non-linear editing – and look what happened to that concern!