Media Asset Storage: it’s in our DNA



When it comes to storing media assets (or any data, for that matter), whether on premises or up in the cloud, the immediate future of data storage still belongs to magnetic tape. Recent technological advancements have given new life to hard drives too, but when it comes to long-term archiving of assets, the tape or hard drive of the future could be something very old, something that everyone has inside them: DNA.

The first commercial digital-tape storage system, IBM’s Model 726, could store about 1.1 megabytes on one reel of tape. Today, a modern LTO tape cartridge can hold 30 terabytes. Meanwhile, a single robotic tape library can contain up to 556 petabytes of data. While tape doesn’t offer the fast read/write speeds of hard drives, the medium’s advantages are many.
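To put that growth in perspective, a rough back-of-the-envelope comparison of the figures above (assuming decimal units throughout) looks like this:

```python
# Back-of-the-envelope comparison of the tape capacities mentioned above.
# Assumes decimal units: 1 MB = 1e6 bytes, 1 TB = 1e12 bytes, 1 PB = 1e15 bytes.

model_726_bytes = 1.1e6        # IBM Model 726: ~1.1 MB per reel
lto_cartridge_bytes = 30e12    # modern LTO cartridge: 30 TB
library_bytes = 556e15         # robotic tape library: 556 PB

growth = lto_cartridge_bytes / model_726_bytes
print(f"One LTO cartridge holds ~{growth:,.0f} times more than a Model 726 reel")

cartridges = library_bytes / lto_cartridge_bytes
print(f"A 556 PB library corresponds to roughly {cartridges:,.0f} such cartridges")
```

That is a capacity gain of more than 27 million times per unit of media over roughly seven decades.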

For starters, tape is reliable, with error rates four to five orders of magnitude lower than those of hard drives. It is also energy efficient: once the data has been written, a tape cartridge simply sits in a slot in the library, consuming no power until it’s needed again. And tape is very secure, with built-in, on-the-fly encryption – and if a cartridge isn’t mounted in a drive, its data cannot be accessed or modified. But the main reason tape remains so popular is simple economics. Tape storage costs about one-sixth of what you’d pay to keep the same amount of data on disks, which is why you’ll find tape systems almost anywhere massive amounts of data are being stored.
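That one-sixth ratio adds up quickly at archive scale. A minimal sketch, where the ratio comes from the text but the absolute per-terabyte disk price is a hypothetical figure chosen purely for illustration:

```python
# Illustrative archive-cost comparison.
# The 1/6 cost ratio is from the text; the absolute disk price is hypothetical.

disk_cost_per_tb = 30.0                  # hypothetical: US$ per TB on disk
tape_cost_per_tb = disk_cost_per_tb / 6  # tape at ~one-sixth the cost of disk

archive_tb = 10_000  # a 10 PB media archive
disk_total = archive_tb * disk_cost_per_tb
tape_total = archive_tb * tape_cost_per_tb

print(f"Disk: ${disk_total:,.0f}   Tape: ${tape_total:,.0f}   "
      f"Saving: ${disk_total - tape_total:,.0f}")
```

Whatever the absolute prices in a given year, the ratio means the saving scales linearly with the size of the archive.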

But, as mentioned, tape is slow, and so the development of hard drive technology continues. The longevity of the hard disk, and the rapid rise of solid-state drives (SSDs), can be attributed to a process of continual improvement aimed at overcoming tape’s main drawback: speed. The hard disk game changed dramatically in 2005 with perpendicular magnetic recording (PMR), in which, broadly speaking, magnetised bits stand perpendicular to the surface of the platter instead of lying flat, making room for more bits. However, after years of data density improvements using PMR (densities doubled between 2009 and 2015), researchers are once again hitting physical limits: each magnetic ‘bit’ is becoming too small to reliably hold its data, increasing the potential for corruption.

Shingled magnetic recording (SMR), introduced by Seagate in 2014, is one way to fit more data on a disk’s platter. In an SMR disk, each newly written data track overlaps part of the previously written one, narrowing it so that more tracks fit on the platter. The thinner track can still be read, because read heads can be physically narrower than write heads. Western Digital launched a 15TB SMR hard drive in 2018 targeting data centres, with plans to increase the capacity per rack by up to 60TB soon.

The next big thing is two-dimensional magnetic recording (TDMR). Another Seagate technology, it aims to solve the problem of reading data from tightly packed hard disk tracks, where the read head picks up interference from the tracks surrounding the one being read. TDMR disks use multiple read heads to pick up data from several tracks at once, then use that extra signal to separate the wanted track from the surrounding noise, discarding what isn’t needed. 14TB and 16TB TDMR drives came onto the market in 2019.

The multiple read heads of TDMR disks can improve read speeds, but to improve write speeds while increasing data density you need to move beyond SMR to the latest hard disk technology: heat-assisted magnetic recording (HAMR). HAMR aims to overcome the compromises of SMR by changing the material of the platter to one in which each bit maintains its magnetic integrity at a smaller size. As the name implies, the solution is to use a laser to heat part of the platter just before the data is written. The heat lowers the material’s coercivity enough for the data to be written; as the heated section cools, the coercivity rises again, locking the data in place. HAMR has the potential to increase hard disk density tenfold.

So both hard drive and magnetic tape technologies work for the storage and retrieval of data assets – but the trouble is that both are battling to keep up with the flood of data being generated now, and forecast to be generated in the future. What’s the solution? The hard drive of the future could actually be something very old, something inside every person reading this: DNA.

Deoxyribonucleic acid, or DNA, is the molecule that dictates how an organism develops. DNA can also hold a staggering amount of information: 215 petabytes (1 petabyte is about 1 million gigabytes) of data in a single gram. Just as impressive is its longevity. Traditional media like magnetic tape and flash memory tend to degrade, whether through repeated use or simply the passage of time. DNA degrades too, but at a significantly slower rate: depending on the storage conditions, it can last thousands, or even tens of thousands, of years.
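The 215-petabytes-per-gram figure is easier to grasp next to the tape numbers from earlier in the piece. A quick sketch of the arithmetic:

```python
# DNA storage density arithmetic, using the 215 PB/gram figure from the text.
DNA_PB_PER_GRAM = 215

library_pb = 556  # the robotic tape library mentioned earlier
grams_needed = library_pb / DNA_PB_PER_GRAM
print(f"{library_pb} PB would fit in about {grams_needed:.1f} g of DNA")

pb_per_cartridge = 30 / 1000  # one 30 TB LTO cartridge = 0.03 PB
cartridges_per_gram = DNA_PB_PER_GRAM / pb_per_cartridge
print(f"One gram of DNA holds as much as ~{cartridges_per_gram:,.0f} LTO cartridges")
```

In other words, an entire 556-petabyte tape library would fit in less than three grams of DNA.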

The idea of storing data on DNA was proposed back in the 1960s by Soviet scientist Mikhail Neiman. In the decades since, researchers have made great strides in making it achievable – though at a price. Currently, the most cost-effective DNA storage technique costs about US$3,500 per MB to write the data and US$1,000 per MB to read it, so don’t retire your LTO or hard drive array just yet.
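Those per-megabyte prices explain why DNA storage is still a research curiosity. Scaling them up to a single gigabyte, using only the figures quoted above:

```python
# What today's DNA storage costs imply at gigabyte scale,
# using the per-megabyte figures quoted in the text.
WRITE_COST_PER_MB = 3_500  # US$ to synthesise (write) one megabyte
READ_COST_PER_MB = 1_000   # US$ to sequence (read) one megabyte

size_mb = 1_000  # a single gigabyte, in decimal megabytes
write_cost = size_mb * WRITE_COST_PER_MB
read_cost = size_mb * READ_COST_PER_MB
print(f"Writing 1 GB to DNA: ${write_cost:,}   Reading it back: ${read_cost:,}")
```

At US$3.5 million to write a gigabyte and another million to read it back, the economics have a long way to fall before DNA competes with a few dollars' worth of tape.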

DNA’s storage capabilities, however, are intriguing and have huge potential for computing in the future. For years, technology roughly followed the path laid out by Moore’s Law, which stated that every two years or so, we could double the number of transistors that fit on a microchip. However, computer chips have become so small these days that it’s increasingly unlikely we can continue to squeeze more transistors in there. Essentially, Moore’s Law is dead, but DNA-based computing for the future is very much alive and well.