The Reversal: Codecs & Capture at NAB 2011

It is common at the annual equipment exhibition of the National Association of Broadcasters (NAB) convention to find products that are smaller, lighter, less expensive, and better than what came before. Occasionally, there are even breakthroughs. But a complete reversal is something different. Yet that’s what happened at NAB 2011.

Some think “How?” is the most important question in television engineering. For me, it has always been “Why?” Why did the engineers who came before me either do something a certain way or not do it that way? And is their reason still valid?

Consider reducing information transmission or recording requirements. Today we refer to that as digital bit-rate reduction or compression, but the techniques long predate the digital era. In fact, they predate broadcast television itself.

Samuel Lavington Hart received British patent 15,720 on June 25, 1915. He’d applied for it exactly one year earlier. It’s called “Transmitting pictures of moving objects and the like to a distance electrically,” and it contains a description of interlaced video.

Interlace is theoretically a way of increasing the picture resolution or frame rate without increasing transmitted or recorded information. The picture’s scanning lines are alternated between successively transmitted fields, and the human visual system combines them to full resolution. For still pictures, it works very well; for motion, not as well. Interlace artifacts can be considered the first degradation caused by compression.
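For those who like to see the mechanism spelled out, here is a minimal sketch (illustrative Python, obviously not from the 1915 patent) of how 2:1 interlace splits a frame into two half-data fields:

```python
import numpy as np

# 2:1 interlace: each transmitted field carries only every other scanning
# line, so each field holds half the frame's information.
frame = np.arange(6 * 8).reshape(6, 8)   # a toy 6-line "frame"

field_one = frame[0::2]   # lines 1, 3, 5 (first field)
field_two = frame[1::2]   # lines 2, 4, 6 (second field)

# The viewer's visual system "re-weaves" the fields into a full frame;
# for a static scene the reconstruction is exact.
rewoven = np.empty_like(frame)
rewoven[0::2] = field_one
rewoven[1::2] = field_two
assert (rewoven == frame).all()

# With motion, the two fields are sampled at different instants, which is
# the source of interlace artifacts on moving edges.
```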

More than 20 years later, the U.S. Radio Manufacturers Association (RMA) proposed television standards that included the 2:1 interlace (two fields of alternate lines per frame) still used today. Their 1936 standard also suggested double-sideband modulation, which would have restricted picture resolution (or required larger television channels) compared to what analog television in the U.S. ended up offering. The information-reducing scheme that dealt with the channel was called “vestigial-sideband” modulation, introduced by the RMA in 1939. It helped them win an Emmy award last year (some things take time).

Color television had its own information-reduction schemes, based on the human visual system’s reduced sensitivity to detail in color compared with detail in brightness. Versions of those schemes continue in the HDTV era in so-called “4:2:2” or “4:2:0” color-sampling formats and even beyond HDTV in the sensor of Sony’s brand-new F65 (right), introduced at NAB 2011.
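The idea is easy to sketch. In the illustrative Python below (the array sizes are arbitrary), 4:2:0 sampling keeps brightness at full resolution while averaging each color-difference plane down by two in both directions:

```python
import numpy as np

# Illustrative 4:2:0 chroma subsampling: luma (Y) keeps full resolution;
# each chroma plane is averaged 2x down horizontally and vertically,
# exploiting the eye's lower acuity for detail in color.
h, w = 4, 8
Y  = np.random.rand(h, w)    # full-resolution brightness plane
Cb = np.random.rand(h, w)    # one full-resolution color-difference plane

# Average each 2x2 block of chroma into a single sample.
Cb_420 = Cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(Y.size, Cb_420.size)   # each chroma plane now has 1/4 the samples of luma
```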

The first commercially successful videotape recorder, in 1956, was made possible by two engineering breakthroughs: spinning heads (to deal with the high frequencies necessary for video without requiring tape speeds as high as 30 feet per second – 360 inches per second – as in previous prototypes) and narrow-band frequency modulation (to deal with variations of magnetic output with frequency). Neither breakthrough reduced the information necessary to record, and that’s how video recording worked well into the digital era. But the digital era did usher in data compression.
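A bit of back-of-the-envelope arithmetic shows what the spinning heads bought (the head-wheel figures below are approximate quadruplex-era numbers, used purely for illustration):

```python
import math

# Approximate quadruplex-era figures (illustrative, not exact specs):
drum_diameter_in = 2.0    # head-wheel diameter, inches (assumed)
drum_rpm = 14_400         # head-wheel rotation rate (assumed)

# Effective head-to-tape "writing speed" comes from the spinning heads,
# not from dragging the tape itself past a fixed head.
writing_speed = math.pi * drum_diameter_in * (drum_rpm / 60)
print(f"head-to-tape speed = {writing_speed:.0f} inches per second")
# About 1,500 in/s, well beyond the 360 in/s the fixed-head prototypes
# needed, while the tape itself crawls along at a tiny fraction of that.
```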

Digital compression started with videoconferencing (requiring video to be squeezed through low-data-rate telephony channels), then spread to cable TV (six digitally compressed channels in the space of one analog channel), and then to HDTV broadcast transmission. The same technologies made possible consumer DVDs and consumer and even professional digital video camcorders.

As with all previous compression, there were quality sacrifices. So high-end recording still used uncompressed (except for interlace and color sub-sampling) techniques. Uncompressed HDTV, unfortunately, required a huge data rate, leading to the enormous, heavy, and expensive type D-6 format. A Thomson Voodoo unit is shown at right, the tape transport sitting atop the processor electronics.

The combined transport and processor weighed close to 250 lbs. The cassettes alone weighed about six pounds each, were more than 14 inches long, and could capture just one hour of programming. There were 34 spinning heads on the scanning drum.

Rather than trying to squeeze all of that into something that could fit into a shoulder-mounted camcorder, Sony opted for a compressed and subsampled (3:1:1) format for HDCAM, the first digital HD camcorder. As it had when it introduced the compressed Digital Betacam format, Sony demonstrated multiple generations of re-recording, showing no apparent additional picture degradation.

Unfortunately, there is no guarantee that HDCAM will be the only bit-rate-reduction coding-decoding (codec) stage that a video sequence will pass through. Something might be captured in one codec, edited in another, distributed in a third, broadcast in a fourth, and re-transmitted (by cable or satellite) in a fifth. At least one major broadcasting organization has been experiencing quality degradation apparently caused by such codec concatenation.
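The effect is easy to demonstrate. The sketch below uses repeated JPEG still-image re-encoding as a stand-in for a chain of video codecs (purely a proxy; none of the actual broadcast codecs is simulated), with each stage using a different quality setting, the way each link in a real chain uses a different codec:

```python
import io
import numpy as np
from PIL import Image

# Simulate codec concatenation: pass one image through five lossy
# "stages" (JPEG at varying qualities) and track the accumulated error.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
img = Image.fromarray(original)

for stage, quality in enumerate([85, 70, 90, 60, 75], start=1):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)  # one codec stage
    buf.seek(0)
    img = Image.open(buf).convert("RGB")           # decode for the next stage
    mse = np.mean((np.asarray(img).astype(float) - original) ** 2)
    print(f"stage {stage}: mean squared error vs. original = {mse:.1f}")
```

Note that even the high-quality stages cannot restore what earlier stages threw away; the error relative to the original never meaningfully comes back down.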

Even a low-bit-rate consumer codec like AVCHD might be acceptable for single-generation viewing. Multi-stage broadcasting has different requirements.

HDCAM was introduced in 1997. There have been many developments in video compression since. Sony’s own HDCAM SR uses much gentler compression. Today there are also Apple’s ProRes, Avid’s DNxHD, CineForm, Dirac, JPEG-2000, Panasonic’s AVC-Ultra, REDCODE, SMPTE VC-1, and others. Some are standardized, others proprietary. Advances in data-transfer rates and circuit integration have brought compressed HD recorders down to palm size, as in the Atomos Ninja, Cinedeck, Convergent Design nanoFlash, Fast Forward Video SideKick, and Sound Devices PIX (left), among others (such as those based on the Fraunhofer Institut’s Micro HD design).

In addition to their small size, these recorders also have relatively low costs (most are between $2000 and $3000). But, at NAB 2011, Blackmagic Design introduced the HyperDeck Shuttle. It is probably smaller and lighter than any of the others, and it is almost an order of magnitude less expensive: $345. Furthermore, it is an uncompressed HD recorder, like the D-6, but capable of dealing with essentially any HD format, with 10-bit precision and with up to sixteen 48-kHz, 24-bit audio channels. At right is a picture of it that appeared in David Leitner’s “Mondo NAB, Part 3” in Filmmaker magazine <http://www.filmmakermagazine.com/news/2011/04/leitners-mondo-nab-part-three/>.

How could Blackmagic Design cram so much circuitry into such a small package at such a low price? The simple answer is that they didn’t. Instead, they asked, “Why?”

Why did manufacturers start recording compressed forms of HD? At the time their formats were introduced, there wasn’t much choice. A D-6 deck was perhaps the size of a dishwasher (but probably heavier). Recording a billion bits per second (let alone the somewhat higher figure Blackmagic’s HyperDecks are capable of) was nearly impossible. But that was a long time ago.
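The arithmetic behind that billion-bit figure is simple enough (a rough sketch assuming 10-bit 4:2:2 sampling, which averages two samples per pixel):

```python
# Rough uncompressed-HD data rate (illustrative figures):
width, height = 1920, 1080
frames_per_second = 30        # e.g., 30 interlaced or progressive frames/s
bits_per_pixel = 2 * 10       # 10-bit luma plus 10-bit shared chroma (4:2:2)

bits_per_second = width * height * frames_per_second * bits_per_pixel
print(f"{bits_per_second / 1e9:.2f} Gb/s")   # about 1.24 Gb/s of active video
```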

Today, it’s actually easier to record uncompressed than compressed (and other uncompressed recorders showed up at NAB 2011). For a while, there will still be capacity and media-cost issues favoring compression, but storage improves constantly in capacity and economy.
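Some rough numbers make the trade-off concrete (the drive size below is just an assumed example):

```python
# Minutes of ~1.24 Gb/s uncompressed HD on an assumed 256 GB solid-state drive:
video_rate_bps = 1.244e9      # from the data-rate arithmetic above
drive_bytes = 256e9           # assumed drive capacity

seconds = drive_bytes * 8 / video_rate_bps
print(f"~{seconds / 60:.0f} minutes per drive")   # roughly half an hour
```

Half an hour per drive is workable for many productions, and the number only grows as storage does.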

Certainly, the $345 HyperDeck Shuttle won’t replace all HD recorders. Blackmagic Design itself introduced another model, the HyperDeck Studio, with dual media slots to get around the capacity issue and with a picture display and more physical controls. But even it has a list price of just $995.

Perhaps it’s wise to ask whys.