It Will Be Converted
If you shoot HDTV, one thing is pretty much guaranteed. Whatever your show looks like, someone will change it.
The world of moving imagery has changed many times. There was the first motion-picture toy, the first live-action movie, the first theatrical projection, the introduction of sound, and the introduction of color. There was also the parallel development of television. But perhaps the most earth-shaking event occurred on September 23, 1961. That’s when NBC first broadcast the CinemaScope movie How to Marry a Millionaire.
It was by no means the first time a movie had been shown on television. The aspect ratio (ratio of width to height) of ordinary U.S. television pictures, 4:3 or 1.33:1, was chosen specifically to match that of movies (although, by the time the selection was made, movies had already shifted to something slightly wider). But before September 23, 1961, television had stuck to broadcasting only movies of approximately that 1.33:1 shape.
There had been giant-screen movies even in the 19th century, but, until the early 1950s, most viewers saw movies on relatively small screens — even in theaters. With some of the audience watching from the last row of seats, directors had to make sure that any detail in the image significant to the plot was big enough to be seen by all.
That’s why those early movies transferred so well to television. The screen shapes matched closely, and television viewers had no difficulty discerning any details of importance.
When television was first broadcast in 1927, the motion-picture industry was already huge. Even 20 years later, movies had little to fear from TV. But the growth of the small screen beginning in 1948 was tremendous; television began to cut seriously into theatrical motion picture revenues.
Movie exhibitors sought to find ways to offer theatrical experiences that couldn’t be duplicated at home. Those ranged from free housewares at some showings to stereoscopic 3-D to seats wired to deliver “shocks” to viewers during horror movies. In one 3-D movie show, during a rockslide scene, theater employees hurled foam rocks at the audience from above the screen. For one horror movie, a promoter had a “nurse” clearly visible at the theater, supposedly to administer first-aid to patrons whose hearts couldn’t stand the thrill. And then there were the widescreen processes.
Cinerama offered viewers a different sort of “in-depth” experience than did the 3-D movies. It involved shooting with three cameras and projecting with three projectors onto a curved screen. One camera/projector pair covered the center; the other two covered the sides. The seams between the three views were intended to be invisible.
As Cinerama movies could be seen only in specially equipped Cinerama theaters, no previous motion-picture technical standards needed to be observed. In addition to their wide view, therefore, Cinerama movies were also shot and projected at rates faster than the usual 24 frames per second.
Todd-AO, another widescreen process, also used a higher shooting speed (and wider film). But, recognizing that revenues would be limited if movies could be released only in specially equipped theaters, Todd-AO producers also had their movies shot on 35-mm film at the usual 24 frames per second for ordinary theatrical release.
Perhaps the most successful of the anti-television widescreen processes was CinemaScope (also said to be the first English word to incorporate a capital letter in the middle). Like Cinerama and Todd-AO, CinemaScope required projection in specially equipped theaters, but the modifications were minor — largely the addition of a cylindrical lens to each projector to stretch out the widescreen image previously squeezed onto standard 35-mm film run at the usual 24 frames per second. Stereophonic sound was also added.
Not every theater was a candidate for the CinemaScope modifications, however. It was essential that the screen be not only wide but also very large. The intention was that CinemaScope movies would provide an immersive experience even to viewers in the last row. Fine details in the images could be used to further the plot because they would be blown up to large proportions on the giant screen.
Directors were encouraged to make use of the abnormal wide- and large-screen characteristics of the new processes. Whenever a scene could be staged so as to utilize the full width of the aspect ratio, it was supposed to be.
Jean Negulesco leapt into the new process willingly. In his 1953 CinemaScope movie How to Marry a Millionaire, he had his actresses stretch out on a chaise longue and hold conversations that extended from one end of the 2.55:1 frame to the other.
Television could offer nothing comparable. In fact, when How to Marry a Millionaire came out in theaters, television didn’t even offer color, let alone a giant screen or a wide aspect ratio.
Then came the earth-shaking event of September 23, 1961. One of the big commercial networks had decided to create what it thought would be a hit series, NBC Saturday Night at the Movies. If the series was to be special, it had to be different from the usual movies seen on television. The network needed a big, star-laden hit of recent memory. It chose How to Marry a Millionaire.
Unfortunately, TV screens in 1961 were still quite small, and all of them had a 1.33:1 aspect ratio. Could a 2.55:1 giant screen movie be properly displayed on a small, narrow television set? No. Negulesco’s framing was lost completely. With commercial breaks, it also lost its timing. What the movie retained for TV audiences was the memory of its being a hit and the name recognition of such stars as Lauren Bacall, Betty Grable, and Marilyn Monroe.
For decades, the movie and television industries struggled with the problem of how to deal with putting widescreen, larger-than-life movies onto 21-inch (and smaller) picture tubes. Nothing could make up for the loss of detail, but, from a technical standpoint, there were three techniques for dealing with the mismatch in aspect ratios.
The truncation technique lopped off whatever wouldn’t fit. Variations tried to move the TV-screen-shaped window to follow the most significant action in the wider movie-screen shape, either introducing panning the director never intended (“pan-and-scan”) or cutting from position to position between frames, introducing new, unintended edits.
Sometimes neither was possible. A lengthy scene in the television transfer of Little Big Man consists of a conversation taking place while all that is seen is two pairs of feet propped on a barroom table. The rest of the bodies were outside the truncation area.
The squeezing technique matched the top, bottom, and sides of the two screen shapes. As a result, the television version showed very tall, skinny characters, horizontally squeezed. The technique was typically used only for title sequences, to ensure that all of the titles would fit on the TV screen.
Finally, there was a shrinking technique. It matched only the sides of the images on the two screens, leaving black bars above and below the picture area on the TV set. The shrunken image within the larger video frame looks something like a mail slot, so the technique acquired the name letterbox. It perfectly preserved the film director’s frame but not the reason for the framing. Not only was detail that would have been perceptible only on the large screen lost, but this technique shrank the images even further. If NBC had chosen to use letterbox to transmit How to Marry a Millionaire in 1961, little more than half the TV screen would have had a picture.
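The arithmetic behind all three techniques comes down to the ratio of the two screen shapes. Here is a minimal sketch (the aspect ratios are those given above; the function names are my own):

```python
# Illustrative aspect-ratio arithmetic for film-to-TV transfers.
# Both fractions reduce to tv_ar / film_ar; what differs is what is lost.

def truncation_kept(film_ar, tv_ar):
    """Fraction of the film frame's width that survives truncation
    (pan-and-scan): the TV window uses the full film height, so only
    tv_ar / film_ar of the original width fits."""
    return tv_ar / film_ar

def letterbox_active(film_ar, tv_ar):
    """Fraction of the TV screen's height that carries picture when
    the full film width is shrunk to fit the screen width."""
    return tv_ar / film_ar

cinemascope = 2.55   # How to Marry a Millionaire, as released
ntsc = 4 / 3         # 1.33:1 television screen

print(f"truncation keeps {truncation_kept(cinemascope, ntsc):.0%} of the film width")
print(f"letterbox fills {letterbox_active(cinemascope, ntsc):.0%} of the TV height")
```

Both fractions come out to about 52 percent, which is why a truncated transfer discards nearly half of a CinemaScope frame, and why a 1961 letterbox transmission would have filled little more than half the screen.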
The advent of the home videocassette recorder, originally fought by the movie industry, created a new revenue stream for Hollywood. Today, a theatrical feature will usually generate more money through video release than from large-screen projection.
As a result, the same studios that once urged directors to use wide- and giant-screen characteristics to spite television are now asking them to bear television in mind throughout the production process. In the case of the animated feature A Bug’s Life, it was possible to create completely different versions optimized for each of the two media. More often, there’s a compromise.
Now, even aside from any residual concerns about motion pictures, videography is facing its own wide/narrow and large/small problems. The issue is high-definition television (HDTV) seen on ordinary TV sets.
Less than ten years ago, an HDTV camera, lens, and recorder combination could have cost a million dollars. Some HDTV programming was converted to film for theatrical release. Other material was seen in small (but still large-screen) HDTV theaters.
The fine detail and wide aspect ratio of HDTV meant that shots could be framed wider and last longer than in typical video programming. A typical standard-definition television (SDTV) sequence, for example, might begin with an establishing shot, cut to a close-up of one character, cut to a close-up of another character, and then return to a medium shot showing both. In HDTV, that four-shot sequence might be replaced by a single, stationary shot.
If there’s enough detail in the image for the viewer to see what the close-ups might otherwise have shown, there’s no need for cutting and zooming. Unless the director wants to call attention to a particular face (even giant-screen movies have close-ups), picking out certain details can be left to a viewer’s eyes.
Unfortunately, as with CinemaScope movies, such “pure” shooting works only if it’s known that the result will be seen exclusively on a large, detailed, wide, HDTV or movie screen. In the era of digital television (DTV) and inexpensive HDTV production equipment and displays, nothing of the sort is known.
The Advanced Television Systems Committee (ATSC) DTV standard for the U.S. lists 36 different possible video formats, from 640 x 480 in a 1.33:1 aspect ratio to 1920 x 1080 in a 1.78:1 (16:9) version. And, when the Federal Communications Commission (FCC) adopted the standard, it refused to restrict broadcasters even to those 36 choices.
ABC has adopted a 1280 x 720 form of HDTV with progressively scanned images transmitted roughly 60 times a second (720p). CBS has chosen 1920 x 1080 with about 30 interlaced frames each second (1080i). The Fox Super Bowl coverage this year was shot 720 x 480 with close to 30 interlaced frames per second (480i), but with a 1.78:1 aspect ratio instead of the usual 1.33:1. Some stations carried it as 1.33:1 480i; others converted it to 1.78:1 480p.
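Those choices trade spatial detail against temporal rate in different ways. A rough comparison of pixel throughput, using the nominal rates just mentioned (the actual ATSC rates are 59.94 and 29.97 Hz; the tabulation is my own framing):

```python
# Nominal pixel throughput of the three formats mentioned in the text.
formats = {
    "720p (ABC)":  (1280, 720, 60),   # width, height, pictures per second
    "1080i (CBS)": (1920, 1080, 30),
    "480i (Fox)":  (720, 480, 30),
}
for name, (w, h, rate) in formats.items():
    pixels = w * h
    print(f"{name}: {pixels:>9,} pixels/frame, {pixels * rate / 1e6:5.1f} Mpixels/s")
```

By this crude measure, 720p and 1080i land within about 12 percent of each other: one spends its capacity on motion, the other on detail.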
Then there are displays. Movie theaters have dealt with multiple screen shapes for years. A revival of Gone with the Wind should be seen in a 1.33:1 aspect ratio; Abel Gance’s Napoleon has scenes intended to be seen at 4:1.
Theaters adjust for these varying shapes through the use of draping at the screen and masking in the projector. At a video trade show, JVC once demonstrated a TV set with moving slats, like those on a roll-top desk, that automatically slid in from the sides to mask the blank areas of the 1.78:1 screen when 1.33:1 programming was shown.
Other TV sets exhibit the blank areas when programming of the wrong shape is displayed. For a long time, there was a feeling among those programming the major networks that letterbox transmissions, aside from their reduced detail, would meet with scorn from consumers, who would interpret the blank parts of the screen as a form of cheating them out of what they paid for. Recently, however, experiments with letterbox transmission have met with viewer approval.
Unfortunately, aside from viewer taste, there’s another problem. The light-emitting phosphors of a TV screen age as they are used. The blank areas, because they aren’t being used, don’t age.
As the phosphors age, they emit less light, and the blue phosphors age fastest. An occasional letterboxed program is not a problem, but substantial viewing with black stripes is. Eventually, the active area becomes slightly yellowish relative to the inactive area. Because the human visual system is particularly sensitive to hue shifts, bluish stripes become clearly visible when the full screen is later filled with picture.
Grey unused areas on direct-view picture tubes take longest to cause stripe burn-in; only stripes whose brightness varies with the average picture level cause no burn-in at all. Next come black stripes on direct-view picture tubes. Then there are projection tubes. Worst of all are plasma panels: their ultraviolet-light-activated phosphors are more susceptible to aging than the electron-beam-activated phosphors in picture tubes and projection tubes.
That’s not the only problem with plasma displays. Because they do not have the same light-emitting characteristics as picture and projection tubes, they cannot reproduce low light levels as smoothly. Some plasma displays exhibit contouring (the introduction of edges that look like the lines on a contour map) in dark areas; others try to get rid of the contouring through an effect (error diffusion) that increases the grainy look of dark areas.
Bright programming looks great on plasma displays; dark programming doesn’t. And that’s just one new display technology. There’s an alphabet soup of PDP (plasma display panels), LCD (liquid crystal displays), DLP (digital-light-processing dynamic-micromirror projectors), LED (light-emitting diode displays), D-ILA (digital image light amplifiers), etc. — with more to come. Each, as one session at the recent Hollywood Post Alliance (HPA) Technology Retreat in Palm Springs pointed out, has its own idiosyncrasies.
Film projection and picture tubes have their idiosyncrasies, too, but we’ve learned, over the course of many years, how to use them to make beautiful pictures. In the beginning, movie makers knew their work would be seen only projected from film in a movie theater. Then they had to deal with television.
Now videographers are also in a strange position. It’s no longer safe to assume that a program will look anything like the way it looks on an edit-room monitor — even accounting for the artifacts of broadcast transmission.
Maybe it will be seen as HDTV; maybe as truncated SDTV. Maybe it will be viewed on a picture tube; maybe it will be on a plasma panel.
Is it better to compromise the highest quality that few might see so that all will see something pretty good? Or is it better to go for the best that a viewer with optimum picture-tube-based equipment will see and not worry about the rest of the audience?
In other words, tube or not tube? That is the question.
How often might a high-definition program be converted from one format to another — even if it stays HDTV the whole time? Consider this scenario:
Suppose ABC wanted to include a clip from CBS in an HDTV football show. The CBS HDTV format of choice is 1080i (though some older material at the network was shot 1035i). But ABC’s is 720p. That’s one required format conversion.
WFAA-DT is the ABC digital-television affiliate in the Dallas-Ft. Worth market. The station is owned by Belo Corp., which has decided upon a 1080i format for all of its stations, regardless of network affiliation. That’s two conversions.
AT&T Broadband operates cable-television systems in that market. A company official once told a congressional hearing that if the cable operator carried HDTV, it would be only as 720p. That’s acceptable — even for 1080i programming — according to current FCC DTV cable-carriage rules. Therefore, should an AT&T Broadband cable system carry WFAA-DT programming, it might be as 720p. That would be a third conversion.
A consumer hooked up to the AT&T Broadband cable system might decide to watch the show on a Pioneer HDTV plasma panel. Pioneer’s digital-television set-top receiver/decoder box had only a 1080i output, meaning a fourth conversion. But the same manufacturer’s HDTV plasma panel could display only 720p HDTV, a fifth conversion!
A bionic viewer with cameras instead of eyes might introduce yet another conversion, but that’s hardly likely. On the other hand, the scenario is about a football game, so, if the players are up for it, there could be many more conversions.