Getting Cranky

Originally published in Videography March 2001

Aside from offering the elusive “look” of film, cinematography has long been able to achieve certain functions that video couldn’t. With the introduction of a new variable-frame-rate camcorder, however, the list may have just gotten much shorter.

Question: Why do the people in silent movies appear to be rushing around so fast? Whatever answer you come up with may be wrong, but one possibility is that the speedy motion was intentional, and that may go a long way towards explaining the success of 24p video production.

Sometimes, history seems like an onion. It looks one way on the surface, but, peel back a layer, and the picture changes. Peel back another layer, and the picture changes again — perhaps back to where it was in the first place.

The projection speed of modern film-based motion pictures (the “modern” era being defined as anything since 1926, 75 years ago) is almost invariably 24 frames per second (fps). Similarly, movies are also shot at 24 fps. But that wasn’t always the case.

Motion picture pioneer Thomas Edison once insisted on a shooting speed of 46 fps, declaring that “anything less will strain the eye.” In a way, he was right.

We experience two different psychophysical phenomena with respect to the rate of change of sequences of images. One is called fusion, the speed at which the successive pictures cease to be individual photographs and turn into motion. The other is flicker, the speed at which flashing illumination appears to become steady.

Both phenomena vary with such parameters as image brightness and motion and visual angle, but, to pick numbers in an approximate ballpark for a viewer of normal motion in a dark movie theater with a dimly lit screen, the fusion frequency might be around 16 fps while the flicker frequency would be around 45 fps. Thus, Edison’s figure was about right for his viewers to enjoy flicker-free movies. Edison’s competitor, the American Mutoscope & Biograph Company, chose a similar 40 fps.

The high frame rates didn’t last long because they used up so much film, so the fusion frequency became the controlling factor, and early movies became known for their flickering, giving us the term “flick” for a motion picture. Even today’s 24 fps isn’t fast enough to eliminate flicker, so movie projectors use multi-bladed shutters. A frame is pulled into place, the shutter opens, the shutter closes, the shutter opens again, the shutter closes again, and only after that second closure is the next frame pulled into place. The film rate of image change of 24 fps is considered sufficient for fusion; the projector image-presentation speed of 48 per second is considered sufficient to eliminate flicker.

Motion picture camera operators in the silent era controlled their shooting rate with a crank they turned. Similarly, projectionists cranked their projectors.

A widely held theory is that both camera operators and projectionists of the silent era cranked film through their respective devices at 16-18 fps. That would explain why we see rapid motion when these films are shown today. We project them (or transfer them to video) at 24 fps, between 33% and 50% too fast.
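The arithmetic behind that range is simple enough to sketch. This illustrative Python fragment (the function name is invented for the example) just compares shooting and projection rates:

```python
# How fast silent-era footage appears when projected at 24 fps.
# (Illustrative only; not from any standard or product.)
def apparent_speedup(shot_fps, projected_fps=24):
    """Percentage by which on-screen motion runs too fast."""
    return (projected_fps / shot_fps - 1) * 100

print(apparent_speedup(16))  # 50.0 -- half again too fast
print(apparent_speedup(18))  # about 33.3 -- a third too fast
```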

Unfortunately, neither camera operators nor projectionists had speedometers regularly available to them. An operator might vary frame rate considerably over the course of a day. And then there were the economic factors involved.

The slower the shooting rate, the less film stock and processing costs were involved. On the other hand, the faster the projection speed, the more shows an exhibitor could offer, and, therefore, the more tickets that could be sold in a day.

Initially, movie suppliers suggested to exhibitors the appropriate projection speed, usually defined in minutes per reel. When those suggestions were largely ignored, one producer started printing running lengths onto movie posters, so audiences would know when they weren't getting their money's worth. Exhibitors simply covered over those portions of the posters and continued to run projectors too fast.

Shortly before the beginning of the sound era, the industry reached a form of stability, as identified by Stanley Watkins, a Western Electric chief engineer then developing one of the motion-picture sound systems. Large movie theaters were exhibiting films at 20-24 fps, smaller ones at a slightly higher rate (to compensate for fewer seats). Meanwhile, movies were being shot at 16-20 fps, sometimes even slower.

Thus, it isn’t merely modern audiences who see rapid motion in silent movies. Audiences of the time did, too. And, knowing that, directors set the pacing of their movies accordingly. In fact, they could intentionally “undercrank” (run cameras too slowly) to create even faster pacing for some scenes or “overcrank” (run cameras faster than usual) to slow things down for other sequences. When sound brought in an era of matched shooting and exhibiting speeds, some initial reactions were that the pacing of sound movies was excruciatingly slow.

The inventors of television faced the same fusion and flicker issues, but they couldn’t count on homes being as dark as movie theaters. The National Television System Committee (NTSC) in the United States chose to have 60 images per second presented to viewers to take care of the flicker frequency. There wasn’t any mechanism — even as simple as a two-bladed rotating shutter — to divorce the exhibition image rate from the camera image rate.

That is, there wasn’t any mechanism other than film. Movies could be shot at 24 fps and shown on television at 60 images per second. The difference in the image rates was accommodated through what came to be known as “3-2 pulldown.” One film frame would be pulled down into position and used to illuminate three video images. The next would be pulled down into position and illuminate two. The 3-2-3-2 sequence would repeat indefinitely.
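The cadence described above is easy to sketch in code. This hedged Python example (names invented for illustration) expands film frames into a 60-image-per-second sequence:

```python
# A minimal sketch of 3-2 pulldown: each 24-fps film frame is shown
# for three or two of the 60-per-second video images, alternately.
def pulldown_32(film_frames):
    """Expand film frames into a 60-image-per-second sequence."""
    video = []
    for i, frame in enumerate(film_frames):
        repeats = 3 if i % 2 == 0 else 2  # the 3-2-3-2 cadence
        video.extend([frame] * repeats)
    return video

# Four film frames (1/6 second at 24 fps) become ten video images
# (1/6 second at 60 images per second), so the timing stays correct.
print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```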

At least that’s what was done in the United States. In Europe, where 50 television images per second is standard, 24-fps film was sped up to 25 fps and converted to video with a 2-2 sequence.

As in the United States, much television programming in Europe is film based. In fact, it’s sometimes U.S.-film based. Back in the days when film-shot programming was also edited and distributed on film, this posed little technical difficulty. Reels of film would be shipped from the U.S. to Europe. The only “problem” was a 4% increase in speed (and audio pitch).
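That 4% figure is just the ratio 25/24. A short Python sketch (the function name is invented) shows the effect on running time and audio pitch:

```python
import math

# Effect of running 24-fps film at 25 fps, the European practice.
# (Illustrative sketch; not from any broadcast specification.)
def pal_speedup(minutes):
    """New runtime and pitch rise when 24-fps film is run at 25 fps."""
    factor = 25 / 24                    # the ~4% speed increase
    new_minutes = minutes / factor
    semitones = 12 * math.log2(factor)  # audio pitch rise in semitones
    return new_minutes, semitones

mins, semis = pal_speedup(60)
print(f"{mins:.1f} min, +{semis:.2f} semitones")  # 57.6 min, +0.71 semitones
```

A nominal 60-minute program thus loses about two and a half minutes, and the sound rises by roughly three-quarters of a semitone.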

There were, however, financial issues. Creating film copies was expensive. So was shipping 35-mm film. And the U.S. television industry was shifting from editing on film to editing electronically.

That last shift meant that there might no longer be a final version of a program on film. Conversion between 60- and 50-image-per-second video was never perfect, and the 3-2 pulldown process made it worse. European broadcasters began balking at airing standards-converted film-shot programming.

A number of organizations came up with an alternative. The 3-2 pulldown process would be electronically reversed, creating 48-image-per-second video. That, in turn, would be recorded in such a way that, when the tape was played back, it would look like normal 50-image-per-second video made from 24-fps film transferred at 25 fps.

In normal, interlaced video (video in which images alternate between fields of odd-numbered or even-numbered scanning lines) two images form a single frame with all scanning lines. Since video frames created from film may be considered progressively scanned (non-interlaced), that 48-image-per-second interim video step was actually one of the first commercial applications of 24p (24-fps progressively scanned video). Some earlier 24-fps video was only intended to be recorded on film.

The advent of digital television offered another application for 24p. The CBS and NBC television networks adopted 1080i as their format for high-definition television (HDTV), 1080 image-carrying scanning lines, nominally interlaced. The ABC and Fox networks, however, chose 720p for HDTV, 720 image-carrying scanning lines, progressively scanned. Would different film-to-video transfers be required for each?

No. A single film transfer could be made to 1080-line 24-fps video, progressively scanned. CBS and NBC would be satisfied because their main concern is getting 1080 lines. ABC and Fox would be satisfied because their main concern is progressive scanning. A 3-2 pulldown process would eventually be performed to create 60 images per second for flicker-free display to home viewers.

If 24p was good enough for editing and distribution, what about for shooting? For many years, there had been hope that an electronic cinematography system would reduce the costs of moving-image production, but the available technology never seemed up to the task. Certainly, video-shot material didn’t look like film-shot material.

Part of the problem may have been camera-detail resolution. Even though film transferred to video cannot end up with any more detail than the video system can offer, more detail in the camera can lead to a greater contrast ratio for the detail that makes it through. The human sensation of “sharpness” is related to the square of the area under a curve plotting contrast ratio vs. detail. More contrast ratio, therefore, means much more sharpness.
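To make that relationship concrete, here is a rough numerical sketch. The curve values are invented for illustration, not measurements; the point is only that a modest gain in contrast ratio produces a much larger gain in computed sharpness:

```python
# Sharpness modeled as the square of the (trapezoidal) area under a
# contrast-ratio-vs.-detail curve, per the rule of thumb cited above.
# The sample curves are hypothetical, not measured camera data.
def sharpness(contrast_curve):
    """Square of the area under a contrast-vs.-detail curve."""
    area = sum((a + b) / 2 for a, b in zip(contrast_curve, contrast_curve[1:]))
    return area ** 2

lower_contrast  = [1.0, 0.7, 0.4, 0.1, 0.0]
higher_contrast = [1.0, 0.9, 0.6, 0.3, 0.1]  # more contrast at each detail level
print(sharpness(higher_contrast) / sharpness(lower_contrast))
# roughly 1.9: about 38% more area yields about 91% more sharpness
```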

Another part of the problem may have had to do with image rate. The film director Douglas Trumbull also invented the Showscan movie system, one in which there is a 60-fps rate instead of 24. The result is a greater sensation of reality. In an amusement-park simulation “ride,” a viewer watching the Showscan screen might believe he or she is on a bobsled or a runaway train or a roller coaster.

Then Trumbull tried making a traditional, storytelling movie in Showscan but stopped. The images looked too real — too much like being in the studio with the actors. Just as the sped-up motion of silent movies was sometimes considered superior to the more-realistic motion of sound motion pictures, so, too, the look of 24p for storytelling is sometimes considered superior to more-realistic 60-fps.

So, what about shooting video at 24 fps? Sony recently introduced the first 24p camcorders, and it was reported at the Association of Imaging Technology and Sound (ITS) Technology Retreat in February that there are already almost as many 24p HDTV camcorders in use as the original 60-image version — in a tiny fraction of the time.

Some are being used to shoot movies, and some are being used to create HDTV programs. But some have been used to make the Fox sitcom Titus and the A&E dramatic series 100 Centre Street, neither of which is currently being transmitted in HDTV. At least one major television production studio plans never to shoot another series pilot in anything but 24p.

Although 24p imagery doesn’t look exactly the same as film’s, it is probably closer to that look than any previous video imagery. But that might not be enough.

Film cameras are no longer cranked, but the terms “undercrank” and “overcrank” remain. A cinematographer might add intensity to a sequence by slightly undercranking it, shooting at a rate slower than 24 fps so that motion will be a little too fast when the film is viewed at normal speed. Similarly, another sequence might be overcranked to create fluid slow motion when it is played back at 24 fps.

Variable-speed video has long been a desirable feature. Helical-scan videotape recorders have been able to operate at speeds faster or slower than normal play speed, but the resulting slow motion has not been fluid; it jerks from frame to frame.

Sony long ago introduced a video system that could shoot at three times normal speed, creating a tape that could be slowed to the usual 60 images per second with smooth, one-third-of-normal slow motion. But that was all it could do.

Snell & Wilcox attempted to create fluid variable-speed slow motion electronically, by interpolating between video images. The results were impressive, but they weren’t perfect, and the device seemed as though it would have to be priced out of reach of most organizations.

At the ITS Technology Retreat in February, Sony showed what Roland House had done in the slow-motion-video field, shooting 60-image-per-second HDTV and electronically interpolating those images to 24p slow motion. The results were impressive, but they still didn’t have the smooth overcrank-look of film.

The ITS Technology Retreat is known for cooperation among manufacturers, so it’s not all that surprising that Sony conducted its demonstration on Panasonic-supplied equipment. What was more surprising was the working Panasonic AJ-HDC24A camcorder nearby.

The camera section of the camcorder is truly frame-rate variable. If it is set for 60 fps with no shutter, the exposure time is 1/60th of a second. If it is set for one fps with no shutter, the exposure time is one full second. In addition to providing appropriate temporal filtering for the selected frame rate, the increased exposure times allow the camera to operate with less light, something Thomson-CSF Laboratories touted in one camera back in the era of imaging tubes.

There are, of course, many shuttering possibilities in the camera, and, with its recorder section, it can also be set to record time-lapse intervals. One might choose a one-fps frame rate (a one-second exposure), a one-frame recording sequence, and a one-minute interval to record a time-lapse sequence of a city going through a day with only ghostly inhabitants (at a one-second exposure, people moving through the frame will register only as faint blurs).

The output of the camera section is made to be 60 images per second, regardless of what the camera is set to do. Thus, if the camera is set for 60 fps, that’s what comes out. If the camera is set for one fps, then each camera frame is repeated 60 times. If the camera is set for two fps, then each camera frame is repeated 30 times. At three fps, each camera frame is repeated 20 times; at four fps, it’s 15 repetitions; at five fps, it’s 12; at six fps, it’s 10. At non-evenly-divisible frame rates, a look-up table determines the 60-image output. Thus, at 24 fps, one frame will be repeated three times and the next two times, the classic 3-2 pulldown.
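The repetition scheme described above can be sketched as follows. This is an illustrative Python model only; the camcorder's actual look-up tables are not public, so it mimics just the cases named in the text:

```python
# How many of the 60 output images each camera frame yields at a given
# frame rate. (Illustrative sketch of the scheme described above, not
# Panasonic's actual look-up table.)
def repeats_per_frame(fps):
    """Per-frame repetition counts that fill one second of 60-image output."""
    if 60 % fps == 0:
        return [60 // fps] * fps   # e.g. 6 fps: each frame repeated 10 times
    if fps == 24:
        return [3, 2] * 12         # the classic 3-2 pulldown
    raise NotImplementedError("other rates use the camera's look-up table")

print(repeats_per_frame(6))       # [10, 10, 10, 10, 10, 10]
print(repeats_per_frame(24)[:4])  # [3, 2, 3, 2]
```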

The camera output may be seen live. If the camera is set to 24 fps, it will have the temporal rate of film, but its output may be broadcast directly, with a look of “live film.” The recorder section always captures the 60 images per second of the camera output, so its tapes are also ready for broadcast.

If the camera is set to a low frame rate, and its lens is zoomed, the result can be a sort of starship hyper-speed effect of trails. Again, at very low frame rates, moving objects can appear ghost-like. All of these effects may be seen and recorded live, returning special effects to the domain of the camera (see “AVE More Real?” page TK).

“Overcranking” and “undercranking,” however, require two things external to the camcorder. The first is free — a leap of faith. The recorder always records 60 images per second. To believe that the camcorder can be overcranked requires believing that it normally operates at a lower rate than 60 fps — perhaps 24.

The second is an elaborate post-production system. Time-lapse videography may be done internally to the camcorder; any other deviation from real-time imagery requires some way of speeding up or slowing down the recording or of selecting appropriate frames for re-recording or processing.

If, for example, a user wanted a 24p look with everyone moving a little too fast, then the camera might be set to 20 fps, and, in post production, appropriate frames deleted and the result sped up. If a user wanted fluid slow motion, the camera might be run at 60 fps, and, in post, the sequence slowed and alternate frames repeated three times and two times.
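As a rough sketch of that first workflow, assume a 20-fps shot, so each camera frame occupies three of the 60 recorded images per second (names here are invented for illustration):

```python
# Sketch of the "undercrank" post step described above: on the 60-image
# tape each 20-fps camera frame appears three times, so keeping every
# third image recovers the unique frames, which then run 20% fast when
# played at 24 fps. (Hypothetical helper, not a real post-house tool.)
def extract_camera_frames(tape_images, repeats=3):
    """Collapse a repeated-frame recording to its unique camera frames."""
    return tape_images[::repeats]

tape = [frame for frame in "ABCDE" for _ in range(3)]  # 15 tape images
print(extract_camera_frames(tape))  # ['A', 'B', 'C', 'D', 'E']
print(24 / 20)  # 1.2 -- motion runs 20% too fast
```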

It’s a start. Perhaps the next development will be an HDTV camera like the one in the AJ-HDC24A or the Philips LDK-7000, able to be operated at varying frame rates with a recorder that can vary its recording speed, too.

Of course, with some Philips broadcast HDTV equipment designed in Germany, it might be better to use terms other than a high or low crank rate, unless what is desired is really awful looking pictures. Crank height (Krankheit) is German for sickness.


AVE More Real?

A drunk staggers down the street. When we see the drunk’s point of view, it’s multiple images separating and rejoining.

Someone glances up. There’s a harp glissando, and the picture becomes wavy. We’ve entered the character’s mind — a memory or a dream.

A patient on a hallucinogenic drug escapes from a mental hospital. From the patient’s point of view, we see real people and objects, but they’re distorted, and unreal colors bleed and pervade the scene.

Such visual effects are almost clichés. Wavy picture? Dream sequence.

In the old days of tube-based cameras, the injection of an audio tone into the circuitry controlling the sweep of the electron beams could cause the wavy image. Today’s solid-state cameras have no such sweep circuitry.

Similarly, intentionally misregistering the images on the three tubes could create the drunk’s point of view. Alas, today’s cameras have their imaging chips permanently registered at the factory.

Psychedelic color patterns could be achieved in tube-based color cameras by misadjusting the current of the electron beams. Today’s cameras? What electron beams?

Today’s cameras offer stable, high-quality pictures without the hassle of lengthy and frequent set-up procedures. Digital video effects (DVE) systems offer many forms of image manipulation. But the analog video effects that could be easily achieved with old cameras are no more.

Note to camera manufacturers: This is not a plea to return to tubes.
