DTV Primer


On the Sordid Murkiness of Deinterlacing

November 20, 2006 - Updated 12/3 (bottom of page)

I keep coming back to the same video processing topics because apparently some things are never quite as simple as they seem to be.

And, it seems, advanced technology never works quite as well as it should when it grandfathers in the limitations of legacy technology.

Example: I'm typing this on my computer keyboard more slowly than I should have to because of the vast ingrained weight of the QWERTY keyboard layout. As I'm sure you know, that layout was incorporated into early typewriters to keep people from typing fast enough to jam the keys. The most often used letters were placed awkwardly.

When modern electric typewriters and fast computer keyboards became the norm, there was an effort to introduce the more logical Dvorak layout. People who learned Dvorak typed incredibly fast (and probably did not suffer from carpal tunnel syndrome).

Nevertheless, the huge installed base of QWERTY keyboards and people who learned to type on them won out, and we'll probably be stuck with old QWERTY forever.

And so it is with TV and film technology. To wit, interlaced video (and 24 frames per second movies -- subject of a later article).

The old TV cameras and CRT TVs couldn't scan a whole frame in one pass, so they interlaced two fields to yield each frame. Of course, there is a time difference between the alternate scans, and whenever the two fields are later shown together, that difference produces "combing" artifacts in areas of the picture where there is motion.

Interlaced video worked okay, I guess, if displayed on a TV that was designed to alternately scan fields of odd and even lines. Digital fixed-pixel TVs don't do that; they only do progressive. But the ATSC digital TV standard was developed from CRT technology, so it embraces interlaced video.

Early TV electronics also found it a lot simpler to keep the rate at which the screen image was refreshed the same as the frequency of the alternating current that powered everything. Hence the NTSC TV standard is based on a 60 Hz refresh rate. In Europe, where their electricity is 50 cycles per second, the analog PAL TV standard refreshes at 50 Hz.

All of that carried over to digital: 60 alternating odd/even fields per second (interlaced, or 60i), which can be combined to form 30 full frames per second (30p).
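To make the frame/field relationship concrete, here's a toy sketch in Python (my own illustration, not anything a real camera or TV chip runs) of how a single 1080-line progressive frame splits into two 540-line fields:

# Toy illustration: carve one progressive frame into its two interlaced fields.
# A "frame" here is just a list of 1080 scan lines; real video would of course
# be arrays of pixel values.

def split_into_fields(frame):
    """Return the two fields: the even-numbered lines and the odd-numbered lines."""
    top_field = frame[0::2]      # lines 0, 2, 4, ... (540 of them)
    bottom_field = frame[1::2]   # lines 1, 3, 5, ... (the other 540)
    return top_field, bottom_field

frame = ["scan line %d" % n for n in range(1080)]
top, bottom = split_into_fields(frame)
print(len(top), len(bottom))     # 540 540 -- sent one after the other, 60 fields per second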

So this is where the sordid murkiness starts.

Deinterlacing. The interlaced source itself is straightforward, assuming the camera records and outputs a traditionally interlaced image. That's not necessarily true anymore; camera manufacturers all have their own proprietary ("better") systems. But let's ignore that (more murkiness).

So, an interlaced HD video signal is broadcast or sent over cable to your digital fixed-pixel TV. Then what? (Let's assume your TV has the same resolution as the programming; otherwise the video processor also has to deal with scaling.) The classical simplified explanation is that the TV takes the first field, holds it in memory, combines it with the next field, and displays the complete frame on the screen at, presumably, 30 frames per second. Then repeats.

I was seduced by the simplicity of it all. Of course there is the matter of the slight motion offset from one field to the next, but that was why progressive is better than interlaced. Right?

But last spring, Home Theater magazine published an article by Gary Merson that looked into the deinterlacing practices of various TV manufacturers. Of the 54 sets tested for "correct" deinterlacing, roughly half failed, "losing up to one-half of the vertical resolution"!

Here's a link to that Round 1 (May 2006) article, and another link to a Round 2 (October 2006) article that presented the results of testing on another 61 HDTVs, this time including 3:2 pulldown and bandwidth tests.

The article said that instead of combining adjacent fields to form complete frames, some builders were using video processing chips that used a "simpler and cheaper" method. To wit, upconverting each one-half-resolution 540-line field to make 60 frames per second (rather than combining fields to make 30 full-resolution frames per second).

The test they used to determine pass or fail was a computer-generated test pattern consisting of alternating black and white lines. Odd scan lines were all black, even scan lines were all white (or vice versa). If you upconvert either odd lines or even lines, the whole frame will be respectively solid black or solid white. If you combine odd and even fields ("correct" deinterlacing), you'll get all gray frames.
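In toy Python form (again, my own illustration, not the magazine's actual test procedure), the pattern and its two fields look like this:

# Toy illustration of the alternating-line test pattern. Each scan line is
# reduced to a single brightness value: 0 for black, 255 for white.

BLACK, WHITE = 0, 255

pattern = [BLACK if n % 2 == 0 else WHITE for n in range(1080)]   # the full frame

odd_field = pattern[0::2]     # one field: every line is black
even_field = pattern[1::2]    # the other field: every line is white

print(set(odd_field))     # {0}      -> upconvert this field alone: a solid black frame
print(set(even_field))    # {255}    -> upconvert this field alone: a solid white frame
print(set(pattern))       # {0, 255} -> combine both fields: the alternating lines are back

(From normal viewing distance, those alternating black and white lines average out to the gray the test expects.)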

I was convinced, and disturbed, for a while at least. But then -- Could it all be so black and white? Most things are shades of gray (pun intended).

First of all, the sets tested were 2005 models, and I know that succeeding generations of digital tuner/demodulator chips brought great improvements. Video processing chips had to improve, right?

So I started researching, and after reading many deinterlacing articles, I discovered that deinterlacing is not nearly so simple as that one test pattern would seem to indicate.

The following is a quote from the second (Oct/06) Home Theater article:

"Some TVs take every one of the 1,080 interlaced lines and convert them to a progressive signal. This process is known as deinterlacing. It compensates for any motion in the image and sends it to the screen at its native resolution. Other HDTVs may take a cheaper shortcut and simply upconvert each single 540-line field. The latter process can result in a loss of up to 50 percent of the image's resolution (for a 1080p display)."

I'm assuming that the first process, described as "deinterlacing," is what should be done and gives good results while the second "cheaper shortcut" is presumed not to be "deinterlacing" at all, and simply throws away half the picture detail.

Actually, the first method he describes is likely "blend" or "weave" deinterlacing and the "shortcut" is actually "bob" deinterlacing. Motion compensation is further processing with either method, but is more intrinsic with bob deinterlacing.

Since another Home Theater author claimed that all digital TVs use a 60 Hz refresh rate (while later qualifying that at least one could also switch to 72 Hz to avoid 3:2 pulldown for 24p source material), we'll accept that for our analysis. A CRT set doesn't deinterlace 1080i at all; it scans it field by field, one field every 1/60 of a second.

Each field has 540 lines, but no one is suggesting that the other 540 lines are thrown away with a CRT display. The next 1/60th-second field contains the other 540 lines and "completes" the frame. The human eye and brain combine the two fields of information to see a complete picture.

It's not quite that way, because objects in motion in the picture are shifted every 1/60 second, continuously from one field to the next, not frame to frame. There are no separate frames per se, and each field is not really paired with another specific field. It's field, field, field, field . . . every 1/60th of a second.

Blend or weave deinterlacing (not exactly the same thing) combines adjacent 1/60-second fields into a single frame, but since objects have normally moved between the time the first field was recorded and when the second was recorded, the resulting frame is subject to either combing or motion blur (combing with weave, blur with blend). The video processor should minimize those motion artifacts as best it can. We're left with 30 frames per second, each 1080 lines high.
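Here's weave in the same toy Python form (blend would average the two fields together instead of interleaving them, trading the combing for blur); this is just my sketch of the idea, not what any particular chip does:

# Toy illustration of weave deinterlacing: interleave each pair of adjacent
# 540-line fields back into one 1080-line frame, so 60 fields per second
# become 30 frames per second. Anything that moved between the two fields
# shows up as combing in the woven frame.

def weave(top_field, bottom_field):
    """Interleave a top field with the bottom field that follows it."""
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)      # line from the first 1/60-second field
        frame.append(bottom_line)   # line from the next 1/60-second field
    return frame

def weave_stream(fields):
    """Pair up a stream of alternating fields: 60 fields per second in, 30 frames per second out."""
    return [weave(fields[i], fields[i + 1]) for i in range(0, len(fields) - 1, 2)]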

But since digital TVs refresh at 60 Hz, each complete frame must be displayed twice, or alternatively the video processor interpolates successive complete frames and synthesizes in-between 1/60th-second "frames," for a total of 60 frames every second.

Sounds complicated.
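The frame-doubling part, at least, is simple. In toy sketch form:

# Toy illustration: show each 1/30-second frame twice so the 60 Hz panel
# always has something to refresh with. (The fancier alternative, synthesizing
# in-between frames by interpolation, is far more involved and not shown.)

def repeat_to_60hz(frames_30p):
    refreshed = []
    for frame in frames_30p:
        refreshed.append(frame)   # first 1/60-second refresh
        refreshed.append(frame)   # the same frame again for the next refresh
    return refreshed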

Bob deinterlacing approaches the problem differently. It takes each 540-line 1/60th-second field and interpolates the missing 540 in-between lines. Good video processors can do this well, for real-life images. (Perhaps not so well for test patterns with alternating black and white one-pixel-high lines.)

Does this mean that the other 540 lines are thrown away? Hardly, because the very next 1/60th-second field contains the "other" 540 lines, and these are also upconverted to 1080 lines. The full 1080 lines of information are therefore preserved, and our eyes and minds combine all the detail.

Bob deinterlacing has the advantage of preserving the motion from one field to the next, so the resulting image has smoother motion. There are 60 frames per second of true motion, no frame interpolation required, and no inherent combing or motion blur problems.
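Here's bob in the same toy Python form. Real processors use much smarter interpolation than the simple line averaging below; this just shows the shape of the idea:

# Toy illustration of bob deinterlacing: stretch each 540-line field back to
# 1080 lines by interpolating the missing in-between lines, so 60 fields per
# second become 60 full-height frames per second. Each line is reduced to a
# single brightness number to keep the averaging simple.

def bob(field, is_top_field):
    """Upconvert one 540-line field to a 1080-line frame."""
    frame = [None] * (2 * len(field))
    offset = 0 if is_top_field else 1
    for i, line in enumerate(field):
        frame[2 * i + offset] = line              # the lines this field really has
    for i, line in enumerate(frame):
        if line is None:                          # a line this field is missing
            above = frame[i - 1] if i > 0 else frame[i + 1]
            below = frame[i + 1] if i < len(frame) - 1 else frame[i - 1]
            frame[i] = (above + below) / 2.0      # guess it from its neighbors
    return frame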

So you might want to use bob deinterlacing for video with a lot of motion, and blend or weave deinterlacing for video that has more static content. Some or many video processors are motion adaptive, combining different types of deinterlacing to suit the material or even parts of a single frame.
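A crude sketch of the motion-adaptive idea, assuming the bobbed and woven versions of the same instant are already in hand (real processors decide per pixel, with far better motion detection than this):

# Toy illustration of motion-adaptive deinterlacing: where a line has changed
# since the previous output frame, use the bobbed (combing-free) version;
# where it hasn't, keep the woven (full-detail) version. Lines are single
# brightness numbers here, as in the earlier sketches.

MOTION_THRESHOLD = 10   # how much change counts as "something moved"

def motion_adaptive(woven_frame, bobbed_frame, previous_output):
    output = []
    for woven, bobbed, previous in zip(woven_frame, bobbed_frame, previous_output):
        moved = abs(woven - previous) > MOTION_THRESHOLD
        output.append(bobbed if moved else woven)   # bob where it moves, weave where it sits still
    return output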

So is blend or weave better than bob deinterlacing? Can't say. It's more complicated than that. It's not black or white. One test pattern designed to expose which approach a video processor chip designer used for deinterlacing cannot in any way reveal good or bad, pass or fail.

Certainly the better the video processor, the better the picture, and video processor chips are getting better.

We'd be a lot better off if movie and TV content were shot with video cameras at 1080/60p and displayed on 1920 x 1080 progressive displays at 60 frames per second. But first the ATSC would need to add 1080p as an approved format and MPEG-4 as a standard compression protocol, and everybody would need new equipment.

Well, that's not going to happen anytime soon.

I was going to write a bit about frame rates, but this is getting too long, so I'll just give you links to some decent articles on the complexities of deinterlacing (and do the frame-rate article later).

I've listed these articles in order of clarity. If you're interested, look at the first, then keep working your way down the list if you're not sated:

UPDATE: One of the TVs tested by the Home Theater magazine people was the Sharp LC-37D90U, a 37" full-spec HD (1920 x 1080) flat-panel LCD. The Home Theater magazine deinterlace test gave it a "fail".

In its current December issue, Widescreen Review magazine published a review of the same Sharp model and had this to say about that set's deinterlacer:

"I used the HQV test disc to determine deinterlacing and scaling functions. The Sharp does an admirable job of deinterlacing, passing the HQV test for jaggies, flag, detail, noise, race car, and cadence with flying colors, and generally doing a much better job than the Marantz DV9600. Only the DVDO iScan outperformed it, but it is unrealistic that most people would use a scaler costing the same as the display."

They also had this to say about the set: "Its ability to resolve low-level black detail, unlike previous LCDs I have seen, was truly extraordinary." - "My real pleasure has been to watch HD-DVDs on the Sharp. Its ability to resolve every line to the pixel in a 1080i multiburst means that HD-DVD images are as sharp as they can be." - "The Sharp AQUOS, as represented by the LC-37D90U, is at the cutting edge of flat panel displays."

The proof is always in the picture.