I have scoured this forum, but I haven't yet found a definitive answer to the following question: how does kdenlive handle deinterlacing of progressive segmented frame (PsF) interlaced video?
I just bought a Canon HV40 HDV camera (NTSC) to replace my old Panasonic DV camera (may it RIP) that I mainly use for home videos. I am pretty sure that I want to shoot everything in 30p mode. It seems to me that the decision to shoot at 60i or 30p should depend on the camera's sensor: if the sensor captures a full 1440x1080 pixels, wouldn't it have to throw out half of the data to make each field of a frame in 60i mode? Maybe I'm not thinking about it correctly. I would like to see interlaced video just go away as a relic of the past. I certainly don't have any interlaced displays left in my house, and I'm sure that I never will.
At any rate, the HV40 (along with many other cams) records 30p video to tape as 60i. Each frame captured at the sensor 30 times (29.97, really) per second is divided into two segments before being recorded onto tape at 60 segments per second. These are equivalent to interlaced fields, except that both segments of a frame represent the same instant in time. To convert the segments back to 30p, one only has to "weave" the segments back together into a frame and play the frames back at 30 frames per second. So, here is my workflow:

1. Capture the video stream from the cam over FireWire using dvgrab.
2. Open a new project in kdenlive with HDV 1440x1080p 29.97 fps as the project settings.
3. Edit the video.
4. Render the project in some progressive 30p format.

So my question is: does kdenlive recognize that the 60i source video is really 60 progressive segments per second (not true fields), and as such only needs to be woven back together rather than deinterlaced in the traditional sense? And what if I transcode to DNxHD 30p first: does ffmpeg properly weave the segments together into frames?
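To make it concrete, here is a toy sketch (not kdenlive's or ffmpeg's actual code) of what I mean by "weaving": the two segments are just the even and odd line sets of one frame, so reassembling them is pure row interleaving, with no motion interpolation or line blending like a real deinterlacer would do.

```python
import numpy as np

def weave(top_segment, bottom_segment):
    """Interleave two segment (field) arrays back into one full frame.

    For PsF material both segments come from the same instant, so this
    lossless row interleave fully reconstructs the progressive frame.
    """
    h, w = top_segment.shape
    frame = np.empty((2 * h, w), dtype=top_segment.dtype)
    frame[0::2] = top_segment     # even lines from the top segment
    frame[1::2] = bottom_segment  # odd lines from the bottom segment
    return frame

# Toy 4x4 "frame" split into two segments and woven back together:
frame = np.arange(16).reshape(4, 4)
top, bottom = frame[0::2], frame[1::2]
assert np.array_equal(weave(top, bottom), frame)
print("round trip OK")
```

The point is that for PsF, weaving is a lossless round trip; a traditional deinterlacer applied to it would only throw away vertical resolution for nothing.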
So far, the best explanation of PsF, aside from the Wikipedia article, is from: www.sony.ca/hdv/files/white/HDV_Progressive_Primer.pdf