.MTS editing in Kdenlive

Hi. This is my first post here so go easy on me:) Apologies if this is posted in the wrong section.

Kdenlive looks great. I installed the latest version. I want to edit .MTS files, ideally on Ubuntu 10.10 (AMD64).

My desktop has an AMD Phenom quad-core processor and 4 GB of RAM, just in case you need to know, plus an ATI Radeon HD 4350 graphics card. The ffmpeg GStreamer plugin is installed, along with the third-party ATI display drivers. My camera is a Canon HF200 (PAL version).

The problem is that the .mts video skips and has distorted sound when played along the timeline. If I play the sound on its own with no video, it's fine. The video playback is jumpy.

This footage seems to play fine in Windows 7, and even edits fine in tiny Windows Movie Maker. But I'd rather use Ubuntu and Kdenlive if I can. My plan is to:

1. Edit high definition footage .mts files in kdenlive and keep them as master copies.

2. Convert this master footage to standard definition for Youtube.

3. Convert the master copies to Blu-ray or DVD (this isn't the most important issue right now.)

Can this be achieved with Kdenlive?

Thanks in advance for any advice.

ArtMonkey.

The original files are never touched .. that's the way of any video editing software that I know. :o)
Editing MTS files depends on your hardware, because MTS files are highly compressed. If the hardware is fast enough, then it is possible.

There is a profile for YouTube videos. In the German version it is in the target section "Internetseiten", which means websites.

And DVD is there too, under the DVD target. ;o)

About Blu-ray I don't know, because I have never done anything with it; I don't have a Blu-ray device. But I think it must be possible, and if not with Kdenlive, then with the help of other software. ;o)

Hi. I found this page about conversion. Would converting from .mts to H.264/MPEG-4 be a good way of doing this? I could then edit that and keep a high-definition edit. Sorry, this is all new to me, so I'm avidly searching the web for ideas.

http://karuppuswamy.com/wordpress/2010/06/07/how-to-convert-canon-mts-vi...

Nope. MTS is H.264 as well, so you'd get the same performance problems.

If you render the project, you should not get any sound distortions. So, if you do not mind having them when editing the project, you can simply ignore them and work with the original files. If you need output of different size or bit rate, you can do this in the Rendering dialog, as TiKaey pointed out.

You can speed up editing by using Proxy Clips; this has to be done manually in 0.7.8 at the moment.

Simon

I use the proxy function in svn (via sunab's PPA) with Canon DSLR H.264 footage, and it works great with MJPEG proxies at 25% size. For the audio I just copy the stream rather than transcode it; I found the stuttering audio was caused by decoding the video stream, not the audio stream. Proxies can also be generated automatically if you set them up in the project defaults.
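For anyone doing this manually in 0.7.8 as Simon mentioned, a proxy like the one described above can be generated with ffmpeg. This is only a sketch: the filenames, the 25% scale, and the -qscale value are example choices, not Kdenlive's defaults.

```shell
#!/bin/sh
# Sketch: generate an MJPEG proxy at 25% of the original size from an
# AVCHD clip, copying the audio stream instead of transcoding it.
# Filenames and the quality setting (-qscale 3) are examples.
ffmpeg -i clip.MTS -vf "scale=iw/4:ih/4" \
       -vcodec mjpeg -qscale 3 -acodec copy clip-proxy.avi
```

You would then edit with clip-proxy.avi on the timeline and swap the original clip back in before the final render.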

Thank you everybody. Looks like I have a lot of playing around to do. Proxy Clips looks great (Thanks Simon), but it looks complicated. I'll read through again and experiment with it.

This might not be what you want to hear, but if you had an nvidia card instead of ATI... ;-P

I'm editing AVCHD (.mts or H.264, whatever you call it) on a single-core machine with 1.5 GB of RAM, and playback is smooth: no stuttering sound, no jumpy pictures, except for the transitions between scenes, where playback stops for about half a second or so. The trick is the VDPAU driver for Nvidia cards. OK, seeking in the files is still a pain, and rendering takes ages, but hey, on this cheap piece of hardware...

Does anyone know where the bottleneck typically is when seeking through AVCHD clips or during playback of transitions? Is it the CPU, the memory (or swap space), or the data rate of the disks?

Chamo

I am too lazy to do the proxy thing, but I've found that if I first transcode the AVCHD footage to DNxHD, it works more smoothly inside Kdenlive than the original MTS files, and any losses are minimal.
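A possible ffmpeg invocation for that DNxHD route, as a sketch only: DNxHD accepts only fixed resolution/frame-rate/bitrate combinations, so the 120M bitrate here assumes 1920x1080 at 25 fps footage, and the filenames are examples.

```shell
#!/bin/sh
# Sketch: transcode AVCHD to DNxHD in a QuickTime (mov) container for
# smoother editing. 120M is one valid DNxHD bitrate for 1080p at 25 fps;
# other resolutions/frame rates need different values. Filenames are examples.
ffmpeg -i clip.MTS -vcodec dnxhd -b:v 120M \
       -acodec pcm_s16le clip-dnxhd.mov
```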

In terms of playback, I'm not 100% sure where the bottlenecks are, but I also do a lot of work creating videos from still images, and the video/audio playback inside Kdenlive is still choppy if there are any effects or transitions happening, even on my 3 GHz Core 2 Quad. Because the source is still images, there appear to be bottlenecks other than decoding the source. Part of it may be that multithreading is not leveraged very extensively; that may be an issue with MLT more so than Kdenlive, I'm not sure.

As others have mentioned, when you render the project, it will be smooth regardless of how choppy it is in the editor. Rendering takes time, though, so I often render out small sections (20 or 30 seconds or so) to see how things look in critical areas.

Dave

Although Kdenlive will probably happily edit any ".mts" file with H.264 in it, I always recommend transcoding any non-intra formats to MJPEG first (I-frames only). Most non-linear editors work better with intra-only codecs (although they do not actually _need_ them).

DNxHD would be a good choice, but I find it has a few drawbacks. First of all, almost no application supports the container it's supposed to be stored in (MXF), so you should use the "fallback" QuickTime (mov) container instead. Furthermore, it is really restrictive in the parameters it accepts (frame rate, resolution, quality...).

As an intermediate codec I would recommend MJPEG instead (as said), because it accepts any resolution and quality. It needs a lossless transformation to a slightly different colour space and back when encoding to H.264 again, but that's no problem. The codec leaves the frame rate to the container (as it should), so no restrictions there, either. The only disadvantage (in my opinion) is that the only containers that both support MJPEG and are widely accepted with MJPEG in them are avi and mov. I don't like avi, but it's more widely accepted with MJPEG in it than mov is. Kdenlive probably accepts both, by the way. Also, MJPEG is very easy to decode and allows you to edit HD material in real time on any hardware that isn't very old.

Also, my experience is that you'd better have any audio/video skew and missing audio/video frames resolved before you start editing the material. You can do that in one run while transcoding to the intermediate codec, using ffmpeg with the -async and -vsync flags.
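Putting the two suggestions above together, a single-pass transcode to an MJPEG intermediate that also resolves A/V sync might look like the sketch below. The filenames and the -qscale value are examples; check your ffmpeg version's documentation for the exact -async/-vsync behaviour.

```shell
#!/bin/sh
# Sketch: one-pass transcode to an MJPEG intermediate in an AVI container,
# using -async/-vsync to resolve audio/video skew in the same run.
# Filenames and quality setting are examples.
ffmpeg -i clip.MTS -async 1 -vsync 1 \
       -vcodec mjpeg -qscale 3 -acodec pcm_s16le clip-edit.avi
```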

It's my conviction that preparing for editing, the editing itself, and the final encode should be different processes, possibly using different applications and different CPUs. For editing you'd like a machine with fairly good video AND some audio, probably on your desktop; for the other two steps, you'd rather use a server with a lot of cores, no audio or video, put away in a place where it can make as much noise as it wants.

I'm actually gradually upgrading and doing my first PC build. It has an Nvidia 460, so hopefully that will help a bit. It seems there are a few things to try here. Thank you everybody.

When rendering, why are many of the codecs unsupported? Just wondering, as it seems strange for them to be listed at all.

I've not done video editing in a few years, so much has moved forward. It seems a shame in the digital age to convert footage to a different format just to edit it, as I always considered the master copy to be best. But I'm probably being too fussy.

Regarding the questions of bottlenecks in playback...

The seeking time issue on MTS is in the ffmpeg libs. Well, we could do a faster seek, but not cleanly (image artifacts). An unclean manner of seeking would affect any trimming or cutting you do on the footage, since the first frame of a shot that is not the first frame of the file involves a seek. I will consider making it possible to get fast, unclean seeking within the editor preview while enforcing clean seeking in rendering. However, I can see the support issues that will create: trying to explain the difference in artifacts to people, and the assurance that the render will be better. :-\

Other bottlenecks include a lack of SSE-optimized code and a lack of parallelism. On a dual-core system, you already get some parallelism because audio/video preparation is a separate thread from output, but yeah, output is light. Additional parallelism was worked on within MLT for nearly all of 2010, and it was recently merged into the git master branch for the next release:
http://sourceforge.net/news/?group_id=96039&id=296606
The new parallelism is not perfect. It requires some code refactoring to remove some locks, and all access to ffmpeg libs for demuxing/decoding still must be serial.

As for how HD H.264 decoding compares to Windows 7: the commercial outfits making the decoders have admittedly done a good job of leveraging the CPU and GPU. As pointed out, MLT can use NVIDIA VDPAU, but there is still some improvement that can be done there under certain circumstances. However, the poor seeking performance kind of defeats the overall goal. :-\ Also, we still have not integrated VA-API for greater GPU support, and there are not yet any GPU-based effects.

In summary, there is lots of work but not a lot of time for the few developers there are.

What would be the smoothest and most stable way to edit AVCHD?

- Transcode first to Lossless Matroska and then edit those files under Kdenlive
- Use original AVCHD files but with proxy clips

Thanks
Alphazo

@alphazo

I use proxies with H.264/AVC footage from my Canon 550D and find it excellent. Proxies can be generated automatically via Project Settings for each import; there is a time delay in generating them, but transcoding would take time too.

Matroska is a container like .avi, .mov etc., so a quick remux using ffmpeg would put the original MPEG-4 stream into a .mkv without any need to transcode. I find Matroska + Haali Media Splitter good on MS Windows to bypass QuickTime, but that's not an issue on Linux. I don't think there's much to gain; it's all handled by ffmpeg anyway.
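For reference, that kind of remux is just a stream copy. A sketch, with example filenames:

```shell
#!/bin/sh
# Sketch: remux (stream copy) an AVCHD clip into a Matroska container
# without re-encoding either stream. Filenames are examples.
ffmpeg -i clip.MTS -vcodec copy -acodec copy clip.mkv
```

Since nothing is transcoded, this is fast and lossless, but it also does nothing for the decoding performance problems discussed above.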

By Matroska transcoding I was referring to the Lossless Matroska template (huffyuv codec) found in the transcoding menu, which uses the following ffmpeg options: -sn -vcodec huffyuv -acodec flac %1.mkv
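Spelled out as a full command with example filenames, that template corresponds to roughly:

```shell
#!/bin/sh
# Sketch: the Lossless Matroska transcode template as a standalone ffmpeg
# command. -sn drops subtitle streams; huffyuv and FLAC are both lossless,
# so expect very large output files. Filenames are examples.
ffmpeg -i clip.MTS -sn -vcodec huffyuv -acodec flac clip-lossless.mkv
```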

What about DNxHD? Would that bring any better experience?

I think I'm going to go with proxies for ease of use, and also because they take much less disk space than the DNxHD/huffyuv route.

Thanks for the feedback
Alphazo

Please read the post I added a page back.