When you transcode your h.264 footage to anything in the world (and beyond), you will never (read: never) get a better image in terms of the stored information in that footage. You only (read: only) get a better editing experience on a lower-spec editing suite. With better PC or Mac specs, you will notice zero (read: zero) improvement.
Period (.)
Until next time when you'll call me ...
I never said you would get improvements. I said you will not get WORSE IMAGES. You need to blow it up so that you don't take a generation loss. I am sitting here walking on eggshells with you while you are being so matter-of-fact, when I really want to scream that you have no idea what you are saying. I've been doing this for many, many years and facts are facts. If you want to do color grading, editing, VFX etc., you need to blow the image up into a larger color space; the final product will look better because you will get a better grade, composite and export, so in the end YES (read: YES), a better image than if you just exported straight from the original h.264. Will the original footage be better? Of course not, but I didn't think I had to say things that are obvious enough for my four-year-old to know. But if you want to END UP WITH A BETTER IMAGE, you will listen up instead of trying to be the man.
Now on to @DonaldictTrumperbatch, here is the reason why the "create optimized footage" button does nothing to the original footage itself in the end.
Let's go back in time to about 2000-2004, when Digital Intermediates (DI) were all the rage. What the DI provided was an intermediate digital version of the analogue film for us to work with, with a path back to film at full resolution (at first 1K and 2K, then up to 10K and eventually even higher). It used to take about 3-6 minutes to scan each frame (and film runs at 24 frames a second), and the intermediate codec needed to be something smaller, because back then even HD was hard to come by for editing, color or VFX, so there had to be intermediate codecs and proxies.
Every system did it differently. In Avid's heyday, they used OMF, which took the original footage and pre-rendered it into a format that could be easily viewed in Media Composer (and its siblings Symphony, Nitris, etc.). Quantel, the first resolution-independent system and the one I was a product specialist on (that's the million-dollar system I was discussing), was the first that could handle 2K and eventually 4K. Everything from the Pablo, the trackball color correction system, to Blackmagic with DaVinci and now Resolve uses proxies or intermediates so you can work with the footage and not get bogged down. However, it still has to go back to the original full-resolution footage when you are done; the film was then re-printed from the 2K, 6K, 8K or 10K scans that were never touched during the intermediate process but were, of course, used to create the best master from which all the slaves were struck and sent out to theaters across the world. The process was reserved for only the most expensive films (Hollywood blockbusters), because the systems and the people running them cost, and made, millions.
So, if you understand that process: note that in general we don't go back to FILM anymore, but we still go BACK TO THE ORIGINAL FOOTAGE to make the master (and the slaves) from the best footage.
In our case, the footage we start with is H.264, which IS A PREVIEW CODEC. It is not a codec to slave from. If you encode H.264 into H.264, the already lossy footage gets dramatically worse with every generation. H.264 is a good codec because it is so small for what you get, but what you get is not a great product.
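To make the generation loss concrete, here is a rough sketch that re-encodes a clip back into H.264 a few times and measures how far each generation drifts from the original. It assumes ffmpeg is installed, and "original.mp4" is a hypothetical file name; it is only an illustration, not part of any editor's workflow.

```python
# Rough sketch: re-encode an H.264 clip into H.264 several times and measure
# how far each generation drifts from the original.
# Assumes ffmpeg is installed; "original.mp4" is a hypothetical file name.
import subprocess

src = "original.mp4"
current = src
for gen in range(1, 6):
    out = f"gen{gen}.mp4"
    # Re-encode the previous generation into H.264 again (one more generation).
    subprocess.run(
        ["ffmpeg", "-y", "-i", current, "-c:v", "libx264", "-crf", "23",
         "-c:a", "copy", out],
        check=True,
    )
    # Compare this generation against the original; the PSNR reported in the
    # log drops with every pass, i.e. the image keeps getting worse.
    subprocess.run(
        ["ffmpeg", "-i", out, "-i", src, "-lavfi", "psnr", "-f", "null", "-"],
        check=True,
    )
    current = out
```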
So consider the DI process above: when we add VFX and color correction using the proxies and intermediate codecs provided by the editors (even hardware-accelerated editors), the pipeline still doesn't want a crappy codec like h.264 in the middle of it. It needs an INTERMEDIATE CODEC. There is a codec, for example, called Apple Intermediate Codec (AIC), which was all the rage for a while and was eventually replaced by ProRes. It's not an uncompressed codec, but it's nearly lossless, meaning it adds no new artifacts. That does not mean it will remove artifacts created by the FIRST generation, but when you export out of the editor, if you use ProRes instead of h.264, you WILL NOT lose ANOTHER GENERATION. If you do export to h.264 again, the generation loss compounds and WILL LOOK HORRIBLE!
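For reference, this is roughly what "blowing it up" to an intermediate looks like outside of an NLE. It's a minimal sketch, assuming ffmpeg is built with the prores_ks encoder; the file names are hypothetical.

```python
# Minimal sketch: turn the original H.264 clip into a ProRes 422 HQ
# intermediate instead of re-encoding it to H.264 again.
# Assumes ffmpeg with the prores_ks encoder; file names are hypothetical.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "drone_clip.mp4",
     "-c:v", "prores_ks", "-profile:v", "3",   # profile 3 = ProRes 422 HQ
     "-pix_fmt", "yuv422p10le",                # 10-bit 4:2:2 for grading headroom
     "-c:a", "pcm_s16le",                      # uncompressed audio, no extra loss
     "drone_clip_prores.mov"],
    check=True,
)
```

The file ends up many times larger than the H.264 original, and that is the trade: disk space in exchange for no further generation loss while you grade, composite and export.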
Here is what a DI scanner looks like. You feed the film into it and it scans it one frame at a time.
It's not uncomplicated, and your 8-minute video actually did a good job of explaining it, but whoever posted it (I think it was you) obviously didn't understand it.
So even if you are using an intermediate codec inside Final Cut or Premiere, making the CPU figure out the footage and decode it on the fly just so you can view it is highly complex, because the footage is so heavily compressed. It basically has to unravel it.
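That is essentially what an editor's "optimized" or proxy media does for you. As a rough sketch (again assuming ffmpeg with prores_ks, and a hypothetical "footage" folder), this is the gist: lightweight, easy-to-decode ProRes Proxy copies for playback, while the originals stay untouched for the final conform and export.

```python
# Rough sketch of what "optimized"/proxy media boils down to: small ProRes
# Proxy copies that are cheap to decode during editing, while the original
# clips are left untouched for the final export.
# Assumes ffmpeg with prores_ks; the "footage" folder is hypothetical.
import pathlib
import subprocess

for clip in pathlib.Path("footage").glob("*.mp4"):
    proxy = clip.with_name(clip.stem + "_proxy.mov")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(clip),
         "-vf", "scale=-2:720",                   # 720p proxy, light to decode
         "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
         "-c:a", "pcm_s16le",
         str(proxy)],
        check=True,
    )
```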
So there you go: the reason for blowing it up to an intermediate codec is that it makes the process of editing and VFX faster and the end product better, because you avoid a massive generation loss. If you don't believe me, go read the thread on here and everywhere else I posted it. DJI put it on their main site, so they believe me.
Do you understand now? So no, having h.264 does not end all discussion of how to handle the footage just because "it can't get any better". That part is true, but IT CAN GET MUCH WORSE.
I am not interested in hearing about why this is wrong, so if you want to tell me, save your breath. Instead of trying to be right, sit back and learn something, because these are the facts.
I hope you get it, and are not upset that I am just spitting truth at you. That is all.
Happy Flying! And for god's sake, beware of the h.264.
