
ProRes 4:4:4 possible?


  • #31
    Originally posted by Barry Green View Post
    As for c) hey, better is better. Now, the counterpoint is -- visually we don't see color resolution nearly as clearly as we see brightness resolution, which is why cameras with 4:2:0 codecs can still look strikingly good to us. On the other hand, the computer sees everything, and raw 1216x683 color res is going to be better for keying/compositing and heavy effects work than 960x540 4:2:0 would. And it will be better than the 960x683 you'd get from the 4:2:2 codec too. Better is always better, and more is better, so if you want to extract the absolute maximum performance from your BMC, raw will let you get at everything the sensor can deliver.
    Thank you for all this information. Can't wait to key BMC green screen footage with Keylight.

    Comment


    • #32
      I feel really, really wary about wading into this discussion.

      No. The camera doesn't do 4:4:4

      I don't think you can easily compare video colour sampling in this way.

      I think it's fair to say it doesn't have the same colour resolution as it does brightness when using a Bayer sensor.

      But to say it's only doing 4:2:0 is going to mislead a lot of people because it's the wrong terminology.

      4:2:0, last I looked, can't do 12-bit and is generally found in compressed codecs.

      So whilst there isn't the same colour resolution, comparing it to 4:2:0 is going to lead a lot of people to equate it to 8 or 10 bit compressed codec cameras.

      I also think that does a disservice to the upsides of 12-bit RAW and zero compression.

      I don't disagree with the ratio; it's just not the right terminology, because for many people this will not behave like any 4:2:0 camera they've used.

      I just don't think you can describe sensors as being 4:2:0 or 4:4:4. That's video subsampling terminology and the more we do it, even if it's a convenient way to make a point, the more "true" this idea becomes.

      jb

      Comment


      • #33
        If you absolutely have to have 4:4:4 you can shoot on film and have it scanned in at 16-bit 2K or 4K. Very expensive. The way things are advancing, it won't be too far off before we can get this type of quality in some future BMC.

        Comment


        • #34
          Originally posted by Barry Green View Post

          As for c) hey, better is better. Now, the counterpoint is -- visually we don't see color resolution nearly as clearly as we see brightness resolution, which is why cameras with 4:2:0 codecs can still look strikingly good to us. On the other hand, the computer sees everything, and raw 1216x683 color res is going to be better for keying/compositing and heavy effects work than 960x540 4:2:0 would. And it will be better than the 960x683 you'd get from the 4:2:2 codec too. Better is always better, and more is better, so if you want to extract the absolute maximum performance from your BMC, raw will let you get at everything the sensor can deliver.
          Thanks so much for sharing your knowledge, Barry.

          I have a question:

          I understand that in order to deliver 1920x1080 444 ProRes, the BMC camera would need a higher-resolution Bayer-pattern sensor.

          Assuming that 444 keys better than 422 or 420 (and so is preferable for post production),

          are you saying that the 2.5K Bayer pattern of the BMC camera, even when loaded into Resolve on a 1080p project, has the same chroma latitude and info as a 422? AKA, would a true 444 out of the F3 actually key better?

          PS: how about the Alexa? It does record 444 out of the 2.8K sensor... are you saying that the info coming off the debayer is less than 444?

          Sorry if I misunderstood.

          Thanks,
          g

          Comment


          • #35
            Are you saying that the 2.5K Bayer pattern of the BMC camera, even when loaded into Resolve on a 1080p project, has the same chroma latitude and info as a 422?
            Seems like a simple question, but leads to a complex answer. It depends on what sensor made the 1080p image! But let's assume for the moment that we're talking about a true three-chip 1080p camera, so there's true color info in all of the 4:2:2 decimation, that would give you chroma resolution of 960x1080. Or, per frame, 1,036,800 color samples.

            The BMC has 1200x675 chroma samples from its sensor. If you debayered and stored in ProRes 444, you could preserve all that chroma, and would have 810,000 chroma samples. So no, it's not possible to have as much chroma resolution (even with ProRes 444) as a true 4:2:2 camera would deliver.

            Now, when shooting ProRes 422, it's a little worse actually, because your 1200x675 will get scaled down to fit into 960x1080, so the net result will be 960x675, for a grand total of 648,000 chroma samples.

            The way to preserve the most chroma information is to avoid going to a 4:2:2 codec. Keep it raw, or transcode to a 4:4:4 codec, and you'll have the most chroma the camera can give you.
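
            For anyone who wants to sanity-check those sample counts, here is a rough back-of-the-envelope sketch in Python (the 1200x675 chroma figure comes from the reasoning above, not from any official spec):

            def chroma_samples(width, height):
                # one colour (Cb/Cr) sample per chroma site
                return width * height

            three_chip_422 = chroma_samples(1920 // 2, 1080)  # 960 x 1080 = 1,036,800
            bmc_raw_444    = chroma_samples(1200, 675)        # 1200 x 675 =   810,000
            bmc_prores_422 = chroma_samples(960, 675)         # 960 x 675  =   648,000

            print(three_chip_422, bmc_raw_444, bmc_prores_422)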

            AKA, would a true 444 out of the F3 actually key better?
            No, because the F3 doesn't deliver true 444 either. It's a single-sensor Bayer, and will be subject to the same limitations. It will probably key very very similarly to the BMC. The F3 claims 3.36 megapixels, and the BMC has 3.32 megapixels. They should key basically exactly the same.

            How about the Alexa? It does record 444 out of the 2.8K sensor... are you saying that the info coming off the debayer is less than 444?
            Yes. And no. In terms of pure color sampling, the Alexa cannot deliver a true sampled 444. It can't. It has 1440 red and 1440 blue pixels. It could make 444 at 1440x810, but it cannot make a true sampled 444 at 1920x1080. There just isn't enough actual color information on the sensor to deliver a true, each-and-every-pixel-gets-its-own-sample, 1920x1080.

            However, the F3 and Alexa almost certainly employ some manner of chroma up-res to fill out the missing information and create an image that stores 444 image data. It will have been digitally up-rezzed, not individually pixel-sampled, so it will be an upconverted 444. And there's nothing stopping the BMC from doing the same. In fact, that's basically what the debayer process is: converting the subsampled pixel array into full-resolution, full-color.

            The Alexa can deliver much higher color resolution than the F3 or BMC because the Alexa has a LOT more pixels on its chip, and that means a lot more red and blue pixels, so it can deliver a higher-res color image. But it would need a full 3840-pixel-wide chip to deliver a true, honest 4:4:4 color signal. The Canon C100/C300/C500 can do it, because they have chips designed to provide an individual red, green, and blue pixel for every spot in the HD frame. And a Red 4K camera could do it too: you could get true 4:4:4 1080p out of a Red if you chose to interpret it through quad-pixel color sampling for each pixel, rather than through debayering.
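
            To make the quad-pixel idea concrete, here is a toy Python/NumPy sketch that collapses a hypothetical 3840x2160 RGGB mosaic 2x2 into a 1920x1080 image where every output pixel gets its own measured red, green and blue; a real camera pipeline (and a real debayer) is of course far more sophisticated than this:

            import numpy as np

            def quad_sample_rggb(mosaic):
                # Turn each 2x2 RGGB quad into one RGB pixel:
                # R from the top-left site, B from the bottom-right,
                # G as the average of the two green sites.
                r  = mosaic[0::2, 0::2]
                g1 = mosaic[0::2, 1::2]
                g2 = mosaic[1::2, 0::2]
                b  = mosaic[1::2, 1::2]
                return np.dstack([r, (g1 + g2) / 2.0, b])

            mosaic = np.random.rand(2160, 3840)   # stand-in for 4K sensor data
            rgb = quad_sample_rggb(mosaic)        # -> (1080, 1920, 3), true RGB per pixel
            print(rgb.shape)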

            Comment


            • #36
              I think if the camera were to take the 12-bit raw sensor data, do an in-camera debayer and repack it to ProRes 4444, we would be able to obtain even higher quality than the 1920x1080 ProRes 422 HQ repack of the raw sensor data.

              This is just pure speculation on my part though.
              Dan Kanes
              www.dankanes.com

              Comment


              • #37
                Alexa raw pixel definition: 2880x2160
                BMC raw pixel definition: 2432x1366

                That is quite a bit of difference... I wonder if there are a "few" more usable pixels that could be scanned at 23.98fps tops, maybe?

                I would guess yes...
                Dan Kanes
                www.dankanes.com

                Comment


                • #38
                  Originally posted by DanKanes View Post
                  I think if the camera were to take the 12-bit raw sensor data, do an in-camera debayer and repack it to ProRes 4444, we would be able to obtain even higher quality than the 1920x1080 ProRes 422 HQ repack of the raw sensor data.

                  This is just pure speculation on my part though.
                  If they did prores4444 in-camera, yes, I would assert that they could indeed deliver higher quality in-camera than going to Prores 422HQ. They would be able to preserve all 1200 color samples horizontally, rather than having to subsample them down to 960 horizontally.

                  Probably won't make a HUGE difference in image quality, but it would retain a little more color resolution. Well, more than a little, it'd be 25% more.

                  Comment


                  • #39
                    If anything, I'm going to try using the film log LUT in Resolve, then export to 2.5K 12-bit ProRes 444 and see how it compares to raw, relative to how the 422 HQ does.

                    Comment


                    • #40
                      Barry, the depth of your posts here is always a revealing experience.

                      Comment


                      • #41
                        Originally posted by DanKanes View Post
                        I think if the camera were to take the 12-bit raw sensor data,

                        Dan, it's actually 16-bit LINEAR sensor data, recorded into 12-bit LOG DNG, then unpacked back to 16-bit.

                        jb
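
                        To illustrate why 12 log-encoded bits can carry a 16-bit linear signal, here is a rough Python sketch using a plain log2 curve (purely an assumption for illustration; it is not Blackmagic's actual transfer function or DNG packing):

                        import numpy as np

                        def lin16_to_log12(x):
                            # map 0..65535 linear values onto 0..4095 log code values
                            return np.round(np.log2(x + 1.0) / 16.0 * 4095.0).astype(np.uint16)

                        def log12_to_lin16(code):
                            # invert the mapping back to linear
                            return np.exp2(code / 4095.0 * 16.0) - 1.0

                        lin = np.array([0.0, 100.0, 1000.0, 65535.0])
                        codes = lin16_to_log12(lin)
                        print(codes, log12_to_lin16(codes))   # round-trips to roughly the original values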

                        Comment


                        • #42
                          Originally posted by DanKanes View Post
                          Alexa raw pixel definition: 2880x2160
                          BMC raw pixel definition: 2432x1366

                          That is quite a bit of difference... I wonder if there are a "few" more usable pixels that could be scanned at 23.98fps tops, maybe?

                          I would guess yes...
                          Interesting observation, given that if the BMCC were able to utilize the entire sensor at some point in the future (with about 2192 lines of resolution), that would match the Alexa.

                          Comment


                          • #43
                            OK, so with all the technical numbers being presented, I have to ask: how much is perceivable to the human eye and how much is overkill? How much is "good enough"? I have to say that the knowledge offered here is insane, and I am amazed as well as enlightened by the comments, especially from Barry (because things aren't always what you think, with labels like 4:2:2 not actually being 100% true). But at the same time, I can't help wondering if any of it really is noticeable to the human eye, and I can't help feeling that after a certain point we simply can't notice. I guess my question is: at what point do we stop worrying about the numbers and concentrate on the art, because (possibly after a certain point) the numbers really won't produce noticeable differences? I have to say, it's really hard sometimes to even know what something was shot on.
                            Paul Del Vecchio - Director/Producer
                            http://www.PaulDV.net
                            https://vimeo.com/channels/directorpauldv
                            http://www.youtube.com/user/pdelvecchio814
                            http://www.twitter.com/pauldv

                            https://www.facebook.com/pages/Paul-...io/58731646898

                            Comment


                            • #44
                              Originally posted by PaulDelVecchio View Post
                              OK, so with all the technical numbers being presented, I have to ask: how much is perceivable to the human eye and how much is overkill? How much is "good enough"? I have to say that the knowledge offered here is insane, and I am amazed as well as enlightened by the comments, especially from Barry (because things aren't always what you think, with labels like 4:2:2 not actually being 100% true). But at the same time, I can't help wondering if any of it really is noticeable to the human eye, and I can't help feeling that after a certain point we simply can't notice. I guess my question is: at what point do we stop worrying about the numbers and concentrate on the art, because (possibly after a certain point) the numbers really won't produce noticeable differences? I have to say, it's really hard sometimes to even know what something was shot on.
                              There are two benchmarks.

                              The first is the human eye, which is simultaneously completely amazing and totally fallible, and not to be trusted.

                              The other benchmark is pure hard mathematics: if something's not there, then there's no chance of mathematically adding it later.

                              This usually translates for me as...

                              Looks great out of camera to the eye (a 5D).

                              Looks terrible once you try to grade it into something other than what it is (a 5D).

                              I think you often see people mistaking BENCHMARK 1 when it comes to resolution. Like in SD TV: why the hell would you shoot on 35mm when it's only 625/525 lines? Or for the web, for that matter.


                              jb

                              Comment


                              • #45
                                Human vision is the reason chroma subsampling is so widely and unabashedly used in compression. We have lower acuity for colour than for luminance. We're also very good at recognising things regardless of quality level or clarity.

                                In general, lossy compression is all about how humans perceive things. We filter out a lot of crap ourselves, and lossy compression generally tries to filter out those same, perceptually irrelevant layers of information to save space.
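
                                As a minimal sketch of what a 4:2:0 encoder actually throws away, the Python below keeps luma at full resolution and averages each chroma plane over 2x2 blocks; real encoders use proper filtering and chroma siting, but the discarded information is the same kind of thing:

                                import numpy as np

                                def subsample_420(y, cb, cr):
                                    # keep luma untouched, average each chroma plane 2x2
                                    def down2(c):
                                        return (c[0::2, 0::2] + c[0::2, 1::2] +
                                                c[1::2, 0::2] + c[1::2, 1::2]) / 4.0
                                    return y, down2(cb), down2(cr)

                                y, cb, cr = (np.random.rand(1080, 1920) for _ in range(3))
                                y2, cb2, cr2 = subsample_420(y, cb, cr)
                                print(cb2.shape)   # (540, 960): one chroma sample per 2x2 luma block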

                                Comment
