24 mm lens with 0.64 Speed Booster, the math


  • 24 mm lens with 0.64 Speed Booster, the math

    Hi

    Here we go, another "crop factor, my mind is exploding" thread. I'm used to my full-frame Canon, and it's quite easy to take its lens values into a 3D program to match, as all the numbers there are based around full frame. What would be the correct focal length calculation for my Blackmagic Pocket Cinema Camera using a 24mm full-frame lens and a Speed Booster XL 0.64?

    In the 3D program I have my lens information, and the camera has a setting for the film aperture in mm; it defaults to full-frame 35mm, using 36 x 24mm.

    I enter 18.96 x 10mm for my Pocket Cinema Camera 4K, as that seems to be the sensor size found on the spec sheet.

    Would the formula for calculating the lens be 24 x 0.64 = 15.36,
    or 24 x 2 x 0.64 = 30.72, given that I already defined my sensor size?
    Peter A

    Meshmen Studio Youtube channel

  • #2
    If you're in a 3D space and trying to wrangle perspective, a 24mm lens is a 24mm lens, regardless of the sensor it's projecting an image onto. A speed booster (focal reducer) does not change the focal length of a lens; it just squeezes the image circle to a smaller radius so the sensor sees more of the native image. "Crop factor" is just marketing jargon created by camera companies to simplify the difference between their digital sensors and their film cameras. Yes, the sensor is cropped, but lenses remain the same. They never clarified that.

    The BMPCC4K has a crop factor of 2 compared to 35mm. 0.64x2 means it will have a 1.28x crop factor. Just multiply the width and height by 1.28.

    I don't know why they say it has a 1.9 crop factor... the diagonal is half the size of the 35mm diagonal.
    Last edited by GeranSimpson; 06-25-2019, 02:45 PM.

    Comment


    • #3
      OK, so if I have two variables to edit in my 3D program, lens and aperture, should I edit the camera's aperture when using my focal reducer? Is the reducer, in a sense, rescaling the sensor? I know it's not actually scaling it, but it gives the impression of a bigger sensor?

      Do you mean the sensor size part would be 24.268 x 12.8 in my case?

      I think the 1.9 comes from dividing 36 by 18.96 = 1.89873417721519


      What I'm aiming to do is insert a CGI object into a plate, and I want to match the perspective.
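A quick Python check of that division (a sketch; the boosted figure just applies the same 0.64 on top of the native crop):

```python
ff_width = 36.0        # full-frame sensor width, mm
p4k_width = 18.96      # BMPCC4K sensor width, mm (from the spec sheet)

native_crop = ff_width / p4k_width     # ~1.899 -- the quoted "1.9"
boosted_crop = native_crop * 0.64      # ~1.215 with the 0.64x Speed Booster
print(round(native_crop, 3), round(boosted_crop, 3))  # 1.899 1.215
```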
      Last edited by Meshman; 06-26-2019, 12:24 AM.
      Peter A

      Meshmen Studio Youtube channel

      Comment


      • #4
        Originally posted by GeranSimpson View Post
        I don't know why they say it has a 1.9 crop factor... the diagonal is half the size of the 35mm diagonal.
        It depends on what your delivery aspect ratio is gonna be, that's how you can end up with different crop factors. The aspect ratio mismatch between the formats plays into this.

        In 16:9, FF35's diagonal is 41.3mm. The Pocket 4K's diagonal is 20.4mm. Crop factor = 41.3/20.4 = 2.02.
        In 2.39:1, FF35's diagonal is 39.03mm. The Pocket's diagonal is 20.55mm. Crop factor = 39.03/20.55 = 1.9.
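Those diagonals can be reproduced with a small Python sketch (assuming each format crops the 36 x 24 and 18.96 x 10 sensors to the delivery aspect ratio):

```python
import math

def crop_diagonal(sensor_w, sensor_h, aspect):
    """Diagonal (mm) of the largest crop of a sensor at a given aspect ratio."""
    if sensor_w / sensor_h > aspect:
        sensor_w = sensor_h * aspect   # sensor is wider than target: crop width
    else:
        sensor_h = sensor_w / aspect   # sensor is taller than target: crop height
    return math.hypot(sensor_w, sensor_h)

for aspect in (16 / 9, 2.39):
    ff = crop_diagonal(36.0, 24.0, aspect)      # full-frame 135
    p4k = crop_diagonal(18.96, 10.0, aspect)    # BMPCC4K
    print(f"{aspect:.2f}:1  FF={ff:.2f}mm  P4K={p4k:.2f}mm  crop={ff / p4k:.2f}")
```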
        Aaron Lochert

        Comment


        • #5
          CGI cameras are simply based on real cameras. But you need to define it, or know how the virtual camera is defined, to understand and predict how it'll behave.

          So the first question would be, what type of virtual camera is it? In other words, is it based on Super 35, 135, Super 16 etc?

          Once you know that, then you can understand how that will relate to the tools you have in the real world. Because even though a 24mm lens is a 24mm lens is a 24mm lens, the resultant FOV will change depending on the camera system it's designed for.

          Comment


          • #6
            Sure, I know this; it's the speed booster part that is confusing. If, in Maya, I set a 24mm lens using the default full-frame 36 x 24mm film back, it will match my Canon 5D Mk III. Now, if I set the camera to the Pocket's sensor size of 18.96 x 10mm with the same lens on a dumb adapter, it would probably match as well (I can't test this). But once I insert my speed booster into the mix, Maya has nothing to compensate for it other than the lens and camera sensor size. Would you alter the sensor size or the lens in mm? I would think the camera's sensor size, right?



            Originally posted by trispembo View Post
            CGI cameras are simply based on real cameras. But you need to define it, or know how the virtual camera is defined, to understand and predict how it'll behave.

            So the first question would be, what type of virtual camera is it? In other words, is it based on Super 35, 135, Super 16 etc?

            Once you know that, then you can understand how that will relate to the tools you have in the real world. Because even though a 24mm lens is a 24mm lens is a 24mm lens, the resultant FOV will change depending on the camera system it's designed for.
            Last edited by Meshman; 06-26-2019, 01:23 AM.
            Peter A

            Meshmen Studio Youtube channel

            Comment


            • #7
              It's all simple math. You can change the lens's focal length OR change the sensor size to accommodate the speed booster; they will yield identical results. The key is not doubling up and doing both.

              A 24mm lens on a 29.63 x 15.63mm sensor OR a 15.36mm lens on an 18.96 x 10mm sensor will yield an identical AOV.

              There's a calculator here

              https://www.pointsinfocus.com/tools/depth-of-field-and-equivalent-lens-calculator/#{"c":[{"av":"8","fl":24,"d":7000,"cm":"0"}],"m":1}
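The equivalence can also be checked directly with a simple pinhole angle-of-view formula (a sketch using the figures above; the 0.64 is applied to the sensor in one case and the focal length in the other):

```python
import math

def aov_deg(sensor_mm, focal_mm):
    """Angle of view (degrees) across one sensor dimension, pinhole model."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Option A: keep the 24mm focal length, scale the sensor up by 1/0.64
option_a = aov_deg(18.96 / 0.64, 24.0)        # 29.625mm virtual sensor width
# Option B: keep the real sensor, scale the focal length down by 0.64
option_b = aov_deg(18.96, 24.0 * 0.64)        # 15.36mm effective focal length
print(option_a, option_b)  # identical angles; apply the booster once, never twice
```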

              Comment


              • #8
                Cool, that kind of makes sense. I'll do a practical test with some real-world known object, recreate the setup, and match it in 3D.

                Thanks Howie
                Peter A

                Meshmen Studio Youtube channel

                Comment


                • #9
                  Back at this again. I recreated my setup in 3D with exact measurements. I filmed a cutting mat with a grid on top of a tabletop. I also photo-scanned the set with my subject; the Pocket 4K camera is in my scan, so I know exactly where the camera was in 3D space relative to my subject. I aligned the scan to my 3D set build.

                  If I place my 3D camera where the Pocket 4K body is in the scan, I get a mismatch. My camera sensor is 18.96 x 10 and my lens is 24mm, but my setup has a Speed Booster XL 0.64, so the formula would be 24 x 0.64, which results in a 15.36mm lens in my 3D program, Maya. Now, if I move my 3D camera slightly towards the end of the lens, my scene seems to line up better. I wonder if the speed booster's magnification is somehow involved. Not really sure what is going on here.
                  Last edited by Meshman; 07-13-2019, 06:49 AM.
                  Peter A

                  Meshmen Studio Youtube channel

                  Comment


                  • #10
                    Camera position and sensor plane are both approximate guesses for the optical center (the center of perspective projection), but it's not that simple because, well, life isn't that simple.
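A toy pinhole-projection sketch (my own illustration, not from any paper) of why the perspective center matters: nudging the pinhole forward by a few centimeters visibly changes where a nearby point lands on the image plane, which is consistent with the slight mismatch described above.

```python
def project_x(point_x_mm, point_z_mm, focal_mm, cam_z_mm=0.0):
    """x-coordinate (mm) of a point's image in a simple pinhole camera at cam_z."""
    return focal_mm * point_x_mm / (point_z_mm - cam_z_mm)

# A point 1m away and 100mm off-axis, through the 15.36mm effective focal length
at_body = project_x(100.0, 1000.0, 15.36)                   # pinhole at camera body
at_pupil = project_x(100.0, 1000.0, 15.36, cam_z_mm=50.0)   # pinhole 50mm forward
print(at_body, at_pupil)  # the 50mm offset shifts the projection of near objects
```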

                    Here's a paper where you can read more about it.
                    Aaron Lochert

                    Comment


                    • #11
                      I guess adding a magnifying glass to the mix raises the level of complexity a notch. I'll test with a native full-frame lens combo and see if I get a closer match when I don't use the speed booster. Thanks for the link; I'll have a read-through.
                      Peter A

                      Meshmen Studio Youtube channel

                      Comment


                      • #12
                        If you modeled a set from a scan and then try to add a camera afterwards, it's never going to line up with the footage. Typically you take photos from the camera's perspective, solve for world alignment with a perspective app like fSpy, then start building the set on top of your reference image while looking through the 3D camera. Or build off a camera-solved 3D track. Either way, you want to build the set looking through the 3D camera viewport with the footage as a background image sequence for reference.

                        Comment


                        • #13
                          Building my set against the scan works fine. It's just that when I use the speed booster, my 3D camera ends up slightly in front of my set camera when it matches what is in my plate. I wonder if it has to do with the booster's magnification; it's not a big difference.
                          Peter A

                          Meshmen Studio Youtube channel

                          Comment


                          • #14
                            Why is it important for the 3d camera to be where the scanned camera is? Didn't you say earlier the goal is to composite onto footage?

                            Comment


                            • #15
                              If everything is modeled to scale, it's important. If all the variables matched and I placed my 3D camera where the physical camera was, it should match reasonably well. It almost does now, but I'm curious about the slight mismatch when I use the Pocket lens and speed booster combo. It's my first time using a speed booster.
                              Peter A

                              Meshmen Studio Youtube channel

                              Comment
