View Full Version : Comparing DOF and SNR between sensor sizes.



Andrew
09-11-2012, 05:11 PM
I'm carrying over a discussion from another thread so as not to derail it any further. Basically, I wanted to know whether it's beneficial to use a smaller-format sensor in a low-light situation where you want to maintain a certain amount of dof. With the smaller format you can have the lens more open, while with the larger format you'd have to stop down to get equivalent dof. The trade-off is that smaller sensors are generally less sensitive.

This made me very curious so I did some basic research. I'll keep it simple and let anyone interested in getting more specific about it research it further.

First, I thought DXO sensor ratings would be a great place to figure out normalized signal-to-noise ratios for various sensor sizes. Luckily, someone WAY smarter than me already did the research. In this post he shares a chart showing the theoretical snr difference between different formats.

http://www.pentaxforums.com/forums/photography-articles/129754-comparison-snr-18-across-formats.html#post1347917

With those figures in mind, I used a DOF calculator to compare different formats. I'm assuming the cameras are in the same location with a matched fov, and I put in different f-stop numbers to match the dof. By the way, if you've matched all of those things, you basically have the exact same image. Many people seem to think that's not the case, assuming the wider lens used to match the larger format will look "wide angle".

Anyway, it's not exact, but with the samples I tried it was pretty close. If I had to stop down one stop to match dof between different-sized sensors, then based on the data from dxo the larger sensor had about a one-stop better snr.

MFT to FF has a 2-stop snr difference, and sure enough, to get a similar dof with the FF setup as with MFT you would stop down two stops. (Also interesting: the MFT crop factor compared to FF is 2x. Read in the wiki link below about how dof is inversely related to format size.)
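That equivalence can be sketched in a few lines of Python. The helper below is purely illustrative (the function name, crop factors, and lens values are made-up examples, not tied to any specific camera): scale focal length and f-number by the ratio of crop factors to keep framing and dof, and note that the aperture gap in stops matches the theoretical snr gap between the formats.

```python
import math

def equivalent_settings(focal_mm, f_stop, crop_from, crop_to):
    """Map a lens/aperture on one format to the settings that give the same
    framing and depth of field on another format (same camera position,
    matched field of view)."""
    scale = crop_from / crop_to          # e.g. MFT (2.0) to FF (1.0) gives 2.0
    return focal_mm * scale, f_stop * scale

# MFT 25mm f/1.4 and its full-frame match
focal, f = equivalent_settings(25, 1.4, crop_from=2.0, crop_to=1.0)
print(f"FF equivalent: {focal:.0f}mm f/{f:.1f}")   # FF equivalent: 50mm f/2.8

# 2x crop factor -> 4x sensor area -> a theoretical 2-stop snr difference
snr_gap_stops = 2 * math.log2(2.0)
```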

This was a pretty basic way to look at it all, so maybe there are situations where it's not so clean, but overall it's pretty interesting how closely these two things are tied together.

With this in mind, I don't really see a benefit to choosing the smaller sensor when you have the choice, assuming you have the right lenses for each format. The larger format gives you more options. You can stop down, improve the performance of your lenses by doing so, and still have roughly the same low-light performance.

This will of course vary between individual cameras and sensors. For instance, a large older-generation sensor vs. a small newer-generation sensor may not work out as seen here, but two different-sized sensors of the same technology and generation should.

Other interesting reads and findings:

http://en.wikipedia.org/wiki/Depth_of_field#DOF_vs._format_size

It's interesting that f-numbers are relative, but with matched absolute aperture diameters you'll have the same dof regardless of format size.
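A quick back-of-envelope check of the absolute-aperture point (illustrative values, assuming entrance-pupil diameter = focal length / f-number): the matched-fov, matched-dof pair discussed in this thread lands on the same physical aperture diameter.

```python
def pupil_diameter_mm(focal_mm, f_number):
    # Absolute aperture (entrance pupil) diameter = focal length / f-number
    return focal_mm / f_number

# Matched-fov, matched-dof pair: MFT 40mm f/2.8 vs FF 80mm f/5.6
mft = pupil_diameter_mm(40, 2.8)
ff = pupil_diameter_mm(80, 5.6)
print(round(mft, 2), round(ff, 2))   # 14.29 14.29 -> same absolute aperture
```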

Am I bashing the BMC sensor size? Not at all! I can't wait to shoot with it. This was just stupid curiosity about how this all relates.

rawCAM35
09-11-2012, 07:11 PM
The resolving power of today's lenses can't keep up with the higher-megapixel sensors. The pixel count keeps getting bigger and the photosites keep getting smaller, so lenses are having a hard time resolving down to those photosites. But any lens can easily resolve to a large sensor with a lower pixel count and large photosites, and better lens resolving power means higher light gain and better SNR.

Andrew
09-11-2012, 08:26 PM
The resolving power of today's lenses can't keep up with the higher-megapixel sensors. The pixel count keeps getting bigger and the photosites keep getting smaller, so lenses are having a hard time resolving down to those photosites. But any lens can easily resolve to a large sensor with a lower pixel count and large photosites, and better lens resolving power means higher light gain and better SNR.

Interesting. That's another reason why stopping the lens down a bit on the larger sensor to match the dof of a wide-open lens on a smaller sensor makes sense. Not many lenses are optimally sharp wide open. Of course this could go the other way, where the smaller format has reached the desired amount of dof at an optimal aperture while the larger format has to stop down past its optimal aperture and diffraction sets in. Not really a real-world concern of mine, though.

Yeah, lens design and technology just won't be able to keep up with new sensor tech. Luckily for us, it won't matter too much. Crazy good glass already exists, so even if it can't fully resolve an outrageously high megapixel count, who cares? Imo, after a certain level of resolution has been reached, I'm good. I do wonder if we'll see sensor designs that pull back on the mp count once we've reached an acceptable level for 4k with a decent amount of oversampling, or if we'll just keep going to 8, 16, 32, and 64k. I guess we can look at the photography side of things to get an idea of the future of digital cinema.

morgan_moore
09-12-2012, 01:57 AM
I guess you can catch more rain with a bigger bucket

My Hassy 80/2.8 has a front element the size of a 50 1.4 or larger ff35 lens

So if you compare my 645 at f/2.8 to a 50/2 on FF35, the front element is way larger.
The photosites are larger too, if the 645 cam is the one that was 16mp.

My feeling is that a 16mp 645 camera would be a low-light king for a given DOF, but I'm not sure

I think we need to understand the relationship between sensor image circle and F/T stop

S

nickjbedford
09-12-2012, 03:34 AM
Sensitivity is less about sensor size than it is about the size of the sensor photo sites themselves.

Part of what contributes to the BMDCC's dynamic range is the size of its photosites. They rival the 5D Mark II & III (approx. 6.5um). My 7D & 60D and other Canons using the same 18mp APS-C sensor have photosites of approx. 4.5um. That means less dynamic range and less sensitivity.

The main reason the BMDCC isn't a great low-light camera is that its resolution isn't high enough to get considerable antialiasing compared to 35mm and full-frame cameras.

The 5D Mark III, for example, has large photosites (around the same size as the BMDCC's) but is also over 5K in resolution due to its sheer size. That resolution creates a stunningly antialiased image in most normal viewing situations.

razz16mm
09-12-2012, 03:44 AM
IMO you want enough resolution in the sensor for the lens to be the limiting factor, not aliasing limits. That means optimally at least a 2x oversample of your release format. Pixel size is one factor, but light-gathering power scales with total sensor size. Everything else being equal, if you double the linear dimensions of a sensor it will have 4 times the total area and light-gathering power at the same f-stop as the smaller sensor, i.e. a 2-stop difference in sensitivity.
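The area arithmetic above can be written out directly (a minimal sketch; the function name is made up for illustration):

```python
import math

def stops_from_linear_scale(scale):
    """Sensitivity advantage, in stops, from scaling a sensor's linear
    dimensions by `scale`; area (and photon count at a fixed f-stop)
    scales with scale squared."""
    return math.log2(scale ** 2)

print(stops_from_linear_scale(2))   # 2.0 -> double the dimensions, 2 stops more light
```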

Very interesting ARRI white paper on film vs digital resolution.

http://www.efilm.com/publish/2008/05/19/4K%20plus.pdf

Andrew
09-12-2012, 11:25 AM
Pixel size is one factor, but light-gathering power scales with total sensor size. Everything else being equal, if you double the linear dimensions of a sensor it will have 4 times the total area and light-gathering power at the same f-stop as the smaller sensor, i.e. a 2-stop difference in sensitivity.

Very interesting ARRI white paper on film vs digital resolution.

http://www.efilm.com/publish/2008/05/19/4K%20plus.pdf

Thanks for the link.

It's not uncommon to hear that sensor size has nothing to do with light-gathering ability, but it definitely does. The size of the photosites does factor in, and things like gapless microlenses can help, but you're right.

Your 4-times-total-area example pretty much illustrates the difference between MFT and 135. MFT has a 2x crop factor compared to FF 135. That's 4 times the surface area, and lo and behold (based on DXO sensor ratings), FF sensors on average have about 2 stops (4x) better snr than MFT.
http://www.pentaxforums.com/forums/photography-articles/129754-comparison-snr-18-across-formats.html#post1347917

I'm still in awe of how the sensor size, snr, and dof match up: if you have an MFT camera with a 40mm lens at 2.8 and a 135 camera with an 80mm lens at 5.6, you'll have pretty close to the same fov, dof, and snr. I didn't expect these things to be so closely matched.

Taking a look at the various sized sensors available on the market, it would seem that manufacturers have sized them to be about a stop apart.

rick.lang
09-13-2012, 01:08 PM
I'm still in awe of how the sensor size, snr, and dof match up: if you have an MFT camera with a 40mm lens at 2.8 and a 135 camera with an 80mm lens at 5.6, you'll have pretty close to the same fov, dof, and snr. I didn't expect these things to be so closely matched.

A simple rule of thumb seems to work, and it's the sensor size that is the key. It's easy to compare MFT and 135 since the multiplier is 2. You first want to match FOV, so a 40mm lens on MFT gives the same FOV as 80mm on full-frame 135 (=80/2), and you knew that. Then, to make the DOF equivalent on the smaller MFT frame, the multiplier tells you to open up the iris by 2 stops. And since opening the lens gathers more light, the SNR is now equivalent, given the size of the photon-gathering buckets is similar.

Andrew
09-13-2012, 02:44 PM
A simple rule of thumb seems to work, and it's the sensor size that is the key. It's easy to compare MFT and 135 since the multiplier is 2. You first want to match FOV, so a 40mm lens on MFT gives the same FOV as 80mm on full-frame 135 (=80/2), and you knew that. Then, to make the DOF equivalent on the smaller MFT frame, the multiplier tells you to open up the iris by 2 stops. And since opening the lens gathers more light, the SNR is now equivalent, given the size of the photon-gathering buckets is similar.

Thanks Rick. I didn't know it before my research the other day, but you're right there does seem to be a pretty simple rule of thumb to use concerning chip size, dof, and snr.

Deggen
09-13-2012, 05:49 PM
@Andrew

Thanks so much for the post. Brilliantly written and very helpful and interesting.

- Darren

nyvz
09-13-2012, 11:06 PM
Sensitivity is less about sensor size than it is about the size of the sensor photo sites themselves.


That is generally not true, from what I understand. Sensitivity is directly proportional to sensor size, though other factors confound the issue. What's more, the common belief that larger photosites mean more sensitivity for the same sensor size is a misconception*. Just think about it in the simplest terms:

Take for example a FF35 camera compared to the BMCC: the BMCC has an active sensor area of 137 sq mm and the FF35 camera around 720 sq mm, so over 5x the area. Shooting the same subject at the same f-stop, over 5x as many photons hit the FF35 sensor. That gives the FF35 camera a huge sensitivity advantage, about 2.5 stops.
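As a sanity check on those numbers (the areas are taken from the post, not measured independently), the arithmetic works out to roughly 2.4 stops, in line with the "about 2.5" estimate:

```python
import math

# Active sensor areas quoted above, in sq mm
bmcc_area = 137.0
ff35_area = 720.0

ratio = ff35_area / bmcc_area   # ~5.3x the light at the same f-stop
stops = math.log2(ratio)        # ~2.4 stops
print(f"{ratio:.1f}x area, {stops:.1f} stops")
```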

* http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#pixelsize

nickjbedford
09-14-2012, 01:59 AM
That is generally not true, from what I understand. Sensitivity is directly proportional to sensor size, though other factors confound the issue. What's more, the common belief that larger photosites mean more sensitivity for the same sensor size is a misconception*. Just think about it in the simplest terms:

Take for example a FF35 camera compared to the BMCC: the BMCC has an active sensor area of 137 sq mm and the FF35 camera around 720 sq mm, so over 5x the area. Shooting the same subject at the same f-stop, over 5x as many photons hit the FF35 sensor. That gives the FF35 camera a huge sensitivity advantage, about 2.5 stops.

* http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#pixelsize

There may be five times the area, sure, but what I'm saying is that each photo site is the same size. So per pixel, you technically are getting the same size buckets to collect photons. But in the final resulting image, you are using 5x more pixels to generate the same 1080p (or whatever) image.

That's why you get the dynamic range, but not the super anti-aliasing of larger sensors. My 60D has a larger surface area than the BMDCC, but the BMDCC has better dynamic range due to its larger photosites (more than 40% larger per pixel). But the 60D is 18mp and has far superior anti-aliasing ability when comparing Canon raw photos to raw CinemaDNGs at a certain comparison size (say, 2,048 pixels wide).

So in the BMDCC we get high dynamic range and some antialiasing, but in higher end cameras you get both.

Tom
09-14-2012, 02:44 AM
That is generally not true, from what I understand. Sensitivity is directly proportional to sensor size, though other factors confound the issue. What's more, the common belief that larger photosites mean more sensitivity for the same sensor size is a misconception*. Just think about it in the simplest terms:

Take for example a FF35 camera compared to the BMCC: the BMCC has an active sensor area of 137 sq mm and the FF35 camera around 720 sq mm, so over 5x the area. Shooting the same subject at the same f-stop, over 5x as many photons hit the FF35 sensor. That gives the FF35 camera a huge sensitivity advantage, about 2.5 stops.

* http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#pixelsize


Nick is quite correct. If you took, say, the Canon 5D MkII sensor and cut off the edges so that it's the same size as the BMCC sensor (let's pretend the sensor would still function :-p ), its sensitivity would not change. It is the size of the photosites and their efficiency which is key. Larger sensors just mean that (unless the MP count goes up too far) the photosites themselves are generally also large. With the BMCC, even though the sensor is much smaller than FF, because it has a lower resolution the photosite size is very close to that of the MkII (in fact the BMCC photosites are slightly larger!). This is not the only thing which affects SNR, of course; internal NR, how the camera handles the data, raw processing, and resolution can all affect the SNR too. But mainly, my point here is that if you squashed 60 megapixels' worth of photosites into a FF sensor, compared to a 20-megapixel FF sensor of equal photosite design, the 20mp version would have a better SNR. Ergo, it is the photosite size, not the sensor size, which is the key.

Andrew
09-14-2012, 03:34 AM
Ergo, it is the photosite size, not the sensor size, which is the key.

Hi Tom,

You're right that the signal-to-noise ratio of an individual pixel may increase as the size of the pixel increases, but by looking only at individual pixels you literally aren't seeing the big picture. :)

Have you read all the posts in the thread? Did you read the link in the post you quoted from nyvz?

I'm not trying to get after you, what you're saying is a very common thing to hear, but I'd be curious to know if you still have the same opinion and why after reading through the thread or doing some google research. :)

nyvz
09-15-2012, 09:26 AM
Nick is quite correct. If you took, say, the Canon 5D MkII sensor and cut off the edges so that it's the same size as the BMCC sensor (let's pretend the sensor would still function :-p ), its sensitivity would not change.

By your logic, 16mm film would not be grainier than 35mm since the emulsion is the same. If you've used RED cameras, you'll easily see that 5K has less noise than 4K, and 2K has about 2 stops more noise than 4K, even though they are all taken from the same sensor with the same size photosites and the only thing that changes is the active area (effective sensor size).

You are not understanding how sensitivity works. You can't look at only one pixel. You seem to be saying that a 10x10 pixel array only a few micrometers across would have the same sensitivity as the BMCC as long as it has the same photosites. That simply doesn't work for comparing imagers; maybe it works for single photosites, but that is irrelevant since images come from arrays, not single photosites. SNR is entirely dependent on noise, and noise is not the same for one pixel derived from one photosite as it is for one pixel derived from multiple photosites. If you bin or downscale 4 photosites into one pixel, you get 2 stops more sensitivity than if you had taken just one photosite per pixel, because the signal adds linearly (4x for four photosites) while the uncorrelated noise from each photosite only adds in quadrature (2x), so the SNR doubles.
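The binning arithmetic can be made concrete with a shot-noise-limited sketch (the photon count is hypothetical, and real sensors add read noise and other terms on top of this):

```python
import math

# Shot-noise-limited sketch: when binning photosites, signal adds linearly
# while uncorrelated noise adds in quadrature (root-sum-square).
signal = 1000.0                      # hypothetical mean photon count per photosite
noise = math.sqrt(signal)            # shot noise scales as sqrt(signal)

n = 4                                # bin a 2x2 block into one output pixel
binned_signal = n * signal           # 4x the photons: 2 stops more light
binned_noise = math.sqrt(n) * noise  # quadrature sum of 4 equal, uncorrelated noises

print(binned_signal / binned_noise / (signal / noise))   # 2.0 -> SNR doubles
```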

Do you not at least see how the same image formed from an image circle collecting 5x as many photons would give you more sensitivity? Clearly sensor size has a very important role.

Did you read the article? It shows from testing that the number of megapixels in a camera has essentially little to no bearing on imager sensitivity. It is true that your 20MP sensor would have single pixels with greater sensitivity than the same-size sensor with 60MP, but for the actual final images created by the sensor the sensitivity would be about the same; the 60MP sensor would just have more resolution. There would of course be some effect from putting more electronics and heat in the same sensor and slightly more wasted area between photosites, but that is a different issue.

morgan_moore
09-15-2012, 12:05 PM
By your logic, 16mm film would not be grainier than 35mm since the emulsion is the same. If you've used RED cameras, you'll easily see that 5K has less noise than 4K, and 2K has about 2 stops more noise than 4K, even though they are all taken from the same sensor with the same size photosites and the only thing that changes is the active area (effective sensor size).


The enlargement is what is at play here, 16mm has to be blown up twice the size of 35 to fit a certain screen - more grain, same emulsion

S

rawCAM35
09-15-2012, 12:56 PM
The enlargement is what is at play here, 16mm has to be blown up twice the size of 35 to fit a certain screen - more grain, same emulsion S

Yes, enlargement is part of the problem, but don't forget that the origin of the problem is that in a 35mm film frame the grain count representing a given part of an image is higher than for the same part of a 16mm frame.

Andrew
09-15-2012, 01:34 PM
Exactly. The 35mm vs. 16mm film example illustrates an improved snr when using a larger area of the exact same film stock.

Morgan, forget that 16mm needs to be enlarged to 35mm for projection. In reality they both need to be enlarged many times their size to fit a cinema screen; it's just that 35mm needs less enlargement than 16mm, and therefore the grain is smaller and the snr is better.

IMO it's all a balance. For great snr I'd like a reasonably large sensor. The bmc is great, s35 or even FF would be even better. I want the photosites on that large sensor to be fairly large themselves. I want the total resolution of the sensor to be quite a bit more than my final output.

nyvz
09-15-2012, 02:09 PM
The enlargement is what is at play here, 16mm has to be blown up twice the size of 35 to fit a certain screen - more grain, same emulsion

S

Yes, but enlargement is a reality and an absolute necessity of all imaging systems, from lens to screen. They don't use smaller screens in movie theaters just for movies acquired on 16mm, do they? If you want to watch a movie on your TV, you aren't going to watch it with big black bars around it, or from twice as far away, just because it was shot on 16mm instead of 35mm, are you?

The practical question of noise performance of an imager must account for the fact that the finished content will be viewed in a constant environment regardless of acquisition format.

morgan_moore
09-15-2012, 04:57 PM
I would suggest that for the same viewing size, 16 is noisier than 35 with the same stock

and that a smaller sensor (everything equalised) is also noisier

S

nickjbedford
09-15-2012, 06:30 PM
I wasn't explaining it to defend Super 16 or the BMDCC's sensor or anything.

What I was doing was explaining the reasons why DR is X and noise is Y according to sensor size and photo site size.

Apparent noise decreases with resolution in the form of antialiasing, but noise increases as the photosites themselves shrink. That's why a 14mp point-and-shoot looks nowhere near as good as a 14mp DSLR. What does the DSLR have that the point-and-shoot doesn't? The sensor size is the enabler of larger photosites, which lowers the relative noise floor and increases dynamic range.

So when I heard about the D800, I immediately shrugged it off because the photo sites were no larger than my 60D's (approx. 4.5 micrometers). This reduces the dynamic range and increases the noise floor to that of the 60D. Antialiasing can't help poor dynamic range.

A smooth point and shoot photo with blown whites and crushed blacks is worse than a slightly noisy DSLR photo with far greater dynamic range.

noirist
09-15-2012, 10:19 PM
This is the clearest explanation I've seen: http://falklumo.blogspot.com/2012/02/camera-equivalence.html

The key point is that "An image contains no information whatsoever about the size of the sensor within the camera which was used to capture it. None. Nothing. Nada. (except EXIF of course ;) ) ... Therefore, all cameras which could have captured a given image create a so-called equivalence class: they are all equivalent, producing indistinguishable images."

If you read on, you'll see that a larger sensor with a reduced aperture captures the same image as a smaller sensor with a wider aperture, etc.

Andrew
09-15-2012, 11:36 PM
What I was doing was explaining the reasons why DR is X and noise is Y according to sensor size and photo site size.

DR and noise are directly linked. At a certain point the true signal is overwhelmed by the noise and you won't be able to "dig" any more range out of the shadows. Low noise = high DR. Have you ever noticed how your dslr's DR is reduced as iso is increased? More noise, less DR.



So when I heard about the D800, I immediately shrugged it off because the photo sites were no larger than my 60D's (approx. 4.5 micrometers). This reduces the dynamic range and increases the noise floor to that of the 60D. Antialiasing can't help poor dynamic range.


Maybe you shouldn't have shrugged it off. The D800 has quite a bit more DR than the 60d.
http://www.dxomark.com/index.php/Cameras/Compare-Camera-Sensors/Compare-cameras-side-by-side/(appareil1)/792%7C0/(brand)/Nikon/(appareil2)/663%7C0/(brand2)/Canon

We've been discussing for some time now that it's not just the photosite size that's important. It sounds like you wrote off the d800 because it had roughly the same size photosites as your 60d. Photosite size is not the end of the story.


The sensor size is the enabler of larger photo sites


It's not just that a larger sensor can have larger photosites. Take a small sensor and a large one with the exact same size photosites, tech, fabrication, etc. and the larger one will have better SNR. NYVZ's 16mm example is a great way of visualizing this.


Apparent noise decreases with resolution in the form of antialiasing

I'm not sure what anti-aliasing has to do with this. Could you explain?

To bring it back to my original post: DOF increases as sensor size decreases at the same fov and f-stop, while SNR increases as sensor size increases, assuming all other factors are the same. I found this relationship really interesting. For example, FF35 to MFT is a 2x crop factor. To get the same shot with both cameras you would first need to match fov; let's use a 50mm lens on the FF35 and a 25mm lens on the MFT. Now, to match the dof of the MFT at 2.8, the FF35 will need to be stopped down 2 stops to f5.6. You might think that because you need at least this much dof for your shot and you don't have much light to work with, the MFT camera would be a MUCH better option, since its lens is gathering 4x the light. That's the part I find really interesting: because snr increases with sensor size, even at f5.6 the FF35 camera should have a similar overall snr.

I've said it before, but just to be clear, this assumes the same exact sensor tech and camera electronics, just a small cut of the sensor versus a large one. Maybe shooting with the Red Epic is a better example. In a low-light situation you might be tempted to shoot in 3k crop mode on the Red, thinking it will let you open the lens about 2 more stops while keeping an acceptable amount of dof for the scene. In reality, by using a smaller portion of the sensor and opening the lens, you're left with about the same snr as if you had shot full-sensor 5k and stopped your lens down to achieve the same dof.
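The crop-mode trade-off can be written out directly (a sketch under the same assumptions; the function name and pixel widths are hypothetical round numbers, not actual Epic specs):

```python
import math

def crop_mode_tradeoff(full_width_px, crop_width_px):
    """Stops of aperture you could open after cropping in (to keep the same
    dof at the same framing) vs stops of light-gathering area given up by
    using the smaller crop. Assumes the pixel pitch is unchanged, so area
    scales with the square of the width."""
    linear_ratio = full_width_px / crop_width_px
    dof_stops_gained = 2 * math.log2(linear_ratio)   # wider aperture allowed
    area_stops_lost = 2 * math.log2(linear_ratio)    # smaller sensor area used
    return dof_stops_gained, area_stops_lost

# Hypothetical 5K vs 3K horizontal pixel counts
gained, lost = crop_mode_tradeoff(5120, 3072)
print(round(gained, 2), round(lost, 2))   # 1.47 1.47 -> the trade washes out
```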

Andrew
09-16-2012, 12:35 AM
This is the clearest explanation I've seen: http://falklumo.blogspot.com/2012/02/camera-equivalence.html

The key point is that "An image contains no information whatsoever about the size of the sensor within the camera which was used to capture it. None. Nothing. Nada. (except EXIF of course ;) ) ... Therefore, all cameras which could have captured a given image create a so-called equivalence class: they are all equivalent, producing indistinguishable images."

If you read on, you'll see that a larger sensor with a reduced aperture captures the same image as a smaller sensor with a wider aperture, etc.

Thanks for the link. I have been thinking about this topic for awhile especially concerning large format imaging systems.

http://bmcuser.com/showthread.php?13-Crop-Factor-Lens-Database&p=16456&viewfull=1#post16456

When you talk to people about 70mm, IMAX, or even large-format still photography, you often hear about some magical quality that is impossible to get with a smaller medium. I pretty much came to the conclusion that, besides extra resolution and certain format/lens combinations that are impossible to match with a smaller format, there is no magical aspect to larger formats. I've stayed open to learning of one, but have not heard a good explanation yet.

This blog post seems to cover exactly that! Thanks again for the link! I will definitely read the white paper.

morgan_moore
09-16-2012, 03:17 AM
I think the answer is 'in theory' there is little difference.

In practice due to the complexity of miniaturisation a larger sensor will typically perform better

For example, if the minimum space between pixels on a sensor is X by Y, then that gap will be a smaller portion of a larger sensor, leaving more space for photon catching.

Of course, if pixels are round, more smaller pixels may tessellate better than fewer large ones, leading to higher photon-catching ability per unit area.

In the large-sensor (645) world there has been a MP race, with 45+mp now common. Users have often wondered what performance a 16-20mp 645 back would give using current tech; probably awesome.

S

nyvz
09-16-2012, 12:11 PM
What I was doing was explaining the reasons why DR is X and noise is Y according to sensor size and photo site size.

Are you saying that you believe DR and noise are independent variables, and that DR is not dependent upon noise? If so, please explain how that is possible, since my understanding is that DR is determined specifically by the noise floor relative to the white clip point.