These actually aren't subjective numbers. As stated before, they're based on the data from the BMD Film curve LUTs, DNG metadata, and independent tests; the independent tests were just used as supporting evidence. I provided the math behind calculating these numbers for generating the charts on the Blackmagic forum a while back. I wanted to be as objective as possible in making these charts. A lot of the time there is still valid data in the noise floor that can't be viewed on a scope but is still visible in the image. The data is there at the bottom, and you'd be able to see it in a log image; whether it's considered usable is up to the user.
Which is why I said they were subjective. But I understand you're an engineer at heart, and your tests were probably very objective. I didn't mean to imply otherwise. Just that different people have their own definitions of what dynamic range means. If you say the sensor is able to record data 8 stops under, I believe you. It lines up with Blackmagic's stated dynamic range. Out of curiosity, how did you test this? Did you shoot a chart and average out the noise?
Originally Posted by Tomas Stacewicz
Thank you iaremrsir and RyLo for your answers! Really made it much clearer for me seeing those diagrams and curves, and the comments. :-)
Is it ok to quote you both on my blog?
It's okay with me, but I'm far from the first person to say any of it. I also need to add a correction: I'm used to thinking in terms of ProRes and usable dynamic range. Technically, if you're shooting RAW, ISO is just metadata, so no ISO has an advantage in dynamic range. Also, I think 1600 ISO actually has the greatest dynamic range in ProRes, and it's the only one that exceeds 100% on a waveform. But as I said, it's a little too noisy for me. I should also clarify that I'm basing this on the Pocket, which I'm told is more or less the same as the BMCC. Your mileage will vary from camera to camera and manufacturer to manufacturer. I'm sorry, I feel like I've made this topic more confusing than it should be. The takeaway is that you'll have less noise if you overexpose RAW and then bring the exposure back down in the RAW converter (which, in the case of the BMPCC, happens automatically when you set the camera to an ISO lower than the native 800 and expose for that setting).
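To illustrate that takeaway, here's a rough sketch of why a brighter RAW capture that's pulled back down in the converter ends up cleaner. The read-noise figure and signal levels here are made up for illustration, not measurements from any Blackmagic camera; it just assumes a fixed read-noise floor plus photon shot noise.

```python
# Illustrative sketch (not any camera's actual numbers): a brighter capture
# has a higher signal-to-noise ratio, and pulling exposure back down in the
# RAW converter scales signal and noise together, preserving that advantage.

import math

READ_NOISE = 4.0  # hypothetical sensor read noise, in electrons


def snr(signal_electrons):
    """Signal-to-noise ratio for a pixel that collected `signal_electrons`."""
    shot_noise = math.sqrt(signal_electrons)  # photon shot noise
    total_noise = math.sqrt(shot_noise ** 2 + READ_NOISE ** 2)
    return signal_electrons / total_noise


base = snr(1000)         # exposed "normally"
overexposed = snr(4000)  # +2 stops of light, pulled back down as metadata

# overexposed > base: the pulled-down image shows less visible noise
```

The point is just that the pull-down is a multiplication applied after capture, so it can't add noise; the extra light captured up front is what buys the cleaner shadows.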
Does that make things a little more clear? Or have I muddied it up again?
Out of curiosity, how did you test this? Did you shoot a chart and average out the noise?
I took the images of the grayscale charts and selected the entire region of each chip, which averaged the pixel values for that chip. Once the chips no longer showed a changing value, I stopped counting. And I forgot to mention that these charts are based on using highlight recovery in Resolve! I based them on that since it seems to be what most people use in their workflow. A more accurate and consistent test would be done with Imatest and a transmissive chart, but I'm broke hahaha. And Corey Robson's charts, as well as Ryan Walter's, support the claims fairly well.
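The chip-averaging procedure above could be sketched roughly like this, assuming the chart frames are loaded as 2D NumPy arrays and you know each chip's pixel region. The function names and the stopping threshold are illustrative, not the exact method used.

```python
# Rough sketch of counting usable stops from a gray-scale chart: average each
# chip's pixels, then stop counting once successive chips no longer change,
# i.e. the chart has disappeared into the noise floor.

import numpy as np


def chip_mean(image, region):
    """Average pixel value over one chart chip. `region` = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = region
    return float(image[y0:y1, x0:x1].mean())


def count_usable_stops(image, chip_regions, min_step=0.5):
    """Count chips, brightest first, until successive averages stop changing
    by more than `min_step` (an illustrative threshold)."""
    means = [chip_mean(image, r) for r in chip_regions]
    stops = 1
    for prev, cur in zip(means, means[1:]):
        if abs(prev - cur) <= min_step:
            break
        stops += 1
    return stops
```

Averaging the whole chip region is what suppresses per-pixel noise enough to tell whether a chip's level is genuinely distinct from its neighbor.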
Originally Posted by Earl R. Thurston
You may want to check the camera that did those exposure tests for the infamous "crosshatching" problem. Demosaicing artifacts in those stills were quite noticeable on my iPad:
Those aren't exhibiting the crosshatching/mazing pattern you're thinking of. Those files weren't demosaicked; this is what the camera actually records, visualized: a Bayer-pattern grayscale image. And yes, scaling causes some artifacts. If you zoom into the original image you'll actually see the individual pixels. It's kinda cool, actually.
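For anyone unsure what "a Bayer-pattern grayscale image" means, here's a minimal sketch of how a sensor's single-plane mosaic relates to a full RGB image. It assumes an RGGB layout for illustration; the actual filter arrangement varies by camera.

```python
# Each photosite records a single value behind a red, green, or blue filter.
# Viewed directly, those raw values form one grayscale plane with a faint
# checkerboard texture -- no color until a demosaic step interpolates it.

import numpy as np


def mosaic_rggb(rgb):
    """Sample an H x W x 3 RGB array down to a single-plane RGGB mosaic."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return raw  # one grayscale value per pixel, as the sensor records it
```

Scaling that mosaic without demosaicking first is what produces the gridded artifacts you noticed, which is a different thing from the crosshatching bug.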