12Bit Vs 14Bit Raw And Compressed Vs Uncompressed… Does It Matter?
Apr 7, 2013
You know that to get the most out of your DSLR you should be shooting in RAW, right? But these days Nikon cameras give you even more options: 12-bit or 14-bit, and compressed or uncompressed RAW (NEF) files. Which should you choose?
Short question: Does it matter? Will you see any difference between compressed (lossy) and uncompressed (lossless) RAW? And between 12 and 14 bits?
Short answer: No, it does not matter. Choose 12-bit compressed (because the files take up less space) and forget about this topic. Or choose 14-bit uncompressed because theoretically you’re getting the “most” from your camera – you just have to live with the file sizes.
Approximate RAW file size on a Nikon D7000:

|              | 12-bit  | 14-bit  |
|--------------|---------|---------|
| Compressed   | 12.6 MB | 15.7 MB |
| Uncompressed | 14.9 MB | 18.8 MB |
Not happy with the short answer? Then read on…
Basic computer science tells you that 14 bits store more data than 12 bits. To be exact: you can store 4 times as many shades of intensity in a given range, or if using the same step size you can cover a range of values 4 times as large.
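To put numbers on that, here is a quick back-of-the-envelope check (plain Python, nothing camera-specific assumed):

```python
# Quick sanity check on the level counts at each bit depth
levels_12 = 2 ** 12            # 4096 tonal levels
levels_14 = 2 ** 14            # 16384 tonal levels
print(levels_14 // levels_12)  # 4 -> four times as many levels

# Equivalently, over the same (normalised) sensor range the 14-bit
# quantisation step is a quarter of the 12-bit step.
print(1.0 / levels_12)         # ~0.000244 per step at 12 bit
print(1.0 / levels_14)         # ~0.000061 per step at 14 bit
```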
Basic computer science also tells you that lossy encoding throws data away. So then it seems logical that images obtained from 14-bit lossless RAW files should have a larger dynamic range and be more detailed and nuanced than images from 12-bit lossy RAW files. The big question is whether these theoretical advantages are ever visible in real life.
This topic can get extremely complex. The best and most rigorous explanation I found online is this article by Emil Martinec: Noise, Dynamic Range and Bit Depth in Digital SLRs.
Conclusion: due to sensor noise you cannot see the difference between 12 and 14 bits, and neither will you see the difference between lossy and lossless RAW encoding.
This is also touched on by dpreview where they wrote “…it is easy to understand that [higher bit depth is advantageous] only IF the sensor itself has sufficient dynamic range.”
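Here is a minimal simulation of that idea – my own toy numbers, not taken from either article, and the read-noise figure is an assumption rather than a measured D7000 value. It quantises the same noisy shadow patch at 12 and 14 bits and shows that the coarser quantisation adds almost nothing on top of the noise that is already there:

```python
import numpy as np

rng = np.random.default_rng(0)

read_noise = 4.0                       # assumed read noise, in 14-bit ADU
signal = np.full(100_000, 200.0)       # a dim, uniform shadow patch (14-bit ADU)
noisy = signal + rng.normal(0.0, read_noise, signal.size)

q14 = np.round(noisy)                  # 14-bit quantisation (step = 1 ADU)
q12 = np.round(noisy / 4.0) * 4.0      # 12-bit quantisation (step = 4 ADU)

print(np.std(q14 - signal))            # ~4.0 -> error dominated by read noise
print(np.std(q12 - signal))            # ~4.2 -> the coarser step barely registers
```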
People have posted some experiments concerning this (for example see the D300 12-bit vs 14-bit comparison) but shooting a test chart in the dark is not totally convincing.
Curious as I am, I decided to see whether I could experimentally find any difference in the recovery of over- or under-exposed “real-world” photographs – whether there is perhaps some advantage to be had from those extra bits in these extreme tests of dynamic range. After all, none of us knows exactly how Nikon’s engineers implemented this. Since the Nikon D7000 has good dynamic range, it is as likely as any APS-C DSLR to show the advantages (if any) of higher bit depth.
For this I chose the following test scene:
Correct exposure (1/250 at F8.0, ISO 100).
I then overexposed the scene by a massive 4 stops:
Overexposed by 4 stops (1/125 at F2.8, ISO 100).
And then I underexposed the scene by an even larger amount, 6 stops. I went 2 stops further in the underexposure since digital cameras are better at retaining shadows than highlights:
Underexposed by 6 stops (1/8000 at F11, ISO 100).
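For reference, those stop counts follow from the usual exposure-value formula EV = log2(N²/t); the arithmetic, spelled out:

```python
from math import log2

def ev(f_number, shutter_seconds):
    """Exposure value for a given aperture and shutter speed (ISO held fixed)."""
    return log2(f_number ** 2 / shutter_seconds)

reference    = ev(8.0, 1 / 250)    # ~13.97 EV
overexposed  = ev(2.8, 1 / 125)    # ~9.94 EV (lower EV = more light)
underexposed = ev(11.0, 1 / 8000)  # ~19.89 EV (higher EV = less light)

print(round(reference - overexposed))    # 4 stops of overexposure
print(round(underexposed - reference))   # 6 stops of underexposure
```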
The next job was correcting for the over- and underexposure in Adobe Lightroom 3.4, and comparing the results between 12-bit, 14-bit, lossy and lossless encoding.
Crop from reference image (correctly exposed)
First, the overexposed examples. You’ll especially notice big white washed-out regions caused by channel clipping. It is interesting to see whether extra bit depth helps to lessen this effect. Also look for differences in detail caused by RAW file compression.
12-bit compressed (lossy)
12-bit uncompressed (lossless)
14-bit compressed (lossy)
14-bit uncompressed (lossless)
And the same sequence for the underexposed region. Instead of washed-out regions caused by colour channel clipping we now see a lot of noise, since correcting the underexposure requires massive amplification of a region with very low signal.
12-bit compressed (lossy)
12-bit uncompressed (lossless)
14-bit compressed (lossy)
14-bit uncompressed (lossless)
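As a rough illustration of what the raw converter has to do in these two cases – a toy model, not what Lightroom actually does – pulling an overexposed file down cannot bring back channels that clipped at the sensor, while pushing an underexposed file up amplifies the noise along with the signal:

```python
import numpy as np

rng = np.random.default_rng(1)
full_scale = 2 ** 14 - 1                    # 14-bit clipping point (ADU)
scene = rng.uniform(500, 3000, 10_000)      # arbitrary mid-tone scene values (ADU)

# Overexpose by 4 stops: anything pushed past full scale clips and is gone.
over = np.clip(scene * 2 ** 4, 0, full_scale)
pulled_down = over / 2 ** 4                 # correct the exposure in the converter
print(np.mean(pulled_down < scene))         # ~0.79 -> most pixels lost data to clipping

# Underexpose by 6 stops: the signal shrinks but the read noise does not.
read_noise = 4.0                            # assumed read noise (ADU)
under = scene / 2 ** 6 + rng.normal(0.0, read_noise, scene.size)
pushed_up = under * 2 ** 6                  # correcting amplifies the noise too
print(np.std(pushed_up - scene))            # ~256 ADU: 4 ADU of noise x 2**6
```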
My conclusion is that even in this extreme example there is very little difference, either between 12-bit and 14-bit or between compressed and uncompressed results.
If I had to point out differences, I would say that the colour reproduction is less degraded in the uncompressed and in the 14-bit RAW images. It seems the 2-bit difference between 12 and 14 bit mostly benefits shadow detail, and especially colour information in the shadow regions. But given how extreme my test was and how subtle the effect, I would call it a negligible difference.
Furthermore there seems to be effectively no difference in detail captured in any of these modes.
So I again conclude that there is little or no practical advantage to using either 14-bit or lossless raw compression.
About The Author
Francois Malan is a freelance photographer based in The Netherlands. You can follow his blog here and his 500px here. This post was originally published here.
20 responses to “12Bit Vs 14Bit Raw And Compressed Vs Uncompressed… Does It Matter?”
Interesting experiment. I’d love to see the same with ISO 800 or 1600, and I assume it’ll make a significant difference. But that’s what I thought would happen here too, and I thought wrong.
I’d love to see the same with ISO 800 or 1600 too.
This test makes sense only for ISO 100, because that is the setting with the maximum bit depth.
The sensor is actually GRGB, so it’s reasonable to assume that the maximum performance is on the green channels; it would have been interesting to test that too.
Also, I’m still for shooting in 14-bit uncompressed; if images don’t need that extra 0.024% precision they can be converted to lossy DNG, which compresses better and is a standard format.
The difference between 12 and 14 bits isn’t insignificant with the newer sensors at ISO 100, as these test images also demonstrate. The difference would probably be more striking if you had turned off noise filtering in Lightroom. But even like this you can observe some rather bad miscolouration in the 12-bit image, and loss of detail in the gray brick texture. Yes, the extra 2 bits are usually the least significant bits, so you would only see the difference in the underexposed images.
“so you would only see the difference in the underexposed images”
That is not correct – the least significant bits work in overexposed areas too…
They are simply the measurement steps with which the ADC digitizes the analog voltage values, and the sensor response is always linear.
Bit depth simply determines the stepping between two brightness values – so if 12 bit shows bad colors and 14 bit doesn’t, it is because the 14-bit image simply carries more color information than the 12-bit image.
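A rough sketch of that stepping, assuming (purely for illustration) a 1 V full-scale ADC:

```python
full_scale_volts = 1.0                  # hypothetical ADC input range

step_12 = full_scale_volts / 2 ** 12    # ~244 microvolts per code
step_14 = full_scale_volts / 2 ** 14    # ~61 microvolts per code

voltage = 0.123456                      # some analog pixel voltage
code_12 = round(voltage / step_12)      # 506  -> reads back as ~0.12354 V
code_14 = round(voltage / step_14)      # 2023 -> reads back as ~0.12347 V
print(code_12, code_14)
```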
I think your test may be wrong. You should provide us with the RAW files so we can check them on our own computers. The JPG and PNG files you provided are just 8-bit; the JPG is sRGB, but the PNG files have no profile.
The next problem I see is that you didn’t tell us which monitor you used to make your comparison.
If you used a wide-gamut monitor there would be a greater chance of seeing a difference, especially in a gradient fill. But if you used an ordinary sRGB monitor like an Apple Retina, or any TN-type display, there is not much chance you would see a huge difference.
Just my opinion.
David – Here is the link to the original NEF files.
https://www.dropbox.com/s/7pxnq8ecnrw7lyn/RAW_12bit_vs_14bit.zip
If you find anything different, feel free to share.
It doesn’t matter if JPGs are 8 bit in this case.
I did these tests years ago with DNG conversion:
http://fotoemoto.wordpress.com/2011/07/21/tipos-de-raw-na-nikon-d7000/
The results:
14-bit lossless: 21.0 MB (NEF), 18.8 MB (DNG)
14-bit compressed: 18.7 MB (NEF), 16.6 MB (DNG)
12-bit lossless: 16.7 MB (NEF), 14.5 MB (DNG)
12-bit compressed: 15.2 MB (NEF), 12.8 MB (DNG)
But today you have lossy compression with DNG that works BETTER with 14-bit lossless RAW.
So today I always shoot 14-bit lossless, then after the work is done I store it as lossy DNG!
Simple and great!
Hey everyone – I’m the guy who wrote this article. If you go to my original blog post (link at the bottom of the post) you will see that I’ve since updated it with a link to the original NEF files. So David Habot – go for it. I’m pretty sure that I tested this correctly, but you’re welcome to try and replicate the results.
Some people asked about higher ISOs – if you read the thorough article on noise vs bit depth that I linked to, you’ll see that higher ISO values imply higher noise, which will mask the differences between 12 and 14 bit even further.
I’m not sure this is the best test methodology, but it’s sufficient to show that you certainly got extra usable data from the 14-bit file. The reason I’m not sure it’s the best is that you seem to be assuming that the 12-bit curve has less dynamic range than the 14-bit curve, presumably because when you map a 12-bit sensor curve (with 12-14 stops of DR) onto 8-bit sRGB (with about 6) or even 8-bit AdobeRGB (with about 8) you’re picking a curve suitable for display at conversion time and throwing away data outside the DR of a typical display. That’s not what the camera is doing when it uses 12-bit instead of 14-bit. If the camera is in fact just throwing away the two least significant bits then I would expect to see no worse performance at the extremes of DR than in the middle, it’s just that if you stretched the curves for a patch of blue sky out incredibly far it would start to get banding on 12-bit before 14-bit.
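To put a number on the banding point, here is a toy count of the distinct levels left in a narrow tonal slice, assuming (hypothetically) that the 12-bit codes are just the 14-bit codes with the two low bits dropped:

```python
# A smooth sky gradient occupying a narrow slice of the tonal range,
# expressed as 14-bit codes.
codes_14 = list(range(8000, 8064))       # 64 distinct 14-bit levels

# Hypothetical 12-bit readout: drop the two least significant bits.
codes_12 = [c >> 2 for c in codes_14]    # 2000..2015

print(len(set(codes_14)))                # 64
print(len(set(codes_12)))                # 16 -> bands four times wider when stretched
```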
I suspect, though, that that’s not really what Nikon is doing, and I suspect that because your 14-bit file has considerably better tone on everything but the bottom bricks than the 12-bit file (there was more green and less red there in 12-bit than there really was, probably because of how demosaicing worked on the GRBG sensor array) which shows that you are throwing away useful color data in 12-bit if you needed to resurrect something 5-6 stops under.
It’s hard to know if this is a good test of compression or not. A lot of modern lossy compression engines would work really well on crisply defined geometric patterns like this and then fall apart completely on smooth curves and textures (like, say, a long exposure of a waterfall). I suspect there’s an extremely low compression level in RAW compression so it would probably be pretty hard to construct a test where you can really see the difference.
Hello all… First off, it’s a given that my limited photographic experience doesn’t qualify me to make recommendations, but when saving my images I’ve always been told to archive in the highest resolution and colour bit depth my camera is capable of producing. This also includes shooting in the widest gamut my camera is capable of. It’s certain my cheap $200 Kmart monitor is not capable of displaying the images my Nikon D90 produces at their best. However, my future plan is to buy a wide-gamut monitor or a professional photographer’s monitor. Therefore, in anticipation of the day when I upgrade, I don’t want to be tossing image data out the window (sorry for the bad pun) simply because I don’t see any difference on my current hardware. I would guess that each of these seemingly insignificant improvements produces diminishing returns at best if looked at on its own. However, when added up with all the minor improvements offered by current and future hardware, the benefits could be significant or very much visible. Again, I’m still learning and looking for the ‘best’ information and recommendations, but is my logic incorrect or flawed in a way I’m not seeing?
In this experiment you limited the data by blowing the highlights. You could instead overexpose it, but not so much as to clip the data on the right side of the histogram, and then compare the images. You’re comparing hardly any data to hardly any data instead of lots of data to lots of data. You’ll get more distinguishable results using the latter.
Also, the depth of field is different in the two images. It would make more sense to adjust the exposure with the shutter speed instead of changing the aperture. They aren’t the same image otherwise.
Not all the data is blown out. You can tell in the transitions. You have to blow it out like this to find differences.
Thanks for the tests and opinions. It’s important to consider your target subject and display method. If you are producing images for print, a wider range of colors is relevant than for display on the web. I see color variations in the images above that may be more relevant for work with skin tones. I have different workflows going to Flickr than I do going to a photobook. At the end of the day we each find what works within our own parameters.
My understanding (which may be wrong) is that the 14-bit files will allow you to recover some detail from the dark shadows and highlights which you may not get from 12-bit files. So the difference is in processing, not in visual examination of the unprocessed files.
Actually, I wouldn’t say my eyes are particularly well trained, and I found some of the differences between 12 and 14 bit quite easy to spot. I just as easily spotted them here in your sample photos, so I can’t say I agree with the conclusions arrived at in this article. Primarily there seems to be an obvious (to my eyes) increase in image clarity and sharpness.
Mine are well trained, and I see exactly the same thing on the flowerpot: clearly sharper and more detail with all the lossless versions (regardless of bit depth). This is expected, as compression will throw away fine detail first.
Now whether it matters depends on how you shoot, and how large you print (and/or how heavily you like to crop). I do very large prints, and that difference is significant to me. And anyway, I would always want the detail in case I need it, even if I often don’t. Storage is far less of a problem than it used to be (especially once you start shooting RAW video : ).
I do wonder how much of the difference we see between the 12- and the 14-bit files is really due to the bit depth, and how much is due to implementation differences between 12- and 14-bit readout modes.
With a D7100, I can see clear differences in banding patterns between 12- and 14-bit files (shadow banding is noticeably worse in 12-bit mode). Extracting the data from the RAW files reveals that the camera switches to digital amplification above ISO 3200 in 14-bit mode, but still uses analog amplification at ISO 6400 in 12-bit mode.
Previously, I assumed that the sensor data is always read out the same way, at 14-bit resolution, and then downsampled to 12-bit in software if that type of NEF file was selected. The above observations suggest that this is not the case.
Of course, for practical purposes all that matters is the quality of the resulting images, not the reason behind that quality difference …