Ugly Hedgehog - Photography Forum
Main Photography Discussion
Is it worth it to remove the Bayer array?
Feb 22, 2021 12:27:38   #
selmslie Loc: Fernandina Beach, FL, USA
 
If you like B&W photography it may be worth considering.

The A7 II was functioning perfectly but it was being used primarily for capturing raw images to be converted to B&W. The resolution was fine at 24 MP but the demosaicing process was introducing some occasional degradation.

When a raw file from a camera with a color filter array is demosaiced, the information from four adjacent raw pixels is combined to create a single RGB pixel, and each raw pixel contributes to four adjacent RGB pixels. This should cause a loss of sharpness, and the same softening carries over when the RGB data is then converted to B&W.
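To make that concrete, here is a minimal sketch of the kind of neighbour averaging a demosaicer performs, assuming a simple RGGB layout and plain four-neighbour interpolation (real converters use more sophisticated algorithms, and the function name is only illustrative):

import numpy as np

def naive_green_plane(raw):
    # Fill in a green value at every photosite of an RGGB Bayer mosaic.
    # Green photosites keep their raw value; red and blue photosites get the
    # average of their four green neighbours. That averaging is what costs
    # sharpness, and the softening carries through to the B&W conversion.
    h, w = raw.shape
    green = raw.astype(float).copy()
    gmask = np.zeros((h, w), dtype=bool)
    gmask[0::2, 1::2] = True            # green photosites on even rows (RGGB)
    gmask[1::2, 0::2] = True            # green photosites on odd rows
    padded = np.pad(green, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    green[~gmask] = neighbours[~gmask]
    return green

The red and blue planes are interpolated the same way, and the B&W value is then a weighted mix of all three, so every output pixel ends up depending on its neighbours.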

Sharpness is a linear measurement, but megapixels measure area. So a 2x gain in linear sharpness translates to a 4x increase in effective resolution. That could make a 24 MP monochrome sensor as good as a 48 MP color sensor, and perhaps even as good as a 96 MP color sensor.
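As a quick back-of-the-envelope check of that arithmetic (the √2 and 2x linear gains are just the guesses implied by the 48 MP and 96 MP figures, not measured values):

base_mp = 24
for linear_gain in (2 ** 0.5, 2.0):    # conservative guess and best case
    print(f"{linear_gain:.2f}x linear -> ~{base_mp * linear_gain ** 2:.0f} MP equivalent")
# 1.41x linear -> ~48 MP equivalent
# 2.00x linear -> ~96 MP equivalent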

But the lens will limit that improvement. No matter how much we increase the sensor's effective resolution and sharpness, the end result will be a combination of the sensor and the lens. Either the lens or the sensor will be the weakest link. So there is no way to achieve a really dramatic increase in sharpness or resolution without also getting a sharper (and significantly more expensive) lens.

In a 24 MP Bayer sensor there are 12 MP of green and 6 MP each of red and blue. The effective resolution should be somewhere between 12 and 24 MP, but likely on the low end of that range.

But all of this is theoretical. The only way to know for sure is to actually compare a camera without the Bayer array to one that still has it.

Why invest about $4000 in a used Leica Monochrom Typ 246 when it lacks some of the A7 II's features? For about $1000 plus some filters, the Sony could be converted to B&W only. That is also cheaper than replacing the A7 II with an A7R II.

In the next post you will see the result. The differences are not dramatic; you need to look very closely to see them.

Feb 22, 2021 12:29:45   #
selmslie Loc: Fernandina Beach, FL, USA
 
Both images were taken with a 50mm lens* at f/8 and both cameras were set to base ISO 100. The cameras were on a tripod and the shutter was tripped by hand. The Sony had IBIS turned on. The one taken with the D610 was converted to B&W in Capture One via the normal demosaicing without applying any color adjustments. The one taken with the A7 II skipped the demosaicing step and no color filter was used on the lens. Otherwise both images received the same post processing – only a slight bump in clarity and structure.

You need to look closely to see the difference in sharpness. The highlights in the water and on the leaves are smaller in the Sony image, and the grass and some of the branches are slightly sharper. If you zoom in beyond 100% you will see that many of the point sources (highlights in the water drops) cover only one pixel in the A7 II image but usually more than that in the D610 image. The tree branches and the grass also look sharper in the A7 II version. Pixelation becomes visible sooner in the D610 image than in the A7 II image.

What you can’t see is that the shutter speed for the A7 II was 1/500s while for the D610 it was 1/250s, because the Bayer array blocks almost a full stop of light.

So what do you lose with the conversion? You can no longer tailor your B&W conversion by applying color adjustments. Glass filters are less effective and not as convenient.

Whether you need the extra sharpness and resolution is also questionable. If you are printing up to 13x19 inches, 24 MP is already more than enough. Besides, if you are not capturing landscape or astronomical images, sharpness may be low on your list of priorities.

So you can take this information and consider yourself fortunate to have saved yourself the expense.

On the other hand, you may find some additional benefits as I have.

* The A7 II used a 1965 50mm Leica Summicron Rigid and the D610 used a 2015 Nikon 50mm f/2.8G. Both lenses have been compared at f/8 using a Nikon Z7 and there was no discernible difference in resolution.

[Attached image: A7 II monochrome sensor at ISO 500]

[Attached image: D610 color image converted to B&W]

Feb 22, 2021 12:50:48   #
John from gpwmi Loc: Michigan
 
Interesting. I've wondered about this. Thanks for sharing.

Feb 22, 2021 13:13:36   #
JBRIII
 
I also believe it helps for ultraviolet photography, although a dedicated UV lens ($7500?) would probably help more. For UV work the problem is that normal glass, the coatings on modern lenses, hot mirrors, and the materials used in the Bayer array all absorb the UV.
Quartz and CaF2 do not, but lenses made from them cost more, even used, than most cameras. There are instructions on the web for making your own, though obviously with no bells and whistles. The $7500 Nikon lens is said to be the best ever made, with no chromatic problems from UV out to 900 nm.

Feb 22, 2021 13:20:03   #
R.G. Loc: Scotland
 
selmslie wrote:
.....the Bayer array blocks almost a full stop of light......


Shouldn't that be 2/3 of the light?

Feb 22, 2021 13:49:33   #
selmslie Loc: Fernandina Beach, FL, USA
 
R.G. wrote:
Shouldn't that be 2/3 of the light?

I measured it at 0.8 stops. Assuming a full stop is safe because it protects the highlights.

Remember that the green raw histogram normally sits about one stop to the right of the red histogram, and the blue histogram is lower still. So the change in exposure comes from comparing the monochrome histogram to the green histogram.

Notice that at a one-stop difference the monochrome histogram sits only slightly lower than the green histogram, and the highlights are protected in both.
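For anyone who wants to repeat that measurement, here is a rough sketch of how the stop difference can be estimated from the mean raw values of a gray target; the helper and the numbers are made up for illustration, not the actual readings behind the 0.8-stop figure:

import math

def stops_between(mono_mean, green_mean, exposure_ratio=1.0):
    # Stop difference between the CFA-less sensor and the Bayer sensor's green
    # channel, corrected for any deliberate difference in shutter speed
    # (exposure_ratio = t_mono / t_bayer).
    return math.log2(mono_mean / (green_mean * exposure_ratio))

# Hypothetical mean raw values from shots of the same gray target:
print(f"{stops_between(2300.0, 1320.0):.2f} stops")   # ~0.80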



Feb 22, 2021 14:05:42   #
R.G. Loc: Scotland
 
I can see why it worked out that way in practice. The scene you shot has large areas of just blue and just green. Since the Bayer filter allows only one colour per pixel, on average only 1/3 of the light reaches the sensor, but that assumes that the incoming light is white.
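For reference, that idealised white-light case works out to a bit over a stop and a half; the one-third transmission is the simplifying assumption above, not a measured value:

import math

transmission = 1 / 3                              # idealised: one colour band per photosite
print(f"{-math.log2(transmission):.2f} stops")    # ~1.58 stops under this assumption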

Feb 22, 2021 14:07:51   #
selmslie Loc: Fernandina Beach, FL, USA
 
JBRIII wrote:
I also believe it helps for ultraviolet photography, although a UV lens ($7500?) would probably help more... (show quote)

All digital cameras come with a UV and an IR filter (cut mirrors), including the Leica Monochrom. Many of them also have an AA filter.

I had all of these removed so that I could also use the camera for monochrome IR photography. The sensor can actually record wavelengths from about 300nm to 1000nm. Outside of that range the silicon sensor cannot record anything, regardless of the filters. See https://www.monochromeimaging.com/technical/full-spectrum-ir/

For normal photography I need to attach a UV/IR cut filter to the lens to block most of the UV and IR.

But this does not eliminate all of the UV or all of the IR; if it did, it would also block some of the deep violet and deep red. Even then it would not eliminate chromatic aberration, since there is still roughly a 1:2 ratio between the shortest and longest wavelengths that get through, and that spread still needs to be dealt with.
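To put a rough number on that, here is the ratio for a typical UV/IR cut passband; the 380 nm and 750 nm limits are assumed round figures, not the specification of the actual filter used here:

uv_limit_nm, ir_threshold_nm = 380, 750           # assumed passband of a UV/IR cut filter
print(f"1:{ir_threshold_nm / uv_limit_nm:.1f}")   # 1:2.0 - roughly a full octave of wavelengths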

But with a monochrome sensor, using a red or green filter does impose soft upper and lower limits on the wavelengths that get through. A yellow or orange filter also helps. Even though you will not see the color evidence of chromatic aberration, it can still show up as a loss of sharpness.

Feb 22, 2021 14:11:47   #
selmslie Loc: Fernandina Beach, FL, USA
 
R.G. wrote:
I can see why it worked out that way in practice. The scene you shot has large areas of just blue and just green. Since the Bayer filter allows only one colour per pixel, on average only 1/3 of the light reaches the sensor, but that assumes that the incoming light is white.

I actually did my tests using a simple white (gray) target. The ratios I get are the same.



Feb 22, 2021 15:40:44   #
selmslie Loc: Fernandina Beach, FL, USA
 
R.G. wrote:
I can see why it worked out that way in practice. ...

This is what the raw file looks like with the Bayer array gone, but before the file is tagged to tell RawDigger and a raw conversion program that it is actually supposed to be a monochrome image.

You can think of it as happening in two stages.

First, the red and blue curves are moved in lock step to align the green channel with the blue channel (same as dropping the green channel). In effect, that makes the image more magenta.

Next, the red channel is moved to the right to align it with the other two.

These two combined appear to apply separate magenta and red filters over the image.
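One way to picture why the channels separate at all: if the software still treats the CFA-less data as a G R / B G mosaic and applies per-channel white-balance multipliers, identical photosite values end up displayed as three shifted histograms. This is only a sketch of that idea, and the multiplier values are made up:

import numpy as np

rng = np.random.default_rng(0)
mono = rng.normal(1000.0, 50.0, size=(400, 600))   # CFA-less data: same statistics at every photosite

# Hypothetical daylight-style multipliers applied per "channel" of the assumed G R / B G tagging.
wb_r, wb_g, wb_b = 2.0, 1.0, 1.5
g1 = mono[0::2, 0::2] * wb_g
r  = mono[0::2, 1::2] * wb_r
b  = mono[1::2, 0::2] * wb_b
g2 = mono[1::2, 1::2] * wb_g

# Identical underlying data, yet the "red" and "blue" histograms now sit away from the "green" ones.
print(round(r.mean()), round(g1.mean()), round(b.mean()))   # ~2000 ~1000 ~1500

Undo the multipliers and all three collapse back onto a single monochrome histogram, which is roughly the alignment being described above.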

But keep in mind that RawDigger has no idea about the content of the scene. Its default is to display with the As Shot white balance (the camera was set to Daylight). The other choices are Daylight (RawDigger's version) and Auto.

But Auto WB is also interesting.

So is the image at 500% with Auto WB when you download it and blow it up.

[Attached image: As Shot/Daylight WB]

[Attached image: RawDigger Auto WB]

[Attached image: RawDigger Auto WB at 500%]

Feb 22, 2021 15:44:03   #
bwana Loc: Bergen, Alberta, Canada
 
selmslie wrote:
Both images were taken with a 50mm lens* at f/8 and both cameras were set to base ISO 100... (show quote)

I like my mono Sony A6000. Works great for both IR and LRGB & narrowband astrophotography.

bwa

Feb 22, 2021 15:55:20   #
R.G. Loc: Scotland
 
selmslie wrote:
.....First, the red and blue curves are moved in lock step to align the green channel with the blue channel......


I must be missing something. The sensor will be producing similar data for each channel so all the software needs to do is discard the colour information (which you don't need anyway). Or is it baked into the software to correct each channel to give accurate colours?

Feb 22, 2021 17:21:07   #
selmslie Loc: Fernandina Beach, FL, USA
 
R.G. wrote:
I must be missing something. The sensor will be producing similar data for each channel so all the software needs to do is discard the colour information (which you don't need anyway). Or is it baked into the software to correct each channel to give accurate colours?

Once the Bayer array is gone there is no color information to discard.

It's like B&W film. A monochrome sensor sees only luminance, regardless of the color of the light.

The only way to bias the luminance with reference to a particular color is to place a color filter over the lens. But that will alter the luminance for every pixel.

For example, if you place a green filter on the lens, red or blue objects will get darker and green objects will get lighter - for every pixel that records the object, not just the ones where the Bayer array had a green filter.

See the explanation for Bayer filter. It shows that the Bayer array, although the most common, is not the only color filter array, just the simplest.

Feb 22, 2021 18:37:33   #
Ourspolair
 
I am guessing that the raw info is still sent out "tagged" as G1, R, G2 and B channels, and you adjust those to get R, (G1+G2) and B to give the same amplitude for an 18% grey target.
Please let me know if I have missed something. Thanks.

Feb 22, 2021 19:48:14   #
selmslie Loc: Fernandina Beach, FL, USA
 
Ourspolair wrote:
I am guessing that the raw info is still sent out "tagged" as G1, R, G2 and B channels, and you adjust those to get R, (G1+G2) and B to give the same amplitude for an 18% grey target.
Please let me know if I have missed something. Thanks.

No, the raw file is not tagged at all. It's only the demosaicing process that does that.

For example, if row one contains 6000 pixels they might be tagged as green, red, green, red, ... Row 2 would be blue, green, blue, green, ... until you reach 5999 and 6000.

This pattern repeats for rows 3 and 4, 5 and 6, ... until you reach rows 3999 and 4000.

The RGB pixels are assembled from overlapping 2x2 sets of raw pixels, starting at the upper left with

GR
BG

Eventually you end up with a set of RGB pixels that is one row and one column smaller - 3999x5999

But if you skip the demosaicing process, none of the pixels are tagged at all. Each pixel's luminance value stands alone. No assembly required. You get the full 4000x6000 array.
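A compact sketch of that layout, and of where the 3999x5999 figure comes from, using a deliberately naive 2x2 assembly (real converters interpolate rather than just picking corners; the small array is only to keep the example light):

import numpy as np

H, W = 8, 12                                   # stand-in for the real 4000 x 6000 sensor
raw = np.random.default_rng(1).integers(0, 16384, size=(H, W)).astype(float)

# Tag each photosite according to the G R / B G pattern described above.
tags = np.empty((H, W), dtype="<U1")
tags[0::2, 0::2], tags[0::2, 1::2] = "G", "R"
tags[1::2, 0::2], tags[1::2, 1::2] = "B", "G"
print("".join(tags[0, :6]), "".join(tags[1, :6]))   # GRGRGR BGBGBG

# Demosaiced path: one RGB pixel per overlapping 2x2 block, so the output is
# one row and one column smaller (3999 x 5999 on the 24 MP sensor).
rgb_shape = (H - 1, W - 1, 3)

# One output pixel assembled from the 2x2 block at the upper left (tags G R / B G):
block = raw[0:2, 0:2]
pixel = {"R": block[0, 1], "G": (block[0, 0] + block[1, 1]) / 2, "B": block[1, 0]}

# Monochrome path: no tags, no assembly - every photosite stands alone.
print(rgb_shape, raw.shape)                    # (7, 11, 3) (8, 12)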
