Ugly Hedgehog - Photography Forum
Main Photography Discussion
Is it worth it to remove the Bayer array?
Page <prev 2 of 3 next>
Feb 23, 2021 02:25:03   #
bwana Loc: Bergen, Alberta, Canada
 
selmslie wrote:
No, the raw file is not tagged at all. It's only the demosaicing process that does that.

For example, if row one contains 6000 pixels they might be tagged as green, red, green, red, ... Row 2 would be blue, green, blue, green, ... until you reach 5999 and 6000.

This pattern repeats for rows 3 and 4, 5 and 6, ... until you reach rows 3999 and 4000.

The RGB pixels are assembled from overlapping sets of 2x2 pixels starting at the upper left that start with

GR
BG

Eventually you end up with a set of RGB pixels that is one row and one column smaller - 3999x5999

But if you skip the demosaicing process none of the pixels are tagged at all. Each pixel's luminance value stands alone. No assembly required. You get the full 4000x6000 array.

And that is where you pick up the extra resolution; the improved sensitivity results from not having the various filters in front of the sensor.

bwa
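The overlapping 2x2 assembly selmslie describes in the quote can be sketched in a few lines of NumPy. This is a hypothetical toy, not any real converter's code, using a small 4x6 mosaic in place of the 4000x6000 sensor:

```python
import numpy as np

# Toy GRBG mosaic as in the quote: row 1 is G R G R ..., row 2 is B G B G ...
h, w = 4, 6
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(h, w)).astype(float)

# Tag each photosite by its filter colour: 0=R, 1=G, 2=B.
tags = np.empty((h, w), dtype=int)
tags[0::2, 0::2] = 1   # green
tags[0::2, 1::2] = 0   # red
tags[1::2, 0::2] = 2   # blue
tags[1::2, 1::2] = 1   # green

# Skip demosaicing: every photosite stands alone as luminance,
# and you keep the full h x w array.
mono = raw.copy()

# Naive demosaicing: assemble one RGB pixel from each overlapping 2x2
# window. Every window holds exactly 2 G, 1 R and 1 B photosite, and
# the output is one row and one column smaller.
rgb = np.zeros((h - 1, w - 1, 3))
for i in range(h - 1):
    for j in range(w - 1):
        vals = raw[i:i + 2, j:j + 2].ravel()
        wtags = tags[i:i + 2, j:j + 2].ravel()
        for c in range(3):
            rgb[i, j, c] = vals[wtags == c].mean()

print(mono.shape, rgb.shape)  # (4, 6) (3, 5, 3)
```

Scaled up to 4000x6000, the same shapes become the full 4000x6000 mono array versus the 3999x5999 demosaiced RGB array from the quote.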

Reply
Feb 23, 2021 04:10:52   #
R.G. Loc: Scotland
 
selmslie wrote:
...A monochrome sensor sees only luminance regardless of its color.....


That's what I meant. All channels can be treated as being the same. I was wondering if the software can be told to simply read the channels as luminance information rather than colour information.

Reply
Feb 23, 2021 05:39:46   #
selmslie Loc: Fernandina Beach, FL, USA
 
R.G. wrote:
That's what I meant. All channels can be treated as being the same. I was wondering if the software can be told to simply read the channels as luminance information rather than colour information.

That's exactly how it's done. By skipping the demosaicing step, all channels are treated the same. Only their luminance values are read.

But if the Bayer array is still in place, all channels are not the same. The red and blue channels are darker than the green channel. To get the channels to match, you would have to reduce the green brightness by about a stop, and the blue brightness by about a half stop, to bring them down to the red channel. Only then could you skip the demosaicing step and read unbiased luminance.
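As a sketch, that per-channel equalization might look like the following. This is hypothetical code; the one-stop and half-stop figures are rough estimates from the post, not measured values for any particular camera:

```python
import numpy as np

# Rough per-channel gains to equalize a GRBG mosaic before treating
# every photosite as plain luminance (estimates only).
GREEN_GAIN = 2.0 ** -1.0   # bring green down ~1 stop to match red
BLUE_GAIN = 2.0 ** -0.5    # bring blue down ~1/2 stop to match red

def equalize_grbg(raw):
    """Scale the green and blue photosites of a GRBG mosaic toward red."""
    out = raw.astype(float).copy()
    out[0::2, 0::2] *= GREEN_GAIN   # green sites on even rows
    out[1::2, 1::2] *= GREEN_GAIN   # green sites on odd rows
    out[1::2, 0::2] *= BLUE_GAIN    # blue sites
    return out                      # red sites are left untouched

flat = equalize_grbg(np.full((4, 4), 1000.0))
print(flat[0, 1], flat[0, 0])  # 1000.0 500.0 (red untouched, green down 1 stop)
```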

There is more evidence in these two images of the effect of the Bayer array. Take a look at the water droplets that are against a dark background. Even though the focus was on the porch at the center of the image beyond the fountain, each droplet is reflecting a single highlight which may be small enough to land on a single pixel. With the Bayer array in place that can get spread over four RGB pixels. Without the Bayer array and the demosaicing step it does not get spread.

The consequence is easy to see in the results. The drops stand out more in the monochrome image because they are not smeared. Besides that, because the shutter speed was doubled, they travel only half as far during the exposure.

I took four images with each camera and posted the pair with the most closely matched spray pattern. The phenomenon is visible in all eight images. It can also be seen in the highlights on the leaves and the pine needles, where they are smaller and more contrasty without the Bayer array.

Reply
 
 
Feb 23, 2021 08:24:27   #
R.G. Loc: Scotland
 
It would be interesting to see how far they've come in experimenting with mixing colour filtered pixels and unfiltered pixels. The human eye doesn't need as much colour resolution as it does luminance resolution, and as your thread shows, the losses due to colour filtering are significant. (PS - Thanks for the above link).

Reply
Feb 23, 2021 09:29:32   #
selmslie Loc: Fernandina Beach, FL, USA
 
R.G. wrote:
It would be interesting to see how far they've come in experimenting with mixing colour filtered pixels and unfiltered pixels. The human eye doesn't need as much colour resolution as it does luminance resolution, and as your thread shows, the losses due to colour filtering are significant. (PS - Thanks for the above link).

Kodak and Edward T. Chang each toyed with the concept in 2007 but it didn't catch on. All that was accomplished was a slight increase in sensitivity but the loss of resolution was even greater than with a Bayer array.

There is no free lunch. It's really a choice between a color filter array like Bayer or X-Trans, or no CFA at all. There are a few more exotic choices but not for mainstream cameras.

You can do almost as well with a color image converted from raw to B&W. You can even do some outlandish things during the conversion from color to B&W as we have often seen on UHH. Or you can give up and work with the color version.

But a monochrome sensor is a commitment - not for the faint of heart.

Reply
Feb 23, 2021 09:50:17   #
selmslie Loc: Fernandina Beach, FL, USA
 
selmslie wrote:
... But Auto WB is also interesting. ....

Up to this point I had assumed that I needed to use a separate program to tag the file and pack the raw data into a DNG. But it looks like that may not always be necessary.

Here is a Capture One session where I opened the original raw file. It opened with a distinct magenta+red cast. I clicked anywhere in the image with the white balance probe and ended up with a gray-scale image.

The problem with this approach is that it still goes through the demosaicing process and I don't get the benefit of the gain in resolution.



Reply
Feb 23, 2021 10:07:06   #
selmslie Loc: Fernandina Beach, FL, USA
 
selmslie wrote:
... The problem with this approach is that it still goes through the demosaicing process and I don't get the benefit of the gain in resolution.

But it does give us the opportunity to see the impact that the demosaicing step has on the sharpness and resolution of the image.

via the DNG, no demosaicing

Directly from raw, with demosaicing

Reply
 
 
Feb 23, 2021 10:14:50   #
Abo
 
selmslie wrote:
If you like B&W photography it may be worth considering.

The A7 II was functioning perfectly but it was being used primarily for capturing raw images to be converted to B&W. The resolution was fine at 24 MP but the demosaicing process was introducing some occasional degradation.

During the B&W conversion of a raw file from a camera with a color filter array, information from four adjacent raw pixels is combined to create a single RGB pixel, and each raw pixel contributes to four adjacent RGB pixels. This spreading should cause a loss of sharpness, and it carries through when the RGB information is converted to B&W.

Sharpness is a linear measurement but megapixels are an area measurement. So a 2x gain in linear sharpness could translate to a 4x increase in resolution. That would make a 24 MP monochrome sensor as good as a 48 MP color sensor, and maybe even a 96 MP color sensor.

But the lens will limit that improvement. No matter how much we increase the sensor's effective resolution and sharpness, the end result will be a combination of the sensor and the lens. Either the lens or the sensor will be the weakest link. So there is no way to achieve a really dramatic increase in sharpness or resolution without also getting a sharper (and significantly more expensive) lens.

In a 24 MP Bayer sensor there are 12 green MP, and 6 MP each of red and blue. The effective resolution should be somewhere between 12 and 24 MP but likely on the low end of that range.

But all of this is theoretical. The only way to know for sure is to actually compare a camera without the Bayer array to one that still has it.

Why invest about $4000 in a used Leica Monochrom Typ 246 when some of the A7 II's features are missing from the Leica? For about $1000 plus some filters the Sony could be converted to B&W only. That is also cheaper than replacing the A7 II with an A7R II.

In the next post you will see the result. The differences are not dramatic and you really need to look very closely to see them.
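The resolution arithmetic quoted above is easy to sanity-check. These are the post's own figures (the 2x linear gain is a hypothetical best case, not a measurement):

```python
# Sharpness is linear, megapixels are area, so a k-fold linear gain is a
# k-squared gain in pixel count. Using the quoted figures:
base_mp = 24
linear_gain = 2.0                       # hypothetical 2x linear sharpness gain
equivalent_mp = base_mp * linear_gain ** 2
print(equivalent_mp)                    # 96.0

# Bayer channel counts for the same 24 MP sensor: half green, a quarter
# each red and blue.
green_mp, red_mp, blue_mp = base_mp / 2, base_mp / 4, base_mp / 4
print(green_mp, red_mp, blue_mp)        # 12.0 6.0 6.0
```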


At the risk of getting an even bigger headache from information overload of my limited cranial capacity, how would this relate to an X-Trans sensor?

Reply
Feb 23, 2021 10:59:20   #
selmslie Loc: Fernandina Beach, FL, USA
 
Abo wrote:
At the risk of getting an even bigger headache from information overload of my limited cranial capacity, how would this relate to an X-Trans sensor?

That's a good question.

I have seen some talk about removal of the IR cut filter from a Fuji but not the removal of the X-Trans array itself. I suppose it could be done but the service I used only does Sony full frame and crop sensors.

The effect of removing the X-Trans array might actually provide a little more increase in sharpness and resolution than for the Bayer array.

The reason for this is that the Bayer array is based on a simple 2x2 pixel concept - two green, one red and one blue pixel.

But the X-Trans pattern is more complicated. Its repeating square is 6x6 - 36 pixels made up of 20 green, 8 red and 8 blue.
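For reference, the 6x6 X-Trans tile can be written out and counted. This is one common orientation of Fuji's published layout:

```python
# One common orientation of the repeating 6x6 X-Trans tile.
xtrans = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]
flat = "".join(xtrans)
counts = {c: flat.count(c) for c in "GRB"}
print(counts)  # {'G': 20, 'R': 8, 'B': 8}
```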

I can only guess at the effect.

Reply
Feb 23, 2021 15:12:59   #
JBRIII
 
A question: Would adjusting the colors by stops really produce true black and white? Assume there is no green light but a lot of red illuminating the subject. A green-filtered pixel blocks the red, so there is no luminance value to use; with no filter, the red would be seen and would give a luminance value. Or am I missing something?

Second, I have also wondered about mixing color pixels (like the cones in the eye) with black-and-white pixels (like the more numerous rods). Humans do it; maybe the camera algorithms were insufficient when it was tried?
There are thermal infrared cameras which do this: a low-res thermal image fused with a high-res visible image - FLIR cameras, I believe. The process, called data or image fusion, can use any kind of data that can be made into an image. It can be done in Matlab, but I know of no canned, user-friendly package that does it. An interest of mine which, like too many others, I never find the time for because of too many interests.
Jim
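The low-res thermal plus high-res visible fusion Jim mentions can be sketched as a simple weighted blend. This is a toy with made-up data; real FLIR-style fusion is considerably more sophisticated:

```python
import numpy as np

# High-res visible-light luminance (8x8) and low-res thermal (2x2), both
# normalized to 0..1. The values are made up for illustration.
visible = np.random.default_rng(1).random((8, 8))
thermal = np.array([[0.2, 0.8],
                    [0.5, 0.1]])

# Nearest-neighbour upsample of the thermal image to the visible grid.
thermal_up = np.kron(thermal, np.ones((4, 4)))

# Simple weighted fusion; the blend weight is arbitrary.
alpha = 0.6
fused = alpha * thermal_up + (1 - alpha) * visible
print(fused.shape)  # (8, 8)
```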

Reply
Feb 23, 2021 15:40:20   #
selmslie Loc: Fernandina Beach, FL, USA
 
JBRIII wrote:
A question: Would adjusting the colors by stops really produce true black and white? Assume there is no green light but a lot of red illuminating the subject. A green-filtered pixel blocks the red, so there is no luminance value to use; with no filter, the red would be seen and would give a luminance value. Or am I missing something? ...

Your question is not clear to me. Are you referring to a camera with a Bayer array or without one (monochrome)?

In the case of a monochrome sensor, applying a filter over the entire image will alter the luminosity depending on the colors in the subject, just like it did with panchromatic film.

This would be a better comparison if there had been some clouds. Lately it's been all or nothing.

I don't have anything intelligent to say about your second question.

Or about the subjects after your second question.

Full color from an unmodified camera

Monochrome with no filter

Monochrome with red filter

Monochrome with green filter

Reply
 
 
Feb 23, 2021 16:16:21   #
JBRIII
 
selmslie wrote:
Your question is not clear to me. Are you referring to a camera with a Bayer array or without one (monochrome)?

In the case of a monochrome sensor, applying a filter over the entire image will alter the luminosity depending on the colors in the subject, just like it did with panchromatic film.

This would be a better comparison if there had been some clouds. Lately it's been all or nothing.

I don't have anything intelligent to say about your second question.

Or about the subjects after your second question.


I am asking about the procedure used to convert a color image to mono. As I believe was stated, each pixel is converted to luminance and adjusted by stops for its color response. While I know the color responses of the pixels overlap to a degree (i.e. green sees a little red, etc.), a solid blue area would produce luminance at a B&W pixel but probably none at a red one to adjust. Do they look at surrounding pixels and then somehow adjust the luminance values? If so, could this account for some of the differences between true mono cameras and software conversions of color images?
Thanks;
Jim

Reply
Feb 23, 2021 17:06:03   #
selmslie Loc: Fernandina Beach, FL, USA
 
JBRIII wrote:
I am asking about the procedure used to convert a color image to mono. ...

Capture One, Lightroom and many other programs make it easy to do the conversion.

Below is an example of the process for Capture One. As you can see, there are several options to let you increase or decrease the primary or complementary colors in an image with a simple slider. In this example I used the Cyan slider (the opposite of red) and the Blue slider to emulate the effect of a deep red filter over the lens.



All you need to do is play with the sliders.
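Under the hood, that slider-based conversion amounts to a weighted channel mix. A minimal sketch follows (hypothetical NumPy code, not Capture One's actual algorithm; the weights are illustrative):

```python
import numpy as np

def mix_to_bw(rgb, weights=(0.25, 0.5, 0.25)):
    """Weighted sum of the R, G, B planes -> one luminance plane."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # keep overall brightness roughly constant
    return np.tensordot(rgb.astype(float), w, axes=([-1], [0]))

img = np.zeros((2, 2, 3))
img[..., 0] = 200.0                       # a pure red patch

neutral = mix_to_bw(img)                      # generic, roughly even weights
red_filter = mix_to_bw(img, (0.9, 0.1, 0.0))  # emulate a deep red filter
print(red_filter[0, 0] > neutral[0, 0])       # True - a red filter lightens reds
```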

Reply
Feb 23, 2021 17:25:03   #
JBRIII
 
I don't think that answers my question of how the process is done. For example, let's say we take a photo of a pure blue at, say, 400 nm using a filter. I know from charts of Bayer filters that they don't have sharp cutoffs so some blue would get to the green pixels, but none should stimulate the red. So unless the process uses data from the surrounding blue and lit green pixels to decide there must be light hitting the red, a photo converted to B&W at the pixel level would have gray for green and black for red, while a real mono sensor would give one solid white or light gray. Either some sort of interpolation or AI would seem to be needed to avoid this. I realize this extreme situation would rarely exist in real-world photos - except maybe for modern art at the Hirshhorn in DC - but if the process works as I am describing, it might explain why some conversions show no discernible difference from real mono at the photo level while others do. I am just trying to understand exactly what the software does when converting photos.
Thanks;
Jim

P.S. Life Pixel is having a 10-day sale on conversions and converted cameras, but I do not believe they do Bayer removal.
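Jim's 400 nm thought experiment can be tested directly with a toy demosaic. This is hypothetical code with idealized, non-overlapping filters (real CFA dyes leak between channels), and it shows that the pixel over a red photosite ends up non-black only because its blue value is interpolated from neighbours:

```python
import numpy as np

# GRBG mosaic under ideal pure blue light: only the blue-filtered
# photosites record anything. (Real CFA dyes overlap, so green and red
# sites would see a little; this toy ignores that.)
h, w = 6, 6
raw = np.zeros((h, w))
raw[1::2, 0::2] = 1000.0                      # blue sites in a GRBG layout

# Crude nearest-neighbour demosaic: spread each blue sample over its
# 2x2 Bayer cell; the red and green planes stay at zero.
blue = np.kron(raw[1::2, 0::2], np.ones((2, 2)))
red = np.zeros((h, w))
green = np.zeros((h, w))

# B&W via a channel mix that gives blue some weight. The pixel over a
# *red* photosite is not black, because its blue value was interpolated
# from the neighbouring blue sites - interpolation fills the gap.
bw = 0.3 * red + 0.4 * green + 0.3 * blue
print(bw[0, 1])  # 300.0 - a red-site location, filled in by interpolation
```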

Reply
Feb 23, 2021 18:23:37   #
selmslie Loc: Fernandina Beach, FL, USA
 
JBRIII wrote:
I don't think that answers my question of how the process is done. For example, let's say we take a photo of a pure blue at say 400 nm using a filter. Now I know from charts of Bayer filters that they don't have sharp cutoffs so some blue would get to the green pixels, but none should stimulate the red. ...

You asked about converting a color image to B&W. That means that the color filter array (CFA) is still present. That's what I thought you were asking about.

If you look in this post you will see this chart:



Blue, green and red filters overlap in a typical CFA and they extend into the IR range if the IR cut filter is removed.

But you are probably not going to find a plot like this for your specific camera. For that matter, you might find it difficult to find one for a specific color filter you can screw onto your lens.

If you want to keep the CFA and supplement it with a separate color filter over the lens, knock yourself out. You will just be spitting into the wind. I would not bother.

Reply
Copyright 2011-2024 Ugly Hedgehog, Inc.