Ugly Hedgehog - Photography Forum
Pixel size vs Bit Size
Apr 4, 2019 00:02:00   #
Wallen Loc: Middle Earth
 
TheShoe wrote:
Since we are dealing with matters of definition, there is nothing de facto about it; it is 8 bits. In the earlier days, when Hollerith cards and Binary Coded Decimal were in use, the character was 6 bits. With later machines, a larger range of values was needed, which, in turn, meant that something was needed to replace the term "character". Enter the byte, which is defined as 8 bits (a bit is 1 binary digit).


Byte:
Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today ("de facto") it almost always means eight bits.

https://en.wikipedia.org/wiki/Units_of_information
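
As a quick, rough illustration of the "8 bits per byte" point, here is a minimal Python sketch (my own example, not from the Wikipedia article):

# A byte is 8 bits, so one byte can represent 2**8 = 256 distinct values.
BITS_PER_BYTE = 8
print(2 ** BITS_PER_BYTE)                  # 256

# An ASCII character fits in a single byte; its code point is a small integer.
print(ord("A"), format(ord("A"), "08b"))   # 65 01000001

# Encoding text shows the byte count directly: 4 ASCII characters -> 4 bytes.
print(len("byte".encode("ascii")))         # 4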

Apr 4, 2019 06:58:11   #
Longshadow Loc: Audubon, PA, United States
 
Wallen wrote:
Byte:
Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today ("de facto") it almost always means eight bits.

https://en.wikipedia.org/wiki/Units_of_information


Yea, just like megs and gigs....

We (programmers) said 30K for a file that was 30 Kilobytes in size, NEVER Ks.
It was a 20Meg file, NO "s". (Meg stands for megabytes, which is already plural.)
The media and people who were NOT in computing started with megs and gigs as plural. Guess what became the "standard"? I STILL say 25Meg and 5Gig.
I can't wait until people start using ters for terabytes. Why haven't they? Sound stupid?
(Sorry, been a hitch in my git-along since it started.)

Unless they dropped it, a "word" could be 4, 8, 16, ... bits, depending on the architecture of the processor. Many micro-controllers started with a 4-bit word. The Z80, 8080, 6800, etc. were processors with an 8-bit word. Then came the processors with a 16-bit word.
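
As a rough illustration of how the word size limits what a processor can hold in one word, a minimal Python sketch (the list of word sizes is just an example):

# Unsigned range representable in one "word" of a given width: 0 .. 2**bits - 1.
for word_bits in (4, 8, 16, 32, 64):
    print(f"{word_bits:2d}-bit word: 0 .. {2 ** word_bits - 1:,}")
#  4-bit word: 0 .. 15
#  8-bit word: 0 .. 255          (Z80, 8080, 6800 class)
# 16-bit word: 0 .. 65,535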

Apr 11, 2019 15:02:15   #
Fotoartist Loc: Detroit, Michigan
 
Wallen wrote:
With the appropriate lens, the number of available pixels or sample points for recording (Figure 3A vs 3B) dictates how much detail can be captured. Figures 3A & 3B both use a 4-bit gradient, meaning both can show 16 shades of grey. But 3B has more pixels than 3A, so it is able to show more detail: having more boxes to fill, it can use more of the available shades.

On the other hand, the software (together with the sensor/electronics technology & pixel size) determines how much color depth/gamut is possible (gradation/shades recorded per bit - Figures 1, 2 & 3).

The computer records these values in groups of bits, as in 8-bit/12-bit/16-bit JPEG or RAW files, where each group of bits corresponds to a single shade. A higher bit depth produces smoother gradient transitions.

Figures 1A, 2A, 3A & 3B show how each would render a black pearl image.

Please note that this is a very simplified explanation.
A sensor only captures the intensity of light. Hence, the sensor's matrix is divided so that some pixels have filters that allow only green to pass, to output green signals; others have red filters, and the remaining have blue. Each one gives its share of the RGB signals whose blending produces the other colors/shades.

Technically, a 1-megapixel camera is only a 0.33-megapixel camera, because only 1/3 of its sensor records each color* (not exactly true, because some sensors have filters that favor recording more of one color, such as the Bayer filter, which allots more pixels to recording green). A group of 3 color pixels is needed to show each point of color in the gamut.

To combat these inefficiencies, various software filters and algorithms such as smoothing, dithering, etc. are applied to the output. New technologies are also being explored, such as pixel shifting & colored sensor layering.


I like the diagram. I like visuals. Thanks.
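
To put rough numbers on the quoted explanation, here is a minimal Python sketch of the two ideas above - shades per bit depth, and how a Bayer-style mosaic splits a sensor's pixels among the three colors (the 1-megapixel figure is just the example from the post):

# Shades of grey available at a given bit depth: 2**depth.
for depth in (1, 4, 8, 12, 16):
    print(f"{depth:2d}-bit: {2 ** depth:,} shades")
# 4-bit gives the 16 shades used in figures 3A/3B; 8-bit gives 256 (typical JPEG);
# 12- to 16-bit is typical for RAW files.

# Bayer-style mosaic: each photosite records only one color, and the pattern
# allots twice as many sites to green as to red or blue.
total_pixels = 1_000_000     # hypothetical 1-megapixel sensor from the example
green = total_pixels // 2
red = total_pixels // 4
blue = total_pixels // 4
print(red, green, blue)      # 250000 500000 250000 -- full color needs demosaicing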

 
 
Apr 14, 2019 07:13:38   #
Wallen Loc: Middle Earth
 
Longshadow wrote:
Yea, just like megs and gigs....

We (programmers) said 30K for a file that was 30 Kilobytes in size, NEVER Ks.
It was a 20Meg file, NO "s". (Meg stands for megabytes, which is already plural.)
The media and people who were NOT in computing started with megs and gigs as plural. Guess what became the "standard"? I STILL say 25Meg and 5Gig.
I can't wait until people start using ters for terabytes. Why haven't they? Sound stupid?
(Sorry, been a hitch in my git-along since it started.)

Unless they dropped it, a "word" could be 4, 8, 16, ... bits, depending on the architecture of the processor. Many micro-controllers started with a 4-bit word. The Z80, 8080, 6800, etc. were processors with an 8-bit word. Then came the processors with a 16-bit word.


I guess the reason is that digital technology grows so fast and with so much variety that users just tend to adopt the most common terms and usage instead of settling on a set standard.
As for "ters", LOL, I'm a little bit partial to "Teb" :-)
