Peterff wrote:
I don't disagree with any of that, except that sensor data records light and thus visual data that eventually is transformed into an image in one form or another that can be viewed by a human. A JPEG, TIFF, PNG or whatever can't be viewed without interpretation either.
That simply is not true.
Each pixel in an image is precisely defined in any of those image formats. The color and luminance at each location are precise, and are the same for any of those formats. The byte values in the file may be different, but when the file is read into the computer the "interpretation" is a predefined process, and to the degree that it is accurate it produces exactly the same data fed to the monitor, regardless of which image format it came from.
Raw sensor data is not interpreted; it is interpolated (aka demosaiced). Before that is done there is no "predefined process" that is always carried out the same way. Correct interpolation can produce an almost infinite number of different data sets, and the user chooses which one to make. Today that might be image 1001, and tomorrow a different user might decide on image 8008. Both are right; neither is the same. The raw data cannot be called image data because it does not determine whether image 1001 or image 8008 is correct. Once the user decides which image to make, it is saved as exactly that image. Image 8008 is not image 1001, and the data in the image file is specific to one or the other.
That is just like the pile of wood at the lumber yard. We can make a two-story house or a ten-story house; it can have many bathrooms or none. It's up to the user to decide, and all choices are correct. The lumber yard holds piles of lumber, not houses. If you build a house, design 1001 has two bathrooms and one story, while house 8008 has eight bathrooms on four floors. One house or the other. The piles of lumber were neither; they were not houses.
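The "image 1001 vs. image 8008" point can be sketched in a few lines. This is a toy illustration, not any real raw converter: the values are made up, and the two "interpretations" (take one green sample vs. average both) stand in for any two legitimate demosaicing choices.

```python
import numpy as np

# Hypothetical 2x2 RGGB Bayer block: one brightness value per sensor site.
# No pixel here has a color yet -- this is sensor data, not image data.
raw = np.array([[120.0, 200.0],    # R   G1
                [180.0,  60.0]])   # G2  B

r, g1, g2, b = raw[0, 0], raw[0, 1], raw[1, 0], raw[1, 1]

# "Image 1001": use the first green sample as-is.
image_1001 = np.array([r, g1, b])

# "Image 8008": average the two green samples.
image_8008 = np.array([r, (g1 + g2) / 2, b])

# Both are correct demosaics of the same raw data, yet they are
# different images. The raw data alone defined neither of them.
print(image_1001)   # [120. 200.  60.]
print(image_8008)   # [120. 190.  60.]
```

Only once one of these results is written to a JPEG, TIFF, or PNG does a specific, fully defined image exist.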
Peterff wrote:
You can take lumber and transform it into many things, not just housing or some other building. A boat, furniture, a tool, a musical instrument, a weapon, a fire, a vehicle, so many other things.
You cannot do that with raw sensor data and transform it into a perfume or paper or an aircraft. A raw file may not be close to a final image, but it is still visual data that cannot easily be re-purposed as audio, financial, fluid-dynamics or other kinds of data, as in the lumber analogy. In my opinion you are both wrong.
In fact there are many things that raw sensor data can be used for, and that is not the point. The point is that it does not define one single specific image. There is no image. Just like those piles of wood are not houses laying around at the lumber yard.
Peterff wrote:
The situation is very similar to the dilemma of the abortion debate. ...
Let's not obfuscate the topic with emotionally charged concepts that have no value here.
Peterff wrote:
... So no, you may think I am wrong, and that Apaflo is correct. I think that you are both wrong, but does it actually matter?
We're only talking about data files here and how to categorize them, it is still a philosophical and not a technical question.
There is no philosophical question; it is purely technical. And yes, it does actually matter, if and only if a person actually wants to work with sensors, raw sensor data, and the software that processes raw sensor data. Obviously most photographers have little to no interest, and for them it makes no difference. But for the engineers designing sensors, or the various devices that use sensors, whether cameras or something else, and for the software engineers who develop programs to work with the data, fully grasping the differences is an absolute requirement.
Peterff wrote:
The philosophical question is when does visual data become an image, at what stage of processing? I don't think that we have an agreed answer on that, and frankly I don't really think we need one to discuss the relative merits of raw versus other things.
That is not philosophical, but a pretty simple technical question. And one that I've answered more than once in this thread already.
Sensor data does not define the characteristics of an image, such as color, brightness, and location; image data does. For example, if a sensor has 2000x3000 sensor sites, what relationship does the site at 1400x2000 have to the pixel at 1400x2000 in an image produced from that data? It defines none of that pixel's color, brightness, or location. If the value of that one piece of sensor data is changed, it affects, to at least some degree, probably several dozen pixels in any image produced from that data. And in reverse, for the pixel at 1400x2000 in any image produced, there are perhaps anywhere from 9 to 64 data locations on the sensor that affect the color and intensity of that pixel. The effect, however, is not preset; it need not be the same each time the process is done for the process to be called correct. That differs from pixel data, which is either processed correctly or carries a measurable error.
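The many-sensels-to-one-pixel relationship can be demonstrated with a toy demosaic. This is a hypothetical averaging scheme standing in for any real algorithm (all names and numbers are illustrative): each output pixel draws on the green sensor sites in its 3x3 neighborhood, so changing a single sensor value alters a whole block of output pixels.

```python
import numpy as np

def toy_demosaic(raw):
    """Toy demosaic: each output pixel's green value is the average of the
    green sensor sites (RGGB layout) in its 3x3 neighborhood."""
    h, w = raw.shape
    # In an RGGB mosaic the green sites are where row + column is odd.
    green_mask = (np.add.outer(np.arange(h), np.arange(w)) % 2 == 1)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            patch = raw[y0:y1, x0:x1]
            out[y, x] = patch[green_mask[y0:y1, x0:x1]].mean()
    return out

rng = np.random.default_rng(0)
raw = rng.uniform(0, 255, (6, 6))     # made-up 6x6 sensor readout
before = toy_demosaic(raw)

raw[3, 2] += 50.0                     # change ONE sensor site (a green one)
after = toy_demosaic(raw)

changed = np.count_nonzero(before != after)
print(changed)   # prints 9: one sensel change altered a 3x3 block of pixels
```

One edited byte of sensor data moved nine output pixels here; with a real demosaicing algorithm and its larger support window, the footprint is bigger still.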
Peterff wrote:
Bring forth the zealots and heretics, and tie them to their stakes. I'll happily throw fuel upon the fires!
Let's skip fueling fires and put some light on the subject instead. Which is to say that if you are not willing to learn, I'm not willing to discuss it further.