Reputation: 2240
I'm trying to work out how many possible colour values a pixel can take in any given image.
For example, if an image uses 8 bits per pixel, then it can represent one of 256 shades.
I'm looking for something like the following:
CImg<unsigned char> inputImage(inputImageFilename.c_str());
CImgDisplay disp_input(inputImage,"input");
std::cout << sizeof(inputImage[0]);
I know that this particular image has an 8-bit pixel depth. I was hoping this would output 8, which I could then use as the exponent of 2 to get 256 (2^8 = 256). But it outputs 1, so this is not an option.
I've also tried .depth(), but quickly realised this does not refer to the pixel depth.
Can someone help me out?
Upvotes: 1
Views: 920
Reputation: 613
Two things here:
The documentation states:
Class representing an image (up to 4 dimensions wide), each pixel being of type T.
which means the pixel depth is defined by the template type T. In your case this is unsigned char, which results in a pixel depth of 8 bits. If you want a pixel depth of 16 you could use CImg<uint16_t>.
Depending on the file type you are reading, you can determine the bit depth. JPEG, for example, has a bit depth of 8, while PNG can have a bit depth of 8 or 16 (at least that's what's supported by CImg). If you have a PNG file and want to know the bit depth, you can use the function load_png() as follows:
CImg<unsigned char> inputImage; // note: "inputImage()" would declare a function, not an image
unsigned int bit_depth;
inputImage.load_png(inputImageFilename.c_str(), &bit_depth);
std::cout << bit_depth;
Since I used unsigned char as type T, I will only have access to the first 8 bits even if the bit depth of the file is 16. Internally the image data is saved as unsigned short (i.e. 16 bit) if bit_depth == 16. So the following should be possible:
if (bit_depth == 16)
CImg<unsigned short> newImage(inputImage);
The bit depth can of course also be read from the EXIF data of the file.
Upvotes: 1