Reputation: 75276
I have a method that processes all the pixels in a bitmap (represented as a Byte array). For each pixel, I do fairly complex calculations for each R, G, and B value, using a float to store the calculated value until finally converting it back to a Byte (using just a cast; I'm certain the value in the float will always be 255.0 or less).
In trying to optimize this method, I was surprised to find that about 80% of the overall processing time came from just casting the three float values for R, G, and B to their Byte counterparts.
Is there any kind of super-fast way of doing this (e.g.):
float Rtotal = 123.7f;
float Gtotal = 7.3f;
float Btotal = 221.3f;
Byte Rsource = (Byte)Rtotal;
Byte Gsource = (Byte)Gtotal;
Byte Bsource = (Byte)Btotal;
Upvotes: 3
Views: 81
Reputation: 75276
OK, this is a bit odd. If I just make this change, casting through int first:
float Rtotal = 123.7f;
float Gtotal = 7.3f;
float Btotal = 221.3f;
Byte Rsource = (Byte)(int)Rtotal;
Byte Gsource = (Byte)(int)Gtotal;
Byte Bsource = (Byte)(int)Btotal;
the extra time caused by the direct (Byte) cast disappears. My guess is that the compiler adds some kind of bounds checking to the (Byte) cast to ensure the value is within the valid range of a Byte, but omits it when the cast goes to int first.
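To make the workaround concrete, here is a minimal sketch of the int-first cast applied as a helper. The names (`ToByteFast`, `CastDemo`) are illustrative, not from the original post; both cast paths truncate toward zero, so given the poster's guarantee that values stay in 0.0–255.0 the results are identical to the direct (Byte) cast.

```csharp
using System;

class CastDemo
{
    // Convert a float channel value (assumed to lie in 0.0f..255.0f)
    // to a byte by casting through int first, per the answer above.
    static byte ToByteFast(float channel)
    {
        return (byte)(int)channel; // truncates toward zero, same result as (byte)channel
    }

    static void Main()
    {
        float Rtotal = 123.7f, Gtotal = 7.3f, Btotal = 221.3f;
        Console.WriteLine($"{ToByteFast(Rtotal)} {ToByteFast(Gtotal)} {ToByteFast(Btotal)}");
        // prints "123 7 221"
    }
}
```

Whether the two casts actually differ in speed depends on the compiler and runtime version, so it is worth benchmarking both forms on the target machine before committing to one.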
Upvotes: 3