Reputation: 52047
I have an object model that I use to fill results from a query and that I then pass along to a gridview.
Something like this:
public class MyObjectModel
{
    public int Variable1 { get; set; }
    public int VariableN { get; set; }
}
Let's say Variable1 holds the value of a count and I know that the count will never become very large (i.e. the number of upcoming appointments for a certain day). For now, I've declared these properties as int. Let's say it's safe to assume that someone will book fewer than 255 appointments per day. Will changing the datatype from int to byte affect performance much? Is it worth the trouble?
Thanks
Upvotes: 0
Views: 502
Reputation: 73604
It will affect the amount of memory allocated for that variable. In my personal opinion, I don't think it's worth the trouble in the example case.
If there were a huge number of variables, or a database table where you could really save, then yes, but not in this case.
Besides, after years of maintenance programming, I can safely say that it's rarely safe to assume an upper limit on anything. If there's even a remote chance that some poor maintenance programmer is going to have to re-write the app because of an attempt to save a trivial amount of resources, it's not worth the pay-off.
Upvotes: 1
Reputation: 499392
No, performance will not be affected much at all.
For each int you will be saving 3 bytes, or 6 in total for the specific example. Unless you have many millions of these, the savings in memory are very small.
Not worth the trouble.
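As a rough, illustrative sketch of those numbers (the SizeDemo class and Main method here are made up for the example), the per-field difference follows directly from the primitive sizes:
using System;

class SizeDemo
{
    static void Main()
    {
        // sizeof on primitive types is a compile-time constant in C#.
        Console.WriteLine(sizeof(int));  // 4 bytes
        Console.WriteLine(sizeof(byte)); // 1 byte
        // Switching a field from int to byte saves at most 3 bytes per field,
        // so at most 6 bytes per MyObjectModel instance; the runtime may pad
        // object layouts, so the real saving can be even smaller.
    }
}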
Edit:
Just to clarify - my answer is specifically about the example code. In many cases the choices will make a difference, but it is a matter of scale and will require performance testing to ensure correct results.
To answer @Filip's comment - There is a difference between compiling an application to 64bit and selecting an isolated data type.
Upvotes: 3
Reputation: 59553
Using an integer variable smaller than an int (System.Int32) will not provide any performance benefits. This is because most integer operations in the CLR will promote the variable to an int prior to performing the operation. int is considered the "natural" integer size on the systems for which the CLR was developed.
Consider the following code:
for (byte appointmentIndex = 0; appointmentIndex < Variable1; appointmentIndex++)
    ProcessAppointment(appointmentIndex);
In the compiled code, the comparison (appointmentIndex < Variable1) and the increment (appointmentIndex++) will (most likely) be performed using 32-bit integers. Even if the optimizer uses a smaller data type, the CPU itself will require additional work to use the smaller data type.
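As a small sketch of that promotion at the language level (PromotionDemo is just an illustrative name), C# performs arithmetic on byte operands as int, so the result has to be cast back down:
class PromotionDemo
{
    static void Main()
    {
        byte a = 200;
        byte b = 50;

        // byte sum = a + b;       // does not compile: a + b is of type int
        byte sum = (byte)(a + b);  // explicit narrowing cast back to byte
        int widened = a + b;       // no cast needed when the target is int

        System.Console.WriteLine(sum + ", " + widened); // prints "250, 250"
    }
}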
If you are storing an array of values, then using a smaller data type could help save space, which might give a performance advantage in some scenarios.
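For example (a sketch with an arbitrary element count), the element storage of a byte[] is roughly a quarter of that of an int[] of the same length, which is where the saving can start to matter for caching:
class ArrayDemo
{
    static void Main()
    {
        const int length = 1000000;

        byte[] countsAsBytes = new byte[length]; // ~1 MB of element data
        int[] countsAsInts = new int[length];    // ~4 MB of element data

        System.Console.WriteLine(countsAsBytes.Length + " vs " + countsAsInts.Length);
    }
}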
Upvotes: 1
Reputation: 40431
I agree with the other answers that the performance gain won't be worth it. But if you're going to do it at all, go with a short instead of a byte. My rule of thumb is to pick the highest number you can imagine, multiply it by 10, then use that as the basis to pick your type. So if you can't possibly imagine a value higher than 200, then use 2000 as your basis, which would mean you'd need a short.
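Applied to the model from the question, that rule of thumb would look something like this (a sketch only; whether the change is worth making is the same judgement call as in the other answers):
public class MyObjectModel
{
    // short (System.Int16) goes up to 32,767, which comfortably covers a
    // "can't imagine more than 200, so plan for 2000" appointment count.
    public short Variable1 { get; set; }
    public short VariableN { get; set; }
}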
Upvotes: 0
Reputation: 67574
Contrary to popular belief, making your data type smaller does not make access faster. In fact, it's slower. Look at bool, it's implemented as an int.
This is because internally, your CPU works with native-word-sized registers (32/64 bit these days), and you're forcing it to convert your data back and forth for no reason (well only when writing the result in memory, but it's still a penalty you could easily avoid).
Fiddling with integer widths only affects memory access, and caching specifically. This is the kind of stuff you can only figure out by profiling your application and looking at page fault counters in particular.
Upvotes: 0
Reputation: 14411
The .NET runtime optimizes the use of Int32, especially for counters etc. See: .NET Integer vs Int16?
Upvotes: 0