Reputation: 141
I believe that when you add two unsigned int values together, the resulting value's data type will be an unsigned int.
But the addition of two unsigned int values may produce a result that is larger than an unsigned int can hold.
So why does unsigned int + unsigned int return an unsigned int and not some other, larger data type?
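For example (just a quick sketch with made-up values to show the situation I mean):
unsigned int a = 4000000000u;  // fits in an unsigned int
unsigned int b = 4000000000u;  // fits in an unsigned int
unsigned int c = a + b;        // the mathematical sum 8000000000 does not fit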
Upvotes: 0
Views: 476
Reputation: 43
Because a variable plus another variable of the same type can only be equal to that same type (well, in some cases it can be something else, but not in your case).
Example:
int + int = int. An int plus another int cannot be equal to a float, because the result doesn't have the properties of a float. I hope this answers your question, bye!
Upvotes: -3
Reputation: 123431
The type of a variable does not only determine the range of values it can hold but also, loosely speaking, how the operations are realized. If you add two unsigned values you get an unsigned result. If you want a different type as the result (e.g. long unsigned) you can cast:
unsigned x = 42;
unsigned y = 42;
// convert the operands first, so the addition itself is carried out in long unsigned
long unsigned z = static_cast<long unsigned>(x) + static_cast<long unsigned>(y);
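A small sketch (assuming the common case where unsigned is 32 bits and long unsigned is 64 bits, e.g. a typical 64-bit Linux toolchain, and with values chosen to overflow) of why the casts have to happen before the addition:
#include <iostream>

int main() {
    unsigned x = 4000000000u;       // close to the top of the 32-bit range
    unsigned y = 4000000000u;

    long unsigned wrapped = x + y;  // addition is done in unsigned and wraps first,
                                    // only the wrapped result is widened
    long unsigned widened = static_cast<long unsigned>(x)
                          + static_cast<long unsigned>(y);

    std::cout << wrapped << '\n';   // 3705032704 on such a platform
    std::cout << widened << '\n';   // 8000000000
}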
Actually the real reason is: it is defined like that. In particular, unsigned overflow is well defined in C++ to wrap around, and using a wider type for the result of unsigned operators would break that behaviour.
As a contrived example, consider this loop:
for (unsigned i = i0; i != 0; ++i) {}
Note the condition! Let's assume i0 > 0; then it can only ever be false when incrementing the maximum value of unsigned results in 0. This code is obfuscated and should probably make you raise an eyebrow or two in a code review, though it is perfectly legal. Making the result type adjust depending on the value of the result, or choosing the result type such that overflow cannot happen, would break this behaviour.
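If you want to see the well-defined wrap-around that makes the loop above terminate, a minimal sketch:
#include <iostream>
#include <limits>

int main() {
    unsigned i = std::numeric_limits<unsigned>::max();
    ++i;                     // well defined for unsigned types: wraps around
    std::cout << i << '\n';  // prints 0, so a condition like i != 0 finally fails
}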
Upvotes: 2
Reputation: 238461
Let's imagine that we have a language where adding two integers results in a bigger type. So, adding two 32-bit numbers results in a 64-bit number. What would happen in the following expression?
auto x = a + b + c + d + e + f + g;
a + b is 64 bits. a + b + c is 128 bits. a + b + c + d is 256 bits... This becomes unmanageable very fast. Most processors don't support operations with such wide operands.
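For contrast, a quick sketch (made-up values, and assuming a C++17 compiler for is_same_v) showing that in actual C++ the result type stays the same no matter how many operands you chain:
#include <iostream>
#include <type_traits>

int main() {
    unsigned a = 1, b = 2, c = 3, d = 4, e = 5, f = 6, g = 7;
    auto x = a + b + c + d + e + f + g;
    // the whole chain is evaluated as unsigned int, nothing ever widens
    static_assert(std::is_same_v<decltype(x), unsigned>, "still unsigned");
    std::cout << x << '\n';  // 28
}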
Upvotes: 2
Reputation: 234875
This would have truly evil consequences:
Would you really want 1 + 1 to be a long type? And (1 + 1) + (1 + 1) would become a long long type? It would wreak havoc with the type system.
It's also possible, for example, that short, int, long, and long long are all the same size, and similarly for the unsigned versions.
So the implicit type conversion rules as they stand are probably the best solution.
You could always force the issue with something like
0UL + x + y
where x and y are your unsigned int values.
Upvotes: 2