user517339

Reputation:

What is the difference between the types of 0x7FFF and 32767?

I'd like to know what the difference is between the values 0x7FFF and 32767. As far as I know, they are both integers, and the only benefit is notational convenience. They take up the same amount of memory and are represented the same way, so is there another reason for choosing to write a number as 0x... rather than in base 10?

Upvotes: 4

Views: 15398

Answers (7)

DWright

Reputation: 9500

The 0x7FFF notation is much clearer about potential over/underflow than the decimal notation.

If you're using something that is 16 bits wide, 0x7FFF alerts you to the fact that if you use those bits in a signed way, you are at the very maximum of what those 16 bits can hold as a positive, signed value. Add 1 to it, and you'll overflow.

The same goes for something 32 bits wide: the maximum it can hold (signed, positive) is 0x7FFFFFFF.

You can see these maximums straight from the hex notation, whereas you can't from the decimal notation (unless you happen to have memorized that 32767 is the positive signed maximum for 16 bits).

(Note: the above holds when two's complement is used to distinguish between positive and negative values, i.e. when the 16 bits hold a signed value.)
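
For a concrete illustration, here is a minimal C sketch (the variable name is made up for the example, and the wrap-around result assumes a two's-complement machine) showing that the hex spellings line up with the signed maxima from <stdint.h>, and what happens one step past 0x7FFF:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The hex spellings line up directly with the signed maxima. */
    printf("INT16_MAX = %ld (0x%lX)\n", (long)INT16_MAX, (unsigned long)INT16_MAX);
    printf("INT32_MAX = %ld (0x%lX)\n", (long)INT32_MAX, (unsigned long)INT32_MAX);

    /* One past 0x7FFF no longer fits in 16 signed bits: storing it back
       wraps into the sign bit (the narrowing conversion is
       implementation-defined, but on two's-complement machines it gives -32768). */
    int16_t v = INT16_MAX;
    v = (int16_t)(v + 1);
    printf("0x7FFF + 1 stored back in int16_t -> %d\n", (int)v);
    return 0;
}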

Upvotes: 2

Davide Berra

Reputation: 6568

Choosing to write 0x7fff or 32767 in source code is purely a programmer's choice, because those values are stored in exactly the same way in computer memory.

For example: I'd feel more comfortable using the 0x notation when I need to operate on 4-bit nibbles instead of the classical whole byte.

If I need to extract the lower 4 bits of a char variable, I'd do:

res = charvar & 0x0f;

That's the same as:

res = charvar & 15;

The latter is just less intuitive and less readable, but the operation is identical.
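
Put into a tiny complete program (the value 0x5A is just an arbitrary example), you can check that both masks produce the same result:

#include <stdio.h>

int main(void)
{
    char charvar = 0x5A;             /* arbitrary example value: 0101 1010 */

    int res_hex = charvar & 0x0f;    /* mask written in hex: keep the low nibble */
    int res_dec = charvar & 15;      /* the very same mask written in decimal */

    printf("hex mask: %d, decimal mask: %d\n", res_hex, res_dec);   /* both print 10 */
    return 0;
}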

Upvotes: 0

Chris Eberle

Reputation: 48795

The only advantage is that some programmers find it easier to convert between base 16 and binary in their heads. Since each base 16 digit occupies exactly 4 bits, it's a lot easier to visualize the alignment of bits. And writing in base 2 is quite cumbersome.
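
As a rough sketch of that one-hex-digit-per-four-bits alignment, this C snippet (the grouping logic is purely illustrative) prints 0x7FFF in binary, nibble by nibble:

#include <stdio.h>

int main(void)
{
    unsigned value = 0x7FFF;                 /* hex digits: 7    F    F    F */
    for (int bit = 15; bit >= 0; --bit) {
        putchar(((value >> bit) & 1u) ? '1' : '0');
        if (bit % 4 == 0 && bit != 0)
            putchar(' ');                    /* space between nibbles */
    }
    putchar('\n');                           /* prints: 0111 1111 1111 1111 */
    return 0;
}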

Upvotes: 10

Kerrek SB

Reputation: 477398

The type of an undecorated decimal integral constant is always signed. The type of an undecorated hexadecimal or octal constant alternates between signed and unsigned as you hit the various boundary values determined by the widths of the integral types.

For constants decorated as unsigned (e.g. 0xFU), there is no difference.

Also, it's not possible to express 0 as a decimal literal (a plain 0 is an octal literal).

See Table 6 in C++11 and 6.4.4.1/5 in C11.
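
A quick way to see the alternation is a sketch like the following (it assumes C11 _Generic and a platform with 32-bit int; the exact type of the decimal constant depends on the widths of long and long long):

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x), \
    int: "int", \
    unsigned int: "unsigned int", \
    long: "long", \
    unsigned long: "unsigned long", \
    long long: "long long", \
    unsigned long long: "unsigned long long", \
    default: "other")

int main(void)
{
    /* With 32-bit int, the hex constant just past INT_MAX becomes unsigned int,
       while the equivalent decimal constant stays signed and moves to a wider type. */
    printf("0x80000000 -> %s\n", TYPE_NAME(0x80000000));   /* typically unsigned int */
    printf("2147483648 -> %s\n", TYPE_NAME(2147483648));   /* typically long or long long */
    return 0;
}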

Upvotes: 3

Scott Stafford

Reputation: 44808

That is true: there is no difference. Any difference would come from the variable the value is stored in. The literals 0x7FFF and 32767 are identical to the compiler in every way.

See http://www.cplusplus.com/doc/tutorial/constants/.

Upvotes: 0

Reed Copsey

Reputation: 564681

Both are integer literals, and just provide a different means of expressing the same number. There is no technical advantage to using one form over the other.

Note that you can also use octal notation (by prefixing the value with 0).
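
For example, these three spellings of the same number are interchangeable (a minimal sketch):

#include <stdio.h>

int main(void)
{
    /* Hex, decimal, and octal spellings of the same value. */
    printf("%d %d %d\n", 0x7FFF, 32767, 077777);   /* all three print 32767 */
    return 0;
}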

Upvotes: 2

hd1

Reputation: 34677

One is hex -- base 16 -- and the other is decimal?

Upvotes: 0
