Lodhart

Reputation: 685

What does 'u' mean after a number?

Can you tell me what exactly the u after a number means? For example:

#define NAME_DEFINE 1u 

Upvotes: 53

Views: 89206

Answers (5)

Dirk Herrmann

Reputation: 5949

A decimal literal in the code (rules for octal and hexadecimal literals are different, see https://en.cppreference.com/w/c/language/integer_constant) has one of the types int, long or long long. From these, the compiler has to choose the smallest type that is large enough to hold the value. Note that the types char, signed char and short are not considered. For example:

0 // this is a zero of type int
32767 // type int
32768 // could be int or long: On systems with 16 bit integers
      // the type will be long, because the value does not fit in an int there.

If you add a u suffix to such a number (a capital U will also do), the compiler will instead have to choose the smallest type from unsigned int, unsigned long and unsigned long long. For example:

0u // a zero of type unsigned int
32768u // type unsigned int: always fits into an unsigned int
100000u // unsigned int or unsigned long

The last example can be used to show the difference to a cast:

100000u // always 100000, but may be unsigned int or unsigned long
(unsigned int)100000 // always unsigned int, but not always 100000
                     // (e.g. if int has only 16 bit)
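
If you want to see this difference on an actual implementation, here is a minimal C11 sketch (not part of the original answer) that uses _Generic to print the type each constant ends up with:

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),      \
        int:           "int",            \
        long:          "long",           \
        unsigned int:  "unsigned int",   \
        unsigned long: "unsigned long",  \
        default:       "other")

int main(void)
{
    puts(TYPE_NAME(100000u));              // unsigned int or unsigned long, platform-dependent
    puts(TYPE_NAME((unsigned int)100000)); // always unsigned int
    return 0;
}

On a platform with 32-bit int, both lines print "unsigned int"; on one with 16-bit int, the first would print "unsigned long" while the second still prints "unsigned int".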

On a side note: There are situations where adding a u suffix is the right thing to do to ensure correctness of computations, as Lundin's answer demonstrates. However, there are also coding guidelines that strictly forbid mixing of signed and unsigned types, even to the extent that the following statement

unsigned int x = 0;

is classified as non-conforming and has to be written as

unsigned int x = 0u;

This can lead to a situation where developers who deal a lot with unsigned values develop the habit of adding u suffixes to literals everywhere. But be aware that changing signedness can lead to different behavior in various contexts. For example:

(x > 0)

can (depending on the type of x) mean something different than

(x > 0u)
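
As a minimal illustration (assuming x is a plain int holding a negative value; this sketch is not part of the original answer):

#include <stdio.h>

int main(void)
{
    int x = -1;

    if (x > 0)
        puts("x > 0");   // not printed: -1 > 0 is false
    if (x > 0u)
        puts("x > 0u");  // printed: x is converted to unsigned int,
                         // so -1 becomes a huge positive value
    return 0;
}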

Luckily, the compiler / code checker will typically warn you about suspicious cases. Nevertheless, adding a u suffix should be done with consideration.

Upvotes: 2

Lundin

Reputation: 215115

Integer literals like 1 in C code are always of the type int. int is the same thing as signed int. One adds u or U (they are equivalent) to the literal to make it unsigned int, to prevent various unexpected bugs and strange behavior.

One example of such a bug:

On a 16-bit machine where int is 16 bits, this expression will result in a negative value:

long x = 30000 + 30000;

Both 30000 literals are int, and since both operands are int, the result will be int. A 16-bit signed int can only contain values up to 32767, so it will overflow. x will get a strange, negative value because of this, rather than 60000 as expected.

The code

long x = 30000u + 30000u;

will however behave as expected.
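
The same kind of bug can be reproduced on a typical system where int is 32 bits; here is a small sketch with values chosen purely for illustration (not part of the original answer):

#include <stdio.h>

int main(void)
{
    long long bad  = 2000000000 + 2000000000;   // int + int: signed overflow (undefined behavior)
    long long good = 2000000000u + 2000000000u; // unsigned int arithmetic: no overflow

    printf("bad:  %lld\n", bad);   // typically a strange negative number
    printf("good: %lld\n", good);  // 4000000000 as expected
    return 0;
}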

Upvotes: 76

Mocha

Reputation: 136

It is a way of telling the compiler that the constant 1 is meant to be used as an unsigned integer. Without a suffix, an integer constant such as 1 has a signed type (int by default). To avoid confusion, it is recommended to add the 'u' suffix whenever a constant is meant to be used as an unsigned integer. Other similar suffixes exist as well; for example, 'f' marks a floating-point constant as float.
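
For reference, a few common suffixes (a minimal sketch; the variable names are just placeholders):

unsigned int  a = 1u;    // unsigned int
long          b = 1L;    // long
unsigned long c = 1UL;   // unsigned long
long long     d = 1LL;   // long long
float         e = 1.0f;  // float instead of double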

Upvotes: 5

user529758

Reputation:

It means "unsigned int". Basically, it functions like a cast to make sure that numeric constants are converted to the appropriate type at compile time.

Upvotes: 2

It is a way to define unsigned literal integer constants.

Upvotes: 20
