Thanos Kyprianos

Reputation: 1686

Difference between INT_MAX and __INT_MAX__ in C

What is the difference between the 2? __INT_MAX__ is defined without adding a library as far as I know and INT_MAX is defined in limits.h but when I include the library INT_MAX gets expanded to __INT_MAX__ either way (or so does VSCode say). Why would I ever use the limits.h one when it gets expanded to the other one?

Upvotes: 53

Views: 9667

Answers (7)

Andreas Wenzel

Reputation: 25096

You should always use INT_MAX, as that is the macro constant that is defined by the ISO C standard.

The macro constant __INT_MAX__ is not specified by ISO C, so it should not be used, if you want your code to be portable. That macro is simply an implementation detail of the compiler that you are using. Other compilers will probably not define that macro, and will implement INT_MAX in some other way.

Upvotes: 63

Abhiranjan tiwari

Reputation: 5

INT_MAX is a macro that expands to the maximum value an int variable can hold; an int cannot store any value above this limit.

INT_MIN expands to the minimum value an int variable can hold; an int cannot store any value below this limit.

The values of INT_MAX and INT_MIN are implementation-defined and may vary from compiler to compiler. The following are typical values on an implementation where int is a 32-bit two's complement type.

Value of INT_MAX is +2147483647. Value of INT_MIN is -2147483648.

Upvotes: -2

Satish Chandra Gupta

Reputation: 3361

In the C programming language, INT_MAX is a macro that expands to the maximum value that can be stored in a variable of type int. This value is implementation-defined, meaning that it may vary depending on the specific C implementation being used. On most systems, int is a 32-bit data type and INT_MAX is defined as 2147483647, which is the maximum value that can be stored in a 32-bit, two's complement integer.

On the other hand, __INT_MAX__ is a predefined macro provided by some compilers (such as GCC and Clang) that represents the same maximum value of type int. Like INT_MAX, its value is implementation-defined. The difference is where the macro comes from: __INT_MAX__ is built into the compiler itself, whereas INT_MAX is defined in a header file (limits.h) and becomes available when that header is included during preprocessing.

In general, it is recommended to use INT_MAX rather than __INT_MAX__ in C programs, as INT_MAX is guaranteed by the standard and works with any conforming implementation, whereas __INT_MAX__ is specific to the compiler being used.

Upvotes: 1

John Dallman

Reputation: 694

To add to the other answers: when you're writing code that will run on several platforms, it really pays to stick to the standards. If you don't, then when a new platform comes along you have a lot of work to do adapting the code, and the best way to do that is usually to change it to conform to the standard. This work is very dull and uninteresting, and well worth avoiding by doing things right to start with.

I work on a mathematical modeller that was originally written in the 1980s on VAX/VMS, and in its early days supported several 68000 platforms, including Apollo/Domain. Nowadays, it runs on 64-bit Windows, Linux, macOS, Android and iOS, none of which existed when it was created.

Upvotes: 12

lee joe

Reputation: 191

__INT_MAX__ is a predefined macro in the C preprocessor that specifies the maximum value of an int type on a particular platform. This value is implementation-defined and may vary across different platforms.

INT_MAX is a macro defined in the limits.h header file that specifies the maximum value of an int type. With GCC, for example, it is defined as:

#define INT_MAX __INT_MAX__

The limits.h header file is part of the C standard library and provides various constants that specify the limits of various types, such as the minimum and maximum values of the int, long, and long long types.

On such implementations, INT_MAX is defined in terms of __INT_MAX__ because the compiler already knows the maximum int value for the target platform; INT_MAX is then simply a standard-conforming alias for that value.

You can use either __INT_MAX__ or INT_MAX to get the maximum value of an int type, but it is generally recommended to use INT_MAX since it is defined in a standard library header file and is therefore more portable.

Upvotes: 1

dbush

Reputation: 224352

__INT_MAX__ is an implementation defined macro, which means not all systems may have it. In particular, GCC defines this macro but MSVC does not.

On the other hand, INT_MAX is defined by the C standard and is guaranteed to be present in limits.h for any conforming compiler.

So for portability, use INT_MAX.

Upvotes: 44

Andrew Henle

Reputation: 1

Why would I ever use the limits.h one when it gets expanded to the other one?

limits.h is standard and portable.

Every implementation of the C language is free to define the value of macros such as INT_MAX however it sees fit. The __INT_MAX__ expansion you are seeing is an artifact of your particular compiler, and perhaps even of the particular version of that compiler.

Upvotes: 16
