Gerald

Reputation: 677

Implicit type conversion and different behavior between x64 and arm64

I'm facing a weird issue when testing my software on arm64.

I've written a bit of code to reproduce the issue:

#include <iostream>

int main()
{
   char c;
   int sum = 0;
   for (int i = 0; i <= 255; i++)
   {
      c = i;                       // store the loop counter in a plain char
      int a = c * 10;              // char is promoted to int before the multiplication
      sum += a;
      std::cout << a << std::endl;
   }
   std::cout << "sum=" << sum << std::endl;
}

When I run it on Windows (built with Visual Studio 2017) or on Ubuntu x64 (gcc 9.3.0-17), I get the following results:

0
10
...
1260
1270
-1280
-1270
-20
-10
sum=-1280

If I run the same code on Ubuntu arm64 (gcc 9.3.0-17), I get different results:

0
10
...
1260
1270
1280
1290
...
2540
2550
sum=326400

I don't know whether gcc applies some extra optimization on arm64 (I'm using -O3) or whether there is some issue I don't see. Any idea how I can solve this?

Upvotes: 3

Views: 908

Answers (3)

Whether plain char is signed is implementation-defined rather than fixed by the standard. On x86/x64 it is signed by convention, but the ARM ABI made it unsigned by default decades ago.

It's that simple.
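
One way to surface that difference at build time (just a sketch, assuming C++11 or later) is to assert the signedness your code relies on:

#include <type_traits>

// Fails to compile on platforms where plain char is unsigned (e.g. the ARM ABI),
// so the implementation-defined difference shows up at build time instead of at run time.
static_assert(std::is_signed<char>::value,
              "this code assumes plain char is signed");

int main() {}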

Upvotes: -1

Dzemo997

Reputation: 336

char is typically represented by 8 bits, and on your x64 builds it is signed: it can store -128 to 127, so values above 127 wrap around to negative numbers. It may be better to use a fixed-width type such as uint8_t (or int8_t) instead of char, as in the sketch below.
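
As a minimal sketch of that suggestion, the loop from the question could use std::uint8_t (from <cstdint>) so the range is 0-255 on every platform; this version should print sum=326400 everywhere:

#include <cstdint>
#include <iostream>

int main()
{
   std::uint8_t c;                      // fixed-width unsigned type: 0-255 on every platform
   int sum = 0;
   for (int i = 0; i <= 255; i++)
   {
      c = static_cast<std::uint8_t>(i);
      int a = c * 10;                   // c promotes to int, so a is always in 0..2550
      sum += a;
      std::cout << a << std::endl;
   }
   std::cout << "sum=" << sum << std::endl;
}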

Upvotes: -3

Adrian Mole

Reputation: 51854

The char data type can be either signed or unsigned. When it is signed (as appears to be the case when targeting x64), the c = i statement overflows on the 129th iteration of the for loop (the maximum value for a signed char is 127), and the value being assigned 'wraps round' to a negative value.

However, when targeting arm64, your compiler appears to use an unsigned char type (with a range of 0 through 255), so there is no overflow in that statement and the arithmetic proceeds 'as expected'.


To confirm (or otherwise) the above diagnosis, just check the value of the CHAR_MAX constant (defined in the <climits> header file) in your different build environments.
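
For example, a small check along those lines (just a sketch) prints the range of plain char in each build environment:

#include <climits>
#include <iostream>

int main()
{
   // Signed-char platforms typically report CHAR_MIN=-128, CHAR_MAX=127;
   // unsigned-char platforms report CHAR_MIN=0, CHAR_MAX=255.
   std::cout << "CHAR_MIN=" << CHAR_MIN << ", CHAR_MAX=" << CHAR_MAX << std::endl;
}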

Upvotes: 5
