John

Reputation: 6648

Convert array of unsigned char (byte) into unsigned short in C++

I have an array^ byteArray and I need to extract bytes in little-endian sequence to make unsigned shorts and ints. I've tried every combination of the following that I can think of, so I'm asking for help.

int x = UInt32(byteArray[i]) + (UInt32)(0x00ff) * UInt32(byteArray[i + 1]);
int x = UInt32(byteArray[i]) + UInt32(0x00ff) * UInt32(byteArray[i + 1]);
int x = byteArray[i] + 0x00ff * byteArray[i + 1];

The problem is with the least significant byte (at i + 1): I know it is 0x50, but the generated short/int reports the lower byte as 0x0b. The higher byte is unaffected.

I figure this is a sign error, but I can't seem to fix it.

Upvotes: 0

Views: 3126

Answers (5)

Hans Passant

Reputation: 941218

You are using managed code. Endianness is an implementation detail that the framework is aware of:

array<Byte>^ arr = gcnew array<Byte> { 1, 2, 3, 4 };
// Reads the two bytes starting at offset 1 (0x02, 0x03) in the machine's byte order.
int value = BitConverter::ToInt16(arr, 1);
System::Diagnostics::Debug::Assert(value == 0x302);   // holds on a little-endian host

Whether the framework's assumptions are correct depends on where the data came from.
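
If the byte stream is defined as little-endian no matter where it is read, a guard on BitConverter::IsLittleEndian keeps the conversion portable. A minimal sketch, with the 0x50/0x0b bytes taken from the question:

array<Byte>^ data = gcnew array<Byte> { 0x50, 0x0b };
unsigned short x;
if (BitConverter::IsLittleEndian)
    x = BitConverter::ToUInt16(data, 0);            // host byte order matches the stream
else
    x = (unsigned short)(data[0] | data[1] << 8);   // assemble by hand
// x == 0x0b50 on either kind of host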

Upvotes: 3

K-ballo

Reputation: 81349

The proper way to generate a 16-bit int from two 8-bit ints is value = static_cast<int16_t>(hibyte) << 8 | lobyte;
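
A standalone sketch of that expression, using the byte values reported in the question (an unsigned uint16_t would work just as well, and sidesteps any sign worry when the high byte has its top bit set):

#include <cstdint>
#include <cstdio>

int main() {
    std::uint8_t lobyte = 0x50, hibyte = 0x0b;    // bytes from the question
    std::int16_t value = static_cast<std::int16_t>(hibyte) << 8 | lobyte;
    std::printf("%x\n", static_cast<unsigned>(value));   // prints b50
    return 0;
}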

Upvotes: 2

Roland Illig

Reputation: 41617

You have to multiply the second byte by 0x0100 instead of 0x00ff.

It's like in the decimal system, where you multiply by ten, not by nine.
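
Worked through with the question's bytes (taking 0x50 and 0x0b as the low and high bytes), the wrong multiplier visibly corrupts the low byte:

0x50 + 0x00ff * 0x0b = 0x50 + 0x0af5 = 0x0b45   (low byte corrupted)
0x50 + 0x0100 * 0x0b = 0x50 + 0x0b00 = 0x0b50   (the intended value)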

Upvotes: 0

Mark Robinson

Reputation: 3145

You want to do this instead:

int x = UInt32(byteArray[i]) | (UInt32(byteArray[i + 1]) << 8);

Your multipliers are messing things up.

Upvotes: 0

John

Reputation: 6648

int y = byteArray[i] | byteArray[i + 1] << 8;

is what you need to use. (see also Convert a vector<unsigned char> to vector<unsigned short>)
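
As a sketch, the same expression extended over the whole managed array (this assumes byteArray holds an even number of bytes; the shorts name is illustrative):

array<unsigned short>^ shorts = gcnew array<unsigned short>(byteArray->Length / 2);
for (int i = 0; i < shorts->Length; i++)
    shorts[i] = (unsigned short)(byteArray[2 * i] | byteArray[2 * i + 1] << 8);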

Upvotes: 1
