Frank Myat Thu

Reputation: 4474

Hexadecimal digits vs. decimal digits in C#

Some C# code I downloaded includes hexadecimal-based calculations, for example:

int length = ((((byte) hexstr[0x10]) & 0x80) > 0) ? 0x10 : 8;

When I change this code to the equivalent decimal-based version:

int length = ((((byte) hexstr[16]) & 128) > 0) ? 16 : 8;

it gives the same result without any errors and still runs correctly. So what I would like to know is: why does so much code use hexadecimal literals, which are more difficult to understand than ordinary decimal digits?

If anyone knows the reason, please let me know.
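For what it's worth, the two versions really are identical to the compiler: a hexadecimal literal and a decimal literal with the same value produce exactly the same constant. A minimal sketch (the sample byte value 0x95 is hypothetical, chosen so the top bit is set):

```csharp
using System;
using System.Diagnostics;

class LiteralDemo
{
    static void Main()
    {
        // Hex and decimal literals denote the same int values, so the two
        // versions of the code compile to exactly the same thing.
        Debug.Assert(0x10 == 16);
        Debug.Assert(0x80 == 128);

        // The original test: is the top bit (bit 7) of the byte set?
        byte b = 0x95;                            // hypothetical sample value
        int length = ((b & 0x80) > 0) ? 0x10 : 8;
        Console.WriteLine(length);                // 16, because 0x95 has bit 7 set
    }
}
```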

Upvotes: 4

Views: 281

Answers (1)

Jon Skeet

Reputation: 1500525

It shows the bit pattern more clearly. 0x80 is clearly the value with the top nybble set to 8 and the bottom nybble set to 0... that's not at all clear from the decimal value.

As another example, if I wanted to mask the second and third bytes of an integer, I might use:

int masked = original & 0xffff00;

I wrote that code without a calculator or anything similar. There's no way I'd have done the same for the decimal equivalent - I can't multiply 65535 by 256 in my head with any likelihood of success, and the resulting code wouldn't have been nearly as clear anyway.
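To illustrate the masking example above, here is a runnable sketch (the sample value 0x12345678 is hypothetical; 0xffff00 keeps bits 8–23, i.e. the second and third bytes counting from the least significant end):

```csharp
using System;
using System.Diagnostics;

class MaskDemo
{
    static void Main()
    {
        int original = 0x12345678;   // hypothetical sample value

        // 0xffff00 zeroes everything except bits 8..23 -- the second and
        // third bytes. Each pair of hex digits maps to one byte, which is
        // why the mask is easy to read and write by hand.
        int masked = original & 0xffff00;

        Console.WriteLine($"0x{masked:x}");   // 0x345600

        // The decimal equivalent of the mask is far less obvious:
        Debug.Assert(0xffff00 == 16776960);   // 65535 * 256
    }
}
```

The point stands out in the hex form: two hex digits per byte means `ff` marks a byte you keep and `00` a byte you drop, with no arithmetic required.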

Upvotes: 4

Related Questions