saurabh agarwal

Reputation: 2184

efficient way to convert 16 bit value to 8 bit value

I have a variable that holds a 16-bit value. I need only the 8 LSBs; the remaining 8 bits should be discarded.

I am using this code to do it:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t val = 0xABCD;
    uint8_t vr;

    vr = val; // this assignment discards the upper 8 bits

    printf("0x%X 0x%X\n", val, vr);
    return 0;
}

Result:

0xABCD 0xCD

I want to know: is this a good way to take the 8 LSBs from a 16-bit variable?

EDIT:
Please also cover the performance implications (from a memory and speed perspective) of this particular implementation.

Upvotes: 10

Views: 38668

Answers (4)

jfboyer

Reputation: 11

While the above answers are correct, here is another way to do it. This method does not require bitwise operations:

#include <stdint.h>
#include <stdio.h>

union converted16 {
        struct {
                uint8_t lsb;   /* this member order assumes a little-endian machine */
                uint8_t msb;
        } raw;
        uint16_t data;
} mynumber;

int main(void)
{
        mynumber.data = 0xFEAB;
        printf("msb = %u, lsb = %u\n", mynumber.raw.msb, mynumber.raw.lsb);
        return 0;
}

Upvotes: 0

user2371524

Reputation:

While both answers are correct, the bit masking here is completely redundant: it happens implicitly when converting to uint8_t. Without exact-width integer types (and, speaking of performance, you should consider that, because performance is generally best when using the native word size of the machine), this would be different:

unsigned int val = 0xabcd;
unsigned int vr = val & 0xff;
assert(vr == 0xcd);

But when you really need these exact-width types, the best code is IMHO

uint16_t val = 0xabcd;
uint8_t vr = (uint8_t) val;

The explicit cast is here to document the intention! Without it, a good compiler will warn you about the implicit conversion possibly losing information (with gcc, this particular warning needs -Wconversion; you should always enable compiler warnings such as -Wall -Wextra -pedantic anyway, to catch cases where you do such a conversion by accident).
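For illustration, here is a minimal sketch of the difference (the exact warning text varies between compilers; gcc and clang report the implicit narrowing when -Wconversion is enabled):

#include <stdint.h>

int main(void)
{
    uint16_t val = 0xABCD;

    uint8_t vr_implicit = val;           /* narrowing happens implicitly; flagged with -Wconversion */
    uint8_t vr_explicit = (uint8_t) val; /* same result, but the cast documents the intent */

    return (vr_implicit == vr_explicit) ? 0 : 1;
}

Compiling with e.g. gcc -Wall -Wextra -Wconversion flags the first assignment but not the second.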

The performance of all variants using the exact-width types should be the same, because a decent compiler will emit the same code for all of them. The version using just unsigned int might perform a bit better.

[edit]: As you're asking about memory performance, too: it is unlikely that you gain anything by using uint8_t, because some architectures require values smaller than the native word size to be aligned to word boundaries. Even if they don't require it, it might still be faster to keep values aligned, so the compiler may decide to do so; that just introduces unused padding bytes. With gcc, you can use the option -Os to optimize for size, and as the x86 architecture is byte-addressable, this may result in uint8_t being used without padding on a PC, but consequently at lower speed. Most of the time, speed vs memory is a tradeoff: you can have one or the other.
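To make the padding point concrete, here is a small sketch (struct layout and sizes are implementation-defined; the numbers in the comments assume a typical ABI with 4-byte alignment for uint32_t):

#include <stdint.h>
#include <stdio.h>

struct mixed {
    uint8_t  flag;   /* 1 byte of payload ...                                */
    uint32_t value;  /* ... but the compiler usually inserts 3 padding bytes
                        before this member to keep it aligned                */
};

int main(void)
{
    /* On a typical ABI this prints 8, not 5: the uint8_t did not save memory. */
    printf("sizeof(struct mixed) = %zu\n", sizeof(struct mixed));
    return 0;
}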

Upvotes: 8

Waqas Shabbir

Reputation: 753

You can use the bit-masking concept, like this:

uint16_t val = 0xABCD;
uint8_t vr  = (uint8_t) (val & 0x00FF);

Alternatively, this can be done by a simple explicit cast: an 8-bit integer only keeps the 8 LSBs of a 16-bit value and discards the remaining 8 MSBs (this happens by default when a larger value is assigned).
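For example, the cast-only version (well defined for unsigned types, where the conversion reduces the value modulo 256) would simply be:

uint16_t val = 0xABCD;
uint8_t vr = (uint8_t) val;   /* vr == 0xCD, no mask needed */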

Upvotes: 2

Amol Saindane

Reputation: 1598

You can do it as below:

uint16_t val = 0xABCD;
uint8_t vr = val & 0x00FF;    // Bitwise AND Operation

Using this bitwise operation, you will get the 8 LSBs.
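For reference, a minimal complete program to verify this (using the value from the question; the expected output is shown in the comment):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t val = 0xABCD;
    uint8_t vr = val & 0x00FF;   /* bitwise AND keeps only the 8 LSBs */

    printf("0x%X\n", vr);        /* prints 0xCD */
    return 0;
}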

Upvotes: 6
