Reputation: 143
I've got a task and I have no clue what I'm supposed to do. Here is the task:
Write the following function: char * encodingToShortString (char * dig_str);
The function is supposed to create and return a new string short_dig_str. Each byte in short_dig_str will consist of the two bit quartets (nibbles) corresponding to two consecutive characters of dig_str. For a dig_str of length n (LENGTH, not size), the length of short_dig_str is n / 2 for even n and n / 2 + 1 for odd n. For odd n, the first quartet of short_dig_str does not correspond to any digit of dig_str and all its bits are zero.
Example: For dig_str = "1234", the string short_dig_str will consist of the following integer: 00010010 00110100
For dig_str = "51234", the string short_dig_str will consist of the following integer: 00000101 00010010 00110100
(From left to right, the most significant, MSB, to the least significant LSB).
The required memory space must be assigned to the short_dig_str string accurately. It can be assumed that there is enough memory for allocation.
I've started the function like this:
char* codingToShortString(char* dig_str)//let's imagine that dig_str[] = "12";
{
char *short_dig_str;
char temp;//0000 0000
int n = strlen(dig_str);
unsigned mask = 1<<3;// 0000 1000 (bit 3 set)
unsigned c; //bit counter
if (n%2 == 0)
{
short_dig_str = malloc(((n/2)+1)*sizeof(char));
}
else
{
short_dig_str = malloc(((n/2)+2)*sizeof(char));
}
for (int i = 0; i < n; i++)
{
for (c=1; c<=4; c++)
{
temp = dig_str[i] & mask;
temp <<= 1;
}
}
}
But afterwards I have no clue what to do. How do I put the binary value into short_dig_str? I'm very confused.
Upvotes: 0
Views: 1371
Reputation: 12404
First look at the desired output:
Example: For dig_str = "1234", the string short_dig_str will consist of the following integer: 00010010 00110100
For dig_str = "51234", the string short_dig_str will consist of the following integer: 00000101 00010010 00110100
By "integer", an (unsigned) char is meant. If you write the result as hex values, you will get
"1234" => 0x12, 0x34
"51234" => 0x05, 0x12, 0x34
You are following an approach that is too complicated. You do not need any bitmasks for this.
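The core trick can be sketched in isolation (the helper name pack_pair is mine, just for illustration): subtracting '0' turns a digit character into its numeric value, and a shift plus OR packs two such values into one byte.

```c
/* Illustrative helper (not part of the assignment): packs two
   ASCII digit characters into one byte, first digit in the
   upper nibble, second in the lower nibble. */
unsigned char pack_pair(char hi, char lo)
{
    return (unsigned char)(((hi - '0') << 4) | (lo - '0'));
}
```

For example, pack_pair('1', '2') yields 0x12, which is exactly the first byte of the "1234" result above.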
char* codingToShortString(char* dig_str)
{
int n = strlen(dig_str);
// Add 1 before dividing so that odd lengths "round up";
// no extra byte for \0, since the result is not a C string
char *short_dig_str = malloc((n+1)/2);
unsigned char digits;
int out_pos = 0; // Write index within output: {0x01,0x23,0x45}
int in_pos = 0; // Read index within input: "12345"
// First handle an odd number of digits;
// for even numbers no special treatment is needed.
if (n%2 != 0)
{
digits = dig_str[in_pos++] - '0';
short_dig_str[out_pos++] = digits;
}
// Then handle remaining digits (as pairs!).
for ( ; in_pos < n; )
{
digits = (dig_str[in_pos++] -'0') << 4; // one digit in the upper half ...
digits |= dig_str[in_pos++] - '0'; // ... one digit in lower half
// Store into result array...
short_dig_str[out_pos++] = digits;
}
return short_dig_str;
}
As the returned pointer is not used as a string but as raw bytes that each store two decimal digits, it should be unsigned char or uint8_t etc. rather than char, but your signature is defined as it is.
The name codingToShortString is misleading, as no string (and no 0-termination) is created.
Bad names, bad types... Not a really good assignment, I would say...
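For completeness, a variant with the suggested types might look like this (the name encodeDigitsToBytes and the const qualifier are my choices for illustration, not the assignment's):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical variant: returns raw bytes, not a string, so a
   uint8_t pointer is more honest than char*. Same algorithm:
   a lone leading digit first for odd lengths, then pairs. */
uint8_t *encodeDigitsToBytes(const char *dig_str)
{
    size_t n = strlen(dig_str);
    uint8_t *out = malloc((n + 1) / 2);  /* rounds up for odd n */
    size_t in_pos = 0, out_pos = 0;

    if (n % 2 != 0)                      /* lone leading digit */
        out[out_pos++] = (uint8_t)(dig_str[in_pos++] - '0');
    while (in_pos < n)                   /* remaining digits in pairs */
    {
        uint8_t b = (uint8_t)((dig_str[in_pos++] - '0') << 4);
        b |= (uint8_t)(dig_str[in_pos++] - '0');
        out[out_pos++] = b;
    }
    return out;
}
```

Calling encodeDigitsToBytes("51234") produces the three bytes 0x05, 0x12, 0x34 from the example above; the caller is responsible for free()-ing the result.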
Upvotes: 1