Reputation: 57
Suppose I have two 64-bit unsigned integers. I have a window of size k, and at some point it starts near the end of the first integer and ends at the start of the second integer. (Of course, I know where it starts.)
For example, the first integer is ...0110011 and the second is 110..., the window starts at the first 0, and the size of the window is 10. The output should be 0110011 110.
My question is: how do I write a decent program to solve this? I tried using a mask, but then I realized I have the bits as two separate chunks (0110011 and 110...) and I don't know how to concatenate them together.
Upvotes: 0
Views: 83
Reputation: 4637
This is sort of over-complicated, but it allows more flexibility at the cost of being much longer.
The function signature is:
template <typename... Ts_>
std::string see_bit_range(std::size_t p_start, std::size_t p_length, const Ts_&... p_types)
It creates and returns a string of 1s and 0s instead of displaying it. p_start is how many bits from the left it should start at (0 would be the most significant bit of the first variable). p_length is how many bits to look at. After that, you can pass in as many variables of as many types as you desire. I didn't make it check whether p_start or p_length are too large, or whether you pass it zero arguments, so don't do that.
Take two variables like so: unsigned char var1 = 0b00110011, var2 = 0b11000000;. Calling the function as see_bit_range(1, 10, var1, var2), the result is the string "0110011 110".
#include <cstdint>  //uint64_t
#include <iostream> //std::cout

int main()
{
    const uint64_t var1 = 0b0110011; //...0110011
    const uint64_t var2 = 3ull << 62; //110...
    const int start = 57; //How many bits from the left to start at (64 - 7)
    const int length = 10; //How many bits to read
    std::cout << see_bit_range(start, length, var1, var2); //0110011 110
}
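And the unsigned char example described above, as a minimal runnable sketch (it assumes the see_bit_range definition given below is already in scope):

#include <iostream>

int main()
{
    unsigned char var1 = 0b00110011, var2 = 0b11000000;
    std::cout << see_bit_range(1, 10, var1, var2); //prints "0110011 110"
}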
The function's code is below. get_var_sum is a helper that calculates the combined size of all the objects, and is otherwise unimportant. It can be left out entirely by using new-allocated memory instead of the local arrays I used.
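(As an aside, on C++17 and later the same sum can be computed without the recursive helper by using a fold expression; this one-liner is my sketch, not part of the original code:)

template <typename... Ts_>
constexpr std::size_t size_sum_v = (sizeof(Ts_) + ...); //C++17 fold: adds up sizeof for every type in the pack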
#include <string> //string
#include <cstddef> //size_t
#include <climits> //CHAR_BIT
template <typename T_, typename... Ts_>
struct get_var_sum
{
    static constexpr std::size_t value = sizeof(T_) + get_var_sum<Ts_...>::value;
};

template <typename T_>
struct get_var_sum<T_>
{
    static constexpr std::size_t value = sizeof(T_);
};
template <typename... Ts_>
std::string see_bit_range(std::size_t p_start, std::size_t p_length, const Ts_&... p_types)
{
    std::string out; //The string that represents the bits
    constexpr std::size_t SIZE_SUM = get_var_sum<Ts_...>::value; //The size of all the types combined
    unsigned char buffer[SIZE_SUM]; //A buffer to store the bytes of every argument, first argument first
    constexpr std::size_t ELEMENTS = sizeof...(Ts_); //The number of arguments
    const unsigned char* ptrs[ELEMENTS]{ reinterpret_cast<const unsigned char*>(&p_types)... }; //An array of pointers to the objects
    std::size_t sizes[ELEMENTS]{ sizeof(Ts_)... }; //An array that holds the sizes of the objects
    out.reserve(CHAR_BIT * SIZE_SUM); //Be nice and tell the string what to expect
    unsigned long long test = 1;
    const bool little_endian = *reinterpret_cast<unsigned char*>(&test) == 1; //Whether we're on a little-endian machine
    std::size_t x, y, z = 0;
    for (x = 0; x < ELEMENTS; ++x) //For each value
    {
        for (y = 0; y < sizes[x]; ++y) //For each byte in the value
            buffer[z++] = ptrs[x][little_endian ? sizes[x] - y - 1 : y]; //Copy the bytes so the buffer is always in big-endian order
    }
    y = 0;
    z = 0;
    for (x = p_start; x < p_start + p_length; ++x) //For the bit range specified
    {
        if ((x - y) / CHAR_BIT == sizes[z]) //If we just reached the end of a variable
        {
            y = x;
            ++z;
            out.push_back(' '); //Put a space in the string
        }
        out.push_back(((buffer[x / CHAR_BIT] & (1u << (CHAR_BIT - (x % CHAR_BIT) - 1))) > 0) ? '1' : '0'); //Show whether the bit is on or not
    }
    return out;
}
Upvotes: 0
Reputation: 4220
Try this:
#include <stdio.h>
int main() {
    unsigned long long i1 = 0x0123456789ABCDEFULL;
    unsigned long long i2 = 0x11223344AABBCCDDULL;

    // Window start 7, size 12 bits,
    // so it should include bits 7..0 from i1 and bits 63..60 from i2
    int windowStart = 7; // bit 7 of i1
    int windowSize = 12;

    // Number of most significant bits needed from i2
    int numBitsFromSecondNum = windowSize - windowStart - 1;

    // AND mask of i1 to obtain the least significant bits from it
    // (1ULL, not 1, so the shift is done on a 64-bit operand)
    unsigned long long chunk1 = i1 & ((1ULL << (windowStart + 1)) - 1);

    // Right shift i2 to obtain the most significant bits from it
    unsigned long long chunk2 = i2 >> (64 - numBitsFromSecondNum);

    // Concatenation of the 2 chunks
    unsigned long long result = (chunk1 << numBitsFromSecondNum) | chunk2;

    printf("%llX\n", chunk1); // prints: EF
    printf("%llX\n", chunk2); // prints: 1
    printf("%llX\n", result); // prints: EF1
    return 0;
}
The window start and size are not the same as in your question, but I chose them to be aligned to 4 bits so the output is easy to check when printed in hex.
In the code above I am assuming that unsigned long long (the same as unsigned long long int) is 64 bits wide. The C standard only guarantees it to be at least 64 bits, which implies it could be bigger (e.g. 128 bits), so the hard-coded 64 is better replaced with (8 * sizeof(unsigned long long)).
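Along those lines, the same logic can be wrapped into a reusable function. This is only my sketch (the name bit_window is mine), and it assumes the window really does span both integers (windowSize > windowStart + 1) and that windowStart < 63, so that no shift count reaches 64:

#include <stdio.h>

// Extract windowSize bits: the low windowStart+1 bits of i1 followed by
// the most significant bits of i2 that complete the window.
unsigned long long bit_window(unsigned long long i1, unsigned long long i2,
                              int windowStart, int windowSize) {
    int bitsFromSecond = windowSize - windowStart - 1;
    unsigned long long chunk1 = i1 & ((1ULL << (windowStart + 1)) - 1);
    unsigned long long chunk2 = i2 >> (8 * sizeof(unsigned long long) - bitsFromSecond);
    return (chunk1 << bitsFromSecond) | chunk2;
}

int main() {
    // Same inputs as above; prints: EF1
    printf("%llX\n", bit_window(0x0123456789ABCDEFULL, 0x11223344AABBCCDDULL, 7, 12));
    return 0;
}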
Upvotes: 1