Reputation: 25
#include <bitset>
#include <iostream>
int main() {
    std::bitset<8> a = 10101010;
    std::bitset<8> b = 11111111;
    std::cout << (a ^ b);
}
When I run the above code, the result is:
11010101
The expected output is:
01010101
Am I doing something wrong?
Upvotes: -2
Views: 107
Reputation: 28049
You initialize a and b with values which are decimal int literals (10101010, 11111111).
In order to use binary literals you need the 0b prefix:
#include <bitset>
#include <iostream>
int main() {
    std::bitset<8> a = 0b10101010;
    std::bitset<8> b = 0b11111111;
    std::cout << (a ^ b);
}
Output:
01010101
More info about such literals: Integer literal.
A side note:
As mentioned above, the current values you use are decimal integer literals. But there is also the 0 prefix (without b) for octal literals. So if you used e.g. 01010101 to initialize a or b, it would have been interpreted as an octal (base 8) value.
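For example, a small sketch of that pitfall (01010101 read as octal is 266305 in decimal, whose low 8 bits are 65):
#include <bitset>
#include <iostream>
int main() {
    // The leading 0 makes this an octal literal: 01010101 (octal) == 266305 (decimal)
    std::bitset<8> a = 01010101;
    std::cout << a;  // prints 01000001 (the low 8 bits, 65), not 01010101
}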
Upvotes: 8