Reputation: 5421
It just seems like the design here is "not of one mind": 16-bit integer data and 16-bit character data are now distinguishable, but 8-bit integer data and character data are not.
In C++ the only choice for an 8-bit value has always been some flavor of 'char'. Recognizing wchar_t as an official type, distinct from unsigned short, enables improvements, but only for wide-string users. This feels uncoordinated: the language behaves differently for 8-bit and 16-bit values.
I think there is clear value in having more distinct types; having a distinct 8-bit char AND an 8-bit "byte" would be much nicer, e.g. for operator overloading. For example:
#include <iostream>
using namespace std;

typedef unsigned char BYTE;  // e.g. the Windows BYTE typedef -- an 8-bit value has to be some char type

// This kind of sucks...
BYTE m = 59;     // really 'unsigned char', because there is no other option
cout << m;       // outputs the character ";" because it assumes 8 bits is char data
                 // -- a consequence of the limited ability to overload

// But for wide strings, the behavior is different and better...
unsigned short s = 59;
wcout << s;      // prints the number "59" like we expect

wchar_t w = L'C';
wcout << w;      // prints out "C" like we expect
The language would be more consistent if a new 8-bit integer type were introduced; that would enable more intelligent overloads, and overloads that behave the same regardless of whether you are using narrow or wide strings.
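To make this concrete, here is a minimal sketch of the kind of overload a distinct 8-bit integer type would make possible, assuming C++11 scoped enums and using a hypothetical byte8 type purely for illustration:

#include <iostream>

// Hypothetical distinct 8-bit type: a scoped enum over unsigned char.
// Because it does not implicitly convert to a char type, it can carry its own overloads.
enum class byte8 : unsigned char {};

std::ostream& operator<<(std::ostream& os, byte8 b)
{
    return os << static_cast<unsigned int>(b); // print the numeric value, not a character
}

int main()
{
    byte8 m = static_cast<byte8>(59);
    std::cout << m << '\n';  // prints "59"

    char c = 59;
    std::cout << c << '\n';  // still prints ";"
}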
Upvotes: 1
Views: 236
Reputation: 385244
Yes, probably, but using single-byte integers that aren't characters is pretty rare, and you can trivially get around your stated problem via integral promotion (try applying a unary + and see what happens).
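For example, here is a minimal sketch of that unary + trick (the variable name is just for illustration):

#include <iostream>

int main()
{
    unsigned char m = 59;
    std::cout << m << '\n';   // prints ";"  -- the char-type overload treats it as a character
    std::cout << +m << '\n';  // prints "59" -- unary + promotes it to int, so the int overload is used
}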
It's also worth noting that your premise is flawed: wchar_t and unsigned short have always been distinct types, per paragraph 3.9.1/5 in C++98, C++03, C++11 and C++14.
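As a quick check, here is a sketch using <type_traits> (available since C++11):

#include <type_traits>

// wchar_t is a distinct fundamental type, not an alias for an integer type.
static_assert(!std::is_same<wchar_t, unsigned short>::value,
              "wchar_t and unsigned short are distinct types");

int main() {}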
Upvotes: 2