Reputation: 1218
So I realize that assuming ASCII encoding can get you in trouble, but I'm never really sure how much trouble you can have subtracting characters. I'd like to know what relatively common scenarios can cause any of the following to evaluate to false.
Given:
std::string test = "B";
char m = 'M';
A) (m-'A')==12
B) (test[0]-'D') == -2
Also, does the answer change for lowercase values (changing the 77 to 109, of course)?
Edit: Digit subtraction answers this question for char digits: the standard says '2' - '0' == 2 must hold for all digits 0-9. But I want to know whether the same holds for a-z and A-Z, which section 2.3 of the standard is unclear on in my reading.
Edit 2: Removed ASCII-specific content to focus the question more clearly (sorry @πάντα-ῥεῖ for a content-changing edit, but I feel it is necessary). Essentially the standard seems to imply some ordering of characters for the basic set, but some encodings do not maintain that ordering, so what's the overriding principle?
Upvotes: 2
Views: 1839
Reputation: 1
In other words, when are chars in C/C++ not stored in ASCII?
Neither C nor C++ has any notion of the actual character encoding used by the target system. The only convention is that character literals like 'A'
match the execution encoding.
You could just as well deal with EBCDIC-encoded characters, and the code would look the same as for ASCII characters.
Upvotes: 1