mmurphy

Reputation: 1347

Differences in string class implementations

Why are string classes implemented in several different ways, and what are the advantages and disadvantages of each? I have seen it done several different ways:

  1. Only using a simple char (the most basic way).
  2. Supporting UTF8 and UTF16 through a templated string, such as string<UTF8>, where UTF8 is a char and UTF16 is an unsigned short.
  3. Having both UTF8 and UTF16 representations in the string class.

Are there any other ways to implement a string class that may be better?

Upvotes: 3

Views: 243

Answers (1)

SigTerm

Reputation: 26429

As far as I know, std::basic_string<wchar_t> where sizeof(wchar_t) == 2 is not a UTF16 encoding. There are more than 2^16 characters in Unicode, and code points go up to 0x10FFFF, which is greater than 0xFFFF (the capacity of a 2-byte wchar_t). As a result, proper UTF16 has to use a variable number of code units per character (one 2-byte wchar_t or two of them), which is not the case with std::basic_string and similar classes that assume one string element == one character.
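A minimal C++11 sketch of the problem (U+1F600 is just an arbitrary code point outside the Basic Multilingual Plane):

    #include <iostream>
    #include <string>

    int main() {
        // U+1F600 lies outside the BMP, so in UTF16 it is stored as a
        // surrogate pair: two 16-bit code units for a single character.
        std::u16string s = u"\U0001F600";
        std::cout << s.size() << "\n"; // prints 2, not 1
    }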

As far as I know, there are two ways to deal with Unicode strings:

  1. Either use a type big enough to fit any character into a single string element (for example, on Linux it is quite normal to see sizeof(wchar_t) == 4), so you can enjoy the "benefits" (basically, easy string length calculation and nothing else) of std::string-like classes; see the sketch after this list.
  2. Or use a variable-length encoding (UTF8 with 1..4 bytes per character, or UTF16 with 2..4 bytes per character) and a well-tested string class that provides string-manipulation routines.
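A minimal C++11 sketch of the first option, using std::u32string (whose char32_t element is 32 bits wide on common platforms) instead of relying on a 4-byte wchar_t:

    #include <iostream>
    #include <string>

    int main() {
        // With a 32-bit element type, one element == one code point,
        // so size() really does count characters.
        std::u32string s = U"\U0001F600 test";
        std::cout << s.size() << "\n"; // prints 6
    }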

As long as you don't use plain char, it doesn't matter much which method you use. char-based strings are likely to cause trouble on machines with a different 8-bit codepage if you weren't careful enough to account for that (it is safe to assume that you'll forget about it and won't be careful enough; Microsoft AppLocale was created for a reason).

Unicode contains plenty of non-printable characters (control and formatting characters), so knowing the element count doesn't tell you much, and that pretty much defeats any benefit method #1 could provide. Regardless, if you decide to use method #1, you should remember that on some compilers/platforms (Windows / the Microsoft compiler) wchar_t is not big enough to fit all possible characters, and that std::basic_string<wchar_t> is therefore not a perfect solution.


Rendering internationalized text is PAIN, so the best idea is to just grab whatever Unicode-compatible string class is available (like QString), which hopefully comes with a text layout engine (one that can properly handle control characters and bidirectional text), and concentrate on more interesting programming problems instead.


-Update-

If unsigned short is not UTF16, then what is, unsigned int? What is UTF8 then? Is that unsigned char?

UTF16 is a variable-length character encoding. UTF16 uses 1..2 two-byte (i.e. uint16_t, 16-bit) elements per character, so the number of elements in a UTF16 string != the number of characters in the string. You can't calculate string length by counting elements.
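A rough sketch of counting code points in a UTF16 sequence (the helper name is made up, and it assumes the input is valid UTF16): every character is either a single non-surrogate unit or a high surrogate followed by a low surrogate (0xDC00..0xDFFF), so counting everything except low surrogates counts characters.

    #include <cstddef>

    std::size_t utf16_length(const char16_t* s, std::size_t n) {
        std::size_t count = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (s[i] < 0xDC00 || s[i] > 0xDFFF) // skip low (trailing) surrogates
                ++count;
        }
        return count;
    }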

UTF8 is another variable-length encoding, based on 1-byte elements (8 bits, i.e. "unsigned char"). One Unicode character ("code point") in UTF8 takes 1..4 uint8_t elements. Once again, the number of elements in the string != the number of characters in the string. The advantage of UTF8 is that characters within the ASCII range take exactly 1 byte each, which saves a bit of space, while in UTF16 a character always takes at least 2 bytes.
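The same kind of sketch for UTF8 (again a made-up helper, assuming valid input): every code point starts with a byte that is not of the form 10xxxxxx, so counting the non-continuation bytes counts characters.

    #include <cstddef>
    #include <string>

    std::size_t utf8_length(const std::string& s) {
        std::size_t count = 0;
        for (unsigned char c : s) {
            if ((c & 0xC0) != 0x80) // skip continuation bytes (10xxxxxx)
                ++count;
        }
        return count;
    }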

UTF32 is a fixed-length character encoding that always uses 32 bits (4 bytes, i.e. uint32_t) per character. Currently any Unicode character fits into a single UTF32 element, and UTF32 will probably remain fixed-length for a long time (I don't think that all the languages of Earth combined would produce 2^31 different characters). It wastes more memory, but the number of elements in the string == the number of characters in the string.

Also, keep in mind that the C++ standard doesn't specify exactly how big "int" or "short" are.
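If your compiler supports C++11, a minimal sketch of sidestepping that problem is to build strings on character types with guaranteed minimum widths instead of "unsigned short" or "int":

    #include <string>

    // char16_t is at least 16 bits and char32_t at least 32 bits,
    // so they are suitable element types for UTF16 and UTF32 code units.
    using utf8_string  = std::string;                  // 8-bit code units
    using utf16_string = std::basic_string<char16_t>;
    using utf32_string = std::basic_string<char32_t>;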

Upvotes: 2
