Sventimir

Reputation: 2056

C++ Unicode characters printing

I need to print some Unicode characters on the Linux terminal using iostream. Strange things happen though. When I write:

cout << "\u2780";

I get: ➀, which is almost exactly what I want. However, if I write:

cout << '\u2780';

I get: 14851712.

The problem is, I don't know the exact character to be printed at compile time. Therefore I'd like to do something like:

int x;
// Some calculations...
cout << (char)('\u2780' + x);

This prints the wrong character. Using wcout or wchar_t instead doesn't work either. How do I get the correct output?

From what I found on the Internet, it seems relevant that I'm using the GCC 4.7.2 compiler (the g++ executable) straight from the Debian 7 (Wheezy) repository.

Upvotes: 11

Views: 23244

Answers (4)

quanta

Reputation: 245

On Linux, I have had success printing any Unicode directly, in the most naive way:

std::cout << "ΐ, Α, Β, Γ, Δ, Θ, Λ, Ξ, ... ±, ... etc.";

Upvotes: 1

Potatoswatter

Reputation: 137920

The program prints an integer because of C++11 §2.14.3/1:

A multicharacter literal, or an ordinary character literal containing a single c-char not representable in the execution character set, is conditionally-supported, has type int, and has an implementation-defined value.

The execution character set is what char can represent, which on your platform is essentially ASCII.

You got 14851712, or in hexadecimal E29E80, which is the UTF-8 representation of U+2780 (DINGBAT CIRCLED SANS-SERIF DIGIT ONE). Putting UTF-8, a multibyte encoding, into an int is insane and stupid, but that's what you get from a "conditionally supported, implementation-defined" feature.

To get a UTF-32 value, use U'\u2780'. The first U specifies the char32_t type and UTF-32 encoding (i.e. up to 31 bits but no surrogate pairs). The second \u specifies a universal-character-name containing the code point. To get a value supposedly compatible with wcout, use L'\u2780', but that doesn't necessarily use a Unicode runtime value nor get you more than two bytes of storage.

As for reliably manipulating and printing the Unicode code point, as other answers have noted, the C++ standard hasn't quite gotten there yet. Joni's answer is the best way, yet it still assumes that the compiler and the user's environment are using the same locale, which often isn't true.

You can also specify UTF-8 strings in the source using u8"\u2780" and force the runtime environment to UTF-8 using something like std::locale::global( std::locale( "en_US.UTF-8" ) );. But that still has rough edges. Joni suggests using the C interface std::setlocale from <clocale> instead of the C++ interface std::locale::global from <locale>, which is a workaround to the C++ interface being broken in GCC on OS X and perhaps other platforms. The issues are platform-sensitive enough that your Linux distribution might well have put a patch into their own GCC package.

Upvotes: 1

bames53

Reputation: 88225

When you write

cout << "\u2780";

The compiler converts \u2780 into the appropriate encoding of that character in the execution character set. That's probably UTF-8, and so the string ends up having four bytes (three for the character, one for the null terminator).

If you want to generate the character at run time then you need some way to do the same conversion to UTF-8 at run time that the compiler is doing at compile time.


C++11 provides a handy wstring_convert template and codecvt facets that can do this, however libstdc++, the standard library implementation that comes with GCC, has not yet gotten around to implementing them (as of GCC 4.8.0 (2013-03-22)). The following shows how to use these features, but you'll need to either use a different standard library implementation or wait for libstdc++ to implement them.

#include <codecvt>
#include <iostream>
#include <locale>
#include <string>

int main() {
  char32_t base = U'\u2780';

  std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> convert;
  std::cout << convert.to_bytes(base + 5) << '\n';
}

You can also use any other method of producing UTF-8 you have available. For example, iconv, ICU, and manual use of pre-C++11 codecvt_byname facets would all work. (I don't show examples of these, because that code would be more involved than the simple code permitted by wstring_convert.)


An alternative that would work for a small number of characters would be to create an array of strings using literals.

char const *special_character[] = { "\u2780", "\u2781", "\u2782",
  "\u2783", "\u2784", "\u2785", "\u2786", "\u2787", "\u2788", "\u2789" };

std::cout << special_character[i] << '\n';

Upvotes: 4

Joni

Reputation: 111369

The Unicode character \u2780 is outside the range of the char datatype. You should have received a compiler warning telling you about it (at least my g++ 4.7.3 gives one):

test.cpp:6:13: warning: multi-character character constant [-Wmultichar]

If you want to work with characters like U+2780 as single units you'll have to use the wide-character type wchar_t, or, if you are lucky enough to be able to work with C++11, char32_t or char16_t. Note that one 16-bit unit is not enough to represent the full range of Unicode characters.

If that's not working for you, it's probably because the default "C" locale doesn't support non-ASCII output. To fix that, call setlocale at the start of the program; that way you can output the full range of characters supported by the user's locale (which may or may not cover all of the characters you use):

#include <clocale>
#include <iostream>

using namespace std;

int main() {
    setlocale(LC_ALL, "");
    wcout << L'\u2780';
    return 0;
}

Upvotes: 7
