Johnny Lim

Reputation: 5843

What is Java's internal representation for String? Modified UTF-8? UTF-16?

I searched for Java's internal String representation, but I found two sources that look reliable yet are inconsistent with each other.

One is:

http://www.codeguru.com/cpp/misc/misc/multi-lingualsupport/article.php/c10451

and it says:

Java uses UTF-16 for the internal text representation and supports a non-standard modification of UTF-8 for string serialization.

The other is:

http://en.wikipedia.org/wiki/UTF-8#Modified_UTF-8

and it says:

Tcl also uses the same modified UTF-8[25] as Java for internal representation of Unicode data, but uses strict CESU-8 for external data.

Modified UTF-8? Or UTF-16? Which one is correct? And how many bytes does Java use for a char in memory?

Please let me know which one is correct and how many bytes it uses.

Upvotes: 59

Views: 45496

Answers (7)

Paul Verest

Reputation: 63912

As of 2023, see JEP 254: Compact Strings https://openjdk.org/jeps/254

Before JDK 9 it was UTF-16 in a char[] value: 2 bytes per char (one UTF-16 code unit); characters outside the Basic Multilingual Plane take two chars (4 bytes) as a surrogate pair, while common CJK characters such as 日本 are in the BMP and take 2 bytes each.

Since JDK 9 it is a byte[] with a coder flag, not UTF-8: strings containing only Latin-1 characters (ASCII, Áá Àà) use 1 byte per character, while a string containing any character outside Latin-1 (e.g. Ăă Ắắ Ằằ Ẵẵ, or CJK such as 日本) is stored entirely as UTF-16 at 2 bytes per char; a small sketch after the code excerpts below mirrors this Latin-1-or-UTF-16 decision.
It is still possible to disable the Compact Strings feature with -XX:-CompactStrings;
see the documentation for The java Command https://docs.oracle.com/en/java/javase/17/docs/specs/man/java.html#advanced-runtime-options-for-java

and the article https://howtodoinjava.com/java9/compact-strings/


String class before Java 9: string data was stored as an array of chars. This required 16 bits for each char.

public final class String
    implements java.io.Serializable, Comparable<String>, CharSequence {

    /** The value is used for character storage. */
    private final char value[];

}

String class after Java 9: strings are now internally represented using a byte array, along with a coder flag field identifying the encoding.

public final class String
    implements java.io.Serializable, Comparable<String>, CharSequence {

    /** The value is used for character storage. */
    @Stable
    private final byte[] value;

    /**
     * The identifier of the encoding used to encode the bytes in
     * {@code value}. The supported values in this implementation are
     *
     * LATIN1
     * UTF16
     *
     * @implNote This field is trusted by the VM, and is a subject to
     * constant folding if String instance is constant. Overwriting this
     * field after construction will cause problems.
     */
    private final byte coder;

}
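
Not from the original answer, but a minimal sketch of the per-string decision JEP 254 describes: if every char fits in Latin-1 (<= U+00FF), the 1-byte coder can be used; otherwise the whole string falls back to UTF-16. The class and method names below are invented for illustration, and the real coder field is private, so user code cannot read it directly.

public class CompactStringCheck {

    // True means the string is a candidate for the 1-byte Latin-1 coder
    // under Compact Strings; false means it needs the 2-byte UTF-16 coder.
    static boolean fitsLatin1(String s) {
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) > 0xFF) {
                return false;   // any such char forces UTF-16 for the whole string
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(fitsLatin1("hello Áá"));   // true  -> 1 byte per character
        System.out.println(fitsLatin1("Ắắ 日本"));     // false -> 2 bytes per char
    }
}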

Upvotes: 1

Stephen C

Reputation: 718886

You can confirm the following by looking at the source code of the relevant version of the java.lang.String class in OpenJDK. (For some really old versions of Java, String was partly implemented in native code. That source code is not publicly available.)

Prior to Java 9, the standard in-memory representation for a Java String was UTF-16 code-units held in a char[].

With Java 6 update 21 and later, there was a non-standard option (-XX:UseCompressedStrings) to enable compressed strings. This feature was removed in Java 7.

For Java 9 and later, the implementation of String has been changed to use a compact representation by default. The java command documentation now says this:

-XX:-CompactStrings

Disables the Compact Strings feature. By default, this option is enabled. When this option is enabled, Java Strings containing only single-byte characters are internally represented and stored as single-byte-per-character Strings using ISO-8859-1 / Latin-1 encoding. This reduces, by 50%, the amount of space required for Strings containing only single-byte characters. For Java Strings containing at least one multibyte character: these are represented and stored as 2 bytes per character using UTF-16 encoding. Disabling the Compact Strings feature forces the use of UTF-16 encoding as the internal representation for all Java Strings.


Note that neither classical, "compressed", nor "compact" strings ever used UTF-8 encoding as the String representation. Modified UTF-8 is used in other contexts; e.g. in class files, and in the object serialization format.

To answer your specific questions:

Modified UTF-8? Or UTF-16? Which one is correct?

Either UTF-16 or an adaptive representation that depends on the actual data; see above.

And how many bytes does Java use for a char in memory?

A single char uses 2 bytes. There might be some "wastage" due to possible padding, depending on the context.

A char[] is 2 bytes per character plus the object header (typically 12 bytes including the array length) padded to (typically) a multiple of 8 bytes.
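
To make that arithmetic concrete, here is a rough sketch using the figures quoted above (12-byte header including the array length, 2 bytes per char, padding to a multiple of 8). Actual sizes vary by JVM, platform and settings, and the class and method names here are invented for illustration.

public class CharArrayFootprint {

    // Rough estimate only: header + 2 bytes per element, padded to 8 bytes.
    static long estimatedBytes(int length) {
        long raw = 12L + 2L * length;
        return (raw + 7) / 8 * 8;
    }

    public static void main(String[] args) {
        System.out.println(estimatedBytes(10));   // 32
        System.out.println(estimatedBytes(11));   // 40
    }
}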

Please let me know which one is correct and how many bytes it uses.

If we are talking about a String now, it is not possible to give a general answer. It will depend on the Java version and hardware platform, as well as the String length and (in some cases) what the characters are. Indeed, for some versions of Java it even depends on how you created the String.


Having said all of the above, the API model for String is that it is both a sequence of UTF-16 code-units and a sequence of Unicode code-points. As a Java programmer, you should be able to ignore everything that happens "under the hood". The internal String representation is (should be!) irrelevant.

Upvotes: 41

Peter Lawrey

Reputation: 533530

Java uses UTF-16 for the internal text representation

The representation for String, StringBuilder, etc. in Java is UTF-16.

https://docs.oracle.com/javase/8/docs/technotes/guides/intl/overview.html

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.

At the JVM level, if you are using -XX:+UseCompressedStrings (which is the default for some updates of Java 6), the actual in-memory representation can be 8-bit ISO-8859-1, but only for strings which do not need UTF-16 encoding.

http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html

and supports a non-standard modification of UTF-8 for string serialization.

Serialized Strings use modified UTF-8 by default.
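
To illustrate (not part of the original answer): DataOutputStream.writeUTF produces this modified UTF-8 form, and object serialization uses the same encoding for string data.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ModifiedUtf8Demo {

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            // Modified UTF-8: a 2-byte length prefix followed by the encoded bytes
            // (3 bytes per character for 日 and 本).
            out.writeUTF("日本");
        }
        System.out.println(bytes.size());   // 8
    }
}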

And how many bytes does Java use for a char in memory?

A char is always two bytes, if you ignore the need for padding in an Object.

Note: a code point (which can be above 65535) can use one or two chars, i.e. 2 or 4 bytes.
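
A short example of that distinction (not from the original answer): a supplementary character such as U+1F600 occupies two chars (4 bytes) in a String but counts as a single code point.

public class CodePointDemo {

    public static void main(String[] args) {
        // "a" followed by U+1F600 (GRINNING FACE), stored as a surrogate pair.
        String s = "a\uD83D\uDE00";

        System.out.println(s.length());                        // 3 chars  = 6 bytes of UTF-16 code units
        System.out.println(s.codePointCount(0, s.length()));   // 2 code points
    }
}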

Upvotes: 64

mohan.reddy8

Reputation: 23

Java is based on the Unicode character set, which covers the characters of many international languages. Java uses UTF-16, where a char is a 16-bit code unit (65,536 possible values), so the size of a char in Java is 2 bytes.

Upvotes: -7

AlexR

Reputation: 115338

Java stores strings internally as UTF-16 and uses 2 bytes for each character.

Upvotes: -6

Andreas Johansson

Reputation: 1145

UTF-16.

From http://java.sun.com/javase/technologies/core/basic/intl/faq.jsp :

How is text represented in the Java platform?

The Java programming language is based on the Unicode character set, and several libraries implement the Unicode standard. The primitive data type char in the Java programming language is an unsigned 16-bit integer that can represent a Unicode code point in the range U+0000 to U+FFFF, or the code units of UTF-16. The various types and classes in the Java platform that represent character sequences - char[], implementations of java.lang.CharSequence (such as the String class), and implementations of java.text.CharacterIterator - are UTF-16 sequences.

Upvotes: 11

belgther

Reputation: 2534

The size of a char is 2 bytes.

Therefore, I would say that Java uses UTF-16 for internal String representation.

Upvotes: 2
