Reputation: 28193
I am confused about the text encoding and charset. For many reasons, I have to learn non-Unicode, non-UTF8 stuff in my upcoming work.
I find the word "charset" in email headers, as in "ISO-2022-JP", but there's no such encoding in text editors. (I looked at several text editors.)
What's the difference between text encoding and charset? I'd appreciate it if you could show me some use case examples.
Upvotes: 198
Views: 61911
Reputation: 990
TL;DR
Encoding
: also called character encoding or character-encoding scheme; a component of a charset.
Charset
: the combination of one or more coded character sets (Unicode, US-ASCII, ISO 8859-1, ...) and a character-encoding scheme (UTF-8, UTF-16, ISO 2022, EUC, ...).
A coded character set is a mapping between a set of abstract characters and a set of integers. US-ASCII, ISO 8859-1, JIS X 0201, and Unicode are examples of coded character sets.
Let's take Unicode as an example:
Code Point <-> Character Description
U+0041 | A Latin Capital letter A (https://symbl.cc/en/0041/)
U+0042 | B Latin Capital letter B (https://symbl.cc/en/0042/)
... |
U+005A | Z Latin Capital letter Z (https://symbl.cc/en/005A/)
... |
U+5301 | 匁 CJK ideograph, Japanese unit of weight (1/1000 of a kan) (https://symbl.cc/en/5301/)
... |
U+1F525 | 🔥 Fire Emoji (https://symbl.cc/en/1F525-fire-emoji/)
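In Python 3, where strings are sequences of Unicode characters, the code points in the table above can be checked directly with the built-in `ord`:

```python
# Print the Unicode code point (an integer, shown in hex) of each character.
for ch in ["A", "B", "Z", "匁", "🔥"]:
    print(f"U+{ord(ch):04X}  {ch}")
# prints U+0041 A ... U+5301 匁, U+1F525 🔥
```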
A character-encoding scheme is a mapping between one or more coded character sets and a set of octet (eight-bit byte) sequences. UTF-8, UTF-16, ISO 2022, and EUC are examples of character-encoding schemes.
Let's take UTF-8 as an example:
Character Code Point <-> Bytes(Hex)
A U+0041 | 41 (https://symbl.cc/en/0041/)
B U+0042 | 42 (https://symbl.cc/en/0042/)
... |
Z U+005A | 5A (https://symbl.cc/en/005A/)
... |
匁 U+5301 | E5 8C 81 (https://symbl.cc/en/5301/)
... |
🔥 U+1F525 | F0 9F 94 A5 (https://symbl.cc/en/1F525-fire-emoji/)
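The code-point-to-bytes mapping in this table can be reproduced in Python (a small sketch; `bytes.hex(sep)` needs Python 3.8+):

```python
# Encode a few characters from the table to UTF-8 and show the bytes in hex.
for ch in ["A", "Z", "匁", "🔥"]:
    print(ch, ch.encode("utf-8").hex(" ").upper())
# prints e.g. "匁 E5 8C 81" and "🔥 F0 9F 94 A5"
```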
Refer to the Java Charset class documentation.
Upvotes: -1
Reputation: 21680
Basically:
People sometimes use charset to refer both to the character repertoire and the encoding scheme. The Unicode Standard charset has multiple encodings, e.g., UTF-8, UTF-16, UTF-32, UCS-4, UTF-EBCDIC, Punycode, and GB18030.
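A quick sketch of "one charset, several encodings" in Python, using a sample string of my own choosing (Python ships codecs for UTF-8, UTF-16, UTF-32, and GB18030):

```python
# The same Unicode text, encoded under four different encoding schemes:
# same characters, different byte sequences.
text = "héllo"
for enc in ["utf-8", "utf-16-le", "utf-32-le", "gb18030"]:
    print(enc, text.encode(enc).hex(" "))
```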
Upvotes: 192
Reputation: 117
In my opinion, a charset is part of an encoding (a component): an encoding has a charset attribute, so a charset can be used by many encodings. For example, Unicode is a charset used in encodings like UTF-8, UTF-16, and so on.
The "char" in charset doesn't mean the char type in the programming world; it means a character in the real world. In English the two may coincide, but in other languages they don't: in Chinese, '我' is an inseparable "char" in charsets (Unicode, GB [used in GBK and GB2312]), just as 'a' is a char in charsets (ASCII, ISO 8859-1, Unicode).
Upvotes: 7
Reputation: 14668
Adding more detail for people visiting henceforth; hopefully it is helpful.
Each language has its characters, and the collection of those characters forms the "character set" of that language. When a character is encoded, it is assigned a unique identifier, a number called a code point. In a computer, these code points are represented by one or more bytes.
Examples of character sets: ASCII (covers all English characters), ISO/IEC 646, Unicode (covers characters from all living languages in the world).
A coded character set is a set in which a unique number is assigned to each character. That unique number is called a "code point".
Coded character sets are sometimes called code pages.
Encoding is the mechanism that maps code points to bytes, so that a character can be read and written uniformly across different systems that use the same encoding scheme.
Examples of encoding: ASCII, Unicode encoding schemes like UTF-8, UTF-16, UTF-32.
The same character can be represented by different bytes in different encodings. For example, 'ü' (U+00FC) is the single byte FC in ISO 8859-1, while in UTF-8 it is represented as C3 BC, and in UTF-16 (with a byte-order mark) as FE FF 00 FC.
Likewise, the Devanagari letter क (U+0915) takes two bytes with UTF-16 (09 15), three bytes with UTF-8 (E0 A4 95), or four bytes with UTF-32 (00 00 09 15).
Upvotes: 41
Reputation: 8841
An encoding is a mapping between bytes and characters from a character set, so it will be helpful to discuss and understand the difference between bytes and characters.
Think of bytes as numbers between 0 and 255, whereas characters are abstract things like "a", "1", "$" and "Ü". The set of all characters that are available is called a character set.
Each character has a sequence of one or more bytes that are used to represent it; however, the exact number and value of the bytes depends on the encoding used and there are many different encodings.
Most encodings are based on an old character set and encoding called ASCII which is a single byte per character (actually, only 7 bits) and contains 128 characters including a lot of the common characters used in US English.
For example, here are 6 characters in the ASCII character set that are represented by the values 60 to 65.
Extract of ASCII Table 60-65
╔══════╦═══════════╗
║ Byte ║ Character ║
╠══════╬═══════════╣
║  60  ║     <     ║
║  61  ║     =     ║
║  62  ║     >     ║
║  63  ║     ?     ║
║  64  ║     @     ║
║  65  ║     A     ║
╚══════╩═══════════╝
In the full ASCII set, the lowest value used is zero and the highest is 127 (both of these are hidden control characters).
However, once you need more characters than basic ASCII provides (for example, letters with accents, currency symbols, graphic symbols, etc.), ASCII is not suitable and you need something more extensive: a larger character set, and a different encoding, since 128 values are not enough to fit all the characters in. Some encodings stay at one byte per character (allowing up to 256 characters); others use multiple bytes per character.
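That limitation is easy to demonstrate in Python (a small sketch):

```python
# Plain English text fits in ASCII...
print("cafe".encode("ascii"))
# ...but an accented character does not: 'é' has no ASCII byte.
try:
    "café".encode("ascii")
except UnicodeEncodeError as e:
    print("not representable in ASCII:", e.reason)
# A larger single-byte encoding such as ISO 8859-1 fits 'é' in one byte,
# while UTF-8 needs two bytes for the same character.
print("café".encode("iso-8859-1").hex(" "))  # 63 61 66 e9
print("café".encode("utf-8").hex(" "))       # 63 61 66 c3 a9
```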
Over time a lot of encodings have been created. In the Windows world, there is CP1252, or ISO-8859-1, whereas Linux users tend to favour UTF-8. Java uses UTF-16 natively.
One sequence of byte values for a character in one encoding might stand for a completely different character in another encoding, or might even be invalid.
For example, in ISO 8859-1, â is represented by one byte of value 226, whereas in UTF-8 it is two bytes: 195, 162. However, in ISO 8859-1, 195, 162 would be two characters, Ã, ¢.
When computers store data about characters internally or transmit it to another system, they store or send bytes. Imagine that a system opening a file or receiving a message sees the bytes 195, 162. How does it know what characters these are?
In order for the system to interpret those bytes as actual characters (and so display them or convert them to another encoding), it needs to know the encoding used. That is why encoding appears in XML headers or can be specified in a text editor. It tells the system the mapping between bytes and characters.
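A sketch in Python of why the declared encoding matters: the same two bytes decode to different text depending on which encoding the reader assumes.

```python
data = bytes([195, 162])          # the bytes 195, 162 from the example
print(data.decode("utf-8"))       # â   (one character)
print(data.decode("iso-8859-1"))  # Ã¢  (two characters)
```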
Upvotes: 3
Reputation: 32898
In my opinion, the word "charset" should be limited to identifying the parameter used in HTTP, MIME, and similar standards to specify a character encoding (a mapping from a series of text characters to a sequence of bytes) by name. For example: charset=utf-8.
I'm aware, though, that MySQL, Java, and other places may use the word "charset" to mean a character encoding.
Upvotes: 3
Reputation: 284977
Every encoding has a particular charset associated with it, but there can be more than one encoding for a given charset. A charset is simply what it sounds like, a set of characters. There are a large number of charsets, including many that are intended for particular scripts or languages.
However, we are well along in the transition to Unicode, whose character set is capable of representing almost all the world's scripts. There are nonetheless multiple encodings for Unicode; an encoding is a way of mapping a string of characters to a string of bytes. Examples of Unicode encodings include UTF-8, UTF-16 BE, and UTF-16 LE. Each of these has advantages for particular applications or machine architectures.
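The byte-order difference between those encodings can be seen directly in Python, using a short sample string of my own:

```python
# One Unicode string, three encodings: note how UTF-16 BE and LE
# contain the same 16-bit values with the byte order swapped.
s = "Aü"
print(s.encode("utf-8").hex(" "))      # 41 c3 bc
print(s.encode("utf-16-be").hex(" "))  # 00 41 00 fc
print(s.encode("utf-16-le").hex(" "))  # 41 00 fc 00
```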
Upvotes: 106
Reputation: 91209
A character encoding consists of:
1. The set of supported characters
2. A mapping between characters and integers ("code points")
3. How code points are encoded as a series of fixed-size "code units" (e.g., UTF-16 uses 16-bit units)
4. How code units are encoded into bytes (e.g., in big-endian or little-endian order)
Step #1 by itself is a "character repertoire" or abstract "character set", and #1 + #2 = a "coded character set".
But back before Unicode became popular and everyone (except East Asians) was using a single-byte encoding, steps #3 and #4 were trivial (code point = code unit = byte). Thus, older protocols didn't clearly distinguish between "character encoding" and "coded character set". Older protocols use charset
when they really mean encoding.
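The "trivial" case described above can be shown in Python: in a single-byte encoding like ISO 8859-1, the code point and the byte are the same number, whereas in UTF-16 the code point becomes a 16-bit code unit that is then serialized in some byte order.

```python
ch = "ü"                                # code point U+00FC
assert ord(ch) == 0xFC
# Single-byte encoding: code point == code unit == byte.
assert ch.encode("iso-8859-1")[0] == ord(ch)
# UTF-16: one 16-bit code unit, two bytes, order depends on endianness.
print(ch.encode("utf-16-be").hex(" "))  # 00 fc
print(ch.encode("utf-16-le").hex(" "))  # fc 00
```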
Upvotes: 36
Reputation:
Googled for it. http://en.wikipedia.org/wiki/Character_encoding
The difference seems to be subtle. The term charset doesn't really apply to Unicode. Unicode goes through a series of abstractions: abstract characters -> code points -> encoding of code points to bytes.
Charsets skip this and jump directly from characters to bytes: sequence of bytes <-> sequence of characters.
In short: encoding: code points -> bytes; charset: characters -> bytes.
Upvotes: 9
Reputation: 14386
A charset is just a set: it either contains, say, the Euro sign, or it doesn't. That's all.
An encoding is a bijective mapping from a character set to a set of integers. If it supports the Euro sign, it must assign a specific integer to that character and to no other.
Upvotes: 6
Reputation: 45364
A character set, or character repertoire, is simply a set (an unordered collection) of characters. A coded character set assigns an integer (a "code point") to each character in the repertoire. An encoding is a way of representing code points unambiguously as a stream of bytes.
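The three concepts line up with three steps in Python (a minimal sketch, using the Euro sign as an example character):

```python
ch = "€"                  # a character from the repertoire
cp = ord(ch)              # its code point in the coded character set
raw = ch.encode("utf-8")  # the bytes the encoding produces for it
print(f"U+{cp:04X}", raw.hex(" "))  # U+20AC e2 82 ac
```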
Upvotes: 15