Reputation:
Is there a rule that helps to find the UTF-8 codes of all the accented letters associated with an ASCII letter? For example, can I get all the UTF-8 codes of the accented letters é, è, ... from the UTF-8 code of the letter e?
import unicodedata

def accented_letters(letter):
    accented_chars = []
    for accent_type in "acute", "double acute", "grave", "double grave":
        try:
            accented_chars.append(
                unicodedata.lookup(
                    "Latin small letter {letter} with {accent_type}"
                    .format(**vars())
                )
            )
        except KeyError:
            pass
    return accented_chars

print(accented_letters("e"))

for kind in ["NFC", "NFKC", "NFD", "NFKD"]:
    print(
        '---',
        kind,
        list(unicodedata.normalize(kind, "é")),
        sep="\n"
    )

for oneChar in "βεέ.¡¿?ê":
    print(
        '---',
        oneChar,
        unicodedata.name(oneChar),
        unicodedata.normalize('NFD', oneChar).encode('ascii', 'ignore'),
        sep="\n"
    )
The corresponding output:
['é', 'è', 'ȅ']
---
NFC
['é']
---
NFKC
['é']
---
NFD
['e', '́']
---
NFKD
['e', '́']
---
β
GREEK SMALL LETTER BETA
b''
---
ε
GREEK SMALL LETTER EPSILON
b''
---
έ
GREEK SMALL LETTER EPSILON WITH TONOS
b''
---
.
FULL STOP
b'.'
---
¡
INVERTED EXCLAMATION MARK
b''
---
¿
INVERTED QUESTION MARK
b''
---
?
QUESTION MARK
b'?'
---
ê
LATIN SMALL LETTER E WITH CIRCUMFLEX
b'e'
https://www.rfc-editor.org/rfc/rfc3629
Upvotes: 0
Views: 281
Reputation: 4079
Using unicodedata.lookup:
import unicodedata

def accented_letters(letter):
    accented_chars = []
    for accent_type in "acute", "double acute", "grave", "double grave":
        try:
            accented_chars.append(
                unicodedata.lookup(
                    "Latin small letter {letter} with {accent_type}".format(**vars())
                )
            )
        except KeyError:
            pass
    return accented_chars

print(accented_letters("e"))
To do the reverse, use unicodedata.normalize with the NFD form and take the first character; the second character is the combining accent.
print(unicodedata.normalize("NFD","è")[0]) # Prints "e".
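As a minimal sketch of the same idea applied to a whole string (the helper name strip_accents is just for illustration), the NFD decomposition can be combined with the ASCII encode/ignore trick the question already experiments with:

import unicodedata

def strip_accents(text):
    # Decompose each accented letter into base letter + combining mark,
    # then drop every code point that cannot be encoded as ASCII.
    decomposed = unicodedata.normalize("NFD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(strip_accents("élève"))  # eleve

Note that, as the question's own output shows, this silently discards letters that have no ASCII base, such as β.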
Upvotes: 0
Reputation: 12782
Accented letters are often meant to be distinct characters in many languages. However, if you really need this, you will want a function that normalizes strings. In this case you will want to normalize to a decomposed form, where each accented letter becomes two Unicode code points in the string: the base letter and a combining accent.
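A minimal sketch of that idea with unicodedata (which the question already uses); the helper name variants is only for illustration, and the brute-force scan over the whole Unicode range is deliberately simple rather than efficient:

import sys
import unicodedata

# NFD turns the single code point "é" into two: "e" followed by U+0301 (combining acute).
print([hex(ord(c)) for c in unicodedata.normalize("NFD", "é")])  # ['0x65', '0x301']

def variants(base):
    # Every character whose canonical decomposition starts with `base`
    # and carries at least one combining mark.
    found = []
    for cp in range(sys.maxunicode + 1):
        if 0xD800 <= cp <= 0xDFFF:  # skip surrogate code points
            continue
        decomposed = unicodedata.normalize("NFD", chr(cp))
        if len(decomposed) > 1 and decomposed[0] == base:
            found.append(chr(cp))
    return found

print(variants("e"))  # ['è', 'é', 'ê', 'ë', 'ē', 'ĕ', 'ė', 'ę', 'ě', ...]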
Upvotes: 1