Reputation: 6689
I'm trying to convert some strings that are in French Canadian, and basically I'd like to be able to take out the French accent marks while keeping the letters (e.g. convert é to e, so crème brûlée would become creme brulee).
What is the best method for achieving this?
Upvotes: 563
Views: 331858
Reputation: 1337
After 15 years this question is still interesting. Actually, I found that the Lucene.NET approach is the most complete one. If you do not mind referencing the whole Lucene.NET library, it is just a matter of three lines of code.
My actual use case required Lucene because I needed to match exactly the result of the ASCIIFolding filter used by Elasticsearch, but the approach is general and Lucene.NET is actively maintained.
public static string AsciiFold(this string input)
{
    // FoldToASCII can expand a single input character into several output
    // characters, so the output buffer is sized for the worst case.
    var output = new char[input.Length * 4];
    var outputLength = Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter.FoldToASCII(input.ToCharArray(), 0, output, 0, input.Length);
    return new string(output, 0, outputLength);
}
The * 4 is needed to support the worst-case scenario described in the source code.
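A minimal usage sketch of the extension method above (assuming the Lucene.Net.Analysis.Common package is referenced, since that is where ASCIIFoldingFilter lives):
var folded = "crème brûlée".AsciiFold();
Console.WriteLine(folded); // creme brulee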
Upvotes: 0
Reputation: 4879
I needed something that converts all major Unicode characters, and the top-voted answer left a few out, so I've created a version of CodeIgniter's convert_accented_characters($str) in C# that is easily customisable:
using System;
using System.Text;
using System.Collections.Generic;
public static class Strings
{
static Dictionary<string, string> foreign_characters = new Dictionary<string, string>
{
{ "äæǽ", "ae" },
{ "öœ", "oe" },
{ "ü", "ue" },
{ "Ä", "Ae" },
{ "Ü", "Ue" },
{ "Ö", "Oe" },
{ "ÀÁÂÃÄÅǺĀĂĄǍΑΆẢẠẦẪẨẬẰẮẴẲẶА", "A" },
{ "àáâãåǻāăąǎªαάảạầấẫẩậằắẵẳặа", "a" },
{ "Б", "B" },
{ "б", "b" },
{ "ÇĆĈĊČ", "C" },
{ "çćĉċč", "c" },
{ "Д", "D" },
{ "д", "d" },
{ "ÐĎĐΔ", "Dj" },
{ "ðďđδ", "dj" },
{ "ÈÉÊËĒĔĖĘĚΕΈẼẺẸỀẾỄỂỆЕЭ", "E" },
{ "èéêëēĕėęěέεẽẻẹềếễểệеэ", "e" },
{ "Ф", "F" },
{ "ф", "f" },
{ "ĜĞĠĢΓГҐ", "G" },
{ "ĝğġģγгґ", "g" },
{ "ĤĦ", "H" },
{ "ĥħ", "h" },
{ "ÌÍÎÏĨĪĬǏĮİΗΉΊΙΪỈỊИЫ", "I" },
{ "ìíîïĩīĭǐįıηήίιϊỉịиыї", "i" },
{ "Ĵ", "J" },
{ "ĵ", "j" },
{ "ĶΚК", "K" },
{ "ķκк", "k" },
{ "ĹĻĽĿŁΛЛ", "L" },
{ "ĺļľŀłλл", "l" },
{ "М", "M" },
{ "м", "m" },
{ "ÑŃŅŇΝН", "N" },
{ "ñńņňʼnνн", "n" },
{ "ÒÓÔÕŌŎǑŐƠØǾΟΌΩΏỎỌỒỐỖỔỘỜỚỠỞỢО", "O" },
{ "òóôõōŏǒőơøǿºοόωώỏọồốỗổộờớỡởợо", "o" },
{ "П", "P" },
{ "п", "p" },
{ "ŔŖŘΡР", "R" },
{ "ŕŗřρр", "r" },
{ "ŚŜŞȘŠΣС", "S" },
{ "śŝşșšſσςс", "s" },
{ "ȚŢŤŦτТ", "T" },
{ "țţťŧт", "t" },
{ "ÙÚÛŨŪŬŮŰŲƯǓǕǗǙǛŨỦỤỪỨỮỬỰУ", "U" },
{ "ùúûũūŭůűųưǔǖǘǚǜυύϋủụừứữửựу", "u" },
{ "ÝŸŶΥΎΫỲỸỶỴЙ", "Y" },
{ "ýÿŷỳỹỷỵй", "y" },
{ "В", "V" },
{ "в", "v" },
{ "Ŵ", "W" },
{ "ŵ", "w" },
{ "ŹŻŽΖЗ", "Z" },
{ "źżžζз", "z" },
{ "ÆǼ", "AE" },
{ "ẞ", "Ss" },
{ "ß", "ss" },
{ "IJ", "IJ" },
{ "ij", "ij" },
{ "Œ", "OE" },
{ "ƒ", "f" },
{ "ξ", "ks" },
{ "π", "p" },
{ "β", "v" },
{ "μ", "m" },
{ "ψ", "ps" },
{ "Ё", "Yo" },
{ "ё", "yo" },
{ "Є", "Ye" },
{ "є", "ye" },
{ "Ї", "Yi" },
{ "Ж", "Zh" },
{ "ж", "zh" },
{ "Х", "Kh" },
{ "х", "kh" },
{ "Ц", "Ts" },
{ "ц", "ts" },
{ "Ч", "Ch" },
{ "ч", "ch" },
{ "Ш", "Sh" },
{ "ш", "sh" },
{ "Щ", "Shch" },
{ "щ", "shch" },
{ "ЪъЬь", "" },
{ "Ю", "Yu" },
{ "ю", "yu" },
{ "Я", "Ya" },
{ "я", "ya" },
};
public static char RemoveDiacritics(this char c){
foreach(KeyValuePair<string, string> entry in foreign_characters)
{
if(entry.Key.IndexOf (c) != -1)
{
return entry.Value[0];
}
}
return c;
}
public static string RemoveDiacritics(this string s)
{
//StringBuilder sb = new StringBuilder ();
string text = "";
foreach (char c in s)
{
int len = text.Length;
foreach(KeyValuePair<string, string> entry in foreign_characters)
{
if(entry.Key.IndexOf (c) != -1)
{
text += entry.Value;
break;
}
}
if (len == text.Length) {
text += c;
}
}
return text;
}
}
Usage
// for strings
"crème brûlée".RemoveDiacritics (); // creme brulee
// for chars
"Ã"[0].RemoveDiacritics (); // A
Upvotes: 54
Reputation: 8178
The accepted answer from Blair Conrad with 662 up-votes does not work correctly: the Polish ł is not translated, and neither is the Nordic ø.
And the answer from azrafe7 with 245 up-votes does not work for all characters either. I fed his code the first 600 Unicode characters and got 6 translated completely wrong and 80 characters missing entirely. Several characters are even returned as question marks or as the Unicode replacement character 0xFFFD! Apart from that, this code does not work in environments where ISO-8859-8 is not installed, nor on Linux.
Why do so many people up-vote answers that do not work? These answers can be used for simple texts like "crème brûlée", but that's it! Lots and lots of characters with diacritics stay unchanged or are replaced incorrectly. I suppose nobody ever invested the time to systematically test the code in these answers, but I did, and I found that there is no easy solution. I ended up writing workarounds for all the characters that the code in these answers translates incorrectly.
So I had to write my own code. By default, my code replaces each character with diacritics by a single plain character: for example, 'ü' becomes 'u'. If you like, you can also replace 'ü' with 'ue', as is useful for German, but that does not make sense for Turkish, so it depends on your needs.
public class CharConverter
{
static Char[] mc_Convert;
/// <summary>
/// This function is ultra fast because it uses a lookup table.
/// This function does not depend on Windows functionality. It also works on Linux.
/// This function removes all diacritics, accents, etc.
/// For example "Crème Brûlée mit Soße" is converted to "Creme Brulee mit Sosse".
/// </summary>
public static String RemoveDiacritics(String s_Text)
{
StringBuilder i_Out = new StringBuilder(s_Text.Length);
foreach (Char c_Char in s_Text)
{
/* This switch statement is optional!
switch (c_Char)
{
// If you like you can add your own conversions, like here for German.
// Otherwise remove this switch and 'ä' will be translated to 'a', etc.
case 'ä': i_Out.Append("ae"); continue;
case 'ö': i_Out.Append("oe"); continue;
case 'ü': i_Out.Append("ue"); continue;
case 'Ä': i_Out.Append("Ae"); continue;
case 'Ö': i_Out.Append("Oe"); continue;
case 'Ü': i_Out.Append("Ue"); continue;
case 'ß': i_Out.Append("ss"); continue;
} */
if (c_Char < mc_Convert.Length)
i_Out.Append(mc_Convert[c_Char]);
else
i_Out.Append(c_Char);
}
return i_Out.ToString();
}
// static constructor
// See https://www.compart.com/en/unicode/U+0180
static CharConverter()
{
mc_Convert = new Char[0x270];
// Fill char array with translation of each character to itself
for (int i = 0; i < 0x270; i++)
{
mc_Convert[i] = (Char)i;
}
// Store the replacements for 310 special characters
#region Fill mc_Convert
mc_Convert[0x0C0] = 'A'; // À
mc_Convert[0x0C1] = 'A'; // Á
mc_Convert[0x0C2] = 'A'; // Â
mc_Convert[0x0C3] = 'A'; // Ã
mc_Convert[0x0C4] = 'A'; // Ä
mc_Convert[0x0C5] = 'A'; // Å
mc_Convert[0x0C6] = 'A'; // Æ
mc_Convert[0x0C7] = 'C'; // Ç
mc_Convert[0x0C8] = 'E'; // È
mc_Convert[0x0C9] = 'E'; // É
mc_Convert[0x0CA] = 'E'; // Ê
mc_Convert[0x0CB] = 'E'; // Ë
mc_Convert[0x0CC] = 'I'; // Ì
mc_Convert[0x0CD] = 'I'; // Í
mc_Convert[0x0CE] = 'I'; // Î
mc_Convert[0x0CF] = 'I'; // Ï
mc_Convert[0x0D0] = 'D'; // Ð
mc_Convert[0x0D1] = 'N'; // Ñ
mc_Convert[0x0D2] = 'O'; // Ò
mc_Convert[0x0D3] = 'O'; // Ó
mc_Convert[0x0D4] = 'O'; // Ô
mc_Convert[0x0D5] = 'O'; // Õ
mc_Convert[0x0D6] = 'O'; // Ö
mc_Convert[0x0D8] = 'O'; // Ø
mc_Convert[0x0D9] = 'U'; // Ù
mc_Convert[0x0DA] = 'U'; // Ú
mc_Convert[0x0DB] = 'U'; // Û
mc_Convert[0x0DC] = 'U'; // Ü
mc_Convert[0x0DD] = 'Y'; // Ý
mc_Convert[0x0DF] = 's'; // ß
mc_Convert[0x0E0] = 'a'; // à
mc_Convert[0x0E1] = 'a'; // á
mc_Convert[0x0E2] = 'a'; // â
mc_Convert[0x0E3] = 'a'; // ã
mc_Convert[0x0E4] = 'a'; // ä
mc_Convert[0x0E5] = 'a'; // å
mc_Convert[0x0E6] = 'a'; // æ
mc_Convert[0x0E7] = 'c'; // ç
mc_Convert[0x0E8] = 'e'; // è
mc_Convert[0x0E9] = 'e'; // é
mc_Convert[0x0EA] = 'e'; // ê
mc_Convert[0x0EB] = 'e'; // ë
mc_Convert[0x0EC] = 'i'; // ì
mc_Convert[0x0ED] = 'i'; // í
mc_Convert[0x0EE] = 'i'; // î
mc_Convert[0x0EF] = 'i'; // ï
mc_Convert[0x0F1] = 'n'; // ñ
mc_Convert[0x0F2] = 'o'; // ò
mc_Convert[0x0F3] = 'o'; // ó
mc_Convert[0x0F4] = 'o'; // ô
mc_Convert[0x0F5] = 'o'; // õ
mc_Convert[0x0F6] = 'o'; // ö
mc_Convert[0x0F8] = 'o'; // ø
mc_Convert[0x0F9] = 'u'; // ù
mc_Convert[0x0FA] = 'u'; // ú
mc_Convert[0x0FB] = 'u'; // û
mc_Convert[0x0FC] = 'u'; // ü
mc_Convert[0x0FD] = 'y'; // ý
mc_Convert[0x0FF] = 'y'; // ÿ
mc_Convert[0x100] = 'A'; // Ā
mc_Convert[0x101] = 'a'; // ā
mc_Convert[0x102] = 'A'; // Ă
mc_Convert[0x103] = 'a'; // ă
mc_Convert[0x104] = 'A'; // Ą
mc_Convert[0x105] = 'a'; // ą
mc_Convert[0x106] = 'C'; // Ć
mc_Convert[0x107] = 'c'; // ć
mc_Convert[0x108] = 'C'; // Ĉ
mc_Convert[0x109] = 'c'; // ĉ
mc_Convert[0x10A] = 'C'; // Ċ
mc_Convert[0x10B] = 'c'; // ċ
mc_Convert[0x10C] = 'C'; // Č
mc_Convert[0x10D] = 'c'; // č
mc_Convert[0x10E] = 'D'; // Ď
mc_Convert[0x10F] = 'd'; // ď
mc_Convert[0x110] = 'D'; // Đ
mc_Convert[0x111] = 'd'; // đ
mc_Convert[0x112] = 'E'; // Ē
mc_Convert[0x113] = 'e'; // ē
mc_Convert[0x114] = 'E'; // Ĕ
mc_Convert[0x115] = 'e'; // ĕ
mc_Convert[0x116] = 'E'; // Ė
mc_Convert[0x117] = 'e'; // ė
mc_Convert[0x118] = 'E'; // Ę
mc_Convert[0x119] = 'e'; // ę
mc_Convert[0x11A] = 'E'; // Ě
mc_Convert[0x11B] = 'e'; // ě
mc_Convert[0x11C] = 'G'; // Ĝ
mc_Convert[0x11D] = 'g'; // ĝ
mc_Convert[0x11E] = 'G'; // Ğ
mc_Convert[0x11F] = 'g'; // ğ
mc_Convert[0x120] = 'G'; // Ġ
mc_Convert[0x121] = 'g'; // ġ
mc_Convert[0x122] = 'G'; // Ģ
mc_Convert[0x123] = 'g'; // ģ
mc_Convert[0x124] = 'H'; // Ĥ
mc_Convert[0x125] = 'h'; // ĥ
mc_Convert[0x126] = 'H'; // Ħ
mc_Convert[0x127] = 'h'; // ħ
mc_Convert[0x128] = 'I'; // Ĩ
mc_Convert[0x129] = 'i'; // ĩ
mc_Convert[0x12A] = 'I'; // Ī
mc_Convert[0x12B] = 'i'; // ī
mc_Convert[0x12C] = 'I'; // Ĭ
mc_Convert[0x12D] = 'i'; // ĭ
mc_Convert[0x12E] = 'I'; // Į
mc_Convert[0x12F] = 'i'; // į
mc_Convert[0x130] = 'I'; // İ
mc_Convert[0x131] = 'i'; // ı
mc_Convert[0x134] = 'J'; // Ĵ
mc_Convert[0x135] = 'j'; // ĵ
mc_Convert[0x136] = 'K'; // Ķ
mc_Convert[0x137] = 'k'; // ķ
mc_Convert[0x138] = 'k'; // ĸ
mc_Convert[0x139] = 'L'; // Ĺ
mc_Convert[0x13A] = 'l'; // ĺ
mc_Convert[0x13B] = 'L'; // Ļ
mc_Convert[0x13C] = 'l'; // ļ
mc_Convert[0x13D] = 'L'; // Ľ
mc_Convert[0x13E] = 'l'; // ľ
mc_Convert[0x13F] = 'L'; // Ŀ
mc_Convert[0x140] = 'l'; // ŀ
mc_Convert[0x141] = 'L'; // Ł
mc_Convert[0x142] = 'l'; // ł
mc_Convert[0x143] = 'N'; // Ń
mc_Convert[0x144] = 'n'; // ń
mc_Convert[0x145] = 'N'; // Ņ
mc_Convert[0x146] = 'n'; // ņ
mc_Convert[0x147] = 'N'; // Ň
mc_Convert[0x148] = 'n'; // ň
mc_Convert[0x149] = 'n'; // ʼn
mc_Convert[0x14C] = 'O'; // Ō
mc_Convert[0x14D] = 'o'; // ō
mc_Convert[0x14E] = 'O'; // Ŏ
mc_Convert[0x14F] = 'o'; // ŏ
mc_Convert[0x150] = 'O'; // Ő
mc_Convert[0x151] = 'o'; // ő
mc_Convert[0x152] = 'O'; // Œ
mc_Convert[0x153] = 'o'; // œ
mc_Convert[0x154] = 'R'; // Ŕ
mc_Convert[0x155] = 'r'; // ŕ
mc_Convert[0x156] = 'R'; // Ŗ
mc_Convert[0x157] = 'r'; // ŗ
mc_Convert[0x158] = 'R'; // Ř
mc_Convert[0x159] = 'r'; // ř
mc_Convert[0x15A] = 'S'; // Ś
mc_Convert[0x15B] = 's'; // ś
mc_Convert[0x15C] = 'S'; // Ŝ
mc_Convert[0x15D] = 's'; // ŝ
mc_Convert[0x15E] = 'S'; // Ş
mc_Convert[0x15F] = 's'; // ş
mc_Convert[0x160] = 'S'; // Š
mc_Convert[0x161] = 's'; // š
mc_Convert[0x162] = 'T'; // Ţ
mc_Convert[0x163] = 't'; // ţ
mc_Convert[0x164] = 'T'; // Ť
mc_Convert[0x165] = 't'; // ť
mc_Convert[0x166] = 'T'; // Ŧ
mc_Convert[0x167] = 't'; // ŧ
mc_Convert[0x168] = 'U'; // Ũ
mc_Convert[0x169] = 'u'; // ũ
mc_Convert[0x16A] = 'U'; // Ū
mc_Convert[0x16B] = 'u'; // ū
mc_Convert[0x16C] = 'U'; // Ŭ
mc_Convert[0x16D] = 'u'; // ŭ
mc_Convert[0x16E] = 'U'; // Ů
mc_Convert[0x16F] = 'u'; // ů
mc_Convert[0x170] = 'U'; // Ű
mc_Convert[0x171] = 'u'; // ű
mc_Convert[0x172] = 'U'; // Ų
mc_Convert[0x173] = 'u'; // ų
mc_Convert[0x174] = 'W'; // Ŵ
mc_Convert[0x175] = 'w'; // ŵ
mc_Convert[0x176] = 'Y'; // Ŷ
mc_Convert[0x177] = 'y'; // ŷ
mc_Convert[0x178] = 'Y'; // Ÿ
mc_Convert[0x179] = 'Z'; // Ź
mc_Convert[0x17A] = 'z'; // ź
mc_Convert[0x17B] = 'Z'; // Ż
mc_Convert[0x17C] = 'z'; // ż
mc_Convert[0x17D] = 'Z'; // Ž
mc_Convert[0x17E] = 'z'; // ž
mc_Convert[0x180] = 'b'; // ƀ
mc_Convert[0x189] = 'D'; // Ɖ
mc_Convert[0x191] = 'F'; // Ƒ
mc_Convert[0x192] = 'f'; // ƒ
mc_Convert[0x193] = 'G'; // Ɠ
mc_Convert[0x197] = 'I'; // Ɨ
mc_Convert[0x198] = 'K'; // Ƙ
mc_Convert[0x199] = 'k'; // ƙ
mc_Convert[0x19A] = 'l'; // ƚ
mc_Convert[0x19F] = 'O'; // Ɵ
mc_Convert[0x1A0] = 'O'; // Ơ
mc_Convert[0x1A1] = 'o'; // ơ
mc_Convert[0x1AB] = 't'; // ƫ
mc_Convert[0x1AC] = 'T'; // Ƭ
mc_Convert[0x1AD] = 't'; // ƭ
mc_Convert[0x1AE] = 'T'; // Ʈ
mc_Convert[0x1AF] = 'U'; // Ư
mc_Convert[0x1B0] = 'u'; // ư
mc_Convert[0x1B6] = 'z'; // ƶ
mc_Convert[0x1CD] = 'A'; // Ǎ
mc_Convert[0x1CE] = 'a'; // ǎ
mc_Convert[0x1CF] = 'I'; // Ǐ
mc_Convert[0x1D0] = 'i'; // ǐ
mc_Convert[0x1D1] = 'O'; // Ǒ
mc_Convert[0x1D2] = 'o'; // ǒ
mc_Convert[0x1D3] = 'U'; // Ǔ
mc_Convert[0x1D4] = 'u'; // ǔ
mc_Convert[0x1D5] = 'U'; // Ǖ
mc_Convert[0x1D6] = 'u'; // ǖ
mc_Convert[0x1D7] = 'U'; // Ǘ
mc_Convert[0x1D8] = 'u'; // ǘ
mc_Convert[0x1D9] = 'U'; // Ǚ
mc_Convert[0x1DA] = 'u'; // ǚ
mc_Convert[0x1DB] = 'U'; // Ǜ
mc_Convert[0x1DC] = 'u'; // ǜ
mc_Convert[0x1DE] = 'A'; // Ǟ
mc_Convert[0x1DF] = 'a'; // ǟ
mc_Convert[0x1E0] = 'A'; // Ǡ
mc_Convert[0x1E1] = 'a'; // ǡ
mc_Convert[0x1E2] = 'A'; // Ǣ
mc_Convert[0x1E3] = 'a'; // ǣ
mc_Convert[0x1E4] = 'G'; // Ǥ
mc_Convert[0x1E5] = 'g'; // ǥ
mc_Convert[0x1E6] = 'G'; // Ǧ
mc_Convert[0x1E7] = 'g'; // ǧ
mc_Convert[0x1E8] = 'K'; // Ǩ
mc_Convert[0x1E9] = 'k'; // ǩ
mc_Convert[0x1EA] = 'O'; // Ǫ
mc_Convert[0x1EB] = 'o'; // ǫ
mc_Convert[0x1EC] = 'O'; // Ǭ
mc_Convert[0x1ED] = 'o'; // ǭ
mc_Convert[0x1F0] = 'j'; // ǰ
mc_Convert[0x1F4] = 'G'; // Ǵ
mc_Convert[0x1F5] = 'g'; // ǵ
mc_Convert[0x1F8] = 'N'; // Ǹ
mc_Convert[0x1F9] = 'n'; // ǹ
mc_Convert[0x1FA] = 'A'; // Ǻ
mc_Convert[0x1FB] = 'a'; // ǻ
mc_Convert[0x1FC] = 'A'; // Ǽ
mc_Convert[0x1FD] = 'a'; // ǽ
mc_Convert[0x1FE] = 'O'; // Ǿ
mc_Convert[0x1FF] = 'o'; // ǿ
mc_Convert[0x200] = 'A'; // Ȁ
mc_Convert[0x201] = 'a'; // ȁ
mc_Convert[0x202] = 'A'; // Ȃ
mc_Convert[0x203] = 'a'; // ȃ
mc_Convert[0x204] = 'E'; // Ȅ
mc_Convert[0x205] = 'e'; // ȅ
mc_Convert[0x206] = 'E'; // Ȇ
mc_Convert[0x207] = 'e'; // ȇ
mc_Convert[0x208] = 'I'; // Ȉ
mc_Convert[0x209] = 'i'; // ȉ
mc_Convert[0x20A] = 'I'; // Ȋ
mc_Convert[0x20B] = 'i'; // ȋ
mc_Convert[0x20C] = 'O'; // Ȍ
mc_Convert[0x20D] = 'o'; // ȍ
mc_Convert[0x20E] = 'O'; // Ȏ
mc_Convert[0x20F] = 'o'; // ȏ
mc_Convert[0x210] = 'R'; // Ȑ
mc_Convert[0x211] = 'r'; // ȑ
mc_Convert[0x212] = 'R'; // Ȓ
mc_Convert[0x213] = 'r'; // ȓ
mc_Convert[0x214] = 'U'; // Ȕ
mc_Convert[0x215] = 'u'; // ȕ
mc_Convert[0x216] = 'U'; // Ȗ
mc_Convert[0x217] = 'u'; // ȗ
mc_Convert[0x218] = 'S'; // Ș
mc_Convert[0x219] = 's'; // ș
mc_Convert[0x21A] = 'T'; // Ț
mc_Convert[0x21B] = 't'; // ț
mc_Convert[0x21E] = 'H'; // Ȟ
mc_Convert[0x21F] = 'h'; // ȟ
mc_Convert[0x224] = 'Z'; // Ȥ
mc_Convert[0x225] = 'z'; // ȥ
mc_Convert[0x226] = 'A'; // Ȧ
mc_Convert[0x227] = 'a'; // ȧ
mc_Convert[0x228] = 'E'; // Ȩ
mc_Convert[0x229] = 'e'; // ȩ
mc_Convert[0x22A] = 'O'; // Ȫ
mc_Convert[0x22B] = 'o'; // ȫ
mc_Convert[0x22C] = 'O'; // Ȭ
mc_Convert[0x22D] = 'o'; // ȭ
mc_Convert[0x22E] = 'O'; // Ȯ
mc_Convert[0x22F] = 'o'; // ȯ
mc_Convert[0x230] = 'O'; // Ȱ
mc_Convert[0x231] = 'o'; // ȱ
mc_Convert[0x232] = 'Y'; // Ȳ
mc_Convert[0x233] = 'y'; // ȳ
mc_Convert[0x234] = 'l'; // ȴ
mc_Convert[0x235] = 'n'; // ȵ
mc_Convert[0x23A] = 'A'; // Ⱥ
mc_Convert[0x23B] = 'C'; // Ȼ
mc_Convert[0x23C] = 'c'; // ȼ
mc_Convert[0x23D] = 'L'; // Ƚ
mc_Convert[0x23E] = 'T'; // Ⱦ
mc_Convert[0x23F] = 's'; // ȿ
mc_Convert[0x240] = 'z'; // ɀ
mc_Convert[0x243] = 'B'; // Ƀ
mc_Convert[0x244] = 'U'; // Ʉ
mc_Convert[0x246] = 'E'; // Ɇ
mc_Convert[0x247] = 'e'; // ɇ
mc_Convert[0x248] = 'J'; // Ɉ
mc_Convert[0x249] = 'j'; // ɉ
mc_Convert[0x24C] = 'R'; // Ɍ
mc_Convert[0x24D] = 'r'; // ɍ
mc_Convert[0x24E] = 'Y'; // Ɏ
mc_Convert[0x24F] = 'y'; // ɏ
mc_Convert[0x261] = 'g'; // ɡ
#endregion
}
}
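For reference, a small usage sketch (not part of the original answer). Note that with the optional German switch left commented out, ß maps to a single 's' via the lookup table:
string clean = CharConverter.RemoveDiacritics("Crème Brûlée mit Soße");
// "Creme Brulee mit Sose" with the default table; enable the German switch
// in the code above to get "Creme Brulee mit Sosse" instead.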
Upvotes: 3
Reputation: 11
There is a NuGet package, Unidecode.NET, that provides the desired functionality. It's a port of code that already exists in Python and originally in Perl; cf. https://github.com/thecoderok/Unidecode.NET
// testing Unidecode.NET
using Unidecode.NET;
string x = "AEIOUÁÉÍÓÚÀÈÌÒUaeiouáéíóúàèìòù";
Console.WriteLine(x);
Console.WriteLine(x.Unidecode());
Upvotes: 0
Reputation: 4824
Same as the accepted answer, but faster, using Span<char> instead of StringBuilder.
Requires .NET Core 3.1 or newer.
static string RemoveDiacritics(string text)
{
ReadOnlySpan<char> normalizedString = text.Normalize(NormalizationForm.FormD);
int i = 0;
Span<char> span = text.Length < 1000
? stackalloc char[text.Length]
: new char[text.Length];
foreach (char c in normalizedString)
{
if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
span[i++] = c;
}
return new string(span[..i]).Normalize(NormalizationForm.FormC);
}
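A quick usage sketch of the method above (assuming the usual System.Text and System.Globalization usings):
Console.WriteLine(RemoveDiacritics("crème brûlée")); // creme brulee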
Also, this is extensible for additional character replacements, e.g. for the Polish Ł:
span[i++] = c switch
{
'Ł' => 'L',
'ł' => 'l',
_ => c
};
A small note: stack allocation with stackalloc is rather faster than heap allocation with new, and it creates less work for the garbage collector. 1000 is a threshold to avoid allocating large structures on the stack, which could cause a StackOverflowException. While 1000 is a pretty safe value, in most cases 10000 or even 100000 would also work (100k allocates up to 200 kB on the stack, while the default stack size is 1 MB). However, 100k looks a bit dangerous to me.
Upvotes: 12
Reputation: 123
For anyone who finds Lucene.Net to be overkill for removing diacritics, I managed to find this small library, which does ASCII transliteration for you.
https://github.com/anyascii/anyascii
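If you go that route, usage looks roughly like this. The Transliterate() string extension and the AnyAscii namespace are taken from my reading of the project's README, so treat them as assumptions and verify against the package you actually install:
using AnyAscii;

// Sketch: transliterate arbitrary Unicode text to plain ASCII.
string folded = "crème brûlée".Transliterate(); // "creme brulee"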
Upvotes: 6
Reputation: 183
For simply removing French Canadian accent marks as the original question asked, here's an alternate method that uses a regular expression instead of hardcoded conversions and For/Next loops. Depending on your needs, it could be condensed into a single line of code; however, I added it to an extensions class for easier reusability.
Visual Basic
Imports System.Text
Imports System.Text.RegularExpressions
Public MustInherit Class StringExtension
Public Shared Function RemoveDiacritics(Text As String) As String
Return New Regex("\p{Mn}", RegexOptions.Compiled).Replace(Text.Normalize(NormalizationForm.FormD), String.Empty)
End Function
End Class
Implementation
Private Shared Sub DoStuff()
MsgBox(StringExtension.RemoveDiacritics(inputString))
End Sub
C#
using System.Text;
using System.Text.RegularExpressions;
namespace YourApplication
{
public abstract class StringExtension
{
public static string RemoveDiacritics(string Text)
{
return new Regex(@"\p{Mn}", RegexOptions.Compiled).Replace(Text.Normalize(NormalizationForm.FormD), string.Empty);
}
}
}
Implementation
private static void DoStuff()
{
MessageBox.Show(StringExtension.RemoveDiacritics(inputString));
}
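As noted above, if you don't need a reusable class, the C# version can be condensed into a single expression; a minimal sketch using the same regex and normalization:
var stripped = Regex.Replace(inputString.Normalize(NormalizationForm.FormD), @"\p{Mn}", string.Empty);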
Input: äáčďěéíľľňôóřŕšťúůýž ÄÁČĎĚÉÍĽĽŇÔÓŘŔŠŤÚŮÝŽ ÖÜË łŁđĐ ţŢşŞçÇ øı
Output: aacdeeillnoorrstuuyz AACDEEILLNOORRSTUUYZ OUE łŁđĐ tTsScC øı
I included characters that wouldn't be converted to help visualize what happens when unexpected input is received.
If you need it to also convert other types of characters such as the Polish ł and Ł, then depending on your needs, consider incorporating this answer (.NET Core friendly) that uses CodePagesEncodingProvider
into your solution.
Upvotes: 7
Reputation: 242030
I've not used this method, but Michael Kaplan describes a method for doing so in his blog post (with a confusing title) that talks about stripping diacritics: Stripping is an interesting job (aka On the meaning of meaningless, aka All Mn characters are non-spacing, but some are more non-spacing than others)
static string RemoveDiacritics(string text)
{
var normalizedString = text.Normalize(NormalizationForm.FormD);
var stringBuilder = new StringBuilder(capacity: normalizedString.Length);
for (int i = 0; i < normalizedString.Length; i++)
{
char c = normalizedString[i];
var unicodeCategory = CharUnicodeInfo.GetUnicodeCategory(c);
if (unicodeCategory != UnicodeCategory.NonSpacingMark)
{
stringBuilder.Append(c);
}
}
return stringBuilder
.ToString()
.Normalize(NormalizationForm.FormC);
}
Note that this is a followup to his earlier post: Stripping diacritics....
The approach uses String.Normalize to split the input string into constituent glyphs (basically separating the "base" characters from the diacritics) and then scans the result and retains only the base characters. It's just a little complicated, but really you're looking at a complicated problem.
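To make the decomposition concrete, here is a small illustration (not from the original post) of what FormD normalization does to a single accented character:
// "é" (U+00E9) decomposes into "e" (U+0065) followed by the combining acute accent (U+0301).
var decomposed = "é".Normalize(NormalizationForm.FormD);
Console.WriteLine(decomposed.Length);                   // 2
Console.WriteLine(((int)decomposed[0]).ToString("X4")); // 0065
Console.WriteLine(((int)decomposed[1]).ToString("X4")); // 0301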
Of course, if you're limiting yourself to French, you could probably get away with the simple table-based approach in How to remove accents and tilde in a C++ std::string, as recommended by @David Dibben.
Upvotes: 673
Reputation: 4544
The accepted answer is totally correct, but nowadays it should be updated to use the Rune class instead of CharUnicodeInfo, as C# and .NET changed the way strings are analysed in recent versions (the Rune class was added in .NET Core 3.0).
The following code, for .NET 5+, is now recommended, as it goes further for non-Latin characters:
static string RemoveDiacritics(string text)
{
var normalizedString = text.Normalize(NormalizationForm.FormD);
var stringBuilder = new StringBuilder();
foreach (var c in normalizedString.EnumerateRunes())
{
var unicodeCategory = Rune.GetUnicodeCategory(c);
if (unicodeCategory != UnicodeCategory.NonSpacingMark)
{
stringBuilder.Append(c);
}
}
return stringBuilder.ToString().Normalize(NormalizationForm.FormC);
}
Upvotes: 49
Reputation: 2736
This did the trick for me...
string accentedStr = "crème brûlée";
byte[] tempBytes = System.Text.Encoding.GetEncoding("ISO-8859-8").GetBytes(accentedStr);
string asciiStr = System.Text.Encoding.UTF8.GetString(tempBytes); // creme brulee
Quick & short!
Upvotes: 250
Reputation: 10135
This code worked for me:
var updatedText = new string(text.Normalize(NormalizationForm.FormD)
    .Where(c => CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
    .ToArray()); // requires System.Linq
However, please don't do this with names. It's not only an insult to people with umlauts/accents in their names, it can also be dangerously wrong in certain situations (see below). There are alternative spellings, rather than just removing the accent.
Furthermore, it's simply wrong and dangerous, e.g. if the user has to provide his name exactly as it appears in his passport.
For example, my name is written Zuberbühler, and in the machine-readable part of my passport you will find Zuberbuehler. By removing the umlaut, the name will match neither part. This can lead to issues for the users.
You should rather disallow umlauts/accents in an input form for names, so the user can write his name correctly without its umlaut or accent.
A practical example: if the web service used to apply for ESTA (https://www.application-esta.co.uk/special-characters-and) used the above code instead of transforming umlauts correctly, the ESTA application would either be refused, or the traveller would have problems with American border control when entering the States.
Another example would be flight tickets. Assume you have a flight-ticket booking web application, the user provides his name with an accent, and your implementation simply removes the accents and then uses the airline's web service to book the ticket. Your customer may not be allowed to board, since the name does not match any part of his/her passport.
Upvotes: 2
Reputation: 5511
In case someone is interested, I was looking for something similar and ended up writing the following:
public static string NormalizeStringForUrl(string name)
{
String normalizedString = name.Normalize(NormalizationForm.FormD);
StringBuilder stringBuilder = new StringBuilder();
foreach (char c in normalizedString)
{
switch (CharUnicodeInfo.GetUnicodeCategory(c))
{
case UnicodeCategory.LowercaseLetter:
case UnicodeCategory.UppercaseLetter:
case UnicodeCategory.DecimalDigitNumber:
stringBuilder.Append(c);
break;
case UnicodeCategory.SpaceSeparator:
case UnicodeCategory.ConnectorPunctuation:
case UnicodeCategory.DashPunctuation:
stringBuilder.Append('_');
break;
}
}
string result = stringBuilder.ToString();
return String.Join("_", result.Split(new char[] { '_' }
, StringSplitOptions.RemoveEmptyEntries)); // remove duplicate underscores
}
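A hypothetical call, for illustration: combining marks fall into no case of the switch and are dropped, while spaces become underscores that are then de-duplicated.
string slug = NormalizeStringForUrl("Crème brûlée 2024!"); // "Creme_brulee_2024"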
Upvotes: 37
Reputation: 2892
TL;DR - C# string extension method
I think the best solution for preserving the meaning of the string is to convert the characters instead of stripping them, which is well illustrated by the example crème brûlée becoming crme brle vs. creme brulee.
I checked out Alexander's comment above and saw the Lucene.Net code is Apache 2.0 licensed, so I've modified the class into a simple string extension method. You can use it like this:
var originalString = "crème brûlée";
var maxLength = originalString.Length; // limit output length as necessary
var foldedString = originalString.FoldToASCII(maxLength);
// "creme brulee"
The function is too long to post in a StackOverflow answer (~139k characters of the 30k allowed, lol), so I made a gist and attributed the authors:
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/// <summary>
/// This class converts alphabetic, numeric, and symbolic Unicode characters
/// which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
/// block) into their ASCII equivalents, if one exists.
/// <para/>
/// Characters from the following Unicode blocks are converted; however, only
/// those characters with reasonable ASCII alternatives are converted:
///
/// <ul>
/// <item><description>C1 Controls and Latin-1 Supplement: <a href="http://www.unicode.org/charts/PDF/U0080.pdf">http://www.unicode.org/charts/PDF/U0080.pdf</a></description></item>
/// <item><description>Latin Extended-A: <a href="http://www.unicode.org/charts/PDF/U0100.pdf">http://www.unicode.org/charts/PDF/U0100.pdf</a></description></item>
/// <item><description>Latin Extended-B: <a href="http://www.unicode.org/charts/PDF/U0180.pdf">http://www.unicode.org/charts/PDF/U0180.pdf</a></description></item>
/// <item><description>Latin Extended Additional: <a href="http://www.unicode.org/charts/PDF/U1E00.pdf">http://www.unicode.org/charts/PDF/U1E00.pdf</a></description></item>
/// <item><description>Latin Extended-C: <a href="http://www.unicode.org/charts/PDF/U2C60.pdf">http://www.unicode.org/charts/PDF/U2C60.pdf</a></description></item>
/// <item><description>Latin Extended-D: <a href="http://www.unicode.org/charts/PDF/UA720.pdf">http://www.unicode.org/charts/PDF/UA720.pdf</a></description></item>
/// <item><description>IPA Extensions: <a href="http://www.unicode.org/charts/PDF/U0250.pdf">http://www.unicode.org/charts/PDF/U0250.pdf</a></description></item>
/// <item><description>Phonetic Extensions: <a href="http://www.unicode.org/charts/PDF/U1D00.pdf">http://www.unicode.org/charts/PDF/U1D00.pdf</a></description></item>
/// <item><description>Phonetic Extensions Supplement: <a href="http://www.unicode.org/charts/PDF/U1D80.pdf">http://www.unicode.org/charts/PDF/U1D80.pdf</a></description></item>
/// <item><description>General Punctuation: <a href="http://www.unicode.org/charts/PDF/U2000.pdf">http://www.unicode.org/charts/PDF/U2000.pdf</a></description></item>
/// <item><description>Superscripts and Subscripts: <a href="http://www.unicode.org/charts/PDF/U2070.pdf">http://www.unicode.org/charts/PDF/U2070.pdf</a></description></item>
/// <item><description>Enclosed Alphanumerics: <a href="http://www.unicode.org/charts/PDF/U2460.pdf">http://www.unicode.org/charts/PDF/U2460.pdf</a></description></item>
/// <item><description>Dingbats: <a href="http://www.unicode.org/charts/PDF/U2700.pdf">http://www.unicode.org/charts/PDF/U2700.pdf</a></description></item>
/// <item><description>Supplemental Punctuation: <a href="http://www.unicode.org/charts/PDF/U2E00.pdf">http://www.unicode.org/charts/PDF/U2E00.pdf</a></description></item>
/// <item><description>Alphabetic Presentation Forms: <a href="http://www.unicode.org/charts/PDF/UFB00.pdf">http://www.unicode.org/charts/PDF/UFB00.pdf</a></description></item>
/// <item><description>Halfwidth and Fullwidth Forms: <a href="http://www.unicode.org/charts/PDF/UFF00.pdf">http://www.unicode.org/charts/PDF/UFF00.pdf</a></description></item>
/// </ul>
/// <para/>
/// See: <a href="http://en.wikipedia.org/wiki/Latin_characters_in_Unicode">http://en.wikipedia.org/wiki/Latin_characters_in_Unicode</a>
/// <para/>
/// For example, '&agrave;' will be replaced by 'a'.
/// </summary>
public static partial class StringExtensions
{
/// <summary>
/// Converts characters above ASCII to their ASCII equivalents. For example,
/// accents are removed from accented characters.
/// </summary>
/// <param name="input"> The string of characters to fold </param>
/// <param name="length"> The length of the folded return string </param>
/// <returns> The folded ASCII string </returns>
public static string FoldToASCII(this string input, int? length = null)
{
// The full folding switch is far too long to inline here; see the gist:
// https://gist.github.com/andyraddatz/e6a396fb91856174d4e3f1bf2e10951c
throw new System.NotImplementedException("Paste the method body from the gist linked above.");
}
}
Hope that helps someone else, this is the most robust solution I've found!
Upvotes: 13
Reputation: 21
Not having enough reputation, apparently I cannot comment on Alexander's excellent link. Lucene appears to be the only solution that works in reasonably generic cases.
For those wanting a simple copy-paste solution, here it is, leveraging the code in Lucene:
string testbed = "ÁÂÄÅÇÉÍÎÓÖØÚÜÞàáâãäåæçèéêëìíîïðñóôöøúüāăčĐęğıŁłńŌōřŞşšźžșțệủ";
Console.WriteLine(Lucene.latinizeLucene(testbed));
// output: AAAACEIIOOOUUTHaaaaaaaeceeeeiiiidnoooouuaacDegiLlnOorSsszzsteu
//////////
public static class Lucene
{
// source: https://raw.githubusercontent.com/apache/lucenenet/master/src/Lucene.Net.Analysis.Common/Analysis/Miscellaneous/ASCIIFoldingFilter.cs
// idea: https://stackoverflow.com/questions/249087/how-do-i-remove-diacritics-accents-from-a-string-in-net (scroll down, search for lucene by Alexander)
public static string latinizeLucene(string arg)
{
char[] argChar = arg.ToCharArray();
// latinizeLuceneImpl can expand one char up to four chars - e.g. Þ to TH, or æ to ae, or in fact ⑽ to (10)
char[] resultChar = new String(' ', arg.Length * 4).ToCharArray();
int outputPos = Lucene.latinizeLuceneImpl(argChar, 0, ref resultChar, 0, arg.Length);
string ret = new string(resultChar);
ret = ret.Substring(0, outputPos);
return ret;
}
/// <summary>
/// Converts characters above ASCII to their ASCII equivalents. For example,
/// accents are removed from accented characters.
/// <para/>
/// @lucene.internal
/// </summary>
/// <param name="input"> The characters to fold </param>
/// <param name="inputPos"> Index of the first character to fold </param>
/// <param name="output"> The result of the folding. Should be of size >= <c>length * 4</c>. </param>
/// <param name="outputPos"> Index of output where to put the result of the folding </param>
/// <param name="length"> The number of characters to fold </param>
/// <returns> length of output </returns>
private static int latinizeLuceneImpl(char[] input, int inputPos, ref char[] output, int outputPos, int length)
{
int end = inputPos + length;
for (int pos = inputPos; pos < end; ++pos)
{
char c = input[pos];
// Quick test: if it's not in range then just keep current character
if (c < '\u0080')
{
output[outputPos++] = c;
}
else
{
switch (c)
{
case '\u00C0': // À [LATIN CAPITAL LETTER A WITH GRAVE]
case '\u00C1': // Á [LATIN CAPITAL LETTER A WITH ACUTE]
case '\u00C2': // Â [LATIN CAPITAL LETTER A WITH CIRCUMFLEX]
case '\u00C3': // Ã [LATIN CAPITAL LETTER A WITH TILDE]
case '\u00C4': // Ä [LATIN CAPITAL LETTER A WITH DIAERESIS]
case '\u00C5': // Å [LATIN CAPITAL LETTER A WITH RING ABOVE]
case '\u0100': // Ā [LATIN CAPITAL LETTER A WITH MACRON]
case '\u0102': // Ă [LATIN CAPITAL LETTER A WITH BREVE]
case '\u0104': // Ą [LATIN CAPITAL LETTER A WITH OGONEK]
case '\u018F': // Ə http://en.wikipedia.org/wiki/Schwa [LATIN CAPITAL LETTER SCHWA]
case '\u01CD': // Ǎ [LATIN CAPITAL LETTER A WITH CARON]
case '\u01DE': // Ǟ [LATIN CAPITAL LETTER A WITH DIAERESIS AND MACRON]
case '\u01E0': // Ǡ [LATIN CAPITAL LETTER A WITH DOT ABOVE AND MACRON]
case '\u01FA': // Ǻ [LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE]
case '\u0200': // Ȁ [LATIN CAPITAL LETTER A WITH DOUBLE GRAVE]
case '\u0202': // Ȃ [LATIN CAPITAL LETTER A WITH INVERTED BREVE]
case '\u0226': // Ȧ [LATIN CAPITAL LETTER A WITH DOT ABOVE]
case '\u023A': // Ⱥ [LATIN CAPITAL LETTER A WITH STROKE]
case '\u1D00': // ᴀ [LATIN LETTER SMALL CAPITAL A]
case '\u1E00': // Ḁ [LATIN CAPITAL LETTER A WITH RING BELOW]
case '\u1EA0': // Ạ [LATIN CAPITAL LETTER A WITH DOT BELOW]
case '\u1EA2': // Ả [LATIN CAPITAL LETTER A WITH HOOK ABOVE]
case '\u1EA4': // Ấ [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND ACUTE]
case '\u1EA6': // Ầ [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND GRAVE]
case '\u1EA8': // Ẩ [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
case '\u1EAA': // Ẫ [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE]
case '\u1EAC': // Ậ [LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
case '\u1EAE': // Ắ [LATIN CAPITAL LETTER A WITH BREVE AND ACUTE]
case '\u1EB0': // Ằ [LATIN CAPITAL LETTER A WITH BREVE AND GRAVE]
case '\u1EB2': // Ẳ [LATIN CAPITAL LETTER A WITH BREVE AND HOOK ABOVE]
case '\u1EB4': // Ẵ [LATIN CAPITAL LETTER A WITH BREVE AND TILDE]
case '\u1EB6': // Ặ [LATIN CAPITAL LETTER A WITH BREVE AND DOT BELOW]
case '\u24B6': // Ⓐ [CIRCLED LATIN CAPITAL LETTER A]
case '\uFF21': // A [FULLWIDTH LATIN CAPITAL LETTER A]
output[outputPos++] = 'A';
break;
case '\u00E0': // à [LATIN SMALL LETTER A WITH GRAVE]
case '\u00E1': // á [LATIN SMALL LETTER A WITH ACUTE]
case '\u00E2': // â [LATIN SMALL LETTER A WITH CIRCUMFLEX]
case '\u00E3': // ã [LATIN SMALL LETTER A WITH TILDE]
case '\u00E4': // ä [LATIN SMALL LETTER A WITH DIAERESIS]
case '\u00E5': // å [LATIN SMALL LETTER A WITH RING ABOVE]
case '\u0101': // ā [LATIN SMALL LETTER A WITH MACRON]
case '\u0103': // ă [LATIN SMALL LETTER A WITH BREVE]
case '\u0105': // ą [LATIN SMALL LETTER A WITH OGONEK]
case '\u01CE': // ǎ [LATIN SMALL LETTER A WITH CARON]
case '\u01DF': // ǟ [LATIN SMALL LETTER A WITH DIAERESIS AND MACRON]
case '\u01E1': // ǡ [LATIN SMALL LETTER A WITH DOT ABOVE AND MACRON]
case '\u01FB': // ǻ [LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE]
case '\u0201': // ȁ [LATIN SMALL LETTER A WITH DOUBLE GRAVE]
case '\u0203': // ȃ [LATIN SMALL LETTER A WITH INVERTED BREVE]
case '\u0227': // ȧ [LATIN SMALL LETTER A WITH DOT ABOVE]
case '\u0250': // ɐ [LATIN SMALL LETTER TURNED A]
case '\u0259': // ə [LATIN SMALL LETTER SCHWA]
case '\u025A': // ɚ [LATIN SMALL LETTER SCHWA WITH HOOK]
case '\u1D8F': // ᶏ [LATIN SMALL LETTER A WITH RETROFLEX HOOK]
case '\u1D95': // ᶕ [LATIN SMALL LETTER SCHWA WITH RETROFLEX HOOK]
case '\u1E01': // ạ [LATIN SMALL LETTER A WITH RING BELOW]
case '\u1E9A': // ả [LATIN SMALL LETTER A WITH RIGHT HALF RING]
case '\u1EA1': // ạ [LATIN SMALL LETTER A WITH DOT BELOW]
case '\u1EA3': // ả [LATIN SMALL LETTER A WITH HOOK ABOVE]
case '\u1EA5': // ấ [LATIN SMALL LETTER A WITH CIRCUMFLEX AND ACUTE]
case '\u1EA7': // ầ [LATIN SMALL LETTER A WITH CIRCUMFLEX AND GRAVE]
case '\u1EA9': // ẩ [LATIN SMALL LETTER A WITH CIRCUMFLEX AND HOOK ABOVE]
case '\u1EAB': // ẫ [LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE]
case '\u1EAD': // ậ [LATIN SMALL LETTER A WITH CIRCUMFLEX AND DOT BELOW]
case '\u1EAF': // ắ [LATIN SMALL LETTER A WITH BREVE AND ACUTE]
case '\u1EB1': // ằ [LATIN SMALL LETTER A WITH BREVE AND GRAVE]
case '\u1EB3': // ẳ [LATIN SMALL LETTER A WITH BREVE AND HOOK ABOVE]
case '\u1EB5': // ẵ [LATIN SMALL LETTER A WITH BREVE AND TILDE]
case '\u1EB7': // ặ [LATIN SMALL LETTER A WITH BREVE AND DOT BELOW]
case '\u2090': // ₐ [LATIN SUBSCRIPT SMALL LETTER A]
case '\u2094': // ₔ [LATIN SUBSCRIPT SMALL LETTER SCHWA]
case '\u24D0': // ⓐ [CIRCLED LATIN SMALL LETTER A]
case '\u2C65': // ⱥ [LATIN SMALL LETTER A WITH STROKE]
case '\u2C6F': // Ɐ [LATIN CAPITAL LETTER TURNED A]
case '\uFF41': // a [FULLWIDTH LATIN SMALL LETTER A]
output[outputPos++] = 'a';
break;
case '\uA732': // Ꜳ [LATIN CAPITAL LETTER AA]
output[outputPos++] = 'A';
output[outputPos++] = 'A';
break;
case '\u00C6': // Æ [LATIN CAPITAL LETTER AE]
case '\u01E2': // Ǣ [LATIN CAPITAL LETTER AE WITH MACRON]
case '\u01FC': // Ǽ [LATIN CAPITAL LETTER AE WITH ACUTE]
case '\u1D01': // ᴁ [LATIN LETTER SMALL CAPITAL AE]
output[outputPos++] = 'A';
output[outputPos++] = 'E';
break;
case '\uA734': // Ꜵ [LATIN CAPITAL LETTER AO]
output[outputPos++] = 'A';
output[outputPos++] = 'O';
break;
case '\uA736': // Ꜷ [LATIN CAPITAL LETTER AU]
output[outputPos++] = 'A';
output[outputPos++] = 'U';
break;
// etc. etc. etc.
// see link above for complete source code
//
// unfortunately, postings are limited, as in
// "Body is limited to 30000 characters; you entered 136098."
[...]
case '\u2053': // ⁓ [SWUNG DASH]
case '\uFF5E': // ~ [FULLWIDTH TILDE]
output[outputPos++] = '~';
break;
default:
output[outputPos++] = c;
break;
}
}
}
return outputPos;
}
}
Upvotes: -3
Reputation: 3994
I often use an extension method based on another version I found here (see Replacing characters in C# (ascii)).
Code:
using System.Linq;
using System.Text;
using System.Globalization;
// namespace here
public static class Utility
{
public static string RemoveDiacritics(this string str)
{
if (null == str) return null;
var chars =
from c in str.Normalize(NormalizationForm.FormD).ToCharArray()
let uc = CharUnicodeInfo.GetUnicodeCategory(c)
where uc != UnicodeCategory.NonSpacingMark
select c;
var cleanStr = new string(chars.ToArray()).Normalize(NormalizationForm.FormC);
return cleanStr;
}
// or, alternatively
public static string RemoveDiacritics2(this string str)
{
if (null == str) return null;
var chars = str
.Normalize(NormalizationForm.FormD)
.ToCharArray()
.Where(c=> CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
.ToArray();
return new string(chars).Normalize(NormalizationForm.FormC);
}
}
Upvotes: 16
Reputation: 4474
Popping this library here in case you haven't already considered it. It looks like it comes with a full range of unit tests.
https://github.com/thomasgalliker/Diacritics.NET
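Usage is roughly as below; the Diacritics.Extensions namespace and the RemoveDiacritics() string extension are recalled from the project's README, so double-check them against the current package before relying on this sketch:
using Diacritics.Extensions;

string clean = "crème brûlée".RemoveDiacritics(); // "creme brulee"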
Upvotes: 3
Reputation: 1862
I really like the concise and functional code provided by azrafe7, so I have changed it a little bit to turn it into an extension method:
public static class StringExtensions
{
public static string RemoveDiacritics(this string text)
{
const string SINGLEBYTE_LATIN_ASCII_ENCODING = "ISO-8859-8";
if (string.IsNullOrEmpty(text))
{
return string.Empty;
}
return Encoding.ASCII.GetString(
Encoding.GetEncoding(SINGLEBYTE_LATIN_ASCII_ENCODING).GetBytes(text));
}
}
Upvotes: 1
Reputation: 1583
It's funny that such a question can get so many answers, yet none fit my requirements :) There are so many languages around that a fully language-agnostic solution is, AFAIK, not really possible; as others have mentioned, FormC and FormD give issues.
Since the original question was related to French, the simplest working answer is indeed:
public static string ConvertWesternEuropeanToASCII(this string str)
{
return Encoding.ASCII.GetString(Encoding.GetEncoding(1251).GetBytes(str));
}
1251 should be replaced by the code page of the input language.
This, however, replaces only one character with one character. Since I am also working with German as input, I did a manual conversion:
public static string LatinizeGermanCharacters(this string str)
{
StringBuilder sb = new StringBuilder(str.Length);
foreach (char c in str)
{
switch (c)
{
case 'ä':
sb.Append("ae");
break;
case 'ö':
sb.Append("oe");
break;
case 'ü':
sb.Append("ue");
break;
case 'Ä':
sb.Append("Ae");
break;
case 'Ö':
sb.Append("Oe");
break;
case 'Ü':
sb.Append("Ue");
break;
case 'ß':
sb.Append("ss");
break;
default:
sb.Append(c);
break;
}
}
return sb.ToString();
}
It might not deliver the best performance, but at least it is very easy to read and extend. Regex is a no-go; it is much slower than any char/string manipulation.
I also have a very simple method to remove space:
public static string RemoveSpace(this string str)
{
return str.Replace(" ", string.Empty);
}
In the end, I use a combination of all three extensions above:
public static string LatinizeAndConvertToASCII(this string str, bool keepSpace = false)
{
str = str.LatinizeGermanCharacters().ConvertWesternEuropeanToASCII();
return keepSpace ? str : str.RemoveSpace();
}
And here is a small (non-exhaustive) unit test for it, which passes successfully:
[TestMethod()]
public void LatinizeAndConvertToASCIITest()
{
string europeanStr = "Bonjour ça va? C'est l'été! Ich möchte ä Ä á à â ê é è ë Ë É ï Ï î í ì ó ò ô ö Ö Ü ü ù ú û Û ý Ý ç Ç ñ Ñ";
string expected = "Bonjourcava?C'estl'ete!IchmoechteaeAeaaaeeeeEEiIiiiooooeOeUeueuuuUyYcCnN";
string actual = europeanStr.LatinizeAndConvertToASCII();
Assert.AreEqual(expected, actual);
}
Upvotes: 8
Reputation: 6996
The Greek (ISO) code page can do it.
The information about this code page is in System.Text.Encoding.GetEncodings(). Learn more at: https://msdn.microsoft.com/pt-br/library/system.text.encodinginfo.getencoding(v=vs.110).aspx
Greek (ISO) has code page 28597 and the name iso-8859-7.
Go to the code... \o/
string text = "Você está numa situação lamentável";
string textEncode = System.Web.HttpUtility.UrlEncode(text, Encoding.GetEncoding("iso-8859-7"));
//result: "Voce+esta+numa+situacao+lamentavel"
string textDecode = System.Web.HttpUtility.UrlDecode(textEncode);
//result: "Voce esta numa situacao lamentavel"
So, write this function...
public string RemoveAcentuation(string text)
{
return
System.Web.HttpUtility.UrlDecode(
System.Web.HttpUtility.UrlEncode(
text, Encoding.GetEncoding("iso-8859-7")));
}
Note that Encoding.GetEncoding("iso-8859-7") is equivalent to Encoding.GetEncoding(28597), because the first is the name and the second is the code page of the encoding.
Upvotes: 15
Reputation:
Encoding.ASCII.GetString(Encoding.GetEncoding(1251).GetBytes(text));
It actually splits characters like å, which is a single character (character code 00E5, rather than 0061 plus the combining modifier 030A, which would look the same), into an a plus some kind of modifier, and then the ASCII conversion removes the modifier, leaving only the a.
Upvotes: 1
Reputation: 168
Imports System.Text
Imports System.Globalization
Imports System.Linq
Public Function DECODE(ByVal x As String) As String
Dim sb As New StringBuilder
For Each c As Char In x.Normalize(NormalizationForm.FormD).Where(Function(a) CharUnicodeInfo.GetUnicodeCategory(a) <> UnicodeCategory.NonSpacingMark)
sb.Append(c)
Next
Return sb.ToString()
End Function
Upvotes: 1
Reputation: 325
You can use the string extension from the MMLib.Extensions NuGet package:
using MMLib.RapidPrototyping.Generators;
public void ExtensionsExample()
{
string target = "aácčeéií";
Assert.AreEqual("aacceeii", target.RemoveDiacritics());
}
Nuget page: https://www.nuget.org/packages/MMLib.Extensions/ Codeplex project site https://mmlib.codeplex.com/
Upvotes: 1
Reputation: 149
This is how I replace diacritic characters with non-diacritic ones in all my .NET programs.
C#:
// Transforms an accented letter into its equivalent representation in the 0-127 ASCII table, e.g. the letter 'é' is substituted by an 'e'
public string RemoveDiacritics(string s)
{
string normalizedString = null;
StringBuilder stringBuilder = new StringBuilder();
normalizedString = s.Normalize(NormalizationForm.FormD);
int i = 0;
char c = '\0';
for (i = 0; i <= normalizedString.Length - 1; i++)
{
c = normalizedString[i];
if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
{
stringBuilder.Append(c);
}
}
return stringBuilder.ToString().ToLower();
}
VB .NET:
'Transforms an accented letter into its equivalent representation in the 0-127 ASCII table, e.g. the letter "é" is substituted by an "e"'
Public Function RemoveDiacritics(ByVal s As String) As String
Dim normalizedString As String
Dim stringBuilder As New StringBuilder
normalizedString = s.Normalize(NormalizationForm.FormD)
Dim i As Integer
Dim c As Char
For i = 0 To normalizedString.Length - 1
c = normalizedString(i)
If CharUnicodeInfo.GetUnicodeCategory(c) <> UnicodeCategory.NonSpacingMark Then
stringBuilder.Append(c)
End If
Next
Return stringBuilder.ToString().ToLower()
End Function
Upvotes: 4
Reputation: 7415
Try the HelperSharp package.
It has a RemoveAccents method:
public static string RemoveAccents(this string source)
{
//8 bit characters
byte[] b = Encoding.GetEncoding(1251).GetBytes(source);
// 7 bit characters
string t = Encoding.ASCII.GetString(b);
Regex re = new Regex("[^a-zA-Z0-9]=-_/");
string c = re.Replace(t, " ");
return c;
}
Upvotes: 1
Reputation: 39
This is the VB version (works with Greek):
Imports System.Text
Imports System.Globalization
Public Function RemoveDiacritics(ByVal s As String) As String
Dim normalizedString As String
Dim stringBuilder As New StringBuilder
normalizedString = s.Normalize(NormalizationForm.FormD)
Dim i As Integer
Dim c As Char
For i = 0 To normalizedString.Length - 1
c = normalizedString(i)
If CharUnicodeInfo.GetUnicodeCategory(c) <> UnicodeCategory.NonSpacingMark Then
stringBuilder.Append(c)
End If
Next
Return stringBuilder.ToString()
End Function
Upvotes: 3