Enyra

Reputation: 17972

How to read an ANSI encoded file containing special characters

I'm writing a TFS Checkin policy that checks whether our source files contain our file header.

My problem is that our file header contains the special character "©", and unfortunately some of our source files are encoded in ANSI. So when I read these files in the policy, the string looks like this: "Copyright � 2009".

string content = File.ReadAllText(pendingChange.LocalItem);

I tried to change the encoding of the string, but it doesn't help. So how can I read these files so that I get the correct string "Copyright © 2009"?

Upvotes: 77

Views: 112915

Answers (3)

Louis Somers

Reputation: 2964

I know this is an old question, but I ran into a similar situation and found the accepted answer to be cutting some corners (no disrespect to Jon Skeet's pragmatic short answer, but I'll flesh it out a little more)...

The specs state that the header will contain the encoding directly after {\rtf:

 \ansi  ANSI (the default)
 \mac   Apple Macintosh
 \pc    IBM PC code page 437 
 \pca   IBM PC code page 850, used by IBM Personal System/2 (not implemented in version 1 of Microsoft Word for OS/2)

According to Wikipedia, the "ANSI character set has no well-defined meaning".

For the default ANSI you have the choice of these partially incompatible encodings:

using System.Text;
...
string content = File.ReadAllText(filename, Encoding.GetEncoding("ISO-8859-1"));
or
string content = File.ReadAllText(filename, Encoding.GetEncoding("Windows-1252"));

Using WordPad on Windows 10 to save a file with a euro sign (0x80 in Windows-1252, but not present in ISO-8859-1 at all, where 0xA4 is the generic currency sign ¤) revealed the following:

The header stated the exact encoding after \ansi

{\rtf1\ansi\ansicpg1252\deff0\nouicompat\deflang1043{ ...

And the character was not written as a raw byte; instead it was wrapped in an RTF escape: \'80

According to the specs:

\'hh : A hexadecimal value, based on the specified character set (may be used to identify 8-bit values).
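
To make that concrete, here is a minimal sketch (not from the original answer) of turning such an escape back into a character once the code page is known; the helper name DecodeRtfHexEscape is made up for this illustration, and it assumes code page 1252 is available (on .NET Core you would need to register the code-pages encoding provider first):

    using System;
    using System.Globalization;
    using System.Text;
    
    static class RtfEscapeDemo
    {
        // Decodes a single RTF \'hh escape (e.g. "\'80") by interpreting the
        // byte value with the given code page.
        static string DecodeRtfHexEscape(string escape, int codePage)
        {
            byte value = byte.Parse(escape.Substring(2, 2), NumberStyles.HexNumber);
            return Encoding.GetEncoding(codePage).GetString(new[] { value });
        }
    
        static void Main()
        {
            // With \ansicpg1252: \'80 is the euro sign, \'a9 is the copyright sign.
            Console.WriteLine(DecodeRtfHexEscape(@"\'80", 1252)); // €
            Console.WriteLine(DecodeRtfHexEscape(@"\'a9", 1252)); // ©
        }
    }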

I guess the best thing to do is to read the header: if the file starts with {\rtf1\ansi\ansicpg1252, then go for Windows-1252.

But to make things more complicated, the specs also state that there can be mixed encodings... search for '\upr'...

I guess there is no definitive answer; the easiest way to go in your case may be to search (in the un-decoded raw byte array) for all the variations of the encoded copyright sign that you may encounter in your source base.
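
As a rough sketch of that byte-level search (again not from the original answer; the patterns below are just the variants of "©" mentioned in this thread plus UTF-8 and the RTF escape):

    using System;
    using System.IO;
    using System.Linq;
    
    static class CopyrightScan
    {
        // Byte patterns for "©": 0xA9 in Windows-1252/ISO-8859-1,
        // 0xC2 0xA9 in UTF-8, and the RTF escape \'a9.
        static readonly byte[][] Patterns =
        {
            new byte[] { 0xA9 },
            new byte[] { 0xC2, 0xA9 },
            new byte[] { (byte)'\\', (byte)'\'', (byte)'a', (byte)'9' },
        };
    
        public static bool ContainsCopyrightSign(string filename)
        {
            byte[] raw = File.ReadAllBytes(filename);
            return Patterns.Any(pattern => IndexOf(raw, pattern) >= 0);
        }
    
        // Naive byte-array search; fine for source-sized files.
        static int IndexOf(byte[] haystack, byte[] needle)
        {
            for (int i = 0; i <= haystack.Length - needle.Length; i++)
            {
                bool match = true;
                for (int j = 0; j < needle.Length; j++)
                {
                    if (haystack[i + j] != needle[j]) { match = false; break; }
                }
                if (match) return i;
            }
            return -1;
        }
    }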

In my case I finally decided to cut a few corners as well, but to add a small amount of defensive coding. All files I have seen so far were Windows-1252, so I common-case-optimised for that.

    Encoding encoding = Encoding.GetEncoding("Windows-1252", EncoderFallback.ReplacementFallback, DecoderFallback.ReplacementFallback);
    
    using (System.IO.StreamReader reader = new System.IO.StreamReader(filename, encoding)) {
        string header= reader.ReadLine();
        if (!header.Contains("cpg1252")) {
            if(header.Contains("\\pca"))
                encoding = Encoding.GetEncoding(850, EncoderFallback.ReplacementFallback, DecoderFallback.ReplacementFallback);
            else if (header.Contains("\\pc"))
                encoding = Encoding.GetEncoding(437, EncoderFallback.ReplacementFallback, DecoderFallback.ReplacementFallback);
            else
                encoding = Encoding.GetEncoding("ISO-8859-1", EncoderFallback.ReplacementFallback, DecoderFallback.ReplacementFallback);
        }
    }
    
    string content = System.IO.File.ReadAllText(filename, encoding);

Upvotes: 7

AnthonyWJones

Reputation: 189457

It would seem sensible, if you are going to have such policies, that you would also have a team-agreed standard encoding. To be honest, I can't see why any team would use an encoding other than "Unicode (UTF-8 with signature) - Codepage 65001" (except perhaps for ASPX pages with significant non-Latin static content, but even then I can't see how it would be a big deal to use UTF-8).

Assuming you still want to allow mixed encodings, you next need a way to determine which encoding a file was saved in so you know which encoding to pass to ReadAllText. It's not easy to determine this from the file itself; however, using Encoding.Default is likely to work OK, since you most likely have just two encodings to deal with: the VS default (UTF-8 with signature) and a common ANSI encoding used by your machines (probably Windows-1252).

Hence using

 string content = File.ReadAllText(pendingChange.LocalItem, Encoding.Default);

will work (as I see Jon has already posted). This works because when the UTF-8 BOM (which is what VS means by the term "signature") is present at the start of the file, the supplied encoding parameter is ignored and UTF-8 is used anyway. Hence where the file is saved using UTF-8 you get correct results, and where ANSI is used you are most likely also to get correct results.
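
A minimal sketch demonstrating that behaviour (hypothetical file names, and it assumes .NET Framework on a machine whose ANSI code page is Windows-1252, so Encoding.Default is the ANSI code page rather than UTF-8):

    using System;
    using System.IO;
    using System.Text;
    
    class BomDemo
    {
        static void Main()
        {
            // Write the same text once as UTF-8 with a BOM ("signature") and once as ANSI (Windows-1252).
            string text = "Copyright © 2009";
            File.WriteAllText("utf8.cs", text, new UTF8Encoding(encoderShouldEmitUTF8Identifier: true));
            File.WriteAllText("ansi.cs", text, Encoding.GetEncoding(1252));
    
            // Reading both with Encoding.Default: the BOM in utf8.cs overrides the supplied
            // encoding, so both files come back as "Copyright © 2009" on such a machine.
            Console.WriteLine(File.ReadAllText("utf8.cs", Encoding.Default));
            Console.WriteLine(File.ReadAllText("ansi.cs", Encoding.Default));
        }
    }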

BTW, if you are processing file headers, wouldn't ReadAllLines make things easier?

Upvotes: 6

Jon Skeet

Reputation: 1500155

Use Encoding.Default:

string content = File.ReadAllText(pendingChange.LocalItem, Encoding.Default);

You should be aware, however, that that reads it using the system default encoding - which may not be the same as the encoding of the file. There's no single encoding called ANSI, but usually when people talk about "the ANSI encoding" they mean Windows Code Page 1252 or whatever their box happens to use.

Your code will be more robust if you can find out the exact encoding used.
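
For what it's worth, a tiny sketch to see what "ANSI" resolves to on a given machine (on .NET Framework, Encoding.Default is the system's active ANSI code page; on .NET Core / .NET 5+ it is always UTF-8):

    using System;
    using System.Text;
    
    class ShowDefaultEncoding
    {
        static void Main()
        {
            // e.g. "Western European (Windows)" / code page 1252 on many Western European boxes.
            Console.WriteLine($"{Encoding.Default.EncodingName} (code page {Encoding.Default.CodePage})");
        }
    }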

Upvotes: 148
