Reputation: 541
I've copied certain files from a Windows machine to a Linux machine. All the files encoded with Windows-1252 need to be converted to UTF-8. The files which are already in UTF-8 should not be changed. I'm planning to use the recode utility for that. How can I specify that recode should only convert files encoded in Windows-1252, and not the UTF-8 files?
Example usage of recode:
recode windows-1252.. myfile.txt
This would convert myfile.txt from Windows-1252 to UTF-8. Before doing this, I would like to know that myfile.txt is actually Windows-1252 encoded and not UTF-8 encoded. Otherwise, I believe this would corrupt the file.
Upvotes: 53
Views: 305456
Reputation: 33432
Here's a transcription of another answer I gave to a similar question:
If you apply utf8_encode() to an already-UTF-8 string, it will return garbled UTF-8 output.
I made a function that addresses all these issues. It's called Encoding::toUTF8().
You don't need to know what the encoding of your strings is. It can be Latin1 (ISO 8859-1), Windows-1252 or UTF-8, or the string can have a mix of them. Encoding::toUTF8() will convert everything to UTF-8.
I did it because a service was giving me a data feed that was all messed up, mixing UTF-8 and Latin1 in the same string.
Usage:
$utf8_string = Encoding::toUTF8($utf8_or_latin1_or_mixed_string);
$latin1_string = Encoding::toLatin1($utf8_or_latin1_or_mixed_string);
Download:
https://github.com/neitanod/forceutf8
Update:
I've included another function, Encoding::fixUTF8(), which will fix every UTF-8 string that looks garbled.
Usage:
$utf8_string = Encoding::fixUTF8($garbled_utf8_string);
Examples:
echo Encoding::fixUTF8("Fédération Camerounaise de Football");
echo Encoding::fixUTF8("Fédération Camerounaise de Football");
echo Encoding::fixUTF8("FÃÂédÃÂération Camerounaise de Football");
echo Encoding::fixUTF8("Fédération Camerounaise de Football");
will output:
Fédération Camerounaise de Football
Fédération Camerounaise de Football
Fédération Camerounaise de Football
Fédération Camerounaise de Football
Update: I've transformed the function (forceUTF8) into a family of static functions on a class called Encoding. The new function is Encoding::toUTF8().
Upvotes: 10
Reputation: 5767
When I recently had this issue, I solved it by first finding all files in need of conversion. I did this by excluding the files that should not be converted: binary files, pure ASCII files (which by definition already have a valid UTF-8 encoding), and files that contain at least some valid non-ASCII UTF-8 characters. In short, I recursively searched for the files that probably should be converted:
$ find . -type f -name '*' -exec sh -c 'for n; do file -i "$n" | grep -Ev "binary|us-ascii|utf-8"; done' sh {} +
I had a subdirectory tree containing some 300–400 files. About half a dozen of them turned out to be wrongly encoded, and typically returned responses like:
./<some-path>/plain-text-file.txt: text/plain; charset=iso-8859-1
./<some-other-path>/text-file.txt: text/plain; charset=unknown-8bit
Note how the encoding was either iso-8859-1 or unknown-8bit. This makes sense: any non-ASCII Windows-1252 character is either a valid ISO 8859-1 character, or one of the 27 characters in the 128–159 (0x80–0x9F) range for which no printable ISO 8859-1 characters are defined.
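One rough way to tell the two apart for a given file is this sketch (assuming GNU grep with -P support and glibc iconv; the *.txt pattern is a placeholder): a file that fails UTF-8 validation and contains bytes in the 0x80–0x9F range is very likely Windows-1252 rather than ISO 8859-1.
for f in *.txt; do
  # Not valid UTF-8, but contains bytes that are printable only in
  # Windows-1252 (they are C1 controls in ISO 8859-1)?
  if ! iconv -f UTF-8 -t UTF-8 "$f" >/dev/null 2>&1 \
     && LC_ALL=C grep -qP '[\x80-\x9F]' "$f"; then
    echo "$f: probably Windows-1252"
  fi
done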
A problem with the find . -exec solution above is that it can be very slow – a problem that grows with the size of the subdirectory tree under scrutiny.
In my experience, it might be faster – potentially much faster – to run a number of commands instead of the single command suggested above, as follows:
$ file -i * | grep -Ev "binary|us-ascii|utf-8"
$ file -i */* | grep -Ev "binary|us-ascii|utf-8"
$ file -i */*/* | grep -Ev "binary|us-ascii|utf-8"
$ file -i */*/*/* | grep -Ev "binary|us-ascii|utf-8"
$ …
Continue increasing the depth in these commands until the response is something like this:
*/*/*/*/*/*/*: cannot open `*/*/*/*/*/*/*' (No such file or directory)
Once you see cannot open ... (No such file or directory), it is clear that the entire subdirectory tree has been searched.
Now that all suspicious files have been found, I prefer to use a text editor to help with the conversion, instead of a command line tool like recode.
On Windows, I like to use Notepad++ for converting files.
Have a look at this excellent post if you need help on that.
On Linux and macOS, try VS Code for converting files. I've given a few hints in this post.
1 Note that the solution above relies on the file command, which unfortunately isn't completely reliable.
As long as all your files are smaller than 64 kB, there shouldn't be
any problem.
For files (much) larger than 64 kB, there is a risk that non-ASCII
files will falsely be identified as pure ASCII files.
The fewer non-ASCII characters in such files, the bigger the risk
that they will be wrongly identified.
For more on this, see this post and its comments.
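If your version of file is recent enough, it may support raising that limit via its --parameter option; treat this as an assumption to check against your local man page:
# Ask file to inspect up to 10 MB of each file instead of the default
file -P bytes=10000000 -i some-large-file.txt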
2 The find command at the top of this answer is inspired by this answer.
Upvotes: 0
Reputation: 41764
As said, you can't reliably determine whether a file is Windows-1252, because Windows-1252 maps almost every byte to a valid code point. However, if the files are only in Windows-1252 and UTF-8 and no other encodings, then you can try to parse a file as UTF-8; if it contains invalid byte sequences, it's a Windows-1252 file:
if iconv -f UTF-8 -t UTF-16 "$FILE" 1>/dev/null 2>&1; then
    # Conversion succeeded
    echo "$FILE is in UTF-8"
else
    # iconv returns an error if there are invalid characters in the byte stream
    echo "$FILE is in Windows-1252. Converting to UTF-8"
    iconv -f WINDOWS-1252 -t UTF-8 -o "${FILE}_utf8.txt" "$FILE"
fi
This is similar to many other answers that try to treat the file as UTF-8 and check if there are errors. It works 99% of the time because most Windows-1252 texts will be invalid in UTF-8, but there will still be rare cases when it won't work. It's heuristic after all!
There are also various libraries and tools to detect the character set, such as chardet:
$ chardet utf8.txt windows1252.txt iso-8859-1.txt
utf8.txt: utf-8 with confidence 0.99
windows1252.txt: Windows-1252 with confidence 0.73
iso-8859-1.txt: ISO-8859-1 with confidence 0.73
It can't be completely reliable due to the heuristic nature, so it outputs a confidence value for people to judge. The more human text in the file, the more confident it will be. If you have very specific texts, then more training of the library will be needed. For more information, read How do browsers determine the encoding used?
Upvotes: 0
Reputation: 11
This script worked for me on Windows 10 / PowerShell 5.1, converting CP1250 to UTF-8:
Get-ChildItem -Include *.php -Recurse | ForEach-Object {
    $file = $_.FullName
    $mustReWrite = $false
    # Try to read as UTF-8 first; the strict decoder throws an exception
    # if invalid-as-UTF-8 bytes are encountered.
    try
    {
        $null = [IO.File]::ReadAllText($file, [Text.Utf8Encoding]::new($false, $true))
    }
    catch [System.Text.DecoderFallbackException]
    {
        # Fall back to Windows-1250
        $content = [IO.File]::ReadAllText($file, [Text.Encoding]::GetEncoding(1250))
        $mustReWrite = $true
    }
    # Rewrite as UTF-8 without a BOM (the .NET Framework's default)
    if ($mustReWrite)
    {
        Write-Output "$file : converting from 1250 to UTF-8"
        [IO.File]::WriteAllText($file, $content)
    }
    else
    {
        Write-Output "$file : already UTF-8-encoded"
    }
}
Upvotes: 1
Reputation: 1
UTF-8 does not need a BOM, since it has no byte-order ambiguity; a BOM in UTF-8 is superfluous (and discouraged). Where a BOM is helpful is in UTF-16, which may be byte-swapped, as in the case of Microsoft. UTF-16 is for internal representation in a memory buffer; use UTF-8 for interchange. By default, UTF-8 (like anything else derived from US-ASCII) and UTF-16 are in natural/network byte order; Microsoft's UTF-16 requires a BOM because it is byte-swapped.
To convert Windows-1252 to ISO 8859-15, I first convert ISO 8859-1 codes with similar glyphs to US-ASCII. I then convert Windows-1252 up to ISO 8859-15, mapping the remaining non-ISO-8859-15 glyphs to multiple US-ASCII characters.
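With GNU iconv, the transliteration step described above might look roughly like this sketch (input.txt and output.txt are placeholders; //TRANSLIT approximates characters that have no target equivalent, possibly using several ASCII characters):
# Convert Windows-1252 to ISO 8859-15, transliterating unmappable glyphs
iconv -f WINDOWS-1252 -t ISO-8859-15//TRANSLIT input.txt > output.txt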
Upvotes: -1
Reputation: 24317
If you want to convert multiple files in a single command, let's say all *.txt files, here is the command:
find . -name "*.txt" -exec iconv -f WINDOWS-1252 -t UTF-8 {} -o {}.ren \; -a -exec mv {}.ren {} \;
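If some of the files may already be UTF-8, as in the question, a variant that first tests each file for UTF-8 validity could look like this sketch (untested; assumes GNU find and glibc iconv):
find . -name '*.txt' -exec sh -c '
  for f; do
    # Skip files that already decode cleanly as UTF-8
    if ! iconv -f UTF-8 -t UTF-8 "$f" >/dev/null 2>&1; then
      iconv -f WINDOWS-1252 -t UTF-8 -o "$f.ren" "$f" && mv "$f.ren" "$f"
    fi
  done' sh {} +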
Upvotes: 9
Reputation: 1
Found this documentation for the TYPE command:
Convert an ASCII (Windows1252) file into a Unicode (UCS-2 le) text file:
For /f "tokens=2 delims=:" %%G in ('CHCP') do Set _codepage=%%G
CHCP 1252 >NUL
CMD.EXE /D /A /C (SET/P=ÿþ)<NUL > unicode.txt 2>NUL
CMD.EXE /D /U /C TYPE ascii_file.txt >> unicode.txt
CHCP %_codepage%
The technique above (based on a script by Carlos M.) first creates a file with a Byte Order Mark (BOM) and then appends the content of the original file. CHCP is used to ensure the session is running with the Windows1252 code page so that the characters 0xFF and 0xFE (ÿþ) are interpreted correctly.
Upvotes: -1
Reputation: 2684
You can change the encoding of a file with an editor such as Notepad++. Just go to Encoding and select what you want. I always prefer Windows-1252.
Upvotes: 2
Reputation: 14824
If you are sure your files are either UTF-8 or Windows-1252 (or Latin1), you can take advantage of the fact that recode will exit with an error if you try to convert an invalid file.
While UTF-8 text is also valid Windows-1252 (every byte maps to some character), the reverse is not true: Windows-1252 text with non-ASCII characters is NOT valid UTF-8. So:
recode utf8..utf16 <unknown.txt >/dev/null || recode cp1252..utf8 <unknown.txt >utf8-2.txt
This will spit out errors for all CP1252 files, and then proceed to convert them to UTF-8.
I would wrap this into a cleaner bash script, keeping a backup of every converted file.
Before doing the charset conversion, you may wish to first ensure you have consistent line-endings in all files. Otherwise, recode will complain because of that, and may convert files which were already UTF8, but just had the wrong line-endings.
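Such a wrapper might look like the following sketch (untested; the .bak suffix and the filter-style recode invocation are my assumptions):
#!/bin/bash
# For each given file: if it is not valid UTF-8, keep a backup copy
# and convert it from CP1252 to UTF-8 in place.
for f in "$@"; do
  if ! recode utf8..utf16 <"$f" >/dev/null 2>&1; then
    cp -- "$f" "$f.bak"                 # backup of the original
    recode cp1252..utf8 <"$f.bak" >"$f" # convert via the backup copy
  fi
done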
Upvotes: 0
Reputation: 99
There's no general way to tell if a file is encoded with a specific encoding. Remember that an encoding is nothing more than an "agreement" on how the bits in a file should be mapped to characters.
If you don't know which of your files are actually already encoded in UTF-8 and which ones are encoded in Windows-1252, you will have to inspect all files and find out yourself. In the worst case, that could mean that you have to open every single one of them with either of the two encodings and see whether they "look" correct, i.e., whether all characters are displayed correctly. Of course, you may use tool support to do that. For instance, if you know for sure that certain characters contained in the files have a different mapping in Windows-1252 vs. UTF-8, you could grep for them after running the files through iconv, as mentioned by Seva Alekseyev.
Another lucky case for you would be if you know that the files actually contain only characters that are encoded identically in both UTF-8 and Windows-1252. In that case, of course, you're done already.
Upvotes: 9
Reputation: 61351
Use the iconv command.
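For example, to convert one file (the file names here are placeholders):
# Convert a single file from Windows-1252 to UTF-8
iconv -f WINDOWS-1252 -t UTF-8 myfile.txt > myfile-utf8.txt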
To make sure the file is in Windows-1252, open it in Notepad (under Windows), then click Save As. Notepad suggests the current encoding as the default; if it's Windows-1252 (or any 1-byte code page, for that matter), it will say "ANSI".
Upvotes: 3
Reputation: 1500225
How would you expect recode to know that a file is Windows-1252? In theory, I believe any file is a valid Windows-1252 file, as it maps every possible byte to a character.
Now there are certainly characteristics which would strongly suggest that it's UTF-8 - if it starts with the UTF-8 BOM, for example - but they wouldn't be definitive.
One option would be to detect whether it's actually a completely valid UTF-8 file first, I suppose... again, that would only be suggestive.
I'm not familiar with the recode tool itself, but you might want to see whether it's capable of recoding a file from and to the same encoding - if you do this with an invalid file (i.e. one which contains invalid UTF-8 byte sequences) it may well convert the invalid sequences into question marks or something similar. At that point you could detect that a file is valid UTF-8 by recoding it to UTF-8 and seeing whether the input and output are identical.
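With iconv instead of recode, that round-trip idea might be sketched like this (assuming glibc iconv, whose -c flag drops invalid sequences instead of failing):
# If dropping invalid UTF-8 sequences changes nothing, the file was
# already valid UTF-8; otherwise it probably needs converting.
if iconv -f UTF-8 -t UTF-8 -c unknown.txt | cmp -s - unknown.txt; then
  echo "unknown.txt looks like valid UTF-8"
else
  echo "unknown.txt is not valid UTF-8"
fi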
Alternatively, do this programmatically rather than using the recode utility - it would be quite straightforward in C#, for example.
Just to reiterate though: all of this is heuristic. If you really don't know the encoding of a file, nothing is going to tell you it with 100% accuracy.
Upvotes: 43