Reputation: 32398
I edit all kinds of files with Vim (as I'm sure most Vim users do). One bugbear I have is what Vim does when I come across a file with an odd encoding. Most editors (these days) make a good stab at detecting file encodings. However, Vim generally doesn't, and you have to type, for example:
:e ++enc=utf-16le
to re-read the file in UTF-16 (otherwise you get a mass of @ signs).
I've been searching around and have seen scripts like set_utf8.vim, which can detect a specific file encoding. However, is there a more general solution? I'm a bit bored of having to manually work out what the file encoding is and consult the help every time I open an unusual file.
Upvotes: 30
Views: 16196
Reputation: 15844
Add this code to your .vimrc:
if has("multi_byte")
if &termencoding == ""
let &termencoding = &encoding
endif
set encoding=utf-8
setglobal fileencoding=utf-8
"setglobal bomb
set fileencodings=ucs-bom,utf-8,latin1
endif
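With this in place, the entries in 'fileencodings' are tried in order each time a file is read. A quick way to confirm which one was detected for the current buffer (standard Vim commands, independent of this snippet):

:set fileencoding?
:set fileencodings?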
Upvotes: 5
Reputation: 172520
Adding the encoding name to 'fileencodings' should do the trick:
:set fencs=ucs-bom,utf-16le,utf-8,default,latin1
Alternatively, there are plugins like AutoFenc and fencview.
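To make this persistent, the same setting (minus the leading colon) can go in your .vimrc; a minimal sketch:

" BOM detection must come first; latin1 goes last because it never fails
set fileencodings=ucs-bom,utf-16le,utf-8,default,latin1

If the file is already open and showing @ signs, re-read it with plain :e (no arguments) so detection runs again with the new list.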
Upvotes: 21
Reputation: 272237
Do you have a byte order mark (BOM)? Vim should detect this and handle it appropriately. From the docs (section 45.4):
When you start editing that 16-bit Unicode file, and it has a BOM, Vim will detect this and convert the file to utf-8 when reading it. The 'fileencoding' option (without s at the end) is set to the detected value. In this case it is "utf-16le". That means it's Unicode, 16-bit and little-endian. This file format is common on MS-Windows (e.g., for registry files).
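If you control the file, writing it back with a BOM makes detection automatic the next time it is opened. A minimal sketch, assuming the buffer's encoding has already been set correctly (e.g. via :e ++enc=utf-16le):

" Mark the buffer to be written with a byte order mark, then save
:setlocal bomb
:w

Afterwards the 'ucs-bom' entry at the front of 'fileencodings' will pick the file up with no manual steps.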
Upvotes: 4