GregF

Reputation: 1392

R encoding - Saved as UTF-8 with wrong characters (I think)

I have a file that explicitly says it is UTF-8, and the unix command file -i agrees that it is encoded as UTF-8, but when I load it into R (using readr with UTF-8 encoding), I can still clearly tell that the multi-byte characters are wrong. When I specify "Windows-1252" (which, based on this chart, I'm pretty sure is what it was originally encoded as), I get even more incorrect characters.

I think what happened is that someone saved these incorrect characters as UTF-8. Is there any way to recover the original text?
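I can reproduce the mangling I suspect happened (a sketch; I'm using latin1 for the misreading step, which agrees with Windows-1252 for the bytes involved):

```r
# Take the UTF-8 bytes of the correct text (e.g. "í" is 0xC3 0xAD),
# misread them as a single-byte encoding, and re-save the result as
# UTF-8 -- this yields exactly the "Ã­" I see in the file.
x <- "Prov\u00EDncia"
bytes <- charToRaw(enc2utf8(x))   # ... c3 ad ... for "í"
mangled <- rawToChar(bytes)
Encoding(mangled) <- "latin1"     # the misreading step
mangled <- enc2utf8(mangled)      # the re-save step
mangled                           # the mojibake form seen in the file
```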

Here are my attempts at fixing it by specifying the encoding:

library(curl)
library(readr)
#> 
#> Attaching package: 'readr'
#> The following object is masked from 'package:curl':
#> 
#>     parse_date

text_file <- tempfile()
curl_download("https://dl.dropboxusercontent.com/s/7syikmmiduubsqv/test.txt", text_file)


# Default is UTF-8, other specifications add extra characters
read_lines(text_file)
#> [1] "{Província}"
# read_lines(text_file, locale = locale(encoding = "UTF-8")) # same
read_lines(text_file, locale = locale(encoding = "Windows-1252"))
#> [1] "{ProvÃ<U+0083>­ncia}"
read_lines(text_file, locale = locale(encoding = "latin1"))
#> [1] "{ProvÃ<U+0083>­ncia}"

# The equivalent base R code gives the same results
# readLines(text_file)
# readLines(text_file, encoding = "UTF-8")
# readLines(text_file, encoding = "UTF-8-BOM")
# readLines(text_file, encoding = "Windows-1252")

# Desired text: "{Prov\u00EDncia}"

Update

Reverse encoding (à la the Stat545 example) doesn't work:

iconv(read_lines(text_file), from = "UTF-8", to = "Latin1")
#> [1] "{Província}"
iconv(read_lines(text_file), from = "UTF-8", to = "Windows-1252")
#> [1] "{Província}"
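I think the reason this reverse conversion can't work is that iconv() translates characters, not bytes: the mangled "Ã" stays the character "Ã" in every target encoding, only its byte representation changes. A quick sketch:

```r
# iconv() preserves characters and changes only their bytes, so
# converting the mojibake to another encoding cannot undo it.
x <- "\u00C3"                                  # "Ã" in UTF-8 (bytes c3 83)
y <- iconv(x, from = "UTF-8", to = "latin1")
charToRaw(x)                                   # c3 83
charToRaw(y)                                   # c3 -- same character, new bytes
```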

Upvotes: 2

Views: 2791

Answers (2)

oskjerv

Reputation: 198

Due to low reputation I'm not allowed to comment, but your function helped me out. The reason I am posting is that there are a couple of bugs in the function (a bracket error, and actual_chars_current is not defined).

Edited:

create_utf_crosswalk <- function() {
  # Affects Windows-1252 0x80 - 0xFF (but a few characters aren't in
  # the spec, so remove them)
  hex_codes <- sprintf("%x", seq(strtoi("0x80"), strtoi("0xFF")))
  hex_codes <- hex_codes[!hex_codes %in% c("81", "8d", "8f", "90", "9f")]

  actual_chars_locale <- vapply(hex_codes, FUN.VALUE = character(1), function(x) {
    parse(text = paste0("'\\x", x, "'"))[[1]]
  })

  actual_chars_utf <- iconv(actual_chars_locale, to = "UTF-8")

  mangled_chars_utf <- vapply(actual_chars_utf, FUN.VALUE = character(1),
                              function(x) {
                                Encoding(x) <- "Windows-1252"
                                x
                              })

  out <- actual_chars_utf
  names(out) <- mangled_chars_utf
  out
}

Upvotes: 1

GregF

Reputation: 1392

Well, I imagine there's a better way to fix this, but until someone posts it, here's a solution that builds the conversion table described on the debugging website and applies the replacements to the text.

(requires stringr)


# Create the Debugging table from http://www.i18nqa.com/debug/utf8-debug.html
# UTF-8 characters were interpreted as Windows-1252 and then saved
# as UTF-8
create_utf_crosswalk <- function() {
  # Affects Windows-1252 0x80 - 0xFF (but a few characters aren't in
  # the spec, so  remove them)
  hex_codes <- sprintf("%x", seq(strtoi("0x80"), strtoi("0xFF")))
  hex_codes <- hex_codes[!hex_codes %in% c("81", "8d", "8f", "90", "9f")]

  actual_chars_locale <- vapply(hex_codes, FUN.VALUE = character(1), function(x) {
    parse(text = paste0("'\\x", x, "'"))[[1]]
  })

  actual_chars_utf <- iconv(actual_chars_locale, to = "UTF-8")

  mangled_chars_utf <- vapply(actual_chars_utf, FUN.VALUE = character(1),
                              function(x) {
                                Encoding(x) <- "Windows-1252"
                                x
                              })

  out <- actual_chars_utf
  names(out) <- mangled_chars_utf
  out
}

text_file <- tempfile()
curl::curl_download("https://dl.dropboxusercontent.com/s/7syikmmiduubsqv/test.txt", text_file)
test_text <- readr::read_lines(text_file)

utf_fix <- create_utf_crosswalk()

stringr::str_replace_all(test_text, utf_fix)
#> [1] "{Província}"

Update

Figured out a direct solution, which works on the example text but not on the full file (perhaps I'm not specifying exactly the right file encoding).

text <- readLines("https://dl.dropboxusercontent.com/s/7syikmmiduubsqv/test.txt")

fixed <- iconv(text, from = "UTF-8", to = "Windows-1252")
Encoding(fixed) <- "UTF-8"

fixed
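A byte-level check of why this works (a sketch using a hard-coded mangled string rather than the downloaded file):

```r
# "Ã" is U+00C3 and the soft hyphen is U+00AD; converting them to
# Windows-1252 yields the single bytes 0xC3 0xAD, which is exactly the
# UTF-8 byte sequence for "í" -- so relabelling the converted result
# as UTF-8 recovers the original text.
mangled <- "Prov\u00C3\u00ADncia"   # the double-encoded "ProvÃ­ncia"
fixed <- iconv(mangled, from = "UTF-8", to = "Windows-1252")
Encoding(fixed) <- "UTF-8"
identical(fixed, "Prov\u00EDncia")
#> [1] TRUE
```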

Upvotes: 4
