Mapping Unicode characters to language in R


I'm extracting data from a .pdf file written in Tamil (an Indian regional language). Extracting the text from the PDF in R gives me junk or mis-encoded characters, and I'm unable to map them back to the proper text as it appears in the PDF file. Here is the code:

library(tm)
library(pdftools)
library(qdapRegex)
library(stringr)
library(textreadr)

if(!require("ghit")){
  install.packages("ghit")
}
# on 64-bit Windows
ghit::install_github(c("ropenscilabs/tabulizerjars", "ropenscilabs/tabulizer"), INSTALL_opts = "--no-multiarch")
# elsewhere
ghit::install_github(c("ropenscilabs/tabulizerjars", "ropenscilabs/tabulizer"))
library(tabulizer)
text <- extract_tables("D:/first.pdf")
text[[1]][,2][3]

This gives me junk characters like

"«îù£ñ¢«ð좬ì  , âô¢ì£ñ¢ú¢ «ó£ Ì"

I tried changing the encoding:

library(stringi)
stri_trans_toupper("ê¶ó®", locale = "Tamil")

But no success. Any suggestions would be appreciated.

Thanks.

1 Answer

Kota Mori

If your text has been extracted successfully and the only problem is converting the encoding, then the iconv function should work. Here is an example with text encoded in "cp932" (a Japanese encoding).

# text file written in cp932
x <- readLines("test-cp932.txt", encoding = "UTF-8")

x
## [1] "\x82\xa0\x82肪\x82Ƃ\xa4"
# this is garbled because the file has been read
# in a wrong encoding

iconv(x, "cp932", "UTF-8")
## [1] "ありがとう"
# this means 'thank you'
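
In your case, the garbled string looks like legacy Tamil font bytes being displayed as Latin-1. As a rough sketch, assuming the PDF actually uses the TSCII encoding (an assumption on my part; check iconvlist() to see which encodings your platform supports, and note that font-specific encodings such as Bamini cannot be fixed by iconv alone):

# hypothetical: treat the extracted bytes as TSCII, if available
tamil <- text[[1]][,2][3]
if ("TSCII" %in% iconvlist()) {
  print(iconv(tamil, "TSCII", "UTF-8"))
}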

If this does not work, then your text may have been corrupted during the parsing process.
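
One quick way to check is to extract the same page with pdftools, which returns UTF-8 text. If that output is readable, the damage comes from the table extraction step rather than the PDF itself. A minimal sketch, reusing the file path from the question:

library(pdftools)
# pdf_text() returns one UTF-8 string per page
pages <- pdf_text("D:/first.pdf")
cat(pages[1])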

Another possibility is to convert your strings to a raw object (byte codes) and reconstruct the original text using a code mapping, like this:

charToRaw(x)
##  [1] 82 a0 82 e8 82 aa 82 c6 82 a4
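
Going the other way, once you know (or guess) the source encoding, you can rebuild readable text from those bytes. A minimal sketch using the cp932 bytes above; for the Tamil text you would substitute whatever encoding the PDF actually uses:

# reconstruct the string from raw bytes, then convert the encoding
r <- as.raw(c(0x82, 0xa0, 0x82, 0xe8, 0x82, 0xaa, 0x82, 0xc6, 0x82, 0xa4))
y <- rawToChar(r)  # still cp932 at the byte level
iconv(y, "cp932", "UTF-8")
## [1] "ありがとう"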