I'm extracting data from a PDF file that is in Tamil (an Indian regional language). Extracting the text in R gives me junk or mis-encoded Unicode text, and I'm unable to map it back to the text as it appears in the PDF file. Here is the code:
library(tm)
library(pdftools)
library(qdapRegex)
library(stringr)
library(textreadr)
if (!require("ghit")) {
  install.packages("ghit")
}
# on 64-bit Windows
ghit::install_github(c("ropenscilabs/tabulizerjars", "ropenscilabs/tabulizer"), INSTALL_opts = "--no-multiarch")
# elsewhere
ghit::install_github(c("ropenscilabs/tabulizerjars", "ropenscilabs/tabulizer"))
library(tabulizer)  # provides extract_tables()
text <- extract_tables("D:/first.pdf")
text[[1]][, 2][3]
This gives me junk characters like:
"«îù£ñ¢«ð좬ì , âô¢ì£ñ¢ú¢ «ó£ Ì"
I tried changing the Unicode encoding:
library(stringi)
stri_trans_toupper("ê¶ó®", locale = "Tamil")
but with no success. Any suggestion would be appreciated.
Thanks.
If your text has been successfully extracted and the only problem is converting the encoding, the iconv function should work. I provide an example below with text encoded as "cp932" (an East Asian encoding). If this does not work, then your text may have been corrupted during the parsing process.
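A minimal sketch of that iconv call, assuming the extracted bytes are intact and only mislabelled. The byte string below is illustrative (it spells 日本語 in cp932); for your Tamil text you would first have to identify the real source encoding, and iconvlist() shows which encoding names your platform supports:

x <- "\x93\xfa\x96\x7b\x8c\xea"          # the bytes of 日本語 in cp932
Encoding(x)                              # "unknown" -- R has not labelled the bytes
iconv(x, from = "cp932", to = "UTF-8")   # re-decode the same bytes as UTF-8 text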
Another possibility is to convert your strings to raw objects (byte codes) and rebuild the original text using a code mapping, like this:
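A sketch of that byte-mapping approach, assuming you build the lookup table yourself. The two map entries below are made up for illustration and are not a real Tamil font table; a genuine table would cover every code the legacy PDF font uses:

# Hypothetical: suppose the extractor returned these two legacy-font bytes
garbled <- "\xea\xb6"
bytes   <- charToRaw(garbled)            # ea b6

# Illustrative two-entry map from legacy byte codes to Unicode Tamil letters
code_map <- c(ea = "\u0b9a",             # ச (assumed mapping)
              b6 = "\u0ba4")             # த (assumed mapping)

paste(code_map[as.character(bytes)], collapse = "")  # rebuilt Unicode string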