I have data scraped from the internet (hence with varied encodings) and stored as parquet files. While processing it in R, I use the arrow library. For the following code snippet
library(arrow)
download.file('https://github.com/akashshah59/embedded_nul_parquet/raw/main/CC-MAIN-20200702045758-20200702075758-00007.parquet','sample.parquet')
read_parquet(file = 'sample.parquet', as_data_frame = TRUE)
I get:
Error in Table__to_dataframe(x, use_threads = option_use_threads()) :
embedded nul in string: '\0 at \0'
So I thought: what if I could read the file as binary and replace the embedded nul character \0 myself?
parquet <- read_parquet(file = 'sample.parquet', as_data_frame = FALSE)
raw <- write_to_raw(parquet,format = "file")
print(raw)
In this case, I get an indecipherable stream of bytes and nuls, which makes it very difficult to identify and remove the problematic '00' bytes:
[1] 41 52 52 4f 57 31 00 00 ff ff ff ff d0 02 00 00 10 00 00 00 00 00 0a 00 0c 00 06 00
[29] 05 00 08 00 0a 00 00 00 00 01 04 00 0c 00 00 00 08 00 08 00 00 00 04 00 08 00 00 00
[57] 04 00 00 00 0d 00 00 00 70 02 00 00 38 02 00 00 10 02 00 00 d0 01 00 00 a4 01 00 00
[85] 74 01 00 00 34 01 00 00 04 01 00 00 cc 00 00 00 9c 00 00 00 64 00 00 00 34 00 00 00
[113] 04 00 00 00 d4 fd ff ff 00 00 01 05 14 00 00 00 0c 00 00 00 04 00 00 00 00 00 00 00
[141] c4 fd ff ff 0a 00 00 00 77 61 72 63 5f 6c 61 6e 67 73 00 00 00 fe ff ff 00 00 01 05
[169] 14 00 00 00 0c 00 00 00 04 00 00 00 00 00 00 00 f0 fd ff ff 0b 00 00 00 6c 61 6e 67
[197] 5f 64 65 74 65 63 74 00 2c fe ff ff 00 00 01 03 18 00 00 00 0c 00 00 00 04 00
Is there a way to read parquet files such that embedded nuls are skipped while reading? Or is there a pattern that I can use to efficiently remove embedded nuls from the parquet byte stream above?
For example, when I read the same file stored as a TSV, R provides functionality to read it safely:
download.file('https://github.com/akashshah59/embedded_nul_parquet/raw/main/CC-MAIN-20200702045758-20200702075758-00007.tsv','sample.tsv')
table <- read.csv('sample.tsv', sep = '\t', quote = "\"", skipNul = TRUE)
Here, skipNul skips the nuls and returns a data.frame with the expected dimensions.
Session Info:
> sessionInfo()
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.5 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8
[4] LC_COLLATE=en_US.UTF-8 LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] stringr_1.4.0 dplyr_1.0.2 tictoc_1.0 arrow_1.0.1 sparklyr_1.4.0
This may be a bug. The file itself is read by arrow fine; the error only comes when converting it into a data frame. Specifically, the issue is with the second column, raw: every other column reads fine, but converting that column to a vector causes problems.
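As a minimal illustration (the column name raw below is taken from the description above; adjust it if your file differs):

library(arrow)

tab <- read_parquet('sample.parquet', as_data_frame = FALSE)  # succeeds

# Every column except the problematic one converts cleanly:
for (col in setdiff(names(tab), "raw")) print(head(as.vector(tab[[col]])))

# Converting the offending column reproduces the error:
as.vector(tab$raw)
# Error: embedded nul in string: '\0 at \0'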
There isn't a great way to read the column in successfully. You can, however, read it in as binary, discard the nuls, and then convert the result to character.
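A minimal sketch of that approach follows. It assumes the offending column is named raw, and that your arrow version can cast a string column to binary() and convert the result to a list of raw vectors (recent arrow releases can; if yours cannot, upgrading arrow may be necessary):

library(arrow)

tab <- read_parquet('sample.parquet', as_data_frame = FALSE)

# Cast the string column to binary so arrow hands back raw bytes
# instead of trying to build R strings, which cannot contain nuls.
binary_col <- tab$raw$cast(binary())

# Converting a binary ChunkedArray yields a list of raw vectors
# (with NULL for missing values).
raw_values <- as.vector(binary_col)

# Drop the nul bytes from each value, then convert back to character.
cleaned <- vapply(raw_values, function(bytes) {
  if (is.null(bytes)) return(NA_character_)
  rawToChar(bytes[bytes != as.raw(0)])
}, character(1))

Encoding(cleaned) <- "UTF-8"  # scraped web text, so mark it as UTF-8

You can then convert the remaining columns of the table to a data frame as usual and attach cleaned as the raw column.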