Is there a way to deal with embedded nuls while reading in parquet files?


I have data scraped from the internet (hence varied encodings) and stored as parquet files. While processing it in R I use the arrow library. For the following code snippet

library(arrow)

download.file('https://github.com/akashshah59/embedded_nul_parquet/raw/main/CC-MAIN-20200702045758-20200702075758-00007.parquet','sample.parquet')

read_parquet(file = 'sample.parquet',as_data_frame = TRUE)

I get the following error:

Error in Table__to_dataframe(x, use_threads = option_use_threads()) : 
  embedded nul in string: '\0 at \0'

So, I thought, what if I could read the file as binary and replace the embedded nul characters (\0) myself?

parquet <- read_parquet(file = 'sample.parquet',as_data_frame = FALSE) 
raw <- write_to_raw(parquet,format = "file") 
print(raw)

In this case, I get an indecipherable stream of bytes, which makes it very difficult to pick out and remove the problematic 00 (nul) bytes:

  [1] 41 52 52 4f 57 31 00 00 ff ff ff ff d0 02 00 00 10 00 00 00 00 00 0a 00 0c 00 06 00
  [29] 05 00 08 00 0a 00 00 00 00 01 04 00 0c 00 00 00 08 00 08 00 00 00 04 00 08 00 00 00
  [57] 04 00 00 00 0d 00 00 00 70 02 00 00 38 02 00 00 10 02 00 00 d0 01 00 00 a4 01 00 00
  [85] 74 01 00 00 34 01 00 00 04 01 00 00 cc 00 00 00 9c 00 00 00 64 00 00 00 34 00 00 00
 [113] 04 00 00 00 d4 fd ff ff 00 00 01 05 14 00 00 00 0c 00 00 00 04 00 00 00 00 00 00 00
 [141] c4 fd ff ff 0a 00 00 00 77 61 72 63 5f 6c 61 6e 67 73 00 00 00 fe ff ff 00 00 01 05
 [169] 14 00 00 00 0c 00 00 00 04 00 00 00 00 00 00 00 f0 fd ff ff 0b 00 00 00 6c 61 6e 67
 [197] 5f 64 65 74 65 63 74 00 2c fe ff ff 00 00 01 03 18 00 00 00 0c 00 00 00 04 00 

Is there a way to read parquet such that embedded nuls are skipped while reading? Or is there a pattern that I can use to efficiently remove embedded nuls from the parquet data?

For example, when I read in the same file stored as a TSV, R provides functionality to read it safely:

download.file('https://github.com/akashshah59/embedded_nul_parquet/raw/main/CC-MAIN-20200702045758-20200702075758-00007.tsv','sample.tsv')
table <- read.csv('sample.tsv', sep = '\t', quote = "\"", skipNul = TRUE)

Here, skipNul drops the nuls and returns a data.frame with the expected dimensions.
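The skipNul behaviour can be seen on a tiny self-contained example (hypothetical two-column file, not the original data):

```r
# Write a small TSV whose data row contains an embedded nul byte,
# then read it back with skipNul = TRUE, which drops the nul silently.
bytes <- c(charToRaw("a\tb\n1\tx"), as.raw(0), charToRaw("y\n"))
tmp <- tempfile(fileext = ".tsv")
writeBin(bytes, tmp)
df <- read.csv(tmp, sep = "\t", skipNul = TRUE, stringsAsFactors = FALSE)
df$b  # "xy" -- the nul between "x" and "y" was skipped
```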

Session Info:

> sessionInfo()
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.5 LTS

Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8       
 [4] LC_COLLATE=en_US.UTF-8     LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
[10] LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] stringr_1.4.0  dplyr_1.0.2    tictoc_1.0     arrow_1.0.1    sparklyr_1.4.0

References: Arrow manual


There are 2 answers

Answer by Paul (accepted):

This may be a bug. The file itself is read by arrow fine; the error comes when converting it into a data frame.

library(arrow)
library(tidyverse)

read_parquet("parquet.parquet", as_data_frame = FALSE)
#> Table
#> 45483 rows x 13 columns
#> $date <string>
#> $raw <string>
#> $url <string>
#> $isReliable <int64>
#> $title <string>
#> $language1 <string>
#> $language1_conf <int64>
#> $language2 <string>
#> $language2_conf <int64>
#> $language3 <string>
#> $language3_conf <int64>
#> $lang_detect <string>
#> $warc_first <string>

Specifically, there is an issue with the second column, raw. Reading every other column works fine.

df_except_bad_col <- read_parquet("parquet.parquet", col_select = -2)
df_except_bad_col
#> # A tibble: 45,483 x 12
#> date  url   isReliable title language1 language1_conf language2 language2_conf language3 language3_conf lang_detect
#> <chr> <chr>      <int> <chr> <chr>              <int> <chr>              <int> <chr>              <int> <chr>      
#>  1 2019~ http~          1 2019~ ja                    96 en                     2 un                     0 ja         
#>  2 2020~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 3 2020~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 4 2020~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 5 2019~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 6 2019~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 7 2019~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 8 2019~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 9 2019~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> 10 2019~ http~          1 Косм~ ru                    87 en                     3 un                     0 ru         
#> # ... with 45,473 more rows, and 1 more variable: warc_first <chr>

Converting that column to a vector causes problems.

bad_column <- read_parquet("parquet.parquet", col_select = 2, as_data_frame = FALSE)
bad_column[[1]]$as_vector()
#> Error in ChunkedArray__as_vector(self) : 
#>   embedded nul in string: '\0 at \0'

There isn't a great way to read the column in successfully. You can read it in as binary, discard the nuls, then convert it to character:

fixed_column <-
  bad_column[[1]]$cast(binary())$as_vector() %>%  # a list of raw vectors
  map(discard, ~. == 0x00) %>%                    # drop nul bytes from each
  map_chr(rawToChar)                              # convert back to character
  
mutate(df_except_bad_col, raw = fixed_column)
#> # A tibble: 45,483 x 13
#>    date        url                    isReliable title                     language1 language1_conf language2 language2_conf language3 language3_conf lang_detect warc_first raw              
#>    <chr>       <chr>                       <int> <chr>                     <chr>              <int> <chr>              <int> <chr>              <int> <chr>       <chr>      <chr>            
#>  1 2019-12-31  http://10mm.hatenablo~          1 2019-12-31から1日間の記事一覧 - 1~ ja                    96 en                     2 un                     0 ja          ja         "2019-12-31"     
#>  2 2020-06-10~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "10 июнÑ\u008~
#>  3 2020-06-04~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "4 июнÑ\u008f~
#>  4 2020-05-29~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "29 маÑ\u008f ~
#>  5 2019-12-19~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "19 декабр~
#>  6 2019-12-15~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "15 декабр~
#>  7 2019-12-13~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "13 декабр~
#>  8 2019-10-14~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "14 октÑ\u008~
#>  9 2019-10-02~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "2 октÑ\u008f~
#> 10 2019-09-14~ http://3dmag.org/ru/t~          1 Космос / Поиск по тегам ~ ru                    87 en                     3 un                     0 ru          ru         "14 Ñ\u0081енÑ~
#> # ... with 45,473 more rows
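The core of that fix can be illustrated without arrow at all: drop the 0x00 bytes from a raw vector before converting it to character (toy data, not the original column):

```r
# A raw vector containing an embedded nul, like one element of the
# column after casting to binary and calling $as_vector().
raw_with_nul <- c(charToRaw("foo "), as.raw(0), charToRaw("bar"))
cleaned <- raw_with_nul[raw_with_nul != as.raw(0)]  # discard nul bytes
rawToChar(cleaned)  # "foo bar"
```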

Answer by ianmcook:

In arrow 3.0.0 (now on CRAN), you can set this option to strip out embedded nuls:

options(arrow.skip_nul = TRUE)

After you set that in your R session, read_parquet() will succeed, emitting this warning if it encounters an embedded nul:

Warning message:
Stripping '\0' (nul) from character vector