Unique values of factor column containing NAs => "Hash table is full" error


I have a data.table with 57m records and 9 columns, one of which is causing a problem when I try to run some summary stats. The offending column is a factor with 3699 levels, and I am receiving an error from the following line of code:

    > unique(da$UPC)
    Error in unique.default(da$UPC): hash table is full

Now obviously I would just use levels(da$UPC), but I am trying to count the unique values that exist in each group as part of multiple j parameters/calculations in a data.table group statement.

Interestingly, unique(da$UPC[1:1000000]) works as expected, but unique(da$UPC[1:10000000]) does not. Given that my table has 57m records, this is an issue.

I tried converting the factor to a character and that works no problem as follows:

    da$UPC = as.character(levels(da$UPC))[da$UPC]
    unique(da$UPC)

Doing this also reveals an additional "level", which is NA. So because my data has some NAs in a factor column, the unique function fails to work. I'm wondering if this is something the developers are aware of and something which needs to be fixed? I found the following thread on r-devel which might be relevant, but I'm not sure, and it does not mention data.table.
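For reference, the grouped distinct count described above can be sketched like this on toy data (the column names `store`/`UPC` and the data are hypothetical; on data.table 1.8.8 `length(unique(...))` on the character-converted column is the workaround, since `uniqueN()` only arrived in later data.table versions):

```r
library(data.table)

# Hypothetical small example: UPC is a factor with NAs, store is a grouping column
da <- data.table(
  store = rep(c("s1", "s2"), each = 4),
  UPC   = factor(c("a", "b", NA, "a", "c", NA, "c", "d"))
)

# Convert the factor to character first to sidestep unique.default()'s
# hash-table path on large factors containing NAs
da[, UPC := as.character(levels(UPC))[UPC]]

# Count distinct UPCs per store as one of several j calculations
da[, .(n_upc = length(unique(UPC)), n_rows = .N), by = store]
```

Here `unique()` keeps NA as a value, so each store counts its NAs as one distinct "UPC".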

Linked article: unique(1:3,nmax=1) freezes R!

    sessionInfo:

    R version 3.0.1 (2013-05-16)
    Platform: x86_64-unknown-linux-gnu (64-bit)

    locale:
     [1] LC_CTYPE=C                    LC_NUMERIC=C
     [3] LC_TIME=en_US.iso88591        LC_COLLATE=C
     [5] LC_MONETARY=en_US.iso88591    LC_MESSAGES=en_US.iso88591
     [7] LC_PAPER=C                    LC_NAME=C
     [9] LC_ADDRESS=C                  LC_TELEPHONE=C
     [11] LC_MEASUREMENT=en_US.iso88591 LC_IDENTIFICATION=C

    attached base packages:
    [1] stats     graphics  grDevices utils     datasets  methods   base

    other attached packages:
    [1] plyr_1.8         data.table_1.8.8

There are 4 answers

Patrick Rutz

This snippet of code should place your missing observations into a regular level, which will be more manageable to work with.

    # Need an additional level to place missings into first
    levels(da$UPC) <- c(levels(da$UPC), '(NA)')
    da$UPC[is.na(da$UPC)] <- '(NA)'
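On a toy factor, the effect of the recoding above looks like this (hypothetical data, just to show that NA becomes an ordinary level):

```r
# Toy illustration of recoding NA into an explicit level
f <- factor(c("x", NA, "y", NA))

levels(f) <- c(levels(f), "(NA)")  # add the level first
f[is.na(f)] <- "(NA)"              # then recode the missings

table(f)    # "(NA)" is now a regular level with count 2
unique(f)   # no NA values remain for unique() to trip over
```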

It sounds like you are ultimately trying to drop infrequent levels to assist in some sort of analysis. I wrote a function factorize() which I believe can help you. It buckets infrequent levels into an "Other" category.

Here's the link, please let me know if it helps.

factorize(): https://github.com/greenpat/R-Convenience/blob/master/factorize.R

(reproduced below)

    # This function takes a vector x and returns a factor representation of the same vector.
    # The key advantage of factorize is that you can assign levels for infrequent categories,
    # as well as empty and NA values. This makes it much easier to perform
    # multidimensional/thematic analysis on your largest population subsets.
    factorize <- function(
        x,  # vector to be transformed
        min_freq = .01,  # all levels < this % of records will be bucketed
        min_n = 1,  # all levels < this # of records will be bucketed
        NA_level = '(missing)',  # level created for NA values
        blank_level = '(blank)',  # level created for "" values
        infrequent_level = 'Other',  # level created for bucketing rare values
        infrequent_can_include_blank_and_NA = FALSE,  # by default NA and blank are not bucketed
        order = TRUE,  # default to ordered
        reverse_order = FALSE  # default to increasing order
    ) {
        if (!is.factor(x)) {
            x <- as.factor(x)
        }
        # suspect this is faster than reassigning a new factor object
        levels(x) <- c(levels(x), NA_level, infrequent_level, blank_level)

        # Swap out the NA and blank categories
        x[is.na(x)] <- NA_level
        x[x == ''] <- blank_level

        # Going to use this table to reorder
        f_tb <- table(x, useNA = 'always')

        # Which levels will be bucketed?
        infreq_set <- c(
            names(f_tb[f_tb < min_n]),
            names(f_tb[(f_tb / sum(f_tb)) < min_freq])
        )

        # If NA and/or blank were infrequent levels above, this prevents bucketing
        if (!infrequent_can_include_blank_and_NA) {
            infreq_set <- infreq_set[!infreq_set %in% c(NA_level, blank_level)]
        }

        # Relabel all the infrequent choices
        x[x %in% infreq_set] <- infrequent_level

        # Return the reordered factor
        reorder(droplevels(x), rep(1 - (2 * reverse_order), length(x)), FUN = sum, order = order)
    }
Tad Dallas

Could you use dplyr and get a different result? For instance, I set up some (small) fake data and then determine the distinct levels of alpha. I don't know how well this scales, though.

    library(dplyr)

    test <- data.frame(alpha = sample(c('a', 'b', 'c'), 100000, replace = TRUE),
                       num = runif(100000))

    uniqueAlpha <- distinct(select(test, alpha))
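Since the original goal was a distinct count in the presence of NAs, dplyr's `n_distinct()` may be a closer fit than `distinct()` (a sketch on similar fake data; note that `n_distinct()` counts NA as a value unless `na.rm = TRUE`):

```r
library(dplyr)

set.seed(42)
test <- data.frame(alpha = sample(c("a", "b", NA, "c"), 100000, replace = TRUE),
                   num   = runif(100000))

# NA counts as one distinct value by default
summarise(test, n_alpha = n_distinct(alpha))

# Drop NA from the count
summarise(test, n_alpha = n_distinct(alpha, na.rm = TRUE))
```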
denrou

Not sure it will solve the problem, but you can check Hadley Wickham's forcats package:

    library(forcats)
    fct_count(da$UPC)
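On a toy factor with NAs, the forcats route can be sketched like this; `fct_explicit_na()` turns NA into an explicit "(Missing)" level first, so it cannot be silently dropped from any count:

```r
library(forcats)

f <- factor(c("a", "b", NA, "a"))

# Make NA an explicit level, then tally one row per level
fct_count(fct_explicit_na(f))

nlevels(fct_explicit_na(f))  # 3: "a", "b", "(Missing)"
```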
Osdorp

Maybe I'm missing the point, but if it is a data.table object, you can use this to summarize the counts:

    da[, .N, by = UPC]

If it works, the unique values would be:

    # use a name that doesn't mask base::unique
    uniqueUPC <- da[, .N, by = UPC]$UPC
    length(uniqueUPC)

You can group by multiple columns too:

    da[, .N, by = .(A, B, C, ..)]
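This approach sidesteps unique() entirely, because data.table grouping treats NA as its own group. A small sketch on hypothetical data:

```r
library(data.table)

# Toy data: a factor UPC containing NAs
da <- data.table(UPC = factor(c("a", "b", NA, "a", "c", NA)))

counts <- da[, .N, by = UPC]
counts              # NA forms its own group, here with N = 2
length(counts$UPC)  # 4 distinct values, counting NA
```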