Creating n-grams with scikit-learn's CountVectorizer throws MemoryError


I am building n-grams from multiple text documents using scikit-learn, and I need to build document frequencies using CountVectorizer.

Example:

document1 = "john is a nice guy"

document2 = "person can be a guy"

So the document frequencies will be:

{'be': 1,
 'can': 1,
 'guy': 2,
 'is': 1,
 'john': 1,
 'nice': 1,
 'person': 1}
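
For this toy example, those counts can be reproduced without ever densifying anything; a minimal sketch, assuming CountVectorizer's default tokenizer (which drops single-character tokens such as 'a') and binary=True so that column sums give document frequencies rather than raw term counts:

from sklearn.feature_extraction.text import CountVectorizer

docs = ["john is a nice guy", "person can be a guy"]

# binary=True records presence/absence per document, so the column
# sums below are document frequencies rather than raw term counts
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(docs)  # sparse 2x7 matrix, never densified

# get_feature_names_out() in scikit-learn >= 1.0
df = dict(zip(vectorizer.get_feature_names(), X.sum(axis=0).A1))
# {'be': 1, 'can': 1, 'guy': 2, 'is': 1, 'john': 1, 'nice': 1, 'person': 1}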

Here the documents are just short strings, but when I try the same thing with a large amount of data it throws a MemoryError.

Code:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

document = [...]  # huge list of strings, ~7 MB in total; e.g. ['john is a guy', 'person guy']
vectorizer = CountVectorizer(ngram_range=(1, 5))
X = vectorizer.fit_transform(document).todense()  # MemoryError is raised here
transformer = vectorizer.transform(document).todense()
matrix_terms = np.array(vectorizer.get_feature_names())
lst_freq = map(sum, zip(*transformer.A))
matrix_freq = np.array(lst_freq)
final_matrix = np.array([matrix_terms, matrix_freq])

Error:

Traceback (most recent call last):
  File "demo1.py", line 13, in build_ngrams_matrix
    X = vectorizer.fit_transform(document).todense()
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/base.py", line 605, in todense
    return np.asmatrix(self.toarray(order=order, out=out))
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/compressed.py", line 901, in toarray
    return self.tocoo(copy=False).toarray(order=order, out=out)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/coo.py", line 269, in toarray
    B = self._process_toarray_args(order, out)
  File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/base.py", line 789, in _process_toarray_args
    return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError

1 Answer

perimosocordiae (best answer):

As the comments have mentioned, you're running into memory issues when you convert the large sparse matrices to dense format. Try something like this:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

document = [...]  # huge list of strings, ~7 MB in total; e.g. ['john is a guy', 'person guy']
vectorizer = CountVectorizer(ngram_range=(1, 5))

# No need for both X and transformer; fit_transform and transform
# on the same data return identical matrices
X = vectorizer.fit_transform(document)  # keep it sparse -- no .todense()

matrix_terms = np.array(vectorizer.get_feature_names())

# Use the axis keyword to sum over rows without densifying the matrix
matrix_freq = np.asarray(X.sum(axis=0)).ravel()
final_matrix = np.array([matrix_terms, matrix_freq])
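
To see why the original .todense() call blows up, estimate the allocation it asks for; a back-of-envelope sketch with purely hypothetical shapes (substitute your own X.shape):

# Hypothetical numbers for illustration only -- check X.shape yourself.
# With ngram_range=(1, 5) over ~7 MB of text, the vocabulary can easily
# reach hundreds of thousands of distinct n-grams.
n_docs, n_features = 10000, 500000
bytes_needed = n_docs * n_features * 8  # np.zeros allocates 8-byte int64 entries
print(bytes_needed / 1e9, "GB")         # 40.0 GB for a single dense array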

EDIT: If you want a dictionary from term to frequency, try this after calling fit_transform:

terms = vectorizer.get_feature_names()
freqs = X.sum(axis=0).A1
result = dict(zip(terms, freqs))
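
Run against the toy documents from the question, this gives corpus-wide n-gram counts (a quick check; the two-document list below is the question's own example):

document = ["john is a nice guy", "person can be a guy"]
vectorizer = CountVectorizer(ngram_range=(1, 5))
X = vectorizer.fit_transform(document)

terms = vectorizer.get_feature_names()  # get_feature_names_out() in scikit-learn >= 1.0
freqs = X.sum(axis=0).A1
result = dict(zip(terms, freqs))
# e.g. result['guy'] == 2 and result['nice guy'] == 1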