I am building n-grams from multiple text documents using scikit-learn, and I need to build the document frequency using CountVectorizer.
Example:
document1 = "john is a nice guy"
document2 = "person can be a guy"
So, the document frequency will be
{'be': 1,
'can': 1,
'guy': 2,
'is': 1,
'john': 1,
'nice': 1,
'person': 1}
Here the documents are just strings, but when I try it with a huge amount of data it throws a MemoryError.
Code:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
document = [Huge amount of data around 7MB] # ['john is a guy', 'person guy']
vectorizer = CountVectorizer(ngram_range=(1, 5))
X = vectorizer.fit_transform(document).todense()
transformer = vectorizer.transform(document).todense()
matrix_terms = np.array(vectorizer.get_feature_names())
lst_freq = map(sum, zip(*transformer.A))
matrix_freq = np.array(lst_freq)
final_matrix = np.array([matrix_terms,matrix_freq])
Error:
Traceback (most recent call last):
File "demo1.py", line 13, in build_ngrams_matrix
X = vectorizer.fit_transform(document).todense()
File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/base.py", line 605, in todense
return np.asmatrix(self.toarray(order=order, out=out))
File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/compressed.py", line 901, in toarray
return self.tocoo(copy=False).toarray(order=order, out=out)
File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/coo.py", line 269, in toarray
B = self._process_toarray_args(order, out)
File "/usr/local/lib/python2.7/dist-packages/scipy/sparse/base.py", line 789, in _process_toarray_args
return np.zeros(self.shape, dtype=self.dtype, order=order)
MemoryError
As the comments have mentioned, you're running into memory issues when you convert the large sparse matrices to dense format. Try something like this:
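The key change is to keep X as a SciPy sparse matrix and let it do the column sums via X.sum(axis=0), so the huge dense document-by-term array is never allocated. A sketch along those lines:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

document = ['john is a nice guy', 'person can be a guy']  # your real corpus goes here

vectorizer = CountVectorizer(ngram_range=(1, 5))
X = vectorizer.fit_transform(document)  # keep X sparse: no .todense()

# get_feature_names() is get_feature_names_out() in newer scikit-learn versions
matrix_terms = np.array(vectorizer.get_feature_names())
# Summing the columns of the sparse matrix gives the total count of every
# n-gram across all documents without ever building the dense matrix.
matrix_freq = np.asarray(X.sum(axis=0)).ravel()
final_matrix = np.array([matrix_terms, matrix_freq])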
EDIT: If you want a dictionary from term to frequency, try this after calling fit_transform:
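Something along these lines should work, reusing X and vectorizer from the snippet above:

import numpy as np

# Map every n-gram to its total count across the corpus, still without
# densifying the sparse matrix.
freqs = dict(zip(vectorizer.get_feature_names(),
                 np.asarray(X.sum(axis=0)).ravel()))

Note that summing raw counts gives term frequencies. If you want true document frequencies (the number of documents each n-gram appears in), pass binary=True to CountVectorizer so repeated occurrences within a single document only count once.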