Reputation: 6891
TfidfVectorizer takes a lot of memory: vectorizing 470 MB of 100k documents takes over 6 GB, and at 21 million documents it will not fit in the 60 GB of RAM we have.
So we switched to HashingVectorizer, but we still need to know how to distribute the hashing vectorizer. fit and partial_fit do nothing, so how should we work with a huge corpus?
Upvotes: 4
Views: 5401
Reputation: 41
One way to overcome the inability of HashingVectorizer to account for IDF is to index your data into Elasticsearch or Lucene and retrieve term vectors from there, from which you can calculate the TF-IDF yourself.
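For example, a minimal sketch with the official Elasticsearch Python client (the index name `docs`, the document id, and the field `text` are placeholders, and the smoothed IDF formula is just one common choice):

```python
from elasticsearch import Elasticsearch
import math

es = Elasticsearch("http://localhost:9200")

# Ask Elasticsearch for the term vectors of one indexed document,
# including per-term document frequencies across the whole index.
resp = es.termvectors(index="docs", id="1", fields=["text"],
                      term_statistics=True)

n_docs = es.count(index="docs")["count"]
terms = resp["term_vectors"]["text"]["terms"]

# TF-IDF with a smoothed IDF: tf * log(N / (1 + df)).
tfidf = {t: s["term_freq"] * math.log(n_docs / (1 + s["doc_freq"]))
         for t, s in terms.items()}
```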
Upvotes: 1
Reputation: 40159
I would strongly recommend using the HashingVectorizer when fitting models on large datasets.
The HashingVectorizer is data-independent: only the parameters from vectorizer.get_params() matter. Hence (un)pickling a HashingVectorizer instance should be very fast.
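As a quick illustration (a sketch, not from the original answer): the pickled payload is just the constructor parameters, so it stays tiny no matter how much data the vectorizer has transformed.

```python
import pickle
from sklearn.feature_extraction.text import HashingVectorizer

vec = HashingVectorizer(n_features=2**20)
blob = pickle.dumps(vec)   # serializes only the get_params()-style settings
print(len(blob))           # roughly a few hundred bytes, independent of any corpus
```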
The vocabulary-based vectorizers are better suited for exploratory analysis on small datasets.
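Because the hashing vectorizer never needs fitting, you can stream the corpus in batches and pair it with any estimator that supports partial_fit. A minimal sketch (iter_batches and the binary labels are placeholders for your own data pipeline):

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**20)  # stateless, fixed memory
clf = SGDClassifier()

def iter_batches():
    # Placeholder: yield (texts, labels) chunks read from disk or a database.
    yield ["first tiny example", "second tiny example"], [1, 0]

for texts, labels in iter_batches():
    X = vectorizer.transform(texts)             # no fit / partial_fit needed
    clf.partial_fit(X, labels, classes=[0, 1])  # out-of-core learning
```

Only the classifier accumulates state here; the vectorizer's memory use is bounded by n_features, not by the corpus size.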
Upvotes: 10