Vahid SJ

Reputation: 423

How to load a pre-trained Word2vec MODEL File and reuse it?

I want to use a pre-trained word2vec model, but I don't know how to load it in python.

This file is a MODEL file (703 MB). It can be downloaded here:
http://devmount.github.io/GermanWordEmbeddings/

Upvotes: 19

Views: 48291

Answers (5)

Eric Duminil

Reputation: 54233

Since you specifically mentioned the German Word2Vec, here's an up-to-date example, with GenSim 4.3.0:

from gensim.models import KeyedVectors

# NOTE: 'german.model' is available at https://devmount.github.io/GermanWordEmbeddings/
word2vec_path = 'german.model'
model = KeyedVectors.load_word2vec_format(word2vec_path, binary=True)
model.most_similar(model['Frau'] + model['Kind'])
# [('Kind', 0.8979102969169617), ('Frau', 0.8766001462936401),  ('Mutter', 0.8282196521759033), ...
model.most_similar(model['Obama'] - model['USA'] + model['Russland'])
# [('Obama', 0.8849074840545654), ('US-Praesident_Obama', 0.8133699893951416),  ('Putin', 0.7943856120109558), ...
model['Frankreich']
# array([-0.0014747 ,  0.09541887,  0.10959213,  0.12412726,  0.06772646, ....
model['NotAWord']
# KeyError
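Under the hood, most_similar ranks every word in the vocabulary by cosine similarity against the query vector. Here's a minimal NumPy sketch of that ranking step; the vocabulary and vectors below are made up purely for illustration, not taken from the German model:

```python
import numpy as np

def most_similar(query, vocab, matrix, topn=3):
    """Rank words by cosine similarity between `query` and each
    row of the embedding matrix (a sketch of what
    KeyedVectors.most_similar computes)."""
    norms = np.linalg.norm(matrix, axis=1) * np.linalg.norm(query)
    sims = matrix @ query / norms
    best = np.argsort(-sims)[:topn]
    return [(vocab[i], float(sims[i])) for i in best]

# Toy data: four "words" with 2-dimensional vectors.
vocab = ['Frau', 'Kind', 'Mutter', 'Auto']
matrix = np.array([[1.0, 0.0],
                   [0.8, 0.6],
                   [0.9, 0.4],
                   [0.0, 1.0]])
query = matrix[0] + matrix[1]   # analogous to model['Frau'] + model['Kind']
print(most_similar(query, vocab, matrix))
# 'Mutter' ranks first for this toy data
```

Note that the summed query vector is itself close to its two operands, which is why gensim's real output above lists 'Kind' and 'Frau' at the top as well.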

Upvotes: 1

Nilani Algiriyage

Reputation: 35696

Use KeyedVectors to load the pre-trained model.

from gensim.models import KeyedVectors

word2vec_path = 'path/GoogleNews-vectors-negative300.bin.gz'
w2v_model = KeyedVectors.load_word2vec_format(word2vec_path, binary=True)

Upvotes: 8

Jeehaan Algaraady

Reputation: 11

I ran into the same issue. I downloaded GoogleNews-vectors-negative300 from Kaggle, saved and extracted the file on my desktop, and then this code worked well:

from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format(r'C:/Users/juana/desktop/archive/GoogleNews-vectors-negative300.bin', binary=True)

Upvotes: 1

user18081522

I used the same model in my code and since I couldn't load it, I asked the author about it. His answer was that the model has to be loaded in binary format:

import gensim

model = gensim.models.KeyedVectors.load_word2vec_format(w2v_path, binary=True)

This worked for me, and I think it should work for you, too.
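For context on what binary=True means: the classic word2vec .bin layout is a plain-text header "<vocab_size> <dim>", then for each word its bytes, a single space, and <dim> little-endian float32 values. Here's a minimal parser sketch of that format (real loaders such as gensim's also handle gzip, a limit= parameter, and encoding quirks):

```python
import io
import struct

def read_word2vec_bin(f):
    """Parse the classic word2vec binary format: a text header
    '<vocab_size> <dim>\\n', then for each word the word's bytes,
    a space, and <dim> little-endian float32 values."""
    header = f.readline().split()
    vocab_size, dim = int(header[0]), int(header[1])
    vectors = {}
    for _ in range(vocab_size):
        # Read the word one byte at a time until the separating space.
        word_bytes = bytearray()
        while True:
            ch = f.read(1)
            if not ch or ch == b' ':
                break
            word_bytes.extend(ch)
        # Some files put a '\n' before each word; strip() handles that.
        word = word_bytes.decode('utf-8').strip()
        vectors[word] = struct.unpack('<%df' % dim, f.read(4 * dim))
    return vectors

# Build a tiny two-word "model file" in memory and parse it back.
buf = io.BytesIO()
buf.write(b'2 3\n')
buf.write(b'hello ' + struct.pack('<3f', 1.0, 2.0, 3.0))
buf.write(b'world ' + struct.pack('<3f', 4.0, 5.0, 6.0))
buf.seek(0)
vecs = read_word2vec_bin(buf)
print(vecs['hello'])  # (1.0, 2.0, 3.0)
```

If a file fails to load with binary=True, it may actually be in the text format (one word plus space-separated floats per line), which loads with binary=False instead.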

Upvotes: 4

AbtPst

Reputation: 8018

Just for loading:

import gensim

# Load pre-trained Word2Vec model.
model = gensim.models.Word2Vec.load("modelName.model")

Now you can train the model as usual. Also, if you want to be able to save it and retrain it multiple times, here's what you should do:

model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)  # substitute your own corpus and parameters
"""
If you don't plan to train the model any further, calling
init_sims() will make the model much more memory-efficient.
If `replace` is set, the original vectors are discarded and only the
normalized ones are kept, which saves a lot of memory.
Use replace=True only if you're done training and just want to query the model.
(Note: init_sims is deprecated in gensim 4, where normalization is handled automatically.)
"""
model.init_sims(replace=True)

# save the model for later use
# for loading, call Word2Vec.load()

model.save("modelName.model")

Upvotes: 30
