Ammar Rashed

Reputation: 75

Convert Python dictionary to Word2Vec object

I have obtained a dictionary mapping words to their vectors in Python, and I want to scatter-plot the n most similar words, since running t-SNE on a huge number of words takes forever. The best option seems to be to convert the dictionary to a Word2Vec-style object so I can work with it directly.

Upvotes: 4

Views: 3106

Answers (2)

Haiovych Bogdan

Reputation: 76

I had the same issue and I finally found a solution.

So, I assume that your dictionary looks like mine:

import numpy as np

d = {}
d['1'] = np.random.randn(300)
d['2'] = np.random.randn(300)

Basically, the keys are the users' ids and each of them has a vector with shape (300,).

So now, in order to use it as word2vec, I need to first save it to a binary file and then load it with the gensim library:

import gensim
import numpy as np
from numpy import float32 as REAL
from gensim import utils

m = gensim.models.keyedvectors.Word2VecKeyedVectors(vector_size=300)
m.vocab = d
m.vectors = np.array(list(d.values()))
my_save_word2vec_format(binary=True, fname='train.bin', total_vec=len(d),
                        vocab=m.vocab, vectors=m.vectors)

Where my_save_word2vec_format function is:

def my_save_word2vec_format(fname, vocab, vectors, binary=True, total_vec=None):
    """Store the input-hidden weight matrix in the same format used by the original
    C word2vec-tool, for compatibility.

    Parameters
    ----------
    fname : str
        The file path used to save the vectors in.
    vocab : dict
        The vocabulary of words.
    vectors : numpy.array
        The vectors to be stored.
    binary : bool, optional
        If True, the data will be saved in binary word2vec format, else it will be saved in plain text.
    total_vec : int, optional
        Explicitly specify total number of vectors
        (in case word vectors are appended with document vectors afterwards).

    """
    if not (vocab or vectors):
        raise RuntimeError("no input")
    if total_vec is None:
        total_vec = len(vocab)
    vector_size = vectors.shape[1]
    assert (len(vocab), vector_size) == vectors.shape
    with utils.smart_open(fname, 'wb') as fout:
        # header line: "<total_vec> <vector_size>"
        fout.write(utils.to_utf8("%s %s\n" % (total_vec, vector_size)))
        for word, row in vocab.items():
            if binary:
                row = row.astype(REAL)
                fout.write(utils.to_utf8(word) + b" " + row.tobytes())
            else:
                fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join(repr(val) for val in row))))
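As a sanity check, the binary layout this function writes (a `count dim` header line, then each word, a space, and `dim` raw float32 values) can be read back without gensim at all. A minimal round-trip sketch in plain NumPy, with hypothetical helper names:

```python
import numpy as np

def write_w2v_binary(fname, d):
    # header: "<count> <dim>\n", then per word: "<word> " + raw float32 bytes
    dim = len(next(iter(d.values())))
    with open(fname, 'wb') as fout:
        fout.write(("%d %d\n" % (len(d), dim)).encode('utf8'))
        for word, vec in d.items():
            fout.write(word.encode('utf8') + b" "
                       + np.asarray(vec, dtype=np.float32).tobytes())

def read_w2v_binary(fname):
    vectors = {}
    with open(fname, 'rb') as fin:
        count, dim = map(int, fin.readline().split())
        for _ in range(count):
            # the word runs up to the separating space
            word = b""
            while True:
                ch = fin.read(1)
                if ch == b" ":
                    break
                word += ch
            vectors[word.decode('utf8')] = np.frombuffer(
                fin.read(4 * dim), dtype=np.float32)
    return vectors
```

Reading back what you wrote is a cheap way to confirm the file is laid out the way the loader expects before handing it to gensim.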

Then use

m2 = gensim.models.keyedvectors.Word2VecKeyedVectors.load_word2vec_format('train.bin', binary=True)

to load the model back as word2vec-style KeyedVectors.
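Once loaded, `m2.most_similar(word, topn=n)` gives you the neighbours to feed into t-SNE. If you'd rather keep the plotting step gensim-free, the top-n lookup is just cosine similarity over the vector matrix; a minimal NumPy sketch (function and argument names are illustrative, not from gensim):

```python
import numpy as np

def top_n_similar(word, words, vectors, n=10):
    # words: list of tokens; vectors: (len(words), dim) matrix, rows aligned with words
    idx = words.index(word)
    # normalise rows so the dot product is cosine similarity
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    # best matches first, skipping the query word itself
    order = np.argsort(-sims)
    return [(words[i], float(sims[i])) for i in order if i != idx][:n]
```

Running t-SNE on only these n vectors, instead of the whole vocabulary, is what makes the scatter plot tractable.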

Upvotes: 6

gojomo

Reputation: 54173

If you've calculated the word-vectors with your own code, you may want to write them to a file in a format compatible with Google's original word2vec.c or gensim. You can review the gensim code in KeyedVectors.save_word2vec_format() to see exactly how its vectors are written – it's less than 20 lines of code – and do something similar with your vectors. See:

https://github.com/RaRe-Technologies/gensim/blob/3d2227d58b10d0493006a3d7e63b98d64e991e60/gensim/models/keyedvectors.py#L130

Then you could re-load vectors that originated with your code and use them almost directly with examples like the one from Jeff Delaney you mention.
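For reference, the non-binary variant of that format is just a header line followed by one line per word with its space-separated values. A minimal sketch of writing it yourself, assuming a plain dict of NumPy vectors (the function name is illustrative):

```python
import numpy as np

def save_word2vec_text(fname, d):
    # plain-text word2vec format: "<count> <dim>" header,
    # then one "word v1 v2 ... vN" line per entry
    dim = len(next(iter(d.values())))
    with open(fname, 'w', encoding='utf8') as fout:
        fout.write("%d %d\n" % (len(d), dim))
        for word, vec in d.items():
            fout.write(word + " " + " ".join(repr(float(v)) for v in vec) + "\n")
```

A file written this way should load with KeyedVectors.load_word2vec_format(fname, binary=False).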

Upvotes: 0
