Reputation: 241
I am new to word embeddings and TensorFlow. I am working on a project where I need to apply word2vec to health data.
I used the code from the TensorFlow website (word2vec_basic.py). I modified this code slightly so that it reads my data instead of "text8.zip", and it runs normally until the last step:
num_steps = 100001

with tf.Session(graph=graph) as session:
  # We must initialize all variables before we use them.
  tf.initialize_all_variables().run()
  print('Initialized')
  average_loss = 0
  for step in range(num_steps):
    batch_data, batch_labels = generate_batch(
        batch_size, num_skips, skip_window)
    feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
    _, l = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += l
    if step % 2000 == 0:
      if step > 0:
        average_loss = average_loss / 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step %d: %f' % (step, average_loss))
      average_loss = 0
    # note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 10000 == 0:
      sim = similarity.eval()
      for i in range(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8  # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k+1]
        log = 'Nearest to %s:' % valid_word
        for k in range(top_k):
          close_word = reverse_dictionary[nearest[k]]
          log = '%s %s,' % (log, close_word)
        print(log)
  final_embeddings = normalized_embeddings.eval()
This code is exactly the same as the example, so I don't think the code itself is wrong. The error it gave is:
KeyError Traceback (most recent call last)
<ipython-input-20-fc4c5c915fc6> in <module>()
34 for k in xrange(top_k):
35 print(nearest[k])
---> 36 close_word = reverse_dictionary[nearest[k]]
37 log_str = "%s %s," % (log_str, close_word)
38 print(log_str)
KeyError: 2868
I changed the size of the input data, but it still gives the same error.
I would really appreciate it if someone could give me some advice on how to fix this problem.
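In case it is relevant, here is a quick check I can run right after build_dataset (a minimal sketch, assuming the variable names from word2vec_basic.py) to compare the declared vocabulary size with the dictionary actually built from my data:

# Hypothetical diagnostic: compare the declared vocabulary size
# with the number of words actually indexed from the input data.
print('vocabulary_size:', vocabulary_size)
print('len(dictionary):', len(dictionary))
print('len(reverse_dictionary):', len(reverse_dictionary))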
Upvotes: 1
Views: 1016
Reputation: 714
If your actual vocabulary is smaller than the default maximum (50000), you need to update vocabulary_size to match.
At the end of step 2, set vocabulary_size to the actual dictionary size:
data, count, dictionary, reverse_dictionary = build_dataset(words)
del words  # Hint to reduce memory.
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])

# Add this line so vocabulary_size matches the dictionary actually built.
vocabulary_size = len(dictionary)
print('Dictionary size', len(dictionary))
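The KeyError happens because the embedding and similarity matrices are created with vocabulary_size rows, while reverse_dictionary only contains len(dictionary) entries; the argsort over a similarity row can therefore return an index (2868 in your case) that has no word attached. If you would rather leave vocabulary_size alone, a defensive lookup also avoids the crash (a sketch, not part of the original tutorial):

# Hypothetical variant of the nearest-neighbor loop: skip embedding rows
# that were never assigned a word in the dictionary.
for k in range(top_k):
  close_word = reverse_dictionary.get(nearest[k])
  if close_word is None:
    continue  # index beyond the real vocabulary, ignore it
  log = '%s %s,' % (log, close_word)
print(log)

That said, shrinking vocabulary_size as shown above is the better fix: the extra embedding rows are never trained, so any similarities involving them are meaningless.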
Upvotes: 1