Lucky

Reputation: 905

Word embedding visualization using t-SNE is not clear

I have downloaded a pre-trained word-embedding model from Word Embeddings by M. Baroni et al. I want to visualize the embeddings of the words in two sentences:

sentence1 = "Four people died in an accident."

sentence2 = "4 men are dead from a collision"

I have a function to load the embeddings file from the link above:

def load_data(FileName='./EN-wform.w.5.cbow.neg10.400.subsmpl.txt'):

    embeddings = {}
    print("Loading word embeddings for the first time")
    with open(FileName, 'r') as f:
        for line in f:
            tokens = line.split('\t')

            # each line ends with '\n', so strip it from the last token
            tokens[-1] = tokens[-1].strip()

            # each line holds the word followed by its 400 float components
            for i in range(1, len(tokens)):
                tokens[i] = float(tokens[i])

            # tokens[1:] keeps all 400 dimensions; tokens[1:-1] would
            # silently drop the last one
            embeddings[tokens[0]] = tokens[1:]
    print("finished")
    return embeddings

e = load_data()

From both sentences I compute the lemmas of the words and drop stopwords and punctuation, so the sentences become:

sentence1 = ['Four', 'people', 'died', 'accident']
sentence2 = ['4', 'men', 'dead', 'collision']
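The preprocessing step can be sketched roughly like this (the stopword set and the punctuation stripping here are hypothetical stand-ins; the actual lemmatizer and stopword list are not shown in the question):

```python
import string

# Hypothetical minimal stopword set; a real pipeline would use NLTK or spaCy.
STOPWORDS = {"in", "an", "are", "from", "a", "the"}

def preprocess(sentence):
    """Tokenize on whitespace, strip punctuation, drop stopwords."""
    tokens = []
    for raw in sentence.split():
        token = raw.strip(string.punctuation)
        if token and token.lower() not in STOPWORDS:
            tokens.append(token)
    return tokens

print(preprocess("Four people died in an accident."))
# ['Four', 'people', 'died', 'accident']
print(preprocess("4 men are dead from a collision"))
# ['4', 'men', 'dead', 'collision']
```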

Now, when I try to visualize the embeddings using t-SNE (t-distributed stochastic neighbor embedding), I first collect the labels and embedding vectors for each sentence:

#for sentence store labels and embeddings in list
# tokens contains vector of 400 dimensions for each label
labels1 = []
tokens1 = []
for i in sentence1:
    if i in e:
        labels1.append(i)
        tokens1.append(e[i])
    else:
        print(i)

labels2 = []
tokens2 = []
for i in sentence2:
    if i in e:
        labels2.append(i)
        tokens2.append(e[i])
    else:
        print(i)

For t-SNE:

tsne_model = TSNE(perplexity=40, n_components=2, init='random', n_iter=2000, random_state=23)
# fit transform for tokens of both sentences
new_values = tsne_model.fit_transform(tokens1)
new_values1 = tsne_model.fit_transform(tokens2)

#Plot values
x = []
y = []
x1 = []
y1 = []

for value in new_values:
    x.append(value[0])
    y.append(value[1])

for value in new_values1:
    x1.append(value[0])
    y1.append(value[1])


plt.figure(figsize=(10, 10)) 

for i in range(len(x)):
    plt.scatter(x[i],y[i])
    plt.annotate(labels[i],
                 xy=(x[i], y[i]),
                 xytext=(5, 2),
                 textcoords='offset points',
                 ha='right',
                 va='bottom')

for i in range(len(x1)):
    plt.scatter(x1[i],y1[i])
    plt.annotate(labels[i],
                 xy=(x1[i], y1[i]),
                 xytext=(5, 2),
                 textcoords='offset points',
                 ha='right',
                 va='bottom')

plt.show()

[Plot of the labels in 2 dimensions]

My question is: why do synonymous words such as "collision" and "accident" end up at different coordinates? If words are the same or are synonyms, shouldn't they be closer together?

distances = euclidean_distances(tokens1 + tokens2)  # returns shape (8, 8)
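Those distances live in the original 400-dimensional embedding space, which is where semantic closeness should be judged. A pure-Python sketch of the pairwise computation, with toy 3-d vectors standing in for the real embeddings:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def pairwise_distances(vectors):
    """Symmetric matrix of pairwise Euclidean distances."""
    n = len(vectors)
    return [[euclidean(vectors[i], vectors[j]) for j in range(n)]
            for i in range(n)]

# Toy 3-d stand-ins for the 400-d embeddings of three of the words.
toy = {
    "accident":  [1.0, 0.0, 0.0],
    "collision": [0.9, 0.1, 0.0],
    "died":      [0.0, 1.0, 0.0],
}
words = list(toy)
d = pairwise_distances([toy[w] for w in words])

# In this toy space "accident" is nearer to "collision" than to "died".
print(d[0][1] < d[0][2])  # True
```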

Upvotes: 3

Views: 1168

Answers (1)

Philip

Reputation: 3414

From the t-SNE documentation:

t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results.

This means you are not guaranteed to get the same coordinates each time you perform a dimensionality reduction of the word embeddings.

To solve this, run fit_transform once instead of twice, by joining your sentences:

sentence1 = ['Four', 'people', 'died', 'accident']
sentence2 = ['4', 'men', 'dead', 'collision']
sentences = list(set(sentence1) | set(sentence2))
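A minimal sketch of the joint fit, with random vectors standing in for the real 400-dimensional embeddings (note that scikit-learn also requires perplexity to be smaller than the number of samples, so the perplexity=40 from the question will not work with only 8 words):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in embeddings: in the real code these come from e[word] (400-d each).
rng = np.random.default_rng(23)
words = ['Four', 'people', 'died', 'accident', '4', 'men', 'dead', 'collision']
vectors = rng.normal(size=(len(words), 400))

# One joint fit: all eight words share a single 2-d coordinate system,
# so their relative positions are directly comparable.
tsne = TSNE(n_components=2, perplexity=3, init='random', random_state=23)
coords = tsne.fit_transform(vectors)
print(coords.shape)  # (8, 2)
```

With a single coordinate system you can then split `coords` back into the two sentences for plotting, instead of running two independent reductions.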

EDIT: There is also a bug in your code: both plotting loops annotate with labels, but they should use labels1 and labels2 respectively.

Upvotes: 1
