qwertpoi

Reputation: 21

ValueError with multilingual pre-trained Wiki word vectors

I'm trying to use multilingual pre-trained Wiki word vectors from FastText (https://fasttext.cc/docs/en/pretrained-vectors.html).

I scraped the vectors from the website in the following way:

import requests

# link to vector file for German
url = 'https://dl.fbaipublicfiles.com/fasttext/vectors-aligned/wiki.de.align.vec'
r = requests.get(url, stream = True)

if r.encoding is None:
    r.encoding = 'utf-8'

with open('/Users/LNV/OneDrive/Desktop/Jupiter_Notebook/Intro to ML/vector-biases/data/extract_DE.txt', 'w', encoding="utf-8") as fp:
    for line_num, vector in enumerate(r.iter_lines(decode_unicode = True)):
        fp.write(vector)
        fp.write('\n')
        # header line + first 20,000 word vectors
        if line_num == 20_000:
            break

And removed the first line:

with open('/Users/LNV/OneDrive/Desktop/Jupiter_Notebook/Intro to ML/vector-biases/data/extract_DE.txt', 'r', encoding="utf-8") as fp:
    deu_input = fp.readlines()

with open('/Users/LNV/OneDrive/Desktop/Jupiter_Notebook/Intro to ML/vector-biases/data/extract_DE_nofirstline.txt', 'w', encoding="utf-8") as deu_output:
    # skip the header line (index 0), copy the rest
    for index, line in enumerate(deu_input):
        if index != 0:
            deu_output.write(line)

This works for some languages and up to a certain number of vectors, but for other languages, or beyond a certain number of vectors, I get the following error:

Traceback (most recent call last):
  File "explorer_ES.py", line 22, in <module>
    ns = neighbours(vectors,w,20)  # neighbours is what I imported from utils, w is the word I entered, and I get 20 examples of nearest neighbours
  File "/mnt/c/Users/LNV/OneDrive/Desktop/Jupiter_Notebook/Intro to ML/vector-biases/utils.py", line 31, in neighbours
    cos = cosine_similarity(dm, w, k)
  File "/mnt/c/Users/LNV/OneDrive/Desktop/Jupiter_Notebook/Intro to ML/vector-biases/utils.py", line 21, in cosine_similarity
    num = np.dot(dm[w1],dm[w2])
  File "<__array_function__ internals>", line 5, in dot
ValueError: shapes (300,) and (299,) not aligned: 300 (dim 0) != 299 (dim 0)

For instance, I get this error when running the code below against a German file I scraped as described above (with the first line removed). The same error appears with some other languages, but not all of them.

from utils import readDM, cosine_similarity, neighbours  
import sys

fasttext_vecs="./data/extract_DE_nofirstline.txt"  
print("Reading vectors...")
vectors = readDM(fasttext_vecs)


f = ""

while f != 'q':
    f = input("\nWhat would you like to do? (n = nearest neighbours, s=similarity, q=quit) ")

    while f == 'n':
        w = input("Enter a word or 'x' to exit nearest neighbours: ")

        if w == 'x':
            f = 'x'
        else:
            ns = neighbours(vectors,w,20)  # neighbours is what I imported from utils, w is the word I entered, and I get 20 examples of nearest neighbours
            print(ns)

    while f == 's':
        w = input("Input two words separated by a space or 'x' to exit similarity: ")
        
        if w == 'x':
            f = 'x'
        else:
            w1,w2 = w.split()   # splits a string into a list
            if w1 in vectors and w2 in vectors:
                sim = cosine_similarity(vectors,w1,w2)
                print("SIM",w1,w2,sim)
            else:
                print("Word(s) not found in space.")
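One way to narrow down a shape mismatch like the one in the traceback is to check whether the saved file itself contains a line with the wrong number of values. A minimal sketch (the helper name is mine; the 300-dimension count comes from the traceback, and I'm assuming the header line has already been removed and that words contain no whitespace):

```python
# Sketch: report lines in a saved .vec file whose vector length is off.
# Assumes the header line was removed and that a space separates the word
# from its float values (as in the FastText wiki .vec format).
def find_malformed_lines(path, dims=300):
    bad = []
    with open(path, encoding="utf-8") as fp:
        for line_num, line in enumerate(fp, start=1):
            parts = line.split()
            # parts[0] is the word; the rest should be exactly `dims` floats
            if len(parts) - 1 != dims:
                bad.append((line_num, parts[0] if parts else '', len(parts) - 1))
    return bad

# e.g. find_malformed_lines('./data/extract_DE_nofirstline.txt')
```

Any line this reports (say, one with 299 values) would explain the (300,) vs (299,) mismatch; a line truncated during the streamed download is one possible culprit.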

Upvotes: 0

Views: 140

Answers (1)

gojomo

Reputation: 54233

Since you're only using the plain-text full-word vectors, you could use an off-the-shelf library like Gensim to read them.

Its loading function has a limit option to read just the first N vectors from the front of the file, saving memory. That way you don't have to modify any files yourself (and you avoid the risk of encoding/rewriting issues).

For example:

from gensim.models import KeyedVectors

# read 1st 20k word vectors
vecs_de_align = KeyedVectors.load_word2vec_format('wiki.de.align.vec', binary=False, limit=20000)

# get 20 nearest-neighbors of a word (most_similar defaults to 10)
similars = vecs_de_align.most_similar('Apfel', topn=20)
print(similars)

Upvotes: 0
