Arshad Shaik

Reputation: 17

vocab size versus vector size in word2vec

I have a dataset of 6,200 sentences (which are triplets of the form "sign_or_symptoms diagnoses Pathologic_function"); however, the number of unique words (the vocabulary) across these sentences is only 181. What would be an appropriate vector size for training a model on sentences with such a low vocabulary? Is there any resource or research on choosing the vector size based on vocabulary size?

Upvotes: 0

Views: 1383

Answers (1)

gojomo

Reputation: 54163

The best practice is to test different vector sizes against your true end-task.

That's an incredibly small corpus and vocabulary size for word2vec. The technique might not be appropriate at all, as it gets its power from large, varied training sets.

But on the bright side, you can run lots of trials with different parameters very quickly!

You absolutely can't use a vector dimensionality as large as your vocabulary (181), or even very close to it. In such a case, the model is certain to 'overfit': it simply memorizes the effect of each word in isolation, with none of the necessary 'tug-of-war' trade-offs, forcing words nearer to or farther from each other, that create the special value/generality of word2vec models.

My very loose rule of thumb would be to investigate dimensionalities around the square root of the vocabulary size (√181 ≈ 13.5 here). Also, multiples of 4 tend to work best in the underlying array routines (at least when performance is critical, which it might not be with such a tiny dataset). So I'd try 12 or 16 dimensions first, then explore other lower/higher values based on some quantitative quality evaluation on your real task, as in the sketch below.
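For concreteness, here's a rough sketch of what such a dimensionality sweep might look like with gensim's `Word2Vec`. The triplet corpus and the `most_similar` check are hypothetical placeholders; substitute your real 6,200 sentences and an evaluation tied to your actual end-task:

```python
# A minimal sketch, assuming gensim 4.x (where the parameter is
# 'vector_size'; in gensim 3.x it was called 'size').
from gensim.models import Word2Vec

# Hypothetical stand-in for your corpus: each "sentence" is a 3-token
# triplet of the form "sign_or_symptoms diagnoses Pathologic_function".
corpus = [
    ["fever", "influenza", "inflammation"],
    ["cough", "bronchitis", "obstruction"],
    # ... ~6200 such triplets in the real data
]

# Bracket sqrt(vocab_size) ~= 13.5 with multiple-of-4 dimensionalities.
for dims in (8, 12, 16, 24):
    model = Word2Vec(
        corpus,
        vector_size=dims,
        window=2,      # sentences are only 3 tokens long
        min_count=1,   # tiny vocabulary: keep every word
        epochs=200,    # tiny corpus: many passes are cheap and help
        seed=42,
    )
    # Placeholder sanity check -- replace with a real, quantitative
    # evaluation on your end-task:
    print(dims, model.wv.most_similar("fever", topn=3))
```

With a corpus this small, each full sweep should run in seconds, which is exactly why trying many parameter combinations is cheap here.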

But again, you're working with such a tiny dataset that, unless your 'sentences' are actually quite long, word2vec may be a very weak technique for you without more data.

Upvotes: 1
