Reputation: 581
I seem to be getting all the correct results until the very last step. My array of results keeps coming back empty.
I'm trying to follow this tutorial to compare 6 sets of notes:
https://www.oreilly.com/learning/how-do-i-compare-document-similarity-using-python
I have this so far:
import gensim
from nltk.tokenize import word_tokenize

#tokenize an array of all text
raw_docs = [Notes_0, Notes_1, Notes_2, Notes_3, Notes_4, Notes_5]
gen_docs = [[w.lower() for w in word_tokenize(text)]
for text in raw_docs]
#create dictionary
dictionary_interactions = gensim.corpora.Dictionary(gen_docs)
print("Number of words in dictionary: ", len(dictionary_interactions))
#create a corpus
corpus_interactions = [dictionary_interactions.doc2bow(doc) for doc in gen_docs]
len(corpus_interactions)
#convert to tf-idf model
tf_idf_interactions = gensim.models.TfidfModel(corpus_interactions)
#check for similarities between docs
sims_interactions = gensim.similarities.Similarity('C:/Users/JNproject', tf_idf_interactions[corpus_interactions],
num_features = len(dictionary_interactions))
print(sims_interactions)
print(type(sims_interactions))
with the output:
Number of words in dictionary: 46364
Similarity index with 6 documents in 0 shards (stored under C:/Users/Jeremy Bice/JNprojects/Company/Interactions/sim_interactions)
<class 'gensim.similarities.docsim.Similarity'>
That seems right so I continue with this:
query_doc = [w.lower() for w in word_tokenize("client is")]
print(query_doc)
query_doc_bow = dictionary_interactions.doc2bow(query_doc)
print(query_doc_bow)
query_doc_tf_idf = tf_idf_interactions[query_doc_bow]
print(query_doc_tf_idf)
#check for similarities between docs
sims_interactions[query_doc_tf_idf]
and my output is this:
['client', 'is']
[(335, 1), (757, 1)]
[]
array([ 0., 0., 0., 0., 0., 0.], dtype=float32)
How do I get an output here?
Upvotes: 1
Views: 796
Reputation: 993
Depending on the content of your raw_docs, this can be the correct behaviour.
Your code returns an empty tf_idf although your query words appear in your original documents and your dictionary. tf_idf is computed as term_frequency * inverse_document_frequency, where inverse_document_frequency is log(N/d), N is your total number of documents, and d is the number of documents a specific term occurs in.
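As a quick sanity check of that formula (gensim's TfidfModel uses log base 2 by default, but any base gives the same result in this case):

```python
import math

# A term that occurs in all N documents gets idf = log(N/N) = 0,
# so its tf-idf weight is zero no matter how often it appears.
N = 6  # total number of documents, as in the question
d = 6  # number of documents the term occurs in
idf = math.log2(N / d)
print(idf)  # 0.0
```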
My guess is that your query terms ['client', 'is'] occur in every one of your documents, resulting in an inverse_document_frequency of 0 and an empty tf_idf list. You can check this behaviour with the documents I took and modified from the tutorial you mentioned:
# original: commented out
# added arbitrary words 'now' and 'the' where missing, so they occur in each document
#raw_documents = ["I'm taking the show on the road.",
raw_documents = ["I'm taking the show on the road now.",
# "My socks are a force multiplier.",
"My socks are the force multiplier now.",
# "I am the barber who cuts everyone's hair who doesn't cut their own.",
"I am the barber who cuts everyone's hair who doesn't cut their own now.",
# "Legend has it that the mind is a mad monkey.",
"Legend has it that the mind is a mad monkey now.",
# "I make my own fun."]
"I make my own the fun now."]
If you query for
query_doc = [w.lower() for w in word_tokenize("the now")]
you get
['the', 'now']
[(3, 1), (8, 1)]
[]
[0. 0. 0. 0. 0.]
Upvotes: 2