Monica Heddneck

Reputation: 3135

What exactly does 'use_idf' do when creating a TfidfTransformer in sklearn?

I am using the TfidfTransformer from the sklearn package in Python 2.7.

As I was getting comfortable with the arguments, I became a bit confused about use_idf, as in:

TfidfVectorizer(use_idf=False).fit_transform(<corpus goes here>)

What exactly does use_idf do when false or true?

Since we are generating a sparse Tfidf matrix, it doesn't make sense to have an argument to choose a sparse Tfidf matrix; that seems redundant.

This post was interesting but didn't seem to nail it.

The documentation says only "Enable inverse-document-frequency reweighting", which isn't very illuminating.

Any comments appreciated.

EDIT: I think I figured it out. It's really simple:

Text --> counts
Counts --> TF, meaning we just have raw counts, or
Counts --> TFIDF, meaning we have weighted counts

What was confusing me was that, since they called it TfidfVectorizer, I didn't realize the output was TFIDF only if you chose it to be. You can also use it to create just a TF.
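A quick sketch of the difference (toy corpus and variable names are mine):

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the cat sat", "the dog sat", "the cat ran"]

# use_idf=True (the default): counts are reweighted by inverse document frequency
tfidf = TfidfVectorizer(use_idf=True).fit_transform(corpus)

# use_idf=False: no IDF reweighting, so you get plain term frequencies
# (still L2-normalized per row, because norm='l2' is also a default)
tf = TfidfVectorizer(use_idf=False).fit_transform(corpus)

print(tfidf.toarray())
print(tf.toarray())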

Upvotes: 14

Views: 7362

Answers (3)

tash

Reputation: 951

Adding the link to sklearn's documentation on how their TF-IDF slightly differs from the textbook version: https://scikit-learn.org/stable/modules/feature_extraction.html#tfidf-term-weighting

Their TF(t) is actually the raw count of the term. Their IDF(t) is actually 1 + log(n / DF(t)).
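A minimal check of that formula against sklearn itself (toy corpus of my own; smooth_idf=False is needed because the default smoothing uses 1 + log((1 + n) / (1 + DF(t))) instead):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["the cat sat", "the dog sat", "the cat ran"]

vec = TfidfVectorizer(smooth_idf=False)
vec.fit(corpus)

# document frequency of each term, in the same (alphabetical) vocabulary order
counts = CountVectorizer().fit_transform(corpus)
df = np.asarray((counts > 0).sum(axis=0)).ravel()
n = len(corpus)

print(vec.idf_)            # sklearn's stored IDF values
print(1 + np.log(n / df))  # 1 + log(n / DF(t)), natural log -- should match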

Upvotes: 0

Harsha Reddy

Reputation: 431

Typically, the tf-idf weight is composed of two terms. The first computes the normalized Term Frequency (TF), i.e. the number of times a word appears in a document, divided by the total number of words in that document. The second term is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents in which the specific term appears.

TF: Term Frequency, which measures how frequently a term occurs in a document.

TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document)

IDF: Inverse Document Frequency, which measures how important a term is. While computing TF, all terms are considered equally important. However, it is known that certain terms, such as "is", "of", and "that", may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:

IDF(t) = log_e(Total number of documents / Number of documents with term t in it).
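A worked version of those two formulas in plain Python (toy documents; this is the textbook variant, not sklearn's exact one):

from __future__ import division  # for Python 2.7, as in the question
import math

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

def tf(term, doc):
    # TF(t) = (number of times t appears in doc) / (total terms in doc)
    return doc.count(term) / len(doc)

def idf(term, docs):
    # IDF(t) = log_e(total documents / documents containing t)
    containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / containing)

# "the" appears in every document, so log(3/3) = 0 weighs it to nothing
print(tf("the", docs[0]) * idf("the", docs))  # 0.0
# "dog" appears in only one document, so it gets the largest weight
print(tf("dog", docs[1]) * idf("dog", docs))  # ~0.366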

If you pass use_idf=False, you will score using only TF.
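To see that in code (toy corpus; note that norm='l2' is still applied by default, so pass norm=None as well if you want raw counts back):

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the cat sat", "the dog sat", "the cat ran"]

# TF only: no IDF reweighting, and with norm=None the rows are raw term counts
tf_only = TfidfVectorizer(use_idf=False, norm=None)
print(tf_only.fit_transform(corpus).toarray())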

Upvotes: 8

Pranav Waila

Reputation: 385

In the term frequency (TF) calculation, all terms are considered equally important; even terms that have no importance in determining relevance are treated the same in the calculations.

Scaling down the weights of terms with high collection frequency improves the scoring. Inverse document frequency reduces the TF weight of a term by a factor that grows with its collection frequency, so the document frequency (DF) of the term is used to scale down its weight.
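A small sketch of that effect, inspecting sklearn's fitted IDF weights (corpus is made up):

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the cat sat", "the dog sat", "the cat ran"]

vec = TfidfVectorizer()
vec.fit(corpus)

# idf_ shrinks as document frequency grows: "the" (in all 3 documents)
# gets the smallest weight, "dog" and "ran" (in 1 document each) the largest
for term, idx in sorted(vec.vocabulary_.items()):
    print(term, vec.idf_[idx])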

Upvotes: 2
