Machine Learning 101: CountVectorizer vs TFIDFVectorizer

Natural Language Processing is a cornerstone of data science. With so many different tools out there, sometimes it’s easy to get a bit confused.

One thing many data scientists (including me) mix up is the difference between the CountVectorizer and the TfidfVectorizer.

After this 5-minute read, you'll have a complete introduction to both the TfidfVectorizer and the CountVectorizer, a run-down of the differences, advice on when to use one or the other, and finally, some Python code so you can code up both of these yourself.

This guide isn’t one to skip; buckle up. (Full Python Code At Bottom)



What is the difference between TfidfVectorizer and CountVectorizer?

TF-IDF Vectorizer and Count Vectorizer are both methods used in natural language processing to vectorize text. However, there is a fundamental difference between the two methods.

CountVectorizer simply counts the number of times a word appears in a document (using a bag-of-words approach), while TF-IDF Vectorizer takes into account not only how many times a word appears in a document but also how important that word is to the whole corpus.

This is done by penalizing words that appear often across all documents, reducing their weights, as these words are likely to be less important.



Are TfidfVectorizer and CountVectorizer the same thing?

TfidfVectorizer and CountVectorizer are not the same thing.

It’s easiest to think of TF-IDF as a formula combining the two ideas of term frequency and inverse document frequency, with the purpose of reflecting how important a word is to a document (sentence) in a corpus.

CountVectorizer is much simpler since it's just a tool that converts a collection of text documents into a matrix of token counts, without regard to the overall corpus.

To see how closely related these ideas are in text processing, it helps to look at the formula itself.
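As a rough sketch, using the smoothed variant that scikit-learn's TfidfVectorizer applies by default, the TF-IDF weight of a term t in a document d is:

tf-idf(t, d) = tf(t, d) × idf(t)

idf(t) = ln((1 + n) / (1 + df(t))) + 1

Here, tf(t, d) is the count of t in d (exactly what CountVectorizer produces), n is the total number of documents, and df(t) is the number of documents containing t. Scikit-learn then normalizes each document's vector to unit length, which is why the TF-IDF values you'll see later in this guide are fractions rather than raw counts.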


As we can see, TFIDFVectorizer extends the ideas of CountVectorizer.


What is CountVectorizer?

CountVectorizer is a tool used to vectorize text data, meaning that it will convert text into numerical data that can be used in machine learning algorithms.

This tool lives in scikit-learn's feature_extraction.text module; once converted, the numerical data forms a matrix, where each row represents a document (sentence) and each column represents a word.

The values in the matrix represent the frequency of that word in the document.

These embeddings, created from your corpus (or dataset), are critical for any model building down the line, and zero entries in your matrix represent words that don't appear in that document.
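As a minimal sketch of what that matrix looks like (the two documents here are made-up examples):

from sklearn.feature_extraction.text import CountVectorizer

# two tiny example documents (hypothetical)
docs = ['the cat sat', 'the cat sat on the mat']

vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(docs)

# each row is a document, each column a word from the vocabulary
print(vectorizer.get_feature_names_out())  # ['cat' 'mat' 'on' 'sat' 'the']
print(matrix.toarray())  # [[1 0 0 1 1]
                         #  [1 1 1 1 2]]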



Bag of Words Model vs. CountVectorizer

The difference between the Bag Of Words Model and CountVectorizer is that the Bag of Words Model is the goal, and CountVectorizer is the tool to help us get there.

For example, if you wanted to build a bag of words model using Sklearn, the simplest (and most used) method is to use CountVectorizer.

While many think these two are the same, the Bag of Words Model is where you’re going, and CountVectorizer is how you will get there.



The bag of words model is a simple way of representing text data. Each document is represented as a bag of words, where each word is a token.

The order of the words is not important, and the presence or absence of a word is what matters.
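If all you need is that presence-or-absence view, CountVectorizer can produce it directly – a small sketch, using its binary flag on a hypothetical document:

from sklearn.feature_extraction.text import CountVectorizer

# hypothetical example document; note that 'the' appears twice
docs = ['the cat sat on the mat']

# binary=True records presence/absence instead of raw counts
bow = CountVectorizer(binary=True)
print(bow.fit_transform(docs).toarray())  # [[1 1 1 1 1]] -- 'the' is capped at 1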


Which is better, TF-IDF or CountVectorizer?

There is no conclusive answer to which vectorizer is better because it depends on the specific business problem and data.

In my personal experience, TF-IDF usually produces stronger features for modeling. For example, if you are building a spam classifier, TF-IDF embeddings will generally let you train a much more accurate machine-learning model.
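As a rough sketch of how you might compare the two on such a task (the toy messages and labels below are purely hypothetical, and real results will depend entirely on your data):

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical toy data: 1 = spam, 0 = not spam
messages = ['win a free prize now', 'meeting at noon tomorrow',
            'free money click now', 'lunch at the park tomorrow']
labels = [1, 0, 1, 0]

# fit the same model with each vectorizer and compare predictions
for vec in (CountVectorizer(), TfidfVectorizer()):
    model = make_pipeline(vec, LogisticRegression())
    model.fit(messages, labels)
    print(type(vec).__name__, model.predict(['free prize now']))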



Why TfidfVectorizer Performs Better In Machine Learning Models Than CountVectorizer

One reason TF-IDF usually performs better than CountVectorizer in machine learning models is that CountVectorizer treats all words equally, with no penalty for words that appear everywhere.

Read the following sentence, and think about which words are "most important":

it is extremely likely that it is a bug, it looks like a cat, but it is a bug.

Our word list will look something like the following:

word        count
it          4
is          3
a           3
bug         2
extremely   1
likely      1
that        1
looks       1
like        1
cat         1
but         1

We quickly notice a problem: words like "it" and "is" are scored as highly valuable, while words like "bug" and "cat" are some of our lowest-scoring words.

Since there’s no IDF penalty applied like in TF-IDF, words that are common to a corpus but don’t help us understand the individual documents hurt our accuracy.

While in the previous example, we used common stop words that would probably be eliminated in cleaning – there will be other times when certain words dominate your corpus’s vocabulary and throw off your accuracy.
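Here's a minimal sketch of that penalty in action (the three sentences are hypothetical): "it" and "is" appear in every document, so TF-IDF pushes their weights below those of the distinctive words.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# 'it' and 'is' appear in every document; 'bug', 'cat', and 'leaf' do not
docs = ['it is a bug', 'it is a cat', 'it is a leaf']

vectorizer = TfidfVectorizer()
weights = pd.DataFrame(vectorizer.fit_transform(docs).toarray(),
                       columns=vectorizer.get_feature_names_out())

# the corpus-wide words score around 0.45, the unique words around 0.77
print(weights.round(2))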


When can CountVectorizer work better than TF-IDF?

There are a couple of situations where CountVectorizer can work better than TF-IDF, although there is no definitive answer, as it depends on the data and the task at hand.

One is if your text data is very short – in general, CountVectorizer may work better when the documents are shorter and/or contain fewer unique words.

Another is if you're dealing with text that has a lot of common words (such as "the" or "a"), which can cause some issues with TF-IDF's weighting.

Finally, CountVectorizer can sometimes be more effective if you're working with texts of different lengths, since it doesn't penalize longer texts as much as TF-IDF does.


When can TF-IDF work better than CountVectorizer?

In general, TF-IDF may work better when the documents are longer and/or contain more unique words.

As we saw above, the IDF penalty down-weights words that are common across the whole corpus, so TF-IDF tends to shine when a handful of frequent words would otherwise dominate your vocabulary.


Coding CountVectorizer and TfidfVectorizer in Python

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer


def create_embeddings(messages, vectorizer):
    
    # fit and transform our messages
    embeddings = vectorizer.fit_transform(messages)
    
    # create our dataframe
    df = pd.DataFrame(embeddings.toarray(),
                      columns=vectorizer.get_feature_names_out())
    
    return df


messages = ['I like to play at the park',
            'I play baseball with friends the park']

# create with CountVectorizer
vectorizer = CountVectorizer()

# send our messages through our function
embeddings = create_embeddings(messages, vectorizer)

# return our embeddings
embeddings

 

Below, we can see our CountVectorizer embeddings – each value is just a raw word count:

   at  baseball  friends  like  park  play  the  to  with
0   1         0        0     1     1     1    1   1     0
1   0         1        1     0     1     1    1   0     1

 

# create with TfidfVectorizer
vectorizer = TfidfVectorizer()

# send our messages through our function
embeddings = create_embeddings(messages, vectorizer)

# return our embeddings
embeddings

Below, we can see our TfidfVectorizer embeddings (rounded to two decimals) – much different than the CountVectorizer ones. Notice that the words shared by both messages ("park", "play", "the") receive lower weights than the words unique to a single message:

     at  baseball  friends  like  park  play   the    to  with
0  0.47      0.00     0.00  0.47  0.33  0.33  0.33  0.47  0.00
1  0.00      0.47     0.47  0.00  0.33  0.33  0.33  0.00  0.47



Other Articles In Our Machine Learning 101 Series

We have many quick guides that go over some of the fundamental parts of machine learning. Some of those guides include:

  • Reverse Standardization: Now that you can split your data correctly, use this guide to build your first model.
  • Feature Selection With SelectKBest Using Scikit-Learn: Feature selection is tough; we make it easy for both regression and classification in this guide.
  • Criterion Vs. Predictor: It’s easy to confuse which variable is your training and which is your test variable – this guide explains the difference in detail.
  • Gini Index vs. Entropy: Learn how decision trees make splitting decisions. These two are the workhorses of top-performing tree-based methods.
  • Heatmaps In Python: Visualizing data is key in data science; this post will teach eight different libraries to plot heatmaps.
  • Normal Distribution vs Uniform Distribution: Now that you know the difference between these two vectorizers, you can start to understand the different distributions these variables can have.
  • Parameter Versus Variable: Commonly misunderstood – these two aren’t the same thing. This article will break down the difference.
  • Welch’s T-Test: Do you know the difference between the student’s t-test and welch’s t-test? Don’t worry, we explain it in-depth here.
Stewart Kaplan