Florian

Reputation: 139

Python - From list of list of tokens to bag of words

I am struggling with computing a bag of words. I have a pandas DataFrame with a text column that I tokenize, remove stop words from, and stem. In the end, each document is a list of strings.

My ultimate goal is to compute a bag of words for this column. I've seen that scikit-learn has a function to do that, but it works on strings, not on a list of strings.

I am doing the preprocessing myself with NLTK and would like to keep it that way...
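For reference, my preprocessing is roughly along these lines (a minimal sketch; the column name 'text', the English stop word list, and the Porter stemmer are just placeholders for what I actually use):

from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(text):
    # tokenize, drop stop words, then stem the remaining tokens
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]

df['tokens'] = df['text'].apply(preprocess)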

Is there a way to compute a bag of words from a list of lists of tokens? E.g., something like this:

["hello", "world"]
["hello", "stackoverflow", "hello"]

should be converted into

[1, 1, 0]
[2, 0, 1]

with vocabulary:

["hello", "world", "stackoverflow"]

Upvotes: 4

Views: 5731

Answers (3)

Ryan

Reputation: 333

Using sklearn.feature_extraction.text.CountVectorizer

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

df = pd.DataFrame({'text': [['hello', 'world'], 
                        ['hello', 'stackoverflow', 'hello']]
                   })

## Join the tokens into a single string, as CountVectorizer expects
df['text'] = df['text'].apply(' '.join)

vectorizer = CountVectorizer(lowercase=False)
x = vectorizer.fit_transform(df['text'].values)

print(vectorizer.get_feature_names())
print(x.toarray())

Output:

['hello', 'stackoverflow', 'world']

[[1 0 1]
 [2 1 0]]
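As a side note, the join step can be skipped entirely: CountVectorizer also accepts a callable as its analyzer, so the original token lists can be passed in unchanged. This is a sketch, not part of the answer above:

vectorizer = CountVectorizer(analyzer=lambda tokens: tokens)
x = vectorizer.fit_transform([['hello', 'world'],
                              ['hello', 'stackoverflow', 'hello']])
print(vectorizer.get_feature_names())  # same vocabulary, no joining needed
print(x.toarray())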

Upvotes: 2

Zhangjian

Reputation: 31

sklearn.feature_extraction.text.CountVectorizer can help a lot. Here's the example from the official documentation:

from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
corpus = [
    'This is the first document.',
    'This is the second second document.',
    'And the third one.',
    'Is this the first document?',
]
X = vectorizer.fit_transform(corpus)
X.toarray()
# array([[0, 1, 1, 1, 0, 0, 1, 0, 1],
#        [0, 1, 0, 1, 0, 2, 1, 0, 1],
#        [1, 0, 0, 0, 1, 0, 1, 1, 0],
#        [0, 1, 1, 1, 0, 0, 1, 0, 1]]...)

You can get the feature names with the method vectorizer.get_feature_names().
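For this corpus that returns the sorted vocabulary, roughly as follows (on newer scikit-learn versions the method is called get_feature_names_out()):

print(vectorizer.get_feature_names())
# ['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']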

Upvotes: 3

jezrael

Reputation: 862761

You can create a DataFrame by counting with Counter, filtering to the vocabulary, and then converting back to lists:

import pandas as pd
from collections import Counter

df = pd.DataFrame({'text':[["hello", "world"],
                           ["hello", "stackoverflow", "hello"]]})

L = ["hello", "world", "stackoverflow"]

f = lambda x: Counter([y for y in x if y in L])
df['new'] = (pd.DataFrame(df['text'].apply(f).values.tolist())
               .fillna(0)
               .astype(int)
               .reindex(columns=L)
               .values
               .tolist())
print(df)

                            text        new
0                 [hello, world]  [1, 1, 0]
1  [hello, stackoverflow, hello]  [2, 0, 1]
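The same mapping can also be done without the intermediate DataFrame, if plain Python is preferred (a small sketch using the same fixed vocabulary L; a Counter returns 0 for missing keys):

rows = [[Counter(tokens)[word] for word in L] for tokens in df['text']]
print(rows)  # [[1, 1, 0], [2, 0, 1]]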

Upvotes: 3
