Reputation: 184
I wrote the code below, but I get an error (TypeError: unhashable type: 'list') while running it. Can you help me? I want the most frequent words in my tokens.
! pip install wget
import wget
url = 'https://raw.githubusercontent.com/dirkhovy/NLPclass/master/data/moby_dick.txt'
wget.download(url, 'moby_dick.txt')
documents = [line.strip() for line in open('moby_dick.txt', encoding='utf8').readlines()]
import spacy
nlp = spacy.load('en')
tokens = [[token.text for token in nlp(sentence)] for sentence in documents[:200]]
from collections import Counter
# your code here
# Pass the split_it list to instance of Counter class.
Counter = Counter(tokens)
# most_common() produces k frequently encountered
# input values and their respective counts.
most_occur = Counter.most_common(10)
print(most_occur)
the error:
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
      4 # Pass the split_it list to instance of Counter class.
      5
----> 6 Counter = Counter(tokens)
      7
      8 # most_common() produces k frequently encountered

1 frames
/usr/lib/python3.6/collections/__init__.py in update(*args, **kwds)
    620             super(Counter, self).update(iterable)  # fast path when counter is empty
    621         else:
--> 622             _count_elements(self, iterable)
    623         if kwds:
    624             self.update(kwds)

TypeError: unhashable type: 'list'
Upvotes: 0
Views: 248
Reputation: 2139
Convert the inner lists to tuples so they become hashable (note this counts whole sentences, not individual words):
Counter(map(tuple, tokens)).most_common()
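A minimal sketch of that workaround, using a small hypothetical nested token list of the same shape the question's comprehension produces:

```python
from collections import Counter

# Hypothetical nested token lists, one inner list per sentence,
# standing in for the spaCy output in the question.
tokens = [['Call', 'me', 'Ishmael', '.'],
          ['Call', 'me', 'Ishmael', '.']]

# Counter(tokens) raises TypeError because lists are unhashable.
# Converting each inner list to a tuple makes it hashable, but
# each key is then a whole sentence rather than a single word.
sentence_counts = Counter(map(tuple, tokens))
print(sentence_counts.most_common(1))
# [(('Call', 'me', 'Ishmael', '.'), 2)]
```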
If the code above did not help, try downgrading your spaCy version by installing the packages below:
pip install msgpack==0.5.6 spacy==2.0.13 https://github.com/huggingface/neuralcoref-models/releases/download/en_coref_md-3.0.0/en_coref_md-3.0.0.tar.gz
or
python -m venv neuralcoref
source neuralcoref/bin/activate
CFLAGS='-stdlib=libc++' pip install thinc==6.10.3
pip install msgpack==0.5.6
CFLAGS='-stdlib=libc++' pip install spacy==2.0.12 # <-- not 2.0.13
pip install https://github.com/huggingface/neuralcoref-models/releases/download/en_coref_lg-3.0.0/en_coref_lg-3.0.0.tar.gz
Upvotes: 0
Reputation: 522
! pip install wget
import wget
url = 'https://raw.githubusercontent.com/dirkhovy/NLPclass/master/data/moby_dick.txt'
wget.download(url, 'moby_dick.txt')
documents = [line.strip() for line in open('moby_dick.txt', encoding='utf8').readlines()]
import spacy
nlp = spacy.load('en')
tokens = [token.text for sentence in documents[:200] for token in nlp(sentence)]
from collections import Counter
word_counts = Counter(tokens)  # avoid shadowing the Counter class
most_occur = word_counts.most_common(10)
print(most_occur)
Update the syntax of your list comprehension: your original builds a list of lists (one inner list per sentence), and lists are unhashable, so Counter fails. Flattening it into a single list of strings lets Counter count the words.
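The same flattening can be done with itertools instead of rewriting the comprehension; a minimal sketch with a hypothetical nested token list:

```python
from collections import Counter
from itertools import chain

# Hypothetical nested token lists (the shape the question's code builds).
tokens = [['Call', 'me', 'Ishmael', '.'],
          ['It', 'is', 'a', 'damp', 'November', '.']]

# chain.from_iterable flattens the list of sentences into a single
# stream of word strings, which Counter can hash and count.
word_counts = Counter(chain.from_iterable(tokens))
print(word_counts.most_common(1))
# [('.', 2)]
```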
Upvotes: 0