Reputation: 2189
I am trying to use the nltk package in Python 2.7:
import nltk
stopwords = nltk.corpus.stopwords.words('english')
print(stopwords[:10])
Running this gives me the following error:
LookupError:
**********************************************************************
Resource 'corpora/stopwords' not found. Please use the NLTK
Downloader to obtain the resource: >>> nltk.download()
So I opened my Python terminal and did the following:
import nltk
nltk.download()
Which gives me:
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
However, this does not seem to finish, and running the original code again still gives me the same error. Any thoughts on where this is going wrong?
Upvotes: 99
Views: 249322
Reputation: 239
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
words=stopwords.words('english')[0:20]
print(words)
Upvotes: 0
Reputation: 29
In my case, after running
import nltk
nltk.download('stopwords')
it did not work. The issue was that the downloaded zip archive was not being unzipped on its own. Running
python3 -m textblob.download_corpora
installed the package and unzipped the folder. Alternatively, unzip it manually:
cd ~
cd nltk_data/corpora/
unzip stopwords.zip
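The same manual unzip can also be done from Python, as a minimal sketch assuming the default download location of ~/nltk_data:
# Unzip the stopwords corpus in place; assumes the default ~/nltk_data location.
import os
import zipfile

corpora_dir = os.path.expanduser('~/nltk_data/corpora')
with zipfile.ZipFile(os.path.join(corpora_dir, 'stopwords.zip')) as zf:
    zf.extractall(corpora_dir)  # creates ~/nltk_data/corpora/stopwords/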
Upvotes: -1
Reputation: 6016
Installed nltk and downloaded the stopwords:
!pip3 install nltk
import nltk
nltk.download('stopwords')
Upvotes: 0
Reputation: 191
Check what error you are getting:
python3 -m nltk.downloader stopwords
Error:
RuntimeWarning: 'nltk.downloader' found in sys.modules after import of package 'nltk', but prior to execution of 'nltk.downloader'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
[nltk_data] Error loading stopwords: <urlopen error [SSL:
[nltk_data] CERTIFICATE_VERIFY_FAILED] certificate verify failed:
[nltk_data] unable to get local issuer certificate (_ssl.c:1123)>
Use the solution provided by @reshma2k.
Upvotes: 1
Reputation: 31
Use a GPU runtime; it will not give you any error.
The same code you are using will work:
import nltk
stopwords = nltk.corpus.stopwords.words('english')
print(stopwords[:10])
Upvotes: 3
Reputation: 74
I know this comes quite late, but if it helps:
Although nltk.download('stopwords')
will do the job, there might be times when it won't work due to proxy issues if your organization has blocked it.
I found this GitHub link pretty handy; you can just pick up the list of words from there and integrate it manually into your project as a workaround.
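For example, a minimal sketch of that workaround, assuming the list has been saved locally as stopwords_en.txt with one word per line (the filename and format here are assumptions, not part of the linked list):
# Load an English stopword list from a hypothetical local file
# instead of using the NLTK downloader.
with open('stopwords_en.txt') as f:
    stopwords = [line.strip() for line in f if line.strip()]
print(stopwords[:10])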
Upvotes: 1
Reputation: 179
If you get an SSL/certificate error, run the following.
This works by disabling the SSL check!
import nltk
import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    ssl._create_default_https_context = _create_unverified_https_context

nltk.download()
Upvotes: 14
Reputation: 12616
You are currently trying to download every item in the NLTK data, so this can take a long time. You can try downloading only the stopwords that you need:
import nltk
nltk.download('stopwords')
Or from command line (thanks to Rafael Valero's answer):
python -m nltk.downloader stopwords
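To confirm the corpus is actually in place afterwards (a quick sanity check, not part of the original answer), you can ask NLTK to locate it:
import nltk
nltk.data.find('corpora/stopwords')  # raises LookupError if the resource is still missing
print(nltk.corpus.stopwords.words('english')[:10])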
Upvotes: 172
Reputation: 1
showing info https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml
If you are running this command in a Jupyter notebook, it opens another window titled 'NLTK Downloader'. Once you go into that window, you can select the topics you want to download and then click the Download button to start downloading.
Until you close the NLTK Downloader window, the cell in Jupyter keeps running.
Upvotes: 0
Reputation: 197
You can do this separately in the console.
It will give you a result:
import nltk
nltk.download('stopwords')
I used the Jupyter console when I faced this problem.
Upvotes: 15
Reputation: 721
If your PC uses a proxy for connectivity, then try this:
import nltk
nltk.set_proxy('http://proxy.example.com:3128', ('USERNAME', 'PASSWORD'))
nltk.download('stopwords')
Upvotes: 5
Reputation: 2816
The same as mentioned here by Kurt Bourbaki, but from the command line:
python -m nltk.downloader stopwords
Upvotes: 46