Reputation: 14654
I am looking to download the full text of Wikipedia for my college project. Do I have to write my own spider to download it, or is there a public dataset of Wikipedia available online?
To give you some overview of my project: I want to find the interesting words in a few articles I am interested in. To find these interesting words, I am planning to apply tf-idf to score each word and pick the ones with high scores. But to calculate the idf part of the score, I need to know how many articles in the whole of Wikipedia each word occurs in.
How can this be done?
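For reference, here is a minimal sketch of the tf-idf scoring described above, assuming a naive regex tokenizer and a corpus small enough to hold in memory as a list of strings:

import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def tfidf(doc, corpus):
    # Term frequency: how often each word occurs in the document of interest.
    tf = Counter(tokenize(doc))
    doc_sets = [set(tokenize(d)) for d in corpus]
    n = len(doc_sets)
    # idf penalizes words that appear in many documents of the corpus.
    return {word: count * math.log(n / (1 + sum(word in s for s in doc_sets)))
            for word, count in tf.items()}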
Upvotes: 29
Views: 43408
Reputation: 32946
From Wikipedia: http://en.wikipedia.org/wiki/Wikipedia_database
Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. For our advice about complying with these licenses, see Wikipedia:Copyrights.
It seems you are in luck, too. From the dump section:
As of 12 March 2010, the latest complete dump of the English-language Wikipedia can be found at http://download.wikimedia.org/enwiki/20100130/. This is the first complete dump of the English-language Wikipedia to have been created since 2008. Please note that more recent dumps (such as the 20100312 dump) are incomplete.
So the data is only 9 days old :)
EDIT: new link, as the old one is broken: https://dumps.wikimedia.org/enwiki/
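If you go the dump route, here is a rough sketch of streaming the pages-articles dump for per-word document counts without loading it into memory. The file name below is an assumption (use whichever dump you downloaded), and the XML namespace prefix varies between dump versions, so the code matches on tag suffixes:

import bz2
import re
import xml.etree.ElementTree as ET
from collections import Counter

doc_freq = Counter()
with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as dump:
    for _, elem in ET.iterparse(dump):
        # Count each word at most once per article (document frequency).
        if elem.tag.endswith("}text") and elem.text:
            doc_freq.update(set(re.findall(r"[a-z']+", elem.text.lower())))
        if elem.tag.endswith("}page"):
            elem.clear()  # drop finished pages to keep memory bounded

print(doc_freq.most_common(20))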
Upvotes: 30
Reputation: 6661
I found a relevant Kaggle dataset at https://www.kaggle.com/datasets/ltcmdrdata/plain-text-wikipedia-202011
From the dataset description:
This dataset includes ~40MB JSON files, each of which contains a collection of Wikipedia articles. Each article element in the JSON contains only 3 keys: an ID number, the title of the article, and the text of the article. Each article has been "flattened" to occupy a single plain text string. This makes it easier for humans to read, as opposed to the markup version. It also makes it easier for NLP tasks. You will have much less cleanup to do.
Each file looks like this:
[
  {
    "id": "17279752",
    "text": "Hawthorne Road was a cricket and football ground in Bootle in England...",
    "title": "Hawthorne Road"
  }
]
From this it is trivial to extract the text with a JSON reader.
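For example (the file name below is a placeholder for whichever JSON file you pull from the dataset):

import json

# Placeholder file name; use any JSON file from the Kaggle dataset.
with open("articles_0.json", encoding="utf-8") as f:
    articles = json.load(f)

for article in articles:
    title, text = article["title"], article["text"]
    # feed `text` into your term-frequency counting here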
Upvotes: 0
Reputation: 45
Use this script:

# https://en.wikipedia.org/w/api.php?action=query&prop=extracts&pageids=18630637&inprop=url&format=json
import os
import sys
import requests

os.makedirs("wikipedia", exist_ok=True)  # make sure the output directory exists
for i in range(int(sys.argv[1]), int(sys.argv[2])):
    print("[wikipedia] getting source - id " + str(i))
    text = requests.get("https://en.wikipedia.org/w/api.php?action=query&prop=extracts&pageids=" + str(i) + "&inprop=url&format=json").text
    print("[wikipedia] putting into file - id " + str(i))
    with open("wikipedia/" + str(i) + "--id.json", "w") as f:
        f.write(text)
    print("[wikipedia] archived - id " + str(i))
IDs 1 to 1062 are archived at https://costlyyawningassembly.mkcodes.repl.co/.
Upvotes: 0
Reputation: 3885
All the latest Wikipedia dumps can be downloaded from Wikimedia (https://dumps.wikimedia.org/enwiki/); just make sure to click on the latest available date.
Upvotes: 0
Reputation: 111
If you need a text-only version, not the MediaWiki XML, then you can download it here: http://kopiwiki.dsd.sztaki.hu/
Upvotes: 11
Reputation: 5511
Considering the size of the dump, you would probably be better served by using word-frequency data for the English language, or by using the MediaWiki API to poll pages at random (or the most-consulted pages); a quick sketch of random polling follows below. There are frameworks for building bots on top of this API (in Ruby, C#, ...) that can help you.
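A minimal sketch of that random-polling idea, using the API's standard list=random module with the public endpoint:

import requests

# Ask the MediaWiki API for random article titles (main namespace only).
resp = requests.get("https://en.wikipedia.org/w/api.php", params={
    "action": "query",
    "list": "random",
    "rnnamespace": 0,  # articles only, no talk/user pages
    "rnlimit": 10,
    "format": "json",
})
for page in resp.json()["query"]["random"]:
    print(page["id"], page["title"])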
Upvotes: 4
Reputation: 24383
http://en.wikipedia.org/wiki/Wikipedia_database#Latest_complete_dump_of_english_wikipedia
Upvotes: 1