Aniruddhsinh

Reputation: 2109

How to download all files (but not HTML) from a website using wget?

How can I use wget to download all the files from a website?

I need all files except webpage files like HTML, PHP, ASP, etc.

Upvotes: 178

Views: 314845

Answers (8)

Daniel

Reputation: 203

I know this topic is very old, but I landed here in 2021 looking for a way to download all the Slackware files from a mirror (http://ftp.slackware-brasil.com.br/slackware64-current/).

After reading all the answers, the best option for me was:

wget -m -p -k -np -R '*html*,*htm*,*asp*,*php*,*css*' -X 'www' http://ftp.slackware-brasil.com.br/slackware64-current/

I had to use *html* instead of just html to avoid downloads like index.html.tmp.
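For context, wget treats an -A/-R entry as a plain filename suffix unless it contains a wildcard character (*, ?, [ or ]), in which case it is matched as a glob pattern against the whole filename. A minimal illustration (example.com is a placeholder):

wget -r -np -R 'html' http://example.com/     # suffix match: rejects page.html but not index.html.tmp
wget -r -np -R '*html*' http://example.com/   # glob pattern: rejects index.html.tmp as well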

Please forgive me for resurrecting this topic; I thought it might be useful to someone other than me, and my question is very similar to @Aniruddhsinh's.

Upvotes: 9

Zsolt Botykai

Reputation: 51673

To filter for specific file extensions:

wget -A pdf,jpg -m -p -E -k -K -np http://site/path/

Or, if you prefer long option names:

wget --accept pdf,jpg --mirror --page-requisites --adjust-extension --convert-links --backup-converted --no-parent http://site/path/

This will mirror the site, and any files without a jpg or pdf extension will be automatically removed.
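If you want to double-check the result, here is a quick sketch (assuming the mirror was saved under ./site/path, an illustrative location) that lists any surviving files other than pdf and jpg:

find site/path -type f ! -name '*.pdf' ! -name '*.jpg'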

Upvotes: 295

izilotti

Reputation: 4937

This downloaded the entire website for me:

wget --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://site/path/
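For reference, what those options do (per the wget manual):
  • --no-clobber: skip files that already exist locally
  • --convert-links: rewrite links in the downloaded pages to point at the local copies
  • --random-wait: randomize the delay between requests
  • -r: recurse into linked pages
  • -p: also fetch page requisites (images, CSS, etc.)
  • -E: save HTML pages with an .html extension
  • -e robots=off: ignore robots.txt
  • -U mozilla: send "mozilla" as the User-Agent header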

Upvotes: 92

Jesse

Reputation: 3837

wget -m -p -E -k -K -np http://site/path/

The man page will tell you what those options do.

wget only follows links: if there is no link to a file from the index page, wget will not know about its existence and hence will not download it. In other words, it helps if all files are linked to from web pages or directory indexes.
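If the files you want are not linked from any page, one workaround is to feed wget an explicit list of URLs instead of crawling. A sketch, where urls.txt is a hypothetical file with one URL per line:

wget -nd -i urls.txt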

Upvotes: 64

wget -m -A '*' -p -k -e robots=off www.mysite.com/

This will download all types of files locally, make the HTML files point to them, and ignore the robots file.

Upvotes: 5

Steve Bennett

Reputation: 126667

I was trying to download zip files linked from Omeka's themes page - pretty similar task. This worked for me:

wget -A zip -r -l 1 -nd http://omeka.org/add-ons/themes/
  • -A: only accept zip files
  • -r: recurse
  • -l 1: one level deep (i.e., only files directly linked from this page)
  • -nd: don't create a directory structure, just download all the files into this directory.

All the answers with -k, -K, -E etc. options probably haven't really understood the question, as those are for rewriting HTML pages to make a local structure, renaming .php files and so on. Not relevant.

To literally get all files except .html etc:

wget -R html,htm,php,asp,jsp,js,py,css -r -l 1 -nd http://yoursite.com

Upvotes: 28

Suneel Kumar

Reputation: 5831

Try this. It always works for me:

wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL
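With the placeholders filled in (example.com and ./mirror are just illustrative), that would look like:

wget --mirror -p --convert-links -P ./mirror http://example.com/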

Upvotes: 5

kenorb

Reputation: 166833

You may try:

wget --user-agent=Mozilla --content-disposition --mirror --convert-links -E -K -p http://example.com/

Also you can add:

-A pdf,ps,djvu,tex,doc,docx,xls,xlsx,gz,ppt,mp4,avi,zip,rar

to accept the specific extensions, or to reject only specific extensions:

-R html,htm,asp,php

or to exclude the specific areas:

-X "search*,forum*"

If the files are blocked by a robots policy (e.g. for search engines), you also have to add: -e robots=off
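Putting those pieces together, a hypothetical combined invocation (example.com is a placeholder) might look like:

wget --user-agent=Mozilla --content-disposition --mirror --convert-links -E -K -p -R html,htm,asp,php -X "search*,forum*" -e robots=off http://example.com/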

Upvotes: 7
