Reputation: 220
A web page shown in a browser consists of an HTML document plus additional objects such as CSS, JS, and images. I want to save all of them to my hard disk using the wget
command so I can load the page later from my local computer. Is this possible?
Note: I want a single page, not all the pages of a web site or anything similar.
Upvotes: 2
Views: 3638
Reputation: 2547
Use the following command:
wget -E -k -p http://example.com
Details of the switches (a variant for assets hosted on other domains is sketched after the list):
-E
If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://example.com/article.cgi?25 will be saved as article.cgi?25.html.
-k
After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.
-p
This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
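Note that -p on its own only fetches requisites served from the same host as the page. If the page pulls CSS, JS, or images from another domain (for example a CDN), a variant like the following should cover those too; -H (--span-hosts), -K (--backup-converted) and the -P output directory are additions beyond the original answer, so adjust as needed:
wget -E -H -k -K -p -P saved-page http://example.com
Afterwards, open the saved .html file in a browser; thanks to -k, its references to the downloaded stylesheets and images point at the local copies.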
Upvotes: 2