Sometimes I like to download a website from a user's perspective, whether for SEO work, performance improvements, or any number of other reasons. Linux is great for this task, and wget handles it really well.
Not all websites are created equal, and there are a number of ways to use wget. Here is a quick, clean way to accomplish the task:
First, cd into the directory of your choice so you don't download files to the wrong location.
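As a concrete sketch, that might look like the following (the directory path is only a placeholder; use whatever location you prefer):

```shell
# Create a dedicated directory for the mirror and move into it,
# so all downloaded files land in one predictable place.
mkdir -p ~/site-mirrors/example.com
cd ~/site-mirrors/example.com
```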
Run the command:
wget -r -p -U Mozilla http://www.example.com
How this works:
-r – recursive: follows the site's links and downloads the whole site rather than just the target page.
-p – page requisites: downloads the images, stylesheets, and other files needed to display each HTML page properly.
-U – user agent: identifies the request as coming from a web browser ("Mozilla" here), since some servers block clients that appear to be downloading whole websites.
http://www.example.com – the target site or page from which to start the download.
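For a gentler, more complete mirror, wget supports a few extra options beyond the three above. The wrapper function name below is my own, and the --wait and --limit-rate values are only illustrative defaults; tune them for the site in question.

```shell
#!/bin/sh
# Hypothetical wrapper around the command above, with a few extra
# commonly used wget options for politeness and offline viewing.
mirror_site() {
    url="$1"
    wget \
        --recursive \
        --page-requisites \
        --user-agent="Mozilla" \
        --convert-links \
        --no-parent \
        --wait=1 \
        --limit-rate=200k \
        "$url"
}

# Usage (not run here, since it would fetch a live site):
# mirror_site http://www.example.com
```

Here --convert-links rewrites links so the saved pages work offline, --no-parent keeps wget from climbing above the starting directory, and --wait plus --limit-rate ease the load on the server.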
This article is not meant to cover every aspect of wget, which is a powerful, versatile tool suited to any number of tasks; it is just a quick reference.
If you have any questions or improvements, please comment. If you enjoyed this post, please share with the world!