wget -r www.example.com

This downloads the pages recursively, up to a maximum of 5 levels deep by default. To continue a partially completed download, use the -c (resume) option:

wget -c www.example.com
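In practice, -c is most useful for resuming a large single-file download; a minimal sketch, with a hypothetical file name:

wget -c https://www.example.com/big-file.iso  # big-file.iso is a placeholder; resumes the partial file if one exists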
To set the recursion depth explicitly, use the -l option, e.g. -l10 for a depth of 10:

wget -r -l10 www.example.com

For infinite recursion depth, use -l inf.
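For example, a sketch of an unbounded recursive crawl (www.example.com stands in for the real site):

wget -r -l inf www.example.com  # -l inf removes the depth limit, so the crawl can get very large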
To download only PDF files recursively, with the resume option, use:

wget -A "*.pdf" -rc -l10 www.example.com
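Note that -A (--accept) also takes a comma-separated list of patterns, so several file types can be collected in one run; a sketch with PostScript files added purely for illustration:

wget -A "*.pdf,*.ps" -rc -l10 www.example.com  # "*.ps" is an illustrative second pattern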
Find more at https://www.lifewire.com/uses-of-command-wget-2201085
403 Forbidden

Some websites block requests that arrive with improper headers, so make them think you are a browser by setting a user agent:

wget -mk -w 20 --user-agent="Mozilla/4.5 (X11; U; Linux x86_64; en-US)" https://www.example.com
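The same command spelled out with long-form options may be easier to read (the URL remains a placeholder):

wget --mirror --convert-links --wait=20 --user-agent="Mozilla/4.5 (X11; U; Linux x86_64; en-US)" https://www.example.com

Here -m (--mirror) enables recursion with infinite depth and timestamping, -k (--convert-links) rewrites links in the downloaded pages for local viewing, and -w 20 (--wait=20) pauses 20 seconds between requests to go easy on the server.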
To combine the PDF filter and resume option with the browser user agent, use:

wget -A "*.pdf" -rc -mk -w 20 --user-agent="Mozilla/4.5 (X11; U; Linux x86_64; en-US)" https://www.example.com
Find more at https://superuser.com/questions/786097/wget-mirroring-the-site-fails-403-forbidden-even-with-user-agent