Ok, here I am again...
Baf's wget works really well (thanks a lot!!!), but I really don't know how to set it up properly...
Here's the situation:
I would like to download a single web page from Wikipedia, so I run this:
Code:
export PAGES="/mnt/us/WebPages"
webpage="http://it.wikipedia.org/wiki/Wget"
/mnt/us/extensions/offlinepages/bin/resource/Wget/wget -e robots=off \
    --wait=20 --limit-rate=20K --quiet --no-parent \
    --page-requisites --convert-links --adjust-extension \
    -U Mozilla -P "$PAGES" "$webpage"
It gets past the robots.txt exclusions and, thanks to -U, --wait and --limit-rate, it isn't detected as a site crawler.
BUT it doesn't download the images...
It retrieves only Wget.html and favicon.ico!!
Isn't it supposed to download everything needed to display the page?
If I turn on --recursive, I get all the HTML pages linked from the Wget article, but still no images!!
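The only thing I can think of is that Wikipedia serves its images from a different host (upload.wikimedia.org), and as far as I understand --page-requisites won't follow links to other hosts unless --span-hosts is enabled. So maybe something like this is what's needed? Just a sketch, untested, and the --domains list is my guess:
Code:
export PAGES="/mnt/us/WebPages"
webpage="http://it.wikipedia.org/wiki/Wget"
# --span-hosts lets --page-requisites fetch from hosts other than it.wikipedia.org;
# --domains limits the spanning to these hosts only (my guess at the right list)
/mnt/us/extensions/offlinepages/bin/resource/Wget/wget -e robots=off \
    --wait=20 --limit-rate=20K --quiet --no-parent \
    --page-requisites --convert-links --adjust-extension \
    --span-hosts --domains=it.wikipedia.org,upload.wikimedia.org \
    -U Mozilla -P "$PAGES" "$webpage"
(Without --domains, I suppose --span-hosts could wander off Wikipedia entirely.) Am I on the right track?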
Any help?
Thanks guys!!!