
pcplantdb - Re: [pcplantdb] Re: [piw] Information I would like to see

  • From: Richard Morris <webmaster@pfaf.org>
  • To: Permaculture Plant Database <pcplantdb@lists.ibiblio.org>
  • Subject: Re: [pcplantdb] Re: [piw] Information I would like to see
  • Date: Fri, 04 Feb 2005 10:00:20 +0000

Lawrence F. London, Jr. wrote:
> Richard Morris wrote:
>
>> I've done this sort of stuff on the pfaf site. I've used programs like
>> wget (on linux) to download entire websites. I've then run some perl
>
> On this particular topic: apps that do things like wget.
> I badly need one and will purchase it for Win2k or WinXP.
> Can anyone recommend one that really works and has a recovery mode
> after timeouts or 404s? I have heard of Websnatcher and Webgrabber
> and know there must be a slew of them out there.
>
> Thanks for any leads,
>
> LL
Wget is fine for most purposes. It's free and available for Unix systems (it may already be installed). I use Cygwin on my PC to give me a Unix shell and run wget from there.
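For example, a minimal recursive grab of a whole site from the Cygwin (or any Unix) shell might look like this; the URL is just a placeholder and the exact options are only a sketch:

$ wget -r -np -k http://www.example.org/

Here -r recurses through the site, -np stops it wandering above the starting directory, and -k converts the links so the local copy browses properly. The full option list gives an idea of what else it can do: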

$ wget --help
GNU Wget 1.8.2, a non-interactive network retriever.
Usage: wget [OPTION]... [URL]...

Mandatory arguments to long options are mandatory for short options too.

Startup:
-V, --version display the version of Wget and exit.
-h, --help print this help.
-b, --background go to background after startup.
-e, --execute=COMMAND execute a `.wgetrc'-style command.

Logging and input file:
-o, --output-file=FILE log messages to FILE.
-a, --append-output=FILE append messages to FILE.
-d, --debug print debug output.
-q, --quiet quiet (no output).
-v, --verbose be verbose (this is the default).
-nv, --non-verbose turn off verboseness, without being quiet.
-i, --input-file=FILE download URLs found in FILE.
-F, --force-html treat input file as HTML.
-B, --base=URL prepends URL to relative links in -F -i file.
--sslcertfile=FILE optional client certificate.
--sslcertkey=KEYFILE optional keyfile for this certificate.
--egd-file=FILE file name of the EGD socket.

Download:
--bind-address=ADDRESS bind to ADDRESS (hostname or IP) on local host.
-t, --tries=NUMBER set number of retries to NUMBER (0 unlimits).
-O --output-document=FILE write documents to FILE.
-nc, --no-clobber don't clobber existing files or use .# suffixes.
-c, --continue resume getting a partially-downloaded file.
--progress=TYPE select progress gauge type.
-N, --timestamping don't re-retrieve files unless newer than local.
-S, --server-response print server response.
--spider don't download anything.
-T, --timeout=SECONDS set the read timeout to SECONDS.
-w, --wait=SECONDS wait SECONDS between retrievals.
--waitretry=SECONDS wait 1...SECONDS between retries of a retrieval.
--random-wait wait from 0...2*WAIT secs between retrievals.
-Y, --proxy=on/off turn proxy on or off.
-Q, --quota=NUMBER set retrieval quota to NUMBER.
--limit-rate=RATE limit download rate to RATE.

And a host of other options.
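To get at the question about recovery after timeouts and 404s, the retry and timing options above can be combined. Something along these lines (again just a sketch, with a placeholder URL) retries each file up to 10 times, allows 30 seconds per read, pauses a couple of seconds between requests, and resumes partially downloaded files:

$ wget -r -np -k -c -t 10 -T 30 -w 2 http://www.example.org/

404s just get logged and skipped, and if a run dies part way through you can restart it with -nc or -N so files already on disk aren't fetched again.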


I've also got a home-brew Java program which does the same thing; potentially it could be modified to suit your needs.

Rich





