sm-sorcery-bugs - [SM-Sorcery-Bugs] [Bug 1777] wget replacement (proz) cool patch included

  • From: bugzilla-daemon AT metalab.unc.edu
  • To: sm-sorcery-bugs AT lists.ibiblio.org
  • Subject: [SM-Sorcery-Bugs] [Bug 1777] wget replacement (proz) cool patch included
  • Date: Sat, 5 Jun 2004 10:52:20 -0400

http://bugs.sourcemage.org/show_bug.cgi?id=1777





------- Additional Comments From acedit AT armory.com 2004-06-05 10:52 -------
So while I like the idea of using proz, there is an inherent
limitation in our downloading infrastructure that makes fully
utilizing proz infeasible without a significant recode or a
significant hack. Obviously we're not doing the latter on my
watch.

What makes proz useful is giving it several mirrors to fetch from
at once, so that it downloads from each of them in parallel.
Otherwise you're just being annoying and grabbing several download
pipes from the same system. If you've got an extremely fast network
that /might/ get you somewhere; on the other hand, if the mirror
throttles bandwidth per connection, they probably wouldn't like you
to subvert that by opening multiple connections, and on our third
hand they may throttle per IP, which would get you nowhere other
than being annoying.

One thing to keep in mind is that cast does background downloading,
so you usually only pay for the first download in a series. The
benefit you get from improved download speed with proz is pretty
limited when you think about it.

Now for a look at sorcery, and how doing this right would work:

In sorcery we have the liburl frontend to url handling, then individual
url handlers. The assumptions were that a url handler could only be
given one url at a time, that a url had only one mechanism for
downloading, and that whatever that mechanism was, it wouldn't
necessarily support multiple urls, possibly of different protocols,
itself. Proz, of course, can download from http and ftp at the same
time (right?).
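
To make that limitation concrete, here is a rough sketch of the current
one-url-at-a-time shape. The function names are made up for illustration;
they are not the real liburl entry points.

    # Illustrative only: each url is handled on its own, so the download
    # tool never sees more than one mirror at a time.
    url_download() {
      case $1 in
        http://*|https://*|ftp://*) wget "$1" ;;          # one url, one wget run
        cvs://*) echo "hand $1 to the cvs handler" ;;     # placeholder
        *)       echo "no handler for $1" >&2; return 1 ;;
      esac
    }

    for url in "$@"; do
      url_download "$url" && break    # stop at the first mirror that answers
    done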

So really what we need to do is separate the url handling from the
download program handling. The model I came up with is to still have
urls passed one by one to url handlers, but their purpose would be to
parse them and drop them into download handler buckets. The http, ftp
and https handlers would probably all drop their urls into a wget or
proz bucket, while cvs would drop into a cvs downloader bucket.
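
In shell terms the bucket idea might look something like the sketch
below; the variable and function names are illustrative, not sorcery's
real ones.

    # Url handlers only classify; each download handler accumulates the
    # urls it knows how to fetch.
    PROZ_BUCKET=""
    CVS_BUCKET=""

    url_http()  { PROZ_BUCKET="$PROZ_BUCKET $1"; }   # http, https and ftp all
    url_https() { PROZ_BUCKET="$PROZ_BUCKET $1"; }   # feed the same proz bucket
    url_ftp()   { PROZ_BUCKET="$PROZ_BUCKET $1"; }
    url_cvs()   { CVS_BUCKET="$CVS_BUCKET $1"; }     # cvs gets its own downloader

    url_dispatch() {
      case $1 in
        http://*)  url_http  "$1" ;;
        https://*) url_https "$1" ;;
        ftp://*)   url_ftp   "$1" ;;
        cvs://*)   url_cvs   "$1" ;;
        *)         echo "unhandled url: $1" >&2 ;;
      esac
    }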

So after all the urls have been passed to the url handlers, each
download handler would have a list of urls to get, pre-chewed in some
way by the url handlers; then we could invoke each download handler
one by one, in some order of preference. The proz handler would happily
try to get all the urls at once, the cvs handler would do whatever foo
it needed, etc. At some point the file gets downloaded and we move on.
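
Continuing the sketch above, the invocation pass over the buckets could
look roughly like this. It assumes proz will accept several mirror urls
in one call (the same assumption made above), and the cvs branch is only
a placeholder.

    download_all() {
      for url in "$@"; do url_dispatch "$url"; done

      if [ -n "$PROZ_BUCKET" ]; then
        # assumed: proz takes several mirror urls in one invocation;
        # if it fails, fall back to trying each url with wget
        if ! proz $PROZ_BUCKET; then
          for url in $PROZ_BUCKET; do wget "$url" && break; done
        fi
      fi

      if [ -n "$CVS_BUCKET" ]; then
        for url in $CVS_BUCKET; do
          echo "hand $url to the cvs downloader"   # placeholder
        done
      fi
    }

    download_all http://mirror1.example/foo.tar.bz2 \
                 ftp://mirror2.example/foo.tar.bz2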

Unfortunately, while this whole thing is really cool and rigorous,
and we would have the best downloader on the block... what we've got
now is "pretty good". There's a bunch of other things that ought to be
fixed first, so while I'm not ignoring this bug, the true solution
won't be here for a while.




------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.


