  • From: Seth Alan Woolley <seth AT positivism.org>
  • To: Thomas Orgis <thomas-forum AT orgis.org>
  • Cc: sm-users AT lists.ibiblio.org
  • Subject: Re: [SM-Users] first experiences and problems
  • Date: Wed, 14 Dec 2005 13:52:04 -0800

I'll try to explain a little how things work:

Downloads are done in parallel, and casting of a spell continues once
that spell's summon completes and clears a lock on the spell.

To get to continue_casting, summon has to complete with a good return
code, but source verification is not done in summon; it's done at cast
time. This buys us a number of things: you can have an NFS-shared
source cache and not worry about summoning or integrity at download
time, since everything is checked at the cast stage anyway. What you
propose is to move verification earlier, to immediately after
downloading and before summon exits, so that a bad file can be detected
and summoning retried until we get the correct one.
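
In rough terms, the current split looks like this (a minimal C sketch;
summon, unpack_file, and continue_casting are the real stage names
from above, but the signatures and stubs are invented for
illustration):

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical stand-ins for the real sorcery stages. */
  static bool summon(const char *spell)     { (void)spell; return true; } /* download only */
  static bool unpack_file(const char *file) { (void)file;  return true; } /* verify + unpack */
  static void continue_casting(const char *spell) { printf("casting %s\n", spell); }

  /* Current flow: summon only has to return success; integrity is
   * checked later, at cast time, which is why an NFS-shared source
   * cache needs no re-check at download time. */
  int main(void)
  {
      const char *spell = "foo", *file = "foo.tar.bz2";
      if (!summon(spell))
          return 1;            /* download failed */
      if (!unpack_file(file))
          return 1;            /* a bad source is detected only here; the
                                  proposal moves this check into summon */
      continue_casting(spell);
      return 0;
  }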

I'm thinking instead, if we go down your path, we'd provide an option
to unpack_file (where the integrity check happens) so that on failure
it queries for a redownload; see the sketch after this list. Even
then, the redownload might fail for all sorts of reasons:

* some of my computers only check one official mirror I keep
internally for sources, and have egress firewalling in place to
prevent any other type of access to the web.
* there may not be a good copy even on our local mirrors.
* the spell author may have mispasted the hash or sig, so there may be
no valid source at all.
* the internet might be down right now.
* the file is redownloaded, but picks up bit-errors in transport.
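
A minimal sketch of that option (the download, integrity, and ask_user
helpers and the retry cap are assumptions for illustration, not
existing sorcery code):

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical helpers, stubbed so the sketch compiles. */
  static bool download(const char *url, const char *dest) { (void)url; (void)dest; return true; }
  static bool integrity(const char *file) { (void)file; return false; }
  static bool ask_user(const char *q) { printf("%s [y/N] ", q); return getchar() == 'y'; }

  /* Proposed option: when the cast-time integrity check fails, offer
   * to redownload, but cap the attempts, because the failure may not
   * be transient (bad hash in the spell, firewalled host, dead
   * mirror, broken line). */
  static bool unpack_with_retry(const char *url, const char *file, int max_retries)
  {
      for (int attempt = 0; ; attempt++) {
          if (integrity(file))
              return true;     /* good copy: go on and unpack */
          if (attempt >= max_retries || !ask_user("redownload corrupt source?"))
              return false;    /* fail out so the spell can be fixed */
          download(url, file); /* fetch a fresh copy and re-check */
      }
  }

  int main(void)
  {
      return unpack_with_retry("http://example.org/foo.tar.bz2",
                               "foo.tar.bz2", 2) ? 0 : 1;
  }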

There are all sorts of heuristics that summon already applies. We keep
a list of mirrors and dynamically try them until we find a copy. When
sourceforge has messed up mirrors, there's little we can do to make it
faster other than remembering which mirror was successful last time
and trying it first next time, and that has its own drawback: it can
overload one of their better-performing (uptime-wise) mirrors. We can
also use netselect to find better mirrors. We just have to remember
that each change affects a large system of heuristics and tweaks, and
it all has to be taken into account.
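
For illustration only, the remember-the-last-good-mirror idea might
look like this (the fetch helper and the list handling are made up,
not summon's actual code):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  /* Hypothetical fetch, stubbed so the sketch compiles. */
  static bool fetch(const char *mirror, const char *file)
  {
      printf("trying %s for %s\n", mirror, file);
      return false; /* pretend every mirror fails */
  }

  /* Try the last-known-good mirror first, then cycle through the
   * rest.  Remembering the last success speeds up later summons, at
   * the cost of piling load onto the mirrors with the best uptime,
   * which is the drawback mentioned above. */
  static bool fetch_from_mirrors(const char **mirrors, size_t n,
                                 size_t *last_good, const char *file)
  {
      for (size_t i = 0; i < n; i++) {
          size_t m = (*last_good + i) % n; /* start at the remembered one */
          if (fetch(mirrors[m], file)) {
              *last_good = m;
              return true;
          }
      }
      return false;
  }

  int main(void)
  {
      const char *mirrors[] = { "ftp.a.example", "ftp.b.example" };
      size_t last_good = 0;
      return fetch_from_mirrors(mirrors, 2, &last_good, "foo.tar.bz2") ? 0 : 1;
  }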

As I said, downloading is a completely separate system from integrity
checking. We'd have to integrate integrity checking into summon, or
have integrity checking do additional summoning, to get what you want.
Personally, the benefit you propose doesn't seem to outweigh the cost
of tangling the two codebases. It's definitely possible, but it brings
very little advantage.

Why do I think this?

One reason is that hashes are plainly the inferior verification
mechanism. With a gpg check, all legitimate sources can be changed
upstream easily, and mirrors can even be mostly out of sync and still
pass, as long as the signature and the original source are kept
together. Secondly, our mirrors should be a last resort for files: the
more we burden the mirrors, the more we burden our infrastructure and
costs (I host a mirror; I'm aware of the costs). Thirdly, an invalid
source download is important to notice. When one happens, either
you're being hacked (and might want to know), the line is really
terrible, or the sources are changing willy-nilly. In any of those
cases I'd like it to fail out so we can fix it. In almost every case,
it's a definite problem that's fixable.
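
To make the contrast concrete, a rough sketch (gpg and sha1sum are
real tools; the file layout and the NULL-means-gpg convention are
assumptions for the example):

  #include <stdio.h>
  #include <stdlib.h>

  /* A detached .sig travels with the tarball, so a stale mirror still
   * verifies as long as the pair matches and the key is trusted.  A
   * hash pinned in the spell breaks the moment upstream re-rolls the
   * tarball. */
  static int verify_source(const char *file, const char *pinned_sha1)
  {
      char cmd[512];
      if (pinned_sha1 == NULL) /* gpg case: "<file>.sig" next to the file */
          snprintf(cmd, sizeof cmd, "gpg --verify %s.sig %s", file, file);
      else                     /* hash case: check against the spell's value */
          snprintf(cmd, sizeof cmd, "echo '%s  %s' | sha1sum -c -",
                   pinned_sha1, file);
      return system(cmd) == 0;
  }

  int main(void)
  {
      return verify_source("foo.tar.bz2", NULL) ? 0 : 1;
  }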

You're always able to download by hand and ignore the checks; at that
point it's a conscious act on the part of the administrator. A
summon/cast -d might be all that's needed to fix it, and if that's the
case, then it really shouldn't be our problem to keep retrying. In
fact, the type of error summon hits while fetching (e.g. an http 404,
a 503, or a 200 with a corrupted body) greatly affects whether a later
retry can succeed, and acting on it would require more knowledge in
the url handlers.
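
Any retry policy would have to classify failures along these lines (a
sketch; the categories and the enum are invented, nothing like this is
currently exposed by the url handlers):

  #include <stdio.h>

  /* Whether a redownload is worth attempting depends on how the fetch
   * failed, which is exactly the knowledge the url handlers would
   * have to surface. */
  enum retry_hint { RETRY_SAME, RETRY_OTHER_MIRROR, GIVE_UP };

  static enum retry_hint classify(int http_status, int body_corrupt)
  {
      if (http_status == 404)
          return RETRY_OTHER_MIRROR; /* this mirror lacks the file */
      if (http_status == 503)
          return RETRY_SAME;         /* transient overload: wait and retry */
      if (http_status == 200 && body_corrupt)
          return RETRY_OTHER_MIRROR; /* bad copy or a terrible line */
      return GIVE_UP;                /* likely a bad hash/sig in the spell */
  }

  int main(void)
  {
      printf("404 -> %d\n", classify(404, 0));
      return 0;
  }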

What I'd suggest is that you read the code and come up with a patch
that implements it the way you want, let others test it, get feedback
to see if it's worth it, let us review it for design, complexity, and
possible improvements, and go from there.

Most of the developers are working on areas of the code that they would
like to see changed themselves, so those who make suggestions are often
the people who are doing the coding, and I hope you don't mind doing
that at all ;)

Seth

On Sun, Dec 11, 2005 at 04:34:56PM +0100, Thomas Orgis wrote:
>
> > signatures. If you have a suggestion of an algorithm that would work
> > well without overburdening some part of the mirror chain, perhaps by
> > falling back directly to our mirrors (which should be accurate to our
> > own grimoire in any case) but failing after the second failed integrity
> > check, then we might be able to implement it.
>
>
> Hm... what about something like this (in pseudo-C/PERL):
>
>
> if(not download_there(file) or integrity(file) == false) # we don't have it already handy
> {
>     if(download(official_file) == false or integrity(official_file) == false) # official source fails
>     {
>         int i = -1
>         while(download(mirror_file[++i]) == false)
>         {
>             if(i == overall_attempt_limit) break
>         }
>     }
> }
>
> if(not download_there(file) or integrity(file) == false)
> {
>     print("unable to get (valid) file... try later or in a different world")
> }
> else
> {
>     continue_casting()
> }
>
>
> What I am not sure about: Do we have multiple "official" urls in general? Then it would just mean:
>
> 1. try to get one successful download from official sources
> 2. if unable to or download invalid: try to get one successful download from mirrors
> 3. if unable to or download invalid: be screwed
>
> One could think about these steps being tunable:
>
> Give up after n total download attempts.
> Give up after m invalid downloads...
>
>
> Additionally, somewhat related: the download from sourceforge mirrors annoyed me a bit before I set the default mirror for that to something sensible for me. In the sf case, where we have a list of mirrors available: does it make sense to try one mirror three times on timeouts (normally producing three long pauses while waiting for the timeout)? Wouldn't it be better to cycle through the list right after the first timeout?
>
>
> Thomas.

--
Seth Alan Woolley [seth at positivism.org], SPAM/UCE is unauthorized
Quality Assurance Team Leader & Security Team: Source Mage GNU/linux
Linux so advanced, it may as well be magic http://www.sourcemage.org
Key id 63C1E02F = E07A FB0E 5925 CE4A 6526 2AD5 1782 FEC2 63C1 E02F
