[BitTorrent] Re: bt2 protocol features

Elliott Mitchell ehem at m5p.com
Fri Jul 16 03:45:05 EDT 2004


From: John Prevost <j.prevost at gmail.com>

> On Fri, 16 Jul 2004 01:54:22 +0200, Olaf van der Spek <ovdspek at liacs.nl> wrote:
> > 1024?
> > That'd mean 16 mbyte of data was invalid.
> > Isn't that kinda much with piece sizes <= 2 mb?

I was suggesting grabbing 1024 piece hashes at a time. With pieces in
the 128-256KB range, that's enough hashes to cover 128-256MB of
payload. Seemed a reasonable number.

> In the case of getting all of the leaf hashes at once, there's a
> fairly large load of data that needs to be downloaded before any
> validation can begin.  For an 8GB torrent with a 16kb block size, that
> would be 10MB of data up front, which is probably unreasonable.

There isn't any point in getting block-level hashes ahead of time, only
piece-level hashes. You need to be able to verify pieces before you
advertise having them, in order to avoid passing around bad pieces; but
since you can't advertise having individual blocks, why bother
retrieving their hashes? Once you've got all the blocks of a piece, you
can verify them together against the piece hash.

If you've retrieved all the blocks of a piece and verification fails,
*then* it is worthwhile retrieving the block-level hashes to figure out
which blocks were bad (and blacklist the peers that sent them).
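
A minimal sketch of that two-stage check in Python (the 20-byte SHA-1
hashes and the flat list-of-blocks representation are assumptions here,
not anything Bram has specified):

    import hashlib

    def verify_piece(blocks, piece_hash):
        # Cheap common case: hash the reassembled piece once.
        return hashlib.sha1(b"".join(blocks)).digest() == piece_hash

    def find_bad_blocks(blocks, block_hashes):
        # Expensive fallback, only run after verify_piece() fails:
        # check each block against its leaf hash to find the liars.
        return [i for i, (blk, h) in enumerate(zip(blocks, block_hashes))
                if hashlib.sha1(blk).digest() != h]

So in the common case you pay one hash per piece, and the leaf hashes
only have to be fetched for the rare piece that fails.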

> Whenever a piece is requested, a peer would include all of the hashes
> required to validate that piece, from the top of the tree down.  So in
> the above example of block C:

> In the case of a single 8GB file (which is excessively huge for a
> single file!) the number of hashes for each piece would be 19, adding
> 2.3% of the data size to each PIECE message.  In a more reasonable
> large file, say 600MB, it would be 16, and add 2.0%.  In a 100MB file
> it would be 12, or 1.5%.  So the amount of overhead is kind of high
> and irritating, but at least does not increase much with file size.

Depends on the fan-out factor. Your 2.3% sounds pretty high, though it
isn't really that much overhead in absolute terms. The scheme is
inefficient because the same hashes get sent many times over: every
PIECE message repeats the upper levels of the path, so hashes near the
root are retransmitted with nearly every piece. Not really that big a
deal, but I cringe at the repeated copies.
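
To put numbers on the fan-out question, here's the back-of-the-envelope
overhead calculation generalized to fan-out k (k=2 roughly reproduces
John's figures; this is just arithmetic, not protocol):

    import math

    def path_overhead(file_size, block_size=16*1024, hash_size=20,
                      fanout=2):
        # Fraction of extra data if every PIECE message carries the
        # sibling hashes needed to chain one block up to the root:
        # (fanout - 1) hashes at each of ceil(log_fanout(leaves)) levels.
        leaves = math.ceil(file_size / block_size)
        depth = math.ceil(math.log(leaves, fanout))
        return depth * (fanout - 1) * hash_size / block_size

    for size in (8 * 2**30, 600 * 2**20, 100 * 2**20):
        print("%6dMB: %.1f%%" % (size / 2**20, 100 * path_overhead(size)))

That makes it easy to see how the percentage moves as the fan-out or
the block size changes.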

> Contrariwise, the send-them-all-at-once model only adds 0.12% to the
> total amount of data transferred, but you need to transfer *all* of it
> before any other verification can be done: for the 8GB file, it's 10MB
> downloaded before the real downloading can begin.  For the 600MB file,
> it's 750kB, and for the 100MB file it's 125kB.

As you don't need the block-level hashes until there is a problem, the
number is closer to 1.3MB (for an 8GB torrent with 128KB pieces, that's
64K piece hashes at 20 bytes each). Still unacceptably large for
starting out. But these are the two extremes, not the only options. A
more likely approach is to grab an upper tier of hashes, then grab
blocks of hashes down the tree until you've got a bunch of leaves and
can start work on those. If the hash blocks are embedded in pieces, you
can even upload them in exchange for download bandwidth on other
pieces...  (tricky to do, as most peers will tend to retrieve these
first, but once you've got /some/ of the leaves you can do full
upload/download of payload pieces)
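
Sketched out, assuming an internal node is the SHA-1 of its children's
concatenated hashes, and with fetch_tier as a stand-in for whatever
hypothetical wire request ends up fetching one tier at a time:

    import hashlib

    def descend(root_hash, fetch_tier, depth, fanout=2):
        # Walk the hash tree top-down. Each tier comes off the wire
        # untrusted and is checked against the (already trusted) tier
        # above before we go a level deeper. Assumes a complete tree.
        trusted = [root_hash]
        for level in range(1, depth + 1):
            children = fetch_tier(level)    # hypothetical wire request
            for i, parent in enumerate(trusted):
                group = b"".join(children[i * fanout:(i + 1) * fanout])
                if hashlib.sha1(group).digest() != parent:
                    raise ValueError("bad hash data at level %d" % level)
            trusted = children
        return trusted    # the leaf hashes

You only need to descend far enough to validate the pieces you're
actually working on; the rest of the tree can come in lazily alongside
the payload.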

> So my suspicion (from what little Bram has said so far) is: binary
> Merkle hash tree, and each PIECE message includes the hashes needed to
> validate that piece.  Why?  Because: 1) anything other than binary is
> silly, 2) if you're not going to use the tree-ness (i.e. you'll
> transfer all leaf hashes) there's no reason to use a tree at all, and
> 3) sending the validation data with every piece is much simpler than
> trying to negotiate which hashes are needed--simple is good.

True, though if you're sending multiple hashes at once, it is
worthwhile to collapse levels together: the receiver can recompute the
intermediate levels from the hashes below them, so those levels needn't
ever be sent, saving bandwidth.
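
Concretely, a batch of leaf hashes covering a complete subtree can be
folded up and checked against a single trusted ancestor, with all the
intermediate hashes recomputed locally. A binary-tree sketch (same
assumed concatenate-and-SHA-1 node rule as above):

    import hashlib

    def subtree_root(leaf_hashes):
        # Rebuild the intermediate levels from the leaves; only the
        # leaves ever cross the wire, everything above is derived.
        tier = leaf_hashes
        while len(tier) > 1:
            tier = [hashlib.sha1(tier[i] + tier[i + 1]).digest()
                    for i in range(0, len(tier), 2)]
        return tier[0]

A peer shipping 1024 leaf hashes in one go sends no intermediate hashes
at all; the receiver folds them up and compares one digest against a
known ancestor.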


From: John Prevost <j.prevost at gmail.com>
> I'd say that removing the keepalives and considering removing
> "interested" and batching up "have" messages on unchoke would do a lot
> more for this.  Disconnecting peers because they're "not good enough"
> is far too dangerous.

Why is it dangerous? You're going to expend bandwidth sending those
peers keepalives, and they'll be unable to keep their interested state
correct if you don't send them updates. Certainly you want to keep a
few inactive peers available in case an active peer drops out, but not
very many (as they otherwise roast your bandwidth).
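
For a sense of scale, here's the ongoing cost of idle connections under
the BT1 wire format (4-byte zero-length keepalives roughly every two
minutes, 9-byte HAVE messages); the peer count is just an assumed
figure:

    # Assumed: 50 idle peers, an 8GB torrent in 128KB pieces.
    IDLE_PEERS = 50
    PIECES = 65536
    have_bytes = PIECES * 9 * IDLE_PEERS   # ~28MB of HAVEs over the run
    keepalive_bps = IDLE_PEERS * 4 / 120   # bytes/sec, trivial
    print(have_bytes, keepalive_bps)

The keepalives themselves are trivial; it's the HAVE chatter to peers
that will never request anything that adds up.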


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \   (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_  \   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/



