[BitTorrent] Re: bt2 protocol features

Elliott Mitchell ehem at m5p.com
Sat Jul 17 02:28:05 EDT 2004


> From: John Prevost <j.prevost at gmail.com>
> couple of really nice features.  The first feature is that it doesn't
> need additional (possibly troublesome) protocol support for additional
> commands or for "special" blocks or pieces that contain the hashes. 

Both approaches require protocol support of some flavor. Sending hashes
with the pieces adds overhead to the piece messages and in doing so makes
a pretty serious structural change to them.

I'm suggesting packing hashes into a piece because that is a minimal
change to the protocol. Except for the client's handling of the magic
pieces, such a piece is indistinguishable from any other piece.
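
To make the "minimal change" concrete, here is a small sketch of what a
magic piece could look like on the wire: just concatenated hashes, packed
and unpacked like any other piece payload. The names and the 20-byte
SHA-1 digest size are my own assumptions for illustration, not part of
any proposal.

HASH_LEN = 20  # SHA-1 digest size in bytes (assumed)

def pack_hash_piece(hashes):
    """Concatenate block hashes into a payload that travels as an ordinary piece."""
    return b"".join(hashes)

def unpack_hash_piece(piece_data):
    """Split a received magic piece back into individual 20-byte hashes."""
    if len(piece_data) % HASH_LEN != 0:
        raise ValueError("magic piece length is not a multiple of the hash size")
    return [piece_data[i:i + HASH_LEN]
            for i in range(0, len(piece_data), HASH_LEN)]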

> The second is that it can be done without either peer needing to know
> any state of the other except "the other peer has this block" (which
> admittedly is known at a lower resolution than that.)  The third is
> that the receiving peer needs no state to record hash information:
> each received block carries enough information to validate it in a

Not really a positive for your approach, as either method requires some
state; it is just different state. With sending them as a block, you
simply keep the block around.

With sending them with the pieces, you need to keep track of the hashes
so you can send the verification information to your peers. The hashes
*cannot* be computed from the data you already have. So you're stuck with
extra data either way.
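
To put a rough number on that extra data (my own back-of-the-envelope
figures, assuming 20-byte SHA-1 hashes, 16 KiB blocks, and a complete
binary tree over the blocks):

HASH_LEN = 20            # SHA-1 digest size in bytes (assumed)
BLOCK_SIZE = 16 * 1024   # assumed block size

def tree_hash_bytes(total_bytes):
    """Total size of a complete binary hash tree over the torrent's blocks."""
    leaves = (total_bytes + BLOCK_SIZE - 1) // BLOCK_SIZE
    nodes = 2 * leaves - 1   # leaves plus internal nodes
    return nodes * HASH_LEN

print(tree_hash_bytes(2 ** 30))   # roughly 2.6 MB of hashes for a 1 GiB torrent

That is around 2.6 MB of hash state for a 1 GiB torrent, and some portion
of it has to be kept around no matter which way the hashes are shipped.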

> vacuum.  And finally, it's desirable to know at the individual block
> level whether the data is valid or not, without having to go into some
> sort of recovery mode.

Note that the granularity of hash verification is a separate issue from
which retrieval method is used. Retrieving them in blocks has no bearing
on whether you typically verify at the block or piece level, unlike your
proposal, which requires block-level verification.

> Which reminds me of the other nice feature of having each block
> message carry all of the information needed to validate the block, as
> well as the data: the data appears bad if either the hashes or the
> data is tampered with.  In short, there's pretty much no bookkeeping
> in this scenario, whereas only validating at the block level once a
> problem is known requires lots of bookkeeping and possible repeated
> tries to fetch hashes from different peers, etc.

So, now the case for grabbing blocks of hashes: no recomputation. When
sending them with the blocks, you've got to do the full recomputation
back to the root of the tree. You can cache parts of the tree as known
good, but this requires extra work/space. With blocks of hashes, the
expected mode is to verify whole blobs of hashes at once, which lends
itself to no recomputation.
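
To illustrate the difference in work (a sketch under my own assumptions
about the tree layout, not the bt2 wire format): verifying a block that
arrives with its sibling hashes means hashing all the way back up to the
root, while verifying a received blob of hashes takes a single hash over
the blob.

import hashlib

def sha1(data):
    return hashlib.sha1(data).digest()

def verify_block_via_root(block, sibling_path, leaf_index, root):
    """Per-block scheme: recompute the whole path from the leaf up to the root.
    sibling_path is the list of sibling hashes, bottom-up."""
    node = sha1(block)
    index = leaf_index
    for sibling in sibling_path:
        node = sha1(node + sibling) if index % 2 == 0 else sha1(sibling + node)
        index //= 2
    return node == root

def verify_hash_blob(child_hashes, parent_hash):
    """Blocks-of-hashes scheme: one hash over the received blob, no tree walk."""
    return sha1(b"".join(child_hashes)) == parent_hash

The per-block path walk is exactly what caching "known good" subtrees
would shortcut, at the cost of the extra bookkeeping mentioned above.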


It should be noted that a hash failure is an *extraordinary* case. The
hashes are extraneous data; in the common case they should never be
needed. Though the checksums used by TCP and IP are simple, they are
effective.

There are *only* two even slightly likely reasons why a hash will fail:
a software (or hardware) failure on the client, or malicious corruption.
The first is quite possible, since a flaky processor or memory can kill
anything; also, certain systems are not known for the quality of their
TCP stacks. The second would be caused by an evil DoS client, or by evil
leecher clients trying to get maximal downloads without any actual
uploads. In theory the mere existence of the hashes should discourage
leechers, but you do have to be prepared to enforce them. OTOH, folks
interested in doing a DoS are more likely to attack the tracker than the
clients, and the hashes should limit the damage.


> Well, let's presume for the moment that 35 is a healthy number, that
> more than 35 is enough that the trade-off of a larger pool for more
> bandwidth isn't worth it, and that less than 35 is small enough that
> the pool is too small for your connection to be healthy.  (In reality,
> I think it's trickier than that: I think that adjusting the pool size
> is a useful tweak to allow to users: while some will foolishly adjust
> it the wrong way for their connection, it's a fairly important
> parameter to be able to tune to manage the trade-off between overhead
> and pool size.)

Quite useful, but quite difficult for an end user to adjust properly
(the optimum is non-obvious at first glance). Given this, the control
should be placed in advanced settings with a strong recommendation not
to adjust it. This also makes it a good candidate for automatic
adjustment if possible (and I /suspect/ this IS possible).

Also note that the comment was with respect to the suggestion of batching
HAVEs or sending a full bitfield on unchoke (doing this is nearly
equivalent to disconnecting the peer).

> I imagine that it's possible to design things so that disconnecting
> peers would work, but it's going to be quite a balancing act to figure
> out what the strategy should be--and on top of that, you'll probably
> want to then connect to more peers to get back up to a healthy-sized
> peer pool after a short time.

Start connecting to peers: any time you add another peer and your download
bandwidth increases, you haven't got enough peers (though caution is
needed, as at the start you'll be relying on the optimistic slots). Once
you've got enough peers to fill your downstream or upstream, connect to
a few extras to have immediate coverage when peers disconnect and to keep
your piece coverage good. Note that I am suggesting being /more/
aggressive about disconnecting, *not* disconnecting all idle connections.

Haven't tried it yet. A real world test may show it performs horribly,
but until that point it seems worthy of testing.
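
A minimal sketch of that growth rule follows; every name and threshold
here is hypothetical, it is not code from any client.

def want_more_peers(samples, min_gain=1024):
    """samples: recent (peer_count, download_rate_bytes_per_sec) pairs,
    oldest first. Keep growing the pool while each added peer still buys
    measurable bandwidth."""
    if len(samples) < 2:
        return True                      # too little data; keep growing
    (old_peers, old_rate), (new_peers, new_rate) = samples[-2], samples[-1]
    if new_peers <= old_peers:
        return True                      # the pool hasn't actually grown yet
    gain_per_peer = (new_rate - old_rate) / (new_peers - old_peers)
    return gain_per_peer > min_gain

def target_pool_size(current_peers, link_saturated, spare=5):
    """Once the link is full, keep only a small cushion of extra peers
    for coverage when others disconnect."""
    return current_peers + spare if link_saturated else current_peers + 1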


> data."  (Although I am convinced that preferring to download rare
> pieces does not perform appreciably better than simple random
> selection.)

Downloading rare pieces doesn't help bandwidth; it keeps torrents alive.
If a piece is rare and the one peer that has it disconnects, you're dead.
The one problem is that if a piece suddenly becomes rare, everyone will
pile on and it will be a long time before /anyone/ completes the piece.

For a peer that is starting out, pieces of average commonality are best:
you'll be able to use multiple peers to obtain them, and once you have
one, a fair number of peers will be interested in it. There should be a
continuum here; as you get a greater percentage of the pieces, the rare
pieces become more desirable because they're harder to obtain, and by
that point you've got enough pieces that most peers will be interested in
*something*.
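
One way to express that continuum (my own weighting, purely
illustrative): score candidate pieces by closeness to average
availability early on, and by rarity once completion is high.

import random

def pick_piece(availability, have, completion):
    """availability: piece index -> number of peers offering the piece.
    have: set of piece indices we already hold.
    completion: fraction of the torrent we have, 0.0 through 1.0."""
    candidates = [p for p, count in availability.items()
                  if p not in have and count > 0]
    if not candidates:
        return None
    average = sum(availability[p] for p in candidates) / len(candidates)

    def score(piece):
        near_average = -abs(availability[piece] - average)  # early preference
        rarest = -availability[piece]                        # late preference
        return (1.0 - completion) * near_average + completion * rarest

    best = max(score(p) for p in candidates)
    # Random tie-break so every peer doesn't pile onto the same piece.
    return random.choice([p for p in candidates if score(p) == best])

The random tie-break is there for the pile-on problem mentioned above.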

It should also be noted that there is an argument that you should ask
*only* for rare pieces from a client that has them. This helps locality
of reference on the client, and again makes stranding the torrent less
likely. Also note that clients with rare pieces should be exempted from
disconnection...


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \   (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_  \   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/



