Have compression (was Re: Standards (was [BitTorrent] Back to Merkle Hash Trees...))

Justin Cormack justin at street-vision.com
Wed Feb 9 05:12:26 EST 2005



OK, so we are getting into a technical discussion. Good.

My view is that we should be doing protocol design rationally. Anyone can
make three protocols before breakfast and they might even work, but I am more
interested in designing good ones for rational reasons.

OK, so in summary: with Merkle trees and verification at the 1k/4k (or
whatever) level, having a specified piece size is merely a way of compressing
have messages and bitmaps. It's a bit of a hangover from its previous use in
BT1, where it was a combination of reducing have traffic and keeping the
torrent file small.

Olav's protocol lets you request (and verify) in 32k chunks, but you can only
send have messages in this predetermined size set by whoever made the
torrent. Let's junk the fixed piece size and see what the options are.
  
Here is one suggestion.

Let's change the have (and request, etc.) messages to look like:

uint32_t clen
uint8_t  message_type
uint8_t  log
uint32_t piece

i.e. 10 bytes. The piece size is no longer fixed but specified by 1 << log
(there may be a minimum, e.g. 1k/4k). Have messages should be as compact as
possible, so if you have the whole file (the initial message from a seed) you
send, for a 4GB file, log=32, piece=0. We can abolish the bitmap message and
just send initial have messages, as clients will mostly have significant
locality. [They could be longer, but we want to encourage some locality for
performance reasons.]
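
To make the framing concrete, here is a minimal sketch in C of packing that
10-byte have message. The big-endian wire order, the BT1-style convention
that clen counts everything after the length prefix (here 6), and the message
id are all my assumptions, not part of the proposal:

#include <stdint.h>

#define MSG_HAVE 4   /* hypothetical id, reusing BT1's have */

/* Pack a variable-size have: "I have the piece of size 1 << log
 * bytes starting at byte offset piece * (1 << log)". */
static void pack_have(uint8_t buf[10], uint8_t log, uint32_t piece)
{
    uint32_t clen = 6;                 /* type + log + piece */
    buf[0] = (uint8_t)(clen >> 24);
    buf[1] = (uint8_t)(clen >> 16);
    buf[2] = (uint8_t)(clen >> 8);
    buf[3] = (uint8_t)clen;
    buf[4] = MSG_HAVE;
    buf[5] = log;
    buf[6] = (uint8_t)(piece >> 24);
    buf[7] = (uint8_t)(piece >> 16);
    buf[8] = (uint8_t)(piece >> 8);
    buf[9] = (uint8_t)piece;
}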

Now this doesn't buy us anything if we send a have message after each chunk,
but we now have the ability to vary the piece size dynamically. The first
thing to think about is how large a piece we can request in one go: clearly
if we request really big pieces (the whole file!) we send fewer have messages,
but we increase their latency.
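
To see what "fewer have messages" means concretely, here is a sketch of the
greedy decomposition a client might use: announce a contiguous verified run
[start, end) as the smallest set of aligned power-of-two pieces, largest
first. MIN_LOG and send_have are illustrative stand-ins, not part of the
proposal:

#include <stdint.h>
#include <stdio.h>

#define MIN_LOG 10   /* illustrative 1k minimum piece size */

static void send_have(uint8_t log, uint32_t piece)
{
    printf("have log=%u piece=%u\n", log, piece);   /* stub */
}

/* start and end are byte offsets, both multiples of 1 << MIN_LOG. */
static void announce_range(uint64_t start, uint64_t end)
{
    while (start < end) {
        uint8_t log = MIN_LOG;
        /* grow the piece while it stays aligned at start and fits */
        while (log < 63 &&
               (start & ((1ULL << (log + 1)) - 1)) == 0 &&
               start + (1ULL << (log + 1)) <= end)
            log++;
        send_have(log, (uint32_t)(start >> log));
        start += 1ULL << log;
    }
}

For example, announce_range(1024, 5120) comes out as three messages (log=10
piece=1, log=11 piece=1, log=10 piece=4), while a seed's announce_range over
the whole file collapses to one.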

The next thing we notice is that we don't have to send messages at the same
rate to all peers. If we are choking a peer, we don't need to send it any
messages until the point we unchoke, so we can batch them. Another strategy
is to send have messages for rare pieces immediately to the peers that don't
have them, while coalescing common ones.
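
As a sketch of that batching strategy (the names, the rarity threshold and
the per-peer bookkeeping are all illustrative; announce_range is the greedy
sender from the sketch above, here imagined as writing to one peer's
connection):

#include <stdint.h>

#define RARE_THRESHOLD 2   /* illustrative: "rare" = held by few peers */

struct peer {
    int choked_by_us;
    uint64_t pend_start, pend_end;   /* coalesced run awaiting flush */
};

void announce_range(uint64_t start, uint64_t end);

/* Called when [start, end) verifies; rarity = peers already holding it.
 * A real client would also skip peers that already have the piece. */
static void queue_have(struct peer *p, uint64_t start, uint64_t end,
                       int rarity)
{
    if (!p->choked_by_us || rarity <= RARE_THRESHOLD) {
        announce_range(start, end);          /* rare or unchoked: send now */
        return;
    }
    if (start == p->pend_end) {              /* adjacent: grow the batch */
        p->pend_end = end;
    } else {
        if (p->pend_end > p->pend_start)     /* non-adjacent: flush old run */
            announce_range(p->pend_start, p->pend_end);
        p->pend_start = start;
        p->pend_end   = end;
    }
}

/* On unchoke, flush whatever was batched while the peer was choked. */
static void flush_on_unchoke(struct peer *p)
{
    p->choked_by_us = 0;
    if (p->pend_end > p->pend_start)
        announce_range(p->pend_start, p->pend_end);
    p->pend_start = p->pend_end = 0;
}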

Another thing we notice is that it might now make sense to change the unchoke
message to carry a single log byte, meaning "I will let you download at least
this much now, maybe more". This gives us a piece size guideline and may
help the choking protocol.
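
Under the same assumed framing, that modified unchoke is only six bytes;
again the message id and length convention are my guesses:

#include <stdint.h>

#define MSG_UNCHOKE 1   /* hypothetical id, reusing BT1's unchoke */

static void pack_unchoke(uint8_t buf[6], uint8_t log)
{
    uint32_t clen = 2;              /* type + log */
    buf[0] = (uint8_t)(clen >> 24);
    buf[1] = (uint8_t)(clen >> 16);
    buf[2] = (uint8_t)(clen >> 8);
    buf[3] = (uint8_t)clen;
    buf[4] = MSG_UNCHOKE;
    buf[5] = log;                   /* "at least 1 << log bytes now" */
}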

The end game is another time when making the piece size really small might
help: we could get rid of the endgame protocol and cancel messages if we can
shrink the piece size down far enough.

In fact the beginning and the end both symmetrically require small pieces,
which suggests that studying this further will give us good insight into
how to select sizes.

Clearly this needs analysis to see how much space it can save, or how much
it can improve latency over a fixed piece size. 
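
To put rough numbers on it (under the 10-byte framing above, so these are
assumptions, not measurements): a 4GB file at a fixed 32k piece size is
131072 pieces, so a seed's bitmap is 16k and every completed piece costs a
have message to each peer. With variable sizes the same seed announcement is
a single 10-byte message, and a leecher holding a contiguous run aligned to
the 1k minimum needs at most about 2 * (32 - 10) messages for it, however
large it is, with batching shrinking that further.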

This is only a suggestion. We have isolated a problem, realised it is a
tradeoff (we can reduce overhead, but only by increasing latency) and changed
the protocol so that clients can make this tradeoff dynamically rather than
having it hard-coded into the protocol. If we find optimum fixed values (or
values as a function of, say, file size) we can recommend or mandate them in
a standard. If it turns out to depend on network conditions, it can stay
dynamic.


 
Yahoo! Groups Links

<*> To visit your group on the web, go to:
    http://groups.yahoo.com/group/BitTorrent/

<*> To unsubscribe from this group, send an email to:
    BitTorrent-unsubscribe at yahoogroups.com

<*> Your use of Yahoo! Groups is subject to:
    http://docs.yahoo.com/info/terms/
 





More information about the BitTorrent mailing list