[bittorrent] Re: HAVE messages

David Mott dpmott at sep.com
Thu Jun 23 03:30:58 EDT 2005



On Wed, 22 Jun 2005, Elliott Mitchell wrote:


> 4 bytes for the message length (interesting, given that no message other
> than the bitfield can even approach 64K). 1 byte for the message type.
> 4 bytes for the piece number. So 9 bytes total for a HAVE message.

Agreed.

> A MULTIHAVE message would share the 4 byte message length, and 1 byte
> message type. If we stick with the theme of only long integers, we'd have
> a 4 byte have count. Then 4 bytes for each piece. So 9 bytes plus 4 bytes
> for each piece.

I wasn't thinking that there was a need for a "have count".  Since each
piece index is the same fixed size (4 bytes), you can figure out how many
indices you have from the message length alone: (length - 1) / 4.

This also makes the case of a MULTIHAVE with one index identical to the
regular HAVE message (except for the 1 byte message type).  This should
lend itself to code reuse as well.
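
For concreteness, here's a minimal sketch of that framing in Python (the
0x0D message ID is made up for illustration; it's not part of the wire
protocol):

    import struct

    MULTIHAVE = 0x0D  # hypothetical message ID, not in the spec

    def encode_multihave(indices):
        # <length><type><index>*: the 4-byte length covers the type
        # byte plus the payload, same framing as every other message
        payload = struct.pack(">%dI" % len(indices), *indices)
        return struct.pack(">IB", 1 + len(payload), MULTIHAVE) + payload

    def decode_multihave(body):
        # body = type byte + payload (length prefix already stripped);
        # no have count needed, just divide
        count = (len(body) - 1) // 4
        return struct.unpack(">%dI" % count, body[1:])

For one index that's 9 bytes on the wire, identical to a regular HAVE.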

> For one piece, we use 9 bytes with a conventional HAVE and 13 with
> MULTIHAVE message. At two pieces, we use 18 bytes with HAVE and 17 with
> MULTIHAVE. At three it becomes 27 and 21. Until we're averaging 3 pieces
> per message, the savings aren't much. So the question becomes, how many
> HAVE messages are sent?

With every additional index that we pack in, we're saving 5 bytes.

The question of how many HAVE messages are being sent is still valid.  And
since I was suggesting a time-based dispatch (e.g. every few minutes or
so), it would depend on conditions at the time.  But since I'm defining a
MULTIHAVE message for one index to be the same size as a HAVE message,
there is no downside there.
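
A quick sanity check of the arithmetic under my count-less definition
(Python; compare with the numbers above, which assume a 4-byte have
count):

    def have_bytes(n):
        return 9 * n        # n separate HAVE messages, 9 bytes each

    def multihave_bytes(n):
        return 5 + 4 * n    # 4-byte length + 1-byte type + n indices

    for n in (1, 2, 3, 10):
        print("%2d pieces: %3d bytes as HAVEs, %3d as one MULTIHAVE"
              % (n, have_bytes(n), multihave_bytes(n)))
    # 1: 9 vs 9; 2: 18 vs 13; 3: 27 vs 17; 10: 90 vs 45.
    # Never worse, and 5 bytes saved for every index past the first.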

> If you're in a heavily seeded swarm, your peers will never send HAVE
> messages as seeds will only need BITFIELD messages to advertise their
> status. In a fresh swarm with one seed, your peers will end up sending
> HAVE messages for every piece. The average is likely to be one HAVE
> received from each peer for every two pieces. Suppression of unneeded
> HAVEs would further drop this to one HAVE for every four pieces.

Yes.

This reminds me that I've wanted an "I'm A Seed" message, as an
alternative to sending a full bitfield.
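
Something like this sketch, say (the 0x0E ID and the name are invented
here; a seed would send this 5-byte message in place of a bitfield):

    import struct

    IM_A_SEED = 0x0E  # hypothetical message ID

    def encode_im_a_seed():
        # <length=1><type>: no payload, meaning "I have every piece"
        return struct.pack(">IB", 1, IM_A_SEED)

    def on_im_a_seed(peer, num_pieces):
        # receiver treats it as a bitfield with every bit set
        peer.bitfield = [True] * num_pieces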


> Most clients request blocks of 32K at a time (while the baseline defaults
> to 16K), for which it takes 8 messages to complete a piece. Multiplying
> those two numbers, we should expect to receive a wave of HAVE messages
> every 32 blocks we download. So with 32 peers each HAVE message will be
> interleaved with one other block, resulting in zero savings from
> MULTIHAVE. At 64 peers MULTIHAVE would save some, but not much.
>
> The baseline client only seeks 20 peers and refuses connections at 55
> peers. Generally 50 peers is the worst case used for discussions. With
> these sorts of numbers, HAVE suppression seems profitable, while MULTIHAVE
> doesn't.

Even with my definition of MULTIHAVE, it doesn't save *much*.  But every
little bit helps.  I think that the big benefit would come from delivering
grouped HAVE messages periodically.  Each isolated HAVE message tends to
go out in its own small TCP segment, so it adds TCP/IP header overhead
and congestion well beyond its 9 bytes of content.
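
As a sketch of the sending side (the two-minute interval and the helper
names are just illustrations of the time-based dispatch I described):

    import time

    class HaveBatcher:
        def __init__(self, interval=120.0):
            self.interval = interval      # seconds between dispatches
            self.pending = []             # pieces not yet advertised
            self.last_sent = time.time()

        def piece_completed(self, index):
            self.pending.append(index)    # queue it, don't send yet

        def maybe_flush(self, send_multihave):
            # called from the client's main loop; sends one MULTIHAVE
            # instead of a burst of small packets
            if self.pending and time.time() - self.last_sent >= self.interval:
                send_multihave(self.pending)
                self.pending = []
                self.last_sent = time.time()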

> > > If you don't update your view of the world you won't be able to
> > > execute rarest first properly.
> >
> > True, but so long as you can see at least one copy of each piece, I'm not
> > sure that this is so important.  The whole "rarest first" algorithm is an
> > arbitrary algorithm which isn't called out in the BT spec (right?).  So
> > long as you're pulling down content, I don't think it matters.
>
> Olaf van der Spek got to this first. If only one peer has a piece and
> that peer disappears, you've got a problem.

Well, sure.  But that's no different from the case today.  If a peer
can't currently see all of the pieces, then it needs to go look for the
missing ones, perhaps more aggressively than when it has visibility to
all pieces.


> I haven't seen the mainline implement it despite being obvious. I figure
> there is a reason, but I've never seen it mentioned.

When I get a chance I'll put a blurb into the WIKI spec so developers can
reduce the unneeded HAVE messages (unless it's already there or someone
beats me to it).

I can't fathom why Bram hasn't incorporated this feature.  However, I'm
also one of "those" people that thought there were some pretty good ideas
raised on the Yahoo mailing list.  Bram made it pretty clear that he
disagreed, so perhaps this feature remains unimplemented *because* it was
suggested on that mailing list.

No matter the reason, the change is transparent to other peers, backwards
compatible with the current protocol, and provides an obvious bandwidth
savings.  So, there's no reason that other client developers couldn't or
shouldn't implement it.
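
The suppression itself is nothing more than a check against the bitfield
we already track for each peer (a sketch; the peer object and its fields
are assumptions):

    import struct

    HAVE = 4  # the real HAVE message ID from the spec

    def encode_have(index):
        return struct.pack(">IBI", 5, HAVE, index)

    def broadcast_have(peers, index):
        for peer in peers:
            # a peer that already told us it has this piece (via its
            # BITFIELD or a HAVE) won't ever request it from us, so
            # the message tells it nothing it can act on
            if not peer.bitfield[index]:
                peer.send(encode_have(index))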


> Dropping and then notifying helps auto-tuning, as you can detect queue
> depth on the opposite end with certainty. You simply keep queuing more
> requests and when requests get dropped, you reduce the number you queue.

I like that because it's not wasteful of bandwidth.  The notification
should happen early and only a few times (perhaps only once).  After that,
a well-behaved client would not exceed the queue depth.  Or an ambitious
client could "test" the queue depth periodically to see if its peer has
lengthened the queue, but that wouldn't take more than a few extra
messages either.

Again, 5 was just an example.  And as you point out, one size probably
does not fit all.
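
A sketch of that tuning loop on the requesting side (the drop
notification and all the names here are assumptions; the baseline
protocol has no such message):

    class RequestTuner:
        def __init__(self, initial_depth=5):  # 5 is just an example
            self.depth = initial_depth
            self.in_flight = 0
            self.probing = True

        def may_request(self):
            return self.in_flight < self.depth

        def on_request_sent(self):
            self.in_flight += 1

        def on_block_received(self):
            self.in_flight -= 1
            if self.probing:
                self.depth += 1  # keep queuing more until a drop

        def on_request_dropped(self):
            # drop-and-notify: whatever the peer kept IS its depth
            self.in_flight -= 1
            self.depth = self.in_flight
            self.probing = False  # an ambitious client re-enables
                                  # this periodically to re-test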


> Okay, you sounded like you were suggesting scrapping the existing
> handling of this problem. Now, see above for why MULTIHAVE is unlikely to
> gain much.

I'm wondering if my definition of a MULTIHAVE message makes any difference
to your point.  I'm still thinking that the small savings gained by
grouping HAVE messages is trivial compared to the benefit of delivering
batches of HAVE messages periodically.  Tell me if I'm barking up the
wrong tree there.

> > I think that we're on the same page here -- the information is necessary
> > to make the protocol "flow" better (polling for available pieces is every
> > bit as inelegant, or moreso, than the current protocol).  I'm just saying
> > that it could be reworked to keep the lower bandwidth peers from getting
> > swamped in HAVE messages.
>
> Now, instead of being swamped with HAVE messages, they'll be swamped
> with requests for HAVE messages? In such cases it might be better to
> simply disconnect from high-bandwidth peers, unless they have rare
> pieces.

I was actually thinking that it would be a somewhat infrequent request
(say, on the order of every 10 minutes, or whenever a client runs out of
pieces to download), and that it would ask for either a MULTIHAVE message
or a bitfield.
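
In sketch form (the interval and names are illustrative, and the
"request an update" message itself would be a new, optional one):

    import time

    UPDATE_INTERVAL = 600.0  # ~10 minutes, per the example above

    def want_advert_update(peer, now, pieces_left_to_request):
        # ask a peer to re-advertise (MULTIHAVE or full bitfield)
        # only when our view of it is stale or we've run dry
        stale = (now - peer.last_advert) >= UPDATE_INTERVAL
        return stale or not pieces_left_to_request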

And yes, disconnecting from high bandwidth peers that are quickly and
constantly downloading pieces would keep a slow(er) peer from getting
swamped in HAVE messages.  This goes back to the concept of (a tracker)
grouping peers based on their connection speed.  It'd be nice(r) if that
weren't necessary, though.

I'd like to thank you (and Olaf, and everyone else who has been
contributing to this thread) for taking the time to discuss it.  I'm
learning a lot and dispelling some of my misconceptions.

-dpmott




