[BitTorrent] Have maps (was Merkle, URLs, etc)

Joseph Ashwood ashwood at msn.com
Mon Mar 7 20:21:38 EST 2005


----- Original Message ----- 
From: "Konstantin 'Kosta' Welke" <kosta at fillibach.de>
Subject: Re: [BitTorrent] Have maps (was Merkle, URLs, etc)


> On Sun, 6 Mar 2005 17:36:43 -0800, Joseph Ashwood <ashwood at msn.com> wrote:

[Optimal case for binary trees?]
> In the case of "I need to verify this one piece to be able to share it".

Actually, the optimum case for that is having the verification hash in the
node, regardless of branching. The overhead of verification is then equal
to the tree depth; binary trees are the deepest, so they are not optimal.

> I think that the tree implementation should imply
> what hash is whose parent and child (for binary trees, this is very easy).
> So there is no need for "searching". Did I miss something?

Searching is important when a piece is requested: there is search overhead
in determining whether or not that piece is available. Using a perfectly
flat tree this is a search of the minimum possible area; using a binary
tree it is a search of the maximum possible area. These represent the
available extremes, and the binary tree is the most costly.
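
To make "search area" concrete, here is a minimal sketch (my own
illustration; the implicit array layout is a hypothetical choice, not
anything specified on-list) counting the nodes on the path from a leaf to
the root:

    def ancestor_indices(pos, k):
        # Implicit k-ary heap layout: node n's children sit at array
        # positions k*n + 1 .. k*n + k, so the parent of n is
        # (n - 1) // k.  Walk upward until the root at index 0.
        path = []
        node = pos
        while node > 0:
            node = (node - 1) // k
            path.append(node)
        return path

    # A leaf of a million-piece file sits near index 2**21 in a binary
    # layout and has ~21 ancestors to consult; in a 256-ary layout the
    # leaves sit near index 70,000 with only ~3 ancestors above them.
    print(len(ancestor_indices(2**21, 2)))    # 21
    print(len(ancestor_indices(70000, 256)))  # 3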

>> The pre-caching problem is this: the computations necessary to
>> predictively load the next requested nodes from disk are exponentially
>> more complex with binary than with as flat as possible.
>
> Just so I get it right: are you talking about "What partial trees do I
> need next?" or "What partial trees does my peer need?". In the first
> case, I can't follow you (should be logarithmic, just like for all
> trees). In the second case, I cannot see a computational difference
> either.

It is the second case (peer need). The overhead of this becomes critical as
the trees and the number of connections grow; it is linear in each, but the
total work is their product. By adding overhead to the search for the next
piece you slow down the tit-for-tat. On a single peer with a single file
and a single connection this is not critical, but as these numbers grow it
becomes increasingly necessary to predict what your peers will want, in
order to reduce the computation overhead (i.e. don't flush the relevant
nodes out of the memory cache). I am unsure whether or not BT1 does this,
but considering the relatively small number of connections it maintains and
the perfect flatness of its tree, this is not a major issue there. The
issue is a situation like Hurricane Electric (which offers BitTorrent
seeding and tracking for its hosting clients), where a single peer may have
thousands of active files, each with hundreds of active peers; there the
multiplied overhead will severely hamper potential system speed.
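
A back-of-envelope sketch of that growth (all figures hypothetical, purely
to show the multiplication):

    files = 2000          # active torrents on one hosting peer (made up)
    peers_per_file = 300  # active peers per torrent (made up)

    for depth in (17, 3):  # binary vs. 256-ary path length, ~122k pieces
        print(files * peers_per_file * depth)
    # -> 10,200,000 node touches per round for binary, 1,800,000 for
    #    256-ary; the growth is linear per file or per peer, but the
    #    product is what a host like this actually pays.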

>> The search overhead is related but can be solved using some of the more
>> esoteric possibilities of the MerklePool concept I laid out before.
>
> Sorry I must have missed both the problem and your solution. Can you point
> me to a date/time and subject of your message so I can re-read it?

I didn't post about it. The search overhead problem is, succinctly, "find
me the piece with hash X". Finding it by walking a Merkle tree is costly;
binary is the most costly and flat the least. But finding it in a 256-ary
tree through a shared MerklePool eliminates the advantage/disadvantage in
this case.
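
Since I haven't spelled the MerklePool out on-list, here is a minimal
sketch of the idea (names and structure are mine, purely illustrative):
keep one hash-keyed index over the nodes of every tree you hold, so a
lookup is a dictionary probe rather than a tree walk.

    import hashlib

    class MerklePool:
        # Shared index over the pieces of many trees (illustrative).
        def __init__(self):
            self.by_hash = {}  # piece hash -> (tree id, piece data)

        def add_piece(self, tree_id, data):
            digest = hashlib.sha1(data).digest()
            self.by_hash[digest] = (tree_id, data)
            return digest

        def find(self, digest):
            # O(1) on average, no matter how deep any one tree is.
            return self.by_hash.get(digest)

    pool = MerklePool()
    h = pool.add_piece("some.torrent", b"piece data")
    assert pool.find(h) == ("some.torrent", b"piece data")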

>> The construction problem is that, with each layer of the tree that is
>> built, maintaining anything resembling balance (necessary in order to
>> make both the pre-caching and search problems as easy as possible,
>> even though still far worse than n-ary) becomes increasingly
>> difficult, and as it requires an exponential-time algorithm, this can
>> become very costly.
>
> The total size of the tree should be easy to calculate for all n-ary
> trees. I don't see how balancing makes any sense in a Merkle tree, as
> it cannot save any space. Is there a better way than the naive approach
> of constructing an n-ary tree? (Naive meaning the first n leaves have
> one parent, the first n parents have a parent, etc. The disadvantage is
> that the tree tends to get empty towards the end in non-optimal cases.)

The better way is to compute all the nodes one level at a time (this is
where I began the use of the MerklePool), but the cost is primarily in the
depth of the tree. As the depth increases it becomes necessary to maintain
indexing across more levels; by flattening the tree this overhead is again
reduced.
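
Roughly, in code (an illustrative sketch only; SHA-1 and letting the short
final group simply have fewer children are arbitrary choices on my part):

    import hashlib

    def build_tree(piece_hashes, k=256):
        # Build an n-ary Merkle tree bottom-up, one whole level at a
        # time.  Returns the levels, leaves first; levels[-1][0] is the
        # root hash.
        levels = [list(piece_hashes)]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([
                hashlib.sha1(b"".join(prev[i:i + k])).digest()
                for i in range(0, len(prev), k)])
        return levels

    leaves = [hashlib.sha1(bytes([b])).digest() for b in range(300)]
    print([len(level) for level in build_tree(leaves)])  # [300, 2, 1]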

Balance of the tree becomes important again in the search: by expending
effort on balancing the tree (which most construction algorithms will come
close to anyway) you make it faster to search the depths of the tree for
any information, since unbalanced trees have more depth and the search
takes longer. Binary trees already have the extra depth to begin with, and
balancing them is, in the average case, more difficult.

>> The hash size vs. input size problem is that the hashes used slow down
>> as there is less input, leading to exponential slowdown of the entire
>> system as the inputs shrink.
>
> Can you please rephrase this sentence so that I can understand it?

Modern hashes have substantial overhead in their finalization step; by
making the nodes as small as possible, the finalization code is executed
the maximum number of times. As the size of the nodes shrinks linearly, the
number of nodes increases super-linearly, and with it the number of times
the finalization code must run. I did have a misstep there: I believe the
increase is only polynomial, not exponential.
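
The count is easy to reproduce (an illustrative sketch; one hash, and
therefore one finalization, per leaf and per internal node of a tree built
as above):

    def total_hash_calls(file_size, block_size, k=256):
        nodes = width = -(-file_size // block_size)  # ceil division
        while width > 1:
            width = -(-width // k)   # one parent per k children
            nodes += width
        return nodes

    size = 478 * 1024 * 1024
    for block in (64 * 1024, 16 * 1024, 4 * 1024, 1024):
        print(block, total_hash_calls(size, block))
    # Shrinking the block 4x multiplies the calls roughly 4x, and every
    # one of those calls pays the fixed finalization cost.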

>> Is that enough problems or should I think for more than 30 seconds on it?
>
> Please think of more. ;)
>
> Note that I don't think binary trees are the best choice either. They
> are worst case for tree size but optimal case for quick verification of
> a single piece.

Here is where we substantially differ. In verifying a single piece in a
properly formatted n-ary tree (like my proposal) the cost is the tree
depth. That is the same optimal cost as a binary tree's, but the n-ary
tree is flatter, so it verifies the piece faster than the binary version.
For reference, my implementation can verify a single piece of a 478MB file
in 4 hashes, assuming 4KB blocks: the piece hash plus three 256-ary levels
over its ~122,000 pieces. A binary tree getting by on 4 hashes could only
cover a file of about 32KB (eight 4KB pieces). Verifying a piece of the
same 478MB file in a binary tree takes 18 hashes (the piece hash plus 17
levels, since 2^17 > 122,368), roughly 4-5 times as long. That is not a
small performance penalty.
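
Those counts are quick to check (4KB blocks, as assumed above):

    import math

    pieces = (478 * 1024 * 1024) // (4 * 1024)   # 122,368 pieces
    for k in (256, 2):
        # One hash for the piece itself plus one per internal level.
        print(k, 1 + math.ceil(math.log(pieces, k)))
    # -> 256-ary: 4 hashes, binary: 18 hashes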

> To know if this is really relevant, this BitTorrent simulator might be
> helpful (I think I'll start coding next week). If it is irrelevant, we
> should use flat trees. If not, a tradeoff using n-ary trees seems good.

I agree a simulator would be of great help.
                    Joe 



 