Standards (was [BitTorrent] Back to Merkle Hash Trees...)

Justin Cormack justin at street-vision.com
Tue Feb 8 05:44:57 EST 2005


> 
> 
> Justin Cormack wrote:
> >>
> >>Justin Cormack wrote:
> >>
> >>>>>32 bits for the piece. I thought everyone had pretty much agreed that 1k/4k
> >>>>
> >>>>2^47 bytes doesn't look like a 'big' limitation.
> >>>
> >>>
> >>>It's a bit too small, 128TB, as it is within range of the size of filesystems
> >>>people have now, let alone a few years in the future.
> >>
> >>Let's just say I'd love to hit that limitation.
> > 
> > 
> > Better safe than sorry.
> 
> The largest torrents I've seen were about 8 GB. That's a factor of 16384 
> smaller than the limit. I doubt the protocol will still be in use when 
> this becomes an issue.

I am expecting to be using 1TB+ by the end of this year, so it seems a lot
closer...
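
For concreteness, a back-of-envelope (assuming, and this is only my reading
of the figures above, that the 2^47 comes from a 32-bit piece index over
32 KiB pieces):

    PIECE = 32 * 1024                  # assumed piece size giving 2^47

    limit32 = 2 ** 32 * PIECE          # 2^47 bytes
    print(limit32 // 2 ** 40, "TiB")   # -> 128
    print(limit32 // (8 * 2 ** 30))    # -> 16384, the factor quoted above

    limit64 = 2 ** 64 * PIECE          # a 64-bit index: 2^79 bytes,
    print(limit64 // 2 ** 40, "TiB")   # far beyond any plausible torrent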

> But using 64-bit indexes would be an easy solution.
> 
> 
> >>>>>your xbt url doesn't support a multifile torrent.
> >>>>
> >>>>Why not?
> >>>>It's the info_hash that's included, not the root hash of a single file.
> >>>
> >>>
> >>>But the info_hash isn't sufficient information to verify a torrent. Of course
> >>
> >>Why not?
> >>With the info_hash, you can verify the info key and with the info key 
> >>you can verify the rest.
> > 
> > 
> > Yes, but without the Merkle root hashes you don't know where the error was.
> 
> But those root hashes are in the info key.

Are we talking at cross purposes? I thought the idea was a URL that would
replace the torrent file by encoding all the information in it.
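
The chain being described above can be sketched in Python; bencode here is a
minimal bencoder written for the example, and the "root hash" field name is
hypothetical (the thread doesn't fix where the Merkle roots live in the info
dict):

    import hashlib

    def bencode(obj):
        # Minimal bencoder covering the types found in an info dict.
        if isinstance(obj, int):
            return b"i%de" % obj
        if isinstance(obj, str):
            obj = obj.encode()
        if isinstance(obj, bytes):
            return b"%d:%s" % (len(obj), obj)
        if isinstance(obj, list):
            return b"l" + b"".join(bencode(v) for v in obj) + b"e"
        if isinstance(obj, dict):
            # Keys must be sorted byte strings per the bencoding rules.
            items = sorted((k if isinstance(k, bytes) else k.encode(), v)
                           for k, v in obj.items())
            return b"d" + b"".join(bencode(k) + bencode(v)
                                   for k, v in items) + b"e"
        raise TypeError(type(obj))

    def info_hash_matches(info, expected):
        # info_hash is the SHA-1 of the bencoded info dict; if it
        # matches, everything inside the dict -- including any Merkle
        # root hashes stored there -- is authenticated transitively.
        return hashlib.sha1(bencode(info)).digest() == expected

So a URL carrying only the info_hash does pin down the Merkle roots, but only
once you have fetched the info dict itself from somewhere.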

> >>>one logical thing would be to Merkle hash the individual file hashes into
> >>>one hash...
> >>>
> >>>ok, re the 'see below's above.
> >>>
> >>>One of the points of Merkle hashes is that you don't really need the piece/chunk
> >>>distinction.
> >>>
> >>>I can see what you are doing: keeping the distinction means fewer have messages
> >>>(though you negate this somewhat, as the chunk_have messages clearly have a
> >>>useful purpose, even if only at some times).
> >>
> >>It's not just that. It's also a smaller bitfield message and smaller 
> >>in-memory bitfields.
> > 
> > 
> > The overhead is not that great, especially if you give an offset and length
> > you have verified in the have message. You can store it internally like this too.
> 
> That sounds a lot more complex than the flat bit vector in use now 
> (internally).

Not sure it is internally; I think it is in the peer protocol that it
becomes more complex. As you have a Merkle tree, your "what have I verified"
structure is also a tree. E.g. once you have verified the first half of the
content, all you need to store is the hash for it, and the fact that your
data structure holds this hash (rather than branches) is all the information
you need. The more you download, the less overhead there is.
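
A minimal sketch of that structure (SHA-1 and the concatenate-the-children
convention for internal nodes are assumptions; the thread doesn't pin either
down):

    import hashlib

    def parent_hash(left, right):
        # One common Merkle convention: hash the concatenated children.
        return hashlib.sha1(left + right).digest()

    class VerifiedTree:
        # Tracks "what have I verified" over the SVU range [lo, hi).
        # A fully verified subtree collapses to a single hash and drops
        # its branches, so the tree shrinks as the download progresses.

        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
            self.hash = None                   # set once fully verified
            self.left = self.right = None

        def mark_verified(self, lo, hi, h):
            # Assumes (lo, hi) is aligned to a subtree boundary, which
            # is what Merkle verification naturally hands you.
            if (lo, hi) == (self.lo, self.hi):
                self.hash = h
                self.left = self.right = None  # branches no longer needed
                return
            mid = (self.lo + self.hi) // 2
            if self.left is None:
                self.left = VerifiedTree(self.lo, mid)
                self.right = VerifiedTree(mid, self.hi)
            child = self.left if hi <= mid else self.right
            child.mark_verified(lo, hi, h)
            if self.left.hash is not None and self.right.hash is not None:
                # Both halves done: keep only the parent hash.
                self.hash = parent_hash(self.left.hash, self.right.hash)
                self.left = self.right = None

Once the first half of the content verifies, the whole left subtree is
replaced by one 20-byte hash, which is the shrinking overhead described
above.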


> >>>There are a lot of options available once one gets this flexibility, and it
> >>>is probably best to scrap the piece/chunk distinction.
> >>>
> >>>Call the smallest verifiable unit SVU (say 4k).
> >>>
> >>>One option would be that if have messages are a range of SVUs (or SVU +
> >>>length) and requests have lengths, then you could have a standard algorithm
> >>>to ramp up request sizes as downloads progress, for example, or make this
> >>>rarity-based. This would amortise the slightly larger have and request
> >>>messages for small requests against those for larger requests later.
> >>
> >>I thought about something like that, but someone else suggested I keep 
> >>the protocol as simple as possible. I wanted to include a 32-bit vector 
> >>in the request, along with a piece index. That'd allow you to request 
> >>between 0 and 32 chunks of one piece with one request. Another 
> >>optimization would be to allow multiple requests per request message.
> >>The same could be done for have and cancel messages.
> >>
> >>Another issue is random-access IO. If a seek takes 10 ms, you can only 
> >>do 100 seeks per second, and with chunks of 4 kB that'd mean a top speed 
> >>of 400 kB/s. If you don't have pieces and peers request chunks completely 
> >>at random, you can't do read-ahead.
> > 
> > 
> > Sorry, was there an extra "don't" in that last sentence?
> > 
> > Most of the time it makes sense to do transfers in large amounts at a time;
> > it is just at the beginning that you might want a lower value.
> 
> True.
> 
> With 4k chunks, do you keep the entire merkle tree in memory?
> 

If you don't want to use this much memory, you can choose a large chunk size
yourself, never request less than that, and never use the small hashes
(though I suppose you might have to calculate them if requested, which is a
disadvantage).
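
That recalculation is at least cheap to express (4 KiB SVUs and SHA-1 leaves
are assumptions carried over from the discussion above):

    import hashlib

    SVU = 4096  # smallest verifiable unit, as discussed above

    def leaf_hashes(block):
        # A client that only stores hashes at its own, larger chunk
        # granularity can still answer a request for the fine-grained
        # leaf hashes by rehashing verified data it already holds:
        # it trades CPU for not keeping the bottom of the tree around.
        return [hashlib.sha1(block[i:i + SVU]).digest()
                for i in range(0, len(block), SVU)]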
 



 