[bittorrent] Questions related to mainline DHT specs...

NewMedia42 newmedia42 at excite.com
Sun Aug 20 06:11:10 EDT 2006


>from bittorrent.org: "...containing the compact node info for
>the target node __or__ the K (8) closest good nodes in its own
>routing table."  So my guess is, there might be implementations
>which only return your own node information if you search for
>your exact own ID.

Thanks for pointing that out; I had overlooked that part of it... :)

>Yeah, the clients that use different source udp port numbers
>are just broken, imho. When it comes to announces/tokens, I
>completely ignore them and they are not able to announce to my
>client (more on that later). I usually end up adding newly seen
>IP/port combinations into a queue. My client pings every node
>in this queue once and if a ping response is received, I add
>the node to the secondary routing table. Then, when a node in
>the primary table can't be reached anymore, I substitute it with
>a node from the secondary table.
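
If I'm reading you right, the queue-and-verify scheme looks roughly like this (a Python sketch of my understanding; all the names and the 10-second timeout are my own, not anything from the spec):

import time
from collections import deque

PING_TIMEOUT = 10          # seconds to wait for a ping reply (my choice)

candidate_queue = deque()  # newly seen (ip, port, node_id) triples
pending_pings = {}         # transaction id -> (candidate, time sent)
primary_table = {}         # node_id -> (ip, port), actively maintained
secondary_table = {}       # node_id -> (ip, port), verified spares

def on_node_seen(ip, port, node_id):
    # Queue newly observed nodes instead of trusting them right away.
    if node_id not in primary_table and node_id not in secondary_table:
        candidate_queue.append((ip, port, node_id))

def flush_queue(send_ping):
    # Ping each candidate exactly once; send_ping returns a transaction id.
    while candidate_queue:
        node = candidate_queue.popleft()
        pending_pings[send_ping(node)] = (node, time.time())

def on_ping_response(tid, from_addr):
    # Promote only nodes that reply from the same ip/port we saw them at;
    # this also filters out the changing-source-port clients.
    entry = pending_pings.pop(tid, None)
    if entry is None:
        return
    (ip, port, node_id), _sent = entry
    if from_addr == (ip, port):
        secondary_table[node_id] = (ip, port)

def expire_pings(now):
    # Candidates that never answer are silently dropped.
    for tid, (node, sent) in list(pending_pings.items()):
        if now - sent > PING_TIMEOUT:
            del pending_pings[tid]

def on_primary_node_dead(node_id):
    # Substitute a dead primary entry with a verified spare.
    primary_table.pop(node_id, None)
    if secondary_table:
        nid, addr = secondary_table.popitem()
        primary_table[nid] = addr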

Ah, I have a 'transaction' list, so I can easily make adding a node to the route list conditional on receiving a response to a request (such as a ping, get_peers, etc.). That way it should eliminate these. They really do mess things up pretty badly - for the heck of it I let my DHT run for about 12 hours and picked up a total of 11791 unique nodes (unique node IDs). Then I looked at unique IP addresses (ignoring port, obviously) and found 7270 IPs that only showed up once in the routes (which should be correct). BUT, I have 6 that were:

68.152.x.x -> 1451 nodes
203.185.x.x -> 222 nodes
145.145.x.x -> 361 nodes
195.84.x.x -> 713 nodes
193.6.x.x -> 871 nodes
195.53.x.x -> 887 nodes

The last two octets are only left off for privacy reasons, but it's crazy that these 6 clients account for almost 40% of the entries. The other interesting thing is that they never announced the same node ID twice - so the first one used 1451 different node IDs, and they're pretty evenly distributed across the node space. The problem then gets magnified by clients that don't check before adding a peer to their routing table, so you end up getting these different IP/port combos from all sorts of other sources. Spotting offenders like these in a routing dump is easy enough - see the sketch below.
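
For what it's worth, flagging them from a dump of (IP, node ID) observations is a one-pass job - something like this Python sketch (the threshold is an arbitrary guess):

from collections import defaultdict

def find_id_churners(observations, threshold=10):
    # observations: iterable of (ip, node_id) pairs from a routing dump.
    # Flags any IP that announced more distinct node IDs than a single
    # well-behaved client plausibly would (the threshold is a guess).
    ids_by_ip = defaultdict(set)
    for ip, node_id in observations:
        ids_by_ip[ip].add(node_id)
    return {ip: len(ids) for ip, ids in ids_by_ip.items()
            if len(ids) > threshold}

On the 12-hour dump above, that would flag exactly those six IPs.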

>I never used my DHT implementation in a real client; I only
>used it for experimenting and running a big distributed tracker
>node. Normally, I used ~2,000 primary peers and ~1,000 secondary
>ones. And yeah, the expiring is indeed an important thing: you
>basically need to do a ping query every 15 minutes to each peer
>in your routing table. So this quickly adds up to a couple of
>pings per second. I'm not sure if you want to keep too many
>peers in a normal dht client: I'd guess that about 400 peers
>should already be plenty.

I'm just interested from the implementation standpoint - I used a pooled allocator for everything, so it scales up pretty easily to whatever it needs without much overhead. I also played around with capping the packet rate - the only challenge with that is that if your routing table grows big enough, the requests you need to send eventually exceed the cap and the backlog starts snowballing... I've added a fair amount of expiration now, but my rules are still pretty lenient for the time being: nodes only get removed if they haven't been reachable for at least 3 attempts (15 minutes apart, assuming it's a ping), and the transaction handler automatically retries each request up to 3 times with 60 seconds in between in case of packet loss. The rules boil down to roughly the sketch below.
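
Here it is in rough Python (the constants are the ones above; the function and field names are mine):

PING_INTERVAL = 15 * 60  # ping every node in the table every 15 minutes
MAX_FAILURES = 3         # drop a node after 3 unreachable maintenance pings
MAX_RETRIES = 3          # retransmissions per request...
RETRY_DELAY = 60         # ...with 60 seconds in between, for packet loss

class Node:
    def __init__(self, ip, port, node_id):
        self.ip, self.port, self.node_id = ip, port, node_id
        self.failures = 0
        self.last_ping = 0.0

def maintain(nodes, now, send_ping):
    # Periodic maintenance pass over the routing table.
    for node in nodes:
        if now - node.last_ping >= PING_INTERVAL:
            send_ping(node)  # the transaction layer handles the retries
            node.last_ping = now

def on_transaction_gave_up(node, nodes):
    # Called once a request has been resent MAX_RETRIES times, RETRY_DELAY
    # seconds apart, with no reply at all.
    node.failures += 1
    if node.failures >= MAX_FAILURES:
        nodes.remove(node)  # lenient: three strikes before removal

def on_any_response(node):
    # Any valid reply resets the failure count.
    node.failures = 0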

>Never looked at that, I don't care about how many they report.

The only reason I asked about the size was to find out whether I could get by with a default buffer size. In the 12-hour test I ran earlier, the largest packet ever sent or received was 307 bytes (in case anyone else cares)...

>As I mentioned earlier, I do not use my implementation in a
>client. Therefore, I've never really done any announces, only
>received them. 

How do you typically see them coming in?  Immediately following a get_peers?

>Concerning the token: You should not issue random tokens, as
>you'd have to store way too much data. The mainline client as
>well as my implementation create the token deterministically
>based on a hash of the client's IP and port and some random
>seed (the same seed for the whole session). That way, you can
>verify the token of the announce messages by only looking at
>the IP and port it came from. 

That was my intention, but until I felt like I had a better grip on the protocol and its various implementations, I didn't want to make any assumptions... :)
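
For anyone following along, the scheme amounts to something like this in Python (SHA-1 and the 8-byte truncation are my own choices here, nothing mandated):

import hashlib
import os
import socket
import struct

SESSION_SEED = os.urandom(20)  # one random seed for the whole session

def make_token(ip, port):
    # The token is a pure function of (ip, port, seed), so nothing has to
    # be stored per node; the announce is checked later by recomputing it.
    packed = socket.inet_aton(ip) + struct.pack(">H", port)
    return hashlib.sha1(packed + SESSION_SEED).digest()[:8]

def verify_token(ip, port, token):
    return token == make_token(ip, port)

Since the port goes into the hash, clients that announce from a different source port than they queried from will naturally fail verify_token - which leads right into the next point.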

>This, unfortunately, does not work with clients choosing
>arbitrary outgoing port numbers.  You could just use the IP
>address for the hash and completely ignore the port, but I
>decided to keep the port and thus ignore announces from these
>broken clients.

I'm just going to ignore these clients, since they appear to be a very small percentage, and they cause the bulk of the problems (at least for me right now)...

Thanks again for your insight and help!

