[bittorrent] Introductory/endgame algorithms

Jari Sundell sundell.software at gmail.com
Thu Sep 29 08:19:44 EDT 2005


On 9/29/05, Elliott Mitchell <ehem at m5p.com> wrote:
>
> > I'm limiting the number of peers that can download a single piece to 5 in
> > the endgame mode. With a 2 minute timeout this has shown itself to work
> > well in all cases I've encountered. Also the piece with fewest concurrent
> > downloads is delegated next. Peers with a transfer speed less than 4 KB/s
> > only queue a single piece, while faster peers use a queue size of half
> > what they do in normal downloads.
>
> Uh, you need to clear this paragraph up. You're saying that you'll only
> request blocks from a particular piece from a grouping of 5 peers?
>
> The second, though, is a bad idea. If the queue depth on the remote end is
> ideal (zero or near zero), then you've just cut your download rate by
> 50%.


Sorry, there's some confusion of terminology here. I meant what you call
blocks, which the protocol part of the specification calls pieces. You may
ignore the rant, though.
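
For the curious, here's a rough sketch of the endgame delegation rules I
described above: at most 5 peers per block, the block with the fewest
concurrent downloaders delegated first, and slow peers (< 4 KB/s) queueing
only a single block while faster peers use half their normal queue depth.
The Peer/Block types are made up for illustration, this is not the actual
libtorrent code, and the 2 minute cancel timeout isn't shown.

// Sketch of the endgame delegation rules (illustrative only).
#include <algorithm>
#include <cstddef>
#include <vector>

struct Block {
    int concurrent_downloaders = 0;  // peers currently requesting this block
};

struct Peer {
    double      rate_bytes_per_sec = 0.0;
    std::size_t normal_queue_size = 8;   // queue depth used outside endgame
    std::size_t queued = 0;              // blocks currently queued to this peer
};

constexpr int    kMaxPeersPerBlock = 5;
constexpr double kSlowPeerRate     = 4.0 * 1024;  // 4 KB/s

std::size_t endgame_queue_limit(const Peer& p) {
    // Slow peers queue a single block; others use half their normal queue.
    if (p.rate_bytes_per_sec < kSlowPeerRate)
        return 1;
    return std::max<std::size_t>(1, p.normal_queue_size / 2);
}

// Pick the next block to request from `peer`, or nullptr if none qualifies.
Block* delegate_next(std::vector<Block>& missing, const Peer& peer) {
    if (peer.queued >= endgame_queue_limit(peer))
        return nullptr;
    Block* best = nullptr;
    for (Block& b : missing) {
        if (b.concurrent_downloaders >= kMaxPeersPerBlock)
            continue;  // already being fetched by 5 peers
        if (best == nullptr || b.concurrent_downloaders < best->concurrent_downloaders)
            best = &b;
    }
    if (best != nullptr)
        ++best->concurrent_downloaders;
    return best;
}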

> > I haven't considered doing this explicitly, I use mmap'ed files directly
> > and the kernel keeps the pages in memory if there's room. If the user
> > wants to preload the files, he may use dd or similar to dump them into
> > /dev/null.
>
> Bad idea to do it that way. I doubt most OSes do read-ahead on mmap()ed
> files, and for what BitTorrent is doing readahead is quite important.
> Your best bet is to use pread()/readv() on whole pieces when the first block
> is requested, and ensure your buffer is page-aligned
> (sysconf(_SC_PAGESIZE)).
>
> There are two key points here. First, when the first block of a piece is
> requested, very probably subsequent blocks will be read (I'd even advise
> adding a seek penalty each time a new piece is requested to prevent
> deliberate attacks). By reading the whole piece you'll avoid a second
> seek returning to fetch the rest of the piece, a crucial performance
> factor. Second, by doing I/O to page-aligned boundaries the OS is free to
> do copies by merely memory-mapping the files in and doing zero-copy.
>

Please don't call it a bad idea; it really isn't. Very few high-performance
programs that do a lot of disk I/O use the read/write variants, because they
involve creating copies of the data. (The exception is direct I/O, but that's
not widely supported.)

Accessing mmap'ed file regions does use read-ahead, and you can even control
certain aspects of it with madvise. If you run "cat /proc/<pid>/maps" on
Linux, you'll see a list of mmap'ed files. This approach is very commonly
used for accessing file data, and definitely provides good performance,
usually much better than read/write.
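
As a rough illustration (POSIX calls only, error handling trimmed, not the
actual libtorrent code), mapping a file region and hinting the kernel's
read-ahead with madvise looks something like this:

// Sketch of mapping a file region with read-ahead hints (illustrative only).
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

struct Mapping {
    void*       base = nullptr;   // pass to munmap(base, length) when done
    std::size_t length = 0;
    std::size_t data_offset = 0;  // requested data starts at base + data_offset
};

Mapping map_region(const char* path, off_t offset, std::size_t length) {
    Mapping m;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return m;

    // mmap requires a page-aligned file offset, so round down and widen.
    long page = sysconf(_SC_PAGESIZE);
    off_t aligned = offset - (offset % page);
    m.data_offset = static_cast<std::size_t>(offset - aligned);
    m.length = length + m.data_offset;

    m.base = mmap(nullptr, m.length, PROT_READ, MAP_SHARED, fd, aligned);
    close(fd);  // the mapping stays valid after the descriptor is closed
    if (m.base == MAP_FAILED) {
        m.base = nullptr;
        return m;
    }

    // Ask the kernel to start paging the region in before it's touched;
    // MADV_SEQUENTIAL would instead request aggressive sequential read-ahead.
    madvise(m.base, m.length, MADV_WILLNEED);
    return m;
}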

Using mmap does lead to some rather complicated code due to the page
alignment and chunks spanning multiple files, but I feel my code does layer
that complexity well.
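
The multi-file part essentially boils down to translating a chunk's global
offset into per-file ranges before mapping each range. A simplified sketch
(made-up types, not the actual libtorrent code):

// Split a piece that spans multiple files into (file, offset, length) ranges.
#include <algorithm>
#include <cstdint>
#include <cstddef>
#include <vector>

struct FileEntry {
    std::uint64_t size;  // length of this file in bytes
};

struct FileRange {
    std::size_t   file;    // index into the torrent's file list
    std::uint64_t offset;  // offset within that file
    std::uint64_t length;  // bytes of the piece stored there
};

std::vector<FileRange> split_piece(const std::vector<FileEntry>& files,
                                   std::uint64_t piece_offset,
                                   std::uint64_t piece_length) {
    std::vector<FileRange> ranges;
    std::uint64_t pos = 0;  // global offset where the current file starts
    for (std::size_t i = 0; i < files.size() && piece_length > 0; ++i) {
        std::uint64_t end = pos + files[i].size;
        if (piece_offset < end) {
            std::uint64_t off = piece_offset - pos;
            std::uint64_t len = std::min(files[i].size - off, piece_length);
            ranges.push_back({i, off, len});
            piece_offset += len;
            piece_length -= len;
        }
        pos = end;
    }
    return ranges;
}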

http://libtorrent.rakshasa.no/

--
Rakshasa

Nyaa?

