[cc-community] question about licensing and web scraping.
lac at openend.se
Sun Aug 5 00:50:41 EDT 2007
In a message of Sun, 05 Aug 2007 08:38:49 +0800, Evan Prodromou writes:
>There are other things you can do about Web scrapers: carefully craft
>your robots.txt to block compute- or bandwidth-intensive URLs, and add a
>Crawl-Delay stanza. (Crawl-Delay is a huge help, by the way, and most
>"good" crawlers now honor it.)
I do not know how to do this; can you point me at a URL that
teaches me how? One good example would be fine.
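(My rough understanding, which may well be wrong, is that it is a
plain-text file named robots.txt at the web root, along these lines,
with the paths and the delay value as placeholders only, not what our
wiki actually needs:

    User-agent: *
    Crawl-delay: 10
    Disallow: /wiki/expensive-search
    Disallow: /wiki/recent-changes-feed

but I would rather follow real documentation than my own guess.)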
>And for spiders that ignore robots.txt, you can block them at the IP or
>at the HTTP server level. I think returning a 403 error with the link to
>your downloadable bundles would be appropriate.
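(Something along these lines, I take it; this is only a sketch with a
made-up User-Agent string and paths, not our actual configuration:

    # flag requests from the misbehaving crawler (the UA string is made up)
    SetEnvIfNoCase User-Agent "HammerBot" bad_bot
    <Location /wiki/>
        Order Allow,Deny
        Allow from all
        # refuse flagged clients; everyone else is unaffected
        Deny from env=bad_bot
    </Location>
    # send blocked clients to a page that links to the downloadable bundles
    ErrorDocument 403 /wiki-bundles.html
)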
We know how to trap robots that do not respect robots.txt. It is
the ones that do respect it and still hammer us that are the problem.
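(The usual trick for the ones that ignore it, sketched here in Python
with a made-up decoy path and log location, is to list a decoy URL
under Disallow and block anything that fetches it anyway:

    # sketch: collect addresses of clients that fetch a decoy URL
    # which robots.txt disallows; the path and log location are made up
    DECOY = "/wiki/robot-trap"
    LOG = "/var/log/apache2/access.log"

    offenders = set()
    for line in open(LOG):
        if DECOY in line:
            # the first field in the common log format is the client IP
            offenders.add(line.split()[0])

    for ip in sorted(offenders):
        print(ip)    # feed these addresses to the firewall or a Deny list

The bot giving us trouble, of course, never touches the decoy.)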
>> Is there a CC license that we could license our wiki under that would
>> make it possible for us to threaten those who use our wiki in ways we
>> do not want with a lawsuit?
>Is that _really_ in the spirit of Free Software? Guido doesn't sue
>people for using Python in ways he doesn't like. Part of giving things
>away is that you don't control how they're used after that. We make Free
>things so that people can use them.
Hammering the Python wiki to such an extent that nobody can download
Python packages, because the two services share a machine, prevents
people from using all the free software hosted there. We might dislike
but tolerate polite scrapers, but this one is killing our ability to
serve up packages. If a threat in the HTML made this bot either go away
or learn manners, we would be content.
>Evan Prodromou - evan at prodromou.name - http://evan.prodromou.name/