Archive - Originally posted on "The Horse's Mouth" - 2006-04-12 04:56:08 - Graham Ellis
We're delighted to welcome crawlers such as the Yahoo, Inktomi, Google and MSN spiders through our web site to index content - it's in our mutual interest. Those crawlers are all well written to analyse the pages that they collect, note patterns, and tailor their activities to make the best use of a dynamic site that offers 16 alternative presentations of each page (4 font sets x 4 colour sets).
Robots which simply collect the whole of a web site for local offline browsing, such as HTTrack and (in some guises) Wget, can be more problematic; they're simply not well suited to mirroring dynamic sites and will try to gather every possible page, skewing web statistics and in bad cases restricting our resources for other visitors. And it's doubtful whether any realistic use will be made of the data gathered.
I was watching HTTrack struggling to copy our dynamic website to a static mirror yesterday morning - every 4 seconds, another hit. It took 5 minutes just to fetch the help pages for the ad-hoc query demo, by the time it had collected them in green on black in a tiny font, in blue on yellow in a huge font, and in all the intermediate settings. It's a waste of our resources and, frankly, I doubt whether the person making the mirror will find it of any use.
So as a web site owner, should I discourage such mirroring, and if so, how?
My first thought is to modify my robots.txt file to disallow all downloads by wget and httrack - except that I would need to check that they actually respect the standard before I go to the trouble, and that in any case we WELCOME users who sensibly use those utilities to download a few pages for offline viewing.
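If they do honour the standard, the change itself would only be a couple of extra records - though the agent names below are my guess at the tokens each tool matches against, and both tools can be told to identify themselves as something else entirely:

    User-agent: wget
    Disallow: /

    User-agent: HTTrack
    Disallow: /

    User-agent: *
    Disallow: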
A second thought is to use my denial of service mechanism to trigger a delay, so that file accesses from a single remote host get delayed once they reach a certain threshold within a certain time - except that this would be just as likely to trap the legitimate / welcomed "bots" unless I put in some user agent specific logic, which would be high maintenance - needing updates as new agents come along. And I certainly don't want to go down the "ban xxx IP address" road either.
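To illustrate the idea, here's a rough sketch in Python of that sort of threshold check - the window, threshold and agent names are purely illustrative, not what actually runs on our server:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60      # look at the last minute of activity
    THRESHOLD = 30           # more than this many hits in the window triggers a delay
    DELAY_SECONDS = 10       # how long to make the offending host wait

    # user agent substrings never to throttle - exactly the list that
    # would need maintaining as new agents come along
    WELCOME_AGENTS = ("Googlebot", "Slurp", "msnbot")

    recent_hits = defaultdict(deque)   # remote host -> timestamps of recent requests

    def maybe_delay(remote_host, user_agent):
        # let the welcomed crawlers straight through
        if any(agent in user_agent for agent in WELCOME_AGENTS):
            return
        now = time.time()
        hits = recent_hits[remote_host]
        hits.append(now)
        # drop hits that have fallen out of the window
        while hits and hits[0] < now - WINDOW_SECONDS:
            hits.popleft()
        # over the threshold? make this request wait before it is served
        if len(hits) > THRESHOLD:
            time.sleep(DELAY_SECONDS)

The awkward part is that WELCOME_AGENTS list - it's precisely the user agent specific logic that turns into a maintenance job.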
A look at the HTTrack FAQ rather confirms my worries that neither of the above solutions is ideal; although it respects robots.txt by default, that can be turned off. The advice to its users is to set time limits, to ask the webmaster first before making a large mirror, to avoid downloading during working hours, and not to download too large a website - to use filters instead. I can see all four of these pieces of advice NOT being followed ... it also asks "Are the pages copyrighted?" and, yes, they are.
But, in reality, it's no great issue to us if one or two users pull huge numbers of files they'll never use off our system.