
Re: [gopher] Torrent update



Martin Ebnoether <ventilator@semmel.ch> writes:

> On the Sat, May 01, 2010 at 06:23:09AM +0300, Kim Holviala blubbered:
>
> Hi.
>
>> PLEASE set up robots.txt to prevent robots from re-archiving this stuff!  
>> I don't want to end up with 20 copies of the 30gig archive!
>>
>> Put "robots.txt" in your gopher root with something like this in it:
>>
>> User-agent: *
>> Disallow: /archives
>
> Does this really work for gopherspace? 

It's up to the spider itself; Kim's spider does look for it.

I dunno if the robots.txt convention was made just for the web or if
other protocols were considered too, but nothing stops a spider from
honouring a robots.txt it finds in gopherspace :-)
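For illustration only, here's a minimal sketch (in Python, all names
hypothetical) of how a gopher spider could fetch robots.txt over plain
gopher and apply the usual web-style Disallow rules to selector paths.
This is an assumption about how such a spider might work, not Kim's
actual code:

```python
# Hypothetical sketch: a gopher spider honouring robots.txt.
# Gopher has no official robots exclusion standard; this simply applies
# the web convention (prefix-matched Disallow lines) to gopher selectors.
import socket

def fetch_robots(host, port=70, timeout=10):
    """Fetch robots.txt over plain gopher: send the selector, read to EOF."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"robots.txt\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_disallows(robots_txt, agent="*"):
    """Collect Disallow prefixes that apply to the given user-agent."""
    disallows, applies = [], False
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if ":" not in line:
            continue
        field, value = (p.strip() for p in line.split(":", 1))
        field = field.lower()
        if field == "user-agent":
            applies = value == "*" or value.lower() == agent.lower()
        elif field == "disallow" and applies and value:
            disallows.append(value)
    return disallows

def is_allowed(selector, disallows):
    """A selector is crawlable if no Disallow prefix matches it."""
    return not any(selector.startswith(prefix) for prefix in disallows)
```

With the robots.txt from the quoted message, `is_allowed("/archives/...",
rules)` would come back False, so the spider would skip the archive
instead of re-fetching 30 gigs of it.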

> Besides, are there any popular search engines that index
> gopherspace? Google[1] does not, neither does Bing.

No, but we have Veronica Lodge and others! 

> [1] I once made a joke to a Google engineer about when "Google
> Gopher" would launch. He looked kind of puzzled...

"puzzled" as in "he is working at google and does not know what gopher
is" or as in "our business is the web, only if you encapsulate gopher
over http will we spider it"?

-- 
Nuno J. Silva
gopher://sdf-eu.org/1/users/njsg

_______________________________________________
Gopher-Project mailing list
Gopher-Project@lists.alioth.debian.org
http://lists.alioth.debian.org/mailman/listinfo/gopher-project
