
Re: robots.txt (was Re: Download a whole gopherhole using wget/curl?)



Silly question, but isn't the User-Agent kind of useless here, since a Gopher request is basically just a selector for a resource? There are no headers, and no User-Agent to identify a request.

What am I missing here :)
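
For context, a Gopher transaction is roughly the following (a minimal
Python sketch; gopher.example.com is a placeholder host, not a real
server): the client opens a TCP connection to port 70, sends the bare
selector followed by CRLF, and reads until the server closes the
connection. There is simply nowhere to put a User-Agent.

    import socket

    # A raw Gopher request is just the selector followed by CRLF; there
    # are no headers and no User-Agent field (see RFC 1436).
    HOST = "gopher.example.com"   # placeholder host
    SELECTOR = "/robots.txt"

    with socket.create_connection((HOST, 70)) as sock:
        sock.sendall(SELECTOR.encode("ascii") + b"\r\n")
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    print(response.decode("utf-8", errors="replace"))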

Kind Regards

James


On Fri, Nov 29, 2019 at 3:39 PM Sean Conner <sean@conman.org> wrote:
It was thus said that the Great Christoph Lohmann once stated:
> Good point. In eomyidae you have two possibilities:
>
>       User-Agent: *
>       Disallow: *

  Okay, but this diverges from the HTTP version of robots.txt (to my
understanding, unless it's been updated since I was last dealing with this
stuff).

> and
>
>       User-Agent: *
>       Disallow:

  This actually has a different meaning from the HTTP version: there it
means "all browsers are allowed to crawl" (back from when robots.txt was
first developed).

  -spc
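
For anyone comparing the two forms above: under the original robots.txt
spec a Disallow value is a path prefix, not a wildcard, so a crawler
following that spec to the letter would read them as sketched below (the
disallowed() helper is hypothetical, not taken from eomyidae or any real
crawler; only the prefix-matching rule comes from the spec).

    # Under the original robots.txt spec, a Disallow value is a path
    # prefix, not a wildcard.  So "Disallow: *" only blocks selectors
    # that literally begin with "*", "Disallow: /" blocks everything,
    # and an empty "Disallow:" blocks nothing.
    def disallowed(selector, disallow_rules):
        # disallow_rules: the Disallow values for the matching User-Agent
        return any(rule and selector.startswith(rule)
                   for rule in disallow_rules)

    print(disallowed("/archive", ["*"]))  # False under prefix matching
    print(disallowed("/archive", [""]))   # False: empty value allows all
    print(disallowed("/archive", ["/"]))  # True: "/" prefix blocks all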

