
[gopher] Re: Gopher "robots.txt"



> It wouldn't be that much of a performance hit.  Rather than fetching the
> directory with a normal gopher request, then fetching the attributes for
> each entry of the directory, the robot could make a request something
> like this:
> 
> 	1/directoryF$+VIEWS+ABSTRACT+ROBOTS
> 
> which would fetch only the VIEWS, ABSTRACT, and ROBOTS attributes of
> every item in the directory (in addition to the INFO attribute, which
> contains the usual Gopher directory entry).  A fully Gopher+ supporting
> robot would need to get the VIEWS and ABSTRACT to build a complete index
> anyway.

This is an excellent idea, but I don't think we want to require all robots or
robot-like things to be gopher+ compliant. It would be nice, but I don't
think it should be mandatory.

Also, this doesn't adequately solve the problem for those *servers* which are
not gopher+ compliant, or for certain indexed search servers that can't or
don't return gopher+ compliant responses (I can think of several immediately).

-- 
----------------------------- personal page: http://www.armory.com/~spectre/ --
 Cameron Kaiser, Point Loma Nazarene University * ckaiser@stockholm.ptloma.edu
-- Adore, v.: To venerate expectantly. -- Ambrose Bierce ----------------------


