
[gopher] Re: Getting file info from an URI??

> So my question is: how _can_ I find the file information given only an
> URI?

I've found that getting gopher to work nicely in a web-like
environment exposes a real impedance mismatch.  The real difference is
that the web assumes that, given a URL, you get all the meta-data from
what is returned when the URL is requested.  Gopher works the other
way around - the directory gives the meta-data, which is what you use
when you display the content.  Unfortunately, the gopher:// URL
doesn't carry around the meta-data.  My solution with web->gopher was
to pass all that stuff around continuously, which is why the URLs are
extremely long (they contain the title, MIME type, etc.)
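To make the mismatch concrete: in a gopher directory listing, each line
*is* the meta-data for one item.  A minimal sketch of pulling the fields
out of one menu line (field names are mine, the tab-separated layout is
the standard menu format):

```python
def parse_menu_line(line: str) -> dict:
    """Split one gopher menu line into its meta-data fields."""
    # First character is the item type; the rest is tab-separated:
    # display string, selector, host, port.
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {
        "type": item_type,      # e.g. "0" text, "1" menu, "9" binary
        "display": display,     # the human-readable title
        "selector": selector,   # what you send to the server
        "host": host,
        "port": int(port),
    }

item = parse_menu_line("0About this server\t/about.txt\texample.org\t70")
```

All of that is gone once you are holding only a bare gopher:// URL, which
is why web->gopher has to smuggle it along in the URL itself.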

Yes, Gopher+ does help with the info request, but it's an inefficient
way of doing things because you need to do two requests each time (one
for the meta-data, one for the content), so it's best avoided if
possible, even when it is available.
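The "two requests" point can be sketched like this (names are mine; the
request formats are the Gopher+ attribute and content requests, and each
one costs a full TCP round trip):

```python
import socket

def info_request(selector: str) -> bytes:
    # selector TAB "!" asks a Gopher+ server for the item's attributes
    return selector.encode() + b"\t!\r\n"

def content_request(selector: str) -> bytes:
    # selector TAB "+" asks for the item's content
    return selector.encode() + b"\t+\r\n"

def fetch(host: str, request: bytes, port: int = 70) -> bytes:
    # One connection per request - so meta-data plus content means
    # two separate connections to the same server.
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(request)
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
        return b"".join(chunks)
```

Getting both the meta-data and the content for one item means calling
`fetch` twice, once with each request line.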

I don't really know the details of the gopher:// URL structure,
but my impression is that it basically contains the full request line
passed to the gopher server, so it could be something like
"gopher://1/foo<TAB>+<TAB>text/html" (with the tabs encoded) to request
the selector "1/foo" with Gopher+ as type text/html.  Getting the info
is more complex than just appending "<TAB>!" to whatever comes after
the //.
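For what it's worth, a sketch of picking a gopher:// URL apart (this
follows the usual convention that the path is "/" plus one type
character plus the selector, with tabs percent-encoded as %09; the
example URL is made up):

```python
from urllib.parse import urlparse, unquote

def parse_gopher_url(url: str):
    """Split a gopher:// URL into host, port, type, selector, gopher+ part."""
    u = urlparse(url)
    path = unquote(u.path or "/1")
    if len(path) > 1:
        item_type, selector = path[1], path[2:]
    else:
        item_type, selector = "1", ""   # bare host: root menu
    # Any Gopher+ string follows an (encoded) tab inside the selector.
    selector, _, plus = selector.partition("\t")
    return u.hostname, u.port or 70, item_type, selector, plus

parse_gopher_url("gopher://example.org/0/foo%09%2B")
```

So the "!" info request really has to be spliced in after the encoded
tab, not just tacked onto whatever follows the //.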

> Can one assume Gopher+ nowadays?

No.  There are lots of servers that don't support it, including
important ones like Floodgap.

> 2) Make a pool of recently downloaded directory listings from
>    which one can search for the given selector and get the file
>    description and type.

Yikes.  And it doesn't help if the user simply enters a URL without
navigating to it (but you can probably live with that... it just won't
have a title and stuff).
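If you do go the pool route, it amounts to a cache of menu lines keyed
by (host, selector).  A minimal sketch (names are mine; it assumes the
standard tab-separated menu format):

```python
class MenuCache:
    """Remember item meta-data from menus the user has already fetched."""

    def __init__(self):
        self._items = {}   # (host, selector) -> (type, display title)

    def remember_menu(self, host: str, menu_text: str) -> None:
        for line in menu_text.splitlines():
            # Skip the terminating "." and anything that isn't a menu line.
            if line == "." or len(line) < 2 or "\t" not in line:
                continue
            item_type, rest = line[0], line[1:]
            fields = rest.split("\t")
            if len(fields) >= 2:
                self._items[(host, fields[1])] = (item_type, fields[0])

    def lookup(self, host: str, selector: str):
        # None is exactly the hand-typed-URL case: the selector was
        # never seen in any menu, so there's no title or type to show.
        return self._items.get((host, selector))
```

The `lookup` miss is the problem described above: a URL entered directly
has no menu line behind it, so you fall back to no title at all.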
