
Re: man v. info



Karsten M. Self wrote:

> on Tue, Dec 25, 2001 at 06:32:31PM -0800, Craig Dickson (crdic@yahoo.com) wrote:
> > Carl Fink wrote:
> > 
> > > BTW, for HTML docs, put them all in *one* file with hyperlinks.  There is no
> > > meaningful advantage to cutting it into twenty pieces, and it makes
> > > searching significantly more difficult.
> > 
> > For locally-stored docs that's arguable. The advantage of small files
> > comes when you have to read it across a network, especially a WAN.
> 
> I'd disagree.  Info nodes can be _quite_ small -- a screen or less of
> data.  Load latency kills you more than the actual data transfer
> interval.  I'd rather have, say, 1/10 the interrupts, of roughly 2-4
> times the duration, than to be interrupted with great frequency.

Yes, but that's of course not a problem of the format; there are HTML
pages with only five lines as well.

> This can be further mitigated by browsers that render on partial load,
> or which allow background loading of pages (Galeon rocks for this).

Sorry, I disagree. Try

  info --output=gcc.txt --subnodes gcc

to put the whole gcc.info* files into one text file, then load it with
Galeon. Although the file has only 30000 lines and is plain text,
loading and viewing it are slow even with Galeon.

Or look at the PHP docs (e.g. from
<http://www.php3.de/download-docs.php>). They are available in several
formats (no info among them), one of which is a single HTML file
(4.7 MB). Even when loaded from the local hard disk, it takes ages to
load in Galeon.

The same goes for gawk.html, mysql.html and similar large files.

You might argue that I should use w3m or links to read those large
HTML files - but then I would have to remember the keystrokes of those
programs (i.e. I couldn't use my favourite browser), and I would have
to install/build them on other machines ((X)Emacs is everywhere).

> > When I want to search a directory of HTML files, I tend to grep it
> > first, then view the files that seem to be apropos.
> 
> One better:
> 
>     $ less $( grep -l 'pattern' filelist )

And then you read the plain HTML source? Not very cool, frankly.
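
If w3m happens to be installed anyway, one could at least strip the
markup before paging - a rough sketch (w3m's -dump mode renders to
plain text, so no interactive keystrokes are needed):

  $ grep -l 'pattern' *.html | xargs -n1 w3m -dump | less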

A local search engine like mnogosearch, htdig or glimpse could help,
of course. Is there a Debian package that ships a ready-to-use
configuration for one of these? I seem to remember that FreeBSD has
something like this (htdig-based, with man2html and info2html).
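
With glimpse, for example, indexing a documentation tree by hand takes
only two commands - a sketch, where the paths are just examples (and
Debian's gzipped docs would additionally need glimpse's filter
mechanism):

  $ glimpseindex -H ~/.glimpse /usr/share/doc
  $ glimpse -H ~/.glimpse 'newwin'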

The German HTML tutorial SelfHTML 8.0 comes with a built-in JavaScript
search engine (<http://selfhtml.teamone.de/>, though the site seems to
be down at the moment). It is very fast and works well. It appears to
be available only for SelfHTML at the moment, though.

I think a decent search facility is a must for more in-depth
documentation. If I _know_ that I want to use newwin(3), I can easily
type "man newwin". But if I just want to get started with curses, I am
really lost after "man -k curses". A hierarchical "book" (be it in HTML
or in info format) with a "Getting started" topic is a lot more
user-friendly in such cases.

Regards...
		Michael


