
Re: man v. info



Karsten M. Self wrote:

> on Wed, Dec 26, 2001 at 04:31:12PM +0100, Michael Mauch (michael.mauch@gmx.de) wrote:
> > Karsten M. Self wrote:

> > > This can be further mitigated by browsers that render on partial
> > > load, or which allow background loading of pages (Galeon rocks for
> > > this).
> > 
> > Sorry, I disagree. Try
> > 
> >   info --output=gcc.txt --subnodes gcc
> > 
> > to put the whole gcc.info* files into one text file, then load it with
> > Galeon. Although the file has only 30000 lines and it's text only,
> > loading and viewing is slow even with Galeon.
> 
> I have 2929 lines, and a 2 second load time.

That seems to be the info-ized manpage.

> What packages do you have installed?

The text from the info file that was installed by "apt-get install
gcc-2.95-doc" has 31299 lines.
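
(In case you want to compare counts: something like this should
reproduce mine - assuming the standalone GNU info writes its dump to
gcc.txt, as in the command above.

    $ info --output=gcc.txt --subnodes gcc
    $ wc -l gcc.txt
)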

Pasted from your other mail:

> With gcc-2.95-doc installed, load time is ~1-2 seconds in Galeon from a
> text file sitting on /tmp.

Wow, that's fast. Is that the time until it starts displaying the
first page(s), or the time until the whole page is loaded (CPU usage
goes back to normal, mouse cursor is normal again)? The latter takes
more than 30 seconds here. Oh, wait a moment - Galeon 1.0 from the
Debian system really is a lot faster (2 seconds for the whole file). So
maybe something went wrong with my Galeon 0.12.7 build here (built from
source on something that once was a SuSE 6.1).

I'm sorry, this was my own fault then.

> Previously described PIII-600MHz 128 MiB, IDE system.  Galeon 1.0-2.
> 
> What's your hardware?

Athlon 700, 384 MiB.
 
> I don't have the PHP document handy.  My experience is that mawk.html
> loads in 13 seconds on first access, and in about 1.5 seconds on reload,
> accessed under dwww, as: 
> 
>     http://ego/cgi-bin/dwww?type=man&location=/usr/share/man/man1/gawk.1.gz
>     http://ego/cgi-bin/dwww?type=man&location=/usr/share/man/man1/mawk.1.gz

Um, the gawk info file is a whole book (it's also available on dead
trees), which is far more exhaustive than the HTML-ized man page. The
gawk.html shipped with the gawk-3.1.0 sources has 32483 lines (1.6 MB),
and it takes more than 20 seconds to load (in the good Galeon 1.0 from
Debian). But once it is loaded, it's fine; even searching is fast enough.

> > You might argue that I should use w3m 
> 
> w3m's loading is likely to be as slow or slower -- it doesn't display a
> page until it's fully loaded, unlike...
> 
> > or links 
> 
> ...which _does_ render a page _while_ it's loading.
> 
> > to read those large HTML files 
> 
> ...but in any case, both render the files in < 1 second.  I suspect
> caching is going on here.  Trying another large page -- bash -- it loads
> in about 2 seconds.  This is comparable to wait times for a freshly
> rendered manpage via groff.

Yes, ok. Links _is_ fast.

> > - but then I would have to remember the keystrokes of these programs
> > (i.e. I can't use my favourite browser) 
> 
> Incidental note:  I'd recommend you learn _one_ text mode browser.

Um, yes - I already learned lynx a while ago. Alas, that's very slow for
large files. So I'll learn another text browser, no problem. But why
can't you learn the (p)info keystrokes, then? ;-)

But when I'm at work, I'm stuck with Netscape 4.x again, because the
admins won't install anything else. I don't have an info browser there,
but I do have (X)Emacs, so I can happily read info files (as long as
they are not HTML-only, as is the case with the PHP manual).

> For times when you've got console-only access (no X11, or remote
> session, or other reasons), they're a godsend.

Yes, of course - and I can navigate with the keyboard.

> > > > When I want to search a directory of HTML files, I tend to grep it
> > > > first, then view the files that seem to be apropos.
> > > 
> > > One better:
> > > 
> > >     $ less $( grep -l 'pattern' filelist )
> > 
> > And then you read the plain HTML source? Not very cool, frankly.
> 
> <pedantic>
>     $ for file in $( list ); do w3m $file; done
> </pedantic>

And then I type my search string into a dozen w3m instances? Still
not convinced.
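
The closest I've come to what I want is to dump the rendered pages and
grep those - a rough one-liner sketch, assuming w3m's -dump mode and
GNU grep's -C for context:

    $ for f in $( grep -l 'pattern' *.html ); do w3m -dump "$f" | grep -n -C 2 'pattern'; done

That at least shows the matches as rendered text, but it's still a poor
substitute for a real search facility.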

> > A local search engine like mnogosearch, htdig or glimpse could help,
> > of course. Is there a Debian package with already set-up configuration
> > for one of these? I seem to remember that FreeBSD has something like
> > this (htdig-based and with man2html and info2html).
> 
> Try dwww.

Thank you, that's great! And with info2www the info books are there,
too. But then: how can I search for e.g. "assembler" in the gawk book?
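
The best I've managed so far is the same --output/--subnodes trick as
with gcc above: dump the book to text and grep that (assuming the book
is installed as the "gawk" info node) -

    $ info --output=gawk.txt --subnodes gawk
    $ grep -in assembler gawk.txt

but that loses the node structure, so it's a crude workaround rather
than a search engine.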

> > The german HTML tutorial SelfHTML 8.0 comes with a built-in JavaScript
> > search engine (<http://selfhtml.teamone.de/>, but it seems to be down
> > at the moment). It is very fast and works well. It looks like it's
> > only available for SelfHTML at the moment, though.
> 
> I don't care for Javascript.

Normally I don't either - but this example of a "self-contained"
search engine made me wonder whether JavaScript really is "always"
senseless bloat.

> It doesn't work in my text browsers, among other issues, and generally
> fucks up my browsing in general use.  The latest Moz build allows
> site-specific Java/Javascript enabling / disabling.  I'm waiting for
> this to hit Galeon.

ACK.

> > I think a decent search facility is a must for more in-depth
> > documentation. If I _know_ that I want to use newwin(3), I can easily
> > type "man newwin". But if I just want to get started with curses, I am
> > really lost after "man -k curses". A hierarchical "book" (be it in
> > HTML or in info format) with a "Getting started" topic is a lot more
> > user-friendly in such cases.
> 
> Most man pages have a "SEE ALSO" section.

Yes, but it's totally unstructured, and those "links" tell you nothing
about what to expect on the "linked" pages. Just the man page names, and
then "go figure it out for yourself, we don't care where you get lost in
man page land".

Regards...
		Michael


