
Re: [klakier@pld.org.pl: Standard libxml-based processing scripts for DocBook?]



Eric Bischoff <e.bischoff@noos.fr> writes:

> > The public id is
> > the "universal" part, but the system id is an LSB/Linux path. The disadvantage of
> > this solution is that the document becomes less "universal".
> 
> Exactly. But can you guess the problems we face at KDE when we distribute
> documentation? The target system could very well be non-LSB, and we still
> have to guess an absolute path. We could do it via autoconf, though :-/.

In fact, when you meet a non-standard system, there is very little chance
that the processing will go well; in that case the docs could simply be disabled.

This could be painful for some folks, but on the other hand it could
help promote the standard ;-P

> > and if the tools find the file already stored in "cache dir", e.g.
> > /var/cache/xml/http:/www.oasis-open.org/docbook/xml/4.0/docbookx.dtd
> 
> This is a great idea - but do we know of any tools that currently implement
> this? It's the same as developing tools that have their own entity
> resolvers. It works, but what about using tools out-of-the-box?

Nope, none so far. Nevertheless, such an approach is simple and much more
natural than catalogs. Think about the ease of implementation
too...  But maybe I am reinventing the wheel, and such an idea is
well known - and has been proved to have many disadvantages that I
do not realise yet?  Do you know whether there have been any general discussions
about such an approach to finding XML files?  I'm just wondering why such a
simple and promising idea has not been proposed as a standard for XML tools
to replace catalogs.
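
To make the idea more concrete, here is a minimal sketch in C of the mapping
I have in mind (nothing that exists today - the cache root /var/cache/xml and
the collapsing of "//" into "/" are just assumptions taken from the example
above):

#include <stdio.h>
#include <string.h>

/* Map a system identifier to a local cache path, e.g.
 *   http://www.oasis-open.org/docbook/xml/4.0/docbookx.dtd
 * becomes
 *   /var/cache/xml/http:/www.oasis-open.org/docbook/xml/4.0/docbookx.dtd
 */
static void url_to_cache_path(const char *url, char *out, size_t outlen)
{
    const char *cache_root = "/var/cache/xml/";  /* assumed cache root */
    const char *p;
    size_t n;

    strncpy(out, cache_root, outlen - 1);
    out[outlen - 1] = '\0';
    n = strlen(out);

    /* append the URL, collapsing every run of '/' into a single '/' */
    for (p = url; *p != '\0' && n + 1 < outlen; p++) {
        if (*p == '/' && out[n - 1] == '/')
            continue;
        out[n++] = *p;
    }
    out[n] = '\0';
}

int main(void)
{
    char path[1024];

    url_to_cache_path("http://www.oasis-open.org/docbook/xml/4.0/docbookx.dtd",
                      path, sizeof(path));
    puts(path);
    return 0;
}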

> > it would be taken from that dir, otherwise it would be downloaded.  No
> > catalog files... of course the tools should be modified to use it.
> 
> This is the point ;-). I would even prefer XML tools to be modified to use
> the existing mechanism: the catalogs ;-). I know Norm is working on this
> currently.

Hmm, it does not look like the implementation of a 'cache' is very complicated.
I have just tried it with the rxp validator; here is my ad-hoc patch
http://team.pld.org.pl/~klakier/rxp-xmlcache.patch
against the source:
ftp://ftp.cogsci.ed.ac.uk/pub/richard/rxp-1.2.4beta5.tar.gz
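
The shape of the change is roughly this (an illustrative sketch only, not the
actual patch; url_to_cache_path() is the mapping sketched above, and
fetch_over_http() is a made-up name standing in for whatever download code the
tool already has):

#include <stdio.h>

void url_to_cache_path(const char *url, char *out, size_t outlen);

static FILE *fetch_over_http(const char *url)
{
    /* placeholder for the existing network code */
    fprintf(stderr, "cache miss, fetching %s over HTTP\n", url);
    return NULL;
}

FILE *open_entity(const char *url)
{
    char cached[1024];
    FILE *f;

    url_to_cache_path(url, cached, sizeof(cached));

    f = fopen(cached, "r");
    if (f != NULL)
        return f;                 /* cache hit: no network access at all */

    return fetch_over_http(url);  /* cache miss: behave as before */
}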

Trying it:
You may first validate a simple document (rxp -sxV document.xml)
using real HTTP support, without the 'cache', e.g. this doc:

<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE para SYSTEM "http://www.docbook.org/xml/4.1.2/docbookx.dtd" []>
<para>Test only</para>

You may get some messages like "/usr/share/xmlcache/...: No such file or directory";
they can be safely ignored (the patch is not yet perfect ;) ).

Then, add the 'cached' DTDs:
mkdir -p /usr/share/xmlcache/http:/www.docbook.org/xml/4.1.2/
cp -a /usr/share/sgml/docbook/xml-dtd-4.1.2/* /usr/share/xmlcache/http:/www.docbook.org/xml/4.1.2/

and try again. Now the files should be taken from /usr/share/xmlcache.

> You seem to have had bad experiences with catalogs ;-). Believe me, if used in a
> structured way (with some starting point in /etc/sgml), they become a
> great tool, allowing you to switch versions for your documents easily

I know that catalogs are convenient for users - if they are already set up
properly. But I really dislike them as a maintainer of XML/SGML RPM packages.
Using centralized catalogs in the way the LSB suggests is simpler than
the way it was done earlier, but it is still too complicated
(big %post scripts, 3 catalog files needed to find one DTD file,
the (de)registration procedure requires some parsing, etc.).
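
To illustrate the "3 catalog files" point, a centralized setup typically has
to walk a chain like this before the DTD is found (the file names and paths
here are only an example, they differ between distributions):

/etc/sgml/catalog:
  CATALOG "/etc/sgml/xml-docbook-4.1.2.cat"

/etc/sgml/xml-docbook-4.1.2.cat:
  CATALOG "/usr/share/sgml/docbook/xml-dtd-4.1.2/docbook.cat"

/usr/share/sgml/docbook/xml-dtd-4.1.2/docbook.cat:
  PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" "docbookx.dtd"

and each of those entries has to be added by the package's %post script and
removed again at uninstall time.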

Regards,
Rafal

-- 
Rafał Kleger-Rudomin (klakier@pld.org.pl)


