>>>>> Norbert Preining <firstname.lastname@example.org> writes:
> Ever heard of grep, sed, awk, .... all these nice things that make
> your life happy. Trash them when you are doing XML.
JFTR: there's xmlstarlet(1), which is capable enough to replace
awk(1), sed(1), and grep(1) (which, more often than not, gets
mixed up with awk(1) and sed(1), even though its functionality
is effectively subsumed by the former two) for most uses. And
then, every other high-level language has libraries for XML
processing. Perl, for one, is close enough to Awk that one
hardly ever needs to bother learning the latter at all.
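To illustrate (a minimal sketch; the document and the names in it
are invented for the example), here's the sort of structure-aware
"grep" that an XML library in any high-level language gives for
free, using Python's stock xml.etree.ElementTree:

```python
import xml.etree.ElementTree as ET

doc = """<users>
  <user name="alice" shell="/bin/sh"/>
  <user name="bob" shell="/bin/bash"/>
</users>"""

root = ET.fromstring(doc)
# Select every user whose shell is /bin/sh: a match on the
# document's structure, not on the accidents of its layout.
names = [u.get("name") for u in root.findall("user[@shell='/bin/sh']")]
print(names)  # -> ['alice']
```

Reformat the document, split attributes across lines, re-order
them, and the query still works; the grep equivalent wouldn't.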
Seriously, XML takes a lot of concerns off an application
programmer. It provides quoting, arbitrary hierarchical
structure, support for different encodings, etc. Why, do you
think that $ grep '[[:lower:]]' FILE can ever be relied upon to
work? Surely it can't: grep has no way to know the encoding of
the input file, and falls back on the locale instead. XML, on
the contrary, allows the encoding to be declared explicitly in
the XML declaration (<?xml version="1.0" encoding="…"?>). And
then, there's XPath, which takes
the input dataset structure into account. (Care to provide an
example of grepping out the VALUE for KEY of [SECTION]?)
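To wit (a sketch; the document content is invented), a conforming
parser honours the encoding declared in the document itself, with
no reference whatsoever to the locale:

```python
import xml.etree.ElementTree as ET

# Bytes on the wire, declared as ISO 8859-1 in the document itself.
data = ('<?xml version="1.0" encoding="iso-8859-1"?>'
        '<note>caf\xe9</note>').encode("iso-8859-1")

root = ET.fromstring(data)
# Decoded per the declaration, not per LC_CTYPE.
print(root.text)  # -> café
```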
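And to answer my own parenthetical (hedged: the XML rendering of
the configuration and all the names are invented for the example),
compare the one-line XPath query against the grep/sed contortions
the [SECTION] KEY = VALUE format would require:

```python
import xml.etree.ElementTree as ET

config = """<config>
  <section name="network">
    <key name="port">8080</key>
  </section>
</config>"""

root = ET.fromstring(config)
# "The VALUE for KEY of [SECTION]", stated once, as a path:
value = root.find("section[@name='network']/key[@name='port']").text
print(value)  # -> 8080
```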
… Oh, how glad I am that there are prominent TeX figures
actively using XML nowadays!
I take the point, however, that the XML toolset is not nearly as
mature and complete as that for “plain text.” It /is/ an issue,
and I hope it will be resolved. It /is/ reasonable to use the
two-level hierarchical [SECTION] KEY = VALUE format for
configuration files, for it has better readability (as long as
the common tools are considered). What is /not/ reasonable is
to label and shun XML for what it's not.
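That said, the two-level format is hardly without proper tools
either; e.g., Python's stock configparser reads it directly (a
sketch; the file contents are invented):

```python
import configparser

ini = """
[network]
port = 8080
"""

cp = configparser.ConfigParser()
cp.read_string(ini)
# The same "VALUE for KEY of [SECTION]" lookup, INI-style:
print(cp["network"]["port"])  # -> 8080
```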
<!DOCTYPE the><the ><tensible ribbon="campaign" /><p>Advocating the
judicious use of XML applications at the Internet at large.</p></the>