> Christian Schwarz writes:
> > If I may summarize it: you suggest "mixing" our current source packages
> > with binary packages, since you consider the terms "source" and "binary"
> > inaccurate. Thus, we would gain "source package dependencies"
> > automatically, etc., and it would also make all "binary-arch" directories
> > obsolete. Every "package" would depend on the necessary "VMs".
Yann Dirson:
> > > It's certainly not ready to be applied right now, as we don't have the
> > > necessary tools, but it should be seen as a medium-term proposal.
CS again:
> > This is surely a _very_ systematic approach! But what are the benefits
> > from it?
Yann:
> * as you said, source-package dependencies.
> * easy access to source packages (see my reply to Raul on Jul 3 for
> why I consider that very useful).
> * About docs, this would allow the sysadmins to choose what format
> they will install, depending on their needs; this would solve the
> problems about man/catman, texi/info/html/ps, and such.
> * choosing the way multi-step conversions are done (eg. sdc used to
> (and maybe still does, I don't really know) do sgml->ascii
> conversion via lout; people who do not want to waste disk-space with
> lout could just use groff as a second converter).
> * this would make the dependency control-fields more accurate, by using
> the transitive constructs I describe (about converters and VMs); that
> scheme could probably also be applied to the current package scheme by
> extending the virtual-package feature.
Wow! This sounds like what I was talking about earlier when I
discussed reforming the source packaging system - but extended to
solve documentation source format problems (with multiple stages of
compiling and linking).
This is definitely where we should head. Exciting stuff. Revolutionary
even. I'm glad I saw this, since I skipped the original thread. (the
discussion here is out of control!)
If a few little hacks to the packaging system are all that's needed
to do all this wild stuff, I say let's do it. :-)
Maybe I should summarize in different language, to see if other
people understand the concept as I understand it:
1) we have one single packaging format - this format can hold
binaries (architecture-specific or not), upstream sources,
debian-specific patches, fully rendered documentation (ie. info
files, HTML files, PostScript files, etc.), and intermediately-rendered
documentation (ie. texi)
What is inside a package would be determined by a field in the
control file for the package. These packages could have different
filename extensions and be placed in different directories on
the FTP sites depending on their contents - but the packaging
system wouldn't care what they were.
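For instance - and this is purely illustrative, none of these fields
exist in dpkg today - the control files of two packages derived from
the same source might say something like:

    Package: foo-doc-texi
    Content-Type: intermediate-documentation    (hypothetical field)
    Format: texinfo
    Architecture: all

    Package: foo
    Content-Type: compiled                      (hypothetical field)
    Architecture: i386

"Content-Type" is just a placeholder name; the point is only that the
state of a package's contents is declared in its control file rather
than implied by the filename extension or the directory it sits in.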
2) The packaging system (dpkg) would be able to install any of these
packages, and handle dependencies.
3) The relationships between source packages and derived (ie. binary)
packages would be defined in some sort of a "map".
ie.
   upstream source packages
             |
             |
             v
   "debian" patches package
             |
             |
             +-----> "virtual machine" binary package
             |       (ie. Java bytecode, Win32 i386 code)
             |                         |
             |                         v
             +-----> architecture specific binary package
             |       (ie. i386, alpha, sparc, ppc, m68k, gnu-win32-i386)
             |
             +-----> intermediate documentation package
             |       (ie. texi, man pages)
             |                         |
             |                         v
             +-----> final documentation package
                     (ie. compiled man pages, info files,
                      HTML pages)
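One way to encode that map - the field name and syntax here are
invented purely for illustration - would be right in the source
package's control information:

    Source: foo
    Derives: foo-jvm (via javac),
             foo-i386 (via gcc, or from foo-jvm via toba),
             foo-doc-texi,
             foo-doc (from foo-doc-texi, via makeinfo)

The exact notation doesn't matter; what matters is that the
conversion graph is stated explicitly, so that tools can walk it.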
4) Converters can be provided that turn packages in a "source"
state into packages in a "binary" state - possibly through
multiple stages.
I'm using the terms "source" and "binary" loosely here, and I
also use the term "compile" loosely, to signify any transition
between states.
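A converter itself could just be a package that declares which
transition it implements - again, every field name below is invented
for this example:

    Package: texi-to-info
    Converts: texinfo -> info           (hypothetical field)
    Depends: texinfo

With declarations like that, dpkg (or whatever drives the conversions)
could search for a chain of converters leading from the state it has
on disk to the state the user asked for - much like the
virtual-package mechanism, but transitive.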
5) The packaging system (or perhaps a new subsystem) can handle
conversion from the source state to the binary states.
It would also be able to do this on a delayed basis. ie.
A person could download all the "intermediate documentation
packages" that consisted of man page sources. When they
want to look at the man pages, the packaging system would
kick in the appropriate converter (groff, or perhaps man2html)
to create the rendered version. The rendered versions could
then be cached.
Alternatively, the person could choose to
download the pre-rendered final version off the Internet
(or a local mirror site) instead. The system could use
heuristics (rules of thumb) based on cost and system
performance to automatically decide whether to compile
the docs (or binaries) locally or to download them.
[ I'm a control systems engineer, and I smell some potentially
linearizable relationships... :-) ]
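Just to sketch the kind of heuristic I mean - the function and the
numbers below are all made up, and nothing like this exists in dpkg -
the decision could start out as simple as comparing an estimated
download time against an estimated local conversion time:

    # Hypothetical sketch only: decide whether to fetch a pre-built
    # package or run the converter locally.  Real inputs would come
    # from the package map and measured system performance.

    def fetch_or_build(prebuilt_size_bytes, bandwidth_bytes_per_sec,
                       estimated_build_secs):
        """Return 'download' if fetching the pre-built package looks
        cheaper than converting the source package locally."""
        download_secs = prebuilt_size_bytes / bandwidth_bytes_per_sec
        return "download" if download_secs < estimated_build_secs else "build"

    # eg. a 2 MB rendered-docs package over a slow modem (~3.5 KB/s)
    # versus a 90-second local groff run:
    print(fetch_or_build(2000000, 3500, 90))    # -> "build"

A real heuristic would of course also weigh disk space, whether the
needed converters are even installed, and so on.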
This would also work really slickly for compiled packages - a
person doing a "port" to an unusual architecture (ie. GNU-Win32)
could simply choose the source packages and attempt to convert
them to binary packages. If they were successful, they could
upload the resulting binary packages, so that others participating
in the "port" don't have to go through the compilation step.
It could also provide a global cache for JVM-based code such as
Java. Java code can be compiled to machine-specific code using
tools such as "toba", and soon "gcc". Running pre-compiled Java
code is going to be faster than using a JIT like "kaffe" (since
there is no run-time compilation step), but it cuts down on
portability. If we had a system such as this, the user could
use the architecture-specific binary package if it was
available, or fall back to the "JVM" binary package instead.
One can even imagine "on-the-fly" conversion servers that
could do compilation remotely, and deliver compiled
packages on demand.
So we could build a solution that always optimizes the trade-off
between bandwidth and computing power. That's revolutionary.
6) It would even be possible to define conversions between
states that skip intermediate representations (eg. going straight
from the upstream documentation source to the final rendered
form, without ever building the intermediate documentation
package locally). This could open up some serious potential for
optimization.
I could go on - this is a really exciting idea!!!
(anyone else think this is cool? or am I out in outer space
somewhere?)
Cheers,
- Jim