Re: Wrapper to download databases (was: Sharing a subdirectory of /usr/share between multiple packages ?)
On Tue, May 08, 2007 at 10:05:05AM +0200, Gabor Gombas wrote:
> On Tue, May 08, 2007 at 09:39:23AM +0200, Michael Hanke wrote:
> > This has the advantage that the datasets are only downloaded if it is
> > really necessary (modifications are detected by md5sum). This is
> > especially useful as the datasets tend to be the same across releases.
> And this is a real PITA when you want to install on a machine that does
> not have a network connection. Don't do that! Anything that bypasses the
> normal package management is a pain to handle.
> Instead how about splitting the package to two source packages: one for
> code and one for the data? That way you can upload the data package much
> less frequently.
I'm aware of the disadvantages. But still, the machines where such
packages will be installed most likely have a VERY fast connection.
Additionally, the scripts offer the ability to use local archives or
mirrors (on disk or LAN).
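
To make the mechanism concrete, here is a minimal sketch of the kind of
check such a wrapper can do: the big archive is only fetched when its
md5sum no longer matches the published checksum, and the "mirror" can
just be a directory on disk or on the LAN. The function name, file
layout and the use of plain cp are illustrative assumptions, not the
actual script.

```shell
#!/bin/sh
# Hypothetical sketch: refresh a dataset only when its checksum changed.
# The mirror directory holds the archive plus a <dataset>.md5 file
# created with `md5sum <dataset>`.

# fetch_if_changed <mirror-dir> <dataset> <dest-dir>
fetch_if_changed() {
    mirror=$1; dataset=$2; dest=$3

    # Always refresh the (tiny) checksum file.
    cp "$mirror/$dataset.md5" "$dest/"

    # md5sum -c exits non-zero if the file is missing or modified.
    if ! (cd "$dest" && md5sum -c --status "$dataset.md5" 2>/dev/null); then
        cp "$mirror/$dataset" "$dest/"
        # Verify the fresh copy against the published checksum.
        (cd "$dest" && md5sum -c --status "$dataset.md5")
    else
        echo "$dataset is up to date"
    fi
}
```

For an HTTP mirror the two cp calls would become wget/curl fetches; the
checksum logic stays the same.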
Even if I split the package into data and binaries, as soon as a
single file gets updated I have to bump the version and everyone has to
download the whole package again.
And we are talking about a lot of data. I'm currently testing it with a
little 6 MB package (mainly because it is only 6 MB), but the candidates
are 130 MB, 250 MB and 400 MB. And as I understand it, Charles is
talking about a lot more data.
But you are of course right that having it all properly packaged in the
archive is by far the best way to do it (from the user's perspective).
I'm just not sure whether it is useful to put such very, very
special-interest data in there.
GPG key: 1024D/3144BE0F Michael Hanke