Re: Why not move Apt to a relational database
* From: Justin Emmanuel
* Date: Sun, 03 Jun 2007 10:55:01 +0100
Hello, Justin. Hope you are still here.
> I am brand new to this mailing list, I joined it because I had an idea
> that I would like to have considered. Moving apt to a relational
> database, for several reasons.
>
> Based on a relational database it will run faster,
The first reason is "faster". What if I say: based on tmpfs and a
directory/file structure it would run even faster?
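To make that counter-claim concrete, here is a minimal sketch of what I mean. The one-file-per-package layout and the paths are my own invention, not anything apt actually does; a real setup would put the directory on a tmpfs mount so every read is served from RAM:

```shell
# hypothetical layout: one small file per package name; a lookup is a
# single open()+read(), no SQL engine and no big text file to parse
db=$(mktemp -d)                       # stand-in for a tmpfs mount point
printf 'Version: 3.1-5\n'  > "$db/bash"
printf 'Version: 0.6.46\n' > "$db/apt"
cat "$db/bash"                        # prints "Version: 3.1-5"
```

The point is that the filesystem already gives you a fast keyed lookup; tmpfs just removes the disk from the picture.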
> also there should be some more data stored about the programs to
> facilitate system restoring.
File size in UNIX systems is limited by two things:
- amount of memory (soft limit)
- architecture (hard limit; on AMD64 it is practically unlimited)
> The data should be backed up automatically and regularly,
Periodic job (lock-db/unlock-db stand for whatever locking wrappers
the database layer provides):
if lock-db; then
    tar c -C /var/cache/db-tmpfs -f "/var/backup/db.$$" . \
        || echo error | mail -s '[db] backup daemon' root
    unlock-db
fi
> so that if the database is stored on another computer and first
> computer has a hardware failure, the data from the backup can be used
> to completely restore the computer to its status again.
Clients on the failed machine: scp, curl, lftp, whatever transfers a file.
> It should be a relational database that contains checksums of the
> compressed and uncompressed state of files that will be installed. So
> that if there is a problem with the computer and something is
> segfaulting, every file on the computer can be checked against this
> information, including freshly downloaded files, so that they can find
> out if any of them are corrupt and need to be replaced. Then apt can
> automatically download the file. I have had to numerous times manually
> edit the text database that apt writes to because something had been
> changed to "." when it should have been ">". In a good relational
> database, the version numbers can be kept separately from the rest of
> the data, this will all go to help avoid corruption and lead to
> scalability both for individual machines and networked enterprise
> machines. The data at every level can be split into different tables
> using normalisation, increasing the speed of the reading and making
> sure that only the files that need to be parsed get parsed.
I can't see more reasons here, only new features.
Question: is all of this possible without an RDB, with the scheme I've proposed?
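At least the checksum part seems to need nothing more than plain text files. A sketch (the directory and file names here are made up for the demo; dpkg already keeps similar per-package .md5sums lists under /var/lib/dpkg/info/):

```shell
# record checksums of "installed" files into a plain text list, then
# verify them, which is the check the poster wants apt to run
root=$(mktemp -d)                       # stand-in for the filesystem root
printf 'some binary\n' > "$root/bin-file"
( cd "$root" && md5sum bin-file > sums )   # write the checksum list
( cd "$root" && md5sum -c sums )           # prints "bin-file: OK"
```

A corrupt file would make `md5sum -c` report FAILED with a nonzero exit status, which a wrapper script could turn into a re-download. No database engine is involved.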
> So what do you think? Is this the correct mailing list to send this idea
> to?
I think we should take the FREE version of DB2 Express and take control of
our XML
(composed from sf.net's ads in mailing lists ;)
As I'm new here too, I'm just expressing my stupid (Linux-specific)
counter-``idea''.