On Sat, Oct 21, 2000 at 04:30:27AM +0300, Eray Ozkural wrote:
> > Only if you have a limited view of what can go wrong. Sure you can rebuild
> > most of the SQL database, but how are you going to deal with massive
> > filesystem corruption, or an accident in one of the scripts?
> What can a human do about massive filesystem corruption?
> Do you go and review every i-node manually?

If there were a filesystem that was adequately efficient that I could
repair by hand in the event of disasters, I'm sure I would. Unfortunately
there isn't. There's no trade-off to be made here: you either get a
filesystem that is adequate and not maintainable by hand, or you don't
get one at all.

Managing (eg) 130k files and directories spread out in random, varying
hierarchies that can change in a large variety of ways is a very
different problem to managing about 5k sets of source packages, each
with fewer than 100 source files and derived binaries.

It's a different matter with the archive, though: it *is* possible to
make an adequately efficient solution that a human being can manage.

And it's much more useful for a system to be manageable by a human:
humans and programs have different failure modes, and it's much easier
to cope with tools breaking, to verify that your tools work correctly,
and to become confident in your tools if you yourself can understand
exactly what they're doing. That's a really important benefit that you
seem to be simply ignoring.

> An accident in one of the scripts... Which
> scripts are you referring to? The pool manager shouldn't have such
> mistakes, right? :)

We've already uncovered such bugs in da-katie. And in the testing
scripts. And no doubt in dinstall as it already exists. And in most
programs on the planet. Sure, bugs *shouldn't* happen, and people
*shouldn't* make mistakes, but they do.

> > AJ was quite right to bring up examples like ext2 and the linux kernel -
> > they are good example of mixing both requirements, and how fine that line
> > is.
> ext2 isn't a good filesystem. and linux isn't a good kernel. the fact
> that we're running them doesn't entail that their philosophy is correct.
> we're running them because they're the best [in some aspect or another]
> in the free world.

It's not really that useful to talk about things as being "good" or
"bad" when even the "best" isn't "good".

> No, no. It's okay, because it's just a workaround for fs... I just
> don't believe things before I see numbers. When I checked for myself,
> I was convinced as I posted those stats.

Those stats had already been posted by Jason and others when this was
last discussed.

Geez. People these days. Expecting to keep up with events when they
haven't even memorised the past four or five years of sporadic
discussion on the topic. The gall. [0]

> The disagreement is about automation.

No, there is no disagreement about automation per se. If you look at
"testing", for example, you'll note that its major purpose is to
automate the entire freeze process, and to spend significantly more
(computer) time minimising various automatically identifiable problems,
so that the release manager's job is much easier. Similarly, dinstall
itself is an automated tool that does what a human could do with a few
careful mv's, ln's and rm's.

The disagreement is about whether it's worth retaining the possibility
of doing *without* that automation in future: whether the archive
should be physically maintainable by hand, even if it's grown too large
for that to be feasible as a whole.

> Of course never mind if you aren't interested in automating things.

*snort*

Cheers,
aj

[0] http://www.debian.org/News/weekly/1999/45/ for the immediate past
    instance of discussion. Others were before DWN started, I think,
    and didn't get archived too well.

-- 
Anthony Towns <aj@humbug.org.au> <http://azure.humbug.org.au/~aj/>
I don't speak for anyone save myself. GPG signed mail preferred.

``We reject: kings, presidents, and voting.
We believe in: rough consensus and working code.'' -- Dave Clark
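[Editor's note: the "few careful mv's, ln's and rm's" that dinstall automates can be sketched roughly as below. All paths and package names here are hypothetical illustrations of a pool-style layout, not the actual archive scripts.]

```shell
#!/bin/sh
# Sketch of the manual file operations a pool-style archive amounts to.
# Paths and the package name are made up for illustration.
set -e

ARCHIVE=$(mktemp -d)
mkdir -p "$ARCHIVE/incoming" \
         "$ARCHIVE/pool/main/h/hello" \
         "$ARCHIVE/dists/unstable/main/binary-i386"

# An uploaded package lands in incoming/.
touch "$ARCHIVE/incoming/hello_1.0-1_i386.deb"

# mv: accept it into the pool, the canonical storage location.
mv "$ARCHIVE/incoming/hello_1.0-1_i386.deb" "$ARCHIVE/pool/main/h/hello/"

# ln: expose it in a distribution tree as a hard link, so the same
# file can appear in several distributions without extra copies.
ln "$ARCHIVE/pool/main/h/hello/hello_1.0-1_i386.deb" \
   "$ARCHIVE/dists/unstable/main/binary-i386/"

# rm: dropping the package from a distribution just removes the link;
# the pool copy survives until no distribution references it.
rm "$ARCHIVE/dists/unstable/main/binary-i386/hello_1.0-1_i386.deb"

ls "$ARCHIVE/pool/main/h/hello/"
```

The point of the sketch is that each step is an ordinary, inspectable filesystem operation a human could perform (or verify) by hand; the automation only sequences them and keeps the indices consistent.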