Maintaining file system integrity.
I have set up a server that was originally intended to supply FTP
and HTTP services on the internet. As a convenience to myself I set
up NFS and started using it for various temporary purposes. When one of
the guys found out it was available, he set up access to my NFS
server via one of the Novell NetWare file servers here. Once we saw
how well it worked, and that the resources needed to maintain this
system were far less than expanding the Novell servers, the powers
that be decided to add about 4 times the storage to my server and
start offloading a great deal of the "junk" sitting on the primary
Novell server to my Debian box.
During some testing a week or so ago the Debian box locked up due to
a problem on the main ext2 file system. The server was rebooted,
fsck took care of the problem, and off we went. If my recollection is
correct, a huge quantity of data had just been deleted (or was still
being deleted) and another large chunk was being written when it died.
I wondered if there was something that could be done to ensure that
these kinds of problems happen as rarely as possible. Basically, I
figure there are some parameters set by default in the Debian
distribution that could be altered in favour of better server (as
opposed to workstation) performance. Things like the nice level of
kflushd.
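For instance, I imagine something along these lines (just a sketch of
what I have in mind; the device name and the numbers are made up, and
I'm not sure the bdflush field layout is the same on every 2.x kernel):

  # Force periodic fscks on the data partition (/dev/sda5 is only a
  # placeholder for whatever the real partition is):
  tune2fs -c 20 /dev/sda5      # check every 20 mounts
  tune2fs -i 7d /dev/sda5      # or at least once a week

  # Bump kflushd's priority so dirty buffers get flushed sooner:
  renice -5 -p `ps ax | awk '/[k]flushd/ {print $1}'`

  # Make bdflush start writing at a lower dirty-buffer percentage
  # (first field); the other values are just defaults I've seen quoted,
  # so check the documentation for the running kernel:
  echo "30 500 64 256 500 3000 500 1884 2" > /proc/sys/vm/bdflush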
Any direction in this area would be great. Thanks all.
Chris,