On Thu, Nov 23, 2006 at 04:54:30PM +0100, Qweb - Yavuz Aydin wrote:
> Dear all,
>
> Because of various reasons we would like to use a custom-built storage
> server. This storage server would need to be accessed via NFS. We need
> advice on 2 issues.
>

Where I work we have a storage server that is accessible via NFS. It
causes us no end of grief. I am currently trying to push for a
migration to something sane, like AFS or Lustre, or even Red Hat's GFS.
The problem is that NFS is a workgroup filesystem, not a true
distributed filesystem. Since you are talking about a cluster, you
might want something more robust than NFS.

> Issue 1:
> It seems obvious that SCSI is the way to go. However, because of the
> simple fact that (S)ATA provides much more capacity (and is far less
> expensive) we would like to know what you think of using SATA drives
> in a storage server. And what RAID level would be a good choice? It
> may be good to know that we will set up 2 identical storage servers,
> one as a hot spare, which syncs its disks with DRBD from the main
> storage server. Of course we will be using heartbeat to take over the
> IP on the hot spare once the main storage becomes unavailable.
>

SATA should be OK.

> Issue 2:
> Another question which remains unanswered for us is how one would
> scale storage. For example, if we set up a storage server with RAID 5
> or RAID 6, how can we extend the RAID array with more capacity,
> without losing data? Would it be as simple as plugging another disk
> into the array?
>

Check out LVM.

Regards,

-Roberto

-- 
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
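To illustrate the LVM suggestion above: growing storage with LVM2 amounts to adding a new physical volume to an existing volume group, extending the logical volume, and resizing the filesystem. This is a minimal sketch; the device name /dev/sdb, the volume group vg0, the logical volume lv_data, and the 200G size are all hypothetical placeholders, and the commands require root:

```shell
# Initialize the new disk (or new RAID array) as an LVM
# physical volume. /dev/sdb is a hypothetical device name.
pvcreate /dev/sdb

# Add the new physical volume to the existing volume group,
# increasing the pool of free extents available to it.
vgextend vg0 /dev/sdb

# Grow the logical volume by a chosen amount (200G here is
# just an example figure).
lvextend -L +200G /dev/vg0/lv_data

# Finally, grow the filesystem to fill the enlarged volume
# (resize2fs shown for an ext3 filesystem; other filesystems
# have their own resize tools).
resize2fs /dev/vg0/lv_data
```

Note that this grows the storage pool without touching existing data, but it does not by itself add redundancy; the new disk should itself sit on RAID if the rest of the pool does.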