
Re: Webserver with RAID-5 - performance problem





--On Wednesday, December 15, 2004 08:53 +0100 Andrej <list@vatos-locos.net> wrote:

> We have a new server which we want to use as a webserver. The following
> hardware components are included:
>
> - Intel XEON 3GHz
> - 2GB RAM
> - Intel SRCU42L RAID Controller SCSI
> - 3 Fujitsu U320 SCSI discs in a RAID-5 array
>
> We've installed Debian Sarge.
>
> The problem is that when we run hdparm on the RAID device, we get this
> result:
> ----
> hdparm -t /dev/sda
> /dev/sda:
> Timing buffered disk reads:  138 MB in  3.04 seconds =  45.39 MB/sec
> ----
hdparm is not a benchmark; the I/O measurements it gives are merely a toy. Go get bonnie/bonnie++ or some other disk/filesystem benchmark. Run that and compare results. Even then it's not apples to apples. Remember also that RAID-5 carries a performance penalty, particularly on writes. Additionally, Linux issues disk I/O in 128K chunks; if you've built your RAID array with any other stripe size you may suffer pathological performance loss. I don't have a very high opinion of any of Intel's RAID products (excepting the ones they bought from ICP Vortex). Not to mention that the card has a 64MB buffer and the drives may have as much as 8MB apiece, so a 138MB read doesn't even begin to constitute a test.
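For instance, a minimal sketch (the target directory is illustrative; with 2GB of RAM the data set should be at least twice that, so the caches can't hide the disks):

----
# bonnie++ is packaged in Debian
apt-get install bonnie++
# -d: directory on the RAID filesystem, -s: test size in MB (>= 2x RAM),
# -u: unprivileged user to run the test as
bonnie++ -d /srv/test -s 4096 -u nobody
----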

Also you have to make SURE the RAID array has finished rebuilding, all patrol reads/parity checks are turned off, and all unnecessary daemons are stopped. Benchmarks should be performed single-user, without the network whenever possible, doing everything locally, and repeatedly. I won't go any further into benchmark methodology; there are documents and HOWTOs out there for that.
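Roughly like this (assuming sysvinit on Sarge; which services matter varies per box):

----
# drop to single-user mode so cron, Apache, etc. can't skew the numbers
telinit 1
# run the benchmark several times and keep every result, not just the best
for i in 1 2 3; do
    bonnie++ -d /srv/test -s 4096 -u nobody
done
----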

I'd check the RAID controller and make *SURE* you're getting U-160 or U-320. Another common problem with people who aren't familiar with SCSI is that they use cheap cables and/or cheap terminators. This isn't IDE. Quality cables are a MUST. Quality ACTIVE terminators are an absolute requirement for U-160+. Also, U-320 doesn't offer you much, if any, performance over U-160. Why? Bus timing cycles. If you don't know what that means I doubt I have enough space to really explain it.
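A quick, driver-dependent sanity check: the negotiated rate is usually reported in the kernel log when the bus is scanned (the exact wording varies by driver):

----
# what the controller negotiated with each drive at boot
dmesg | grep -i -e 'sync' -e 'wide' -e 'transfer'
# attached SCSI devices as the kernel sees them
cat /proc/scsi/scsi
----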

SATA *MAY* yield faster raw throughput, but in the real world it is not even close to the same as SCSI. A SCSI chain handles multiple commands in parallel; you can have several hundred outstanding I/O transactions for any given SCSI device. SATA allows only one. (SATA 2.0 does allow command queueing, though I don't know of anyone implementing it.)
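On a 2.6 kernel you can see the tagged queue depth the SCSI layer is actually using for the device (the sysfs path below is typical, but the layout varies by kernel version):

----
# outstanding-command (TCQ) depth for the array device
cat /sys/block/sda/device/queue_depth
----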

The rest of this may bore you to tears, or you may find it highly interesting. I'll leave it to you to decide to read on or not.

This parallelism usually doesn't make the performance any better for low-volume or desktop-type apps. But in a server servicing hundreds of concurrent users, each doing something different, it gives the controller and the drives a chance to optimally reorder reads and writes, lowering head movement and speeding data into and out of the drives and controllers for everyone.
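You can mimic that access pattern crudely with a few concurrent sequential readers at different offsets (illustrative numbers; run on an otherwise quiet box):

----
# four parallel readers spread across the disk, 512MB each
for off in 0 2048 4096 6144; do
    dd if=/dev/sda of=/dev/null bs=1M count=512 skip=$off &
done
wait
----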

Another problem with the entire Intel product line is that the FSB sucks. You can bottleneck at numbers as low as ~160MB/sec; that's a pathological case, and usually you'll get more mileage than that. Transactions to/from memory, to/from the PCI bus, to/from AGP, to/from the SIO, etc., all compete for the north bridge interconnect. This is the same interconnect the CPU must use to access its memory, fill and empty its cache, etc.
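hdparm's cached-read timing is a crude proxy for what that interconnect can move when the disks aren't involved at all:

----
# -T reads from the Linux buffer cache: this measures memory/FSB, not disks
hdparm -T /dev/sda
----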


> The result is not looking good.
>
> At home I use a SATA HD; the hdparm result is about 60MB/sec.
>
> I would be very thankful for any suggestions for solving these problems.



--
Michael Loftis
Modwest Sr. Systems Administrator
Powerful, Affordable Web Hosting


