
Re: Suggestions regarding a PCI-X card.



On 2/15/2012 3:13 AM, Dan Serban wrote:

> I don't know why I didn't think of that, maybe I spent too much time on
> Wikipedia and assumed I'd see 33MHz for all PCI devices.

>> No, "capabilities" tells you what the device can do *and* what's it's
>> currently doing.  Note the 133MHz is under the sub heading "status".
> 
> Such an elaborate post for this one simple response.  How foolish do I feel?

Heheh.  Note the right-hand side of my email address.  ;)  I can be a
bit... engaged... when talking shop. ;)

> Yes, I'm familiar with the configuration, though admittedly not familiar
> enough with lspci =)

You should have never run it.  ;)

> I was thinking the same thing, but I wanted to test the results with
> bonnie++ and simple dd tests to see if I would be gaining much of anything
> by putting one card into one bus and the other on the second.

Bonnie and dd aren't going to tell you much.  They're going to give you
bonnie and dd results, which bear little resemblance to most real-world
workloads.

> While I'm not unfamiliar with the theories, I have been bit by IO problems
> in three cases and have the opportunity to test and see for myself on this
> install where I'm failing to see the bottlenecks.
> 
> In one case the pci bus was the limiting factor and my file server machine
> which was where concurrent 2-7mbit video recordings were being written
> would lag out severely due to IO wait.

This iowait was caused by the disks running out of seek headroom.  The
PCI bus was not becoming saturated and thus was not your bottleneck.  A
32/33 PCI bus can carry 138 of your 7mbit/s streams after losing 5-10%
to bus-wide PCI protocol overhead, and assuming there was little/no bus
contention with devices other than the capture hardware.  That's 69
streams coming through the capture hardware or NIC and 69 streams
written to the disk array.  The bandwidth required of the disks is only
60 MB/s, but with that many streaming writes the heads can't seek to
each track quickly enough while writing each file.  You're looking at a
minimum of 69ms of iowait between the first writer thread and the last,
assuming 1ms between seeks, and it's probably more on the order of 8ms
per seek given the track-to-track seek time on a 7.2k SATA drive is 5ms.
So you're looking at

69 writers * 8ms = 552ms   of iowait between the first and last thread.
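
If you want to sanity-check that arithmetic, here's a rough Python
sketch.  The 133 MB/s bus figure, the 5-10% overhead and the 8ms
per-seek number are the assumptions above, not measurements:

# Back-of-the-envelope check of the stream/iowait numbers above.
PCI_BW = 133e6        # 32-bit/33MHz PCI, ~133 MB/s
STREAM = 7e6 / 8      # one 7mbit/s recording, in bytes/s
SEEK   = 0.008        # assumed ~8ms per seek on a 7.2k SATA drive

for overhead in (0.05, 0.10):
    total   = PCI_BW * (1 - overhead) / STREAM  # streams crossing the bus
    writers = total / 2                         # half in via NIC, half out to disk
    print("%.0f%% overhead: %3.0f streams on the bus, %2.0f writers, "
          "%2.0f MB/s to disk, %.2fs first-to-last iowait"
          % (overhead * 100, total, writers,
             writers * STREAM / 1e6, writers * SEEK))

It lands in the same ballpark as the 138 streams / 69 writers / 60 MB/s
/ 552ms figures above.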

You could have a 4 GB/s PCIe x8 RAID card w/512MB BBWC and 16 7.2K SATA
drives attached, and you'd still likely hit this wall with 69 such
concurrent 7mbit/s streams.  Using 15k SAS drives would cut that iowait
in half.  But 276ms is probably still going to be too high.

If you were writing significantly less than 69 streams then you have
some other hardware or software problem.

> The other is still an issue for me, but I feel NFS is really the culprit in
> that case though .. again testing and experience is how I end up feeling
> comfortable enough to tell people to go get stuffed when a suggestion is
> made that I know to be wrong.  Call me a perfectionist or a__l retentive but
> that's how I roll when bitten.

Just make sure you're testing your actual workloads.  Synthetic tests
are just that, synthetic.  No one has ever been bitten by testing with
their actual workload.  Countless folks have been bitten by thinking
bonnie, iozone, etc. results are a substitute for their actual
workload.  We see this somewhat frequently on the XFS and Linux-RAID lists.

> Yes, though 66MHz vaguely sounds like half of what I had, didn't
> it?  :)

64/66 PCI-X yields 528 MB/s of bandwidth.
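
The arithmetic behind that figure, and the other width/clock
combinations mentioned in this thread, is just bus width times clock.
A minimal sketch (nominal numbers, ignoring protocol overhead):

# Nominal PCI/PCI-X bandwidth: width (bits) * clock (MHz) / 8 = MB/s.
# Real throughput is lower once protocol overhead and arbitration are counted.
def bus_mb_s(width_bits, clock_mhz):
    return width_bits * clock_mhz / 8

print(bus_mb_s(32, 33))    # plain 32/33 PCI  -> ~132 MB/s
print(bus_mb_s(64, 66))    # 64/66 PCI-X      ->  528 MB/s
print(bus_mb_s(64, 100))   # 64/100 PCI-X     ->  800 MB/s
print(bus_mb_s(64, 133))   # 64/133 PCI-X     -> ~1064 MB/s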

I've seen countless IBM FAStT600 Fibre Channel storage arrays deployed
with 28 15k FC drives (2 shelves), with a single 4Gb/s FC host
connection being used, serving 4-12 ESX farm nodes and 400-800 users.
That's 'only' 400 MB/s each way, 800 MB/s total full duplex, serving
the storage bandwidth needs of an entire organization.

The point is, even a lowly 66MHz, 528 MB/s PCI-X bus has a tremendous
amount of capability in the real world.  General file serving is a
real-world workload that doesn't 'need' 800 MB/s of bandwidth, which is
what you'll get using two SAT2-MV8 cards in one bus.  It doesn't even
'need' 528 MB/s.

The problem here is that you're a hobbyist (nothing wrong with that),
not an SA, so you're not going to digest or agree with what I'm telling
you WRT storage b/w.  If you were an SA, you wouldn't be monkeying with
upgrading and optimizing 10-year-old hardware with PCI-X buses and
uber-cheap non-RAID SATA HBAs from the same period.

So drop the two SAT2-MV8 HBAs into slot 1 and slot 3 and close J53 so
both cards run at 100 MHz (asymmetry is BAD after all).  Now you have
your 1.6 GB/s of PCI-X b/w, which should closely match the average
streaming read performance of those 16 drives.  You'll never get close
to achieving 1.6 GB/s throughput with these drives, but it'll sure be
fun to burn hundreds of hours trying. ;)
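
As a quick sketch of why those two numbers should be close; the
~100 MB/s per-drive streaming figure is an assumed ballpark for a
period 7.2k SATA drive, not a spec:

# Two 64-bit/100MHz PCI-X buses vs. aggregate streaming read of 16 drives.
buses      = 2
bus_mb_s   = 64 * 100 // 8  # 800 MB/s per 64/100 PCI-X bus
drives     = 16
drive_mb_s = 100            # assumed average streaming read per drive, MB/s

print("PCI-X bandwidth:", buses * bus_mb_s, "MB/s")     # 1600
print("drive streaming:", drives * drive_mb_s, "MB/s")  # 1600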

> While a good solution and idea, I refuse to admit that I simply overlooked
> that option.  =)

Or you could just close J53. ;)  /me ducks

> Correct, I haven't had to use lspci to the extent I needed to here, I

But did you really "need" to use it?  ;)

> simply found the output to be ambiguous.  Thanks for straightening it out
> for me.

You're welcome, Dan.  Apologies for any jabs/sarcasm that may have come
across as offensive.  Sometimes it's needed to slap folks into a proper
perspective. ;)

-- 
Stan

