Re: Suggestions regarding a PCI-X card.
On Wed, 15 Feb 2012 00:40:26 -0600
Stan Hoeppner <email@example.com> wrote:
> This is an 8 port card, so 16 drives will require 2 cards. Unless you
> plan to connect 4 SATA drives and 4 EIDE drives to the mobo ports...ick
Indeed it is. I was planning on adding the second card after I've tested the
first thoroughly, so two PCI-X cards in total.
> > One thing that's come to my attention before I go forward is that when I
> > run lspci -vv, I've noticed this:
> > # lspci -vv -s 03
> > 03:03.0 SCSI storage controller: Marvell Technology Group Ltd.
> > MV88SX6081 8-port SATA II PCI-X Controller (rev 09)
> > --snip--
> > Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort-
> > <TAbort- <MAbort- >SERR- <PERR- INTx-
> That "Status: Cap+ 66MHz+" is an lspci default for all PCI devices.
> Ignore it. I have a 12 year old Intel BX test system here, w/33MHz 5v
> only PCI slots. 66MHz PCI hadn't even been invented yet. But lspci says:
> Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort-
> <TAbort- <MAbort- >SERR- <PERR- INTx-
I don't know why I didn't think of that; maybe I spent too much time on
Wikipedia and assumed I'd see 33MHz for all PCI devices.
> > --snip--
> > Capabilities:  PCI-X non-bridge device
> > Status: Dev=03:03.0 64bit+ 133MHz+ SCD- USC- DC=simple
> This is what you trust: ^^^^^^^^^^^^^^
> It's running at 133MHz.
> > DMMRBC=512 DMOST=4 DMCRS=8 RSCEM- 266MHz- 533MHz-
> > Am I reading the above wrong?
> Yes, you were.
> No, "capabilities" tells you what the device can do *and* what it's
> currently doing. Note the 133MHz is under the sub heading "status".
Such an elaborate post for this one simple response. How foolish do I feel?
> > I've double checked that the
> > jumpers are set correctly on the motherboard and am concerned that I'm
> > somehow doing something wrong.
> You haven't yet. But just in case...
> slots 1/2 on PCI-X bus B: Max 133MHz, single card
> slots 1/2 on PCI-X bus B: Max 100MHz, two cards
> slots 3/4 on PCI-X bus A: Max 100MHz, 1 or 2 cards
Yes, I'm familiar with the configuration, though admittedly not familiar
enough with lspci =)
> If you install a 2nd SAT2-MV8, put both cards in PCI-X slots 1/2, and
> close J53. This leaves slots 3/4 open for non PCI-X cards should you
> need to install such in the future, or have such already. Don't attempt
> to max your SATA HBA bandwidth by using both PCI-X buses, one card in
> each, as that's wholly unnecessary, and decreases your flexibility and
> performance WRT future card installation.
I was thinking the same thing, but I wanted to test the results with
bonnie++ and simple dd runs to see whether I'd gain much of anything by
putting one card into one bus and the other into the second.
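For the record, this is the kind of simple dd test I mean, sketched against a throwaway temp file so it's safe to paste; on the real box I'd point `if=` at the md device instead, and the sizes here are just placeholders:

```shell
# Rough dd streaming test. On real hardware, use the array device
# (e.g. if=/dev/md0) and add iflag=direct so the page cache doesn't
# inflate the read numbers; a small temp file stands in for it here.
f=$(mktemp)
# Write pass: conv=fsync forces the data to hit the disk before dd exits.
dd if=/dev/zero of="$f" bs=1M count=8 conv=fsync 2>/dev/null
# Read pass: dd reports throughput on stderr when run interactively.
dd if="$f" of=/dev/null bs=1M 2>/dev/null && echo "read ok"
rm -f "$f"
```

Running that once per slot arrangement (both cards on bus B vs. one per bus) would show whether splitting the buses buys anything in practice.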
> The reason "maxing out" is unnecessary is this:
While I'm not unfamiliar with the theories, I have been bitten by IO
problems in three cases, and on this install I have the opportunity to test
and see for myself where I'm failing to spot the bottlenecks.
In one case the PCI bus was the limiting factor: my file server, which was
receiving concurrent 2-7Mbit video recordings, would lag out severely due
to IO wait.
The other is still an issue for me, though I feel NFS is really the culprit
in that case. Again, testing and experience are how I end up feeling
comfortable enough to tell people to go get stuffed when a suggestion is
made that I know to be wrong. Call me a perfectionist or a__l retentive,
but that's how I roll when bitten.
> In other words, don't get yourself all wound up over theoretical maximum
> bandwidths of drives, cards, and bus slots. Even though 16
> SATA-I/II/III drives in RAID0 may have a theoretical combined streaming
> read rate of ~1.6GB/s, you'll never see it in the real world. You'll be
> lucky to see 1GB/s with the "perfect streaming test", which doesn't
> exist, regardless of HBA, RAID card, bus slot speed, etc.
> So don't worry about the PCI-X bus speed.
Yes, though 66MHz vaguely sounded like half of what I thought I had, which
is what threw me.
> Believe me when I say I'm losing patience with you Dan. ;)
Oh. I certainly do! =)
> Believe your own eyes. Remove your cranium from your backside and use
> some deduction and common sense. It literally took me about 2 minutes
> to figure this out, and it wasn't difficult at all.
Heh, so simple!
> Drop it in slot 3/4 leaving the sister slot empty, and look at the lspci
> output. You should see 100 where you currently see 133. If you don't,
> then you know lspci is simply fuckered as both the 66 and 133 are wrong.
> Then you can simply tell lspci to piss off, assume the hardware is
> working as it should (it is), and go on with your life.
While a good solution and idea, I refuse to admit that I simply overlooked
that option. =)
> > Thanks.
> > If it makes a difference, I can try this with the latest Debian live distro.
> None of this makes a difference. That mobo was created in 2001/2002, 10
> freak'n years ago. You think the kernel/tools devs wouldn't have it
> figured out by now, already covered by previous kernels/tool revs? Again,
> you're simply reading lspci wrong, I'm guessing because you're just not
> that familiar with it.
Correct, I haven't had to use lspci to the extent I needed to here; I
simply found the output to be ambiguous. Thanks for straightening it out.