
Re: Server Motherboards with multiple PCI buses

On Fri, Apr 14, 2000 at 01:18:55PM +0200, Russell Coker wrote:
> On Wed, 12 Apr 2000, J. Currey wrote:
> >I am putting together a couple of servers that will become PCI 
> >bus bottlenecked.
> >I haven't found very many motherboards that have multiple PCI buses.
> >The Intel L44GX+ has taken the AGP port (PCI 66) and used it for 
> >PCI slots instead, but there is a bug report against this second PCI 
> >bus where the machine locks up; otherwise it sounds good. 
> >Does anyone know of a good ix86 multiple-PCI-bus motherboard that 
> >you have running Debian on (even if a custom kernel was required,
> >as for SMP)?
> I am curious, what are you doing that will cause a PCI bus bottleneck?  I
> hope you don't mind me asking.

Well, supporting gigabit Ethernet for one, plus four 100 Mb subnetworks
and logging.

PCI bandwidth is about 132 MB/sec (32-bit at 33 MHz), and with perhaps
100 MB/sec taken by the gigabit Ethernet, that doesn't leave much room
for disk writes, much less the other networks. In practice it will
rarely see that much, but it must be capable of it (and I have a
one-shot budget that must accommodate a few years' growth).

A common example of a PCI bottleneck is multiple SCSI controllers
with striped drives. It would make sense for gigabit Ethernet cards
and high-speed SCSI controllers to use the AGP slot (since AGP is
really just PCI at 66 MHz with a funny connector <- flame target).
There are SCSI RAID adapters that already use 66 MHz PCI.
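For comparison, the peak figures for the faster PCI variants follow from the same width-times-clock arithmetic (theoretical maxima; sustained throughput is lower):

```python
# Theoretical peak bandwidth for common PCI variants: (width_bits, clock_mhz).
variants = {
    "PCI 32-bit/33 MHz": (32, 33),
    "PCI 32-bit/66 MHz": (32, 66),
    "PCI 64-bit/66 MHz": (64, 66),
}

for name, (width, clock) in variants.items():
    mb_s = width * clock / 8  # bits/cycle * MHz / 8 = MB/s
    print(f"{name}: {mb_s:.0f} MB/s")
```

So a card on a 66 MHz segment gets double (or, at 64-bit, quadruple) the headroom of the shared 33 MHz bus, which is why putting the gigabit NIC or RAID controller there is attractive.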

Make sense?  Oh well :).

