
Re: Network card + CPU load



--On Saturday, January 29, 2005 17:11 +0200 Hans du Plooy <hansdp@sagacit.com> wrote:

Hi everybody,

I have noticed that on my workstation (P-III 1GHz, 512MB, Intel chipset, 100Mbit 3Com network), I can copy stuff over the network at about 8-9MB/s at the best of times. When I do this, my CPU usage is quite high.

I notice the same thing on my home PC (2GHz Athlon, 384MB, VIA chipset, SMC network card - tulip driver). On both these machines the CPU load seems to be directly proportional to the transfer rate on the network interface.

I can only assume that this is because both these cards do a lot of the hard work in software, in the driver. Can anyone confirm/deny/explain this?

Yup, pretty much all cards do in the 100Mbit realm. In the Gbit+ realm, the desktop/cheaper cards almost universally do the heavy lifting in the driver, but the server-type ones (usually the more expensive ones) will do many things on the card... how much of that is supported by the Linux driver depends on the particular driver+card combination. There's also the disk to factor in: IDE HDDs, even using DMA, will push on the CPU some too, and you're not going to fill a Gbit pipe with an IDE/EIDE/UDMA hard drive - *maybe* with a single stream if you do a rain dance. There's also the PCI bus to factor in. At 100Mbit, with all the other peripherals, you're putting quite a bit on it. At Gbit+ speeds, standard 32-bit/33MHz PCI is oversubscribed.
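To put rough numbers on that last point, here's a back-of-the-envelope sketch (theoretical peaks only; real buses and NICs deliver quite a bit less):

    # Rough bandwidth arithmetic: standard PCI vs. gigabit Ethernet.
    # Theoretical peak figures; real-world throughput is lower.

    pci_width_bits = 32          # standard PCI data width
    pci_clock_hz = 33_000_000    # 33 MHz
    pci_peak = pci_width_bits / 8 * pci_clock_hz   # ~133 MB/s, shared by every device on the bus

    gige_bytes_per_s = 1_000_000_000 / 8           # ~125 MB/s per direction

    print(f"PCI 32-bit/33MHz peak: {pci_peak / 1e6:.0f} MB/s (shared)")
    print(f"Gigabit Ethernet:      {gige_bytes_per_s / 1e6:.0f} MB/s each way")
    # A single GigE stream at line rate eats nearly the whole PCI bus,
    # before the disk controller or anything else gets a turn.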


Either way, our office will be migrating to a gigabit network soon, and I'm tasked with finding good network cards. Can anyone recommend a gigabit network chipset that is fully implemented in hardware - i.e. the network card itself does all the heavy lifting? I'm looking for something that would affect the CPU usage as little as possible.

Most server-type cards will have some form of hardware offload. Check the kernel drivers for which ones have supported offload engines. The e1000 driver is known to support offload when it's available. I've also had good luck with the sk98 driver, though I don't recall if offload was there and enabled or not.
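If you want to see what a given card+driver combination is actually offloading on a running box, ethtool will report it. Here's a small sketch that shells out to ethtool -k and lists the features (the interface name eth0 is just an assumption, and the exact feature names vary between ethtool versions and drivers):

    # Sketch: list which offload features a NIC/driver reports via ethtool.
    # Assumes ethtool is installed and "eth0" is the interface of interest.
    import subprocess

    def offload_features(iface="eth0"):
        out = subprocess.run(["ethtool", "-k", iface],
                             capture_output=True, text=True, check=True).stdout
        feats = {}
        for line in out.splitlines():
            name, sep, state = line.partition(":")
            if sep and state.strip():
                feats[name.strip()] = state.split()[0]
        return feats

    if __name__ == "__main__":
        for name, state in sorted(offload_features().items()):
            print(f"{name:45s} {state}")

If checksumming or segmentation offload shows up as off but the hardware supports it, ethtool -K can usually switch it on.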

Higher-end cards will also do interrupt mitigation: they don't interrupt for every packet, they wait until a buffer fills or a timeout happens. Some allow interrupt throttling too, meaning they put an absolute upper ceiling on the number of interrupts they'll send to the CPU.
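You can get a feel for how much mitigation is actually happening by watching the card's interrupt counter and comparing it to the packet rate. A rough sketch that samples /proc/interrupts twice (eth0 is an assumption again; the coalescing settings themselves can be inspected and tuned with ethtool -c / -C where the driver supports it):

    # Sketch: estimate a NIC's interrupt rate from /proc/interrupts.
    # With good interrupt mitigation the rate should sit well below the
    # packet rate under load. "eth0" is an assumed interface name.
    import time

    def nic_irq_count(iface="eth0"):
        total = 0
        with open("/proc/interrupts") as f:
            for line in f:
                if iface in line:
                    # line: "IRQ:  count_cpu0  count_cpu1 ...  controller  device"
                    total += sum(int(x) for x in line.split()[1:] if x.isdigit())
        return total

    if __name__ == "__main__":
        before = nic_irq_count()
        time.sleep(5)
        after = nic_irq_count()
        print(f"~{(after - before) / 5:.0f} interrupts/sec for eth0")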

As you hit Gbit+ speeds all these things are VERY important with the traditional Intel architecture. The newer AMD Opteron systems (also the Athlon 64 FX, which is essentially a low-end Opteron; the Athlon 64 is like an Opteron minus the multi-socket HyperTransport links, so it's sort of in this area too) replace the shared front-side bus with HyperTransport, which allows for multiple point-to-point links between the CPUs, the north bridge, and the PCI, PCI-X and other peripheral busses or bridges, giving you far larger aggregate bandwidth than is possible on Intel. They look more like a Sun E or V series than they do a PC, really. I could go into *great* detail about the differences, but the Opterons are much more suited to high-end computing than any current Intel platform.
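Rough numbers to make the "aggregate bandwidth" point concrete (a sketch using nominal peak figures for circa-2004/2005 parts, not measurements):

    # Back-of-the-envelope: shared Intel front-side bus vs. HyperTransport.
    # Nominal peak figures for circa-2004/2005 parts; real throughput is lower.

    # Intel: one 64-bit FSB at 800 MT/s, shared by all CPUs and the chipset.
    fsb_peak = 8 * 800e6                    # ~6.4 GB/s, shared

    # Opteron: each HT link is 16 bits per direction at 1600 MT/s,
    # and a first-generation Opteron has three such links.
    ht_per_direction = 2 * 1600e6           # ~3.2 GB/s per direction
    ht_per_link = 2 * ht_per_direction      # ~6.4 GB/s per link, both directions
    ht_links = 3

    print(f"Shared FSB peak:          {fsb_peak / 1e9:.1f} GB/s total")
    print(f"HyperTransport, per link: {ht_per_link / 1e9:.1f} GB/s x {ht_links} links")
    # On top of that the Opteron's DRAM controller is on-die, so memory
    # traffic doesn't compete with I/O traffic for the same shared bus.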

Now that said, just because you have Gig-E to everything doesn't mean you need to fully utilise it at every server port - your switches come into play there too... Many 'Gig-E' switches are FAR oversubscribed, though they're becoming less so. And as for 10Gig-E, it seems anything over 4 ports is certainly oversubscribed.
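A quick way to sanity-check a switch spec sheet for this (the port count and fabric capacity below are hypothetical, just to show the arithmetic):

    # Sketch: oversubscription check for a switch spec sheet.
    # Hypothetical example: a 48-port Gig-E switch with a 16 Gbps fabric.

    ports = 48
    port_speed_gbps = 1.0
    fabric_gbps = 16.0                      # hypothetical switching capacity

    needed_gbps = ports * port_speed_gbps   # wire speed on every port, one direction
    ratio = needed_gbps / fabric_gbps

    print(f"Non-blocking would need {needed_gbps:.0f} Gbps; fabric is {fabric_gbps:.0f} Gbps")
    print(f"Oversubscription ratio: {ratio:.0f}:1")

Vendors often quote full-duplex figures, so double both sides before comparing; the ratio comes out the same either way.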



Just my $0.05


