tags 506419 patch
thanks

The attached patch against linux-2.6 2.6.26-11 seems to fix the original
problem for me, but I have not yet had a chance to verify the second
('downstream') problem I identified in the bug report.

The patch is based on the driver source I obtained via Transtec from
Supermicro, the board manufacturer. It is quite ugly and big, and I don't
have the time to dissect it right now, but if you want me to try out
particulars, I will.

The new driver has one shortcoming: after loading, the LAN card becomes
unusable; even just bridging to the IPMI card fails. Once you
`ip link set eth0 up`, IPMI starts working again after 3-5 seconds. To
address the problem, I have created /etc/modprobe.d/local-forcedeth with
the following contents:

  install forcedeth modprobe --ignore-install forcedeth; ip link set eth0 up
  options forcedeth debug=1

This causes a problem with IPv6 router advertisements (RA) if eth0 is
actually part of a bridge, in which case I advise just adding a pre-up
hook to the bridge iface stanza to down the iface again. Downing the
iface after IPMI has started working again does *not* break it again.

I'd still really like this fixed, because having such a hack in place to
keep IPMI working doesn't really make me more relaxed about remote kernel
upgrades.

Cheers,

-- 
 .''`.   martin f. krafft <madduck@d.o>      Related projects:
: :'  :  proud Debian developer               http://debiansystem.info
`. `'`   http://people.debian.org/~madduck    http://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
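P.S.: for completeness, the bridge stanza hack I mean would look roughly
like this in /etc/network/interfaces (a minimal sketch assuming a bridge
named br0 with eth0 as its only port and a made-up address; adjust for
your setup):

  auto br0
  iface br0 inet static
      address 192.0.2.2
      netmask 255.255.255.0
      bridge_ports eth0
      # the modprobe install hook above ups eth0 so IPMI keeps working;
      # down it again before br0 is brought up so IPv6 RA does not hit
      # the bare interface
      pre-up ip link set eth0 down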
From 736de8cc89ebefca5a3bd277219e278cf48ce765 Mon Sep 17 00:00:00 2001 From: martin f. krafft <madduck@madduck.net> Date: Tue, 9 Dec 2008 08:27:01 +0100 Subject: [PATCH] import driver from supermicro/nvidia Signed-off-by: martin f. krafft <madduck@madduck.net> --- Makefile | 32 + forcedeth.c | 5170 ++++++++++++++++++++++++++++++++++++++++------------------- readme.txt | 43 + 3 files changed, 3585 insertions(+), 1660 deletions(-) create mode 100644 Makefile create mode 100644 readme.txt diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..62b6bf0 --- /dev/null +++ b/Makefile @@ -0,0 +1,32 @@ +KERNELNAME=/lib/modules/$(shell uname -r)/build +K_VERSION:=$(shell uname -r | cut -c1-3 | sed 's/2\.[56]/2\.6/') +ARCH := $(shell uname -m | sed 's/i.86/i386/') +SUBDIR := $(shell pwd) + +DISTROS:=$(findstring 2.6.16.46-0.12,$(shell uname -r)) +ifeq (${DISTROS},2.6.16.46-0.12) +CFLAGS += -DSLES10SP1 +$(warning ${DISTROS}) +endif + +ifeq ($(ARCH),i386) + command_option = -D__KERNEL__ -I$(KERNELNAME)/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -Wno-unused -fomit-frame-pointer -pipe -freorder-blocks -mpreferred-stack-boundary=2 -march=athlon -DMODULE -DMODVERSIONS -include $(KERNELNAME)/include/linux/modversions.h -nostdinc -iwithprefix include -DKBUILD_BASENAME=forcedeth +endif +ifeq ($(ARCH),x86_64) + command_option = -D__KERNEL__ -I$(KERNELNAME)/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fno-strict-aliasing -fno-common -Wno-unused -fomit-frame-pointer -mno-red-zone -mcmodel=kernel -pipe -fno-reorder-blocks -finline-limit=2000 -fno-strength-reduce -fno-asynchronous-unwind-tables -DMODULE -DMODVERSIONS -include $(KERNELNAME)/include/linux/modversions.h -nostdinc -iwithprefix include -DKBUILD_BASENAME=forcedeth +endif + +ifeq ($(K_VERSION),2.6) + obj-m := forcedeth.o +endif + +default: forcedeth.c +ifeq ($(K_VERSION),2.4) + gcc $(command_option) -c -o forcedeth.o forcedeth.c +else + $(MAKE) -C $(KERNELNAME) SUBDIRS=$(SUBDIR) modules +endif + +clean: + $(RM) *.o *.mod.c *.mod.o *.ko + diff --git a/forcedeth.c b/forcedeth.c index 20d4fe9..a5c2ad0 100644 --- a/forcedeth.c +++ b/forcedeth.c @@ -13,7 +13,7 @@ * Copyright (C) 2004 Andrew de Quincey (wol support) * Copyright (C) 2004 Carl-Daniel Hailfinger (invalid MAC handling, insane * IRQ rate fixes, bigendian fixes, cleanups, verification) - * Copyright (c) 2004,2005,2006,2007,2008 NVIDIA Corporation + * Copyright (c) 2004,5,6 NVIDIA Corporation * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by @@ -29,6 +29,92 @@ * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * + * Changelog: + * 0.01: 05 Oct 2003: First release that compiles without warnings. + * 0.02: 05 Oct 2003: Fix bug for nv_drain_tx: do not try to free NULL skbs. + * Check all PCI BARs for the register window. + * udelay added to mii_rw. + * 0.03: 06 Oct 2003: Initialize dev->irq. + * 0.04: 07 Oct 2003: Initialize np->lock, reduce handled irqs, add printks. + * 0.05: 09 Oct 2003: printk removed again, irq status print tx_timeout. + * 0.06: 10 Oct 2003: MAC Address read updated, pff flag generation updated, + * irq mask updated + * 0.07: 14 Oct 2003: Further irq mask updates. + * 0.08: 20 Oct 2003: rx_desc.Length initialization added, nv_alloc_rx refill + * added into irq handler, NULL check for drain_ring. 
+ * 0.09: 20 Oct 2003: Basic link speed irq implementation. Only handle the + * requested interrupt sources. + * 0.10: 20 Oct 2003: First cleanup for release. + * 0.11: 21 Oct 2003: hexdump for tx added, rx buffer sizes increased. + * MAC Address init fix, set_multicast cleanup. + * 0.12: 23 Oct 2003: Cleanups for release. + * 0.13: 25 Oct 2003: Limit for concurrent tx packets increased to 10. + * Set link speed correctly. start rx before starting + * tx (nv_start_rx sets the link speed). + * 0.14: 25 Oct 2003: Nic dependant irq mask. + * 0.15: 08 Nov 2003: fix smp deadlock with set_multicast_list during + * open. + * 0.16: 15 Nov 2003: include file cleanup for ppc64, rx buffer size + * increased to 1628 bytes. + * 0.17: 16 Nov 2003: undo rx buffer size increase. Substract 1 from + * the tx length. + * 0.18: 17 Nov 2003: fix oops due to late initialization of dev_stats + * 0.19: 29 Nov 2003: Handle RxNoBuf, detect & handle invalid mac + * addresses, really stop rx if already running + * in nv_start_rx, clean up a bit. + * 0.20: 07 Dec 2003: alloc fixes + * 0.21: 12 Jan 2004: additional alloc fix, nic polling fix. + * 0.22: 19 Jan 2004: reprogram timer to a sane rate, avoid lockup + * on close. + * 0.23: 26 Jan 2004: various small cleanups + * 0.24: 27 Feb 2004: make driver even less anonymous in backtraces + * 0.25: 09 Mar 2004: wol support + * 0.26: 03 Jun 2004: netdriver specific annotation, sparse-related fixes + * 0.27: 19 Jun 2004: Gigabit support, new descriptor rings, + * added CK804/MCP04 device IDs, code fixes + * for registers, link status and other minor fixes. + * 0.28: 21 Jun 2004: Big cleanup, making driver mostly endian safe + * 0.29: 31 Aug 2004: Add backup timer for link change notification. + * 0.30: 25 Sep 2004: rx checksum support for nf 250 Gb. Add rx reset + * into nv_close, otherwise reenabling for wol can + * cause DMA to kfree'd memory. + * 0.31: 14 Nov 2004: ethtool support for getting/setting link + * capabilities. + * 0.32: 16 Apr 2005: RX_ERROR4 handling added. + * 0.33: 16 May 2005: Support for MCP51 added. + * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics. + * 0.35: 26 Jun 2005: Support for MCP55 added. + * 0.36: 28 Jun 2005: Add jumbo frame support. + * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list + * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of + * per-packet flags. + * 0.39: 18 Jul 2005: Add 64bit descriptor support. + * 0.40: 19 Jul 2005: Add support for mac address change. + * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead + * of nv_remove + * 0.42: 06 Aug 2005: Fix lack of link speed initialization + * in the second (and later) nv_open call + * 0.43: 10 Aug 2005: Add support for tx checksum. + * 0.44: 20 Aug 2005: Add support for scatter gather and segmentation. + * 0.45: 18 Sep 2005: Remove nv_stop/start_rx from every link check + * 0.46: 20 Oct 2005: Add irq optimization modes. + * 0.47: 26 Oct 2005: Add phyaddr 0 in phy scan. + * 0.48: 24 Dec 2005: Disable TSO, bugfix for pci_map_single + * 0.49: 10 Dec 2005: Fix tso for large buffers. + * 0.50: 20 Jan 2006: Add 8021pq tagging support. + * 0.51: 20 Jan 2006: Add 64bit consistent memory allocation for rings. + * 0.52: 20 Jan 2006: Add MSI/MSIX support. + * 0.53: 19 Mar 2006: Fix init from low power mode and add hw reset. + * 0.54: 21 Mar 2006: Fix spin locks for multi irqs and cleanup. + * 0.55: 22 Mar 2006: Add flow control (pause frame). + * 0.56: 22 Mar 2006: Additional ethtool and moduleparam support. 
+ * 0.57: 14 May 2006: Moved mac address writes to nv_probe and nv_remove. + * 0.58: 20 May 2006: Optimized rx and tx data paths. + * 0.59: 31 May 2006: Added support for sideband management unit. + * 0.60: 31 May 2006: Added support for recoverable error. + * 0.61: 18 Jul 2006: Added support for suspend/resume. + * 0.62: 16 Jan 2007: Fixed statistics, mgmt communication, and low phy speed on S5. + * * Known bugs: * We suspect that on some hardware no TX done interrupts are generated. * This means recovery from netif_stop_queue only happens if the hw timer @@ -39,8 +125,14 @@ * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few * superfluous timer interrupts from the nic. */ -#define FORCEDETH_VERSION "0.61" +#ifdef NV_LINUX_VER_H +#include "../linux_version.h" +#else +#define PACKAGE_VER "V1.27" +#endif +#define FORCEDETH_VERSION "0.62-Driver Package "PACKAGE_VER #define DRV_NAME "forcedeth" +#define DRV_DATE "2008/09/02" #include <linux/module.h> #include <linux/types.h> @@ -57,46 +149,393 @@ #include <linux/random.h> #include <linux/init.h> #include <linux/if_vlan.h> +#include <linux/rtnetlink.h> +#include <linux/reboot.h> +#include <linux/version.h> + +#define RHEL3 0 +#define SLES9 1 +#define RHEL4 2 +#define SUSE10 3 +#define FEDORA5 4 +#define SUSE10U1 4 +#define SLES10 4 +#define FEDORA6 5 +#define SUSE10U2 5 +#define RHEL5 5 +#define SLES10U1 5 +#define FEDORA7 6 +#define OPENSUSE10U3 7 +#define OPENSOURCE 8 + +#ifdef SLES10SP +#define NVVER SLES10U1 +#else +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,22) +#define NVVER OPENSOURCE +#elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,21) +#define NVVER OPENSUSE10U3 +#elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,18) +#define NVVER FEDORA7 +#elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,17) +#define NVVER FEDORA6 +#elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,13) +#define NVVER FEDORA5 +#elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9) +#define NVVER SUSE10 +#elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,6) +#define NVVER RHEL4 +#elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,0) +#define NVVER SLES9 +#else +#define NVVER RHEL3 +#endif +#endif + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,24) +#ifndef NV_SET_MODULE_OWNER +#define NV_SET_MODULE_OWNER +#endif +#else +#ifndef NV_NAPI_POLL_LIST +#define NV_NAPI_POLL_LIST +#endif +#endif + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,23) +#ifndef NV_ETHTOOL_PERM_ADDR +#define NV_ETHTOOL_PERM_ADDR +#endif +#endif + +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,19) +#ifndef ROUND_JIFFIES +#define ROUND_JIFFIES +#endif +#endif + +#if NVVER > RHEL3 #include <linux/dma-mapping.h> +#else +#include <linux/forcedeth-compat.h> +#endif #include <asm/irq.h> #include <asm/io.h> #include <asm/uaccess.h> #include <asm/system.h> -#if 0 +#ifdef NVLAN_DEBUG #define dprintk printk #else #define dprintk(x...) do { } while (0) #endif -#define TX_WORK_PER_LOOP 64 -#define RX_WORK_PER_LOOP 64 +#define DPRINTK(nlevel,klevel,args...) 
(void)((debug & NETIF_MSG_##nlevel) && printk(klevel args)) + + /* pci_ids.h */ +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_12 +#define PCI_DEVICE_ID_NVIDIA_NVENET_12 0x0268 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_13 +#define PCI_DEVICE_ID_NVIDIA_NVENET_13 0x0269 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_14 +#define PCI_DEVICE_ID_NVIDIA_NVENET_14 0x0372 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_15 +#define PCI_DEVICE_ID_NVIDIA_NVENET_15 0x0373 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_16 +#define PCI_DEVICE_ID_NVIDIA_NVENET_16 0x03E5 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_17 +#define PCI_DEVICE_ID_NVIDIA_NVENET_17 0x03E6 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_18 +#define PCI_DEVICE_ID_NVIDIA_NVENET_18 0x03EE +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_19 +#define PCI_DEVICE_ID_NVIDIA_NVENET_19 0x03EF +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_20 +#define PCI_DEVICE_ID_NVIDIA_NVENET_20 0x0450 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_21 +#define PCI_DEVICE_ID_NVIDIA_NVENET_21 0x0451 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_22 +#define PCI_DEVICE_ID_NVIDIA_NVENET_22 0x0452 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_23 +#define PCI_DEVICE_ID_NVIDIA_NVENET_23 0x0453 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_24 +#define PCI_DEVICE_ID_NVIDIA_NVENET_24 0x054c +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_25 +#define PCI_DEVICE_ID_NVIDIA_NVENET_25 0x054d +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_26 +#define PCI_DEVICE_ID_NVIDIA_NVENET_26 0x054e +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_27 +#define PCI_DEVICE_ID_NVIDIA_NVENET_27 0x054f +#endif + + /* mii.h */ +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_28 +#define PCI_DEVICE_ID_NVIDIA_NVENET_28 0x07dc +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_29 +#define PCI_DEVICE_ID_NVIDIA_NVENET_29 0x07dd +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_30 +#define PCI_DEVICE_ID_NVIDIA_NVENET_30 0x07de +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_31 +#define PCI_DEVICE_ID_NVIDIA_NVENET_31 0x07df +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_32 +#define PCI_DEVICE_ID_NVIDIA_NVENET_32 0x0760 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_33 +#define PCI_DEVICE_ID_NVIDIA_NVENET_33 0x0761 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_34 +#define PCI_DEVICE_ID_NVIDIA_NVENET_34 0x0762 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_35 +#define PCI_DEVICE_ID_NVIDIA_NVENET_35 0x0763 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_36 +#define PCI_DEVICE_ID_NVIDIA_NVENET_36 0x0AB0 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_37 +#define PCI_DEVICE_ID_NVIDIA_NVENET_37 0x0AB1 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_38 +#define PCI_DEVICE_ID_NVIDIA_NVENET_38 0x0AB2 +#endif + +#ifndef PCI_DEVICE_ID_NVIDIA_NVENET_39 +#define PCI_DEVICE_ID_NVIDIA_NVENET_39 0x0AB3 +#endif + +#ifndef ADVERTISE_1000HALF +#define ADVERTISE_1000HALF 0x0100 +#endif + +#ifndef ADVERTISE_1000FULL +#define ADVERTISE_1000FULL 0x0200 +#endif + +#ifndef ADVERTISE_PAUSE_CAP +#define ADVERTISE_PAUSE_CAP 0x0400 +#endif + +#ifndef ADVERTISE_PAUSE_ASYM +#define ADVERTISE_PAUSE_ASYM 0x0800 +#endif + +#ifndef MII_CTRL1000 +#define MII_CTRL1000 0x09 +#endif + +#ifndef MII_STAT1000 +#define MII_STAT1000 0x0A +#endif + +#ifndef MII_RESV1 +#define MII_RESV1 0x17 +#endif + +#ifndef LPA_1000FULL +#define LPA_1000FULL 0x0800 +#endif + +#ifndef LPA_1000HALF +#define LPA_1000HALF 0x0400 +#endif + +#ifndef LPA_PAUSE_CAP +#define LPA_PAUSE_CAP 0x0400 +#endif + +#ifndef LPA_PAUSE_ASYM +#define LPA_PAUSE_ASYM 0x0800 +#endif + +#ifndef BMCR_SPEED1000 
+#define BMCR_SPEED1000 0x0040 /* MSB of Speed (1000) */ +#endif + +#ifndef NETDEV_TX_OK +#define NETDEV_TX_OK 0 /* driver took care of packet */ +#endif + +#ifndef NETDEV_TX_BUSY +#define NETDEV_TX_BUSY 1 /* driver tx path was busy*/ +#endif + +#ifndef DMA_39BIT_MASK +#define DMA_39BIT_MASK 0x0000007fffffffffULL +#endif + +#ifndef __iomem +#define __iomem +#endif + +#ifndef __bitwise +#define __bitwise +#endif + +#ifndef __force +#define __force +#endif + +#ifndef PCI_D0 +#define PCI_D0 ((int __bitwise __force) 0) +#endif + +#ifndef PM_EVENT_SUSPEND +#define PM_EVENT_SUSPEND 2 +#endif + +#ifndef MODULE_VERSION +#define MODULE_VERSION(ver) +#endif + +#if NVVER > FEDORA6 +#define CHECKSUM_HW CHECKSUM_PARTIAL +#endif + +#if NVVER < SUSE10 +#define pm_message_t u32 +#endif + + /* rx/tx mac addr + type + vlan + align + slack*/ +#ifndef RX_NIC_BUFSIZE +#define RX_NIC_BUFSIZE (ETH_DATA_LEN + 64) +#endif + /* even more slack */ +#ifndef RX_ALLOC_BUFSIZE +#define RX_ALLOC_BUFSIZE (ETH_DATA_LEN + 128) +#endif + +#ifndef PCI_DEVICE +#define PCI_DEVICE(vend,dev) \ + .vendor = (vend), .device = (dev), \ + .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID +#endif + +#if NVVER < RHEL4 + struct msix_entry { + u16 vector; /* kernel uses to write allocated vector */ + u16 entry; /* driver uses to specify entry, OS writes */ + }; +#endif + +#ifndef PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET +#define PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET 0x00 +#endif + +#ifndef PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET +#define PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET 0x04 +#endif + +#ifndef PCI_MSIX_ENTRY_DATA_OFFSET +#define PCI_MSIX_ENTRY_DATA_OFFSET 0x08 +#endif + +#ifndef PCI_MSIX_ENTRY_SIZE +#define PCI_MSIX_ENTRY_SIZE 16 +#endif + +#ifndef PCI_MSIX_FLAGS_BIRMASK +#define PCI_MSIX_FLAGS_BIRMASK (7 << 0) +#endif + +#ifndef PCI_CAP_ID_MSIX +#define PCI_CAP_ID_MSIX 0x11 +#endif + +#if NVVER > FEDORA7 +#define IRQ_FLAG IRQF_SHARED +#else +#define IRQ_FLAG SA_SHIRQ +#endif /* * Hardware access: */ -#define DEV_NEED_TIMERIRQ 0x00001 /* set the timer irq flag in the irq mask */ -#define DEV_NEED_LINKTIMER 0x00002 /* poll link settings. 
Relies on the timer irq */ -#define DEV_HAS_LARGEDESC 0x00004 /* device supports jumbo frames and needs packet format 2 */ -#define DEV_HAS_HIGH_DMA 0x00008 /* device supports 64bit dma */ -#define DEV_HAS_CHECKSUM 0x00010 /* device supports tx and rx checksum offloads */ -#define DEV_HAS_VLAN 0x00020 /* device supports vlan tagging and striping */ -#define DEV_HAS_MSI 0x00040 /* device supports MSI */ -#define DEV_HAS_MSI_X 0x00080 /* device supports MSI-X */ -#define DEV_HAS_POWER_CNTRL 0x00100 /* device supports power savings */ -#define DEV_HAS_STATISTICS_V1 0x00200 /* device supports hw statistics version 1 */ -#define DEV_HAS_STATISTICS_V2 0x00400 /* device supports hw statistics version 2 */ -#define DEV_HAS_TEST_EXTENDED 0x00800 /* device supports extended diagnostic test */ -#define DEV_HAS_MGMT_UNIT 0x01000 /* device supports management unit */ -#define DEV_HAS_CORRECT_MACADDR 0x02000 /* device supports correct mac address order */ -#define DEV_HAS_COLLISION_FIX 0x04000 /* device supports tx collision fix */ -#define DEV_HAS_PAUSEFRAME_TX_V1 0x08000 /* device supports tx pause frames version 1 */ -#define DEV_HAS_PAUSEFRAME_TX_V2 0x10000 /* device supports tx pause frames version 2 */ -#define DEV_HAS_PAUSEFRAME_TX_V3 0x20000 /* device supports tx pause frames version 3 */ -#define DEV_NEED_TX_LIMIT 0x40000 /* device needs to limit tx */ -#define DEV_HAS_GEAR_MODE 0x80000 /* device supports gear mode */ +#define DEV_NEED_TIMERIRQ 0x000001 /* set the timer irq flag in the irq mask */ +#define DEV_NEED_LINKTIMER 0x000002 /* poll link settings. Relies on the timer irq */ +#define DEV_HAS_LARGEDESC 0x000004 /* device supports jumbo frames and needs packet format 2 */ +#define DEV_HAS_HIGH_DMA 0x000008 /* device supports 64bit dma */ +#define DEV_HAS_CHECKSUM 0x000010 /* device supports tx and rx checksum offloads */ +#define DEV_HAS_VLAN 0x000020 /* device supports vlan tagging and striping */ +#define DEV_HAS_MSI 0x000040 /* device supports MSI */ +#define DEV_HAS_MSI_X 0x000080 /* device supports MSI-X */ +#define DEV_HAS_POWER_CNTRL 0x000100 /* device supports power savings */ +#define DEV_HAS_STATISTICS_V1 0x000200 /* device supports hw statistics version 1 */ +#define DEV_HAS_STATISTICS_V2 0x000400 /* device supports hw statistics version 2 */ +#define DEV_HAS_STATISTICS_V3 0x000800 /* device supports hw statistics version 3 */ +#define DEV_HAS_TEST_EXTENDED 0x001000 /* device supports extended diagnostic test */ +#define DEV_HAS_MGMT_UNIT 0x002000 /* device supports management unit */ +#define DEV_HAS_CORRECT_MACADDR 0x004000 /* device supports correct mac address */ +#define DEV_HAS_COLLISION_FIX 0x008000 /* device supports tx collision fix */ +#define DEV_HAS_PAUSEFRAME_TX_V1 0x010000 /* device supports tx pause frames version 1 */ +#define DEV_HAS_PAUSEFRAME_TX_V2 0x020000 /* device supports tx pause frames version 2 */ +#define DEV_HAS_PAUSEFRAME_TX_V3 0x040000 /* device supports tx pause frames version 3 */ +#define DEV_MACADDRESS_CHECK 0x080000 /* device should perform mac address check */ +#define DEV_HAS_GEAR_MODE 0x100000 /* device should support gear mode */ +#define DEV_NEED_TX_LIMIT 0x200000 /* device needs to limit tx */ + +#define NVIDIA_ETHERNET_ID(deviceid,nv_driver_data) {\ + .vendor = PCI_VENDOR_ID_NVIDIA, \ + .device = deviceid, \ + .subvendor = PCI_ANY_ID, \ + .subdevice = PCI_ANY_ID, \ + .driver_data = nv_driver_data, \ +}, + +#define Mv_LED_Control 16 +#define Mv_Page_Address 22 +#define Mv_LED_FORCE_OFF 0x88 +#define Mv_LED_DUAL_MODE3 0x40 + +struct 
nvmsi_msg{ + u32 address_lo; + u32 address_hi; + u32 data; +}; enum { NvRegIrqStatus = 0x000, @@ -120,18 +559,18 @@ enum { #define NVREG_IRQ_OTHER (NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RECOVER_ERROR) #define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \ - NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \ - NVREG_IRQ_TX_FORCED|NVREG_IRQ_RECOVER_ERROR)) + NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \ + NVREG_IRQ_TX_FORCED|NVREG_IRQ_RECOVER_ERROR)) - NvRegUnknownSetupReg6 = 0x008, -#define NVREG_UNKSETUP6_VAL 3 + NvRegSoftwareTimerCtrl = 0x008, +#define NVREG_SOFTWARE_TIMER_RELOAD_ENABLE 0x01 +#define NVREG_SOFTWARE_TIMER_ENABLE 0x02 -/* - * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic - * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms - */ + /* + * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic + * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms + */ NvRegPollingInterval = 0x00c, -#define NVREG_POLL_DEFAULT_THROUGHPUT 970 /* backup tx cleanup if loop max reached */ #define NVREG_POLL_DEFAULT_CPU 13 NvRegMSIMap0 = 0x020, NvRegMSIMap1 = 0x024, @@ -176,20 +615,17 @@ enum { #define NVREG_RCVSTAT_BUSY 0x01 NvRegSlotTime = 0x9c, -#define NVREG_SLOTTIME_LEGBF_ENABLED 0x80000000 +#define NVREG_CTRL_LEGBF_ENABLED 0x80000000 #define NVREG_SLOTTIME_10_100_FULL 0x00007f00 #define NVREG_SLOTTIME_1000_FULL 0x0003ff00 #define NVREG_SLOTTIME_HALF 0x0000ff00 #define NVREG_SLOTTIME_DEFAULT 0x00007f00 -#define NVREG_SLOTTIME_MASK 0x000000ff +#define NVREG_SLOTTIME_MASK 0x00ff NvRegTxDeferral = 0xA0, -#define NVREG_TX_DEFERRAL_DEFAULT 0x15050f -#define NVREG_TX_DEFERRAL_RGMII_10_100 0x16070f -#define NVREG_TX_DEFERRAL_RGMII_1000 0x14050f -#define NVREG_TX_DEFERRAL_RGMII_STRETCH_10 0x16190f -#define NVREG_TX_DEFERRAL_RGMII_STRETCH_100 0x16300f -#define NVREG_TX_DEFERRAL_MII_STRETCH 0x152000 +#define NVREG_TX_DEFERRAL_DEFAULT 0x15050f +#define NVREG_TX_DEFERRAL_RGMII_10_100 0x16070f +#define NVREG_TX_DEFERRAL_RGMII_1000 0x14050f NvRegRxDeferral = 0xA4, #define NVREG_RX_DEFERRAL_DEFAULT 0x16 NvRegMacAddrA = 0xA8, @@ -205,10 +641,13 @@ enum { NvRegPhyInterface = 0xC0, #define PHY_RGMII 0x10000000 NvRegBackOffControl = 0xC4, -#define NVREG_BKOFFCTRL_DEFAULT 0x70000000 -#define NVREG_BKOFFCTRL_SEED_MASK 0x000003ff -#define NVREG_BKOFFCTRL_SELECT 24 -#define NVREG_BKOFFCTRL_GEAR 12 +#define NVREG_BKOFFCTRL_DEFAULT 0x70000000 +#define NVREG_BKOFFCTRL_LSFR_GEAR_SEL 0x10000000 +#define NVREG_BKOFFCTRL_LSFR_MAIN_SEL 0x20000000 +#define NVREG_BKOFFCTRL_LSFR_GEARBF_ENABLE 0x40000000 +#define NVREG_BKOFFCTRL_SEED_MASK 0x000003FF +#define NVREG_BKOFFCTRL_SELECT 24 +#define NVREG_BKOFFCTRL_GEAR 12 NvRegTxRingPhysAddr = 0x100, NvRegRxRingPhysAddr = 0x104, @@ -232,7 +671,7 @@ enum { NvRegTxRxControl = 0x144, #define NVREG_TXRXCTL_KICK 0x0001 #define NVREG_TXRXCTL_BIT1 0x0002 -#define NVREG_TXRXCTL_BIT2 0x0004 +#define NVREG_TXRXCTL_BM_DIS 0x0004 #define NVREG_TXRXCTL_IDLE 0x0008 #define NVREG_TXRXCTL_RESET 0x0010 #define NVREG_TXRXCTL_RXCHECK 0x0400 @@ -244,10 +683,12 @@ enum { NvRegTxRingPhysAddrHigh = 0x148, NvRegRxRingPhysAddrHigh = 0x14C, NvRegTxPauseFrame = 0x170, -#define NVREG_TX_PAUSEFRAME_DISABLE 0x0fff0080 -#define NVREG_TX_PAUSEFRAME_ENABLE_V1 0x01800010 -#define NVREG_TX_PAUSEFRAME_ENABLE_V2 0x056003f0 -#define NVREG_TX_PAUSEFRAME_ENABLE_V3 0x09f00880 +#define NVREG_TX_PAUSEFRAME_DISABLE 0x01ff0080 +#define NVREG_TX_PAUSEFRAME_ENABLE_V1 
0x01800010 +#define NVREG_TX_PAUSEFRAME_ENABLE_V2 0x056003f0 +#define NVREG_TX_PAUSEFRAME_ENABLE_V3 0x09f00880 + NvRegTxPauseFrame2 = 0x174, +#define NVREG_TX_PAUSEFRAME2_LIMIT_ENABLE 0x00010000 NvRegMIIStatus = 0x180, #define NVREG_MIISTAT_ERROR 0x0001 #define NVREG_MIISTAT_LINKCHANGE 0x0008 @@ -270,6 +711,9 @@ enum { #define NVREG_MIICTL_WRITE 0x00400 #define NVREG_MIICTL_ADDRSHIFT 5 NvRegMIIData = 0x194, + NvRegTxUnicast = 0x1a0, + NvRegTxMulticast = 0x1a4, + NvRegTxBroadcast = 0x1a8, NvRegWakeUpFlags = 0x200, #define NVREG_WAKEUPFLAGS_VAL 0x7770 #define NVREG_WAKEUPFLAGS_BUSYSHIFT 24 @@ -284,6 +728,7 @@ enum { #define NVREG_WAKEUPFLAGS_ENABLE 0x1111 NvRegPatternCRC = 0x204, +#define NV_UNKNOWN_VAL 0x01 NvRegPatternMask = 0x208, NvRegPowerCap = 0x268, #define NVREG_POWERCAP_D3SUPP (1<<30) @@ -324,6 +769,7 @@ enum { NvRegTxPause = 0x2e0, NvRegRxPause = 0x2e4, NvRegRxDropFrame = 0x2e8, + NvRegVlanControl = 0x300, #define NVREG_VLANCONTROL_ENABLE 0x2000 NvRegMSIXMap0 = 0x3e0, @@ -333,25 +779,28 @@ enum { NvRegPowerState2 = 0x600, #define NVREG_POWERSTATE2_POWERUP_MASK 0x0F11 #define NVREG_POWERSTATE2_POWERUP_REV_A3 0x0001 +#define NVREG_POWERSTATE2_PHYRST (1<<2) +#define NVREG_POWERSTATE2_GATE_CORE_TX_ON (1<<10) +#define NVREG_POWERSTATE2_GATE_CORE_RX_ON (1<<11) }; /* Big endian: should work, but is untested */ struct ring_desc { - __le32 buf; - __le32 flaglen; + u32 PacketBuffer; + u32 FlagLen; }; struct ring_desc_ex { - __le32 bufhigh; - __le32 buflow; - __le32 txvlan; - __le32 flaglen; + u32 PacketBufferHigh; + u32 PacketBufferLow; + u32 TxVlan; + u32 FlagLen; }; -union ring_type { +typedef union _ring_type { struct ring_desc* orig; struct ring_desc_ex* ex; -}; +} ring_type; #define FLAG_MASK_V1 0xffff0000 #define FLAG_MASK_V2 0xffffc000 @@ -360,25 +809,25 @@ union ring_type { #define NV_TX_LASTPACKET (1<<16) #define NV_TX_RETRYERROR (1<<19) -#define NV_TX_RETRYCOUNT_MASK (0xF<<20) +#define NV_TX_TRC_TD_MASK (0xF<<20) #define NV_TX_FORCED_INTERRUPT (1<<24) #define NV_TX_DEFERRED (1<<26) #define NV_TX_CARRIERLOST (1<<27) #define NV_TX_LATECOLLISION (1<<28) #define NV_TX_UNDERFLOW (1<<29) -#define NV_TX_ERROR (1<<30) +#define NV_TX_ERROR (1<<30) /* logical OR of all errors */ #define NV_TX_VALID (1<<31) #define NV_TX2_LASTPACKET (1<<29) -#define NV_TX2_RETRYERROR (1<<18) -#define NV_TX2_RETRYCOUNT_MASK (0xF<<19) #define NV_TX2_FORCED_INTERRUPT (1<<30) +#define NV_TX2_RETRYERROR (1<<18) +#define NV_TX2_TRC_TD_MASK (0xF<<19) #define NV_TX2_DEFERRED (1<<25) #define NV_TX2_CARRIERLOST (1<<26) #define NV_TX2_LATECOLLISION (1<<27) #define NV_TX2_UNDERFLOW (1<<28) /* error and valid are the same for both */ -#define NV_TX2_ERROR (1<<30) +#define NV_TX2_ERROR (1<<30) /* logical OR of all errors */ #define NV_TX2_VALID (1<<31) #define NV_TX2_TSO (1<<28) #define NV_TX2_TSO_SHIFT 14 @@ -399,8 +848,9 @@ union ring_type { #define NV_RX_CRCERR (1<<27) #define NV_RX_OVERFLOW (1<<28) #define NV_RX_FRAMINGERR (1<<29) -#define NV_RX_ERROR (1<<30) +#define NV_RX_ERROR (1<<30) /* logical OR of all errors */ #define NV_RX_AVAIL (1<<31) +#define NV_RX_ERROR_MASK (NV_RX_ERROR1|NV_RX_ERROR2|NV_RX_ERROR3|NV_RX_ERROR4|NV_RX_CRCERR|NV_RX_OVERFLOW|NV_RX_FRAMINGERR) #define NV_RX2_CHECKSUMMASK (0x1C000000) #define NV_RX2_CHECKSUM_IP (0x10000000) @@ -416,8 +866,9 @@ union ring_type { #define NV_RX2_OVERFLOW (1<<23) #define NV_RX2_FRAMINGERR (1<<24) /* error and avail are the same for both */ -#define NV_RX2_ERROR (1<<30) +#define NV_RX2_ERROR (1<<30) /* logical OR of all errors */ #define NV_RX2_AVAIL (1<<31) +#define 
NV_RX2_ERROR_MASK (NV_RX2_ERROR1|NV_RX2_ERROR2|NV_RX2_ERROR3|NV_RX2_ERROR4|NV_RX2_CRCERR|NV_RX2_OVERFLOW|NV_RX2_FRAMINGERR) #define NV_RX3_VLAN_TAG_PRESENT (1<<16) #define NV_RX3_VLAN_TAG_MASK (0x0000FFFF) @@ -451,11 +902,17 @@ union ring_type { #define NV_WATCHDOG_TIMEO (5*HZ) #define RX_RING_DEFAULT 128 -#define TX_RING_DEFAULT 256 -#define RX_RING_MIN 128 -#define TX_RING_MIN 64 +#define TX_RING_DEFAULT 64 +#define RX_RING_MIN RX_RING_DEFAULT +#define TX_RING_MIN TX_RING_DEFAULT #define RING_MAX_DESC_VER_1 1024 #define RING_MAX_DESC_VER_2_3 16384 +/* + * Difference between the get and put pointers for the tx ring. + * This is used to throttle the amount of data outstanding in the + * tx ring. + */ +#define TX_LIMIT_DIFFERENCE 1 /* rx/tx mac addr + type + vlan + align + slack*/ #define NV_RX_HEADERS (64) @@ -471,7 +928,7 @@ union ring_type { #define LINK_TIMEOUT (3*HZ) #define STATS_INTERVAL (10*HZ) -/* +/* * desc_ver values: * The nic supports three different descriptor types: * - DESC_VER_1: Original @@ -488,18 +945,23 @@ union ring_type { #define PHY_OUI_VITESSE 0x01c1 #define PHY_OUI_REALTEK 0x0732 #define PHY_OUI_REALTEK2 0x0020 -#define PHYID1_OUI_MASK 0x03ff +#define PHY_OUI_BRCM 0x05ef +#define PHYID1_OUI_MASK 0x03ff #define PHYID1_OUI_SHFT 6 #define PHYID2_OUI_MASK 0xfc00 #define PHYID2_OUI_SHFT 10 #define PHYID2_MODEL_MASK 0x03f0 #define PHY_MODEL_REALTEK_8211 0x0110 +#define PHY_MODEL_REALTEK_8201 0x0200 #define PHY_REV_MASK 0x0001 #define PHY_REV_REALTEK_8211B 0x0000 #define PHY_REV_REALTEK_8211C 0x0001 -#define PHY_MODEL_REALTEK_8201 0x0200 #define PHY_MODEL_MARVELL_E3016 0x0220 -#define PHY_MARVELL_E3016_INITMASK 0x0300 +#define PHY_MODEL_MARVELL_E1116 0x0210 +#define PHY_MODEL_MARVELL_E1111 0x00c0 +#define PHY_MODEL_MARVELL_E1011 0x00b0 +#define PHY_MODEL_BRCM_9507 0x00a0 +#define PHY_MODEL_BRCM_AC131 0x0070 #define PHY_CICADA_INIT1 0x0f000 #define PHY_CICADA_INIT2 0x0e00 #define PHY_CICADA_INIT3 0x01000 @@ -528,15 +990,32 @@ union ring_type { #define PHY_REALTEK_INIT_REG4 0x14 #define PHY_REALTEK_INIT_REG5 0x18 #define PHY_REALTEK_INIT_REG6 0x11 +#define PHY_REALTEK_INIT_REG7 0x01 #define PHY_REALTEK_INIT1 0x0000 #define PHY_REALTEK_INIT2 0x8e00 -#define PHY_REALTEK_INIT3 0x0001 +#define PHY_REALTEK_INIT3 0x0001 #define PHY_REALTEK_INIT4 0xad17 #define PHY_REALTEK_INIT5 0xfb54 #define PHY_REALTEK_INIT6 0xf5c7 #define PHY_REALTEK_INIT7 0x1000 #define PHY_REALTEK_INIT8 0x0003 +#define PHY_REALTEK_INIT9 0x0008 +#define PHY_REALTEK_INIT10 0x0005 +#define PHY_REALTEK_INIT11 0x0200 #define PHY_REALTEK_INIT_MSK1 0x0003 +#define PHY_MARVELL_INIT_REG1 0x16 +#define PHY_MARVELL_INIT_REG2 0x10 +#define PHY_MARVELL_INIT1 0x0300 +#define PHY_MARVELL_INIT2 0x4000 +#define PHY_MARVELL_E3016_INITMASK 0x0300 +#define PHY_MARVELL_INIT_MSK1 0x00ff +#define PHY_BRCM_INIT_REG1 0x1C +#define PHY_BRCM_INIT_REG2 0x1F +#define PHY_BRCM_INIT_REG3 0x1B +#define PHY_BRCM_INIT1 0x2820 +#define PHY_BRCM_INIT2 0x0080 +#define PHY_BRCM_INIT3 0x0020 +#define PHY_BRCM_INIT_MSK1 0x7C20 #define PHY_GIGABIT 0x0100 @@ -568,24 +1047,46 @@ union ring_type { #define NV_MSI_X_VECTOR_TX 0x1 #define NV_MSI_X_VECTOR_OTHER 0x2 -#define NV_RESTART_TX 0x1 -#define NV_RESTART_RX 0x2 +#define NV_RESTART_TX 0x1 +#define NV_RESTART_RX 0x2 + +#define SET_HW_TIMER_INTERVAL(base,interval) do{ \ + writel(0,base + NvRegSoftwareTimerCtrl); \ + writel((interval), base + NvRegPollingInterval); \ + writel(NVREG_SOFTWARE_TIMER_RELOAD_ENABLE|NVREG_SOFTWARE_TIMER_ENABLE,base + NvRegSoftwareTimerCtrl); \ +}while(0); + +#define 
US_TO_SW_TIMER_INTERVAL(us) (((us) * 100) / 1024) +#define NV_SW_TIMER_INTERVAL_NON_1000_DEFAULT US_TO_SW_TIMER_INTERVAL(450) +#define NV_SW_TIMER_INTERVAL_1000_DEFAULT US_TO_SW_TIMER_INTERVAL(130) +#define NV_SW_THROUGHPUT_INTERVAL_DEFAULT US_TO_SW_TIMER_INTERVAL(500) + +#define POLL_INTR 0 +#define THROUGHPUT_POLL_INTR 1 -#define NV_TX_LIMIT_COUNT 16 +#define NV_TX_LIMIT_COUNT 16 -/* statistics */ struct nv_ethtool_str { char name[ETH_GSTRING_LEN]; }; static const struct nv_ethtool_str nv_estats_str[] = { + { "tx_dropped" }, + { "tx_fifo_errors" }, + { "tx_carrier_errors" }, + { "tx_packets" }, { "tx_bytes" }, + { "rx_crc_errors" }, + { "rx_over_errors" }, + { "rx_errors_total" }, + { "rx_packets" }, + { "rx_bytes" }, + + /* hardware counters */ { "tx_zero_rexmt" }, { "tx_one_rexmt" }, { "tx_many_rexmt" }, { "tx_late_collision" }, - { "tx_fifo_errors" }, - { "tx_carrier_errors" }, { "tx_excess_deferral" }, { "tx_retry_error" }, { "rx_frame_error" }, @@ -593,34 +1094,39 @@ static const struct nv_ethtool_str nv_estats_str[] = { { "rx_late_collision" }, { "rx_runt" }, { "rx_frame_too_long" }, - { "rx_over_errors" }, - { "rx_crc_errors" }, { "rx_frame_align_error" }, { "rx_length_error" }, { "rx_unicast" }, { "rx_multicast" }, { "rx_broadcast" }, - { "rx_packets" }, - { "rx_errors_total" }, - { "tx_errors_total" }, - - /* version 2 stats */ { "tx_deferral" }, - { "tx_packets" }, - { "rx_bytes" }, { "tx_pause" }, { "rx_pause" }, - { "rx_drop_frame" } + { "rx_drop_frame" }, + + /* version 3 stats */ + { "tx_unicast" }, + { "tx_multicast" }, + { "tx_broadcast" } }; struct nv_ethtool_stats { + u64 tx_dropped; + u64 tx_fifo_errors; + u64 tx_carrier_errors; + u64 tx_packets; u64 tx_bytes; + u64 rx_crc_errors; + u64 rx_over_errors; + u64 rx_errors_total; + u64 rx_packets; + u64 rx_bytes; + + /* hardware counters */ u64 tx_zero_rexmt; u64 tx_one_rexmt; u64 tx_many_rexmt; u64 tx_late_collision; - u64 tx_fifo_errors; - u64 tx_carrier_errors; u64 tx_excess_deferral; u64 tx_retry_error; u64 rx_frame_error; @@ -628,28 +1134,26 @@ struct nv_ethtool_stats { u64 rx_late_collision; u64 rx_runt; u64 rx_frame_too_long; - u64 rx_over_errors; - u64 rx_crc_errors; u64 rx_frame_align_error; u64 rx_length_error; u64 rx_unicast; u64 rx_multicast; u64 rx_broadcast; - u64 rx_packets; - u64 rx_errors_total; - u64 tx_errors_total; - - /* version 2 stats */ u64 tx_deferral; - u64 tx_packets; - u64 rx_bytes; u64 tx_pause; u64 rx_pause; u64 rx_drop_frame; + + /* version 3 stats */ + u64 tx_unicast; + u64 tx_multicast; + u64 tx_broadcast; }; -#define NV_DEV_STATISTICS_V2_COUNT (sizeof(struct nv_ethtool_stats)/sizeof(u64)) +#define NV_DEV_STATISTICS_V3_COUNT (sizeof(struct nv_ethtool_stats)/sizeof(u64)) +#define NV_DEV_STATISTICS_V2_COUNT (NV_DEV_STATISTICS_V3_COUNT - 3) #define NV_DEV_STATISTICS_V1_COUNT (NV_DEV_STATISTICS_V2_COUNT - 6) +#define NV_DEV_STATISTICS_SW_COUNT 10 /* diagnostics */ #define NV_TEST_COUNT_BASE 3 @@ -663,12 +1167,12 @@ static const struct nv_ethtool_str nv_etests_str[] = { }; struct register_test { - __u32 reg; - __u32 mask; + u32 reg; + u32 mask; }; static const struct register_test nv_registers_test[] = { - { NvRegUnknownSetupReg6, 0x01 }, + { NvRegSoftwareTimerCtrl, 0x01 }, { NvRegMisc1, 0x03c }, { NvRegOffloadConfig, 0x03ff }, { NvRegMulticastAddrA, 0xffffffff }, @@ -677,6 +1181,25 @@ static const struct register_test nv_registers_test[] = { { 0,0 } }; +#define NV_NUM_FACTORY_ADDRESS 15 +static const u8 factory_address[NV_NUM_FACTORY_ADDRESS][6] = { + { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }, + { 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + { 0x04, 0x4B, 0x80, 0x80, 0x80, 0x01 }, + { 0x04, 0x4B, 0x80, 0x80, 0x80, 0x02 }, + { 0x04, 0x4B, 0x80, 0x80, 0x80, 0x03 }, + { 0x04, 0x4B, 0x80, 0x80, 0x80, 0x04 }, + { 0x04, 0x4B, 0x80, 0x80, 0x80, 0x05 }, + { 0x04, 0x4B, 0x80, 0x80, 0x80, 0x06 }, + { 0x02, 0x04, 0x4B, 0x80, 0x80, 0x01 }, + { 0x02, 0x04, 0x4B, 0x80, 0x80, 0x02 }, + { 0x02, 0x04, 0x4B, 0x80, 0x80, 0x03 }, + { 0x02, 0x04, 0x4B, 0x80, 0x80, 0x04 }, + { 0x02, 0x04, 0x4B, 0x80, 0x80, 0x05 }, + { 0x02, 0x04, 0x4B, 0x80, 0x80, 0x06 }, + { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB } +}; + struct nv_skb_map { struct sk_buff *skb; dma_addr_t dma; @@ -691,20 +1214,63 @@ struct nv_skb_map { * critical parts: * - rx is (pseudo-) lockless: it relies on the single-threading provided * by the arch code for interrupts. - * - tx setup is lockless: it relies on netif_tx_lock. Actual submission + * - tx setup is lockless: it relies on dev->xmit_lock. Actual submission * needs dev->priv->lock :-( - * - set_multicast_list: preparation lockless, relies on netif_tx_lock. + * - set_multicast_list: preparation lockless, relies on dev->xmit_lock. */ /* in dev: base, irq */ struct fe_priv { + + /* fields used in fast path are grouped together + for better cache performance + */ spinlock_t lock; + spinlock_t timer_lock; + void __iomem *base; + struct pci_dev *pci_dev; + u32 txrxctl_bits; + int tx_ring_size; + int tx_limit; + u32 tx_pkts_in_progress; + struct nv_skb_map *tx_change_owner; + struct nv_skb_map *tx_end_flip; + u8 revision_id; - struct net_device *dev; + int need_linktimer; + unsigned long link_timeout; + u32 irqmask; + u32 msi_flags; + + unsigned int rx_buf_sz; + struct vlan_group *vlangrp; + int rx_csum; +#ifdef NV_NAPI_POLL_LIST struct napi_struct napi; +#endif + + /* + * rx specific fields in fast path + */ + ring_type get_rx __attribute__((aligned(L1_CACHE_BYTES))); + ring_type put_rx, first_rx, last_rx; + struct nv_skb_map *get_rx_ctx, *put_rx_ctx; + struct nv_skb_map *first_rx_ctx, *last_rx_ctx; + + /* + * tx specific fields in fast path + */ + ring_type get_tx __attribute__((aligned(L1_CACHE_BYTES))); + ring_type put_tx, first_tx, last_tx; + struct nv_skb_map *get_tx_ctx, *put_tx_ctx; + struct nv_skb_map *first_tx_ctx, *last_tx_ctx; + + struct nv_skb_map *rx_skb; + struct nv_skb_map *tx_skb; /* General data: * Locking: spin_lock(&np->lock); */ + struct net_device_stats stats; struct nv_ethtool_stats estats; int in_shutdown; u32 linkspeed; @@ -719,87 +1285,71 @@ struct fe_priv { u16 gigabit; int intr_test; int recover_error; + int interrupt_moderation; + unsigned int quietcount; + u32 swtimer_interval; + u32 polling_mode; /* General data: RO fields */ dma_addr_t ring_addr; - struct pci_dev *pci_dev; u32 orig_mac[2]; - u32 irqmask; u32 desc_ver; - u32 txrxctl_bits; u32 vlanctl_bits; u32 driver_data; u32 device_id; u32 register_size; - int rx_csum; u32 mac_in_use; - void __iomem *base; - /* rx specific fields. * Locking: Within irq hander or disable_irq+spin_lock(&np->lock); */ - union ring_type get_rx, put_rx, first_rx, last_rx; - struct nv_skb_map *get_rx_ctx, *put_rx_ctx; - struct nv_skb_map *first_rx_ctx, *last_rx_ctx; - struct nv_skb_map *rx_skb; - - union ring_type rx_ring; - unsigned int rx_buf_sz; + ring_type rx_ring; unsigned int pkt_limit; struct timer_list oom_kick; - struct timer_list nic_poll; struct timer_list stats_poll; + struct tasklet_struct nic_poll; u32 nic_poll_irq; int rx_ring_size; - - /* media detection workaround. 
- * Locking: Within irq hander or disable_irq+spin_lock(&np->lock); - */ - int need_linktimer; - unsigned long link_timeout; + u32 rx_len_errors; /* * tx specific fields. */ - union ring_type get_tx, put_tx, first_tx, last_tx; - struct nv_skb_map *get_tx_ctx, *put_tx_ctx; - struct nv_skb_map *first_tx_ctx, *last_tx_ctx; - struct nv_skb_map *tx_skb; - - union ring_type tx_ring; + ring_type tx_ring; u32 tx_flags; - int tx_ring_size; - int tx_limit; - u32 tx_pkts_in_progress; - struct nv_skb_map *tx_change_owner; - struct nv_skb_map *tx_end_flip; - int tx_stop; + int tx_limit_start; + int tx_limit_stop; - /* vlan fields */ - struct vlan_group *vlangrp; /* msi/msi-x fields */ - u32 msi_flags; struct msix_entry msi_x_entry[NV_MSI_X_MAX_VECTORS]; /* flow control */ u32 pause_flags; + u32 led_stats[3]; + u32 saved_config_space[64]; + u32 saved_nvregphyinterface; +#if NVVER < SUSE10 + u32 pci_state[16]; +#endif + /* msix table */ + struct nvmsi_msg nvmsg[NV_MSI_X_MAX_VECTORS]; + unsigned long msix_pa_addr; }; /* * Maximum number of loops until we assume that a bit in the irq mask * is stuck. Overridable with module param. */ -static int max_interrupt_work = 5; +static int max_interrupt_work = 10; /* * Optimization can be either throuput mode or cpu mode - * + * * Throughput Mode: Every tx and rx packet will generate an interrupt. * CPU Mode: Interrupts are controlled by a timer. */ enum { - NV_OPTIMIZATION_MODE_THROUGHPUT, + NV_OPTIMIZATION_MODE_THROUGHPUT, NV_OPTIMIZATION_MODE_CPU }; static int optimization_mode = NV_OPTIMIZATION_MODE_THROUGHPUT; @@ -820,16 +1370,26 @@ enum { NV_MSI_INT_DISABLED, NV_MSI_INT_ENABLED }; + +#ifdef CONFIG_PCI_MSI static int msi = NV_MSI_INT_ENABLED; +#else +static int msi = NV_MSI_INT_DISABLED; +#endif /* * MSIX interrupts */ enum { - NV_MSIX_INT_DISABLED, + NV_MSIX_INT_DISABLED, NV_MSIX_INT_ENABLED }; + +#ifdef CONFIG_PCI_MSI +static int msix = NV_MSIX_INT_ENABLED; +#else static int msix = NV_MSIX_INT_DISABLED; +#endif /* * DMA 64bit @@ -841,6 +1401,20 @@ enum { static int dma_64bit = NV_DMA_64BIT_ENABLED; /* + * Wake On Lan + */ +enum { + NV_WOL_DISABLED, + NV_WOL_ENABLED +}; + +enum { + NV_LOW_POWER_DISABLED, + NV_LOW_POWER_ENABLED +}; +static int lowpowerspeed = NV_LOW_POWER_ENABLED; + +/* * Crossover Detection * Realtek 8201 phy + some OEM boards do not work properly. */ @@ -850,14 +1424,80 @@ enum { }; static int phy_cross = NV_CROSSOVER_DETECTION_DISABLED; +enum { + NV_NAPI_DISABLED, + NV_NAPI_ENABLED +}; +static int napi = NV_NAPI_DISABLED; + +static int debug = 0; + +#if NVVER < RHEL4 +static inline unsigned long nv_msecs_to_jiffies(const unsigned int m) +{ +#if HZ <= 1000 && !(1000 % HZ) + return (m + (1000 / HZ) - 1) / (1000 / HZ); +#elif HZ > 1000 && !(HZ % 1000) + return m * (HZ / 1000); +#else + return (m * HZ + 999) / 1000; +#endif +} +#endif + +static void nv_msleep(unsigned int msecs) +{ +#if NVVER > SLES9 + msleep(msecs); +#else + unsigned long timeout = nv_msecs_to_jiffies(msecs); + + while (timeout) { + set_current_state(TASK_UNINTERRUPTIBLE); + timeout = schedule_timeout(timeout); + } +#endif +} + static inline struct fe_priv *get_nvpriv(struct net_device *dev) { +#if NVVER > RHEL3 return netdev_priv(dev); +#else + return (struct fe_priv *) dev->priv; +#endif +} + +static void __init quirk_nforce_network_class(struct pci_dev *pdev) +{ + /* Some implementations of the nVidia network controllers + * show up as bridges, when we need to see them as network + * devices. + */ + + /* If this is already known as a network ctlr, do nothing. 
*/ + if ((pdev->class >> 8) == PCI_CLASS_NETWORK_ETHERNET) + return; + + if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_OTHER) { + char c; + + /* Clearing bit 6 of the register at 0xf8 + * selects Ethernet device class + */ + pci_read_config_byte(pdev, 0xf8, &c); + c &= 0xbf; + pci_write_config_byte(pdev, 0xf8, c); + + /* sysfs needs pdev->class to be set correctly */ + pdev->class &= 0x0000ff; + pdev->class |= (PCI_CLASS_NETWORK_ETHERNET << 8); + } } static inline u8 __iomem *get_hwbase(struct net_device *dev) { - return ((struct fe_priv *)netdev_priv(dev))->base; + return ((struct fe_priv *)get_nvpriv(dev))->base; } static inline void pci_push(u8 __iomem *base) @@ -868,24 +1508,17 @@ static inline void pci_push(u8 __iomem *base) static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v) { - return le32_to_cpu(prd->flaglen) + return le32_to_cpu(prd->FlagLen) & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2); } static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v) { - return le32_to_cpu(prd->flaglen) & LEN_MASK_V2; -} - -static bool nv_optimized(struct fe_priv *np) -{ - if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) - return false; - return true; + return le32_to_cpu(prd->FlagLen) & LEN_MASK_V2; } static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target, - int delay, int delaymax, const char *msg) + int delay, int delaymax, const char *msg) { u8 __iomem *base = get_hwbase(dev); @@ -905,36 +1538,26 @@ static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target, #define NV_SETUP_RX_RING 0x01 #define NV_SETUP_TX_RING 0x02 -static inline u32 dma_low(dma_addr_t addr) -{ - return addr; -} - -static inline u32 dma_high(dma_addr_t addr) -{ - return addr>>31>>1; /* 0 if 32bit, shift down by 32 if 64bit */ -} - static void setup_hw_rings(struct net_device *dev, int rxtx_flags) { struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); - if (!nv_optimized(np)) { + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { if (rxtx_flags & NV_SETUP_RX_RING) { - writel(dma_low(np->ring_addr), base + NvRegRxRingPhysAddr); + writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr); } if (rxtx_flags & NV_SETUP_TX_RING) { - writel(dma_low(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr); + writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr); } } else { if (rxtx_flags & NV_SETUP_RX_RING) { - writel(dma_low(np->ring_addr), base + NvRegRxRingPhysAddr); - writel(dma_high(np->ring_addr), base + NvRegRxRingPhysAddrHigh); + writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr); + writel((u32) (cpu_to_le64(np->ring_addr) >> 32), base + NvRegRxRingPhysAddrHigh); } if (rxtx_flags & NV_SETUP_TX_RING) { - writel(dma_low(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr); - writel(dma_high(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddrHigh); + writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr); + writel((u32) (cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)) >> 32), base + NvRegTxRingPhysAddrHigh); } } } @@ -943,19 +1566,19 @@ static void free_rings(struct net_device *dev) { struct fe_priv *np = get_nvpriv(dev); - if (!nv_optimized(np)) { - if (np->rx_ring.orig) + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { 
+ if(np->rx_ring.orig) pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size), - np->rx_ring.orig, np->ring_addr); + np->rx_ring.orig, np->ring_addr); } else { if (np->rx_ring.ex) pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size), - np->rx_ring.ex, np->ring_addr); + np->rx_ring.ex, np->ring_addr); } if (np->rx_skb) kfree(np->rx_skb); if (np->tx_skb) - kfree(np->tx_skb); + kfree(np->tx_skb); } static int using_multi_irqs(struct net_device *dev) @@ -963,17 +1586,51 @@ static int using_multi_irqs(struct net_device *dev) struct fe_priv *np = get_nvpriv(dev); if (!(np->msi_flags & NV_MSI_X_ENABLED) || - ((np->msi_flags & NV_MSI_X_ENABLED) && - ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1))) + ((np->msi_flags & NV_MSI_X_ENABLED) && + ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1))) return 0; else return 1; } +#define GATE_OFF 0 +#define GATE_ON 1 +static inline void nv_pmctrl2_gatecoretxrx(struct net_device *dev,int action) +{ + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); + u32 powerstate; + + if (!np->mac_in_use) { + powerstate = readl(base + NvRegPowerState2); + + if (action) + powerstate |= NVREG_POWERSTATE2_GATE_CORE_TX_ON|NVREG_POWERSTATE2_GATE_CORE_RX_ON; + else + powerstate &= ~(NVREG_POWERSTATE2_GATE_CORE_TX_ON|NVREG_POWERSTATE2_GATE_CORE_RX_ON); + writel(powerstate,base + NvRegPowerState2); + } +} + +static inline void nv_msi_workaround(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); + + if (np->msi_flags & NV_MSI_ENABLED) { + writel(0, base + NvRegMSIIrqMask); + writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask); + } +} + static void nv_enable_irq(struct net_device *dev) { struct fe_priv *np = get_nvpriv(dev); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + + nv_msi_workaround(dev); + /* modify network device class id */ if (!using_multi_irqs(dev)) { if (np->msi_flags & NV_MSI_X_ENABLED) enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); @@ -990,6 +1647,7 @@ static void nv_disable_irq(struct net_device *dev) { struct fe_priv *np = get_nvpriv(dev); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); if (!using_multi_irqs(dev)) { if (np->msi_flags & NV_MSI_X_ENABLED) disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); @@ -1006,8 +1664,11 @@ static void nv_disable_irq(struct net_device *dev) static void nv_enable_hw_interrupts(struct net_device *dev, u32 mask) { u8 __iomem *base = get_hwbase(dev); + struct fe_priv *np = get_nvpriv(dev); writel(mask, base + NvRegIrqMask); + if (np->msi_flags & NV_MSI_ENABLED) + writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask); } static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask) @@ -1051,7 +1712,7 @@ static int mii_rw(struct net_device *dev, int addr, int miireg, int value) writel(reg, base + NvRegMIIControl); if (reg_delay(dev, NvRegMIIControl, NVREG_MIICTL_INUSE, 0, - NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) { + NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) { dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d timed out.\n", dev->name, miireg, addr); retval = -1; @@ -1073,90 +1734,149 @@ static int mii_rw(struct net_device *dev, int addr, int miireg, int value) return retval; } +static void nv_save_LED_stats(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + u32 reg=0; + u32 value=0; + int i=0; + + reg = Mv_Page_Address; + value = 3; + mii_rw(dev,np->phyaddr,reg,value); + udelay(5); + + 
reg = Mv_LED_Control; + for(i=0;i<3;i++){ + np->led_stats[i]=mii_rw(dev,np->phyaddr,reg+i,MII_READ); + dprintk(KERN_DEBUG "%s: save LED reg%d: value=0x%x\n",dev->name,reg+i,np->led_stats[i]); + } + +} + +static void nv_restore_LED_stats(struct net_device *dev) +{ + + struct fe_priv *np = get_nvpriv(dev); + u32 reg=0; + u32 value=0; + int i=0; + + reg = Mv_Page_Address; + value = 3; + mii_rw(dev,np->phyaddr,reg,value); + udelay(5); + + reg = Mv_LED_Control; + for(i=0;i<3;i++){ + mii_rw(dev,np->phyaddr,reg+i,np->led_stats[i]); + udelay(1); + dprintk(KERN_DEBUG "%s: restore LED reg%d: value=0x%x\n",dev->name,reg+i,np->led_stats[i]); + } + +} + +static void nv_LED_on(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + u32 reg=0; + u32 value=0; + + reg = Mv_Page_Address; + value = 3; + mii_rw(dev,np->phyaddr,reg,value); + udelay(5); + + reg = Mv_LED_Control; + mii_rw(dev,np->phyaddr,reg,Mv_LED_DUAL_MODE3); + +} + +static void nv_LED_off(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + u32 reg=0; + u32 value=0; + + reg = Mv_Page_Address; + value = 3; + mii_rw(dev,np->phyaddr,reg,value); + udelay(5); + + reg = Mv_LED_Control; + mii_rw(dev,np->phyaddr,reg,Mv_LED_FORCE_OFF); + udelay(1); + +} + static int phy_reset(struct net_device *dev, u32 bmcr_setup) { - struct fe_priv *np = netdev_priv(dev); - u32 miicontrol; + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); + u32 miicontrol,phy_reserved,phyinterface; unsigned int tries = 0; + u32 pm2_ctrl; + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + if (np->phy_oui== PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011) { + nv_save_LED_stats(dev); + } miicontrol = BMCR_RESET | bmcr_setup; if (mii_rw(dev, np->phyaddr, MII_BMCR, miicontrol)) { return -1; } /* wait for 500ms */ - msleep(500); + nv_msleep(500); /* must wait till reset is deasserted */ while (miicontrol & BMCR_RESET) { - msleep(10); + nv_msleep(10); miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); /* FIXME: 100 tries seem excessive */ if (tries++ > 100) return -1; } - return 0; -} - -static int phy_init(struct net_device *dev) -{ - struct fe_priv *np = get_nvpriv(dev); - u8 __iomem *base = get_hwbase(dev); - u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000,reg; + if (np->phy_oui== PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011) { + nv_restore_LED_stats(dev); + } + + if (np->phy_oui == PHY_OUI_MARVELL) { + if (np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_32 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_33 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_34 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_35 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_36 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_37 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_38 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_39) { + if(np->phy_model == PHY_MODEL_MARVELL_E1116){ + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG1,MII_READ); + phy_reserved &= ~PHY_MARVELL_INIT_MSK1; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG1,phy_reserved )) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } - /* phy errata for E3016 phy */ - if (np->phy_model == PHY_MODEL_MARVELL_E3016) { - reg = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ); - reg &= ~PHY_MARVELL_E3016_INITMASK; - if (mii_rw(dev, np->phyaddr, MII_NCONFIG, reg)) { - printk(KERN_INFO "%s: phy write to errata reg failed.\n", pci_name(np->pci_dev)); - return 
PHY_ERROR; - } - } - if (np->phy_oui == PHY_OUI_REALTEK) { - if (np->phy_model == PHY_MODEL_REALTEK_8211 && - np->phy_rev == PHY_REV_REALTEK_8211B) { - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; - } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; - } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; - } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; - } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG4, PHY_REALTEK_INIT5)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; - } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG5, PHY_REALTEK_INIT6)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2,MII_READ); + phy_reserved |= PHY_MARVELL_INIT1; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + if(np->phy_model == PHY_MODEL_MARVELL_E1111){ + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2,MII_READ); + phy_reserved |= PHY_MARVELL_INIT1; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } - } - if (np->phy_model == PHY_MODEL_REALTEK_8201) { - if (np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_32 || - np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_33 || - np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_34 || - np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_35 || - np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_36 || - np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_37 || - np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_38 || - np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_39) { - phy_reserved = mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, MII_READ); - phy_reserved |= PHY_REALTEK_INIT7; - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, phy_reserved)) { + if(np->phy_model == PHY_MODEL_MARVELL_E3016){ + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2,MII_READ); + phy_reserved |= PHY_MARVELL_INIT2; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2 ,phy_reserved)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; } @@ -1164,46 +1884,45 @@ static int phy_init(struct net_device *dev) } } - /* set advertise register */ - reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); - reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL|ADVERTISE_PAUSE_ASYM|ADVERTISE_PAUSE_CAP); - if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) { - printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; - } - - /* get phy interface type */ - phyinterface = readl(base + NvRegPhyInterface); + if (np->phy_oui == PHY_OUI_BRCM){ + if (np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_32 || + np->device_id == 
PCI_DEVICE_ID_NVIDIA_NVENET_33 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_34 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_35 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_36 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_37 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_38 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_39) { + if(np->phy_model == PHY_MODEL_BRCM_9507) { + phy_reserved = mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG1,MII_READ); + phy_reserved &= ~PHY_BRCM_INIT_MSK1; + phy_reserved |= PHY_BRCM_INIT1; + if (mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG1 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + } - /* see if gigabit phy */ - mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); - if (mii_status & PHY_GIGABIT) { - np->gigabit = PHY_GIGABIT; - mii_control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); - mii_control_1000 &= ~ADVERTISE_1000HALF; - if (phyinterface & PHY_RGMII) - mii_control_1000 |= ADVERTISE_1000FULL; - else - mii_control_1000 &= ~ADVERTISE_1000FULL; + if(np->phy_model == PHY_MODEL_BRCM_AC131) { + phy_reserved = mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG2,MII_READ); + phy_reserved |= PHY_BRCM_INIT2; + if (mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG2 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } - if (mii_rw(dev, np->phyaddr, MII_CTRL1000, mii_control_1000)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + phy_reserved = mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG3,MII_READ); + phy_reserved |= PHY_BRCM_INIT3; + if (mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG3 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + } } } - else - np->gigabit = 0; - - mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); - mii_control |= BMCR_ANENABLE; - /* reset the phy - * (certain phys need bmcr to be setup with reset) - */ - if (phy_reset(dev, mii_control)) { - printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev)); - return PHY_ERROR; - } + /* get phy interface type */ + phyinterface = readl(base + NvRegPhyInterface); /* phy vendor specific configuration */ if ((np->phy_oui == PHY_OUI_CICADA) && (phyinterface & PHY_RGMII) ) { @@ -1233,103 +1952,316 @@ static int phy_init(struct net_device *dev) if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG1, PHY_VITESSE_INIT1)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT2)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ); if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ); phy_reserved &= ~PHY_VITESSE_INIT_MSK1; phy_reserved |= PHY_VITESSE_INIT3; if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT4)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT5)) { 
printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ); phy_reserved &= ~PHY_VITESSE_INIT_MSK1; phy_reserved |= PHY_VITESSE_INIT3; if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ); if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT6)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT7)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ); if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ); phy_reserved &= ~PHY_VITESSE_INIT_MSK2; phy_reserved |= PHY_VITESSE_INIT8; if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT9)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG1, PHY_VITESSE_INIT10)) { printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; - } + } } + if (np->phy_oui == PHY_OUI_REALTEK) { - if (np->phy_model == PHY_MODEL_REALTEK_8211 && - np->phy_rev == PHY_REV_REALTEK_8211B) { - /* reset could have cleared these out, set them back */ - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + if (np->phy_model == PHY_MODEL_REALTEK_8211){ + if(np->phy_rev == PHY_REV_REALTEK_8211B) { + /* reset could have cleared these out, set them back */ + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG4, PHY_REALTEK_INIT5)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG5, PHY_REALTEK_INIT6)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } - if 
(mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + if(np->phy_rev == PHY_REV_REALTEK_8211C) { + /* Reset the phy using Power Saving Register */ + pm2_ctrl = readl(base + NvRegPowerState2); + pm2_ctrl |= NVREG_POWERSTATE2_PHYRST; + writel(pm2_ctrl,base + NvRegPowerState2); + /* Some delay for the reset to take effect */ + udelay(20000); + /* De-assert PHY Reset */ + pm2_ctrl &= ~NVREG_POWERSTATE2_PHYRST; + writel(pm2_ctrl,base + NvRegPowerState2); + udelay(20000); + + phy_reserved = mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, MII_READ); + phy_reserved |= PHY_REALTEK_INIT9; + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT10)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG7, PHY_REALTEK_INIT11)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + } + + if (np->phy_model == PHY_MODEL_REALTEK_8201) { + if (np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_32 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_33 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_34 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_35 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_36 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_37 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_38 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_39) { + phy_reserved = mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, MII_READ); + phy_reserved |= PHY_REALTEK_INIT7; + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + if (phy_cross == NV_CROSSOVER_DETECTION_DISABLED) { + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + phy_reserved = mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, MII_READ); + phy_reserved &= ~PHY_REALTEK_INIT_MSK1; + phy_reserved |= PHY_REALTEK_INIT3; + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG4, PHY_REALTEK_INIT5)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + } + } + + return 0; +} + +static int phy_init(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); + u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000,reg; + 
+ dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + + if (np->phy_oui == PHY_OUI_MARVELL) { + /* phy errata for E3016 phy */ + if (np->phy_model == PHY_MODEL_MARVELL_E3016) { + reg = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ); + reg &= ~PHY_MARVELL_E3016_INITMASK; + if (mii_rw(dev, np->phyaddr, MII_NCONFIG, reg)) { + printk(KERN_INFO "%s: phy write to errata reg failed.\n", pci_name(np->pci_dev)); return PHY_ERROR; } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG5, PHY_REALTEK_INIT6)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + } + + if (np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_32 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_33 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_34 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_35 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_36 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_37 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_38 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_39) { + if(np->phy_model == PHY_MODEL_MARVELL_E1116){ + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG1,MII_READ); + phy_reserved &= ~PHY_MARVELL_INIT_MSK1; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG1,phy_reserved )) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2,MII_READ); + phy_reserved |= PHY_MARVELL_INIT1; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } - if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { - printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); - return PHY_ERROR; + if(np->phy_model == PHY_MODEL_MARVELL_E1111){ + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2,MII_READ); + phy_reserved |= PHY_MARVELL_INIT1; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + } + if(np->phy_model == PHY_MODEL_MARVELL_E3016){ + phy_reserved = mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2,MII_READ); + phy_reserved |= PHY_MARVELL_INIT2; + if (mii_rw(dev, np->phyaddr,PHY_MARVELL_INIT_REG2 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + } + } + } + + if (np->phy_oui == PHY_OUI_BRCM){ + if (np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_32 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_33 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_34 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_35 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_36 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_37 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_38 || + np->device_id == PCI_DEVICE_ID_NVIDIA_NVENET_39) { + if(np->phy_model == PHY_MODEL_BRCM_9507) { + phy_reserved = mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG1,MII_READ); + phy_reserved &= ~PHY_BRCM_INIT_MSK1; + phy_reserved |= PHY_BRCM_INIT1; + if (mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG1 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + } + + if(np->phy_model == PHY_MODEL_BRCM_AC131) { + phy_reserved = mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG2,MII_READ); + phy_reserved |= PHY_BRCM_INIT2; + if (mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG2 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", 
pci_name(np->pci_dev)); + return PHY_ERROR; + } + + phy_reserved = mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG3,MII_READ); + phy_reserved |= PHY_BRCM_INIT3; + if (mii_rw(dev, np->phyaddr,PHY_BRCM_INIT_REG3 ,phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + } + } + } + + if (np->phy_oui == PHY_OUI_REALTEK) { + if (np->phy_model == PHY_MODEL_REALTEK_8211){ + if(np->phy_rev == PHY_REV_REALTEK_8211B) { + /* reset could have cleared these out, set them back */ + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG4, PHY_REALTEK_INIT5)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG5, PHY_REALTEK_INIT6)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + } + if(np->phy_rev == PHY_REV_REALTEK_8211C) { + phy_reserved = mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, MII_READ); + phy_reserved |= PHY_REALTEK_INIT9; + if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG6, phy_reserved)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } } } if (np->phy_model == PHY_MODEL_REALTEK_8201) { @@ -1365,17 +2297,80 @@ static int phy_init(struct net_device *dev) return PHY_ERROR; } } + } + } + + /* set advertise register */ + reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); + reg &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM); + reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL); + + if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisments but disable tx pause */ + reg |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM; + if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) + reg |= ADVERTISE_PAUSE_ASYM; + np->fixed_mode = reg; + + if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) { + printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + + /* get phy interface type */ + phyinterface = readl(base + NvRegPhyInterface); + + /* see if gigabit phy */ + mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); + if (mii_status & PHY_GIGABIT) { + np->gigabit = PHY_GIGABIT; + mii_control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); + mii_control_1000 &= ~ADVERTISE_1000HALF; + if (phyinterface & PHY_RGMII) + mii_control_1000 |= ADVERTISE_1000FULL; + else { + mii_control_1000 &= ~ADVERTISE_1000FULL; + } + if (mii_rw(dev, np->phyaddr, MII_CTRL1000, mii_control_1000)) { + printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; } } + else + np->gigabit = 0; + + mii_control = 
mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); + if (np->autoneg == AUTONEG_DISABLE){ + np->pause_flags &= ~(NV_PAUSEFRAME_RX_ENABLE | NV_PAUSEFRAME_TX_ENABLE); + if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) + np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE; + if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) + np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE; + mii_control &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_SPEED1000|BMCR_FULLDPLX); + if (reg & (ADVERTISE_10FULL|ADVERTISE_100FULL)) + mii_control |= BMCR_FULLDPLX; + if (reg & (ADVERTISE_100HALF|ADVERTISE_100FULL)) + mii_control |= BMCR_SPEED100; + } else { + mii_control |= BMCR_ANENABLE; + } + + /* reset the phy and setup BMCR + * (certain phys need reset at same time new values are set) */ + if (phy_reset(dev, mii_control)) { + printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } /* some phys clear out pause advertisment on reset, set it back */ mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg); /* restart auto negotiation */ - mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); - mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE); - if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) { - return PHY_ERROR; + if (np->autoneg == AUTONEG_ENABLE) { + mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); + mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE); + if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) { + return PHY_ERROR; + } } return 0; @@ -1383,11 +2378,12 @@ static int phy_init(struct net_device *dev) static void nv_start_rx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 rx_ctrl = readl(base + NvRegReceiverControl); - dprintk(KERN_DEBUG "%s: nv_start_rx\n", dev->name); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + /* Already running? Stop it. 
*/ if ((readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) && !np->mac_in_use) { rx_ctrl &= ~NVREG_RCVCTL_START; @@ -1396,22 +2392,22 @@ static void nv_start_rx(struct net_device *dev) } writel(np->linkspeed, base + NvRegLinkSpeed); pci_push(base); - rx_ctrl |= NVREG_RCVCTL_START; - if (np->mac_in_use) + rx_ctrl |= NVREG_RCVCTL_START; + if (np->mac_in_use) rx_ctrl &= ~NVREG_RCVCTL_RX_PATH_EN; writel(rx_ctrl, base + NvRegReceiverControl); dprintk(KERN_DEBUG "%s: nv_start_rx to duplex %d, speed 0x%08x.\n", - dev->name, np->duplex, np->linkspeed); + dev->name, np->duplex, np->linkspeed); pci_push(base); } static void nv_stop_rx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 rx_ctrl = readl(base + NvRegReceiverControl); - dprintk(KERN_DEBUG "%s: nv_stop_rx\n", dev->name); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); if (!np->mac_in_use) rx_ctrl &= ~NVREG_RCVCTL_START; else @@ -1428,11 +2424,11 @@ static void nv_stop_rx(struct net_device *dev) static void nv_start_tx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 tx_ctrl = readl(base + NvRegTransmitterControl); - dprintk(KERN_DEBUG "%s: nv_start_tx\n", dev->name); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); tx_ctrl |= NVREG_XMITCTL_START; if (np->mac_in_use) tx_ctrl &= ~NVREG_XMITCTL_TX_PATH_EN; @@ -1442,11 +2438,11 @@ static void nv_start_tx(struct net_device *dev) static void nv_stop_tx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 tx_ctrl = readl(base + NvRegTransmitterControl); - dprintk(KERN_DEBUG "%s: nv_stop_tx\n", dev->name); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); if (!np->mac_in_use) tx_ctrl &= ~NVREG_XMITCTL_START; else @@ -1458,51 +2454,43 @@ static void nv_stop_tx(struct net_device *dev) udelay(NV_TXSTOP_DELAY2); if (!np->mac_in_use) - writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, - base + NvRegTransmitPoll); -} - -static void nv_start_rxtx(struct net_device *dev) -{ - nv_start_rx(dev); - nv_start_tx(dev); -} - -static void nv_stop_rxtx(struct net_device *dev) -{ - nv_stop_rx(dev); - nv_stop_tx(dev); + writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll); } static void nv_txrx_reset(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); + unsigned int i; - dprintk(KERN_DEBUG "%s: nv_txrx_reset\n", dev->name); - writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + writel(NVREG_TXRXCTL_BM_DIS | np->txrxctl_bits, base + NvRegTxRxControl); + udelay(32); + for(i=0;i<10000;i++){ + udelay(1); + if(readl(base+NvRegTxRxControl) & NVREG_TXRXCTL_IDLE) + break; + } + writel(NVREG_TXRXCTL_BM_DIS | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl); pci_push(base); udelay(NV_TXRX_RESET_DELAY); - writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl); - pci_push(base); } static void nv_mac_reset(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); - u32 temp1, temp2, temp3; + u32 temp1,temp2,temp3; - dprintk(KERN_DEBUG "%s: 
nv_mac_reset\n", dev->name); - - writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl); - pci_push(base); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + writel(NVREG_TXRXCTL_BM_DIS | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl); /* save registers since they will be cleared on reset */ temp1 = readl(base + NvRegMacAddrA); temp2 = readl(base + NvRegMacAddrB); temp3 = readl(base + NvRegTransmitPoll); + pci_push(base); writel(NVREG_MAC_RESET_ASSERT, base + NvRegMacReset); pci_push(base); udelay(NV_MAC_RESET_DELAY); @@ -1515,118 +2503,133 @@ static void nv_mac_reset(struct net_device *dev) writel(temp2, base + NvRegMacAddrB); writel(temp3, base + NvRegTransmitPoll); - writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl); + writel(NVREG_TXRXCTL_BM_DIS | np->txrxctl_bits, base + NvRegTxRxControl); pci_push(base); } -static void nv_get_hw_stats(struct net_device *dev) +#if NVVER < SLES9 +static int nv_ethtool_ioctl(struct net_device *dev, void *useraddr) { - struct fe_priv *np = netdev_priv(dev); - u8 __iomem *base = get_hwbase(dev); + struct fe_priv *np = get_nvpriv(dev); + u8 *base = get_hwbase(dev); + u32 ethcmd; + + if (copy_from_user(ðcmd, useraddr, sizeof (ethcmd))) + return -EFAULT; + + switch (ethcmd) { + case ETHTOOL_GDRVINFO: + { + struct ethtool_drvinfo info = { ETHTOOL_GDRVINFO }; + if(napi) + strcpy(info.driver, "forcedeth-NAPI"); + else + strcpy(info.driver, "forcedeth"); + strcpy(info.version, FORCEDETH_VERSION); + strcpy(info.bus_info, pci_name(np->pci_dev)); + if (copy_to_user(useraddr, &info, sizeof (info))) + return -EFAULT; + return 0; + } + case ETHTOOL_GLINK: + { + struct ethtool_value edata = { ETHTOOL_GLINK }; + + edata.data = !!netif_carrier_ok(dev); + + if (copy_to_user(useraddr, &edata, sizeof(edata))) + return -EFAULT; + return 0; + } + case ETHTOOL_GWOL: + { + struct ethtool_wolinfo wolinfo; + memset(&wolinfo, 0, sizeof(wolinfo)); + wolinfo.supported = WAKE_MAGIC; + + spin_lock_irq(&np->lock); + if (np->wolenabled) + wolinfo.wolopts = WAKE_MAGIC; + spin_unlock_irq(&np->lock); + + if (copy_to_user(useraddr, &wolinfo, sizeof(wolinfo))) + return -EFAULT; + return 0; + } + case ETHTOOL_SWOL: + { + struct ethtool_wolinfo wolinfo; + if (copy_from_user(&wolinfo, useraddr, sizeof(wolinfo))) + return -EFAULT; + + spin_lock_irq(&np->lock); + if (wolinfo.wolopts == 0) { + writel(0, base + NvRegWakeUpFlags); + np->wolenabled = NV_WOL_DISABLED; + } + if (wolinfo.wolopts & WAKE_MAGIC) { + writel(NVREG_WAKEUPFLAGS_ENABLE, base + NvRegWakeUpFlags); + np->wolenabled = NV_WOL_ENABLED; + } + spin_unlock_irq(&np->lock); + return 0; + } - np->estats.tx_bytes += readl(base + NvRegTxCnt); - np->estats.tx_zero_rexmt += readl(base + NvRegTxZeroReXmt); - np->estats.tx_one_rexmt += readl(base + NvRegTxOneReXmt); - np->estats.tx_many_rexmt += readl(base + NvRegTxManyReXmt); - np->estats.tx_late_collision += readl(base + NvRegTxLateCol); - np->estats.tx_fifo_errors += readl(base + NvRegTxUnderflow); - np->estats.tx_carrier_errors += readl(base + NvRegTxLossCarrier); - np->estats.tx_excess_deferral += readl(base + NvRegTxExcessDef); - np->estats.tx_retry_error += readl(base + NvRegTxRetryErr); - np->estats.rx_frame_error += readl(base + NvRegRxFrameErr); - np->estats.rx_extra_byte += readl(base + NvRegRxExtraByte); - np->estats.rx_late_collision += readl(base + NvRegRxLateCol); - np->estats.rx_runt += readl(base + NvRegRxRunt); - np->estats.rx_frame_too_long += readl(base + NvRegRxFrameTooLong); - 
np->estats.rx_over_errors += readl(base + NvRegRxOverflow); - np->estats.rx_crc_errors += readl(base + NvRegRxFCSErr); - np->estats.rx_frame_align_error += readl(base + NvRegRxFrameAlignErr); - np->estats.rx_length_error += readl(base + NvRegRxLenErr); - np->estats.rx_unicast += readl(base + NvRegRxUnicast); - np->estats.rx_multicast += readl(base + NvRegRxMulticast); - np->estats.rx_broadcast += readl(base + NvRegRxBroadcast); - np->estats.rx_packets = - np->estats.rx_unicast + - np->estats.rx_multicast + - np->estats.rx_broadcast; - np->estats.rx_errors_total = - np->estats.rx_crc_errors + - np->estats.rx_over_errors + - np->estats.rx_frame_error + - (np->estats.rx_frame_align_error - np->estats.rx_extra_byte) + - np->estats.rx_late_collision + - np->estats.rx_runt + - np->estats.rx_frame_too_long; - np->estats.tx_errors_total = - np->estats.tx_late_collision + - np->estats.tx_fifo_errors + - np->estats.tx_carrier_errors + - np->estats.tx_excess_deferral + - np->estats.tx_retry_error; - - if (np->driver_data & DEV_HAS_STATISTICS_V2) { - np->estats.tx_deferral += readl(base + NvRegTxDef); - np->estats.tx_packets += readl(base + NvRegTxFrame); - np->estats.rx_bytes += readl(base + NvRegRxCnt); - np->estats.tx_pause += readl(base + NvRegTxPause); - np->estats.rx_pause += readl(base + NvRegRxPause); - np->estats.rx_drop_frame += readl(base + NvRegRxDropFrame); + default: + break; } + + return -EOPNOTSUPP; } /* - * nv_get_stats: dev->get_stats function - * Get latest stats value from the nic. - * Called with read_lock(&dev_base_lock) held for read - - * only synchronized against unregister_netdevice. + * nv_ioctl: dev->do_ioctl function + * Called with rtnl_lock held. */ -static struct net_device_stats *nv_get_stats(struct net_device *dev) +static int nv_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) { - struct fe_priv *np = netdev_priv(dev); + switch(cmd) { + case SIOCETHTOOL: + return nv_ethtool_ioctl(dev, rq->ifr_data); - /* If the nic supports hw counters then retrieve latest values */ - if (np->driver_data & (DEV_HAS_STATISTICS_V1|DEV_HAS_STATISTICS_V2)) { - nv_get_hw_stats(dev); - - /* copy to net_device stats */ - dev->stats.tx_bytes = np->estats.tx_bytes; - dev->stats.tx_fifo_errors = np->estats.tx_fifo_errors; - dev->stats.tx_carrier_errors = np->estats.tx_carrier_errors; - dev->stats.rx_crc_errors = np->estats.rx_crc_errors; - dev->stats.rx_over_errors = np->estats.rx_over_errors; - dev->stats.rx_errors = np->estats.rx_errors_total; - dev->stats.tx_errors = np->estats.tx_errors_total; + default: + return -EOPNOTSUPP; } - - return &dev->stats; } +#endif /* * nv_alloc_rx: fill rx ring entries. 
* Return 1 if the allocations for the skbs failed and the * rx engine is without Available descriptors */ -static int nv_alloc_rx(struct net_device *dev) +static inline int nv_alloc_rx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); struct ring_desc* less_rx; + struct sk_buff *skb; less_rx = np->get_rx.orig; if (less_rx-- == np->first_rx.orig) less_rx = np->last_rx.orig; while (np->put_rx.orig != less_rx) { - struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD); + skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD); if (skb) { + skb->dev = dev; np->put_rx_ctx->skb = skb; - np->put_rx_ctx->dma = pci_map_single(np->pci_dev, - skb->data, - skb_tailroom(skb), - PCI_DMA_FROMDEVICE); +#if NVVER > FEDORA7 + np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data, + skb_tailroom(skb), PCI_DMA_FROMDEVICE); np->put_rx_ctx->dma_len = skb_tailroom(skb); - np->put_rx.orig->buf = cpu_to_le32(np->put_rx_ctx->dma); +#else + np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data, + skb->end-skb->data, PCI_DMA_FROMDEVICE); + np->put_rx_ctx->dma_len = skb->end-skb->data; +#endif + np->put_rx.orig->PacketBuffer = cpu_to_le32(np->put_rx_ctx->dma); wmb(); - np->put_rx.orig->flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL); + np->put_rx.orig->FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL); if (unlikely(np->put_rx.orig++ == np->last_rx.orig)) np->put_rx.orig = np->first_rx.orig; if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx)) @@ -1638,28 +2641,34 @@ static int nv_alloc_rx(struct net_device *dev) return 0; } -static int nv_alloc_rx_optimized(struct net_device *dev) +static inline int nv_alloc_rx_optimized(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); struct ring_desc_ex* less_rx; + struct sk_buff *skb; less_rx = np->get_rx.ex; if (less_rx-- == np->first_rx.ex) less_rx = np->last_rx.ex; while (np->put_rx.ex != less_rx) { - struct sk_buff *skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD); + skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD); if (skb) { + skb->dev = dev; np->put_rx_ctx->skb = skb; - np->put_rx_ctx->dma = pci_map_single(np->pci_dev, - skb->data, - skb_tailroom(skb), - PCI_DMA_FROMDEVICE); +#if NVVER > FEDORA7 + np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data, + skb_tailroom(skb), PCI_DMA_FROMDEVICE); np->put_rx_ctx->dma_len = skb_tailroom(skb); - np->put_rx.ex->bufhigh = cpu_to_le32(dma_high(np->put_rx_ctx->dma)); - np->put_rx.ex->buflow = cpu_to_le32(dma_low(np->put_rx_ctx->dma)); +#else + np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data, + skb->end-skb->data, PCI_DMA_FROMDEVICE); + np->put_rx_ctx->dma_len = skb->end-skb->data; +#endif + np->put_rx.ex->PacketBufferHigh = cpu_to_le64(np->put_rx_ctx->dma) >> 32; + np->put_rx.ex->PacketBufferLow = cpu_to_le64(np->put_rx_ctx->dma) & 0x0FFFFFFFF; wmb(); - np->put_rx.ex->flaglen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL); + np->put_rx.ex->FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL); if (unlikely(np->put_rx.ex++ == np->last_rx.ex)) np->put_rx.ex = np->first_rx.ex; if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx)) @@ -1672,22 +2681,23 @@ static int nv_alloc_rx_optimized(struct net_device *dev) } /* If rx bufs are exhausted called after 50ms to attempt to refresh */ -#ifdef CONFIG_FORCEDETH_NAPI static void nv_do_rx_refill(unsigned long data) { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = 
get_nvpriv(dev); + int retcode; - /* Just reschedule NAPI rx processing */ - netif_rx_schedule(dev, &np->napi); -} + if(napi) { + /* Just reschedule NAPI rx processing */ +#ifdef NV_NAPI_POLL_LIST + netif_rx_schedule(dev, &np->napi); #else -static void nv_do_rx_refill(unsigned long data) -{ - struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); - int retcode; + netif_rx_schedule(dev); +#endif + return; + } + spin_lock_irq(&np->timer_lock); if (!using_multi_irqs(dev)) { if (np->msi_flags & NV_MSI_X_ENABLED) disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); @@ -1696,7 +2706,8 @@ static void nv_do_rx_refill(unsigned long data) } else { disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); } - if (!nv_optimized(np)) + + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) retcode = nv_alloc_rx(dev); else retcode = nv_alloc_rx_optimized(dev); @@ -1714,17 +2725,16 @@ static void nv_do_rx_refill(unsigned long data) } else { enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); } + spin_unlock_irq(&np->timer_lock); } -#endif -static void nv_init_rx(struct net_device *dev) +static void nv_init_rx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int i; np->get_rx = np->put_rx = np->first_rx = np->rx_ring; - - if (!nv_optimized(np)) + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) np->last_rx.orig = &np->rx_ring.orig[np->rx_ring_size-1]; else np->last_rx.ex = &np->rx_ring.ex[np->rx_ring_size-1]; @@ -1732,14 +2742,14 @@ static void nv_init_rx(struct net_device *dev) np->last_rx_ctx = &np->rx_skb[np->rx_ring_size-1]; for (i = 0; i < np->rx_ring_size; i++) { - if (!nv_optimized(np)) { - np->rx_ring.orig[i].flaglen = 0; - np->rx_ring.orig[i].buf = 0; + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + np->rx_ring.orig[i].FlagLen = 0; + np->rx_ring.orig[i].PacketBuffer = 0; } else { - np->rx_ring.ex[i].flaglen = 0; - np->rx_ring.ex[i].txvlan = 0; - np->rx_ring.ex[i].bufhigh = 0; - np->rx_ring.ex[i].buflow = 0; + np->rx_ring.ex[i].FlagLen = 0; + np->rx_ring.ex[i].TxVlan = 0; + np->rx_ring.ex[i].PacketBufferHigh = 0; + np->rx_ring.ex[i].PacketBufferLow = 0; } np->rx_skb[i].skb = NULL; np->rx_skb[i].dma = 0; @@ -1748,12 +2758,11 @@ static void nv_init_rx(struct net_device *dev) static void nv_init_tx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int i; np->get_tx = np->put_tx = np->first_tx = np->tx_ring; - - if (!nv_optimized(np)) + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) np->last_tx.orig = &np->tx_ring.orig[np->tx_ring_size-1]; else np->last_tx.ex = &np->tx_ring.ex[np->tx_ring_size-1]; @@ -1764,14 +2773,14 @@ static void nv_init_tx(struct net_device *dev) np->tx_end_flip = NULL; for (i = 0; i < np->tx_ring_size; i++) { - if (!nv_optimized(np)) { - np->tx_ring.orig[i].flaglen = 0; - np->tx_ring.orig[i].buf = 0; + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + np->tx_ring.orig[i].FlagLen = 0; + np->tx_ring.orig[i].PacketBuffer = 0; } else { - np->tx_ring.ex[i].flaglen = 0; - np->tx_ring.ex[i].txvlan = 0; - np->tx_ring.ex[i].bufhigh = 0; - np->tx_ring.ex[i].buflow = 0; + np->tx_ring.ex[i].FlagLen = 0; + np->tx_ring.ex[i].TxVlan = 0; + np->tx_ring.ex[i].PacketBufferHigh = 0; + np->tx_ring.ex[i].PacketBufferLow = 0; } np->tx_skb[i].skb = NULL; np->tx_skb[i].dma = 0; @@ -1783,30 +2792,31 @@ static void nv_init_tx(struct net_device *dev) static int 
nv_init_ring(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); - + struct fe_priv *np = get_nvpriv(dev); nv_init_tx(dev); nv_init_rx(dev); - - if (!nv_optimized(np)) + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) return nv_alloc_rx(dev); else return nv_alloc_rx_optimized(dev); } -static int nv_release_txskb(struct net_device *dev, struct nv_skb_map* tx_skb) +static int nv_release_txskb(struct net_device *dev, unsigned int skbnr) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); - if (tx_skb->dma) { - pci_unmap_page(np->pci_dev, tx_skb->dma, - tx_skb->dma_len, - PCI_DMA_TODEVICE); - tx_skb->dma = 0; + dprintk(KERN_INFO "%s: nv_release_txskb for skbnr %d\n", + dev->name, skbnr); + + if (np->tx_skb[skbnr].dma) { + pci_unmap_page(np->pci_dev, np->tx_skb[skbnr].dma, + np->tx_skb[skbnr].dma_len, + PCI_DMA_TODEVICE); + np->tx_skb[skbnr].dma = 0; } - if (tx_skb->skb) { - dev_kfree_skb_any(tx_skb->skb); - tx_skb->skb = NULL; + if (np->tx_skb[skbnr].skb) { + dev_kfree_skb_any(np->tx_skb[skbnr].skb); + np->tx_skb[skbnr].skb = NULL; return 1; } else { return 0; @@ -1815,21 +2825,21 @@ static int nv_release_txskb(struct net_device *dev, struct nv_skb_map* tx_skb) static void nv_drain_tx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); unsigned int i; for (i = 0; i < np->tx_ring_size; i++) { - if (!nv_optimized(np)) { - np->tx_ring.orig[i].flaglen = 0; - np->tx_ring.orig[i].buf = 0; + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + np->tx_ring.orig[i].FlagLen = 0; + np->tx_ring.orig[i].PacketBuffer = 0; } else { - np->tx_ring.ex[i].flaglen = 0; - np->tx_ring.ex[i].txvlan = 0; - np->tx_ring.ex[i].bufhigh = 0; - np->tx_ring.ex[i].buflow = 0; + np->tx_ring.ex[i].FlagLen = 0; + np->tx_ring.ex[i].TxVlan = 0; + np->tx_ring.ex[i].PacketBufferHigh = 0; + np->tx_ring.ex[i].PacketBufferLow = 0; } - if (nv_release_txskb(dev, &np->tx_skb[i])) - dev->stats.tx_dropped++; + if (nv_release_txskb(dev, i)) + np->stats.tx_dropped++; np->tx_skb[i].dma = 0; np->tx_skb[i].dma_len = 0; np->tx_skb[i].first_tx_desc = NULL; @@ -1842,62 +2852,60 @@ static void nv_drain_tx(struct net_device *dev) static void nv_drain_rx(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int i; - for (i = 0; i < np->rx_ring_size; i++) { - if (!nv_optimized(np)) { - np->rx_ring.orig[i].flaglen = 0; - np->rx_ring.orig[i].buf = 0; + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + np->rx_ring.orig[i].FlagLen = 0; + np->rx_ring.orig[i].PacketBuffer = 0; } else { - np->rx_ring.ex[i].flaglen = 0; - np->rx_ring.ex[i].txvlan = 0; - np->rx_ring.ex[i].bufhigh = 0; - np->rx_ring.ex[i].buflow = 0; + np->rx_ring.ex[i].FlagLen = 0; + np->rx_ring.ex[i].TxVlan = 0; + np->rx_ring.ex[i].PacketBufferHigh = 0; + np->rx_ring.ex[i].PacketBufferLow = 0; } wmb(); if (np->rx_skb[i].skb) { +#if NVVER > FEDORA7 pci_unmap_single(np->pci_dev, np->rx_skb[i].dma, - (skb_end_pointer(np->rx_skb[i].skb) - - np->rx_skb[i].skb->data), - PCI_DMA_FROMDEVICE); + (skb_end_pointer(np->rx_skb[i].skb) - np->rx_skb[i].skb->data), + PCI_DMA_FROMDEVICE); +#else + pci_unmap_single(np->pci_dev, np->rx_skb[i].dma, + np->rx_skb[i].skb->end-np->rx_skb[i].skb->data, + PCI_DMA_FROMDEVICE); +#endif dev_kfree_skb(np->rx_skb[i].skb); np->rx_skb[i].skb = NULL; } } } -static void nv_drain_rxtx(struct net_device *dev) +static void drain_ring(struct net_device *dev) { nv_drain_tx(dev); 
nv_drain_rx(dev); } -static inline u32 nv_get_empty_tx_slots(struct fe_priv *np) -{ - return (u32)(np->tx_ring_size - ((np->tx_ring_size + (np->put_tx_ctx - np->get_tx_ctx)) % np->tx_ring_size)); -} - static void nv_legacybackoff_reseed(struct net_device *dev) { u8 __iomem *base = get_hwbase(dev); - u32 reg; + u32 reg; u32 low; int tx_status = 0; - - reg = readl(base + NvRegSlotTime) & ~NVREG_SLOTTIME_MASK; - get_random_bytes(&low, sizeof(low)); - reg |= low & NVREG_SLOTTIME_MASK; - - /* Need to stop tx before change takes effect. - * Caller has already gained np->lock. - */ + + rdtscl(low); + low = low & 0xff; + reg = readl(base + NvRegSlotTime) & ~0xFF; + reg |= low; + + /* need to stop tx before change takes effect */ tx_status = readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_START; if (tx_status) nv_stop_tx(dev); nv_stop_rx(dev); writel(reg, base + NvRegSlotTime); - if (tx_status) + if(tx_status) nv_start_tx(dev); nv_start_rx(dev); } @@ -1907,7 +2915,7 @@ static void nv_legacybackoff_reseed(struct net_device *dev) #define BACKOFF_SEEDSET_LFSRS 15 /* Known Good seed sets */ -static const u32 main_seedset[BACKOFF_SEEDSET_ROWS][BACKOFF_SEEDSET_LFSRS] = { +u32 gMainSeedSet[BACKOFF_SEEDSET_ROWS][BACKOFF_SEEDSET_LFSRS] = { {145, 155, 165, 175, 185, 196, 235, 245, 255, 265, 275, 285, 660, 690, 874}, {245, 255, 265, 575, 385, 298, 335, 345, 355, 366, 375, 385, 761, 790, 974}, {145, 155, 165, 175, 185, 196, 235, 245, 255, 265, 275, 285, 660, 690, 874}, @@ -1917,7 +2925,7 @@ static const u32 main_seedset[BACKOFF_SEEDSET_ROWS][BACKOFF_SEEDSET_LFSRS] = { {366, 365, 376, 686, 497, 308, 447, 455, 466, 476, 485, 496, 871, 800, 84}, {466, 465, 476, 786, 597, 408, 547, 555, 566, 576, 585, 597, 971, 900, 184}}; -static const u32 gear_seedset[BACKOFF_SEEDSET_ROWS][BACKOFF_SEEDSET_LFSRS] = { +u32 gGearSeedSet[BACKOFF_SEEDSET_ROWS][BACKOFF_SEEDSET_LFSRS] = { {251, 262, 273, 324, 319, 508, 375, 364, 341, 371, 398, 193, 375, 30, 295}, {351, 375, 373, 469, 551, 639, 477, 464, 441, 472, 498, 293, 476, 130, 395}, {351, 375, 373, 469, 551, 639, 477, 464, 441, 472, 498, 293, 476, 130, 397}, @@ -1930,70 +2938,70 @@ static const u32 gear_seedset[BACKOFF_SEEDSET_ROWS][BACKOFF_SEEDSET_LFSRS] = { static void nv_gear_backoff_reseed(struct net_device *dev) { u8 __iomem *base = get_hwbase(dev); - u32 miniseed1, miniseed2, miniseed2_reversed, miniseed3, miniseed3_reversed; - u32 temp, seedset, combinedSeed; + u32 miniseed1,miniseed2,miniseed2_reversed,miniseed3,miniseed3_reversed; + u32 temp,seedset,combinedSeed; + u32 low; int i; - + /* Setup seed for free running LFSR */ - /* We are going to read the time stamp counter 3 times - and swizzle bits around to increase randomness */ - get_random_bytes(&miniseed1, sizeof(miniseed1)); - miniseed1 &= 0x0fff; - if (miniseed1 == 0) - miniseed1 = 0xabc; - - get_random_bytes(&miniseed2, sizeof(miniseed2)); - miniseed2 &= 0x0fff; - if (miniseed2 == 0) - miniseed2 = 0xabc; - miniseed2_reversed = - ((miniseed2 & 0xF00) >> 8) | - (miniseed2 & 0x0F0) | - ((miniseed2 & 0x00F) << 8); - - get_random_bytes(&miniseed3, sizeof(miniseed3)); - miniseed3 &= 0x0fff; - if (miniseed3 == 0) - miniseed3 = 0xabc; - miniseed3_reversed = - ((miniseed3 & 0xF00) >> 8) | - (miniseed3 & 0x0F0) | - ((miniseed3 & 0x00F) << 8); - - combinedSeed = ((miniseed1 ^ miniseed2_reversed) << 12) | - (miniseed2 ^ miniseed3_reversed); + /* We are going to read the time stamp counter 3 times and swizzle bits around to increase randomness */ + rdtscl(low); + miniseed1= low & 0x0fff; + if(miniseed1==0) + 
miniseed1=0xabc; + + rdtscl(low); + miniseed2= low & 0x0fff; + if(miniseed2==0) + miniseed2=0xabc; + miniseed2_reversed = ((miniseed2 & 0xF00) >> 8) | (miniseed2 & 0x0F0) | ((miniseed2 & 0x00F) << 8); + + rdtscl(low); + miniseed3= low & 0x0fff; + if(miniseed3==0) + miniseed3=0xabc; + miniseed3_reversed = ((miniseed3 & 0xF00) >> 8) | (miniseed3 & 0x0F0) | ((miniseed3 & 0x00F) << 8); + + combinedSeed = ((miniseed1 ^ miniseed2_reversed) << 12) | (miniseed2 ^ miniseed3_reversed); /* Seeds can not be zero */ - if ((combinedSeed & NVREG_BKOFFCTRL_SEED_MASK) == 0) + if ((combinedSeed & 0x3FF) == 0) combinedSeed |= 0x08; - if ((combinedSeed & (NVREG_BKOFFCTRL_SEED_MASK << NVREG_BKOFFCTRL_GEAR)) == 0) + if ((combinedSeed & 0x3FF000) == 0) combinedSeed |= 0x8000; + /* Ensure seeds are not the same */ + if ((combinedSeed & 0x3FF) == (combinedSeed & 0x3FF000)) + combinedSeed = 0x3FF3FF; + /* No need to disable tx here */ - temp = NVREG_BKOFFCTRL_DEFAULT | (0 << NVREG_BKOFFCTRL_SELECT); + temp = 0; + temp = NVREG_BKOFFCTRL_LSFR_GEAR_SEL|NVREG_BKOFFCTRL_LSFR_MAIN_SEL|NVREG_BKOFFCTRL_LSFR_GEARBF_ENABLE; temp |= combinedSeed & NVREG_BKOFFCTRL_SEED_MASK; temp |= combinedSeed >> NVREG_BKOFFCTRL_GEAR; writel(temp,base + NvRegBackOffControl); /* Setup seeds for all gear LFSRs. */ - get_random_bytes(&seedset, sizeof(seedset)); - seedset = seedset % BACKOFF_SEEDSET_ROWS; - for (i = 1; i <= BACKOFF_SEEDSET_LFSRS; i++) + rdtscl(low); + seedset = low % BACKOFF_SEEDSET_ROWS; + for(i=1;i <= BACKOFF_SEEDSET_LFSRS;i++) { - temp = NVREG_BKOFFCTRL_DEFAULT | (i << NVREG_BKOFFCTRL_SELECT); - temp |= main_seedset[seedset][i-1] & 0x3ff; - temp |= ((gear_seedset[seedset][i-1] & 0x3ff) << NVREG_BKOFFCTRL_GEAR); + temp = NVREG_BKOFFCTRL_LSFR_GEAR_SEL|NVREG_BKOFFCTRL_LSFR_MAIN_SEL|NVREG_BKOFFCTRL_LSFR_GEARBF_ENABLE; + temp |= gMainSeedSet[seedset][i-1] & 0x3ff; + temp |= ((gGearSeedSet[seedset][i-1] & 0x3ff) << NVREG_BKOFFCTRL_GEAR); + temp |= i << NVREG_BKOFFCTRL_SELECT; writel(temp, base + NvRegBackOffControl); } + } /* * nv_start_xmit: dev->hard_start_xmit function - * Called with netif_tx_lock held. + * Called with dev->xmit_lock held. */ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u32 tx_flags = 0; u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET); unsigned int fragments = skb_shinfo(skb)->nr_frags; @@ -2007,20 +3015,17 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) struct ring_desc* start_tx; struct ring_desc* prev_tx; struct nv_skb_map* prev_tx_ctx; - unsigned long flags; + + dprintk("%s:%s\n",dev->name,__FUNCTION__); /* add fragments to entries count */ for (i = 0; i < fragments; i++) { entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) + - ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); + ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); } - empty_slots = nv_get_empty_tx_slots(np); + empty_slots = (u32)(np->tx_ring_size - ((np->tx_ring_size + (np->put_tx_ctx - np->get_tx_ctx)) % np->tx_ring_size)); if (unlikely(empty_slots <= entries)) { - spin_lock_irqsave(&np->lock, flags); - netif_stop_queue(dev); - np->tx_stop = 1; - spin_unlock_irqrestore(&np->lock, flags); return NETDEV_TX_BUSY; } @@ -2032,10 +3037,10 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) prev_tx_ctx = np->put_tx_ctx; bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? 
NV_TX2_TSO_MAX_SIZE : size; np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt, - PCI_DMA_TODEVICE); + PCI_DMA_TODEVICE); np->put_tx_ctx->dma_len = bcnt; - put_tx->buf = cpu_to_le32(np->put_tx_ctx->dma); - put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); + put_tx->PacketBuffer = cpu_to_le32(np->put_tx_ctx->dma); + put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags); tx_flags = np->tx_flags; offset += bcnt; @@ -2044,7 +3049,7 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) put_tx = np->first_tx.orig; if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) np->put_tx_ctx = np->first_tx_ctx; - } while (size); + } while(size); /* setup the fragments */ for (i = 0; i < fragments; i++) { @@ -2056,12 +3061,13 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) prev_tx = put_tx; prev_tx_ctx = np->put_tx_ctx; bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; + np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt, - PCI_DMA_TODEVICE); + PCI_DMA_TODEVICE); np->put_tx_ctx->dma_len = bcnt; - put_tx->buf = cpu_to_le32(np->put_tx_ctx->dma); - put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); + put_tx->PacketBuffer = cpu_to_le32(np->put_tx_ctx->dma); + put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags); offset += bcnt; size -= bcnt; if (unlikely(put_tx++ == np->last_tx.orig)) @@ -2072,36 +3078,30 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) } /* set last fragment flag */ - prev_tx->flaglen |= cpu_to_le32(tx_flags_extra); + prev_tx->FlagLen |= cpu_to_le32(tx_flags_extra); /* save skb in this slot's context area */ prev_tx_ctx->skb = skb; - if (skb_is_gso(skb)) +#ifdef NETIF_F_TSO +#if NVVER > FEDORA5 + if (skb_shinfo(skb)->gso_size) tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT); +#else + if (skb_shinfo(skb)->tso_size) + tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->tso_size << NV_TX2_TSO_SHIFT); +#endif else - tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ? - NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0; +#endif + tx_flags_extra = (skb->ip_summed == CHECKSUM_HW ? (NV_TX2_CHECKSUM_L3|NV_TX2_CHECKSUM_L4) : 0); - spin_lock_irqsave(&np->lock, flags); + spin_lock_irq(&np->lock); /* set tx flags */ - start_tx->flaglen |= cpu_to_le32(tx_flags | tx_flags_extra); + start_tx->FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra); np->put_tx.orig = put_tx; - spin_unlock_irqrestore(&np->lock, flags); - - dprintk(KERN_DEBUG "%s: nv_start_xmit: entries %d queued for transmission. tx_flags_extra: %x\n", - dev->name, entries, tx_flags_extra); - { - int j; - for (j=0; j<64; j++) { - if ((j%16) == 0) - dprintk("\n%03x:", j); - dprintk(" %02x", ((unsigned char*)skb->data)[j]); - } - dprintk("\n"); - } + spin_unlock_irq(&np->lock); dev->trans_start = jiffies; writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); @@ -2110,7 +3110,7 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev) static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u32 tx_flags = 0; u32 tx_flags_extra; unsigned int fragments = skb_shinfo(skb)->nr_frags; @@ -2118,27 +3118,25 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) u32 offset = 0; u32 bcnt; u32 size = skb->len-skb->data_len; - u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 
1 : 0); u32 empty_slots; struct ring_desc_ex* put_tx; struct ring_desc_ex* start_tx; struct ring_desc_ex* prev_tx; struct nv_skb_map* prev_tx_ctx; struct nv_skb_map* start_tx_ctx; - unsigned long flags; + + u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); + + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); /* add fragments to entries count */ for (i = 0; i < fragments; i++) { entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) + - ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); + ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0); } - empty_slots = nv_get_empty_tx_slots(np); + empty_slots = (u32)(np->tx_ring_size - ((np->tx_ring_size + (np->put_tx_ctx - np->get_tx_ctx)) % np->tx_ring_size)); if (unlikely(empty_slots <= entries)) { - spin_lock_irqsave(&np->lock, flags); - netif_stop_queue(dev); - np->tx_stop = 1; - spin_unlock_irqrestore(&np->lock, flags); return NETDEV_TX_BUSY; } @@ -2151,11 +3149,11 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) prev_tx_ctx = np->put_tx_ctx; bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size; np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt, - PCI_DMA_TODEVICE); + PCI_DMA_TODEVICE); np->put_tx_ctx->dma_len = bcnt; - put_tx->bufhigh = cpu_to_le32(dma_high(np->put_tx_ctx->dma)); - put_tx->buflow = cpu_to_le32(dma_low(np->put_tx_ctx->dma)); - put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); + put_tx->PacketBufferHigh = cpu_to_le64(np->put_tx_ctx->dma) >> 32; + put_tx->PacketBufferLow = cpu_to_le64(np->put_tx_ctx->dma) & 0x0FFFFFFFF; + put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags); tx_flags = NV_TX2_VALID; offset += bcnt; @@ -2164,8 +3162,7 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) put_tx = np->first_tx.ex; if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx)) np->put_tx_ctx = np->first_tx_ctx; - } while (size); - + } while(size); /* setup the fragments */ for (i = 0; i < fragments; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; @@ -2176,13 +3173,14 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) prev_tx = put_tx; prev_tx_ctx = np->put_tx_ctx; bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? 
NV_TX2_TSO_MAX_SIZE : size; + np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt, - PCI_DMA_TODEVICE); + PCI_DMA_TODEVICE); np->put_tx_ctx->dma_len = bcnt; - put_tx->bufhigh = cpu_to_le32(dma_high(np->put_tx_ctx->dma)); - put_tx->buflow = cpu_to_le32(dma_low(np->put_tx_ctx->dma)); - put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags); + put_tx->PacketBufferHigh = cpu_to_le64(np->put_tx_ctx->dma) >> 32; + put_tx->PacketBufferLow = cpu_to_le64(np->put_tx_ctx->dma) & 0x0FFFFFFFF; + put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags); offset += bcnt; size -= bcnt; if (unlikely(put_tx++ == np->last_tx.ex)) @@ -2193,28 +3191,34 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) } /* set last fragment flag */ - prev_tx->flaglen |= cpu_to_le32(NV_TX2_LASTPACKET); + prev_tx->FlagLen |= cpu_to_le32(NV_TX2_LASTPACKET); /* save skb in this slot's context area */ prev_tx_ctx->skb = skb; - if (skb_is_gso(skb)) +#ifdef NETIF_F_TSO +#if NVVER > FEDORA5 + if (skb_shinfo(skb)->gso_size) tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT); +#else + if (skb_shinfo(skb)->tso_size) + tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->tso_size << NV_TX2_TSO_SHIFT); +#endif else - tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ? - NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0; +#endif + tx_flags_extra = (skb->ip_summed == CHECKSUM_HW ? (NV_TX2_CHECKSUM_L3|NV_TX2_CHECKSUM_L4) : 0); /* vlan tag */ if (likely(!np->vlangrp)) { - start_tx->txvlan = 0; + start_tx->TxVlan = 0; } else { if (vlan_tx_tag_present(skb)) - start_tx->txvlan = cpu_to_le32(NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb)); + start_tx->TxVlan = cpu_to_le32(NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb)); else - start_tx->txvlan = 0; + start_tx->TxVlan = 0; } - spin_lock_irqsave(&np->lock, flags); + spin_lock_irq(&np->lock); if (np->tx_limit) { /* Limit the number of outstanding tx. Setup all fragments, but @@ -2237,36 +3241,25 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev) } /* set tx flags */ - start_tx->flaglen |= cpu_to_le32(tx_flags | tx_flags_extra); + start_tx->FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra); np->put_tx.ex = put_tx; - spin_unlock_irqrestore(&np->lock, flags); - - dprintk(KERN_DEBUG "%s: nv_start_xmit_optimized: entries %d queued for transmission. tx_flags_extra: %x\n", - dev->name, entries, tx_flags_extra); - { - int j; - for (j=0; j<64; j++) { - if ((j%16) == 0) - dprintk("\n%03x:", j); - dprintk(" %02x", ((unsigned char*)skb->data)[j]); - } - dprintk("\n"); - } + spin_unlock_irq(&np->lock); dev->trans_start = jiffies; writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); - return NETDEV_TX_OK; + return NETDEV_TX_OK; } static inline void nv_tx_flip_ownership(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); np->tx_pkts_in_progress--; if (np->tx_change_owner) { - np->tx_change_owner->first_tx_desc->flaglen |= - cpu_to_le32(NV_TX2_VALID); + u32 flaglen = le32_to_cpu(np->tx_change_owner->first_tx_desc->FlagLen); + flaglen |= NV_TX2_VALID; + np->tx_change_owner->first_tx_desc->FlagLen = cpu_to_le32(flaglen); np->tx_pkts_in_progress++; np->tx_change_owner = np->tx_change_owner->next_tx_ctx; @@ -2277,131 +3270,137 @@ static inline void nv_tx_flip_ownership(struct net_device *dev) } } + /* * nv_tx_done: check for completed packets, release the skbs. * * Caller must own np->lock. 
*/ -static void nv_tx_done(struct net_device *dev) +static inline unsigned int nv_tx_done(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); - u32 flags; - struct ring_desc* orig_get_tx = np->get_tx.orig; + struct fe_priv *np = get_nvpriv(dev); + u32 Flags; + struct ring_desc* put_tx = np->put_tx.orig; + unsigned int tx_packets_cnt = 0; - while ((np->get_tx.orig != np->put_tx.orig) && - !((flags = le32_to_cpu(np->get_tx.orig->flaglen)) & NV_TX_VALID)) { + dprintk("%s:%s\n",dev->name,__FUNCTION__); - dprintk(KERN_DEBUG "%s: nv_tx_done: flags 0x%x.\n", - dev->name, flags); + while ((np->get_tx.orig != put_tx) && + !((Flags = le32_to_cpu(np->get_tx.orig->FlagLen)) & NV_TX_VALID)) { + dprintk(KERN_DEBUG "%s: nv_tx_done:NVLAN tx done\n", dev->name); pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma, - np->get_tx_ctx->dma_len, - PCI_DMA_TODEVICE); + np->get_tx_ctx->dma_len, + PCI_DMA_TODEVICE); np->get_tx_ctx->dma = 0; if (np->desc_ver == DESC_VER_1) { - if (flags & NV_TX_LASTPACKET) { - if (flags & NV_TX_ERROR) { - if (flags & NV_TX_UNDERFLOW) - dev->stats.tx_fifo_errors++; - if (flags & NV_TX_CARRIERLOST) - dev->stats.tx_carrier_errors++; - if ((flags & NV_TX_RETRYERROR) && !(flags & NV_TX_RETRYCOUNT_MASK)) + if (Flags & NV_TX_LASTPACKET) { + if (Flags & NV_TX_ERROR) { + if (Flags & NV_TX_UNDERFLOW) + np->stats.tx_fifo_errors++; + if (Flags & NV_TX_CARRIERLOST) + np->stats.tx_carrier_errors++; + if((Flags & NV_TX_RETRYERROR) && ((Flags & NV_TX_TRC_TD_MASK)==0)) nv_legacybackoff_reseed(dev); - dev->stats.tx_errors++; + + np->stats.tx_errors++; } else { - dev->stats.tx_packets++; - dev->stats.tx_bytes += np->get_tx_ctx->skb->len; + np->stats.tx_packets++; + np->stats.tx_bytes += np->get_tx_ctx->skb->len; + tx_packets_cnt++; } dev_kfree_skb_any(np->get_tx_ctx->skb); np->get_tx_ctx->skb = NULL; + } } else { - if (flags & NV_TX2_LASTPACKET) { - if (flags & NV_TX2_ERROR) { - if (flags & NV_TX2_UNDERFLOW) - dev->stats.tx_fifo_errors++; - if (flags & NV_TX2_CARRIERLOST) - dev->stats.tx_carrier_errors++; - if ((flags & NV_TX2_RETRYERROR) && !(flags & NV_TX2_RETRYCOUNT_MASK)) + if (Flags & NV_TX2_LASTPACKET) { + if (Flags & NV_TX2_ERROR) { + if (Flags & NV_TX2_UNDERFLOW) + np->stats.tx_fifo_errors++; + if (Flags & NV_TX2_CARRIERLOST) + np->stats.tx_carrier_errors++; + if((Flags & NV_TX2_RETRYERROR) && ((Flags & NV_TX2_TRC_TD_MASK)==0)) nv_legacybackoff_reseed(dev); - dev->stats.tx_errors++; + np->stats.tx_errors++; } else { - dev->stats.tx_packets++; - dev->stats.tx_bytes += np->get_tx_ctx->skb->len; - } + np->stats.tx_packets++; + np->stats.tx_bytes += np->get_tx_ctx->skb->len; + tx_packets_cnt++; + } dev_kfree_skb_any(np->get_tx_ctx->skb); np->get_tx_ctx->skb = NULL; } } + if (unlikely(np->get_tx.orig++ == np->last_tx.orig)) np->get_tx.orig = np->first_tx.orig; if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx)) np->get_tx_ctx = np->first_tx_ctx; } - if (unlikely((np->tx_stop == 1) && (np->get_tx.orig != orig_get_tx))) { - np->tx_stop = 0; - netif_wake_queue(dev); - } + return tx_packets_cnt; } -static void nv_tx_done_optimized(struct net_device *dev, int limit) +static inline unsigned int nv_tx_done_optimized(struct net_device *dev, int max_work) { - struct fe_priv *np = netdev_priv(dev); - u32 flags; - struct ring_desc_ex* orig_get_tx = np->get_tx.ex; - - while ((np->get_tx.ex != np->put_tx.ex) && - !((flags = le32_to_cpu(np->get_tx.ex->flaglen)) & NV_TX_VALID) && - (limit-- > 0)) { + struct fe_priv *np = get_nvpriv(dev); + u32 Flags; + struct ring_desc_ex* put_tx = np->put_tx.ex; + 
unsigned int tx_packets_cnt = 0; - dprintk(KERN_DEBUG "%s: nv_tx_done_optimized: flags 0x%x.\n", - dev->name, flags); + while ((np->get_tx.ex != put_tx) && + !((Flags = le32_to_cpu(np->get_tx.ex->FlagLen)) & NV_TX_VALID) && + (max_work-- > 0)) { + dprintk(KERN_DEBUG "%s: nv_tx_done_optimized:NVLAN tx done\n", dev->name); pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma, - np->get_tx_ctx->dma_len, - PCI_DMA_TODEVICE); + np->get_tx_ctx->dma_len, + PCI_DMA_TODEVICE); np->get_tx_ctx->dma = 0; - if (flags & NV_TX2_LASTPACKET) { - if (!(flags & NV_TX2_ERROR)) - dev->stats.tx_packets++; - else { - if ((flags & NV_TX2_RETRYERROR) && !(flags & NV_TX2_RETRYCOUNT_MASK)) { - if (np->driver_data & DEV_HAS_GEAR_MODE) - nv_gear_backoff_reseed(dev); - else + if (Flags & NV_TX2_LASTPACKET) { + if (!(Flags & NV_TX2_ERROR)) { + np->stats.tx_packets++; + tx_packets_cnt++; + }else{ + if((Flags & NV_TX2_RETRYERROR) && ((Flags & NV_TX2_TRC_TD_MASK)==0)){ + if(!(np->driver_data & DEV_HAS_GEAR_MODE)) nv_legacybackoff_reseed(dev); + else + nv_gear_backoff_reseed(dev); } } - dev_kfree_skb_any(np->get_tx_ctx->skb); np->get_tx_ctx->skb = NULL; - if (np->tx_limit) { + if(np->tx_limit){ nv_tx_flip_ownership(dev); } } + if (unlikely(np->get_tx.ex++ == np->last_tx.ex)) np->get_tx.ex = np->first_tx.ex; if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx)) np->get_tx_ctx = np->first_tx_ctx; } - if (unlikely((np->tx_stop == 1) && (np->get_tx.ex != orig_get_tx))) { - np->tx_stop = 0; - netif_wake_queue(dev); - } + return tx_packets_cnt; } /* * nv_tx_timeout: dev->tx_timeout function - * Called with netif_tx_lock held. + * Called with dev->xmit_lock held. + * */ static void nv_tx_timeout(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 status; + unsigned long flags; + + if (!netif_running(dev)) + return; if (np->msi_flags & NV_MSI_X_ENABLED) status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK; @@ -2413,8 +3412,15 @@ static void nv_tx_timeout(struct net_device *dev) { int i; - printk(KERN_INFO "%s: Ring at %lx\n", - dev->name, (unsigned long)np->ring_addr); + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + printk(KERN_INFO "%s: Ring at %lx: get %lx put %lx\n", + dev->name, (unsigned long)np->tx_ring.orig, + (unsigned long)np->get_tx.orig, (unsigned long)np->put_tx.orig); + } else { + printk(KERN_INFO "%s: Ring at %lx: get %lx put %lx\n", + dev->name, (unsigned long)np->tx_ring.ex, + (unsigned long)np->get_tx.ex, (unsigned long)np->put_tx.ex); + } printk(KERN_INFO "%s: Dumping tx registers\n", dev->name); for (i=0;i<=np->register_size;i+= 32) { printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n", @@ -2426,43 +3432,43 @@ static void nv_tx_timeout(struct net_device *dev) } printk(KERN_INFO "%s: Dumping tx ring\n", dev->name); for (i=0;i<np->tx_ring_size;i+= 4) { - if (!nv_optimized(np)) { + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n", - i, - le32_to_cpu(np->tx_ring.orig[i].buf), - le32_to_cpu(np->tx_ring.orig[i].flaglen), - le32_to_cpu(np->tx_ring.orig[i+1].buf), - le32_to_cpu(np->tx_ring.orig[i+1].flaglen), - le32_to_cpu(np->tx_ring.orig[i+2].buf), - le32_to_cpu(np->tx_ring.orig[i+2].flaglen), - le32_to_cpu(np->tx_ring.orig[i+3].buf), - le32_to_cpu(np->tx_ring.orig[i+3].flaglen)); + i, + le32_to_cpu(np->tx_ring.orig[i].PacketBuffer), + le32_to_cpu(np->tx_ring.orig[i].FlagLen), + 
le32_to_cpu(np->tx_ring.orig[i+1].PacketBuffer), + le32_to_cpu(np->tx_ring.orig[i+1].FlagLen), + le32_to_cpu(np->tx_ring.orig[i+2].PacketBuffer), + le32_to_cpu(np->tx_ring.orig[i+2].FlagLen), + le32_to_cpu(np->tx_ring.orig[i+3].PacketBuffer), + le32_to_cpu(np->tx_ring.orig[i+3].FlagLen)); } else { printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n", - i, - le32_to_cpu(np->tx_ring.ex[i].bufhigh), - le32_to_cpu(np->tx_ring.ex[i].buflow), - le32_to_cpu(np->tx_ring.ex[i].flaglen), - le32_to_cpu(np->tx_ring.ex[i+1].bufhigh), - le32_to_cpu(np->tx_ring.ex[i+1].buflow), - le32_to_cpu(np->tx_ring.ex[i+1].flaglen), - le32_to_cpu(np->tx_ring.ex[i+2].bufhigh), - le32_to_cpu(np->tx_ring.ex[i+2].buflow), - le32_to_cpu(np->tx_ring.ex[i+2].flaglen), - le32_to_cpu(np->tx_ring.ex[i+3].bufhigh), - le32_to_cpu(np->tx_ring.ex[i+3].buflow), - le32_to_cpu(np->tx_ring.ex[i+3].flaglen)); + i, + le32_to_cpu(np->tx_ring.ex[i].PacketBufferHigh), + le32_to_cpu(np->tx_ring.ex[i].PacketBufferLow), + le32_to_cpu(np->tx_ring.ex[i].FlagLen), + le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferHigh), + le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferLow), + le32_to_cpu(np->tx_ring.ex[i+1].FlagLen), + le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferHigh), + le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferLow), + le32_to_cpu(np->tx_ring.ex[i+2].FlagLen), + le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferHigh), + le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferLow), + le32_to_cpu(np->tx_ring.ex[i+3].FlagLen)); } } } - spin_lock_irq(&np->lock); + spin_lock_irqsave(&np->lock,flags); /* 1) stop tx engine */ nv_stop_tx(dev); /* 2) check that the packets were not sent already: */ - if (!nv_optimized(np)) + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) nv_tx_done(dev); else nv_tx_done_optimized(dev, np->tx_ring_size); @@ -2471,15 +3477,19 @@ static void nv_tx_timeout(struct net_device *dev) if (np->get_tx_ctx != np->put_tx_ctx) { printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name); nv_drain_tx(dev); - nv_init_tx(dev); + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) + np->get_tx.orig = np->put_tx.orig = np->first_tx.orig; + else + np->get_tx.ex = np->put_tx.ex = np->first_tx.ex; + np->get_tx_ctx = np->put_tx_ctx = np->first_tx_ctx; setup_hw_rings(dev, NV_SETUP_TX_RING); } netif_wake_queue(dev); - /* 4) restart tx engine */ nv_start_tx(dev); - spin_unlock_irq(&np->lock); + + spin_unlock_irqrestore(&np->lock,flags); } /* @@ -2492,7 +3502,7 @@ static int nv_getlen(struct net_device *dev, void *packet, int datalen) int protolen; /* length as stored in the proto field */ /* 1) calculate len according to header */ - if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == htons(ETH_P_8021Q)) { + if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == __constant_htons(ETH_P_8021Q)) { protolen = ntohs( ((struct vlan_ethhdr *)packet)->h_vlan_encapsulated_proto ); hdrlen = VLAN_HLEN; } else { @@ -2500,7 +3510,7 @@ static int nv_getlen(struct net_device *dev, void *packet, int datalen) hdrlen = ETH_HLEN; } dprintk(KERN_DEBUG "%s: nv_getlen: datalen %d, protolen %d, hdrlen %d\n", - dev->name, datalen, protolen, hdrlen); + dev->name, datalen, protolen, hdrlen); if (protolen > ETH_DATA_LEN) return datalen; /* Value in proto field not a len, no checks possible */ @@ -2535,35 +3545,28 @@ static int nv_getlen(struct net_device *dev, void *packet, int datalen) } } -static int nv_rx_process(struct net_device *dev, int limit) +static inline unsigned int nv_rx_process(struct net_device *dev,int 
max_work) { - struct fe_priv *np = netdev_priv(dev); - u32 flags; - int rx_work = 0; + struct fe_priv *np = get_nvpriv(dev); + u32 Flags; struct sk_buff *skb; int len; + unsigned int rx_processed_cnt = 0; + dprintk("%s:%s\n",dev->name,__FUNCTION__); while((np->get_rx.orig != np->put_rx.orig) && - !((flags = le32_to_cpu(np->get_rx.orig->flaglen)) & NV_RX_AVAIL) && - (rx_work < limit)) { + !((Flags = le32_to_cpu(np->get_rx.orig->FlagLen)) & NV_RX_AVAIL) && (max_work-- > 0)) { - dprintk(KERN_DEBUG "%s: nv_rx_process: flags 0x%x.\n", - dev->name, flags); - - /* - * the packet is for us - immediately tear down the pci mapping. - * TODO: check if a prefetch of the first cacheline improves - * the performance. - */ pci_unmap_single(np->pci_dev, np->get_rx_ctx->dma, np->get_rx_ctx->dma_len, PCI_DMA_FROMDEVICE); + skb = np->get_rx_ctx->skb; np->get_rx_ctx->skb = NULL; { int j; - dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",flags); + dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",Flags); for (j=0; j<64; j++) { if ((j%16) == 0) dprintk("\n%03x:", j); @@ -2571,34 +3574,35 @@ static int nv_rx_process(struct net_device *dev, int limit) } dprintk("\n"); } - /* look at what we actually got: */ + if (np->desc_ver == DESC_VER_1) { - if (likely(flags & NV_RX_DESCRIPTORVALID)) { - len = flags & LEN_MASK_V1; - if (unlikely(flags & NV_RX_ERROR)) { - if (flags & NV_RX_ERROR4) { + + if (likely(Flags & NV_RX_DESCRIPTORVALID)) { + len = Flags & LEN_MASK_V1; + if (unlikely(Flags & NV_RX_ERROR)) { + if ((Flags & NV_RX_ERROR_MASK) == NV_RX_ERROR4) { len = nv_getlen(dev, skb->data, len); - if (len < 0) { - dev->stats.rx_errors++; + if (len < 0 || len > np->rx_buf_sz) { + np->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } } /* framing errors are soft errors */ - else if (flags & NV_RX_FRAMINGERR) { - if (flags & NV_RX_SUBSTRACT1) { + else if ((Flags & NV_RX_ERROR_MASK) == NV_RX_FRAMINGERR) { + if (Flags & NV_RX_SUBSTRACT1) { len--; } } /* the rest are hard errors */ else { - if (flags & NV_RX_MISSEDFRAME) - dev->stats.rx_missed_errors++; - if (flags & NV_RX_CRCERR) - dev->stats.rx_crc_errors++; - if (flags & NV_RX_OVERFLOW) - dev->stats.rx_over_errors++; - dev->stats.rx_errors++; + if (Flags & NV_RX_MISSEDFRAME) + np->stats.rx_missed_errors++; + if (Flags & NV_RX_CRCERR) + np->stats.rx_crc_errors++; + if (Flags & NV_RX_OVERFLOW) + np->stats.rx_over_errors++; + np->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } @@ -2608,118 +3612,109 @@ static int nv_rx_process(struct net_device *dev, int limit) goto next_pkt; } } else { - if (likely(flags & NV_RX2_DESCRIPTORVALID)) { - len = flags & LEN_MASK_V2; - if (unlikely(flags & NV_RX2_ERROR)) { - if (flags & NV_RX2_ERROR4) { + if (likely(Flags & NV_RX2_DESCRIPTORVALID)) { + len = Flags & LEN_MASK_V2; + if (unlikely(Flags & NV_RX2_ERROR)) { + if ((Flags & NV_RX2_ERROR_MASK) == NV_RX2_ERROR4) { len = nv_getlen(dev, skb->data, len); - if (len < 0) { - dev->stats.rx_errors++; + if (len < 0 || len > np->rx_buf_sz) { + np->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } } /* framing errors are soft errors */ - else if (flags & NV_RX2_FRAMINGERR) { - if (flags & NV_RX2_SUBSTRACT1) { + else if ((Flags & NV_RX2_ERROR_MASK) == NV_RX2_FRAMINGERR) { + if (Flags & NV_RX2_SUBSTRACT1) { len--; } } /* the rest are hard errors */ else { - if (flags & NV_RX2_CRCERR) - dev->stats.rx_crc_errors++; - if (flags & NV_RX2_OVERFLOW) - dev->stats.rx_over_errors++; - dev->stats.rx_errors++; + if (Flags & NV_RX2_CRCERR) + np->stats.rx_crc_errors++; + if (Flags & 
NV_RX2_OVERFLOW) + np->stats.rx_over_errors++; + np->stats.rx_errors++; dev_kfree_skb(skb); goto next_pkt; } } - if (((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_TCP) || /*ip and tcp */ - ((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_UDP)) /*ip and udp */ + if (((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_TCP) || ((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_UDP)) + /*ip and tcp or udp */ skb->ip_summed = CHECKSUM_UNNECESSARY; } else { dev_kfree_skb(skb); goto next_pkt; } } + /* got a valid packet - forward it to the network core */ + dprintk(KERN_DEBUG "%s: nv_rx_process:NVLAN rx done\n", dev->name); skb_put(skb, len); skb->protocol = eth_type_trans(skb, dev); - dprintk(KERN_DEBUG "%s: nv_rx_process: %d bytes, proto %d accepted.\n", - dev->name, len, skb->protocol); -#ifdef CONFIG_FORCEDETH_NAPI - netif_receive_skb(skb); -#else - netif_rx(skb); -#endif + + if(napi) + netif_receive_skb(skb); + else + netif_rx(skb); + dev->last_rx = jiffies; - dev->stats.rx_packets++; - dev->stats.rx_bytes += len; + np->stats.rx_packets++; + np->stats.rx_bytes += len; + rx_processed_cnt++; next_pkt: if (unlikely(np->get_rx.orig++ == np->last_rx.orig)) np->get_rx.orig = np->first_rx.orig; if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx)) np->get_rx_ctx = np->first_rx_ctx; - - rx_work++; } - - return rx_work; + return rx_processed_cnt; } -static int nv_rx_process_optimized(struct net_device *dev, int limit) +static inline int nv_rx_process_optimized(struct net_device *dev, int max_work) { - struct fe_priv *np = netdev_priv(dev); - u32 flags; + struct fe_priv *np = get_nvpriv(dev); + u32 Flags; u32 vlanflags = 0; - int rx_work = 0; + u32 rx_processed_cnt = 0; struct sk_buff *skb; int len; while((np->get_rx.ex != np->put_rx.ex) && - !((flags = le32_to_cpu(np->get_rx.ex->flaglen)) & NV_RX2_AVAIL) && - (rx_work < limit)) { - - dprintk(KERN_DEBUG "%s: nv_rx_process_optimized: flags 0x%x.\n", - dev->name, flags); + !((Flags = le32_to_cpu(np->get_rx.ex->FlagLen)) & NV_RX2_AVAIL) && + (max_work-- > 0)) { - /* - * the packet is for us - immediately tear down the pci mapping. - * TODO: check if a prefetch of the first cacheline improves - * the performance. 
- */ pci_unmap_single(np->pci_dev, np->get_rx_ctx->dma, np->get_rx_ctx->dma_len, PCI_DMA_FROMDEVICE); + skb = np->get_rx_ctx->skb; np->get_rx_ctx->skb = NULL; - { - int j; - dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",flags); - for (j=0; j<64; j++) { - if ((j%16) == 0) - dprintk("\n%03x:", j); - dprintk(" %02x", ((unsigned char*)skb->data)[j]); - } - dprintk("\n"); - } /* look at what we actually got: */ - if (likely(flags & NV_RX2_DESCRIPTORVALID)) { - len = flags & LEN_MASK_V2; - if (unlikely(flags & NV_RX2_ERROR)) { - if (flags & NV_RX2_ERROR4) { + if (likely(Flags & NV_RX2_DESCRIPTORVALID)) { + len = Flags & LEN_MASK_V2; + + if (len < 0 || len > np->rx_buf_sz) { + printk(KERN_DEBUG "forcedeth:the package size is too large,flags %x\n",Flags); + np->rx_len_errors++; + dev_kfree_skb(skb); + goto next_pkt; + } + + if (unlikely(Flags & NV_RX2_ERROR)) { + if ((Flags & NV_RX2_ERROR_MASK) == NV_RX2_ERROR4) { len = nv_getlen(dev, skb->data, len); - if (len < 0) { + if (len < 0 || len > np->rx_buf_sz) { + np->rx_len_errors++; dev_kfree_skb(skb); goto next_pkt; } } /* framing errors are soft errors */ - else if (flags & NV_RX2_FRAMINGERR) { - if (flags & NV_RX2_SUBSTRACT1) { + else if ((Flags & NV_RX2_ERROR_MASK) == NV_RX2_FRAMINGERR) { + if (Flags & NV_RX2_SUBSTRACT1) { len--; } } @@ -2730,46 +3725,42 @@ static int nv_rx_process_optimized(struct net_device *dev, int limit) } } - if (((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_TCP) || /*ip and tcp */ - ((flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_UDP)) /*ip and udp */ - skb->ip_summed = CHECKSUM_UNNECESSARY; + if (likely(np->rx_csum)) { + if (likely(((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_TCP) || ((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_UDP))) + /*ip and tcp or udp */ + skb->ip_summed = CHECKSUM_UNNECESSARY; + } + dprintk(KERN_DEBUG "%s: nv_rx_process_optimized:NVLAN rx done\n", dev->name); /* got a valid packet - forward it to the network core */ skb_put(skb, len); skb->protocol = eth_type_trans(skb, dev); prefetch(skb->data); - dprintk(KERN_DEBUG "%s: nv_rx_process_optimized: %d bytes, proto %d accepted.\n", - dev->name, len, skb->protocol); - if (likely(!np->vlangrp)) { -#ifdef CONFIG_FORCEDETH_NAPI - netif_receive_skb(skb); -#else - netif_rx(skb); -#endif + if(napi) + netif_receive_skb(skb); + else + netif_rx(skb); } else { - vlanflags = le32_to_cpu(np->get_rx.ex->buflow); + vlanflags = le32_to_cpu(np->get_rx.ex->PacketBufferLow); if (vlanflags & NV_RX3_VLAN_TAG_PRESENT) { -#ifdef CONFIG_FORCEDETH_NAPI - vlan_hwaccel_receive_skb(skb, np->vlangrp, - vlanflags & NV_RX3_VLAN_TAG_MASK); -#else - vlan_hwaccel_rx(skb, np->vlangrp, - vlanflags & NV_RX3_VLAN_TAG_MASK); -#endif + if(napi) + vlan_hwaccel_receive_skb(skb, np->vlangrp, + vlanflags & NV_RX3_VLAN_TAG_MASK); + else + vlan_hwaccel_rx(skb, np->vlangrp, vlanflags & NV_RX3_VLAN_TAG_MASK); } else { -#ifdef CONFIG_FORCEDETH_NAPI - netif_receive_skb(skb); -#else - netif_rx(skb); -#endif + if(napi) + netif_receive_skb(skb); + else + netif_rx(skb); } } - dev->last_rx = jiffies; - dev->stats.rx_packets++; - dev->stats.rx_bytes += len; + np->stats.rx_packets++; + np->stats.rx_bytes += len; + rx_processed_cnt++; } else { dev_kfree_skb(skb); } @@ -2778,16 +3769,13 @@ next_pkt: np->get_rx.ex = np->first_rx.ex; if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx)) np->get_rx_ctx = np->first_rx_ctx; - - rx_work++; } - - return rx_work; + return rx_processed_cnt; } static void set_bufsize(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct 
fe_priv *np = get_nvpriv(dev); if (dev->mtu <= ETH_DATA_LEN) np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS; @@ -2801,7 +3789,7 @@ static void set_bufsize(struct net_device *dev) */ static int nv_change_mtu(struct net_device *dev, int new_mtu) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int old_mtu; if (new_mtu < 64 || new_mtu > np->pkt_limit) @@ -2825,14 +3813,21 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu) * guessed, there is probably a simpler approach. * Changing the MTU is a rare event, it shouldn't matter. */ + nv_disable_hw_interrupts(dev,np->irqmask); nv_disable_irq(dev); +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); +#else + spin_lock_bh(&dev->xmit_lock); +#endif spin_lock(&np->lock); /* stop engines */ - nv_stop_rxtx(dev); + nv_stop_rx(dev); + nv_stop_tx(dev); nv_txrx_reset(dev); /* drain rx queue */ - nv_drain_rxtx(dev); + nv_drain_rx(dev); + nv_drain_tx(dev); /* reinit driver view of the rx queue */ set_bufsize(dev); if (nv_init_ring(dev)) { @@ -2843,16 +3838,22 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu) writel(np->rx_buf_sz, base + NvRegOffloadConfig); setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), - base + NvRegRingSizes); + base + NvRegRingSizes); pci_push(base); writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); pci_push(base); /* restart rx engine */ - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); spin_unlock(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + spin_unlock_bh(&dev->xmit_lock); +#endif nv_enable_irq(dev); + nv_enable_hw_interrupts(dev,np->irqmask); } return 0; } @@ -2863,11 +3864,11 @@ static void nv_copy_mac_to_hw(struct net_device *dev) u32 mac[2]; mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) + - (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24); + (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24); mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8); - writel(mac[0], base + NvRegMacAddrA); writel(mac[1], base + NvRegMacAddrB); + } /* @@ -2876,17 +3877,22 @@ static void nv_copy_mac_to_hw(struct net_device *dev) */ static int nv_set_mac_address(struct net_device *dev, void *addr) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); struct sockaddr *macaddr = (struct sockaddr*)addr; - if (!is_valid_ether_addr(macaddr->sa_data)) + if(!is_valid_ether_addr(macaddr->sa_data)) return -EADDRNOTAVAIL; + dprintk("%s:%s\n",dev->name,__FUNCTION__); /* synchronized against open : rtnl_lock() held by caller */ memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN); if (netif_running(dev)) { +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); +#else + spin_lock_bh(&dev->xmit_lock); +#endif spin_lock_irq(&np->lock); /* stop rx engine */ @@ -2898,7 +3904,11 @@ static int nv_set_mac_address(struct net_device *dev, void *addr) /* restart rx engine */ nv_start_rx(dev); spin_unlock_irq(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + spin_unlock_bh(&dev->xmit_lock); +#endif } else { nv_copy_mac_to_hw(dev); } @@ -2907,11 +3917,11 @@ static int nv_set_mac_address(struct net_device *dev, void *addr) /* * nv_set_multicast: dev->set_multicast function - * Called with netif_tx_lock held. + * Called with dev->xmit_lock held. 
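/*
 * The NVVER/FEDORA5 conditionals in nv_change_mtu() and nv_set_mac_address()
 * above exist so that one source file builds against several kernel
 * generations: netif_tx_lock_bh() only appeared in newer kernels (roughly
 * 2.6.18), older ones take the raw dev->xmit_lock directly.  A hedged sketch
 * of the usual shim, keyed off LINUX_VERSION_CODE rather than the vendor's
 * private NVVER macro:
 */
#include <linux/version.h>
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,18)
#define NV_TX_LOCK_BH(dev)    netif_tx_lock_bh(dev)
#define NV_TX_UNLOCK_BH(dev)  netif_tx_unlock_bh(dev)
#else
#define NV_TX_LOCK_BH(dev)    spin_lock_bh(&(dev)->xmit_lock)
#define NV_TX_UNLOCK_BH(dev)  spin_unlock_bh(&(dev)->xmit_lock)
#endif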
*/ static void nv_set_multicast(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 addr[2]; u32 mask[2]; @@ -2921,6 +3931,7 @@ static void nv_set_multicast(struct net_device *dev) memset(mask, 0, sizeof(mask)); if (dev->flags & IFF_PROMISC) { + dprintk(KERN_DEBUG "%s: Promiscuous mode enabled.\n", dev->name); pff |= NVREG_PFF_PROMISC; } else { pff |= NVREG_PFF_MYADDR; @@ -2938,8 +3949,8 @@ static void nv_set_multicast(struct net_device *dev) walk = dev->mc_list; while (walk != NULL) { u32 a, b; - a = le32_to_cpu(*(__le32 *) walk->dmi_addr); - b = le16_to_cpu(*(__le16 *) (&walk->dmi_addr[4])); + a = le32_to_cpu(*(u32 *) walk->dmi_addr); + b = le16_to_cpu(*(u16 *) (&walk->dmi_addr[4])); alwaysOn[0] &= a; alwaysOff[0] &= ~a; alwaysOn[1] &= b; @@ -2966,15 +3977,16 @@ static void nv_set_multicast(struct net_device *dev) writel(mask[1], base + NvRegMulticastMaskB); writel(pff, base + NvRegPacketFilterFlags); dprintk(KERN_INFO "%s: reconfiguration for multicast lists.\n", - dev->name); + dev->name); nv_start_rx(dev); spin_unlock_irq(&np->lock); } static void nv_update_pause(struct net_device *dev, u32 pause_flags) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); + u32 pause_enable; np->pause_flags &= ~(NV_PAUSEFRAME_TX_ENABLE | NV_PAUSEFRAME_RX_ENABLE); @@ -2990,21 +4002,39 @@ static void nv_update_pause(struct net_device *dev, u32 pause_flags) if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE) { u32 regmisc = readl(base + NvRegMisc1) & ~NVREG_MISC1_PAUSE_TX; if (pause_flags & NV_PAUSEFRAME_TX_ENABLE) { - u32 pause_enable = NVREG_TX_PAUSEFRAME_ENABLE_V1; - if (np->driver_data & DEV_HAS_PAUSEFRAME_TX_V2) + pause_enable = NVREG_TX_PAUSEFRAME_ENABLE_V1; + if(np->driver_data & DEV_HAS_PAUSEFRAME_TX_V2) pause_enable = NVREG_TX_PAUSEFRAME_ENABLE_V2; - if (np->driver_data & DEV_HAS_PAUSEFRAME_TX_V3) + if(np->driver_data & DEV_HAS_PAUSEFRAME_TX_V3) pause_enable = NVREG_TX_PAUSEFRAME_ENABLE_V3; - writel(pause_enable, base + NvRegTxPauseFrame); + writel(pause_enable , base + NvRegTxPauseFrame); writel(regmisc|NVREG_MISC1_PAUSE_TX, base + NvRegMisc1); np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE; } else { writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame); - writel(regmisc, base + NvRegMisc1); + writel(regmisc, base + NvRegMisc1); } } } +static inline void nv_change_m2pintf(struct net_device *dev,u32 phyreg) +{ + u8 __iomem *base = get_hwbase(dev); + int count=0; + + nv_stop_tx(dev); + nv_stop_rx(dev); + while(readl(base+NvRegTransmitterStatus) && (count++<100)) + udelay(1); + count=0; + while(readl(base+NvRegReceiverStatus) && (count++<100)) + udelay(1); + writel(phyreg, base + NvRegPhyInterface); + udelay(30); + nv_start_tx(dev); + nv_start_rx(dev); +} + /** * nv_update_linkspeed: Setup the MAC according to the link partner * @dev: Network device to be configured @@ -3018,7 +4048,7 @@ static void nv_update_pause(struct net_device *dev, u32 pause_flags) */ static int nv_update_linkspeed(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); int adv = 0; int lpa = 0; @@ -3028,8 +4058,7 @@ static int nv_update_linkspeed(struct net_device *dev) int mii_status; int retval = 0; u32 control_1000, status_1000, phyreg, pause_flags, txreg; - u32 txrxFlags = 0; - u32 phy_exp; + u32 txrxFlags = 0 ; /* BMSR_LSTATUS is latched, read it twice: * we want the current 
value. @@ -3046,7 +4075,7 @@ static int nv_update_linkspeed(struct net_device *dev) goto set_speed; } - if (np->autoneg == 0) { + if (np->autoneg == AUTONEG_DISABLE) { dprintk(KERN_DEBUG "%s: nv_update_linkspeed: autoneg off, PHY set to 0x%04x.\n", dev->name, np->fixed_mode); if (np->fixed_mode & LPA_100FULL) { @@ -3078,17 +4107,16 @@ static int nv_update_linkspeed(struct net_device *dev) adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ); dprintk(KERN_DEBUG "%s: nv_update_linkspeed: PHY advertises 0x%04x, lpa 0x%04x.\n", - dev->name, adv, lpa); - + dev->name, adv, lpa); retval = 1; if (np->gigabit == PHY_GIGABIT) { control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); status_1000 = mii_rw(dev, np->phyaddr, MII_STAT1000, MII_READ); if ((control_1000 & ADVERTISE_1000FULL) && - (status_1000 & LPA_1000FULL)) { + (status_1000 & LPA_1000FULL)) { dprintk(KERN_DEBUG "%s: nv_update_linkspeed: GBit ethernet detected.\n", - dev->name); + dev->name); newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_1000; newdup = 1; goto set_speed; @@ -3135,11 +4163,11 @@ set_speed: nv_stop_rx(dev); } + if (np->gigabit == PHY_GIGABIT) { phyreg = readl(base + NvRegSlotTime); phyreg &= ~(0x3FF00); - if (((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10) || - ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100)) + if (((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10)||((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100)) phyreg |= NVREG_SLOTTIME_10_100_FULL; else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_1000) phyreg |= NVREG_SLOTTIME_1000_FULL; @@ -3154,27 +4182,15 @@ set_speed: phyreg |= PHY_100; else if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000) phyreg |= PHY_1000; - writel(phyreg, base + NvRegPhyInterface); + nv_change_m2pintf(dev,phyreg); - phy_exp = mii_rw(dev, np->phyaddr, MII_EXPANSION, MII_READ) & EXPANSION_NWAY; /* autoneg capable */ if (phyreg & PHY_RGMII) { - if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000) { + if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000) txreg = NVREG_TX_DEFERRAL_RGMII_1000; - } else { - if (!phy_exp && !np->duplex && (np->driver_data & DEV_HAS_COLLISION_FIX)) { - if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_10) - txreg = NVREG_TX_DEFERRAL_RGMII_STRETCH_10; - else - txreg = NVREG_TX_DEFERRAL_RGMII_STRETCH_100; - } else { - txreg = NVREG_TX_DEFERRAL_RGMII_10_100; - } - } - } else { - if (!phy_exp && !np->duplex && (np->driver_data & DEV_HAS_COLLISION_FIX)) - txreg = NVREG_TX_DEFERRAL_MII_STRETCH; else - txreg = NVREG_TX_DEFERRAL_DEFAULT; + txreg = NVREG_TX_DEFERRAL_RGMII_10_100; + } else { + txreg = NVREG_TX_DEFERRAL_DEFAULT; } writel(txreg, base + NvRegTxDeferral); @@ -3187,53 +4203,61 @@ set_speed: txreg = NVREG_TX_WM_DESC2_3_DEFAULT; } writel(txreg, base + NvRegTxWatermark); - writel(NVREG_MISC1_FORCE | ( np->duplex ? 
0 : NVREG_MISC1_HD), - base + NvRegMisc1); + base + NvRegMisc1); pci_push(base); writel(np->linkspeed, base + NvRegLinkSpeed); pci_push(base); + if (optimization_mode == NV_OPTIMIZATION_MODE_CPU){ + + if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000) + np->swtimer_interval = NV_SW_TIMER_INTERVAL_1000_DEFAULT; + else + np->swtimer_interval = NV_SW_TIMER_INTERVAL_NON_1000_DEFAULT; + } + + pause_flags = 0; /* setup pause frame */ if (np->duplex != 0) { - if (np->autoneg && np->pause_flags & NV_PAUSEFRAME_AUTONEG) { + if (np->autoneg && (np->pause_flags & NV_PAUSEFRAME_AUTONEG)) { adv_pause = adv & (ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM); lpa_pause = lpa & (LPA_PAUSE_CAP| LPA_PAUSE_ASYM); switch (adv_pause) { - case ADVERTISE_PAUSE_CAP: - if (lpa_pause & LPA_PAUSE_CAP) { - pause_flags |= NV_PAUSEFRAME_RX_ENABLE; - if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) - pause_flags |= NV_PAUSEFRAME_TX_ENABLE; - } - break; - case ADVERTISE_PAUSE_ASYM: - if (lpa_pause == (LPA_PAUSE_CAP| LPA_PAUSE_ASYM)) - { - pause_flags |= NV_PAUSEFRAME_TX_ENABLE; - } - break; - case ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM: - if (lpa_pause & LPA_PAUSE_CAP) - { - pause_flags |= NV_PAUSEFRAME_RX_ENABLE; - if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) - pause_flags |= NV_PAUSEFRAME_TX_ENABLE; - } - if (lpa_pause == LPA_PAUSE_ASYM) - { - pause_flags |= NV_PAUSEFRAME_RX_ENABLE; - } - break; + case (ADVERTISE_PAUSE_CAP): + if (lpa_pause & LPA_PAUSE_CAP) { + pause_flags |= NV_PAUSEFRAME_RX_ENABLE; + if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) + pause_flags |= NV_PAUSEFRAME_TX_ENABLE; + } + break; + case (ADVERTISE_PAUSE_ASYM): + if (lpa_pause == (LPA_PAUSE_CAP| LPA_PAUSE_ASYM)) { + if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) + pause_flags |= NV_PAUSEFRAME_TX_ENABLE; + } + break; + case (ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM): + if (lpa_pause & LPA_PAUSE_CAP) { + pause_flags |= NV_PAUSEFRAME_RX_ENABLE; + if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) + pause_flags |= NV_PAUSEFRAME_TX_ENABLE; + } + if (lpa_pause == LPA_PAUSE_ASYM) + { + pause_flags |= NV_PAUSEFRAME_RX_ENABLE; + } + break; } } else { pause_flags = np->pause_flags; } } - nv_update_pause(dev, pause_flags); + nv_update_pause(dev, pause_flags); + if (txrxFlags & NV_RESTART_TX) nv_start_tx(dev); if (txrxFlags & NV_RESTART_RX) @@ -3248,12 +4272,14 @@ static void nv_linkchange(struct net_device *dev) if (!netif_carrier_ok(dev)) { netif_carrier_on(dev); printk(KERN_INFO "%s: link up.\n", dev->name); + nv_pmctrl2_gatecoretxrx(dev,GATE_OFF); nv_start_rx(dev); } } else { if (netif_carrier_ok(dev)) { netif_carrier_off(dev); printk(KERN_INFO "%s: link down.\n", dev->name); + nv_pmctrl2_gatecoretxrx(dev,GATE_ON); nv_stop_rx(dev); } } @@ -3273,29 +4299,53 @@ static void nv_link_irq(struct net_device *dev) dprintk(KERN_DEBUG "%s: link change notification done.\n", dev->name); } -static void nv_msi_workaround(struct fe_priv *np) +#define NV_MAX_QUIET_COUNT 5000 +#define NV_MAX_MODERATION_COUNT 2 +static inline void nv_change_interruptmode_ifneeded(struct net_device *dev,int processed) { + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); - /* Need to toggle the msi irq mask within the ethernet device, - * otherwise, future interrupts will not be detected. 
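/*
 * The adv_pause/lpa_pause switch above resolves 802.3x flow control from the
 * local advertisement and the link partner ability word.  The same decision,
 * pulled out as a small pure function that mirrors the logic visible in the
 * hunk (constant names here are illustrative only):
 */
#define FC_CAP   0x1    /* symmetric pause advertised */
#define FC_ASYM  0x2    /* asymmetric pause advertised */
#define FC_RX_EN 0x1
#define FC_TX_EN 0x2

static unsigned resolve_pause(unsigned local, unsigned partner, int tx_requested)
{
        unsigned flags = 0;

        switch (local & (FC_CAP | FC_ASYM)) {
        case FC_CAP:                    /* we only offer symmetric pause */
                if (partner & FC_CAP) {
                        flags |= FC_RX_EN;
                        if (tx_requested)
                                flags |= FC_TX_EN;
                }
                break;
        case FC_ASYM:                   /* we only want to send pause frames */
                if ((partner & (FC_CAP | FC_ASYM)) == (FC_CAP | FC_ASYM) &&
                    tx_requested)
                        flags |= FC_TX_EN;
                break;
        case FC_CAP | FC_ASYM:
                if (partner & FC_CAP) {
                        flags |= FC_RX_EN;
                        if (tx_requested)
                                flags |= FC_TX_EN;
                }
                if ((partner & (FC_CAP | FC_ASYM)) == FC_ASYM)
                        flags |= FC_RX_EN;      /* partner can only send pause */
                break;
        }
        return flags;
}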
- */ - if (np->msi_flags & NV_MSI_ENABLED) { - u8 __iomem *base = np->base; - - writel(0, base + NvRegMSIIrqMask); - writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask); + if(np->interrupt_moderation){ + if(processed > NV_MAX_MODERATION_COUNT){ + if(np->polling_mode==THROUGHPUT_POLL_INTR){ + /* transition to polling mode */ + SET_HW_TIMER_INTERVAL(base,np->swtimer_interval); + dprintk(KERN_DEBUG "transition to polling mode\n"); + np->polling_mode= POLL_INTR; + } + np->quietcount = 0; + }else{ + /* transition to throughput mode */ + if(np->polling_mode == THROUGHPUT_POLL_INTR) + return; + if (np->quietcount < NV_MAX_QUIET_COUNT){ + np->quietcount++; + }else{ + dprintk(KERN_DEBUG "transition to throughput mode\n"); + SET_HW_TIMER_INTERVAL(base,US_TO_SW_TIMER_INTERVAL(500000)); + np->polling_mode = THROUGHPUT_POLL_INTR; + } + } } } +#define TX_WORK_PER_LOOP 64 +#define RX_WORK_PER_LOOP 64 +#if NVVER < FEDORA7 +static irqreturn_t nv_nic_irq(int foo, void *data, struct pt_regs *regs) +#else static irqreturn_t nv_nic_irq(int foo, void *data) +#endif { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); - u32 events; + u32 events,mask; + unsigned int processed = 0,rx_processed; int i; - dprintk(KERN_DEBUG "%s: nv_nic_irq\n", dev->name); + dprintk("%s:%s\n",dev->name,__FUNCTION__); for (i=0; ; i++) { if (!(np->msi_flags & NV_MSI_X_ENABLED)) { @@ -3305,91 +4355,71 @@ static irqreturn_t nv_nic_irq(int foo, void *data) events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK; writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus); } + pci_push(base); dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events); - if (!(events & np->irqmask)) + mask = readl(base + NvRegIrqMask); + if (!(events & mask)) break; - nv_msi_workaround(np); + nv_msi_workaround(dev); spin_lock(&np->lock); - nv_tx_done(dev); + processed = nv_tx_done(dev); spin_unlock(&np->lock); -#ifdef CONFIG_FORCEDETH_NAPI - if (events & NVREG_IRQ_RX_ALL) { - netif_rx_schedule(dev, &np->napi); + if(napi){ + if (events & NVREG_IRQ_RX_ALL) { +#ifdef NV_NAPI_POLL_LIST + netif_rx_schedule(dev, &np->napi); +#else + netif_rx_schedule(dev); +#endif - /* Disable furthur receive irq's */ - spin_lock(&np->lock); - np->irqmask &= ~NVREG_IRQ_RX_ALL; + /* Disable furthur receive irq's */ + spin_lock(&np->lock); + np->irqmask &= ~NVREG_IRQ_RX_ALL; - if (np->msi_flags & NV_MSI_X_ENABLED) - writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); - else - writel(np->irqmask, base + NvRegIrqMask); - spin_unlock(&np->lock); - } -#else - if (nv_rx_process(dev, RX_WORK_PER_LOOP)) { - if (unlikely(nv_alloc_rx(dev))) { + if (np->msi_flags & NV_MSI_X_ENABLED) + writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); + else + writel(np->irqmask, base + NvRegIrqMask); + spin_unlock(&np->lock); + } + } else { + rx_processed = nv_rx_process(dev,RX_WORK_PER_LOOP); + processed += rx_processed; + if (nv_alloc_rx(dev)) { spin_lock(&np->lock); if (!np->in_shutdown) mod_timer(&np->oom_kick, jiffies + OOM_REFILL); spin_unlock(&np->lock); } } -#endif - if (unlikely(events & NVREG_IRQ_LINK)) { + + if (events & NVREG_IRQ_LINK) { spin_lock(&np->lock); nv_link_irq(dev); spin_unlock(&np->lock); } - if (unlikely(np->need_linktimer && time_after(jiffies, np->link_timeout))) { + if (np->need_linktimer && time_after(jiffies, np->link_timeout)) { spin_lock(&np->lock); nv_linkchange(dev); spin_unlock(&np->lock); np->link_timeout = jiffies + LINK_TIMEOUT; } - if (unlikely(events & 
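/*
 * nv_change_interruptmode_ifneeded() above is a small hysteresis: as soon as
 * one interrupt does real work (more than NV_MAX_MODERATION_COUNT packets)
 * the NIC is switched to timer-driven polling, and only after
 * NV_MAX_QUIET_COUNT consecutive quiet interrupts does it fall back to
 * per-packet ("throughput") interrupts.  The same state machine as plain C,
 * with the hardware timer programming stubbed out (names are illustrative):
 */
enum intr_mode { MODE_THROUGHPUT, MODE_POLLING };

struct moderation {
        enum intr_mode mode;
        unsigned quiet;                 /* consecutive low-work interrupts */
};

#define BUSY_THRESHOLD   2              /* NV_MAX_MODERATION_COUNT in the patch */
#define QUIET_THRESHOLD  5000           /* NV_MAX_QUIET_COUNT in the patch */

static void moderate(struct moderation *m, unsigned work_done)
{
        if (work_done > BUSY_THRESHOLD) {
                m->mode = MODE_POLLING;         /* driver: program sw timer interval */
                m->quiet = 0;
        } else if (m->mode == MODE_POLLING) {
                if (++m->quiet > QUIET_THRESHOLD)
                        m->mode = MODE_THROUGHPUT;      /* back to per-packet irqs */
        }
}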
(NVREG_IRQ_TX_ERR))) { + if (events & (NVREG_IRQ_TX_ERR)) { dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n", - dev->name, events); + dev->name, events); } - if (unlikely(events & (NVREG_IRQ_UNKNOWN))) { + if (events & (NVREG_IRQ_UNKNOWN)) { printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n", - dev->name, events); + dev->name, events); } - if (unlikely(events & NVREG_IRQ_RECOVER_ERROR)) { - spin_lock(&np->lock); - /* disable interrupts on the nic */ - if (!(np->msi_flags & NV_MSI_X_ENABLED)) - writel(0, base + NvRegIrqMask); - else - writel(np->irqmask, base + NvRegIrqMask); - pci_push(base); - if (!np->in_shutdown) { - np->nic_poll_irq = np->irqmask; - np->recover_error = 1; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); - } - spin_unlock(&np->lock); - break; - } - if (unlikely(i > max_interrupt_work)) { - spin_lock(&np->lock); - /* disable interrupts on the nic */ - if (!(np->msi_flags & NV_MSI_X_ENABLED)) - writel(0, base + NvRegIrqMask); - else - writel(np->irqmask, base + NvRegIrqMask); - pci_push(base); + nv_change_interruptmode_ifneeded(dev,processed); - if (!np->in_shutdown) { - np->nic_poll_irq = np->irqmask; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); - } - spin_unlock(&np->lock); - printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i); + if (i >= max_interrupt_work) { + dprintk(KERN_DEBUG "%s: too many iterations (%d) in %s.\n", dev->name, i,__FUNCTION__); break; } @@ -3399,22 +4429,20 @@ static irqreturn_t nv_nic_irq(int foo, void *data) return IRQ_RETVAL(i); } -/** - * All _optimized functions are used to help increase performance - * (reduce CPU and increase throughput). They use descripter version 3, - * compiler directives, and reduce memory accesses. 
- */ +#if NVVER < FEDORA7 +static irqreturn_t nv_nic_irq_optimized(int foo, void *data, struct pt_regs *regs) +#else static irqreturn_t nv_nic_irq_optimized(int foo, void *data) +#endif { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); - u32 events; - int i; - - dprintk(KERN_DEBUG "%s: nv_nic_irq_optimized\n", dev->name); + u32 events,mask; + unsigned int processed = 0 ,rx_processed = 0; + int i = 0; - for (i=0; ; i++) { + for(i=0;;i++){ if (!(np->msi_flags & NV_MSI_X_ENABLED)) { events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK; writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); @@ -3422,40 +4450,47 @@ static irqreturn_t nv_nic_irq_optimized(int foo, void *data) events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK; writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus); } - dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events); - if (!(events & np->irqmask)) + + mask = readl(base + NvRegIrqMask); + if (!(events & mask)) break; - nv_msi_workaround(np); + nv_msi_workaround(dev); spin_lock(&np->lock); - nv_tx_done_optimized(dev, TX_WORK_PER_LOOP); + processed = nv_tx_done_optimized(dev, TX_WORK_PER_LOOP); spin_unlock(&np->lock); -#ifdef CONFIG_FORCEDETH_NAPI - if (events & NVREG_IRQ_RX_ALL) { - netif_rx_schedule(dev, &np->napi); - - /* Disable furthur receive irq's */ - spin_lock(&np->lock); - np->irqmask &= ~NVREG_IRQ_RX_ALL; - - if (np->msi_flags & NV_MSI_X_ENABLED) - writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); - else - writel(np->irqmask, base + NvRegIrqMask); - spin_unlock(&np->lock); - } + if(napi){ + if (events & NVREG_IRQ_RX_ALL) { +#ifdef NV_NAPI_POLL_LIST + netif_rx_schedule(dev, &np->napi); #else - if (nv_rx_process_optimized(dev, RX_WORK_PER_LOOP)) { - if (unlikely(nv_alloc_rx_optimized(dev))) { + netif_rx_schedule(dev); +#endif + /* Disable furthur receive irq's */ spin_lock(&np->lock); - if (!np->in_shutdown) - mod_timer(&np->oom_kick, jiffies + OOM_REFILL); + np->irqmask &= ~NVREG_IRQ_RX_ALL; + + if (np->msi_flags & NV_MSI_X_ENABLED) + writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); + else + writel(np->irqmask, base + NvRegIrqMask); spin_unlock(&np->lock); } + } else { + rx_processed = 0; + if ((rx_processed=nv_rx_process_optimized(dev, RX_WORK_PER_LOOP))) { + if (unlikely(nv_alloc_rx_optimized(dev))) { + spin_lock(&np->lock); + if (!np->in_shutdown) + mod_timer(&np->oom_kick, jiffies + OOM_REFILL); + spin_unlock(&np->lock); + } + } + processed += rx_processed; } -#endif + if (unlikely(events & NVREG_IRQ_LINK)) { spin_lock(&np->lock); nv_link_irq(dev); @@ -3467,14 +4502,6 @@ static irqreturn_t nv_nic_irq_optimized(int foo, void *data) spin_unlock(&np->lock); np->link_timeout = jiffies + LINK_TIMEOUT; } - if (unlikely(events & (NVREG_IRQ_TX_ERR))) { - dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n", - dev->name, events); - } - if (unlikely(events & (NVREG_IRQ_UNKNOWN))) { - printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. 
Please report\n", - dev->name, events); - } if (unlikely(events & NVREG_IRQ_RECOVER_ERROR)) { spin_lock(&np->lock); /* disable interrupts on the nic */ @@ -3487,74 +4514,61 @@ static irqreturn_t nv_nic_irq_optimized(int foo, void *data) if (!np->in_shutdown) { np->nic_poll_irq = np->irqmask; np->recover_error = 1; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); + tasklet_schedule(&np->nic_poll); } spin_unlock(&np->lock); break; } - if (unlikely(i > max_interrupt_work)) { - spin_lock(&np->lock); - /* disable interrupts on the nic */ - if (!(np->msi_flags & NV_MSI_X_ENABLED)) - writel(0, base + NvRegIrqMask); - else - writel(np->irqmask, base + NvRegIrqMask); - pci_push(base); + nv_change_interruptmode_ifneeded(dev,processed); - if (!np->in_shutdown) { - np->nic_poll_irq = np->irqmask; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); - } - spin_unlock(&np->lock); - printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i); + if (unlikely(i >= max_interrupt_work)) { + dprintk(KERN_DEBUG "%s: too many iterations (%d) in %s.\n", dev->name, i,__FUNCTION__); break; } - } - dprintk(KERN_DEBUG "%s: nv_nic_irq_optimized completed\n", dev->name); return IRQ_RETVAL(i); } +#if NVVER < FEDORA7 +static irqreturn_t nv_nic_irq_tx(int foo, void *data, struct pt_regs *regs) +#else static irqreturn_t nv_nic_irq_tx(int foo, void *data) +#endif { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); - u32 events; + u32 events,mask; int i; unsigned long flags; + unsigned int processed; - dprintk(KERN_DEBUG "%s: nv_nic_irq_tx\n", dev->name); + dprintk("%s:%s\n",dev->name,__FUNCTION__); for (i=0; ; i++) { events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL; writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus); dprintk(KERN_DEBUG "%s: tx irq: %08x\n", dev->name, events); - if (!(events & np->irqmask)) + + mask = readl(base + NvRegIrqMask); + if (!(events & mask)) break; spin_lock_irqsave(&np->lock, flags); - nv_tx_done_optimized(dev, TX_WORK_PER_LOOP); + processed = nv_tx_done_optimized(dev, TX_WORK_PER_LOOP); spin_unlock_irqrestore(&np->lock, flags); - if (unlikely(events & (NVREG_IRQ_TX_ERR))) { + if (events & (NVREG_IRQ_TX_ERR)) { dprintk(KERN_DEBUG "%s: received irq with events 0x%x. 
Probably TX fail.\n", - dev->name, events); + dev->name, events); } - if (unlikely(i > max_interrupt_work)) { - spin_lock_irqsave(&np->lock, flags); - /* disable interrupts on the nic */ - writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask); - pci_push(base); - if (!np->in_shutdown) { - np->nic_poll_irq |= NVREG_IRQ_TX_ALL; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); - } - spin_unlock_irqrestore(&np->lock, flags); - printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i); + nv_change_interruptmode_ifneeded(dev,processed); + + if (unlikely(i >= max_interrupt_work)) { + dprintk(KERN_DEBUG "%s: too many iterations (%d) in %s.\n", dev->name, i,__FUNCTION__); break; } @@ -3564,20 +4578,30 @@ static irqreturn_t nv_nic_irq_tx(int foo, void *data) return IRQ_RETVAL(i); } -#ifdef CONFIG_FORCEDETH_NAPI +#ifdef NV_NAPI_POLL_LIST static int nv_napi_poll(struct napi_struct *napi, int budget) +#else +static int nv_napi_poll(struct net_device *dev, int *budget) +#endif { +#ifdef NV_NAPI_POLL_LIST struct fe_priv *np = container_of(napi, struct fe_priv, napi); - struct net_device *dev = np->dev; + struct net_device *dev = pci_get_drvdata(np->pci_dev); + int limit = budget; +#else + struct fe_priv *np = get_nvpriv(dev); + int limit = min(*budget,dev->quota); +#endif u8 __iomem *base = get_hwbase(dev); unsigned long flags; int pkts, retcode; - if (!nv_optimized(np)) { - pkts = nv_rx_process(dev, budget); + dprintk("%s:%s\n",dev->name,__FUNCTION__); + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + pkts = nv_rx_process(dev, limit); retcode = nv_alloc_rx(dev); } else { - pkts = nv_rx_process_optimized(dev, budget); + pkts = nv_rx_process_optimized(dev, limit); retcode = nv_alloc_rx_optimized(dev); } @@ -3588,11 +4612,15 @@ static int nv_napi_poll(struct napi_struct *napi, int budget) spin_unlock_irqrestore(&np->lock, flags); } - if (pkts < budget) { + if (pkts < limit) { /* re-enable receive interrupts */ spin_lock_irqsave(&np->lock, flags); +#ifdef NV_NAPI_POLL_LIST __netif_rx_complete(dev, napi); +#else + netif_rx_complete(dev); +#endif np->irqmask |= NVREG_IRQ_RX_ALL; if (np->msi_flags & NV_MSI_X_ENABLED) @@ -3601,95 +4629,106 @@ static int nv_napi_poll(struct napi_struct *napi, int budget) writel(np->irqmask, base + NvRegIrqMask); spin_unlock_irqrestore(&np->lock, flags); + +#ifdef NV_NAPI_POLL_LIST } return pkts; -} -#endif - -#ifdef CONFIG_FORCEDETH_NAPI -static irqreturn_t nv_nic_irq_rx(int foo, void *data) -{ - struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); - u8 __iomem *base = get_hwbase(dev); - u32 events; - - events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL; - writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus); - - if (events) { - netif_rx_schedule(dev, &np->napi); - /* disable receive interrupts on the nic */ - writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); - pci_push(base); +#else + return 0; + } else { + /* used up our quantum, so reschedule */ + dev->quota -= pkts; + *budget -= pkts; + return 1; } - return IRQ_HANDLED; +#endif } + +#if NVVER < FEDORA7 +static irqreturn_t nv_nic_irq_rx(int foo, void *data, struct pt_regs *regs) #else static irqreturn_t nv_nic_irq_rx(int foo, void *data) +#endif { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); - u32 events; + u32 events,mask; int i; unsigned long flags; + unsigned int processed = 0; - dprintk(KERN_DEBUG "%s: 
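/*
 * nv_napi_poll() above is written to compile against both NAPI generations:
 * with NV_NAPI_POLL_LIST (the napi_struct interface, mainline since roughly
 * 2.6.24) the poll callback returns the number of packets it processed and
 * completes itself when that is below the budget; without it, the older
 * dev->poll interface applies, where the callback must also decrement
 * dev->quota and *budget itself and return 1 to stay on the poll list, or 0
 * once it has called netif_rx_complete().  That is why the function ends in
 * two different return sequences selected by the preprocessor.
 */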
nv_nic_irq_rx\n", dev->name); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); for (i=0; ; i++) { events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL; writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus); dprintk(KERN_DEBUG "%s: rx irq: %08x\n", dev->name, events); - if (!(events & np->irqmask)) + + mask = readl(base + NvRegIrqMask); + if (!(events & mask)) break; + if(napi) { + if (events & NVREG_IRQ_RX_ALL) { +#ifdef NV_NAPI_POLL_LIST + netif_rx_schedule(dev, &np->napi); +#else + netif_rx_schedule(dev); +#endif + np->irqmask &= ~NVREG_IRQ_RX_ALL; - if (nv_rx_process_optimized(dev, RX_WORK_PER_LOOP)) { - if (unlikely(nv_alloc_rx_optimized(dev))) { - spin_lock_irqsave(&np->lock, flags); - if (!np->in_shutdown) - mod_timer(&np->oom_kick, jiffies + OOM_REFILL); - spin_unlock_irqrestore(&np->lock, flags); + if (np->msi_flags & NV_MSI_X_ENABLED) + writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); + else + writel(np->irqmask, base + NvRegIrqMask); + pci_push(base); } - } + } else { - if (unlikely(i > max_interrupt_work)) { - spin_lock_irqsave(&np->lock, flags); - /* disable interrupts on the nic */ - writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask); - pci_push(base); + if ((processed = nv_rx_process_optimized(dev, RX_WORK_PER_LOOP))) { + if (unlikely(nv_alloc_rx_optimized(dev))) { + spin_lock_irqsave(&np->lock, flags); + if (!np->in_shutdown) + mod_timer(&np->oom_kick, jiffies + OOM_REFILL); + spin_unlock_irqrestore(&np->lock, flags); + } + } - if (!np->in_shutdown) { - np->nic_poll_irq |= NVREG_IRQ_RX_ALL; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); + nv_change_interruptmode_ifneeded(dev,processed); + + if (unlikely(i >= max_interrupt_work)) { + dprintk(KERN_DEBUG "%s: too many iterations (%d) in %s.\n", dev->name, i,__FUNCTION__); + break; } - spin_unlock_irqrestore(&np->lock, flags); - printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i); - break; + } + dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name); } - dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name); - return IRQ_RETVAL(i); } -#endif +#if NVVER < FEDORA7 +static irqreturn_t nv_nic_irq_other(int foo, void *data, struct pt_regs *regs) +#else static irqreturn_t nv_nic_irq_other(int foo, void *data) +#endif { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); - u32 events; + u32 events,mask; int i; unsigned long flags; - dprintk(KERN_DEBUG "%s: nv_nic_irq_other\n", dev->name); + dprintk("%s:%s\n",dev->name,__FUNCTION__); for (i=0; ; i++) { events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER; writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus); dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events); - if (!(events & np->irqmask)) + + mask = readl(base + NvRegIrqMask); + if (!(events & mask)) break; /* check tx in case we reached max loop limit in tx isr */ @@ -3698,14 +4737,14 @@ static irqreturn_t nv_nic_irq_other(int foo, void *data) spin_unlock_irqrestore(&np->lock, flags); if (events & NVREG_IRQ_LINK) { - spin_lock_irqsave(&np->lock, flags); + spin_lock_irq(&np->lock); nv_link_irq(dev); - spin_unlock_irqrestore(&np->lock, flags); + spin_unlock_irq(&np->lock); } if (np->need_linktimer && time_after(jiffies, np->link_timeout)) { - spin_lock_irqsave(&np->lock, flags); + spin_lock_irq(&np->lock); nv_linkchange(dev); - spin_unlock_irqrestore(&np->lock, flags); + spin_unlock_irq(&np->lock); np->link_timeout = jiffies + LINK_TIMEOUT; } if (events 
& NVREG_IRQ_RECOVER_ERROR) { @@ -3717,27 +4756,18 @@ static irqreturn_t nv_nic_irq_other(int foo, void *data) if (!np->in_shutdown) { np->nic_poll_irq |= NVREG_IRQ_OTHER; np->recover_error = 1; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); + tasklet_schedule(&np->nic_poll); } spin_unlock_irq(&np->lock); break; } if (events & (NVREG_IRQ_UNKNOWN)) { printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n", - dev->name, events); + dev->name, events); } - if (unlikely(i > max_interrupt_work)) { - spin_lock_irqsave(&np->lock, flags); - /* disable interrupts on the nic */ - writel(NVREG_IRQ_OTHER, base + NvRegIrqMask); - pci_push(base); - if (!np->in_shutdown) { - np->nic_poll_irq |= NVREG_IRQ_OTHER; - mod_timer(&np->nic_poll, jiffies + POLL_WAIT); - } - spin_unlock_irqrestore(&np->lock, flags); - printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i); + if (unlikely(i >= max_interrupt_work)) { + dprintk(KERN_DEBUG "%s: too many iterations (%d) in %s.\n", dev->name, i,__FUNCTION__); break; } @@ -3747,14 +4777,18 @@ static irqreturn_t nv_nic_irq_other(int foo, void *data) return IRQ_RETVAL(i); } +#if NVVER < FEDORA7 +static irqreturn_t nv_nic_irq_test(int foo, void *data, struct pt_regs *regs) +#else static irqreturn_t nv_nic_irq_test(int foo, void *data) +#endif { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 events; - dprintk(KERN_DEBUG "%s: nv_nic_irq_test\n", dev->name); + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); if (!(np->msi_flags & NV_MSI_X_ENABLED)) { events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK; @@ -3768,7 +4802,7 @@ static irqreturn_t nv_nic_irq_test(int foo, void *data) if (!(events & NVREG_IRQ_TIMER)) return IRQ_RETVAL(0); - nv_msi_workaround(np); + nv_msi_workaround(dev); spin_lock(&np->lock); np->intr_test = 1; @@ -3779,6 +4813,7 @@ static irqreturn_t nv_nic_irq_test(int foo, void *data) return IRQ_RETVAL(1); } +#ifdef CONFIG_PCI_MSI static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask) { u8 __iomem *base = get_hwbase(dev); @@ -3804,23 +4839,16 @@ static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask) } writel(readl(base + NvRegMSIXMap1) | msixmap, base + NvRegMSIXMap1); } +#endif static int nv_request_irq(struct net_device *dev, int intr_test) { struct fe_priv *np = get_nvpriv(dev); - u8 __iomem *base = get_hwbase(dev); int ret = 1; - int i; - irqreturn_t (*handler)(int foo, void *data); - if (intr_test) { - handler = nv_nic_irq_test; - } else { - if (nv_optimized(np)) - handler = nv_nic_irq_optimized; - else - handler = nv_nic_irq; - } +#if NVVER > SLES9 + u8 __iomem *base = get_hwbase(dev); + int i; if (np->msi_flags & NV_MSI_X_CAPABLE) { for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) { @@ -3830,21 +4858,21 @@ static int nv_request_irq(struct net_device *dev, int intr_test) np->msi_flags |= NV_MSI_X_ENABLED; if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) { /* Request irq for rx handling */ - if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQF_SHARED, dev->name, dev) != 0) { + if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQ_FLAG, dev->name, dev) != 0) { printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret); pci_disable_msix(np->pci_dev); np->msi_flags &= ~NV_MSI_X_ENABLED; goto out_err; } /* Request irq for 
tx handling */ - if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQF_SHARED, dev->name, dev) != 0) { + if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQ_FLAG, dev->name, dev) != 0) { printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret); pci_disable_msix(np->pci_dev); np->msi_flags &= ~NV_MSI_X_ENABLED; goto out_free_rx; } /* Request irq for link and timer handling */ - if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQF_SHARED, dev->name, dev) != 0) { + if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQ_FLAG, dev->name, dev) != 0) { printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret); pci_disable_msix(np->pci_dev); np->msi_flags &= ~NV_MSI_X_ENABLED; @@ -3853,12 +4881,19 @@ static int nv_request_irq(struct net_device *dev, int intr_test) /* map interrupts to their respective vector */ writel(0, base + NvRegMSIXMap0); writel(0, base + NvRegMSIXMap1); +#ifdef CONFIG_PCI_MSI set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL); set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL); set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER); +#endif } else { /* Request irq for all interrupts */ - if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, handler, IRQF_SHARED, dev->name, dev) != 0) { + if ((!intr_test && np->desc_ver == DESC_VER_3 && + request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) || + (!intr_test && np->desc_ver != DESC_VER_3 && + request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) || + (intr_test && + request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) { printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret); pci_disable_msix(np->pci_dev); np->msi_flags &= ~NV_MSI_X_ENABLED; @@ -3875,7 +4910,97 @@ static int nv_request_irq(struct net_device *dev, int intr_test) if ((ret = pci_enable_msi(np->pci_dev)) == 0) { np->msi_flags |= NV_MSI_ENABLED; dev->irq = np->pci_dev->irq; - if (request_irq(np->pci_dev->irq, handler, IRQF_SHARED, dev->name, dev) != 0) { + if ((!intr_test && np->desc_ver == DESC_VER_3 && + request_irq(np->pci_dev->irq, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) || + (!intr_test && np->desc_ver != DESC_VER_3 && + request_irq(np->pci_dev->irq, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) || + (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) { + printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret); + pci_disable_msi(np->pci_dev); + np->msi_flags &= ~NV_MSI_ENABLED; + dev->irq = np->pci_dev->irq; + goto out_err; + } + + /* map interrupts to vector 0 */ + writel(0, base + NvRegMSIMap0); + writel(0, base + NvRegMSIMap1); + /* enable msi vector 0 */ + writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask); + } + } +#else +#ifdef CONFIG_PCI_MSI + u8 __iomem *base = get_hwbase(dev); + int i; + + if (np->msi_flags & NV_MSI_X_CAPABLE) { + for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) { + np->msi_x_entry[i].entry = i; + } + if ((ret = pci_enable_msi(np->pci_dev)) == 0) { + np->msi_flags |= NV_MSI_X_ENABLED; + if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) { + msi_alloc_vectors(np->pci_dev,(int *)np->msi_x_entry,2); + /* Request irq for rx handling */ + if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, 
&nv_nic_irq_rx, IRQ_FLAG, dev->name, dev) != 0) { + printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret); + pci_disable_msi(np->pci_dev); + np->msi_flags &= ~NV_MSI_X_ENABLED; + goto out_err; + } + /* Request irq for tx handling */ + if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQ_FLAG, dev->name, dev) != 0) { + printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret); + pci_disable_msi(np->pci_dev); + np->msi_flags &= ~NV_MSI_X_ENABLED; + goto out_free_rx; + } + /* Request irq for link and timer handling */ + if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQ_FLAG, dev->name, dev) != 0) { + printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret); + pci_disable_msi(np->pci_dev); + np->msi_flags &= ~NV_MSI_X_ENABLED; + goto out_free_tx; + } + /* map interrupts to their respective vector */ + writel(0, base + NvRegMSIXMap0); + writel(0, base + NvRegMSIXMap1); +#ifdef CONFIG_PCI_MSI + set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL); + set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL); + set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER); +#endif + } else { + /* Request irq for all interrupts */ + if ((!intr_test && np->desc_ver == DESC_VER_3 && + request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) || + (!intr_test && np->desc_ver != DESC_VER_3 && + request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) || + (intr_test && + request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) { + printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret); + pci_disable_msi(np->pci_dev); + np->msi_flags &= ~NV_MSI_X_ENABLED; + goto out_err; + } + + /* map interrupts to vector 0 */ + writel(0, base + NvRegMSIXMap0); + writel(0, base + NvRegMSIXMap1); + } + } + } + if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) { + + if ((ret = pci_enable_msi(np->pci_dev)) == 0) { + np->msi_flags |= NV_MSI_ENABLED; + dev->irq = np->pci_dev->irq; + if ((!intr_test && np->desc_ver == DESC_VER_3 && + request_irq(np->pci_dev->irq, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) || + (!intr_test && np->desc_ver != DESC_VER_3 && + request_irq(np->pci_dev->irq, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) || + (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) { printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret); pci_disable_msi(np->pci_dev); np->msi_flags &= ~NV_MSI_ENABLED; @@ -3890,21 +5015,38 @@ static int nv_request_irq(struct net_device *dev, int intr_test) writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask); } } +#endif +#endif if (ret != 0) { - if (request_irq(np->pci_dev->irq, handler, IRQF_SHARED, dev->name, dev) != 0) + if ((!intr_test && np->desc_ver == DESC_VER_3 && + request_irq(np->pci_dev->irq, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) || + (!intr_test && np->desc_ver != DESC_VER_3 && + request_irq(np->pci_dev->irq, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) || + (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) goto out_err; } return 0; + +#if NVVER > SLES9 +out_free_tx: + free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev); +out_free_rx: + free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev); +#else +#ifdef CONFIG_PCI_MSI out_free_tx: 
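/*
 * The nv_request_irq() rework above is a three-step fallback: try MSI-X
 * (either one vector per event class in throughput mode, or a single shared
 * vector), then plain MSI, then the legacy INTx line; the duplicated
 * NVVER/CONFIG_PCI_MSI branches only exist so the same file still builds on
 * kernels that predate the MSI-X helpers.  Stripped of the compatibility
 * layers, the shape is roughly the following sketch (not the driver's actual
 * helper; the real code installs separate rx/tx/other handlers):
 */
#include <linux/pci.h>
#include <linux/interrupt.h>

static int request_nic_irq(struct pci_dev *pdev, struct msix_entry *ent,
                           int nvec, irq_handler_t handler, void *cookie)
{
        int i;

        for (i = 0; i < nvec; i++)
                ent[i].entry = i;
        if (pci_enable_msix(pdev, ent, nvec) == 0) {
                if (request_irq(ent[0].vector, handler, 0, "forcedeth", cookie) == 0)
                        return 0;
                pci_disable_msix(pdev);
        }
        if (pci_enable_msi(pdev) == 0) {
                if (request_irq(pdev->irq, handler, 0, "forcedeth", cookie) == 0)
                        return 0;
                pci_disable_msi(pdev);
        }
        /* last resort: shared legacy interrupt */
        return request_irq(pdev->irq, handler, IRQF_SHARED, "forcedeth", cookie);
}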
free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev); out_free_rx: free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev); +#endif +#endif out_err: return 1; } +#if NVVER > SLES9 static void nv_free_irq(struct net_device *dev) { struct fe_priv *np = get_nvpriv(dev); @@ -3924,11 +5066,39 @@ static void nv_free_irq(struct net_device *dev) } } } +#else +static void nv_free_irq(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + +#ifdef CONFIG_PCI_MSI + int i; + + if (np->msi_flags & NV_MSI_X_ENABLED) { + for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) { + free_irq(np->msi_x_entry[i].vector, dev); + } + pci_disable_msi(np->pci_dev); + np->msi_flags &= ~NV_MSI_X_ENABLED; + } else { + free_irq(np->pci_dev->irq, dev); + + if (np->msi_flags & NV_MSI_ENABLED) { + pci_disable_msi(np->pci_dev); + np->msi_flags &= ~NV_MSI_ENABLED; + } + } +#else + free_irq(np->pci_dev->irq, dev); +#endif + +} +#endif static void nv_do_nic_poll(unsigned long data) { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 mask = 0; @@ -3938,41 +5108,48 @@ static void nv_do_nic_poll(unsigned long data) * nv_nic_irq because that may decide to do otherwise */ + spin_lock_irq(&np->timer_lock); if (!using_multi_irqs(dev)) { if (np->msi_flags & NV_MSI_X_ENABLED) - disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); + disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); else - disable_irq_lockdep(np->pci_dev->irq); + disable_irq(np->pci_dev->irq); mask = np->irqmask; } else { if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) { - disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); + disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); mask |= NVREG_IRQ_RX_ALL; } if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) { - disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector); + disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector); mask |= NVREG_IRQ_TX_ALL; } if (np->nic_poll_irq & NVREG_IRQ_OTHER) { - disable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector); + disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector); mask |= NVREG_IRQ_OTHER; } } np->nic_poll_irq = 0; - /* disable_irq() contains synchronize_irq, thus no irq handler can run now */ + /* disable_irq() contains synchronize_irq,thus no irq handler can run now */ if (np->recover_error) { np->recover_error = 0; printk(KERN_INFO "forcedeth: MAC in recoverable error state\n"); if (netif_running(dev)) { +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); +#else + spin_lock_bh(&dev->xmit_lock); +#endif spin_lock(&np->lock); /* stop engines */ - nv_stop_rxtx(dev); + nv_stop_rx(dev); + nv_stop_tx(dev); nv_txrx_reset(dev); /* drain rx queue */ - nv_drain_rxtx(dev); + nv_drain_rx(dev); + nv_drain_tx(dev); /* reinit driver view of the rx queue */ set_bufsize(dev); if (nv_init_ring(dev)) { @@ -3983,77 +5160,204 @@ static void nv_do_nic_poll(unsigned long data) writel(np->rx_buf_sz, base + NvRegOffloadConfig); setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), - base + NvRegRingSizes); + base + NvRegRingSizes); pci_push(base); writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); pci_push(base); /* restart rx engine */ - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); spin_unlock(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + 
spin_unlock_bh(&dev->xmit_lock); +#endif } } - writel(mask, base + NvRegIrqMask); pci_push(base); if (!using_multi_irqs(dev)) { - if (nv_optimized(np)) - nv_nic_irq_optimized(0, dev); + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) +#if NVVER < FEDORA7 + nv_nic_irq((int) 0, (void *) data, (struct pt_regs *) NULL); +#else + nv_nic_irq((int) 0, (void *) data); +#endif else - nv_nic_irq(0, dev); +#if NVVER < FEDORA7 + nv_nic_irq_optimized((int) 0, (void *) data, (struct pt_regs *) NULL); +#else + nv_nic_irq_optimized((int) 0, (void *) data); +#endif if (np->msi_flags & NV_MSI_X_ENABLED) - enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); + enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector); else - enable_irq_lockdep(np->pci_dev->irq); + enable_irq(np->pci_dev->irq); } else { - if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) { - nv_nic_irq_rx(0, dev); - enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); + if (mask & NVREG_IRQ_RX_ALL) { +#if NVVER < FEDORA7 + nv_nic_irq_rx((int) 0, (void *) data, (struct pt_regs *) NULL); +#else + nv_nic_irq_rx((int) 0, (void *) data); +#endif + enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector); } - if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) { - nv_nic_irq_tx(0, dev); - enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector); + if (mask & NVREG_IRQ_TX_ALL) { +#if NVVER < FEDORA7 + nv_nic_irq_tx((int) 0, (void *) data, (struct pt_regs *) NULL); +#else + nv_nic_irq_tx((int) 0, (void *) data); +#endif + enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector); } - if (np->nic_poll_irq & NVREG_IRQ_OTHER) { - nv_nic_irq_other(0, dev); - enable_irq_lockdep(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector); + if (mask & NVREG_IRQ_OTHER) { +#if NVVER < FEDORA7 + nv_nic_irq_other((int) 0, (void *) data, (struct pt_regs *) NULL); +#else + nv_nic_irq_other((int) 0, (void *) data); +#endif + enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector); } } + spin_unlock_irq(&np->timer_lock); } +#if NVVER > RHEL3 #ifdef CONFIG_NET_POLL_CONTROLLER static void nv_poll_controller(struct net_device *dev) { nv_do_nic_poll((unsigned long) dev); } #endif +#else +static void nv_poll_controller(struct net_device *dev) +{ + nv_do_nic_poll((unsigned long) dev); +} +#endif static void nv_do_stats_poll(unsigned long data) { struct net_device *dev = (struct net_device *) data; - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); + + spin_lock_irq(&np->lock); + + np->estats.tx_dropped = np->stats.tx_dropped; + if (np->driver_data & (DEV_HAS_STATISTICS_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_STATISTICS_V3)) { + np->estats.tx_fifo_errors += readl(base + NvRegTxUnderflow); + np->estats.tx_carrier_errors += readl(base + NvRegTxLossCarrier); + np->estats.tx_bytes += readl(base + NvRegTxCnt); + np->estats.rx_crc_errors += readl(base + NvRegRxFCSErr); + np->estats.rx_over_errors += readl(base + NvRegRxOverflow); + np->estats.tx_zero_rexmt += readl(base + NvRegTxZeroReXmt); + np->estats.tx_one_rexmt += readl(base + NvRegTxOneReXmt); + np->estats.tx_many_rexmt += readl(base + NvRegTxManyReXmt); + np->estats.tx_late_collision += readl(base + NvRegTxLateCol); + np->estats.tx_excess_deferral += readl(base + NvRegTxExcessDef); + np->estats.tx_retry_error += readl(base + NvRegTxRetryErr); + np->estats.rx_frame_error += readl(base + NvRegRxFrameErr); + np->estats.rx_extra_byte += readl(base + NvRegRxExtraByte); + np->estats.rx_late_collision += readl(base + NvRegRxLateCol); + np->estats.rx_runt += 
readl(base + NvRegRxRunt); + np->estats.rx_frame_too_long += readl(base + NvRegRxFrameTooLong); + np->estats.rx_frame_align_error += readl(base + NvRegRxFrameAlignErr); + np->estats.rx_length_error += readl(base + NvRegRxLenErr); + np->estats.rx_unicast += readl(base + NvRegRxUnicast); + np->estats.rx_multicast += readl(base + NvRegRxMulticast); + np->estats.rx_broadcast += readl(base + NvRegRxBroadcast); + np->estats.rx_packets = + np->estats.rx_unicast + + np->estats.rx_multicast + + np->estats.rx_broadcast; + np->estats.rx_errors_total = + np->estats.rx_crc_errors + + np->estats.rx_over_errors + + np->estats.rx_frame_error + + (np->estats.rx_frame_align_error - np->estats.rx_extra_byte) + + np->estats.rx_late_collision + + np->estats.rx_runt + + np->estats.rx_frame_too_long + + np->rx_len_errors; + + if (np->driver_data & DEV_HAS_STATISTICS_V2) { + np->estats.tx_deferral += readl(base + NvRegTxDef); + np->estats.tx_packets += readl(base + NvRegTxFrame); + np->estats.rx_bytes += readl(base + NvRegRxCnt); + np->estats.tx_pause += readl(base + NvRegTxPause); + np->estats.rx_pause += readl(base + NvRegRxPause); + np->estats.rx_drop_frame += readl(base + NvRegRxDropFrame); + } + + if (np->driver_data & DEV_HAS_STATISTICS_V3) { + np->estats.tx_unicast += readl(base + NvRegTxUnicast); + np->estats.tx_multicast += readl(base + NvRegTxMulticast); + np->estats.tx_broadcast += readl(base + NvRegTxBroadcast); + } + + /* copy to net_device stats */ + np->stats.tx_fifo_errors = np->estats.tx_fifo_errors; + np->stats.tx_carrier_errors = np->estats.tx_carrier_errors; + np->stats.tx_bytes = np->estats.tx_bytes; + np->stats.rx_crc_errors = np->estats.rx_crc_errors; + np->stats.rx_over_errors = np->estats.rx_over_errors; + np->stats.rx_packets = np->estats.rx_packets; + np->stats.rx_errors = np->estats.rx_errors_total; + + } else { + np->estats.tx_packets = np->stats.tx_packets; + np->estats.tx_fifo_errors = np->stats.tx_fifo_errors; + np->estats.tx_carrier_errors = np->stats.tx_carrier_errors; + np->estats.tx_bytes = np->stats.tx_bytes; + np->estats.rx_bytes = np->stats.rx_bytes; + np->estats.rx_crc_errors = np->stats.rx_crc_errors; + np->estats.rx_over_errors = np->stats.rx_over_errors; + np->estats.rx_packets = np->stats.rx_packets; + np->estats.rx_errors_total = np->stats.rx_errors; + } + + if (!np->in_shutdown && netif_running(dev)) + mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL); + spin_unlock_irq(&np->lock); +} - nv_get_hw_stats(dev); +/* + * nv_get_stats: dev->get_stats function + * Get latest stats value from the nic. + * Called with read_lock(&dev_base_lock) held for read - + * only synchronized against unregister_netdevice. + */ +static struct net_device_stats *nv_get_stats(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); - if (!np->in_shutdown) - mod_timer(&np->stats_poll, - round_jiffies(jiffies + STATS_INTERVAL)); + /* It seems that the nic always generates interrupts and doesn't + * accumulate errors internally. Thus the current values in np->stats + * are already up to date. 
+ */ + nv_do_stats_poll((unsigned long)dev); + return &np->stats; } static void nv_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info) { - struct fe_priv *np = netdev_priv(dev); - strcpy(info->driver, DRV_NAME); + struct fe_priv *np = get_nvpriv(dev); + if(napi) + strcpy(info->driver, "forcedeth-NAPI"); + else + strcpy(info->driver, "forcedeth"); strcpy(info->version, FORCEDETH_VERSION); strcpy(info->bus_info, pci_name(np->pci_dev)); } static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); wolinfo->supported = WAKE_MAGIC; spin_lock_irq(&np->lock); @@ -4064,7 +5368,7 @@ static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo) static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 flags = 0; @@ -4084,7 +5388,7 @@ static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo) static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int adv; spin_lock_irq(&np->lock); @@ -4103,15 +5407,15 @@ static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd) if (netif_carrier_ok(dev)) { switch(np->linkspeed & (NVREG_LINKSPEED_MASK)) { - case NVREG_LINKSPEED_10: - ecmd->speed = SPEED_10; - break; - case NVREG_LINKSPEED_100: - ecmd->speed = SPEED_100; - break; - case NVREG_LINKSPEED_1000: - ecmd->speed = SPEED_1000; - break; + case NVREG_LINKSPEED_10: + ecmd->speed = SPEED_10; + break; + case NVREG_LINKSPEED_100: + ecmd->speed = SPEED_100; + break; + case NVREG_LINKSPEED_1000: + ecmd->speed = SPEED_1000; + break; } ecmd->duplex = DUPLEX_HALF; if (np->duplex) @@ -4142,9 +5446,9 @@ static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd) } } ecmd->supported = (SUPPORTED_Autoneg | - SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | - SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | - SUPPORTED_MII); + SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | + SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | + SUPPORTED_MII); if (np->gigabit == PHY_GIGABIT) ecmd->supported |= SUPPORTED_1000baseT_Full; @@ -4158,8 +5462,9 @@ static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd) static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); + dprintk(KERN_DEBUG "%s: nv_set_settings \n", dev->name); if (ecmd->port != PORT_MII) return -EINVAL; if (ecmd->transceiver != XCVR_EXTERNAL) @@ -4173,7 +5478,7 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) u32 mask; mask = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full | - ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full; + ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full; if (np->gigabit == PHY_GIGABIT) mask |= ADVERTISED_1000baseT_Full; @@ -4194,24 +5499,27 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) netif_carrier_off(dev); if (netif_running(dev)) { - unsigned long flags; - - nv_disable_irq(dev); + nv_disable_hw_interrupts(dev, np->irqmask); +#if NVVER > RHEL3 + synchronize_irq(np->pci_dev->irq); +#else + synchronize_irq(); +#endif +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); - /* with plain spinlock lockdep complains */ - 
spin_lock_irqsave(&np->lock, flags); +#else + spin_lock_bh(&dev->xmit_lock); +#endif + spin_lock(&np->lock); /* stop engines */ - /* FIXME: - * this can take some time, and interrupts are disabled - * due to spin_lock_irqsave, but let's hope no daemon - * is going to change the settings very often... - * Worst case: - * NV_RXSTOP_DELAY1MAX + NV_TXSTOP_DELAY1MAX - * + some minor delays, which is up to a second approximately - */ - nv_stop_rxtx(dev); - spin_unlock_irqrestore(&np->lock, flags); + nv_stop_rx(dev); + nv_stop_tx(dev); + spin_unlock(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + spin_unlock_bh(&dev->xmit_lock); +#endif } if (ecmd->autoneg == AUTONEG_ENABLE) { @@ -4222,14 +5530,18 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) /* advertise only what has been requested */ adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM); - if (ecmd->advertising & ADVERTISED_10baseT_Half) + if (ecmd->advertising & ADVERTISED_10baseT_Half) { adv |= ADVERTISE_10HALF; - if (ecmd->advertising & ADVERTISED_10baseT_Full) + } + if (ecmd->advertising & ADVERTISED_10baseT_Full) { adv |= ADVERTISE_10FULL; - if (ecmd->advertising & ADVERTISED_100baseT_Half) + } + if (ecmd->advertising & ADVERTISED_100baseT_Half) { adv |= ADVERTISE_100HALF; - if (ecmd->advertising & ADVERTISED_100baseT_Full) + } + if (ecmd->advertising & ADVERTISED_100baseT_Full) { adv |= ADVERTISE_100FULL; + } if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisments but disable tx pause */ adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM; if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) @@ -4239,9 +5551,11 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) if (np->gigabit == PHY_GIGABIT) { adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); adv &= ~ADVERTISE_1000FULL; - if (ecmd->advertising & ADVERTISED_1000baseT_Full) + if (ecmd->advertising & ADVERTISED_1000baseT_Full) { adv |= ADVERTISE_1000FULL; + } mii_rw(dev, np->phyaddr, MII_CTRL1000, adv); + } if (netif_running(dev)) @@ -4266,15 +5580,19 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM); - if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF) + if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF) { adv |= ADVERTISE_10HALF; - if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL) + } + if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL) { adv |= ADVERTISE_10FULL; - if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF) + } + if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF) { adv |= ADVERTISE_100HALF; - if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL) + } + if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL) { adv |= ADVERTISE_100FULL; - np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE); + } + np->pause_flags &= ~(NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE); if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) {/* for rx we set both advertisments but disable tx pause */ adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM; np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE; @@ -4315,8 +5633,9 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) } if (netif_running(dev)) { - nv_start_rxtx(dev); - 
nv_enable_irq(dev); + nv_start_rx(dev); + nv_start_tx(dev); + nv_enable_hw_interrupts(dev, np->irqmask); } return 0; @@ -4326,13 +5645,13 @@ static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd) static int nv_get_regs_len(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); return np->register_size; } static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u32 *rbuf = buf; int i; @@ -4346,7 +5665,7 @@ static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void static int nv_nway_reset(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int ret; if (np->autoneg) { @@ -4355,12 +5674,21 @@ static int nv_nway_reset(struct net_device *dev) netif_carrier_off(dev); if (netif_running(dev)) { nv_disable_irq(dev); +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); +#else + spin_lock_bh(&dev->xmit_lock); +#endif spin_lock(&np->lock); /* stop engines */ - nv_stop_rxtx(dev); + nv_stop_rx(dev); + nv_stop_tx(dev); spin_unlock(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + spin_unlock_bh(&dev->xmit_lock); +#endif printk(KERN_INFO "%s: link down.\n", dev->name); } @@ -4378,7 +5706,8 @@ static int nv_nway_reset(struct net_device *dev) } if (netif_running(dev)) { - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); nv_enable_irq(dev); } ret = 0; @@ -4389,19 +5718,9 @@ static int nv_nway_reset(struct net_device *dev) return ret; } -static int nv_set_tso(struct net_device *dev, u32 value) -{ - struct fe_priv *np = netdev_priv(dev); - - if ((np->driver_data & DEV_HAS_CHECKSUM)) - return ethtool_op_set_tso(dev, value); - else - return -EOPNOTSUPP; -} - static void nv_get_ringparam(struct net_device *dev, struct ethtool_ringparam* ring) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); ring->rx_max_pending = (np->desc_ver == DESC_VER_1) ? 
RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3; ring->rx_mini_max_pending = 0; @@ -4416,46 +5735,47 @@ static void nv_get_ringparam(struct net_device *dev, struct ethtool_ringparam* r static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ring) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); u8 *rxtx_ring, *rx_skbuff, *tx_skbuff; dma_addr_t ring_addr; if (ring->rx_pending < RX_RING_MIN || - ring->tx_pending < TX_RING_MIN || - ring->rx_mini_pending != 0 || - ring->rx_jumbo_pending != 0 || - (np->desc_ver == DESC_VER_1 && - (ring->rx_pending > RING_MAX_DESC_VER_1 || - ring->tx_pending > RING_MAX_DESC_VER_1)) || - (np->desc_ver != DESC_VER_1 && - (ring->rx_pending > RING_MAX_DESC_VER_2_3 || - ring->tx_pending > RING_MAX_DESC_VER_2_3))) { + ring->tx_pending < TX_RING_MIN || + ring->rx_mini_pending != 0 || + ring->rx_jumbo_pending != 0 || + (np->desc_ver == DESC_VER_1 && + (ring->rx_pending > RING_MAX_DESC_VER_1 || + ring->tx_pending > RING_MAX_DESC_VER_1)) || + (np->desc_ver != DESC_VER_1 && + (ring->rx_pending > RING_MAX_DESC_VER_2_3 || + ring->tx_pending > RING_MAX_DESC_VER_2_3))) { return -EINVAL; } /* allocate new rings */ - if (!nv_optimized(np)) { + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { rxtx_ring = pci_alloc_consistent(np->pci_dev, - sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending), - &ring_addr); + sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending), + &ring_addr); } else { rxtx_ring = pci_alloc_consistent(np->pci_dev, - sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending), - &ring_addr); + sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending), + &ring_addr); } rx_skbuff = kmalloc(sizeof(struct nv_skb_map) * ring->rx_pending, GFP_KERNEL); tx_skbuff = kmalloc(sizeof(struct nv_skb_map) * ring->tx_pending, GFP_KERNEL); + if (!rxtx_ring || !rx_skbuff || !tx_skbuff) { /* fall back to old rings */ - if (!nv_optimized(np)) { - if (rxtx_ring) + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + if(rxtx_ring) pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending), - rxtx_ring, ring_addr); + rxtx_ring, ring_addr); } else { if (rxtx_ring) pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending), - rxtx_ring, ring_addr); + rxtx_ring, ring_addr); } if (rx_skbuff) kfree(rx_skbuff); @@ -4466,13 +5786,19 @@ static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ri if (netif_running(dev)) { nv_disable_irq(dev); +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); +#else + spin_lock_bh(&dev->xmit_lock); +#endif spin_lock(&np->lock); /* stop engines */ - nv_stop_rxtx(dev); + nv_stop_rx(dev); + nv_stop_tx(dev); nv_txrx_reset(dev); /* drain queues */ - nv_drain_rxtx(dev); + nv_drain_rx(dev); + nv_drain_tx(dev); /* delete queues */ free_rings(dev); } @@ -4480,8 +5806,9 @@ static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ri /* set new values */ np->rx_ring_size = ring->rx_pending; np->tx_ring_size = ring->tx_pending; - - if (!nv_optimized(np)) { + np->tx_limit_stop =np->tx_ring_size - TX_LIMIT_DIFFERENCE; + np->tx_limit_start =np->tx_ring_size - TX_LIMIT_DIFFERENCE - 1; + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { np->rx_ring.orig = (struct ring_desc*)rxtx_ring; np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size]; } else { @@ -4507,15 +5834,20 @@ static int 
nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ri writel(np->rx_buf_sz, base + NvRegOffloadConfig); setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), - base + NvRegRingSizes); + base + NvRegRingSizes); pci_push(base); writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); pci_push(base); /* restart engines */ - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); spin_unlock(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + spin_unlock_bh(&dev->xmit_lock); +#endif nv_enable_irq(dev); } return 0; @@ -4525,7 +5857,7 @@ exit: static void nv_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); pause->autoneg = (np->pause_flags & NV_PAUSEFRAME_AUTONEG) != 0; pause->rx_pause = (np->pause_flags & NV_PAUSEFRAME_RX_ENABLE) != 0; @@ -4534,13 +5866,13 @@ static void nv_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam* static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int adv, bmcr; if ((!np->autoneg && np->duplex == 0) || - (np->autoneg && !pause->autoneg && np->duplex == 0)) { - printk(KERN_INFO "%s: can not set pause settings when forced link is in half duplex.\n", - dev->name); + (np->autoneg && !pause->autoneg && np->duplex == 0)) { + printk(KERN_INFO "%s: can not set pause settings when forced link is in half duplex.\n", + dev->name); return -EINVAL; } if (pause->tx_pause && !(np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)) { @@ -4551,12 +5883,21 @@ static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* netif_carrier_off(dev); if (netif_running(dev)) { nv_disable_irq(dev); +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); +#else + spin_lock_bh(&dev->xmit_lock); +#endif spin_lock(&np->lock); /* stop engines */ - nv_stop_rxtx(dev); + nv_stop_rx(dev); + nv_stop_tx(dev); spin_unlock(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + spin_unlock_bh(&dev->xmit_lock); +#endif } np->pause_flags &= ~(NV_PAUSEFRAME_RX_REQ|NV_PAUSEFRAME_TX_REQ); @@ -4595,7 +5936,8 @@ static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* } if (netif_running(dev)) { - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); nv_enable_irq(dev); } return 0; @@ -4603,17 +5945,18 @@ static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* static u32 nv_get_rx_csum(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); return (np->rx_csum) != 0; } static int nv_set_rx_csum(struct net_device *dev, u32 data) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); int retcode = 0; if (np->driver_data & DEV_HAS_CHECKSUM) { + if (data) { np->rx_csum = 1; np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK; @@ -4623,6 +5966,7 @@ static int nv_set_rx_csum(struct net_device *dev, u32 data) if (!(np->vlanctl_bits & NVREG_VLANCONTROL_ENABLE)) np->txrxctl_bits &= ~NVREG_TXRXCTL_RXCHECK; } + if (netif_running(dev)) { spin_lock_irq(&np->lock); writel(np->txrxctl_bits, base + NvRegTxRxControl); @@ -4635,61 +5979,102 @@ static int nv_set_rx_csum(struct net_device *dev, u32 data) return retcode; } -static int nv_set_tx_csum(struct net_device 
*dev, u32 data) +#ifdef NETIF_F_TSO +static int nv_set_tso(struct net_device *dev, u32 data) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); - if (np->driver_data & DEV_HAS_CHECKSUM) - return ethtool_op_set_tx_hw_csum(dev, data); - else - return -EOPNOTSUPP; + if (np->driver_data & DEV_HAS_CHECKSUM){ +#if NVVER < SUSE10 + if(data){ + if(ethtool_op_get_sg(dev)==0) + return -EINVAL; + } +#endif + return ethtool_op_set_tso(dev, data); + }else + return -EINVAL; } +#endif static int nv_set_sg(struct net_device *dev, u32 data) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); - if (np->driver_data & DEV_HAS_CHECKSUM) + if (np->driver_data & DEV_HAS_CHECKSUM){ +#if NVVER < SUSE10 + if(data){ + if(ethtool_op_get_tx_csum(dev)==0) + return -EINVAL; + } +#ifdef NETIF_F_TSO + if(!data) + /* set tso off */ + nv_set_tso(dev,data); +#endif +#endif return ethtool_op_set_sg(dev, data); - else - return -EOPNOTSUPP; + }else + return -EINVAL; } -static int nv_get_sset_count(struct net_device *dev, int sset) +static int nv_set_tx_csum(struct net_device *dev, u32 data) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); - switch (sset) { - case ETH_SS_TEST: - if (np->driver_data & DEV_HAS_TEST_EXTENDED) - return NV_TEST_COUNT_EXTENDED; - else - return NV_TEST_COUNT_BASE; - case ETH_SS_STATS: - if (np->driver_data & DEV_HAS_STATISTICS_V1) - return NV_DEV_STATISTICS_V1_COUNT; - else if (np->driver_data & DEV_HAS_STATISTICS_V2) - return NV_DEV_STATISTICS_V2_COUNT; +#if NVVER < SUSE10 + /* set sg off if tx off */ + if(!data) + nv_set_sg(dev,data); +#endif + if (np->driver_data & DEV_HAS_CHECKSUM) + { + if (data) + dev->features |= NETIF_F_IP_CSUM; else - return 0; - default: - return -EOPNOTSUPP; - } + dev->features &= ~NETIF_F_IP_CSUM; + return 0; + } else + return -EINVAL; +} + +static int nv_get_stats_count(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + + if (np->driver_data & DEV_HAS_STATISTICS_V1) + return NV_DEV_STATISTICS_V1_COUNT; + else if (np->driver_data & DEV_HAS_STATISTICS_V2) + return NV_DEV_STATISTICS_V2_COUNT; + else if (np->driver_data & DEV_HAS_STATISTICS_V3) + return NV_DEV_STATISTICS_V3_COUNT; + else + return NV_DEV_STATISTICS_SW_COUNT; } static void nv_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *estats, u64 *buffer) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); /* update stats */ nv_do_stats_poll((unsigned long)dev); - memcpy(buffer, &np->estats, nv_get_sset_count(dev, ETH_SS_STATS)*sizeof(u64)); + memcpy(buffer, &np->estats, nv_get_stats_count(dev)*sizeof(u64)); +} + +static int nv_self_test_count(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + + if (np->driver_data & DEV_HAS_TEST_EXTENDED) + return NV_TEST_COUNT_EXTENDED; + else + return NV_TEST_COUNT_BASE; } static int nv_link_test(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); int mii_status; mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); @@ -4732,16 +6117,17 @@ static int nv_register_test(struct net_device *dev) static int nv_interrupt_test(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); int ret = 1; int testcnt; - u32 save_msi_flags, save_poll_interval = 0; + u32 save_msi_flags, save_poll_interval = 0,save_swtimer_ctrl = 0; if (netif_running(dev)) { /* free current irq */ nv_free_irq(dev); - 
save_poll_interval = readl(base+NvRegPollingInterval); + save_poll_interval = readl(base + NvRegPollingInterval); + save_swtimer_ctrl = readl(base + NvRegSoftwareTimerCtrl); } /* flag to test interrupt handler */ @@ -4755,13 +6141,12 @@ static int nv_interrupt_test(struct net_device *dev) return 0; /* setup timer interrupt */ - writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval); - writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6); + SET_HW_TIMER_INTERVAL(base,NVREG_POLL_DEFAULT_CPU); nv_enable_hw_interrupts(dev, NVREG_IRQ_TIMER); /* wait for at least one interrupt */ - msleep(100); + nv_msleep(100); spin_lock_irq(&np->lock); @@ -4784,7 +6169,7 @@ static int nv_interrupt_test(struct net_device *dev) if (netif_running(dev)) { writel(save_poll_interval, base + NvRegPollingInterval); - writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6); + writel(save_swtimer_ctrl, base + NvRegSoftwareTimerCtrl); /* restore original irq */ if (nv_request_irq(dev, 0)) return 0; @@ -4795,26 +6180,32 @@ static int nv_interrupt_test(struct net_device *dev) static int nv_loopback_test(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); struct sk_buff *tx_skb, *rx_skb; dma_addr_t test_dma_addr; u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET); - u32 flags; + u32 Flags; int len, i, pkt_len; u8 *pkt_data; u32 filter_flags = 0; u32 misc1_flags = 0; int ret = 1; + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + if (netif_running(dev)) { nv_disable_irq(dev); filter_flags = readl(base + NvRegPacketFilterFlags); misc1_flags = readl(base + NvRegMisc1); - } else { - nv_txrx_reset(dev); - } + } + writel(NVREG_TXRXCTL_BM_DIS | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl); + pci_push(base); + udelay(NV_TXRX_RESET_DELAY); + writel(NVREG_TXRXCTL_BM_DIS | np->txrxctl_bits, base + NvRegTxRxControl); + pci_push(base); + /* reinit driver view of the rx queue */ set_bufsize(dev); nv_init_ring(dev); @@ -4827,11 +6218,12 @@ static int nv_loopback_test(struct net_device *dev) writel(np->rx_buf_sz, base + NvRegOffloadConfig); setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), - base + NvRegRingSizes); + base + NvRegRingSizes); pci_push(base); /* restart rx engine */ - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); /* setup packet for tx */ pkt_len = ETH_DATA_LEN; @@ -4842,59 +6234,64 @@ static int nv_loopback_test(struct net_device *dev) ret = 0; goto out; } - test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data, - skb_tailroom(tx_skb), - PCI_DMA_FROMDEVICE); + pkt_data = skb_put(tx_skb, pkt_len); for (i = 0; i < pkt_len; i++) pkt_data[i] = (u8)(i & 0xff); +#if NVVER > FEDORA7 + test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data, + skb_tailroom(tx_skb), PCI_DMA_TODEVICE); +#else + test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data, + tx_skb->end-tx_skb->data, PCI_DMA_TODEVICE); +#endif - if (!nv_optimized(np)) { - np->tx_ring.orig[0].buf = cpu_to_le32(test_dma_addr); - np->tx_ring.orig[0].flaglen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra); + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + np->tx_ring.orig[0].PacketBuffer = cpu_to_le32(test_dma_addr); + np->tx_ring.orig[0].FlagLen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra); } else { - np->tx_ring.ex[0].bufhigh = 
cpu_to_le32(dma_high(test_dma_addr)); - np->tx_ring.ex[0].buflow = cpu_to_le32(dma_low(test_dma_addr)); - np->tx_ring.ex[0].flaglen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra); + np->tx_ring.ex[0].PacketBufferHigh = cpu_to_le64(test_dma_addr) >> 32; + np->tx_ring.ex[0].PacketBufferLow = cpu_to_le64(test_dma_addr) & 0x0FFFFFFFF; + np->tx_ring.ex[0].FlagLen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra); } writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); pci_push(get_hwbase(dev)); - msleep(500); + nv_msleep(500); /* check for rx of the packet */ - if (!nv_optimized(np)) { - flags = le32_to_cpu(np->rx_ring.orig[0].flaglen); + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { + Flags = le32_to_cpu(np->rx_ring.orig[0].FlagLen); len = nv_descr_getlength(&np->rx_ring.orig[0], np->desc_ver); } else { - flags = le32_to_cpu(np->rx_ring.ex[0].flaglen); + Flags = le32_to_cpu(np->rx_ring.ex[0].FlagLen); len = nv_descr_getlength_ex(&np->rx_ring.ex[0], np->desc_ver); } - if (flags & NV_RX_AVAIL) { + if (Flags & NV_RX_AVAIL) { ret = 0; } else if (np->desc_ver == DESC_VER_1) { - if (flags & NV_RX_ERROR) + if (Flags & NV_RX_ERROR) ret = 0; } else { - if (flags & NV_RX2_ERROR) { + if (Flags & NV_RX2_ERROR) { ret = 0; } } - if (ret) { + if (ret) { if (len != pkt_len) { ret = 0; - dprintk(KERN_DEBUG "%s: loopback len mismatch %d vs %d\n", - dev->name, len, pkt_len); + dprintk(KERN_DEBUG "%s: loopback len mismatch %d vs %d\n", + dev->name, len, pkt_len); } else { rx_skb = np->rx_skb[0].skb; for (i = 0; i < pkt_len; i++) { if (rx_skb->data[i] != (u8)(i & 0xff)) { ret = 0; - dprintk(KERN_DEBUG "%s: loopback pattern check failed on byte %d\n", - dev->name, i); + dprintk(KERN_DEBUG "%s: loopback pattern check failed on byte %d\n", + dev->name, i); break; } } @@ -4903,16 +6300,24 @@ static int nv_loopback_test(struct net_device *dev) dprintk(KERN_DEBUG "%s: loopback - did not receive test packet\n", dev->name); } +#if NVVER > FEDORA7 + pci_unmap_page(np->pci_dev, test_dma_addr, + skb_end_pointer(tx_skb)-tx_skb->data, + PCI_DMA_TODEVICE); +#else pci_unmap_page(np->pci_dev, test_dma_addr, - (skb_end_pointer(tx_skb) - tx_skb->data), - PCI_DMA_TODEVICE); + tx_skb->end-tx_skb->data, + PCI_DMA_TODEVICE); +#endif dev_kfree_skb_any(tx_skb); out: /* stop engines */ - nv_stop_rxtx(dev); + nv_stop_rx(dev); + nv_stop_tx(dev); nv_txrx_reset(dev); /* drain rx queue */ - nv_drain_rxtx(dev); + nv_drain_rx(dev); + nv_drain_tx(dev); if (netif_running(dev)) { writel(misc1_flags, base + NvRegMisc1); @@ -4925,10 +6330,12 @@ static int nv_loopback_test(struct net_device *dev) static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 *buffer) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); int result; - memset(buffer, 0, nv_get_sset_count(dev, ETH_SS_TEST)*sizeof(u64)); + memset(buffer, 0, nv_self_test_count(dev)*sizeof(u64)); + + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); if (!nv_link_test(dev)) { test->flags |= ETH_TEST_FL_FAILED; @@ -4938,10 +6345,17 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 if (test->flags & ETH_TEST_FL_OFFLINE) { if (netif_running(dev)) { netif_stop_queue(dev); -#ifdef CONFIG_FORCEDETH_NAPI - napi_disable(&np->napi); -#endif + if(napi) +#ifdef NV_NAPI_POLL_LIST + napi_disable(&np->napi); +#else + netif_poll_disable(dev); +#endif +#if NVVER > FEDORA5 netif_tx_lock_bh(dev); +#else + 
spin_lock_bh(&dev->xmit_lock); +#endif spin_lock_irq(&np->lock); nv_disable_hw_interrupts(dev, np->irqmask); if (!(np->msi_flags & NV_MSI_X_ENABLED)) { @@ -4950,12 +6364,18 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus); } /* stop engines */ - nv_stop_rxtx(dev); + nv_stop_rx(dev); + nv_stop_tx(dev); nv_txrx_reset(dev); /* drain rx queue */ - nv_drain_rxtx(dev); + nv_drain_rx(dev); + nv_drain_tx(dev); spin_unlock_irq(&np->lock); +#if NVVER > FEDORA5 netif_tx_unlock_bh(dev); +#else + spin_unlock_bh(&dev->xmit_lock); +#endif } if (!nv_register_test(dev)) { @@ -4989,16 +6409,20 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 writel(np->rx_buf_sz, base + NvRegOffloadConfig); setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), - base + NvRegRingSizes); + base + NvRegRingSizes); pci_push(base); writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); pci_push(base); /* restart rx engine */ - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); netif_start_queue(dev); -#ifdef CONFIG_FORCEDETH_NAPI - napi_enable(&np->napi); -#endif + if(napi) +#ifdef NV_NAPI_POLL_LIST + napi_enable(&np->napi); +#else + netif_poll_enable(dev); +#endif nv_enable_hw_interrupts(dev, np->irqmask); } } @@ -5007,16 +6431,16 @@ static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 static void nv_get_strings(struct net_device *dev, u32 stringset, u8 *buffer) { switch (stringset) { - case ETH_SS_STATS: - memcpy(buffer, &nv_estats_str, nv_get_sset_count(dev, ETH_SS_STATS)*sizeof(struct nv_ethtool_str)); - break; - case ETH_SS_TEST: - memcpy(buffer, &nv_etests_str, nv_get_sset_count(dev, ETH_SS_TEST)*sizeof(struct nv_ethtool_str)); - break; + case ETH_SS_STATS: + memcpy(buffer, &nv_estats_str, nv_get_stats_count(dev)*sizeof(struct nv_ethtool_str)); + break; + case ETH_SS_TEST: + memcpy(buffer, &nv_etests_str, nv_self_test_count(dev)*sizeof(struct nv_ethtool_str)); + break; } } -static const struct ethtool_ops ops = { +static struct ethtool_ops ops = { .get_drvinfo = nv_get_drvinfo, .get_link = ethtool_op_get_link, .get_wol = nv_get_wol, @@ -5026,18 +6450,29 @@ static const struct ethtool_ops ops = { .get_regs_len = nv_get_regs_len, .get_regs = nv_get_regs, .nway_reset = nv_nway_reset, - .set_tso = nv_set_tso, +#ifdef NV_ETHTOOL_PERM_ADDR +#if NVVER > SUSE10 + .get_perm_addr = ethtool_op_get_perm_addr, +#endif +#endif .get_ringparam = nv_get_ringparam, .set_ringparam = nv_set_ringparam, .get_pauseparam = nv_get_pauseparam, .set_pauseparam = nv_set_pauseparam, .get_rx_csum = nv_get_rx_csum, .set_rx_csum = nv_set_rx_csum, + .get_tx_csum = ethtool_op_get_tx_csum, .set_tx_csum = nv_set_tx_csum, + .get_sg = ethtool_op_get_sg, .set_sg = nv_set_sg, +#ifdef NETIF_F_TSO + .get_tso = ethtool_op_get_tso, + .set_tso = nv_set_tso, +#endif .get_strings = nv_get_strings, + .get_stats_count = nv_get_stats_count, .get_ethtool_stats = nv_get_ethtool_stats, - .get_sset_count = nv_get_sset_count, + .self_test_count = nv_self_test_count, .self_test = nv_self_test, }; @@ -5053,16 +6488,25 @@ static void nv_vlan_rx_register(struct net_device *dev, struct vlan_group *grp) if (grp) { /* enable vlan on MAC */ np->txrxctl_bits |= NVREG_TXRXCTL_VLANSTRIP | NVREG_TXRXCTL_VLANINS; + /* vlan is dependent on rx checksum */ + np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK; } 
else { /* disable vlan on MAC */ np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANSTRIP; np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANINS; + if (!np->rx_csum) + np->txrxctl_bits &= ~NVREG_TXRXCTL_RXCHECK; } writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl); spin_unlock_irq(&np->lock); -} +}; + +static void nv_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid) +{ + /* nothing to do */ +}; /* The mgmt unit and driver use a semaphore to access the phy during init */ static int nv_mgmt_acquire_sema(struct net_device *dev) @@ -5073,13 +6517,17 @@ static int nv_mgmt_acquire_sema(struct net_device *dev) for (i = 0; i < 10; i++) { mgmt_sema = readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_MGMT_SEMA_MASK; - if (mgmt_sema == NVREG_XMITCTL_MGMT_SEMA_FREE) + if (mgmt_sema == NVREG_XMITCTL_MGMT_SEMA_FREE) { + dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: sema is free\n"); break; - msleep(500); + } + nv_msleep(500); } - if (mgmt_sema != NVREG_XMITCTL_MGMT_SEMA_FREE) + if (mgmt_sema != NVREG_XMITCTL_MGMT_SEMA_FREE) { + dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: sema is not free\n"); return 0; + } for (i = 0; i < 2; i++) { tx_ctrl = readl(base + NvRegTransmitterControl); @@ -5089,41 +6537,121 @@ static int nv_mgmt_acquire_sema(struct net_device *dev) /* verify that semaphore was acquired */ tx_ctrl = readl(base + NvRegTransmitterControl); if (((tx_ctrl & NVREG_XMITCTL_HOST_SEMA_MASK) == NVREG_XMITCTL_HOST_SEMA_ACQ) && - ((tx_ctrl & NVREG_XMITCTL_MGMT_SEMA_MASK) == NVREG_XMITCTL_MGMT_SEMA_FREE)) + ((tx_ctrl & NVREG_XMITCTL_MGMT_SEMA_MASK) == NVREG_XMITCTL_MGMT_SEMA_FREE)) { + dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: acquired sema\n"); return 1; - else + } else udelay(50); } + dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: exit\n"); return 0; } +static int nv_mac_address_check(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + int i; + + /* Check against known invalid addresses */ +#if NVVER > SUSE10 + for (i = 0; i < NV_NUM_FACTORY_ADDRESS; i++) { + if (dev->perm_addr[0] == factory_address[i][0] && + dev->perm_addr[1] == factory_address[i][1] && + dev->perm_addr[2] == factory_address[i][2] && + dev->perm_addr[3] == factory_address[i][3] && + dev->perm_addr[4] == factory_address[i][4] && + dev->perm_addr[5] == factory_address[i][5]) + goto bad_address; + if (dev->perm_addr[0] == factory_address[i][5] && + dev->perm_addr[1] == factory_address[i][4] && + dev->perm_addr[2] == factory_address[i][3] && + dev->perm_addr[3] == factory_address[i][2] && + dev->perm_addr[4] == factory_address[i][1] && + dev->perm_addr[5] == factory_address[i][0]) + goto bad_address; + } + if (!is_valid_ether_addr(dev->perm_addr)) + goto bad_address; +#else + for (i = 0; i < NV_NUM_FACTORY_ADDRESS; i++) { + if (dev->dev_addr[0] == factory_address[i][0] && + dev->dev_addr[1] == factory_address[i][1] && + dev->dev_addr[2] == factory_address[i][2] && + dev->dev_addr[3] == factory_address[i][3] && + dev->dev_addr[4] == factory_address[i][4] && + dev->dev_addr[5] == factory_address[i][5]) + goto bad_address; + if (dev->dev_addr[0] == factory_address[i][5] && + dev->dev_addr[1] == factory_address[i][4] && + dev->dev_addr[2] == factory_address[i][3] && + dev->dev_addr[3] == factory_address[i][2] && + dev->dev_addr[4] == factory_address[i][1] && + dev->dev_addr[5] == factory_address[i][0]) + goto bad_address; + } + + if (!is_valid_ether_addr(dev->dev_addr)) + goto bad_address; +#endif + + return 1; + +bad_address: +#if NVVER > SUSE10 + printk(KERN_ERR "%s:Invalid Mac address 
detected: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(np->pci_dev),dev->perm_addr[0], dev->perm_addr[1], dev->perm_addr[2], dev->perm_addr[3], dev->perm_addr[4], dev->perm_addr[5]); +#else + printk(KERN_ERR "%s:Invalid Mac address detected: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(np->pci_dev),dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2], dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]); +#endif + if (np->driver_data & DEV_MACADDRESS_CHECK) { + return 0; + } else { + printk(KERN_ERR "Please complain to your hardware vendor. Switching to a random MAC.\n"); + dev->dev_addr[0] = 0x00; + dev->dev_addr[1] = 0x00; + dev->dev_addr[2] = 0x6c; + get_random_bytes(&dev->dev_addr[3], 3); + return 1; + } +} + static int nv_open(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); int ret = 1; + u32 tx_ctrl; int oom, i; u32 low; + u32 temp; dprintk(KERN_DEBUG "nv_open: begin\n"); /* erase previous misconfiguration */ if (np->driver_data & DEV_HAS_POWER_CNTRL) nv_mac_reset(dev); + /* stop adapter: ignored, 4.3 seems to be overkill */ writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA); writel(0, base + NvRegMulticastAddrB); writel(NVREG_MCASTMASKA_NONE, base + NvRegMulticastMaskA); writel(NVREG_MCASTMASKB_NONE, base + NvRegMulticastMaskB); writel(0, base + NvRegPacketFilterFlags); - writel(0, base + NvRegTransmitterControl); + if (np->mac_in_use){ + tx_ctrl = readl(base + NvRegTransmitterControl); + tx_ctrl &= ~NVREG_XMITCTL_START; + }else + tx_ctrl = 0; + writel(tx_ctrl, base + NvRegTransmitterControl); writel(0, base + NvRegReceiverControl); writel(0, base + NvRegAdapterControl); - if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE) + if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE){ writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame); + if (np->driver_data & DEV_HAS_PAUSEFRAME_TX_V3) + writel(NVREG_TX_PAUSEFRAME2_LIMIT_ENABLE, base + NvRegTxPauseFrame2); + } /* initialize descriptor rings */ set_bufsize(dev); @@ -5132,15 +6660,16 @@ static int nv_open(struct net_device *dev) writel(0, base + NvRegLinkSpeed); writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll); nv_txrx_reset(dev); - writel(0, base + NvRegUnknownSetupReg6); + writel(0, base + NvRegSoftwareTimerCtrl); np->in_shutdown = 0; /* give hw rings */ setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING); writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT), - base + NvRegRingSizes); + base + NvRegRingSizes); + /* continue setup */ writel(np->linkspeed, base + NvRegLinkSpeed); if (np->desc_ver == DESC_VER_1) writel(NVREG_TX_WM_DESC1_DEFAULT, base + NvRegTxWatermark); @@ -5158,37 +6687,50 @@ static int nv_open(struct net_device *dev) writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus); writel(NVREG_MIISTAT_MASK_ALL, base + NvRegMIIStatus); + /* continue setup */ writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1); writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus); writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags); writel(np->rx_buf_sz, base + NvRegOffloadConfig); writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus); - - get_random_bytes(&low, sizeof(low)); - low &= NVREG_SLOTTIME_MASK; - if (np->desc_ver == DESC_VER_1) { - writel(low|NVREG_SLOTTIME_DEFAULT, base + NvRegSlotTime); - } else { - if (!(np->driver_data & DEV_HAS_GEAR_MODE)) { + + rdtscl(low); 
+ if (np->desc_ver == DESC_VER_1){ + writel((low & NVREG_SLOTTIME_MASK)| NVREG_SLOTTIME_DEFAULT, base + NvRegSlotTime); + }else{ + temp = NVREG_SLOTTIME_10_100_FULL; + if(!(np->driver_data & DEV_HAS_GEAR_MODE)){ /* setup legacy backoff */ - writel(NVREG_SLOTTIME_LEGBF_ENABLED|NVREG_SLOTTIME_10_100_FULL|low, base + NvRegSlotTime); - } else { - writel(NVREG_SLOTTIME_10_100_FULL, base + NvRegSlotTime); + low = low & 0xff; + temp |= (NVREG_CTRL_LEGBF_ENABLED|NVREG_SLOTTIME_10_100_FULL|low); + writel(temp, base + NvRegSlotTime); + }else{ + writel(temp, base + NvRegSlotTime); nv_gear_backoff_reseed(dev); } } writel(NVREG_TX_DEFERRAL_DEFAULT, base + NvRegTxDeferral); writel(NVREG_RX_DEFERRAL_DEFAULT, base + NvRegRxDeferral); - if (poll_interval == -1) { - if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) - writel(NVREG_POLL_DEFAULT_THROUGHPUT, base + NvRegPollingInterval); - else - writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval); + + if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT){ + /* throughput mode */ + np->interrupt_moderation = 0 ; + }else{ + /* polling mode */ + np->interrupt_moderation = 1 ; + if(poll_interval==0) + poll_interval=1; + if (poll_interval == -1) { + /* Initially set interval for 10/100 speed. Once we determine speed,we adjust value */ + np->swtimer_interval = NV_SW_TIMER_INTERVAL_NON_1000_DEFAULT; + } + else{ + np->swtimer_interval = poll_interval & 0xFFFF; + } + SET_HW_TIMER_INTERVAL(base,np->swtimer_interval); } - else - writel(poll_interval & 0xFFFF, base + NvRegPollingInterval); - writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6); + writel((np->phyaddr << NVREG_ADAPTCTL_PHYSHIFT)|NVREG_ADAPTCTL_PHYVALID|NVREG_ADAPTCTL_RUNNING, base + NvRegAdapterControl); writel(NVREG_MIISPEED_BIT8|NVREG_MIIDELAY, base + NvRegMIISpeed); @@ -5236,54 +6778,69 @@ static int nv_open(struct net_device *dev) * to init hw */ np->linkspeed = 0; ret = nv_update_linkspeed(dev); - nv_start_rxtx(dev); + nv_start_rx(dev); + nv_start_tx(dev); netif_start_queue(dev); -#ifdef CONFIG_FORCEDETH_NAPI - napi_enable(&np->napi); -#endif + if(napi) +#ifdef NV_NAPI_POLL_LIST + napi_enable(&np->napi); +#else + netif_poll_enable(dev); +#endif if (ret) { netif_carrier_on(dev); } else { - printk(KERN_INFO "%s: no link during initialization.\n", dev->name); + dprintk(KERN_DEBUG "%s: no link during initialization.\n", dev->name); netif_carrier_off(dev); } if (oom) mod_timer(&np->oom_kick, jiffies + OOM_REFILL); /* start statistics timer */ - if (np->driver_data & (DEV_HAS_STATISTICS_V1|DEV_HAS_STATISTICS_V2)) - mod_timer(&np->stats_poll, - round_jiffies(jiffies + STATS_INTERVAL)); + if(np->driver_data & (DEV_HAS_STATISTICS_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_STATISTICS_V3)) +#ifdef ROUND_JIFFIES + mod_timer(&np->stats_poll, round_jiffies(jiffies + STATS_INTERVAL)); +#else + mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL); +#endif spin_unlock_irq(&np->lock); return 0; out_drain: - nv_drain_rxtx(dev); + drain_ring(dev); return ret; } static int nv_close(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base; + dprintk(KERN_DEBUG "nv_close: begin\n"); spin_lock_irq(&np->lock); np->in_shutdown = 1; spin_unlock_irq(&np->lock); -#ifdef CONFIG_FORCEDETH_NAPI - napi_disable(&np->napi); -#endif + if(napi) +#ifdef NV_NAPI_POLL_LIST + napi_disable(&np->napi); +#else + netif_poll_disable(dev); +#endif +#if NVVER > RHEL3 synchronize_irq(np->pci_dev->irq); +#else + synchronize_irq(); +#endif del_timer_sync(&np->oom_kick); - 
del_timer_sync(&np->nic_poll); del_timer_sync(&np->stats_poll); netif_stop_queue(dev); spin_lock_irq(&np->lock); - nv_stop_rxtx(dev); + nv_stop_tx(dev); + nv_stop_rx(dev); nv_txrx_reset(dev); /* disable interrupts on the nic or we will lock up */ @@ -5296,12 +6853,10 @@ static int nv_close(struct net_device *dev) nv_free_irq(dev); - nv_drain_rxtx(dev); + drain_ring(dev); - if (np->wolenabled) { - writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags); + if (np->wolenabled) nv_start_rx(dev); - } /* FIXME: power down nic */ @@ -5315,40 +6870,42 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i unsigned long addr; u8 __iomem *base; int err, i; - u32 powerstate, txreg; - u32 phystate_orig = 0, phystate; + u32 powerstate, phystate_orig = 0, phystate = 0, txreg,reg,mii_status; int phyinitialized = 0; - DECLARE_MAC_BUF(mac); - static int printed_version; - - if (!printed_version++) - printk(KERN_INFO "%s: Reverse Engineered nForce ethernet" - " driver. Version %s.\n", DRV_NAME, FORCEDETH_VERSION); + /* modify network device class id */ + quirk_nforce_network_class(pci_dev); dev = alloc_etherdev(sizeof(struct fe_priv)); err = -ENOMEM; if (!dev) goto out; - np = netdev_priv(dev); - np->dev = dev; + dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__); + np = get_nvpriv(dev); np->pci_dev = pci_dev; spin_lock_init(&np->lock); + spin_lock_init(&np->timer_lock); +#ifdef NV_SET_MODULE_OWNER + SET_MODULE_OWNER(dev); +#endif SET_NETDEV_DEV(dev, &pci_dev->dev); init_timer(&np->oom_kick); np->oom_kick.data = (unsigned long) dev; np->oom_kick.function = &nv_do_rx_refill; /* timer handler */ - init_timer(&np->nic_poll); - np->nic_poll.data = (unsigned long) dev; - np->nic_poll.function = &nv_do_nic_poll; /* timer handler */ + init_timer(&np->stats_poll); np->stats_poll.data = (unsigned long) dev; np->stats_poll.function = &nv_do_stats_poll; /* timer handler */ + tasklet_init(&np->nic_poll,nv_do_nic_poll,(unsigned long)dev); + err = pci_enable_device(pci_dev); - if (err) + if (err) { + printk(KERN_INFO "forcedeth: pci_enable_dev failed (%d) for device %s\n", + err, pci_name(pci_dev)); goto out_free; + } pci_set_master(pci_dev); @@ -5356,7 +6913,7 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i if (err < 0) goto out_disable; - if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V2)) + if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V2|DEV_HAS_STATISTICS_V3)) np->register_size = NV_PCI_REGSZ_VER3; else if (id->driver_data & DEV_HAS_STATISTICS_V1) np->register_size = NV_PCI_REGSZ_VER2; @@ -5368,8 +6925,8 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { dprintk(KERN_DEBUG "%s: resource %d start %p len %ld flags 0x%08lx.\n", pci_name(pci_dev), i, (void*)pci_resource_start(pci_dev, i), - pci_resource_len(pci_dev, i), - pci_resource_flags(pci_dev, i)); + (long)pci_resource_len(pci_dev, i), + (long)pci_resource_flags(pci_dev, i)); if (pci_resource_flags(pci_dev, i) & IORESOURCE_MEM && pci_resource_len(pci_dev, i) >= np->register_size) { addr = pci_resource_start(pci_dev, i); @@ -5377,8 +6934,8 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i } } if (i == DEVICE_COUNT_RESOURCE) { - dev_printk(KERN_INFO, &pci_dev->dev, - "Couldn't find register window\n"); + printk(KERN_INFO "forcedeth: Couldn't find register window for device %s.\n", + 
pci_name(pci_dev)); goto out_relreg; } @@ -5393,15 +6950,19 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i np->desc_ver = DESC_VER_3; np->txrxctl_bits = NVREG_TXRXCTL_DESC_3; if (dma_64bit) { - if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) - dev_printk(KERN_INFO, &pci_dev->dev, - "64-bit DMA failed, using 32-bit addressing\n"); - else + if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) { + printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n", + pci_name(pci_dev)); + } else { dev->features |= NETIF_F_HIGHDMA; + printk(KERN_INFO "forcedeth: using HIGHDMA\n"); + } +#if NVVER > RHEL3 if (pci_set_consistent_dma_mask(pci_dev, DMA_39BIT_MASK)) { - dev_printk(KERN_INFO, &pci_dev->dev, - "64-bit DMA (consistent) failed, using 32-bit ring buffers\n"); + printk(KERN_INFO "forcedeth: 64-bit DMA (consistent) failed, using 32-bit ring buffers for device %s.\n", + pci_name(pci_dev)); } +#endif } } else if (id->driver_data & DEV_HAS_LARGEDESC) { /* packet format 2: supports jumbo frames */ @@ -5417,11 +6978,15 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i if (id->driver_data & DEV_HAS_LARGEDESC) np->pkt_limit = NV_PKTLIMIT_2; + dev->mtu = ETH_DATA_LEN; + if (id->driver_data & DEV_HAS_CHECKSUM) { np->rx_csum = 1; np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK; - dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG; + dev->features |= NETIF_F_IP_CSUM|NETIF_F_SG; +#ifdef NETIF_F_TSO dev->features |= NETIF_F_TSO; +#endif } np->vlanctl_bits = 0; @@ -5429,6 +6994,9 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i np->vlanctl_bits = NVREG_VLANCONTROL_ENABLE; dev->features |= NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX; dev->vlan_rx_register = nv_vlan_rx_register; + dev->vlan_rx_kill_vid = nv_vlan_rx_kill_vid; + /* vlan needs rx checksum support, so force it */ + np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK; } np->msi_flags = 0; @@ -5439,49 +7007,59 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i np->msi_flags |= NV_MSI_X_CAPABLE; } - np->pause_flags = NV_PAUSEFRAME_RX_CAPABLE | NV_PAUSEFRAME_RX_REQ | NV_PAUSEFRAME_AUTONEG; + np->pause_flags = NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_CAPABLE|NV_PAUSEFRAME_RX_REQ; if ((id->driver_data & DEV_HAS_PAUSEFRAME_TX_V1) || - (id->driver_data & DEV_HAS_PAUSEFRAME_TX_V2) || - (id->driver_data & DEV_HAS_PAUSEFRAME_TX_V3)) { - np->pause_flags |= NV_PAUSEFRAME_TX_CAPABLE | NV_PAUSEFRAME_TX_REQ; + (id->driver_data & DEV_HAS_PAUSEFRAME_TX_V2)|| + (id->driver_data & DEV_HAS_PAUSEFRAME_TX_V3)) + { + np->pause_flags |= NV_PAUSEFRAME_TX_CAPABLE|NV_PAUSEFRAME_TX_REQ; } - err = -ENOMEM; np->base = ioremap(addr, np->register_size); if (!np->base) goto out_relreg; dev->base_addr = (unsigned long)np->base; + /* ungate all clocks before we access any registers*/ + nv_pmctrl2_gatecoretxrx(dev,GATE_OFF); + /* stop engines */ + nv_stop_rx(dev); + nv_stop_tx(dev); + nv_txrx_reset(dev); + dev->irq = pci_dev->irq; np->rx_ring_size = RX_RING_DEFAULT; np->tx_ring_size = TX_RING_DEFAULT; + np->tx_limit_stop = TX_RING_DEFAULT - TX_LIMIT_DIFFERENCE; + np->tx_limit_start = TX_RING_DEFAULT - TX_LIMIT_DIFFERENCE - 1; - if (!nv_optimized(np)) { + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) { np->rx_ring.orig = pci_alloc_consistent(pci_dev, - sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size), - &np->ring_addr); + sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size), + &np->ring_addr); if 
(!np->rx_ring.orig) goto out_unmap; np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size]; } else { np->rx_ring.ex = pci_alloc_consistent(pci_dev, - sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size), - &np->ring_addr); + sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size), + &np->ring_addr); if (!np->rx_ring.ex) goto out_unmap; np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size]; } - np->rx_skb = kcalloc(np->rx_ring_size, sizeof(struct nv_skb_map), GFP_KERNEL); - np->tx_skb = kcalloc(np->tx_ring_size, sizeof(struct nv_skb_map), GFP_KERNEL); + np->rx_skb = kmalloc(sizeof(struct nv_skb_map) * np->rx_ring_size, GFP_KERNEL); + np->tx_skb = kmalloc(sizeof(struct nv_skb_map) * np->tx_ring_size, GFP_KERNEL); if (!np->rx_skb || !np->tx_skb) goto out_freering; + memset(np->rx_skb, 0, sizeof(struct nv_skb_map) * np->rx_ring_size); + memset(np->tx_skb, 0, sizeof(struct nv_skb_map) * np->tx_ring_size); dev->open = nv_open; dev->stop = nv_close; - - if (!nv_optimized(np)) + if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) dev->hard_start_xmit = nv_start_xmit; else dev->hard_start_xmit = nv_start_xmit_optimized; @@ -5489,12 +7067,27 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i dev->change_mtu = nv_change_mtu; dev->set_mac_address = nv_set_mac_address; dev->set_multicast_list = nv_set_multicast; + +#if NVVER < SLES9 + dev->do_ioctl = nv_ioctl; +#endif + +#if NVVER > RHEL3 #ifdef CONFIG_NET_POLL_CONTROLLER dev->poll_controller = nv_poll_controller; #endif -#ifdef CONFIG_FORCEDETH_NAPI - netif_napi_add(dev, &np->napi, nv_napi_poll, RX_WORK_PER_LOOP); +#else + dev->poll_controller = nv_poll_controller; +#endif + + if(napi) +#ifdef NV_NAPI_POLL_LIST + netif_napi_add(dev, &np->napi, nv_napi_poll, RX_WORK_PER_LOOP); +#else + dev->poll = &nv_napi_poll; + dev->weight = RX_WORK_PER_LOOP; #endif + SET_ETHTOOL_OPS(dev, &ops); dev->tx_timeout = nv_tx_timeout; dev->watchdog_timeo = NV_WATCHDOG_TIMEO; @@ -5508,15 +7101,8 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i /* check the workaround bit for correct mac address order */ txreg = readl(base + NvRegTransmitPoll); - if (id->driver_data & DEV_HAS_CORRECT_MACADDR) { - /* mac address is already in correct order */ - dev->dev_addr[0] = (np->orig_mac[0] >> 0) & 0xff; - dev->dev_addr[1] = (np->orig_mac[0] >> 8) & 0xff; - dev->dev_addr[2] = (np->orig_mac[0] >> 16) & 0xff; - dev->dev_addr[3] = (np->orig_mac[0] >> 24) & 0xff; - dev->dev_addr[4] = (np->orig_mac[1] >> 0) & 0xff; - dev->dev_addr[5] = (np->orig_mac[1] >> 8) & 0xff; - } else if (txreg & NVREG_TRANSMITPOLL_MAC_ADDR_REV) { + if ((txreg & NVREG_TRANSMITPOLL_MAC_ADDR_REV) || + (id->driver_data & DEV_HAS_CORRECT_MACADDR)) { /* mac address is already in correct order */ dev->dev_addr[0] = (np->orig_mac[0] >> 0) & 0xff; dev->dev_addr[1] = (np->orig_mac[0] >> 8) & 0xff; @@ -5524,51 +7110,36 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i dev->dev_addr[3] = (np->orig_mac[0] >> 24) & 0xff; dev->dev_addr[4] = (np->orig_mac[1] >> 0) & 0xff; dev->dev_addr[5] = (np->orig_mac[1] >> 8) & 0xff; - /* - * Set orig mac address back to the reversed version. - * This flag will be cleared during low power transition. - * Therefore, we should always put back the reversed address. 
- */ - np->orig_mac[0] = (dev->dev_addr[5] << 0) + (dev->dev_addr[4] << 8) + - (dev->dev_addr[3] << 16) + (dev->dev_addr[2] << 24); - np->orig_mac[1] = (dev->dev_addr[1] << 0) + (dev->dev_addr[0] << 8); } else { - /* need to reverse mac address to correct order */ dev->dev_addr[0] = (np->orig_mac[1] >> 8) & 0xff; dev->dev_addr[1] = (np->orig_mac[1] >> 0) & 0xff; dev->dev_addr[2] = (np->orig_mac[0] >> 24) & 0xff; dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff; dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff; dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff; + /* set permanent address to be correct aswell */ + np->orig_mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) + + (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24); + np->orig_mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8); writel(txreg|NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll); } +#if NVVER > SUSE10 memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len); +#endif + if(!nv_mac_address_check(dev)) + goto out_error; - if (!is_valid_ether_addr(dev->perm_addr)) { - /* - * Bad mac address. At least one bios sets the mac address - * to 01:23:45:67:89:ab - */ - dev_printk(KERN_ERR, &pci_dev->dev, - "Invalid Mac address detected: %s\n", - print_mac(mac, dev->dev_addr)); - dev_printk(KERN_ERR, &pci_dev->dev, - "Please complain to your hardware vendor. Switching to a random MAC.\n"); - dev->dev_addr[0] = 0x00; - dev->dev_addr[1] = 0x00; - dev->dev_addr[2] = 0x6c; - get_random_bytes(&dev->dev_addr[3], 3); - } - - dprintk(KERN_DEBUG "%s: MAC Address %s\n", - pci_name(pci_dev), print_mac(mac, dev->dev_addr)); - + dprintk(KERN_DEBUG "%s: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(pci_dev), + dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2], + dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]); /* set mac address */ nv_copy_mac_to_hw(dev); /* disable WOL */ writel(0, base + NvRegWakeUpFlags); - np->wolenabled = 0; + np->wolenabled = NV_WOL_DISABLED; + + pci_read_config_byte(pci_dev, PCI_REVISION_ID, &np->revision_id); if (id->driver_data & DEV_HAS_POWER_CNTRL) { @@ -5576,8 +7147,8 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i powerstate = readl(base + NvRegPowerState2); powerstate &= ~NVREG_POWERSTATE2_POWERUP_MASK; if ((id->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 || - id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) && - pci_dev->revision >= 0xA3) + id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) && + np->revision_id >= 0xA3) powerstate |= NVREG_POWERSTATE2_POWERUP_REV_A3; writel(powerstate, base + NvRegPowerState2); } @@ -5619,7 +7190,7 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i id->device == PCI_DEVICE_ID_NVIDIA_NVENET_37 || id->device == PCI_DEVICE_ID_NVIDIA_NVENET_38 || id->device == PCI_DEVICE_ID_NVIDIA_NVENET_39) && - pci_dev->revision >= 0xA2) + np->revision_id >= 0xA2) np->tx_limit = 0; } @@ -5638,15 +7209,21 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i if (readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_SYNC_PHY_INIT) { np->mac_in_use = readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_MGMT_ST; dprintk(KERN_INFO "%s: mgmt unit is running. mac in use %x.\n", pci_name(pci_dev), np->mac_in_use); - if (nv_mgmt_acquire_sema(dev)) { - /* management unit setup the phy already? 
*/ - if ((readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_SYNC_MASK) == - NVREG_XMITCTL_SYNC_PHY_INIT) { - /* phy is inited by mgmt unit */ - phyinitialized = 1; - dprintk(KERN_INFO "%s: Phy already initialized by mgmt unit.\n", pci_name(pci_dev)); - } else { - /* we need to init the phy */ + for (i = 0; i < 5000; i++) { + nv_msleep(1); + if (nv_mgmt_acquire_sema(dev)) { + /* management unit setup the phy already? */ + if ((readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_SYNC_MASK) == + NVREG_XMITCTL_SYNC_PHY_INIT) { + if(np->mac_in_use){ + /* phy is inited by mgmt unit */ + phyinitialized = 1; + dprintk(KERN_INFO "%s: Phy already initialized by mgmt unit.\n", pci_name(pci_dev)); + } + } else { + /* we need to init the phy */ + } + break; } } } @@ -5672,7 +7249,7 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i id1 = (id1 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT; id2 = (id2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT; dprintk(KERN_DEBUG "%s: open: Found PHY %04x:%04x at address %d.\n", - pci_name(pci_dev), id1, id2, phyaddr); + pci_name(pci_dev), id1, id2, phyaddr); np->phyaddr = phyaddr; np->phy_oui = id1 | id2; @@ -5686,60 +7263,61 @@ static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_i break; } if (i == 33) { - dev_printk(KERN_INFO, &pci_dev->dev, - "open: Could not find a valid PHY.\n"); + printk(KERN_INFO "%s: open: Could not find a valid PHY.\n", + pci_name(pci_dev)); goto out_error; } - if (!phyinitialized) { + if (!phyinitialized) { /* reset it */ + np->autoneg = AUTONEG_ENABLE; phy_init(dev); } else { /* see if it is a gigabit phy */ - u32 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); + mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); if (mii_status & PHY_GIGABIT) { np->gigabit = PHY_GIGABIT; } + reg = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); + np->autoneg = (reg & BMCR_ANENABLE ? AUTONEG_ENABLE:AUTONEG_DISABLE); + + reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); + reg &= ~(ADVERTISE_PAUSE_CAP|ADVERTISE_PAUSE_ASYM); + + if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) { + reg |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM; + np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE; + } + if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) { + reg |= ADVERTISE_PAUSE_ASYM; + np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE; + } + + if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) { + printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev)); + return PHY_ERROR; + } + if(np->autoneg == AUTONEG_DISABLE){ + np->fixed_mode = reg; + } + } + + if (np->phy_oui== PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011 && np->pci_dev->subsystem_vendor ==0x108E && np->pci_dev->subsystem_device==0x6676 ) { + nv_LED_on(dev); } /* set default link speed settings */ np->linkspeed = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10; np->duplex = 0; - np->autoneg = 1; err = register_netdev(dev); if (err) { - dev_printk(KERN_INFO, &pci_dev->dev, - "unable to register netdev: %d\n", err); + printk(KERN_INFO "forcedeth: unable to register netdev: %d\n", err); goto out_error; } - - dev_printk(KERN_INFO, &pci_dev->dev, "ifname %s, PHY OUI 0x%x @ %d, " - "addr %2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x\n", - dev->name, - np->phy_oui, - np->phyaddr, - dev->dev_addr[0], - dev->dev_addr[1], - dev->dev_addr[2], - dev->dev_addr[3], - dev->dev_addr[4], - dev->dev_addr[5]); - - dev_printk(KERN_INFO, &pci_dev->dev, "%s%s%s%s%s%s%s%s%s%sdesc-v%u\n", - dev->features & NETIF_F_HIGHDMA ? 
"highdma " : "", - dev->features & (NETIF_F_HW_CSUM | NETIF_F_SG) ? - "csum " : "", - dev->features & (NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX) ? - "vlan " : "", - id->driver_data & DEV_HAS_POWER_CNTRL ? "pwrctl " : "", - id->driver_data & DEV_HAS_MGMT_UNIT ? "mgmt " : "", - id->driver_data & DEV_NEED_TIMERIRQ ? "timirq " : "", - np->gigabit == PHY_GIGABIT ? "gbit " : "", - np->need_linktimer ? "lnktim " : "", - np->msi_flags & NV_MSI_CAPABLE ? "msi " : "", - np->msi_flags & NV_MSI_X_CAPABLE ? "msi-x " : "", - np->desc_ver); + printk(KERN_INFO "%s: forcedeth.c: subsystem: %05x:%04x bound to %s\n", + dev->name, pci_dev->subsystem_vendor, pci_dev->subsystem_device, + pci_name(pci_dev)); return 0; @@ -5763,7 +7341,7 @@ out: static void nv_restore_phy(struct net_device *dev) { - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u16 phy_reserved, mii_control; if (np->phy_oui == PHY_OUI_REALTEK && @@ -5782,26 +7360,38 @@ static void nv_restore_phy(struct net_device *dev) mii_rw(dev, np->phyaddr, MII_BMCR, mii_control); } } - + +#ifdef CONFIG_PM +static void nv_set_low_speed(struct net_device *dev); +#endif static void __devexit nv_remove(struct pci_dev *pci_dev) { struct net_device *dev = pci_get_drvdata(pci_dev); - struct fe_priv *np = netdev_priv(dev); + struct fe_priv *np = get_nvpriv(dev); u8 __iomem *base = get_hwbase(dev); + u32 tx_ctrl; + if (np->phy_oui== PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011 && np->pci_dev->subsystem_vendor ==0x108E && np->pci_dev->subsystem_device==0x6676) { + nv_LED_off(dev); + } unregister_netdev(dev); - /* special op: write back the misordered MAC address - otherwise * the next nv_probe would see a wrong address. */ writel(np->orig_mac[0], base + NvRegMacAddrA); writel(np->orig_mac[1], base + NvRegMacAddrB); - writel(readl(base + NvRegTransmitPoll) & ~NVREG_TRANSMITPOLL_MAC_ADDR_REV, - base + NvRegTransmitPoll); + + /* relinquish control of the semaphore */ + if (np->mac_in_use) { + tx_ctrl = readl(base + NvRegTransmitterControl); + tx_ctrl &= ~NVREG_XMITCTL_HOST_SEMA_MASK; + writel(tx_ctrl, base + NvRegTransmitterControl); + } else if (!np->wolenabled) + nv_pmctrl2_gatecoretxrx(dev,GATE_ON); /* restore any phy related changes */ nv_restore_phy(dev); - + /* free all structures */ free_rings(dev); iounmap(get_hwbase(dev)); @@ -5811,58 +7401,6 @@ static void __devexit nv_remove(struct pci_dev *pci_dev) pci_set_drvdata(pci_dev, NULL); } -#ifdef CONFIG_PM -static int nv_suspend(struct pci_dev *pdev, pm_message_t state) -{ - struct net_device *dev = pci_get_drvdata(pdev); - struct fe_priv *np = netdev_priv(dev); - - if (!netif_running(dev)) - goto out; - - netif_device_detach(dev); - - // Gross. 
- nv_close(dev); - - pci_save_state(pdev); - pci_enable_wake(pdev, pci_choose_state(pdev, state), np->wolenabled); - pci_set_power_state(pdev, pci_choose_state(pdev, state)); -out: - return 0; -} - -static int nv_resume(struct pci_dev *pdev) -{ - struct net_device *dev = pci_get_drvdata(pdev); - u8 __iomem *base = get_hwbase(dev); - int rc = 0; - u32 txreg; - - if (!netif_running(dev)) - goto out; - - netif_device_attach(dev); - - pci_set_power_state(pdev, PCI_D0); - pci_restore_state(pdev); - pci_enable_wake(pdev, PCI_D0, 0); - - /* restore mac address reverse flag */ - txreg = readl(base + NvRegTransmitPoll); - txreg |= NVREG_TRANSMITPOLL_MAC_ADDR_REV; - writel(txreg, base + NvRegTransmitPoll); - - rc = nv_open(dev); - nv_set_multicast(dev); -out: - return rc; -} -#else -#define nv_suspend NULL -#define nv_resume NULL -#endif /* CONFIG_PM */ - static struct pci_device_id pci_tbl[] = { { /* nForce Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1), @@ -5942,19 +7480,19 @@ static struct pci_device_id pci_tbl[] = { }, { /* MCP65 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_20), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_NEED_TX_LIMIT|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP65 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_21), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP65 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_22), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP65 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_23), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = 
DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP67 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_24), @@ -5974,74 +7512,367 @@ static struct pci_device_id pci_tbl[] = { }, { /* MCP73 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_28), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE, }, { /* MCP73 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_29), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE, }, { /* MCP73 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_30), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE, }, { /* MCP73 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_31), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_GEAR_MODE, }, { /* MCP77 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_32), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP77 Ethernet Controller */ 
PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_33), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP77 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_34), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP77 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_35), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP79 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_36), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP79 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_37), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP79 Ethernet Controller 
*/ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_38), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, { /* MCP79 Ethernet Controller */ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_39), - .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_NEED_TX_LIMIT|DEV_HAS_GEAR_MODE, + .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V3|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR|DEV_HAS_COLLISION_FIX|DEV_MACADDRESS_CHECK|DEV_HAS_GEAR_MODE|DEV_NEED_TX_LIMIT, }, {0,}, }; -static struct pci_driver driver = { - .name = DRV_NAME, - .id_table = pci_tbl, - .probe = nv_probe, - .remove = __devexit_p(nv_remove), +#ifdef CONFIG_PM +static void nv_set_low_speed(struct net_device *dev) +{ + struct fe_priv *np = get_nvpriv(dev); + int adv = 0; + int lpa = 0; + int adv_lpa, bmcr, tries = 0; + int mii_status; + u32 control_1000; + + if (np->autoneg == 0 || ((np->linkspeed & 0xFFF) != NVREG_LINKSPEED_1000)) + return; + + adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ); + lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ); + control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ); + + adv_lpa = lpa & adv; + + if ((adv_lpa & LPA_10FULL) || (adv_lpa & LPA_10HALF)) { + adv &= ~(ADVERTISE_100BASE4 | ADVERTISE_100FULL | ADVERTISE_100HALF); + control_1000 &= ~(ADVERTISE_1000FULL|ADVERTISE_1000HALF); + printk(KERN_INFO "forcedeth %s: set low speed to 10mbs\n",dev->name); + } else if ((adv_lpa & LPA_100FULL) || (adv_lpa & LPA_100HALF)) { + control_1000 &= ~(ADVERTISE_1000FULL|ADVERTISE_1000HALF); + } else + return; + + /* set new advertisements */ + mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv); + mii_rw(dev, np->phyaddr, MII_CTRL1000, control_1000); + + bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); + if (np->phy_model == PHY_MODEL_MARVELL_E3016) { + bmcr |= BMCR_ANENABLE; + /* reset the phy in order for settings to stick, + * and cause autoneg to start */ + if (phy_reset(dev, bmcr)) { + printk(KERN_INFO "%s: phy reset failed\n", dev->name); + return; + } + } else { + bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART); + mii_rw(dev, np->phyaddr, MII_BMCR, bmcr); + } + mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); + mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); + while (!(mii_status & BMSR_ANEGCOMPLETE)) { + nv_msleep(100); + mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ); + if (tries++ > 50) + break; + } + + nv_update_linkspeed(dev); + + return; +} + +static int nv_suspend(struct pci_dev *pdev, pm_message_t state) +{ + struct net_device *dev = pci_get_drvdata(pdev); + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); + int 
i; + u32 tx_ctrl; + + dprintk(KERN_INFO "forcedeth: nv_suspend\n"); + + /* MCP55:save msix table */ + if((pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_14)||(pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_15)) + { + unsigned long phys_addr; + void __iomem *base_addr; + void __iomem *base; + unsigned int bir,len; + unsigned int i; + int pos; + u32 table_offset; + + pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX); + pci_read_config_dword(pdev, pos+0x04 , &table_offset); + bir = (u8)(table_offset & PCI_MSIX_FLAGS_BIRMASK); + table_offset &= ~PCI_MSIX_FLAGS_BIRMASK; + phys_addr = pci_resource_start(pdev, bir) + table_offset; + np->msix_pa_addr = phys_addr; + len = NV_MSI_X_MAX_VECTORS * PCI_MSIX_ENTRY_SIZE; + base_addr = ioremap_nocache(phys_addr, len); + + for(i=0;i<NV_MSI_X_MAX_VECTORS;i++){ + base = base_addr + i*PCI_MSIX_ENTRY_SIZE; + np->nvmsg[i].address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET); + np->nvmsg[i].address_hi = readl(base + PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET ); + np->nvmsg[i].data = readl(base + PCI_MSIX_ENTRY_DATA_OFFSET); + } + + iounmap(base_addr); + } + + nv_update_linkspeed(dev); + + if (netif_running(dev)) { + netif_device_detach(dev); + /* bring down the adapter */ + nv_close(dev); + } + + /* relinquish control of the semaphore */ + if (np->mac_in_use){ + tx_ctrl = readl(base + NvRegTransmitterControl); + tx_ctrl &= ~NVREG_XMITCTL_HOST_SEMA_MASK; + writel(tx_ctrl, base + NvRegTransmitterControl); + } + + /* set phy to a lower speed to conserve power */ + if((lowpowerspeed==NV_LOW_POWER_ENABLED)&&!np->mac_in_use) + nv_set_low_speed(dev); + +#if NVVER > RHEL4 + pci_save_state(pdev); +#else + pci_save_state(pdev,np->pci_state); +#endif + np->saved_nvregphyinterface= readl(base+NvRegPhyInterface); + for(i=0;i<64;i++){ + pci_read_config_dword(pdev,i*4,&np->saved_config_space[i]); + } +#if NVVER > RHEL4 + pci_enable_wake(pdev, pci_choose_state(pdev, state), np->wolenabled); +#else + pci_enable_wake(pdev, state, np->wolenabled); +#endif + pci_disable_device(pdev); + +#if NVVER > RHEL4 + pci_set_power_state(pdev, pci_choose_state(pdev, state)); +#else + pci_set_power_state(pdev, state); +#endif + + return 0; +} + +static int nv_resume(struct pci_dev *pdev) +{ + struct net_device *dev = pci_get_drvdata(pdev); + int rc = 0; + struct fe_priv *np = get_nvpriv(dev); + u8 __iomem *base = get_hwbase(dev); + int i; + int err; + u32 txreg; + + dprintk(KERN_INFO "forcedeth: nv_resume\n"); + + pci_set_power_state(pdev, PCI_D0); +#if NVVER > RHEL4 + pci_restore_state(pdev); +#else + pci_restore_state(pdev,np->pci_state); +#endif + for(i=0;i<64;i++){ + pci_write_config_dword(pdev,i*4,np->saved_config_space[i]); + } + err = pci_enable_device(pdev); + if (err) { + printk(KERN_INFO "forcedeth: pci_enable_dev failed (%d) for device %s\n", + err, pci_name(pdev)); + } + pci_set_master(pdev); + + txreg = readl(base + NvRegTransmitPoll); + txreg |= NVREG_TRANSMITPOLL_MAC_ADDR_REV; + writel(txreg, base + NvRegTransmitPoll); + nv_change_m2pintf(dev,np->saved_nvregphyinterface); + writel(np->orig_mac[0], base + NvRegMacAddrA); + writel(np->orig_mac[1], base + NvRegMacAddrB); + + /* MCP55:restore msix table */ + if((pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_14)||(pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_15)) + { + unsigned long phys_addr; + void __iomem *base_addr; + void __iomem *base; + unsigned int len; + unsigned int i; + + len = NV_MSI_X_MAX_VECTORS * PCI_MSIX_ENTRY_SIZE; + phys_addr = np->msix_pa_addr; + base_addr = ioremap_nocache(phys_addr, len); + for(i=0;i< NV_MSI_X_MAX_VECTORS;i++){ + base 
= base_addr + i*PCI_MSIX_ENTRY_SIZE; + writel(np->nvmsg[i].address_lo,base + PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET); + writel(np->nvmsg[i].address_hi,base + PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET); + writel(np->nvmsg[i].data,base + PCI_MSIX_ENTRY_DATA_OFFSET); + } + + iounmap(base_addr); + } + + if(np->mac_in_use){ + /* take control of the semaphore */ + for (i = 0; i < 5000; i++) { + if(nv_mgmt_acquire_sema(dev)) + break; + nv_msleep(1); + } + } + + if(lowpowerspeed==NV_LOW_POWER_ENABLED){ + /* re-initialize the phy */ + phy_init(dev); + udelay(10); + } + + /* bring up the adapter */ + if (netif_running(dev)){ + rc = nv_open(dev); + } + netif_device_attach(dev); + + return rc; +} + +#endif /* CONFIG_PM */ +static struct pci_driver nv_eth_driver = { + .name = "forcedeth", + .id_table = pci_tbl, + .probe = nv_probe, + .remove = __devexit_p(nv_remove), +#ifdef CONFIG_PM .suspend = nv_suspend, .resume = nv_resume, +#endif }; +#ifdef CONFIG_PM +static int nv_reboot_handler(struct notifier_block *nb, unsigned long event, void *p) +{ + struct pci_dev *pdev = NULL; + pm_message_t state = { PM_EVENT_SUSPEND }; + struct net_device *dev = NULL; + struct fe_priv *np = NULL; + + switch (event) + { + case SYS_POWER_OFF: + case SYS_HALT: + case SYS_DOWN: +#if NVVER < FEDORA7 + while ((pdev = pci_find_device(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, pdev)) != NULL) { +#else + while ((pdev = pci_get_device(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, pdev)) != NULL) { +#endif + if (pci_dev_driver(pdev) == &nv_eth_driver) { + nv_suspend(pdev, state); + dev = pci_get_drvdata(pdev); + np = get_nvpriv(dev); + if(!np->wolenabled) + nv_pmctrl2_gatecoretxrx(dev,GATE_ON); + } + } + } + + return NOTIFY_DONE; +} + +/* + * Reboot notification + */ +struct notifier_block nv_reboot_notifier = +{ +notifier_call : nv_reboot_handler, + next : NULL, + priority : 0 +}; +#endif + static int __init init_nic(void) { - return pci_register_driver(&driver); + int status; + if(napi) + printk(KERN_INFO "forcedeth.c: Reverse Engineered nForce ethernet driver. Version %s-NAPI.\n", FORCEDETH_VERSION); + else + printk(KERN_INFO "forcedeth.c: Reverse Engineered nForce ethernet driver. 
Version %s.\n", FORCEDETH_VERSION); + DPRINTK(DRV,KERN_DEBUG,"forcedeth:%s\n",DRV_DATE); +#if NVVER > FEDORA7 + status = pci_register_driver(&nv_eth_driver); +#else + status = pci_module_init(&nv_eth_driver); +#endif +#ifdef CONFIG_PM + if (status >= 0) + register_reboot_notifier(&nv_reboot_notifier); +#endif + return status; } static void __exit exit_nic(void) { - pci_unregister_driver(&driver); +#ifdef CONFIG_PM + unregister_reboot_notifier(&nv_reboot_notifier); +#endif + pci_unregister_driver(&nv_eth_driver); } +#if NVVER > SLES9 +module_param(debug, int, 0); +module_param(napi, int, 0); +MODULE_PARM_DESC(napi, "NAPI enable by setting to 1 and disabled by setting to 0"); +module_param(lowpowerspeed, int, 0); +MODULE_PARM_DESC(lowpowerspeed, "Low Power State Link Speed enable by setting to 1 and disabled by setting to 0"); module_param(max_interrupt_work, int, 0); MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt"); module_param(optimization_mode, int, 0); @@ -6054,12 +7885,31 @@ module_param(msix, int, 0); MODULE_PARM_DESC(msix, "MSIX interrupts are enabled by setting to 1 and disabled by setting to 0."); module_param(dma_64bit, int, 0); MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0."); -module_param(phy_cross, int, 0); -MODULE_PARM_DESC(phy_cross, "Phy crossover detection for Realtek 8201 phy is enabled by setting to 1 and disabled by setting to 0."); - +#else +MODULE_PARM(debug, "i"); +MODULE_PARM(napi, "i"); +MODULE_PARM_DESC(napi, "NAPI enable by setting to 1 and disabled by setting to 0"); +MODULE_PARM(lowpowerspeed, "i"); +MODULE_PARM_DESC(lowpowerspeed, "Low Power State Link Speed enable by setting to 1 and disabled by setting to 0"); +MODULE_PARM(max_interrupt_work, "i"); +MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt"); +MODULE_PARM(optimization_mode, "i"); +MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer."); +MODULE_PARM(poll_interval, "i"); +MODULE_PARM_DESC(poll_interval, "Interval determines how frequent timer interrupt is generated by [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535."); +#ifdef CONFIG_PCI_MSI +MODULE_PARM(msi, "i"); +MODULE_PARM_DESC(msi, "MSI interrupts are enabled by setting to 1 and disabled by setting to 0."); +MODULE_PARM(msix, "i"); +MODULE_PARM_DESC(msix, "MSIX interrupts are enabled by setting to 1 and disabled by setting to 0."); +#endif +MODULE_PARM(dma_64bit, "i"); +MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0."); +#endif MODULE_AUTHOR("Manfred Spraul <manfred@colorfullife.com>"); MODULE_DESCRIPTION("Reverse Engineered nForce ethernet driver"); MODULE_LICENSE("GPL"); +MODULE_VERSION(FORCEDETH_VERSION); MODULE_DEVICE_TABLE(pci, pci_tbl); diff --git a/readme.txt b/readme.txt new file mode 100644 index 0000000..d60b360 --- /dev/null +++ b/readme.txt @@ -0,0 +1,43 @@ + nVidia Linux NIC driver readme + ====================== + + +Contents +======= +- Introduction +- Building and installation instruction +- Miscellaneous + +Introduction +======== +This file provide the instruction how to build and install the nVidia Linux NIC driver module. + +The forcedeth driver provide the support for embedded MAC in all nVidia chipsets. +It support all the mainstream linux distros base on kernel 2.4.x and 2.6.x, it support the latest mainline kernel as well. 
+Linux distros support list:
+RHEL5 / RHEL5U1 / RHEL5U2
+RHEL4U2 / RHEL4U3 / RHEL4U4 / RHEL4U5 / RHEL4U6 / RHEL4U7
+RHEL3U6 / RHEL3U7 / RHEL3U8 / RHEL3U9
+SLES9SP3 / SLES10 / SLES10SP1 / SLES10SP2
+Fedora core 5/6/7/8/9
+Open SuSE10 / 10.1 / 10.2 / 10.3
+Ubuntu 8.04
+
+Building and installation instruction
+=========================
+1) Make a directory under linux such as /nvlan
+
+2) Copy the 'forcedeth.c' and 'Makefile' to /nvlan
+
+3) Change to the driver source directory:
+   cd /nvlan
+
+4) Compile the driver:
+   make
+
+5) Install the driver module:
+   insmod forcedeth.ko
+
+Miscellaneous
+==========
+'modinfo forcedeth.ko' will display the driver version and available parameters
-- 
1.6.0.2
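For reference, here are the build-and-load steps from the bundled readme.txt condensed into one session. This is only a sketch: /nvlan is the directory the readme suggests, and the parameter values are purely illustrative (napi and debug are among the module options this driver adds; modinfo lists them all).

   cd /nvlan
   make                                # build forcedeth.ko out of tree
   insmod forcedeth.ko napi=1 debug=1  # load with example parameter values
   modinfo forcedeth.ko                # show driver version and available parameters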
From 7dfd3b25965dd7b545db246f3b8713f5d9c4afd8 Mon Sep 17 00:00:00 2001
From: martin f. krafft <madduck@madduck.net>
Date: Tue, 9 Dec 2008 08:29:55 +0100
Subject: [PATCH] amend version number for local changes

Signed-off-by: martin f. krafft <madduck@madduck.net>
---
 forcedeth.c | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/forcedeth.c b/forcedeth.c
index a5c2ad0..7cc1581 100644
--- a/forcedeth.c
+++ b/forcedeth.c
@@ -128,11 +128,11 @@
 #ifdef NV_LINUX_VER_H
 #include "../linux_version.h"
 #else
-#define PACKAGE_VER "V1.27"
+#define PACKAGE_VER "V1.27+madduck"
 #endif
 #define FORCEDETH_VERSION "0.62-Driver Package "PACKAGE_VER
 #define DRV_NAME "forcedeth"
-#define DRV_DATE "2008/09/02"
+#define DRV_DATE "2008/12/09"
 
 #include <linux/module.h>
 #include <linux/types.h>
-- 
1.6.0.2
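To check that the locally amended module, rather than the stock in-tree forcedeth, is actually in use, the "+madduck" suffix introduced above should appear in the version strings. A sketch, assuming the interface is eth0:

   modinfo forcedeth | grep -i '^version'   # version of the module on disk
   ethtool -i eth0 | grep -i '^version'     # version of the driver bound to eth0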