
Bug#666386: more info



Hi,

I had removed the igb-based eth0 from the bonding interface, and the machine
was running fine after that, but when the time came to get some Xen domUs
running on it, it failed miserably on me once again.

The updated setup is:

auto bond0
iface bond0 inet manual
  slaves eth2
  bond_mode active-backup
  bond_miimon 100
auto xenbr0
iface xenbr0 inet static
  bridge-ports bond0
  bridge-fd 0
  address 192.168.54.2
  netmask 255.255.255.0
auto vlan2
iface vlan2 inet manual
  vlan-raw-device xenbr0
auto xenbr2
iface xenbr2 inet static
  bridge-ports vlan2
  bridge-fd 0
  address 213.202.97.156
  netmask 255.255.255.240
  gateway 213.202.97.145
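
As a quick sanity check on the xenbr2 addressing above, a small shell sketch
(the ip_to_int helper name is hypothetical) confirming that the static address
and the gateway do sit in the same /28 given by netmask 255.255.255.240:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

mask=$(ip_to_int 255.255.255.240)   # /28
addr=$(ip_to_int 213.202.97.156)
gw=$(ip_to_int 213.202.97.145)

# Both network parts must match for the gateway to be reachable
if [ $(( addr & mask )) -eq $(( gw & mask )) ]; then
  echo "same subnet"
else
  echo "different subnets"
fi
```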

And the virtual machine has simply this:

vif = [
        "mac=00:16:3e:7a:32:9b, bridge=xenbr2",
]

But as soon as I generate any traffic between 192.168.54.0/24 and that
virtual machine (note: that is not even the VLAN the domU is attached to),
the whole system instantly reboots, with no messages in syslog.

I should probably use the hypervisor's noreboot option, but I don't have
a connection to its IPMI out-of-band access controller, and I'm off-site,
so I'm SOL.
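
For reference, the hypervisor option mentioned above goes on Xen's GRUB
command line; a sketch of the stock Debian grub-pc setup (assuming no other
Xen options are already set in /etc/default/grub). With noreboot, Xen halts
on a panic instead of rebooting, so the panic message stays on the console:

```shell
# /etc/default/grub
GRUB_CMDLINE_XEN_DEFAULT="noreboot"

# then regenerate grub.cfg:
#   update-grub
```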

This is with linux-image-3.2.0-0.bpo.2-amd64 and with the latest .bpo.3.

I'm going to try fiddling with ethtool -K eth2 gro/lro off, but with the
reboots taking >3 min on this hardware, this is most annoying...
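
Spelled out as separate commands (run as root on the dom0; eth2 is the
bond's slave NIC per the config above) -- GRO/LRO have a history of
interacting badly with bridging and forwarding setups, so this is a
plausible thing to rule out:

```shell
# Disable generic and large receive offload on the slave NIC
ethtool -K eth2 gro off
ethtool -K eth2 lro off

# Confirm the new offload settings took effect
ethtool -k eth2
```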

-- 
     2. That which causes joy or happiness.

