
Bug#604470: linux-image-2.6.32-5-openvz-amd64: degraded inbound network bandwidth



There is another strange effect: after some idle time the network in the
container stops working entirely. I see this problem (dropped
connectivity) only on the container newly created for testing, maybe
because it is idle most of the time, while the other containers are in
production and receive traffic continuously.
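
To check whether idle time really is the trigger, one minimal approach
(just a sketch, nothing I have run here) is to probe the test container
from the hardware node at a fixed interval and log the moment it stops
answering. The container address (192.0.2.10) and the interval below are
only placeholders; since the probes themselves generate traffic, the
interval has to be longer than whatever idle timeout is suspected.

#!/usr/bin/env python3
# Sketch: log when the (otherwise idle) test container stops answering.
# 192.0.2.10 is a placeholder address, not the real container.
import subprocess
import time

CT_ADDR = "192.0.2.10"   # hypothetical address of the test container
INTERVAL = 600           # seconds; keep it longer than the suspected idle timeout

def reachable(addr):
    """Return True if a single ping to addr succeeds."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "2", addr],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ) == 0

was_up = True
while True:
    up = reachable(CT_ADDR)
    if up != was_up:
        print(time.strftime("%F %T"), CT_ADDR, "up" if up else "DOWN", flush=True)
        was_up = up
    time.sleep(INTERVAL)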

Here is what you asked for (netstat -s):

Ip:
    7298988 total packets received
    0 forwarded
    0 incoming packets discarded
    7262036 incoming packets delivered
    4830632 requests sent out
Icmp:
    13593 ICMP messages received
    2 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 357
        timeout in transit: 34
        redirects: 13106
        echo requests: 52
        echo replies: 44
    949 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 719
        echo request: 178
        echo replies: 52
IcmpMsg:
        InType0: 44
        InType3: 357
        InType5: 13106
        InType8: 52
        InType11: 34
        OutType0: 52
        OutType3: 719
        OutType8: 178
Tcp:
    85940 active connections openings
    91936 passive connection openings
    303 failed connection attempts
    3309 connection resets received
    6 connections established
    7227101 segments received
    4776345 segments send out
    32345 segments retransmited
    9 bad segments received.
    2180 resets sent
Udp:
    20616 packets received
    726 packets to unknown port received.
    0 packet receive errors
    20993 packets sent
UdpLite:
TcpExt:
    265 invalid SYN cookies received
    69 resets received for embryonic SYN_RECV sockets
    163 packets pruned from receive queue because of socket buffer overrun
    84541 TCP sockets finished time wait in fast timer
    26 time wait sockets recycled by time stamp
    5 packets rejects in established connections because of timestamp
    27496 delayed acks sent
    2 delayed acks further delayed because of locked socket
    Quick ack mode was activated 8377 times
    2255888 packets directly queued to recvmsg prequeue.
    1439293771 bytes directly in process context from backlog
    3017217537 bytes directly received in process context from prequeue
    2255586 packet headers predicted
    3511681 packets header predicted and directly queued to user
    452075 acknowledgments not containing data payload received
    717999 predicted acknowledgments
    10 times recovered from packet loss due to fast retransmit
    5885 times recovered from packet loss by selective acknowledgements
    3 bad SACK blocks received
    Detected reordering 10 times using FACK
    Detected reordering 5 times using SACK
    Detected reordering 3 times using time stamp
    4 congestion windows fully recovered without slow start
    24 congestion windows partially recovered using Hoe heuristic
    181 congestion windows recovered without slow start by DSACK
    95 congestion windows recovered without slow start after partial ack
    8805 TCP data loss events
    TCPLostRetransmit: 495
    8 timeouts after reno fast retransmit
    1603 timeouts after SACK recovery
    1006 timeouts in loss state
    14585 fast retransmits
    503 forward retransmits
    8431 retransmits in slow start
    3385 other TCP timeouts
    5 classic Reno fast retransmits failed
    952 SACK retransmits failed
    7885 packets collapsed in receive queue due to low socket buffer
    8407 DSACKs sent for old packets
    115 DSACKs sent for out of order packets
    1623 DSACKs received
    29 DSACKs for out of order packets received
    27 connections reset due to unexpected data
    18 connections reset due to early user close
    188 connections aborted due to timeout
    TCPDSACKIgnoredOld: 1238
    TCPDSACKIgnoredNoUndo: 213
    TCPSpuriousRTOs: 49
    TCPSackShiftFallback: 49650
IpExt:
    InOctets: 495299200
    OutOctets: -1995348470
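
If it helps to narrow down where the inbound path loses throughput, the
counters above (in particular the receive-queue pruning and collapsing
ones) can be compared before and after a slow transfer. A minimal sketch
that diffs two netstat -s snapshots, with deliberately rough parsing,
would look something like this; it is only an illustration, not anything
taken from this report:

#!/usr/bin/env python3
# Sketch: take two "netstat -s" snapshots and print the counters that
# changed, to see which ones grow during a slow transfer. Lines like
#   "163 packets pruned from receive queue because of socket buffer overrun"
# and "InOctets: 495299200" are both handled; multi-word histogram lines
# are simply skipped.
import re
import subprocess

def snapshot():
    out = subprocess.run(["netstat", "-s"], capture_output=True, text=True).stdout
    counters = {}
    for line in out.splitlines():
        line = line.strip()
        m = re.match(r"^(\d+) (.+)$", line)       # "163 packets pruned ..."
        if m:
            counters[m.group(2)] = int(m.group(1))
            continue
        m = re.match(r"^(\S+): (-?\d+)$", line)   # "InOctets: 495299200"
        if m:
            counters[m.group(1)] = int(m.group(2))
    return counters

before = snapshot()
input("Run the slow transfer now, then press Enter... ")
after = snapshot()

for key, new in after.items():
    delta = new - before.get(key, 0)
    if delta:
        print(f"{delta:>12}  {key}")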
-- 

*******************************
****  Vladimir Stavrinov  *****
**** vstavrinov@gmail.com  ****
*******************************


