

ARP replies not forwarded to virtual interface / Destination Host Unreachable / received packet on bond0 with own address as source address

Getting this?

Request timed out 

Running tcpdump on the interface reveals no replies:

# tcpdump -i one-19-0 -s 0 -n arp or icmp | grep -Ei "192.168.0.128|192.168.0.224"

11:59:08.551814 ARP, Request who-has 192.168.0.224 tell 192.168.0.128, length 46
11:59:09.553599 ARP, Request who-has 192.168.0.224 tell 192.168.0.128, length 28
11:59:09.553689 ARP, Request who-has 192.168.0.224 tell 192.168.0.128, length 46
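Incidentally, tcpdump can do the host filtering itself with a BPF capture filter, instead of piping through grep (a sketch, reusing the interface name from the capture above; the second part demonstrates the equivalent grep pattern on one saved sample line):

```shell
# On the live host (not run here): let tcpdump filter by host itself.
#   tcpdump -i one-19-0 -s 0 -n 'arp and (host 192.168.0.128 or host 192.168.0.224)'
# The grep pattern used above, demonstrated against one saved sample line:
line='11:59:08.551814 ARP, Request who-has 192.168.0.224 tell 192.168.0.128, length 46'
echo "$line" | grep -E '192\.168\.0\.(128|224)'
```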

When trying to ping from a guest VM, you get this:

Destination Host Unreachable

Your virtual interface looks like this:

[root@mdskvm-p01 network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
3: enp2s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:28 brd ff:ff:ff:ff:ff:ff
4: enp3s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:2a brd ff:ff:ff:ff:ff:ff
5: enp3s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:2c brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master onebr01 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7ae7:d1ff:fe8f:4d26/64 scope link
       valid_lft forever preferred_lft forever
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:af:dc:91 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:af:dc:91 brd ff:ff:ff:ff:ff:ff
19: onebr01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.60/24 brd 192.168.0.255 scope global onebr01
       valid_lft forever preferred_lft forever
    inet 192.168.0.88/32 scope global onebr01
       valid_lft forever preferred_lft forever
    inet6 fe80::7ae7:d1ff:fe8f:4d26/64 scope link
       valid_lft forever preferred_lft forever
26: one-19-0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master onebr01 state UNKNOWN group default qlen 1000
    link/ether fe:28:38:a0:00:01 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc28:38ff:fea0:1/64 scope link
       valid_lft forever preferred_lft forever

[root@mdskvm-p01 network-scripts]#

The virtual interface, at least from an OpenNebula perspective, is defined as follows:

[oneadmin@one01 ~]$ onevnet show 7
VIRTUAL NETWORK 7 INFORMATION
ID                       : 7
NAME                     : onevnet01
USER                     : oneadmin
GROUP                    : oneadmin
LOCK                     : None
CLUSTERS                 : 100
BRIDGE                   : onebr01
VN_MAD                   : bridge
AUTOMATIC VLAN ID        : NO
AUTOMATIC OUTER VLAN ID  : NO
USED LEASES              : 1

PERMISSIONS
OWNER                    : um-
GROUP                    : ---
OTHER                    : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="onebr01"
BRIDGE_TYPE="linux"
DESCRIPTION="ONE Virtual Network 01"
DNS="192.168.0.224"
GATEWAY="192.168.0.1"
NETWORK_MASK="255.255.255.0"
PHYDEV=""
SECURITY_GROUPS="0"
VN_MAD="bridge"

ADDRESS RANGE POOL
AR 0
SIZE           : 154
LEASES         : 1

RANGE                                   FIRST                               LAST
MAC                         18:28:38:A0:00:01                  18:28:38:a0:00:9a
IP                              192.168.0.100                      192.168.0.253


LEASES
AR  OWNER                         MAC              IP                        IP6
0   V:19            18:28:38:a0:00:01   192.168.0.100                          -

VIRTUAL ROUTERS
[oneadmin@one01 ~]$

Then the issue is likely with your bonding mode:

Marcelo Ricardo Leitner 2015-11-09 17:39:20 UTC

Which bond mode are you using? Please ensure it's either load balance or LACP.
ARP replies are destined to original requester MAC but some bond modes will overwrite src mac for load balancing, which would cause the bridge to not forward the packets back to the guest.

https://bugzilla.redhat.com/show_bug.cgi?id=1279161

The setup here was:

[root@mdskvm-p01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BONDING_OPTS='mode=2 miimon=100'
BRIDGE=onebr01
MACADDR=78:e7:d1:8f:4d:26
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
[root@mdskvm-p01 network-scripts]#
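For reference, mode=2 in BONDING_OPTS is balance-xor. The numeric modes map to driver names as follows (a quick sketch; the mapping comes from the kernel bonding documentation):

```shell
# Numeric bonding modes and their kernel driver names:
for m in "0 balance-rr" "1 active-backup" "2 balance-xor" "3 broadcast" \
         "4 802.3ad" "5 balance-tlb" "6 balance-alb"; do
  printf 'mode=%s -> %s\n' "${m%% *}" "${m#* }"
done
```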

Changing the bonding mode to 6 solved the issue, for the following reason:

Mode 6 (balance-alb)

This is Adaptive load balancing mode. This includes balance-tlb + receive load balancing (rlb) for IPV4 traffic. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the server on their way out and overwrites the src hw address with the unique hw address of one of the slaves in the bond such that different clients use different hw addresses for the server.

[root@mdskvm-p01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BONDING_OPTS='mode=6 miimon=100'
BRIDGE=onebr01
MACADDR=78:e7:d1:8f:4d:26
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
[root@mdskvm-p01 network-scripts]#
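After restarting the network, the active mode can be verified through /proc/net/bonding/bond0. A sketch of the check, run here against a sample of that file's "Bonding Mode" line rather than the live file:

```shell
# On the host you would run:  grep 'Bonding Mode' /proc/net/bonding/bond0
# Sample of the expected line for mode=6, parsed the same way:
sample='Bonding Mode: adaptive load balancing'
echo "$sample" | sed 's/^Bonding Mode: *//'
```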

Bonding mode 4 (802.3ad / LACP) also appeared to work (caveat: the ARP table was still populated from when mode=6 was in use, so this may be a false positive):

BONDING_OPTS='mode=4 miimon=100'

The mode change also gets rid of the following kernel error message:

Nov 26 12:03:02 mdskvm-p01 kernel: onebr01: received packet on bond0 with own address as source address (addr:78:e7:d1:8f:4d:26, vlan:0)
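That warning is logged by the bridge when it sees an incoming frame whose source MAC matches its own address, i.e. its own traffic looping back. To check whether it is still occurring after the mode change, grep the kernel log (sketch; demonstrated here on the sample line above rather than a live dmesg):

```shell
# On the host you would run:  dmesg | grep 'own address as source address'
msg='onebr01: received packet on bond0 with own address as source address (addr:78:e7:d1:8f:4d:26, vlan:0)'
echo "$msg" | grep -c 'own address as source address'
```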

Cheers,
TK

  Copyright © 2003 - 2013 Tom Kacperski (microdevsys.com). All rights reserved.

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License