

RTNETLINK answers: Network is unreachable

Getting this?

RTNETLINK answers: Network is unreachable

Maybe the interface is down or missing an address and default route. Bring it up and configure it:

ip link set eno1 up
ip addr add 10.0.0.100/24 dev eno1
ip route add default via 10.0.0.1 dev eno1
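
If it still fails, confirm the kernel now has the address and a usable route; a quick check (interface name taken from the example above, destination IP is just a sample):

ip -br addr show eno1
ip route get 8.8.8.8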

Thx,

lun4194304 has a LUN larger than allowed by the host adapter

Getting this?

Dec 25 09:56:14 mdskvm-p06 kernel: sd 0:0:0:0: lun4194304 has a LUN larger than allowed by the host adapter
Dec 25 09:56:14 mdskvm-p06 kernel: scsi 0:3:0:0: lun4194304 has a LUN larger than allowed by the host adapter

Fix it by adding the following and rebuilding the initramfs:

[root@mdskvm-p06 ~]# cat /etc/modprobe.d/lpfc.conf
options lpfc  lpfc_nodev_tmo=10 lpfc_lun_queue_depth=32 lpfc_max_luns=65535
[root@mdskvm-p06 ~]# dracut -f
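
After rebuilding the initramfs and rebooting, you can confirm the options took effect; a quick check, assuming the lpfc module exposes its parameters under sysfs as usual:

cat /sys/module/lpfc/parameters/lpfc_max_luns
cat /sys/module/lpfc/parameters/lpfc_lun_queue_depth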

Thx,
AB

ARP replies not forwarded to virtual interface / Destination Host Unreachable / received packet on bond0 with own address as source address

Getting this?

Request timed out 

Running tcpdump on the interface reveals no replies:

# tcpdump -i one-19-0 -s 0 -n arp or icmp | grep -Ei "192.168.0.128|192.168.0.224"

11:59:08.551814 ARP, Request who-has 192.168.0.224 tell 192.168.0.128, length 46
11:59:09.553599 ARP, Request who-has 192.168.0.224 tell 192.168.0.128, length 28
11:59:09.553689 ARP, Request who-has 192.168.0.224 tell 192.168.0.128, length 46

When trying to ping from a guest VM you get this:

Destination Host Unreachable

Your virtual interface looks like this:

[root@mdskvm-p01 network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
3: enp2s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:28 brd ff:ff:ff:ff:ff:ff
4: enp3s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:2a brd ff:ff:ff:ff:ff:ff
5: enp3s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:2c brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master onebr01 state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7ae7:d1ff:fe8f:4d26/64 scope link
       valid_lft forever preferred_lft forever
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:af:dc:91 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:af:dc:91 brd ff:ff:ff:ff:ff:ff
19: onebr01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.60/24 brd 192.168.0.255 scope global onebr01
       valid_lft forever preferred_lft forever
    inet 192.168.0.88/32 scope global onebr01
       valid_lft forever preferred_lft forever
    inet6 fe80::7ae7:d1ff:fe8f:4d26/64 scope link
       valid_lft forever preferred_lft forever
26: one-19-0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master onebr01 state UNKNOWN group default qlen 1000
    link/ether fe:28:38:a0:00:01 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc28:38ff:fea0:1/64 scope link
       valid_lft forever preferred_lft forever

[root@mdskvm-p01 network-scripts]#

The virtual interface, at least from an OpenNebula perspective, is defined as follows:

[oneadmin@one01 ~]$ onevnet show 7
VIRTUAL NETWORK 7 INFORMATION
ID                       : 7
NAME                     : onevnet01
USER                     : oneadmin
GROUP                    : oneadmin
LOCK                     : None
CLUSTERS                 : 100
BRIDGE                   : onebr01
VN_MAD                   : bridge
AUTOMATIC VLAN ID        : NO
AUTOMATIC OUTER VLAN ID  : NO
USED LEASES              : 1

PERMISSIONS
OWNER                    : um-
GROUP                    : ---
OTHER                    : ---

VIRTUAL NETWORK TEMPLATE
BRIDGE="onebr01"
BRIDGE_TYPE="linux"
DESCRIPTION="ONE Virtual Network 01"
DNS="192.168.0.224"
GATEWAY="192.168.0.1"
NETWORK_MASK="255.255.255.0"
PHYDEV=""
SECURITY_GROUPS="0"
VN_MAD="bridge"

ADDRESS RANGE POOL
AR 0
SIZE           : 154
LEASES         : 1

RANGE                                   FIRST                               LAST
MAC                         18:28:38:A0:00:01                  18:28:38:a0:00:9a
IP                              192.168.0.100                      192.168.0.253


LEASES
AR  OWNER                         MAC              IP                        IP6
0   V:19            18:28:38:a0:00:01   192.168.0.100                          -

VIRTUAL ROUTERS
[oneadmin@one01 ~]$

Then the issue is likely with your bonding mode:

Marcelo Ricardo Leitner 2015-11-09 17:39:20 UTC

Which bond mode are you using? Please ensure it's either load balance or LACP.
ARP replies are destined to original requester MAC but some bond modes will overwrite src mac for load balancing, which would cause the bridge to not forward the packets back to the guest.

https://bugzilla.redhat.com/show_bug.cgi?id=1279161
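
A quick way to see which mode the bond is actually running before changing anything (the path is standard for the bonding driver):

grep -i "bonding mode" /proc/net/bonding/bond0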

The setup here was:

[root@mdskvm-p01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BONDING_OPTS='mode=2 miimon=100'
BRIDGE=onebr01
MACADDR=78:e7:d1:8f:4d:26
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
[root@mdskvm-p01 network-scripts]#

Changing bonding mode to 6 solved the issue for the following reason:

Mode 6 (balance-alb)

This is Adaptive load balancing mode. This includes balance-tlb + receive load balancing (rlb) for IPV4 traffic. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the server on their way out and overwrites the src hw address with the unique hw address of one of the slaves in the bond such that different clients use different hw addresses for the server.

[root@mdskvm-p01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BONDING_OPTS='mode=6 miimon=100'
BRIDGE=onebr01
MACADDR=78:e7:d1:8f:4d:26
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
[root@mdskvm-p01 network-scripts]#

Bonding mode 4 also works:

BONDING_OPTS='mode=4 miimon=100'
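
Note that mode 4 is 802.3ad (LACP), so the switch ports behind the bond members must be configured as a matching LACP port-channel; mode 6 (balance-alb) needs no switch-side configuration.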

This also solves the following error message:

Nov 26 12:03:02 mdskvm-p01 kernel: onebr01: received packet on bond0 with own address as source address (addr:78:e7:d1:8f:4d:26, vlan:0)

Cheers,
TK

connect: Network is unreachable

Getting this?

[root@mdskvm-p01 yum.repos.d]# ping 8.8.8.8
connect: Network is unreachable

Solve it by adding a default route:

[root@mdskvm-p01 yum.repos.d]# ip route add default via 192.168.0.1
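
A route added this way does not survive a reboot. To persist it with the network scripts, the gateway can go into the ifcfg file of the interface holding the address; a minimal sketch, assuming the address sits on the onebr01 bridge as on this host earlier:

# appended to /etc/sysconfig/network-scripts/ifcfg-onebr01
GATEWAY=192.168.0.1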

Getting this when adding hosts to XCP-ng?

write EPROTO 140708195170112:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:../deps/openssl/openssl/ssl/s23_clnt.c:827:



bash: /usr/local/bin/node: No such file or directory

Getting this?

bash: /usr/local/bin/node: No such file or directory

Fix with:

[root@xoa-org01 bin]# node -v
-bash: /usr/local/bin/node: No such file or directory
[root@xoa-org01 bin]# . ~/.bash_profile
[root@xoa-org01 bin]# node
> .exit
[root@xoa-org01 bin]# node -v
v8.16.2
[root@xoa-org01 bin]#
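
This usually means node lives somewhere that only ~/.bash_profile adds to PATH (nvm-style installs do this), or bash has cached a stale path. Two hedged alternatives to sourcing the profile each time:

hash -r                                          # clear bash's cached command locations
ln -s "$(command -v node)" /usr/local/bin/node   # link it back to the old location (assumes node now resolves elsewhere on PATH)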

Thx,
TK

XCP-ng: No SR specified and Pool default SR is null

If you're getting this error:  

Oct 14 01:43:26 xcpng01 xapi: [error|xcpng01.nix.mds.xyz|3179 UNIX /var/lib/xcp/xapi|dispatch:SR.get_uuid D:3c84eaa48cb2|backtrace] SR.get_uuid D:855756122ab7 failed with exception Db_exn.DBCache_NotFound("missing row", "SR", "OpaqueRef:NULL")

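One common remedy is to give the pool a default SR; a minimal sketch with placeholder UUIDs (not necessarily the only fix):

xe pool-list --minimal
xe sr-list
xe pool-param-set uuid=<pool-uuid> default-SR=<sr-uuid>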

XCP-ng: Adding Plugins: GlusterFS

To create a plugin entry for XCP-ng 8.0.1, follow the procedure below (we'll use the GlusterFS plugin for this example):

Create the GlusterFS repo:

[19:40 xcpng02 sm]# cat /etc/yum.repos.d/gluster63.repo
[gluster63]
name=Gluster 6.3
baseurl=http://mirror.centos.org/centos/7/storage/x86_64/gluster-6/
gpgcheck=0
enabled=1
[19:41 xcpng02 sm]#
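
With the repo in place, the GlusterFS client packages can be pulled onto the host; a minimal sketch (package names assumed from the CentOS Storage SIG builds):

yum install -y glusterfs glusterfs-cli glusterfs-fuse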


Cryptsvc / Cryptographic Services: HTTPS Pages are slow to load.

HTTPS pages are slow to load.  If you have this issue, try the following:

Visit Services -> Cryptsvc (Cryptographic Services) -> Properties -> Logon Tab -> Select Local System Account
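
Roughly the same change from an elevated command prompt, if you prefer (service short name CryptSvc; the space after obj= is required by sc):

sc config CryptSvc obj= LocalSystem
net stop CryptSvc && net start CryptSvc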

Cheers,
TK

XCP-ng: Create local storage using LVM

Create local storage under XCP-ng as follows.  

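As a rough sketch of what the procedure boils down to (device path, labels and UUIDs are placeholders):

xe host-list --minimal
xe sr-create host-uuid=<host-uuid> name-label="Local LVM" shared=false type=lvm content-type=user device-config:device=/dev/sdb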


     
  Copyright © 2003 - 2013 Tom Kacperski (microdevsys.com). All rights reserved.

This work is licensed under a Creative Commons Attribution 3.0 Unported License