VNC Failed to connect to server (code: 1006)

When receiving the following error on OpenStack or OpenNebula:

VNC Failed to connect to server (code: 1006)

try the steps below to resolve the issue on either hypervisor manager:

First, let's check the logs to see what's being reported on the OpenNebula controller:

[root@opennebula01 one]# tail /var/log/one/novnc.log -n 20
192.168.0.32 - - [03/Apr/2016 12:51:34] 192.168.0.32: Plain non-SSL (ws://) WebSocket connection
192.168.0.32 - - [03/Apr/2016 12:51:34] 192.168.0.32: Version hybi-13, base64: 'False'
192.168.0.32 - - [03/Apr/2016 12:51:34] 192.168.0.32: Path: '/?token=jcvwl8hxzu1qhfrplbif'
192.168.0.32 – – [03/Apr/2016 12:51:34] connecting to: mdskvm-p01:5926
handler exception: [Errno 113] No route to host
[root@opennebula01 one]#

The No route to host message is the important clue here, but it's non-intuitive as to what's really going on, so we will test it out.  Notice on the line above where the connection attempt is happening: mdskvm-p01:5926.  Let's test this:

[root@opennebula01 one]# ssh -p 5926 mdskvm-p01
ssh: connect to host mdskvm-p01 port 5926: No route to host
[root@opennebula01 one]#
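As an aside, the same probe can be made without ssh by using bash's built-in /dev/tcp pseudo-device, which needs no extra packages; a minimal sketch:

# try to open a TCP connection to the VNC port; an immediate error or a
# non-zero exit means the port is unreachable from this host
timeout 5 bash -c 'exec 3<>/dev/tcp/mdskvm-p01/5926' && echo open || echo unreachable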

We get the same message.  Running ssh against that port was simply a way to test whether anything on the remote server responds there, and nothing did.  But when we ping the host:

[root@opennebula01 one]# ping mdskvm-p01
PING mdskvm-p01 (192.168.0.60) 56(84) bytes of data.
64 bytes from mdskvm-p01 (192.168.0.60): icmp_seq=1 ttl=64 time=0.462 ms

and ssh itself, on its default port 22, works fine:

[oneadmin@opennebula01 ~]$ ssh mdskvm-p01
Last login: Thu Mar 31 02:34:15 2016 from opennebula01
[oneadmin@mdskvm-p01 ~]$

So one of two things is possible: the service is down, or connections to port 5926 are being blocked. Now that we are on the worker node, let's check:

[oneadmin@mdskvm-p01 ~]$ netstat -plnt
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 10.0.0.1:53             0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:5925            0.0.0.0:*               LISTEN      1605/qemu-kvm
tcp        0      0 0.0.0.0:5926            0.0.0.0:*               LISTEN      6128/qemu-kvm
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 ::1:25                  :::*                    LISTEN      -
[oneadmin@mdskvm-p01 ~]$
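On newer systems where net-tools (and hence netstat) isn't installed, ss prints the equivalent listing:

ss -plnt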

The netstat output already tells us what we need to know, but let's run it as root in case something important isn't being listed.

[root@mdskvm-p01 ~]# netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 10.0.0.1:53             0.0.0.0:*               LISTEN      2934/dnsmasq
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1829/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2848/master
tcp        0      0 0.0.0.0:5925            0.0.0.0:*               LISTEN      1605/qemu-kvm
tcp        0      0 0.0.0.0:5926            0.0.0.0:*               LISTEN      6128/qemu-kvm
tcp6       0      0 :::22                   :::*                    LISTEN      1829/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      2848/master
[root@mdskvm-p01 ~]#

So the services are indeed running on those ports, with a listening socket for each of the two VMs we created:

[oneadmin@opennebula01 ~]$ onevm list
    ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
    25 oneadmin oneadmin CentOS-7-25     runn    0    512M mdskvm-p01   0d 14h55
    26 oneadmin oneadmin mds-gui-vm-10   runn    0    512M mdskvm-p01   0d 14h41
[oneadmin@opennebula01 ~]$
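As a cross-check, the VNC port of each VM is recorded in the GRAPHICS section of its template; by default OpenNebula assigns 5900 plus the VM ID, which lines up with 5925 and 5926 here. A quick way to view it:

onevm show 26 | grep -A 3 GRAPHICS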

So let's check whether iptables is down or turned off:

[root@mdskvm-p01 ~]# systemctl status iptables
● iptables.service - IPv4 firewall with iptables
   Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@mdskvm-p01 ~]#

So the firewall seems to be down as well.  This is strange: the firewall shows as down, yet the ports are open and functioning fine (the message Connection closed by remote host below indicates that the service on the port responded and then decided to close the connection on us):

[root@mdskvm-p01 ~]# ssh -p 5925 localhost
ssh_exchange_identification: Connection closed by remote host
[root@mdskvm-p01 ~]# ssh -p 5925 0.0.0.0
ssh_exchange_identification: Connection closed by remote host
[root@mdskvm-p01 ~]# ssh -p 5925 127.0.0.1
ssh_exchange_identification: Connection closed by remote host
[root@mdskvm-p01 ~]#
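The immediate close is actually expected, since the listener is qemu's VNC server rather than sshd. As an optional sanity check, reading the first bytes of a raw connection should show the RFB banner a VNC server sends on connect; a sketch using bash's /dev/tcp again:

# a VNC server announces itself first; expect something like "RFB 003.008"
timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/5925; head -c 12 <&3'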

So what to do now?  It turns out we were carrying over our knowledge from the pre-CentOS 7 / RHEL 7 / Scientific Linux 7 days: on these releases the default firewall service is firewalld, not iptables.  The correct command to run is the following:

[root@mdskvm-p01 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2016-03-31 20:57:32 EDT; 2 days ago
 Main PID: 1002 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─1002 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Mar 31 20:57:32 mdskvm-p01 systemd[1]: Starting firewalld - dynamic firewall daemon...
Mar 31 20:57:32 mdskvm-p01 systemd[1]: Started firewalld - dynamic firewall daemon.
[root@mdskvm-p01 ~]#
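firewalld can also report its state directly; firewall-cmd prints running when the daemon is active:

firewall-cmd --state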

Either way, the firewall service is in fact up.  So let's check with a more traditional command:

[root@mdskvm-p01 ~]# iptables -nL | grep -Ei "ACCEPT|REJECT"
Chain INPUT (policy ACCEPT)
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:53
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:67
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:67
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
ACCEPT     all  --  0.0.0.0/0            10.0.0.0/16          ctstate RELATED,ESTABLISHED
ACCEPT     all  --  10.0.0.0/16          0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:68
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22 ctstate NEW
You have new mail in /var/spool/mail/root
[root@mdskvm-p01 ~]#

Now it's clear: anything that doesn't match an ACCEPT rule, including the VNC ports, falls through to the final REJECT rule, and reject-with icmp-host-prohibited is exactly what the client reports as No route to host.  Port 22, the ssh port that worked, is explicitly accepted.  So we will add some rules to /etc/sysconfig/iptables:

# VNC
-A INPUT -s 192.168.0.0/16 -d 192.168.0.0/16 -i bond0 -p tcp -m multiport --sports 111,2049,5900:5999,50517 -j ACCEPT
-A INPUT -s 192.168.0.0/16 -d 192.168.0.0/16 -i bond0 -p tcp -m multiport --dports 111,2049,5900:5999,50517 -j ACCEPT
-A INPUT -d 192.168.0.0/16 -s 192.168.0.0/16 -i bond0 -p udp -m multiport --sports 111,2049,5900:5999,50517 -j ACCEPT
-A INPUT -d 192.168.0.0/16 -s 192.168.0.0/16 -i bond0 -p udp -m multiport --dports 111,2049,5900:5999,50517 -j ACCEPT

and save the file.  We do this rather than turning the firewall off, since the firewall provides additional security that we want to keep. Note that we could have specified just the VNC range, 5900:5999, but the lines above were taken from another server running VNC, where the rules had already been refined for the other ports VNC needed access to, so we reuse them.  They also name bond0 as the interface; make sure you use the interface that matches your host (on this node it is the bridge br0, as shown below).

[root@mdskvm-p01 ~]# cat /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
# VNC
-A INPUT -s 192.168.0.0/16 -d 192.168.0.0/16 -i br0 -p tcp -m multiport --sports 111,2049,5900:5999,50517 -j ACCEPT
-A INPUT -s 192.168.0.0/16 -d 192.168.0.0/16 -i br0 -p tcp -m multiport --dports 111,2049,5900:5999,50517 -j ACCEPT
-A INPUT -d 192.168.0.0/16 -s 192.168.0.0/16 -i br0 -p udp -m multiport --sports 111,2049,5900:5999,50517 -j ACCEPT
-A INPUT -d 192.168.0.0/16 -s 192.168.0.0/16 -i br0 -p udp -m multiport --dports 111,2049,5900:5999,50517 -j ACCEPT
#

-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
[root@mdskvm-p01 ~]#
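Before loading the file, it can be syntax-checked with iptables-restore's test mode, a quick optional step:

iptables-restore --test < /etc/sysconfig/iptables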

Restart the firewall service.  A full howto on firewalld is available from the Fedora documentation pages.  We won't go into the details of firewalld here as it's out of scope for this topic.  However, since we have KVM running on this host, we will go as far as disabling firewalld in favour of the iptables service. NOTE: Ensure you choose the right interface in the rules above; in this case we use br0, as that is the bridge interface we need.

systemctl mask firewalld
systemctl status firewalld
systemctl stop firewalld
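Alternatively, if you'd rather keep firewalld than mask it, the equivalent openings can be added with firewall-cmd; a minimal sketch, assuming the default zone covers br0:

firewall-cmd --permanent --add-port=5900-5999/tcp
firewall-cmd --reload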

Next we enable iptables:

[root@mdskvm-p01 ~]# systemctl start iptables
[root@mdskvm-p01 ~]# systemctl enable iptables

Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
[root@mdskvm-p01 ~]#

And check the rules again:

[root@mdskvm-p01 ~]# iptables -nL
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:2222
ACCEPT     tcp  --  192.168.0.0/16       192.168.0.0/16       multiport sports 111,2049,5900:5999,50517
ACCEPT     tcp  --  192.168.0.0/16       192.168.0.0/16       multiport dports 111,2049,5900:5999,50517
ACCEPT     udp  --  192.168.0.0/16       192.168.0.0/16       multiport sports 111,2049,5900:5999,50517
ACCEPT     udp  --  192.168.0.0/16       192.168.0.0/16       multiport dports 111,2049,5900:5999,50517
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@mdskvm-p01 ~]#

And test the port remotely again:

[oneadmin@opennebula01 ~]$ ssh -p 5925 mdskvm-p01
ssh_exchange_identification: Connection closed by remote host
[oneadmin@opennebula01 ~]$
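To watch the whole path live, you can also tail the noVNC proxy log on the controller, the same file we started with, while reopening the console in Sunstone:

tail -f /var/log/one/novnc.log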

Connection closed by remote host is the appropriate message now, instead of No route to host: the connection reaches the qemu VNC listener, which simply closes it because ssh isn't a VNC client.  Note that earlier versions of OpenNebula required running the following to enable noVNC before doing the above:

cd /usr/share/one; ./install_novnc.sh

If you're running one of these older versions, it's worth checking this as well.  Here is the result:

(Screenshot: working OpenNebula / OpenStack VNC connection.)

Hope you enjoyed the post and got your issue solved.

Cheers,
TK

 
