

WrongHost: Peer certificate subjectAltName does not match host, expected 1.2.3.4, got DNS: host1.domain, DNS: host2.domain, DNS: host3.domain

Another form of this error shows the certificate validation ending up with an IP instead of a hostname to compare against, as seen in these log entries:

WrongHost: Peer certificate subjectAltName does not match host, expected 1.2.3.4, got DNS:srv-c01.earth.water.fire, DNS:cm-r01nn01.earth.water.fire, DNS:cm-r01nn02.earth.water.fire
[02/Jan/2021 03:15:59 +0000] 32309 Thread-13 downloader   ERROR    Failed fetching torrent: Peer certificate subjectAltName does not match host, expected 1.2.3.4, got DNS:srv-c01.earth.water.fire, DNS:cm-r01nn01.earth.water.fire, DNS:cm-r01nn02.earth.water.fire
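Since the complaint is about the SAN entries, a quick way to see exactly which certificate the client is handed is to pull it both from the address the client is using (1.2.3.4 above) and from the CM hostname itself, then compare the SANs. A minimal check, assuming TLS on the CM web port 7183 (substitute whichever port your agents actually connect to):

# openssl s_client -connect 1.2.3.4:7183 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
# openssl s_client -connect cm-r01nn01.earth.water.fire:7183 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

If both return the same SAN list, the certificate itself is consistent and the mismatch lies in what the client thinks the expected hostname is.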

In our software stack, Cloudera Manager sits behind an HAProxy / Keepalived VIP:

Cloudera CM <- HAProxy <- Keepalived <- Cloudera Node

In this case, the error was seen on the Cloudera Node.  So what could be the issue?
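Before going further it is worth confirming which machine currently holds the Keepalived VIP and that HAProxy is listening behind it. Quick sanity checks (the VIP address itself isn't shown above, so <vip-address> is a stand-in):

# ip addr show | grep "<vip-address>"
# ss -tlnp | grep haproxy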

Verifying DNS with forward and reverse lookups also produced the expected results:

# dig -x 1.2.3.4
;; QUESTION SECTION:
;4.3.2.1.in-addr.arpa.       IN      PTR

;; ANSWER SECTION:
4.3.2.1.in-addr.arpa. 86400  IN      PTR     cm-r01nn01.earth.water.fire.


# dig cm-r01nn01.earth.water.fire
;; QUESTION SECTION:
;cm-r01nn01.earth.water.fire.                IN      A

;; ANSWER SECTION:
cm-r01nn01.earth.water.fire. 1200    IN      A       1.2.3.4


# nslookup 1.2.3.4
Server:         192.168.0.100
Address:        192.168.0.100#53

4.3.2.1.in-addr.arpa name = cm-r01nn01.earth.water.fire.


# nslookup cm-r01nn01
Server:         192.168.0.100
Address:        192.168.0.100#53

Name:   cm-r01nn01.earth.water.fire
Address: 1.2.3.4
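dig and nslookup query the DNS server directly, while the agent resolves names through the system resolver (nsswitch, /etc/hosts and friends), so it doesn't hurt to confirm that path agrees as well. Generic checks, nothing Cloudera-specific:

# getent hosts 1.2.3.4
# getent hosts cm-r01nn01.earth.water.fire
# grep ^hosts /etc/nsswitch.conf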

Troubleshooting revealed that pointing the node directly at the Cloudera CM server, bypassing the HAProxy and Keepalived VIPs, worked fine. Further investigation showed that SELinux, surfacing as Auditd AVC denials, was blocking HAProxy and Keepalived communication.
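For reference, pointing a node directly at the CM server can be done on the agent itself. A sketch assuming a standard agent install, where server_host in /etc/cloudera-scm-agent/config.ini normally carries the VIP or proxy name:

# grep -E "^server_(host|port)" /etc/cloudera-scm-agent/config.ini
# sed -i 's/^server_host=.*/server_host=cm-r01nn01.earth.water.fire/' /etc/cloudera-scm-agent/config.ini
# systemctl restart cloudera-scm-agent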

type=AVC msg=audit(1609572407.005:1253694): avc:  denied  { name_bind } for  pid=3533 comm="haproxy" src=8084 scontext=system_u:system_r:haproxy_t:s0 tcontext=system_u:object_r:luci_port_t:s0 tclass=tcp_socket  
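The denial above shows haproxy_t being refused a bind to port 8084, which the policy has labelled luci_port_t. Depending on the policy version, a more targeted alternative to a generated module may be to relabel that port or flip the relevant boolean; both of these are assumptions to verify against your own policy:

# semanage port -m -t http_port_t -p tcp 8084
# setsebool -P haproxy_connect_any on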

Running the following several times, as new denials appeared in the audit log, generated the SELinux rules needed to allow the communication:

grep AVC /var/log/audit/audit.log* >> /var/log/audit/audit-denied.log; cat /var/log/audit/audit-denied.log | audit2allow -M systemd-allow; semodule -i systemd-allow.pp
systemctl restart haproxy keepalived  
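To confirm the generated module actually loaded and that no new denials keep appearing, a couple of generic checks:

# semodule -l | grep systemd-allow
# ausearch -m AVC -ts recent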

Initially, this did not appear to fully resolve the issue. A full restart of the Cloudera SCM server, however, did, apparently confirming that the problem was made up of two separate issues. Communication to and from the CM server was easy to analyze with tcpdump, which verified whether any traffic was being received by, and whether valid replies were being sent from, the Cloudera SCM server. The underlying logic used to detect the correct hostname isn't known without diving into the Java source code; regardless of that logic, what can be said is that the hostname lookup produced no result and the check fell back to an IP.
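For that traffic check, something along these lines on the CM / HAProxy hosts is enough to see whether the node's requests arrive and get answered. A sketch only; <node-ip> is a placeholder and 7182 / 7183 are the usual CM agent and TLS web ports, so adjust to whatever your agents actually use:

# tcpdump -nn -i any "host <node-ip> and (port 7182 or port 7183)"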

Regards,
