AWX / Ansible High Availability Configuration on CentOS 7

We want a highly available and scalable AWX and Ansible cluster solution.

Here's how we'll plan things out:

NAME           ADDRESS        HOSTNAME             SERVICES
awx01          192.168.0.142  awx01.nix.mds.xyz    AWX, Gluster, Keepalived, HAProxy
awx02          192.168.0.143  awx02.nix.mds.xyz    AWX, Gluster, Keepalived, HAProxy
awx03          192.168.0.117  awx03.nix.mds.xyz    AWX, Gluster, Keepalived, HAProxy
awx-c01 (VIP)  192.168.0.65   awx-c01.nix.mds.xyz
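
If DNS does not already resolve these names, a minimal /etc/hosts sketch for each node (addresses taken from the table above) could look like this:

# /etc/hosts entries for the cluster (skip if DNS already resolves these names)
192.168.0.142   awx01.nix.mds.xyz   awx01
192.168.0.143   awx02.nix.mds.xyz   awx02
192.168.0.117   awx03.nix.mds.xyz   awx03
192.168.0.65    awx-c01.nix.mds.xyz awx-c01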

Our PostgreSQL w/ Patroni solution is a separate cluster residing on separate physical machines.

Here's the table of steps you'll need:
 
HOST SETTING DESCRIPTION
awx01 / awx02 / awx03 CentOS 7 Create 3 separate VMs to add to your cluster.
awx01 / awx02 / awx03 echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf; echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf; sysctl -p Enable non-local binding and IP forwarding for HAProxy and Keepalived.
awx01 / awx02 / awx03

Create the FS on the new disk and mount it and setup Gluster:

mkfs.xfs /dev/sdb
mkdir -p /bricks/0
mount /dev/sdb /bricks/0
# grep brick /etc/fstab
/dev/sdb /bricks/0                              xfs     defaults        0 0

Gluster currently ships as version 4.1, which won't work with Ganesha. Either use the 3.13 repo below or continue with the latest version of Gluster:

# cat CentOS-Gluster-3.13.repo
# CentOS-Gluster-3.13.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information

[centos-gluster313]
name=CentOS-$releasever - Gluster 3.13 (Short Term Maintenance)
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.13/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[centos-gluster313-test]
name=CentOS-$releasever - Gluster 3.13 Testing (Short Term Maintenance)
baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-3.13/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

Alternatively, use the following to install the latest repo:

yum install centos-release-gluster

Install and enable the rest:

yum -y install glusterfs glusterfs-fuse glusterfs-server glusterfs-api glusterfs-cli
systemctl enable glusterd.service
systemctl start glusterd
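
Before creating the volume, join the nodes into a trusted storage pool. A minimal sketch, run from awx01:

# Run once from awx01: probe the other two nodes into the trusted pool
gluster peer probe awx02
gluster peer probe awx03
gluster peer status     # both peers should show "Peer in Cluster (Connected)"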

On awx01 ONLY, if creating a brand new volume:

gluster volume create agv01 replica 3 awx01:/bricks/0/agv01 awx02:/bricks/0/agv01 awx03:/bricks/0/agv01

gluster volume info agv01
gluster volume status 
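
A newly created volume is not started automatically; a quick sketch to start it and confirm the bricks are online before mounting:

gluster volume start agv01
gluster volume status agv01     # each brick should report Online "Y"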

Replace bricks: 

Unreachable brick:
gluster volume remove-brick agv01 replica X awx01:/bricks/0/agv01 start
gluster volume remove-brick agv01 replica X awx01:/bricks/0/agv01 force
gluster peer detach awx01

Reachable brick:

gluster volume remove-brick agv01 replica X awx01:/bricks/0/agv01 start
gluster volume remove-brick agv01 replica X awx01:/bricks/0/agv01 status
gluster volume remove-brick agv01 replica X awx01:/bricks/0/agv01 commit

gluster peer detach awx01
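
Here replica X stands for the replica count after the brick is removed. For example, dropping awx01 from this 3-way replica means the new count is 2; a sketch:

# Shrinking the 3-way replica to 2 while removing awx01's brick
gluster volume remove-brick agv01 replica 2 awx01:/bricks/0/agv01 start
gluster volume remove-brick agv01 replica 2 awx01:/bricks/0/agv01 status
gluster volume remove-brick agv01 replica 2 awx01:/bricks/0/agv01 commit
gluster peer detach awx01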

Add subsequent bricks: 

(from an existing cluster member)
[root@awx01 ~]# gluster peer probe awx03 
[root@awx01 ~]# gluster volume add-brick agv01 replica 3 awx03:/bricks/0/agv01

Mount the storage locally: 

systemctl disable autofs 
mkdir /n 

Example below.  Add to /etc/fstab as well: 

[root@awx01 ~]# mount -t glusterfs awx01:/agv01 /n
[root@awx02 ~]# mount -t glusterfs awx02:/agv01 /n
[root@awx03 ~]# mount -t glusterfs awx03:/agv01 /n

Ex (per node; matching the mount point used above):

awx01:/agv01 /n    glusterfs defaults      0 0
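
Because the Gluster mount needs the network and glusterd up first, adding _netdev to the fstab entry is a common safeguard; a sketch for awx01 (each node points at its own gluster daemon):

awx01:/agv01 /n    glusterfs defaults,_netdev      0 0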

Ensure the following options are set on the gluster volume:

[root@awx01 glusterfs]# gluster volume set agv01 cluster.quorum-type auto
volume set: success
[root@awx01 glusterfs]# gluster volume set agv01 cluster.server-quorum-type server
volume set: success

Here is an example Gluster volume configuration we used (this config is replicated when adding new bricks):

cluster.server-quorum-type: server
cluster.quorum-type: auto
server.event-threads: 8
client.event-threads: 8
performance.readdir-ahead: on
performance.write-behind-window-size: 8MB
performance.io-thread-count: 16
performance.cache-size: 1GB
nfs.trusted-sync: on
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
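
A sketch to apply the above settings in one pass (volume name agv01 assumed):

# Apply each "option value" pair above to the agv01 volume
while read -r opt val; do
    gluster volume set agv01 "$opt" "$val"
done <<'EOF'
cluster.server-quorum-type server
cluster.quorum-type auto
server.event-threads 8
client.event-threads 8
performance.readdir-ahead on
performance.write-behind-window-size 8MB
performance.io-thread-count 16
performance.cache-size 1GB
nfs.trusted-sync on
performance.client-io-threads off
nfs.disable on
transport.address-family inet
EOF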

Configure the GlusterFS filesystem.

awx01 / awx02 / awx03

PACKAGES:
yum install haproxy     # ( 1.5.18-6.el7.x86_64 used in this case )

On all nodes, add the following:

# cat /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local0
    stats       socket /var/run/haproxy.sock mode 0600 level admin
    user        haproxy
    group       haproxy
    daemon
    debug
    maxconn 1024

defaults
    mode tcp
    log global
    option                  dontlognull
    option                  redispatch
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    bind :9000
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /haproxy-stats
    stats auth admin:secretPassword

frontend awx-in
    mode tcp
    bind awx-c01:2222
    option tcplog
    default_backend             awx-back


backend awx-back
    log         /dev/log local0 debug
    mode        tcp
    balance     source
    server      awx01.nix.mds.xyz    awx01.nix.mds.xyz:22 maxconn 1024 check
    server      awx02.nix.mds.xyz    awx02.nix.mds.xyz:22 maxconn 1024 check
    server      awx03.nix.mds.xyz    awx03.nix.mds.xyz:22 maxconn 1024 check
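
Before starting HAProxy, validate the configuration and enable the service on each node; a sketch:

haproxy -c -f /etc/haproxy/haproxy.cfg     # should report "Configuration file is valid"
systemctl enable haproxy
systemctl start haproxy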


Configure logging for HAProxy:

# cat /etc/rsyslog.d/haproxy.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local6.* /var/log/haproxy.log
local0.* /var/log/haproxy.log

Configure rsyslogd (/etc/rsyslog.conf):

local0.*             /var/log/haproxy.log
local3.*             /var/log/keepalived.log
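
Restart rsyslog so the new rules take effect; a sketch:

systemctl restart rsyslog
ls -l /var/log/haproxy.log     # appears once HAProxy logs its first message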

 
Install and configure HAProxy. A great source helped with this part.

awx01 / awx02 / awx03

PACKAGES:

yum install keepalived    # ( Used 1.3.5-1.el7.x86_64 in this case )

AWX01 (/etc/keepalived/keepalived.conf):

vrrp_script chk_haproxy {
        script "killall -0 haproxy"             # check the haproxy process
        interval 2                              # every 2 seconds
        weight 2                                # add 2 points if OK
}

vrrp_instance awx-c01 {
        interface eth0                          # interface to monitor
        state MASTER                            # MASTER on awx01, BACKUP on awx02/awx03
        virtual_router_id 65                    # Set to last digit of cluster IP.
        priority 103                            # highest priority wins: 103 / 102 / 101

        authentication {
                auth_type PASS
                auth_pass s3cretp@s$w0rd
        }

        virtual_ipaddress {
                192.168.0.65                    # virtual ip address
        }

        track_script {
                chk_haproxy
        }
}

AWX02 (/etc/keepalived/keepalived.conf):

vrrp_script chk_haproxy {
        script "killall -0 haproxy"             # check the haproxy process
        interval 2                              # every 2 seconds
        weight 2                                # add 2 points if OK
}

vrrp_instance awx-c01 {
        interface eth0                          # interface to monitor
        state BACKUP                            # MASTER on awx01, BACKUP on awx02/awx03
        virtual_router_id 65                    # Set to last digit of cluster IP.
        priority 102                            # highest priority wins: 103 / 102 / 101

        authentication {
                auth_type PASS
                auth_pass s3cretp@s$w0rd
        }

        virtual_ipaddress {
                192.168.0.65                    # virtual ip address
        }

        track_script {
                chk_haproxy
        }
}

AWX03 (/etc/keepalived/keepalived.conf):

vrrp_script chk_haproxy {
        script "killall -0 haproxy"             # check the haproxy process
        interval 2                              # every 2 seconds
        weight 2                                # add 2 points if OK
}

vrrp_instance awx-c01 {
        interface eth0                          # interface to monitor
        state BACKUP                            # MASTER on awx01, BACKUP on awx02/awx03
        virtual_router_id 65                    # Set to last digit of cluster IP.
        priority 101                            # highest priority wins: 103 / 102 / 101

        authentication {
                auth_type PASS
                auth_pass s3cretp@s$w0rd
        }

        virtual_ipaddress {
                192.168.0.65                    # virtual ip address
        }

        track_script {
                chk_haproxy
        }
}
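
Enable and start keepalived on all three nodes, then confirm the VIP lands on the master; a sketch:

systemctl enable keepalived
systemctl start keepalived
# On the current master only, the VIP should be attached to eth0:
ip addr show eth0 | grep 192.168.0.65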

Configure keepalived. A great source helped with this as well.
awx01 / awx02 / awx03

This step can be made quicker by copying the XML definitions from one host to the others if you already have them defined:

/etc/firewalld/zones/dmz.xml
/etc/firewalld/zones/public.xml

Contents of above:

# cat dmz.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>DMZ</short>
  <description>For computers in your demilitarized zone that are publicly-accessible with limited access to your internal network. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="tcp" port="111"/>
  <port protocol="tcp" port="24007-24008"/>
  <port protocol="tcp" port="38465-38469"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="22"/>
  <port protocol="udp" port="22"/>
  <port protocol="udp" port="49000-59999"/>
  <port protocol="tcp" port="49000-59999"/>
  <port protocol="tcp" port="20048"/>
  <port protocol="udp" port="20048"/>
  <port protocol="tcp" port="49152"/>
  <port protocol="tcp" port="4501"/>
  <port protocol="udp" port="4501"/>
  <port protocol="tcp" port="10000"/>
  <port protocol="udp" port="9000"/>
  <port protocol="tcp" port="9000"/>
  <port protocol="tcp" port="445"/>
  <port protocol="tcp" port="139"/>
  <port protocol="udp" port="138"/>
  <port protocol="udp" port="137"/>
</zone>

 

# cat public.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
  <service name="haproxy"/>
  <port protocol="tcp" port="24007-24008"/>
  <port protocol="tcp" port="49152"/>
  <port protocol="tcp" port="38465-38469"/>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="2049"/>
  <port protocol="tcp" port="4501"/>
  <port protocol="udp" port="4501"/>
  <port protocol="udp" port="20048"/>
  <port protocol="tcp" port="20048"/>
  <port protocol="tcp" port="22"/>
  <port protocol="udp" port="22"/>
  <port protocol="tcp" port="10000"/>
  <port protocol="udp" port="49000-59999"/>
  <port protocol="tcp" port="49000-59999"/>
  <port protocol="udp" port="9000"/>
  <port protocol="tcp" port="9000"/>
  <port protocol="udp" port="137"/>
  <port protocol="udp" port="138"/>
  <port protocol="udp" port="2049"/>
  <port protocol="tcp" port="445"/>
  <port protocol="tcp" port="139"/>
  <port protocol="udp" port="68"/>
  <port protocol="udp" port="67"/>
</zone>
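
After copying the zone XML files into place, reload firewalld so they take effect; a sketch:

firewall-cmd --reload
firewall-cmd --zone=public --list-all     # verify the imported ports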

 

Individual setup (alternative to copying the XML files above):

# cat public.bash
firewall-cmd --zone=public --permanent --add-port=2049/tcp
firewall-cmd --zone=public --permanent --add-port=111/tcp
firewall-cmd --zone=public --permanent --add-port=111/udp
firewall-cmd --zone=public --permanent --add-port=24007-24008/tcp
firewall-cmd --zone=public --permanent --add-port=49152/tcp
firewall-cmd --zone=public --permanent --add-port=38465-38469/tcp
firewall-cmd --zone=public --permanent --add-port=4501/tcp
firewall-cmd --zone=public --permanent --add-port=4501/udp
firewall-cmd --zone=public --permanent --add-port=20048/udp
firewall-cmd --zone=public --permanent --add-port=20048/tcp
firewall-cmd --reload

# cat dmz.bash
firewall-cmd --zone=dmz --permanent --add-port=2049/tcp
firewall-cmd --zone=dmz --permanent --add-port=111/tcp
firewall-cmd --zone=dmz --permanent --add-port=111/udp
firewall-cmd --zone=dmz --permanent --add-port=24007-24008/tcp
firewall-cmd --zone=dmz --permanent --add-port=49152/tcp
firewall-cmd --zone=dmz --permanent --add-port=38465-38469/tcp
firewall-cmd --zone=dmz --permanent --add-port=4501/tcp
firewall-cmd --zone=dmz --permanent --add-port=4501/udp
firewall-cmd --zone=dmz --permanent --add-port=20048/tcp
firewall-cmd --zone=dmz --permanent --add-port=20048/udp
firewall-cmd --reload

# On all nodes, allow the VRRP multicast traffic that Keepalived uses:

firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -m pkttype --pkt-type multicast -j ACCEPT
firewall-cmd --reload

 

HANDY STUFF:

firewall-cmd --zone=dmz --list-all
firewall-cmd --zone=public --list-all
firewall-cmd --set-log-denied=all
firewall-cmd --permanent --add-service=haproxy
firewall-cmd --list-all
firewall-cmd --runtime-to-permanent

Configure firewalld. DO NOT disable firewalld.
awx01 / awx02 / awx03

Run any of the following commands, or a combination of them, against deny entries in /var/log/audit/audit.log that may appear as you stop, start, or install the above services:

METHOD 1:
grep AVC /var/log/audit/audit.log | audit2allow -M systemd-allow;semodule -i systemd-allow.pp

METHOD 2:
audit2allow -a
audit2allow -a -M ganesha_<NUM>_port
semodule -i ganesha_<NUM>_port.pp

USEFUL THINGS:

ausearch --interpret
aureport

Configure SELinux. Don't disable it. This actually makes your host safer and is easy to work with using just these commands.

awx01 / awx02 / awx03

Based on the following comments in the inventory file (Git checkout directory ./awx/installer/inventory):

# Set pg_hostname if you have an external postgres server, otherwise
# a new postgres service will be created
pg_hostname=psql-c01
pg_username=awx
pg_password=awxpass
pg_database=awx
pg_port=5432

We need to create the following on the external PostgreSQL cluster:

CREATE USER awx WITH ENCRYPTED PASSWORD 'awxpass';
CREATE DATABASE awx OWNER awx;
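
Assuming a superuser named postgres exists on the psql-c01 cluster, a sketch of running these statements from one of the AWX nodes:

psql -h psql-c01 -p 5432 -U postgres -c "CREATE USER awx WITH ENCRYPTED PASSWORD 'awxpass';"
psql -h psql-c01 -p 5432 -U postgres -c "CREATE DATABASE awx OWNER awx;"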

Verify the database:

-bash-4.2$ psql -h psql-c01 -p 5432 -W -U awx
Password for user awx:
psql (10.5)
Type "help" for help.

awx=>

 

Create the AWX database user.
awx01 / awx02 / awx03

Installation

[root@awx-mess01 ~]# yum -y install git gcc gcc-c++ lvm2 bzip2 gettext nodejs yum-utils device-mapper-persistent-data  ansible python-pip

[root@awx-mess01 ~]# git clone --depth 50 https://github.com/ansible/awx.git
[root@awx-mess01 ~]# git clone --depth 50 https://github.com/ansible/awx-logos.git

[root@awx-mess01 installer]# vi inventory
[root@awx-mess01 installer]# grep -v '^ *#' inventory
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"

[all:vars]

dockerhub_base=ansible

awx_task_hostname=awx
awx_web_hostname=awxweb
postgres_data_dir=/tmp/pgdocker
host_port=80

docker_compose_dir=/var/lib/awx

pg_hostname=psql-c01
pg_username=awx
pg_password=awxpass
pg_database=awx
pg_port=5432


rabbitmq_port=5672
rabbitmq_vhost=tower
rabbitmq_username=tower
rabbitmq_password='password'
rabbitmq_cookie=rabbitmqcookie


admin_user=admin
admin_password=password

create_preload_data=True

secret_key=awxsecret

[root@awx-mess01 installer]# vi inventory
[root@awx-mess01 installer]# ansible-playbook -i inventory install.yml
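
Once the playbook finishes, the AWX containers should be up; a quick check (container names assumed from a stock install):

docker ps --format '{{.Names}}\t{{.Status}}'
# Expect containers along the lines of awx_task, awx_web, memcached and rabbitmq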

Install AWX. Notice we are using a dummy host, awx-mess01, to play around with first. Also notice that AWX will automatically create its own RabbitMQ queue. Later we will configure it to use our own RabbitMQ cluster.
