AWX / Ansible High Availability Configuration on CentOS 7
So we want a highly available and scalable AWX and Ansible cluster solution.
Here's how we'll plan things out:
NAME | ADDRESS | HOSTNAME | SERVICES |
---|---|---|---|
awx01 | 192.168.0.142 | awx01.nix.mds.xyz | AWX, Gluster, Keepalived, HAProxy |
awx02 | 192.168.0.143 | awx02.nix.mds.xyz | AWX, Gluster, Keepalived, HAProxy |
awx03 | 192.168.0.117 | awx03.nix.mds.xyz | AWX, Gluster, Keepalived, HAProxy |
awx-c01 (VIP) | 192.168.0.65 | awx-c01.nix.mds.xyz | Cluster VIP (Keepalived + HAProxy) |
HOST | SETTING | DESCRIPTION |
awx01 / awx02 / awx03 | CentOS 7 | Create 3 separate VMs to add to your cluster. |
awx01 / awx02 / awx03 | echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf; echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf; sysctl -p | Enable non-local binding and IP forwarding for HAProxy and Keepalived. |
awx01 / awx02 / awx03 |
Create the filesystem on the new disk, mount it, and set up Gluster (a consolidated command sketch follows this section):
mkfs.xfs /dev/sdb
Gluster currently ships in version 4.1. This won't work with Ganesha. Use either the repo below or continue installing the latest version of Gluster:
# cat CentOS-Gluster-3.13.repo
[centos-gluster313]
[centos-gluster313-test]
Alternately to the above, use the following to install the latest repo:
yum install centos-release-gluster
Install and enable the rest:
yum -y install glusterfs glusterfs-fuse glusterfs-server glusterfs-api glusterfs-cli
On node01 ONLY, if creating a brand-new volume:
gluster volume create agv01 replica 3 awx01:/bricks/0/agv01 awx02:/bricks/0/agv01 awx03:/bricks/0/agv01
gluster volume info agv01
Replace bricks:
Unreachable brick:
Reachable brick:
gluster volume remove-brick agv01 replica X awx01:/bricks/0/agv01 start
Add subsequent bricks (from an existing cluster member):
Mount the storage locally:
systemctl disable autofs
Example below; add it to /etc/fstab as well:
[root@awx01 ~]# mount -t glusterfs awx01:/agv01 /n
Ex: awx01:/agv01 /ansible glusterfs defaults 0 0
Ensure the following options are set on the gluster volume:
[root@awx01 glusterfs]# gluster volume set agv01 cluster.quorum-type auto
Here is an example Gluster volume configuration we used (this config is replicated when adding new bricks):
cluster.server-quorum-type: server |
Configure the GlusterFS filesystem. |
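Since the individual Gluster commands above run together, here is a consolidated sketch of the bring-up for this layout. The brick path, volume name, package list, repo and mount point come from the steps above; the peer probes, volume start and glusterd enablement are the standard steps assumed to accompany them:

# On all three nodes: prepare the brick and install Gluster
mkfs.xfs /dev/sdb
mkdir -p /bricks/0
echo "/dev/sdb /bricks/0 xfs defaults 0 0" >> /etc/fstab
mount /bricks/0
mkdir -p /bricks/0/agv01
yum -y install centos-release-gluster
yum -y install glusterfs glusterfs-fuse glusterfs-server glusterfs-api glusterfs-cli
systemctl enable glusterd
systemctl start glusterd

# On awx01 (node01) only: form the cluster and create the replica 3 volume
gluster peer probe awx02
gluster peer probe awx03
gluster volume create agv01 replica 3 awx01:/bricks/0/agv01 awx02:/bricks/0/agv01 awx03:/bricks/0/agv01
gluster volume start agv01

# On each node: mount the volume and persist it in /etc/fstab
mkdir -p /ansible
mount -t glusterfs awx01:/agv01 /ansible
echo "awx01:/agv01 /ansible glusterfs defaults 0 0" >> /etc/fstab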
awx01 / awx02 / awx03 |
PACKAGES: On all the nodes add the following:
# cat /etc/haproxy/haproxy.cfg (a fuller example follows this section)
defaults
listen stats
frontend awx-in
Configure logging for HAProxy:
# cat /etc/rsyslog.d/haproxy.conf
Configure rsyslogd (/etc/rsyslog.conf):
local0.* /var/log/haproxy.log |
Install and configure HAProxy. A great source that helped with this part. |
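The haproxy.cfg sections listed above are only stubs, so here is a minimal sketch of what a working config for this three-node layout could look like. The section names, node addresses and VIP come from above; the stats port (9000), the awx-nodes backend name, the timeouts and the assumption that the AWX web containers are published on host port 8052 (host_port in the installer inventory) are mine, not settings taken from the original:

global
    log         127.0.0.1 local0
    maxconn     4000
    daemon

defaults
    mode    http
    log     global
    option  httplog
    timeout connect 10s
    timeout client  1m
    timeout server  1m

listen stats
    bind *:9000
    stats enable
    stats uri /stats

frontend awx-in
    # net.ipv4.ip_nonlocal_bind=1 (set earlier) lets every node bind the VIP
    bind 192.168.0.65:80
    default_backend awx-nodes

backend awx-nodes
    balance roundrobin
    server awx01 192.168.0.142:8052 check
    server awx02 192.168.0.143:8052 check
    server awx03 192.168.0.117:8052 check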
awx01 / awx02 / awx03 |
PACKAGES: yum install keepalived # ( Used 1.3.5-1.el7.x86_64 in this case )
A filled-in keepalived.conf sketch follows this section.
AWX01:
vrrp_script chk_haproxy {
vrrp_instance awx-c01 {
authentication {
virtual_ipaddress {
track_script {
AWX02:
vrrp_script chk_haproxy {
vrrp_instance awx-c01 {
authentication {
virtual_ipaddress {
track_script {
AWX03:
vrrp_script chk_haproxy {
vrrp_instance awx-c01 {
authentication {
virtual_ipaddress {
track_script { |
Configure keepalived. A great source that helped with this as well. |
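The keepalived.conf listings above only show the block names, so here is a minimal sketch for awx01 as the MASTER. The instance name awx-c01, the chk_haproxy script block and the VIP 192.168.0.65 come from above; the interface name, virtual_router_id, priority, check command and password are assumptions. awx02 and awx03 would carry the same file with state BACKUP and lower priorities (e.g. 102 and 101):

vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # assumed health check: is haproxy running?
    interval 2
    weight 2
}

vrrp_instance awx-c01 {
    state MASTER
    interface eth0                 # adjust to the actual NIC name
    virtual_router_id 51
    priority 103
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme         # placeholder password
    }
    virtual_ipaddress {
        192.168.0.65
    }
    track_script {
        chk_haproxy
    }
}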
awx01 / awx02 / awx03 |
This step can be made quicker by copying the XML zone definitions from one host to the others if you already have one defined:
/etc/firewalld/zones/dmz.xml
Contents of above:
# cat dmz.xml
# cat public.xml
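The zone file contents are not reproduced here, but as a sketch, a dmz.xml carrying the same ports as the dmz.bash script below would look roughly like this (the short/description text is made up):

<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>DMZ</short>
  <description>Gluster / NFS ports for the AWX cluster</description>
  <port protocol="tcp" port="2049"/>
  <port protocol="tcp" port="111"/>
  <port protocol="udp" port="111"/>
  <port protocol="tcp" port="24007-24008"/>
  <port protocol="tcp" port="49152"/>
  <port protocol="tcp" port="38465-38469"/>
  <port protocol="tcp" port="4501"/>
  <port protocol="udp" port="4501"/>
  <port protocol="tcp" port="20048"/>
  <port protocol="udp" port="20048"/>
</zone>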
Individual setup:
# cat public.bash
firewall-cmd --zone=public --permanent --add-port=2049/tcp
firewall-cmd --zone=public --permanent --add-port=111/tcp
firewall-cmd --zone=public --permanent --add-port=111/udp
firewall-cmd --zone=public --permanent --add-port=24007-24008/tcp
firewall-cmd --zone=public --permanent --add-port=49152/tcp
firewall-cmd --zone=public --permanent --add-port=38465-38469/tcp
firewall-cmd --zone=public --permanent --add-port=4501/tcp
firewall-cmd --zone=public --permanent --add-port=4501/udp
firewall-cmd --zone=public --permanent --add-port=20048/udp
firewall-cmd --zone=public --permanent --add-port=20048/tcp
# cat dmz.bash
firewall-cmd --zone=dmz --permanent --add-port=2049/tcp
firewall-cmd --zone=dmz --permanent --add-port=111/tcp
firewall-cmd --zone=dmz --permanent --add-port=111/udp
firewall-cmd --zone=dmz --permanent --add-port=24007-24008/tcp
firewall-cmd --zone=dmz --permanent --add-port=49152/tcp
firewall-cmd --zone=dmz --permanent --add-port=38465-38469/tcp
firewall-cmd --zone=dmz --permanent --add-port=4501/tcp
firewall-cmd --zone=dmz --permanent --add-port=4501/udp
firewall-cmd --zone=dmz --permanent --add-port=20048/tcp
firewall-cmd --zone=dmz --permanent --add-port=20048/udp
firewall-cmd --reload
# On Both
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -m pkttype --pkt-type multicast -j ACCEPT
HANDY STUFF:
firewall-cmd --zone=dmz --list-all |
Configure firewalld. DO NOT disable firewalld. |
awx01 / awx02 / awx03 |
Run any of the following commands, or a combination of them, against deny entries in /var/log/audit/audit.log that may appear as you stop, start, or install the above services (a sketch follows this section):
METHOD 1:
METHOD 2:
USEFUL THINGS:
ausearch --interpret |
Configure SELinux. Don't disable it; this actually makes your host safer and is easy to work with using just these commands. |
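The METHOD 1 / METHOD 2 commands themselves are not shown above, so here is a sketch of the usual CentOS 7 workflow for turning denials into a local policy module (the module name awx_local is made up):

# Install the policy tools (audit2allow, semanage)
yum -y install policycoreutils-python setroubleshoot-server
# Review recent AVC denials in readable form
ausearch -m avc -ts recent --interpret
# Build and load a local policy module from those denials
ausearch -m avc -ts recent | audit2allow -M awx_local
semodule -i awx_local.pp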
awx01 / awx02 / awx03 |
Based on the following comments in the inventory file (Git checkout directory ./awx/installer/inventory):
# Set pg_hostname if you have an external postgres server, otherwise
# a new postgres service will be created
We need to create the following:
CREATE USER awx WITH ENCRYPTED PASSWORD 'awxpass';
Verify the database:
-bash-4.2$ psql -h psql-c01 -p 5432 -W -U awx
awx=>
|
Create the AWX database user. |
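Only the CREATE USER statement is shown above; assuming the usual external-database setup for AWX (a database named awx owned by that user, matching the psql check), the remaining statements would look something like:

CREATE USER awx WITH ENCRYPTED PASSWORD 'awxpass';
CREATE DATABASE awx OWNER awx;
GRANT ALL PRIVILEGES ON DATABASE awx TO awx;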
awx01 / awx02 / awx03 |
Installation [root@awx-mess01 ~]# yum -y install git gcc gcc-c++ lvm2 bzip2 gettext nodejs yum-utils device-mapper-persistent-data ansible python-pip
[root@awx-mess01 ~]# git clone --depth 50 https://github.com/ansible/awx.git
[root@awx-mess01 installer]# vi inventory
[all:vars]
dockerhub_base=ansible
docker_compose_dir=/var/lib/awx
create_preload_data=True
secret_key=awxsecret
|
Install AWX. Notice we are using a dummy host here, awx-mess01, to play around first. Also notice that AWX will automatically create its own RabbitMQ queue. Later we will configure it to use our own RabbitMQ cluster. |
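To tie this row back to the external database created earlier, here is a sketch of the relevant inventory variables and the install command. The four variables shown in the row above are from the source; the pg_* values are assumptions carried over from the database step, and the final playbook run is the standard AWX installer invocation:

# installer/inventory (relevant variables only)
[all:vars]
dockerhub_base=ansible
docker_compose_dir=/var/lib/awx
create_preload_data=True
secret_key=awxsecret
pg_hostname=psql-c01
pg_username=awx
pg_password=awxpass
pg_database=awx
pg_port=5432

# Run the installer from the awx/installer directory:
[root@awx-mess01 installer]# ansible-playbook -i inventory install.yml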