Getting a rather cryptic ESXi error message when trying to set a new IPv4 address:
Enable Management Network: Error
Setting ip/ipv6 configuration failed:
For example, this is what is seen when trying to set 10.3.0.12:
The message doesn't say what the real reason behind the error is. Taking a dive into the network configuration of the ESXi host reveals why:
[root@mdsesxi-p04:~] esxcli network ip interface ipv4 get
Name  IPv4 Address  IPv4 Netmask   IPv4 Broadcast  Address Type  Gateway   DHCP DNS
----  ------------  -------------  --------------  ------------  --------  --------
vmk0 10.3.0.11 255.255.255.0 10.3.0.255 STATIC 10.3.0.1 false
vmk1 10.3.0.12 255.255.255.0 10.3.0.255 STATIC 10.3.0.1 false
vmk2 10.0.0.11 255.255.255.0 10.0.0.255 STATIC 0.0.0.0 false
[root@mdsesxi-p04:~]
In the UI there's no indication that the IP 10.3.0.12 is already taken by the vmkernel interface vmk1. Instead, set it to 10.3.0.13, which is free:
[root@mdsesxi-p04:~] esxcli network ip interface ipv4 get
Name  IPv4 Address  IPv4 Netmask   IPv4 Broadcast  Address Type  Gateway   DHCP DNS
----  ------------  -------------  --------------  ------------  --------  --------
vmk0 10.3.0.13 255.255.255.0 10.3.0.255 STATIC 10.3.0.1 false
vmk1 10.3.0.12 255.255.255.0 10.3.0.255 STATIC 10.3.0.1 false
vmk2 10.0.0.11 255.255.255.0 10.0.0.255 STATIC 0.0.0.0 false
[root@mdsesxi-p04:~]
Works perfectly well! With the new IP, the host can now be added to the vSphere Client / vCenter Server (VCSA). Some additional handy ESXi networking commands:
esxcli network nic list
esxcli network ip netstack list
esxcli network vswitch standard list
esxcli network vswitch standard portgroup list
esxcli network ip dns search list
esxcli network ip interface list
esxcli network ip interface ipv4 get
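Since the UI won't flag the conflict, a quick grep over the esxcli output (run in an ESXi SSH session) can confirm whether a candidate IP is already bound to a vmkernel interface. The snippet below works against a saved copy of the output so the logic can be shown anywhere; the file path and addresses are illustrative, and on a live host you would pipe `esxcli network ip interface ipv4 get` straight into the grep:

```shell
# Illustrative saved copy of `esxcli network ip interface ipv4 get` output;
# on a live host, pipe esxcli directly into the grep instead.
cat > /tmp/ipv4.txt <<'EOF'
vmk0  10.3.0.11  255.255.255.0  10.3.0.255  STATIC  10.3.0.1  false
vmk1  10.3.0.12  255.255.255.0  10.3.0.255  STATIC  10.3.0.1  false
vmk2  10.0.0.11  255.255.255.0  10.0.0.255  STATIC  0.0.0.0   false
EOF
CANDIDATE=10.3.0.12
# -F: fixed string (dots are literal), -w: whole-word match
if grep -qwF "$CANDIDATE" /tmp/ipv4.txt; then
  echo "$CANDIDATE is already in use by a vmkernel interface"
else
  echo "$CANDIDATE appears free"
fi
```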
While establishing an OpenVPN connection, the internal IPs could be pinged yet the external IPs could not. The issue might be with packets not being NATed from the tun to the vlan2 interface. Note below that there are NO replies:
root@DD-WRT-KHUFU:/jffs/etc/openvpn# tcpdump -na -s0 -i tun2 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun2, link-type RAW (Raw IP), snapshot length 262144 bytes
10:49:55.636673 IP 10.1.1.2 > 74.208.236.205: ICMP echo request, id 1, seq 9093, length 40
10:50:00.028370 IP 10.1.1.2 > 192.168.0.46: ICMP 10.1.1.2 udp port 52858 unreachable, length 535
10:50:00.661006 IP 10.1.1.2 > 74.208.236.205: ICMP echo request, id 1, seq 9094, length 40
10:50:05.666028 IP 10.1.1.2 > 74.208.236.205: ICMP echo request, id 1, seq 9095, length 40
10:50:10.661477 IP 10.1.1.2 > 74.208.236.205: ICMP echo request, id 1, seq 9096, length 40
10:50:11.204349 IP 10.1.1.2 > 192.168.0.51: ICMP 10.1.1.2 udp port 65235 unreachable, length 479
NOTE: there are no reply packets above. Looking at the interfaces and rules:
# ——————————————————————
# VPN: Required to be able to ping local on-prem or Azure VLAN's
# ——————————————————————
iptables -I FORWARD -i br0 -o tun2 -j ACCEPT
iptables -I FORWARD -i tun2 -o br0 -j ACCEPT
iptables -I INPUT -i tun2 -j logdrop
iptables -t nat -A POSTROUTING -o tun2 -j MASQUERADE
6: vlan2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1452 qdisc noqueue state UP qlen 1000
link/ether 2c:fd:a1:35:60:51 brd ff:ff:ff:ff:ff:ff
inet 100.100.100.100/27 brd 108.168.115.31 scope global vlan2
valid_lft forever preferred_lft forever
inet6 fe80::2efd:a1ff:fe35:6051/64 scope link
valid_lft forever preferred_lft forever
11: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 2c:fd:a1:35:60:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.6/24 brd 192.168.0.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::2efd:a1ff:fe35:6050/64 scope link
valid_lft forever preferred_lft forever
14: tun2: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN qlen 500
link/[65534]
inet 10.1.1.1/24 scope global tun2
valid_lft forever preferred_lft forever
Forwarding to br0, which is the local network, works very well:
C:\Users\tom>ping josh-vm01.nix.mds.xyz
Pinging josh-vm01.nix.mds.xyz [10.0.0.101] with 32 bytes of data:
Reply from 10.0.0.101: bytes=32 time=5ms TTL=62
Reply from 10.0.0.101: bytes=32 time=5ms TTL=62
But a ping to the outside does not:
C:\Users\tom>ping microdevsys.com
Pinging microdevsys.com [74.208.236.205] with 32 bytes of data:
Control-C
^C
C:\Users\tom>
The rules responsible for the local forwarding above, which works, were:
# ——————————————————————
# VPN: Required to be able to ping local on-prem or Azure VLAN's
# ——————————————————————
iptables -I FORWARD -i br0 -o tun2 -j ACCEPT
iptables -I FORWARD -i tun2 -o br0 -j ACCEPT
iptables -I INPUT -i tun2 -j logdrop
iptables -t nat -A POSTROUTING -o tun2 -j MASQUERADE
However, there was nothing for vlan2 above, which is the internet-facing network. The following rules were added to forward traffic from the tun (tunnel) interfaces to the outside world, allowing external pings to work:
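The exact rules weren't captured here, but a sketch of the kind of rules implied, assuming vlan2 is the WAN-facing interface on this DD-WRT build (as the `ip a` output below suggests), would be:

```shell
# Hypothetical sketch: permit tun2 -> vlan2 forwarding and NAT the tunnel
# subnet out through the WAN interface (interface names assumed from ip a)
iptables -I FORWARD -i tun2 -o vlan2 -j ACCEPT
iptables -I FORWARD -i vlan2 -o tun2 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o vlan2 -j MASQUERADE
```

The key difference from the rules above is the MASQUERADE on `-o vlan2`: the existing rule only masqueraded traffic leaving via tun2, so tunnel clients' packets hit the internet with a non-routable source address.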
Let's jump right into virtualizing a KVM-based physical server using various KVM tools such as virsh, Cockpit, etc. A twist will also be introduced by configuring bonding at the end, not the beginning, to document a retrofit to an existing environment. Begin by identifying the various network interfaces that will make up the setup:
[root@dl380g6-p02 network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
inet 10.3.0.10/24 brd 10.3.0.255 scope global noprefixroute enp2s0f0
valid_lft forever preferred_lft forever
inet6 fe80::7ae7:d1ff:fe8f:4d26/64 scope link
valid_lft forever preferred_lft forever
3: enp2s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 78:e7:d1:8f:4d:28 brd ff:ff:ff:ff:ff:ff
4: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 78:e7:d1:8f:4d:2a brd ff:ff:ff:ff:ff:ff
5: enp3s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 78:e7:d1:8f:4d:2c brd ff:ff:ff:ff:ff:ff
[root@dl380g6-p02 network-scripts]#
Begin by installing libvirt (these two commands must be run separately for some reason):
Clear all previous definitions in nmcli. For example:
# nmcli c delete enp2s0f0
# nmcli c delete enp2s0f1
# nmcli c delete enp3s0f0
# nmcli c delete enp3s0f1
# nmcli c delete br0
# nmcli c delete bridge-slave-enp2s0f0
Define bridged networking:
# virsh net-list --all
# nmcli con add ifname br0 type bridge con-name br0 ipv4.addresses 10.3.0.10/24 ipv4.gateway 10.3.0.1 ipv4.dns "192.168.0.46 192.168.0.51 192.168.0.224" ipv4.method manual
# nmcli con add type bridge-slave ifname enp2s0f0 master br0
# nmcli c s
Next, bring the physical interface offline and the bridge interface online. This is best done via the console, since networking will go offline, causing you to lose the connection. Check and verify:
# nmcli c down enp2s0f0
# nmcli c up br0
# nmcli c show
# nmcli c show --active
# virsh net-list --all
IMPORTANT: If the default route is missing, which can be confirmed with ip r or netstat -nr, add it back in, otherwise reaching out to the internet will not work. For example:
ip route add 192.168.0.0/24 dev net0
ip route add default via 192.168.0.1 dev net0
Next, define the br0 interface in virsh. Save this content to br0.xml:
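The br0.xml content wasn't preserved here; a minimal libvirt network definition of the usual form for an existing host bridge (the names assumed to match the nmcli bridge above) looks like:

```xml
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```

Then define and start it with `virsh net-define br0.xml`, `virsh net-start br0` and `virsh net-autostart br0`.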
NOTE: If the image is in a location that is not accessible, such as /root/, this error will be seen:
ERROR internal error: process exited while connecting to monitor: 2023-01-23T01:54:21.710369Z qemu-kvm: -blockdev {"driver":"file","filename":"/root/Rocky-9.1-x86_64-minimal.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/root/Rocky-9.1-x86_64-minimal.iso': Permission denied
ANOTHER NOTE: If passing --extra-args='console=ttyS0' or --nographics to virt-install, the VNC, SPICE or other graphics options will be skipped and a text-based installation will begin. In this case, the VNC route will be taken, though SPICE will also be discussed.
Login to the console and monitor the installation, answering any questions in the process:
# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh #
virsh # list
Id Name State
—————————————-
2 mc-rocky01.nix.mds.xyz running
virsh # console mc-rocky01.nix.mds.xyz
Connected to domain 'mc-rocky01.nix.mds.xyz'
Escape character is ^] (Ctrl + ])
If there's no activity, look for a message such as this:
WARNING Unable to connect to graphical console: virt-viewer not installed. Please install the ‘virt-viewer’ package.
WARNING No console to launch for the guest, defaulting to --wait -1
Install virt-viewer if not already installed (see above). Once installed, find the port on which your new virtual machine is running:
# ip a
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 78:e7:d1:8f:4d:26 brd ff:ff:ff:ff:ff:ff
inet 10.3.0.10/24 brd 10.3.0.255 scope global noprefixroute br0
valid_lft forever preferred_lft forever
inet6 fe80::1af2:4625:48b7:9030/64 scope link noprefixroute
valid_lft forever preferred_lft forever
Next, let's connect to the graphical interface by specifying the IP and the VNC port above to view the graphical install. Before we do so, we'll need a Chrome plugin first:
Chrome Web Store
Home / Apps / Spice Client
Or just search on Google. Once installed, click Launch App to log in to a client. However, this failed for us. Instead, let's download the Win x64 client:
Look for the Win x64 MSI (gpg) text on the page. Virt-viewer will get installed in something like C:\Program Files\VirtViewer v11.0-256. Browse to the C:\Program Files\VirtViewer v11.0-256\bin folder then start remote-viewer.exe:
However, the above will only work if the specified graphics type is spice:
Establish a connection and continue with the Rocky 9 setup:
Continue with the install making the appropriate selections. Note the dual disk drives specified in the virt-install command. They’re available for our install:
Granted, network parameters could have been configured here; however, the point is to test DHCP across the bridge interface br0. Once installed:
# virt-install \
> --name mc-rocky01.nix.mds.xyz \
> --ram 4096 \
> --vcpus 4 \
> --disk path=/mnt/kvm-drives/mc-rocky01.nix.mds.xyz-disk01.qcow2 \
> --disk path=/mnt/kvm-drives/mc-rocky01.nix.mds.xyz-disk02.qcow2 \
> --os-variant centos-stream9 \
> --os-type linux \
> --network bridge=br0,model=virtio \
> --graphics vnc,listen=0.0.0.0 \
> --console pty,target_type=serial \
> --location /mnt/iso-images/Rocky-9.1-x86_64-minimal.iso
WARNING Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
WARNING No console to launch for the guest, defaulting to --wait -1
Domain is still running. Installation may be in progress.
Waiting for the installation to complete.
Domain has shutdown. Continuing.
Domain creation completed.
Restarting guest.
#
Verify the IP given once the machine is back up (if prompted for a disk password, since we chose encryption, enter it and proceed with the boot):
Take the time to set the hostname, as per the above image:
# hostnamectl set-hostname mc-rocky01.nix.mds.xyz
Now it’s time to test the connectivity from our Windows 10 Laptop:
Using username “root”.
root@10.3.0.179’s password:
Last login: Sun Jan 22 22:19:46 2023
[root@mc-rocky01 ~]#
[root@mc-rocky01 ~]#
[root@mc-rocky01 ~]#
[root@mc-rocky01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:cb:13:ba brd ff:ff:ff:ff:ff:ff
inet 10.3.0.179/24 brd 10.3.0.255 scope global dynamic noprefixroute enp1s0
valid_lft 3107sec preferred_lft 3107sec
inet6 fe80::5054:ff:fecb:13ba/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@mc-rocky01 ~]#
[root@mc-rocky01 ~]#
[root@mc-rocky01 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search nix.mds.xyz mws.mds.xyz mds.xyz
nameserver 192.168.0.46
nameserver 192.168.0.51
nameserver 192.168.0.224
[root@mc-rocky01 ~]#
Note how the DHCP server populated all the DNS servers according to the DHCP configuration defined. Hence external resolution from the KVM guest works, and the guest is able to reach online sites and resources. virsh lists the running machine:
[root@dl380g6-p02 iso-images]# virsh list --all
Id Name State
—————————————-
3 mc-rocky01.nix.mds.xyz running
[root@dl380g6-p02 iso-images]#
and fdisk from the guest KVM machine lists the correct drives:
Time to configure bonding (AKA teaming) to retrofit it into the mix for some HA over the 4 NICs. As before, since the network configuration will be adjusted, connectivity will be lost, so it's a good idea to have the console handy. Before doing anything, remove all the configurations (don't worry about KVM; it will begin to work again once br0 is redefined):
# nmcli c
# nmcli c delete br0
# nmcli c delete bridge-slave-enp2s0f0
There should be nothing defined:
# nmcli c
NAME UUID TYPE DEVICE
and the /etc/sysconfig/network-scripts folder should be empty. Next, let's define the bonding interfaces based on the previous configuration above, using one of the following variants:
# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100" ipv4.method disabled ipv6.method ignore
OR
# nmcli con add type bond con-name bond0 ifname bond0 mode active-backup ipv4.method disabled ipv6.method ignore ipv4.addresses 10.3.0.10/24 ipv4.gateway 10.3.0.1 ipv4.dns "192.168.0.46 192.168.0.51 192.168.0.224" ipv4.method manual
OR
# nmcli con add type bond con-name bond0 ifname bond0 mode active-backup ipv6.method ignore ipv4.addresses 10.3.0.10/24 ipv4.gateway 10.3.0.1 ipv4.dns "192.168.0.46 192.168.0.51 192.168.0.224" ipv4.method manual
# nmcli con add type bond-slave con-name enp2s0f0 ifname enp2s0f0 master bond0
# nmcli con add type bond-slave con-name enp2s0f1 ifname enp2s0f1 master bond0
# nmcli con add type bond-slave con-name enp3s0f0 ifname enp3s0f0 master bond0
# nmcli con add type bond-slave con-name enp3s0f1 ifname enp3s0f1 master bond0
NOTE: There is no IP assignment above; it's not needed. That will go on the br0 interface as before. Activate the slave connections:
# nmcli c up enp2s0f0
# nmcli c up enp2s0f1
# nmcli c up enp3s0f0
# nmcli c up enp3s0f1
Activate the bond0 interface:
# nmcli con up bond0
Next, re-establish the bridge interface. IMPORTANT NOTE: This time the intent is to add bond0, not the individual physical NICs. However, a bond enslaved to a bridge appears to be incompatible here, so the bridge needs the individual interfaces added as well:
# nmcli con add type bridge-slave ifname bond0 master br0
# nmcli con add type bridge-slave ifname enp2s0f0 master br0
# nmcli con add type bridge-slave ifname enp2s0f1 master br0
# nmcli con add type bridge-slave ifname enp3s0f0 master br0
# nmcli con add type bridge-slave ifname enp3s0f1 master br0
# nmcli c up bridge-slave-enp2s0f0
# nmcli c up bridge-slave-enp2s0f1
# nmcli c up bridge-slave-enp3s0f0
# nmcli c up bridge-slave-enp3s0f1
# (optional, not working) nmcli c add type vlan con-name vlan0 ifname bond0.0 dev bond0 id 0 master br0 slave-type bridge
# nmcli c s
Test by starting the virtual machine defined earlier:
virsh # start mc-rocky01.nix.mds.xyz
Domain 'mc-rocky01.nix.mds.xyz' started
virsh #
Then ping the physical host:
C:\Users\tom>ping 10.3.0.10 -t
Pinging 10.3.0.10 with 32 bytes of data:
Reply from 10.3.0.10: bytes=32 time=1ms TTL=62
Reply from 10.3.0.10: bytes=32 time=1ms TTL=62
And ping the KVM VM as well:
C:\Users\tom>ping 10.3.0.179 -t
Pinging 10.3.0.179 with 32 bytes of data:
Reply from 10.3.0.179: bytes=32 time=1ms TTL=62
Reply from 10.3.0.179: bytes=32 time=1ms TTL=62
Use the following commands to test failover capability:
# ip link set dev enp2s0f0 down
# ip link set dev enp2s0f1 down
# ip link set dev enp3s0f0 down
# ip link set dev enp3s0f1 down
# ip link set dev enp2s0f0 up
# ip link set dev enp2s0f1 up
# ip link set dev enp3s0f0 up
# ip link set dev enp3s0f1 up
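While flapping the links, the currently active slave can be watched via /proc/net/bonding/bond0. The snippet below greps a saved copy so the fields of interest are clear; the file contents are an illustrative sample of active-backup mode, and on the host you would grep the real /proc file:

```shell
# Illustrative copy of /proc/net/bonding/bond0; on the host, grep the real file.
cat > /tmp/bond0.txt <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: enp2s0f0
MII Status: up
MII Polling Interval (ms): 100
EOF
# The active slave should change as interfaces are taken down and up
grep -E 'Currently Active Slave|MII Status' /tmp/bond0.txt
```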
As noted above, bond interfaces appear to be incompatible with bridges in Rocky 8+ / RHEL 8+ / CentOS 8+, whereas for RHEL 7 clones it's sufficient to add bond0 to br0.
You’re now set with bonding and redundancy on the KVM side!
COMING UP!
UI and Cockpit installation (Plus any other goodies I’ll think of before completing this post)
502 Bad Gateway
The server returned an invalid or incomplete response.
This error only popped up when keepalived was started; otherwise, with just HAProxy, a timeout was seen. It appeared to be a keepalived config error, but in this case it was due to a faulty HAProxy configuration:
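The faulty section itself wasn't preserved. As an illustration only: a 502 from HAProxy generally means the backend answered with an invalid or truncated response, commonly a backend pointing at the wrong port or an HTTP-mode frontend proxying an HTTPS-only backend. A minimal known-good pair, with hypothetical names and addresses, for comparison:

```
frontend www-in
    bind *:80
    mode http
    default_backend web-servers

backend web-servers
    mode http
    server web01 192.168.0.101:80 check
```

Checking the config with `haproxy -c -f /etc/haproxy/haproxy.cfg` before reloading catches many of these mistakes.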
HP iLO is Hewlett Packard's Integrated Lights-Out server management software, running independently from the main circuitry of the host itself. It allows remote management including power on/off, status checks, console access, etc. This post goes over how to access the iLO interface when newer browsers such as Chrome, Firefox and Edge no longer support the older TLS versions.
First, use IE Tab to log in. This works for most scenarios:
What if that doesn't work, or randomly crashes? Let's try the console instead. Use SSH or PuTTY to log in to the iLO interface:
# ssh Administrator@10.0.0.101
or PuTTY:
Once logged in, you should see:
Using username "Administrator".
Administrator@192.168.0.42's password:
User:Administrator logged-in to mdsesxi-ilo-p01.mds.xyz(192.168.0.42)
iLO 2 Advanced 2.33 at 14:56:47 Mar 20 2018
Server Name: mdsesxi-ilo-p01.mds.xyz
Server Power: On
</>hpiLO->
Next type help:
</>hpiLO-> help
status=0
status_tag=COMMAND COMPLETED
DMTF SMASH CLP Commands:
help : Used to get context sensitive help.
show : Used to show values of a property or contents of a collection target.
create : Used to create new user account in the name space of the MAP.
Example: create /map1/accounts1 username=<lname1> password=<pwd12345> name=
<dname1> group=<admin,config,oemhp_vm,oemhp_rc,oemhp_power>
delete : Used to delete user account in the name space of the MAP.
Example: delete /map1/accounts1/<lname1>
load : Used to move a binary image from an URL to the MAP. The URL is
limited to 80 characters
Example : load /map1/firmware1 -source http://192.168.1.1/images/fw/iLO2_130.bin
reset : Used to cause a target to cycle from enabled to disabled and back to
enabled.
set : Used to set a property or set of properties to a specific value.
start : Used to cause a target to change state to a higher run level.
stop : Used to cause a target to change state to a lower run level.
cd : Used to set the current default target.
Example: cd targetname
exit : Used to terminate the CLP session.
version : Used to query the version of the CLP implementation or other CLP
elements.
oemhp_ping : Used to determine if an IP address is reachable from this iLO 2.
Example : oemhp_ping /map1 192.168.1.1 , where 192.168.1.1 is the IP address that you wish
to ping
oemhp_loadSSHKey : Used to authorize a SSH Key File from an URL The URL is
limited to 80 characters
Example : oemhp_loadSSHKey /map1/config1 -source http://UserName:password@192.168.1.1/images/SSHkey1.ppk
HP CLI Commands:
POWER : Control server power.
UID : Control Unit-ID light.
NMI : Generate an NMI.
VM : Virtual media commands.
VSP : Invoke virtual serial port.
VSP LOG : Invoke virtual serial port data logging.
TEXTCONS : Invoke Remote Text Console on supported platforms.
</>hpiLO->
Note the TEXTCONS command above:
</>hpiLO-> textcons
Starting text console. Press 'ESC (' to return to the CLI Session.
IMPORTANT: The exit keys are listed above. Some messages will be readable while others will not while the host is booting:
Proc 1: Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
Proc 2: Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
QPI Speed: 5.8 GT/s
HP Power Profile Mode: Balanced Power and Performance
Power Regulator Mode: Static Low Power – Processor(s) clocked down to 1.60 GHz
Advanced Memory Protection Mode: Advanced ECC Support
Redundant ROM Detected – This system contains a valid backup system ROM.
Inlet Ambient Temperature: 21C/69F
SATA Option ROM ver 2.00.B12
Copyright 1982, 2008. Hewlett-Packard Development Company, L.P.
Port1: (CD-ROM) DV-28S-W
Broadcom NetXtreme II Ethernet Boot Agent v6.0.11 <F9 = Setup>
Copyright (C) 2000-2010 Broadcom Corporation
All rights reserved.
Press Ctrl-S to enter Configuration Menu
Integrated Lights-Out 2 Advanced
iLO 2 v2.33 Mar 20 2018 10.3.0.8
Slot 0 HP Smart Array P410i Controller Initializing... \
If the message below is seen:
Monitor is in graphics mode or an unsupported text mode.
reboot that Linux instance, then try again via TEXTCONS:
Probing EDD (edd=off to disable)… ok
Rocky Linux 8.5 (Green Obsidian)
Kernel 4.18.0-348.el8.0.2.x86_64 on an x86_64
Activate the web console with: systemctl enable --now cockpit.socket
dl380g6-p02 login:
For each manually created reverse zone, such as:
DNS Zone: 0.168.192.in-addr.arpa.
or
DNS Zone: 0.0.10.in-addr.arpa.
the Dynamic Update option must be set to True in the reverse zone's Settings tab in order for FreeIPA to create reverse records. If it is not enabled, messages such as this will be seen when installing clients using ipa-client-install:
Hostname (lumberjack01.unix.my.dom) does not have A/AAAA record.
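As a sanity check when creating these zones by hand: the in-addr.arpa zone name is just the network octets of the subnet in reverse order. For a /24 prefix it can be derived with a one-liner (an illustrative helper, not part of FreeIPA):

```shell
# Derive the reverse (in-addr.arpa) zone name for a /24 network prefix
NET=192.168.0
REV=$(echo "$NET" | awk -F. '{print $3"."$2"."$1".in-addr.arpa."}')
echo "$REV"   # 0.168.192.in-addr.arpa.
```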
It became apparent that with the growing push for more content on web pages and richer media in general, my router began to perform rather inadequately, to the point where it was rebooting spontaneously. Below is an example of high SIRQs inundating the environment:
NOTE: The last option may or may not be ideal for your router, depending on whether the Flow Acceleration (FA) module is included in your setup and your router supports it.
Additionally, scan the DD-WRT remote logs (you did set up rsyslog to a remote server, right?), which can tell you, among other things, about excessive requests or packet storms and how many DNS queries occurred in 5 minutes (this is a lot):
To solve the above DNS queries problem, you can either tune the DNS masquerade on DD-WRT, if you use it, or adjust the DNS caching on your internal DNS servers. Here's a Windows Server example:
PS C:\Users\Administrator.WINAD01.000> Set-DnsServerCache -MaxKBSize 65536 -MaxTtl 0x15180
WARNING: The input value for the setting MaxTtl is lesser than a second and will be ignored. The input value must be in the format DD.HH:MM:SS where DD is days, HH is hours, MM is minutes and SS is seconds.
PS C:\Users\Administrator.WINAD01.000> Set-DnsServerCache -MaxKBSize 65536 -MaxTtl 2.00:00:00
PS C:\Users\Administrator.WINAD01.000> Get-DnsServerCache
That sets it to 2 days, i.e. something other than 0, which, it seems, would effectively turn caching off. Likewise for FreeIPA / IdM, use the following to adjust the DNS cache:
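The exact FreeIPA steps weren't captured here. Assuming the integrated BIND resolver, the cache ceiling can likely be capped with a standard named option; the file path below is an assumption that varies by FreeIPA version (older releases use /etc/named.conf directly):

```
// e.g. /etc/named/ipa-options-ext.conf (path is an assumption) --
// cap cached records at 2 days, mirroring the Windows setting above
max-cache-ttl 172800;
```

followed by a restart of the named-pkcs11 (or named) service to apply.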
NOTE: A word about OOM when using CTF. It appears these OOM messages, followed by reboots on one of the routers, prompted me to change back to SFE:
# cat dd-wrt-roma.mds.xyz.log|grep -Ei oom_kill
Nov 21 02:54:27 dd-wrt-roma.mds.xyz kernel: [20094.748505] [<80014094>] (dump_header) from [<800b7344>] (oom_kill_process+0xec/0x3cc)
Nov 21 02:54:27 dd-wrt-roma.mds.xyz kernel: [20094.766870] [<800b7258>] (oom_kill_process) from [<800b78f0>] (out_of_memory+0x260/0x344)
Nov 21 04:06:13 dd-wrt-roma.mds.xyz kernel: [ 4191.173207] [<80014094>] (dump_header) from [<800b7344>] (oom_kill_process+0xec/0x3cc)
.
.
.
Nov 23 22:15:56 dd-wrt-roma.mds.xyz kernel: [47881.130510] [<80014094>] (dump_header) from [<800b7344>] (oom_kill_process+0xec/0x3cc)
Nov 23 22:15:56 dd-wrt-roma.mds.xyz kernel: [47881.148868] [<800b7258>] (oom_kill_process) from [<800b78f0>] (out_of_memory+0x260/0x344)
# cat dd-wrt-inet.mds.xyz.log | grep -Ei oom_kill
Nov 25 03:21:21 dd-wrt-inet.mds.xyz kernel: [172011.430393] [<80014094>] (dump_header) from [<800b7344>] (oom_kill_process+0xec/0x3cc)
Nov 26 04:07:38 dd-wrt-inet.mds.xyz kernel: [89063.941579] [<80014094>] (dump_header) from [<800b7344>] (oom_kill_process+0xec/0x3cc)
Nov 26 04:07:38 dd-wrt-inet.mds.xyz kernel: [89063.941594] [<800b7258>] (oom_kill_process) from [<800b78f0>] (out_of_memory+0x260/0x344)
#
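To see how often the OOM killer fires per day across such dumps, a quick per-day count helps (the log file name and lines below are an illustrative sample matching the grep output above):

```shell
# Illustrative sample of a remote DD-WRT syslog dump
cat > /tmp/ddwrt-oom.log <<'EOF'
Nov 21 02:54:27 dd-wrt-roma.mds.xyz kernel: [20094.748505] (oom_kill_process+0xec/0x3cc)
Nov 21 04:06:13 dd-wrt-roma.mds.xyz kernel: [ 4191.173207] (oom_kill_process+0xec/0x3cc)
Nov 23 22:15:56 dd-wrt-roma.mds.xyz kernel: [47881.130510] (oom_kill_process+0xec/0x3cc)
EOF
# Count OOM killer invocations per day: extract "Mon DD", then tally
grep -Ei 'oom_kill' /tmp/ddwrt-oom.log | awk '{print $1, $2}' | sort | uniq -c
```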
EDIT: Nov 27th 2022
It appears that the networking topology has a lot to do with performance, as well as the DNS caching above. See the posts below: