Category Archives

Auto mount LUKS without a filesystem

One of my friends recently installed a new storage server in our shared lab environment and graciously gave me some storage space on it via iSCSI. I use Proxmox for my personal lab, and I intended to store some non-critical VM disks on this new storage so I could play around with properly using HA (High Availability) with Proxmox. Additionally, I wanted to gain some experience using iSCSI at the same time. While I trust my friend, the storage itself is outside of my control, so I figured it would be good practice to encrypt my data. The general go-to solution for this would seem to be LUKS.
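For a rough idea of the moving parts, something along these lines is the shape of it; the device path, keyfile location and mapper name below are illustrative placeholders rather than details from the full post, and handing the unlocked device to Proxmox as an LVM physical volume is one way of ending up "without a filesystem":

    # Encrypt the iSCSI-backed block device directly (device path is an example)
    cryptsetup luksFormat /dev/sdb

    # Create a keyfile and add it as a second key, so the volume can be
    # unlocked at boot without typing a passphrase
    dd if=/dev/urandom of=/root/luks-iscsi.key bs=512 count=4
    chmod 0400 /root/luks-iscsi.key
    cryptsetup luksAddKey /dev/sdb /root/luks-iscsi.key

    # /etc/crypttab entry (ideally referencing the device by UUID rather than
    # /dev/sdb); _netdev makes systemd wait for the network/iSCSI session first:
    #   crypt-iscsi   /dev/sdb   /root/luks-iscsi.key   luks,_netdev

    # Rather than putting a filesystem on the unlocked device, use it as an
    # LVM physical volume that Proxmox can carve VM disks out of
    pvcreate /dev/mapper/crypt-iscsi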

Dual-stacking Proxmox Web UI (pveproxy)

As part of my recent (and ongoing) project to implement native IPv6 on my own infrastructure (except at home… I’m looking at you, Hyperoptic), I decided to dual-stack as much as possible, so that where I have IPv6 connectivity services prefer it over IPv4 without anything becoming unavailable in the process.

As it turns out, Proxmox’s Web interface (pveproxy) doesn’t listen on the IPv6 address family by default. This stumped me for a little while, but it’s pretty simple to fix once you know what’s going on.
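An easy way to confirm this is to look at which sockets pveproxy actually has open. On an install behaving as described above, checking the web UI port (8006) shows an IPv4-only listener, roughly like this:

    # List listening TCP sockets on the Proxmox web UI port
    ss -tlnp | grep 8006
    # An affected install shows a 0.0.0.0:8006 (IPv4 wildcard) listener only,
    # with no corresponding [::]:8006 entry for IPv6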

This post spends most of its time explaining why this happens rather than just giving the fix. If you’re just here to see how to do it, check below.

Odd behaviour with /etc/mtab being a regular file

One of our customers had recently requested a Bare-Metal Restore (BMR) of one of their servers, which is a pretty routine task for us. However, upon bringing the restored server up I noticed some odd behaviour with some of its services, notably snmpd.

Our monitoring successfully polls most of the metrics that we look for; however, it fails to get disk statistics, and eventually snmpd just starts timing out. Using snmpbulkwalk I could see that I was getting a response right up until midway through the HOST-RESOURCES MIB. It looked to be hanging on mount points, and once snmpbulkwalk had timed out, I couldn’t get a successful response from snmpd again. This was also seemingly affecting MariaDB, preventing it from starting, amongst other things.
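The title gives away where this ends up: /etc/mtab on the restored server was a regular file rather than the usual symlink into /proc. A rough sketch of the checks involved (the hostname and SNMP community below are placeholders):

    # Reproduce the symptom: walking the host resources storage table hangs part-way
    snmpbulkwalk -v2c -c public restored-server HOST-RESOURCES-MIB::hrStorageTable

    # Check what /etc/mtab actually is
    ls -l /etc/mtab
    # lrwxrwxrwx ... /etc/mtab -> /proc/self/mounts   <- what you'd normally expect
    # -rw-r--r-- ... /etc/mtab                        <- the regular-file case

    # If the leftover regular file turns out to be the culprit, restoring the
    # symlink is the usual fix
    ln -sf /proc/self/mounts /etc/mtab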

Slow DNS resolving using bind9 as caching resolver

I currently have 4 DNS servers across my estate, and until recently these were all configured to forward all queries to Google DNS (8.8.8.8). I ended up having an issue with Google caching an undesired record value, so I opted to change my DNS servers so that they no longer forward queries elsewhere but instead try to answer them themselves; doing this gives me slightly more control over my DNS cache.

As I use named (bind9), this was a pretty trivial change: simply remove the forwarders { 8.8.8.8; }; clause from my configuration and that should be that.
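In configuration terms the change amounts to something like this in the options block of named.conf (trimmed down to the relevant lines rather than a full config):

    options {
        // Previously every query was handed off upstream:
        // forwarders { 8.8.8.8; };

        // With the forwarders clause gone, named recurses from the
        // root hints and builds its own cache
        recursion yes;
    };

A reload afterwards (rndc reload, or restarting the service) is enough for named to pick the change up.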

During my post-change testing, though, I noticed that resolution was taking significantly longer for uncached queries than I’d expect (microsoft.gointeract.io is only used to illustrate my issue).
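The test itself is nothing more exotic than timing a lookup against the resolver in question; a sketch of the idea, assuming the server is queried directly on localhost and watching the query time dig reports (the actual timings from the original test aren’t reproduced here):

    # Ask the local resolver for an uncached name and note the reported time
    dig @127.0.0.1 microsoft.gointeract.io
    # ;; Query time: <n> msec   <- the figure that was unexpectedly high

    # Asking again straight away should be answered from cache in roughly
    # 0 msec, which makes the slow first lookup easy to spot
    dig @127.0.0.1 microsoft.gointeract.io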