Lab Environment Setup Pt. 2

Before standing up my XenServer host, I needed some network storage for installation ISOs, virtual disks (potentially), and other random stuff. I decided to stand up a CentOS 7 NFS server for that purpose.

To start, I pulled down the minimal CentOS 7 ISO and made a bootable USB drive using dd:

$ sudo dd if=/home/jmehl/Downloads/nameofCentOSISO.iso of=/dev/sdb status=progress && sync

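One word of caution before running dd: double-check that /dev/sdb really is your USB drive, since dd will overwrite whatever it's pointed at. lsblk makes that easy:

$ lsblk      # identify the USB drive by its size/model before writing to it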

I ended up installing the CentOS instance on the ThinkServer from my last post. Once it was installed, I performed my usual initial configuration on CentOS/RHEL servers:

  • Configured .vimrc – very important 😉
    • set number          " shows line numbers by default
    • syntax on           " turns on syntax highlighting by default
  • Configured hostname:
    • $ sudo hostnamectl set-hostname nfs-server
  • Setup static networking (a complete example file is sketched just after this list):
    • $ sudo vim /etc/sysconfig/network-scripts/ifcfg-eno1
    • Change BOOTPROTO to static
    • Add the following lines:
      • IPADDR=<your IP address>
      • NETMASK=<your subnet mask>
      • DNS1=<your primary DNS IP>
      • GATEWAY=<your default gateway IP>
      • ONBOOT=yes
    • $ sudo systemctl restart network      # apply the new settings
  • Setup key authentication with SSH from my management machine:
    • From my management machine:
      • $ ssh-keygen        # This will place newly generated keys in ~/.ssh
      • $ ssh-copy-id jmehl@nfs-server   # copies my local public key over to the nfs-server (in jmehl’s ~/.ssh/authorized_keys file). You must supply the login credentials of the jmehl account here.
      • $ ssh jmehl@nfs-server     # should log in without password prompt
    • Once you are logged in, we should harden the sshd_config a little:
      • $ sudo vim /etc/ssh/sshd_config
      • Uncomment the PermitRootLogin line and set it to PermitRootLogin no
      • Change #PasswordAuthentication yes to PasswordAuthentication no 
      • $ sudo systemctl restart sshd.service
  • Patch the system fully:
    • $ sudo yum update -y      # yum update and yum upgrade are effectively the same operation, so one pass is enough
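
For reference, here is roughly what a finished ifcfg-eno1 might look like. The addresses are placeholders for a typical lab subnet; substitute your own:

TYPE=Ethernet
BOOTPROTO=static
NAME=eno1
DEVICE=eno1
ONBOOT=yes
IPADDR=192.168.1.5
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1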

Now that the server was set up, I started on the NFS configuration, beginning with the nfs-utils package:

$ sudo yum install -y nfs-utils


Make the share directory:

$ sudo mkdir /var/nfs_share


Enable the nfs-server service:

$ sudo systemctl enable nfs-server


Start the nfs-server service:

$ sudo systemctl start nfs-server

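To confirm the service came up and the NFS RPC services registered, you can check with systemctl and rpcinfo (the latter comes from the rpcbind package, which nfs-utils pulls in):

$ systemctl status nfs-server
$ rpcinfo -p | grep nfs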

Alter the permissions on the share directory:

$ sudo chmod -R 777 /var/nfs_share


In general, chmod 777 is a bad idea. Since this was a lab environment, I didn’t care too much. If you’re ever performing something like this in a production environment, or just want to learn how a secure NFS setup works, Red Hat has an awesome write-up.
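
If you'd like a rough sketch of a tighter approach for a share like this one: give ownership to nfsnobody (the account CentOS 7 maps squashed NFS clients to) and drop the world-writable bits. This assumes you squash clients with the all_squash export option instead of the no_*_squash options used below:

$ sudo chown nfsnobody:nfsnobody /var/nfs_share
$ sudo chmod 770 /var/nfs_share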

Add to the /etc/exports file:

/var/nfs_share *(rw,sync,no_root_squash,no_all_squash)


Just as a note, I ran into a problem with my /etc/exports that turned out to be caused by spaces between my export options. The options inside the parentheses must be comma-separated with no spaces, and there must be no space between the client specifier (the *) and the opening parenthesis; a space there changes the meaning of the line, because the options then apply to all hosts rather than the host you listed.

Another side note: this /etc/exports line allows access to the /var/nfs_share directory from any requesting client (hence the wildcard *). Not recommended for secure environments; a slightly tighter example follows.
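
As a hedged, lab-grade example of something tighter, you could restrict the export to a single subnet and keep root squashing on (192.168.1.0/24 is a placeholder for your lab network):

/var/nfs_share 192.168.1.0/24(rw,sync,root_squash)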

Update the table of exported NFS file systems:

$ sudo exportfs -a

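To verify what is actually being exported (and with which effective options), run:

$ sudo exportfs -v

If you edit /etc/exports again later, exportfs -ra will re-sync the export table without restarting the service.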

Allow NFS traffic through firewalld:

$ sudo firewall-cmd --permanent --zone=public --add-service=nfs


Reload firewalld to apply the change:

$ sudo firewall-cmd --reload

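One caveat: the nfs service alone covers NFSv4, which modern clients negotiate by default. If you expect NFSv3 clients, also allow the mountd and rpc-bind services (both service definitions ship with firewalld):

$ sudo firewall-cmd --permanent --zone=public --add-service=mountd
$ sudo firewall-cmd --permanent --zone=public --add-service=rpc-bind
$ sudo firewall-cmd --reload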

Restart the nfs-server service:

$ sudo systemctl restart nfs-server


Now that the NFS server is configured, we can test it from any client on the local network by mounting the shared directory:

$ sudo mount -t nfs nfs-server:/var/nfs_share /mnt


This will mount the shared directory /var/nfs_share locally on your machine under /mnt.
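
If the mount fails, showmount (also part of nfs-utils) is handy for checking, from the client's side, what the server thinks it is exporting:

$ showmount -e nfs-server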

If you want that share to be mounted persistently on the client (across reboots), add the following to /etc/fstab (note that the intr option has been a no-op since kernel 2.6.25, but it's harmless to leave in):

nfs-server:/var/nfs_share /mnt nfs rw,sync,hard,intr 0 0

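To make sure the new fstab entry parses and mounts cleanly without rebooting:

$ sudo mount -a      # mounts everything listed in /etc/fstab that isn't already mounted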

Now that there is network storage for our VMs, next time we can set up the XenServer host!

HyperVCheckpoints.ps1

Wrote a simple PowerShell script today for one of my lab environments that safely reboots all virtual machines on a local Hyper-V host and, once each VM is back up, creates a new checkpoint. The script is best used on a scheduled basis, e.g., with Task Scheduler. I have the script running every night at 0300.
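
For reference, a task like mine can be registered from an elevated prompt with something along these lines (the script path is a placeholder; adjust it to wherever the script lives):

schtasks /create /tn "HyperVCheckpoints" /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\HyperVCheckpoints.ps1" /sc daily /st 03:00 /ru SYSTEM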

While scheduled reboots may seem risky to some, I believe the opposite: the practice actually mitigates risk. Depending on your scheduled window, issues with a rebooting server surface when administrators can identify and resolve them, rather than appearing during primary operational hours and causing downtime.

A few caveats:

  • Checkpoints are generally not recommended in a production environment (see https://hyperv.veeam.com/blog/what-are-hyper-v-snapshots-12-things-to-know/) and are definitely not a replacement for full backups.
  • The script, by default, restarts all VMs under the local Hyper-V host. If you have VMs that you do not want restarted or checkpointed, use the ExemptVMNames parameter to specify them (see the usage sketch below).

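As a hypothetical usage sketch (the VM names here are made up, and this assumes the parameter accepts a list of names):

PS> .\HyperVCheckpoints.ps1 -ExemptVMNames "DC01","FS01"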

The script can be found on my GitHub:

https://github.com/mehlsec/HyperVCheckpoints