Lab Environment Setup Pt. 2

Before standing up my XenServer host, I needed some network storage for installation ISOs, virtual disks (potentially), and other random stuff. I decided to stand up a CentOS 7 NFS server for that purpose.

To start, I pulled down the minimal CentOS 7 ISO and made a bootable USB drive using dd:

$ sudo dd if=/home/jmehl/Downloads/nameofCentOSISO.iso of=/dev/sdb status=progress && sync
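
Before running dd, it's worth double-checking which device node is actually the USB drive, since /dev/sdb on my machine may be something different on yours (and dd will happily overwrite whatever you point it at):

$ lsblk -o NAME,SIZE,MODEL,MOUNTPOINT      # confirm which device is the USB drive before overwriting it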

 

I ended up installing the CentOS instance on the ThinkServer from my last post. Once it was installed, I performed my usual initial configuration for CentOS/RHEL servers:

  • Configured .vimrc – very important 😉
    • set number          " shows line numbers by default
    • syntax on           " turns on syntax highlighting by default
  • Configured hostname:
    • $ sudo hostnamectl set-hostname nfs-server
  • Set up static networking (see the example ifcfg file after this list):
    • $ sudo vim /etc/sysconfig/network-scripts/ifcfg-eno1
    • Change BOOTPROTO to static
    • Add the following lines:
      • IPADDR=<your IP address>
      • NETMASK=<your subnet mask>
      • DNS1=<your primary DNS server IP>
      • GATEWAY=<your default gateway IP>
      • ONBOOT=yes
  • Set up key-based SSH authentication from my management machine:
    • From my management machine:
      • $ ssh-keygen        # This will place newly generated keys in ~/.ssh
      • $ ssh-copy-id jmehl@nfs-server   # copies my local public key over to the nfs-server (in jmehl’s ~/.ssh/authorized_keys file). You must supply the login credentials of the jmehl account here.
      • $ ssh jmehl@nfs-server     # should log in without password prompt
    • Once you are logged in, we should harden the sshd_config a little:
      • $ sudo vim /etc/ssh/sshd_config
      • Uncomment the PermitRootLogin line and set it to PermitRootLogin no
      • Change #PasswordAuthentication yes to PasswordAuthentication no
      • $ sudo systemctl restart sshd.service
  • Patch the system fully:
    • $ sudo yum update -y      # on CentOS, update already handles obsoletes, so a separate upgrade pass isn't needed
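
For reference, a finished ifcfg-eno1 would look roughly like this. The addresses below are placeholders for a typical lab subnet, not my actual values:

# /etc/sysconfig/network-scripts/ifcfg-eno1 (example values)
TYPE=Ethernet
NAME=eno1
DEVICE=eno1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1

After editing, the change can be applied by restarting the network service:

$ sudo systemctl restart network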

With the server set up, I moved on to the NFS configuration, starting with the nfs-utils package:

$ sudo yum install -y nfs-utils

 

Make the share directory:

$ sudo mkdir /var/nfs_share

 

Enable the nfs-server service:

$ sudo systemctl enable nfs-server

 

Start the nfs-server service:

$ sudo systemctl start nfs-server
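
To confirm the service actually came up, check its status:

$ sudo systemctl status nfs-server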

 

Alter the permissions on the share directory:

$ sudo chmod -R 777 /var/nfs_share

 

In general, chmod 777 is a bad idea. Since this was a lab environment, I didn’t care too much. If you’re ever performing something like this in a production environment, or just want to learn how a secure NFS setup works, Red Hat has an awesome write-up.
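
As a slightly safer sketch (it assumes you also switch the export below to all_squash, so every client gets mapped to the unprivileged nfsnobody account), you could instead do something like:

$ sudo chown nfsnobody:nfsnobody /var/nfs_share
$ sudo chmod 775 /var/nfs_share      # writable by the owner and group instead of everyone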

Add to the /etc/exports file:

/var/nfs_share *(rw,sync,no_root_squash,no_all_squash)

 

Just as a note, I ran into a problem with my /etc/exports that turned out to be caused by spaces between my export options. There must be no space between the client specifier and the opening parenthesis, and no spaces between any of the options!

Another side note: this /etc/exports line allows access to the /var/nfs_share directory from any requesting client (hence the wildcard *), which is not recommended for secure environments. A more restrictive example follows below.
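
If you want to scope access down, /etc/exports accepts a subnet or hostname in place of the wildcard. For example, assuming a 192.168.1.0/24 lab network:

/var/nfs_share 192.168.1.0/24(rw,sync,root_squash)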

Export everything listed in /etc/exports:

$ sudo exportfs -a
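
To verify what is actually being exported, and with which options, run:

$ sudo exportfs -v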

 

Allow nfs traffic through firewalld:

$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
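
That covers NFSv4 clients. If anything will mount over NFSv3, the mountd and rpc-bind services need to be opened as well (both service definitions ship with firewalld on CentOS 7):

$ sudo firewall-cmd --permanent --zone=public --add-service=mountd
$ sudo firewall-cmd --permanent --zone=public --add-service=rpc-bind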

 

Reload firewalld to apply the permanent rules:

$ sudo firewall-cmd --reload

 

Restart the nfs-server service:

$ sudo systemctl restart nfs-server

 

Now that the NFS server is configured, we can test it from any client on the local network (one that also has nfs-utils installed) by mounting the shared directory:

$ sudo mount -t nfs nfs-server:/var/nfs_share /mnt

 

This will mount the shared directory /var/nfs_share locally on your machine under /mnt.
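
To sanity-check things from the client side, you can list what the server is offering and confirm the mount (showmount comes with nfs-utils; if it hangs, make sure mountd and rpc-bind are open in firewalld as noted above):

$ showmount -e nfs-server      # lists the server's exports
$ df -h /mnt                   # confirms the NFS mount and its size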

If you want that share to be mounted persistently on the client (across reboots), add the following to /etc/fstab:

nfs-server:/var/nfs_share /mnt nfs rw,sync,hard,intr 0 0
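
Rather than rebooting to test the new entry, you can unmount the share and let fstab drive the remount:

$ sudo umount /mnt
$ sudo mount -a      # mounts everything in /etc/fstab that isn't already mounted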

 

Now that there is network storage for our VMs, next time we can set up the XenServer host!

Lab Environment Setup

This is the first of many posts regarding my lab environment. This lab will be used primarily for general information security research, plus anything else I want to play around with.

Allocated hardware:

  • CyberPower OR700LCDRM1U Smart App LCD UPS SNMP/HTTP Rackmount
    • Capacity: 700 VA / 400 W
    • Output: 120 VAC ± 10%
    • Outlets: 6 × NEMA 5-15R
  • WD Blue 1TB SATA 3.5 Inch Desktop Hard Drive (WD10EZEX)
    • 6 Gb/s 7200 RPM
    • 64MB Cache
  • Lenovo ThinkServer TS140 70A4000HUX
    • Intel Core i3-4130 processor 3.4 GHz, 2C, 4M Cache, 1.00 GT/s, 65W
    • 1 x 4 GB PC3-12800E 1600MHz DDR3 ECC
    • Added two modules of Crucial 4GB Single DDR3 1600 MT/s (PC3-12800) to bring total RAM up to 12GB
    • Inserted the WD10EZEX 1TB HD to serve as the VM storage (for now)
  • Cisco SG200-08P 8-port (4 Reg + 4 PoE) Gigabit PoE Smart Switch (SLM2008PT-NA)
    • 802.1Q VLAN support
    • 4 PoE ports allocated

Now that I have dedicated hardware, I need to set up a type-1 hypervisor. I’m already used to Hyper-V (very fond of PS management), and there are a lot of limitations in the free version of VMware vSphere, so I opted to use XenServer. I’ve always had an interest in XS: it has a great web client in Xen Orchestra (XO), and it has no “free” limitations seeing as it is FOSS.

So I pulled down the XS 7 installation ISO and started getting ready for my install.