
My HomeLab – Setup Part 1

It’s begun: over the weekend I started to put together my HomeLab.

In order to build up the environment detailed on my Home Labs page, I used 3 of my Lenovo TS200 servers, 2 of them with X3440 Xeon processors (ESXi) and 1 with the X3460 Xeon processor (Hyper-V). These are all quad-core, eight-threaded, single-processor tower servers, into which I’ve installed 16GB of RAM for the ESXi hosts and 8GB of RAM for the Hyper-V host. Only the Hyper-V server has disks installed: 4 x 750GB SATA drives in a hardware RAID 5 setup (courtesy of the M1015 and Adv. Feature key) plus an additional 250GB OS disk. The remaining 2 TS200s boot ESXi from the internal USB slot.

Building the Environment

Over the weekend I finally had everything I needed to put my environment together. I wired up, plugged in and powered up a total of 9 devices that will be used in my home lab:

3 Lenovo TS200 Servers
1 Iomega IX4-200d 2TB NAS
1 HP 8 Port KVM
1 Netgear GS724T Switch
1 HP 19in Monitor
1 Management PC (QX9650 based gaming rig that’s been retired for 6 months)
1 HP MicroServer

Using the instructions found in the article “Installing ESXi 4.1 to USB Flash Drive”, I pre-provisioned my 2GB Cruzer Blade USB keys with ESXi and installed them straight into the servers (you have to love VMware Player).

An additional step in configuring the environment was to make sure the IP addressing was logical. Because the entire server infrastructure will sit on my home network, I needed to ensure that I didn’t run out of network addresses (or, more importantly, hand out DHCP addresses that overlap the server pool).

I have configured the network as follows:

192.168.x.2 – 192.168.x.99 – Server and Networking Infrastructure
192.168.x.100 – 192.168.x.150 – Workstations (DHCP)
172.16.x.10 – 172.16.x.20 – iSCSI Traffic (and management PC)
10.x.x.10 – 10.x.x.20 – vMotion Traffic

The 172.16.x.x and 10.x.x.x networks are going to be put on their own VLANs to isolate the traffic from the rest of the network.
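To keep myself honest about the plan above, here’s a quick sketch using Python’s standard ipaddress module to check that the static server range, the DHCP pool and the isolated subnets don’t collide. The /24 prefixes and the third octets are assumptions on my part; I’m keeping the real addressing masked with the x above.

```python
# Minimal sanity check of the lab addressing plan.
# The subnet prefixes and third octets are assumptions, not the real values.
import ipaddress

LAB_SUBNET = ipaddress.ip_network("192.168.1.0/24")           # main lab LAN (assumed /24)
SERVER_RANGE = (ipaddress.ip_address("192.168.1.2"),
                ipaddress.ip_address("192.168.1.99"))          # server and networking infrastructure
DHCP_RANGE = (ipaddress.ip_address("192.168.1.100"),
              ipaddress.ip_address("192.168.1.150"))           # workstation DHCP pool
ISCSI_SUBNET = ipaddress.ip_network("172.16.1.0/24")           # iSCSI storage VLAN (assumed)
VMOTION_SUBNET = ipaddress.ip_network("10.0.0.0/24")           # vMotion VLAN (assumed)

def ranges_overlap(a, b):
    """Return True if two inclusive (start, end) address ranges overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

assert not ranges_overlap(SERVER_RANGE, DHCP_RANGE), "static and DHCP pools collide"
assert not ISCSI_SUBNET.overlaps(LAB_SUBNET), "storage VLAN leaks into the lab LAN"
assert not VMOTION_SUBNET.overlaps(ISCSI_SUBNET), "vMotion and iSCSI should stay separate"

print("Addressing plan looks consistent.")
```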

Building the Storage

Due to my failure to increase the disk capacity of my Iomega IX4-200d unit, I have had to throw in an additional storage device, so I have changed the role my MicroServer was going to play (it was going to be a backup server utilising Microsoft DPM). With that in mind I have installed OpenFiler onto the MicroServer, a nice and easy installation compared to NexentaStor (which failed to install because I have no CD drive, as the 5th SATA port is being used for another drive).

Both NAS devices will be configured for iSCSI and will present storage to both ESXi servers.

I’ll point to the excellent post on the TechHead site on configuring OpenFiler for use with vSphere.

The specifics for the lab are that both the OpenFiler and IX4-200d devices will be connected to the storage LAN (172.16.x.x) rather than the main VM LAN. The IX4 device will be used for the VMs, whilst the OpenFiler storage will be used for the VM backups that I’ll be setting up later.
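Once the targets are up, a quick reachability check from the management PC (which also sits on the storage LAN) is a handy sanity test before pointing the ESXi software iSCSI adapters at them. The sketch below is just that, a sketch: the target addresses are hypothetical placeholders on the assumed 172.16.x.x storage subnet, and 3260 is simply the standard iSCSI target port.

```python
# Check that the iSCSI targets answer on the storage LAN.
# Addresses are hypothetical placeholders; 3260 is the standard iSCSI port.
import socket

TARGETS = {
    "ix4-200d":  "172.16.1.11",   # assumed address for the Iomega NAS
    "openfiler": "172.16.1.12",   # assumed address for the MicroServer
}
ISCSI_PORT = 3260

for name, addr in TARGETS.items():
    try:
        with socket.create_connection((addr, ISCSI_PORT), timeout=3):
            print(f"{name}: iSCSI port open at {addr}")
    except OSError as exc:
        print(f"{name}: no answer at {addr}:{ISCSI_PORT} ({exc})")
```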

The Active Directory Domain Controller will also be installed directly onto the IX4, whilst the vCenter server will be installed onto the Hyper-V server (utilising DAS storage).

Installing the Active Directory Domain Controller

VMware’s vCenter Server requires Windows Active Directory as a means of authentication, which means we need a domain controller for the lab. Steering clear of best practice (which calls for at least two domain controllers for resilience), I am going to install just one for the moment. I sized the DC VM to be fairly small: 1 vCPU, 512MB RAM, a 40GB hard disk (thin provisioned) and 1 vNIC connecting to the lab LAN.

The setup was a standard Windows Server 2008 R2 install, followed by Windows Updates before running dcpromo.

Host “WIN-DC01”
Domain “MY-HOME.LAB”.
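As a quick sanity check after dcpromo completes, I like to confirm from another lab machine that the new DC resolves in DNS and that the usual Active Directory ports are answering. The sketch below assumes only the host and domain names above; it isn’t part of the build, just a rough check.

```python
# Post-dcpromo sanity check run from another lab machine:
# resolve the new DC and confirm the standard DNS/Kerberos/LDAP ports answer.
import socket

DC_HOST = "WIN-DC01.my-home.lab"
AD_PORTS = {53: "DNS", 88: "Kerberos", 389: "LDAP"}

addr = socket.gethostbyname(DC_HOST)   # raises if DNS isn't working yet
print(f"{DC_HOST} resolves to {addr}")

for port, service in AD_PORTS.items():
    try:
        with socket.create_connection((addr, port), timeout=3):
            print(f"{service} ({port}/tcp) is answering")
    except OSError as exc:
        print(f"{service} ({port}/tcp) not reachable: {exc}")
```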

Next up is the vCenter server. We’ll continue with that journey soon.

Simon
