
Configure a 2-Node vSAN on ESXi Free Using the CLI Without vCenter

I’ve tried many different types of shared storage in my HomeLab, but I’ve never really found “the one”. The vSAN feature of vSphere has been high on my list for a long time, and when I finally upgraded my hosts to the 6.5 release, I decided to also go for vSAN.

My setup will be one 250 GB SSD paired with one 1 TB HDD on each of two hosts, with the witness appliance running on a separate third host. I’m connecting the two datastore hosts using a 1 Gb crossover cable, silently wishing for a 10 Gb connection in the future. The witness traffic will be routed via the normal LAN connection.

There are quite a few single-node vSAN guides out there, and the initial setup for a 2-node cluster is the same. The 2-node setup does add some tinkering with the data and witness network paths, plus joining the nodes together.

f0633e22-88c5-4de6-90ea-a0431b969a6c.png

Create master vSAN cluster node

The new command creates the cluster and the get command displays the cluster information.

esxcli vsan cluster new
esxcli vsan cluster get

The important Sub-Cluster UUID from the get output on the master node is later used to join the other nodes to the cluster.

This step is only done on the first cluster node. All other nodes are added to the cluster using the Sub-Cluster UUID.

create-master-vsan-cluster-node.png

List claimable disks for vSAN

In order to claim disks for vSAN, they need to contain no partitions at all. You can use partedUtil to clear all partitions, or do it from the device view in the UI. I had a disk that neither could interpret, which forced me to zero out the partitioning information using the dd utility.

dd if=/dev/zero of=/dev/YOUR_DISK_IDENTIFIER bs=1 count=1024
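Before resorting to dd, partedUtil can usually inspect and clear the partitions on its own; the device path below is an example, substitute your own disk identifier:

```shell
# Show the partition table of the device (device path is an example)
partedUtil getptbl /vmfs/devices/disks/YOUR_DISK_IDENTIFIER
# Delete a partition by its number; repeat for each partition listed above
partedUtil delete /vmfs/devices/disks/YOUR_DISK_IDENTIFIER 1
```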

To list eligible disks, type this command:

vdq -q
list-claimable-disks-for-vsan.png

Add disk to vSAN cluster

The next step is to add physical disks for vSAN usage. vSAN requires one SSD and one or more HDDs. Use the esxcli vsan storage add command with the options -d HDD1 -d HDD2 -s SSD. You can add them individually as well.

esxcli vsan storage add -d t10.ATA_____ST3250820AS_________________________________________9QE0BJN3 -s naa.55cd2e414cb98556

List disks in vSAN node

esxcli vsan storage list
list-disks-in-vsan-node.png

List datastore in the UI

list-datastore-in-the-ui.png

Change storage policy to allow one node operation

In order to be able to write to a single-node vSAN cluster, you need to change the default storage policy. This is only necessary if you want to set up a single-node cluster.

esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmswap -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmem -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy getdefault
change-storage-policy-to-allow-one-node-operation.png

Configure networking for vSAN traffic

The first step is to create a virtual switch, which we will bind to a vmk# NIC. The same network configuration is done on all data nodes. The witness appliance does not require anything specific.

configure-networking-for-vsan-traffic.png
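If you prefer to do this step from the CLI instead of the UI, a standard vSwitch with an uplink and a port group can be created roughly like this (vSwitch1, vmnic1, and the vSAN port-group name are assumptions for my setup, adjust to yours):

```shell
# Create a standard vSwitch for the crossover link (name is an example)
esxcli network vswitch standard add --vswitch-name=vSwitch1
# Attach the physical NIC that has the crossover cable plugged in
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
# Add a port group that the vSAN VMkernel interface will live on
esxcli network vswitch standard portgroup add --portgroup-name=vSAN --vswitch-name=vSwitch1
```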

Add VMkernel NIC for crossover connectivity

A way to check connectivity is to ping the other host’s IP address from each host.

add-vmkernel-nic-for-crossover-connectivity.png 0424007e-4275-4875-a6de-0fa3bfaa58c9.png
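The same can be done from the CLI; the interface name, port group, and addresses below are assumptions matching my crossover setup (use the .2 address on the second host):

```shell
# Create the VMkernel NIC on the vSAN port group (names are examples)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vSAN
# Assign a static IP on the crossover link
esxcli network ip interface ipv4 set -i vmk1 -I 172.16.0.1 -N 255.255.255.0 -t static
# From the other host, verify connectivity through its vSAN vmk
vmkping -I vmk1 172.16.0.1
```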

Add networking for 2-node data replication traffic

The data traffic will flow using a crossover cable and the witness traffic will flow via the normal LAN connection.

We need to tag which vmk will handle which type of vSAN traffic. Use the -T flag to tag witness traffic, and make sure to select the correct vmk# for your normal traffic; in my case vmk0.

esxcli vsan network ipv4 add -i vmk1
esxcli vsan network ip add -i vmk0 -T=witness
esxcli vsan network list
add-networking-for-2-node-data-replication-traffic.png

Join a data node to the vSAN cluster

Join using the Sub-Cluster UUID obtained from the master cluster node earlier.

esxcli vsan cluster join -u 5218ef34-de66-cc64-1d0e-f14adba2e0e5 -w
esxcli vsan cluster get
join-a-data-node-to-the-vsan-cluster.png

Get the default fault domain

In order to join the witness appliance, you need to specify the fault domain.

esxcli vsan faultdomain get
get-default-faultdomain.png

Download and install the vSAN witness appliance

This appliance is actually the nested ESXi 6.5 appliance pre-configured for witness traffic. It can be installed on ESXi 5.5 and onwards and should run on a third host, though it will also work perfectly fine installed on one of the vSAN data nodes.

download-and-install-the-vsan-witness-appliance.png

Join witness appliance to vSAN cluster

Join the witness node specifying the same Sub-Cluster UUID as earlier (in this screenshot the UUID differs because of multiple setups). Also specify the fault domain ID.

esxcli vsan cluster join -u 5218ef34-de66-cc64-1d0e-f14adba2e0e5 -w -t -p 585b9979-e2ea-89a5-d1fe-a02bb83182fc
esxcli vsan cluster get
join-witness-appliance-to-vsan-cluster.png

Create symlink shortcuts

The root directory of the vSAN datastore is not writable unless you use the osfs tooling. These tools are also good to use if you want to delete stray content. Symlinking them may save you some hairs.

ln -s /usr/lib/vmware/osfs/bin/osfs-mkdir /usr/sbin/osfs-mkdir
ln -s /usr/lib/vmware/osfs/bin/osfs-rmdir /usr/sbin/osfs-rmdir
ln -s /usr/lib/vmware/osfs/bin/osfs-ls /usr/sbin/osfs-ls
create-symlink-shortcuts.png
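With the symlinks in place, directories in the datastore root can be managed directly; the datastore path and the ISO directory name below are assumptions based on the default vSAN datastore name:

```shell
# Create a directory in the vSAN datastore root (default datastore name assumed)
osfs-mkdir /vmfs/volumes/vsanDatastore/ISO
# List the datastore root, then remove the directory again
osfs-ls /vmfs/volumes/vsanDatastore
osfs-rmdir /vmfs/volumes/vsanDatastore/ISO
```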