In VMware's terminology, a cluster is a group of hosts connected by a single high-performance network, managed as a single entity, and jointly performing certain functions as one system.
VMware provides a wide range of fault-tolerance services. In this article we will focus on HA and DRS:
• HA (High Availability) – a technology designed to increase system availability: if one of the ESXi nodes fails, its virtual machines are automatically restarted on the other ESXi nodes, without administrator intervention.
• DRS (Distributed Resource Scheduler) – a technology that balances load across cluster hosts by deploying new services or migrating existing ones without interrupting their operation. It relies on vMotion.
It should be noted that VMware clustering with HA and DRS should be used deliberately, and only when it is really required.
To build a cluster, VMware uses from 2 to 32 ESXi servers managed by vCenter. You also need shared storage. It can be implemented with VMware vSAN, or with a storage array accessed via Fibre Channel, iSCSI, or NFS. The shared storage holds the virtual machine files, which are available to all cluster hosts simultaneously. It is this shared storage, together with the virtual machine's independence from the physical platform, that makes rapid migration and recovery of virtual machines possible.
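To make the shared-storage requirement concrete, here is a hedged sketch of adding the same NFS share as a datastore on an ESXi host (run once per cluster host so every host sees the same storage). The server, export path, and datastore name are assumptions for illustration; `echo` makes this a dry run that only prints the command.

```shell
# Assumed names for illustration only
NFS_SERVER="nas01.example.local"
NFS_SHARE="/export/ds1"
DATASTORE="ds-shared01"

# Dry run: prints the esxcli command; drop 'echo' in a real ESXi shell
echo esxcli storage nfs add --host="$NFS_SERVER" --share="$NFS_SHARE" --volume-name="$DATASTORE"
```

After running this on every host, the same datastore name appears on each of them, which is what lets HA restart a failed host's VMs elsewhere.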
The easiest way to picture all this is with the diagrams below.
A simplified view of the VMware vSphere cluster:
VMware HA + DRS working together:
And finally, vMotion, which DRS relies on:
Of course, VMware's toolkit is much wider than what is presented above. There are technologies such as FT (Fault Tolerance), VM Monitoring, and VMware Site Recovery Manager (SRM). But it is worth understanding that when all these technologies are used together, their limitations add up. FT, for example, although a very powerful technology, has significant limitations (especially with the Standard license) and allows no more than four FT-protected VMs per host, so in practice it is not used very often.
I think the theory is clear; let's move on to installing the hypervisor on the hosts.
The installation is fairly simple and does not require any special knowledge; configuring the BIOS is, I would say, more complicated.
There are three legal ways to use ESXi:
• Request a 60-day trial;
• Request a free license for the standalone ESXi hypervisor;
• Purchase a license from one of the distributors.
In the first case, we get a full-featured version of VMware vSphere, including VMware vCenter Server and the rest of the software in the suite, but only for the 60-day trial period.
The second option allows you to use ESXi legally and free of charge, but with some reservations:
• The server must have no more than 32GB of RAM;
• It cannot be managed with vCenter Server;
• As a consequence of the previous point, we cannot build a cluster.
In the third case, we get the same as in the first, but without the 60-day restriction. It is worth noting that VMware licensing comes in several editions, each with its own limitations, so the license should be chosen carefully.
So, we have an installation image of the hypervisor; let's proceed with the installation. First, decide which disk subsystem the hypervisor will be installed on. This can be an internal HDD, a RAID array, an internal flash drive (a fairly popular method), or the most fashionable option, Boot-from-SAN. There is also VMware Auto Deploy for booting over the network, but it is better suited to large infrastructures.
ESXi's disk requirements are modest, since the hypervisor is loaded into memory and uses the disk only for logs while running. The minimum size is 2GB; I usually use a 10GB LUN.
Boot from the installation media and wait while the installer loads:
Press Enter to continue:
We agree with the conditions and press F11:
Select the device to install the hypervisor and press Enter:
Choose the keyboard layout and set the password for the root user:
Next, the installer collects the data, displays a warning that all data will be deleted, and suggests that we press F11 to start the installation.
After the process is over, press Enter to reboot and boot into the system.
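As an aside, the interactive steps above can also be automated with an ESXi kickstart file supplied at boot time. The fragment below is a minimal, hedged sketch; every value (password, IP settings, hostname) is an assumption to be replaced with your own.

```
# Minimal ESXi kickstart sketch (all values are placeholders)
vmaccepteula
install --firstdisk --overwritevmfs
# assumption: replace with your own root password
rootpw VMware1!
network --bootproto=static --ip=192.168.10.21 --netmask=255.255.255.0 --gateway=192.168.10.1 --hostname=esxi01.example.local
reboot
```

This is handy when installing several identical hosts for a cluster, but for a handful of servers the interactive installer is perfectly adequate.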
After boot, press F2 to log in, go to the Configure Management Network menu, and press Enter to configure the ESXi server's IP settings. The management network is the network through which the ESXi server is managed.
Select IP Configuration and press Enter.
Select Set static IP address and network configuration, then enter the IP address, subnet mask, and default gateway, and press Enter.
The network settings for VMware ESXi are now complete. In the Configure Management Network menu, press ESC and then Y to confirm the changes; the management network will then be restarted. After that, press Enter to continue. If problems arise later, the management network can be restarted at any time by selecting Restart Management Network.
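For reference, the same management-network settings can be applied from the ESXi shell (or over SSH) with esxcli. The addresses below are assumptions, and `vmk0` is the default management VMkernel interface; `echo` keeps this a dry run that only prints the commands.

```shell
# Assumed management-network values for illustration only
MGMT_IP="192.168.10.21"
MGMT_MASK="255.255.255.0"
MGMT_GW="192.168.10.1"

# Dry run: prints the commands; drop 'echo' on a real host
echo esxcli network ip interface ipv4 set -i vmk0 -t static -I "$MGMT_IP" -N "$MGMT_MASK"
echo esxcli network ip route ipv4 add --gateway "$MGMT_GW" --network default
```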
That's it: the installation is complete. To open the configuration menu from the splash screen, press F2 and enter the root user's credentials. Once the host is up, we can connect to it with the matching version of the vSphere Client using the IP address specified during configuration. After installing the client, connect to the host with the username root and the password set during installation.
In the next article (part II), we'll look at how to deploy the vCenter Server Appliance (VCSA) 6.5.
HowTo, Software, VMware by Veniamin Kireev