High Availability and Redundancy Features in vSphere 6.5

VMware announced the new version of its core data center product, vSphere 6.5, at VMworld Barcelona last week. With it, as always, comes new ESXi and vCenter features—and make no mistake, there are some great ones!

One thing I personally liked is the attention to the core feature set that makes vSphere so great:

  • HA and DRS improvements
  • added security features
  • a new VMFS version

These are features that every vSphere environment, big or small, simple or complex, can utilize. Those on the cutting edge aren’t forgotten either, with a push towards containerization and a standardization of its RESTful API platform (with an increased focus on documentation to boot).

There are two topics that I would like to focus on: improvements to the availability of the VMs that run in your environment, and an easier way to provide redundancy to vCenter itself.

Let’s go over some of the important features that you will be able to use right off the bat.

Higher Availability: vCenter

One of the big issues that many customers face when deploying vCenter is related to availability. After all, vCenter itself can be a single point of failure in many environments. While technologies like HA are designed to function without vCenter being available, many deployments (including ServerCentral’s Enterprise Cloud) need very high uptime for vCenter just to carry out day-to-day activities. vCenter Heartbeat was the old way of doing this, but that product is no longer offered by VMware.

With 6.5, VMware offers a new way of ensuring high uptime for vCenter itself: VCSA High Availability.

This is a VCSA-only technology (not surprising given VMware’s desire to move away from the Windows vCenter Server). It essentially lets you cluster vCenter itself: an Active/Passive configuration that uses a back-end network to replicate the vPostgres data between the Active, Passive, and Witness appliances. The Witness, as you can probably guess, is there to prevent split-brain scenarios (which sounds a lot cooler than it is in practice).

[Image: Configuring vCenter HA within the UI]
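
If you’re curious what the Witness actually buys you, here is a minimal sketch (plain Python with made-up names, not VMware’s actual implementation) of the quorum rule that keeps an Active/Passive pair out of a split-brain:

def should_promote(passive_sees_active: bool, passive_sees_witness: bool) -> bool:
    """Decide whether the Passive node may take over (illustrative only).

    If the Passive node can reach the Witness but not the Active node, the
    Active is presumed dead and promotion is safe. If it can reach neither,
    it may itself be the isolated node, so it must NOT promote.
    """
    if passive_sees_active:
        return False             # Active is alive; nothing to do.
    return passive_sees_witness  # Promote only with the Witness's "vote".

# Passive lost the Active but still sees the Witness -> fail over.
assert should_promote(False, True) is True
# Passive is isolated from both -> stay passive and avoid split-brain.
assert should_promote(False, False) is False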

vCenter HA will go a long way in ensuring even higher uptime for vCenter. Given that things are becoming faster and more automated as a whole (and such automation would use vCenter to complete tasks), this is more important than ever to implement in modern environments.

While all of this vCenter news is great, what about improvements to everyday VMs and workloads? VMware has made several improvements to the oldest feature in the book: High Availability.

Redundancy of the Future: Orchestrated HA, Proactive HA, and Admission Control

Orchestrated HA

One basic feature that is long overdue is the ability to set a restart order for an HA event. Consider this: Three servers (Database, Application, and Web) all service the same logical app. If you were to cold boot this app, you would probably have an order to the VMs to bring up, right? Usually it is something like Database first, then Application, and finally Web. Well, what if those three servers were all on the same physical host, and that host failed? Currently, HA will restart all three servers on different hosts. Unfortunately, while vSphere has traditionally had a very rudimentary VM Restart Priority, it doesn’t actually orchestrate any servers as they relate to each other. With 6.5, HA can bring up servers in a specific order (think vApps, SRM, or Zerto) after an HA event has occurred. This helps make failure events more predictable, and predictability is a great thing!

[Image: Setting VM Restart Order]
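
Conceptually, orchestrated restart is just a dependency-ordered walk over the VMs. A rough sketch in Python (the VM names and dependency map are hypothetical; HA’s internals obviously look nothing like this):

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each VM lists the VMs it depends on (hypothetical three-tier app).
deps = {
    "web": {"app"},
    "app": {"db"},
    "db": set(),
}

def restart_in_order(deps):
    """Power on VMs so every VM comes up after its dependencies."""
    for vm in TopologicalSorter(deps).static_order():
        print(f"powering on {vm} and waiting until it is ready...")

restart_in_order(deps)  # -> db, then app, then web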

Proactive HA

Also in the realm of HA upgrades, vSphere now has a new checkbox called Proactive HA. This is a third-party-dependent feature that enables hardware vendors to inform vSphere of potential issues with a host and trigger an HA event before the host actually fails. I can see definite use cases for this when it comes to things like SMART data from hard drives, flaky DIMMs, or even an overheating chassis.

This event will actually use vMotion to migrate the VMs (similar to putting a host into Maintenance Mode), thus preventing any downtime at all. This can also utilize a new status called Quarantine Mode, which keeps VMs from starting on this host unless absolutely necessary (or you can use a more traditional Maintenance Mode, which won’t put any VMs on the host no matter what).

[Image: Proactive HA set to Quarantine Mode]
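
Under the hood, the decision amounts to mapping a vendor-reported health state onto a host state. A toy sketch (the severity labels and the mapping here are illustrative; they are not the actual vendor plugin API):

from enum import Enum

class HostAction(Enum):
    NONE = "healthy"            # leave the host alone
    QUARANTINE = "quarantine"   # place no VMs here unless capacity demands it
    MAINTENANCE = "maintenance" # evacuate via vMotion and place nothing here

def react_to_health(severity: str) -> HostAction:
    """Map a (hypothetical) vendor health severity to a host state."""
    if severity == "moderate":   # e.g., a SMART warning or a flaky DIMM
        return HostAction.QUARANTINE
    if severity == "severe":     # e.g., an overheating chassis
        return HostAction.MAINTENANCE
    return HostAction.NONE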

Admission Control

The last of the big new HA features is a usability improvement focused on Admission Control. Admission Control has long been a misunderstood and often improperly used feature, and in my opinion, it was overdue for an overhaul. In previous versions of vSphere, it often felt like a feature left over from the 3.x days that was never improved upon (fixed slot sizes notwithstanding). With 6.5, the VMware dev team has taken a simpler approach to Admission Control.

The long-time favorite of most vSphere admins has been the percentage-based policy. With this setting, the admin reserves a percentage of the cluster’s CPU and memory for failover; once free capacity would drop below that threshold, new VMs are prevented from powering on. This ensures that the existing VMs can still function should an HA event occur. Typically, the percentage was chosen based on the number of hosts in the cluster and designed for a single host failure: with two hosts in the cluster, the percentage would be 50%, or ½. Add a third host, and the new ratio would be ⅓, or 33%, and so on. Prior to 6.5, this percentage would not update when a host was added, so it was up to the admin team to make sure the percentage stayed correct (and let’s hope they could do fractions). That has all changed with 6.5: simply set the number of host failures you can tolerate, and vSphere figures out the percentage for you.
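
The arithmetic vSphere now does for you is simple enough to sketch (illustrative Python, mirroring the fractions above):

def failover_capacity_pct(hosts: int, host_failures_to_tolerate: int = 1) -> float:
    """Percentage of cluster resources to hold back for HA failover."""
    if host_failures_to_tolerate >= hosts:
        raise ValueError("cannot tolerate losing every host in the cluster")
    return 100.0 * host_failures_to_tolerate / hosts

print(failover_capacity_pct(2))  # 50.0  -> two hosts, tolerate one failure
print(failover_capacity_pct(3))  # 33.3... -> add a third host, ratio updates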

The other huge issue with Admission Control was that it worked strictly from reserved resources. If you didn’t set any reservations on a VM, Admission Control only counted the VM’s tiny baseline CPU and memory requirements, so you could deploy far more machines than the cluster could actually support. For example, with a three-host cluster and 256 GB of memory per host, you could deploy 512 GB worth of VMs and still be at 100% utilization with n-1 redundancy. Yet even with Admission Control on, if you reserved nothing, you could deploy many times that amount and the VMs would still power on. They would just be incredibly slow, because the hosts would constantly be swapping to disk.

vSphere 6.5 has a new setting called Performance degradation VMs tolerate. This feature issues a warning when a host failure would cause a reduction in VM performance based on actual resource consumption, not just the configured reservations. By setting this to 0%, you can effectively change Admission Control to work for configured memory as opposed to reserved memory, and therefore the performance and not just the availability of the VMs can be preserved during an event.
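
A rough model of the idea (illustrative only, not VMware’s exact formula), reusing the 3-host, 256 GB example from above:

def would_warn(consumed_gb, host_mem_gb, hosts, failures=1, tolerate_pct=0):
    """Warn when a host failure would hurt performance (illustrative only).

    Compares what the VMs actually consume against what the surviving
    hosts could supply after 'failures' hosts are lost.
    """
    surviving_gb = host_mem_gb * (hosts - failures)
    if consumed_gb <= surviving_gb:
        return False  # a failure would not degrade performance
    shortfall_pct = 100.0 * (consumed_gb - surviving_gb) / consumed_gb
    return shortfall_pct > tolerate_pct

# 3 hosts x 256 GB leaves 512 GB of usable memory with n-1 redundancy:
print(would_warn(consumed_gb=600, host_mem_gb=256, hosts=3))  # True  -> warn
print(would_warn(consumed_gb=500, host_mem_gb=256, hosts=3))  # False -> fine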

[Image: A modernized approach to Admission Control]

My Take

I think these features will become staples in both new deployments and upgrades, as they are very easy to implement, accessible directly from the UI, and non-invasive to the VMs and overall vSphere environment. The benefits are significant and long overdue, and the costs (mainly an additional vCenter Server) are negligible for modern environments.

An Introduction to ServerCentral’s Enterprise Cloud

Part I: The Basics

ServerCentral’s Enterprise Cloud is built on VMware’s vSphere platform. Powered by vCloud Director, it offers on-demand, self-service cloud resources. For those most familiar with VMware private clouds (think ESXi, vCenter, etc.), managing cloud infrastructure from vCloud Director is a fundamentally different way of thinking. No longer do you have to worry about hardware, hypervisors, or maintenance. Our intent is to offer a peace of mind that is seldom found in the IT world.

Our enterprise cloud is architected to offer flexibility, convenience and the much sought-after single pane of glass.

Understanding how our Enterprise Cloud functions is the key to utilizing the platform to its highest potential. In this first part of our Enterprise Cloud series, we will discuss how our platform is organized and how to spin up your first virtual machines in the cloud.

Understanding the vCloud Director Hierarchy

For most users, exposure to vCloud Director begins when they first log in and access the web-based portal. It is here that they are presented with an overarching view of their infrastructure. It is also here that you will create vApps, provision virtual machines, and manage your cloud resources. The good news is that it’s all in one place. The bad news is that not much thought is usually put into how these resources should be organized.

vCloud Director is a hierarchical platform in every sense of the word. Each customer is assigned to the platform as an “organization,” which comprises varying levels of resources and user management. Organizations are further divided into “virtual data centers” (or VDCs for short), and one organization can have multiple virtual data centers if desired. This granular approach allows for centralized management from the previously mentioned single pane of glass. Virtual data centers are in turn divided into vApps (virtual applications), edge gateway services, and any number of unique network configurations. This detailed hierarchy allows our customers to deploy their infrastructure in the most flexible and sensible manner possible, and to do so within the bounds of best practices.
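
To make the hierarchy concrete, here is a toy model in Python (the class and field names are ours for illustration, not the actual vCloud Director object model):

from dataclasses import dataclass, field

@dataclass
class VApp:
    name: str
    vms: list = field(default_factory=list)          # virtual machines

@dataclass
class VirtualDataCenter:                             # a VDC
    name: str
    vapps: list = field(default_factory=list)
    edge_gateways: list = field(default_factory=list)
    networks: list = field(default_factory=list)

@dataclass
class Organization:                                  # one per customer
    name: str
    vdcs: list = field(default_factory=list)         # one org, many VDCs

acme = Organization("acme", vdcs=[
    VirtualDataCenter("prod", vapps=[VApp("web-tier", vms=["web01", "web02"])]),
    VirtualDataCenter("dev"),
])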

Think Inside the Box

Many virtual environments are deployed with standalone virtual machines: single VMs running applications independently of one another, with no shared dependencies or policies. vCloud Director takes a different route. It is architected around the application tiering model: organizations create virtual applications, or vApps, that contain virtual machines. vApps provide the flexibility to create service levels and policies based on dependency and tiering requirements. All applications need not be created equal.

Take a large cardboard box, for example. In this box (your virtual data center) you could place all of your valuables (virtual machines) together. While convenient at first, this becomes a disorganized mess to sort through as more and more valuables are added. Using smaller boxes (vApps), you can separate these valuables along logical lines for better organization and management within the larger box.

With this understanding in mind, let’s log in and create an initial vApp in the cloud.

Logging Into vCD and Spinning up a vApp

Every customer receives a unique vCloud Director login for their Enterprise Cloud. It will follow this format:

https://vcd.servercentral.com/cloud/org/c13497/

If you are unsure of your Enterprise Cloud login, please contact us so that we can get you to the right address! Once you have it, logging in and setting up a vApp is a simple and straightforward process.
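
Before we walk through the UI, a quick aside for the automation-minded: the same login can be performed programmatically against the vCloud API. A minimal sketch using Python’s requests library (the credentials are placeholders, and the API version header must match your vCD instance; ask us if you are unsure):

import requests

VCD = "https://vcd.servercentral.com"
user, org, password = "jdoe", "c13497", "secret"  # hypothetical credentials

resp = requests.post(
    f"{VCD}/api/sessions",
    auth=(f"{user}@{org}", password),              # vCD expects user@org
    headers={"Accept": "application/*+xml;version=20.0"},
)
resp.raise_for_status()
token = resp.headers["x-vcloud-authorization"]     # reuse on subsequent calls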

1/ Log in with the credentials provided:

2/ Once logged in, you will see your organization’s default home screen. This shows you the status of the various vApps and resources available in your Enterprise Cloud. From here you can navigate around to manage the environment.

3/ Setting up your first vApp is simple. For the sake of this tutorial, we will create a new vApp from scratch. On the right side of the home screen, click on “Build new vApp”. There are other ways to deploy vApps, such as from templates and OVFs, but that is a discussion for another time. For now, we’ll continue to keep it simple.

4/ A new window will pop up asking for the details of the new vApp. You will need to give it a name and an optional description, choose which virtual data center it should live in, and modify the lease times. For this first vApp deployment, the defaults are recommended.

5/ The next screen lets you deploy a new VM as part of the vApp creation. We normally recommend setting up at least one VM with the vApp so the whole process is completed at once. Click on the “New Virtual Machine” button to begin this part of the setup process.

6/ Configure all the settings for the VM as you need them. You can give it a name, assign CPU and memory resources, choose a hardware version (vCD defaults to HW v9), and set a few other options. Most of the options are customizable, but we recommend that these two be set as follows:

a/ Expose hardware-assisted CPU virtualization to guest OS: Unchecked
b/ Bus Type: “LSI Logic SAS (SCSI)”

Exposing hardware-assisted CPU virtualization to the guest allows you to run nested hypervisors and other applications that require hardware virtualization on the CPU. The “LSI Logic SAS (SCSI)” bus type provides good performance on most guest operating systems, though some older operating systems may need one of the older emulated controllers. Paravirtual SCSI buses may also be used and can offer a slight performance gain depending on the use case, but only a limited set of guest OSes support them.

If you are unsure what configuration is best for you, please ask.

That’s what we’re here for.

7/ Once the VM configuration on this page is finished, proceed to the next section of the vApp creation. Here you will choose the storage policy, which in most cases will simply be the default.

8/ The next section asks you to specify the initial networking settings for the VM. Make sure to choose the right network; you may leave the other settings at their defaults.

9/ There is a second networking screen that allows you to enable a couple of advanced networking settings. For now, let’s skip it, as these are not needed in the initial configuration. Click “Next” to continue.

10/ The last part of the process is to verify the configuration and finish building the vApp. Let’s click on “Finish”.

11/ Once the vApp customization is done, you will see on the home screen that the vApp and VM are being created.

Once the vApp and VM are created, you are done with the initial configuration of your first vApp! At this point it is common to begin the installation of the OS onto the virtual machine.
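
For readers who prefer scripts to screenshots, the same end result can be reached through the vCloud API’s composeVApp action. A rough sketch (the VDC href is a placeholder, and the XML is pared down from the public vCloud API docs; verify the media types against your API version):

import requests

vdc_href = "https://vcd.servercentral.com/api/vdc/<your-vdc-id>"  # placeholder
body = """<?xml version="1.0" encoding="UTF-8"?>
<ComposeVAppParams xmlns="http://www.vmware.com/vcloud/v1.5" name="my-first-vapp">
  <Description>Created outside the UI</Description>
</ComposeVAppParams>"""

resp = requests.post(
    f"{vdc_href}/action/composeVApp",
    data=body,
    headers={
        "x-vcloud-authorization": token,  # from the login sketch earlier
        "Content-Type": "application/vnd.vmware.vcloud.composeVAppParams+xml",
        "Accept": "application/*+xml;version=20.0",
    },
)
resp.raise_for_status()  # the response is the new (empty) vApp entity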

If you found this valuable, please stay tuned. Part II will cover how to upload, mount, and install from an ISO and take a more detailed dive into VM management.
