When overseeing virtual servers, take advantage of self-service VM management, VM templates, monitoring tools and permissions groups to ease the management burden.
Managing virtual servers has its advantages compared to physical servers, but it also brings new challenges to the table.
When each workload ran on its own physical server, management was straightforward. If there was a problem, the administrator investigated at that server, and all of the workload's dedicated resources were on that machine.
When someone in the organization needed a new physical server, they had to secure a budget, order the hardware, and wait for delivery and installation. The IT landscape looks very different when users can request VMs through a self-service portal and have new workloads deployed in minutes. Those workloads share hardware resources and must be managed together.
Here are six best practices for virtual server management.
1. Use self-service management to prevent VM sprawl
Because it’s so easy to create VMs, VM sprawl is a common problem. There can be VMs in the environment that no one knows the purpose of. It might sound contradictory, but self-service VM management can help prevent this sprawl. When users can request their own VMs, they can also manage them and remove them when no longer needed.
VMs can be deployed with a lease time, so when the lease ends, users must decide whether the VMs are still needed. And when VMs are charged to a budget, users are motivated to clean up unused resources. In a VMware environment, vRealize Automation is one such system: users request services from a catalog and then maintain those VMs themselves.
Other vendors that have similar software that can be used in VMware or other environments include Morpheus Data, Cloudify and Embotics.
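As an illustration of the lease idea, here is a minimal sketch in Python. The record fields, names and dates are invented for the example, not taken from vRealize Automation or any other product:

```python
from datetime import date, timedelta

def lease_expired(deployed: date, lease_days: int, today: date) -> bool:
    """True when a VM's lease has run out and the owner should confirm
    whether the VM is still needed."""
    return today > deployed + timedelta(days=lease_days)

# Hypothetical self-service VM records.
vms = [
    {"name": "dev-web-01", "deployed": date(2024, 1, 10), "lease_days": 90},
    {"name": "test-db-02", "deployed": date(2024, 5, 1), "lease_days": 60},
]

today = date(2024, 6, 15)
expired = [vm["name"] for vm in vms
           if lease_expired(vm["deployed"], vm["lease_days"], today)]
# dev-web-01: Jan 10 + 90 days = Apr 9, lease expired.
# test-db-02: May 1 + 60 days = Jun 30, still within its lease.
```

A real portal would notify the listed owners and reclaim the VMs after a grace period; the point is only that the expiry check is trivial once every VM carries an owner and a lease.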
2. Provide VM templates to ensure right sizing
When creating VMs, it’s tempting to select more resources than needed. You may say, “Two CPUs? Why not four? It will probably perform better.” Often, this isn’t true, and the extra resources are simply wasted. Extrapolated across a large environment, this mindset wastes an enormous amount of resources.
A simple thing to do, which doesn’t require the purchase of additional software, is to work with templates of certain sizes, like a menu of possible VM flavors. This prevents admins from creating VMs that are oversized.
Use some psychological tricks here. If the menu starts with the VM type you would most like used – two CPUs and 4 GB of RAM, for example – it likely won’t be chosen, because people rarely pick the smallest option. Add an even smaller size: it’s human nature to select the second-smallest or medium option, which is now exactly the one you want chosen. This works the same way with self-service products.
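The ordering trick can be made concrete with a small sketch; the flavor names and sizes below are made up for illustration:

```python
# Illustrative catalog of VM sizes; "small" is the size you actually
# want most users to pick.
FLAVORS = [
    {"name": "medium", "cpus": 4, "ram_gb": 8},
    {"name": "micro",  "cpus": 1, "ram_gb": 2},
    {"name": "large",  "cpus": 8, "ram_gb": 16},
    {"name": "small",  "cpus": 2, "ram_gb": 4},
]

def menu(flavors):
    """Present flavors smallest first, so the preferred size lands in
    the second slot -- where users tend to land."""
    return [f["name"] for f in
            sorted(flavors, key=lambda f: (f["cpus"], f["ram_gb"]))]
```

With a "micro" flavor in the catalog, `menu(FLAVORS)` lists "small" second, which is where the second-smallest-option bias points.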
3. Take advantage of tools to monitor performance
Just because the system is organized this way doesn’t mean admins can sit back and relax. They must keep a close eye on under- or oversized VMs that are possibly no longer used. A tool such as vRealize Operations Manager or Microsoft System Center can help with this. These products provide insight into system performance and deployment effectiveness.
Because workloads share the hardware resources of the hypervisor, good insight into how resources are used is essential. With the standard tools included with the hypervisor’s license, such as vCenter for VMware, admins can investigate the system’s performance in a small-scale deployment.
In larger environments – with multiple vCenter servers, perhaps spread across multiple data centers – additional software is a must-have. Other vendors with similar software that can be used in VMware or other environments include SolarWinds, Datadog and ManageEngine.
4. Ensure VM security with appropriate permissions
When moving from a physical to a virtual environment, admins can delegate management to others, but a good plan is required to delegate administration to the right users. The permissions model in most hypervisors, such as VMware vCenter, supports a hierarchy that mirrors the environment, so each part that requires delegated administration gets the correct permissions.
The best approach is to use groups, like in Active Directory, that enable easy assignment but, even more important, easy revocation of permissions by adding or removing users from a group. Admins can quickly audit permissions by checking the group memberships.
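A minimal sketch of group-based delegation, with invented group and role names, shows why audits and revocations stay cheap when permissions attach to groups rather than to individual users:

```python
# Hypothetical groups and their assigned roles; in practice these would
# live in Active Directory and the hypervisor's permission model.
groups = {
    "vm-operators": {"alice", "bob"},
    "vm-admins": {"carol"},
}
role_of_group = {"vm-operators": "PowerOnOff", "vm-admins": "FullControl"}

def audit(user):
    """List the roles a user holds -- only group membership is walked,
    never per-user permission entries."""
    return sorted(role_of_group[g]
                  for g, members in groups.items() if user in members)

# Revoking access is a single membership change, not a permission rewrite:
groups["vm-operators"].discard("bob")
```

After the `discard`, `audit("bob")` returns an empty list; no permission entry anywhere had to be edited.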
5. Use VPN, multifactor authentication for remote access
The attack surface also changes when moving from a physical environment to a virtual one. With physical servers, a breach of a single server doesn’t necessarily allow access to other servers. But with the introduction of centralized VM management, the entire environment is at risk when access to that platform is breached.
Especially now that admins manage environments from home, a good remote access method is of the utmost importance. In the past, remote desktop servers were used to jump into the data center and, from there, access the infrastructure management. That type of access has proven less than secure. A better method is a VPN connection combined with multifactor authentication.
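Multifactor authentication for VPN access commonly relies on time-based one-time passwords (TOTP, RFC 6238). For illustration only, here is a minimal Python sketch of the code-generation side; a production deployment would use a vetted library rather than hand-rolled crypto:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter,
    dynamically truncated to a short decimal code."""
    counter = struct.pack(">Q", at // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A current code for a shared secret:
# totp(b"my-shared-secret", int(time.time()))
```

The VPN gateway runs the same computation with the shared secret and accepts the login only when the codes match, so a stolen password alone is no longer enough.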
6. Choose a specific backup and restore platform for VMs
In a physical environment, a backup is made for each server with an agent running in the OS. This is also possible in a virtualized environment, but this often leads to performance problems because of the large amounts of data that must be pulled from the hypervisor.
With a VM-based backup approach, only the VM’s metadata, OS and app information is collected and saved. That data is often stored as a single file that contains all the information needed to restore that VM on any physical server. Individual files in the backup set might be very difficult to access, so choose a VM backup platform that allows for individual file restores.
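As a toy illustration of the single-file-restore idea – the archive layout below is invented for the example, not any vendor's format – a VM-level backup can pair one archive with a manifest that indexes the guest files it contains:

```python
import io
import tarfile

def make_backup(vm_name: str, files: dict) -> tuple:
    """files maps guest path -> bytes. Returns the archive as bytes plus
    a manifest indexing every file, so single files can be located later."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path, data in files.items():
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue(), {"vm": vm_name, "files": sorted(files)}

def restore_file(archive: bytes, path: str) -> bytes:
    """Pull one guest file out of the backup without restoring the whole VM."""
    with tarfile.open(fileobj=io.BytesIO(archive)) as tar:
        return tar.extractfile(path).read()
```

A platform with this capability restores a single deleted file in seconds, where a whole-VM restore would mean recovering gigabytes of data first.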