Data Center Virtualization Management and the use of Scripting for Automation

21842 words (87 pages) Dissertation

16th Dec 2019

Tags: Computer Science


A study into Data Center Virtualization Management and the use of Scripting for Automation.

Abbreviations

Data Centers – DC

Virtual Machine – VM

Virtual Machine Monitor – VMM

Operating Systems – OS

System Center Virtual Machine Manager – SCVMM

Graphical User Interface – GUI

Small and Medium Enterprises – SME

Application programming interface – API

Central Processing Unit – CPU

PowerShell Integrated Scripting Environment – PowerShell ISE

Abstract

Virtualization of hardware is a challenge that an enterprise of any size has to overcome at some point. Data center hardware has historically been designed to host a single operating system and application; a rising trend for reducing the amount of hardware, and improving its utilization, is virtualization. Virtualizing a data center, creating a larger number of virtual machines on a single item of hardware, provides a platform for enterprises to manage their resources and hardware successfully, cost-effectively and efficiently.

The objective of the project is to research, review and evaluate virtualization management solutions and automation methods for a data center environment, providing a useful overview of the types of virtualization management that would be used in an enterprise-level environment. A summary of enterprise-level virtual machine monitors has been conducted, giving an overall insight into these systems. Following on from this, the project identifies the need for automation, creating and critically reflecting on some essential scripts to support a virtualized data center and its administrators or managers.

It provides a general view of the benefits, advantages and need for virtualization within a data center environment, and why enterprises should consider this approach. The project additionally identifies areas for future work and improvement.

Chapter 1 – Introduction

If managed correctly, virtualization can solve or dramatically reduce data center problems within any enterprise. With the growth of data centers within enterprises, virtualization offers a large number of solutions and benefits. Increasing demand for resources within enterprises is producing larger physical data centers, and this comes with increases in cost, energy use and physical space. Virtualizing these machines reduces the number of physical machines needed, cutting costs and energy consumption. However, to get the greatest benefit, an enterprise needs to ensure these virtual machines are managed correctly. There is a wide selection of management applications and methods, with the virtualization market growing rapidly to meet increasing demand: “The market has matured rapidly over the last few years, with many organizations having server virtualization rates that exceed 75 percent, illustrating the high level of penetration.” (Warrilow, 2016).

Virtualization has changed the way enterprises use data centers, providing a fully utilized pool of resources and enabling scalable, flexible delivery of desired services. Virtual machine management (VMM) can provide a system in which IT administrators have a clear, understandable and complete overview of all virtual machines, virtual resources and hosts. However, due to the large scale of virtual data centers, managing these manually may not be the best solution; this is where automation comes into its own. This project will review and analyze several of the best-known products for VMM, and review several scripts that can be used as prototypes for essential automation.

1.1 Research Goals and Aims.

Managing virtual data centers presents a number of challenges, depending on the complexity of the environment and the enterprise. Our goal is to provide a clear overview of, and raise awareness about, virtual machine management within a data center, producing a review of current literature from within the industry. This creates a broad view of the area and an indication of what is currently available, both in software and technical terms.

Consequently, this research aims to critically evaluate and review three of the most common and popular virtual machine management programs available, focusing on how well they work as a centralized management tool and how well they perform in a complex data center scenario. Researching their costs, versions and features gives a complete understanding of how well they will perform, and identifies the benefits to an enterprise.

Secondly, the production of basic automation scripts will demonstrate a basic understanding and academic grounding of various automation tasks; these scripts can be implemented as-is, or developed further into more advanced automatic procedures for an enterprise. The result is a standard platform from which recommendations can be produced for further development.

1.2 Objectives

To successfully accomplish the above aims and goals, the project will fulfil the below objectives:

  • Understand virtualization, its management and how this successfully works within a data centre environment.
  • Research current literature, creating a broad overview of the industry, identifying what is available in terms of virtual machine management and their performance factors.
  • Investigate and critically evaluate current top market VMM applications.
  • Produce automation prototype scripts.
  • Critically reflect on how these can work within VMM applications to support a virtual data centre and its management.

1.3 Approach / Organisation

The project follows an organization very similar to the items mentioned previously. Firstly, the project presents some background information in Chapter 2, designed to give even the least technical readers a clear definition of the topic and the terms used, along with identifying key subject areas that will be used and mentioned throughout the whole project. To then expand the reader’s knowledge, the project identifies the current literature that is available, expanding on and detailing concepts within the industry.

Chapter 4 reviews and evaluates currently available VMM applications, chosen due to the high demand they have in the current market. The methodology used, and the need for and identification of automation scripts, is presented in Chapter 5, with the implementation and testing of the scripts described in Chapter 6. A critical evaluation is then conducted in Chapter 7, with an overall conclusion and further research identified in the final chapter.

1.4 Project Plan

The project plan is identified below in figure 1. This is the schedule that was used to ensure that tasks and milestones throughout the project were met and accomplished, and that each area was thoroughly addressed, producing a useful project and report. A more detailed version is attached in the appendix of the report.

Figure 1 – Project Plan

Chapter 2 – Background Research

In this chapter, we provide a broad overview of the background required to understand the topics used throughout this project.

2.1 Data Centers

A growing theme among businesses is the use of data centers; these are seen in some of the largest and most prominent businesses around the world. A data center, as it sounds, is dedicated space where essential IT infrastructure is housed. This central resource can be either physical or virtual; the infrastructure within it, such as servers, can be used to support and operate the daily tasks of the business (Rouse, 2010). Data centers (DCs) have grown in popularity as businesses outgrow their own infrastructure, and are commonly used to provide the computation, processing and storage needed by private and business applications. Depending on its needs, a business may own and operate its own in-house data center, or use a DC provided by a service provider specializing in this area.

This demand and growth has fueled an increasing need for computation within DCs. The growth can be attributed to the rapid increase in the affordability of, and access to, personal DCs in a cloud format; business need will continue to increase with the demand for 24-7 online access to services.

DCs have several challenges that they need to overcome. One of the first is the sheer amount of hardware: safeguarding the reliability, cooling and energy needs of what can sometimes be thousands of servers, in both a responsible and controlled manner. It is well known that some large DC operators, such as Google and Microsoft, have taken more radical approaches, cooling and generating power from the ocean (Rosoff, 2011) (Khandelwal, 2016).

Alongside this, in order to efficiently allocate and distribute resources to business applications, the DC hardware itself offers limited flexibility, so the DC must ensure high availability, minimal downtime, suitable networking and security, and high fault tolerance, while ensuring that resources are thoroughly monitored and managed.

Many problems occur due to a lack of planning around the resources and capacity needed for a specific use; improvements can be made here to make full use of the servers and what they can offer. Server consolidation can be used to ensure that applications are running at the correct performance level and are using the correct amount of resources. Consolidation is either horizontal, distributing workload across several servers, or vertical, running multiple workloads on a combined OS (Jayaswal, 2006) (Iams, 2005). Virtualization should be used to run and utilize server hardware as effectively as possible, ensuring each VM has the correct resources allocated in a separate, secure and isolated manner (Bhuvan Urgaonkar, 2002).
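The effect of consolidation can be illustrated with a simple first-fit packing sketch. This is a hypothetical Python example, not taken from any of the tools discussed; the workload figures are invented for illustration. It estimates how many physical servers are needed once lightly loaded workloads, which previously each occupied a dedicated server, are consolidated:

```python
def consolidate(workloads, server_capacity):
    """First-fit consolidation: pack workload demands (e.g. CPU %)
    onto as few servers as possible without exceeding capacity."""
    servers = []  # remaining free capacity per physical server
    for demand in workloads:
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] = free - demand  # fits on an existing server
                break
        else:
            servers.append(server_capacity - demand)  # provision a new server
    return len(servers)

# Ten workloads that each previously ran on a dedicated server
demands = [15, 20, 10, 30, 25, 5, 40, 10, 20, 15]
print(consolidate(demands, server_capacity=100))  # -> 2
```

Under these invented figures, ten dedicated servers collapse onto two well-utilized hosts, which is the cost and utilization argument the consolidation literature makes.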

Resiliency and high availability are critical concerns for DC managers: the DC must have as little downtime as possible, as any downtime can cost the business. Many large DCs will experience hardware failures; the DC needs a resilient, fault-tolerant system so that it can continue to operate, fully or partially (Jayaswal, 2006).

2.2 Virtualization

Virtualization refers to the concept of a virtual representation of software, such as an operating system, running concurrently on a single item of hardware. Virtualization is a virtual rather than a physical representation, partitioning a single physical hardware item into multiple instances instead of dedicating the hardware to a single purpose. Each virtual instance created on a single item is known as a VM, or virtual machine. A key use is server virtualization, which involves a Virtual Machine Monitor (VMM), also known as a hypervisor, running directly on the hardware. The hypervisor presents the guest OS on each VM with an imitated version of the hardware environment; the guest is often unaware that it is running on virtualized hardware. Alternatively, virtualization can be hosted, with the VMM running on top of a host operating system.

The techniques were first developed by IBM in the 1960s to provide simultaneous interactive access to a mainframe computer (Susanta Nanda, 2005), but have seen a recent resurgence in popularity due to their ability to improve resource consumption. Virtualization is becoming a necessity; when implemented, there are a large number of benefits, such as better hardware utilization, a reduction in physical space, lower energy costs, and reduced capital and administration costs. Virtualization can be applied in several ways, the main examples being applications, desktop, hardware, storage and networking (VMware, n.d.), to improve resources and hardware throughout. There are several classifications of how virtualization is performed and how physical resources are combined; these different approaches are categorized below:

Full virtualization

Full virtualization is a technique in which a ‘bare metal’ hypervisor sits directly on the hardware, and each virtual operating system runs as an unmodified system. With the guest OS unaware that the system is virtualized, commands are issued to what it believes to be the hardware, without knowing that this is simulated hardware created by the hypervisor. VMware and Microsoft Virtual Server are prime examples of this. The performance of full virtualization as a complete system is not always ideal: I/O-intensive systems struggle, and many critical commands can lag due to the binary translation being used (Kai Hwang, 2012). This is the only approach that involves no hardware or OS alterations or modifications, which is its largest advantage and most significant value.

Para Virtualization

Para virtualization is a method in which the guest OS is aware of the virtualization; this modified OS includes drivers that communicate directly with the hardware or host OS via a simple hypervisor layer. The hypervisor provides commands that also cover memory management and interrupt handling. The guest OS needs alteration to support this method, with only the guest OS kernel being modified to improve performance. Operating in this manner reduces overhead and can optimize privileged commands compared to full virtualization, as the host OS and hypervisor work together more efficiently. Para virtualization offers better support for I/O device handling without labor-intensive emulation, at the cost of requiring modification of the guest OS (Jun Nakajima, 2007). This provides a faster service compared to full virtualization. Xen uses a para-virtualization method (Paul Barham, 2003).

Hardware Assisted virtualization

This is a type of full virtualization, an enhancement developed by vendors through advances in virtualization technology, targeting privileged instructions so that they can be run directly on the processor without affecting the host. These privileged calls are handled directly by the hardware, removing the need for para virtualization or binary translation. This system is currently only supported on 64-bit Intel processors (VMware, 2008). Hardware-assisted virtualization gives the VMM, or hypervisor, a simpler and more robust implementation, and performance can be improved compared to full virtualization (Jun Nakajima, 2007).

Partial Virtualization or Hybrid Virtualization

In addition to the three previously mentioned types, there is a combination of two of the above types of virtualization, known as partial or hybrid virtualization. For specific hardware drivers, para virtualization is used, with the host OS using full virtualization for all other features. The two types can also be merged with hardware-assisted virtualization. As a result, the OS and applications will need modification, but overall this can merge some of the benefits of both styles into one.

2.3 Virtual Machine Management (VMM)/ Hypervisors

Virtual machine managers, or hypervisors, are designed to enable communication between the hardware and the VMs within the abstraction layer. Typically, a VMM is responsible for monitoring the VMs and enforcing policies on them; this can be done via software on a host OS, or directly on the hardware, known as bare metal. There are a number of different programs available that are designed to keep track of all that occurs in a VM. This software is designed to manage and configure the use of resources, such as CPU, memory and I/O transfers, additionally supporting the host and network resources in order to deploy efficient and suitable VMs (Rouse, 2016) (Andreas Blenk, 2015).
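The resource-management role a VMM plays can be sketched as a simple admission check. This is an illustrative Python model, not the API of any real hypervisor; the host sizes and VM names are invented. Before a VM is placed on a host, the manager verifies the host still has the CPU and memory headroom to honour the full allocation:

```python
class Host:
    """A toy model of a physical host tracked by a VMM."""

    def __init__(self, cpus, mem_gb):
        self.cpus, self.mem_gb = cpus, mem_gb
        self.vms = []  # (name, cpus, mem_gb) currently placed

    def free(self):
        """Remaining (cpu, memory) capacity on this host."""
        used_cpu = sum(c for _, c, _ in self.vms)
        used_mem = sum(m for _, _, m in self.vms)
        return self.cpus - used_cpu, self.mem_gb - used_mem

    def place(self, name, cpus, mem_gb):
        """Admit the VM only if the host can honour the full allocation."""
        free_cpu, free_mem = self.free()
        if cpus <= free_cpu and mem_gb <= free_mem:
            self.vms.append((name, cpus, mem_gb))
            return True
        return False

host = Host(cpus=16, mem_gb=64)
print(host.place("web01", 4, 8))   # True: fits
print(host.place("db01", 8, 32))   # True: fits
print(host.place("big01", 8, 32))  # False: only 4 CPUs remain free
```

Real VMMs are far more sophisticated (overcommit, ballooning, scheduling), but the admission decision above is the core of the policy-enforcement role described here.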

The hypervisor acts as a gateway to the hardware for the VMs, isolating the guest OSs and applications from the hardware. This allows the host to operate one or more VMs with different OSs, all working on and sharing the same single piece of hardware, while remaining independent of each other and reliant only on the hardware and hypervisor. This isolation means that, from each VM’s perspective, it is running alone on the hardware. Hypervisors bring many additional benefits, such as the ability to migrate and move a VM at a moment’s notice, with minimal disruption and very little downtime for the business or user.

There are several types of VMM tools available, each with different techniques and strengths depending on requirements. They are divided into type one and type two: type two is mostly used for client-based systems, whereas type one is commonly used in enterprise environments.

Type one

Type one runs on bare metal, directly on the hardware, without the need for a host OS. This provides clear and direct communication between the VM and the hardware; type one VMMs are typically known as ‘hypervisors’. With this direct communication, type one offers faster communication with the hardware than type two (Lee, n.d.) (Robin, 1999). Additionally, type one offers greater security in comparison. The major benefit is that each VM is separate, so if an OS or VM crash were to occur, no effect would be seen on the other VMs (More Processes, 2013). Several popular programs fall into this category; Xen, VMware vSphere, Microsoft Hyper-V and KVM are just some examples.


Figure 2 – VMM Type One Example

Type two

A type two VMM runs on a host OS, managing, monitoring and redirecting requests to the hardware and hosting environment, with minor processing performed during the redirect. Here the host machine contains the hypervisor that hosts the VMs, and each guest VM is hosted in a secure and isolated environment. The VM runs on the third level above the hardware, above a virtualized layer, creating an environment with an abstraction of the hardware that can be used by the VMs (Lee, n.d.) (Robin, 1999).

Examples of these type two systems are VirtualBox, Parallels, VMware Workstation and VMware Fusion. Type two VMMs are used when users want access to the host OS and what it may contain, such as applications and documents. One advantage here is hardware support: type two can support a wider range of hardware, with a simpler and less complicated setup than type one. However, type two does rely on the host OS; if an error were to occur on the host, or the host needed a restart, this would directly impact the VMs (Security Wing, 2014).


Figure 3 – VMM Type Two Example

Several products have been created to support enterprise-level virtualization; these are typically closed-source, paid systems that offer increasing support for managing multiple servers with a high number of virtual machines. These systems operate like standard VMM programs, but grant an overall view of all hosts and VMs within the managed network. These complete VMM products typically use a type one VMM, due to the direct control and resource allocation it provides.

2.4 Data Center Virtualization

Data center virtualization is technically defined as “the process of designing, developing and deploying a data center on virtualization technologies” (Techopedia, n.d.). In short, using virtualization technology and data center hardware, multiple virtual resources can be created using fewer items of hardware, minimizing the costs and resources needed for an enterprise-scale data center.

The hardware devices can be expensive, consume space and power, and generate a lot of heat; additionally, maintenance tasks such as re-deployment and backups can be time-consuming and can require extensive downtime. Hardware also runs the risk of failing, producing potential threats to the business. Through the use of virtualization, these servers can be consolidated into fewer pieces of hardware in a virtual environment, in place of vast server farms that are historically designed to run a single enterprise application, such as a database or an exchange server, on a single item of hardware (VMware, n.d.).

Server virtualization has become popular in DCs, providing an easy method of partitioning the hardware, allowing multiple virtual applications to run in isolated areas on a single hardware item, each operating as a single independent server. Data centers typically use a type one VMM, or hypervisor, where the dependency on another OS is low, reducing risk and lowering overhead.

As previously mentioned, there are a number of VMM products which offer enterprise-level management of data centers; these systems are designed to connect to and manage multiple host servers with a high volume of VMs. These management systems, designed for enterprise data center virtualization, provide type one VMM with support, management and ease of utilization.

With the vast size and demands of data centers, a single place from which administrators can manage and support the data center is essential and critical to smooth operation. These centralized VMMs can offer rapid deployment, management, monitoring and reporting of multiple hosts and VMs. They can work in coordination with standard VMM tools, providing the greater visibility and monitoring increasingly needed to gain the full benefits of virtualization.

Both types of VMM system require automation to ease the workloads and demands of running and administering data centers; this can be achieved through automated scripts. Once written, scripts can ease and support essential tasks, reducing administrators’ workloads and increasing the benefits a data center delivers, ensuring an enterprise is gaining the best from virtualization.

Virtualizing an enterprise’s infrastructure brings many benefits; however, it can also introduce new challenges to a business. Whether virtualization is worthwhile is highly dependent on the application and platform, some of which can be more difficult to virtualize than others. Some of the many benefits of VMs within data centers, specifically for enterprises, are:

Reduced costs

The use of VMs, providing access to all resources and potentially using all servers at their full capacity, allows a reduction in costs in several areas, such as a reduction in the overall number of physical servers and in upkeep, maintenance and operational costs. With the ease of migration within the data center, costs can drop further, as the ease of moving VMs between hardware reduces energy use and simplifies disaster recovery plans (Christopher Clark, 2005).

Higher utilization

Running servers near capacity, with use of all their resources, provides the best-utilized servers, translating into more VMs hosted on fewer physical servers (Jeremy Sugerman, 2001). This consumption of resources matters to any enterprise: it ensures that the assets within the data center are not under-utilized, equating to lower costs, maintenance and management.

Increased management

Virtualization makes management easier for users, both at the infrastructure level and virtually. The flexibility here allows the creation, duplication and removal of VMs as needed, multiplying these across servers within the enterprise. This management can support the fast and easy provisioning of applications within the VMs, a vital property that can improve performance, reliability and manageability (VMware, n.d.).

Disaster recovery and redeployment and backups

Virtual machines can be dynamically transformed, allowing highly adaptive and responsive data centers with rapid re-deployment between physical servers. Virtual snapshots provide a responsive, up-to-date backup, allowing the re-deployment of a VM to occur in a matter of minutes. For recovery from a potential disaster, virtualization removes the dependency on a specific item of hardware, instead allowing the VMs to be adaptive and operate on different servers. This, together with the consolidation of the servers required, allows for a speedier and more affordable recovery plan, with virtual assets providing a simpler backup and recovery process, greater flexibility of resources and a smaller recovery time (Hoppes, 2009).
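Snapshot-based backup usually needs a retention policy so that old snapshots do not consume the datastore. The following is a minimal sketch in Python; the snapshot names, dates and keep-count are hypothetical, and a real environment would call the hypervisor's own snapshot API rather than work on a plain dictionary:

```python
from datetime import datetime, timedelta

def rotate_snapshots(snapshots, keep=3):
    """Return (kept, to_delete): keep the `keep` newest snapshots and
    mark the rest for deletion. `snapshots` maps name -> creation time."""
    ordered = sorted(snapshots, key=snapshots.get, reverse=True)
    return ordered[:keep], ordered[keep:]

# Five daily snapshots of a hypothetical VM, newest first
now = datetime(2019, 12, 16)
snaps = {f"vm01-snap{i}": now - timedelta(days=i) for i in range(5)}

kept, to_delete = rotate_snapshots(snaps, keep=3)
print(kept)       # the three most recent snapshots
print(to_delete)  # ['vm01-snap3', 'vm01-snap4']
```

The same keep-newest-N logic underlies most snapshot and backup rotation scripts, whatever tool executes the actual deletion.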

2.5 Virtualized Data Center Automation

Data center automation is the process of managing and automating the workflow, processes and monitoring of the data center facility. It provides a centralized solution that can access, view, edit, monitor and administer most, if not all, of the resources found within the data center. Data center automation can be a relatively easy process, depending on which tools are used (Techopedia, n.d.).

There are a number of applications available in which automation steps can be set up and edited across the data center; the applications will then monitor data center performance and run automated tasks when a particular event occurs, or at a pre-set time. A further method, very common in small to medium size data centers, is using scripting to run batch scripts, or using PowerShell to execute the desired task (Milne, 2015).

Automation provides several features that may benefit the data center, including a wide overview of, and insight into, the complete data center; automating routine processes such as patching, updating and reporting; and automating and scheduling tasks for out-of-office hours or set times.
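The two common triggers mentioned above, a particular event occurring or a pre-set time, can be modelled with a small dispatcher. This is an illustrative Python sketch only; the task names and event names are invented, and a real deployment would use PowerShell, cron, or the automation product's own scheduler:

```python
import datetime

def due_tasks(tasks, now, events):
    """Select automation tasks whose scheduled time has arrived, or whose
    trigger event (e.g. 'host_cpu_high') has been raised."""
    run = []
    for task in tasks:
        if "at" in task and now.time() >= task["at"]:
            run.append(task["name"])            # time-based trigger
        elif task.get("on_event") in events:
            run.append(task["name"])            # event-based trigger
    return run

# Hypothetical task list mixing both trigger styles
tasks = [
    {"name": "nightly-patching", "at": datetime.time(2, 0)},
    {"name": "migrate-vms", "on_event": "host_cpu_high"},
    {"name": "weekly-report", "at": datetime.time(23, 0)},
]

now = datetime.datetime(2019, 12, 16, 3, 0)
print(due_tasks(tasks, now, events={"host_cpu_high"}))
# ['nightly-patching', 'migrate-vms']
```

Splitting trigger evaluation from task execution, as above, is what lets the same script library serve both scheduled out-of-hours runs and event-driven responses.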

Chapter 3 – Literature Review

Within this chapter, we identify and critically evaluate literature related to and supporting this project, crucially looking into areas around the use of management within a virtual data center environment. In order to focus on and evaluate the areas of improvement and advancement, we look into a number of related papers to gain an all-round understanding of the subject matter and its recent and past developments.

3.1 Hypervisor/Virtual Machine Management Applications

Hypervisors vs. Lightweight virtualization: A Performance Comparison – Roberto Morabito, Jimmy Kjallman and Mikka Komu – 2015.

Written in 2015, this paper focuses upon a performance evaluation of several Linux-based virtualization systems. The aim is to review the Linux KVM hypervisor against several Linux container-based systems: LXC, Docker and OSv. Morabito and his team observe that the increasing level of overhead in these systems is deterring users from the many benefits of virtualization.

The methodology uses a standard testing platform with non-virtualized performance as a baseline, with the aim of monitoring CPU, memory, disk I/O and network I/O performance. For the CPU results, they found that “Both the container-based solutions perform better than KVM” (Roberto Morabito, 2015), as shown in figure 4. The disk I/O tests showed very similar performance throughout; however, Docker was at one point seen to run better than the native system. The memory results showed roughly half the performance for OSv, seen in figure 3. Overall, they state that the container-based solutions are challenging the more traditional systems, although KVM has seen improvements in recent years.

Figure 4 – CPU results (Hypervisors vs. Lightweight virtualization: a Performance Comparison)

Figure 3 – Memory Results (Hypervisors vs. Lightweight virtualization: a Performance Comparison)

Discussion

The investigation conducts a good and intensive review and benchmark of the alternatives to standard hypervisor technology. This experiment is very similar to “Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors”, although with some differences, such as the comparison against the native machine and a hypervisor. The work is a good and progressive follow-on, and provides more up-to-date information on container-based systems and alternatives to those pitched in the earlier paper. Additionally, the use of KVM is identified in other comparison-style literature, such as “A Component-Based Performance Comparison of Four Hypervisors” (Jinho Hwang, 2013). That paper supports the claims made, producing a similar results set; however, Hwang et al. conducted a more in-depth experiment, testing in several different circumstances.

The results are surprising in showing how well the container-based systems perform in comparison to hypervisors. However, in terms of security and isolation of VMs, the tools examined here will need reviewing, as mentioned by Morabito as further work.

A Component-Based Performance Comparison of Four Hypervisors – Jinho Hwang, Sai Zeng, Frederick y Wu and Timothy Wood – 2013.

Hwang et al. compare four different hypervisors in this paper, reviewing performance across CPU, memory, disk I/O and network I/O. The four hypervisors, Hyper-V, KVM, vSphere and Xen, are run through a series of tests with the aim of discovering which has the best performance and features for administrators of a data center. For a clean experiment, the same hardware was set up for each application, with the same virtual machines, providing a clean environment and equal parameters. They use a number of external monitoring and testing applications to review the pre-agreed performance factors, additionally testing the effects of running multiple VMs.

To summarize, Hwang et al. found no overall superior hypervisor. In fact, they found that each hypervisor had areas for improvement and areas where it surpassed the rest. For example, in testing the disk using Bonnie++, Xen had the worst performance, due to multiple smaller write actions, compared to the other three hypervisors. Overall, they noted that vSphere performed best in the majority of tests, as predicted, due to the age and maturity of this program in comparison. In summary, Hwang et al. state, “Our results indicate that there is no perfect hypervisors and that different workloads may be best suited for different hypervisors” (Jinho Hwang, 2013), confirming that all hypervisors have suitable benefits and strengths, but that the choice depends on the use case and the enterprise.

Discussion

Overall, the paper is a good investigation with strong evidence to support the claims made. Like “Hypervisor Shootout: Maximizing Workload Density in the Virtualization Platform”, “Hypervisors vs. Lightweight virtualization: a Performance Comparison” (Roberto Morabito, 2015) and “Performance Evaluation and Comparison of the Top Market Virtualization Hypervisors”, this paper performs a review of hypervisors. Each contains strengths that support its claims; Hwang et al. test both open and closed source programs, providing a clean balance of the systems available and showing the differences in what they can offer. The difference between this and “Hypervisor Shootout” is that Hwang et al. test the power and performance of the machines rather than the number of VMs that can be created effectively. The combination and overlap of these papers enable an enterprise to gain a clean and thorough overview of all that can affect their choice of hypervisor.

Performance Evaluation and Comparison of the Top Market Virtualization Hypervisors – Nile University – Abdellatief Elsayed and Nashwa Abdelbaki – 2013.

The literature here focuses on reviewing and identifying the characteristics of top market closed-source hypervisors, providing a clear comparison of performance. Elsayed and Abdelbaki compare VMware ESXi5, Microsoft Hyper-V 2008R2 and Citrix XenServer in three stages: the first with one VM each, the second increasing the VM load, and the third reviewing Hyper-V 2008R2 against Hyper-V 2012RC. All experiments run on a standardized server, each monitored by both the DS2 Dell data store and PRTG.

Discussing the results, Elsayed et al. found during the first test, with one VM running, the “Superiority of using the Citrix Xen server followed in decreasing order by the Hyper-V and then VMware respectively” (Abdellatief Elsayed, 2013, p. 48), stating that this was because Xen showed lower server CPU and memory usage during the tests. In the second test run, Hyper-V showed dominance, again due to lower CPU utilization and memory usage overall; surprisingly, however, although Xen showed the lowest CPU utilization, it had a 98% increase in memory usage. The third comparison experiment demonstrated that Hyper-V 2012 RC had the advantage, with “enhancement in memory handling and disk IOP’s” (Abdellatief Elsayed, 2013, p. 49), a result predicted throughout the paper.

Discussion

The research here contained well-conducted and adequate evaluation methods, backing each point up by running the experiment with two different monitoring tools. Elsayed et al. provide a convincing argument, with clear and reasonable data to support the results gained. Supporting evidence is seen in “A Component-Based Performance Comparison of Four Hypervisors” (Jinho Hwang, 2013), which provides a secondary test showing that Xen has reduced performance in CPU testing. This paper is very like “Hypervisors vs. Lightweight Virtualization: a Performance Comparison” (Roberto Morabito, 2015), in which open source hypervisor systems were experimented with in a very similar fashion. In conjunction, the two papers provide a clear understanding of performance ratings for a wide variety of VMM systems. Future work could align these two papers and review the performance of each VMM system against the others.

Secure Virtualization for Cloud Environment using Hypervisor-based Technology – Farzad Sabahi – 2012.

Sabahi conducted this investigation in 2012, focusing upon security for VMs and the hypervisor technology in use. Sabahi reviews the use of VMs in the context of a cloud environment or data center for an enterprise, creating an improved architecture to increase security and reduce potential attacks. The investigation reviews the increased need for isolation between VMs on shared physical machines, ensuring that VMs can access neither other VMs nor the hypervisor. Reviewing both the benefits and weaknesses of a hypervisor-based system enables Sabahi to define three major levels of security management that enterprise hypervisors should have:

  1. Authentication.
  2. Authorization.
  3. Networking.

The proposed architecture aims to strengthen these three areas to improve both VM isolation and hypervisor security, with the introduction of security and reliability units into the hypervisor layer, thus reducing vulnerabilities and creating a more comprehensively secured system.

Discussion

Overall, the project contains a respectable research idea; future work would be to create a prototype and test the increased security of the hypervisor. Although this paper has little in common with the majority of the literature review, it highlights the increasing need for isolation between VMs and the advances in technology that will be needed to action it. This is seen again in “VManage: Loosely Coupled Platform and Virtualization Management in Data Centers” (Sanjay Kumar, 2009), where a portion of the study focuses upon the need for isolation within the VMs and VMMs. The importance of isolation is also seen throughout “Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors” (Stephen Soltesz, 2007), with supporting experiments reviewing the need for isolation and efficiency.

Hypervisor Shootout: Maximizing Workload Density in the Virtualization Platform – The Taneja Group – 2010

The investigation here, conducted by the Taneja Group in 2010, examines VM density, a measurement of the number of VMs that can run concurrently on a hypervisor whilst performing a number of different workloads. The Taneja Group focus on four well-known hypervisors:

  1. ESXi 4.1
  2. Hyper-v R2
  3. XenServer 5.6
  4. Red Hat Enterprise Linux 5.5 Kernel-based Virtual Machine (KVM) monitor

For their testing method, the group used the DVD Store Version 2 test application, developed by Dell, which enabled them to impose stress-test conditions and so provide a standard and clean testing platform. Additionally, they chose to test all hypervisors on a single hardware set-up that remained consistent throughout the tests. To further support their tests, they split the testing into two categories: light (a two-thread process with a 60-minute settle period) and heavy (made more aggressive, with the settle period reduced to 0 minutes).

The group then discuss their results. For the light testing, they found overall that Hyper-V had the weakest memory overcommitment features, dropping off at around 11 VMs with 22 GB of VM memory. KVM came in second, with 14 parallel VMs before reaching its memory and CPU limits. Finally, both XenServer and ESXi performed the best, with a result of a maximum of 32 VMs. The Taneja Group then describe the results of the heavy testing, which showed a significant reduction in VM numbers. Again, Hyper-V struggled, reaching its limits at the same level, while XenServer and ESXi both managed 22 VMs, although with a performance difference between them. (The Taneja Group, 2012)

Figure 6 – Performance under light workloads vs the number of concurrent VMs (Hypervisor Shootout: Maximizing Workload Density in the Virtualization Platform)

Figure 7 – Performance under heavy workloads vs the number of concurrent VMs (Hypervisor Shootout: Maximizing Workload Density in the Virtualization Platform)

Discussion

In general, the investigation is a very in-depth and enlightening view into the hypervisors and the density at which they can perform close to their limits. Although the Taneja Group have not aimed this at data centers, it can be used to review the virtualization techniques available; the data can show an IT team what each hypervisor can handle and offer, helping them decide between hypervisor programs. As an improvement, the Taneja Group could have looked into more capabilities of the hypervisors, such as processing power, in order to provide an all-round, detailed and technical review, such as the investigations seen in “Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors” (Stephen Soltesz, 2007) and noted in “A Component-Based Performance Comparison of Four Hypervisors” (Jinho Hwang, 2013), where a very similar review has been conducted focusing on component-based performance, not solely on workload. Overall, this is a convincing argument with a good level of research behind logical and reasonable results.

Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors – Stephen Soltesz, Herbert Potzl, Mark E. Fiuczynski, Andy Bavier, Larry Peterson – 2007.

This investigation, conducted by a number of researchers from well-known universities, presents an alternative to the classic hypervisor. The paper discusses and reviews a number of container-based systems, with the methodology focusing on the design and implementation of Linux-VServer, a container-based system chosen because it is an open source program and because all contributors to the paper had experience of using it. The paper then continues on to investigate and contrast the architecture of VServer with the then-current model of the open source hypervisor-based system Xen.

The investigation is motivated by the increasing use of virtualization by businesses, developers, hosting companies and data centers as a more cost-effective method and the most efficient use of servers. The case presented is that a container-based operating system (COS) can trade isolation for efficiency. Efficiency is measured as the overall performance and scalability of the VMs, and isolation is measured as fault isolation, resource isolation and security isolation; both COS and hypervisor technologies exhibit each of these to some degree. Reviewing this first comparison, the paper states, “There is no VM technology that achieves the ideal of maximizing both efficiency and isolation” (Stephen Soltesz, 2007), affirming that which system should be used depends on the situation; for example, a COS system should be used when efficiency is needed more than isolation.

From this, Soltesz et al. test Linux-VServer, a COS system, with a methodology covering resource isolation, security isolation and file system unification. The results show that for all resources VServer can impose limits on each VM’s consumption, actioned by the use of token bucket filters for both CPU and I/O scheduling. Additionally, it uses kernel modifications to enforce secure isolation, and shared file systems for files common to each VM, reducing disk space and providing a copy-on-write system.
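The token bucket mechanism that Soltesz et al. attribute to VServer's CPU and I/O scheduling can be illustrated with a minimal sketch. This is a generic token bucket, not VServer's kernel implementation; the `rate` and `capacity` parameters are hypothetical:

```python
class TokenBucket:
    """Illustrative token bucket: a VM may consume a resource only
    while it holds tokens; tokens refill at a fixed rate up to a cap."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per time unit
        self.capacity = capacity  # burst ceiling
        self.tokens = capacity    # start full

    def tick(self, elapsed=1.0):
        # Refill on a scheduler tick, never exceeding the cap.
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed)

    def consume(self, amount):
        # A request (CPU slice, I/O operation) proceeds only if
        # enough tokens remain; otherwise it is throttled.
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False
```

A VM that bursts beyond its allowance drains the bucket and is throttled until the refill rate catches up, which is how a per-VM consumption limit falls out of such a scheme.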

The experiment exploring these statements was performed on a standardized machine, with the first part testing the performance of the hypervisor or COS and the second testing isolation. The results show that Xen has better support for multiple kernels, the network stack and VM migration, while in comparison VServer produces a smaller kernel footprint and increased performance for I/O resources, an area where Xen’s performance is lacking.

Discussion

Soltesz et al. conduct a thorough and intensive piece of research into the alternative to hypervisors, producing a series of clear results for a Linux-based VM system; producing a fuller performance analysis within this research could further support the results gained. The paper contains a large amount of support for each of its claims, providing a convincing and sufficient argument, and a widespread and thorough review of the isolation and efficiency of both COS systems and hypervisors. The use of COS systems is reviewed again in “Hypervisors vs. Lightweight Virtualization: a Performance Comparison” (Roberto Morabito, 2015), providing additional theory and evidence to support the claims made, although that paper focuses more on performance than Soltesz et al. do.

Virtual Machine Monitors: Current Technology and Future Trends – Mendel Rosenblum and Tal Garfinkel – 2005.

In 2005, Rosenblum from VMware and Garfinkel from Stanford University published a paper called “Virtual Machine Monitors: Current Technology and Future Trends”. Mainframe computing has seen many downfalls within business environments; Rosenblum et al.’s paper reviews this, identifying novel solutions with the use of virtualization. The investigation first emphasizes virtualization implementation issues, providing examples of challenges and the techniques discovered to overcome them. Secondly, the paper leads on to an “examination of current products and recent research providing interesting insights into the future of VMMs” (Mendel Rosenblum, 2005, p. 45).

The review goes on to state the challenges faced in virtualizing the CPU, I/O and memory, and how they can be overcome using certain tools or methods, such as VMware, a VMM tool. Rosenblum et al. also provide details of an investigation they have undertaken, giving a view into future technologies and VMM demands.

This investigation, like the previous paper “Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors” (Stephen Soltesz, 2007), focuses upon the security and isolation of VMs and how VMM software can provide them. This paper does not go into as much detail as the COS-based paper, but provides a clear indication of where security and isolation can be lacking and the techniques that can be used to overcome this, without technical details. The two papers are in healthy and understandable agreement with each other, although at different levels of technical depth. Rosenblum et al. conclude by evaluating, as a whole, the challenges VMMs face and the techniques required to overcome them.

Discussion

Although a thorough investigation has been undertaken, reviewing a number of different challenges and how they can be solved or worked around, the researchers do not discuss how the information came about or what experiments were undertaken to produce the results and remarks made, as seen in the previous papers. A recommendation here would be to investigate these further, providing a more detailed technical overview. The review of VMMs and their uses can provide a clear and useful base for further research; by combining multiple papers, a more advanced review of VMMs with clear testing across several areas could be made. However, the age of this paper means the VMMs tested are no longer the current models. The paper holds a large amount of interest, but although all its theories may be correct, it does not contain a large amount of support for its claims, an area in which improvement could be made, such as re-running the experiments on the latest software versions.

3.2 Performance and Management with a Data Centre Environment

DCSim: A Data Centre Simulation Tool for Evaluating Dynamic Virtualized Resource Management – Michael Tighe, Gaston Keller, Michael Bauer and Hanan Lutfiyya – 2012.

The investigation here reviews the needs and unique challenges seen in managing a data center environment. Tighe and the team initially discuss the problem they face: they identify that standard algorithms used to manage virtualization are unable to cope with the scale and complexity seen in the infrastructure of a data center. The team aim to produce a simulation tool to be able to test and experiment on a data center: the data center simulator (DCSim). Discussing the architecture, they identify that the simulated data center has all the features of an enterprise-level data center, albeit on a smaller scale.

Tighe et al. continue by experimenting with their proposed DCSim system, reviewing VM management policies, scalability and machine capabilities. The scalability results show that DCSim reacted well and that an increase in VMs did not disproportionately affect the simulator. For VM allocation and re-allocation, using three algorithms, static peak, static average and dynamic, the results show that the dynamic algorithm produces the best results, with an acceptable level of SLA violations.
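The gap between static and dynamic allocation policies can be illustrated with a toy first-fit packing sketch. This is not DCSim's actual algorithm, and the demand figures and `host_capacity` are hypothetical: a static-peak policy must reserve each VM's worst-case demand, while a dynamic policy can pack by live demand, at the risk of SLA violations when demand spikes.

```python
def hosts_needed(demands, host_capacity):
    """First-fit packing: count the hosts needed to place the given
    per-VM resource demands without exceeding any host's capacity."""
    hosts = []  # running load per active host
    for d in demands:
        for i, used in enumerate(hosts):
            if used + d <= host_capacity:
                hosts[i] = used + d
                break
        else:
            hosts.append(d)  # open a new host
    return len(hosts)

# Hypothetical per-VM CPU demand (percent of one host).
peaks    = [60, 55, 50, 45]   # static-peak reserves the worst case
averages = [30, 25, 20, 15]   # dynamic packs by current demand

static_hosts  = hosts_needed(peaks, host_capacity=100)
dynamic_hosts = hosts_needed(averages, host_capacity=100)
```

With these figures the static-peak policy needs three hosts while the dynamic policy consolidates onto one, which is why the dynamic algorithm wins in the simulation, provided the SLA violation rate under demand spikes stays acceptable.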

Discussion

Overall, Tighe et al. produce a clear and understandable framework and a great idea for testing automation and migrations without altering a live environment, and the group leave the paper with a wide area of future work and development plans. However, little performance testing was conducted on how the VMs cope within this environment and with the algorithms; this could have been an area of improvement added to the paper, as seen in other papers.

Recommendations for Virtualization Technologies in High Performance Computing – Nathan Regola and Jean-Christophe Ducom – 2010.

The investigation here conducted by Regola et al. focuses upon the evaluation of open source hypervisors for high performance computing (HPC) within a Data center environment. The authors look to find and review a number of hypervisors, to support the need to consolidate HPC servers. They initially discuss how this is not normally feasible, due to the amount of workload and power these physical machines require. Although the need to virtualize these types of machines is increasing, due to the benefits virtualization can bring.

In their methodology, Regola et al. initially review KVM and OpenVZ in an HPC virtualized environment. After discussing how KVM and OpenVZ have developed to fit an HPC environment, they continue to evaluate virtualization in HPC, with specific attention to disk, network and I/O, before reviewing network bandwidth and latency. Describing the results, they state that KVM displays high read and random read performance but a lower performance factor compared to OpenVZ. Discussing the overall results, they conclude that “OpenVZ was the best choice for HPC due to the lower overhead” (Nathan Regola, 2010). The paper ends with the finding that OS virtualization such as OpenVZ is the only current solution that supports the CPU and I/O performance needed in HPC.

Discussion

Overall, the paper clearly demonstrates and thoroughly describes the results of the investigation, producing clear positives and negatives for each hypervisor. Regola et al. have provided a number of related work and future work items to expand on. However, as the project reviews only open source programs, it could have acknowledged some closed source solutions and presented findings on how they behave in an HPC environment, to provide a clear comparison of all solutions. Further to this, “Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors” (Stephen Soltesz, 2007) provides clear background research into why OpenVZ was found to be the best solution; that support helps prove the theorem Regola et al. were trying to prove.

VManage: Loosely Coupled Platform and Virtualization Management in Data Centers – Hewlett Packard Laboratories – Sanjay Kumar, Vanish Talwar, Vibhore Kumar, Parthasarathy Ranganathan, and Karsten Schwan – 2009.

In mid 2009, a group of researchers produced a paper called “VManage: Loosely Coupled Platform and Virtualization Management in Data Centers”. The project aims to provide a solution for both platform management and virtualization management, giving a beneficial overview of how to improve the efficiency of data centers. The researchers propose a method, VManage, in which platform and virtual management are performed in the same system, providing coordination within data centers. It produces a set of results with both physical and virtual improvements, improving overall efficiency, performance, stability, isolation and runtime.

Kumar et al. wanted to create a solution to increase the efficiency of IT infrastructure and decrease the rising costs of data center management. They detail the problems that occur with both physical and virtual management, including power and thermal loss and VM resources, isolation, security and stability. The research team discuss their aim of creating a “loosely coupled platform and virtualization management and facilitate coordination between these in data centers” (Sanjay Kumar, 2009). The system, with the use of “registry and proxy servers”, is designed to discover individual sensors and actuators, used to register and report performance; alongside this, the VManage system uses a stabilizer to increase stability and decrease redundancy in the coordination interface.

Like similar papers above, the experiment here is undertaken in a Xen environment, in this instance using 28 VMs on a number of Dell PowerEdge 1950s, running a mix of enterprise systems and workloads to emulate a working enterprise and its use of the data center. The researchers provide details of the testing criteria; here they are looking at SLA violations, average power and stability (number of VM migrations). Once they have tested the prototype, they detail the results found, stating that compared to traditional methods of management, “VManage can achieve additional power savings (10% lower power) with significantly improved service-level guarantees (71% less violations) and stability (54% fewer VM migrations), at low overhead”, as seen in the figure below.

Figure 7 – VManage results for SLA violations, Average Power and Stability of VM migrations, in comparison to a single management system

Discussion

The investigation and prototype developed in this paper provide a good synopsis and a practical approach for an IT department to manage its data centers, with clear and understandable results. Kumar et al. propose further development to allow plugins to add more controllers for management. However, unlike the above papers, the team do not clearly address the management of the VMs themselves, and the importance of their security and isolation depending on the programs they run. This management system could be developed to examine the migrations undertaken and their effect on both the VM and the hardware, like the experiment run in “Autonomic Virtual Machine Placement in the Data Center” (Chris Hyser, 2008), in order to ensure the efficiency of the migrations for both virtual and physical attributes. Kumar et al. have left a clear benchmark for the future of this prototype; however, to provide a rounded overview, a comparison to existing hardware-software layer systems, such as VMware vCenter, could have been provided.

Dynamic placement of Virtual machines for Managing SLA Violations – IBM – Norman Bobroff, Andrzej Kochut and Kirk Beaty – 2007.

Written in 2007 by a number of researchers at IBM, this paper investigates how to improve server utilization for data center management, aiming to introduce a management algorithm for “dynamic resource allocation in virtualized server environments” (Norman Bobroff, 2007). The IBM team hope to minimize the costs of a data center, in terms of reducing overcapacity, improving performance and decreasing SLA violations. The algorithm forecasts demand by reviewing the historical resource usage, such as CPU and memory, of each existing VM, creating a model to predict the future demands of multiple VMs consolidated in a standard fashion on a single physical machine.

The experiment was conducted using three IBM Blade servers, each running VMware, with each VM given a heterogeneous workload to vary the CPU utilization. The management objective is to minimize the time that physical machines are active hosting VMs, subject to resource constraints. The results state that the management algorithm was able to exceed its targets in relation to meeting SLA objectives, reducing the rate of SLA violations and achieving an average 44% reduction in the number of physical machines.
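The idea of forecasting per-VM demand from history and then consolidating onto as few active hosts as possible can be sketched as follows. This uses a naive moving-average forecast with a safety margin and first-fit-decreasing packing, not IBM's actual forecasting model; all figures, the `margin`, and the window size are hypothetical:

```python
def forecast(history, margin=1.2):
    """Naive demand forecast: the mean of the recent samples scaled
    by a safety margin (the IBM paper uses richer time-series models)."""
    recent = history[-4:]
    return (sum(recent) / len(recent)) * margin

def pack_ffd(vm_forecasts, capacity):
    """First-fit decreasing: place the largest forecast demands first,
    minimising the number of active physical machines. Returns the
    per-host load list; its length is the host count."""
    hosts = []
    for d in sorted(vm_forecasts, reverse=True):
        placed = False
        for i, used in enumerate(hosts):
            if used + d <= capacity:
                hosts[i] += d
                placed = True
                break
        if not placed:
            hosts.append(d)
    return hosts

# Hypothetical CPU-usage histories for four VMs (percent of a host).
histories = [[20, 22, 25, 25], [10, 12, 11, 13], [40, 38, 42, 44], [5, 6, 5, 6]]
demands = [forecast(h) for h in histories]
active = pack_ffd(demands, capacity=100)
```

Here four one-VM-per-host machines collapse onto a single active host, the same kind of consolidation that underlies the paper's reported 44% reduction in physical machines, with the safety margin standing in for SLA headroom.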

Discussion

Whilst the experiment produced clear and understandable results, there is only minimal discussion of resources and the effect they have on both the automatic management and the VMs, with that discussion focusing upon the CPU; expanding this aspect would increase the support for the claim and give a fuller overview of how the algorithm reacts. The paper is very similar to “Application Performance Management in Virtualized Server Environments” (Gunjan Khanna, 2006), both using very similar test environments and both with several supporting items, such as positive improvements to the VM environments and how they react to the algorithm to work within the pre-set SLAs of each test environment.

Application Performance Management in Virtualized Server Environments – Gunjan Khanna, Kirk Beaty, Gautam Kar, Andrzej Kochut – 2006.

Khanna et al. provide an investigation into the use of server virtualization as a concept to solve “Low server utilization and high system management costs” (Gunjan Khanna, 2006, p. 1) within an enterprise data center. First, the investigation reviews current server consolidation algorithms and describes the mapping processes for the initial set-up; this allows the team to demonstrate the problem they are attempting to solve, leading them to describe the formula that will be used to solve the utilization problems, using machine cost and capacity as parameters.

Khanna et al. then run an in-depth experiment, describing a test bed that replicates an enterprise-scale data center, using IBM BladeCenter hardware and VMware ESX Server. To create a fair test environment, they set up a number of different-sized machines with different operating systems, monitoring the machines and their resources during the experiment, along with the resources required to migrate each machine. Describing the results, in terms of the number of VMs against the migration cost of the created algorithm versus the initial placements, they notice that the algorithm performs well at first, but as the number of VMs increases the cost increases, because the closely packed allocation of machines introduces a higher number of migrations. Concluding the paper, Khanna et al. state that the algorithm lets them monitor high use of resources, such as CPU or memory, against SLA violations, so machines can be migrated between physical resources to minimize costs and maximize utilization.

Discussion

Overall, Khanna et al. provide a detailed view of the algorithm created and both its successes and downfalls, strongly describing the proposed algorithm with good research behind each point. The team have noticed some areas which are weak and have noted these as future work. In relation, “VManage: Loosely Coupled Platform and Virtualization Management in Data Centers” (Sanjay Kumar, 2009) also uses pre-determined SLA violations within its algorithm to state when a migration should occur, an issue that needs to be defined in all data center migration tasks. The paper could, however, like previous papers, review how the VMs respond to the constant migrations taking place and how this affects both the resources and the workloads of the machines. Similarly, the authors note that future work will cover application workloads, their resource patterns and how these relate to the produced algorithm.

3.3 Autonomic Management of Virtual Machines

Effective Resource and Workload management in Data centers – Lei Lu and Evgenia Smirni – 2014.

This research, conducted by Lu and Smirni, focuses upon several areas of data center management, specifically “How virtualization technologies can be utilized to develop new tools for maintaining high resource utilization, for achieving higher application performance and for reducing the cost of data center management” (Lei Lu, 2014). The results gained from the paper are four prototypes designed to increase automation, each addressing one of the problems previously identified. The four methodologies are defined below:

  1. Lu and Smirni first develop an autonomic admission control policy called AWAIT, proposing an active-request and blocking-queue strategy for requests when a workload surge across multiple VMs overwhelms the hardware. After testing, Lu and Smirni discuss the results, stating a positive outcome between conflicting requests.
  2. Secondly, they focus upon a technique to model the dependency relationships of VM resource demands, estimating that this will decrease the inaccuracy currently seen in standard tools for measuring resource utilization in VMs. Producing a directed factor graph (DFG), the researchers state that, in comparison to a diverse benchmark suite, the results show “improved accuracy of resource utilization estimates using DFG” (Lei Lu, 2014).
  3. Continuing, they produce a prototype management tool, AppRM, an autonomic system designed to alter resource settings per VM. It is designed to overcome the VM sprawl issues seen within large-scale data centers and the difficulty of knowing which application needs which resources. The application works to meet pre-set Service Level Objectives (SLOs) and adjusts to dynamic workloads. Tested in a VMware environment with both over- and under-resourced VMs, it produced a clear adjustment in all scenarios.
  4. Lastly, they produce a prototype providing an automated placement program to overcome server under-utilization. Lu and Smirni create a load-balancing algorithm to assign VMs across a data center, applying minimum and maximum loads to servers, before using a network analytical model to predict the performance of the VMs under the specified placement. This results, on average, in a reduction in both the worst-case and random-configuration response times.
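The load-balanced placement in the fourth prototype can be approximated with a greedy sketch. This is assumed behaviour for illustration, not Lu and Smirni's published algorithm; the load values and the `max_load` cap are hypothetical:

```python
def balance_place(vm_loads, n_servers, max_load):
    """Greedy load balancing: each VM is assigned to the currently
    least-loaded server, subject to a maximum-load cap per server.
    Returns (placement list, final per-server loads)."""
    servers = [0.0] * n_servers
    placement = []
    for vm, load in enumerate(vm_loads):
        # Candidate servers that would stay under the cap.
        candidates = [i for i, s in enumerate(servers) if s + load <= max_load]
        if not candidates:
            raise RuntimeError("no feasible server for VM %d" % vm)
        target = min(candidates, key=lambda i: servers[i])
        servers[target] += load
        placement.append(target)
    return placement, servers
```

Greedily choosing the least-loaded feasible server keeps every server below the maximum while evening out the load, after which a performance model (as in the paper) could be consulted to validate the predicted response times of the placement.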

Discussion

Overall, the paper gives a wide overview of the systems stated to increase automated management of resources and server utilization, and the authors have created solutions for a wide range of related problems; however, the paper has little backing for the facts stated. Lu and Smirni lack physical evidence for their claims and could have provided more detailed results for the items stated, setting a higher benchmark for future work. Lu and Smirni could also focus on meeting SLAs, as seen in other papers within this review, rather than SLOs, an SLA proving a full agreement and understanding from all parties compared to a performance objective alone. However, the research conducted for the fourth prototype does support the claims seen in “Autonomic Virtual Machine Placement in the Data Center” (Chris Hyser, 2008), where both papers focus on the load balancing of resources and autonomic migration, resulting in balanced resource nodes and a better environment for the VMs to perform in.

Autonomic Virtual machine placement in the Data center – Hewlett Packard Laboratories – Chris Hyser, Bret McKee, Rob Gardner, and Brian J. Watson – 2008.

The project here, written by a number of researchers at HP in early 2008, presents a paper reviewing and investigating virtual machine placement and its automatic management according to a set number of policies. The paper aimed to produce a high-level overview for data center operators and owners in order to “improve quality of a service for data center owners or to increase profitability of providing service for data center owners” (Chris Hyser, 2008, p. 2). The investigation aimed to solve several problems related to virtual machine mapping, focusing upon resource management, space management, and business and security constraints. The HP researchers claim that with the use of an autonomic controller they are able to ensure the VMs react in accordance with the pre-set policies of the data center and transfer VMs between physical hosts, improving efficiency and stability and minimizing the need for human control.

The methodology used by Hyser et al. was to create a prototype controller to overcome VM mapping and rearrangement problems. They then discuss the factors and policies taken into account when generating a new mapping. The authors have produced an architecture, seen in the figure below, in which a system communicates with the VMM software to gain the data needed to execute live migrations. The experiment uses the HP virtual machine manager, alongside a data center built using four HP servers, each with 8 GB memory and two 3.60 GHz Intel processors; three of the machines are used for the migration experiment and the fourth hosts the prototype software.

Figure 9-Prototype System Architecture (Autonomic Virtual Machine Placement in the Data Centre, HP, 2008)

Once the experiment has been conducted, the researchers explain their recommendations and the results of using the prototype. The results, presented in graphs, show the CPU, LAN and SAN loads represented on three different nodes, which start the experiment unbalanced. The prototype works to minimize and even out the resources; when the imbalance exceeds the pre-set policies, a live migration is conducted, producing a clear and obvious balance of the resources, with a clear indication of a VM on each physical host.

Figure 10-Balance Policy Load Resource Summaries (Autonomic Virtual Machine Placement in the Data Centre, HP, 2008)

Discussion

Hyser et al. present the results clearly and have outlined a clear area for additional work, stating within the paper the directions in which they propose to further research and develop. They have produced a clear proposal for the prototype, with well-displayed and formulated results. Whilst the prototype testing is a good start, Hyser et al. could look into running and testing the software across a number of different scopes, such as with different types of VMM software, or with an increased number of servers and VMs, similar to the tests conducted in “Performance Evaluation and Comparison of the Top Market Virtualization Hypervisors” (Abdellatief Elsayed, 2013). Overall, the paper conducts an intensive benchmark of research into automation software, leaving clear scope for future work.

Server Virtualization in autonomic management of heterogeneous workloads – Malgorzata Steiner, Ian Whaley, David Carrera, Ilona Gaweda and David Chess – 2007.

This report develops an investigation and experiment on the automation of server virtualization for large, irregular workloads. The paper describes the challenges in automatically managing heterogeneous workloads in an autonomic data center environment, and Steiner et al. propose a solution whose effectiveness is demonstrated in both an experiment and a simulation. Steiner et al. face a number of challenges in successfully, efficiently and proactively managing a heterogeneous workload, and discuss the three main ones they aim to overcome: first, the difference in performance that can come from an assorted workload; second, time-scale management, the differences each workload can bring, and the difficulty of knowing the resource allocation and performance needed for each; and third, the consolidation of applications onto each physical hardware item, ensuring efficient allocation and usage of resources. Steiner et al. then present solutions aimed at solving these three challenges, to improve the management of heterogeneous workloads within server virtualization.

A trend within the literature discussed is the use of Xen as the base for virtualization experiments, and Steiner et al. follow it here, using both an experimental test and a simulation built on Xen and WebSphere Extended Deployment. They state that the experiment introduces several features: it consolidates workloads onto one physical machine, allows high-level performance in resource allocation, and creates a more effective manner of scheduling.

Discussion

Steiner et al. leave the literature with an effective plan for continued study. However, further experiments should be run to build on the results gained, perhaps with the use of live migration rather than the move-and-restore tactics imposed by the lack of resources. Further to this, they have discussed in detail how this would affect the management of VMs within the Xen environment, and the ease of having an image created for spawning more VMs when needed. For the experiment they relied on the standard resource set-up from Xen; a recommendation here would be to go further and create unique VMs and uses to expand the evidence they can gather from their prototype.

3.4 Conclusion

Virtualization has been a growing portion of the computing world in recent years, with a wide body of written literature covering the variety of aspects it spans. The body of literature has focused on a number of areas within virtualization: management, performance management and autonomic management, alongside in-depth reviews of currently available systems. A wide range of methodologies and tools is used throughout for investigating management styles, technologies, performance and differences from paper to paper. Many papers reviewed use a comparison-style method, comparing VMM software in different scenarios, or against different systems such as container-based systems, presenting an understanding of which software is better in a particular scenario.

With the wide variation of literature reviewed, some earlier published work may cover systems that are now outdated or have a more current release with different capabilities, leaving an area to review and expand upon. The comparison between papers reviews each methodology or theory in turn, followed by a discussion of the literature identifying any relations, support or weaknesses, in order to fully summarize and critique it.

Chapter 4 – Centralized Virtual Machine Managers Review and Comparison.

Throughout this chapter, the project will focus on identifying the current enterprise-level centralized VMM software used to support and manage data center environments, reviewing what each is technically capable of performing and comparing them against one another, in order to gain an overall view and understanding of what is currently available. The chapter will focus on the most commonly used enterprise centralized management VMM products: Microsoft's System Center Virtual Machine Manager, VMware vCenter and Citrix XenCenter.

4.1 System Center Virtual Machine Management (SCVMM)

System Center Virtual Machine Manager is the virtual machine management system from the Microsoft System Center suite. Microsoft System Center consists of several system management products, providing a number of management, monitoring, automation and reporting tools used to support and assist corporate and enterprise-level systems. SCVMM can work independently or alongside other System Center products, such as Operations Manager or Configuration Manager (Rouse, 2012). SCVMM is used to centrally support a virtual environment consisting of several hosts and machines, presenting an overall management view that gives data center administrators ease of planning, deploying, managing, reporting and optimizing VMs.

Figure 10 – System Centre Virtual Machine Manager logo

4.1.1 Hyper-V

Hyper-V is a hypervisor-based product from Microsoft with a type-one design, residing directly on the hardware of the server once the install is enabled. Launched in 2008, Hyper-V has continued to grow and develop with each Windows Server release, currently spanning six versions from Windows Server 2008 to Windows Server 2016. From Windows Server 2012 onwards, Hyper-V is a built-in, integrated service: a Windows feature enabled at the choice of an administrator. For non-server use, Hyper-V is additionally available on Windows 8 through to Windows 10 machines; as above, it needs to be enabled before use.
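For illustration, enabling the role is a one-line operation from an elevated PowerShell prompt on either platform (these are the standard Microsoft cmdlets for this task; a restart is generally required):

```powershell
# On Windows Server 2012 onwards: add the Hyper-V role and its management tools
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# On a Windows 8/10 client edition: switch on the built-in optional feature
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```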

Most recently, Microsoft has released a Hyper-V Server product designed purely to support virtualization, reducing the server footprint to the bare minimum by removing all services unrelated to virtualization, along with the GUI. This system comes with a number of benefits: due to the minimal size of the server, maintenance is smaller and there is a reduction in the number of patches required (Zhelezko, 2014). Comparing Windows Server 2016 with Hyper-V enabled against the dedicated Hyper-V Server, there is no clear-cut choice of which to use within a data center; it simply comes down to the enterprise, the licensing system it holds, and how many guest OS installs it requires on its VMs (Posey, 2014).

Hyper-V is the preferred choice of hypervisor for a Windows Server configuration and for use within SCVMM, providing a virtualization solution to support data centers and enterprises with Windows Server tools as the basis. Hyper-V contains a management GUI console to manage and support a single server. The GUI offers a similar concept to SCVMM and can again be managed and supported by PowerShell scripting. Hyper-V Manager contains many of the same features as SCVMM, but without the multi-server concept: unlike SCVMM, Hyper-V Manager can only support the server it is installed on, whereas SCVMM allows multiple servers to be supported and managed from a single view, one benefit for data center administrators.
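As a brief sketch, the built-in Hyper-V PowerShell module lets an administrator perform the same single-server tasks as the GUI console. The commands below are standard Hyper-V cmdlets and assume they are run on the host itself:

```powershell
Import-Module Hyper-V

# Inventory of the VMs on this one host, mirroring the Hyper-V Manager view
Get-VM | Select-Object Name, State, CPUUsage, MemoryAssigned

# Start every VM on this host that is currently switched off
Get-VM | Where-Object { $_.State -eq 'Off' } | Start-VM
```

The key limitation discussed above is visible here: these cmdlets only see the local host, whereas the SCVMM cmdlets operate across every managed server at once.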

4.1.2 Versions

The most current stable version of System Center is System Center 2016, released in the last quarter of 2016 with a supported upgrade path from System Center 2012, which is currently the most used within enterprise environments. In terms of compatibility, System Center 2016 requires servers running Windows Server 2012 R2 or above, whereas the 2012 edition is compatible with servers built on Windows Server 2008 R2 and above. The upgrade from SCVMM 2012 to SCVMM 2016 offers several new functions, such as increased security using guarded host deployment, improvements in networking such as deploying software-defined networking using templates, and more operations within running VMs (CFreemanwa, 2016). SCVMM 2012 has had a number of updates through its life span between 2012 and 2016; SCVMM 2016 has yet to receive a complete update from Microsoft.

In addition, there are a number of Hyper-V versions. Hyper-V offers a free client that can be enabled on machines running Windows 8 to Windows 10; this version is pre-installed and becomes active after enabling. Hyper-V then has a number of enterprise versions, which again reside on the server until enabled; these versions vary from 1.0 to 5.0 depending on which OS the server is running.

4.1.3 Potential Enterprise Costs

SCVMM is a purchasable product, with the price varying on the number of machines, what is required and other such factors. There are a number of options dependent on the enterprise; for example, extra costs apply for licensing of infrastructure such as Windows Server 2012 Datacenter Edition or SQL Server. Table 1 shows the minimum costs needed for the System Center 2012 suite (Microsoft Volume Licensing, n.d.) for both the Standard and Datacenter editions. The Standard edition provides support, and a cheaper option, for SMEs that have only lightly virtualized servers or a hybrid-based system due to the enterprise's size. In comparison, the Datacenter edition is designed for a full data center set-up, with each license covering up to two physical processors, which can manage an unlimited number of OS-based VMs. Both editions contain access to the full System Center applications, including SCVMM. Complete quotes for SCVMM are only available through the Microsoft Volume Licensing system, where administrators identify what is needed and required from SCVMM before a confirmed price is given. Table 1 contains the average price, excluding any additional licenses.

Edition | VMs per license | Includes | Cost (USD)
System Center 2012 R2 Standard Edition | 2 | App Controller, Configuration Manager, Data Protection Manager, Endpoint Protection, Operations Manager, Orchestrator, Service Manager, Virtual Machine Manager | $1,323
System Center 2012 R2 Data Center Edition | Unlimited | App Controller, Configuration Manager, Data Protection Manager, Endpoint Protection, Operations Manager, Orchestrator, Service Manager, Virtual Machine Manager | $3,607

Table 1 – Potential Costs of SCVMM 2012  (Microsoft Volume Licensing, n.d.)

Pricing for 2016 follows a very similar pricing and licensing structure, again providing costings for both Standard and Datacenter editions. However, within 2016 the focus is on using Hyper-V, which has had a number of improvements and is now a built-in function within Windows Server (Microsoft Volume licencing, 2016).

Hyper-V is a free hypervisor tool; however, for all installs the user does need a licensed OS, whether this is Windows 8/10 or Windows Server 2012/2016, each with a cost. With a number of different levels of licensing available, Microsoft can offer products to a wide range of customers, granting enterprises flexibility and control over the scale that is needed, with these products available through the Microsoft Volume Licensing system (Microsoft, 2016).

Table 2 – Potential Costs of SCVMM 2016 (Microsoft Volume licencing, 2016)

Table 3 – Potential costs of Windows Server 2016 (Microsoft, 2016)

4.1.4 Features

SCVMM has a large number of features and functions essential for the smooth management of a data center. SCVMM developed these functions initially, with SCVMM 2016 improving and expanding some of the most popular and used items. Below we have looked into some of the most useful features currently offered, in both SCVMM 2012 and 2016.

SCVMM is designed to support central management of VMs within a data center, either on premise or from a third-party provider, and to manage large numbers of virtual servers, Microsoft stating a “Supported number [of] 400 virtualization hosts, and 8,000 virtual machines” (Microsoft TechNet, 2013). The main functionality of SCVMM, and one of the main causes of its popularity, is the ability to manage multiple hosts that use different hypervisors: not only can it support Microsoft Hyper-V based machines, but SCVMM can also support VMware and Citrix Xen based servers. This broad support has increased its popularity, due to the flexibility and allowances in provisioning it offers.

SCVMM provides a centralized area for these VMs and servers, where an administrator is able to work with a view of how a change will affect other VMs and hosts. Changes can be made on the SCVMM system whilst VMs are live, without causing interruption or requiring a shutdown or restart.

By displaying and managing a complete virtual infrastructure, SCVMM presents a clear and easy way for administrators to view and understand the state of the whole network. Within SCVMM, a GUI dashboard gives a clear understanding of the health of all VMs and hosts in the network, showing the overall state, properties, CPU, networking and disk performance of each VM (TechNet, n.d.). An example of the SCVMM health dashboard is shown in Figure 11, demonstrating the system and how it performs.

Figure 11 – Example of SCVMM (Joyner, 2013)

With SCVMM being part of the System Center application pack, it can be integrated with other System Center products such as Operations Manager, and is thus able to produce reports with more detail and information than shown in the GUI (TechNet, 2014).

Closely linked is the ability to evaluate host capabilities and storage options, in order to suggest and consolidate workloads and VMs, creating more available and usable space and reducing the need for additional resources and/or hardware. This intelligent placement analysis allows the administrator to deploy machines against pre-set algorithms, dynamically optimizing the data center and its resources.

SCVMM additionally enables live migrations within the data center, an essential capability for any enterprise with a data center, as this dynamic behavior can mean dramatically lowered downtime. The feature can automatically reallocate VM workloads depending on the resources required, and can be actioned for a single VM or a sequence of multiple VMs. Not only can VMs be live-migrated, but storage can be live-migrated too. Tying in tightly with the ability to rapidly deploy and migrate VMs, SCVMM can create VMs, by script or GUI, that can be deployed across the data center (Otey, 2011).

One feature which will be focused upon in this project is the integration with PowerShell. The SCVMM application contains a strong scripting environment, with a large number of cmdlets available. PowerShell enables administrators to run scripts, in this instance from the SCVMM console, for a wide variety of tasks, reducing the need to action these through the GUI offered by SCVMM. Scripts will normally be written by the administrator and highly customized to action what the enterprise needs. This method of automating actions can improve management and reduce the labor time needed for certain processes (Otey, 2011). SCVMM offers a well-designed GUI where users can select through options, alongside the PowerShell scripting environment.
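As an illustrative sketch of this style of automation, the SCVMM cmdlets can replace a repetitive GUI task with one pipeline. The server name "vmm01" is hypothetical, and the commands assume the SCVMM console and its virtualmachinemanager module are installed:

```powershell
# Connect to the SCVMM management server, then restart any stopped VMs
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmm01"

Get-SCVirtualMachine | Where-Object { $_.Status -eq 'PowerOff' } |
    ForEach-Object {
        Write-Output ("Starting {0} on host {1}" -f $_.Name, $_.HostName)
        Start-SCVirtualMachine -VM $_
    }
```

Because the cmdlets run against the management server rather than a single host, the same script covers every Hyper-V host SCVMM manages.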

4.1.5 Data Center Management using SCVMM

SCVMM is an ideal application to manage a data center: with its large number of servers and VMs, SCVMM can provide administrators an easy overall view of the complete network of both physical and virtual devices. SCVMM is installed on a management server within the data center; administrators can then install a console version on standard computers in order to remotely manage and control the data center as desired.

Both the administrator console and the server contain a GUI where all management tasks can be performed and actioned; this overview allows a view of health and of all current hosts and VMs. Alternatively, PowerShell scripting can be used to automate and manage the hosts and VMs, supporting administrators in creating PowerShell script options that can be run instantly, without having to filter through the options displayed in the GUI. Given the scale to which data centers can grow, this automation is desirable, reducing the time and manpower that could be spent manually moving through the GUI for management.

4.2 VMware vCenter Server

vCenter Server, similar to SCVMM, is a centralized management suite for VMware's virtualization products. VMware, a subsidiary of Dell, offers virtualization and cloud-based services; one of its most known and supported products is the vSphere hypervisor. vCenter offers administrators a central resource to manage, ensure security and availability, and simplify and reduce the complexity of managing virtual infrastructure. Working in connection with VMware virtualization products, such as vCenter Orchestrator, a streamlining and automation product, and vSphere with Operations Management, a reporting tool, vCenter Server can successfully manage multiple hosts, optimizing the data center for administrators.

Figure 12 – VMware Logo

4.2.1 vSphere ESXi

vSphere is VMware's hypervisor suite and the recommended hypervisor for vCenter. Debuted in 2009, vSphere offers a type-one, bare-metal hypervisor, using VMware ESXi as its base; ESXi is a type-one hypervisor built on the VMkernel operating system interface. vSphere is then used to manage the ESXi hypervisor, which can in turn be brought into the vCenter VMM suite.

Originating as ESX, the VMkernel was originally managed by a service console; this Linux-based system is designed to provide a management interface for the host machine, onto which the vSphere interface is installed, displayed in Figure 13. ESX can still be used for custom implementations, though this is mainly for IT administrators, with the large majority using the ESXi framework.

Figure 13 – VMware ESX Architecture (Raffic, 2013)

ESXi is an integrated virtualization platform without the service console, instead running vSphere or other such software directly on the VMkernel. The ESXi architecture provides an ultra-thin, minimalistic framework, and is thus secure and highly reliable, with a reduced number of patches needed. This newer ESXi is additionally much quicker to install, again due to the smaller architecture (Surksum, 2011).

Figure 14 – VMware ESXi Architecture (Raffic, 2013)

Currently the vSphere application uses the ESXi hypervisor to manage hosts and VMs, creating a highly available, resilient infrastructure ideal for virtualization. This can then be additionally supported by vCenter, offering a richer environment for support, management and monitoring than the vSphere suite alone (Cloud Administor, 2015).

For data center use, an enterprise will require the Enterprise license of vSphere, which offers the largest number of CPUs per processor. vSphere itself contains a number of additional support items alongside the hypervisor, aiming to support a virtualized data center, and can manage large items of infrastructure using the wide variety of components in the vSphere package. The paid licensed versions of the vSphere hypervisor offer support for automated scripting, using a PowerShell-based command line called PowerCLI, alongside full support from vCenter Server management.

vSphere offers a number of tools, such as a client and a web-based management GUI; to keep consistency across products, these management GUI clients are also used by vCenter, which connects to the client or the web-based application to provide management and a clear viewing point.

4.2.2 Versions

vCenter Server has had a number of major updates since its release as a configuration manager in late 2010/11, each major version release being in line with an upgrade of vSphere. vCenter is currently at version 6.0, containing a number of improvements to its security and databases (VMware, 2015). vSphere has a similar edition history, with the current version resting at vSphere 6.5, each upgrade bringing a number of significant changes and additions to the hypervisor product. vSphere additionally releases Standard, Enterprise and Enterprise Plus versions, each with more capabilities and features.

4.2.3 Potential Enterprise costs

VMware has a wide number of products and different combinations of items to purchase. For example, an enterprise can simply buy one product, like the vSphere suite, or it can buy a full data center enterprise kit including all items that would be needed. Like many hypervisor products, VMware's are paid products for enterprise-level virtualization, with many editions for scalability.

In terms of an enterprise-level data center, VMware offers several packages, from upgrades to first set-ups. One of the largest packs currently on offer is VMware vSphere with Operations Management; in its Enterprise edition this contains all that is needed to set up a data center for the first time. The pack contains six versions of vSphere and a vCenter management server, along with a number of management, optimization, protection and monitoring programs. Shown in Table 4, this can then be the base to build and expand upon to reach the desired level of data center for an enterprise (VMware white paper, 2015). The convenience of these bundles is strongly promoted by VMware, allowing the ability to upgrade or add more products and services from the VMware product stores.

If an enterprise does not need, or lacks the support or funds for, an enterprise package, it can instead purchase products individually, to create a data center more suited to a smaller business. vCenter, as a centralized VMM, can be purchased as a single server to support and ease the management of these smaller data centers, as shown in Table 4, with different editions to suit enterprise needs and size.

Product | Edition | vSphere VMs per license | Includes | Price (GBP)
vCenter Server | Foundation | Up to 3 | vSphere clients, vCenter management service, Database server, Inventory service | £2,135.17
vCenter Server | Standard | No limit | vSphere clients, vCenter management service, Database server, Inventory service, vCenter Orchestrator, vCenter backup | £7,572.85
VMware vSphere with Operations Management | Enterprise Plus Acceleration Kit | n/a | vSphere – all products, vCenter Server – all products | £21,750.00

Table 4 – Example costs for vCenter Server (Vmware white paper, 2015)

vSphere itself has a number of editions. vSphere can be offered as a free tool, though this free version is just a hypervisor with minimal management and support. vSphere then has Standard and Enterprise versions, each containing a greater ability to support and manage more VMs, with the ease of the management tools offered and the ability to use and run automated scripts, something which is restricted in the free version (VMware, n.d.). The free version does not offer features such as vMotion, or the ability to combine with the vCenter VMM tool, due to the lack of API functionality. The vSphere Enterprise and Standard editions have support for vCenter and other monitoring tools. The Standard version of vSphere costs £865.00 GBP (VMWare, n.d.), supporting the basic vSphere hypervisor with basic management and monitoring tools, while Enterprise Plus costs £3,045.00 GBP (VMware, n.d.), containing all up-to-date management tools and providing a fully supported enterprise hypervisor.

4.2.4 Features

vCenter provides a single point of control; this centralized resource contains a number of features to unify resources, essential for easy management of VMs and hosts throughout a whole data center. Some of these features are identified below; these are essential features any administrator will require to successfully run a virtualized data center.

Like SCVMM, vCenter can support a vast number of hosts and VMs: a single vCenter server can host 1,000 hosts with up to 10,000 powered-on VMs, a dramatic increase on the SCVMM capabilities. vCenter additionally has the ability to link multiple vCenter servers together, essential for any large-scale data center with a vast number of hosts and VMs (VMware, 2013), producing an overview that can monitor and manage up to 30,000 VMs from one view.

vCenter has the capability to connect to the web client overview, which is a vSphere application; this means that an administrator can view the vCenter overview 'on the go', providing essential ease of support for administrators who are not always in proximity to the vCenter server (VMware, 2015).

Like SCVMM, vCenter can support third-party hypervisors; however, unlike SCVMM, vCenter requires an additional plug-in to be enabled to support this. vCenter can then manage up to 20 Hyper-V hosts (Ivobeerens.nl, 2012), although currently VMware only supports Hyper-V as a third-party hypervisor (VMware, 2012).

vCenter offers an inventory search functionality, enabling the administrator to view an entire directory of hosts, VMs and networks at a simple glance, for ease of searching and reporting. This can additionally be used in linked mode, where multiple vCenter servers are compiled together and vCenter presents the data simultaneously through just one instance of vCenter. An additional benefit of vCenter is the use of audit trails: vCenter keeps track of any changes made, maintaining a clear and understandable record, which can be used if a rollback is needed or an incorrect change has been made (VMware, 2015).

With use of vCenter Operations Manager, an administrator can view a dashboard like that of SCVMM, showing items such as health, risk and efficiency. This view additionally offers support for alerts and notifications, pre-set by an administrator to warn of specific events or rapid fluctuations in both VMs and hosts. These alerts can be set to simply notify the user of such an event, or can be pre-set to automatically start a process to resolve it, such as running a specific script if a given problem occurs, thus resolving, halting or pre-empting problems before they develop into an overall situation.

Figure 15 – Example vCenter Operations Dashboard  (Lesslhumer, 2012)

vCenter has a clear dashboard GUI where all tasks can be performed; however, VMware additionally supports scripting. Scripting for VMware is done using PowerCLI, an interface for PowerShell, with only small alterations between scripts. PowerCLI commands are run through PowerShell and can be used to automate, manage and alter the vCenter server, hosts and VMs, again reducing the workloads of administrators (Elsinga, 2015). To keep consistency across products, vCenter uses the vSphere client or web-based application as its GUI; here the administrator, in place of connecting to a vSphere host, connects to the vCenter server to gain the visibility and overview of vCenter.
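A short PowerCLI sketch illustrates how close the scripting experience is to standard PowerShell. The vCenter address and the VM/host names below are hypothetical, and the commands assume the VMware.PowerCLI module is installed and credentials are supplied at connection time:

```powershell
Import-Module VMware.PowerCLI
Connect-VIServer -Server "vcenter.example.local"

# Report powered-off VMs across every host this vCenter manages
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOff' } |
    Select-Object Name, VMHost, NumCpu, MemoryGB

# Live-migrate (vMotion) a VM to another ESXi host
Move-VM -VM (Get-VM -Name "web01") -Destination (Get-VMHost -Name "esxi02.example.local")
```

The pipeline shape is the same as the SCVMM cmdlets; only the cmdlet names and object properties differ, which is why scripts translate between the two platforms with small alterations.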

With the large number of products from VMware, vCenter has the capability to connect to many of these applications, each offering a different service, such as vRealize, an application offering benefits like advanced automation and workflow support.

4.2.5 Data Center Management using vCenter

Using vCenter as a centralized data center management application has a large number of benefits for IT administrators. The VMware programs are not the cheapest for data centers, but offer a wide number of applications for management, reporting, monitoring and automation that other parties may not offer. Like SCVMM, vCenter is installed on a management server; an IT administrator will then use the vSphere client or web-based application to view and support their management needs. With the use of the web-based application, administrators have the ability to remotely support the data center from any location that simply has access to the internet.

Alternatively to the GUI, the PowerCLI framework within the PowerShell module can be used to manage and automate vCenter, vSphere hosts and VMs. As with SCVMM, this can reduce the man-hours and workload it takes to manually go through the GUI to perform daily and simple actions. Given the large scale at which data centers can operate, especially with VMware applications that can connect multiple vCenter servers together, automation is essential to scale the work down and reduce management hours.

Figure 16 – VMware vCenter Architecture (MacDonnell, n.d.)

4.3 XenCenter

XenCenter is a management application for XenServers. XenCenter operates on a Windows operating system on the data center administrator's computer, allowing administrators to install, configure, administer, monitor and manage VMs. Like SCVMM and vCenter, this is a centralized VMM tool. XenCenter is open-source software that now operates under Citrix Systems, as of 2007. XenCenter offers a dashboard on which the administrator can use or add bespoke components, to display what is required for the enterprise (Berry, 2013).

Figure 17 – Citrix XenServer Logo

4.3.1 XenServer

XenServer is a type-one hypervisor, an open-source application produced by Citrix Systems and based upon the Xen hypervisor. XenServer is an enterprise-level virtualization platform used to deliver all the critical factors needed in virtualization, whether in a data center or for personal use (Vugt, 2010).

XenServer uses the open-platform technology called Xen, a hypervisor developed by XenSource. Xen is a type-one hypervisor which relies on Dom0, the host domain with privileged access to the hardware. This system has no management consoles or supporting features, just the hypervisor itself. The Xen hypervisor is used by a number of different virtualization products, such as those from Red Hat or Oracle (Xen, 2011).

XenServer is an application using the Xen hypervisor as a base, with the Citrix team integrating a management GUI and additional features to support enterprise and commercial use of the Xen product, working to integrate, manage and automate virtualization and its management. Citrix offers this product alongside several other applications, which can be used together to provide a fully enterprise-level suite of products, such as XenCenter, XenApp, XenDesktop and XenMotion (Rouse, 2015).

XenServer remains an open source project with the community being managed by Citrix, the project aims to develop solutions to ease the management and reduce costs of server and application virtualization.

4.3.2 Versions

Xen originated in 2003 as part of XenSource, and there were a number of releases from this open-source program before XenSource was purchased by Citrix Systems in 2007. XenCenter and XenServer currently operate at version 7.1, released on February 26th 2017; the majority of users will still be operating at 7.0 due to the timeliness of this new release. Version 7.1 contained a number of updates and enhanced functionalities, such as increased configuration limits and maintenance improvements. Previously, Citrix released version 5 in May of 2010 and Citrix XenServer 6.0 in 2011, with consistent yearly updates from then on; with each version of XenServer released, XenCenter was additionally updated (Benedict, 2015) (Ahmed, 2013).

4.3.3 Potential Enterprise costs

Like the previous two products, the price of XenServer depends on what features and functionalities are required and selected. XenCenter can be downloaded as a standalone program for free; in the majority of cases, however, it is purchased through a package that includes XenServer. There are three editions of XenServer: Free, Standard and Enterprise. Each uses the same source code, but each higher edition offers more features and/or greater support. XenServer itself is a free product; the enterprise is paying for the support, maintenance, updates and additional features from Citrix (Quatratics software, 2016). XenServer is a more flexible application than many other virtualization offerings: enterprises are able to select features and products, and to use or develop add-ins for XenCenter. (Poppelgaard, 2015)

Both the Standard and Enterprise editions have two license types, On-Premise and Perpetual. On-Premise is an annual license offering technical support, software maintenance, and access to updates and hotfixes, all manageable through XenCenter. Perpetual offers a “guaranteed supportability and upgrade path for the entire lifecycle of the current major version XenServer” (Citrix, 2017), additionally offering one year of technical support, software maintenance, and access to updates and hotfixes via XenCenter; after the first year, in place of purchasing a new license, the enterprise must negotiate a software maintenance plan if desired. (Citrix, 2017)

Compared to the prices of SCVMM and vCenter, XenCenter's are dramatically lower, which is beneficial for an enterprise that is building up a data center.

| Product Edition | CPU Sockets | Includes | Price (USD) |
|---|---|---|---|
| XenServer Enterprise Edition / On-Premise | 1 | XenServer license, XenCenter license, XenMotion, 1 yr. support/maintenance, Conversion Manager | $695.00 |
| XenServer Enterprise Edition / Perpetual | 1 | XenServer license, XenCenter license, XenMotion, 1 yr. support/maintenance, Conversion Manager | $1,525.00 |
| XenServer Standard Edition / On-Premise | 1 | XenServer license, XenCenter license, XenMotion, 1 yr. support/maintenance | $345.00 |
| XenServer Standard Edition / Perpetual | 1 | XenServer license, XenCenter license, XenMotion, 1 yr. support/maintenance | $763.00 |
| XenServer Free Edition | – | XenCenter basic features, XenMotion basic features | Free |

Table 5 – Example costs of XenCenter and XenServer (Citrix, 2017)

4.3.4 Features

XenCenter is the centralized management interface for XenServer. It contains a number of features and functionalities that are essential for ease of management; some of those in XenCenter version 7.0 are identified below.

The first major feature of version 7.0 is the ability to manage multiple servers from one interface, something new with this release, keeping pace with competition from the VMware and Microsoft products. XenServer can host 1,000 VMs concurrently, all of which can be managed from within XenCenter. Like SCVMM and vCenter, XenCenter offers the ability to monitor and manage all the hosts and VMs connected to the VMM. XenCenter can monitor and produce reporting information on the hosts and VMs, including compute power, memory, network and disk performance, and these reports can be pre-set to what the administrator and enterprise need.

As a centralized management tool, XenCenter provides a central point from which updates can be published. Updates can be published via the GUI or the command line, in order to batch, automatically download and manage them efficiently. (Citrix, 2015)

XenCenter is configured to use XenMotion, a live migration tool designed to support and ease the live migration of VMs and storage. This live migration tool eliminates the need for downtime and can carry out migrations with minimal impact on applications and servers.

Unlike the previous products, XenCenter has one main benefit in that it does not need to be installed on a server or machine that is attached to, or part of, the data center. XenCenter is designed to run on a Windows machine that connects remotely to manage the data center. This means that if something were to go wrong with the data center, the management machine is separated and distanced from the data center itself. (XenServer, n.d.)

With an open source system, users are able to edit and modify the source code. XenCenter has made this easy by allowing plugins to be added in. These can be anything from a private, in-house tool that supports the enterprise further, to plugins that present different VM views better suited to particular management styles.

One of the most popular plugins is the ability to use PowerShell to control and manage the hosts and VMs available in XenCenter. Like vCenter's PowerCLI, this is a slight variation on standard PowerShell; the Xen module, however, offers the same kind of capability that PowerShell provides in SCVMM. This scripting ability can reduce and automate the daily tasks of the administrators. (Cardenas, 2014)
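As an illustrative sketch only (cmdlet names and parameters should be verified against the installed version of the XenServer PowerShell module), a session might connect to a pool and list its VMs as follows:

```powershell
# Sketch: connect to a XenServer pool master and list its VMs via the
# XenServer PowerShell module. The host URL and credentials are
# placeholders; the property names follow the XenServer API
# (name_label, power_state).
Import-Module XenServerPSModule
Connect-XenServer -Url 'https://xenserver01' -UserName 'root' -Password 'secret'

# Filter out templates and the control domain (Dom0), then show state.
Get-XenVM |
    Where-Object { -not $_.is_a_template -and -not $_.is_control_domain } |
    Select-Object name_label, power_state

Disconnect-XenServer
```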

4.3.5 Data Center Management using XenCenter

For a data center, using XenCenter with XenServer would dramatically reduce costs and produce a virtualized space in which an IT administrator is able to manipulate, manage, report on and configure all that is required for the enterprise. With the minimal costs and the ease of download and installation, enterprises can set XenServer and XenCenter up easily. XenCenter gives IT administrators flexibility, agility and full control of their environment.

Citrix XenCenter has the ability to automate and streamline both daily and critical tasks, easing the management of multiple hosts and VMs. Tasks can be automated through the GUI provided in the XenCenter application, or through the PowerShell plugin that can easily be enabled on the Windows-based machine that XenCenter is run from.

XenServer and XenCenter have the added advantage that the source code can be extended. An enterprise able to develop additional functionality to suit its specific needs will benefit from this; it is particularly valuable for technology-based companies, which can improve the systems in line with their own products to improve workflow and load.

4.4 Comparison and Evaluation

Across the three reviewed centralized virtual machine managers, the project has looked into a number of areas that can both benefit and limit a VMM in an enterprise data center. The project reviewed the hypervisors recommended for use with each centralized VMM, the current version, the potential cost to the enterprise, the most prominent features, and how each application would benefit a data center environment.

There are many virtualization vendors within the industry, each with benefits and limitations depending on what an enterprise or user needs. The project has reviewed three of the top players within the industry in terms of hypervisors. Currently VMware is the leading vendor, with a user base averaging 71.3%; the second player is Microsoft, with an average of 22.5%. Citrix has a dramatically lower user base, with an average of 6%, and a small remaining percentage use other hypervisor systems, such as Oracle or IBM solutions (SpiceWorks, 2016).

The review this project has undertaken reflects these figures from SpiceWorks: VMware and Microsoft have the best GUIs and offer the more usable features for IT administrators. Citrix Xen, by contrast, offers a contemporary solution, but the product is not as mature, and its management is more difficult compared to VMware or Microsoft. However, if an enterprise needs a Linux-based system that allows administrators to extend and develop it, then this is the hypervisor for them.

Figure 18 – Virtualization Vendors per Company Size (SpiceWorks, 2016)

Table 6 identifies some of the most popular features desired by IT administrators of data centers. Identifying these features shows both benefits and limitations for enterprises that currently have, or are planning to create, a data center; with these features, administrators can identify what is needed for the size and demand of the data center and the enterprise. The table shows that, as SpiceWorks stated earlier, VMware has the largest capabilities in terms of hosts and VMs. Each system can on average host a similar number of VMs, with XenCenter dropping slightly lower. However, XenCenter has a larger number of shortfalls in terms of management techniques. In comparison, vCenter has the most benefits, with SCVMM a close second. (Romain Serre, 2017)

All three systems can be scripted and automated using PowerShell and PowerShell modules. This provides a rich administration and scripting environment that can be customized or automated to run daily, batch or advanced tasks, completing complex tasks in relatively few steps. PowerShell scripting can be actioned on the hosts themselves or remotely. (SQA, 2007)

| General | Microsoft System Center Virtual Machine Manager | VMware vCenter | Citrix XenCenter |
|---|---|---|---|
| Preferred hypervisor | Hyper-V | VMware vSphere | XenServer |
| Type one or two | One | One | One |
| Max VMs per host | 1024 VMs/host, 2048 vCPU/host | 2048 VMs/host | 1000 VMs/host |
| Max hosts supported by VMM | 400 tested, possibly more | 1000 per vCenter | 25 |
| Third-party hypervisor support | Yes (full: VMware; partial: Citrix) | Yes (Hyper-V); no (XenServer) | No |
| Virtual and physical management | Yes, included in the SC package | Limited, possible with plugins | No |
| Access control (AD) | Yes | Yes | Yes |
| Web-based VMM | No | Yes, not all functionality | No |
| Automated host deployment | Yes, for any physical machine on the network | Yes | No |
| Scripting | PowerShell, WMI API | PowerCLI, .NET, Perl, etc. | PowerShell, SDK, API |

Table 6 – Table of comparison of features (Romain Serre, 2017)

Overall, a centralized VMM is an essential tool for successful and efficient data center management, providing administrators with a clear, easy overview of all resources available within the data center. Systems like the three reviewed offer a large number of benefits that can enhance and support administrators in providing a fully consolidated, efficient, low-cost and fully virtualized data center.

Chapter 5 – Methodologies, Tools and Criteria

In this chapter the project discusses the methodology used. After reviewing and identifying current literature and VMM products, the project aims to create PowerShell scripts, then to review and analyse them, identifying their advantages and disadvantages in an enterprise data centre. This chapter also identifies the implementation methods and design aims.

5.1 Testing Platform

For the purposes of the project, scripts will be executed and tested in a Hyper-V environment on a single host. Given the project's limited access to a complete, functioning virtualized data center environment, this was chosen as the closest approximation for testing and evaluating the scripts created. Although this places boundaries on how certain scripts would run in a complete virtualized data center with a centralized VMM, it leaves room for future development in this area.

The project will use PowerShell ISE version 5.1, integrating with Hyper-V version 10.0 on a Windows 10 Pro operating system. This configuration runs on the hardware specification below:

  • Processor: Intel® Core™ I7-2670QM CPU @ 2.20GHz
  • RAM: 6.00GB
  • 64-bit operating system.

5.2 Development model

The scripts have been developed using the prototype model: building a prototype against the current requirements before adding to and increasing those requirements. This enables users or customers to gain a feel for the product so that they can steer its design and production. The advantages of this model are that an active user can be involved in the design and development of the scripts, and that the user gains a clear understanding of the development. The model enables errors and missing functionality to be identified early, enabling quicker implementation. However, incomplete or prototype versions can end up being implemented and then developed onto, causing higher risk (ISTQB Exam Certification, n.d.).

This model works well within the project: due to the limitations of the data center, the project can produce a first-stage prototype, and future development can build on what is produced here. Additionally, as this is a testing and project-based environment, the risk is dramatically lowered, since no live features are at risk to an enterprise or its virtual data center.

5.3 PowerShell

The prototypes the project creates will be designed and scripted within the Microsoft PowerShell environment, aiming to improve the overall automation of a VMM and supporting IT administrators in reducing the time and workload spent running standard management and administrative tasks.

5.3.1 What is PowerShell?

Microsoft PowerShell is an interactive, object-oriented command line and scripting language integrated into the .NET framework, which can be embedded into other applications (Techopedia, n.d.). PowerShell has a standard command line shell as well as an integrated scripting environment, PowerShell ISE, two options designed to support PowerShell users. PowerShell can execute single commands, known as cmdlets (lightweight single-function commands), or PowerShell scripts: collections of commands executed in sequence, typically running a series of processes to complete an overall task. The object-oriented language gives administrators the flexibility to filter, compare and sort objects through the commands, ensuring administrators are able to display the data in a readable format (Warren Frame, n.d.).
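As a small illustration of the object pipeline, the following one-liner filters and sorts process objects rather than parsing text output:

```powershell
# The object pipeline: each cmdlet emits .NET objects, so results can be
# sorted, filtered and projected without any text parsing. Here the five
# largest processes by working set are shown as a table.
Get-Process |
    Sort-Object WorkingSet -Descending |
    Select-Object -First 5 Name, Id, WorkingSet |
    Format-Table -AutoSize
```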

5.3.2 Features of PowerShell

Microsoft PowerShell is a tool developed to increase the automation and speed of administration tasks, with commands and modules that can be used across multiple platforms, such as VMware (via the PowerCLI module), Microsoft and Citrix. Its remote management capability has a number of benefits that can support data center administrators, drastically increasing the productivity of both daily tasks and advanced management tasks. (Helmick, 2014)

One of the largest benefits is the reduction in time. PowerShell can be used on single items or in bulk; if an administrator were to run each action using the GUI provided by the virtualization applications, this could be very time consuming and difficult to complete in a reasonable time frame, depending on the size of the data center. PowerShell, by contrast, can run tasks on multiple items, even in a specific order if needed, from one simple script (Creek, 2011).
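As a sketch of this bulk-versus-GUI point, a single pipeline can act on every matching VM at once (this assumes the Hyper-V PowerShell module on the management host):

```powershell
# Start every VM that is currently off, in one command, instead of
# starting each VM individually through the GUI.
Get-VM | Where-Object { $_.State -eq 'Off' } | Start-VM -Verbose
```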

Remote access can carry some security risks; however, PowerShell has security controls that can be set manually or via Group Policy in the data center environment. PowerShell has a number of execution policies, set at the data center administrator's discretion, which can restrict scripts from being run and published on both local and remote machines, such as those in a virtualized data center (TechNet, n.d.).
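The execution policy itself is inspected and set with a pair of built-in cmdlets; RemoteSigned is a common compromise, allowing locally authored scripts while requiring downloaded scripts to be signed:

```powershell
# Show the effective execution policy at every scope, then restrict the
# machine so that downloaded scripts must be digitally signed.
Get-ExecutionPolicy -List
Set-ExecutionPolicy RemoteSigned -Scope LocalMachine
```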

PowerShell can additionally be used to gather detailed information that may be difficult to find and navigate to in the GUI. A script can, for example, produce a report identifying all the information on each VM and host in use, something that may be difficult, or may need an additional program, if recreated through the VMM GUI (Guys, 2015).
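A minimal sketch of such a report, assuming the Hyper-V module (the output path is a placeholder):

```powershell
# Export an inventory of every VM on the host - name, state, vCPU count,
# assigned memory and uptime - to a CSV file for further use.
Get-VM |
    Select-Object Name, State, ProcessorCount,
                  @{ n = 'MemoryGB'; e = { $_.MemoryAssigned / 1GB } },
                  Uptime |
    Export-Csv -Path 'C:\Reports\vm-inventory.csv' -NoTypeInformation
```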

However, a drawback of PowerShell is that the administrator has to spend time writing and testing a script before any of this is possible. Once the script is written, though, it is saved and can be run as often as needed, producing an automated script. To gain the best advantage from PowerShell, the data center administrator needs experience or an understanding of how PowerShell works and how to write and publish scripts; once this is learnt, the level of automation in the virtualized environment can be increased (Helmick, 2014).

The ability to repeat tasks by reusing scripts produces a simpler and less tedious workload for administrators, leaving more time for other tasks. PowerShell is the standard tool for automation within the Microsoft environment; for systems such as SCVMM, PowerShell provides a fully integrated and supported system in which many actions can be automated, such as processes, configuration and maintenance of the SCVMM data center. Both vCenter and XenCenter have official modules or plugins that support the use of PowerShell. VMware vCenter uses the PowerCLI module, which has nearly 400 cmdlets to manage and automate the VMware data center. The PowerShell modules have a large crossover, with many cmdlets remaining the same and only small changes between the rest.

Scripts for all three products can be scheduled: for vCenter, PowerCLI scripts can be scheduled through vCenter itself, while for SCVMM and XenCenter, PowerShell scripts can be scheduled through the Windows Task Scheduler. Scheduling can again reduce administrators' time and workload, running scripts at a pre-set time to carry out a change, a management task or a report that benefits the virtualized environment.
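On the Windows side, scheduling can be sketched with the built-in ScheduledTasks module (the task name and script path are placeholders):

```powershell
# Register a task that runs a reporting script every night at 02:00.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\VMReport.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 02:00
Register-ScheduledTask -TaskName 'Nightly VM Report' `
    -Action $action -Trigger $trigger
```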

5.3.3 Use of PowerShell

The scripting produced within this project has been created in the PowerShell ISE environment, an integrated scripting environment providing an interface where scripts can be created. Scripts have been created using the default file type, .PS1. This interface has additionally been used to run and test the scripts throughout their creation, ensuring they are fully usable and working. The decision to use this platform for PowerShell was due to its benefits and flexibility, such as the ability to execute a partial selection of a script or the full script as a whole. Additionally, PowerShell ISE uses syntax highlighting, giving an easier view for reading and understanding a cmdlet or script, along with the ability to copy, paste and edit cmdlets or scripts.

5.4 Criteria

The project will develop a number of scripts that will support managing and administrating a virtual data center environment. Some of the scripts will be used daily, others in more unique circumstances. The criteria for these scripts are identified below, creating principles around which the scripts will be designed and reviewed.

Creation of a single VM

The project will create a prototype script that creates a single VM by running one script, rather than through multiple selections in the GUI, with the intention of allowing the administrator to enter the specification of the desired VM, such as its name and host. This results in a simpler and quicker method of creating VMs.

Checkpoint or Snapshot VMs

Here, a checkpoint is a capture of the state of the virtual hard disk, providing a temporary backup for events such as updating the operating system. A checkpoint can then be restored or removed depending on whether the update fails or succeeds. Depending on the number of VMs, this can be a heavy workload, taking a large amount of time. The intention is to create several scripts: the first creating a checkpoint of all managed VMs, the second restoring the checkpoints to all VMs if required, and the third removing the temporary checkpoints. By producing this sequence of scripts, the administrator can run a single script and leave it running, coming back to checkpoints for every VM.
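A minimal Hyper-V sketch of the three scripts (the checkpoint name 'PreUpdate' is an illustrative choice):

```powershell
# Script 1: checkpoint every managed VM before maintenance.
Get-VM | Checkpoint-VM -SnapshotName 'PreUpdate'

# Script 2: restore every VM to its checkpoint if the update fails.
Get-VM | Get-VMSnapshot -Name 'PreUpdate' | Restore-VMSnapshot -Confirm:$false

# Script 3: remove the temporary checkpoints once the update succeeds.
Get-VM | Get-VMSnapshot -Name 'PreUpdate' | Remove-VMSnapshot
```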

Change VM setup

A prototype will be developed to change a configuration setting within a previously created VM. Since a change is being undertaken, the script will run as a workflow, creating a checkpoint before any changes are made, for protection in case a problem occurs. The script will then perform the change of setting. This script will not be used as commonly as others, but can again decrease workload, time and manpower.
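A sketch of this workflow for one setting (the VM name and the memory change are illustrative):

```powershell
# Checkpoint-then-change: take a safety checkpoint, then double the VM's
# startup memory. The checkpoint allows an instant roll-back on failure.
$vm = Get-VM -Name 'WebServer01'
Checkpoint-VM -VM $vm -SnapshotName 'PreConfigChange'
Set-VMMemory -VM $vm -StartupBytes (2 * $vm.MemoryStartup)
```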

Sequence Start

A standard enterprise-level virtual data center has a number of machines, such as a domain controller, which must be started first to ensure successful use of the other machines and applications. Creating a workflow where machines are started in a specified order is therefore an essential script. Having this script on hand in case of a full-scale shut-off is essential: first, to solve the sequencing and workflow issues, and second, to ensure the VMM is not overloaded by trying to start all machines in a single instant.
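A sketch of such a sequenced start, assuming Hyper-V integration services are running in each guest so the heartbeat can be polled (the boot-order list is a placeholder):

```powershell
# Start VMs in dependency order, waiting for each guest's heartbeat
# before moving on, so the host is never asked to boot everything at once.
$bootOrder = 'DC01', 'SQL01', 'App01', 'Web01'
foreach ($name in $bootOrder) {
    Start-VM -Name $name
    # Heartbeat values such as 'OkApplicationsHealthy' begin with 'Ok'.
    while ((Get-VM -Name $name).Heartbeat -notlike 'Ok*') {
        Start-Sleep -Seconds 5
    }
}
```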

Reporting

Reporting is an essential tool for any system; reports may be needed in a number of different areas and are an important tool for any business. Many VMM applications use additional applications for reporting, which in some enterprises can incur extra costs. However, scripting can produce report formats directly. Applying these methods produces simple reporting that can be scheduled to run when desired, generating information that can then be used for other business needs.
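Beyond CSV, the built-in ConvertTo-Html cmdlet can turn the same object pipeline into a shareable report without a separate reporting product (the output path is a placeholder):

```powershell
# Produce a simple HTML status report of all VMs on a Hyper-V host.
Get-VM |
    Select-Object Name, State, CPUUsage,
                  @{ n = 'MemoryGB'; e = { $_.MemoryAssigned / 1GB } } |
    ConvertTo-Html -Title 'VM Status' -PreContent '<h1>VM Status Report</h1>' |
    Out-File 'C:\Reports\vm-status.html'
```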

Migration of VMs between hosts

One of the largest benefits of a virtual data center is the ability to move and migrate VMs between hosts for a number of reasons: to meet resource demands, to isolate a VM from a specific host or workload, or to recover from a physical error on a host. A script to migrate quickly is essential for a virtual data center and allows administrators to recover rapidly or to solve issues.
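A minimal Hyper-V sketch, assuming live migration is enabled between the two hosts (the names and path are placeholders):

```powershell
# Live-migrate a VM, including its storage, to another Hyper-V host.
Move-VM -Name 'WebServer01' -DestinationHost 'HYPERV02' `
    -IncludeStorage -DestinationStoragePath 'D:\VMs\WebServer01'
```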

Chapter 6 – Script Implementation and Review

This chapter will focus on the creation and review of PowerShell scripts that can be used for basic automation within a virtualized data centre environment. After the creation of the scripts, the aim is to critically evaluate them, their use, and the ease of porting them between VMM applications and their equivalent PowerShell modules.


6.1 Creation of a single VM.

Using the New-VM cmdlet as the basis for this script, the review follows a consistent structure: a description of the code and why it is important; the code itself; testing; the changes required for SCVMM or vCenter; the result; a review against the criteria; a review of the automation gained, why it would be used, and its benefits; and finally whether the script meets the requirements, with the script improved where the review identifies shortcomings.
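A minimal prototype of the creation script, using the Hyper-V New-VM cmdlet (every value below is illustrative; Read-Host lets the administrator supply the specification at run time):

```powershell
# Prompt for a name, then create a Generation 2 VM with a new 40 GB disk.
$name = Read-Host 'Name of the new VM'
New-VM -Name $name `
    -Generation 2 `
    -MemoryStartupBytes 2GB `
    -NewVHDPath "C:\VMs\$name\$name.vhdx" `
    -NewVHDSizeBytes 40GB `
    -SwitchName 'Default Switch'
```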

Script 1 – Creation of a single VM
