Hosts, clusters, and resource pools together form the skeleton of any virtualization technology: the virtualization software (hypervisor) consumes them to present virtual machines to the end user. They are the building blocks of any virtualization platform (e.g., VMware). Just as a typical enterprise application has a front-end, business logic, and a back-end, virtualization applications are layered in a very similar way.
Now the best part of the virtualization stack in the above diagram is that the last layer (hardware) is dynamic, in the sense that resources can be added and removed as needs change.
A host is nothing but a high-end physical computer providing computing and memory resources. The number of CPUs in a host is fixed, but its RAM can be increased or decreased (up to a maximum limit). Storage, or datastores, can be added to the host as needed. Conceptually a host is the same as a physical desktop or laptop, only far more powerful.
For example: a host with 4 dual-core CPUs, each core running at 3 GHz, and 32 GB of memory has 4 × 2 × 3 GHz = 24 GHz of computing power and 32 GB of RAM available for running virtual machines on top of it.
Now how do we scale this? You guessed it right: combine multiple hosts together. This is nothing but a cluster. So if a cluster has 4 such hosts, a total of 24 GHz × 4 = 96 GHz of computational power and 32 GB × 4 = 128 GB of RAM is available for virtualization.
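The arithmetic above can be sketched in a few lines of Python (the numbers are the hypothetical ones from the example, not measurements of any real cluster):

```python
# Host capacity = number of CPUs * cores per CPU * clock speed per core.
cpus, cores_per_cpu, ghz_per_core = 4, 2, 3.0
host_cpu_ghz = cpus * cores_per_cpu * ghz_per_core  # 4 * 2 * 3 = 24 GHz
host_ram_gb = 32

# A cluster aggregates the capacity of all its hosts.
hosts_in_cluster = 4
cluster_cpu_ghz = host_cpu_ghz * hosts_in_cluster   # 24 * 4 = 96 GHz
cluster_ram_gb = host_ram_gb * hosts_in_cluster     # 32 * 4 = 128 GB

print(cluster_cpu_ghz, cluster_ram_gb)  # 96.0 128
```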
How does clustering help? The hypervisor now sees the underlying hardware of all the hosts as a single entity. This means that if we want to provision 50 GHz of computational power and 50 GB of RAM, a single host cannot satisfy the request, but the cluster can. One can think of a cluster as a solution to the problem of resource fragmentation. Let's see this:
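A toy placement check illustrates the fragmentation point: the 50 GHz / 50 GB request from the text fits no single host, yet fits comfortably within the cluster's pooled totals. (This is a simplified model, not any vendor's scheduler.)

```python
# Four identical hosts from the running example.
hosts = [{"cpu_ghz": 24.0, "ram_gb": 32} for _ in range(4)]

# The request from the text: more than any one host can offer.
request = {"cpu_ghz": 50.0, "ram_gb": 50}

# Can any single host satisfy the request on its own?
fits_single_host = any(h["cpu_ghz"] >= request["cpu_ghz"]
                       and h["ram_gb"] >= request["ram_gb"]
                       for h in hosts)

# Can the cluster's pooled capacity satisfy it?
cluster = {"cpu_ghz": sum(h["cpu_ghz"] for h in hosts),
           "ram_gb": sum(h["ram_gb"] for h in hosts)}
fits_cluster = (cluster["cpu_ghz"] >= request["cpu_ghz"]
                and cluster["ram_gb"] >= request["ram_gb"])

print(fits_single_host, fits_cluster)  # False True
```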
Any resource pool can be partitioned into smaller resource pools at a fine-grained level to further divide and assign resources to different groups or for different purposes.
Obviously the test team will need more machines and resources in order to test a piece of software on different operating systems. Developers, by comparison, need fewer VMs, but each VM must be powerful enough. As a team expands or shrinks we can easily rebalance the resources: if the development team is not utilizing its resources to the peak and the test team needs more CPU/RAM, we can easily adjust the resource pools.
As a result, resources are not wasted when they are not being used to their maximum capacity. Resource pools can also be nested, dynamically reconfigured, and organized hierarchically.
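The nesting and rebalancing described above can be modeled with a small sketch. The `ResourcePool` class and `rebalance` helper below are hypothetical illustrations (not a real vendor API): child pools carve capacity out of their parent, and spare capacity can be shifted between sibling pools as team needs change.

```python
class ResourcePool:
    """A toy resource pool holding CPU (GHz) and RAM (GB) capacity."""

    def __init__(self, name, cpu_ghz, ram_gb):
        self.name, self.cpu_ghz, self.ram_gb = name, cpu_ghz, ram_gb
        self.children = []

    def add_child(self, child):
        # A child pool carves its resources out of the parent pool.
        assert child.cpu_ghz <= self.cpu_ghz and child.ram_gb <= self.ram_gb
        self.cpu_ghz -= child.cpu_ghz
        self.ram_gb -= child.ram_gb
        self.children.append(child)


def rebalance(src, dst, cpu_ghz, ram_gb):
    # Shift spare capacity from one pool to another.
    src.cpu_ghz -= cpu_ghz; src.ram_gb -= ram_gb
    dst.cpu_ghz += cpu_ghz; dst.ram_gb += ram_gb


# The cluster from the running example, split between dev and test.
cluster = ResourcePool("cluster", cpu_ghz=96.0, ram_gb=128)
dev = ResourcePool("dev", cpu_ghz=48.0, ram_gb=64)
test = ResourcePool("test", cpu_ghz=24.0, ram_gb=32)
cluster.add_child(dev)
cluster.add_child(test)

# Dev is under-utilized, test needs more: move 12 GHz and 16 GB over.
rebalance(dev, test, cpu_ghz=12.0, ram_gb=16)
print(dev.cpu_ghz, test.cpu_ghz)  # 36.0 36.0
```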
Individual business units can use their own dedicated infrastructure while still benefiting from the efficiency of resource pooling. Isn't this something like "à la carte", where we use resources only as the requirement demands?