Issues with Server Virtualization

Virtualization has long been considered a great way to maximize utilization of server resources by increasing server density. It is also seen as key to green IT initiatives, since it reduces the number of physical compute resources a data center must support and thereby cuts cooling and power costs.

Having said that, there seems to be a performance downside to virtualization. Because a virtual machine (VM) is yet another layer of indirection above the server hardware, it adds overhead whenever the application needs to access IO channels, SAN disks and other network resources. This is especially true when the application code runs inside a JVM or an application server container such as a JEE container, the .NET CLR or even a web server's servlet container. In all of these cases, an application's performance profile tends to differ depending on whether it runs inside a virtual layer or directly on the physical server.
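To make that concrete, here is a minimal, illustrative sketch of how one might measure the difference: run the same IO-bound probe on the physical server and again inside the VM, then compare the reported times. The file path and iteration count below are placeholders, not tied to any particular product or platform.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal, illustrative micro-benchmark: run the same loop on the physical
// server and inside the VM and compare the reported times. The file path and
// iteration count are placeholders.
public class IoLatencyProbe {
    public static void main(String[] args) throws IOException {
        Path sample = Path.of("/tmp/sample.dat");   // placeholder test file
        int iterations = 1_000;

        long start = System.nanoTime();
        long bytesRead = 0;
        for (int i = 0; i < iterations; i++) {
            bytesRead += Files.readAllBytes(sample).length;   // repeated disk/SAN read
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("Read %d bytes in %d ms (%.2f ms per iteration)%n",
                bytesRead, elapsedMs, (double) elapsedMs / iterations);
    }
}
```

Comparing the per-iteration times from the two runs gives a first-order sense of how much latency the extra virtualization layer adds for that IO pattern.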

Vendors are now addressing this drawback by introducing the concept of Just Enough OS (JeOS). It remains to be seen how effective these optimizations prove to be for critical-path enterprise systems.

Thank you.

surekha -

Comments

  1. My understanding is that JeOS is designed to minimize the impact of bloated OS's on resource utilization in a virtual environment. The idea is to treat the OS as a virtual appliance instead of loading a full OS with hundreds of unused services and device drivers. IO issues, where they exist, can be handled by application-level architectural remedies (e.g. transaction caching, as sketched below) or by infrastructure configuration (e.g. load balancing). Which approach to take depends on the application architecture and the supporting network, system and data infrastructure. Dynamic resource allocation mechanisms built into virtual hosts help leverage virtual resources across the virtual infrastructure.

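    As a rough illustration of the transaction-caching remedy mentioned above, the sketch below (plain Java, with a hypothetical loader function standing in for the real database or service call) serves repeated requests from memory so they avoid another trip through the virtualized IO path:

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Hypothetical read-through cache in front of an expensive, IO-bound lookup.
    // Repeated requests for the same key are served from memory instead of going
    // back out through the virtualized IO path.
    public class ReadThroughCache<K, V> {
        private final Map<K, V> cache = new ConcurrentHashMap<>();
        private final Function<K, V> loader;   // e.g. a database or remote-service call

        public ReadThroughCache(Function<K, V> loader) {
            this.loader = loader;
        }

        public V get(K key) {
            // computeIfAbsent only invokes the loader on a cache miss.
            return cache.computeIfAbsent(key, loader);
        }
    }
    ```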
  2. Thank you for your comment. It sounds like you are suggesting long-established application architecture best practices to address some of the performance degradation caused by virtualization. That is a great suggestion: it lets one embark on virtualization and gain hardware independence and higher server density, while still relying on proven architecture practices to mitigate the added response-time latency.

    I want to explore your point about dynamic resource allocation from virtual hosts. This could involve pools of scarce, inherently stateless resources hosted on virtual server farms: for instance, database connections, remote service URLs, or caches of frequently referenced business concepts or policies. Coupled with dynamic workload management, in which the provisioning of these stateless resource pools is driven by policies, one could make significant strides toward reducing consumer response-time latency while also improving application performance and scalability. A rough sketch of such a policy-driven pool appears below.

    Thank you.
    surekha -

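    Below is a rough sketch of the policy-driven stateless resource pool described above, in plain Java. The class name, maximum size and checkout timeout are made-up placeholders; a real deployment would lean on the container's or virtualization platform's own pooling and workload-management facilities.

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    // Hypothetical policy-driven pool of stateless resources (e.g. DB connections).
    // The "policy" here is simply a maximum size plus a checkout timeout; a real
    // workload manager would adjust these numbers dynamically.
    public class StatelessResourcePool<T> {
        private final BlockingQueue<T> idle;
        private final long checkoutTimeoutMs;

        public StatelessResourcePool(Supplier<T> factory, int maxSize, long checkoutTimeoutMs) {
            this.idle = new ArrayBlockingQueue<>(maxSize);
            this.checkoutTimeoutMs = checkoutTimeoutMs;
            for (int i = 0; i < maxSize; i++) {
                idle.offer(factory.get());          // pre-provision up to the policy limit
            }
        }

        // Borrow a resource, waiting up to the policy timeout if the pool is exhausted.
        public T acquire() throws InterruptedException {
            T resource = idle.poll(checkoutTimeoutMs, TimeUnit.MILLISECONDS);
            if (resource == null) {
                throw new IllegalStateException("Pool exhausted; the workload policy may need to grow the pool");
            }
            return resource;
        }

        // Return a resource so other callers (or other VMs behind a balancer) can reuse it.
        public void release(T resource) {
            idle.offer(resource);
        }
    }
    ```

    A workload manager could then grow or shrink the pool's policy limits as it observes checkout wait times across the virtual server farm.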
  3. We did not see any performance impact on CPU or memory demand and supply. The number of VMs run on each core should follow best practice, and 'average CPU cycle' wait time should be gauged before loading further. Application developers traditionally ask for 2 or 4 CPUs by default, but in a virtualized environment each additional vCPU adds overhead on the hypervisor, so unplanned CPU sizing may lead to performance impacts. The other area of concern is planning the storage arrays: if an iSCSI solution is being used, the network design and backbone selection are key.
    -Girish S

  4. In my experience with virtualization, if you oversaturate your virtual host with too many VMs, you do suffer slightly in communication response times. The penalty is not so severe that it makes virtualizing your systems useless, but it does mean that careful management of Virtual Center resources plays a critical role in preserving optimized system performance. For example, one of our virtual servers is a file share, and on a supersaturated host, retrieving files larger than 100-200 MB shows noticeable lag when transferring from one VM to another.

    Certain ESX boxes that VMware and HP sell have a cap of about 10-12 virtual machines per host. For a while my IT department was in the process of P2Ving (physical-to-virtual) about eight of our old physical servers, while we had also created some standalone VMs for network monitoring applications, which left us with 16 virtual machines on a box rated for 10-12. Depending on how much memory was allocated, the VMs with less memory assigned were more sluggish than the ones with 2-4 GB of RAM. Now that we have consolidated our monitoring applications and reduced the number of VMs we host on the ESX box, our latency offsets have not been as great, and file transfers between VMs are noticeably faster.

    In summary, virtualized systems require a bit more attentiveness to managing your network resources as well as the resources on your host machines. If you manage these properly, virtualization is a tool that can save your business a lot of money on physical equipment and the energy to power those servers, especially when dealing with large server farms.


