Modern Multi-Core Architecture: Physical vs Virtualization vs Containerization

January 6, 2020

A common question that tends to come up in meetings is "what should a modern infrastructure look like?", especially in an era in which cloud is the norm (or at least the first word asked).  Like all things, it's best to tackle this with a desired outcome in mind rather than a pile of hardware and software.

There are four schools of thought on how to handle application deployments in the modern datacenter to best utilize multi-core systems: physical, host virtualization (VMs), application virtualization (containerization), and Software as a Service (SaaS/cloud).  Each has its merits and its place.  Since the upside of any SaaS is (theoretically) not having to manage an infrastructure, we won't go into details there.

With core counts, frequencies, and memory scalability all on the rise, one of the challenges to tackle is how best to utilize those resources.  We will start with the physical methodology.

Physical Architecture


In the physical architecture the application is deployed to a static set of physical hosts, and any resiliency or sustainability normally has to come from the application layer.  Good fits for physical-only deployments are those where the desired outcome relies heavily on raw physical performance.  These might be applications where microsecond response times are critical (HFT) or where raw throughput dictates the success rate (backup or PFS).  These are going to be environments that can leverage Fibre Channel or InfiniBand to achieve those goals.

The downside to a physical architecture is that you are reliant on the application to handle core balancing and avoid orphaning memory, and you have to manage an additional layer of technology (the fabric) along with all the upkeep that entails.  These architectures also rarely lend themselves to being portable between locations or to the cloud itself.  Nor do these deployments lend themselves to allowing multiple applications for different purposes to reside on the same host.  Lastly, these deployments create challenges from a security standpoint if multiple applications do reside on the same host.
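
To make that concrete, below is a minimal sketch of the kind of manual tuning the application layer ends up owning on Linux: pinning a latency-sensitive process to specific cores and a single NUMA node.  The trading-engine binary is purely a hypothetical name.

    # Pin the process to one NUMA node for CPU and memory, so it never
    # pays the cost of remote memory access (trading-engine is hypothetical).
    numactl --cpunodebind=0 --membind=0 ./trading-engine

    # Or pin it to an explicit set of cores reserved for the application:
    taskset -c 2-5 ./trading-engine

Every one of those decisions is yours to make, document, and maintain; nothing below the application is making them for you.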

From a T3stN3t standpoint I run no dedicated physical hardware for applications.  All of the services we run have performance requirements (introduce latency into a game service and listen to the internet scream), but they do not justify the capital expense.  On the other hand, HFT (financial trading) systems and control platforms for ships are measured in millions of dollars in revenue and thousands of lives.  Those easily justify dedicated physical platforms.

Pros

  • Possibility of low response times (latency)
  • Possibility of high bandwidth (throughput)

Cons

  • Need to manage another layer in the architecture to get full value out of potential performance
  • Architecture does not lend itself to portability
  • Greater chance of orphaned resources
  • Challenges in securing the system when several unrelated applications reside on the same host

Host Virtualization Architecture

A leap of faith that you haven't been under a rock for the last 15 years and know what host virtualization is.  A small little company called VMware put the technology on the map and into pretty much every datacenter.  Right, wrong, or indifferent, the technology changed how things could be designed, architected, and deployed.  In this architecture the goal is to better utilize cores and memory to reduce orphaned resources (wasted $$$).  Since each guest OS resides in its own segment, security is easier to implement with multiple applications.  Another benefit is that, because each application can be packaged with its own OS, applications become very portable.  The value in this platform comes from its ability to provide application seat density.  It also serves as a very good development platform for new applications before they go fully into production.

While a virtual environment can be built to perform well, performance is not the primary purpose of virtualization.  I often sit in meetings where we talk about application performance and inevitably wind up talking about the hypervisor's performance, trying to drive latency to the lowest point for a handful of applications.  While this is doable, it has an overarching cost at the hypervisor level: the fewer seats you get out of the hypervisor license, the more capital you burn to run the application.  The other downside to host virtualization is guest management.  Each new application that spins up is placed in its own guest (you are doing this, right? It's a bad idea to just pile unrelated applications into the same guest).  That becomes another point that needs to be managed.

When I first designed T3stN3t it was built on a host virtualization platform (OK, when I built the first generation it was two physical nodes running Windows 2003, but we won't talk about those early days).  As new services were needed, a new guest was spun up to handle each service.  As we outgrew resources, new physical nodes were added to the cluster and resources were shifted around.  Today the only services that remain on this platform are core services such as AD and DNS, along with some Windows-based applications that have not taken to containerization (I'm staring at you, Space Engineers) and control systems.  Otherwise, in a continual effort to lower operating costs, we have moved to an application virtualization model (K8s).

Pros

  • Better resource balancing
  • Better application segregation
  • Better application portability

Cons

  • Need to manage several guest OSes
  • Additional cost for the hypervisor license
  • Performance tuning becomes challenging

Application Virtualization Architecture

Someone once asked me, "What's the big deal with this containerization stuff? It's just more virtualization."  He was absolutely on the money with the sentiment that it is just "more virtualization."  I believe the biggest difference is that it's how virtualization would be built today if it were being developed from the ground up.  The idea behind containerizing an application is that the previous layers of OS, and potentially hypervisor, become mostly immaterial to the conversation about how to support the application.  When considering this architecture, the conversation should be entirely about the application.  Does the application already come as a container?  If it does not, does it have a Linux distribution and a clearly defined data layout?  If you're developing the application, do you have a clearly defined layout?  There's normally a great deal of hubbub about all the effort it takes to "rebuild" an application to function in a container.  Those statements are normally touted by folks who either sell virtualization platforms or don't fully understand containerization.
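
To give a feel for how little "rebuilding" is usually involved, here is a minimal sketch of wrapping an existing Linux binary in a container.  The gameserver binary, paths, port, and registry name are all hypothetical stand-ins:

    # Wrap an existing Linux binary in an image with a clearly defined
    # data layout (every name here is illustrative).
    cat > Dockerfile <<'EOF'
    FROM debian:stable-slim
    COPY gameserver /usr/local/bin/gameserver
    # All mutable state lives under /data, so the container stays disposable.
    VOLUME /data
    EXPOSE 27015
    ENTRYPOINT ["/usr/local/bin/gameserver", "--data-dir", "/data"]
    EOF
    docker build -t registry.example.com/gameserver:1.0 .
    docker push registry.example.com/gameserver:1.0

If the application already runs on a Linux distribution and keeps its state somewhere predictable, that is often the entire "rebuild."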

Application virtualization's strength lies in the fact that the application is now fully decoupled from the operating system.  This allows the application maximum portability.  If you can run it in one container environment, chances are you can run it in a different one, regardless of who is hosting it.  Dirty secret: most SaaS offerings are hosted as containers.

One of the drawbacks to containers is that they are a shared-process service.  This can lead to scenarios where you get an overrun on processor cores.  It also creates its own concerns from a security standpoint.  Since applications designed in this manner are primarily built for quick deployment and portability, they are not normally designed with the highest bandwidth or lowest latency in mind.  Lastly, because application deployment is incredibly easy (normally a single command), application and service sprawl can become an issue, especially if you have no policies defined beforehand.
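
One common guardrail for that processor-core overrun is to declare resource requests and limits on each container.  A minimal sketch, with the pod name, image, and numbers purely illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: gameserver
    spec:
      containers:
      - name: gameserver
        image: registry.example.com/gameserver:1.0
        resources:
          requests:
            cpu: "500m"
            memory: 512Mi
          limits:
            cpu: "2"
            memory: 1Gi
    EOF

The request is what the scheduler reserves; the limit is the ceiling a noisy container cannot exceed, which keeps one greedy service from starving its neighbors.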

T3stN3t made the move to this architecture roughly two years ago.  This was done for three primary reasons (outside of wanting to play with new technology):

  1. Lower our VMware costs
  2. Speed up the time it takes to deploy a new service
  3. Make it easier to manage various services for both myself and the admin team

What used to take me a day or two, spinning up a new guest, patching it (because I should have done a better job keeping my templates updated), figuring out which IP was available, and then actually installing the app, I can now do in about an hour.  There was a learning curve when I first started, and it did take longer in the beginning compared to the old method, but what new technology doesn't come with a learning curve?  Now it's git clone, edit, git push, kubectl apply -f.
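
For the curious, that loop today looks roughly like this; the repository URL and manifest paths are placeholders:

    git clone https://git.example.com/t3stn3t/deployments.git
    cd deployments
    # Copy an existing manifest and tweak the name, image, and ports.
    cp services/oldservice.yaml services/newservice.yaml
    vi services/newservice.yaml
    git add services/newservice.yaml
    git commit -m "Add newservice"
    git push
    kubectl apply -f services/newservice.yaml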

More time to spend doing other things.

Pros

  • Highly portable applications
  • Ease and speed of application deployment
  • Reduced hypervisor cost

Cons

  • Application Sprawl
  • “New” Technology
  • Performance tuning becomes extremely challenging

Conclusion

Only a Sith deals in absolutes

Obi-Wan Kenobi

When designing a new architecture platform, we should strive to avoid absolutes.  Far too often I work with folks who want to start their design with an end infrastructure and work backwards to the application.  Given the three prevailing platform architectures (four with the cloud over there), it's often best to weigh the needs of the application against the infrastructure requirements, with a sprinkle of responsible capital use.  While some would love for one platform to rule them all, the reality is you might want different ones, and there are grey areas between them as well.  Almost every Kubernetes environment I have seen, or whose owners I have spoken with, started as a virtualized host on VMware or Hyper-V.  As the service grew it made less sense to virtualize your virtualization, and it eventually spun out into its own environment.  I have seen the same trend in physical applications.  It has been fun to watch things like HFT, which used to be confined to the realm of physical systems, start to branch out and adopt virtualization, if only to speed up the time it takes to redeploy the application with changes.

T3stN3t itself has undergone this same metamorphosis through the years: starting with physical systems, moving to a virtualized blade architecture, and now container-first.  As technology changes and evolves, so must the architecture, because one size will never fit all, and to not evolve and change is to cease to be in IT.
