Virtualization, in computing, refers to the act of creating a virtual version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources.
The movement toward the hybrid cloud and the software-defined data center has been ongoing for years now. We have seen the virtualization of compute, storage, and now networking. In this blog, I will discuss this journey: where we started, where we are going, and why you want to be on it.
Traditional data center models are still very prevalent and widely accepted by organizations as the de facto model for their data center(s). If you have ever managed a traditional data center, then you know the mounting challenges we face within this model.
What comprises the traditional data center model? It can be described as heterogeneous compute, physical storage, and networking managed by disparate teams, each with its own specialized skill set. Applications are typically hosted on their own physical storage, networking, and compute. All of these entities (physical storage, networking, and compute) grow with the size and number of applications. With that growth, complexity increases, agility decreases, security becomes harder to manage, and the assurance of a predictable, repeatable production environment decreases.
Characterizations of a Traditional Data Center:
Challenges around supporting these complex infrastructures include slow time to resolution when an issue arises, due to the complexities of a multi-vendor solution. Think about the last time you had to troubleshoot a production issue. In a typical scenario, you are opening multiple tickets with multiple vendors: a ticket with the network vendor, a ticket with the hypervisor vendor, a ticket with the compute vendor, a ticket with the storage vendor, and so on. Typically, they are all pointing fingers at each other, when we all know that fault always lies with the database admins.
The challenges aren't just around the complexities of design, day-to-day support, or administration; they also include lifecycle management. When it comes to lifecycle management, we are looking at the complexities of applying updates and patches. If you are doing your due diligence, you are gathering and documenting the firmware, BIOS, and software versions of all the hardware involved and comparing that information against Hardware Compatibility Lists (HCLs) and interoperability lists to ensure everything is in a supported matrix. If not, you have to update before going any further. This can be extremely time consuming, and we are typically tasked with testing in a lab that doesn't match our production environment(s) to ensure we don't bring any production systems down during the maintenance window.
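That HCL cross-check is essentially a version-matrix lookup, and sketching it makes the tedium concrete. A minimal sketch in Python; the component names, version strings, and the `hcl_violations` helper are all invented for illustration, not taken from any real vendor HCL:

```python
# Hypothetical hardware compatibility list: component -> firmware/BIOS
# versions validated for the target hypervisor release. Illustrative data only.
HCL = {
    "nic": {"4.1.0", "4.2.1"},
    "hba": {"2.8.0"},
    "bios": {"1.9", "2.0"},
}

# Inventory gathered and documented from the environment before patching.
inventory = {"nic": "4.2.1", "hba": "2.7.3", "bios": "2.0"}

def hcl_violations(inventory, hcl):
    """Return components whose installed version is not in the HCL."""
    return {
        component: version
        for component, version in inventory.items()
        if version not in hcl.get(component, set())
    }

# The HBA firmware must be updated before the hypervisor patch is applied.
print(hcl_violations(inventory, HCL))  # -> {'hba': '2.7.3'}
```

Multiply this by every server model, switch, and storage controller in the environment, and the time cost of doing it by hand becomes clear.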
The first attempt at reducing the complexities of the traditional model came with the introduction of converged infrastructure. Converged introduced us to a pizza-delivery model for infrastructure: we gather our requirements, place an order, and have it delivered on-premises, ready to be consumed. This new model brought with it a reduction in the complexities inherent in the traditional model.
What is converged infrastructure? Converged infrastructure is an approach to data center management that packages compute, storage, and virtualization in a pre-integrated, pre-tested, pre-validated, turnkey appliance. Converged systems also include central management software.
These pre-built appliances reduce support concerns because the vendor supports the entire stack. You gain that "one throat to choke" when issues arise. You are no longer required to open multiple tickets with multiple vendors; one call to the supporting vendor, and they handle troubleshooting for the hypervisor, compute, and storage. This can significantly reduce time to resolution when issues present themselves.
You also gain a reduction in data center footprint, which, in turn, reduces power and cooling costs. I worked with a customer to reduce their multi-rack traditional data center to a single-rack solution. The cost savings were tremendous, as they reduced not only power and cooling costs but also the space they paid for at the colocation facility.
With converged, you also gain a reduction in lifecycle management overhead. When an update comes out, the vendor has already pre-validated and pre-tested the update/patch and knows how it will affect your production environment. This means you gain back all the time it would take to check firmware, BIOS, and software versions against the HCL. This can be a tremendous benefit, allowing you to deploy new updates/patches with assurance.
VMware Validated Designs were also introduced to provide comprehensive, extensively tested blueprints to build and operate a Software-Defined Data Center.
With the VMware Validated Designs, VMware also allows for more flexibility with a build-your-own solution. Think of a Validated Design as a prescriptive method for reaching the SDDC: you follow the detailed guides and are assured of a specific outcome. Unlike the appliance approach, where the vendor pre-validates and pre-tests the solution and then builds it for you, here VMware handles everything but the build.
This approach has four benefits:
The converged model does still present some challenges. You may not be able to move to the latest hypervisor software when it comes out, but most don't like to be the guinea pig anyway.
Another challenge is storage. Although storage is packaged and supported in this model, you still have to manage it as you would traditional storage arrays. For example, if you need to build a new VM, you typically have to provision a LUN on the array, configure zoning and masking to present it to the hypervisor hosts, and only then deploy the VM.
To further simplify the traditional model of infrastructure, VMware brought us the Software Defined Data Center (SDDC) vision with the hyper-converged model.
What is hyper-converged infrastructure (HCI)? Hyper-converged infrastructure converges physical storage onto industry-standard x86 servers, enabling a building-block approach with scale-out capabilities. All key data center functions run as software on the hypervisor in a tightly integrated software layer, delivering through software the services previously provided by dedicated hardware.
HCI reduces the complexities of traditional storage administration by taking the intelligence of the array and bringing it into the software layer. Take the previous example: now, when we provision a VM, the storage is provisioned along with it. There is no need to log into the storage array to provision a LUN, or to configure zoning and masking, to present the newly created storage to the hypervisor environment.
Management of the storage is performed through the same vCenter Server web interface you use to manage the rest of the hypervisor environment.
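To make the contrast with the array-centric workflow concrete, here is a rough sketch of the HCI idea that storage is satisfied per VM from a policy at provisioning time. The `provision_vm` function and its policy fields are invented for illustration; this is not a real VMware API:

```python
# Illustrative only: invented function and policy fields, not a real VMware API.
def provision_vm(name, cpus, memory_gb, storage_policy):
    """In an HCI model, storage is carved from the shared pool at VM
    creation time according to a policy; there is no separate LUN,
    zoning, or masking step on a storage array."""
    return {
        "name": name,
        "cpus": cpus,
        "memory_gb": memory_gb,
        # The software layer translates the policy (e.g. failures to
        # tolerate, stripe width) into object placement across the nodes.
        "storage": {"policy": storage_policy, "state": "provisioned"},
    }

vm = provision_vm("app01", cpus=4, memory_gb=16,
                  storage_policy={"failures_to_tolerate": 1, "stripe_width": 2})
print(vm["storage"]["state"])  # -> provisioned
```

The point of the sketch is the shape of the workflow: one request carries both the compute and the storage intent, and the software layer does the rest.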
The hyper-converged environment further reduces the footprint at our data center(s) and the complexities we have in both traditional and converged environments. This new model of deploying an infrastructure gains us five benefits:
With hyper-converged, we have moved compute and storage into the software-defined layer. This simplifies the environment while retaining all the benefits of converged infrastructure.
To recap, we have talked about where we began with the traditional data center model and the challenges of administering it, along with the added benefits of converged and now hyper-converged infrastructure. Remember that at this point we have software-defined the compute and the storage, but what about the network?
In 2012, VMware acquired Nicira and one year later introduced network virtualization with NSX. To further the SDDC vision of an all software defined data center, VMware virtualized the network. We now have compute, storage, and networking in the software stack.
This year at VMworld 2017, VMware introduced the next logical iteration in the SDDC journey with VMware Cloud Foundation.
VMware Cloud Foundation encompasses the best of the VMware Validated Designs and all the benefits of hyper-converged. It brings the three software-defined solutions (compute, storage, and networking) into a single package managed by the SDDC Manager. I wrote a previous blog about VMware Cloud Foundation, which you can find here, for more insight.
Why do we want to be on this journey? VMware Cloud Foundation provides the simplest way to build an integrated hybrid cloud. It does this by providing a complete set of software-defined services for compute, storage, networking, security, and cloud management, allowing the user to run enterprise apps (traditional or containerized) in private or public environments, all while being easy to operate thanks to built-in automated lifecycle management.
This new model has four use cases:
To begin your journey toward this new infrastructure model and future-proof your data center for cloud, you start by upgrading your current vSphere 5.x environment to 6.5. Upgrading to vSphere 6.5 puts your current infrastructure in an optimal place to take advantage of the latest vSAN and NSX deployments, along with the benefits you gain from the new features in 6.5.
Benefits of vSphere 6.5:
As you can see from the picture above, the journey doesn't end with VMware Cloud Foundation but continues to progress toward the true hybrid-cloud solution announced this year at VMworld 2017: a new partnership between VMware and Amazon.
This new offering is an on-demand service that allows you to extend your on-prem data center to the Amazon cloud, which runs VMware Cloud Foundation on physical hardware in Amazon's cloud data centers. This means no converting of workloads in order to take advantage of a cloud architecture, because it runs the same SDDC stack you are running today.
VMware Cloud on AWS is ideal for customers looking to:
VMware Cloud on AWS is delivered, sold, and supported by VMware as an on-demand, scalable cloud service.
This new model is the most flexible and agile model for future data centers. It allows you to transform your business from hardware dictating where applications reside to applications driving the business in a hybrid cloud model, gaining the ability to easily migrate applications to where it makes the most sense in alignment with business requirements and objectives.
VMware announced VMware Cloud Foundation back in the general session of VMworld 2016. Cloud Foundation is a unified platform for private and public clouds.
Let's start by defining the term "cloud". This term has been thrown around a lot; some take it to mean off-premises platforms ("in the cloud"), while others use it more inclusively, covering both on-prem and off-prem platforms. Wikipedia defines it as "computing that provides shared computer processing resources and data to computers and other devices on demand". For this blog, I am using the latter definition: I think of cloud as inclusive of both off- and on-prem platforms for providing resources. I know some feel that cloud was meant to replace the on-prem private cloud, and yes, that may ultimately be the direction in years to come, but for now we live in a world of hybrid cloud, and that is what Cloud Foundation is here to assist us with.
Now that we have cleared that up, let's move on to Cloud Foundation from VMware. Cloud Foundation takes VMware's vision for the SDDC, in which compute, storage, and networking services are decoupled from the underlying hardware and abstracted into software as pools of resources, allowing IT to become more flexible and agile while also allowing for better management, and brings it into an integrated stack for cloud. This is done by defining a platform common to both private and public clouds.
The foundational components of Cloud Foundation are VMware vSphere, Virtual SAN, and NSX, and they can be packaged with the vRealize Suite to bring automation into the picture. If you are not familiar with the vRealize Suite from VMware, let's take a moment to discuss it.
The vRealize Suite is a software-defined product suite built to enable IT to create and manage hybrid clouds. It has included products like IT Business Enterprise, an IT financial management tool for managing and analyzing the costs associated with IT services, which VMware recently sold off. It also includes vCloud Automation Center, vCenter Operations Management, and Log Insight.
The management layer for Cloud Foundation is VMware's SDDC Manager. SDDC Manager serves as a single interface for managing the infrastructure. From this interface, the IT administrator can provision new cloud resources, monitor changes to the logical infrastructure, and manage lifecycle and other operational activities. The idea here is a single pane of glass for managing and monitoring all your cloud environments, whether on-prem, IBM Cloud, AWS, etc., providing ongoing performance management, capacity optimization, real-time analytics, and cloud automation.
Cloud Foundation is a flexible solution with both on-prem and off-prem deployment options, including consumption off-prem as a service. On-prem, you can choose integrated solutions from OEM providers such as VCE, with hyper-converged systems and vSAN Ready Nodes from Dell.
Cloud Foundation will help reduce the complexities faced with cloud strategies to date. The idea of "who cares where your data resides as long as it is secure and accessible" comes to mind. You can have applications delivered from multiple clouds, whether on-prem, Azure, or AWS. IT needs only a single pane of glass to monitor and manage these environments, while also allowing IT and management to track related costs, ultimately giving IT the agility to migrate between cloud platforms when needed.
A use case for this would be the merger or acquisition of a company with a hybrid cloud environment. Cloud Foundation would help manage the complexities involved with integrating those resources into your own environment while maintaining the security and integrity of your current environment.
Alongside the Cloud Foundation announcement at VMworld 2016, VMware announced a new partnership with IBM Cloud. This gives companies a choice in deploying the SDDC, whether on-prem in their own private data center(s) or with IBM. This solution is based on Cloud Foundation, allowing VMware customers to seamlessly extend private to public.
Again, the software stack includes VMware vSphere, Virtual SAN, NSX, and VMware SDDC Manager. VMware SDDC Manager was announced back at VMworld 2015, and combined with Cloud Foundation it is just the next step toward IoT and what VMware states as "Any Cloud, Any Application, Any Device". SDDC Manager allows for simplified management of a highly distributed architecture and its resources.
Cloud Foundation integrates with the entire VMware stack, which includes Horizon, the vRealize Suite, vRealize Automation, vRealize Business, OpenStack, and products like Log Insight.
With Cloud Foundation natively integrating the software-defined data center stack and SDDC Manager, customers can flexibly upgrade individual components in the stack to higher editions, allowing for flexibility in lifecycle management, which consumes a large amount of time in traditional IT.
With Cloud Foundation you can automate the entire software stack. Once the rack is installed, powered on, and networked, SDDC Manager takes the bill of materials (BOM) that was built with your partner, such as Advizex, combines it with user-provided environmental information like DNS and IP addresses, and builds out the rack. The claim is that this can reduce provisioning time from weeks to hours; those of you who have done this in a non-automated fashion can attest to how painful the process can be. When complete, you have a virtual infrastructure ready for deploying and provisioning workloads.
In the complexity of traditional, siloed IT, it takes extensive resources to provision a highly available private cloud, but with Cloud Foundation an administrator only needs to create and manage pools of resources, decreasing the time to delivery of IT resources for consumption by the end user, whether a VM or a virtual desktop. This is done through a new abstraction layer called Workload Domains.
Workload Domains are a policy-driven approach to capacity deployment. Each workload domain provides the needed capacity with specified policies for performance, availability, and security. An admin can create a workload domain for dev/test with balanced performance and a low availability requirement, while also creating one for production with high availability and high performance.
SDDC Manager translates these policies into the underlying compute resources, which allows the admin to concentrate on higher-level tasks instead of spending time researching how best to implement them.
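The translation step can be pictured as a simple policy-to-resources mapping. This is a hypothetical sketch of the workload domain idea; the profile names, failures-to-tolerate values, and host counts are invented and do not reflect SDDC Manager's actual logic:

```python
# Hypothetical mapping from an availability policy to concrete resource
# settings. All names and numbers are invented for illustration.
POLICY_PROFILES = {
    # availability level -> (failures to tolerate, minimum host count)
    "low":  {"failures_to_tolerate": 0, "min_hosts": 3},
    "high": {"failures_to_tolerate": 2, "min_hosts": 5},
}

def build_workload_domain(name, performance, availability):
    """Expand a high-level policy into the settings the admin would
    otherwise have to research and configure by hand."""
    profile = POLICY_PROFILES[availability]
    return {"name": name, "performance": performance, **profile}

dev = build_workload_domain("dev-test", performance="balanced", availability="low")
prod = build_workload_domain("production", performance="high", availability="high")
print(prod["min_hosts"])  # -> 5
```

The admin specifies intent (low or high availability); the layer below decides what that means in hosts and protection settings.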
Lifecycle management introduces a lot of complexity; patching and upgrading are typically manual processes that can lead to infrastructure issues due to interoperability and configuration errors. In turn, the validation and testing of these patches takes a lot of time away from IT staff. Sometimes patches get deployed before they have been properly vetted for security and other concerns, or patches are deferred, which slows the roll-out of new features. SDDC Manager automates these tasks for both physical and virtual infrastructure. VMware tests all Cloud Foundation components before shipping new patches to the customer.
Within the lifecycle management of Cloud Foundation, you can choose to apply patches to just certain workloads or to the entire infrastructure. SDDC Manager can patch the VMs, servers, and switches while maintaining uptime, thereby freeing resources to focus on business-critical initiatives.
Scalability is built into the platform within a hyper-converged architecture. You can start with a deployment as small as 8 nodes, and scale to multiple racks. Capacity can be added linearly in increments as small as one server node at a time within each rack allowing IT to align CapEx with business needs. Cloud Foundation automatically discovers any new capacity and adds it into the larger pool of available capacity for use.
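The linear scale-out model is easy to reason about with back-of-the-envelope math: total pool capacity is simply per-node capacity times node count. The per-node figures below are invented for illustration:

```python
# Invented per-node capacity figures, purely to illustrate linear scale-out.
NODE = {"cores": 28, "memory_gb": 512, "storage_tb": 10}

def pool_capacity(node_count):
    """Total pool capacity grows linearly with the number of nodes."""
    return {resource: amount * node_count for resource, amount in NODE.items()}

start = pool_capacity(8)             # the small starting deployment
after_growth = pool_capacity(8 + 4)  # four single-node increments later

print(start["storage_tb"], after_growth["storage_tb"])  # -> 80 120
```

Because each increment is a single node, CapEx can track demand closely instead of arriving in large forklift purchases.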
Some main use cases for Cloud Foundation are: virtual infrastructure, allowing IT to expand and contract the underlying infrastructure to meet changing business needs; IT automating IT, allowing IT to accelerate the delivery and ongoing management of infrastructure, application, and custom services while improving overall IT efficiency; and virtual desktops, making VDI deployments faster and more secure, so administrators can focus on specifying the policies and needs of the VDI infrastructure instead of dealing with the details of deploying it.
To learn more about VMware's Cloud Foundation you can visit the product page here.
You can also get hands-on with the product from the hands-on lab provided online from VMware.
HOL-1706-SDC-5 - VMware Cloud Foundation Fundamentals