Virtualization, in computing, refers to the act of creating a virtual version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources.
The movement toward the hybrid cloud and the software-defined data center has been ongoing for years now. We have seen the virtualization of compute, storage, and now networking. In this blog, I will discuss this journey: where we started, where we are going, and why you want to be on it.
Traditional data center models are still very prevalent and widely accepted by organizations as the de facto model for their data center(s). If you have ever managed a traditional data center, then you know the mounting challenges we face within this model.
What comprises the traditional data center model? A traditional data center can be described as heterogeneous compute, physical storage, and networking managed by disparate teams, each with a very unique set of skills. Applications are typically hosted on their own physical storage, networking, and compute. All of these entities (physical storage, networking, and compute) multiply with the growth in the size and number of applications. With that growth, complexity increases, agility decreases, security becomes harder to manage, and the assurance of a predictable and repeatable production environment erodes.
Characterizations of a Traditional Data Center:
Challenges around supporting these complex infrastructures include slow time to resolution when an issue arises, due to the complexities of a multi-vendor solution. Think about the last time you had to troubleshoot a production issue. In a typical scenario, you are opening multiple tickets with multiple vendors: a ticket with the network vendor, a ticket with the hypervisor vendor, a ticket with the compute vendor, a ticket with the storage vendor, and so on. Typically, all of them are pointing fingers at each other, when we all know that fault always lies with the database admins.
The challenges aren't just around the complexities of design, day-to-day support, or administration; they also include lifecycle management. When it comes to lifecycle management, we are looking at the complexities around applying updates and patches. If you are doing your due diligence, then you are gathering and documenting all the firmware, BIOS, and software versions from all the hardware involved and comparing that information against Hardware Compatibility Lists (HCLs) and Interoperability Lists to ensure that everything is in a supported matrix. If not, then you have to update before going any further. This can be extremely time consuming, and we are typically tasked with testing in a lab that doesn't match our production environment(s) to ensure we don't bring any production systems down during the maintenance window.
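To make the HCL-checking chore concrete, here is a minimal sketch of comparing an inventory against a compatibility list. The component names and version numbers are hypothetical, not a real VMware HCL:

```python
# Hypothetical compatibility list: component -> versions known to be supported.
HCL = {
    "nic_firmware": {"7.10", "7.12"},
    "hba_firmware": {"3.2"},
    "bios": {"2.4", "2.5"},
}

def check_compatibility(inventory):
    """Return the components whose installed version is not on the HCL."""
    return sorted(c for c, v in inventory.items() if v not in HCL.get(c, set()))

# A host with an out-of-date BIOS fails the check and must be updated first.
host = {"nic_firmware": "7.12", "hba_firmware": "3.2", "bios": "2.1"}
print(check_compatibility(host))  # ['bios']
```

Multiply this by every host, array, and switch in a multi-vendor environment and you can see where the time goes.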
The first attempt at reducing the complexities of the traditional model came with the introduction of converged infrastructure. Converged introduced us to a pizza-delivery model for infrastructure: we gather our requirements, place an order, and have it delivered on premises, ready to be consumed. This new model brought with it a reduction in the complexities inherent in the traditional model.
What is converged infrastructure? Converged infrastructure is an approach to data center management that packages compute, storage, and virtualization in a pre-integrated, pre-tested, pre-validated, turnkey appliance. Converged systems also include central management software.
These pre-built appliances reduce support concerns because the vendor supports the entire stack. You gain that "one throat to choke" when issues arise. You are no longer required to open multiple tickets with multiple vendors; one call to the supporting vendor and they handle troubleshooting for the hypervisor, compute, and storage. This can shorten resolution time when issues present themselves.
You gain a reduction in data center footprint which, in turn, reduces power and cooling costs. I worked with a customer to reduce their multi-rack traditional data center to a single-rack solution. The cost savings were tremendous, as they were able to reduce not only power and cooling costs, but also the cost of the space they paid for at the colocation facility.
With converged, you also gain a reduction in lifecycle management effort. When an update comes out, the vendor has already pre-validated and pre-tested the update/patch and knows how it will affect your production environment. This means you gain back all the time it would take to check the firmware, BIOS, and software against the HCL, and you can deploy new updates/patches with assurance.
VMware Validated Designs were also introduced to provide comprehensive and extensively tested blueprints to build and operate a Software-Defined Data Center (SDDC).
With VMware Validated Designs, VMware also allows for more flexibility with a build-your-own solution. Think of a Validated Design as a prescriptive method for the SDDC: you follow the detailed guides and are assured of a specific outcome. Unlike the appliance approach, where the vendor pre-validates and pre-tests the solution and then builds it for you, here VMware handles everything but the build.
This approach has four benefits:
The converged model does still present some challenges. You may not be able to move to the latest hypervisor software as soon as it comes out, but most don't like to be the guinea pig anyway.
Another challenge is storage. Although storage is packaged and supported in this model, you still have to manage it as you would traditional storage arrays. For example, if we need to build a new VM, typically we need to:
- Log into the storage array and provision a LUN
- Configure zoning and masking to present the new LUN to the hypervisor hosts
- Rescan storage on the hosts and create a datastore
- Finally, create the VM on that datastore
To further simplify the traditional model of infrastructure, VMware brought us the Software Defined Data Center (SDDC) vision with the hyper-converged model.
What is hyper-converged infrastructure (HCI)? Hyper-converged infrastructure converges physical storage onto industry-standard x86 servers, enabling a building-block approach with scale-out capabilities. All key data center functions run as software on the hypervisor in a tightly integrated software layer, delivering through software the services that were previously provided by hardware.
HCI reduces the complexities of traditional storage administration by taking the intelligence of the array and bringing it into the software layer. Take the previous example. Now, when we provision a VM, the storage is provisioned along with it. There is no need to log into the storage array and provision a LUN, or configure zoning and masking, to present the newly created storage to the hypervisor environment.
Management of the storage is performed through the same vCenter Server web interface that you use to manage the rest of the hypervisor environment.
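As a toy illustration of the difference (hypothetical function names, not the actual vSphere API), policy-driven provisioning collapses the separate array-side steps into a single operation driven by the VM's storage policy:

```python
# Traditional model: storage is prepared on the array before the VM exists.
TRADITIONAL_STEPS = [
    "provision LUN on array",
    "configure zoning and masking",
    "rescan hosts and create datastore",
    "create VM",
]

def provision_vm_hci(name, storage_policy):
    """Toy HCI provisioning: storage follows the VM's policy automatically."""
    return {"vm": name, "policy": storage_policy, "steps": ["create VM"]}

vm = provision_vm_hci("app01", "raid1-encrypted")
print(len(TRADITIONAL_STEPS), "steps before; now", len(vm["steps"]))
```

The point is not the code itself but the operational model: the storage team's ticket queue disappears from the VM provisioning workflow.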
The hyper-converged environment further reduces the footprint in our data center(s) and the complexities we have in both traditional and converged environments. This new model of deploying infrastructure gains us five benefits:
With hyper-converged, we have moved compute and storage into software. This simplifies the environment while retaining all the benefits of converged infrastructure.
To recap: we began with the traditional data center model and all the challenges of administering it listed above, then moved through the added benefits of converged and now hyper-converged infrastructures. Remember that at this point we have software-defined the compute and the storage, but what about the network?
In 2012, VMware acquired Nicira and, one year later, introduced network virtualization with NSX. To further the vision of an all-software-defined data center, VMware virtualized the network. We now have compute, storage, and networking in the software stack.
This year at VMworld 2017, VMware introduced the next logical iteration in the SDDC journey: VMware Cloud Foundation.
VMware Cloud Foundation encompasses the best of VMware Validated Designs and all the benefits of hyper-converged. It brings the three software-defined solutions (compute, storage, and networking) into a single package managed by the SDDC Manager. I wrote a previous blog about VMware Cloud Foundation, which you can find here, to gain more insight.
Why do we want to be on this journey? VMware Cloud Foundation provides the simplest way to build an integrated hybrid cloud. It does this by providing a complete set of software-defined services for compute, storage, networking, security, and cloud management, allowing you to run enterprise apps (traditional or containerized) in private or public environments, while remaining easy to operate thanks to built-in, automated lifecycle management.
This new model has four use cases:
To begin your journey toward this new infrastructure model and future-proof your data center for the cloud, you start by upgrading your current vSphere 5.x environment to 6.5. By upgrading to vSphere 6.5, you put your current infrastructure in an optimal place to take advantage of the latest vSAN and NSX deployments, along with the benefits you gain from the new features in 6.5.
Benefits of vSphere 6.5:
As you can see from the picture above, the journey doesn't end with VMware Cloud Foundation but continues toward the true hybrid-cloud solution announced this year at VMworld 2017: a new partnership between VMware and Amazon.
This new offering is an on-demand service that will allow you to extend your on-premises data center to the Amazon cloud, which runs VMware Cloud Foundation on physical hardware in Amazon's cloud data centers. This means no converting of workloads in order to take advantage of a cloud architecture, because you are running the same SDDC stack you run today.
VMware Cloud on AWS is ideal for customers looking to:
VMware Cloud on AWS is delivered, sold, and supported by VMware as an on-demand, scalable cloud service.
This new model is the most flexible and agile model for future data centers. It allows you to transform your business: instead of hardware dictating where applications reside, applications drive the business in a hybrid cloud model, and you gain the ability to easily migrate applications to wherever makes the most sense in alignment with business requirements and objectives.
Security these days can look more like the traditional needle-in-a-haystack approach than a truly security-centric approach that includes analytics and alerting. VMware is again shifting to a new paradigm, and that was evident from all the products and messaging that came out of VMworld 2017.
Security is at the forefront of all of our minds, and VMware, as the leader in data center technologies, wants to lead the conversation and be the foundation you lay down to protect your data, while adding significant value through its partnerships in the security space, like the newly announced partnership with IBM around security products such as QRadar.
With increasing attacks on our data centers (take Equifax, for example), we must first look at one of the most significant portions of our security foundation, ESXi, and work to secure it. We typically start with securing the physical layer and the edge, throw in some anti-virus, and call it secure. But are we secure?
When it comes to data center security, we must start with our foundation, ensure that we have designed it to follow recommended best practices, then evaluate the gaps, and add in products to get us the rest of the way there. This also includes following best practices for end-user access to the environments and not being "lazy" admins who skip a few steps. We have to lean on trusted partners like Sirius, which has developed a security practice that can help us navigate the waters of security, because the landscape of security products is immense, as you can see from the picture below.
So where do we begin? I believe that we must start with VMware. VMware is no longer just a hypervisor running your VMs; it is the most integral part of your data center security strategy, and if you don't get that foundation right, then the rest will crumble too. We must secure the infrastructure on which we build and architect everything else.
After we get the infrastructure secure, we move into securing the entire ecosystem: controls, automation, validations, and the security solutions themselves.
Lastly, we must get back to the basics. As VMware's CEO, Pat Gelsinger, stated, "Learn from sport teams who follow the basic regimen over and over again. Every major breach in the last five years that made headlines happened because a simple cyber hygiene wasn’t followed somewhere." VMware is working with governments to set cyber hygiene standards for the tech industry and simplify security solutions; as Gelsinger stated, "The role of the governments globally in making stronger cyber policies is equally important to ward off data breaches."
VMware has shifted to becoming a security-centric company, with added features in its base product, VMware ESXi 6.5, that represent a move toward "secure by default" and allow for a truly secure foundation on which to build the rest of the house. Let's take a look at these features.
ESXi Secure Boot
Secure Boot leverages the capabilities of UEFI firmware to ensure not only that ESXi boots with a signed bootloader validated by the host firmware, but also that unsigned code won't run on the hypervisor. UEFI, or Unified Extensible Firmware Interface, is a replacement for the traditional BIOS firmware that has its roots in the original IBM PC.
ESXi is made up of a number of components: the bootloader, the VMkernel, the Secure Boot Verifier, and VIBs, or "vSphere Installation Bundles." Each of these components is cryptographically signed.
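Conceptually, this works like a chain of trust: each stage validates the signature of the next before handing off. Here is a toy sketch of that idea, using HMAC as a stand-in for real digital signatures (the actual chain uses UEFI keys and VMware's code-signing certificates, not a shared secret):

```python
import hashlib
import hmac

KEY = b"toy-signing-key"  # stand-in for a vendor signing key held in firmware

def sign(blob):
    return hmac.new(KEY, blob, hashlib.sha256).digest()

def boot(stages):
    """Run each boot stage only if its signature verifies; halt otherwise."""
    booted = []
    for name, blob, sig in stages:
        if not hmac.compare_digest(sign(blob), sig):
            return booted, f"halt: {name} failed signature check"
        booted.append(name)
    return booted, "boot complete"

chain = [(n, n.encode(), sign(n.encode()))
         for n in ("bootloader", "vmkernel", "secure boot verifier", "vibs")]
print(boot(chain))        # every stage verifies, so the boot completes

# Swap in unsigned code for the VIB stage: the chain refuses to continue.
tampered = chain[:3] + [("vibs", b"unsigned code", chain[3][2])]
print(boot(tampered))
```

The practical consequence is the one the paragraph above describes: an unsigned or tampered component simply never runs.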
You can read more about UEFI on wikipedia.
Virtual Machine Secure Boot
Secure Boot for VMs is simple to enable. Your VM must be configured to use EFI firmware, and then you enable Secure Boot with a checkbox. (Note that if you turn on Secure Boot for a virtual machine, you can load only signed drivers into that virtual machine.)
Secure Boot for Virtual Machines works with Windows or Linux.
vSphere 6.5 introduces enhanced logging. Logs have traditionally been focused on troubleshooting, not security.
Complete logs are now sent via the syslog stream for actions like "VM Reconfigure." Logs now contain more complete information: instead of a simple notice that something changed, you will now see what changed, what it changed from, and what it changed to. You can then take action on the information collected, such as rolling back the change if it caused an issue.
You will now see logs for actions like adding more memory to a VM, and the associated logs will show you the values before and after the change. From a security perspective, you can see much more information, like who made the change, and with integrations such as VMware Log Insight you will be able to parse the data more quickly, bringing you to faster remediation.
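The idea behind the richer logs can be sketched as a simple before/after diff of the VM's configuration. The field names here are hypothetical; real vSphere events carry more context, but the shape of the information is the same:

```python
def audit_reconfigure(user, vm, old_cfg, new_cfg):
    """Emit one audit line per changed setting, showing old and new values."""
    lines = []
    for key in sorted(set(old_cfg) | set(new_cfg)):
        before, after = old_cfg.get(key), new_cfg.get(key)
        if before != after:
            lines.append(f"{user} changed {vm}.{key}: {before!r} -> {after!r}")
    return lines

print(audit_reconfigure("alice", "app01",
                        {"memoryMB": 4096, "numCPU": 2},
                        {"memoryMB": 8192, "numCPU": 2}))
# ["alice changed app01.memoryMB: 4096 -> 8192"]
```

With who, what, old, and new all in one record, a rollback or an investigation no longer starts from guesswork.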
VM Encryption/vMotion Encryption
VM encryption works by applying a new storage policy to a VM; it is policy driven. You'll be able to encrypt the VMDKs and the VM home files.
There are no modifications within the guest OS. You can run different operating systems (Linux, Windows, etc.) on different storage types (NFS, block storage, vSAN). The encryption happens outside the guest OS, and the guest does not have access to the keys.
Encryption also works for vMotion, but both the source and destination hosts must support it.
After you apply an encryption policy, each VM receives a randomly generated key, and that key is in turn encrypted with a key from the key manager.
When you power on a VM that has the encryption storage policy applied, vCenter retrieves the key from the key manager, sends it to the VM encryption module, and unlocks the key in the ESXi hypervisor.
Encrypted vMotion works by adding a randomly generated, one-time key (generated by vCenter) to the migration information sent to each of the hosts participating in the vMotion. The data going across the network is encrypted with that key for the duration of the migration only.
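The key flow described above is essentially envelope encryption: a random per-VM key protects the data, and the key manager's key protects the per-VM key. This toy sketch uses XOR as a stand-in for real key wrapping; the actual implementation uses standard cryptography via a KMIP-compatible key manager:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class KeyManager:
    """Toy KMS: holds a key-encryption key and wraps/unwraps per-VM keys."""
    def __init__(self):
        self._kek = secrets.token_bytes(32)

    def wrap(self, vm_key):
        return xor_bytes(vm_key, self._kek)   # XOR stands in for real wrapping

    def unwrap(self, wrapped):
        return xor_bytes(wrapped, self._kek)

kms = KeyManager()
vm_key = secrets.token_bytes(32)       # random key generated per VM
wrapped = kms.wrap(vm_key)             # only the wrapped form travels with the VM
assert kms.unwrap(wrapped) == vm_key   # at power-on, the key is recovered via the KMS
print("key round-trip ok")
```

Because only the wrapped key is stored with the VM, stealing the VM's files without access to the key manager yields nothing usable.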
vSphere Security Guide for vSphere 6.5
The new security guidance focuses on a smaller subset of items, shifting from VMware's traditional "Hardening Guides" to a "Security" guide. I will not go into the entire guide in this post, but you can read the post from VMware here.
Along with these new settings, the government work, and a new security guide, I think it's time to shift to the products that support VMware's security model.
The first of these is NSX. With organizations spending more on security than ever before (see Gartner), NSX becomes the next integral step in securing your production data center. I have written several blogs on NSX, so I will just give a quick recap of what NSX is.
VMware NSX provides a platform that allows automated provisioning and context-sharing across virtual and physical security platforms. Combined with traffic steering and policy enforcement at the virtual interface, partner services traditionally deployed in a physical network environment are easily provisioned and enforced in a virtual network environment. VMware NSX delivers a consistent model of visibility and security across applications residing on both physical and virtual workloads.
To further enhance NSX, VMware introduced AppDefense at VMworld 2017. AppDefense adds data center threat detection and response to the micro-segmentation capabilities delivered by NSX.
NSX prevents threats from moving freely throughout the network, while AppDefense detects anything that does make it to an endpoint and can automatically trigger responses through integrations with NSX and vSphere. The idea is to prevent, detect, and respond.
AppDefense uses machine learning: it learns an application's intended behavior, and if the application deviates from that behavior, it is quarantined. This is very different from the traditional approach of anti-virus solutions, which use definitions to secure the VM. If a new attack comes to your provider's attention, they will create a new definition once they have had time to analyze it, and then you are responsible for pushing that definition out to all your VMs. This can leave a gap in your protection.
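Conceptually, the difference from a definitions-based approach can be sketched as learning a baseline of known-good behavior and flagging anything outside it. This is toy code to illustrate the idea, not the AppDefense API:

```python
class BehaviorMonitor:
    """Toy sketch: baseline an app's observed behavior, then flag deviations."""
    def __init__(self):
        self.baseline = set()
        self.learning = True

    def observe(self, behavior):
        if self.learning:
            self.baseline.add(behavior)
            return "learned"
        return "ok" if behavior in self.baseline else "quarantine"

mon = BehaviorMonitor()
mon.observe(("nginx", "outbound", 443))   # seen during the learning phase
mon.learning = False                      # switch to enforcement
print(mon.observe(("nginx", "outbound", 443)))    # ok
print(mon.observe(("nginx", "outbound", 4444)))   # quarantine: never seen before
```

Note the inversion: instead of enumerating known-bad signatures after the fact, anything outside the learned baseline is suspect immediately, so there is no window waiting for a definition update.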
See this video below to learn more about AppDefense.
VMware has a dedicated internal team responsible for developing and driving software security initiatives across all of VMware's research and development organizations to reduce software security risk: the VMware Security Engineering, Communications & Response group (vSECR).
The vSECR group takes a full-lifecycle approach to product security, from product inception to product end of life. VMware, through vSECR, is committed to the ongoing security of its products and the safety of its customers' data.
VMware is also active in the greater security community, and is a member of SAFECode (the Software Assurance Forum for Excellence in Code) and BSIMM (Building Security In Maturity Model). For more details about VMWare product security, please refer to the VMware Product Security White Paper.
You may also be interested in the following resources:
Lastly, remember to reach out to your VMware partner, like Sirius, who can help you with security health checks and education, and help you gain confidence that your production data center environment(s) are configured correctly.
Sirius can help you prevent, detect, and respond to security threats and secure your data.
VCE provides pre-integrated, pre-tested, and pre-validated Vblock Systems.
What the heck is VCE, and what exactly does it mean to be pre-integrated, pre-tested, and pre-validated?
Let's start with a little background on VCE: who they are and, more importantly, why you should be paying attention to them.
Formed by Cisco and EMC, with investments from VMware and Intel, VCE (VMware/Cisco/EMC) is the industry leader in what is known as converged infrastructure, and according to Gartner's new study for 2014, VCE finds itself situated in the Magic Quadrant.
This market, which includes single-vendor and multi-vendor converged infrastructures as well as hyper-converged infrastructures, is estimated to grow more than 50 percent in 2014 over 2013, reaching $6 billion.
Going back to the title of this blog, VCE is providing the three Ps mentioned above. Let's take a look at each one.
Pre-integrated sure sounds interesting, but what does that mean? VCE first performs an analysis of your current environment, which includes collaborative planning and design verification, to determine the design information that will be used for the pre-integration work. They then perform all the pre-integration tasks (normally performed by your IT staff, or possibly even you) to integrate the Vblock into your existing infrastructure: Active Directory, IP addressing, SAN configuration, etc.
If you have ever prepared a data center for a new infrastructure project like this, you know the tremendous amount of work involved: running the cabling, pre-configuring switches for both network and SAN, ensuring correct power, checking the interoperability matrix, and so on. The service of pre-integration can be extremely helpful when working against tight, project-driven deadlines, and it translates to a plug-and-play model that can be rolled into your current data center and plugged in.
Pre-tested also sounds good, but what exactly is tested?
Well, once the pre-integration is complete, VCE tests the stack from the hardware layer through the software layer: testing that your SAN can handle the number of IOPS you called for in your design, that the network is configured properly, and that the hundreds of components are functioning properly. That is a lot of testing that, again, can save countless hours of work usually assigned to the IT staff.
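As a tiny illustration of the kind of check involved (the numbers and tolerance here are hypothetical, not VCE's actual test plan), validating measured IOPS against a design target might look like:

```python
def meets_design_target(measured_iops, target_iops, tolerance=0.10):
    """Pass if the measured IOPS is within the allowed tolerance of the target."""
    return measured_iops >= target_iops * (1 - tolerance)

print(meets_design_target(95_000, 100_000))   # True: within 10% of target
print(meets_design_target(80_000, 100_000))   # False: falls short of the design
```

Every component in the stack gets some version of this treatment before the system ships, which is exactly the work that would otherwise land on your own staff.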
Pre-validated means that VCE will validate the design and verify that the company's objectives are met.
This means that you can have confidence in the solution that VCE is providing to you.
By contrast, the processes used to build a custom system on site are far more complex, far less reliable, and far more expensive.
In my opinion, VCE is best of breed with a single point of support: best-of-breed technology and one vendor to contact for support, which will save IT a lot of finger pointing and headaches.