Virtualization, in computing, refers to the act of creating a virtual version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources.
The movement toward the hybrid cloud and the software-defined data center has been ongoing for years now. We have seen the virtualization of compute, storage, and now networking. In this blog, I will discuss this journey: where we started, where we are going, and why you want to be on it.
Traditional data center models are still very prevalent and accepted by organizations as the de facto model for their data center(s). If you have ever managed a traditional data center, then you know the mounting challenges we face within this model.
What comprises the traditional data center model? It can be described as heterogeneous compute, physical storage, and networking managed by disparate teams, each with a very distinct set of skills. Applications are typically hosted on their own physical storage, networking, and compute. All of these entities (physical storage, networking, and compute) grow with the size and number of applications. With growth, complexity increases, agility decreases, security becomes harder to manage, and assurance of a predictable, repeatable production environment decreases.
Characterizations of a Traditional Data Center:
Challenges around supporting these complex infrastructures include slow time to resolution when an issue arises, due to the complexities of a multi-vendor solution. Think about the last time you had to troubleshoot a production issue. In a typical scenario, you are opening multiple tickets with multiple vendors: a ticket with the network vendor, a ticket with the hypervisor vendor, a ticket with the compute vendor, a ticket with the storage vendor, and so on. Typically, all are pointing fingers at each other, when we all know that fault always lies with the database admins.
The challenges aren't just around the complexities of design, day-to-day support, or administration; they also include lifecycle management. When it comes to lifecycle management, we are looking at the complexities around applying updates and patches. If you are doing your due diligence, then you are gathering and documenting the firmware, BIOS, and software versions of all the hardware involved and comparing that information against Hardware Compatibility Lists and Interoperability Lists to ensure you are in a supported matrix. If not, then you have to update before going any further. This can be extremely time-consuming, and we are typically forced to test in a lab that doesn't match our production environment(s) to make sure we don't bring any production systems down during the maintenance window.
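To make the HCL-checking chore above concrete, here is a minimal sketch of the comparison you end up doing by hand. The component names, version numbers, and HCL minimums are all invented for illustration; real HCLs are far messier.

```python
# Hypothetical sketch: checking documented component versions against a
# vendor Hardware Compatibility List (HCL) before applying an update.
# All names and version data below are made up for illustration.

# Versions collected and documented from the running environment
inventory = {
    "server_bios": "2.4.3",
    "hba_firmware": "11.2.156.27",
    "nic_driver": "4.5.2",
}

# Minimum supported versions from the vendor's HCL for the target patch
hcl_minimums = {
    "server_bios": "2.4.0",
    "hba_firmware": "11.2.156.27",
    "nic_driver": "4.6.0",
}

def version_tuple(v):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def find_blockers(inventory, hcl):
    """Return the components that must be updated before patching."""
    return [
        name for name, minimum in hcl.items()
        if version_tuple(inventory[name]) < version_tuple(minimum)
    ]

blockers = find_blockers(inventory, hcl_minimums)
print(blockers)  # the NIC driver is below the HCL minimum
```

Even this toy version shows why the manual process hurts: every component, firmware level, and interop matrix is another entry to gather, compare, and remediate before the real maintenance work can even start.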
The first attempt at reducing the complexities of the traditional model came with the introduction of converged infrastructure. Converged introduced us to a pizza-delivery model for infrastructure: we gather our requirements, place an order, and have it delivered on premises, ready to be consumed. This new model brought with it a reduction in the complexities that are inherent in the traditional model.
What is converged infrastructure? Converged infrastructure is an approach to data center management that packages compute, storage, and virtualization in a pre-integrated, pre-tested, pre-validated, turnkey appliance. Converged systems include central management software.
These pre-built appliances reduce support concerns because the vendor supports the entire stack. You gain that "one throat to choke" when issues arise. You are no longer required to open multiple tickets with multiple vendors; one call to the supporting vendor and they handle troubleshooting for the hypervisor, compute, and storage. This can shorten resolution time when issues present themselves.
You gain a reduction in data center footprint, which in turn reduces power and cooling costs. I worked with a customer to reduce their multi-rack traditional data center to a single-rack solution. The cost savings were tremendous, as they were able to reduce not only power and cooling costs, but also the cost of the space they paid for at the colocation facility.
With converged, you also gain a reduction in lifecycle management effort. When an update comes out, the vendor has already pre-validated and pre-tested the update or patch and knows how it will affect your production environment. This means you gain back all the time it would take to check firmware, BIOS, and software against the HCL, etc. This can be a tremendous benefit, allowing you to deploy new updates and patches with assurance.
VMware Validated Designs were also introduced to provide comprehensive and extensively tested blueprints for building and operating a Software-Defined Data Center.
With the VMware Validated Designs, VMware also allows for more flexibility with a build-your-own solution. Think of a Validated Design as a prescriptive method for the SDDC: you follow the detailed guides and are assured of a specific outcome. Unlike the converged approach, where the vendor pre-validates, pre-tests, and then builds the solution for you as an appliance, here VMware handles everything but the build.
This approach has four benefits:
The converged model does still present some challenges. You may not be able to move to the latest hypervisor software as soon as it comes out, but most don't like to be the guinea pig anyway.
Another challenge is with storage. Although storage is packaged and supported in this model, you still have to manage it just as you would a traditional storage array. For example, if you need to build a new VM, typically we need to:

- Provision a LUN on the storage array
- Configure zoning and masking so the hosts can see it
- Rescan storage on the hypervisor hosts
- Create a datastore on the new LUN
- Create the VM on that datastore
To further simplify the traditional model of infrastructure, VMware brought us the Software Defined Data Center (SDDC) vision with the hyper-converged model.
What is hyper-converged infrastructure (HCI)? Hyper-converged infrastructure converges physical storage onto industry-standard x86 servers, enabling a building-block approach with scale-out capabilities. All key data center functions run as software on the hypervisor in a tightly integrated software layer, delivering through software the services that were previously provided by hardware.
HCI reduces the complexities of traditional storage administration by taking the intelligence of the array and bringing it into the software layer. Take the previous example: now, when we provision a VM, the storage is provisioned along with it. There is no need to log in to the storage array to provision a LUN, or to configure zoning and masking, in order to present the newly created storage to the hypervisor environment.
Management of the storage is performed through the same vCenter Server web interface that you use to manage the rest of the hypervisor environment.
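The provisioning contrast is easier to see side by side. Here is a toy, in-memory sketch (not a real VMware or storage API; every class and method name is a stand-in) of the traditional multi-step workflow versus the HCI one-step, policy-driven workflow.

```python
# Toy stand-ins for the moving parts in the traditional model.

class Array:
    """Stand-in for a physical storage array."""
    def __init__(self):
        self.luns, self.masks = [], []
    def create_lun(self, size_gb):
        lun = f"lun{len(self.luns)}:{size_gb}GB"
        self.luns.append(lun)
        return lun
    def mask(self, lun, host):
        self.masks.append((lun, host.name))

class Host:
    """Stand-in for a hypervisor host."""
    def __init__(self, name):
        self.name, self.datastores = name, []
    def rescan_storage(self):
        pass  # stand-in for an HBA rescan
    def create_datastore(self, lun):
        self.datastores.append(lun)
        return lun
    def create_vm(self, vm_name, datastore):
        return vm_name

def provision_vm_traditional(array, host, vm_name, size_gb):
    """Each step below is typically a different team and console."""
    lun = array.create_lun(size_gb)       # storage admin: carve a LUN
    array.mask(lun, host)                 # storage admin: zone/mask to host
    host.rescan_storage()                 # virtualization admin: rescan
    ds = host.create_datastore(lun)       # virtualization admin: datastore
    return host.create_vm(vm_name, ds)    # finally, the VM

class HciCluster:
    """Stand-in for an HCI cluster with a software storage layer."""
    def create_vm(self, vm_name, size_gb, storage_policy):
        # storage is carved from the software layer alongside the VM
        return vm_name

vm = provision_vm_traditional(Array(), Host("esx01"), "app01", 100)
hci_vm = HciCluster().create_vm("app01", 100, storage_policy="RAID-1")
```

The point is not the code itself but the shape of it: five hand-offs collapse into one policy-driven call, which is exactly the simplification HCI delivers.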
The hyper-converged environment further reduces the footprint in our data center(s) and the complexities we have in both traditional and converged environments. This new model of deploying infrastructure gains us five benefits:
With hyper-converged, we have moved compute and storage into software. This simplifies the environment while retaining all the benefits of a converged infrastructure.
To recap, we have talked about where we began with the traditional data center model and the challenges of administering it, along with the added benefits of converged and now hyper-converged infrastructures. Remember that at this point we have software-defined the compute and the storage, but what about the network?
In 2012, VMware acquired Nicira and one year later introduced network virtualization with NSX. To further the SDDC vision of an all software defined data center, VMware virtualized the network. We now have compute, storage, and networking in the software stack.
This year at VMworld 2017, VMware introduced the next logical iteration in the SDDC journey with VMware Cloud Foundation.
VMware Cloud Foundation encompasses the best of the VMware Validated Designs and all the benefits of hyper-converged. It brings the three software-defined solutions (compute, storage, and networking) into a single package managed by the SDDC Manager. I wrote a previous blog about VMware Cloud Foundation, which you can find here, to gain more insight.
Why do we want to be on this journey? VMware Cloud Foundation provides the simplest way to build an integrated hybrid cloud. It does this by providing a complete set of software-defined services for compute, storage, network, security, and cloud management, allowing the user to run enterprise apps (traditional or containerized) in private or public environments, while being easy to operate thanks to built-in, automated lifecycle management.
This new model has four use cases:
To begin your journey toward this new infrastructure model and future-proof your data center for cloud, start by upgrading your current vSphere 5.x environment to 6.5. By upgrading to vSphere 6.5, you put your current infrastructure in an optimal place to take advantage of the latest vSAN and NSX deployments, along with the benefits you gain from the new features in 6.5.
Benefits of vSphere 6.5:
As you can see from the picture above, the journey doesn't end with VMware Cloud Foundation but continues toward the true hybrid cloud solution announced this year at VMworld 2017: a new partnership between VMware and Amazon.
This new offering is an on-demand service that will allow you to extend your on-prem data center to the Amazon cloud, which is running VMware Cloud Foundation on physical hardware in Amazon's cloud data centers. This means no converting of workloads in order to take advantage of a cloud architecture, because it runs the same SDDC applications you are running today.
VMware Cloud on AWS is ideal for customers looking to:
VMware Cloud on AWS is delivered, sold, and supported by VMware as an on-demand, scalable cloud service.
This new model is the most flexible and agile model for future data centers. It will allow you to transform your business from hardware dictating where applications reside to applications driving the business in a hybrid cloud model, gaining the ability to easily migrate applications to wherever it makes the most sense in alignment with business requirements and objectives.
Security these days can be more of a traditional needle-in-a-haystack exercise than a truly security-centric approach that includes analytics and alerting. VMware is again shifting to a new paradigm, and that was evident from all the products and messaging that came out of VMworld 2017.
Security is at the forefront of all of our minds, and VMware, as the leader in data center technologies, wants to lead the conversation and be the foundation you are laying down to protect your data. VMware also adds significant value through its partnerships in the security space, like the newly announced partnership with IBM around security products such as QRadar.
With increasing attacks on our data centers (take Equifax, for example), we must first look at one of the most significant portions of our security foundation, ESXi, and work to secure it. We typically start with securing the physical layer and the edge, throw in some anti-virus, and call it secure. But are we secure?
When it comes to data center security, we must start with our foundation, ensure that we have designed it to follow recommended best practices, then evaluate the gaps and add products to get us the rest of the way there. This also includes following best practices for end-user access to the environments and not being "lazy" admins just to skip a few steps. We have to lean on trusted partners like Sirius that have developed a security practice to help us navigate the waters of security, because the landscape of security products is immense, as you can see from the picture below.
So where do we begin? I believe we must start with VMware. VMware is no longer just a hypervisor running your VMs; it is the most integral part of your data center security strategy, and if you don't get that foundation right, then the rest will crumble too. We must secure the infrastructure on which we build and architect the data center.
After we get the infrastructure secure, we move on to securing the entire ecosystem: controls, automation, validation, and the security solutions themselves.
Lastly, we must get back to basics. As VMware's CEO, Pat Gelsinger, stated: "Learn from sport teams who follow the basic regimen over and over again. Every major breach in the last five years that made headlines happened because a simple cyber hygiene wasn't followed somewhere." VMware is working with governments to set cyber hygiene standards for the tech industry and to simplify security solutions. As Gelsinger stated, "The role of the governments globally in making stronger cyber policies is equally important to ward off data breaches."
VMware has shifted to becoming a security-centric company, with added features in its base product, VMware ESXi 6.5, which represents a move toward "secure by default" and allows for a truly secure foundation on which to build the rest of the house. Let's take a look at these features.
ESXi Secure Boot
Secure Boot leverages the capabilities of the UEFI firmware to ensure that ESXi not only boots with a signed bootloader validated by the host firmware, but also that unsigned code won't run on the hypervisor. UEFI, or Unified Extensible Firmware Interface, is a replacement for the traditional BIOS firmware that has its roots in the original IBM PC.
ESXi is composed of a number of components: the boot loader, the VMkernel, the Secure Boot Verifier, and VIBs ("vSphere Installation Bundles"). Each of these components is cryptographically signed.
You can read more about UEFI on wikipedia.
Virtual Machine Secure Boot
Secure Boot for VMs is simple to enable. Your VM must be configured to use EFI firmware, and then you enable Secure Boot with a checkbox. (Note that if you turn on Secure Boot for a virtual machine, only signed drivers can be loaded into that virtual machine.)
Secure Boot for Virtual Machines works with Windows or Linux.
vSphere 6.5 introduces enhanced logging. Logs have traditionally been focused on troubleshooting and not security.
Complete logs are now sent via the syslog stream for actions like "VM Reconfigure". Logs now contain more complete information: instead of just a notice that something changed, you will now see what changed, what it changed from, and what it changed to. You can then act on the information collected, such as rolling back the change if it caused an issue.
You will now see logs for actions like adding more memory to a VM, and the associated logs will show you the values before and after the change. From a security perspective, you can see much more information, such as who made the change, and with integration with VMware Log Insight you will be able to parse the data quicker, bringing you to faster remediation.
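To show why before/after values in the log matter, here is a small sketch of parsing such an event into a change record. The key=value log line format here is invented for illustration; it is not the actual vSphere syslog format.

```python
# Hedged sketch: extracting who changed what, from which value to which,
# out of a reconfigure-style log entry. The line format is made up.
import re

log_line = (
    'vm="app01" action="VM Reconfigure" field="memoryMB" '
    'old="4096" new="8192" user="jdoe"'
)

def parse_change(line):
    """Turn a key="value" log line into a structured change record."""
    fields = dict(re.findall(r'(\w+)="([^"]*)"', line))
    return {
        "vm": fields["vm"],
        "field": fields["field"],
        "before": fields["old"],
        "after": fields["new"],
        "user": fields["user"],
    }

change = parse_change(log_line)
print(f'{change["user"]} changed {change["field"]} on {change["vm"]} '
      f'from {change["before"]} to {change["after"]}')
```

Once the log carries both the old and new values, a rollback is just reapplying `before`, and a security review can attribute every change to a user; that is the practical difference between troubleshooting-oriented and security-oriented logging.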
VM Encryption/vMotion Encryption
VM encryption works by applying a new storage policy to a VM; it is policy-driven. You'll be able to encrypt the VMDK and the VM home files.
There are no modifications within the guest OS. You can run different operating systems (Linux, Windows, etc.) and different storage (NFS, block storage, and vSAN). The encryption happens outside of the guest OS, and the guest does not have access to the keys.
The encryption also works for vMotion, but both the source and the destination hosts must support it.
After you apply an encryption policy, each VM receives its own randomly generated key, and that key is encrypted (wrapped) with a key from the key manager.
When you power on a VM that has the encryption storage policy applied, vCenter retrieves the wrapped key from the key manager, sends it to the VM encryption module, and unlocks the key in the ESXi hypervisor.
Encrypted vMotion works by adding a one-time, randomly generated key, created by vCenter, to the migration information. This key is sent to each of the hosts participating in the vMotion, and the data going across the network is encrypted with it for the duration of the migration.
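The key flow described above is a classic envelope-encryption pattern: a per-VM data key (DEK) is wrapped by a key-encryption key (KEK) held in the key manager. Here is a conceptual, dependency-free sketch of just the key flow. The "cipher" is a toy XOR stand-in so the example stays stdlib-only; do not use this as real cryptography, and the class names are illustrative, not VMware's.

```python
# Conceptual sketch of the VM encryption key flow (envelope encryption).
# XOR stands in for a real cipher like AES; this is NOT real crypto.
import secrets

def xor_bytes(data, key):
    # toy stand-in for a real symmetric cipher
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

class KeyManager:
    """Stand-in for the external key manager holding the KEK."""
    def __init__(self):
        self.kek = secrets.token_bytes(32)   # key-encryption key
    def wrap(self, dek):
        return xor_bytes(dek, self.kek)
    def unwrap(self, wrapped_dek):
        return xor_bytes(wrapped_dek, self.kek)

kms = KeyManager()

# A random per-VM key (DEK) is generated; only the wrapped copy is
# stored alongside the VM's files.
dek = secrets.token_bytes(32)
stored_with_vm = kms.wrap(dek)

# At power-on, the wrapped key is unwrapped via the key manager and
# handed to the host's encryption module, which can then open the VMDK.
unlocked = kms.unwrap(stored_with_vm)
assert unlocked == dek
```

The design point worth noting: because only the wrapped DEK lives with the VM, losing a datastore does not expose data, and rotating the KEK at the key manager re-secures every VM without re-encrypting the disks themselves.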
vSphere Security Guide for vSphere 6.5
The new security guidelines have been narrowed to a focused subset of items, changing from VMware's traditional "Hardening Guides" to a "Security" guide. I will not go through the entire guide in this post, but you can read the post from VMware here.
Along with these new settings, government work, and a new security guide, I think it's time to shift to the products that support VMware's security model.
The first of these is NSX. With organizations spending more on security than ever before (see Gartner), NSX becomes the next integral step in securing your production data center. I have written several blogs on NSX, so I will just give a quick recap of what NSX is.
VMware NSX provides a platform that allows automated provisioning and context-sharing across virtual and physical security platforms. Combined with traffic steering and policy enforcement at the virtual interface, partner services traditionally deployed in a physical network environment are easily provisioned and enforced in a virtual network environment. VMware NSX delivers a consistent model of visibility and security across applications residing on both physical and virtual workloads.
To further enhance NSX, VMware introduced AppDefense at VMworld 2017. AppDefense adds data center threat detection and response to the micro-segmentation capabilities delivered by NSX.
NSX prevents threats from moving freely throughout the network, while AppDefense detects anything that does make it to an endpoint and can automatically trigger responses through integrations with NSX and vSphere. The idea is to prevent, detect, and respond.
AppDefense uses machine learning: it learns application behavior, and if the application deviates from that behavior, it is quarantined. This is very different from the traditional approach of anti-virus solutions, which use definitions to secure the VM. If a new attack is brought to your provider's attention, they will create a new definition once they have had time to analyze it, and then you are responsible for pushing the new definition out to all your VMs. This can leave a gap in your protection.
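The learned-baseline idea can be sketched in a few lines. This is not AppDefense's actual model; the baseline entries, event shape, and "quarantine" response below are invented purely to contrast behavior-based detection with signature matching.

```python
# Minimal sketch of behavior-baseline detection: anything outside the
# behavior observed during the learning phase is treated as a deviation.

# Hypothetical baseline captured while the app ran in learning mode
learned_baseline = {
    ("webapp", "connect", "db01:3306"),
    ("webapp", "exec", "/usr/bin/python3"),
}

def check_event(app, action, target, baseline):
    """Allow behavior seen during learning; flag anything else."""
    if (app, action, target) in baseline:
        return "allow"
    return "quarantine"  # deviation from the learned behavior

print(check_event("webapp", "connect", "db01:3306", learned_baseline))   # allow
print(check_event("webapp", "connect", "evil.example:443", learned_baseline))
```

Note the inversion relative to anti-virus: a signature engine needs to have seen the attack before (allow by default, block known-bad), while a baseline engine needs to have seen the application before (block by default, allow known-good), which is why a brand-new attack still trips it.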
See this video below to learn more about AppDefense.
VMware has a dedicated internal team responsible for developing and driving software security initiatives across all of VMware's Research and Development organizations to reduce software security risks: the VMware Security Engineering, Communications & Response group (vSECR).
The vSECR group takes a full-lifecycle approach to product security, from product inception to product end of life. VMware, through vSECR, is committed to the ongoing security of its products and the safety of its customers' data.
VMware is also active in the greater security community and is a member of SAFECode (the Software Assurance Forum for Excellence in Code) and BSIMM (Building Security In Maturity Model). For more details about VMware product security, please refer to the VMware Product Security White Paper.
You may also be interested in the following resources:
Lastly, remember to reach out to your VMware partner, like Sirius, who can help you with security health checks and education, and help you gain confidence that your production data center environment(s) are configured correctly.
Sirius can help you prevent, detect, and respond to security threats and secure your data.
vRealize Network Insight, or vRNI, is the newest addition to VMware's range of products. vRealize Network Insight integrates with VMware's network virtualization platform, NSX, and delivers intelligent operations for your software-defined network environment. vRNI does for your virtualized network what vRealize Operations does for your virtualized compute environment. With the help of this product, you can optimize network performance and availability with visibility and analytics across virtual and physical networks. It also provides planning and recommendations for implementing micro-segmentation security, plus operational views to quickly and confidently manage and scale a VMware NSX deployment.
Let's take a step back and discuss, briefly, what VMware NSX is and why you should, as a technologist, care about it.
NSX is an innovative approach to solving long-standing network provisioning bottlenecks within the data center. It allows for the integration of switching, routing, and upper-layer services into an integrated application and network orchestration platform. With an overlay solution that may not require hardware upgrades, NSX offers customers a potentially quicker way of taking advantage of SDN capabilities by decoupling the network from hardware into a software abstraction layer, allowing the end user to programmatically create, provision, and manage networks.
Essentially, NSX does for your network what vSphere did for your compute environment and vSAN did for your storage. Adding network virtualization completes the SDDC vision, giving you benefits like a single pane of glass to manage your environments within vCenter, which many of us are already familiar with.
With NSX, you gain visibility into your network that you may not have today, while allowing for separation of duties in a secure manner. NSX adoption is on the rise: as of today, VMware has over 2,600 customers that have implemented NSX and a more than 50% increase in license bookings.
You can learn more on NSX from a previous blog here.
You might be familiar with the vRealize Network Assessment (vNA) and be asking yourself: what is the difference between vRealize Network Insight (vRNI) and the vRealize Network Assessment (vNA)? The difference is that vNA only gives you the report/preview portion of the product, which takes 30 minutes to install; the full product takes longer. vNA only needs to connect to vCenter and can be run with a solutions provider like Rolta AdvizeX. With vRNI, in addition to vCenter, you also connect it to the hardware, firewalls, etc.
As mentioned above, vRNI addresses the need for deeper, richer NSX operations and traffic analytics in the fast-growing virtual networking market. vRNI transforms operations for an NSX-based SDDC across your virtual, physical, and cloud environments.
Using vRNI and vNA, Rolta AdvizeX can help remove the guesswork from micro-segmentation deployments with a global NetFlow assessment, and can help you gain the operational insights needed to quickly and confidently manage and scale your NSX deployment with vRealize Network Insight.
What's New in 3.4
VMware released an update to vRealize Network Insight on June 1, 2017.
The new and enhanced features in this release are as follows:
Back on February 2nd, VMware announced two new products, VMware NSX for vSphere 6.3 and VMware NSX-T 1.1, and the adoption rate has reached new heights for VMware. As Chief Executive Pat Gelsinger mentioned on the Q4 2016 earnings call, NSX is on track to bring in $1 billion in revenue this year. That is impressive, especially if you take into account the initial slow adoption rate of NSX.
Customer demand for tighter security in the data center with NSX and micro-segmentation, for automated IT provisioning with increased efficiency, and for application continuity is helping to drive the success of NSX in corporate IT.
So what is NSX anyway? As I mentioned in a previous blog, NSX is an innovative approach to solving long-standing network provisioning bottlenecks within the data center. It allows for the integration of switching, routing, and upper-layer services into an integrated application and network orchestration platform. With an overlay solution that may not require hardware upgrades, NSX offers customers a potentially quicker way of taking advantage of SDN capabilities by decoupling the network from hardware into a software abstraction layer, allowing the end user to programmatically create, provision, and manage networks.
Let's take a look at what's new in version 6.3. You can see the announcement from VMware here.
VMware is bringing new security capabilities to NSX with Application Rule Manager, available in the NSX Advanced and Enterprise editions. Application Rule Manager creates security groups and firewall rules for applications based on observed network traffic flows (a flow being a sequence of packets from a source to a destination, which may be another host, a multicast group, or a broadcast domain). This, along with Endpoint Monitoring, available in NSX Enterprise, enables you to set profiles for applications inside the guest OS, giving you end-to-end visibility into applications while simplifying profile creation.
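The flows-to-rules idea can be sketched simply: observe who talks to whom on which port, then group sources by destination and port into candidate allow rules. This is a hypothetical illustration of the concept, not Application Rule Manager's actual data model or output.

```python
# Hypothetical sketch: aggregating observed traffic flows into suggested
# firewall allow rules, in the spirit of flow-based rule building.
from collections import defaultdict

# Invented flow records, as a monitoring period might capture them
observed_flows = [
    {"src": "web01", "dst": "app01", "port": 8443},
    {"src": "web02", "dst": "app01", "port": 8443},
    {"src": "app01", "dst": "db01", "port": 3306},
]

def suggest_rules(flows):
    """Group sources by (destination, port) into candidate allow rules."""
    groups = defaultdict(set)
    for f in flows:
        groups[(f["dst"], f["port"])].add(f["src"])
    return [
        {"sources": sorted(srcs), "dest": dst, "port": port, "action": "allow"}
        for (dst, port), srcs in groups.items()
    ]

for rule in suggest_rules(observed_flows):
    print(rule)
```

With a default-deny rule beneath these suggestions, you get a first-cut micro-segmentation policy derived from real traffic rather than guesswork, which is exactly the pain point flow-based rule building addresses.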
It is good to note that for security certification and requirements:
Here are a few other updates in NSX 6.3:
Software-defined networking with NSX rounds out VMware's Software-Defined Data Center vision, bringing the ability to automate the provisioning, and the security, of what once were very manual physical networks. VMware continues to enhance the integration of NSX load balancers with vRealize Automation and to offer support for third-party IP Address Management (IPAM) systems. VMware has also enhanced the integration between NSX for vSphere and vCloud Director. These new enhancements will enable new multi-tenant capabilities for vCloud Air Network partners.
Some other new features found in Automation for 6.3:
As the adoption of NSX increases, VMware is seeing more and more use cases around active-active data center architectures that utilize the network overlay capabilities of NSX, allowing for true workload mobility while maintaining IP addresses and consistent security policies across data centers. New enhancements in security tagging, along with simplified security policy management across multiple data centers, will help ensure a consistent and reliable virtual network in a multi-vCenter deployment.
NSX 6.3 also introduces a new ROBO (remote office/branch office) SKU, which lets you take advantage of all these features in a ROBO solution, simplifying security and management across remote branch offices.
Here are a few other features introduced in NSX 6.3:
The focus for NSX-T is on emerging application frameworks and architectures, like private IaaS on OpenStack, and multi-hypervisor support for development teams using dev clouds. NSX-T supports multiple KVM distributions, including Red Hat Enterprise Linux and Ubuntu, within the hypervisor kernel, while delivering security through distributed firewalls, logical switches, and distributed routers. This means freedom of choice for technologists, allowing them to choose what's best suited for their applications.
Integration with VMware Photon allows IT to deliver security and services to developers who are building containerized and cloud-native applications. NSX can automate the creation of networks and routers when a new namespace, project, or organization is created, and then secure it all with micro-segmentation policies for containers and pods.
As noted above, NSX now comes in Standard, Advanced, and Enterprise editions. According to CRN, NSX Enterprise is $6,995 per CPU socket, Advanced costs $4,495 per socket, and Standard costs $1,995 per socket.
See VMware NSX for more information.
If you are interested in learning more and getting some hands-on lab time with NSX, take a look at VMware's hands-on labs, here.
VMware announced VMware Cloud Foundation back in the general session of VMworld 2016. Cloud Foundation is a unified platform for private and public clouds.
Let's start with defining the term "cloud". This term has been thrown around a lot; some take it to mean strictly off-premises platforms, while others use it more inclusively to cover both on-prem and off-prem platforms. Wikipedia defines it as "computing that provides shared computer processing resources and data to computers and other devices on demand". For this blog, I am using the latter definition: I think of cloud as inclusive of both off- and on-prem platforms for providing resources. I know some feel that cloud was meant to replace the on-prem private data center, and yes, that may ultimately be the direction in years to come, but for now we live in a world of hybrid cloud, and that is what Cloud Foundation is here to help us with.
Now that we have cleared that up, let's move on to Cloud Foundation from VMware. Cloud Foundation brings VMware's vision for the SDDC, where compute, storage, and networking services are decoupled from the underlying hardware and abstracted into software as pools of resources, into an integrated stack for cloud. This allows IT to become more flexible and agile, while also allowing for better management. It is done by defining a platform common to both private and public clouds.
The foundational components of Cloud Foundation are VMware vSphere, Virtual SAN, and NSX, and the stack can be packaged with the vRealize Suite to bring automation into the picture. If you are not familiar with the vRealize Suite from VMware, let's take a moment to discuss it.
The vRealize Suite is a software-defined product suite built to enable IT to create and manage hybrid clouds. It includes products like IT Business Enterprise (which VMware recently sold off), an IT financial management tool for managing and analyzing the costs associated with IT services. It also includes vCloud Automation Center, vCenter Operations Management, and Log Insight.
The management plane for Cloud Foundation is VMware's SDDC Manager. SDDC Manager serves as a single interface for managing the infrastructure. From this interface, the IT administrator can provision new cloud resources, monitor changes to the logical infrastructure, and manage lifecycle and other operational activities. The idea here is a single pane of glass for management and monitoring of all your cloud environments, whether on-prem, IBM Cloud, AWS, etc., providing ongoing performance management, capacity optimization, real-time analytics, and cloud automation.
Cloud Foundation is a flexible solution: it can be deployed on-prem or consumed off-prem as a service. On-prem, you can choose integrated solutions from OEM providers, such as hyper-converged systems from VCE and vSAN Ready Nodes from Dell.
Cloud Foundation will help reduce the complexities faced with cloud strategies to date. The idea of "who cares where your data resides as long as it is secure and accessible" comes to mind. You can have applications delivered from multiple clouds, whether on-prem or off-prem, Azure, or AWS. IT needs only a single pane of glass to monitor and manage these environments, while also allowing IT and management to track the related costs. This ultimately gives IT the agility to migrate between cloud platforms when needed.
A use case for this would be a merger and acquisition of a company with a hybrid cloud environment. Cloud Foundation would help manage the complexities involved with integrating those resources into your own environment while maintaining security and the integrity of your current environment.
Alongside the Cloud Foundation announcement at VMworld 2016, VMware announced the new partnership with IBM Cloud. This gives companies a choice in deploying the SDDC, whether it be on-prem in their own private data center(s) or with IBM. The solution is based on Cloud Foundation, allowing VMware customers to seamlessly extend private to public.
Again, the software stack includes VMware vSphere, Virtual SAN, NSX, and VMware SDDC Manager. SDDC Manager was announced back at VMworld 2015 and, combined with Cloud Foundation, is just the next step toward what VMware states as "Any Cloud, Any Application, Any Device". SDDC Manager allows for simplified management of a highly distributed architecture and its resources.
Cloud Foundation integrates with the entire VMware stack, which includes Horizon, vRealize Suite, vRealize Automation, vRealize Business, OpenStack, and products like Log Insight.
Because Cloud Foundation natively integrates the software-defined data center stack with SDDC Manager, customers can flexibly upgrade individual components in the stack to higher editions, allowing for flexibility in lifecycle management, which consumes a large amount of time in traditional IT.
With Cloud Foundation you can automate the deployment of the entire software stack. Once the rack is installed, powered on, and networked, SDDC Manager takes the BOM that was built with your partner (such as Advizex) and combines it with user-provided environmental information like DNS and IP addresses to build out the rack. The claim is that this can reduce provisioning time from weeks to hours; those of you who have done this in a non-automated fashion can attest to how painful the process can be. When complete, you have a virtual infrastructure ready for deploying and provisioning workloads.
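As a rough illustration of the kind of environmental input an automated bring-up consumes, here is a minimal sketch; the keys, addresses, and validation rules are my own illustrative assumptions, not SDDC Manager's actual input format.

```python
# Hypothetical sketch of bring-up parameters for an automated rack
# build-out. All names and values are illustrative assumptions.
bringup_spec = {
    "dns_servers": ["10.0.0.53", "10.0.1.53"],
    "ntp_servers": ["10.0.0.123"],
    "management_network": {"subnet": "10.0.10.0/24", "gateway": "10.0.10.1"},
    "host_ip_pool": [f"10.0.10.{i}" for i in range(11, 19)],  # 8 hosts
}

def validate_spec(spec):
    """Minimal sanity checks before kicking off an automated build-out."""
    required = {"dns_servers", "ntp_servers",
                "management_network", "host_ip_pool"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"missing bring-up parameters: {sorted(missing)}")
    if len(spec["host_ip_pool"]) < 8:
        raise ValueError("need at least 8 host IPs for the initial rack")
    return True

print(validate_spec(bringup_spec))
```

The point of the sketch is simply that once this environmental information is supplied up front, the rest of the build-out can proceed without manual intervention.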
With the complexities and silos of traditional IT, it takes extensive resources to provision a highly available private cloud, but with Cloud Foundation an administrator only needs to create and manage pools of resources, decreasing the time to deliver IT resources for consumption by the end user, whether a VM or a virtual desktop. This is done through a new abstraction layer called Workload Domains.
Workload Domains are a policy-driven approach to capacity deployment. Each workload domain provides the needed capacity with specified policies for performance, availability, and security. An admin can create a workload domain for dev/test with balanced performance and low availability requirements while also creating one for production with high availability and high performance.
The SDDC Manager translates these policies into the underlying compute resources, which allows the admin to concentrate on higher-level tasks instead of spending time researching how best to implement them.
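To make the policy-to-resource translation concrete, here is a minimal sketch; the tier names and the settings they map to are illustrative assumptions, not SDDC Manager's real logic.

```python
# Hypothetical sketch of policy-driven capacity planning for workload
# domains. Profile names and resource settings are illustrative
# assumptions, not actual SDDC Manager behavior.
POLICY_PROFILES = {
    # (performance, availability) -> illustrative placement settings
    ("balanced", "low"):  {"hosts": 3, "failures_to_tolerate": 0},
    ("high", "high"):     {"hosts": 6, "failures_to_tolerate": 2},
}

def plan_workload_domain(name, performance, availability):
    """Translate a named policy pair into concrete resource settings."""
    profile = POLICY_PROFILES[(performance, availability)]
    return {"name": name, **profile}

dev = plan_workload_domain("dev-test", "balanced", "low")
prod = plan_workload_domain("production", "high", "high")
print(dev, prod)
```

The admin states intent (performance, availability); the translation layer decides how many hosts and how much redundancy that intent requires.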
Lifecycle management introduces a lot of complexity: patching and upgrading are typically manual processes that can lead to issues within an infrastructure due to interoperability and configuration errors. In turn, validating and testing these patches takes a lot of an IT staff's time. Sometimes patches get deployed before they have been properly vetted for security and other concerns, or they get deferred, which can slow the roll-out of new features. SDDC Manager automates these tasks for both the physical and virtual infrastructure. VMware tests all the components of Cloud Foundation before shipping new patches to the customer.
Within the lifecycle management of Cloud Foundation, you can choose to apply patches to just certain workloads or to the entire infrastructure. SDDC Manager can patch the VMs, servers, and switches while maintaining uptime, thereby freeing resources to focus on business-critical initiatives.
Scalability is built into the platform's hyper-converged architecture. You can start with a deployment as small as 8 nodes and scale to multiple racks. Capacity can be added linearly in increments as small as one server node at a time within each rack, allowing IT to align CapEx with business needs. Cloud Foundation automatically discovers any new capacity and adds it to the larger pool of available capacity.
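The linear scaling model can be sketched like this; the Node shape and the capacity numbers are illustrative assumptions, not Cloud Foundation internals.

```python
# Hypothetical sketch of linearly growing an aggregate capacity pool
# one node at a time. Numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_ghz: float
    ram_gb: int
    storage_tb: float

def pool_capacity(nodes):
    """Aggregate discovered nodes into one pool of available capacity."""
    return {
        "cpu_ghz": sum(n.cpu_ghz for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
    }

rack = [Node(41.6, 512, 10.0) for _ in range(8)]  # minimum 8-node start
rack.append(Node(41.6, 512, 10.0))                # scale by one node
print(pool_capacity(rack)["ram_gb"])              # 9 nodes x 512 GB = 4608
```

Each added node simply folds into the pool; nothing about the consumption model changes as the rack grows.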
Some main use cases for Cloud Foundation: virtual infrastructure, allowing IT to expand and contract the underlying infrastructure to meet changing business needs; IT automating IT, allowing IT to accelerate the delivery and ongoing management of infrastructure, application, and custom services while improving overall IT efficiency; and virtual desktop, making VDI deployments faster and more secure, so administrators can focus on specifying the policies and needs of the VDI infrastructure instead of dealing with the details of deploying it.
To learn more about VMware's Cloud Foundation you can visit the product page here.
You can also get hands-on with the product from the hands-on lab provided online from VMware.
HOL-1706-SDC-5 - VMware Cloud Foundation Fundamentals
Back in July 2016, VMware issued a Field Advisory announcing bugs in the NSX for vSphere 6.2.3 release. VMware urged its user community not to upgrade to this version and, for those who already had, released 6.2.3a to resolve the issues. The issues VMware found were that both primary and secondary HA nodes could be placed into the Active state, causing network disruption, along with issues related to DFW rules causing traffic disruptions.
In August, VMware released the new version, 6.2.4, for general availability. This release includes critical fixes for previously identified bugs, including a critical input validation vulnerability for sites that use NSX SSL VPN. You can see the full list of what's new in the release notes.
I discussed most of the new features in a previous post, which you can find here. In this new version, the only thing listed as new is a feature called the "Firewall Status API".
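As a rough sketch, querying firewall publish status over the NSX REST API might look like the following. The manager hostname and credentials are placeholders, and the endpoint path reflects my reading of the NSX for vSphere API guide, so verify it against the official documentation before relying on it.

```python
# Hypothetical sketch of building a firewall-status query against an
# NSX Manager. Host and credentials are placeholders; the endpoint
# path is an assumption to verify against the NSX API guide.
import base64

def firewall_status_request(manager, user, password):
    """Build the URL and headers for a firewall status query."""
    url = f"https://{manager}/api/4.0/firewall/globalroot-0/status"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Accept": "application/xml"}
    return url, headers

url, headers = firewall_status_request("nsxmgr.example.local",
                                       "admin", "secret")
print(url)
```

From here you would issue the GET with your HTTP client of choice and inspect the returned publish status.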
VMware has also announced the End of Availability (EOA) and End of General Support (EOGS) for vCloud Networking and Security 5.5.x. The date for both is September 19, 2016.
You can see a list of NSX trending issues here.
On June 9, 2016, VMware announced version 6.2.3 of the NSX platform, a minor release of its network virtualization offering.
The NSX solution is an innovative approach to solving long-standing network provisioning bottlenecks within the data center, allowing for the integration of switching, routing, and upper-layer services into an integrated application and network orchestration platform. As an overlay solution that may not require hardware upgrades, NSX offers customers a potentially quicker way of taking advantage of SDN capabilities by decoupling the network from hardware into a software abstraction layer, allowing the end user to programmatically create, provision, and manage networks.
Networking and Edge Services
The release notes for NSX for vSphere 6.2.3 can be found here.
Disruptive innovation is a term coined by Clayton Christensen. It describes a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up-market, eventually displacing established competitors.
For example, take a look at what a company like Uber has done to the taxi service in San Francisco. They don't hire drivers like Yellow Cab. They don't own a fleet of cars. They built an application. An application that has been very disruptive to the taxi industry and is changing the landscape of ride-hailing services.
Thanks to Uber, San Francisco's largest yellow cab company is filing for bankruptcy. Yellow Cab Co-op President Pamela Martinez was quoted saying that some of the financial setbacks "are due to business challenges beyond our control and others are of our own making." Yellow Cab's drivers are flocking to Uber, an app-based enterprise, lured by the promise of more riders and better schedules.
Yellow Cab has been turned on its head by a disruptive innovation. Uber has disrupted the ride-hailing industry with a lasting impact that is now moving across the country.
Why do I point this out? Because you are either being disrupted or are the disrupter. Think about that for a second. Ask Yellow Cab how it feels to be disrupted in an industry it felt very secure in before an application took over.
Look at companies like Blockbuster. I bet you can tell me who disrupted them? Got it in your mind?
Blockbuster at its peak in 2004 had nearly 60,000 employees and over 9,000 store locations. In 2000, a fledgling company came on the scene, slowly changing the landscape of the movie rental industry and eventually bankrupting Blockbuster in 2010.
If you were thinking of Netflix, then you are correct. Netflix is now a $28 billion company, about ten times what Blockbuster was worth. Blockbuster has been greatly disrupted and is reinventing itself.
You can either be disrupted or be the disrupter, as with VMware. VMware has been a disruptive force in the technology industry from its entry with vSphere to its latest creations like the SDDC, vSAN, and NSX. vSphere changed the landscape of compute forever, moving CPU, memory, etc. into software, removing the dependency on hardware, and its API has now become the most popular infrastructure management API in use today.
Disruption doesn't happen overnight; disruption happens gradually. Remember, disruptive innovation takes root and relentlessly moves up-market. Uber didn't overtake Yellow Cab overnight, and neither did Netflix overtake Blockbuster. A disruptor was introduced and slowly moved to overtake the industry.
The same is true for vSphere. Industry leaders were hesitant to adopt such a drastically different technology but now this tried, tested and proven technology is the leader in x86 server virtualization infrastructure.
VMware continues to be a disruptive force in the technology industry. Look at the movement to hyper-converged. Hyper-converged is about software, not hardware: hyper-converged systems derive from being able to support all infrastructure in software, without the need for separate dedicated hardware such as a storage array or Fibre Channel switch. And what is the core software technology in just about every hyper-converged product available today? VMware vSphere and the Software Defined Data Center.
VMware is disrupting the way that we have traditionally approached the data center. Fully virtualized infrastructure, delivered on a flexible mix of private and hybrid clouds. I'm sure you have all heard the mantra, "One Cloud, Any application, Any Device." This is the next evolution in data center technology and VMware continues to lead disruptive change with products like NSX for Software Defined Networking (SDN).
NSX, like vSphere, has had a slow adoption. I find myself having the same conversations with customers that I had when vSphere was introduced. You don't have to convince customers of the value of vSphere anymore. The speed of adoption is picking up: VMware saw a threefold increase in the number of paying customers for its NSX network virtualization product, and in Q4 of 2015, 9 out of 10 VMware deals included NSX.
The NSX solution is an innovative approach to solving long-standing network provisioning bottlenecks within the data center, and it allows for the integration of switching, routing and upper-layer services into an integrated application and network orchestration platform. With an overlay solution that may not require hardware upgrades, NSX offers customers a potentially quicker way of taking advantage of SDN capabilities.
NSX is that disruptor in the networking industry bringing agility to existing network deployments with limited impact to existing network hardware and offering all of this without vendor lock-in. VMware NSX works across many IP-based network installations and in virtual environments running mainstream hypervisors and has established relationships with a broad set of IT vendor partners to provide integration of security and optimization solutions, as well as key network hardware players, such as Palo Alto, Arista Networks, Brocade, Dell, HP and Juniper Networks.
Remember back in the beginning of this blog where I quoted President Pamela Martinez as saying that some of the financial setbacks "are due to business challenges beyond our control and others are of our own making"? Some challenges were of their own making. Remember, too, that disruptive innovation happens over a period of time. It took ten years for Netflix to overtake Blockbuster. Could Blockbuster have moved quicker to ensure its spot as the leader in the online movie rental industry? The same is true with VMware and vSphere. This disruptive innovation took time to take hold, and it is still a driving force changing the industry with the SDDC.
VMware NSX is picking up steam and is at the heart of every solution, from hyper-converged to hybrid cloud, that companies are moving toward. The question is: will you be disrupted, or be part of the disruption? I want to be part of the disruption and drive change in what is an exciting time to be in this industry. Will you be disrupted or will you help disrupt? It's a call to action: be the disruptive force your company doesn't even know it needs, because NSX will do for networking what vSphere did for compute.
Disrupt or be disrupted.
I recently passed my VCP6-NV and wanted to take some time to blog about the experience and to gather together some resources for those looking to pursue this certification.
For those of you who may not know much about NSX, I will start with a brief introduction and explain why I feel you should pursue this certification for your company.
What is NSX? VMware NSX is the next evolution in software-defined everything. It is VMware's network virtualization and security software platform, which came from the acquisition of Nicira back in 2012.
What does NSX do? NSX decouples network functions from the physical network devices in your data center, in a way that is analogous to decoupling virtual servers from physical hardware. NSX natively creates the traditional network constructs in the virtual realm, including ports, switches, routers, firewalls, load balancers, etc.
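To give a feel for "constructs created programmatically," here is a sketch of building a logical switch (virtual wire) creation request. The host, transport zone ID, and names are placeholders, and the endpoint reflects my understanding of the NSX-v REST API, so check the API guide before using it.

```python
# Hypothetical sketch of a logical switch creation request against an
# NSX Manager. Host and transport zone ID are placeholder assumptions;
# the endpoint and XML shape should be verified against the NSX-v
# API guide.
NSX_MANAGER = "nsxmgr.example.local"   # placeholder NSX Manager host
TRANSPORT_ZONE = "vdnscope-1"          # placeholder transport zone ID

def logical_switch_request(name, description=""):
    """Build the URL and XML body for creating a logical switch."""
    url = (f"https://{NSX_MANAGER}/api/2.0/vdn/scopes/"
           f"{TRANSPORT_ZONE}/virtualwires")
    body = (
        "<virtualWireCreateSpec>"
        f"<name>{name}</name>"
        f"<description>{description}</description>"
        "<tenantId>virtual wire tenant</tenantId>"
        "</virtualWireCreateSpec>"
    )
    return url, body

url, body = logical_switch_request("web-tier-ls", "web tier segment")
print(url)
```

The same request-building pattern applies to routers, firewall rules, and load balancers: each construct is just an API object you can create, provision, and manage in software.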
I could write an entire blog just on the features of NSX and the integrations with third-party vendors such as Palo Alto Networks and Trend Micro. Oh wait, I did; you can read it in my blog here. But that is not what this blog is about, so let's move on.
The VMware Certified Professional Network Virtualization exam tests candidates on their knowledge and ability to demonstrate basic virtual networking skills such as vSwitches and vSphere Distributed Switches, installation and configuration of NSX, and finally administration of NSX. In order to pass the exam, you will need an in-depth understanding of these areas. Hands-on experience with both NSX and vSphere is highly recommended; in fact, I believe VMware recommends at least six months of hands-on experience.
I would recommend setting aside dedicated time to go over the following resources along with practicing packet walks and architecture design.
These are the resources that I used to study for the exam over a period of 6 months.
Section 1 – Define VMware NSX Technology and Architecture
The test consists of 80+ questions with approximately one minute per question, which doesn't seem like a lot of time but is plenty. You can also mark questions for review. I found that once I completed the exam, I had enough time to go back through all the questions once more to check for anything I missed.
So, now that I have reviewed what NSX is and discussed the exam, the next question is: why should you take it? Besides certifications being a great way to show value to your company, more importantly, NSX is the next big wave in the virtual realm.
I chose to take this exam because I believe NSX is the next step in virtualizing the data center, and I wanted to be on the forefront to help lead the direction for my company and our customers. I have the same excitement about NSX that I felt when I first became engaged with ESX.
Since taking the exam, I have been traveling between Buffalo and Albany, NY, speaking to customers and whiteboarding their environments. This has led to better engagements with customers and within VMUG (VMware User Group), where I lead three groups: Albany (now Capital District), Syracuse, and Rochester.
NSX will change the face of networking just as vSphere did for physical servers. If you want to help drive the future direction of your company and help it become more secure, agile, and flexible, or if your company, like many others, is in the process of developing its cloud strategy, then NSX can play a large role in that.
Bringing VMware NSX and Horizon together
Virtual desktop infrastructure (VDI) has become an increasingly popular virtualization option for many organizations and VMware customers.
VMware continues to work with partners to advance the protection of VDI deployments. Most recently, the focus has been on introducing advanced security controls for VMware NSX (the network virtualization platform) and Horizon 6 (VDI) environments. VDI in combination with NSX offers organizations the chance to make huge leaps forward in the security and management of their virtualized desktop deployments.
Two big challenges that have slowed the adoption of large-scale desktop virtualization in the past are:
NSX addresses these concerns and much more.
Security for VDI deployments is more critical because of the need to limit “east-west traffic,” the internal traffic in the data center. However, “east-west traffic” isn’t monitored well, if at all, by traditional perimeter defenses. For example, a basic web-surfing or email mistake by a trusted end user could bring a threat right past those defenses into your data center, resulting in a breach.
VMware NSX with Horizon enables micro-segmentation and automates the deployment and provisioning processes. This allows for the insertion of advanced security services from third parties that include:
This provides instant, automated protection as soon as a new virtual desktop is spun up.
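A minimal sketch of how policy can follow a desktop automatically: group membership is computed from VM attributes, so a freshly provisioned desktop picks up the group's rules with no manual steps. The naming convention and rule shapes here are illustrative assumptions, not NSX's actual DFW model.

```python
# Hypothetical sketch of dynamic group membership driving firewall
# policy for new desktops. Naming convention and rules are
# illustrative assumptions.
def in_vdi_group(vm):
    """Dynamic security-group style match on a name prefix."""
    return vm["name"].startswith("win10-desktop-")

VDI_RULES = [
    {"action": "allow", "service": "PCoIP",
     "source": "connection-servers"},
    # no desktop-to-desktop (east-west) traffic
    {"action": "block", "service": "any", "source": "vdi-desktops"},
]

def effective_rules(vm):
    """A VM inherits the group's rules the moment it matches."""
    return VDI_RULES if in_vdi_group(vm) else []

new_desktop = {"name": "win10-desktop-042"}  # just spun up
print(len(effective_rules(new_desktop)))
```

Because the policy is attached to the group rather than to any individual machine, protection is in place the moment the desktop exists.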
NSX brings security inside the data center with automated, fine-grained policies tied to the virtual machines, while its network virtualization capabilities let you create entire networks in software, without touching the underlying physical infrastructure.
To learn more about NSX and Horizon see the VMware Deep-dive video below.