The movement toward a hybrid cloud, software-defined data center has been ongoing for years now. We have seen the virtualization of compute, storage, and now networking. In this blog, I will be discussing this journey: where we started, where we are going, and why you want to be on this journey.
Traditional data center models are still very prevalent and accepted by organizations as the de facto model for their data center(s). If you have ever managed a traditional data center, then you know the mounting challenges we face within this model.
What comprises the traditional data center model? It can be described as heterogeneous compute, physical storage, and networking managed by disparate teams, each with its own specialized skill set. Applications are typically hosted on their own physical storage, networking, and compute. All of these entities (physical storage, networking, and compute) multiply as applications grow in size and number. With that growth, complexity increases, agility decreases, security becomes harder to manage, and the assurance of a predictable, repeatable production environment erodes.
Characterizations of a Traditional Data Center:
Challenges around supporting these complex infrastructures include slow time to resolution when an issue arises, owing to the complexities of a multi-vendor solution. Think about the last time you had to troubleshoot a production issue. In a typical scenario, you are opening multiple tickets with multiple vendors: a ticket with the network vendor, a ticket with the hypervisor vendor, a ticket with the compute vendor, a ticket with the storage vendor, and so on. Typically, they are all pointing fingers at each other, when we all know that fault always lies with the database admins.
The challenges aren't just around the complexities of design, day-to-day support, or administration; they also include lifecycle management. When it comes to lifecycle management, we are looking at the complexities of applying updates and patches. If you are doing your due diligence, then you are gathering and documenting the firmware, BIOS, and software versions of all the hardware involved in the update or patch, and comparing that information against Hardware Compatibility Lists and Interoperability Lists to ensure everything is in a supported matrix. If it is not, then you have to update before going any further. This can be extremely time consuming, and we are typically tasked with testing in a lab that doesn't match our production environment(s) to ensure we don't bring any production systems down during the maintenance window.
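The matrix check described above can be sketched in a few lines of code. This is a minimal, illustrative sketch: the component names and version strings are hypothetical, and in practice the inventory would come from vendor tooling and the supported versions from the published Hardware Compatibility List.

```python
# Hypothetical firmware/BIOS inventory gathered from the environment.
inventory = {
    "server-bios": "2.4.1",
    "hba-firmware": "8.1.0",
    "nic-driver": "4.2.7",
}

# Hypothetical supported versions from the Hardware Compatibility List.
hcl_supported = {
    "server-bios": {"2.4.1", "2.5.0"},
    "hba-firmware": {"8.2.0"},          # installed 8.1.0 is out of matrix
    "nic-driver": {"4.2.7", "4.3.0"},
}

def check_matrix(inventory, hcl):
    """Return components whose installed version falls outside the supported matrix."""
    return {component: version
            for component, version in inventory.items()
            if version not in hcl.get(component, set())}

for component, version in check_matrix(inventory, hcl_supported).items():
    print(f"UPDATE REQUIRED: {component} {version} is not in the supported matrix")
```

Even a simple check like this makes the point: every component that falls outside the matrix is another update you must perform, and another test cycle, before the real maintenance work can begin.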
Security these days can feel like the traditional needle-in-a-haystack approach rather than a truly data-centric security approach that includes analytics and alerting. VMware is again shifting to a new paradigm, and that was evident from all the products and messaging that came out of VMworld 2017.
Security is at the forefront of all of our minds, and VMware, as the leader in data center technologies, wants to lead the conversation and be the foundation you lay down to protect your data. It also adds significant value through its partnerships in the security space, like the newly announced partnership with IBM around security products such as QRadar.
With increasing attacks on our data centers (take Equifax, for example), we must first look at one of the most significant portions of our security foundation, ESXi, and work to secure it. We typically start with securing the physical layer and the edge, throw in some anti-virus, and call it secure. But are we secure?
When it comes to data center security, we must start with our foundation: ensure that we have designed it to follow recommended best practices, then evaluate the gaps, and add products to get us the rest of the way there. This also includes following best practices for end-user access to the environments and not being "lazy" admins just to skip a few steps. We have to lean on trusted partners like Sirius that have developed a security practice to help us navigate these waters, because the landscape of security products is immense, as you can see from the picture below.
VCE provides pre-integrated, pre-tested, and pre-validated Vblock Systems.
What the heck is VCE, and what exactly does it mean to be pre-integrated, pre-tested, and pre-validated?
Let's start with a little background on VCE: who they are and, more importantly, why you should be paying attention to them.
Formed by Cisco and EMC, with investments from VMware and Intel, VCE (VMware/Cisco/EMC) is the industry leader in what is known as "converged infrastructure," and according to the new Gartner study for 2014, VCE finds itself situated in the Magic Quadrant.
This market, which includes single-vendor and multi-vendor converged infrastructures as well as hyper-converged infrastructures, is estimated to grow more than 50 percent in 2014 over 2013, reaching $6 billion.
Going back to the title of this blog and the statement above, VCE is providing the three "pre"s with its Vblock Systems. Let's take a look at each one of these.
Pre-integrated sure sounds interesting, but what does that mean? VCE first performs an analysis of your current environment, which includes collaborative planning and design verification, to determine the design information that will be used for the pre-integration work. They then perform all the pre-integration tasks normally handled by your IT staff (or possibly even you) to integrate the Vblock into your existing infrastructure: Active Directory, IP addressing, SAN configuration, and so on.
If you have ever prepared a data center for a new infrastructure project like this, you know the tremendous amount of work that must be performed: running the cabling, pre-configuring switches for both network and SAN, ensuring correct power, checking the interoperability matrix, and more. The service of pre-integration can be extremely accommodating when working with tight, project-driven deadlines, and it translates to a plug-and-play model that can be rolled into your current data center and plugged in.
Pre-tested also sounds good, but what exactly is tested?
Once the pre-integration is complete, VCE tests the stack from the hardware layer through to the software layer: testing that your SAN can handle the IOPS you called for in your design, that the network is configured properly, and that the hundreds of components are functioning properly. That is a lot of testing, which again can save countless hours of work usually designated to the IT staff.
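To give a feel for the kind of storage validation described above, here is a minimal sketch of a random-read IOPS measurement. This is an illustrative toy, not VCE's actual test harness: it benchmarks a small scratch file (where the OS page cache will inflate the numbers), whereas a real validation would drive the actual SAN LUNs with a purpose-built tool.

```python
import os
import random
import tempfile
import time

def measure_random_read_iops(path, block_size=4096, duration=1.0):
    """Issue random block-sized reads against a file and count operations per second."""
    blocks = os.path.getsize(path) // block_size
    ops = 0
    deadline = time.monotonic() + duration
    with open(path, "rb", buffering=0) as f:  # unbuffered reads
        while time.monotonic() < deadline:
            f.seek(random.randrange(blocks) * block_size)
            f.read(block_size)
            ops += 1
    return ops / duration

# Create a 4 MiB scratch file to benchmark; a real test would target the SAN.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4096 * 1024))
    scratch = tmp.name

iops = measure_random_read_iops(scratch)
print(f"Observed random-read IOPS: {iops:,.0f}")
os.remove(scratch)
```

The real point is the comparison: the measured figure is checked against the IOPS target called for in the design, and any shortfall is resolved before the system ever reaches your data center floor.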
Pre-Validated means that VCE will validate the design and validate that the company objectives are met.
This means that you can have confidence in the solution that VCE is providing to you.
The end result: compared with a pre-built Vblock, the processes used to build a custom system on site are far more complex, far less reliable, and far more expensive.
In my opinion, VCE is best of breed with a single point of support. That means best-of-breed technology and one vendor to contact for support, which will save IT a lot of finger-pointing and headaches.