Vblock Systems Management: Infrastructure Lifecycles

In a bit of cool, cross-company promotion, folks from VCE will be posting articles to the Cisco, EMC and VMware blogs, highlighting releases being made at VMworld.  If you are interested in the entire list of events and sessions that VCE is participating in, check it out here.  What I want to do with this blog post is extend my recent contribution to the @CiscoDC blog on customer choice and interoperability with regard to systems management, and how VCE is enabling that ecosystem in a new way.

One of the things that VCE has been quietly working on for some time, in conjunction with our partners, investors and customers, is how to extend the value of a converged infrastructure into the operations and management space in a more fundamental and organic way.  Take the Vblock as an example: how can we take the manufacturing and implementation discipline we apply to the systems and let customers leverage it to drive efficiency into their operational processes as well?

To start this process, we took a long look at the systems management lifecycle that our customers use.  We wanted to see how converged systems were being used in the real world, and identify the challenges that needed to be addressed.  Interestingly, we discovered that there were actually three separate processes that needed to be accounted for!

[Figure: Management Lifecycle]

When you think through it, from an enterprise standpoint, you can map each of these lifecycles to a group within existing IT teams.  The Pre-Day 0 team is the “facilities” group.  They take boxes and turn them into raw capacity.  This includes rack and stack, cabling, power and space provisioning, and perhaps BIOS-level software deployment and firmware leveling.  In general, this lifecycle runs start to finish only once for each infrastructure stack that is deployed.  An enterprise may deploy multiple stacks, and go through this process multiple times, but in general it’s a 1:1 mapping with the deployment of an infrastructure.

The second lifecycle is run by the “infrastructure” team, and this is where they take raw capacity and turn it into capacity that is usable by the business.  While there is only one Pre-Day 0 lifecycle, this second process may run dozens or hundreds of times.  It gets invoked any time new capacity is added to the stack, be it network, compute or storage.  The “provisioning” process here involves adding new blades into existing VMware vSphere clusters or creating new ones, identifying and presenting new storage objects or resizing existing ones, and creating new network constructs or delivering new networks into the infrastructure.

Finally, there is a third lifecycle that begins once the raw capacity has been turned into usable capacity.  In addition to being the most costly part of the process from a staffing and tooling perspective, this is also where traditional hardware vendors take a big step back and let the customer engage with a legacy ITSM ecosystem.  It’s not really a failing on the part of the hardware vendors; it’s a matter of tooling.  The tools that each vendor puts out are designed with a very limited scope, because they only need to support that vendor’s particular product domain.  DCNM is a fantastic tool that Cisco provides to manage networks and storage fabrics, but it will never provision storage or hypervisors.  Unisphere is a great storage management tool from EMC, but it will never assign UCS profiles to blades.  Knowing this, and knowing the number of variables that could be present within a customer’s data center, the hardware vendors push back to the customer the responsibility of creating an “infrastructure object” comprised of all of the individual components that are deployed and consumed together.
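To make the “infrastructure object” idea concrete, here is a minimal sketch in Python.  The class and field names are purely illustrative assumptions, not any actual VCE or vendor API; the point is simply that the compute, network and storage components a customer consumes together can be modeled as one logical object, rather than as three disjoint views inside three single-domain tools.

```python
from dataclasses import dataclass, field
from typing import List, Set

# Hypothetical sketch: names and fields are illustrative assumptions,
# not an actual VCE, Cisco or EMC API.

@dataclass
class Component:
    domain: str        # "compute", "network", or "storage"
    vendor_tool: str   # the single-domain tool that manages it today
    identifier: str    # tool-specific handle for the component

@dataclass
class InfrastructureObject:
    """Logical unification of components deployed and consumed together."""
    name: str
    components: List[Component] = field(default_factory=list)

    def domains(self) -> Set[str]:
        # One object answers questions that normally span several tools.
        return {c.domain for c in self.components}

vblock = InfrastructureObject("vblock-01", [
    Component("compute", "UCS Manager", "chassis-1/blade-3"),
    Component("network", "DCNM", "fabric-A"),
    Component("storage", "Unisphere", "pool-0"),
])

print(sorted(vblock.domains()))  # ['compute', 'network', 'storage']
```

In this sketch the unification is trivial, but it illustrates the shift of responsibility described above: today the customer has to build and maintain this composite view themselves.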

[Figure: single-domain vendor management tools]

The answer here is not to build a better tool.  It’s to enable our customers to consume and manage Vblock Systems in a way that provides logical unification of the components in use, along with context and functionality that translates directly into operational simplicity, efficiency and savings for customers.  This enablement needs to be ubiquitous and integrated directly into the platform, and it needs to be consumable by every ITSM product out there, from highly flexible and advanced tools like Cisco’s Intelligent Automation for Cloud stack and VMware’s vCenter to the most basic SNMP-aware offerings on the market.  Overwhelmingly, customers want programmable and extensible interfaces for this kind of functionality, and anything without an open, documented API just isn’t prepared to play in this space.
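One way to picture what “consumable by every ITSM product” means in practice is a plain, documented data format.  The sketch below is an assumption for illustration only (the field names and endpoint shape are invented, not a documented VCE API): if the platform exposes its unified state as something as ordinary as JSON, then both an advanced orchestration stack and a ten-line monitoring script can consume the very same document.

```python
import json

# Hypothetical sketch: this payload shape is an assumption, not a
# documented VCE API.  Imagine it returned by an open, documented
# endpoint describing one Vblock System as a single object.
payload = json.dumps({
    "system": "vblock-01",
    "components": [
        {"domain": "compute", "state": "ok"},
        {"domain": "network", "state": "ok"},
        {"domain": "storage", "state": "degraded"},
    ],
})

# Any consumer, from a CIAC-class orchestrator to a basic script,
# can parse the same document and reason about the system as a whole.
doc = json.loads(payload)
unhealthy = [c["domain"] for c in doc["components"] if c["state"] != "ok"]
print(unhealthy)  # ['storage']
```

The design point is the openness, not the format: because the interface is documented and programmable, the long tail of ITSM tools can integrate without any bespoke per-vendor work.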

As a demonstration of the possibilities this kind of management model could offer, VCE will be showing multiple implementations, including ones built with VMware vCenter and the Cisco CIAC stack.  Both are live and running in the VCE booth, and both are examples of how an implementation could look.  It’s very, very important to note, however, that the power of an open API is that the possibilities are endless as to who could implement a solution using it, and what those solutions can enable customers to do.  Could we see an implementation that specifically targets the management and operations needs of particular vertical markets, like service providers or healthcare?  Absolutely.  Could we see implementations focused on tight application management for appliance-like Vblock Systems?  Of course.  Could we see implementations that map current API stacks, like those from AWS and OpenStack, onto a private or public cloud run on a Vblock System?  There’s no reason we couldn’t.

This idea of unifying a multi-vendor, converged, productized infrastructure in order to streamline operations for customers is something we are very excited about.  The core principle that allows us to take this step is that our standardization on the designs and components included in the Vblock Systems means we have a known set of variables to work with, allowing us to provide deep and broad coverage for how those variables are put together.  When you don’t have that same level of standardization, the number of variables increases exponentially, making any kind of platform unification difficult.  It’s not the failing of a particular company or vendor; it’s a basic truth of tool development when you have no control over the things the tool needs to manage.  At our core, VCE took a different path, and this is one of the many, many ways that choice ultimately pays off for our customers.