And Tomorrow Will Be Better Than Today

If Wednesday was spent finding all of the questions, Thursday was where we started looking for the answers.  Thank goodness for the team I have here with me, because I don't know what we'd do without their patience and creativity.

The network is still complex, and there are still things that need to be ironed out on the operational side, but our Network Engineering team has been great.  They really embraced the Nexus 1000V dvSwitch concept and were excited about the scripting options available in the current version of UIM.  I think they have worked through the last of the issues with collapsing the multiple physical networks we run to the VM hosts, and they even took time out to handle a little switch that decided it wanted to reboot itself.  The regular course of things seems to be that the Managed Services and Network Engineering teams are at odds more often than not, and this week has been very good for me in that regard.  While I still struggle with the conservatism at times, it's not hard to see that those guys are working hard to take care of customers too.  Being in the lab with them has been great, and I appreciate their time and effort.

The VMAX finally came together!  Scott Baker did a lot of work to get through the issues and had all of the VMFS LUNs presented bright and early.  I know it was a struggle, but I also know there was a lot of effort and energy put into getting it fixed.  Thank you, Scott!

More than anything else, today we worked on the IO/capacity/compute sizing discussion.  One thing my company has in spades is historical data; we have been doing this since late 2007, and we know how our customers use disk and RAM and how much revenue we generate.  Using that, we were able to start building out a purchasing model, along with where the resource thresholds sit and how far things scale.  Right now, it looks like we'll be able to get two compute nodes (64 blades each) out of each Vblock before we need to purchase new storage on the back end, which could support roughly 1600 VMs per location.  That's a tremendous amount of capacity and scalability, and it will let us create a couple of new dedicated hardware products that aren't possible right now.
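For a sense of how that math works, here's a minimal back-of-envelope sketch in Python.  The blade counts and the ~1600 VM figure come from the model above; the per-blade spec, per-VM footprints, and usable storage number are hypothetical placeholders standing in for the historical averages we actually use.

```python
# Back-of-envelope capacity sizing sketch.  Blade counts match the
# post; the per-VM and per-blade figures below are hypothetical --
# substitute your own historical averages.

BLADES_PER_NODE = 64      # from the post: 64 blades per compute node
NODES_PER_VBLOCK = 2      # two compute nodes before new back-end storage
RAM_PER_BLADE_GB = 96     # hypothetical blade spec
AVG_VM_RAM_GB = 6         # hypothetical average from historical data
AVG_VM_DISK_GB = 60       # hypothetical average from historical data
USABLE_STORAGE_TB = 100   # hypothetical usable capacity on the array

blades = BLADES_PER_NODE * NODES_PER_VBLOCK
ram_limited_vms = blades * (RAM_PER_BLADE_GB // AVG_VM_RAM_GB)
disk_limited_vms = int(USABLE_STORAGE_TB * 1024 // AVG_VM_DISK_GB)

# Whichever resource runs out first sets the real threshold.
capacity = min(ram_limited_vms, disk_limited_vms)
print(f"{blades} blades -> RAM-limited: {ram_limited_vms} VMs, "
      f"disk-limited: {disk_limited_vms} VMs, plan for: {capacity}")
```

With those placeholder numbers, storage becomes the constraint well before RAM does, which is exactly why the purchasing model hinges on when we have to buy new storage on the back end.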

I know that part of the magic is that our Cisco, VMware, EMC and ISV sales teams are fantastic, but the amount of communication and back-and-forth between the companies has been incredible.  The level of coordination that has to happen between those companies to arrive at a final config can't be overstated, and it's easy to see how much work has been done on the back end to facilitate it.  There's certainly work to be done there (some of the "coordination" was three guys sitting with their laptops open, walking through entering the config into their individual ordering portals), but the level of commitment is easy to see.

The first series of hurdles has been cleared.  The technology is solid, the gains in operational efficiency are shocking in places, and the roadmap for the hardware and management software is clear.  We built four 8-node clusters, complete with storage and Nexus 1000V dvSwitches, connected everything to the network, and provisioned customer port profiles in less than 5 hours.  I'm embarrassed to tell you how long that takes us now, but suffice it to say that's a bit of an improvement.
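To show why the port-profile step goes so quickly, here's a rough sketch of how a per-customer Nexus 1000V vEthernet port profile could be stamped out from a template.  The function and the customer values are hypothetical; the embedded CLI is standard 1000V port-profile syntax, and once a profile is enabled it appears in vCenter as a port group ready to assign to VMs.

```python
# Illustrative only: a Nexus 1000V port profile is a handful of config
# lines, so generating one per customer VLAN is trivially scriptable.
# The function name and customer data are made up for this sketch.

def customer_port_profile(name: str, vlan: int) -> str:
    """Render a vEthernet port profile for one customer VLAN."""
    return "\n".join([
        f"port-profile type vethernet {name}",
        "  vmware port-group",               # exposed to vCenter as a port group
        "  switchport mode access",
        f"  switchport access vlan {vlan}",
        "  no shutdown",
        "  state enabled",
    ])

print(customer_port_profile("CUST-ACME", 210))
```

Compare that to walking a change through multiple physical switches by hand, and the five-hour number starts to make sense.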

This was a team effort.  Sincere thanks go out to everyone involved, especially Andy Sholoman for his efforts in getting the lab up, stable, and working.  He and Connie Varner were the glue for everything on the back end and did a great job.  The vSpecialist team was awesome, including Scott Baker, Scott Lowe, Chris Horn and Jonathan Donaldson, plus a big thanks to Chad Sakac for getting the ball rolling.  Jason Nash and all of the Varrow team were a huge help (and great hosts).  There were dozens of other partners involved, and all of them were critical to our progress.  I know I'm forgetting people we met this week, but please know that your work and input were valuable and appreciated!

A big thank you to my team as well.  Ronnie Frames and Chris Martin did an incredible amount of work on the network side, and Brett Impens showed why he is a VMware ninja without parallel.  Having a great team to work with makes the hard stuff easier, and I couldn't do it without them.  The next hurdle is internal, where we figure out what the cost model looks like, but we are one step closer… 🙂