The feeling of satisfaction that rolls in at the end of the week is amplified here at Spiceworks. We have our company meetings on Friday afternoons. While that raises eyebrows from some folks whose vision of a company meeting is pretending not to yawn at a conference table, their expressions change when I mention our meetings include beer.
Beer is appropriate because there’s been plenty to celebrate this year. Spiceworks 6 is just around the corner, the team is growing like crazy, and we’re streamlining things in the product development group with more virtualization to speed up delivery of product features and improvements.
The plan
Our goal is to use virtualization to let our dev and test teams work independently on features and improvements. This builds on how we use branching in our code repository: when a team branches the code for feature work, we dynamically provision a test server environment where the team can collaborate and test. When the feature is complete, the branch merges into the main code base and is prepared for production. As we continue to make improvements, the integration and push to production will take less time.
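To make that concrete, here’s a rough sketch of the branch-to-environment idea in Python. The only real call is to git to list feature branches; provision_vm and deprovision_vm are hypothetical placeholders for whatever your hypervisor tooling exposes, not a description of our actual scripts or any specific product.

```python
import subprocess


def list_feature_branches(repo_path):
    """List the feature branches currently in the repository."""
    out = subprocess.run(
        ["git", "-C", repo_path, "branch", "--list", "feature/*",
         "--format=%(refname:short)"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]


def provision_vm(branch):
    # Placeholder: clone a template VM, name it after the branch and
    # deploy that branch's build onto it.
    print(f"provisioning a test environment for {branch}")


def deprovision_vm(branch):
    # Placeholder: tear the environment down once the branch has merged.
    print(f"tearing down the test environment for {branch}")


def sync_environments(repo_path, active):
    """Keep one test environment per live feature branch."""
    branches = set(list_feature_branches(repo_path))
    for branch in branches - active:    # new branch -> new environment
        provision_vm(branch)
    for branch in active - branches:    # branch merged or deleted -> clean up
        deprovision_vm(branch)
    return branches
```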
Expectation vs. reality
All that sounds sweet and simple, but we’ve learned a few things along the way. First and foremost, finding a way to automatically provision and de-provision VMs is no easy task if you’re on a tight budget. Free tools are still maturing toward production readiness, and the tools that are ready today aren’t exactly cheap.
Efficient use of the hardware is tricky, but we ultimately found that the number of VMs we need is tied more to the number of independent feature projects our test teams can manage than to anything else. Rather than provisioning a VM for every branch of project code as soon as the branch exists, we only need to provision one when a tester wants to test a particular branch.
Often, an existing VM can be recycled for this. That approach lets us scale our hardware needs to the available manpower, which is much more sustainable.
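In practice that looks more like a small pool manager than a branch-triggered script: hand out an environment only when a tester asks for one, and reuse an idle VM before cloning a new one. This is just a sketch of the approach; reimage_vm and clone_template_vm are hypothetical stand-ins for real hypervisor tooling.

```python
def reimage_vm(vm, branch):
    # Placeholder: wipe the VM and redeploy the build for the new branch.
    print(f"reimaging {vm} for {branch}")


def clone_template_vm(branch):
    # Placeholder: only grow the pool when no idle VM is available.
    vm = "test-" + branch.replace("/", "-")
    print(f"cloning the template into {vm}")
    return vm


class TestVmPool:
    def __init__(self):
        self.idle = []      # VMs whose previous branch has already merged
        self.in_use = {}    # branch name -> VM name

    def checkout(self, branch):
        """Give a tester an environment for a branch, reusing an idle VM if possible."""
        if branch in self.in_use:
            return self.in_use[branch]
        if self.idle:
            vm = self.idle.pop()
            reimage_vm(vm, branch)
        else:
            vm = clone_template_vm(branch)
        self.in_use[branch] = vm
        return vm

    def release(self, branch):
        """Called when the branch merges; the VM goes back into the idle pool."""
        vm = self.in_use.pop(branch, None)
        if vm is not None:
            self.idle.append(vm)
```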
Finally, there is more to setting up a VM than cloning it and tweaking a few settings to individualize the host. External dependencies of the code being tested, such as baseline databases, also need to be provisioned along with the VM. Usually there is a common, scriptable process for this, but some tests have unique requirements that are too costly to configure on every VM. Right now those one-offs must be managed by hand, but we are working toward automating that process as well.
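For the common case, the dependency step can ride along with the VM provisioning. As an illustration only: assuming a PostgreSQL-style setup, something like the snippet below could create a fresh per-branch database on the test VM and load a shared baseline dump into it. The dump path, naming convention and run_on_vm helper are all assumptions, not our actual scripts.

```python
import subprocess


def run_on_vm(vm, command):
    """Run a shell command on the test VM over SSH."""
    subprocess.run(["ssh", vm, command], check=True)


def provision_baseline_database(vm, branch, dump_path="/srv/baselines/baseline.sql"):
    """Create a fresh database for the branch and load the shared baseline dump.

    Assumes PostgreSQL client tools (createdb, psql) on the VM; the dump path
    and naming convention are placeholders.
    """
    db_name = "test_" + branch.replace("/", "_").replace("-", "_")
    run_on_vm(vm, f"createdb {db_name}")
    run_on_vm(vm, f"psql -d {db_name} -f {dump_path}")
    return db_name
```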
The results
So far the results have been great! The dev and test teams have adopted the new way of working, and we’ve seen fewer integration headaches than you might expect.
From the internal IT perspective, we’ve cut costs by making better use of the hardware we already have. Development and test teams use resources sporadically: while code is being written, the physical hardware sits mostly idle; while it’s being tested, the hardware is in heavy demand. By sharing a common pool across the whole organization, we get far better utilization out of the limited physical resources we have.
I’ve done several interviews with SMBs using virtualization, and the single most often cited benefit is flexibility. I think what we’re doing here by automating the hypervisor showcases that flexibility brilliantly. If you have temporary workloads or frequent, similar requests, it’s worth looking at whether you could remove yourself from the “button pushing” by automating parts of the process.