I’ve just come from a really fascinating roundtable discussion (sponsored by Dell and hosted by Bryan Glick from Computer Weekly) about the "Efficient Enterprise in 2010". The meeting was well attended by a bunch of Enterprise customers and Partners, with the context for the discussion being a presentation from Robin Johnson, Dell’s Global CIO.
There were a number of really compelling things that came out from both the presentation and the ensuing discussion:
- Understand the Opportunity Cost of any savings you make.
OK, so you lot know I already get that one, but wow, Dell are apparently able to plough around 50% of their savings back into strategic IT (and when you’re making a $160m saving p/a, that’s a big deal). Read on to find out how they get away with that – especially at a time when most CFOs want every penny they can get – and then some.
- Use the Time:Cost ratio as the pivotal argument, not simply Cost savings alone.
Robin (and the group) talked about the difference in motivating the "business" when you factor in the time to market for IT solutions rather than simply talking about cost savings alone. It sounds simple when you say it like that, but it’s a hard-won position with many CFOs/Steering Boards. If people understand the difference that more complex IT makes to time to market, it becomes easier for them to support you in making it simpler.
- Pursue "Ruthless Standardisation"
Driving a standards-based architecture is a pretty tall challenge, so there’s no point in doing it if you’re only going to go halfway. It’s tough, but if you’ve done the above, you can make it happen. Dell have only _2_ images for their 22,000-server estate. That’s pretty ruthless, but it enables them to do a lot more.
- Create a path of least resistance
The Dell guys talk about the "Happy Path" vs the "Unhappy Path" when it comes to IT Architecture and solutions design. Follow the "happy path" (i.e. use standard tools/architecture etc) and you will get your solution in place more quickly and more cost effectively. It is possible to walk the "unhappy path" but it’s hard work so only those that are committed take it.
- "Good enough" is good enough
It was in fact the great Dash (from Disney’s Incredibles – see how I spare you no cultural expense on this blog 😉) that said (and I paraphrase) "When everyone is special, it actually means no-one is". Nowhere is this more true than in the internal IT vs Business debate. The more unique and special we allow different groups/departments to be, the more expensive their IT solutions become. This recession will force organisations and departments to come to terms with this (I hope).
- Rigidly define flexibility
Oxymoronic at first blush, but it simply means, leave a little wiggle room, so people still feel empowered and part of the solution. Avoid "doing things" to people, collaborate with them instead.
- Finally (and another of my favourite topics) be cognisant of the effects of "Consumerisation"
Robin talked about the "Sunday Night/Monday Morning" concept, whereby people have a great IT experience on Sunday night as they catch up on personal tasks on-line, then go into work the following morning to receive a comparatively poorer experience. This isn’t about embracing the millennials, but about providing a range of services that suits a range of generational stereotypes.
Although the discussion was mostly business-focussed, there were a couple of key technological points that I felt were worth calling out:
- Power consumption is the new gold
Based on the granularity of their server provisioning approach (smallest unit of MIP "currency" is a 2U box), Dell reckon that power consumption is now what drives their hardware refresh cycle. Robin’s current view is that a 3-year refresh cycle will deliver sufficient financial savings in power consumption alone to pay for the refresh.
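To get a feel for how that payback argument works, here’s a minimal break-even sketch. All the figures in it (wattages, electricity price, server cost, PUE) are my own illustrative assumptions, not Dell’s numbers:

```python
# Hypothetical break-even sketch for a power-driven refresh cycle.
# Every figure below is an illustrative assumption, not Dell's data.

def breakeven_years(old_watts, new_watts, cost_per_kwh, server_cost, pue=2.0):
    """Years of power savings needed to pay for one replacement server.

    Assumes 24x7 operation. The pue factor (Power Usage Effectiveness)
    accounts for cooling/facility overhead: every watt removed at the
    server roughly doubles as a saving at the meter when pue is ~2.0.
    """
    hours_per_year = 24 * 365
    kwh_saved = (old_watts - new_watts) * hours_per_year / 1000
    annual_saving = kwh_saved * cost_per_kwh * pue
    return server_cost / annual_saving

# e.g. a 2U box drawing 600 W replaced by a 400 W successor,
# at $0.15/kWh and a $1,500 hardware cost:
years = breakeven_years(old_watts=600, new_watts=400,
                        cost_per_kwh=0.15, server_cost=1500)
print(f"Pays for itself in {years:.1f} years")
```

With those (made-up) inputs the new box pays for itself in under three years – which is the shape of the argument Robin was making, whatever Dell’s actual numbers are.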
- Virtualisation alone is not enough
Although it took a record-breaking 60 minutes into the discussion before anyone mentioned the "c" word (Cloud, that is), what was clear was that a big part of Dell’s success in rationalising their data centres was the automation of server provisioning. This is a topic we’re beginning to see again and again: a virtual server is still a server, and it still needs to be provisioned and patched. You only get the big savings when you can automate that process sufficiently (and model it so you know what you’re doing is right).
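To make the automation point concrete, here’s a toy sketch of the idea – not Dell’s actual tooling, and every name in it is hypothetical. It ties the two threads together: provisioning is only automatable because there are just a couple of standard images, and anything off the "happy path" gets rejected up front rather than handled by hand:

```python
# Toy provisioning sketch. Hypothetical names throughout - this is an
# illustration of the principle, not any real provisioning system.

STANDARD_IMAGES = {"std-linux", "std-windows"}  # the "only 2 images" idea

def provision(hostname: str, image: str) -> dict:
    """Return a provisioning record, or refuse a non-standard request."""
    if image not in STANDARD_IMAGES:
        # The "unhappy path": possible, but deliberately hard work.
        raise ValueError(f"{image!r} is off the happy path; raise an exception request")
    # In a real pipeline this step would kick off imaging, patching and
    # inventory registration automatically; here we just record the intent.
    return {"host": hostname, "image": image, "patched": True}

fleet = [provision(f"web{i:02d}", "std-linux") for i in range(3)]
print(len(fleet), "servers provisioned automatically")
```

The point isn’t the code, it’s the shape: when every request fits one of a tiny number of standard patterns, the whole provision-and-patch loop can run without a human in it – and that’s where the big savings live.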