Archive for January, 2012

The Future Workforce – Curious, Confident and Tooled up with Tech

Wednesday, January 25th, 2012

I recently presented at an event at the RSA on the role of technology in jobs, the economy and the future workforce in the UK. Although this may initially feel a little counterintuitive (and, for me, potentially career limiting), I’d like to bring some of that discussion to you, highlighting in particular the general irrelevance of technology in our deliberations about what we need to do to ensure our future workforce is equipped to help maintain and extend our position (and economy) across a broad range of industries.

Over the past few weeks, there has been much in the press about the relationship between skills, technology and job prospects, especially with all the discussion around the role of ICT in the school curriculum.  In all of this, I grow increasingly worried that we have confused the word “skills” with the word “tools”.

Most people’s experience of technology is now defined more by their personal lives than by their experience at work. We no longer live in a world where people only ever see computers at a place of work or study; broadly speaking, technology has become a naturally ingrained part of our everyday lives, just like the television, just like the 240V, 50Hz AC that comes out of the sockets in your wall.  Despite all this, we remain fixated on training people to use specific tools and technologies rather than on the broader principles that make their use important and valuable.  ICT continues to be a separate bolt-on, both in education and in how organisations use it, rather than something naturally embedded into every aspect of our lives.

(Please understand, I get that we do not live in a society where everyone has equal opportunities and access to digital resources, but we do live in a world where increasingly, as with the recent government mandate, everything is becoming “digital by default”.)

By now, we are familiar with the cliché that we are “currently training people for jobs that don’t exist yet”, but I would argue that, although the pace of change may be slightly faster these days, that particular problem has pretty much always been true.

My own family offers me some evidence. I am but the latest in a long line of engineers bearing the Coplin surname: my grandfather grew up in the industrial heartland of this country, working for one of the many engineering firms in the Midlands as a pattern maker.  My father grew up in that same environment and became an aeronautical engineer.  I grew up surrounded by aeronautical and mechanical engineering insight and artefacts and became a software engineer.  My son is growing up similarly “blessed” (or cursed, as his Mum may occasionally have it) and will no doubt find his own way to re-engineer the world (although, like any other six year old, his current ambition sees him working with the Police – not on the motorbikes or in the squad cars but specifically “with computers”, his own important addition to the job stereotype, and one that makes me infinitely proud).

My grandfather went to a school without electricity, my father went to a school without calculators and I grew up in a world without personal computers and went to college in a world without the internet or the web.  My son will be similarly afflicted in relation to his children (“Tell me again Dad? You didn’t even have hoverboards?”) and so it goes on.

Although the generations of Coplin engineers grew up with incredibly different tools and concepts of education, we are united by a common set of skills: an almost insatiable curiosity and a desire to re-engineer and improve the world around us.

What this says to me is that the tools are broadly irrelevant.  Don’t get pedantic on me, I’m not saying totally irrelevant, just that it’s more important to understand the principles that make them work and where to apply them, than it is to understand the specific workings of a given software package (or lathe for that matter).

This is really where our challenge lies – how to ensure our children and workforce are equipped with the broad principles, and the aptitude and attitude to know when and where to apply them, along with that sense of curiosity and wonder about the world around them.

Perhaps it was because I had just spent the best part of the past weekend with them, but my baseline for success is broadly defined by the incredible “Gov Camp” community we have here in the UK: some 250 or so individuals from all over the country, from all parts of the public sector, united by a love of technology and a desire to improve public service (or, as Chris Taggart so pragmatically puts it, to “make the world a little less shit”).

What makes this community special (and, for my money, an early indicator of what we can look forward to across all industries and companies in the future) is that of all these people, only a handful (certainly fewer than 10) would class themselves as being from “IT”.  These are individuals from the business end of government who use technology as part of their everyday lives, and want to use it to the same extent in their professional roles.  They think of technology as an enabler, not an outcome.  They are curious, they are confident, they overcome organisational boundaries and they are guided by a civil purpose – they want to take the world apart and put it back together again in a way that makes things better for those involved.  These are the hallmarks of a creative, capable and competent workforce, and the principles behind this curious mind-set are exactly those I think we need to infuse in our children and future workforce (of all ages).  (If you want a more detailed look at what makes UK Gov Camp and the people behind it so special, you can find out what it feels like to “walk a mile in their sandals” from Steph Gray, one of the community’s incredible architects.)

For too long we have drawn a distinction between science and art, when in reality they can be one and the same. We need to show kids (and adults alike) that, as Niko Macdonald, one of the audience members, eloquently put it, “there is beauty in code” and “majesty in mathematics”. It is as much about inspiration as it is about perspiration.  Unfortunately, it became clear from the discussion that there is a significant gap between schools and industry in helping each other understand which skills are important and what sort of careers they could lead to.

I think we can do more here, especially those of us who have children within the education system – we need to find a way of spending more time with schools to help demonstrate what careers and vocations basic skills like maths, English and science can lead to (and that these subjects can be as creative as any art-related subject).  I think a rebirth of the school computer club is one key way we can do this without getting caught up in (or in the way of) the curriculum discussion. (HT to @MadProf and the “Monmouth Manifesto” on that one.)

There is no doubt that technology will play a crucial part in our future economy, and that technology skills will be essential for individuals to have a challenging, rewarding career, but I think it’s important to highlight that those careers will increasingly not be in “IT” itself. I believe it far more likely that they will be spread across the existing (in some cases eternal) industries and the incredible new ones that our future will offer.  More importantly, the specifics of the technologies being used will vary even more significantly than over the preceding 100 years, and so now, more than ever, it becomes crucial to infuse those essential principles into the mind-set of all those venturing into this new world of work.

Helping them understand that, as Matthew Taylor from the RSA puts it, “you don’t ‘get’ a job, you ‘create’ one” could be all it takes to get us started.

(GovCamp photo credit: David Pearson)

Consumerisation is a Fickle Beast

Thursday, January 19th, 2012

It’s been a while since we last spoke about the “consumerisation” of IT, and recently I’ve seen a couple of warning signs that some organisations have missed the extent of the philosophical change consumerisation requires if it is to be a strategic asset in how you empower individuals with technology.

Many organisations clearly understand the potential of consumerisation inside their organisation. They get that it creates more engagement with their employees, especially around their use of technology. They get that it fosters innovation, as people feel empowered to use technology creatively to help them solve business challenges and deliver better service. Hell, they even get that, done correctly, it can save money on top of all that.  But increasingly I’m seeing examples of organisations that try to jump to the answer without considering or implementing the principles that will make this approach successful year after year. Net result: short term gain, long term pain. Worse still, that long term pain will fool people into thinking that consumerisation “failed”, and we’ll be back where we started – expensive, constrained corporate desktops that provide a far worse experience than the one we enjoy in our personal lives.

The two warning signs of this short-termist approach are easily identified. Basically, ask yourselves, or your IT department (and be honest): are you chasing consumerisation based on a philosophical change in the way you think about the role of technology inside your organisation, or –

1. is it the result of the demand for a specific device? or

2. is it because you think “consumer” equipment is cheaper? (You know the line – “you paid how much for that corporate laptop? Man, they’re half that price in PC World/on the interwebs” and so on.)

Come on, I said be honest. Many I know are doing it to make it acceptable to use a specific device on the corporate network – I even heard the phrase “ipadisation” the other day (you know who you are, Mr Weber). This, my friends, is _not_ consumerisation; it is satiating the ego of you or your execs, and if all you do is focus on one specific device, you’ll have to do it all again when that fickle consumer changes his or her mind and decides that this year, it’s the pink one we all like.

Others are looking at the price point difference between a shiny consumer laptop and the ugly, expensive corporate alternative and thinking “What the hell? The spec is the same, so why pay more?”. Well, remember that TCO acronym we all spent blood, sweat and tears getting established all those years ago?  It’s got the words “total cost” in it for a reason.

Many consumer devices are trinkets: they’re pretty, and they work well for a time, but they won’t stand the day-in, day-out abuse that business machines get.  They may last a year, maybe two, of that kind of toil, but ultimately you’ll end up spending more money keeping them running than you would have if you’d bought something more fit for purpose.

Please don’t mistake this post for an anti-iPad rant; it really isn’t (and, to be honest, I’d hope you know me better than to think that).  If anything, this post is just a little catharsis for me, to remind us that consumerisation is a change in how we should think about _people_ within organisations. It is about culture, not finance, politics or, god forbid, technology.

Stick with that and no matter what “must-have” tech gadget is in season, we’ll all do just fine.

Voice Recognition: NUI’s Unsung Hero

Wednesday, January 11th, 2012

I recently got asked to provide an opinion on “voice recognition”, in particular around our philosophy towards it and how we’ve implemented it across the stack.  If you can stomach it, you can see how it turned out (let’s put it this way, it opens with a comparison to the Hoff’s “Knight Rider” and kind of goes downhill from there), but regardless, in doing the research I learnt some really interesting things along the way that I thought I’d share here.

First off, let’s start by asking how many of you know how speech recognition works these days?  Well, I thought I did, but it turns out I didn’t.  Unlike the early approach, where you had to “train” the computer to understand you by spending hours and hours reading to it (which always kind of defeated the object for me), today speech recognition works pretty much the same way we teach kids to speak and read: using phonemes, digraphs and trigraphs. The computer simply tries to recognise the shapes and patterns of the words being spoken, then, using some clever logic and obviously an algorithm or two, performs some contextual analysis (makes a guess) as to the most probable sentence or command you might be saying.
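To make that “makes a guess” step concrete, here is a minimal, entirely hypothetical sketch of the classic noisy-channel view of recognition. The phrases and scores below are invented for illustration – real recognisers score phoneme lattices with trained acoustic and language models – but the shape of the computation, weighing how well a sentence matches the sounds against how plausible it is as English, is the general idea:

```python
# Toy sketch of the "contextual analysis" step. All names and numbers
# are hypothetical, purely to illustrate the noisy-channel idea.

# Candidate transcriptions the acoustic front-end thinks it "heard",
# each scored on how well it matches the observed sounds.
acoustic_scores = {
    "recognise speech": 0.42,
    "wreck a nice beach": 0.40,  # acoustically almost identical!
}

# A tiny, made-up language model: how probable each phrase is as
# English, independent of what was actually heard.
language_scores = {
    "recognise speech": 0.9,
    "wreck a nice beach": 0.1,
}

# Bayes' rule in miniature: pick the sentence W maximising
# P(sounds | W) * P(W).
best = max(acoustic_scores,
           key=lambda w: acoustic_scores[w] * language_scores[w])

print(best)  # -> "recognise speech": context breaks the acoustic tie
```

The two candidates sound nearly the same, so the acoustic scores barely separate them; it’s the language model – the “guess” about what a person would plausibly say – that settles it.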

In the early days of speech recognition, the heavy lifting was all about the listening and the conversion from analogue to digital; today it’s in the algorithmic analysis of what you are most likely saying.  This subtle shift has opened up probably the most significant advance in voice recognition in the last twenty years: the concept of voice recognition as a “cloud” service.

A year or so ago, I opened a CIO event for Steve Ballmer. Given I was on stage first, I got a front row seat and watched Ballmer up close and personal as he proceeded to tell me, and the amassed CIOs from our 200 largest customers, that Kinect was in fact a “cloud device”.  At the time I remember thinking, “bloody hell Steve, even for you that’s a bit of a stretch, isn’t it?”.  I filed it away under “Things CEOs say when there’s no real news” and forgot about it – until now, that is, when I finally realised what he meant.

Basically, with a connected device (like Kinect), the analysis of your movements and the processing for voice recognition can now also be done in the cloud. We now have the option (with the consumer’s appropriate permission) to use those events to provide a service that continuously learns and improves.  This ultimately means that the voice recognition service you use today is actually different from (and minutely inferior to) the same service you’ll use tomorrow.   This is incredibly powerful, and it also shows you that the “final mile” of getting voice recognition right now lies more with the algorithm that figures out what you’re most likely to be saying than it does with the actual recognition of the sounds.  MSR has a number of projects underway around this (my current favourite being MSR’s Sentence Completion Challenge), not to mention our own development around how this might apply within search.
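As a toy illustration of that “final mile” (and of the flavour of problem the Sentence Completion Challenge poses), here is a hedged sketch of a tiny bigram language model picking the more probable of two candidate sentences. The corpus and smoothing below are made up for illustration; real services train models of this kind on vastly larger data, which is part of what makes the cloud angle so compelling:

```python
# Minimal bigram language model sketch -- illustrative only, not how
# any production recogniser is actually built.
from collections import Counter

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count word pairs (bigrams) and single words (unigrams).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def score(sentence):
    """Probability of a sentence under the bigram model (add-one smoothed)."""
    words = sentence.split()
    vocab = len(unigrams)
    p = 1.0
    for a, b in zip(words, words[1:]):
        p *= (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
    return p

candidates = ["the cat sat on the mat", "the mat sat on the cat"]
print(max(candidates, key=score))  # -> "the cat sat on the mat"
```

Both candidates use exactly the same words, so nothing acoustic separates them; only the learned statistics of which words follow which do. Improving those statistics is something a cloud service can keep doing long after the device has shipped.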

Those of you who have been following these ramblings in the past will know I’m slightly sceptical of voice recognition, thinking of it as technology’s consistently wayward child: full of potential, yet unruly, unpredictable and relentlessly under-achieving.  I’m not saying my view has changed overnight, but I am certainly more inclined to think it will happen, based on this single, crucial point.

Kinect, too, provides its own clue that we’re a lot closer than we previously thought to making voice recognition a reality – not just in the fact that it uses voice recognition as a primary mode of (natural) interaction, but more in how it tries to deal with the other end of the voice recognition problem: just how do you hear _anything_ when you are sat on top of the loudest source of noise in the room (the TV) and someone 10 feet away is trying to talk to you in the middle of a movie (or the final level of Sonic Generations, sat next to a screaming 6 year old whose entire opinion of your success as a father rests on your ability to defeat the final “boss”)?  If you have a few minutes and are interested, this is a wonderful article that talks specifically about that challenge and how we employ an array of 4 microphones to try and solve the problem.  There’s still more work to be done here, but it’s a great start on what is actually an incredibly complex problem – think about it, if I can’t even hear my wife in the middle of a game of Halo or an episode of Star Trek (original series, of course), how the hell is Kinect going to hear? (Oh, I’ve just been informed by her that that particular issue is apparently not a technical problem… #awkward.)
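For the curious, here is a back-of-the-envelope sketch of delay-and-sum beamforming, the basic idea behind steering a microphone array towards a talker. The geometry, sample rate and delays below are invented purely for illustration (and Kinect’s real pipeline layers echo cancellation and noise suppression on top), but it shows why four microphones hear better than one:

```python
# Delay-and-sum beamforming sketch. All parameters are illustrative.
import numpy as np

fs = 16_000                      # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)    # 100 ms of audio
speech = np.sin(2 * np.pi * 300 * t)   # stand-in for the talker's voice
rng = np.random.default_rng(0)

# Suppose the talker's sound reaches each of 4 mics with a known delay
# (in samples), derived from the direction we want to "listen" in.
delays = [0, 2, 4, 6]
mics = [np.roll(speech, d) + 0.5 * rng.standard_normal(t.size)
        for d in delays]

# Delay-and-sum: undo each mic's delay, then average. The speech adds
# up coherently; noise from other directions averages away.
aligned = [np.roll(m, -d) for m, d in zip(mics, delays)]
beamformed = np.mean(aligned, axis=0)

def snr(x):
    """Signal-to-noise ratio in dB, relative to the clean speech."""
    return 10 * np.log10(np.mean(speech**2) / np.mean((x - speech)**2))

print(f"single mic SNR: {snr(mics[0]):.1f} dB")   # ~3 dB
print(f"beamformed SNR: {snr(beamformed):.1f} dB")  # ~9 dB
```

The gain grows with the number of microphones, which is one reason an array beats a single mic when the “noise” is a TV blaring from a different direction than the talker.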

So these two subtle technological differences in our approach are going to make all the difference in voice recognition becoming a reality as part of a much more natural way of interacting with technology.  Once that happens, we move into the really interesting part of the problem – our expectations of what we can do with it.

Our kids are a great way of understanding just how much of a Pandora’s box voice recognition (and other more natural forms of interaction) will open, and I suspect that ultimately our greatest challenge will be living up to the expectation of what is possible across all the forms of technical interaction we have – NUI parity across devices, if you like.  My son’s expectation (quite reasonably) is that if he can talk to his Xbox, then he should be able to talk to any other device; and furthermore, if he can ask it to play movies and navigate to games, why can’t it do other things?  I was sitting doing my research with him the night before my interview on all of this, and we were playing together at getting the voice recognition to work.  He asked the Xbox to play his movie, he told Sonic which level to play on Kinect FreeRiders, then he paused, looked at me and then back at the TV, cracked a cheeky smile and said, “Xbox, do my homework…”.