Thursday, 18 June 2009

Recruiting square pegs for round holes

I am currently looking for a new client, and as usual most advertisements include the stipulation that the would-be employer will only consider consultants with a strong background in [insert name of business sector/activity/system here]. One potential client, with a requirement to build IT management systems and processes in the London financial sector, says it will only consider candidates with a strong risk-system background.

I have worked in lots of sectors – software development, credit cards, insurance, defence, manufacturing – and I can’t say that knowing how the business worked made any substantial difference to how its IT development management systems – methods, tools, reporting, etc. – needed to be built. Basically, this is one case where the nature of the solution being built has far more in common with other solutions of the same technical nature (i.e., other IT systems) than with other kinds of solution in the same sector. So all development methodologies tend to look the same, as do all project management tools and all reporting systems... And why not? 80% of the time, the code for a word processor is indistinguishable from the code for a banking system or a helicopter command-and-control system.

But of course the business knows best. So they ask someone who knows all about helicopters – or insurance policies, or whatever – to build their processes, and they get... a lump of dead, mechanical ‘process’ that looks like it was written by Franz Kafka and goes down like a lead balloon with developers.

A related mistake is to recruit someone who is good at a job to build the management systems that will help other people do that job just as well. Seems sensible, until you ask yourself whether you would recruit a racing driver – even a very good one – to design your car. I wouldn’t. I wouldn’t even care if they had a driving licence, so long as they had a long track record of designing race-winning cars.

Thursday, 11 June 2009

The value of validation

I have just attended a workshop on testing procedures. It was a bit worrying – not that anything was wrong with the procedures themselves (in fact they were well thought through), but it was slightly shocking that testers still need telling.

One topic the workshop didn’t address was the distinction between verification and validation. These terms are used in different ways in different environments, so I should start by saying what I mean by each. Verification is making sure that a product meets its spec. Validation is checking that, even if it does, it also fulfils the original requirement it is intended to meet. These are not at all the same thing – not only in the obvious sense but also in the sense that step-by-step verification from the requirements to (say) the code you are reviewing is not equivalent to validating the code against the original requirement. There is just no substitute for asking which requirement a piece of code contributes to – and how each requirement is realised in the code.
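To make the distinction concrete, here is a minimal sketch in Python (the banking scenario, function name and figures are all invented for illustration, not taken from any real system). The routine meets its written spec exactly, so it passes verification; but the spec itself lost a nuance of the original requirement, so the code fails validation.

```python
# Hypothetical illustration of verification vs. validation.
# Original requirement: a withdrawal must not take the account below its
# agreed overdraft limit.
# Written spec (which silently dropped the overdraft): a withdrawal must
# not exceed the current balance.

def withdrawal_allowed(balance: float, amount: float) -> bool:
    """Implements the written spec exactly: the amount may not exceed the balance."""
    return amount <= balance

# Verification - does the code meet its spec? Yes, on every case we try.
assert withdrawal_allowed(balance=100.0, amount=80.0) is True
assert withdrawal_allowed(balance=100.0, amount=120.0) is False

# Validation - does it meet the original requirement? No. A customer with a
# balance of 100 and an agreed overdraft of 500 should be able to withdraw
# 300, but the code (faithfully following its spec) refuses.
assert withdrawal_allowed(balance=100.0, amount=300.0) is False  # verified, yet invalid
```

Step-by-step verification will never catch this, because each step is faithful to the one before it; only going back to the original requirement does.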

There are lots of reasons for this, of which the fallibility of previous checking is one. But there is (a mathematician friend tells me) a much more compelling one that should convince even the most hardened reviewer and tester. It is (apparently) possible to prove mathematically that no two languages can be translated into one another in such a way that the semantics are preserved exactly.

This may seem an abstruse point, so here is a practical example I have used in training courses. There is a well-known saying in English which, if you translate it correctly into Russian (again, I am told) and then re-translate it back into English, comes out as a possible – and perfectly correct – but rather different English phrase. One of the possible outcomes is the following:

The vodka is acceptable but the meat is off.
So what was the original English phrase? Give yourself a few seconds before you look at the answer, which is at the foot of this post.

Now look. Not quite the same, is it? It is essential to recognise that both translations – from English to Russian and from Russian back to English – were 100% correct. Just as a model may correctly represent a requirement, a design a model, and code a design. And yet it is clear that if you came up with the second English phrase (the code, as it were) rather than the first (the requirement), it would leave something to be desired. The problem is that requirements, models, designs and code are all in different languages (in every sense), and no two languages (let alone four) are exactly equivalent.

Hence the critical value of validation as well as verification. You just have to do it – not because your verification (reviews, testing, static analysis, etc.) isn’t good enough but because it does a different job.

Not that you should validate everything. It’s expensive, and like verification itself, not always the most productive thing you could be doing with your resources. Like everything else in good management, what you look at should be determined by the risk it represents. So only validate the items that represent a real threat if they are wrong. By and large, focus on the critical, the complex (at any level), the novel (to you). After that, either an error won’t matter much or you should be able to fix it relatively easily.
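As a rough sketch of that risk-driven selection – the scoring scheme, weights and threshold below are my own invention for illustration, not a standard – you could rank deliverables by criticality, complexity and novelty and only validate those above a cut-off:

```python
# Hypothetical sketch of risk-driven selection of what to validate.
# The attributes, weights and threshold are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Deliverable:
    name: str
    criticality: int  # 1 (cosmetic) .. 5 (business- or safety-critical)
    complexity: int   # 1 (trivial) .. 5 (highly complex)
    novelty: int      # 1 (routine for this team) .. 5 (completely new to you)

def risk_score(d: Deliverable) -> int:
    # Weight criticality most heavily; complexity and novelty equally.
    return 3 * d.criticality + 2 * d.complexity + 2 * d.novelty

def worth_validating(items: list[Deliverable], threshold: int = 20) -> list[Deliverable]:
    """Validate only the items whose risk score reaches the (arbitrary) threshold."""
    return [d for d in items if risk_score(d) >= threshold]

backlog = [
    Deliverable("payment engine rewrite", criticality=5, complexity=4, novelty=4),
    Deliverable("typo fix on login page", criticality=1, complexity=1, novelty=1),
]
print([d.name for d in worth_validating(backlog)])  # -> ['payment engine rewrite']
```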

Answer: The spirit is willing but the flesh is weak. Or, more completely, ‘Watch and pray, that ye enter not into temptation: the spirit indeed is willing, but the flesh is weak’ (St Matthew 26:41).

Monday, 8 June 2009

Implementing a methodology - do's and don'ts

An old friend sends me the outline of an IT methodology he is being expected to implement by a big client. It’s the usual bureaucratic behemoth and he asks me what can be done to make it work decently. My reply:

"Thanks for this. I read as much as I could bear and skimmed the rest. I can see why they’d hate it!

Going by this description, I had a similar problem at a large financial services client where I spent 2½ years implementing a global consultancy’s not very lovely methodology. They hated that too, but we eventually got grudging support and even commitment.

I guess that you could summarise what I would do as follows:

1. Insist on taking training very seriously – train everyone from top to bottom of the company, tailor the training to their exact needs, and as far as the delivery teams are concerned, don’t let anyone tell you that you can do this in less than a day for the introduction and a day for each major development stage (or discipline, perhaps). I personally trained more than 800 people in methodology at one client and they grudgingly agreed that it was money well spent. I think this was because the training included lots of ‘whys’ as well as ‘hows’. That way people were constantly given the message that there really is a compelling reason to do this stuff, that this really is for your own good – as engineers, as professionals, and as people who don’t want to waste their own time or other people’s. The training has to be full of useful nuggets (e.g., 40% of all software engineering is waste and rework, primarily for lack of decent processes), war stories, realistic exercises (ideally one big case study) and methods for finding that magic 80/20 position.

2. Encourage PMs to create tailored versions of the methodology for themselves. This is easy and reasonable and builds ownership. Given that it means cutting out waste and rework and building in the uniqueness of local areas, your client should want it too. After all, if the generic methodology includes stuff (as it always does) that doesn’t make sense in a given project context, they shouldn’t have to do it. And make sure that you build the process for tailoring the process into the main process (e.g., as a project initiation activity), make it a high priority item in training (for senior management too), and reward managers for being insightful and innovative.

3. Scale the methodology thoroughly – with a two-page checklist for truly tiny pieces of work, and a serious approach to low-risk work that really does require only a light touch. But never say that the process is optional. It is never optional. It is simply adaptable. Build in get-out clauses for compliance, take the maintenance people’s problems with project-size methodologies seriously, create massively simplified tools appropriate to very low risk situations. Make sure everything is driven by a clear sense that real risks are being managed rather than a formal procedure being complied with. But never let them do nothing – that’s just the thin end of the wedge.

4. Make the waiver/exception process as simple as possible. Quick, clear lines of authority (ideally as local as possible), 5 questions maximum, rapid response guaranteed. I would suggest simple answers to questions like:
  • What do you want an exception/waiver from?
  • Why do you want it?
  • What will you do instead?
  • Why is that better than the standard process?
  • What residual risks does that create?
5. Ensure that the methodology is properly owned, so that there is someone to go to for a decision on what is really meant or needed by a specific item, or to approve an exception. This is crucial if the system is to continuously improve itself – someone has to have it as a high-priority responsibility to actually improve it.

6. Support users constantly by training a methodology expert or two (one of my clients had 6–7 for 600 engineers) who provide training, internal consultancy, project management consultancy, explanations and ideas for quick wins. They also provide a powerful conduit for communicating new ideas between groups.

7. Build a decent wiki (not a fixed website) that provides high-level process flow models to remind people what to do, and which can be drilled down into for more detail, online forms, etc. This paper you sent me is a prime example of a format people just won’t read – even I felt ill just looking at the endless levels of heading numbers. On the other hand, it’s not a bad script for a training course, so it’s not wasted. Even something as simple as online PowerPoint presentations can be very effective, although they tend to offend web purists! I have a couple I built for Citibank, Churchill Insurance, Amex, Accenture, etc., if you’d like to see them.

8. Build an effective lessons-learnt system that completes the loop from project experience to the methodology and (through rollouts and training) back to projects.

9. Finally, pure stick: Tie their bonuses to compliance with the methodology, as confirmed by an independent assessor. Brutal, unpopular, initially counter-cultural in many companies but amazingly effective. It provides a somewhat perverse way of dealing with the inevitable feeling on the software engineer’s part that they have little interest in complying other than simple obedience to a very dubious corporate rule – if all else fails, fine them for non-compliance. Having dealt with thousands of IT people, I can only say that it has its place in the methodology implementation toolkit.

I dare say that some or all of this could be sold to anyone who a) hated methodology enough; b) had no choice about complying; and c) could afford the likes of me to make it work!"

Wednesday, 3 June 2009

Don’t measure what you won’t manage

I would guess that everyone in business has been asked to fill in a timesheet with dozens of charge codes - one for this project, one for training, one for admin, one for this other activity, one for... The list is usually endless. And equally endless seems to be managers’ craving for more and more data. Right now I am working in an otherwise quite sane organisation where I am nevertheless expected to complete a timesheet in excruciating detail. Given that my time is not chargeable and I’m supposed to be the metrics and measurement guru around here, it’s all a bit galling, to say the least!

When I ask what they use it for, the answer often turns out to be ‘Nothing at the moment, but it will be useful...’ Oh really? Useful for what? And when? Naturally I don’t push this too far – some things are just corporate obsessions, and it’s definitely a Career-Limiting Move to question them too harshly.

Yet there are times when this fetishisation of numbers becomes quite bonkers. Two variants are especially stupid – when the data is deliberately falsified, and when the cost of collection is wildly over the top.

Take for example a consultancy I used to work for. A timesheet was put in every week, and then our chargeability was reviewed by the board – of which I was a member. Then one day I found myself being firmly reprimanded by the Chairman himself to the effect that no one was supposed to book more than 37.5 hours a week. In fact I had booked 62. So I asked him: whose data would you like me to falsify? No answer was forthcoming, and I went on booking my real hours. But the accounts department had firm instructions to edit my timesheet so that it fell in with the Chairman’s lack of numeracy.

Of course, it’s a petty story - until you work out how much effort is put into taking, processing and reporting measurements of all kinds that are never used. Probably tens of millions of people fill in a timesheet or some other record every day/week/month, only to have it effectively ignored.

But sometimes the waste is staggering. I used to work for a big consultancy, in an internal management role. One day a bunch of consultants I had not met before rolled up and announced that they were going to ‘fix’ our project proposal process. It seemed like a good idea – we were a bit haphazard and it was not unknown for us to sign up to a real disaster we really should have seen coming.

But then they started to tell me what they were going to do, and it was essentially a process of gathering the opinion of practically every senior manager and partner in the company. I was especially surprised at the long list of data they were going to collect for me. But I don’t have any use for this information, I said. But it’s very useful, they replied. For what? I asked. It will be very useful, they insisted. For what? I reiterated (I’m not very creative when it comes to people who repeat themselves). They would not back down, and I was in no hurry to admit that there was any point in collecting data about things no one actually wanted to know about.

So I decided to investigate the matter a little more thoroughly. I went to all of the people these consultants said they were helping to evaluate proposals and asked them two questions. One, as decision-makers, which parts of the information they were being offered would they actually use? And two, as information-suppliers, how much would it cost to generate the data they were being asked for?

The answer was less than astonishing. On average, only about half of the data that was to be collected would actually be used by anyone, and the total cost of this whole process would be about £60,000 per proposal. So we would be wasting about £30,000 every time we looked at a new job.

The moral of this tale? Don’t measure what you don’t manage. And while you’re at it, don’t measure things you do manage either, unless you are perfectly sure that the measurements will really be used – ideally as the clincher, but certainly as an important source of knowledge. Not that I would expect many companies to observe such a rule – after all, how many administrations, programme offices and finance departments would survive the resulting purge?

The other moral? Don’t ask people to solve problems they don’t understand and of which they have no experience, just because they are clever people and at a bit of a loose end. But that is a subject on which I could write a book.