Saturday, 9 July 2011
APM's mastery of metrics
Take, for example, their attempt to quantify RAG ratings. I'm firmly opposed to this on principle - RAG should define a qualitative difference in consequence, not just an arbitrary definition of the 'Oh-well,-1-to-3-can-be-red-and-4-to-6-amber,-and-oh-how-can-we-say-that-10-is-really-special?-I-know,-let's-make-it-blue!' variety.
Exactly how objective and rigorous this is comes out when they find that they can't actually tell you what the difference between neighbouring scores actually is. Their scoring for 4 is 'Better than a 3, but some elements required for a 5 rating are not in place'. And for 7? 'Better than a 6, but some elements required for an 8 rating are not in place'. As a colleague immediately responded to this marvellous insight, 'No shit, Sherlock…'
I suppose there is some kind of sense in this. It lets you deal with the all-too-familiar situation where you find yourself unable to decide between alternatives. But unfortunately all that really means is that the scale you are trying to use is not defined objectively, rigorously or consistently enough (usually because, in my experience, it isn't a single scale at all). But that is only to say that it is still too immature to be used. And yet here it is, being recommended as a professional standard. Which leads me to refer the reader - and the APM - to my previous piece on professionalism.
The rest of the paper is riddled with the sort of inarticulacy and arbitrariness that suggests that project managers probably shouldn't be allowed to write standards or even evaluate projects. I particularly despair at the description of what needs to be in place to get a 10: 'Processes have been refined to be best practice. IT is used in an integrated way to automate the workflow, providing tools to improve quality and effectiveness. The project is demonstrating innovative techniques, thought leadership and best practice'.
No definition of best practice, so it starts with a completely meaningless idea. The assumption that the Nirvana of management is automation is also a bit scary: providing IT-based tools to manage workflow and improve quality and effectiveness, far from being best practice, is about as basic as it gets. Well, it is around here. As for 'demonstrating innovative techniques, thought leadership and best practice' (there it is again!), having led innovation management programmes and having routinely laughed/despaired at the quality of thinking that portrays itself as 'leadership' in most organisations, I am astonished at what the APM has been prepared to release under its banner.
(For what is, I think, a slightly more intelligent approach to RAG statuses - which is to say, one focused on action, not measurement - try here.)
Monday, 6 June 2011
What is the point of indemnity insurance?
So, is there any data anywhere showing exactly how many claims there are against individual management consultants, and how much is actually paid out by insurers? If, as I suspect, the answers are 'very, very few' and 'very, very little' respectively, perhaps someone can tell me why my clients bother, and why I should pay up?
Thursday, 26 May 2011
When are your requirements ready?
A very common failing in all sorts of projects is being stuck with what are in fact quite inadequate requirements.
The fact is, most organisations are pretty bad at explaining exactly what it is they want a project to accomplish. There are lots of good reasons for this - the situation at the start of a project is often fluid or unclear, there are too many options to be precise, it's hard to define a truly innovative idea in detail until you have tried it out, conditions and priorities change as the project proceeds, better ideas surface, and so on.
But that's not the same as simply being bad at requirements. The above issues relate mainly to the content of requirements, which is very hard to nail down definitively; what I am talking about here is their quality - a different issue. Badly defined requirements are a major cause of problems for projects and businesses alike, causing the routine delivery of the wrong thing and lots of unhappy stakeholders. Fortunately there are ways of ensuring that the requirements - such as they are - are at least defined well enough for the project to proceed reasonably comfortably.
The issue I have in mind is the requirements review process - how requirements get signed off. There are lots of things you can do to make this fairly robust, but one technique I have seldom seen defined clearly enough is that of asking the requirements' principal users to confirm that they are fit for purpose.
There are three key groups of people who have an interest in how well requirements are stated:
- the analysts who will have to translate the requirements into a functional solution.
- the (user and operational) acceptance testers who will have to check that the requirements have been met.
- the operations personnel who will have to convert the requirements into SLAs and OLAs.
These people are not generally very interested in the requirements' content. But give them poor-quality requirements - too vague, imprecise, unanalysed, inconsistent, with key areas missing, and so on - and they simply won't be able to do their job. Which means that the solution the project delivers is all but bound to leave everyone with a nasty taste in their mouths.
All I am advocating here is that requirements be signed off by these groups. As far as I am aware, although many organisations ask their analysts to approve requirements, most don't ask testers or operations staff, and this may be a major cause of project failure. It's not hard to arrange, and it should certainly be welcomed by the groups in question. If, on the other hand, you cannot get their approval, maybe you should be looking to the users to define what they want a little more clearly - and so avoid storing up trouble for the future.
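The sign-off gate described above can be sketched in a few lines of code. This is a minimal illustration, not a prescription - the `Requirement` class, the group names and the example requirement are all invented for the purpose; the only point it encodes is that a requirement counts as 'ready' when every consuming group has confirmed it can work with it.

```python
from dataclasses import dataclass, field

# The three groups that consume requirements, per the discussion above.
# The names are illustrative; substitute whatever roles your organisation uses.
SIGN_OFF_GROUPS = {"analysts", "acceptance_testers", "operations"}

@dataclass
class Requirement:
    ref: str
    text: str
    approvals: set = field(default_factory=set)

    def sign_off(self, group):
        """Record one group's confirmation that the requirement is fit for purpose."""
        if group not in SIGN_OFF_GROUPS:
            raise ValueError(f"unknown sign-off group: {group}")
        self.approvals.add(group)

    @property
    def ready(self):
        # Fit for purpose only once every consuming group has approved.
        return self.approvals >= SIGN_OFF_GROUPS

req = Requirement("REQ-042", "System shall retry failed payments")
req.sign_off("analysts")
req.sign_off("acceptance_testers")
print(req.ready)   # False: operations have not yet confirmed
req.sign_off("operations")
print(req.ready)   # True
```

The point of modelling it this way is that readiness is a derived property, not a box someone ticks once: withdraw a group from the set and the requirement stops being 'ready'.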
Of course, there's also a case for weak requirements. Well, not exactly weak, but at least requirements that recognise that they may not be the last word. Organisations and businesses change. So do markets, so does good practice, and so does the opportunity the requirement was originally designed to address. So if you're embarking on an 18-month project to deliver x, it's probably not too smart to try to nail down your definition of x too soon. But that is not to say that you should not try to meet the above test from the start - it's just that your project should also make a strong allowance for change. That could mean a large tolerance, but it should also mean setting the right expectations from the start. For example, the business and users should expect to have to re-validate their initial requirements at regular intervals; you might want to prioritise items that are pretty safe from future fluctuations (e.g., stable regulatory requirements or generic interface components), design for flexibility (loose coupling, modularity, etc.), and so on.
This is the case for agile, of course - but of that more than enough has already been said by anyone and everyone, including me!
Friday, 20 May 2011
Defining RAG statuses
The only problem seems to be that in many organisations the very definition of Red, Amber and Green is usually, frankly, quite irrational.
A fairly representative set of answers to the question ‘What do RAG statuses actually mean?’ can be found here: http://www.linkedin.com/answers/management/planning/MGM_PLN/186467-1517184. However, there is surprisingly little on the web about this topic, so here are my current thoughts on defining red, amber and green. Nothing radical, but a little more consistent and logical than some of the ideas I have seen floated, especially in the companies I have worked in.
The first question that needs to be answered is what exactly RAG reports are for. If you look at most organisations' RAG criteria, they are generally defined in terms of percentages or absolute numbers. For example, if a project budget looks like going over by 20%, it's a Red. I don't understand this approach, especially in a project-based organisation. One of the basic features of any intelligent project governance approach is to define project-specific tolerances to reflect the project-specific circumstances, known risks, and so on – not to treat them all as though they were peas in a pod.
So when project A is 20% over budget, that may indeed be disastrous, because its agreed budget tolerance is 10%. But project B, which has always been expected to need more re-financing at some point, has a tolerance of 30% (yes, such projects do exist), so overspending by 20% is not, by itself, cause for concern, and certainly not cause for trumpeting a disaster from the rooftops.
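The contrast between projects A and B can be made concrete with a small sketch. This is illustrative only - the function name, signature and the Amber/Red boundaries are my own assumptions, and real criteria would follow the definitions later in this post - but it shows the essential move: the status depends on the project's own agreed tolerance, not on a fixed percentage.

```python
def budget_rag(actual, budget, tolerance_pct, credible_recovery=False):
    """Return a RAG status for spend against a project-specific tolerance.

    actual/budget are currency amounts; tolerance_pct is the project's
    agreed overspend tolerance (e.g. 10 for project A, 30 for project B).
    All thresholds here are illustrative assumptions, not a standard.
    """
    overspend_pct = (actual - budget) / budget * 100
    if overspend_pct <= 0:
        return "Green"
    if overspend_pct <= tolerance_pct:
        # Within authorised tolerance: manageable, but worth watching.
        return "Amber"
    # Beyond tolerance: Red unless there is a credible path to recovery.
    return "Amber" if credible_recovery else "Red"

# The same 20% overspend rates differently under different tolerances:
print(budget_rag(120, 100, 10))  # Red for project A (10% tolerance)
print(budget_rag(120, 100, 30))  # Amber for project B (30% tolerance)
```

Note that the fixed-percentage approach criticised above is exactly what you get if every project is forced to share the same `tolerance_pct`.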
So what – or, more precisely, who - are RAG reports for? First and foremost, they are not for everyone. By and large, they are for people who a) know the basic parameters of the project, including its tolerances for budget, delivery, and so on; and b) are in some sense accountable for the project’s success, or at least need to understand its prospects for success. In other words, RAG reports are aimed at people like the project board, quality managers, your PMO and so on.
So what do they need to know about a project that can be usefully and meaningfully communicated in something as simple as a single colour? Really it's very simple: do you (the report's audience) need to do anything about this work?
So the message the RAG status needs to convey to the reader is:
- Green: Everything’s fine, you have more pressing things to worry about, go away.
- Amber: I have problems, but I’m pretty sure I can fix them with what I have available. So nothing to actually worry about yet, but you probably need to keep an eye on what happens next.
- Red: I have real problems and I can’t solve them with what I have available. YOU NEED TO DO SOMETHING.
Or, more formally:
- Green: All aspects of the project are fully under the PM's control using only the project's authorised plan & arrangements (e.g., budget, dependencies, resources, etc.).
- Amber: Additional actions are required, but can be successfully managed within the project's authorised capabilities & tolerances.
- Red: Cannot be resolved within the project's authorised capabilities & tolerances. Requires escalation.
In addition to these high-level definitions, here are a few definitions of more detailed RAG statuses, as they relate to particular areas of project management. They are pretty useful tests of the overall status, but always bear in mind that the ultimate test is the above core definitions.
Finance
Green:
- The project's current budget is sufficient for the project, and is expected to remain so.
Amber:
- There are outstanding changes that have yet to be budgeted for.
- The PM does not maintain a record of expenditure.
- Actual and forecast expenditure have not been reviewed/reconciled since the last report.
- The authorised budget is currently being challenged.
- The project is forecast to overspend (including tolerance) but there is a credible path to recovery.
Red:
- The project is forecast to overspend (including tolerance) and there is no credible path to recovery.
- Finances were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
- The project (or stage) has mobilised without budget authorisation.
- The project is overspent (including tolerance).
Scope & Governance
Green:
- The authorised scope is correct, is authorised, meets stakeholder expectations, and is expected to remain so.
- The current governance meets the project's needs, is within our governance framework, and is expected to remain adequate.
Amber:
- The project is not explicitly aligned with an authorised business goal and/or has moved from baseline scope.
- There is at least one open & unauthorised change request (CR).
- The cumulative impact of CRs exceeds the original tolerance.
- The authorised scope is forecast to become invalid (e.g., known change in business strategy) but there is a credible path to recovery.
Red:
- The authorised scope is forecast to become invalid and there is no credible path to recovery.
- Scope & Governance was Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
- The project has started without an authorised scope.
- There is no Project Sponsor/Senior Supplier/Senior User on your project board.
- The authorised scope is no longer valid.
Schedule
Green:
- The currently authorised plan and arrangements are sufficient to assure the successful delivery of the project as a whole.
Amber:
- Plan updates are needed to reflect expected changes in activity, scope, CRs, etc.
- The project plan has not been revised since the last report.
- A critical path product/milestone has slipped/is forecast to slip, but there is a credible path to recovery.
Red:
- A critical path product/milestone has slipped/is forecast to slip, without a credible path to recovery.
- Schedule was Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
- The project (or stage) has started without an approved plan.
- Unfinished work that should already have been complete has yet to be rescheduled.
- Work is underway that is not on the authorised plan.
Resources
Green:
- The current stage has named, agreed resources and the resource requirements for the project as a whole are agreed.
Amber:
- The plan includes over-committed resources.
- The project lacks (or is forecast to lack) resources needed for successful delivery, but there is a credible path to recovery.
Red:
- The project lacks (or is forecast to lack) resources needed for successful delivery, and there is no credible path to recovery.
- Resources were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
- The plan contains tasks without predecessors or successors.
- The plan contains tasks in the current stage without assigned resources.
- The plan for the current stage is not fully resourced.
- The plan for the project as a whole does not identify at least the resource types required.
Risks & Issues
Green:
- All known risks and issues can be managed within the current project arrangements & capabilities.
Amber:
- At least one severe risk/issue is unlikely to be resolved as planned.
- The project has escalated at least one risk.
- The risk/issue log has not been reviewed since the last report.
Red:
- The project has no effective risk/issue log.
- Risks & Issues were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
Dependencies
Green:
- All dependencies for the project as a whole have been formally defined and agreed.
Amber:
- Not all dependencies for the project as a whole have been formally defined.
- Not all dependencies for the project as a whole have been formally agreed on both sides.
- An external dependency on the critical path has slipped (or is forecast to slip), but there is a credible path to recovery.
Red:
- An external dependency on the critical path has slipped (or is forecast to slip), and there is no credible path to recovery.
- Dependencies were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
- Not all dependencies for the current stage have been formally identified and agreed with the responsible managers.
- The project plan does not identify all external dependencies and deliveries.
Exactly which areas you choose to RAG is up to you, of course. But whatever they are, they should be the areas you regard as the best indicators of project success and failure. That's why I tend to start with the set above: in my experience, dependencies and resourcing and all the rest tend to be the areas that drag a project under. You should choose your own, and test them every six months or so to see whether trends in individual RAG statuses did indeed predict success and failure. A few quick statistical tests using Excel are all you need (though what you use to replace unhelpful tests or unexplained failures is more speculative – a bit of an experiment).
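The back-test suggested above is simple enough to sketch. The data below is entirely invented for illustration - in practice it would come from your PMO's historical status reports and post-project reviews - and the `failure_rate` helper is my own construction, but it captures the idea: a useful RAG indicator should show failure rates rising from Green through Amber to Red.

```python
# Invented illustrative history: each entry is one project's mid-project
# RAG for a given area (here, dependencies) and its eventual outcome.
history = [
    ("Red", False), ("Red", False), ("Amber", True),
    ("Green", True), ("Green", True), ("Amber", False),
]

def failure_rate(records, colour):
    """Fraction of projects flagged with this colour that went on to fail.

    Returns None when no project ever carried the colour.
    """
    flagged = [succeeded for rag, succeeded in records if rag == colour]
    if not flagged:
        return None
    return sum(not succeeded for succeeded in flagged) / len(flagged)

for colour in ("Green", "Amber", "Red"):
    print(colour, failure_rate(history, colour))
```

If the Red failure rate is no worse than the Green one, the criterion is not earning its place on the list and should be replaced.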
Of course, this leaves a very important point unclear. If one particular facet of my work – the dependencies, for example, or the risks – is red but the rest are green, how do I calculate the overall status?
It is very tempting to fudge things here. If it's mostly Green with just one Red, can't we take a sort of average and call it Amber? No, we can't – and the reason is simple. All the RAG criteria suggested above are individually capable of wrecking your project. Or if they aren't, they should not be on your list of questions. So if any of them is Red, the project as a whole is Red too.
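The 'no averaging' rule above amounts to a worst-colour rollup, which takes only a few lines. The function name and severity ordering are mine, but the logic is exactly the rule just stated: the overall status is the worst individual status, never a blend.

```python
# Severity ordering for the rollup: any single Red outranks everything.
SEVERITY = {"Green": 0, "Amber": 1, "Red": 2}

def overall_rag(statuses):
    """Overall project RAG = the worst of the facet-level RAGs."""
    return max(statuses, key=SEVERITY.__getitem__)

print(overall_rag(["Green", "Green", "Red", "Green"]))  # Red, not Amber
```

Contrast this with averaging the severity scores, which would report the example above as somewhere between Green and Amber - precisely the fudge the rule forbids.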
One more detail. All the above RAGs are based on objective information (though no information in business is safe from manipulation). But there is one area of subjective knowledge this leaves out: the manager's own expectations of success. This is an important factor: a project manager faced with Reds and Ambers who still expects to deliver as planned either has something interesting to tell you or needs to re-learn the basics of project management. Either way, I always include a 'deliverability' RAG – the PM's assessment of how likely they are to succeed. The basic definitions of each colour are the same, and here are a few things PMs should ask themselves when setting their deliverability RAG:
Deliverability
Green:
- You are confident that the project will deliver as planned and authorised, without disproportionate risks.
Amber:
- You are not confident that the project will deliver as planned and authorised, but there are viable methods for recovering from this.
Red:
- You are not confident that the project will deliver as planned and authorised, and there are no viable methods for recovering from this.
- Deliverability was Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
Wednesday, 11 May 2011
Professionals and practice
The difficulty, as far as I can see, is two-fold. On the one hand, the business and IT managers I work with aren't professionals. They are often quite good, but they have none of the attributes of doctors or lawyers. There are few qualifications and none of any real substance. In the UK a doctor trains for five years and must be formally qualified to a very high standard before being permitted to treat people independently, but how many weeks does it take a modestly experienced manager to master Prince2? Nor are managers obliged to join professional bodies exercising legal powers to strike them off if they aren't competent or are guilty of malpractice.
As for the values to which a manager is subject, there aren't any. Their only obligation is to do the job well enough not to get fired. No professional values, and absolutely none that transcend the interests of their employers - who in turn are under no obligation whatsoever to respect their managers' professional standards or concerns.
And last but by no means least, the quality and performance standards to which real professionals - especially doctors and nurses - are held simply do not apply. Just imagine what sort of state we'd all be in if the average doctor had as many failures and complications as the average project or programme manager!
On the other hand, businesses seem to be under the impression that selling something vigorously enough will somehow make the 'message' true. The discussion this all started from included a very senior member of the executive insisting that we could not call ourselves change 'management' because they wanted the name to convey not just management but also professionalism and leadership. But are they doing anything to empower their managers to lead? No. Are they inculcating a real professionalism? No. They like the sound of these words but, having no real idea what they mean, think that simply reciting them enough will somehow make them true.
So managers are not professionals. Is there any prospect that they could be? In the public sector, perhaps, though the erosion of the independence of civil service under the influence of consultants of all kinds makes that harder to imagine. As for business, absolutely no prospect at all. Managers are too in thrall to the interests, priorities and outrageously anti-professional powers of the businesses they work for.
Wednesday, 13 April 2011
Level 0 context diagrams
The idea is to identify all the factors that explain what, ultimately, the methodology is trying to accomplish, how it is governed, how project and programme goals, objectives and targets are set, what support is available, who controls the overall approach (e.g., the core methodologies), and so on. In my experience, most organisations address this issue in a very piecemeal manner, with occasional and very ad hoc references to the details scattered all across the methodology and in surrounding structures (e.g., PMO rules, local standards, and so on).
This is unfortunate, as it invites conflict, makes it hard to understand the whole, makes compliance with the methodology much harder to justify, all but ensures that major errors and omissions will exist, and so on. It also makes it hard to identify who to go to when the methodology does not actually answer a question. Of course, defining all this will demand a vast amount of information that is typically either widely scattered, hard to find or simply missing. But at the very least, for each box you will need to know:
- Overview of purpose in the organisation as a whole
- Role in delivery (e.g., direction, prioritisation, project governance, and so on)
- Specific dependencies
- Process/standards
- Contacts
- Organisation
- Ownership
- Management cycles
You can find an editable PowerPoint version here. I'd be interested in comments, and eventually plan to create a fully-fledged presentation explaining each item in the model in detail.
Free stuff - no, really
Unlike most such pages, this one really is designed to give you free stuff, not just adverts for myself. Knowing full well that this is good stuff (well, good enough for other people to pay me for it) and not wanting to let it fall into oblivion, I thought I’d just give it away. Really.
Right now it has tools and training materials for lessons learnt systems, stakeholder management, various aspects of methodology, and so on.
I plan to add to it occasionally. The main areas will be methodology, quality, and governance, but I have a good deal else. And if you have any requests, I may have something I could post just for you.