
Tuesday, 11 September 2012

Are green fields always swamps?

My family are currently building a pond in our back garden. Two days, say the manuals. Two months, says my back – if you’re lucky.

The trouble is, the pond is only one project we have in hand. Half the garden is being redeveloped – an 8-metre fir has been uprooted, the old (and, it turns out, asbestos-lined) summerhouse has been demolished, a new greenhouse (right by the pond) has gone up, the beds have all been massively redesigned, half the major shrubs have been moved (and probably killed), the shed has had to be completely rearranged to accommodate a ton of drying firewood…

Not so much green field then as builder’s yard – or now, after we have tramped all over it for weeks on end and dug up everything and dumped three tons of topsoil and dug up half of it at least twice just to get going – swamp. No wonder it’s taken so long to complete the pond. The plan is in flux, everything around it is being changed, the very earth is unstable.

How like a so-called ‘green field’ site. And how very unlike a real green field. Had we just left everything else alone, we could have simply dug a pond into nice stable earth and lawn, and quite possibly done it in two days. Instead, on a ‘green field’ site, just as in our back garden, everyone is trampling over it, struggling to execute their conflicting projects, making hundreds of quick fixes and claiming this space here and that system there.

So like most of the programmes I have worked on – the whole of the management processes, architecture, systems and data is often the last thing to be created, so everything is just like a swamp – so many interim and temporary solutions, almost all of which will gel into permanent – and permanently obstructive – blots on the landscape.

Ho hum…


Thursday, 7 June 2012

What is a cultural change?

Being in the midst of my umpteenth change process, I have been pondering exactly what it is that makes a change a cultural change, as opposed to the ordinary kind. I think I've finally got it.

An ordinary change is one where they say Yes and then don't do it. A cultural change, by contrast, is one where they glare at you first, and only then say Yes and don't do it.

Wednesday, 6 June 2012

Advanced test scripts

All IT and most business organisations will be familiar with the idea that they have to test systems. A basic tool in this is the test script.

A test script sets out a succession of step-by-step instructions to carry out a test. It is crucial that testing be scripted to an extent commensurate with the risks of not testing properly, but when looking at either corporate standards or individual projects it’s striking how seldom even the minimum standards are met. Do your scripts state the expected result, so you can tell unequivocally whether the test has passed or failed? In my experience, most don’t. Nor do they tell the user what preconditions must be satisfied before the test can begin (navigate to…, using these privileges…), or tell you how to check the result, and so on.

So here is my recipe for a truly complete test script. You won’t want to use it because it’s long and complicated and you can’t see the point of it, but if you can tell me which fields are not needed by anyone with a legitimate interest in your testing, feel free to take them out.

The script falls into five sections (a skeleton of the whole script as a data structure follows the list):

  1. Document control
  2. Set-up information
  3. Execution
  4. Outcome
  5. Result

In detail:

  1. Document control data
    • Identifier.
      • Test name.
      • Reference no.
    • Parent identifiers.
    • Author.
    • Authoriser.
    • Preparation date.
    • Version.
  2. Set-up information
    • Planned execution date.
    • Tester
    • Function.
      • Summary of test’s objective or purpose.
      • Test condition(s) implemented.
      • Positive/negative test?
    • Start point (i.e., navigating to appropriate screen/process/field.)
      • Set up/Initial conditions.
      • User ID/privileges required.
      • Preceding actions/tests to prepare the application.
      • File selection, parameter settings, etc.
      • The preconditions for the test as a whole (e.g., account no., currency, etc.)
  3. Execution (probably in table format)
    • Step no. (if sequence is significant.)
    • Location (Screen/form/field name at which testing should begin – e.g., “Go to screen…”, “Select ‘Reports’ menu”, etc.)
    • Input data.
      • Data to be entered.
      • Option(s) to be selected.
    • Test actions (Step by step, checklist-style – e.g., “Enter data”, “Select option A”, “Click on Submit”, etc.)

  4. Outcome
    • Actual test time/date.
    • Actual result.
    • Checked boxes (against each test step, to confirm completion.)
    • Notes, with a general prompt to record anomalies, unexpected results, unplanned steps, & unusual system behaviour.
    • Narrative/commentary (to support re-runs & regression testing.)
    • Sign off.
    • Tester’s name & signature.


  5. Test result
    • Expected result.
    • Method for checking actual against expected (if not just “Check actual vs expected results” - including automated file comparisons, etc., as appropriate. E.g., checking back-end systems, end-of-day report, messages, etc.)
    • Pass/fail.
    • Cause of failure (e.g., “Comm320 failure”, “Data feed”).
    • Defect reference field (to locate defect reports, anomalies, etc.; may be needed at both step and script levels.)

Try it. Really, it works.

Invest in your corporate brain

The brain consumes about 20% of the body’s energy, and that is why we are by far the most dominant organism the world has ever seen.

From the point of view of development, I would say that formal processes represent about half of every company’s brain – a good deal of its memory, its practical skills, its controls of perception and behaviour, quite a lot of its capacity for reasoning, balance and coordination, and most of its basic language and social skills.

On the other hand, most companies' formal processes - their lifecycles, their operational mechanisms, their ways of working - are designed more like the brain of a crocodile. (The analogy is more exact than you might imagine.) They are spread out across the organism, they barely speak to one another, they are almost never tailored to the practical needs of their users, they are seldom created with a meaningful outcome or benefit in mind, never tracked or measured to see whether they are doing their job, and seldom fixed even when they plainly aren't, they have no effective direction or ownership, they are formally changed in an arbitrary and impulsive manner, they are never changed to anticipate a real problem, and so on.

Without major evolutionary leaps processes like this will never evolve into something truly intelligent.

Why is this?

It is, I think, because we don’t bother to invest in them to the extent necessary. Perhaps we just can’t believe that they need maybe 20% of all resources. Perhaps, like the brain itself, the significance of these processes is lost on people who, almost by definition, cannot see what they contribute. After all, we only need to invest so much in a corporate nervous system because most of the useful things an organisation does require a span of knowledge, control and attention no individual possesses. Hence the paradox of processes – they are installed because we cannot manage such vast and complex systems, but even when they are installed and doing their job perfectly well, we seldom construct, implement or support them in a manner that really improves our view of the whole. We are still cogs in a machine that, although it may now be working better, we still cannot grasp as a whole.

So what needs to be done to improve our grasp? There are quite a few things that can be done:

  1. Engineer processes properly in the first place!
    1. Make it a collective activity, and don’t underestimate how much effort it will take or the hidden cost of getting it wrong.
    2. Start with a clear end in mind – and constantly test processes to see whether they are doing their job.
  2. Make sure everyone understands what the processes are for – i.e., don’t allow people to wander blindly through their work.
    1. Train everyone and train in detail. The ROI on training is higher than the ROI on almost anything else in management.
    2. Not training is not only completely counter-productive but also extremely demoralising and defeats the purpose of hiring intelligent, capable people.
  3. Do not over-engineer processes:
    1. Define standards and processes in terms of functional goals, not detailed technical steps, so they can be implemented and adapted locally.
    2. Conversely, explicitly maximise local discretion and flexibility, so decisions made remotely cannot inadvertently force absurd actions locally.
  4. Make sure that you can fix any problems with the process.
    1. Instrument the processes to make sure they tell you how well they are doing without having to ask specially (see the sketch after this list).
    2. Have an intelligent waiver/exemption process so local teams can escape the worst excesses of processes not defined for their purposes.
    3. Part of every well-defined process is the possibility of changing it. It will need changing, and you need to build that fact into it. So make sure that all processes are owned, with clear local accountability and authority to improve. Invest in the time and re-training needed to do that.
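For instance, here is a minimal sketch of what instrumenting a process step might look like in Python; the step name and metrics are purely illustrative:

```python
# A minimal sketch of process instrumentation: a decorator that makes each
# process step report its own usage, duration and failure rate, so the
# process tells you how it is doing without being asked.
import time
from collections import defaultdict

metrics = defaultdict(lambda: {"runs": 0, "failures": 0, "total_secs": 0.0})

def instrumented(step_name):
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            except Exception:
                metrics[step_name]["failures"] += 1
                raise
            finally:
                metrics[step_name]["runs"] += 1
                metrics[step_name]["total_secs"] += time.monotonic() - start
        return inner
    return wrap

@instrumented("impact_assessment")   # hypothetical process step
def impact_assessment(change_request):
    ...                              # the real work goes here
```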


There’s a lot more, but if you are already doing all this, you probably have other things you should be focusing on instead.

Tuesday, 8 May 2012

How to implement almost anything...

This diagram summarises (my own view of) how to roll out almost anything in IT that is not too technical - not a systems deployment, perhaps (which would be full of technical preparations and tests) but certainly a new process or function.



It's quite obvious once you've nailed down the detail, but you may find it helpful. The steps for each of these tasks are summarised below:
  1. Stakeholder workshops
    • Scope
    • Goals
    • Impact
    • Process
    • Expectations
    • Participation
  2. Finalise key updates
    • Functions
    • Organisation
    • Products
    • Processes
    • Controls
    • Training
    • Intranet
  3. Project kick-off
    • Scope + goals
    • Budget
    • Requirements
    • Sponsor + stakeholders
    • Mandate/Project definition
    • Issues + risks
    • Critical success factors + KPIs
  4. Communicate with stakeholders
    • Expectations
    • Impact
    • Budget
    • Alignment
    • Authority to proceed
  5. Identify audiences
    • IT stakeholders
    • Business stakeholders
    • Delivery managers
    • Development teams
    • Operations + support
    • Regulators & compliance
    • Cross-organisational teams
  6. Capture current status
    • Function mapping
    • As-is process/systems
    • Awareness & interests
    • Functional impact
    • Implementation impact
    • SWOT, risks & issues
  7. Validate roll-out process
    • Process walkthrough
    • Impact assessment
    • Participation requirements & plan
    • Risks & issues
    • Go/No Go decision
  8. Tailor roll-out packages
    • Local expectations + impact
    • Entry/exit points/processes/products
    • Current level of knowledge
    • Local implementation process
    • Local interfaces, access, reporting, etc.
  9. Communicate with SMEs & specialised functions
    • Expertise requirements
    • Participation in transition
    • Support during transition
    • Facilitation
  10. Train users
    • Functional objectives
    • Walk through solution
    • Metrics & standards
    • Participation in transition
    • Cutover impact
    • Supervision & support
  11. Train SMEs
    • [as for Train Users]
  12. Brief stakeholders
    • Changes
    • Impact
    • Benefits/value
    • Transition process
  13. Roll out components
    • Access tools & privileges
    • Localisation
    • Local integration
    • Measurement tools
    • Support arrangements
    • Remove existing systems/materials
  14. Evaluate benefits
    • Take-up
    • Performance metrics
    • Quality metrics
    • User satisfaction
    • Transition costs
    • Stakeholder satisfaction
    • Compliance
  15. Close project
    • Validate against requirements
    • Project performance
    • Update support
    • Residual issues
    • Lessons learned
    • Project closure
    Of course, this assumes that you have actually captured the requirements, modelled the process and designed the implementation components in advance, which isn't always the case. But this process should at least tell you what you need to have done before you get to implementation.

    Your SDLC is your company's brain

    I have recently been asked to review a large insurance company's delivery methodology, and found a state of affairs I have not witnessed for about 20 years. It’s a sad thing that major companies routinely neglect their lifecycles, methods and processes, presumably because they do not appreciate just how valuable this area potentially is. It’s almost like they can’t see what good their brains do, so they neglect them in favour of other, more obviously useful organs (mainly the stomach, I think), and as a result their brains shrink and they become still less able to evaluate those same brains' purpose, effectiveness or value.

    In fact the brain is a very good analogue of an organisation’s development lifecycles, not least because it is the reason why we are by far the most dominant organism the world has ever seen. From the point of view of development, I would say that an organisation’s formal processes represent about half its brain – a good deal of its memory, its practical skills, its controls for perception and behaviour, quite a lot of its capacity for reasoning about cause and effect, balance and coordination, and most of its basic language and social skills. Yet in many companies development lifecycles are organised and managed like the brain of a crocodile, not a human being, and while that continues they will never evolve into an intelligent being. (The analogy is more exact than you might imagine.)

    But of course, the human brain consumes about 20% of the body’s energy, while most development lifecycles would be lucky to receive 1% of a company's attention. Which is odd, to say the least. An investment in the local development lifecycle would only need to improve performance by, say, 4-5% to pay for itself. In a £20 million programme, that means spending £1 million on the lifecycle would at least be covered - yet does anyone spend so much on this crucial part of development? As for a £100 million portfolio, how many companies spend £5 million a year on maintaining their development processes, let alone the central nervous system's budget of 20%?

    On the other hand, the efficiencies that could be achieved by integrating the delivery process as a whole and making it a dynamic part of real delivery are vast. Some time ago Accenture published a paper showing that training had an ROI of more than 350%, and I suspect that the same would be true of improving most companies’ development lifecycles.

    Here is a quick questionnaire on the most important issues, based on about 20 years of looking at (and occasionally helping to fix) the problem. Note that it does not start with the details of the lifecycle documents and products – that is the least important part! In more mature organisations the issue is often no more than obsolescence and missing items following from the lack of sustained management focus, but in all too many cases there are major gaps.
    1. Is there a global management approach to the delivery process itself?
      • Clear unitary and controlled ownership, management & rules of delegation of the end-to-end development process.
      • Is there a development strategy?
      • Is there real expertise in methodology development? Just asking PMs what they think is like asking drivers how to design a car – you’ll get some of the user requirements but nothing useful about the design.
      • Is there a coherent or proportionate rollout/update process?
    2. Is there simple, intelligible presentation & access?
      • A single, integrated model of delivery as a whole, including:
        • Governance, management, technical tasks and support functions?
        • All stages, including both work selection and initiation and solution deployment/transition and work closure?
      • A single, user-friendly site for accessing the delivery process as a whole?
      • Effective control over authoritative versions (and withdrawal of obsolete materials)?
    3. Does it cover all of your most important delivery strategies?
      • Outsourcing?
      • Offshoring?
      • Package procurement & implementation?
      • SAAS (Software As A Service – things like Salesforce.com)?
      • Does it have enough (or anything) to say about non-development activities?
      • Procurement?
      • Support and maintenance?
      • Technology upgrades (Oracle, SWIFT, etc.)?
    4. How mature is the lifecycle itself?
      • Are there proper delivery and management processes – or do only some components exist, without operating as a complete, end-to-end, Prince2-like process?
      • Is there a true programme management lifecycle (most organisations are dominated by programmes now)?
      • Does it include a convincing model of change management as a whole, notably:
      • Business design, development, readiness & transition?
      • Operational design, development, readiness & transition?
      • Does it handle very small projects (which can often be managed through a single artefact)?
    5. How well does the lifecycle define basic management elements?
      • Roles + responsibilities – are they current, consistent and complete?
      • Are there explicit criteria, rules and authorities for adaptation, scaling & exemption?
      • Does it include (or at least point to) integrated stage and task-level processes & tools?
      • If it is a waterfall lifecycle, does it include a risk-driven iteration model for managing individual tasks and stages?
      • Are there detailed procedures for basic management tasks (risks, issues, assumptions, dependencies, change/configuration control, product/document management, impact analysis, estimating, planning, resourcing…)?
      • Have standard stage/task/product level risks, assumptions, dependencies, etc. been identified and articulated?
      • Does it set credible gateways, including stage-end consolidation & validation, evaluation of full project content, review of performance to date and readiness for the next stage, etc.?
    6. Are all individual products actually adequate?
      • Are products defined by independent product descriptions?
      • Are there stage-, product- and task-level procedures, advice & information?
      • Are there samples of good practice, including instances for each major area of usage?
      • Does each item have a supporting quality checklist?
    7. Is alignment with other functions well defined?
      • Clear & efficient access to supporting management functions & data (resourcing, MI, finance, architecture, etc.)
      • Explicit alignment with and access to related standards + policies?
    8. Is the lifecycle actively supported?
      • Are there discipline or process owners, with clear roles & responsibilities, a proper management cycle and allocated time to do the job?
      • Are there SMEs, with clear requirements and channels for feeding their experience into the organisation (e.g., a central lessons learned system or training/briefing programme)?
      • Is there an R&D process (minimally to drive innovation, capture and socialise training and disseminate new joiners’ knowledge & experience)?
    9. Is there a training programme covering all processes, roles, tools & techniques?
      • Is there a specialised SDLC training function & system?
      • Is there a training programme for staff, consultants, outsourcers, offshore & contractors?
      • Are there self-training packages for key activities – individual products, reviews, testing, requirements management, etc. – so users can refresh their knowledge independently and as and when needed?
    Without a Yes to at least most of these questions, you have the methodological equivalent of a crocodile's brain, and it will be all but impossible to make substantial and sustainable progress to real intelligence.

    Saturday, 9 July 2011

    APM's mastery of metrics

    A colleague kindly sends me the output from the recent APM Assurance Specific Interest Group, which focuses on assessing project quality and performance. It's quite nice, at least by the standards of the profession, though with occasional lapses into half-baked thinking. As usual with most would-be management experts, they are obsessed with turning everything into quantitative measurement. It's not a bad idea in itself, though the uniqueness of individual projects and the fact (yes, fact) that metrics are only a means and never the end do suggest that the desire to be measurable is leading them to pointless and inconsequential quantification. This in turn means that the attempt to provide a solution results only in a more and more artificial definition of the problem.

    Take, for example, their attempt to quantify RAG ratings. I'm firmly opposed to this on principle - RAG should define a qualitative difference in consequence, not just an arbitrary definition of the 'Oh-well,-1-to-3-can-be-red-and-4-to-6-amber,-and-oh-how-can-we-say-that-10-is-really-special?-I-know,-let's-make-it-blue!' variety.

    Exactly how objective and rigorous this is comes out when they find that they can't actually tell you what the difference between neighbouring scores actually is. Their scoring for 4 is 'Better than a 3, but some elements required for a 5 rating are not in place'. And for 7? 'Better than a 6, but some elements required for an 8 rating are not in place'. As a colleague immediately responded to this marvellous insight, 'No shit, Sherlock…'

    I suppose there is some kind of sense in this. It lets you deal with the all-too-familiar situation where you find yourself unable to decide between alternatives. But unfortunately all that really means is that the scale you are trying to use is not defined objectively, rigorously or consistently enough (usually because, in my experience, it isn’t a single scale at all). But that is only to say that it is still too immature to be used. Yet here it is, being recommended as a professional standard. Which leads me to refer the reader - and the APM - to my previous piece on professionalism.

    The rest of the paper is riddled with the sort of inarticulacy and arbitrariness that suggests that project managers probably shouldn't be allowed to write standards or even evaluate projects. I particularly despair at the description of what needs to be in place to get a 10: 'Processes have been refined to be best practice. IT is used in an integrated way to automate the workflow, providing tools to improve quality and effectiveness. The project is demonstrating innovative techniques, thought leadership and best practice'.

    No definition of best practice, so it starts with a completely meaningless idea. The assumption that the Nirvana of management is automation is also a bit scary: providing IT-based tools to manage workflow and improve quality and effectiveness, far from being best practice, is about as basic as it gets. Well, it is around here. As for 'demonstrating innovative techniques, thought leadership and best practice' (there it is again!), having led innovation management programmes and having routinely laughed/despaired at the quality of thinking that portrays itself as 'leadership' in most organisations, I am astonished at what the APM has been prepared to release under its banner.

    (For what is, I think, a slightly more intelligent approach to RAG statuses - which is to say, one focused on action, not measurement - try here.)

    Monday, 6 June 2011

    What is the point of indemnity insurance?

    It's that time of the year when my insurers remind me that I need to pay them a few hundred pounds to renew my professional indemnity insurance. I need insurance because most of my clients insist. But I can't believe this makes much sense. Given that what I actually do is provide consultancy on governance, processes and methods, under what circumstances is someone going to sue me for - well, for anything? And if they did, how could they possibly prove that I was negligent (there are no public standards to adhere to) or that my work or advice led to material damage? I find it extremely hard to envisage any circumstances in which my insurer could possibly be forced to pay up.
    So, is there any data anywhere showing exactly how many claims there are against individual management consultants, and how much is actually paid out by insurers? If, as I suspect, the answers are 'very, very few' and 'very, very little' respectively, perhaps someone can tell me why my clients bother, and why I should pay up?

    Thursday, 26 May 2011

    When are your requirements ready?

    A very common failing in all sorts of projects is being stuck with what are in fact quite inadequate requirements.

    The fact is, most organisations are pretty bad at explaining exactly what it is they want a project to accomplish. There are lots of good reasons for this - the situation at the start of a project is often fluid or unclear, there are too many options to be precise, it's hard to define a truly innovative idea in detail until you have tried it out, conditions and priorities change as the project proceeds, better ideas surface, and so on.

    But that's not the same as simply being bad at requirements. The above issues relate mainly to the content of requirements, which is very hard to nail down definitively; what I am talking about here is their quality - a different issue. Badly defined requirements are a major cause of problems for projects and businesses alike, causing the routine delivery of the wrong thing and lots of unhappy stakeholders. Fortunately there are ways of ensuring that the requirements - such as they are - are at least defined well enough for the project to proceed reasonably comfortably.

    The issue I have in mind is the requirements review process - how requirements get signed off. There are lots of things you can do to make this fairly robust, but one technique I have seldom seen defined clearly enough is that of asking the requirements' principal users to confirm that they are fit for purpose.

    There are three key groups of people who have an interest in how well requirements are stated:
    - the analysts who will have to translate the requirements into a functional solution.
    - the (user and operational) acceptance testers who will have to check that the requirements have been met.
    - the operations personnel who will have to convert the requirements into SLAs and OLAs.

    These people are not generally very interested in the requirements' contents. But give them poor quality requirements - too vague, imprecise, unanalysed, inconsistent, with key areas missing, and so on - and they simply won't be able to do their job. Which means that the solution the project delivers is all but bound to leave everyone with a nasty taste in their mouths.

    All I am advocating here is that requirements be signed off by these groups. As far as I am aware, although many organisations ask their analysts to approve requirements, most don't ask testers or operations staff, and this may be a major cause of project failure. It's not hard to arrange, and it should certainly be welcomed by the groups in question. If, on the other hand, you cannot get their approval, maybe you should be looking to the users to define what they want a little more clearly - and so avoid storing up trouble for the future.
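    As a minimal sketch (assuming nothing about your tooling, and with illustrative group names), the readiness test is simply a conjunction:

```python
# Requirements count as 'ready' only when every group that has to consume
# them has confirmed they are fit for *its* purpose. Group names are
# illustrative, not a standard taxonomy.
REVIEWERS = ("analysts", "acceptance_testers", "operations")

def requirements_ready(approvals: dict[str, bool]) -> bool:
    """True only if every consuming group has signed the requirements off."""
    return all(approvals.get(group, False) for group in REVIEWERS)

print(requirements_ready({"analysts": True, "acceptance_testers": True}))
# False: operations have not signed off, so don't proceed yet
```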

    Of course, there's also a case for weak requirements. Well, not exactly weak, but at least requirements that recognise that they may not be the last word. Organisations and businesses change. So do markets, and so do good practice and the opportunity the requirement was originally designed to address. So if you're embarking on an 18-month project to deliver x, it's probably not too smart to try to nail down your definition of x too soon. But that is not to say that you should not try to meet the above test from the start - it's just that your project should also make a strong allowance for change. That could mean a large tolerance, but it should also mean setting the right expectations from the start. For example, the business and users should expect to have to re-validate their initial requirements at regular intervals, you might want to prioritise items that are pretty safe from future fluctuations (e.g., stable regulatory requirements or generic interface components), design for flexibility (loose coupling, modularity, etc.), and so on.

    This is the case for agile, of course - but of that more than enough has already been said by anyone and everyone, including me!

    Friday, 20 May 2011

    Defining RAG statuses

    In my work I come across a lot of attempts to explain in simple, one-word ways what the status of a piece of work – usually a project – really is. Much the most popular is the ‘traffic light’ system, or ‘RAG report’ as it’s known across the IT industry. It’s a great idea: simple, clear and apparently quite unequivocal.

    The only problem seems to be that in many organisations the very definition of Red, Amber and Green is usually, frankly, quite irrational.

    A fairly representative set of answers to the question ‘What do RAG statuses actually mean?’ can be found here: http://www.linkedin.com/answers/management/planning/MGM_PLN/186467-1517184. However, there is surprisingly little on the web about this topic, so here are my current thoughts on defining red, amber and green. Nothing radical, but a little more consistent and logical than some of the ideas I have seen floated, especially in the companies I have worked in.

    The first question that needs to be answered is what exactly RAG reports are for. If you look at most organisations’ RAG criteria, they are generally defined in terms of percentages or absolute numbers. For example, if a project budget looks like going over by 20%, it’s a Red. I don’t understand this approach, especially in a project-based organisation. One of the basic features of any intelligent project governance approach is to define project-specific tolerances to reflect the project-specific circumstances, known risks, and so on – not to treat all projects as though they were peas in a pod.

    So when project A is 20% over budget, that may indeed be disastrous, because its agreed budget tolerance is 10%. But project B, which has always been expected to need more re-financing at some point, has a tolerance of 30% (yes, such projects do exist), so overspending by 20% is not, by itself, cause for concern, and certainly not cause for trumpeting a disaster from the rooftops.

    So what – or, more precisely, who - are RAG reports for? First and foremost, they are not for everyone. By and large, they are for people who a) know the basic parameters of the project, including its tolerances for budget, delivery, and so on; and b) are in some sense accountable for the project’s success, or at least need to understand its prospects for success. In other words, RAG reports are aimed at people like the project board, quality managers, your PMO and so on.

    So what do they need to know about a project that can be usefully and meaningfully communicated in something as simple as a single colour? Really it’s very simple: Do you (the report’s audience) need to do anything about this work?

    So the message the RAG status needs to convey to the reader is:
    • Green: Everything’s fine, you have more pressing things to worry about, go away.
    • Amber: I have problems, but I’m pretty sure I can fix them with what I have available. So nothing to actually worry about yet, but you probably need to keep an eye on what happens next.
    • Red: I have real problems and I can’t solve them with what I have available. YOU NEED TO DO SOMETHING.
    Hence the RAG definitions I would recommend:
    • Green: All aspects of the project are fully under the PM's control using only the project's authorised plan & arrangements (e.g., budget, dependencies, resources, etc.).
    • Amber: Additional actions are required, but can be successfully managed within the project's authorised capabilities & tolerances.
    • Red: Cannot be resolved within the project's authorised capabilities & tolerances. Requires escalation.
    These are very general descriptions, of course, though none the worse for that. By being so general, they make it easier to achieve the consistency that is the basis of good quality management. But at the same time they are probably too abstract (I would not say vague) to be easily used by busy project managers who aren’t much inclined to debate the finer subtleties of their project’s status.
    So in addition to these high-level definitions, here are a few definitions of more detailed RAG statuses, as they relate to particular areas of project management. They are pretty useful tests of the overall status, but always bear in mind that the ultimate test is the above core definitions.
    Finance
    Green:
    • The project's current budget is sufficient for the project, and is expected to remain so.
    Amber:
    • There are outstanding changes that have yet to be budgeted for.
    • The PM does not maintain a record of expenditures.
    • Actual and forecast expenditure have not been reviewed/reconciled since the last report.
    • The authorised budget is currently being challenged.
    • The project is forecast to overspend (including tolerance) but there is a credible path to recovery.
    Red:
    • The project is forecast to overspend (including tolerance) and there is no credible path to recovery.
    • Finances were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
    • The project (or stage) has mobilised without budget authorisation.
    • The project is overspent (including tolerance).
    Scope & governance
    Green:
    • The authorised scope is correct, is authorised, meets stakeholder expectations, and is expected to remain so.
    • The current governance meets the project's needs, is within our governance framework, and is expected to remain adequate.
    Amber:
    • The project is not explicitly aligned with an authorised business goal and/or has moved from baseline scope.
    • There is at least one open & unauthorised change request (CR).
    • Cumulative impact of CRs exceeds original tolerance.
    • The authorised scope is forecast to become invalid (e.g., known change in business strategy) but there is a credible path to recovery.
    Red:
    • The authorised scope is forecast to become invalid and there is no credible path to recovery.
    • Scope & Governance was Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
    • The project has started without an authorised scope.
    • There is no Project Sponsor/Senior supplier/Senior user on your project board.
    • The authorised scope is no longer valid.
    Schedule
    Green:
    • The currently authorised plan and arrangements are sufficient to assure the successful delivery of the project as a whole.
    Amber:
    • Plan updates are needed to reflect expected changes in activity, scope, CRs etc.
    • The project plan has not been revised since the last report.
    • A critical path product/milestone has slipped/is forecast to slip, but there is a credible path to recovery.
    Red:
    • A critical path product/milestone has slipped/is forecast to slip, without a credible path to recovery.
    • Schedule was Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
    • The project (or stage) has started without an approved plan.
    • Unfinished work that should already have been complete has yet to be rescheduled.
    • Work is underway that is not on the authorised plan.
    Resources
    Green:
    • The current stage has named, agreed resources and the resource requirements for the project as a whole are agreed.
    Amber:
    • The plan includes over-committed resources.
    • The project lacks (or is forecast to lack) resources needed for successful delivery, but there is a credible path to recovery.
    Red:
    • The project lacks (or is forecast to lack) resources needed for successful delivery, and there is no credible path to recovery.
    • Resources were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
    • Plan contains tasks without predecessors or successors.
    • Plan contains tasks in the current stage without assigned resources.
    • The plan for the current stage is not fully resourced.
    • The plan for the project as a whole does not identify at least the resource types required.
    Risks & issues
    Green:
    • All known risks and issues can be managed within the current project arrangements & capabilities.
    Amber:
    • At least one severe risk/issue is unlikely to be resolved as planned.
    • The project has escalated at least one risk.
    Red:
    • The project has no effective risk/issue log.
    • Risks & Issues were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
    • The risk/issue log has not been reviewed since the last report.
    • At least one severe risk/issue cannot be resolved within the project's authorised capabilities & tolerances.
    Dependencies
    Green:
    • All dependencies for the project as a whole have been formally defined and agreed.
    Amber:
    • Not all dependencies for the project as a whole have been formally defined.
    • Not all dependencies for the project as a whole have been formally agreed on both sides.
    • An external dependency on the critical path has slipped (or is forecast to slip), but there is a credible path to recovery.
    Red:
    • Dependencies were Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
    • An external dependency on the critical path has slipped (or is forecast to slip), and there is no credible path to recovery.
    • Not all dependencies for the current stage have been formally identified and agreed with the responsible managers.
    • The project plan does not identify all external dependencies and deliveries.
    This multiplicity of criteria raises a key point: how many RAG statuses should your project have? Personally, I would strongly advocate using several RAG indicators at once. This not only gives your readers some idea of what questions they should be asking next, but also allows functions such as quality management or your PMO to compare reports from all their projects to identify hotspots and bottlenecks in the existing process that would perhaps benefit from a little company-wide improvement.

    Exactly which areas you choose to RAG is up to you, of course. But whatever they are, they should be the areas you regard as the best indicators of project success and failure. That’s why I tend to start with the set above: in my experience, dependencies and resourcing and all the rest tend to be the areas that drag a project under. You should choose your own, and test them every six months or so to see whether trends in individual RAG statuses did indeed predict success and failure. A few quick statistical tests using Excel are all you need (though what you use to replace unhelpful tests or unexplained failures is more speculative – a bit of an experiment).
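    If you would rather not do this in Excel, here is the same check sketched in Python; the data and area names are hypothetical:

```python
# A rough six-monthly sanity check: for each closed project, record the
# worst status each RAG area reported and the final outcome, then see
# whether the indicator actually discriminated between success and failure.
history = [
    {"dependencies": "Red",   "resources": "Green", "failed": True},
    {"dependencies": "Amber", "resources": "Green", "failed": False},
    {"dependencies": "Red",   "resources": "Amber", "failed": True},
    {"dependencies": "Green", "resources": "Green", "failed": False},
]

def predictiveness(area: str) -> float:
    """Failure rate when 'area' went Red/Amber minus the failure rate when
    it stayed Green. Near zero means the indicator is telling you nothing
    and is a candidate for replacement."""
    flagged = [p["failed"] for p in history if p[area] in ("Red", "Amber")]
    clean = [p["failed"] for p in history if p[area] == "Green"]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(flagged) - rate(clean)

for area in ("dependencies", "resources"):
    print(area, round(predictiveness(area), 2))
```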

    Of course, this leaves a very important point unclear. If one particular facet of my work – the dependencies, for example, or the risks – is red but the rest are green, how do I calculate the overall status?

    It is very tempting to fudge things here. If it’s mostly Green with just one Red, can’t we take a sort of average and call it Amber? No, we can’t – and the reason is simple. All the RAG criteria suggested above are individually capable of wrecking your project. Or if they aren’t they should not be on your list of questions. So if any of them is Red, the project as a whole is Red too.

    One more detail. All the above RAGs are based on objective information (though no information in business is safe from manipulation). But there is one area of subjective knowledge this leaves out: the manager’s own expectations of success. This is an important factor: a project manager faced with Reds and Ambers but who still expects to deliver as planned either has something interesting to tell you or needs to re-learn the basics of project management. Either way, I always include a ‘deliverability’ RAG – the PM’s assessment of how likely they are to succeed. The basic definitions of each colour are the same, and here are a few things PMs should ask themselves when setting their deliverability RAG:
    Deliverability
    Green:
    • You are confident that the project will deliver as planned and authorised, without disproportionate risks.
    Amber:
    • You are not confident that the project will deliver as planned and authorised, but there are viable methods for recovering from this.
    Red:
    • You are not confident that the project will deliver as planned and authorised, and there are no viable methods for recovering from this.
    • Deliverability was Amber in the last report and no recovery plan has yet been agreed to return those particular problems to Green.
    I have found this a very useful and workable system, even down to the fact that it makes the supporting tools really easy to build. It takes practically no knowledge of spreadsheets to calculate the overall RAG status of a piece of work based on this system. No complex look-ups of percentages, values or conditions: if it’s Red down below, it’s Red on top. Simple, effective, and above all else it tells its audience exactly what they want to know – What do I need to do?
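    The calculation itself really is trivial; here is a minimal sketch of the 'worst colour wins' rule in Python:

```python
# The overall status is simply the most severe of the individual RAG areas.
# No averaging: one Red anywhere makes the whole report Red.
SEVERITY = {"Green": 0, "Amber": 1, "Red": 2}

def overall_rag(statuses: dict[str, str]) -> str:
    return max(statuses.values(), key=SEVERITY.__getitem__)

print(overall_rag({
    "finance": "Green", "scope": "Green", "schedule": "Amber",
    "resources": "Green", "risks": "Red", "dependencies": "Green",
}))  # -> Red
```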

    Wednesday, 11 May 2011

    Professionals and practice

    Interesting discussion at my current client, about what to call the change management organisation. At the moment they are planning to call themselves 'the Change Practice', but the desired effect - of being compared with legal, medical and other sorts of professional 'practice' - is being undermined by a barrage of ribald jokes about 'still having to practise' - exactly the opposite of what was intended.

    The difficulty, as far as I can see, is two-fold. On the one hand, the business and IT managers I work with aren't professionals. They are often quite good, but they have none of the attributes of doctors or lawyers. There are few qualifications and none of any real substance. In the UK a doctor trains for five years and must be formally qualified to a very high standard before they are permitted to treat people independently, but how many weeks does it take a modestly experienced manager to master Prince2? Nor are they obliged to join professional bodies exercising legal powers to strike them off if they aren't competent or are guilty of malpractice.

    As for the values to which a manager is subject, there aren't any. Their only obligation is to do the job well enough not to get fired. No professional values, and absolutely none that transcend the interests of their employers - who in turn are under no obligation whatsoever to respect their managers' professional standards or concerns.

    And last but by no means least, the quality and performance standards to which real professionals - especially doctors and nurses - are held simply do not apply. Just imagine what sort of state we'd all be in if the average doctor had as many failures and complications as the average project or programme manager!

    On the other hand, businesses seem to be under the impression that selling something vigorously enough will somehow make the 'message' true. The discussion this all started from included a very senior member of the executive insisting that we could not call ourselves change 'management' because they wanted the name to convey not just management but also professionalism and leadership. But are they doing anything to empower their managers to lead? No. Are they inculcating a real professionalism? No. They like the sound of these words but, having no real idea what they mean, think that simply reciting them enough will somehow make them true.

    So managers are not professionals. Is there any prospect that they could be? In the public sector, perhaps, though the erosion of the independence of the civil service under the influence of consultants of all kinds makes that harder to imagine. As for business, absolutely no prospect at all. Managers are too in thrall to the interests, priorities and outrageously anti-professional powers of the businesses they work for.

    Wednesday, 13 April 2011

    Level 0 context diagrams

    I have long been interested in context diagrams - which is to say, single diagrams of the Big Picture within which a methodology (SDLC, etc.) operates. As a result of a succession of methodology-related engagements, here is my current top-level version:


    The idea is to identify all the factors that explain what, ultimately, the methodology is trying to accomplish, how it is governed, how project and programme goals, objectives and targets are set, what support is available, who controls the overall approach (e.g., the core methodologies), and so on. In my experience, most organisations address this issue in a very piecemeal manner, with occasional and very ad hoc references to the details scattered all across the methodology and in surrounding structures (e.g., PMO rules, local standards, and so on).

    This is unfortunate, as it invites conflict, makes it hard to understand the whole, makes compliance with the methodology much harder to justify, all but ensures that major errors and omissions will exist, and so on. It also makes it hard to identify who to go to when the methodology does not actually answer a question. Of course, defining all this will demand a vast amount of information that is typically either widely scattered, hard to find or simply missing. But at the very least, for each box you will need to know the following (sketched as a simple record after the list):

    • Overview of purpose in the organisation as a whole
    • Role in delivery (eg, direction, prioritisation, project governance, and so on)
    • Specific dependencies
    • Process/standards
    • Contacts
    • Organisation
    • Ownership
    • Management cycles
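    A minimal sketch of capturing this per box, assuming no particular tooling (field names are illustrative): a structured record makes the 'simply missing' information visible as empty fields.

```python
# One record per box in the context diagram; empty fields are the
# unanswered questions. Field names mirror the list above.
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    name: str
    purpose: str = ""            # overview of purpose in the organisation
    delivery_role: str = ""      # e.g., direction, prioritisation, governance
    dependencies: list[str] = field(default_factory=list)
    processes_standards: list[str] = field(default_factory=list)
    contacts: list[str] = field(default_factory=list)
    organisation: str = ""
    ownership: str = ""
    management_cycles: str = ""

pmo = ContextNode(name="PMO", delivery_role="project governance")
gaps = [f for f, v in vars(pmo).items() if not v]   # what still needs finding out
```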

    You can find an editable PowerPoint version here. I'd be interested in comments, and eventually plan to create a fully-fledged presentation explaining each item in the model in detail.

    Free stuff - no, really

    After a long absence, here we are again. Part of the time away has been spent creating a new website, which will be of little interest to most people except for the Downloads section.


    Unlike most such pages, this one really is designed to give you free stuff, not just adverts for myself. Knowing full well that this is good stuff (well, good enough for other people to pay me for it) and not wanting to let it fall into oblivion, I thought I’d just give it away. Really.


    Right now it has tools and training materials for lessons learnt systems, stakeholder management, various aspects of methodology, and so on.


    I plan to add to it occasionally. The main areas will be methodology, quality, and governance, but I have a good deal else. And if you have any requests, I may have something I could post just for you.

    Tuesday, 27 July 2010

    Agile: Guidelines or methodology?

    Another discussion on LinkedIn, this time about whether Agile is a methodology or just guidelines. The consensus seems to be guidelines, which seems to reflect the spirit of Agile better.

    But at the same time, this view seems only to address Agile in the abstract, not Agile (or any other delivery model) as implemented in any real organisation. Which is a pity, because it is at that point that the strains will start to be felt if Agile remains no more than guidelines. On the other hand, to convert it to a formal corporate methodology would not only defeat much of its underlying philosophy and approach but also lead to Agile being ossified in the same way that waterfall – which was never inherently rigid or bureaucratic – was ossified by immature implementations and too much top-down corporate management freakery.

    And of course, there have always been legitimate management reasons why Agile cannot be left to go its own way, any more than any other management tool. There will be inescapable (and entirely reasonable) reporting requirements, stakeholders will often want to know what progress is being made in non-Agile terms, and so on.

    So, for any organisation that does not want to see something as promising as Agile degenerate into either paperwork or making it up as you go along, it is necessary to be rather more specific about how either Agile-as-methodology or Agile-as-guidelines is implemented.

    So even if Agile is to be forced into the mould of a corporate methodology, any self-respecting implementation can include features that prevent it from becoming rigid, inappropriate and self-defeating. For example:

    • It will be specifically tailored to the type of work that is really being done.
    • It will be abstract enough to permit considerable leeway for professional judgement.
    • It will be fully scalable (no, not just big, medium and small).
    • It will include user-friendly mechanisms for granting exceptions and waivers.
    • It will include wide-ranging but rigorous (the very opposite of rigid) sets of meaningful options.
    • It will be implemented through training and tools, techniques and templates that make explicit the team’s authority to vary, depart from or just plain ignore the ‘rules’.
    And so on.

    I don’t see that any of this gets in the way of Agile, or how any complex organisation could safely or profitably implement it without at least a few of these quite standard methodology components.

    Friday, 18 June 2010

    How good is DSDM?

    I recently initiated a discussion on LinkedIn entitled How good is DSDM? Although there was (unsurprisingly) a consensus that DSDM was A Good Thing, we were collectively unable to come up with much hard data until Jennifer Stapleton – a past Technical Director of the DSDM Consortium – kindly offered me some data from Xansa (bought by Steria in 2007) and British Airways. Xansa’s data is especially interesting, as it covers a number of clients (including BT) and Xansa was, at that time, the world’s largest DSDM practice.

    The gist of the data is that DSDM offers huge improvements in productivity, team size, delivery time and project quality. Here are the basic graphs (not as contemporary as I'd like, but still solid data):


    Friday, 11 June 2010

    What, ultimately, is Agile about?

    There is an interesting discussion of Agile going on at LinkedIn at the moment. The topic under review is 'Transitioning from command and control to a servant based style of leadership'. Personally I think the idea of 'servant leadership' is both misconceived and redundant, as the answer (so far as I understand the issue) was provided by the German sociologist Max Weber about a century ago.

    In brief, at least as far as successful Agile projects are concerned, I suspect that this change in the way organisations work under Agile is closely connected to the distinction between being a professional and an employee.
    • An employee is someone you pay to be able to tell them what to do, and is best suited to command and control.
    • A professional is someone you pay so that they will tell you what to do, and so works better in a collaborative environment - which Agile is designed to create.
    Conversely, I suspect that whether an Agile initiative is successful depends heavily on the extent to which truly professional capabilities and a professional culture exist. In that respect it would be very interesting to hear from people for whom Agile had not worked as to why it had failed.

    Note the causal direction. If companies insist on command and control, they get employees – i.e., people who need to be told what to do. If you give people opportunity and responsibility (and a non-trivial amount of skill), you will get professionals.

    On the other hand, the training and coaching needed to get people who were previously treated as employees to operate as professionals (and therefore suited to Agile) can be very great. The transition is not easy or straightforward, not least because the skills required are by no means solely technical. There are personal and social capabilities that are also required to succeed at Agile. But they are encompassed by the concept of professionalism.

    This can be exemplified by a major cultural problem Agile implementations often seem to face, namely empowering staff to say no to their boss (e.g., the Agile PM). Managers need technical training (i.e., how to do Agile) but other team members need the social and personal ability to insist on their own professional perspective. Few organisations cultivate this attitude (though I have known a few), but I would say that it is crucial to making a success of Agile.

    The same point applies at the other end – to business stakeholders. They also frequently need a change of culture – to become involved, to own the project, to participate effectively, to accept an incremental approach and to be able to change their minds without embarrassment or political penalty.

    Wednesday, 26 May 2010

    Why prefer DSDM to Scrum?

    Ultimately practitioners know that there is no need to choose between Scrum and DSDM, of course: they can be integrated into a hybrid that suits your specific requirements. However, it is helpful to have a clear idea of what the different flavours are and what they are capable of, because it is not only purists who want to know exactly what you are doing: so will the people who are shelling out the cash to pay for the change. Being able to give them a clear choice between clearly defined options, plus clear criteria for choosing, is essential to selling any form of agile.

    So, here are some reasons for preferring DSDM to Scrum, in direct proportion to the size, complexity and innovativeness of the project and the difficulty of the technical and compliance requirements.
    1. The Foundation stage makes sure that everyone ‘gets’ what you are up to. Without this, something very bad is likely to happen to your project. Adding a 'Sprint 0' isn’t the answer unless it looks very like DSDM’s Foundations anyway.

    2. DSDM is a true end-to-end process, not just the middle, development snippet. Since most of the mistakes in IT are made before the ink is dry on the contract (internal or commercial), this is a crucial consideration.

    3. Defining the roles as completely as DSDM does can be extremely helpful in any but the most trivial of projects. After decades of IT neglecting this issue, it is refreshing that DSDM includes it. Very hard to do well (and to sell to the business and operations alike), but absolutely essential.

    4. Defining a few basic products – BAD, SAD etc. - is also very helpful. Given the complexity of most real projects, it can be impractical to rely too heavily on prototypes and the authority of empowered teams, powerful though they are.

    As this list suggests, DSDM is much more suited to a corporate environment, where the playfulness, autonomy and spontaneity of Scrum clash rather sharply with the demand for formal definition, accountability and predictability. It's hard to get the full benefits of Scrum while adhering to corporate dictates concerning governance and accountability, investment prioritisation, intelligibility and visibility to a wider group of stakeholders, quality management, reporting and architecture.

    Anders Larson has kindly pointed me to this good comparison of DSDM and Scrum from the DSDM Consortium site. Its author, Andrew Craddock, summarises the position very well: after listing the principles of the Manifesto for Agile Software Development, to which both Scrum and DSDM practitioners contributed and subscribe -

    • People and Interactions over Processes and Tools
    • Working Software over Comprehensive Documentation
    • Customer Collaboration over Contract Negotiation
    • Responding to Change over Following a Plan

    - he notes that 'DSDM recognizes value in the items on the right of the Manifesto statements more than Scrum does, whilst still putting the highest value on the items on the left. This allows DSDM to fit more comfortably with the normality of larger organisations and gives rise to some of the differences between these two Agile approaches.'

    Tuesday, 25 May 2010

    Why is DSDM called Atern? A methodology by any other name...

    Rant on.

    For some while I had wondered why DSDM’s most recent incarnation is called ‘Atern’. What could it possibly mean? An internet search – including a search of the DSDM site itself – unearthed nothing. Then the other day I was browsing through a discussion in the DSDM Group on LinkedIn entitled ‘What does Atern mean?’. Given that the discussion was initiated by David Winders, who seems to know what he is talking about when it comes to DSDM, this is slightly surprising.

    Like David, I awaited a potentially embarrassingly simple answer – and one came back straight away, from Inna Dalton:

    I was told that it stands for an arctic tern - a bird. There was something, it
    was claimed, in the fly pattern or behaviour of this bird that has some
    semblance to the iterative and/ or incremental nature of the method. Hence, the
    image of this bird on some of the books by DSDM Atern.
    Embarrassingly simple indeed. But who should be embarrassed by it is another question: not so much those who did not know as, I think, those who gave it the name in the first place. Because it lacks meaning, ‘Atern’ is just annoying, or perhaps a pretentious little joke for insiders. Now that I know the answer, my reaction is somewhere between a despairing ‘Oh dear’ and a thudding ‘So what?’ – exactly the opposite of what a brand name should evoke. What could possibly have possessed them to do something so crass?

    Still more importantly, plainly hardly anyone knows what the name means – i.e., that it is the name of a bird – so the species probably doesn’t matter much. Metaphorical names (e.g., a bird) are useful, but only if it is clear what the metaphor actually is!

    As David concluded, ‘I do hope the Arctic Tern doesn't become a Dodo or indeed the version isn't a Turkey’.

    Rant off.

    The business case for DSDM (and Agile generally)

    Currently I am participating in a very interesting discussion about the business case for DSDM with the LinkedIn site’s DSDM Group, particularly with David Winders.

    There seems to be very little hard evidence that DSDM is significantly more efficient than a waterfall approach at doing the work, which would seem to kill the business case stone dead. After all, why incur the costs of migration if the grass really isn’t any greener on the other side?

    But as the discussion has evolved, it has become clear that this is to misread what Agile methods like DSDM (or Scrum) are offering. (And perhaps this should have been obvious from the start – I only really noticed this point once the discussion was well under way).

    To boil down what is (for the time being) the final argument, there is indeed a narrowly economic argument for DSDM. Unlike a waterfall, DSDM is never going to waste stakeholder time and money by delivering (say) 100 function points that were agreed a year ago, when they were almost certainly based on premature and immature judgements. We probably don’t want exactly those things any more: our thinking has evolved, our goals have evolved, and the business situation has evolved. Instead it (DSDM or any other Agile methodology) will deliver 100 function points at much the same price per FP, but they will all be FPs we know we want.

    Add to this DSDM’s far closer control over the delivery process, which makes it far less likely that the whole project will come in over budget and schedule but under scope, and the business case for DSDM is clear: even if DSDM projects don’t cost less per FP, they represent better value for money, because every FP is one you actually want.
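
    To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The figures are entirely invented (the assumption that a quarter of waterfall FPs are no longer wanted is mine, purely for illustration):

        # Illustrative only: all figures are invented, not data from any real project.
        COST_PER_FP = 1000        # assume the same unit cost under either approach
        FPS_DELIVERED = 100

        def cost_per_wanted_fp(wanted_fraction):
            """Cost per function point the business still actually wants."""
            total_cost = COST_PER_FP * FPS_DELIVERED
            return total_cost / (FPS_DELIVERED * wanted_fraction)

        # Waterfall: suppose a quarter of what arrives was specified a year ago
        # and is no longer wanted. Agile: reprioritisation keeps everything wanted.
        print(cost_per_wanted_fp(0.75))   # ~1333 per wanted FP
        print(cost_per_wanted_fp(1.0))    # 1000 per wanted FP

    Same headline price per FP, but a third more per FP that anyone still wants – which is the business case in miniature.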

    Another interesting point was made by John Isgrove of Collaborative Consulting -

    What we found was that DSDM completed the same development in 66% less time. It did not complete in less effort however as with agile projects they tend to be shorter but fatter resource profiles i.e. work for less time but more people involved at the same time across the time period. Overall effort was about the same with higher proportions than the waterfall project being spent by the analysts and testers.

    Monday, 24 May 2010

    Moving to Agile - Identifying the truly essential documentation

    A crucial feature of migrating to Agile is identifying the irreducible documentation needed to support the process. For many Agile practitioners the very notion of a fixed documentation suite will raise hackles, but there are many stakeholders, not all of whom are directly involved in the delivery process, yet all of whom have interests that must be taken into account.

    To name only the most obvious, there are three general classes of stakeholders, each with their own distinct information needs:

    1. The delivery team itself
    2. The business
    3. Operations, compliance and support

    These groups are already often poorly served by waterfall processes; the risk is that Agile’s understandable and largely justified desire to take an axe to the great mass of pointless documentation will result in them being served still worse. So…

    The delivery team

    Well, obviously. And generally speaking the Agile assumption that there is, by default at least, no need for internal project documentation, holds good. Why would I need to write down what I can tell everyone in a tenth of the time? Why write down anything at all if its lifetime is likely to average 15 days?

    But, less obviously, is it in fact the case that they need nothing? In very simple projects, yes, it usually is; but in a more complex programme – for example, a dozen Agile workstreams flowing into a common integration/release cycle – they may well need a good deal more. The PM is likely to be unavailable (doing programme-level duties at meetings and forums) and, when they return to base, unable to convey everything by word of mouth.

    Then the standard criterion applies – fitness for purpose. But in a project of any significance (size or importance), that rule is unlikely to result in there being no documentation at all. Add to that the other reasons why documents exist – because the project is innovative, organisationally complex, or heavy with technical or regulatory content – and ‘pure’ Agile starts to look a bit thin. Absolutely never create more documentation than is strictly needed, but don’t assume that the minimum set is zero. That is usually wishful thinking.

    Business stakeholders

    Yes, a properly empowered business rep should allow you to dispense with almost all external review and approval, but even in the most benign environment there are likely to remain yet higher-level reports. Much as most of us would like to dispense with these too (why don’t they just have faith in us?), ultimately projects exist solely because they serve the interests of the organisation as a whole, and someone up there really does need to know what you are up to. So there is all but bound to be some form of reporting.

    However, taking a constructive approach to this can also persuade those to whom reports are due – PMOs, line managers, HR, finance, direct reports to senior stakeholders, etc. – that they don’t really need the detail they are probably accustomed to. Reporting strictly by exception is the starting point, as it is that, rather than no reporting at all, which is the true corollary of empowerment. Using the move to Agile to rationalise reporting isn’t a bad idea either.

    Operations, compliance and support

    Now we come to the groups who are most in need of solid documentation and most likely to be neglected. In many organisations the needs of operations, service transition and support teams are already poorly met, and Agile is unlikely to do them any favours. Nevertheless, it is crucial that their needs are met, and these go far beyond touching base with them or even having them represented on the project. They will spend far more on the delivered system than the developers ever will, and the better equipped they are to manage the delivered solution, the better for everyone.

    Operations, compliance and support will account for the great majority of the lifetime cost of all but the most superficial IT projects (e.g., throwaway web pages or transient rate changes), and their needs are not only substantial but also penetrate deeply into the heart of what is being developed. They all need to be consulted about the original requirements (e.g., to define SLAs) and strategy (e.g., to ensure sustainability); they all need early warning of what, functionally and technically, is coming down the track; they all need to know what exceptions have arisen as the original requirements/backlog is progressively shaped and re-shaped; and they all need to be aware of the (now multiple) schedules for release.

    Conclusion

    So what does this all add up to?

    1. There is a very great deal of documentation that genuinely can be thrown away – and good riddance. It adds nothing but dead weight to the project and should be discarded whenever possible. Which is remarkably often.
    2. Quite a lot of other stuff needs to be transformed. It is very likely that administrative functions (PMOs, HR, finance) will have to change the way they think about projects and how projects are reported on, but it is unlikely that they will eschew reporting altogether. Nor should they. Likewise for business stakeholders – they need to assign real power to their representatives within the project, but they are unlikely to abandon all visibility. And given their responsibilities, how wise would that be?
    3. Finally, there are those who will need what they have always needed. But if you don’t give operations and support what they need, they probably won’t notice much: by and large they aren’t getting it now either.

    Wednesday, 19 May 2010

    Moving to Agile - Critical business factors

    Some years ago I implemented DSDM (then the preferred flavour of Agile) at Churchill Insurance. As far as the core processes were concerned, there was no great difficulty (admittedly we threw everything at it), but the business was a problem for two key reasons: empowerment and availability.

    The nub of the empowerment issue was asking the business to allow their own representatives on the project enough authority to approve what was going on, on the spot (e.g., prototypes, changes, reprioritisations, etc.), without constantly referring back to senior management. In short, they could not let go of the strings. As anyone who has worked on IT projects for any length of time will know, there is a ludicrous irony in this, because quite a lot of the problem with delivering successful IT projects of any kind is the business’s ambivalent attitude to ownership. In all too many companies, the business wants to be in control of the project (i.e., able to make make-or-break calls about it) without taking responsibility for its delivery. The implicit question is always: do you prefer delivery or control? All too often the effective answer is ‘control’. This is as much a problem for Agile as it ever was for waterfall projects.

    The second problem was getting the required effort from the business representatives. These individuals were necessarily very experienced and valuable people (who else would you empower?), but that was precisely what made them hard to replace in their day jobs. We needed them to spend about three days a week on the project, but they still needed five (usually very long) days for their BAU work. No surprise, then, that project work soon started to take a back seat to the day job, the quality and speed of decision-making plummeted, and the project started to look pretty wonky.

    So although the business was keen on an Agile approach, they could not participate effectively. It was a long struggle – only partially successful – to deal with this problem.

    Moving to Agile - Creating an Agile environment

    In my experience a crucial precondition for achieving agility is a certain level of maturity. Although the core processes, roles and tools are critical, so is the presence of quite a wide range of highly standardised core mechanisms of which the agile operations can take advantage.

    The classic example, familiar from IT development (whence agility sprang, of course), is test automation. Without it the development team is unlikely to be able to check its day’s labours swiftly enough to move on confidently the next day. But test automation itself can only be adopted by an organisation that already has standardised test classes, a well-established test process, a clear understanding of the basic mechanisms of test scripting, and so on. Without all of these (and much more), test automation will fall flat on its face – becoming either ineffectual or rigid – and quickly start to turn agility into paralysis.
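
    For readers who have not met it in practice, here is a minimal sketch of the idea, using Python’s standard unittest module. The business function being tested is hypothetical, invented purely for the example:

        import unittest

        def apply_rate_change(premium, percent):
            """Hypothetical function under test: apply a percentage rate change."""
            return round(premium * (1 + percent / 100), 2)

        class TestRateChange(unittest.TestCase):
            # Each test states its expected result explicitly, so an automated
            # run can tell pass from fail without any human judgement.
            def test_increase(self):
                self.assertEqual(apply_rate_change(100.00, 10), 110.00)

            def test_no_change(self):
                self.assertEqual(apply_rate_change(100.00, 0), 100.00)

        if __name__ == '__main__':
            unittest.main()

    The value lies not in any single test but in being able to run hundreds of them, unattended, at the end of every day’s work – which presupposes exactly the standardised classes, processes and scripting disciplines just described.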

    Of course, in a very small, simple project or activity, the preconditions for agility are very limited. But in a more complex situation, such as BAU operations or a large-scale programme, agility can be achieved only by ensuring that a full ‘agile environment’ is also in place. The elements of this environment can themselves be agile, but they certainly must be present, and specifically geared to letting other areas take them completely for granted – it is, after all, the most basic basis of agility that the would-be agile activity can take everything in its environment for granted.

    Hence one of the key tasks – perhaps the single most important task – when implementing Agile methods is to investigate what the organisation’s ways of working can offer to the agile area – and what they demand from it too.

    A few of the areas you need to get right (there are many more – this is just a flavour) include:
    1. A governance system that allows for rapid validation and approvals of many incremental releases.
    2. Business and operational organisations and processes that are capable of assimilating frequent change.
    3. Office arrangements that support closely collocated teams.
    4. Extremely slick mechanisms for remote groups – vendors, outsourcers, other offices, and so on.
    5. System architectures that support rapid change.
    And so on – an enormous range of factors, many of which are typically deeply embedded in the wider organisation. It isn’t easy getting this sort of thing right.

    Wednesday, 21 April 2010

    Two Cheers for Bureaucracy

    ‘So, you love bureaucracy,’ says the researcher from the BBC. ‘Can you tell us why?’ She’s calling in response to my response to a BBC blog piece entitled ‘Are there too many bureaucrats in the UK?’, in which I had indeed admitted to loving bureaucracy.

    ‘Well,’ I reply, ‘“love” would be putting it a bit strongly. But I do think bureaucracy is the most important thing humanity has ever invented.’

    I see. Not content with loving these crushing administrative behemoths – he thinks these embodiments of all that is wrong with the modern corporate world are our greatest achievement. What will the funny man say next – that toothache is a much undervalued pleasure?

    But it’s true. Bureaucracy is the nervous system of society, and has been for five millennia. It puts electricity in the wires, food on the supermarket shelves and controls business and government’s every move. It puts these words in front of you right now.

    So what does a bureaucracy do that society finds so invaluable? It controls the flows of information and decisions an organisation needs to carry out its activity. Different bureaucracies do this more or less well, but as long as society includes lots of different individuals doing lots of different things, and they need to collaborate across wide spaces and long periods of time, then something has got to organise them, and it’s hard to imagine any alternative that won’t be bureaucracy by another name.

    But why does bureaucracy have such a bad name? Why does so much of it seem to degenerate into red tape? Why are politicians constantly planning to cut it?

    The full answer is complicated, but the first part is simple. Contrary to popular feelings of frustration – feelings I share whenever they break down on me – bureaucracies rarely go wrong. A past colleague of mine, Melissa, had a useful analogy: bureaucracies are the brains of society. But, she would add, like the brains that control our bodies, ‘we only notice society’s bureaucratic nervous system when it stops working. Any idea what your visual system is actually doing right now? Me neither. But we’d all find out soon enough if it stopped doing it’.

    Hence the ease with which a bureaucracy gets a bad name. Most work just fine practically all the time, but who gives thanks for the good old bureaucrats when the lights go on or our salaries arrive on time? When it goes wrong, though – the wrong tax demand, the double booking, the failed delivery – who doesn’t automatically curse the idiots who made this mistake? And if it’s only when it goes wrong that we notice bureaucracies, how can we help but have negative feelings about them?

    Actually it’s astonishing that bureaucracies work as well as they do. Even to professionals, it’s quite intimidating how much information and what a complicated process you need to come to what seems a simple decision. It takes careful thought, a strong sense of purpose and a great deal of effort to design, build and manage a bureaucracy of any size, let alone having it perform note-perfectly. And as soon as its parent organisation starts to change – which can well happen unintentionally – the bureaucracy also has to change. Given that change is increasingly a way of life for organisations, it’s not a recipe for easy success.

    On the other hand, the fact that bureaucracies are as imperfect as any other human creation is no argument for less bureaucracy. That would be like saying that, because cars sometimes break down, we should have fewer of them. Now, there are lots of reasons for having fewer cars, but that isn’t one of them. And just like your car, bureaucracies need occasional maintenance – which, in my professional experience, they seldom get.

    Not that I’m complaining. Most of my living comes from fixing management systems that have been allowed to wither – just recently one of the biggest UK supermarkets, a major player in the City exchanges, a healthcare company. They’re all equally lax about keeping themselves sharp – which is, after all, what even they say their administrations are for. Rather worryingly, though, I haven’t had to learn much in the course of two decades advising companies – there are still plenty making the same old mistakes.

    And every time I hear about some big consultancy or software company building yet another huge IT system for the government – often to the tune of hundreds of millions of pounds – I just remind myself that an IT system is simply an electronic bureaucracy. Given how badly we handle the paper variety, it is small wonder that so many major IT programmes – private as much as public – turn into a ‘bureaucratic’ quagmire. Years late, millions over budget, and few happy with the final result.

    In fact the way these huge government projects have so often failed illustrates what is wrong with the way bureaucracies – paper and electronic - are often treated by their owners. No one is quite sure what they are for, and executives are often allowed to make endless new demands for new information, new reports, new updates, without thought for the cost and disruption this entails. They’re also often allowed to change their minds in midstream. In fact the stream of fundamental mistakes seems to be as endless as it is easily repeated.

    Middle managers and staff then have little option but to turn their administrations into all things for all men: saying ‘No’ is too often a definite Career Limiting Move. And from then on, many bureaucracies face a constant demand for more and more, frequently with little budget or resource to do more than kludge together a jerry-built ‘solution’, which its ‘stakeholders’ then expect to have at their disposal forever. And if you add to this the number of reports that bureaucracies dutifully prepare and are then never read, you can perhaps imagine the disillusionment felt by many bureaucrats.

    So what is the end result? Ask the people at the sharp end. Call centre staff – the modern face of faceless bureaucracy – are quite routinely the butts of public scorn. Usually paid little more than the minimum wage, they are expected to answer perhaps 60 calls a day, day in, day out. Could you do that? I’m not sure that I could. And then there’s the abuse – not least the racial abuse routinely received by Indian call centre workers.

    As one anonymous call centre worker put it on The Weekly Gripe website, ‘It's shocking how badly these people are treated. First of all they are blamed for a multitude of sins by the customer about things that are completely beyond their control. As if that isn't bad enough, next they are treated like cannon fodder for the company to blame when it all goes wrong’.
    Or perhaps it’s in the nature of call centres: ‘I am sure a lot of people have the kind of mind set that means if they cannot see the person they are talking to, it is somehow perfectly acceptable to be rude, abrupt and patronising’.

    Do the staff deserve it? Of course not – and no doubt most abusive callers know that. But faced with delays or not being able to get what they need, who else is there to explode at but the anonymous, faceless innocent at the other end of the line?

    All in all, it’s astonishing how pleasant and polite the average call centre worker remains – but less than astonishing that the industry as a whole has staggering turnover rates. Industry figures suggest about 20% each year – which, at a steady rate, implies an average stay of five years (1 ÷ 0.20). Hard to imagine. But research by George Callaghan of the Open University has challenged the official figures, concluding that the average worker stays only eighteen months.

    ‘Many employees expressed extreme frustration with their jobs. They were under the impression they were employed for their great people skills, but then not allowed to use those skills’, Dr Callaghan observes. ‘This is one white collar environment where employee performance is measured by the second.’

    It’s a familiar accusation: bureaucracies crush the life out of professionals through their constant demands for risk assessments and reports and the constant micro-management of every aspect of their work. The public services especially seem to be prone to this – scrutiny of them is so much more intense.

    Yet this is not really the result of bureaucracy as such. It is perfectly possible to design the rules that govern an organisation to include great latitude for personal discretion. They can also be built to be extremely flexible and adaptable and to be responsive to change, personal experience, unique circumstances and individual needs. It’s not bureaucracies that are at fault, any more than word processors are to blame for bad novels.

    Bureaucracy as such requires only that information and decisions are designed and organised well enough to get the larger job done. In a public service organisation like a hospital, a school or a social services department, it would be perfectly possible to treat broad sweeps of professional activity as ‘black boxes’, leaving precisely what is done and how to the professionals.

    The bureaucracy might also need to know a little more about how the job is carried out – who actually did the work, what resources were used (for inventory purposes only), and so on – but more than that? Only if there was a compelling reason. These people are, after all, professionals. You have employed them because they know how to do the job better than you do.

    So what counts as a compelling reason for more bureaucratic control? Well, one thing shouldn’t: a media frenzy. Because even where tragedy strikes, it’s hard to imagine a better recipe for creating the wrong answer than covering your back.

    After the Baby P tragedy, ‘The front-line professionals – teachers, health visitors, doctors – took fright,’ recalls one healthcare worker, who prefers to remain anonymous, ‘and started to make far more referrals to their local social services department. But at the same time, our own local social services department started to block direct calls between outside professionals and their staff. They even changed their telephone numbers so we couldn’t call them personally. Then we’d have to go through a long Q&A session – who we were, the child’s name and address, a lot of other details – when all we wanted was to have a quick chat, professional to professional, about whether they needed to know more about our cases. Mostly they probably wouldn’t have wanted us to refer them, but we had to go through the same rigmarole every time.’

    A typical bureaucratic response to a crisis. It’s hard to see how this sort of control over professional communications made a child any safer, but it’s easy to see how it would protect the organisation.

    ‘Of course, when we finally got through, we just exchanged our direct phone numbers anyway, so the system was side-stepped almost as soon as it came into existence’, added my informant.
    But this isn’t really the result of bureaucracy. It’s the result of managerial paranoia. Well, perhaps not paranoia – after all, a social services department’s fear of another media feeding frenzy is well founded.

    So the answer is not more bureaucracy or less, but getting bureaucracy right. But this is not something that is likely to be achieved by politicians and business executives who understand little about their own organisations’ administrations beyond the a priori assumption that they must be bloated monsters.

    So the first step is to take a serious look. No, don’t hire a consultant to do it for you – go and look yourself. Martin Long, founder and ex-Managing Director of Churchill Insurance, insisted that every single Churchill employee spend regular mornings doing someone else’s job, including a turn on the call centre phones, and then suggest three things to improve the work. Nor was he above taking a turn himself – indeed, pictures of Martin on the line to customers were familiar to everyone.

    I have often thought that every MD and CEO should spend some time anonymously taking a closer look at their own organisation. Like King Harry before Agincourt, a stroll past the foot-soldiers’ tents might teach them a thing or two – a very surprising thing or two – about the organisations they imagine they understand. (They just need to avoid the shameless logic chopping Shakespeare puts into Harry’s mouth.)

    It’s not hard to do better than most organisations are doing right now. In fact there’s a bit of a vogue these days for so-called ‘lessons learnt’ systems that actively prompt a bureaucracy’s operators and users to feed their experience – good and bad – back to the managers who can do something about it. But it’s too early to get excited – this is only the latest of many rounds of self-improvement techniques that have not quite solved the problem. Or rather: quality circles, kaizen, ‘lean’ management and Six Sigma have all helped to make bureaucracies more efficient, but whether they have made them better for their workers, their customers or even the organisations that own them is a question that remains to be answered.

    Still, lessons learnt systems need to be tried, if only because, unlike many other improvement tools, they ask the fundamental question of what the bureaucracy is ultimately for. It’s a bit fiddly and for many organisations deeply counter-cultural. But it’s a lot cheaper than the millions they dole out every year to consultants like me to dig their existing bureaucracies out of the pits they have created by years of poorly thought-through initiatives, weakly managed change and simple neglect.

    In sum... One cheer for the general idea of bureaucracy, though it’s seldom carried out well. Another cheer for the poor bureaucrats, the butts of everyone’s scorn. But no cheers at all for the many organisations – private as well as public – who just don’t get it.

    [Listen to the original BBC broadcast here.]

    Saturday, 10 April 2010

    What are the Key Success Factors for a new CIO?

    I've been following the above discussion on LinkedIn here. Some interesting stuff, so I thought I'd summarise what I have read so far.

    The whole thing seemed to me to break into three parts: the first 90 days, creating the day job, and justifying the 'C' in CIO. Here goes...

    The first 90 days
    • Find out how you can save the company money immediately.
      • Ask everyone who reports to you for 10 ideas to save the company money.
    • Find out how you can make the company money.
      • Understand the business – goals, strategy.
      • Identify your key customers and stakeholders, and set up KPIs and CSFs for them.
    • Tap existing desire for change.
      • Poll IT and business for quick wins.
    • Establish your credibility.
      • Sell IT actively to executive & stakeholders.
      • Evangelize how information makes business competitive.
      • Lead how organization thinks about performance.
      • Build shared view & alliances with peers.
    • Know your department.
      • Operations, development, governance, architecture, support.
      • Define KPIs & results-based reporting.
    • Learn the business
    • Always have a backup plan for whatever the IT department does.
    Creating the day job
    • Build a strong team.
      • Talent, performance, delivery, R&R.
      • Stakeholder-facing responsibilities.
      • Self-managing, self-starting.
    • Integrate IT and business.
      • Business ownership of projects.
      • Shared/interlocking governance.
      • Budget/investment interlocks.
      • Portfolio management, etc.
    • Make sure IT do the basics right first time.
    • Emphasise speed of decision-making & delivery.
    • Get to grips with your basic technologies.
    • Bring development and operations closer.
    • Do change management simply but well.
    • Routinise governance, architecture, infrastructure and operations.
    • Build active innovation and self-improvement into IT.
      • Lessons learnt, empowerment, etc.
      • Track stakeholder satisfaction.
    • Develop backup plans for whatever IT does.
    Justifying the 'C' in CIO
    • Create a credible organisational plan that embeds the dynamism of your first 90 days in IT.
    • Be a business executive – not a techie.
    • Define a credible but radical vision that creates a quantum jump in IT’s competence and performance.
      • Goal: To move IT from technical enabler to business leader.
      • Create an IT strategy that solves real business problems.
      • Subordinate all tactics to this strategy.
    • Manage the politics.
    • Foster innovation and change.

    Thursday, 4 March 2010

    BPM, SOA and the relationship between business and IT

    I am struck by the fact that discussions about the perennially fraught relationship between business and IT always seem to omit any reference to the very different natures of the two. IT reality has always worked more or less at right-angles to business reality – an engineering discipline in the midst of business. The links between them are generally limited to very high levels, at which the relationship is mediated by very general language that does not really require either side to have any insight into the other (i.e., requirements), and very low levels that actually have little influence over the relationship as a whole (i.e., day-to-day system users).

    While this remains the case, there is no practical need for the relationship to get any better, and each side will continue to prioritise its own engineering or business perspective over the other side’s concerns. And while IT always reverts to a strictly engineering mode of thought and the business side lacks a genuine ‘engineering’ element of its own, it is very unlikely that this impasse will be overcome. We can all agree that ‘something should be done’ and even agree what it is, but however important this is, it will never be urgent, and certainly not capable of being built into both sides’ day-to-day ‘common sense’.

    However, my own impression is that there are developments in the pipeline that will overcome this. On the one hand, the emergence of service-oriented architectures in IT is forcing IT people to define what they do (even for themselves) in terms even the business can understand. On the other, the (rather slower) emergence of formal business process management is creating a kind of ‘business engineering’ that obliges businesses to think of their own activity in quasi-technical terms that even an IT person can recognise. Between them, SOA and BPM are creating the language, the conceptual framework and the technical toolkit that business and IT need to reach that ultimate goal, a unified business/IT worldview.
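
    As a hedged sketch of what such a shared vocabulary might look like at the level of code – the service name and its operation are invented for illustration, not drawn from any real SOA toolkit – consider a service contract written in Python:

        from dataclasses import dataclass
        from typing import Protocol

        @dataclass
        class Quotation:
            customer_id: str
            product: str
            premium: float

        class QuotationService(Protocol):
            """A service contract expressed in the business's own vocabulary.

            A BPM process step ('obtain quotation') can name and invoke this
            directly; everything behind the interface remains pure engineering.
            """
            def obtain_quotation(self, customer_id: str, product: str) -> Quotation:
                ...

    The point is not the technology but the legibility: a process analyst and a developer can both read the contract and mean the same thing by it.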

    Not that this is all that is wrong, or that it is likely to offer a complete solution. All the same, I doubt that there will be any solution until business and IT alike learn to think of themselves in such terms. This does not mean that IT people need to understand business or vice versa, but it does mean that they need to think of themselves in mutually compatible terms. Which is what, I think, a combination of SOA and BPM will achieve – without either side intending it.