Tuesday, 11 September 2012

Are green fields always swamps?

My family are currently building a pond in our back garden. Two days, say the manuals. Two months, says my back – if you’re lucky.

The trouble is, the pond is only one project we have in hand. Half the garden is being redeveloped – an 8-metre fir has been uprooted, the old (and, it turns out, asbestos-lined) summerhouse has been demolished, a new greenhouse (right by the pond) has gone up, the beds have all been massively redesigned, half the major shrubs have been moved (and probably killed), the shed has had to be completely rearranged to accommodate a ton of drying firewood…

Not so much green field then as builder’s yard – or now, after we have tramped all over it for weeks on end and dug up everything and dumped three tons of topsoil and dug up half of it at least twice just to get going – swamp. No wonder it’s taken so long to complete the pond. The plan is in flux, everything around it is being changed, the very earth is unstable.

How like a so-called ‘green field’ site. And how very unlike a real green field. Had we just left everything else alone, we could have simply dug a pond into nice stable earth and lawn, and quite possibly done it in two days. Instead, just like our back garden, a ‘green field’ site has everyone trampling over it, struggling to execute their conflicting projects, making hundreds of quick fixes and claiming this space here and that system there.

So like most of the programmes I have worked on – the management processes, architecture, systems and data are often the last things to be settled, so everything is just like a swamp: so many interim and temporary solutions, almost all of which will gel into permanent – and permanently obstructive – blots on the landscape.

Ho hum…


Thursday, 7 June 2012

What is a cultural change?

Being in the midst of my umpteenth change process, I have been pondering exactly what it is that makes a change a cultural change, as opposed to the ordinary kind. I think I've finally got it.

An ordinary change is one where they say Yes and then don't do it. A cultural change, by contrast, is one where they glare at you first, and only then say Yes and don't do it.

Wednesday, 6 June 2012

Advanced test scripts

All IT and most business organisations will be familiar with the idea that they have to test systems. A basic tool in this is the test script.

A test script sets out a succession of step-by-step instructions to carry out a test. It is crucial that testing be scripted to an extent commensurate with the risks of not testing properly, but when looking at either corporate standards or individual projects it’s striking how seldom even the minimum standards are met. Do your scripts state the expected result, so you can tell unequivocally whether the test has passed or failed? In my experience, most don’t. Nor do they tell the user what preconditions must be satisfied before the test can begin (navigate to…, using these privileges…), or tell you how to check the result, and so on.

So here is my recipe for a truly complete test script. You won’t want to use it because it’s long and complicated and you can’t see the point of it, but if you can tell me which fields are not needed by anyone with a legitimate interest in your testing, feel free to take them out.

The script falls into five sections:

  1. Document control
  2. Setup information
  3. Execution
  4. Outcome
  5. Test result
  1. Document control data
    • Identifier.
      • Test name.
      • Reference no.
    • Parent identifiers.
    • Author.
    • Authoriser.
    • Preparation date.
    • Version.
  2. Set-up information
    • Planned execution date.
    • Tester.
    • Function.
      • Summary of the test’s objective or purpose.
      • Test condition(s) implemented.
      • Positive/negative test?
    • Start point (i.e., navigation to the appropriate screen/process/field).
    • Set-up/initial conditions.
      • User ID/privileges required.
      • Preceding actions/tests to prepare the application.
      • File selection, parameter settings, etc.
      • Preconditions for the test as a whole (e.g., account no., currency, etc.).
  3. Execution (probably in table format)
    • Step no. (if sequence is significant).
    • Location (screen/form/field name at which testing should begin; for example, “Go to screen…”, “Select ‘Reports’ menu”, etc.).
    • Input data.
      • Data to be entered.
      • Option(s) to be selected.
    • Test actions (step by step, checklist-style – e.g., “Enter data”, “Select option A”, “Click on Submit”, etc.).

  4. Outcome
    • Actual test time/date.
    • Actual result.
    • Checked boxes (against each test step, to confirm completion).
    • Notes, with a general prompt to record anomalies, unexpected results, unplanned steps & unusual system behaviour.
    • Narrative/commentary (to support re-runs & regression testing).
    • Sign-off.
      • Tester’s name & signature.


  5. Test result
    • Expected result.
    • Method for checking actual against expected (if not just “Check actual vs expected results” – including automated file comparisons, checking back-end systems, end-of-day reports, messages, etc., as appropriate).
    • Pass/fail.
    • Cause of failure (e.g., “Comm320 failure”, “Data feed”).
    • Defect reference field (to locate defect reports, anomalies, etc.; may be needed at both step and script levels).

Try it. Really, it works.
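
If you keep your scripts somewhere more structured than a spreadsheet, the recipe translates directly into a record format. Here is a minimal sketch in Python – the field names are my own illustration, not any standard, so map them onto whatever test tooling you actually use:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A minimal sketch of the five-section script as structured data.
# Field names are illustrative; map them onto your own test tooling.

@dataclass
class TestStep:
    number: int              # step no., if sequence is significant
    location: str            # screen/form/field, e.g. "Select 'Reports' menu"
    input_data: str          # data to be entered or option to be selected
    action: str              # e.g. "Enter data", "Click on Submit"
    completed: bool = False  # the checked box against each step

@dataclass
class TestScript:
    # 1. Document control
    test_name: str
    reference_no: str
    author: str
    authoriser: str
    version: str
    # 2. Set-up information
    objective: str
    preconditions: List[str]  # user ID/privileges, preceding tests, files, etc.
    # 3. Execution
    steps: List[TestStep]
    # 5. Test result (defined up front, before the test is run)
    expected_result: str
    checking_method: str      # how to compare actual against expected
    # 4. Outcome (filled in at run time)
    actual_result: Optional[str] = None
    passed: Optional[bool] = None
    cause_of_failure: Optional[str] = None
    defect_reference: Optional[str] = None
    notes: List[str] = field(default_factory=list)
```

The point is not the code but the schema: once every script has to pass through a structure like this, the habitually missing fields – expected result, preconditions, checking method – become impossible to overlook.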

Invest in your corporate brain

The brain consumes about 20% of the body’s energy, and that investment is why we are by far the most dominant organism the world has ever seen.

From the point of view of development, I would say that formal processes represent about half of every company’s brain – a good deal of its memory, its practical skills, its controls of perception and behaviour, quite a lot of its capacity for reasoning, balance and coordination, and most of its basic language and social skills.

On the other hand, most companies' formal processes - their lifecycles, their operational mechanisms, their ways of working - are designed more like the brain of a crocodile. (The analogy is more exact than you might imagine.) They are spread out across the organism, they barely speak to one another, and they are almost never tailored to the practical needs of their users. They are seldom created with a meaningful outcome or benefit in mind, never tracked or measured to see whether they are doing their job, and seldom fixed even when they plainly aren't. They have no effective direction or ownership, they are formally changed in an arbitrary and impulsive manner, they are never changed to anticipate a real problem, and so on.

Without major evolutionary leaps, processes like this will never evolve into something truly intelligent.

Why is this?

It is, I think, because we don’t bother to invest in them to the extent necessary. Perhaps we just can’t believe that they need maybe 20% of all resources. Perhaps, like the brain itself, the significance of these processes is lost on people who, almost by definition, cannot see what they contribute. After all, we only need to invest so much in a corporate nervous system because most of the useful things an organisation does require a span of knowledge, control and attention no individual possesses. Hence the paradox of processes – they are installed because we cannot manage such vast and complex systems unaided, but even when they are installed and doing their job perfectly well, we seldom construct, implement or support them in a manner that really improves our view of the whole. We remain cogs in a machine that, although it may now work better, we still cannot grasp as a whole.

So what can be done to improve our grasp? Quite a few things:

  1. Engineer processes properly in the first place!
    1. Make it a collective activity, and don’t underestimate how much effort it will take or the hidden cost of getting it wrong.
    2. Start with a clear end in mind – and constantly test processes to see whether they are doing their job.
  2. Make sure everyone understands what the processes are for – i.e., don’t allow people to wander blindly through their work.
    1. Train everyone and train in detail. The ROI on training is higher than the ROI on almost anything else in management.
    2. Not training is not only completely counter-productive but also extremely demoralising and defeats the purpose of hiring intelligent, capable people.
  3. Do not over-engineer processes:
    1. Define standards and processes in terms of functional goals, not detailed technical steps, so they can be implemented and adapted locally.
    2. Conversely, explicitly maximise local discretion and flexibility, so decisions made remotely cannot inadvertently force absurd actions locally.
  4. Make sure that you can fix any problems with the process.
    1. Instrument the processes so that they tell you how well they are doing without your having to ask specially (see the sketch after this list).
    2. Have an intelligent waiver/exemption process so local teams can escape the worst excesses of processes not defined for their purposes.
    3. Part of every well-defined process is the possibility of changing it. It will need changing, and you need to build that fact into it. So make sure that all processes are owned, with clear local accountability and authority to improve. Invest in the time and re-training needed to do that.
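
On point 4.1, here is a minimal sketch of what ‘instrumenting’ a process might mean in practice – the class, event names and metrics are invented for illustration:

```python
import time
from collections import defaultdict

# Minimal sketch: a process that reports on itself, so nobody has to
# ask specially how it is doing. Names and metrics are illustrative.

class InstrumentedProcess:
    def __init__(self, name):
        self.name = name
        self.counts = defaultdict(int)
        self.durations = []

    def record(self, started_at, succeeded=True):
        """Log each run: the process accumulates its own metrics."""
        self.counts["ok" if succeeded else "failed"] += 1
        self.durations.append(time.time() - started_at)

    def health_report(self):
        """The process tells you how well it is doing, unprompted."""
        total = self.counts["ok"] + self.counts["failed"]
        failure_rate = self.counts["failed"] / total if total else 0.0
        avg = sum(self.durations) / len(self.durations) if self.durations else 0.0
        return {"process": self.name, "runs": total,
                "failure_rate": failure_rate, "avg_duration_s": avg}

# Example: one successful run of a hypothetical change-approval process.
proc = InstrumentedProcess("change approval")
t0 = time.time()
proc.record(t0, succeeded=True)
print(proc.health_report())
```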


There’s a lot more, but if you are already doing all this, you probably have other things you should be focusing on instead.

Tuesday, 8 May 2012

How to implement almost anything...

This diagram summarises (my own view of) how to roll out almost anything in IT that is not too technical – not a systems deployment, perhaps (which would be full of technical preparations and tests) but certainly a new process or function.



It's quite obvious once you've nailed down the detail, but you may find it helpful. The steps for each of these tasks are summarised below:
  1. Stakeholder workshops
    • Scope
    • Goals
    • Impact
    • Process
    • Expectations
    • Participation
  2. Finalise key updates
    • Functions
    • Organisation
    • Products
    • Processes
    • Controls
    • Training
    • Intranet
  3. Project kick-off
    • Scope + goals
    • Budget
    • Requirements
    • Sponsor + stakeholders
    • Mandate/Project definition
    • Issues + risks
    • Critical success factors + KPIs
  4. Communicate with stakeholders
    • Expectations
    • Impact
    • Budget
    • Alignment
    • Authority to proceed
  5. Identify audiences
    • IT stakeholders
    • Business stakeholders
    • Delivery managers
    • Development teams
    • Operations + support
    • Regulators & compliance
    • Cross-organisational teams
  6. Capture current status
    • Function mapping
    • As-is process/systems
    • Awareness & interests
    • Functional impact
    • Implementation impact
    • SWOT, risks & issues
  7. Validate roll-out process
    • Process walkthrough
    • Impact assessment
    • Participation requirements & plan
    • Risks & issues
    • Go/No Go decision
  8. Tailor roll-out packages
    • Local expectations + impact
    • Entry/exit points/processes/products
    • Current level of knowledge
    • Local implementation process
    • Local interfaces, access, reporting, etc.
  9. Communicate with SMEs & specialised functions
    • Expertise requirements
    • Participation in transition
    • Support during transition
    • Facilitation
  10. Train users
    • Functional objectives
    • Walk through solution
    • Metrics & standards
    • Participation in transition
    • Cutover impact
    • Supervision & support
  11. Train SMEs
    • [as for Train Users]
  12. Brief stakeholders
    • Changes
    • Impact
    • Benefits/value
    • Transition process
  13. Roll out components
    • Access tools & privileges
    • Localisation
    • Local integration
    • Measurement tools
    • Support arrangements
    • Remove existing systems/materials
  14. Evaluate benefits
    • Take-up
    • Performance metrics
    • Quality metrics
    • User satisfaction
    • Transition costs
    • Stakeholder satisfaction
    • Compliance
  15. Close project
    • Validate against requirements
    • Project performance
    • Update support
    • Residual issues
    • Lessons learned
    • Project closure
Of course, this assumes that you have actually captured the requirements, modelled the process and designed the implementation components in advance, which isn't always the case. But this process should at least tell you what you need to have done before you get to implementation.
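
For what it’s worth, the sequence above also translates into a checklist you can track as data. Here is a minimal sketch in Python, with the Go/No Go decision at step 7 blocking everything after it – the gating logic is my own illustration, not part of the diagram:

```python
from typing import Optional

# Minimal sketch: the roll-out sequence as a gated checklist.
# Step names follow the list above; the gating logic is illustrative only.

ROLLOUT_STEPS = [
    "Stakeholder workshops",
    "Finalise key updates",
    "Project kick-off",
    "Communicate with stakeholders",
    "Identify audiences",
    "Capture current status",
    "Validate roll-out process",   # ends with the Go/No Go decision
    "Tailor roll-out packages",
    "Communicate with SMEs & specialised functions",
    "Train users",
    "Train SMEs",
    "Brief stakeholders",
    "Roll out components",
    "Evaluate benefits",
    "Close project",
]

GATE = ROLLOUT_STEPS.index("Validate roll-out process")

def next_step(completed: set, go_decision: Optional[bool]) -> Optional[str]:
    """Return the next step to start, refusing to pass the gate without a Go."""
    for i, step in enumerate(ROLLOUT_STEPS):
        if step in completed:
            continue
        if i > GATE and go_decision is not True:
            return None  # blocked: no Go decision recorded at step 7
        return step
    return None  # everything is done

# Example: steps 1-7 are complete but the Go/No Go has not been recorded.
done = set(ROLLOUT_STEPS[:GATE + 1])
assert next_step(done, go_decision=None) is None
assert next_step(done, go_decision=True) == "Tailor roll-out packages"
```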

Your SDLC is your company's brain

I have recently been asked to review a large insurance company's delivery methodology, and found a state of affairs I had not witnessed for about 20 years. It’s a sad thing that major companies routinely neglect their lifecycles, methods and processes, presumably because they do not appreciate just how valuable this area potentially is. It’s almost as if they can’t see what good their brains do, so they neglect them in favour of other, more obviously useful organs (mainly the stomach, I think), and as a result their brains shrink and they become still less able to evaluate those same brains' purpose, effectiveness or value.

In fact the brain is a very good analogue of an organisation’s development lifecycles, not least because it is the reason why we are by far the most dominant organism the world has ever seen. From the point of view of development, I would say that an organisation’s formal processes represent about half its brain – a good deal of its memory, its practical skills, its controls for perception and behaviour, quite a lot of its capacity for reasoning about cause and effect, balance and coordination, and most of its basic language and social skills. Yet in many companies development lifecycles are organised and managed like the brain of a crocodile, not a human being, and while that continues they will never evolve into an intelligent being. (The analogy is more exact than you might imagine.)

But of course, the human brain consumes about 20% of the body’s energy, while most development lifecycles would be lucky to receive 1% of a company's attention. Which is odd, to say the least. An investment in the local development lifecycle only needs to improve performance by the same proportion it costs to pay for itself – say, 4-5%. In a £20 million programme, that means spending £1 million on the lifecycle would be covered by a mere 5% improvement in delivery – yet does anyone spend so much on this crucial part of development? As for a £100 million portfolio, how many companies spend £5 million a year on maintaining their development processes, let alone the central nervous system's budget of 20%?
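
A back-of-the-envelope check of that break-even arithmetic, using the post’s own figures:

```python
# Back-of-the-envelope: what improvement must a lifecycle investment
# produce before it pays for itself? Figures are the examples above.
programme_budget = 20_000_000  # a £20 million programme
lifecycle_spend = 1_000_000    # £1 million invested in the lifecycle

break_even = lifecycle_spend / programme_budget
print(f"Break-even improvement: {break_even:.0%}")  # 5%
```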

On the other hand, the efficiencies that could be achieved by integrating the delivery process as a whole and making it a dynamic part of real delivery are vast. Some time ago Accenture published a paper showing that training had an ROI of more than 350%, and I suspect that the same would be true of improving most companies’ development lifecycles.

Here is a quick questionnaire covering the most important issues, based on about 20 years of looking at (and occasionally helping to fix) the problem. Note that it does not start with the details of the lifecycle documents and products – that is the least important part! In more mature organisations the issues are often no more than obsolescence and missing items following from a lack of sustained management focus, but in all too many cases there are major gaps.
  1. Is there a global management approach to the delivery process itself?
    • Is there clear, unitary and controlled ownership and management of the end-to-end development process, with rules for delegation?
    • Is there a development strategy?
    • Is there real expertise in methodology development? Just asking PMs what they think is like asking drivers how to design a car – you’ll get some of the user requirements but nothing useful about the design.
    • Is there a coherent and proportionate rollout/update process?
  2. Is there simple, intelligible presentation of, and access to, the process?
    • A single, integrated model of delivery as a whole, including:
      • Governance, management, technical tasks and support functions?
      • All stages, from work selection and initiation through to solution deployment/transition and work closure?
    • A single, user-friendly site for accessing the delivery process as a whole?
    • Effective control over authoritative versions (and withdrawal of obsolete materials)?
  3. Does it cover all of your most important delivery strategies?
    • Outsourcing?
    • Offshoring?
    • Package procurement & implementation?
    • SaaS (Software as a Service – things like Salesforce.com)?
    • Does it have enough (or anything) to say about non-development activities?
      • Procurement?
      • Support and maintenance?
      • Technology upgrades (Oracle, SWIFT, etc.)?
  4. How mature is the lifecycle itself?
    • Are there proper delivery and management processes – or do some components exist without operating as a complete, end-to-end, Prince2-like process?
    • Is there a true programme management lifecycle (most organisations are dominated by programmes now)?
    • Does it include a convincing model of change management as a whole, notably:
      • Business design, development, readiness & transition?
      • Operational design, development, readiness & transition?
    • Does it handle very small projects (which can often be managed through a single artefact)?
  5. How well does the lifecycle define basic management elements?
    • Roles & responsibilities – are they current, consistent and complete?
    • Are there explicit criteria, rules and authorities for adaptation, scaling & exemption?
    • Does it include (or at least point to) integrated stage- and task-level processes & tools?
    • If it is a waterfall lifecycle, does it include a risk-driven iteration model for managing individual tasks and stages?
    • Are there detailed procedures for basic management tasks (risks, issues, assumptions, dependencies, change/configuration control, product/document management, impact analysis, estimating, planning, resourcing…)?
    • Have standard stage-, task- and product-level risks, assumptions, dependencies, etc. been identified and articulated?
    • Does it set credible gateways, including stage-end consolidation & validation, evaluation of full project content, and review of performance to date and readiness for the next stage?
  6. Are all individual products adequately defined?
    • Are products defined by independent product descriptions?
    • Are there stage-, product- and task-level procedures, advice & information?
    • Are there samples of good practice, including instances for each major area of usage?
    • Does each item have a supporting quality checklist?
  7. Is alignment with other functions well defined?
    • Is there clear & efficient access to supporting management functions & data (resourcing, MI, finance, architecture, etc.)?
    • Is there explicit alignment with, and access to, related standards & policies?
  8. Is the lifecycle actively supported?
    • Are there discipline or process owners, with clear roles & responsibilities, a proper management cycle and allocated time to do the job?
    • Are there SMEs, with clear requirements and channels for feeding their experience into the organisation (e.g., a central lessons-learned system or training/briefing programme)?
    • Is there an R&D process (minimally, to drive innovation, capture and socialise training, and disseminate new joiners’ knowledge & experience)?
  9. Is there a training programme covering all processes, roles, tools & techniques?
    • Is there a specialised SDLC training function & system?
    • Is there a training programme for staff, consultants, outsourcers, offshore teams & contractors?
    • Are there self-training packages for key activities – individual products, reviews, testing, requirements management, etc. – so users can refresh their knowledge independently as and when needed?
Without a Yes to at least most of these questions, you have the methodological equivalent of a crocodile's brain, and it will be all but impossible to make substantial and sustainable progress towards real intelligence.
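
If it helps to make the audit concrete, here is a minimal sketch of turning the nine areas above into a crude maturity index – the 0-2 scoring scale is entirely my own invention:

```python
# Minimal sketch: score the nine questionnaire areas as a crude maturity index.
# Area names follow the questions above; the 0-2 scale is my own invention.

AREAS = [
    "Global management approach",
    "Presentation & access",
    "Coverage of delivery strategies",
    "Maturity of the lifecycle",
    "Basic management elements",
    "Individual products",
    "Alignment with other functions",
    "Active support",
    "Training programme",
]

def maturity_index(scores):
    """Average the per-area scores (0 = absent, 1 = partial, 2 = solid)."""
    missing = [area for area in AREAS if area not in scores]
    if missing:
        raise ValueError(f"Unscored areas: {missing}")
    return sum(scores[area] for area in AREAS) / (2 * len(AREAS))

# Example: a 'partial' answer everywhere scores 0.5 - crocodile territory.
example = {area: 1 for area in AREAS}
print(f"Maturity index: {maturity_index(example):.2f}")  # 0.50
```

However you weight it, the exercise only matters if the low-scoring areas get an owner, a budget and a date.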