Tuesday, 7 October 2008
Thoughts on thought leadership
Here is the comment I added to Cole's latest post:
I have often thought about - and despaired over - how so many companies are content just to go along with short-term actions. But if they are going to achieve a genuinely strategic approach, then they absolutely need thought leadership, even if they have to buy the damn stuff from people like you and me. Because without a clearly and explicitly articulated conception of past, present and future, all linked together by clear analytics and an implementable proposition at every level, they literally don't know what they are doing.
Which is, I think, more than a little mysterious - who would even go on holiday or down to the shops without knowing exactly what they were about? Perhaps the routines (and sheer inertia) of business make it a little too easy to just get on with stuff. Which suggests a strategy - put managers and execs into a situation that radically disrupts their myopic routines and forces them into innovation.
Don't know how that can be done realistically, short of threatening to fire them all if they don't come up with the goods! But my experience certainly suggests that even C-level management is neither equipped nor inclined to think at all deeply about their situation.
The reason for this is, I think, a little too close to home for most organisations to accept. According to some research I read a while back, the managers who are most likely to be promoted are not the ones who are best at getting their job done. In fact there is almost no correlation between execution and promotion.
So of course, the higher many successful managers rise, the less they are relying on substantive knowledge and the more they rely on networking, salesmanship and so on.
Doesn't bode well for people who care about thought leadership.
Thursday, 25 September 2008
Management methods, models + theories
Thursday, 18 September 2008
Successful managers vs effective managers
Professor Luthans 'found that communication and human resource management activities made by far the largest relative contribution to real managers' effectiveness and that traditional management and - especially - networking made by far the least relative contribution'.
By contrast, 'networking activity had by far the strongest relative relationship to success'.
And in summary, less than a tenth of managers made the top third of both the 'successful' and 'effective' groups - almost exactly the one in nine (a third of a third) you would expect if there were no connection at all between the two.
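For the sceptical, here is a throwaway simulation (my toy numbers, not Professor Luthans's data) of the arithmetic behind 'what you would expect': if the two rankings were completely independent, the chance of any manager landing in the top third of both is a third of a third, about 11%.

```python
import random

def top_third_overlap(n=90_000, seed=42):
    """Toy simulation: rank the same n managers on two completely
    independent criteria, and count who makes the top third of both."""
    rng = random.Random(seed)
    success = list(range(n))
    effective = list(range(n))
    rng.shuffle(success)     # 'successful' ranking
    rng.shuffle(effective)   # 'effective' ranking, unrelated to the first
    cutoff = n // 3
    both = sum(1 for s, e in zip(success, effective)
               if s < cutoff and e < cutoff)
    return both / n

print(round(top_third_overlap(), 3))  # about 0.111 - one manager in nine
```

Which is to say: the finding is exactly what chance alone would produce.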
Scary? Anyone have any ideas about how valid this finding is? Or how it is possible?
[Update - a quick email exchange with Professor Luthans confirms that, in his view, this is still the position.]
[Further update - I show this to various colleagues. They laugh. Basically, our bosses are blagging their way to the top (for non-London readers, that means getting to the top by less than legitimate means.) And no one is even faintly surprised.]
Tuesday, 2 September 2008
How many maturity models are there, dammit?
Looking through my files on maturity management this morning, I came across the following list - and I don't think it's even nearly complete!
- Agile Maturity Model
- Architecture Maturity Models
- Assessment Maturity Model
- Automated Software Testing Maturity Model
- Brand Maturity Model
- Business Archive Management (thanks to Cole Sandau)
- Business Continuity Maturity Model
- Business Intelligence Maturity Model
- Capability Assessment Tool (CAT)
- Capability Maturity Model for Software (retired)
- Capability Maturity Model Integration (CMMI)
- Change Management Maturity Model
- Change Proficiency Maturity Model
- Configuration Management Maturity Model
- Customer Experience Maturity Model
- Data Center Infrastructure Maturity Model
- Data Governance Maturity Model
- DITA Maturity Model
- Earned Value Management Maturity Model
- E-Government Maturity Model
- e-Learning Maturity Model
- Enterprise Architecture Maturity Model
- Extended Enterprise Architecture Maturity Model (E2AMM) v2.0
- Green Enterprise Maturity Model
- Information lifecycle management maturity model
- Information Maturity Model
- Information Process Maturity Model
- Information Security Management Maturity Model
- Gartner's Infrastructure Maturity Model
- Integrated Product Development Capability Maturity Model
- Internet Maturity Model
- IT Architecture Maturity Model
- IT Maturity Model
- IT Service Capability Maturity Model
- Knowledge Management Maturity Model
- Leadership Maturity Model (LMM)
- Learning Management Maturity Model
- Localization Maturity Model
- Managed Care Maturity Model
- Medicaid Information Technology Architecture
- Gartner's Network Maturity Model
- Open Source Maturity Model
- Operations Maturity Model
- Organisational Capability Maturity Assessment (CMA)
- Organizational Project Management Maturity Model
- Organisational Project Management Maturity Model (OPM3)
- Outsourcing Maturity Model
- People Capability Maturity Model
- Performance Engineering Maturity Model
- PM2 Maturity Model
- Portfolio, Programme and Project Management Maturity Model
- PRINCE2 Maturity Model (P2MM)
- Process Maturity Model (PMM)
- Product Development Capability Maturity Model
- Programme Management Maturity Model (PMMM)
- Project Management Maturity Model
- Property Asset Management Maturity Model
- R+D Maturity Model
- Resource-Oriented Architecture Maturity Model
- Reuse Maturity Model
- Risk Maturity Model
- SaaS Architecture Maturity Model
- SaaS Simple Maturity Model
- Security Maturity Model
- Service Desk Maturity Model
- Service Integration Maturity Model
- Services Maturity Model
- Self-Assessment Maturity Model
- SOA Maturity Model
- Software Acquisition Capability Maturity Model
- Software Engineering Capability Maturity Model
- Software Maintenance Maturity Model
- Software Reliability Engineering Maturity Model
- Stakeholder Relationship Management Maturity
- Systems Security Engineering Capability Maturity Model
- Talent Management Maturity Model
- Testing Maturity Model
- Threading Maturity Model (ThMM)
- Training Management Maturity Model
- Usability Maturity Model
- Web 2.0 Maturity Model
- Web Services Maturity Model
- Website Maturity Model
... and so on. And on. And on.
I also found a rather nice 'Maturity Maturity Model' and even a splendid Capability Im-Maturity Model!
Given that maturity models are basically a good idea - at least they get us away from the silly idea that radical change can be accomplished in a single step - it's a pity that so many of them are based on the chronically immature SEI CMM model. This, I have always thought, is more like a list of things the DoD finds it hard to do, in approximate order of difficulty.
I have had quite a few goes at maturity models (not to mention basing a complete book on the large-scale structure of human history on an analogous idea), including my 'Lattice Methodology', which is designed to direct strategic transformation programmes by maturity management methods, and a methodology maturity model. I may post either or both here, though I wouldn't get your hopes up just yet.
Anyone got any more? And if someone can find the URLs, I'd be happy to put them in.
Best Practice - yuk!
The objectives of maturity management
- To define a strategy for creating revolutionary change by means of evolutionary steps.
- To free leaders from the limitations of corporate management systems by creating management systems that enable leadership rather than constraining it.
- To define a truly manageable management system capable of supporting fundamental, strategic change.
Surprisingly, these are not stated objectives of other management models such as the Software Engineering Institute’s well known Capability Maturity Model or the Project Management Institute’s standards. Nor are they made any easier to achieve by the approach those standards adopt, which is basically pragmatic, eclectic and bound by convention.
These objectives are described in more detail below.
Objective 1: Revolution by evolution
The primary objective of maturity management is to deliver radical, even revolutionary change. That means not merely re-invigorating moribund management systems and staunching the haemorrhages caused by poor management practice, but creating genuinely world-class organisations.
But how is that objective to be achieved? Most approaches to organisational change share at least one assumption: that radical results can be delivered in a single heroic step. Maturity management is based on a quite different assumption: that, realistically, radical change can only take place in well-defined, incremental steps, quite probably extending over many years and certainly requiring many discrete developmental stages.
Hence its first objective: to define a sequence of discrete, manageable stages through which radical change can be brought about. Revolution by evolution, in fact.
Objective 2: Freeing leadership from management
One way of conceptualising how maturity management works is in terms of the distinction many authors have drawn between management and leadership. To quote Stephen Covey's Seven Habits of Highly Effective People:
Management is efficiency in climbing the ladder of success; leadership determines whether the ladder is leaning against the right wall.
Other commentators have expressed similar sentiments in different ways, but it is striking that they all insist on this difference and on the importance of leading organisations rather than merely managing them. Leaders bring vision, inspiration and direction, and without them an organisation loses its impetus, its cultural integrity and its ability to take decisive action.
Yet many organisations seem determined to encumber their leaders with unnecessary or subordinate management tasks, even actively disabling them by failing to provide the basic information and decisions real leadership demands.
Of course, no organisation could succeed by completely replacing management with leadership. Conversely, where leadership is not supported by robust management, ‘leadership’ and ‘empowerment’ routinely degenerate into senior management abdicating responsibility for the actions, accomplishments and performance of their subordinates, backed up by the usual blame and recrimination when things go wrong.
So a balance must be struck – but only the right balance:
- The ability to manage is quite commonplace, whereas leadership is notoriously rare.
- The ability of leaders to deliver results depends on the presence of management systems (including competent and empowered managers) capable of implementing their vision.
- Unbridled, universal ‘leadership’, if not backed up with clear control of the whole, will soon degenerate into chaos, and the whole becomes a great deal less than the sum of its parts.
- Once they have been applied to a range of assignments, many leadership skills can be translated into reliable methods, tools and techniques that can be taught to less inspired individuals.
Hence another aspect of maturity management: by continually upgrading management systems, activities that previously required that rare combination of inspiration and perspiration that defines genius can be done almost as effectively by any modestly capable individual who has been trained to use the appropriate methods, tools and techniques and is supported by the necessary flow of information and decisions. Indeed, the whole history of management consists very largely of the creation of management systems to do things that were previously done only by great leaders. That is one of the main reasons why great organisations – nations, teams, businesses and so on – can exist at all.
On the other hand, where will future leaders acquire the vision on which leadership so crucially depends? Where will they get that spark of insight leavened by sound practical experience? Surely the answer is, yet again, from the management systems in and through which they work. If these systems are bad, then any manager’s experience will be less than illuminating. If, on the other hand, the management systems they use are well designed, effective and properly directed and maintained, their experience of their work, the organisation and its goals will be clear, well-structured and informative. Its purposes, methods and underlying philosophy will be clear and reinforced throughout. Conversely, the better structured the system, the easier it will be to spot any residual problems. But most importantly of all from the point of view of inculcating leadership, the values, purpose and opportunities it faces will be clear.
Hence the maturity management approach: wherever possible it replaces leadership by management. This is not because we should prefer management to leadership after all, but because we should reserve the special talents involved in leadership for tasks where they are really needed. If some leadership skills can be made so straightforward that they happen as a matter of course and the same results can be reliably achieved by the routine use of a management system, this can only strengthen an organisation, and release its true leaders to focus on areas that demand real leadership.
To summarise the whole above argument in terms of a contemporary management buzz phrase, the trick is not to rely on those who can ‘think outside the box’, but to learn from them, and so make the box the rest of us work in bigger. Much, much bigger.
Objective 3: A manageable management structure
If the purpose of maturity management is to achieve radical change by incremental steps, and its principal instrument is the conversion of leadership into management, it is clear that its next objective must be to define a management system that drives change. More precisely, maturity management must:
- Define why the assignment exists, and so ensure that the assignment contributes to strategic goals.
- Define what the assignment will do, and so ensure that the assignment is kept on track to its goals.
- Define how the assignment will do this, and so ensure that an effective technical solution is delivered.
- Handle all the data and decisions needed to reach the above goals.
To achieve this, a maturity management methodology defines a complete, generic management system consisting of three core components:
- A generic management task model.
- A generic management system model.
- A generic management maturity programme model.
Defining generic management components makes it much easier to define management in terms of discrete units of management activity that are easily understood, easy to implement and use, and easy to revise or replace in the face of new problems and changing circumstances. It also provides the bedrock of the principles of recursion and iteration. Furthermore, by breaking the implementation process into short-, medium- and long-term changes and by embedding the components in a well defined hierarchy of maturity levels, systems and tasks, it is easy to adapt the generic components to local needs and the most appropriate methods, tools and techniques.
Two principles of management design
Against this background, there are two general principles all management systems should implement: recursion and iteration.
Recursion
Recursion means that the same process is used at all levels of a given activity. For example:
- To ensure that we all mean the same thing when we speak of ‘management’, the same principles and generic process should govern management at every level of the organisation, from strategic direction to day-to-day operations.
- If a manager needs to define local processes in more detail, it should be possible to re-apply the main process recursively (ie, to its own components).
(If, like me, you like that sort of thing, the best definition of recursion of which I am aware was given by an early Smalltalk dictionary, whose entire entry for 'recursion' consisted of the words 'see "Recursion"'.)
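For those who prefer their definitions executable, here is a minimal sketch of recursive management (the task names and the plan/review steps are mine, purely illustrative): one generic process applied to a programme, then re-applied unchanged to its projects, their work packages, and so on down.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A unit of management activity; its components are more Tasks."""
    name: str
    subtasks: list = field(default_factory=list)

def manage(task):
    """Apply the same generic process at every level, re-applying it
    recursively to each task's own components."""
    events = [f"plan {task.name}"]
    for sub in task.subtasks:
        events += manage(sub)        # the same process, one level down
    events.append(f"review {task.name}")
    return events

programme = Task("strategy", [
    Task("project A", [Task("work package A1")]),
    Task("project B"),
])
for line in manage(programme):
    print(line)
```

The point of the sketch is that there is only one `manage` function: strategic direction and day-to-day work packages go through the same process, just at different depths.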
Iteration
Iteration means that the same process is used across all parts of the organisation. For example:
- To ensure that the integrity of all processes and management activity is maintained, change-related processes such as change, issue or risk management should be designed so that they consist of the recursive application of the standard generic process, not special (and probably anomalous) processes of their own.
- However special they may feel that their work is, all specialist groups (such as legal departments and supplier management) should adhere to the same principles and generic processes as the groups responsible for the ‘main process’.
These principles are combined for managing individual assignments. To ensure that managers are empowered without increasing the risks inherent in allowing local groups to make critical decisions, the management system should consist of ‘black boxes’, within which local managers can structure activity as they see fit, so long as the local management system adheres to the Lattice Methodology and processes the required inputs into the required outputs. This approach would also enhance local commitment to and involvement in system improvement.
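The ‘black box’ idea can be sketched in code (all the names and the particular inputs and outputs here are hypothetical, chosen only to illustrate the shape): the wider system checks only that the required inputs arrive and the required outputs emerge; how the local team structures the work inside `execute` is its own business.

```python
from abc import ABC, abstractmethod

class BlackBoxTask(ABC):
    """A task defined purely by its required inputs and outputs."""
    required_inputs = ("objectives", "budget")
    required_outputs = ("deliverable", "status_report")

    def run(self, inputs: dict) -> dict:
        # The wider management system enforces only the interface...
        missing = [k for k in self.required_inputs if k not in inputs]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        outputs = self.execute(inputs)   # ...local structure is locally decided
        missing = [k for k in self.required_outputs if k not in outputs]
        if missing:
            raise ValueError(f"missing outputs: {missing}")
        return outputs

    @abstractmethod
    def execute(self, inputs: dict) -> dict: ...

class LocalTeamTask(BlackBoxTask):
    def execute(self, inputs):
        # The internal process is invisible to the wider system.
        return {"deliverable": "report", "status_report": "on track"}
```

Local empowerment and global integrity in one structure: the team is free inside the box, and the organisation still knows exactly what crosses its boundary.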
The real challenge
The radical nature of this expectation stands in stark contrast to the real nature of management systems many contemporary managers are obliged to use. Some of the more common problems managers routinely face are:
- Being asked to achieve vague, unstable or conflicting business goals, often decided without adequate analysis or assessment.
- Translating disparate, fragmented and often obsolete management processes into efficient and effective assignments.
- Making do with less than optimal technical resources, including untrained personnel, unsupported tools and incompatible techniques.
- Being blinded by a combination of, on the one hand, inadequate management information-gathering and decision-making processes and, on the other, an unmanageable mass of irrelevant data and obstructive administrative controls.
- Complying with policies, standards and procedures that add little or no value to their assignments and are of debatable value to their organisations.
The causes of these problems are many and varied, and not all have to do with management systems as such. There is little a manager can do about fast-changing business environments and the regular irruption of massive new technical factors (the millennium, the Euro, electronic commerce, smart cards, identity management, electronic document, record and email management, and so on), and some of the less attractive features of the working environment such as a pervasive short-termism, disruptive cultural and political factors and the routine replacement of action and substance by glossy reports and corporate rhetoric.
Nevertheless, all organisations harbour a huge and largely untapped potential for improving their management systems, and not only managers but the organisations they work for would benefit immensely if only more attention were directed to these opportunities. For example, it is now well established that most organisations lose anything between 10% and 30% of their effort to waste and rework. Imagine what that means:
- Sometime between Thursday lunchtime and Friday lunchtime every week, everyone stops doing anything useful and starts pouring money down the drain instead.
- If an averagely profitable company (say, one making a 10% margin) with 25% waste and rework could reduce that figure to 15%, the money it saved would double its profit - without doing anything else!
- Every year, a company with a billion-dollar turnover spends between $100,000,000 and $300,000,000 on doing nothing.
- With $100,000,000 extra to spend each year, your company could [enter preferred management fantasy here].
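The arithmetic behind those bullets, as a sanity check (the 10% net margin is my assumption for the profit-doubling claim, not a figure from the research):

```python
def annual_waste(turnover, waste_pct):
    """Money poured down the drain each year at a given waste/rework percentage."""
    return turnover * waste_pct // 100

turnover = 1_000_000_000                      # a billion-dollar company
low, high = annual_waste(turnover, 10), annual_waste(turnover, 30)
print(f"${low:,} to ${high:,} per year")      # $100,000,000 to $300,000,000

profit = turnover * 10 // 100                 # assumed 10% net margin: $100m
saving = annual_waste(turnover, 25) - annual_waste(turnover, 15)
print(saving == profit)                       # True: the saving equals the entire profit
```

Cut waste from 25% to 15% and the saving alone matches the whole year's profit, which is what 'double their profit' means in practice.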
Again, this is not a solely managerial issue and managers cannot solve it on their own, but few managers would have any trouble identifying examples of poor practice and poor systems. For example, most managers could probably identify places where the following kinds of problems simply waste their time:
- Altogether too much time is spent fire-fighting problems that should never have arisen in the first place, or for which a routine solution should already be available.
- Many management systems are incapable of providing managers with the information and decisions they need to do their work properly. This often includes an effective gap between individual assignments and the organisation’s overall goals. So an inordinate amount of effort is wasted scrambling for hard data and taking ill-informed risks.
- Management systems commonly assume that all assignments are essentially the same, prescribing boiler-plate ‘solutions’ that force all work through a uniform mass production routine. Such systems actively disable managers from handling unique problems or business-critical opportunities effectively. Elementary control structures designed to adapt generic systems to individual assignments such as ‘quality plans’ are a start, but managers still find themselves bending the rules in order to get the results they need.
- Few systems provide the methods, tools and techniques needed to coordinate multiple assignments effectively into a viable programme of interconnected assignments - an increasingly important demand on contemporary management.
- Although most organisations have a nominal strategy, even the most sophisticated, for whom multi-year, multi-functional, multi-national programmes are the norm, are often incapable of industry leadership, be it at the level of values, products, processes, technology or systems. As a result their strategies are prone to vagueness, instability and even flat self-contradiction.
I should emphasise that I do not take a utopian view of management: there really are times when sacrificing a manager’s time and abilities is the least evil. But all too often even perfectly routine activities are brought to a halt by an arbitrary decision, false information, the lack of the right tool, an untrained staff member or bureaucratic ossification.
What is a management system?
A ‘management system’ consists of the totality of structures, functions, processes and mechanisms provided by the organisation as a whole, that enables managers to carry out their work successfully. Depending on its overall maturity, a typical management system will:
Identify the strategic purpose of individual assignments:
- Translate corporate purposes, goals and objectives into assignment purposes, goals and objectives.
- Provide strategic, process, technical and work environment planning.
- Provide assignment definition and validation processes.
- Define the relationship between the manager’s work and the company strategies, including an organisational structure, communications networks, shared information and decision-making processes, and so on.
Define the processes needed to carry out an individual assignment:
- Define management’s authority and responsibilities.
- Define a range of generic methodologies for executing processes of different kinds.
- Define the detailed functions and tasks needed to carry out a process.
- Provide a range of options and alternatives within any single process, and supply the methods and tools needed to choose between them.
Provide the technical resources and materials needed to carry out any assignment:
- Skilled people.
- Tools and systems (production lines, computer-assisted development tools, test tools).
- ‘Delivery vehicles’ (such as templates) for common technical activities.
- Technical support (R&D, configuration management, standards, tools development, etc.).
Create a working environment that actively supports the assignment:
- Support services (recruitment, training, repositories, tools development, coaching and mentoring, etc.).
- Administrative services (clerical support, data management, record management, standards, analysis and reporting tools, etc.).
- Work facilities and infrastructure (space, hygiene, communications, clerical materials, etc.).
- Storage for interim products (filing, configuration management, etc.).
- Processes and mechanisms for reassigning the assignment’s facilities once the assignment is complete.
Create and manage generic standards and procedures:
- Establish and maintain management methods, tools and techniques.
- Define reference metrics.
- Create common and generic work facilities.
- Create support organisations.
- Define the manager’s relationship to stakeholders and regulatory authorities.
- Define the manager’s relationship to third parties such as contractors, suppliers and consultants.
Looks like a checklist to me...
Another way of defining the ideal management system is to take the shortcomings of many existing systems, as described above, and see what would have to be done to remedy them. Here is an initial list:
Unnecessary fire-fighting would be eliminated if each management task or function was defined and effectively implemented. Among these tasks would be that of constructing new tasks as circumstances require. The definition of any given task might include (amongst other things):
- Task-specific objectives.
- The steps that are needed to carry it out.
- Defined inputs and outputs.
- Parameters for adapting it to different types of assignment.
- Supporting standards and procedures.
- The methods, tools, techniques, skilled resources needed to execute it efficiently.
These management tasks would be integrated into a single whole, thus creating a management system properly so called.
- That system would incorporate (again, amongst other things) clear task interfaces and a fully mapped flow of information and decisions connecting the start and end points of any given assignment.
- Such a system would not only translate organisational goals into assignment requirements, management processes, technical resources and administrative functions …
- …but also provide early warning systems that trigger realistic and appropriate action, before adverse trends and risks turn into crises.
An ideal management system would also be adaptable to the demands of individual assignments. Many current management practices already include quality plans to deal with this situation, but a more sophisticated system would be truly systematic:
- It would define parameters and tools managers would need to configure the system to meet each assignment’s unique objectives.
- It would be based on business objectives, the assignment’s critical success factors and its intended business purpose (product/service quality, operating costs, time-to-market, etc).
- It would match system components dynamically, to each assignment’s functional needs, not statically and according to the formal management system’s structure.
The performance and outcomes of individual assignments would be recorded in reusable formats, from which other assignments could benefit.
- This would require global repositories for organising and distributing timely, reliable information and decisions, plus accredited subject matter experts and information mining tools.
- Such an approach would also allow multiple assignments to be integrated into complete work programmes.
- And of course, you would have to structure assignments in appropriately flexible and multi-dimensional terms in the first place – otherwise dismantling them for reuse would become a major project in its own right.
From such a system, and the experience it generates, can be abstracted and systematised a comprehensive model of all the factors and forces affecting the organisation, together with a system for their management and further development. Such a structure would allow management to be as precise as it needs to be, would look forward and backward over any strategically meaningful timescale, would structure any degree of internal and external complexity and change into simple, manageable terms, and could deal with any meaningful and credible future scenario. Thus it would enable the organisation to exercise genuine industry leadership, capable not only of ensuring the attainment of its current strategic goals but also of achieving the ultimate strategic objective, namely control over the environment in which the organisation operates.
Of course, even the most sophisticated management system cannot by itself create the vision needed to see where an organisation should be going, but the kind of system described above would surely be able to integrate complexly interacting strategies, goals, processes and systems, and so turn any rational vision into reality.
Few organisations really provide such a system, so managers are seldom as efficient or effective as they could be. Worse, the lack of such a system means that both individual managers and entire organisations operate in a half-light of inefficiency, assumption, politics, ad hoc adjustment and barely concealed crisis management. Internal propaganda levels are high, but real expectations are low.
Thursday, 28 August 2008
Can non-technical reviewers review technical products?
In my own specialist area – IT – this might manifest itself in a pained question such as ‘How can the business approve a change in a database design?’ But there are similar questions in all complex management situations – can techies contribute usefully to business cases, for example?
Good question – and in my experience, people only say ‘good question’ when they mean that there’s no good answer. But in this case, there is an answer, and what is more once the answer is understood it leads to a more robust approach to reviewing generally.
The basic problem is to decide what objective reviewing is trying to achieve, and so to decide whether non-technical (non-business, etc.) reviewers have any role to play in achieving it. To put it concisely, the purpose of reviewing is to decide whether the item under review is meeting its requirements. I don’t mean this in the technical sense of ‘requirement’ – i.e., something the work is supposed to achieve for it to be considered a success. I just mean does it do what it is supposed to do? This might well mean ‘does it fulfil its requirements?’, but it could also mean ‘does it comply with this specification’ or ‘if we follow this plan, will we succeed?’, or any number of other things.
From that point of view, the right reviewers are the people who can – and need to – make that call. But that still doesn’t mean that they are technically capable of understanding the item they are reviewing. Or are they? In what sense do they need a precise technical understanding of the content of the item – for example, a design document - to be able to evaluate it? To put my complete argument in a nutshell, what I am getting at is the idea that reviewing is based not on what it says so much as on what it means.
To go back to my problem about business people reviewing a change to a database design: can they understand what the change says? Probably not, if by that you mean a grasp of namespaces, indexing and denormalisation issues - and their opinion of such things, in strictly technical terms, is probably worthless.
But that isn’t necessarily all that the review is for. Behind every such technical change there is a pyramid of managerial and business implications that non-technical reviewers can not only understand perfectly well, but are probably better placed to judge than the technicians themselves.
This is illustrated in the following diagram:
Hopefully it is clear what the diagram implies. At the lowest level, where the database change itself occurs, there is probably little benefit to be had from asking non-technical people what they think of the change from a purely technical point of view. ‘Who knows, and who cares?’ is probably the right answer. But as soon as the wider implications of the change – the non-technical elements of what the change means rather than the details of what the change documents say – start coming to the fore, both their interest and their ability to judge should start to grow rapidly.
For example, assume that the database change in question is to move from a distributed to a centralised structure. Although the technical issues will be beyond the business’s grasp, and so will most of the implementation and operational issues, not much else should be beyond them. Looking at the diagram again, what are the changes in test requirements this database change will call for? To have all your database testers in one team, located centrally, rather than separate teams all around the business? What does that entail? Much lower costs? Great, we’ll have it. And a simplified roll-out that can now happen three months earlier? Even better. But what is the downside? The changes in platform mean that we will need to recruit a whole new database team? How long will that take? What will it cost? Oh... not such a no-brainer then. And there’s a small chance that we won’t be able to meet our delivery timescales after all? But at least the total development cost will be well down? Great! But the operating cost will in fact go up? Damn...
It’s a complicated business, as anyone who has been in such a situation will testify. But perhaps it should be – and perhaps excluding the business (and other non-technical people) from reviews on the grounds that they ‘won’t understand’ what they are reviewing is not only a very narrow interpretation of what ‘understanding’ means in such a situation but positively counter-productive. After all, if you don’t ask them now, when will you? When it’s too late?
Of course, it’s not easy to make sure a review like this is successfully executed. It’s very hard to work out the real implications of as subtle a thing as a database change. But if you are the project manager and you can’t tell your customers what the consequences of your project really are, perhaps you should be finding out. After all, it’s not as though they will never find out. But the alternative to telling them in an orderly and systematic manner like the above can only be finding out through missed milestones and blown budgets.
In a way none of this should need saying – anyone who raises a change request nowadays will perform an impact analysis that covers most of these issues. But as so often in project management, this simple lesson has not spread to areas like reviewing (product or project) in the systematic manner one would have hoped.
Sunday, 17 August 2008
How stage boundary reviews work
This rather unhelpful diagram (click to expand it) shows the basic position reviewers find themselves in: the orange oblong is the project, with the vertical lines marking the stages. The current review is right there in the middle – some way through, but still a way off the project’s end.
So how can you tell how well you are doing? There are basically four questions about the project itself you want answers to:
- Did the last stage go as planned?
- Is your project making satisfactory progress as a whole?
- Will your project deliver as expected?
- Based on the above, what exactly do you need to do about the next stage?
So that is exactly what the next four diagrams explain. First, looking back on the most recent stage, how did it go?
For example, was product quality as required, specified and planned? Were milestones and deliverables as expected? Was the stakeholders’ involvement as agreed, and even if it was, was it enough? When coming up with a stage boundary review checklist, you could do a lot worse than start from these basics.
Next, looking back right to the project start, how has it gone so far?
In particular, what have the trends been? How has the project’s profile evolved over time – stage by stage, how have its basic features such as scope, delivery, cost, quality and risk unfolded? Are there recognisable trends? If so, what are they, what do they mean and what do you plan to do about them?
The next question involves a complete about-face, and requires you to stop looking back and start looking forward. And the basic question is now, What are your project’s prospects? Looking to the end of the project, are there any unexpected obstacles? Risks? Threats? Opportunities? If there are, again, what do you plan to do about them?
Finally – at least as far as the project itself is concerned - now that you know how you are doing and what the longer-term picture looks like, are you ready to start the next stage? For example, are the following all well defined and has provision been made for them all to be managed? Your plans and estimates? All outstanding issues and risks? Your project’s dependencies? The right team + resources? The right technology, facilities and environments? The right stakeholder awareness, commitment and involvement? If not, now is the time to do something about it.
But of course, projects do not exist in isolation. Unless you are operating in your own private universe, the project must also be evaluated from the point of view of the organisation on whose behalf it is being run. So there are four more questions that need to be answered before you can call your stage boundary review complete:
- Does the project still fit the portfolio?
- Is the project’s business performance acceptable?
- Does the project comply with all relevant policies and standards?
- Are all project information and decisions under formal control?
Conversely, exactly how well is the project doing from the organisation’s point of view? Hence the next question, which is to evaluate the project against its business case. Costs? Benefits? Risks? Without answers to questions like these, it is hard to see how the project continues to be justified.
There is also a more practical side to a project’s ‘fit’, which is illustrated in the final two pictures. The essential question posed by the following diagram is this: Does the project comply with all relevant policies and standards? These might take many forms – regulatory requirements, quality standards, corporate policies, business roadmaps – anything that defines the broader shape into which the project must fit to be considered a success.
Finally, the project is part of the wider organisation from an operational point of view too. It needs to fit in in the sense that it is being tracked and recorded and measured and analysed and all those other things middle management do. This naturally raises a range of essentially administrative questions about whether the project is up-to-date regarding things like records, reports, escalations, change control, lessons learned, and so on. If not, perhaps now is the moment to do something about it.
Once you have this basic logic, the next issue is to identify specific questions (and perhaps measures) you would use to work out the answer. You will probably end up with a hundred or so. Usually people react to this number with horror – surely it will take days to review a project against more than 100 criteria? But in practice this is not a problem. After all, the stage is presumably only ending because the project manager believes that the project has met all that stage’s requirements (or if not, has obtained the necessary exemptions and waivers and re-baselined the project accordingly). That means that deliveries are complete, records and reports up to date, change requests all dealt with, all residual issues and risks under control, and so on. If that is the case – which is a logical entry condition for a stage boundary review – then the answer to every single one of your hundred questions is going to be simple and straightforward. The entire review should take literally seconds per question, and minutes for the review as a whole. Well, that may be a little optimistic, but if the review does take a lot longer than that, it should not be because there were so many questions to answer.
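By way of illustration, here is a minimal sketch of such a checklist in Python. All question texts and group names are my own condensation of the questions above, not a real checklist; the point is only how quick the pass should be when the entry conditions have been met.

```python
# A sketch of a stage boundary review checklist. Every question is grouped
# under one of the review questions discussed above; because the stage's
# entry conditions guarantee it is genuinely complete, each answer should
# be a near-instant yes/no.

REVIEW_CHECKLIST = {
    "Did the last stage go as planned?": [
        "Product quality as required, specified and planned?",
        "Milestones and deliverables as expected?",
        "Stakeholder involvement as agreed, and sufficient?",
    ],
    "Is the project making satisfactory progress as a whole?": [
        "Scope, delivery, cost, quality and risk trends identified?",
    ],
    "Will the project deliver as expected?": [
        "Any unexpected obstacles, risks, threats or opportunities addressed?",
    ],
    "Are you ready to start the next stage?": [
        "Plans and estimates well defined?",
        "Outstanding issues, risks and dependencies under management?",
    ],
}

def run_review(answers):
    """Return the questions answered 'no' (or not at all) as exceptions.

    `answers` maps each question to True (satisfactory) or False.
    """
    exceptions = []
    for group, questions in REVIEW_CHECKLIST.items():
        for question in questions:
            if not answers.get(question, False):
                exceptions.append((group, question))
    return exceptions
```

If the stage really is complete, `run_review` comes back empty almost immediately; anything it does return is precisely the short list of exceptions that deserves the reviewers’ attention.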
It is quite simple in concept. However, there are also certain things you do not want your boundary reviews to deal with. Unfortunately quite a few review systems I have worked with fell into these mistakes. Perhaps the most common is repeating tasks that should already have been put to bed – checking that the right people signed off the last stage’s deliverables, even reviewing them again, and so on. This mistake is usually indicated by the kinds of question the review checklist contains – about the content of documents, not the state of the project.
That in turn brings up a further important point: that the purpose of the review is to check the viability of the project as a whole. It is crucial that the review process is designed to perform this task and this task only. Everything else should have been completed as an entry condition for the review itself. If it isn’t already done, most people won’t be interested or qualified to participate. After all, stage boundary reviews are governance events, and the way they work – and do not work – should reflect this fact.
Another typical error is to attempt to score the results. Although not a mistake in principle, it usually doesn't work. Recently I worked with a client whose reviews include scoring each item, and the review has to reach a pre-defined target if it is to pass. I don’t really understand this.
Firstly, it is the Project Board’s job to make that call – not some artificial calculation. Secondly, most such systems are not in fact measuring anything. In some cases, the scores are completely subjective. That is, reviewers are asked to give the item a score. But by and large they do this without any objective guidelines as to how to score and in full knowledge of what the ‘pass’ score is! So if they want the review to pass and they know that the pass score is, say, 3 out of 5, they give the item – at least 3! Not only are scores of this kind quite meaningless but by using numbers an illusion of objectivity is created.
In other cases, the scores stand in no real relationship to any quantified metric of success or failure. So even if you really can tell that this item is worth only 3 out of 5, there is little or no link between the criteria and the overall success of the project. So the number, interesting though it may be, is completely unconnected with the purpose of the review!
Finally, problems that arise during a stage boundary review should not usually derail or even delay the project. Again many companies take the view that ‘failing’ a review should stop the project until everything is fixed. In the first company I ever worked in that used SBRs, the whole of the previous stage had to be repeated! This is bonkers, of course.
The right approach, I think, is to treat the review as a key moment of consolidation for the project as a whole, but to treat the problems it raises as individual risks. It is possible that the outcome of the review will be the project’s cancellation or a fundamental re-structuring, but this should be rare. More usually, most work should continue as planned while the review is taking place, and only things connected with the specific issues it raises should be delayed. If there is something so fundamentally wrong with the project that it should simply cease, it should not take a stage boundary review to work this out!
Friday, 15 August 2008
What business cases are worth
Pity so few businesses have any idea how to use them. A few may be using concepts like ROI for real planning, but for most this sort of calculation is used strictly after the event. Likewise for project-based organisations. For example, in the IT world a majority of projects now have a business case, but only a minority really use it to manage the project. It ought to be an invaluable means for making all sorts of decisions – prioritisation, triage, change requests, everything really. But in practice it isn’t.
My favourite business case story comes from a decade ago, when I was consulting to a credit card company. One day there landed on my desk the business case for a marketing project that said, among much else, that one of the benefits planned to accrue from the project was that the company would issue 750,000,000 more cards in Europe.
750,000,000 more cards? That was almost two for everyone in the EU! So of course I rang up the analyst who had written this and it turned out that he had meant to write 750,000 – a rather more realistic number. We had a friendly and very amusing conversation about how easy it is to make mistakes of that kind. But when he said he would correct the document and reissue it, I asked him to leave it just as it was and see who else noticed.
So we waited. And waited. The claim was repeated in every important document from that point onwards – the requirements spec, the analysis, the designs, the testing – everywhere. And not a single other individual questioned this preposterous number. Ever.
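A crude automated sanity check of the kind that would have caught this immediately might look like the following sketch. The population figure and the one-card-per-person threshold are my own illustrative assumptions, not from the original business case:

```python
# A hypothetical sanity check: compare a claimed benefit against an
# obvious physical upper bound before the number propagates downstream.

EU_POPULATION_2008 = 500_000_000  # rough figure, good enough for a sanity check

def plausible_card_benefit(extra_cards, population=EU_POPULATION_2008,
                           max_cards_per_person=1.0):
    """Return False for benefit claims that exceed a crude upper bound."""
    return extra_cards <= population * max_cards_per_person
```

The claimed figure fails the check, while the intended figure passes; a single assertion of this kind at the point of authorship would have stopped the number dead.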
Friday, 8 August 2008
Top 10 CSFs for metrics programmes
- Work out what you are measuring for. More precisely, make sure that every metric is tied to a goal you are trying to achieve. More precisely still, make sure that every measurement is explicitly tied to a key performance indicator that is explicitly tied to a critical success factor that is explicitly tied to a goal you are trying to achieve.
- Conversely, ensure that you have the sponsorship needed to force/enforce action. If your boss doesn't want it enough to make it happen, it won't survive.
- Measurement is not the first step in management. It assumes at least a fairly mature management environment. If you don't have that, and if the problems, data, tools and techniques you use are not at least fairly well established, then your measurements will mean practically nothing.
- A measurement programme is not just a technical tool, it's a whole management programme. And as with every management programme, success comes from spreading awareness, commitment and involvement.
- Things must be seen to improve following metrics-based reports. Otherwise what is it for, and why should anyone collaborate?
- Conversely, only measure things you can really change. Discovering that you are really bad at something you have no choice about (regulations, things that are too expensive or not politically acceptable to change, and so on) is a waste of effort, creates aspirations to improve in areas you don't control and is just plain depressing.
- The programme must serve the interests of those who collect the data. Otherwise collecting it will be hard work and the quality of the data will be poor.
- Don’t use metrics to single out individual culprits. They will soon start to massage the figures, and personal problems are usually only symptoms of system problems.
- Measures must be unambiguously defined, fully understood and consistently applied.
- This isn't trivial. Investment, training and tools must all be provided.
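The chain described in the first item can be made concrete. Here is a sketch in Python, with entirely invented metric, KPI, CSF and goal names, that checks every measurement traces back to a goal:

```python
# A sketch of the measurement -> KPI -> CSF -> goal traceability chain.
# Each dict maps an item to the item it supports; all names are invented
# for illustration only.

GOALS = {"Ship releases predictably"}
CSFS = {"Accurate estimation": "Ship releases predictably"}
KPIS = {"Estimate-to-actual variance": "Accurate estimation"}
MEASUREMENTS = {"Task hours logged per sprint": "Estimate-to-actual variance"}

def orphan_measurements():
    """Return measurements whose chain back to a goal is broken."""
    orphans = []
    for measurement, kpi in MEASUREMENTS.items():
        csf = KPIS.get(kpi)
        goal = CSFS.get(csf)
        if goal not in GOALS:
            orphans.append(measurement)
    return orphans
```

Anything `orphan_measurements` returns is data being collected for no articulated reason, which is exactly the kind of measurement the first CSF warns against.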
Metrics with no baseline
Here are some methods for creating a perfectly credible metrics programme with no baselines:
- Decide it doesn’t need a baseline. Sometimes trends are not important to the problem you are trying to solve.
- Estimate the baseline from indicative data. For example, financial figures are frequently good indicators, even if they are inherently indirect measures of what you are really interested in.
- Don’t create a baseline – measure from Day 1 only. Just worry about getting better or worse.
- Review a sample of the existing population, and treat that as your baseline. Just make sure that your sampling is meaningful, which is not as easy a thing as it sounds.
- Adopt industry standards. They may not represent the best, but they are not a bad starting point.
- Start from targets, not baselines. That way you'll have something to move towards rather than away from, which is a lot more positive.
- Don't even measure. Sometimes you just know what needs doing!
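The third option, measuring from Day 1 and worrying only about the direction of travel, can be sketched very simply. The window size and the 'higher is better' convention are illustrative assumptions:

```python
# A sketch of baseline-free measurement: no historical baseline, just a
# time-ordered list of readings from Day 1 and the direction of travel.

def trend(readings, window=3):
    """Compare the mean of the latest `window` readings with the mean of
    the first `window` readings, assuming higher is better."""
    if len(readings) < 2 * window:
        return "insufficient data"
    early = sum(readings[:window]) / window
    late = sum(readings[-window:]) / window
    if late > early:
        return "improving"
    if late < early:
        return "worsening"
    return "flat"
```

No baseline was ever needed: the first few readings become the implicit reference point, and from then on all that matters is whether things are getting better or worse.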
Thursday, 7 August 2008
Documenting V-model-based methodologies
In response, the first question I would ask is what exactly your organisation’s ‘standard’ approach is. That is, what is the approach that you usually take to delivery, that occupies 80% of your people 80% of the time, and so on? That then tells you what should lie in the main V, and what not.
In any organisation whose main delivery process is bespoke development (as opposed to, say, buying in and adapting packages), I would suggest that all else should be pushed off the main V. This would normally mean that the following are all put offline from the main diagram:
- Procurement.
- Service delivery.
- Testing.
- Project management.
- Business change management.
A process belongs offline, in my view, when one or more of the following apply:
- It’s a minor variant of the major process - the main V should show only the major process (e.g., in this case, procurement).
- It is logically asynchronous with the chosen logical model of solution delivery – in most cases, stage-based development (e.g., test preparation of various kinds).
- Its logical structure is non-linear (e.g., project management).
- It may be invoked at any point (e.g., change control, defect management, risk management, and so on).
- It has a specialised audience, so most people don’t need to know how it is done (all lower-level technical processes).
Hence also my exclusion of project management from the V, which will probably strike most people as odd. But most project management activity (governance, planning, task assignment + tracking, risks and issues, reporting, etc.) is either ad hoc, repetitious or cyclical, and does not fit the linear structure of a V.
On the other hand, to make sure that everything stays aligned and everyone knows what they are supposed to be doing, the main V should ideally include not only everything in the standard delivery sequence but also the touch-points with each of these other functions. It should show not only the points at which they provide their respective ‘services’ but also the points at which they take their ‘feed’ from the main process. This can make the main diagram physically or logically complex, but I think that would be ideal.
For example, the main links between a bespoke development V and service planning are probably:
- During Initiation, where service planning needs to indicate the contributions it will need to make to the project.
- During Requirements, where service planning needs to identify the non-functional requirements it needs to define SLAs, shape the environmental design, and so on.
- During Design, service planning generally needs to be involved in environmental + infrastructural design.
- During Build, service planning may need to be involved in unit + integration testing where these relate to infrastructure and environments, and in non-functional testing where this will bear on SLAs.
- During system testing, service planning will be interested not only in non-functional testing but also in identifying any work-arounds + FAQs (for the Help Desk), plus starting to collect any residual defects that are likely to affect the working solution.
- Finally, during the Deployment stage, service planning will be involved in Operational Acceptance Testing and a range of deployment planning and activity.
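Captured as data rather than a diagram, those touch-points might look like the following sketch, so the main V can be generated or checked from a single source. Stage names and entries are condensed from the list above; the structure itself is purely illustrative:

```python
# The service planning touch-points with a bespoke development V,
# expressed as a stage -> touch-points mapping.

SERVICE_PLANNING_TOUCHPOINTS = {
    "Initiation": ["Indicate contributions service planning will make"],
    "Requirements": ["Identify non-functional requirements for SLAs and environments"],
    "Design": ["Environmental and infrastructural design"],
    "Build": ["Unit/integration testing of infrastructure",
              "Non-functional testing bearing on SLAs"],
    "System testing": ["Non-functional testing",
                       "Work-arounds and FAQs for the Help Desk",
                       "Collect residual defects affecting the working solution"],
    "Deployment": ["Operational Acceptance Testing",
                   "Deployment planning and activity"],
}

def touchpoints_for(stage):
    """Return the service planning touch-points for a given stage."""
    return SERVICE_PLANNING_TOUCHPOINTS.get(stage, [])
```

The same shape works for any offline function, so each one (testing, procurement, business change) can contribute its own mapping without cluttering the V itself.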
Thursday, 31 July 2008
Programme management - an alternative architecture
The traditional model
The traditional programme management structure is relatively simple, and summarised in this diagram (click to expand):
- A Programme Board, including representatives of major stakeholder groups such as business units and vendors, defines the programme’s strategic policy and goals.
- The Programme Director supervises the realisation of these policies and goals.
- A Programme Manager has day-to-day responsibility for delivering the programme as a whole.
- The Programme Manager is supported by small groups of specialists, notably a Programme Office and an Architecture team. By and large the architects are IT specialists, not functional or business architects, and the Programme Office provides administrative support, however powerful and imposing it may be. It is seldom the central nervous system - making intelligent decisions as well as gathering critical information - that it should be.
- The workstreams on which delivery depends report directly to the Programme Manager.
Simple though it is, this model has serious shortcomings:
- It fails to reflect the reasons why programmes succeed and fail.
- It fails to reflect or support the flows of information and decisions on which a real programme relies.
- It fails to define the management relationships and cycles through which a large-scale programme must be organised and controlled.
- It fails to identify many of the very wide range of stakeholders, interests and dependencies, both within and beyond the programme’s boundaries, on which success depends.
- It fails to create a real division of functions through which the Programme Manager can realistically manage the programme as a whole.
- It fails to establish the roles needed to pursue and manage these critical success factors.
Below a model is presented of the critical roles through which all these shortcomings of the standard programme management model can start to be resolved. It is by no means a completely original solution – some of the roles already exist in a rudimentary form in many programmes. Nor is it a complete solution: there are many more issues that need to be resolved before programme management becomes a matter of routine. However, from the point of view of most current programmes it is perhaps the single most valuable improvement currently available.
It should be emphasised that the roles described here do not need to be assigned to individuals. As will be argued in the section entitled ‘Implementation’, there is a maturity sequence through which any programme environment should evolve, from which these roles emerge quite naturally. The crucial issue is not how they are implemented but that the necessity to conceive of programmes in such integrated, systematic and dynamic terms is recognised – and acted upon.
A good programme management environment will already be some way up this model; the purpose of this paper is to indicate new directions for development and to accelerate the pace at which progress in programme management is made. On the other hand, although the additional cost of implementing the proposed model will be obvious, it should not be forgotten that one of the largest costs of current programme management practice is the cost of delivering late, over budget and short of the full planned scope. If the model proposed here can significantly reduce these other costs, its own costs will seem quite insignificant.
An alternative model
The alternative presented here (summarised in the following diagram - click to expand) assumes that there are fundamentally four dimensions to a programme’s success, and that a closely coordinated team of senior roles is needed for them all to be kept in alignment as the programme unfolds. These roles would report directly into the Programme Manager, but their functions would extend not only across the entire programme but also far into the business as a whole.
Programme Architect
The first question any Programme Manager must be able to answer at any time is: what is the programme’s vision? In other words, what is the programme going to achieve? That is, not only should the overall goals and objectives be clear but the actual delivered solution should be fully understood. As is now extremely well established, this goes far beyond technological systems: a successful programme also invariably delivers a huge range of new and improved environments, processes, organisations, technologies, resources and facilities. Most important of all, any significant programme must positively transform the very nature of the organisation - its performance, its competences, its goals, and quite possibly its ultimate purpose. The business blueprint documents far more than a collection of new systems and upgrades, and the Architect is its guardian.
What is more, the success of the programme depends not only on delivering ‘inventory’, so to speak, but also value. That is, the programme’s deliverables must be conceived in terms of their ability to produce success. That means that the architecture must be constantly modelled not only in narrowly technical terms but also in terms of fundamental business factors such as functional capability, impact on the business’s own ability to deliver, and ultimately the pure business benefits of profit, market share, ROI, and so on.
Hence the central responsibility of the Programme Architect: to maintain a single, unified conception not only of what exactly it is that the programme will deliver but also what exactly it will accomplish.
Plainly this is a more substantial role than that of current architecture teams. Indeed, at present, only some of these elements of architecture are actually under the direct control (or even substantial influence) of the programme as a whole. The role of the Programme Architect as conceived here is therefore far wider and more complex than that of the traditional architecture team that already figures in existing programmes.
There are many means for achieving this. For example, in recent years the concept of ‘Enterprise Architecture’ has emerged, which applies engineering-style concepts to the full range of business, processes, organisation and technology, thus producing an integrated vision of the organisation from the highest level of strategy down to the nuts and bolts of local systems.
Programme Strategist
If the role of the Programme Architect is to define the programme’s vision, the role of the Programme Strategist is to define its mission. That is, the Strategist defines exactly how the programme will go about its business. This includes identifying the specific benefits the programme will deliver, from which are derived the overall priorities and the top level delivery schedule. This in turn determines the financial profile of the programme, since it controls not only the internal rate of expenditure (through recruitment, support, licensing, development costs, and so on) but also how soon the planned benefits will come on stream – and, of course, how well they will realise their targets.
In short, the Programme Strategist is the key mediator between the programme and the business. Also, if the Programme Architect ultimately controls the maximum return the business can expect on its investment, the Strategist controls how fully and how quickly that maximum is reached.
The core of the Strategist’s role is their ability to manage the relationship between the programme and the business. Business environments are always undergoing change, creating both new requirements for the programme and new opportunities to exploit (or discard) the capability the programme is designed to deliver. The Strategist’s function is to ensure that the programme is precisely geared to delivering the optimum balance of costs and benefits that can be squeezed out of a constantly shifting situation.
Hence the need for the Strategist not only to control and coordinate the plans for delivery and change but also to grasp and influence the business models and strategies through which the business as a whole operates – and to see the programme through the same dashboards and Balanced Scorecards through which the most senior management also see it.
Programme Engineer
The purpose of the Programme Engineer role is to ensure that the environment within which the programme operates is appropriate for delivering the Architect’s solutions according to the Strategist’s priorities. In other words, the Programme Engineer is responsible for the programme’s capability. This role again goes far beyond the normal technical environment that is typically under some degree of integrated management in many current programmes, and it is probably the easiest to implement according to the present model.
However, what differentiates the Programme Engineer from contemporary environment management functions is that it systematically connects the technical elements of the programme directly to its management. For example, progress management is normally achieved through more or less manual processes, which makes the real status of the programme very difficult to ascertain. In an integrated programme engineering environment, modern test tools would not only allow a far more systematic approach to verification but also allow the results to be reported directly into management reports. This would provide a more objective and quantifiable approach to management while precluding a good deal of the ‘noise’ that typically besets a large, politically sensitive programme.
Within the programme, the main functions the Programme Engineer would perform would be to define, create and maintain the programme standards, infrastructure & operational processes; to set and maintain development, production and delivery environments; to propagate the results of benchmarking and internal R&D; and to ensure a common approach based on not only on comprehensive standards but also a full suite of reusable, generic delivery vehicles.
However, the Programme Engineer also ensures that the programme is connected directly to the business, operational and other external environments on which its success depends. Again, this has become considerably easier with the emergence of concepts such as Enterprise Architecture and Operational Management methods that allow more realistic analysis, tighter control of the change process and far more detailed planning of changes to the current ‘business as usual’ environment.
Programme Controller
Finally, the Programme Controller. Again this role is essentially a substantial revision of the conventional Programme Office Manager role, but as with the other roles described here, the change is more fundamental than that would suggest.
Like any other form of management, managing a programme is essentially a matter of controlling three flows: information, decisions and materials. So it is the Programme Controller’s responsibility to ensure that these flows are in fact appropriate, effective and unimpeded by internal or external barriers, bottlenecks or biases. In other words, the Programme Controller’s job is to provide the programme with its knowledge. This naturally requires the management of traditional concerns such as the availability of resources or the dissemination of management decisions and status reports.
However, in the context of a programme management structure that incorporates the new roles of Programme Architect, Strategist and Engineer, completely new classes of information and decision also need to be managed. The increased intensity of relationships within the programme and the far closer links these roles create with the external environment also demand that the role of Programme Controller is far more structured and dynamic than under traditional programme management arrangements.
Friday, 18 July 2008
A generic development process
Partly in response to this and partly out of simple intellectual curiosity, I have drafted what is, I think, a completely generic model of development. Or rather, of how each stage in any waterfall method should work. I don’t worry too much about identifying the stages themselves – Requirements, Analysis, Design, Build, Test, Accept, Deploy are pretty much universal now.
Anyway, here is the generic stage model, followed by an example of how it might be made more concrete for a design stage. (As always, click to expand the images.)
How would you use this model? Just string together as many instances as you need - one for each development stage. Then add a project management and governance layer, preferably from a stage-based approach such as Prince2. Then make sure that you have something in each box - or if you haven't, that you can explain why not.
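To make the 'stringing together' concrete, here is a sketch assuming an invented set of generic slots for each stage. The slot names are mine, not the diagram's; the point is only that every stage instance carries the same slots, and an empty slot must be explained rather than silently ignored:

```python
# A sketch of composing a method from instances of a generic stage model.
# Each stage carries the same generic activity slots; building a method is
# just instantiating the slots (or recording why a slot is empty).

GENERIC_SLOTS = ["entry criteria", "inputs", "activities",
                 "verification", "outputs", "exit criteria"]

def make_stage(name, **slots):
    """Build one stage; any generic slot left unfilled is flagged."""
    stage = {"name": name}
    for slot in GENERIC_SLOTS:
        key = slot.replace(" ", "_")  # e.g. entry_criteria=... as a keyword
        stage[slot] = slots.get(key, "EMPTY - explain why")
    return stage

# String together one instance per development stage.
method = [make_stage(s) for s in
          ["Requirements", "Analysis", "Design", "Build",
           "Test", "Accept", "Deploy"]]
```

Each "EMPTY" then becomes exactly the question the model asks of you: either fill the box or explain why it is empty.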
Here's a high-level implementation for a design stage:
As you can see, there is a doubling-up of tasks, and things can be defined and redefined at many more levels, but the general logic remains visible.
'It's not realistic' and 'we don't do things like that around here' are not valid reasons for rejecting an underlying logical model. Being 'realistic' means 'I have given up trying to make things work properly', and 'around here' means 'in this god-forsaken hole where nothing makes any sense' - neither a suitable reason for making an exception. If you're not persuaded, try here.
How stages work
First, a complete overview. (Don't worry, it's all explained as you go along.) The blue box represents a single stage. The rest is explained, step by step, in the next few images. Just click on the pictures to expand them and read the pink boxes for details. If you use any of them, please acknowledge their source.