In many (perhaps most) of the organisations I have worked in, the question of who reviews what has been contentious, ambiguous or given a quite unnatural answer. The reason for this unhappy situation is that it is often difficult for a non-technical reviewer to evaluate a project, especially its technical content.
In my own specialist area – IT – this might manifest itself in a pained question such as ‘How can the business approve a change in a database design?’ But there are similar questions in all complex management situations – can techies contribute usefully to business cases, for example?
Good question – and in my experience, people only say ‘good question’ when they mean that there’s no good answer. But in this case, there is an answer, and what is more once the answer is understood it leads to a more robust approach to reviewing generally.
The basic problem is to decide what objective reviewing is trying to achieve, and so to decide whether non-technical (non-business, etc.) reviewers have any role to play in achieving it. To put it concisely, the purpose of reviewing is to decide whether the item under review is meeting its requirements. I don’t mean this in the narrow technical sense of ‘requirement’ – i.e., a documented statement of what the work must achieve to be considered a success. I just mean: does it do what it is supposed to do? This might well mean ‘does it fulfil its requirements?’, but it could also mean ‘does it comply with this specification?’ or ‘if we follow this plan, will we succeed?’, or any number of other things.
From that point of view, the right reviewers are the people who can – and need to – make that call. But that still doesn’t mean that they are technically capable of understanding the item they are reviewing. Or are they? In what sense do they need a precise technical understanding of the content of the item – for example, a design document - to be able to evaluate it? To put my complete argument in a nutshell, what I am getting at is the idea that reviewing is based not on what it says so much as on what it means.
To go back to my problem about business people reviewing a change to a database design: can they understand what the change says? Probably not, if by that you mean a grasp of namespaces, indexing and denormalisation issues – and their opinion of such things, in strictly technical terms, is probably worthless.
But that isn’t necessarily all that the review is for. Behind every such technical change there is a pyramid of managerial and business implications that non-technical reviewers can not only understand perfectly well, but are probably better placed than anyone else to judge.
This is illustrated in the following diagram:
Hopefully it is clear what the diagram implies. At the lowest level, where the database change itself occurs, there is probably little benefit to be had from asking non-technical people what they think of the change from a purely technical point of view. ‘Who knows, and who cares?’ is probably the right answer. But as soon as the wider implications of the change – the non-technical elements of what the change means rather than the details of what the change documents say – start coming to the fore, both their interest and their ability to judge should start to grow rapidly.
For example, assume that the database change in question is to move from a distributed to a centralised structure. Although the technical issues will be beyond the business’ grasp, and so will most of the implementation and operational issues, not much else should be beyond them. Looking at the diagram again, what changes in test requirements will this database change call for? To have all your database testers in one team, located centrally, rather than separate teams all around the business? What does that entail? Much lower costs? Great, we’ll have it. And a simplified roll-out that can now happen three months earlier? Even better. But what is the downside? The change in platform means that we will need to recruit a whole new database team? How long will that take? What will it cost? Oh... not such a no-brainer then. And there’s a small chance that we won’t be able to meet our delivery timescales after all? But at least the total development cost will be well down? Great! But the operating cost will in fact go up? Damn...
It’s a complicated business, as anyone who has been in such a situation will testify. But perhaps it should be – and perhaps excluding the business (and other non-technical people) from reviews on the grounds that they ‘won’t understand’ what they are reviewing is not only a very narrow interpretation of what ‘understanding’ means in such a situation but positively counter-productive. After all, if you don’t ask them now, when will you? When it’s too late?
Of course, it’s not easy to make sure a review like this is successfully executed. It’s very hard to work out the real implications of as subtle a thing as a database change. But if you are the project manager and you can’t tell your customers what the consequences of your project really are, perhaps you should be finding out. After all, it’s not as though they will never find out. But the alternative to telling them in an orderly and systematic manner like the above can only be finding out through missed milestones and blown budgets.
In a way, none of this should need saying – anyone who raises a change request nowadays will perform an impact analysis that covers most of these issues. But as so often in project management, this simple lesson has not spread to areas like reviewing (of products or projects) in the systematic manner one would have hoped.
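The impact analysis mentioned here can be sketched as a simple structure. This is only an illustration of the idea, with invented levels and entries loosely based on the database example – not a real impact-analysis template:

```python
# A sketch of the post's "pyramid of implications": behind a technical change
# sit higher-level implications, and the higher the level, the better placed
# non-technical reviewers are to judge it. All entries are illustrative.

impact_pyramid = [
    # (level, example implication, who is best placed to judge it)
    ("technical",      "move from a distributed to a centralised database", "the DBAs"),
    ("implementation", "one central test team instead of regional teams",   "delivery managers"),
    ("operational",    "new platform requires a new database support team", "service managers"),
    ("business",       "lower development cost, higher operating cost",     "the business"),
]

def reviewers_for(level):
    """Return the reviewers the pyramid suggests for a given level of implication."""
    return [who for lvl, _, who in impact_pyramid if lvl == level]

print(reviewers_for("business"))   # ['the business']
```

The point of the structure is simply that the review list grows, rather than shrinks, as you move up the pyramid away from the purely technical.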
Thursday, 28 August 2008
Sunday, 17 August 2008
How stage boundary reviews work
Stages are a basic concept in project management these days: whole methodologies such as Prince2, and practically every other delivery method from waterfall to DSDM, assume the same stage-based structure. But although all such methods conclude each stage with a stage boundary review, I have found little coherent thought on this topic. So here is my tuppence-worth.
This rather unhelpful diagram shows the basic position reviewers find themselves in: the orange oblong is the project, with the vertical lines marking the stages. The current review is right there in the middle – some way through, but still a way off the project’s end.
So how can you tell how well you are doing? There are basically four questions about the project itself you want answers to:
- Did the last stage go as planned?
- Is your project making satisfactory progress as a whole?
- Will your project deliver as expected?
- Based on the above, what exactly do you need to do about the next stage?
That is exactly what the next four diagrams explain. Firstly, looking back on the most recent stage, how did it go?
For example, was product quality as required, specified and planned? Were milestones and deliverables as expected? Was the stakeholders’ involvement as agreed, and even if it was, was it enough? When coming up with a stage boundary review checklist, you could do a lot worse than start from these basics.
Next, looking back right to the project start, how has it gone so far?
In particular, what have the trends been? How has the project’s profile evolved over time – stage by stage, how have its basic features such as scope, delivery, cost, quality and risk unfolded? Are there recognisable trends? If so, what were they, what do they mean, and what do you plan to do about them?
The next question involves a complete about-face: stop looking back and start looking forward. The basic question now is: what are your project’s prospects? Looking to the end of the project, are there any unexpected obstacles? Risks? Threats? Opportunities? If there are, again, what do you plan to do about them?
Finally – at least as far as the project itself is concerned – now that you know how you are doing and what the longer-term picture looks like, are you ready to start the next stage? For example, are the following all well defined, and has provision been made for them all to be managed? Your plans and estimates? All outstanding issues and risks? Your project’s dependencies? The right team + resources? The right technology, facilities and environments? The right stakeholder awareness, commitment and involvement? If not, now is the time to do something about it.
But of course, projects do not exist in isolation. Unless you are operating in your own private universe, the project must also be evaluated from the point of view of the organisation on whose behalf it is being run. So there are four more questions that need to be answered before you can call your stage boundary review complete:
- Does the project still fit the portfolio?
- Is the project’s business performance acceptable?
- Does the project comply with all relevant policies and standards?
- Are all project information and decisions under formal control?
Exactly how well is the project doing from the organisation’s point of view? Hence the next question: to evaluate the project against its business case. Costs? Benefits? Risks? Without answers to questions like these, it is hard to see how the project continues to be justified.
There is also a more practical side to a project’s ‘fit’, which is illustrated in the final two pictures. The essential question posed by the following diagram is this: Does the project comply with all relevant policies and standards? These might take many forms – regulatory requirements, quality standards, corporate policies, business roadmaps – anything that defines the broader shape into which the project must fit to be considered a success.
Finally, the project is part of the wider organisation from an operational point of view too. It needs to fit in in the sense that it is being tracked and recorded and measured and analysed and all those other things middle management do. This naturally raises a range of essentially administrative questions about whether the project is up-to-date regarding things like records, reports, escalations, change control, lessons learned, and so on. If not, perhaps now is the moment to do something about it.
Once you have this basic logic, the next issue is to identify specific questions (and perhaps measures) you would use to work out the answer. You will probably end up with a hundred or so. Usually people react to this number with horror – surely it will take days to review a project against more than 100 criteria? But in practice this is not a problem. After all, the stage is presumably only ending because the project manager believes that the project has met all that stage’s requirements (or if not, has obtained the necessary exemptions and waivers and re-baselined the project accordingly). That means that deliveries are complete, records and reports up to date, change requests all dealt with, all residual issues and risks under control, and so on. If that is the case – which is a logical entry condition for a stage boundary review – then the answer to every single one of your hundred questions is going to be simple and straightforward. The entire review should take literally seconds per question, and minutes for the review as a whole. Well, that may be a little optimistic, but if the review does take a lot longer than that, it should not be because there were so many questions to answer.
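The arithmetic here can be pictured with a minimal sketch: if the entry conditions genuinely hold, every checklist question reduces to a quick yes/no lookup against project state the manager has already brought up to date. The state keys and questions below are invented for illustration, not taken from any real checklist:

```python
# A stage boundary review as a fast pass over known project state. If the
# entry conditions are met, no question here takes more than seconds.
project_state = {
    "deliverables_complete":   True,
    "records_up_to_date":      True,
    "change_requests_closed":  True,
    "risks_under_control":     True,
}

checklist = [
    ("Are all stage deliverables complete?",          "deliverables_complete"),
    ("Are records and reports up to date?",           "records_up_to_date"),
    ("Have all change requests been dealt with?",     "change_requests_closed"),
    ("Are residual issues and risks under control?",  "risks_under_control"),
]

def run_review(state, questions):
    """Return the questions that fail – each one an individual risk to manage,
    not automatically a reason to stop the whole project."""
    return [q for q, key in questions if not state.get(key, False)]

print(run_review(project_state, checklist))   # []  – the review passes
```

Scaled up to a hundred questions, the same logic applies: the cost of the review is dominated by how well prepared the project is, not by the length of the list.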
It is quite simple in concept. However, there are also certain things you do not want your boundary reviews to deal with, and quite a few review systems I have worked with fell into these mistakes. Perhaps the most common is repeating tasks that should already have been put to bed – checking that the right people signed off the last stage’s deliverables, even reviewing them again, and so on. This mistake is usually indicated by the kinds of question the review checklist contains – about the content of documents, not the state of the project.
That in turn brings up a further important point: that the purpose of the review is to check the viability of the project as a whole. It is crucial that the review process is designed to perform this task and this task only. Everything else should have been completed as an entry condition for the review itself. If it isn’t already done, most people won’t be interested or qualified to participate. After all, stage boundary reviews are governance events, and the way they work – and do not work – should reflect this fact.
Another typical error is to attempt to score the results. Although not a mistake in principle, it usually doesn't work. Recently I worked with a client whose reviews include scoring each item, and the review has to reach a pre-defined target if it is to pass. I don’t really understand this.
Firstly, it is the Project Board’s job to make that call – not some artificial calculation. Secondly, most such systems are not in fact measuring anything. In some cases, the scores are completely subjective. That is, reviewers are asked to give the item a score, but by and large they do this without any objective guidelines as to how to score – and in full knowledge of what the ‘pass’ score is! So if they want the review to pass and they know that the pass score is, say, 3 out of 5, they give the item – at least 3! Not only are scores of this kind quite meaningless, but by using numbers an illusion of objectivity is created.
In other cases, the scores stand in no real relationship to any quantified metric of success or failure. So even if you really can tell that this item is worth only 3 out of 5, there is little or no link between the criteria and the overall success of the project. So the number, interesting though it may be, is completely unconnected with the purpose of the review!
Finally, problems that arise during a stage boundary review should not usually derail or even delay the project. Again many companies take the view that ‘failing’ a review should stop the project until everything is fixed. In the first company I ever worked in that used SBRs, the whole of the previous stage had to be repeated! This is bonkers, of course.
The right approach, I think, is to treat the review as a key moment of consolidation for the project as a whole, but to treat the problems it raises as individual risks. It is possible that the outcome of the review will be the project’s cancellation or a fundamental restructuring, but this should be rare. More usually, most work should continue as planned while the review is taking place, and only work connected with the specific issues raised by the review should be delayed. If there is something so fundamentally wrong with the project that it should simply cease, it should not take a stage boundary review to work this out!
Labels: All, Management systems, Risk, Stage containment
Friday, 15 August 2008
What business cases are worth
Although I am a sceptic about many aspects of business, one thing that does seem to offer real value is the business case. Being able to show that what comes out will be more than what goes in strikes me as a fairly elementary test of whether a project or operation is worthwhile, and some of the business case models I have seen are pretty sophisticated.
Pity so few businesses have any idea how to use them. A few may be using concepts like ROI for real planning, but for most this sort of calculation is used strictly after the event. Likewise for project-based organisations. For example, in the IT world a majority of projects now have a business case, but only a minority really use it to manage the project. It ought to be an invaluable means for making all sorts of decisions – prioritisation, triage, change requests, everything really. But in practice it isn’t.
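As a sketch of what ‘using the business case to manage the project’ might look like in practice, here is a minimal test of a change request against the case’s figures. The numbers and the crude net-benefit rule are invented for illustration – a real case would discount cash flows, weigh risk, and so on:

```python
# Does a proposed change leave the business case intact? A deliberately
# simple test: benefits must still exceed costs after the change's impact.
business_case = {"benefit": 1_200_000, "cost": 800_000}

def still_justified(case, delta_benefit=0, delta_cost=0):
    """Apply a change request's impact to the case and re-test it."""
    return (case["benefit"] + delta_benefit) > (case["cost"] + delta_cost)

# A change adding 150k of cost for 50k of extra benefit still leaves the
# case net positive, so it can be judged on its other merits:
print(still_justified(business_case, delta_benefit=50_000, delta_cost=150_000))  # True
```

Even a check this crude gives prioritisation, triage and change decisions something objective to bite on.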
My favourite business case story comes from a decade ago, when I was consulting to a credit card company. One day there landed on my desk the business case for a marketing project which said, among much else, that one of the benefits planned to accrue from the project was that the company would issue 750,000,000 more cards in Europe.
750,000,000 more cards? That was almost two for everyone in the EU! So of course I rang up the analyst who had written this, and it turned out that he had meant to write 750,000 – a rather more realistic number. We had a friendly and very amusing conversation about how easy it is to make mistakes of that kind. But when he said he would correct the document and reissue it, I asked him to leave it just as it was and see who else noticed.
So we waited. And waited. The claim was repeated in every important document from that point onwards – the requirements spec, the analysis, the designs, the testing – everywhere. And not a single other individual questioned this preposterous number. Ever.
Friday, 8 August 2008
Top 10 CSFs for metrics programmes
Some preliminary thoughts about what will help make a success of a metrics programme. Most are not about metrics at all, though - which should not be too surprising, given that the main problems with metrics programmes are much the same as for any other management programme:
- Work out what you are measuring for. More precisely, make sure that every metric is tied to a goal you are trying to achieve. More precisely still, make sure that every measurement is explicitly tied to a key performance indicator that is explicitly tied to a critical success factor that is explicitly tied to a goal you are trying to achieve.
- Conversely, ensure that you have the sponsorship needed to force/enforce action. If your boss doesn't want it enough to make it happen, it won't survive.
- Measurement is not the first step in management. It assumes at least a fairly mature management environment. If you don't have that, and if the problems, data, tools and techniques you use are not at least fairly well established, then your measurements will mean practically nothing.
- A measurement programme is not just a technical tool, it's a whole management programme. And as with every management programme, success comes from spreading awareness, commitment and involvement.
- Things must be seen to improve following metrics-based reports. Otherwise what is it for, and why should anyone collaborate?
- Conversely, only measure things you can really change. Discovering that you are really bad at something you have no choice about (regulations, things that are too expensive or not politically acceptable to change, and so on) is a waste of effort, creates aspirations to improve in areas you don't control and is just plain depressing.
- The programme must serve the interests of those who collect the data. Otherwise collecting it will be hard work and the quality of the data will be poor.
- Don’t use metrics to single out individual culprits. They will soon start to massage the figures, and personal problems are usually only symptoms of system problems.
- Measures must be unambiguously defined, fully understood and consistently applied.
- This isn't trivial. Investment, training and tools must all be provided.
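The first point above – tying every measurement through a KPI and a critical success factor to a goal – can be sketched as a traceability check. All the names here are invented examples:

```python
# Each measurement must chain: measurement -> KPI -> CSF -> goal. A metric
# that breaks the chain anywhere is measurement for its own sake.
kpis  = {"escaped_defects_per_release": "product_quality"}   # KPI -> CSF
csfs  = {"product_quality": "retain_key_customers"}          # CSF -> goal
goals = {"retain_key_customers"}

def traces_to_goal(kpi):
    """True if the KPI a measurement feeds chains all the way to a goal."""
    return csfs.get(kpis.get(kpi)) in goals

print(traces_to_goal("escaped_defects_per_release"))  # True
print(traces_to_goal("lines_of_code_written"))        # False – drop it
```

A metric that fails the check is exactly the kind of orphan measurement the list warns against.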
Metrics with no baseline
Most organisations start a metrics programme without baselining their current performance. Although quite a few people get their knickers in a twist about this (‘How can you measure anything if you don't have a baseline??!!’), it may not be good, but neither is it fatal …
Here are some methods for creating a perfectly credible metrics programme with no baselines:
- Decide it doesn’t need a baseline. Sometimes trends are not important to the problem you are trying to solve.
- Estimate the baseline from indicative data. For example, financial figures are frequently good indicators, even if they are inherently indirect measures of what you are really interested in.
- Don’t create a baseline – measure from Day 1 only. Just worry about getting better or worse.
- Review a sample of the existing population, and treat that as your baseline. Just make sure that your sampling is meaningful, which is not as easy a thing as it sounds.
- Adopt industry standards. They may not represent the best, but they are not a bad starting point.
- Start from targets, not baselines. That way you'll have something to move towards rather than away from, which is a lot more positive.
- Don't even measure. Sometimes you just know what needs doing!
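The ‘measure from Day 1 only’ option is easy to picture: keep no baseline at all and track only the direction of travel. The readings below are invented sample data:

```python
# No baseline, just interval-on-interval direction: better or worse?
readings = [42, 39, 37, 38, 31]   # e.g. open defects, sampled weekly from Day 1

def trend(values):
    """Average change per interval; negative means improving here."""
    deltas = [b - a for a, b in zip(values, values[1:])]
    return sum(deltas) / len(deltas)

print(trend(readings))   # -2.75 – improving, and no baseline was required
```

The absolute numbers mean little without a baseline, but the sign and size of the trend are perfectly usable from the very first pair of readings.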
Thursday, 7 August 2008
Documenting V-model-based methodologies
I am often involved in methodology development projects, usually based on the standard V-model. Any serious solution delivery process now greatly exceeds this simple model, yet it is hard to diagram the extra tasks without creating a complete mess that no-one can understand. This raises the question of which functions, processes and tasks need to be shown outside the main V.
In response, the first question I would ask is what exactly your organisation’s ‘standard’ approach is. That is, what is the approach that you usually take to delivery, that occupies 80% of your people 80% of the time, and so on? That then tells you what should lie in the main V, and what not.
In any organisation whose main delivery process is bespoke development (as opposed to, say, buying in and adapting packages), I would suggest that all else should be pushed off the main V. This would normally mean that the following are all put offline from the main diagram:
- Procurement.
- Service delivery.
- Testing.
- Project management.
- Business change management.
In each case, one or more of the following reasons applies:
- It’s a minor variant of the major process – the main V should show only the major process (e.g., in this case, procurement).
- It is logically asynchronous with the chosen logical model of solution delivery – in most cases, stage-based development (e.g., test preparation of various kinds).
- Its logical structure is non-linear (e.g., project management).
- It may be invoked at any point (e.g., change control, defect management, risk management, and so on).
- It has a specialised audience, so most people don’t need to know how it is done (all lower-level technical processes).
Hence also my exclusion of project management from the V, which will probably strike most people as odd. But most project management activity (governance, planning, task assignment + tracking, risks and issues, reporting, etc.) is either ad hoc, repetitious or cyclical, and does not fit the linear structure of a V.
On the other hand, to make sure that everything stays aligned and everyone knows what they are supposed to be doing, the main V should ideally include not only everything in the standard delivery sequence but also the touch-points with each of these other functions. It should show not only the points at which they provide their respective ‘services’ but also the points at which they take their ‘feed’ from the main process. This can make the main diagram physically or logically complex, but I think that it would be ideal.
For example, the main links between a bespoke development V and service planning are probably:
- During Initiation, where service planning needs to indicate the contributions it will need to make to the project.
- During Requirements, where service planning needs to identify the non-functional requirements it needs to define SLAs, shape the environmental design, and so on.
- During Design, where service planning generally needs to be involved in environmental + infrastructural design.
- During Build, where service planning may need to be involved in unit + integration testing where these relate to infrastructure and environments, and in non-functional testing where this will bear on SLAs.
- During system testing, service planning will be interested not only in non-functional testing but also in identifying any work-arounds + FAQs (for the Help Desk), plus starting to collect any residual defects that are likely to affect the working solution.
- Finally, during the Deployment stage, service planning will be involved in Operational Acceptance Testing and a range of deployment planning and activity.
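The touch-points listed above amount to a small table mapping each stage of the V to the ‘services’ and ‘feeds’ exchanged with service planning. Here is a sketch of how that might be recorded – the entries paraphrase the list above and are not from any formal standard:

```python
# Touch-points between the main V and one offline function (service planning).
touch_points = {
    "Initiation":     ["indicate the contributions service planning will make"],
    "Requirements":   ["identify the non-functional requirements needed to define SLAs"],
    "Design":         ["environmental + infrastructural design"],
    "Build":          ["unit + integration testing of infrastructure and environments",
                       "non-functional testing that bears on SLAs"],
    "System testing": ["non-functional testing",
                       "work-arounds + FAQs for the Help Desk",
                       "collecting residual defects likely to affect the working solution"],
    "Deployment":     ["Operational Acceptance Testing",
                       "deployment planning and activity"],
}

# Render a simple checklist for one stage of the V:
for item in touch_points["Build"]:
    print(f"- Service planning: {item}")
```

Recording each offline function this way keeps the main diagram clean while still making every touch-point explicit.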