Previously, I took aim at MoSCoW as a sub-optimal approach to the prioritisation of requirements. Keen not to simply leave a deficit, I wanted to follow up quickly with something a good deal more rigorous. To be fair, too rigorous for it to be used all the time. But as the saying goes, every job has a tool, and every tool has a job.
My principal concern regarding MoSCoW was the coarseness of its gradated priorities. Taking a little licence, its categories correspond to a critical must-have requirement, an almost-critical must-have requirement, a 'don't really need it but I'll take it if it's going', and the somewhat ambiguous 'won't have this time'.
As I elaborate the theme of requirements, I'll try to specify usefully the attributes that can be applied to requirements which inform prioritisation. However, the prioritisation of requirements might not simply be a process of analysis based on ROI, cost of non-compliance or other similar quantitative measures. It might come down to what the customer wants.
The customer isn't necessarily going to understand why you want to prioritise requirements (they're all critical, aren't they?). Once you've justified the importance of prioritised requirements (I'll cover this in a future post), you'll have to assist the customer in the prioritisation while at the same time facilitating the customer's participation. You'll want to ensure the output is documented and that any prioritisation is evidence-based with a clear audit trail.
That's where the technique of paired comparison comes in. This will prioritise requirements with respect to one another, achieving a potentially limitless gradation of requirements. It has to be said, however, that in practice you're not going to be applying it to a large number of requirements. The effort increases pretty steeply with the number of requirements being compared. For instance, with 10 requirements you'd have 45 separate decisions that have to be made, collated and recorded. With 20 you're up to 190. So really, you're talking about small numbers here.
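To see how quickly the workload grows, the number of pairwise decisions for n requirements is simply "n choose 2", i.e. n(n-1)/2. A quick sketch:

```python
from math import comb

def comparisons(n: int) -> int:
    """Number of pairwise decisions needed for n requirements: n*(n-1)/2."""
    return comb(n, 2)

print(comparisons(10))  # 45 decisions
print(comparisons(20))  # 190 decisions
print(comparisons(50))  # 1225 decisions - clearly impractical by hand
```

Doubling the number of requirements roughly quadruples the effort, which is why the technique suits shortlists rather than a full requirements catalogue.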
See below for an illustration of paired comparison. What's being done here is to compare each item in a group with every other item in the group.
For our purposes here, we could relate various items of fruit to letters: A = Apple, B = Banana, C = Cantaloupe and so on. The next step would be to compare A with B, and record the preference. Having done that, A is compared with C and so on until every fruit (requirement) is compared with every other. In my example above, the score is a simple count of the number of times that option was selected; this is then expressed as a percentage, and finally a rank order is created. If you use my spreadsheet here, it'll do all the sums for you once you've filled in the table.
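The count-percentage-rank procedure is simple enough to sketch in code. This is a minimal illustration, not the spreadsheet itself; the fruit names and the recorded preferences are made up for the example:

```python
from itertools import combinations

# The options being prioritised (stand-ins for requirements).
options = ["Apple", "Banana", "Cantaloupe", "Damson"]

# Hypothetical stakeholder decisions: for each pair, which option was preferred.
# In practice these come from the facilitated session with the customer.
preferences = {
    ("Apple", "Banana"): "Apple",
    ("Apple", "Cantaloupe"): "Cantaloupe",
    ("Apple", "Damson"): "Apple",
    ("Banana", "Cantaloupe"): "Cantaloupe",
    ("Banana", "Damson"): "Banana",
    ("Cantaloupe", "Damson"): "Cantaloupe",
}

# Score = number of times each option was selected across all pairs.
scores = {opt: 0 for opt in options}
for pair in combinations(options, 2):
    scores[preferences[pair]] += 1

total = sum(scores.values())  # always n*(n-1)/2, here 6

# Express each score as a percentage and produce a rank order.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (opt, score) in enumerate(ranked, start=1):
    print(f"{rank}. {opt}: {score} wins ({100 * score / total:.0f}%)")
# 1. Cantaloupe: 3 wins (50%)
# 2. Apple: 2 wins (33%)
# 3. Banana: 1 wins (17%)
# 4. Damson: 0 wins (0%)
```

Note the percentages sum to 100 across the options, which is what makes them a convenient way to express the gradation of preferences to stakeholders.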
If you have a short list of requirements which are expensive and need to be shortlisted with respect to funding constraints, this would be a good approach. Equally, it could be used to get some user perspective on which features they'd like to see prioritised within a new implementation. There's nothing to stop you underpinning this activity with deep-dive technical analysis, but if you already have the figures you're less likely to need paired comparison. However, if you're in the business of comparing apples with pears and require an output which clearly reflects stakeholder preferences, this should help you out.