Monday, May 28, 2012

Proportionality in controls

Identifying the 'pinch' points early and putting more effort into controlling those elements is, in my opinion, time well spent.


In the example below, we're five months into a project; the customer isn't too sensitive to cost, but is extremely sensitive to delays.

[Three control charts: CPI plotted against its cost tolerance, SPI plotted against its (tighter) schedule tolerance, and defects identified versus defects fixed each month.]

Doubtless there are a few ways of tracking quality. I've often thought that if you're handing over project deliverables sequentially, CPI and SPI together give you something of a 'QPI', or quality performance index, since your customer is signing off acceptance of products as you go.


With IT projects you do involve the customer in test and review activities throughout the design and development work, but sign-off tends to be a bigger-bang affair. The PM of any IT project needs to remain fully cognisant of the customer's wants, needs and any deficiencies on an ongoing basis. This activity is encompassed within testing and defect management.


I'll post at a later date on the range of defect management metrics I like to capture, but here I've singled out something I've always been very keen on: the bottom of the three charts above. One thing that isn't entirely clear is that the graph isn't cumulative. Each month, the number of defects identified that month is plotted against the number of defects fixed that month. At a glance, this shows whether you have an unsustainable, worsening or improving state with regard to defect identification and resolution.
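
If you'd like to tinker with the idea, here's a minimal sketch (mine, not part of the original charts) with made-up figures - each month's defects found set against that month's defects fixed, with the running backlog telling you which state you're in:

```python
found = [12, 18, 25, 30, 24]   # defects identified each month (not cumulative)
fixed = [10, 14, 19, 28, 27]   # defects fixed each month

backlog = 0
for month, (f_in, f_out) in enumerate(zip(found, fixed), start=1):
    backlog += f_in - f_out
    trend = "worsening" if f_in > f_out else "improving"
    print(f"Month {month}: found {f_in}, fixed {f_out}, "
          f"open backlog {backlog} ({trend})")
```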

Note the control charts above: not only are these easily maintained and communicated, they illustrate trends and the tolerances to which the project is expected to adhere. Incidentally, I'm a fan of publishing this sort of material directly to the project team - it encourages shared ownership and good alignment of decisions throughout the project team.

As can be seen above, we've got more tolerance on cost than on time. Take the cost (CPI) plot: we went from a good start, to holding steady, followed by three consecutive periods of deteriorating performance. With the benefit of hindsight, a more frequent reporting interval would have been useful - with fortnightly reporting, somewhere between month 2 and month 3 it would have become apparent that corrective action was required, and consequently far easier to justify.
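
For anyone wanting the arithmetic behind plots like these: CPI = EV / AC and SPI = EV / PV. Below is a rough sketch - all figures and tolerance thresholds invented for illustration - of how each period's indices might be computed and checked against tolerance:

```python
ev = [100, 210, 300, 380, 450]   # earned value per reporting period (cumulative)
ac = [ 95, 205, 310, 410, 500]   # actual cost
pv = [100, 215, 320, 400, 470]   # planned value

COST_TOLERANCE = 0.90      # wider tolerance on cost...
SCHEDULE_TOLERANCE = 0.95  # ...tighter tolerance on time

for month, (e, a, p) in enumerate(zip(ev, ac, pv), start=1):
    cpi, spi = e / a, e / p
    flags = []
    if cpi < COST_TOLERANCE:
        flags.append("CPI breach")
    if spi < SCHEDULE_TOLERANCE:
        flags.append("SPI breach")
    print(f"Month {month}: CPI={cpi:.2f} SPI={spi:.2f} {', '.join(flags)}")
```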

The time (SPI) plot illustrates this point very well. What we could have seen in advance was that the very limited tolerance on schedule would have benefited from more regular reporting, for precisely the reason above: the emergence of a trend would have become apparent far more quickly.


Indicators of this sort are one approach to helping ensure that, within any given project, benefits are maximised and risk is minimised. The best indicators are those that inform timely decision making before things go off the rails, not those simply reporting the fact that they have.



Tuesday, May 15, 2012

The Great Wall of China, governance and my sock drawer

Went to a talk by Geoff Reiss tonight - author of, amongst other things, Project Management Demystified (I own a copy - recommended).

All sorts of interesting points cropped up and I thought I'd relate a few here. The talk was entitled "Great and Deadly Projects". Geoff, it turns out, has a considerable interest in singular projects undertaken across the millennia (the Great Pyramids and the Great Wall of China, to name a couple).

The Panama Canal, under construction in 1911 for instance (great), is thought to have cost some 30,000 lives (deadly). The question of whether, a century on in 2011, there were 30,000 projects with <1 fatality apiece begs an answer.

A well-intentioned attendee suggested that this was clearly a case for better governance. Some wag in the audience responded that this would simply have led to more accurate record keeping of the casualties.

This last is a prompt that I must set pen to paper on the topic of governance. Mind you, I must first acknowledge the dilemma that if you asked 10 PMs what governance was, you'd get 11 different answers. Can the governance specialists do better?

Thank goodness too that Geoff Reiss fielded something that's been on my mind for quite some time now, namely that the truly great projects were very lucky.

There's a lengthy post in its own right here, but I'll aim to unsettle you a bit right now. Let's set the scene on the topic of risk. (I have to do this because the nomenclature used on the subject within the spectrum of project management is inconsistent and imprecise.) And that's twice I've been hindered by terminology in one blog post.

First, what's risk? Something bad that may happen? Maybe. Let's say that risk management is the job of reducing the likelihood of something bad happening, and reducing the impact if it does happen. So if you can't assess the likelihood of something, or what the impact could be - is it a risk?

Building on this is the assertion that if the something bad that may happen is a) subject to a probabilistic distribution and b) has a known consequence or impact, then it is a risk. I've been racking my brains for the perfect illustrative vehicle for this, and the best I could come up with is my sock drawer. More on this in another post, but for now: if it's not a risk, what is it? I suggest you're standing at the yawning abyss of mathematical uncertainty. The problem is that almost everything you read about risk will incorporate the word uncertainty.
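
To make the distinction concrete, here's a toy sketch - probability and impact figures entirely invented - of what you can do once something qualifies as a risk in the sense above, namely compute an expected impact:

```python
import random

probability = 0.2   # likelihood the bad thing happens
impact = 50_000     # known consequence if it does (say, GBP)

# With a probability and a known impact, the expected impact is just their
# product - something you simply cannot compute for mere 'uncertainty'.
expected_impact = probability * impact
print(f"Expected impact: {expected_impact:,.0f}")

# With a distribution rather than a point estimate, Monte Carlo sampling
# recovers the same figure empirically.
samples = [impact if random.random() < probability else 0
           for _ in range(100_000)]
print(f"Simulated expected impact: {sum(samples) / len(samples):,.0f}")
```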

Something else rather interesting arose on the topic of communications today too. While lamenting dwindling levels of interest and low rates of response to communications initiatives, a fellow project professional observed that stakeholders were only reporting positively via surveys about their experiences. The noteworthy point is that it was the stakeholders still attending sessions who were feeding back positively; no channel of communication existed with the stakeholders who'd left for greener pastures. I don't quite know what the communications parlance is for talking to people who are no longer talking to you, but in its absence you might try 'friendly consultation'.

Note to self - must write to Francis Maude and ask why Earned Value Management isn't mandated in public sector projects.


Wednesday, May 9, 2012

Requirements, part 3

So far, I've had a good whinge about the MoSCoW method of requirements prioritisation, and suggested the alternative method of paired comparison. In this post I'm going to talk a little about the life-cycle of a requirement and why the management of requirements is pivotal to testing and product acceptance. In a future post, I'll come back to the topic of requirements prioritisation to provide a middle ground between MoSCoW and paired comparison.

First, let's revisit the method of achieving quality I put forward in a previous post.

[Diagram: requirements feed a quality risk analysis; development and implementation follow; testing records outcomes in the quality log or the defect log.]

It's probably time to add a few more specifics around this, and that in turn will provide an entry point into several subsequent posts.

  1. Requirements - the subject of this post and (at the time of writing) two others besides. The scope of this activity extends to the elicitation, prioritisation, specification and management of requirements.
  2. The QRA will be the topic of a subsequent post. For those impatient to get to grips with it, read Rex Black's material here. In short, it's an analytic approach to prioritising the testing effort with a view to minimising risk.
  3. Development and implementation - we will get to this at some future point. As project manager you should be planning, directing and controlling this activity.
  4. Testing - a big, big topic, but I'll endeavour to distil some valuable points over the course of future posts. I'm a big fan of V-Model development and of sowing test activities throughout all development activities. Crucially, every development activity has a corresponding test activity. I would also keep in mind IEEE 829, a standard approach to test plan management. Links here and here. Plenty more to come on this topic.
  5. Defects have a management life-cycle all of their own (see the sketch after this list). This ensures that efforts to resolve defects are suitably prioritised, that defects are confirmed as fixed when resolved, and a whole host of other activities. I'll post a couple of articles on this in the future.
  6. Lastly, the quality log. Technically it is not strictly required; however, in isolation the defect log (a component of defect management) paints an overly bleak picture. (Try having a defect log of 600 things which don't work - it doesn't make for happy project boards either.) Incidentally, it helps with product acceptance too: where the defect log records the things which don't work, the quality log provides a central repository of all the things that do.
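
As trailed in point 5, here is a minimal sketch of a defect life-cycle expressed as a simple state machine. The states and transitions are a common pattern of my own choosing, not a prescribed standard:

```python
# Legal transitions between defect states; anything else is rejected.
TRANSITIONS = {
    "new":         {"triaged"},
    "triaged":     {"in_progress", "rejected"},
    "in_progress": {"fixed"},
    "fixed":       {"confirmed", "reopened"},  # confirmed as fixed by re-test
    "reopened":    {"in_progress"},
}

def advance(state: str, new_state: str) -> str:
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"Illegal transition {state} -> {new_state}")
    return new_state

# Walk one defect through a happy path.
state = "new"
for step in ["triaged", "in_progress", "fixed", "confirmed"]:
    state = advance(state, step)
    print(state)
```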
So with all this said, let's focus on the role requirements play at each stage above. First, there are the requirements themselves. The QRA cannot be produced without a clearly understood set of requirements - and do note that this tool is sometimes crucial in placating legitimate stakeholder anxieties.

Undoubtedly it is possible to start development and implementation activities without a detailed requirements specification. Doubtless, too, this is one of the principal causes of project failure. You can certainly build something, but you cannot achieve quality without a clear understanding of what is required.

Testing, specifically, is the activity of identifying defects, and defects are defined as requirements that haven't been met. You cannot test a system if you don't know what the requirements for that system are. Defect management - the process of managing defects through the defect life-cycle - likewise cannot be undertaken without a detailed understanding of requirements.

So at every stage of this activity, requirements play a principal role. I think that'll probably suffice in terms of the 'big picture'; I have endeavoured to provide a solid basis upon which to build in future posts. These will include posts on how I typically approach the prioritisation of requirements, an explanation of the QRA, a complete review of the defect management life-cycle and more besides.






Tuesday, May 8, 2012

Influence - a pragmatic and effective approach

Robert B. Cialdini's book, Influence: The Psychology of Persuasion, is a good read. Its accounts of the consistently underhanded techniques of car salespeople and evangelists for one faith or another are entertaining, enlightening and extremely useful in terms of knowing the sorts of tactics employed by sales, marketing and a host of other people trying to get you to do something they want.

However, it's not that much use to this project manager, other than for gaming* members of the project team on whose turn it is to make the tea.

* One of those nouns pressed into service as a verb, increasingly popular. However, its links to game theory, and the constant reminder that complexity manifests unintended consequences, mean the author has a ticket for this bandwagon.

Typically I have fallen back on the techniques learned in the field of service delivery. Equally I have had the wonderful opportunity to see just how powerful presenting stakeholders with new and better information can be. Don't ever underestimate the potency of this very simple approach. And while we're at it, don't ever confuse simple with easy!

By pure luck (a lot more on this in the future), I was fortunate enough to attend a talk run by the excellent APM on the subject of influence. This was an hour-long distillation of some of the key points contained in the book Influencer: The Power to Change Anything - a deservedly bold title with too many authors to list easily here.

I'm not going to try to reproduce precisely what the book encompasses, but I will include the illustration below, which I think captures a core principle.

[Illustration: the six-box influence model - 'can' and 'want to' across the personal, interpersonal and environmental domains, applied to cycle-helmet use.]

I've used the simple example of encouraging cycle-helmet use amongst cycle commuters. The book's authors, backed by a bibliography longer than is typical, contend that people don't do things because they either can't or don't want to, and do things because they can and they want to.

Straight away, I think there's an interesting observation to be made right there: people need only one reason (can't, or don't want to) not to do something, but both boxes ticked (can, and want to) to do anything.

Next, the book's authors identify three domains in which these motivations can arise: the personal (you), the interpersonal (peer pressure) and the environmental (what's sitting in the immediate vicinity).

Quite apart from having been proven very effective, this is a wholly defensible and legitimate strategy to employ with stakeholders - one that can be documented, and one that even the customer will find difficult to fault.

The book elaborates some remarkable success stories using this approach. It makes the point that you only need to 'tick' four out of the possible six boxes in order to stand a very great chance of success.
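
To make the 'boxes' idea concrete, here's a toy sketch using the helmet example. The tick values, and indeed my compression of the model into six booleans, are mine rather than the authors':

```python
# Two motivations (want to, can) across three domains = six boxes.
# All tick values below are invented for the helmet example.
boxes = {
    ("personal",      "want to"): True,   # rider believes helmets matter
    ("personal",      "can"):     True,   # owns a helmet
    ("interpersonal", "want to"): False,  # peers ride bare-headed
    ("interpersonal", "can"):     True,   # colleagues lend spares
    ("environmental", "want to"): False,  # no workplace incentive
    ("environmental", "can"):     True,   # secure helmet storage on site
}

ticked = sum(boxes.values())
print(f"{ticked} of 6 boxes ticked -> "
      f"{'good' if ticked >= 4 else 'poor'} chance of changing behaviour")
```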

Again, the dimension of proportionality must be considered, but if you need to reposition stakeholders and this is critical to your project's success, then I contend there are few better approaches.





Monday, May 7, 2012

Dimensions in stakeholder engagement

In my last post, I tried to make the case for the abolition of stakeholder management. In summary, people don't like being 'managed'.


Don't misunderstand me however, neglect your stakeholders or under resource this crucial aspect of project management and you're in trouble.


There's a very good article here, published in The Register in 2007. Quite apart from making some very good points about Gartner's Magic Quadrant generally, it has some solid points which can be applied here.

[Illustration: a 2x2 grid appraising stakeholders by importance and influence.]

This illustration is perhaps best described as atypically typical. It superficially embodies the approach adopted by some organisations in appraising stakeholders' importance and influence with respect to a project or programme.


There are shortcomings in this approach: it's a snapshot in time; your appraisal of a stakeholder's importance might not tally with theirs; it's too coarsely graded - there are shades of grey here; and, perhaps most importantly, it seeks an 'absolute' measure of stakeholders where I feel a far better approach would be a 'relative' measure.




A few amendments can mitigate some of these shortcomings.

[Illustration: the same grid, more finely graded, with stakeholders placed relative to one another.]

We've now got a more finely graded system of measurement, we have (slightly) alleviated the issue of labelling a stakeholder as unimportant, and we've got a relative approach by which one stakeholder can be appraised against another.


It's still a snapshot in time, and it doesn't incorporate as many dimensions as I personally would like.


Let's keep going.

[Illustration: the chart again, with marker size, colour and movement arrows adding further dimensions.]

Okay - so now we have some way of illustrating the different types of stakeholders. In the example here, stakeholder 1 could be an individual and stakeholder 2 an organisation; I've used the size of the marker to indicate this. I've coloured one red and one blue, and this could indicate anything you want, really. Say what you will, we all have the occasional 'red' stakeholder.


Where stakeholder 2 has come from, and where we'd like stakeholder 1 to go, are also illustrated. We could change the shape too, adding yet more information.

[Illustration: the chart with an identifier added to each quadrant.]

You'll note, too, that I've added identifiers to each quadrant of the chart. This is a good way to tie the stakeholder matrix to the communications plan, enabling you to cite which communications go to which stakeholders (if you need to).
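
For the spreadsheet-averse, here's a sketch of how such a chart might be generated programmatically. All the coordinates, sizes, colours and quadrant labels are invented for illustration:

```python
import matplotlib.pyplot as plt

stakeholders = [
    # name, influence, importance, marker size (individual vs organisation), colour
    ("Stakeholder 1", 2.0, 8.0, 80,  "red"),
    ("Stakeholder 2", 7.5, 6.5, 300, "blue"),
]

fig, ax = plt.subplots()
for name, x, y, size, colour in stakeholders:
    ax.scatter(x, y, s=size, c=colour)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(8, 8))

# Where stakeholder 2 has come from, and where we'd like stakeholder 1 to go.
ax.annotate("", xy=(7.5, 6.5), xytext=(4.0, 3.0),
            arrowprops=dict(arrowstyle="->"))
ax.annotate("", xy=(6.0, 8.5), xytext=(2.0, 8.0),
            arrowprops=dict(arrowstyle="->", linestyle="dashed"))

# Quadrant identifiers, to tie the matrix back to the communications plan.
for label, (lx, ly) in {"Q1": (2.5, 9.5), "Q2": (7.5, 9.5),
                        "Q3": (2.5, 0.5), "Q4": (7.5, 0.5)}.items():
    ax.text(lx, ly, label, ha="center")

ax.axvline(5)
ax.axhline(5)
ax.set(xlim=(0, 10), ylim=(0, 10), xlabel="Influence", ylabel="Importance")
plt.show()
```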


Nothing we've discussed here has encompassed the very important issue of proportionality, and I do not say what you must or must not do when it comes to stakeholder engagement activities. I'll post more on this in the future.


This question of proportionality will go a long way towards informing how you proceed in terms of generating, maintaining and publishing this information. You could scratch it out on a sheet of paper with a pencil, and often that might be wholly suited to the activity. In that instance, remain conscious of the limitations and adopt a more appropriate level of rigour if circumstances change.


At the other end of the scale, you could derive stakeholders' positions relative to one another with questionnaires, plot the output via Excel, publish via SharePoint's excellent Excel Web Services, and include hyperlinks to stakeholder contact details and a host of other information.


Some points to finish off with. It doesn't have to be importance and influence; the following may be just as valid: impact, interest, sponsorship, tenure, location, sensitivity and probably quite a few more. Often influence and impact work just fine, but don't just 'go along' with those measures if they're not optimal.


Undoubtedly, the question of how to reposition stakeholders must be asked. That'll be the subject of another post.


Also, has anyone ever thought of sitting down with stakeholders and asking where they feel they are most appropriately placed? This resolves the potentially knotty issue of ascribing stakeholders a standing that they do not support.


Lastly, what I've tried to do here is remain consistent with the point made in a previous post. Don't manage stakeholders, manage the relationship. Do this through engaging with stakeholders.







Sunday, May 6, 2012

Whatever you do, don't manage stakeholders...

This post was initially going to be entitled "Dimensions in stakeholder management", but I realised I had a more serious bee in my bonnet to deal with first - and then, and only then, might we be able to move on.


One thing I'm pretty certain about is that the practice of stakeholder management, while probably sound, legitimate, crucial even, is about as shoddily titled an activity as any I've come across.


Management: not the practice of training horses, as the sixteenth-century Italian derivation of the word might lead you to believe, but planning, directing and controlling. And therein lies the problem. You're going to plan, direct and control your stakeholders, are you? Well, if I'm on the project team, could I be excused? And if I'm one of the stakeholders, beware!


You cannot plan, direct and control stakeholders. In point of fact, the practices of 'stakeholder management' don't actually tend to encompass these activities in quite such a provocative fashion, so really, it's something of a misnomer.


The sales and marketing teams have been onto this one for years, and you don't hear the phrase "customer management", do you? Of course not, because it's about money, and these things matter. So we don't manage the customer (they're always right anyway); we manage the relationship. So right here, right now, I call for the practice of stakeholder management to be abolished and replaced with stakeholder relationship management. If successful, and if this is my only contribution to the canon of project management nomenclature, I will be more than happy.


I think this is a stand-alone post, so I'll leave it there. For dimensions in stakeholder relationship management, you'll have to wait.



Championing document management and document quality, part 1

Consider the idealised world in which work is undertaken to a high standard with minimal supervision by the project manager; where project communications are well managed and self-sustaining; where product handover is almost a formality; where all project stakeholders can independently source the correct information they need, when they need it, with little involvement from the project team; and where all project resources are enabled to work independently towards the management of risk and quality. Consider a world of high-performing project teams, and of projects which take on some of the characteristics of a repeating cyclical process, where project managers can manage by exception. Sound too good to be true? I don't think it is.

I'm going to return to this theme of "projects as BAU" (business as usual) in the coming weeks, but I'm going to start with what I consider to be the single biggest step that an organisation can take towards a more stable footing for its project activities, namely high quality documentation supported by high quality document management.

I'm going to take another look at the points in paragraph 1.

  • High standard of work, undertaken independently - enabled by the authoring of high quality work packages, peer reviewed and agreed with the named project resources who are to do the work. The work package's deliverables should be detailed, reviewed and approved product descriptions which relate back to the WBS. The approach to quality should be stated clearly.
  • Project communications which are well managed and self-sustaining - enabled by a collaborative approach to the generation, publication and sharing of high quality project products (PID, Brief, Communications Plan, Configuration Management Plan etc.). Usually there isn't the time to spend on these products that they demand, because everyone is too busy managing email - which in turn is because they don't have high quality project products published to, say, SharePoint.
  • Product handover as a formality - enabled by detailed, reviewed and approved product descriptions and a rigorous approach to quality (the V-Model, for instance).
  • Project stakeholders who can independently source the information they need - enabled by getting out of email, getting into documentation, and publishing on a collaborative platform (SharePoint, Sametime, Exchange public folders, or your own document management platform).
  • Project resources enabled to work independently towards the management of risk and quality - enabled through project resources accessing and sharing information about all aspects of the project. They're not just compelled to raise a risk or an off-specification; they can actively manage, own and control outcomes because they're equipped with the knowledge to do so.
  • Taking on the characteristics of repeating cyclical processes - almost every project has some repeating cyclical processes, whether it's moving servers between data centres, packaging applications or designing web applications. Assess whether a process will add value*. If it will, work with all parties involved to generate a process which is shared, owned and understood. It doesn't need to be perfect; that will come later.
* I'll post something on this in due course.

I don't offer this without the benefit of seeing both sides of the fence and also the consistent and tangible improvements resulting from transitioning from one state to the other.

I'm not going to belabour the matter too much here, although there are some perspectives relating to proportionality which I'll pick up another time. I think the points above are self-evident. I hope you do too.

If you want more information generally, and a bit of structure, you might try this from JoAnn Hackos.



Friday, May 4, 2012

Requirements, part 2


Previously, I took aim at MoSCoW as a sub-optimal approach to the prioritisation of requirements. Keen not to simply leave a deficit, I wanted to follow up quickly with something a good deal more rigorous - to be fair, too rigorous for it to be used all the time. But as the saying goes, every job has a tool, and every tool has a job.

My principal concern regarding MoSCoW was the coarseness of its graded priorities. Taking a little licence, its categories correspond to: a critical must-have requirement; an almost critical must-have requirement; a 'don't really need it but I'll take it if it's going'; and the somewhat ambiguous 'won't have this time'.

As I elaborate on the theme of requirements, I'll try to usefully specify the attributes that can be applied to requirements to inform prioritisation. However, the prioritisation of requirements might not simply be a process of analysis based on ROI, cost of non-compliance or some other quantitative measure. It might come down to what the customer wants.

The customer isn't going to understand why you want to prioritise requirements (they're all critical, aren't they?). Once you've justified the importance of prioritising requirements (I'll cover this in a future post), you'll have to assist the customer in the prioritisation while at the same time facilitating the customer's participation. You'll want to ensure the output is documented and that any prioritisation is evidence-based, with a clear audit trail.

That's where the technique of paired comparison comes in. It prioritises requirements with respect to one another, achieving a potentially limitless gradation. It has to be said, however, that in practice you're not going to apply it to large numbers of requirements: the effort increases steeply with the number of requirements being compared, at n(n-1)/2 decisions for n requirements. With 10 requirements you'd have 45 separate decisions to be made, collated and recorded; with 20 you're up to 190. So really, you're talking about small numbers here.

See below for an illustration of paired comparison. What's being done is to compare each item in a group with every other item in that group.

[Table: paired comparison worked example - options A, B, C and so on compared pairwise, with a count, percentage and rank for each.]

For our purposes here, we could relate various items of fruit to letters: A = apple, B = banana, C = cantaloupe and so on. The next step is to compare A with B and record the preference. Having done that, A is compared with C, and so on until every fruit (requirement) has been compared with every other. In my example above, the score is a simple count of the number of times that option was selected; this is then expressed as a percentage, and finally a rank order is created. If you use my spreadsheet here, it'll do all the sums for you once you've filled in the table.
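
If you'd rather script it than use the spreadsheet, here's a minimal sketch of the same arithmetic. The fruit, and the canned preferences, are purely illustrative:

```python
from itertools import combinations

requirements = ["Apple", "Banana", "Cantaloupe", "Damson"]

# The winner of each pairwise decision. With n items there are n*(n-1)/2
# decisions: 6 here, 45 for 10 requirements, 190 for 20.
preferences = {
    ("Apple", "Banana"): "Apple",
    ("Apple", "Cantaloupe"): "Cantaloupe",
    ("Apple", "Damson"): "Apple",
    ("Banana", "Cantaloupe"): "Cantaloupe",
    ("Banana", "Damson"): "Banana",
    ("Cantaloupe", "Damson"): "Cantaloupe",
}

# Score = number of times each option was selected.
scores = {r: 0 for r in requirements}
for pair in combinations(requirements, 2):
    scores[preferences[pair]] += 1

# Express as a percentage and produce a rank order.
total = sum(scores.values())
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (req, score) in enumerate(ranked, start=1):
    print(f"{rank}. {req}: {score} ({100 * score / total:.0f}%)")
```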

If you have a shortlist of requirements which are expensive and need to be whittled down against funding constraints, this would be a good approach. Equally, it could be used to get a user perspective on which features they'd like to see prioritised within a new implementation. There's nothing to stop you underpinning this activity with deep-dive technical analysis, though if you have the figures you're less likely to need paired comparison at all. But if you're in the business of comparing apples with pears and require an output which clearly reflects stakeholder preferences, this should help you out.

Requirements, part 1

In a recent post, I proposed an approach to quality through the elicitation of requirements, the development of a quality risk analysis, design and delivery activities, testing, and the use of the defect and quality logs to record and track progress. I could only touch briefly on each component within the bounds of a reasonably sized blog post, but I said I'd come back to the topic.


Here are two lines that tell you why you need to be very alert around the topic of requirements.

[Chart: two lines plotted across the project life-cycle - where defects are introduced, and the cost of resolving them.]

I can't promise you they're linear (in fact, for the cost of defect resolution, I know it's not), but you get the picture. Most defects are introduced during the requirements elicitation, scoping and design phases, and the cost of resolving them increases radically as the project progresses. There are a couple of articles here and here to support this chart.




Before I make a start on anything useful on the topic of requirements, I'm going to take some time to critically review the MoSCoW method of prioritising requirements.


You can read the basics on Wikipedia - there's not much point in me reproducing it here. 


Sounds eminently sensible doesn't it? Must have, Should have, Could have and Won't have. Get the stakeholders in a room, elicit the requirements, prioritise with MoSCoW. What could possibly go wrong?


Well first, you'd better explain to stakeholders that "won't have" doesn't really mean won't have; it means "won't have this time", and I've never really understood what that means. I do know that if you genuinely have a requirement not to have something (say, heat output not greater than 50 BTUs), then it has to go in the 'must NOT have' pigeonhole along with all the other must-haves. By the way, you'll likely have quite a few other must-haves, and that's another shortcoming of MoSCoW: while it does prioritise requirements, it's a very coarse tool, providing really only three gradations of priority.


Your stakeholders are pretty much going to want their requirements in either the "Must have" or "Should have" categories. Must have describes a requirement that must be satisfied in the final solution for the solution to be considered a success. "Should have" represents a high-priority item that should be included in the solution if it is possible. This is often a critical requirement but one which can be satisfied in other ways if strictly necessary.*


* From Wikipedia


So you can see from the points here that at least 50% of your requirements are likely to be ascribed a Must have or Should have status. If any one of those requirements isn't met, you're looking at having to assign the corresponding defect a status of 'critical' - i.e. you can't hand over while it remains unresolved. Quite how you prioritise your defect resolution activities at that juncture is anyone's guess: almost all your defects are critical and ranked with the same priority.


It is also, on its own, very arbitrary. Why is this requirement a "Must have"? "Because I say it is" is likely to be the extent of it. Other approaches might look at the cost of non-compliance, stakeholder satisfaction if delivered, a business use case or other factors to support its status as a critical requirement. Not so MoSCoW, though admittedly it doesn't necessarily preclude them. Even if you do factor in a rationale, you still have only a very coarse yardstick by which to appraise the relative importance of requirements.


I'll leave the elicitation of requirements, their management, tracking and a host of other points for other posts. But requirements have to be prioritised in almost all instances (more on this too), and in conclusion to the points above, that prioritisation should be finely graded, evidence-based, unambiguous and tailored to your specific needs.


In the meantime, some additional reading on the topic here. I couldn't put it better myself, so I won't try.





Wednesday, May 2, 2012

Henry Miller, part 1.


I joined my current programme four months ago. The stakeholder landscape was barely civil and the project couldn't get off the ground. Contract deadlines were approaching and options seemed very limited.

Three months later, commercials were signed, the stakeholders were united around a common purpose, and a defined plan to implementation existed.

See the illustration below. I should concede at this point that it isn't mine - I'd discussed the profound turnaround with a very experienced senior leader, and they were kind enough to share the benefit of their wisdom.

[Illustration: the direct route from A (the departure point) to D (the end game), contrasted with incremental steps via B and C.]

For over 12 months, the programme had tried to go from A (the departure point) direct to D (the end game). It had tried to do this several times, with slight variations, but had ultimately never left A. The route from A to D was attractive because it was expedient, efficient, delivered on the project's mandate and looked deceptively simple. The problem was that, amongst all the stakeholders, only the project team and the sponsor saw it that way. For everyone else, it simply wasn't feasible, let alone desirable.

When finally this was recognised, stakeholders were invited to engage in a far more participatory fashion: to make modest concessions and to consider alternatives. Progress was made, and built upon.


We don't need to be too literal about the illustration above. However, it's worth noting for our purposes here that, having moved from an initial position (A) to a new position (B), there's a willingness to acknowledge possible further incremental change, or for stakeholders themselves to actively participate in defining the next step.

Over on the excellent Voices on Project Management blog, Lynda Bourne makes the following observation.

The key to shifting stakeholders' expectations is to provide new and better information.

That certainly played a part too and I'll come back to that point in a future post. For now I should leave you with the words of Henry Miller.

In this age, which believes that there is a short cut to everything, the greatest lesson to be learned is that the most difficult way is, in the long run, the easiest.





Tuesday, May 1, 2012

Requirements, testing and quality

I'm a big fan of testing, test methodologies and approaches. It's a fantastically rich and mature discipline and offers a lot to the project manager. I think the value of a project manager will almost certainly be amplified by even the most rudimentary knowledge of the discipline.


One approach I've become comfortable with is illustrated below.

[Diagram: requirements feed a quality risk analysis; development and implementation follow; testing then either verifies a requirement (recorded in the quality log) or raises a defect (recorded in the defect log).]

Here's a link to a read-only Google Doc of the illustration. Please help yourself. Here's a link to an editable version - if you can come up with something you think is better or interesting, please amend it as you see fit. If you do edit it, add some comments to the blog explaining your rationale.


I'm only going to touch on each component briefly now. In practice, with the exception of the quality log, the components are in themselves substantial topics. But I intend to make this something of a recurrent theme and I'll try and support the process with a few useful resources along the way.


Hopefully, most of the elements illustrated are reasonably clear. Requirements, a big topic in itself, will almost certainly be the starting point for the majority of projects. I'll come back to the quality risk analysis (QRA) later. Development and implementation are condensed - they're not my focus here. We undertake testing* and either verify that a requirement has been met, recording this in the quality log, or find that it hasn't been met and raise a defect.

*In practice, testing is an ongoing activity throughout the project life-cycle, but for my purposes here it is illustrated as a single element.
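
A minimal sketch of that recording step - the requirement IDs, descriptions and results below are all invented:

```python
requirements = {
    "REQ-001": "User can log in",
    "REQ-002": "Report exports to PDF",
    "REQ-003": "Page loads in under 2 seconds",
}
test_results = {"REQ-001": True, "REQ-002": False, "REQ-003": True}

# Each test either verifies a requirement into the quality log
# or raises a defect into the defect log.
quality_log, defect_log = [], []
for req_id, passed in test_results.items():
    if passed:
        quality_log.append((req_id, "verified"))
    else:
        defect_log.append((req_id, "defect raised", "open"))

print(f"Quality log: {len(quality_log)} requirements verified")
print(f"Defect log: {len(defect_log)} open defects")
```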

In practice, the quality log might seem a superfluous overhead. It can, however, prove to be quite a useful document, illustrating what has been achieved - in contrast to the defect log which, on its own, can paint a picture that isn't wholly representative.


The quality risk analysis was something I came across in Rex Black's book, Critical Testing Processes. He also covers it in reasonable detail here. I include the abstract below.


Testing any real-world system is potentially an infinite task. Of this infinite set of possible tests, test managers need to focus on the most significant risks to system quality. These are the potential failures that are likely to occur in real-world use or would cost a lot if they did occur. This article describes practical ways to analyze the risk to system quality, providing guidance along the way to achieving effective and efficient testing.


I'll return to the topic of quality risk analysis, and include a useful resource to support its use, in due course. However, it seems an opportune time to leave you with a very succinct and almost universal definition of quality, namely: meets requirements, fit for purpose.