Wednesday, September 12, 2012

Ubiquitous constraints (and what to do about them)

For the record, when I say constraint I mean some circumstance or factor which limits an activity, benefit or the overall success of part or all of a project. (I do like to cast my net wide with these things!)

I personally think they're a bit under-nourished in the general scheme of things. I'm not at all certain that within most projects there's a consensus on what a constraint is. There's often little or no effort to record or track constraints.

I think most projects face constraints in various forms, but there are three in particular which are almost omnipresent.

Before I go too much further, I suggest that unless you're intimately acquainted with Eliyahu M. Goldratt's theory of constraints, there's no better time than now to hop on over to Wikipedia and brush up on what I think is some pretty smart thinking about the way the world works.

All the way back in 1984, Mr. Goldratt was putting pen to paper and publishing material that included the following:


Types of (internal) constraints

  • Equipment: The way equipment is currently used limits the ability of the system to produce more salable goods/services.
  • People: Lack of skilled people limits the system. Mental models held by people can cause behaviour that becomes a constraint.
  • Policy: A written or unwritten policy prevents the system from making more.

*Source - Wikipedia

I don't have too many unbreakable golden rules but I do take the time to reflect on these three constraints at roughly the following junctures.
  1. Spend approval
  2. Project start-up and initiation and every subsequent stage boundary
  3. All discussions relating to project change
Equipment

The effective delivery of change can be hindered by simply not having the tools for the job.

Whether this is heavy lifting machinery, the right widgets, collaborative toolsets for the project team or specialist software tools, it matters not.

How to defend against deficiencies in equipment? Good planning principally. Not altogether intuitively, a detailed product description can often identify the particular equipment requirements for the delivery of a product.

The PMO too (if you have one) should be a source of lessons learned, historical data, policies and standards of its own that illuminate requirements for equipment.

Lastly, subject matter expertise is often the differentiator between getting things right the first time or slowly learning the right way through time and patience.

Your approach (as ever) should be proportionate and informed by risk.

Policy

I've seen a few ways in which the absence of policy can seriously undermine project success.
  1. The output of a project isn't used due to a lack of policy
  2. Resources cannot be brokered for a project due to lack of sponsorship
  3. Lack of stakeholder engagement (because no one's telling them any different)
I think this is my personal favourite having been caught on all three counts at one time or another.

The really simple answer (really far too simple) is simply to get a clearer-than-clear project mandate. I've yet to see one. The less simple and less effective answer is to wrap your project's mandate up in a terms of reference (PID probably, charter or brief maybe) and get that approved, authorised and sponsored.

If you're still worried, add it to the issues log - this won't help ever so much potentially, but you will have done your job as a project manager to the degree possible.

There's a bit of an addendum to this as well. There's (certainly in the UK public sector) an increasing drive toward the management and realisation of benefits (at last!). This can help the mandate / policy quagmire as all of a sudden, staff outside the project team are likely to be accountable for the delivery of benefits and this might get you an entry point to a discussion about policy / mandate or lack thereof.


People

You could spend a lot of time discussing the various human fallibilities that can constrain a project's success. I think there are pretty much three principal headings.
  • Management
  • Culture
  • Availability
Consider the following to assist with the management of staff assigned to projects.

Responsibility assignment matrices are (I think) very important in most projects. Projects are temporary, unique and time-bound (aren't they?). Thus, it is not reasonable to expect a team to know what is required of them unless they're told. And, in my sometimes prescriptive and process-orientated world, telling people anything important should be documented, versioned and recorded.

I like the RACI chart - there are others.
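To make the idea concrete, here's a minimal sketch of a RACI chart in Python. The roles and activities are hypothetical examples, and a real chart would of course live in a versioned project document rather than code.

```python
# A minimal RACI sketch: map project activities to roles, each cell one of
# R (Responsible), A (Accountable), C (Consulted), I (Informed).
# Roles and activities below are hypothetical.

RACI = {
    "Draft product description": {"Project Manager": "A", "Team Manager": "R",
                                  "Senior User": "C", "Sponsor": "I"},
    "Approve stage plan":        {"Project Manager": "R", "Team Manager": "I",
                                  "Senior User": "C", "Sponsor": "A"},
    "Deliver work package":      {"Project Manager": "I", "Team Manager": "A",
                                  "Senior User": "C", "Sponsor": "I"},
}

def check_raci(chart):
    """Return activities that don't have exactly one Accountable role."""
    problems = []
    for activity, assignments in chart.items():
        if sum(1 for v in assignments.values() if v == "A") != 1:
            problems.append(activity)
    return problems

print(check_raci(RACI))  # [] - each activity has exactly one 'A'
```

The one rule worth automating is the one shown: every activity should have exactly one Accountable role, no more and no fewer.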

Use a skills inventory. A skills inventory is a system or tool that identifies the skills and skill levels required to deliver your project, programme, specific work packages or products. It may also specify the individuals who possess those skills. Skills inventories are most effective if they are aligned with a particular programme.
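In the same spirit, a skills inventory boils down to comparing required skill levels against the levels individuals actually hold. A minimal sketch (the skills, people and 1-5 scale are all assumed for illustration):

```python
# Required skill levels for a hypothetical work package, versus the levels
# individuals actually hold (1-5 scale assumed).
required = {"SQL": 3, "Prince 2": 2, "Risk workshops": 4}
held = {
    "Alice": {"SQL": 4, "Prince 2": 1},
    "Bob":   {"Stakeholder comms": 4, "Prince 2": 3},
}

def coverage_gaps(required, held):
    """Return skills for which no one meets the required level."""
    gaps = []
    for skill, level in required.items():
        if not any(person.get(skill, 0) >= level for person in held.values()):
            gaps.append(skill)
    return gaps

print(coverage_gaps(required, held))  # ['Risk workshops']
```

Running the gap check per work package (rather than per project) is what aligning the inventory with a particular programme looks like in practice.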

I don't think it will often be within the scope of a project manager's accountabilities to influence or manage cultural change within a project or programme team. I think this is one of those situations where, if the culture isn't (for instance) delivery focussed, you'll need to adopt an appropriate posture to ensure the project's objectives are still attainable.

As far as availability is concerned, I cite the following.

  1. Planning is critical, as is estimating and forecasting. If estimating and forecasting are to be worthwhile, they must be based on historical data. One of the jobs of resource planning must be to specify what is actually going to be required in terms of resource burn.
  2. Sponsorship from resource brokers (see policy above). This should be twofold. First, to release the resources for the period specified at the time specified. Secondly, to deliver (as far as possible) the scope of work agreed to schedule, cost and quality.
  3. Manage change and issues so that deviations from what is agreed are predictable, sustainable and planned.
On another occasion, I'll upload a RACI template and skills inventory tracker to the resources page.

Thursday, August 23, 2012

Something a bit fishy...

Well it's no good beating about the bush with this one. I've produced a double headed Ishikawa diagram for the purposes of illustrating causal factors corresponding to the influence and repositioning of stakeholders. I think it's fair to say no one's done that before.


So what's the point?

First, if you need to get some background on Ishikawa / fishbone diagrams, pop along to Wikipedia. They are worth having some background information on whether or not you're a convert to my eccentric approach to presentation above.

When writing my post some weeks back "Influence - a pragmatic and effective approach" I recall a sense that I hadn't quite nailed the visuals. Sometime later I had to generate a communication management strategy for a project and, when re-visiting the topic with fresh perspective, I came up with the approach above.

So, with the combined knowledge supplied by Wikipedia and my initial post, I hope you can at least intuitively grasp what I'm trying to do here. 

But why? And more to the point, why include something like this in a communication management strategy?

Well first, I think this is a good workshop approach to elicit input. You're not always going to have enough back story on a client site to generate one of these yourself, but you can provide the footpath and fill in the information as (hopefully) the client supplies it. You can also record that input and articulate it in a way that makes sense.

Incorporating it into the communications management strategy (or other similar document) has the benefit that all communications undertaken under the aegis of the project have the opportunity to be informed by themes that should assist that overall stakeholder engagement effort. Better yet, it should do that whether or not you're there to review and edit prospective communications yourself. Generate one of these Ishikawa diagrams, provide a supporting narrative in the communications policy and hopefully, you'll see helpful references and positive themes sewn throughout the project communications effort.
*Per my previous article - the credit for the better part of the approach to stakeholder influence goes to these guys.

Monday, August 13, 2012

Projects? Like poker? Surely you jest...?

If you've been around a project life-cycle a few times, you've probably stood on the periphery of some passable projects, some not so good projects, the odd biblical disaster and possibly, just possibly a success story or two.

I've often reflected on the various circumstances corresponding to success and failure and have (as doubtless we all do) a few thoughts on the matter. 

However, let's look elsewhere for inspiration other than our own personal project hurt lockers for a moment.

How do Google and Apple manage such consistent successes? What are the project management approaches that so consistently deliver commercial success in amongst the most high-risk, cut-throat fields imaginable?

Well here's the thing - Google and Apple fail just as much as (if not a good deal more than) us mere mortals.

So the distinction is certainly not simply one of success or failure.

I'm not going to prattle on too much at this juncture as most of this is done to death elsewhere in great detail. I can't help, however, drawing a comparison between projects and poker. Namely, when you lose, try and lose a little. When you win, try and win a lot.

You don't read a lot in the project management blogosphere about Prince 2 - admittedly, it can be a dryish cornerstone of what must seem to PMI / PMBOK advocates to be a project management oddity. I would even have a modicum of sympathy for the view that it is a project management methodology with no project management due to its conspicuous (and quite deliberate) omission of earned value management or anything resembling it.

But, I will break the trend a little in the context of this post (possibly a web first to encompass poker and Prince 2 within a single blog post).

I had to put pen to paper in a professional setting recently and wrote the following. 

"Teams seeking to undertake projects via Prince 2 are challenged by constraints which often diminish the effectiveness of the Prince 2 methodology and consequently the overall success of their projects. These constraints relate to people, processes and systems. Specifically, the efforts to recruit appropriate staff, acquire Prince 2 knowledge, develop and implement appropriate policies and subsequently execute the Prince 2 methodology are exceptionally demanding."

And, I know what I'm talking about here, having assembled the odd 180-line responsibility assignment matrix for the purposes of administering the full suite of Prince 2 processes. And, yes, just in case you were in any doubt, that's 180 activities that need to be undertaken in a fully compliant (admittedly non-tailored) Prince 2 project that are quite independent of actually delivering anything. I'll upload this matrix to the resources page accompanying this blog in due course.

But, I remain a fan albeit with one or two provisos. Consider the following.


  1. Continued business justification
  2. Learn from experience
  3. Defined roles and responsibilities
  4. Manage by stages
  5. Manage by exception
  6. Focus on products
Not too shabby a list of tenets for the project manager to abide by, is it? Well, those six points are six of the seven Prince 2 principles. The seventh is "Tailor to suit the project environment", which goes some way to palliating the 180 line items in the responsibility assignment matrix.

Just in case some of you were sitting there scratching around a long-ago Prince 2 practitioner course thinking you really don't recall anything about seven principles (or seven additional themes for that matter), you're probably right. I'm not sure what preceded Prince 2 2005, but in 2009 the framework was overhauled with the imaginative re-branding of "Prince 2 2009". I judge it to be substantially improved.

And what's all this got to do with Apple and Google? Well, whatever those folks are doing with their project management methodologies I'm pretty sure it will incorporate the 6 principles above. I shall resist the urge to conclude that Apple and Google are Prince 2 houses but I am guessing they don't rely much on full-houses either.


Wednesday, July 18, 2012

Making life easier (and a bit of process stuff)

Projects generally have a lot of 'interconnectedness'. And please - I don't just mean railway projects.

Processes are rarely 'stand-alone'. The outputs from one process are, more often than not, the inputs for something else. (So help me) you'll start to hear the words ecosystem and (abandon hope all who enter) 'synergy'.

In an ideal (and possibly mythical) world you have ERP, CRM and EPM tools into which all your various risks, issues etc are included. But, from time to time, the enterprising PM may find themselves without these tools and forced to fall back on more prosaic mechanisms (by which I mean Excel).

I've provided a spreadsheet here. I call it ACRID (derived from Assumptions, Constraints, Risks, Issues and Dependencies). You'll also come across the term RAID (minus the constraints), and you'll probably not often come across a CORDIAL log which (of course) contains the lessons learned log. Any attempt to include a quality log is headed for the rocks.
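In lieu of the spreadsheet itself, the shape of the thing is easy to sketch: one flat register whose category column distinguishes the five log types. The entries and field names below are hypothetical.

```python
# A minimal ACRID log sketch: a single flat register whose 'category' column
# distinguishes Assumptions, Constraints, Risks, Issues and Dependencies.
# Entries below are hypothetical.
FIELDS = ["id", "category", "description", "owner", "status"]

entries = [
    {"id": "A1", "category": "Assumption",
     "description": "Test environment available from June", "owner": "PM", "status": "Open"},
    {"id": "R1", "category": "Risk",
     "description": "Key supplier delivery slips", "owner": "PM", "status": "Open"},
    {"id": "I1", "category": "Issue",
     "description": "Licence shortfall on toolset", "owner": "Sponsor", "status": "Escalated"},
]

def filter_log(entries, category):
    """Pull out one log type - e.g. the risks for a highlight report."""
    return [e for e in entries if e["category"] == category]

print([e["id"] for e in filter_log(entries, "Risk")])  # ['R1']
```

Keeping everything in one register is what makes the later cut-and-paste into the highlight report cheap: each report section is just a filter on the category column.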

We don't want to make life too hard for ourselves and, of course, we'll want to keep an eye on the whole process ecosystem angle so we're not duplicating effort left, right and centre.

I include here a template for a highlight report. I like highlight reports as I know of no better way to bridge the divide between the poets (for whom business transformation is a mere pen stroke) and the plumbers who lie awake all night worrying about it. I take some license but there's a sliding scale in there somewhere.

So, back to the whole inputs and outputs thing. If you take a look at the highlight report template (which is aligned with Prince 2) you'll notice there's quite a bit on work packages and products. Check back to previous posts and you'll have all you need to cut and paste into the highlight report.

In fact, I'll paste in the contents page from the highlight report template and annotate each section with where you derive its content from.

1. This reporting period
Not much going on in here

1.1 Work Packages
...Or here

1.1.1 Pending authorisation
If you need them, you'll have been writing them and you'll know which ones are outstanding authorisation

1.1.2 Completed in this period
From the project plan, paste in relevant sections of WBS

1.2 Products completed in this period
From the project plan, paste in relevant sections of WBS

1.3 Products planned but not started
From the project plan, paste in relevant sections of WBS

1.4 Corrective actions taken during the period
Issue log or other sources as appropriate

2 Next reporting period
Not much going on in here

2.1 Work Packages
...Or here

2.1.1 To be authorised
From the project plan, paste in relevant sections of WBS

2.1.2 To be completed in the next period
From the project plan, paste in relevant sections of WBS

2.2 Products to be completed in the next period
From the project plan, paste in relevant sections of WBS

2.3 Corrective actions to be completed in the next period
Could be anything - use your judgement to include what you feel is appropriate

3 Product and stage tolerance status
SPI / CPI figures as appropriate (this will have to wait for another day for detailed coverage).

4 Requests for change
From the ACRID log, so long as you raise all your changes as issues

5 Key Risks
From the ACRID log

6 Issues
From the ACRID log

7 Lessons Report
From the ACRID log

***************************************

Some points to bear in mind.
  1. Truncate (i.e. hide a few columns) on the product descriptions, WBS elements, risks etc as you'll not fit them all on a single landscape A4 and the detail is probably more than your audience will want
  2. I'll cover off a bit on the cost performance index (CPI) and schedule performance index (SPI) another day.
  3. Corrective action could mean almost anything - include what you think is appropriate
  4. Some stakeholders will want detailed information about resource burn, budget status or other detailed information not included above - this is a good sign and shows that the sponsor is 'on board' and giving the project focus.
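As a quick preview of the indices mentioned in point 2: SPI and CPI are the standard earned-value ratios, and the arithmetic is one line each. The figures below are purely illustrative.

```python
# Standard earned-value ratios: SPI = EV / PV, CPI = EV / AC.
# Values below 1.0 indicate behind schedule / over cost. Figures illustrative.

def spi(ev, pv):
    """Schedule performance index: earned value over planned value."""
    return ev / pv

def cpi(ev, ac):
    """Cost performance index: earned value over actual cost."""
    return ev / ac

ev, pv, ac = 80_000, 100_000, 90_000  # earned value, planned value, actual cost
print(round(spi(ev, pv), 2))  # 0.8  (behind schedule)
print(round(cpi(ev, ac), 2))  # 0.89 (over cost)
```
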
And, to wrap up the topic of the highlight report, I conclude with the following key points which I hope will impress upon you the benefit of producing it, even if your stakeholders are sanguine on the topic.
  • It is an excellent tool of communication to all stakeholders, both to relay issues and status concerns but also to keep all parties abreast of progress. It is a principal tool by which the project manager can relay, escalate and communicate issues and anxieties which require board / sponsor input.
  • I find it amongst the best ways of structuring a project board meeting, particularly for stakeholders less experienced in sitting on project boards
  • If (like me) you keep all your WBS elements, product descriptions and logs up to date, with practice, you can produce one of these in about 20 minutes.

Monday, July 16, 2012

Theories need evidence, facts need proof.

Always nice to stumble upon an academic bun fight. For the very tip of a very large iceberg on the relative merits of quantitative versus qualitative analysis see here, here or here.

But I'm a PM not an academic, so what's the angle? My qualitative answer would be that knowing the difference will sometimes enable a project manager to make optimal decisions. My quantitative answer is about 45 degrees - which to be fair isn't much use in this context.

For reference.

Quantitative research consists of those studies in which the data concerned can be analysed in terms of numbers ... Research can also be qualitative, that is, it can describe events, persons and so forth scientifically without the use of numerical data ... Quantitative research is based more directly on its original plans and its results are more readily analysed and interpreted. Qualitative research is more open and responsive to its subject. Both types of research are valid and useful. They are not mutually exclusive. It is possible for a single investigation to use both methods. (Best and Khan, 1989: 89-90)

Qualitative research is harder, more stressful and more time-consuming than other types. If you want to get your MEd dissertation or whatever finished quickly and easily do a straightforward questionnaire study. Qualitative research is only suitable for people who care about it, take it seriously, and are prepared for commitment (Delamont, 1992: viii)

Both these excerpts are from "An introduction to the qualitative and quantitative divide"

Whether it be the generation of the initial business case, the management of risk, key design decisions or resource planning, the project manager is faced with a veritable zoo of decisions. Some are more critical than others and on a sliding (qualitative) scale we make decisions which if wrong have very limited consequences to those 'irreversible' decisions to which great heed must be paid.

Question; has anyone ever sat down with project stakeholders and asked them if they want quantitative risk management, qualitative risk management or both? Do you understand the question? Would your stakeholders? Does it matter?

Take the following examples. First, what I hope will look like a fairly typical excerpt from a fairly typical quantitative risk log. All with me so far?

Next, something from the qualitative end of the spectrum.

R1 - qualitative assessment


Failure to provide adequate fencing, or early warning mechanisms as appropriate, may result in injury or death to Donald Duck.

(A little digression here on the two approaches to risk - if you fail to communicate to your board the potential impact of a risk with numbers (quantitative), try words instead (qualitative). On more than one occasion I've managed to elicit a response by describing in detail the consequence of a risk occurring, having failed by ascribing it an impact and probability.)

The Beaufort Scale incidentally makes rather good use of both qualitative and quantitative approaches - that's meteorology for you.

The PERT weighted average here is purely quantitative.
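For reference, the PERT weighted average is the standard three-point estimate: (O + 4M + P) / 6, with the common standard-deviation approximation (P - O) / 6. The figures below are illustrative.

```python
# PERT three-point estimate: weighted mean (O + 4M + P) / 6, with the common
# standard-deviation approximation (P - O) / 6. Figures illustrative.

def pert_mean(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_sd(optimistic, pessimistic):
    return (pessimistic - optimistic) / 6

print(pert_mean(4, 6, 14))       # 7.0 days
print(round(pert_sd(4, 14), 2))  # 1.67
```

Note how the weighting pulls the estimate towards the most likely value while still letting a fat pessimistic tail drag the mean upwards - which is rather the point.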

Now I could rattle on at length here, but I don't know if a blog post is quite the place for it. So I'll leave you with a couple of summary points.

  1. Make a judgement of which type of data suits your needs and then go and get it
  2. Personally, I prefer numbers to adjectives (with the proviso that they're right)
  3. Look back at the quantitative risk assessment example above. Are your quantitative risk assessments based on analysis of the statistical likelihood of the event and the impact to cost and time should it occur? If not, then your quantitative analysis is in fact a qualitative analysis.
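To illustrate what point 3 is driving at, a genuinely quantitative assessment multiplies a statistical likelihood by a measured impact - for cost, that's the expected monetary value (EMV) of the risk. The probabilities and impacts below are hypothetical.

```python
# Expected monetary value: EMV = probability x cost impact, summed across
# the register for total cost exposure. Figures below are hypothetical.
risks = [
    {"id": "R1", "probability": 0.10, "cost_impact": 250_000},
    {"id": "R2", "probability": 0.40, "cost_impact": 30_000},
]

def emv(risk):
    return risk["probability"] * risk["cost_impact"]

total_exposure = sum(emv(r) for r in risks)
print(total_exposure)  # 37000.0
```

If your probabilities are really just "high / medium / low" dressed up as 0.1 / 0.4 / 0.7, then per point 3, you're doing qualitative analysis with numerical clothing on.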

Friday, July 13, 2012

Stacking the odds in your favour

A project manager benefits (and is sometimes disadvantaged) by having an itinerant pair of ears and an equally mobile mouth. It may also be the case that they have some experience which can usefully be applied to the customer's specific needs.


Before I go any further, let me re-state (and supplement) a few truths I hold dear.

  1. Most defects are introduced in the early phases of a project and get a good deal more costly to fix as time passes
  2. Most capital is committed relatively early in the project life-cycle
  3. Commercial contracts offer plenty of opportunity to write in haste arrangements that there will subsequently be plenty of time to leisurely repent
With this all said, how does the project manager take these self-evident truisms and instate a control that will serve to guard against mishaps?

I've talked about several elements that together can serve as a foundation for project delivery. There are a number of things that can cause instability, amongst which are the following.
  1. Poor scope definition
  2. Inadequate product descriptions
  3. Lack of stakeholder engagement
  4. Poor change management
I've covered quite a bit of detail in relation to the first three but very little on the topic of change. Incidentally, this isn't a complete list, but it's certainly some headlines.

The invidious thing about change is that it comes in many guises and there won't always be a governance product or person that can be called upon to respond appropriately.

With this said, you need a catch-all, something that informs all decisions, something that is quantifiable, and readily comprehensible to all stakeholders.

Say after me - "EVERY DEVELOPMENT STEP HAS A CORRESPONDING TEST ACTIVITY".

Put it in the brief, put it in the PID, the project charter, give it a slide in the project kick-off meeting, write it on the wall, include it in SOPs for the project team and champion it at every opportunity.

And when, as inevitably will be the case, someone deviates from this virtuous path and you've finished picking up the pieces, conduct a root cause analysis which will almost certainly find the absence of an appropriate test activity the principal culprit.

...and then of course, you can add it to the lessons learned log.


Wednesday, July 4, 2012

Conundrums in communications (part 2)

Two down, four to go.
  1. You might saturate communication channels
  2. Someone else might saturate communication channels 
  3. Your stakeholders 'lose sight' of the communication plan
  4. Your stakeholders actually tell you they should have read the communications, but they're sorry they didn't and now they're in a bit of a mess
  5. You've racked your brains but you can't work out a way to ascertain whether or not your stakeholders have read, retained and understood your communications.
  6. You're facing universal stakeholder apathy (this is on a sliding scale from mildly disinterested to venomous mischief-making)

Your stakeholders have 'lost sight' of the communication plan


The symptoms


This is one of those damning euphemisms which you really don't want offered forth in a board meeting. It's usually a response to some adverse incident which an observer has laid at the door of a communications management deficiency. A more forthright assessment might find the issue wholly predicated upon human factors.

The cause


Either you don't have a communication plan or you're not 'managing your stakeholders'*. Or, you do have a communications plan, you are appropriately engaged with stakeholders and the assessment is simply wrong.

I take the stance that you don't treat stakeholders like farm stock, but not everyone maintains such an enlightened view.

The solution


Do not allow something that isn't a communications issue to be painted as a communications issue. If you don't have a communications plan, you'll have to bite the bullet. If you have a communications plan and the issue really is a communications issue, then use something like 5-whys or another root cause analysis approach and take the corrective action necessary.

Your stakeholders are in a bind because they've not read your communications


The symptoms


You've done your job. Really, you have. You can show unequivocally that the accountability for the lapse (be what it may) lies elsewhere and, in this particular instance, with someone who should have, would have, could have but didn't read any of the 6 carefully planned and executed communications bulletins. However, they can't work now and are costing the business money and that's starting to make you look bad.

The cause


Insufficient quality management. The key here is not the fact that your stakeholders didn't read the communications, it's the impact of them not reading the communications. You cannot ensure your stakeholders read, understand and retain what is provided to them. In most instances you can manage what happens if they don't read it, understand it or retain it.

The solution


Implement poka-yoke or similar. However you do it, don't let your stakeholders fall into a man trap because they 'didn't get the memo'.

You can't validate the efficacy of your communications (#5)


The symptoms


You've no qualified or quantified measure of the efficacy of your communications plan. Nagging doubts. Often at 2am.


The cause

You're a project manager not a mind reader

The solution


You've hopefully still got your communications stooges on talking terms from issue #1. They'll be a good help with this from simply opening up a dialogue to engaging them in test activities. I'm not a fan of surveys for this purpose. If I instate a test activity it will be designed to weed out deficiencies in communications or, what might in fact turn out to be confirmation testing. I'm not sure a survey does either one of these things well.

Apathy (and sometimes venom and mischief)


The symptoms


Rather self-evident, this one.

The cause


Not ever so likely to be a communications issue but quite possibly one which is first encountered by those project staff engaged in communications activities.

The solution


It's not a communications issue. Communications probably isn't the root cause and certainly is unlikely to resolve this independently. Escalate to the board and sponsor. Use the risk and issue logs with due prejudice.