Our editorial board answer your questions on differences in commissioning across PCG/Ts and choosing an audit design

Q How might we best address the differences in purchasing of a service or treatment across PCG/T boundaries when it seems that the intervention in question is strongly supported by multiple, high quality randomised controlled trials?

A You first need to explore the reasons for the differences. They may be:

  • Rational, i.e. understandable and justified differences in conclusions based on different circumstances
  • Non-rational, i.e. personal preferences or historical differences in practice, which are not understandable but are not obviously erroneous
  • Irrational, i.e. patterns of practice that fly in the face of evidence and cannot be justified.

Examples of rational variation include differences in population needs (different patterns of disease, different social circumstances), differences in interpretation of the evidence, or differences in judgments about relative priorities. These may be important factors in small areas, even within a single primary care organisation (PCO).

Non-rational variation stems from the fact that much of clinical practice is a matter of individual judgment, because in many circumstances there is no 'right' answer.

Irrational variation stems from individual and organisational habit, where the dead hand of history creates inertia where there should be change.

The question would seem to be about irrational differences, but it is worth checking this out. It is very easy to judge other people's points of view as irrational when they differ from one's own.

Clearly, the steps to take will depend on the reasons for the differences. For example, in the case of differences in population needs, commissioning should be within a single agreed framework, but the resulting decisions will be different. Where clinical practice flies in the face of good evidence, however, there may be a personality problem that requires a psychological solution, or a need for education.

Assuming that the situation is that clinical practice is unsatisfactory, the second step is to understand the reasons why. Is this simply habit – the way 'things have been done round here' – in which case it may be fairly simple to change? Or is the pattern of practice defended with significant energy, in which case it may reflect:

  • Ignorance about specific evidence
  • Ignorance about the way evidence should be interpreted
  • External constraints, e.g. inability to recruit good staff locally, leading to excessive workload pressure
  • Poor work habits, such as procrastination or poor organisation
  • A personality problem, such as deep conservatism or a conviction that clinical freedom means that one's practice should never be questioned.

As in clinical practice, the solution should follow a sound diagnosis of the problem. Consider the situation carefully; sensitive – but determined – action based on your understanding will usually bear fruit.

If neither you nor your PCO managers can solve the problem, ask for help from other managers or, if the situation is educational or professional, from a medical expert or someone involved with postgraduate education.

Q It seems that we benefit from the audit process, part of which is the audit design. What are the relative merits of using an 'off-the-peg' audit, which can provide data for a whole PCG, health authority or the NHS, and an audit developed 'in house'?

A It is certainly true that designing an audit can be a useful learning experience, but it does take time and involves particular skills. Audits designed by people who have not acquired the relevant skills may involve much more work than is necessary, may create unnecessary uncertainties in the data, and may not lead to any useful conclusion.

In general, I would use an 'off-the-peg' audit if it met my needs in most respects, because energy is usually best devoted to peer review and implementation – which have to be owned and driven by local participants.

If there is no suitable 'off-the-peg' audit, and the subject is of sufficient importance, I would look for examples elsewhere to plagiarise and arrange for support or facilitation by local audit staff or a clinician with experience in audit design.

If the principal purpose of a local audit is to educate clinicians, choose something of immediate importance or interest to participants, arrange a workshop or mini-course led by a skilled auditor with educational experience, and keep it short and simple.

Q We sometimes have separate committees developing guidelines for the same condition at HA, trust and PCG level. Is this a waste of time as well as being likely to send differing and confusing messages?

A This situation is wasteful at best, and counterproductive at worst. Given how much work there is to do, and how little time to do it, it is disappointing that organisations continue to duplicate effort.

This is a management problem. You should approach those responsible for dealing with the situation (e.g. local health authority and trust chief executives and the chair of the PCG board) and suggest a meeting to consider why the situation has arisen and what might be done to change it.

Such a meeting will probably achieve most if it is prompted by constructive criticism rather than complaint, and if it considers one or more specific examples rather than general propositions about lack of coordination.

Can we solve your problem?
If you have a question or a problem that you would like to put to our editorial board, please contact Guidelines in Practice by:
Post: Guidelines in Practice, The Chapel, Park View Road, Berkhamsted, Herts HP4 3EY
Fax: 01442 877100
Email: corinne@mgp.ltd.uk
Website: feedback page

Guidelines in Practice, February 2001, Volume 4(2)
© 2001 MGP Ltd