The effectiveness of a guideline is best measured using indicators of key processes, as Dr Mark Charny explains

As part of the plan for introducing a guideline, described in the previous article in this series (Guidelines in Practice, August/September 1999), you need to monitor how effective you have been in implementing change.

Without measurement, it will be impossible for you to know whether you have been successful, and – if you have not been successful – where the problems are. With measurement, you will be in a position to take action.

The effectiveness of implementing change is best measured by the use of one or more indicators rather than by trying to capture information about everything. Indicators are precise and measurable markers that show how things are going.

Table 1 shows some examples of indicators. At first sight, they may appear not to capture what you would intuitively regard as the important things, but research suggests that, if the right markers are used, they will reflect what is happening in the parts of the activity that are not measured.

Table 1: Examples of indicators

Guideline                              | Possible indicator
Prophylaxis of venous thromboembolism  | History of previous deep vein thrombosis or embolism recorded on admission
Management of epilepsy                 | Drug blood levels measured within 3 months
Physiotherapy for back pain            | Assessment by physiotherapist within 6 weeks of referral

Reasons for not measuring everything related to your guideline are as follows:

  • You haven't time to measure everything
  • You don't need to measure everything
  • Not everything is measurable.

1. Do you measure compliance with processes, or outcomes?

The recent preoccupation with outcomes has mesmerised us: we think that if we do not measure outcomes we are missing the point.

Of course outcomes are important – they represent the reason for giving care. But it does not follow, as many assume, that measuring outcomes is an essential management tool.

The problems with measuring outcomes are that they may be:

  • Good with bad care
  • Bad with good care
  • Due to influences outside the NHS
  • Effects of causes decades ago
  • Difficult to measure
  • Subject to the uncertainty of small numbers
  • Difficult to track, e.g. after discharge from hospital
  • Apparent only much later.

Experience in quality improvement in the business world suggests that we should concentrate on monitoring processes. Processes are the only things that clinicians control: good outcomes result from best processes.

For example, if we consider prophylaxis against deep vein thrombosis (DVT), the outcome measure is 'How many patients got DVT?' and the process measure is 'Were the agreed drugs given?' The process measure is clearly much easier to measure, and less ambiguous, than the outcome measure.
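
To make the arithmetic concrete, here is a minimal sketch of how such a process indicator might be calculated from admission records. The record structure and field names (admission month, whether the agreed drugs were given) are purely hypothetical; the point is simply the numerator/denominator calculation for each time period.

from collections import defaultdict

# Hypothetical admission records: the month of admission and whether
# the agreed prophylactic drugs were given (the process measure).
admissions = [
    {"month": "1999-07", "agreed_drugs_given": True},
    {"month": "1999-07", "agreed_drugs_given": False},
    {"month": "1999-08", "agreed_drugs_given": True},
    {"month": "1999-08", "agreed_drugs_given": True},
]

def monthly_compliance(records):
    # For each month, the proportion of admissions in which the agreed
    # drugs were given: numerator (compliant) / denominator (all admissions).
    totals = defaultdict(int)
    compliant = defaultdict(int)
    for record in records:
        totals[record["month"]] += 1
        if record["agreed_drugs_given"]:
            compliant[record["month"]] += 1
    return {month: compliant[month] / totals[month] for month in sorted(totals)}

print(monthly_compliance(admissions))
# {'1999-07': 0.5, '1999-08': 1.0}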

2. The indicators should cover key processes of care.

The key component of care is the aspect of the guideline that is most critical to the outcome. In screening for hypertension, for example, recording the result may be considered a key process: it demonstrates that the blood pressure has been measured, and it is essential for good communication with other health professionals.

3. The indicator chosen should be a sensitive way of measuring whether the guideline is being followed or not.

Sensitivity reflects the degree to which changes in the indicator track changes in the underlying care. To monitor a guideline about the treatment of childhood asthma in primary care, deaths are an insensitive measure, whereas peak flow measurements are a sensitive measure.

Indicators may reveal one of five patterns:

1. Unreliable data

See Figure 1. Widely fluctuating values, particularly with missing data for some time periods, suggest that you cannot rely on the data. In these circumstances, you need to examine how the data are being collected.

Figure 1: Unreliable data (bar chart of follow-up:new patient ratio)

2. Little immediate change

See Figure 2. The data suggest that, however busy the implementation team is, its efforts are not having an effect on the ground.

Figure 2: Little immediate change (bar chart of follow-up:new outpatient ratio)

3. Post-guideline blip

See Figure 3. The post-guideline blip is well recognised. It represents a reaction to the novelty of the guideline's introduction, but the changes are not internalised and the context in which clinicians work is not being managed consistently by the implementation team.

Figure 3: Post-guideline blip (bar chart of follow-up:new outpatient ratio)

4. Hawthorne effect

See Figure 4. The Hawthorne effect takes its name from the Western Electric Company's Hawthorne works in Chicago, USA, which was studied in the 1920s to test the theory that increasing the lighting on the production line would increase output. In fact, production increased in areas where lighting had been reduced as well as in areas where it had been increased. The researchers concluded that the observed change in behaviour simply reflected the fact that the people involved in the study knew they were being monitored. When the monitoring stopped, behaviour returned to its previous pattern.

In all of these cases, the implementation team should consider what further needs to be done to influence clinicians and ensure permanent changes in the way they practise.

Figure 4: Hawthorne effect (bar chart of follow-up:new outpatient ratio)

5. Sustained change

See Figure 5. If you plan well, and everything works out, this is the pattern you want to see. The next article in the series will deal with sustaining change.

Figure 5: Sustained change (bar chart of follow-up:new outpatient ratio)
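
Plotting the indicator month by month, as in the figures above, is what makes these patterns visible. As a rough illustration (not taken from the article), the sketch below assumes that monthly indicator values have already been calculated, for instance by a compliance calculation like the one earlier, and prints a crude text chart so that missing data, blips and sustained change stand out at a glance.

def chart(monthly_values, width=40):
    # Print a crude text bar chart of an indicator over time.
    # monthly_values maps month labels to proportions between 0 and 1;
    # None marks a month with missing data.
    for month, value in sorted(monthly_values.items()):
        if value is None:
            print(f"{month}  (no data)")
        else:
            bar = "#" * int(round(value * width))
            print(f"{month}  {bar} {value:.0%}")

# Hypothetical values showing a post-guideline blip: compliance rises
# when the guideline is launched, then drifts back.
chart({
    "1999-06": 0.35,
    "1999-07": 0.40,
    "1999-08": 0.75,   # guideline introduced
    "1999-09": 0.45,
    "1999-10": None,   # missing data
    "1999-11": 0.40,
})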

Measure progress, even on a broad subject, by choosing measurable indicators of key processes as a proxy for a more comprehensive assessment.

The indicators will tell you whether everything is going according to plan: if it is not, then you need to consider what is getting in the way of progress, and take appropriate action. This is considered in more detail in the next article.

Guidelines in Practice, October 1999, Volume 2
© 1999 MGP Ltd