
How doctors use tests


Sensitivity, specificity, positive predictive value, likelihood ratio, receiver operating characteristic (ROC) curves. Words that fill most of us with a deep sense of dread. Turn-offs.

That has always been how Bandolier feels when trying to make sense of another paper on diagnostic tests. What we look for, and never find, is that comforting word, pathognomonic ('characteristic of a disease, distinguishes it from other diseases' is how our ancient medical dictionary defines it). Trying to make sense of a test, to put it in context, is awfully hard.
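
For anyone who needs the jargon unpacked, the arithmetic underneath it is simple enough. Here is a minimal sketch built on an invented two-by-two table (none of these numbers come from the paper):

```python
# Invented two-by-two table for a diagnostic test:
#                   disease present   disease absent
# test positive          tp = 90          fp = 30
# test negative          fn = 10          tn = 170
tp, fp, fn, tn = 90, 30, 10, 170

sensitivity = tp / (tp + fn)   # how often the test is positive in those with the disease
specificity = tn / (tn + fp)   # how often the test is negative in those without it
ppv = tp / (tp + fp)           # positive predictive value: chance a positive result is real
lr_plus = sensitivity / (1 - specificity)  # likelihood ratio for a positive result

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"PPV {ppv:.2f}, LR+ {lr_plus:.1f}")
# sensitivity 0.90, specificity 0.85, PPV 0.75, LR+ 6.0
```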

The task has been made no easier by a survey of US doctors [1] showing that almost none of them use these terms in any formal way.

Study


Researchers at Yale drew a stratified random sample of US physicians in six specialties, all spending at least 40% of their time in direct patient care. These physicians were contacted by letter and telephone and asked to complete a 10-minute telephone survey on their attitudes to formal methods of test use. They were told that the interviewers were not necessarily advocates of formal methods. There were 10 questions, reproduced in an appendix to the paper; question 4, for example, asked:

'Do you use test sensitivity and specificity values when you order tests or interpret test results?'

Results


There were 300 physicians in the final sample, 50 in each specialty. They had a mean age of 46 years, 80% were men, and they spent a median of 90% of their professional time providing direct patient care. They worked in a variety of settings.

The main result was that few of them used formal methods of assessing test accuracy (Table). Bayesian methods were used by 3%, and ROC and likelihood ratio data by 1% each.

Frequency of use of methods of assessing test accuracy (50 physicians in each category)

                         Bayesian method   ROC curve   Likelihood ratios
Specialist physician            5              1               1
Generalist physician            2              0               1
Paediatrician                   1              1               0
General surgeon                 0              1               0
Family practice                 0              0               0
Obstetrics/Gynaecology          0              0               0
Overall percentage             3%             1%              1%


Although as many as 84% said they used sensitivity and specificity at some point, from deciding whether to adopt a new test to interpreting a test result, this was almost always done informally.
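
To show what formal use would actually look like, here is a minimal sketch of the Bayesian arithmetic behind likelihood ratios, Bayes' theorem in odds form; the pre-test probability and likelihood ratio below are invented for illustration:

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Invented example: a clinician judges the pre-test probability of disease
# to be 20%; a positive result on a test with LR+ = 6 raises it to 60%.
print(f"{post_test_probability(0.20, 6.0):.2f}")  # 0.60
```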

Comment


There's a film in which Michael Caine, as only he can, declaims 'I don't blame 'em' in tones of indignation and contempt that completely capture Bandolier's reaction to reading this paper (free copy of all of the first five years of Bandolier in PDF format to the first correct identification of the film). These results mirror the reaction of just about any medical audience to similar questions. Diagnostic tests are presented in ways that are neither intuitive nor useful.

The authors make a number of salient points:

  • Information on test accuracy must be 'instantly available' when tests are ordered.
  • Formal training needs to be improved.
  • Published information is mostly useless, because it usually fails to reflect the patient population in which the test is being used.


Diagnostic testing needs a new beginning. If the methods we have of expressing test accuracy don't cut it, then we must find new ones that do. They must be understandable by the doc on the Clapham omnibus, relevant to a wide range of clinical situations and patient populations, easy to use in everyday practice, and instantly available. Bandolier has pointed out before that we spend perhaps £1.6 billion on laboratory testing in the NHS. If the results are being used with far less than maximal efficiency, then why are we bothering?

Reference:

  1. MC Reid, DA Lane, AR Feinstein. Academic calculations versus clinical judgements: practicing physicians' use of quantitative measures of test accuracy. American Journal of Medicine 1998; 104: 374-80.



