From 1917 until the early 1990s, accreditation standards emphasized structure and process, perhaps for practical reasons in the evolution of accreditation. In contrast, patients have tended to evaluate quality according to the outcomes of care, as represented by the OTA definition given above. (This contrast is illustrated in the ironic proverb, "The operation was a success but the patient died.")
Medical societies and the American Hospital Association traditionally dominated the Joint Commission. During this time, professional consensus was the predominant source of quality standards. Many requirements were based on common sense, such as fire-retardant floor coverings and fire escapes. Some were based on professional opinion, for example, documentation of patient care. Few, however, had research to confirm their relationship to outcomes.
Managing quality based on formally validated measures and standards is an ideal that has still not been achieved, and may not be achievable. The complexity of health care, compounded by a rapid rate of change in available technique and equipment, introduces a large element of judgment, and therefore disagreement. Given such circumstances, evidence-based professional consensus standards seem to be the next best alternative to formally validated standards. This is, so to speak, a generalized version of the idea that peer review is the gold standard for quality assessments.
There are two real problems involved in the issue of evidence-based vs. professional consensus standards. First, large, inexplicable "practice pattern variations," especially in surgical rates, call into question the validity of professional consensus standards. Second, some of the research evidence linking procedure to outcome does not itself meet scientific standards. If professional consensus is an unreliable basis for quality standards and well-done scientific studies are unavailable, it is not clear how quality can be defined and measured.
A review by David Eddy and colleagues gives a number of examples of the first problem.15-17 In one study, four cardiologists were asked to assess patient status from a set of good-quality angiograms. The cardiologists disagreed in 40% of cases about whether blockage was 50% or more (a level that could be a criterion for angioplasty or bypass). In another study, cardiologists changed their opinions about blockage in up to 37% of cases simply from seeing the same angiogram a second time.
In his series of studies, Wennberg20-22 found up to 3-fold differences in procedure rates from practice to practice for heart bypass, thyroid, and prostate surgery; 7-fold differences in knee replacement rates; and 20-fold differences in carotid endarterectomy rates. These differences do not support the assumption that medical quality standards (and decision making) have a common scientific basis.18-22
In the field of medications use, it is commonly required that health care organizations maintain lists of approved drugs and conduct drug use evaluations to screen prescribing for compliance with the approved agents. (Chapter 6 will discuss this topic in more detail.) Formulary restrictions, however, have little scientific support, and some studies suggest that the requirement can have unintended (negative) consequences not only on outcome, but also on total costs of care.23
The second problem is the empirical basis for consensus in clinical research. It has two aspects. First, even when professional consensus is based on clinical research, the research may have severe limitations. One common problem is lack of scientific controls. Sometimes this is a difficult problem to overcome, since one cannot blind a surgeon to the identity of the procedure he is providing. The ethical problems involved in randomizing patients to a placebo group can be insurmountable. Placebos are commonly used in new drug research; in fact, such studies are required for marketing approval. However, studies comparing a new drug to standard therapies are not legally required and often unavailable for years after a new drug is marketed. Further, drug efficacy is only one dimension of safe and effective medications use. The evidence base for quality standards in other aspects of care, e.g., medications use management, is limited.
The second aspect of the evidence problem is that clinical research may not address outcomes that matter from a patient's perspective, e.g., changes in patient's quality of life. Adar et al. reviewed 39 studies of a surgical procedure intended to open popliteal or femoral arteries, to relieve leg pain, and to restore patients' ability to walk. These studies usually evaluated the success of surgery according to whether the artery remained open. Not one study measured pain relief or whether patients could walk after recovery from surgery.16 Studies of corneal transplantation measure visual acuity (with an eye chart), but not patients' ability to see in everyday situations.