Most of the biomarkers listed in Table 17.1 were identified in studies of pathophysiology and epidemiology that demonstrated an association between the marker and the presence or prognosis of the underlying clinical condition. Once a putative biomarker is identified, its subsequent evaluation consists of an analysis of its validity, in the traditional sense of precision, bias, and reproducibility, and of its predictive utility.
For example, laboratory biomarkers are used to establish prognosis and to predict or monitor response to therapy or disease progression in patients with cancer. A few of these biomarkers, such as prostate-specific antigen, also have had diagnostic utility. Because tumor biomarkers play a critical role in patient management, rigorous assessment of their validity is required and their marketing in the United States is regulated by the Food and Drug Administration under the Medical Device Law (13). Currently, candidate tumor markers are evaluated with respect to their analytical sensitivity and specificity and the robustness of the cutoff value that is chosen to distinguish positive from negative test results. Different antibody assays for the same tumor biomarkers can give different results, in part because tumor antigen proteins have several distinct epitopes protruding from their surface (14). Therefore, studies are required to compare new and old versions of a given tumor biomarker assay (13). The AIDS Clinical Trials Group has implemented similarly rigorous programs for standardization and quality control of biomarker measurements in patients with HIV-1 infection (15).
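The tradeoff governed by the cutoff value can be made concrete with a small sketch. All values below are hypothetical and illustrative only; a real assay evaluation would use validated reference standards and much larger samples.

```python
# Illustrative only: hypothetical assay values for diseased and healthy
# subjects, used to compute analytical sensitivity and specificity at a
# chosen cutoff. Values >= cutoff are classified as positive results.

def sensitivity_specificity(diseased, healthy, cutoff):
    """Sensitivity: fraction of diseased subjects testing positive.
    Specificity: fraction of healthy subjects testing negative."""
    true_pos = sum(1 for v in diseased if v >= cutoff)
    true_neg = sum(1 for v in healthy if v < cutoff)
    return true_pos / len(diseased), true_neg / len(healthy)

# Hypothetical biomarker concentrations (arbitrary units)
diseased = [8.2, 6.5, 9.1, 7.8, 5.9, 10.4]
healthy  = [3.1, 4.8, 2.7, 6.1, 3.9, 4.2]

sens, spec = sensitivity_specificity(diseased, healthy, cutoff=5.0)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Raising the cutoff makes the test more specific but less sensitive, and vice versa, which is why the robustness of the chosen cutoff is itself part of the assay's evaluation.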
Statistical criteria have played an important role in assessing the predictive utility of biomarkers (criterion validity), but it is always hazardous to equate causation with statistical association. For that reason, increasing emphasis has been placed on establishing the biological plausibility, or construct validity, of biomarkers. Thus, clinical and epidemiological observations led to the conclusion that elevated blood pressure was associated with an increased risk of atherosclerotic cardiovascular disease, heart failure, stroke, and kidney failure (16). Subsequent pathophysiologic studies in humans and in animal models then were particularly helpful in establishing a firm linkage between hypertension and cerebral hemorrhage and infarction (17). A later epidemiologic study demonstrated that the risk of stroke and coronary heart disease is correlated with the extent of diastolic blood pressure elevation (18). In the aggregate, this considerable evidence supports the biological plausibility of using blood pressure as a surrogate endpoint. In clinical practice, the measurement of blood pressure is used to diagnose hypertension, to estimate its severity, and to monitor response to antihypertensive therapy.
Further support for using blood pressure as a surrogate endpoint is provided by the concordance of evidence from a number of clinical trials in which blood pressure lowering with low-dose diuretics and β-blockers was shown to reduce the incidence of stroke, coronary artery disease, and congestive heart failure in hypertensive patients (19). Of particular interest is a meta-analysis that was conducted to compare the extent of blood pressure reduction achieved in different clinical trials with the maximum benefit that was anticipated on epidemiologic grounds (Table 17.3) (20). The decrease in stroke incidence anticipated for a 5- to 6-mm Hg average reduction in diastolic blood pressure was fully realized with only 2 to 3 years of antihypertensive therapy.
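The logic of comparing anticipated with achieved benefit can be sketched as follows. The sketch assumes a simple log-linear dose-response between diastolic pressure and stroke risk; the slope coefficient is a hypothetical placeholder, not a value from the meta-analysis cited above.

```python
import math

# Illustrative only: under an assumed log-linear dose-response, each
# 1-mm Hg drop in diastolic blood pressure multiplies stroke risk by a
# constant factor exp(log_rr_per_mmhg). The default slope is a
# hypothetical value chosen for illustration.

def anticipated_relative_risk(dbp_reduction_mmhg, log_rr_per_mmhg=-0.08):
    """Relative risk of stroke anticipated for a sustained reduction
    in diastolic blood pressure of dbp_reduction_mmhg mm Hg."""
    return math.exp(log_rr_per_mmhg * dbp_reduction_mmhg)

# Midpoint of a 5- to 6-mm Hg average reduction
rr = anticipated_relative_risk(5.5)
print(f"anticipated relative risk: {rr:.2f} "
      f"({100 * (1 - rr):.0f}% reduction in stroke incidence)")
```

A surrogate is validated when the risk reduction observed in trials matches the reduction this kind of epidemiologic model anticipates, as the text reports for diastolic blood pressure and stroke.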