
Using Evidence in Pain Practice: Part II: Interpreting and Applying Systematic Reviews and Clinical Practice Guidelines

DOI: http://dx.doi.org/10.1111/j.1526-4637.2008.00422_2.x · Pages 531–541 · First published online: 1 July 2008


Systematic reviews and clinical practice guidelines are useful tools for bringing evidence into pain practice. However, even when their conclusions or recommendations appear valid, interpreting and applying systematic reviews and clinical practice guidelines in everyday practice is not always straightforward. Judging external validity or applicability of findings requires careful consideration of factors related to patient selection, clinical setting, feasibility, costs, and availability of interventions. Clinicians should also consider whether effects on clinically relevant outcomes are large enough to warrant use of the intervention in question. Other challenges to using systematic reviews and clinical practice guidelines in pain practice include the need to make decisions about pain interventions when evidence is weak or inconclusive, and the increasingly common problem of discordant systematic reviews and clinical practice guidelines. This article discusses how to evaluate applicability and clinical relevance of systematic reviews and clinical practice guidelines, and provides a framework for approaching clinical decisions when evidence is weak or conflicting.

  • Evidence-Based Medicine
  • Meta-Analysis
  • Review Literature
  • Practice Guideline
  • Decision-Making
  • Pain


Systematic reviews and clinical practice guidelines can be very useful tools for helping clinicians incorporate evidence into pain practice. Well-conducted systematic reviews summarize risks and benefits of interventions more objectively than narrative or traditional reviews [1]. Rigorously developed clinical practice guidelines provide "actionable" recommendations grounded in the best evidence for what to do in specific clinical circumstances [2]. Unfortunately, clinical practice is not as straightforward as simply taking a series of high-quality systematic reviews or practice guidelines and routinely applying them to every patient. If this were the case, clinical decision-making could be based solely on algorithms. In the real world, evidence-based findings may not be applicable for every patient or clinical setting, expected benefits from an intervention may be too small or costs too high to justify routine application, and patient preferences and values [3] may not be concordant with evidence-based recommendations [4]. In addition, systematic reviews and clinical practice guidelines will not always provide a clear answer because of poor-quality evidence or conflicting results and recommendations, which can be a source of frustration [5].

Principles for applying systematic reviews or clinical practice guidelines in pain practice are similar. The first step (reviewed elsewhere [6]) is to assess their quality. After deciding that a systematic review or clinical practice guideline is trustworthy, the next step is to interpret the results and determine whether they should be applied to the circumstances at hand [7]. Particularly challenging situations often encountered in pain practice include clinical decision-making when evidence is weak or unclear, or when different systematic reviews and clinical practice guidelines reach discordant conclusions.

Interpreting and Applying Systematic Reviews and Clinical Practice Guidelines

What Is the Applicability of Findings?

Even when systematic reviews and clinical practice guidelines appear valid, findings and recommendations may not be applicable to all patients [8–10]. This is due in part to the fact that most systematic reviews and clinical practice guidelines are based on studies designed to evaluate efficacy (whether an intervention works under ideal conditions) rather than effectiveness (whether an intervention works in real-world settings) [11]. However, just because an intervention can work in carefully selected patients under controlled circumstances does not mean it will work in usual care [12].

Unless the scope and purpose of the systematic review or guideline are clearly stated, users can only guess whether and when it is pertinent [10]. Systematic reviews should describe a focused clinical question and the Populations, Interventions, Comparisons, and Outcomes evaluated (often referred to as the PICO framework [13]). Guidelines usually are broader in scope than systematic reviews, but should also provide PICO information and describe the target audience and setting (such as primary care vs referral practice) [14]. Other factors to consider when judging applicability (also referred to as generalizability or external validity) include how patients were selected, whether trial protocols reflect routine practice, appropriate selection of comparator interventions, and availability and acceptability of interventions (Table 1) [7].

Table 1

Factors to consider when assessing applicability of systematic reviews and clinical practice guidelines

  • Does the systematic review or clinical practice guideline report characteristics of the patients, interventions, comparisons, and outcomes in enough detail to assess applicability to routine practice?
  • Is the systematic review or clinical practice guideline based on a number of studies from a broad range of settings, including settings similar to the one in which the intervention will be applied? (Threats to applicability include studies performed only in other countries or health care systems, studies performed in referral settings, or use of stringent criteria to select participating centers and clinicians.)
  • Is the systematic review or clinical practice guideline based on a number of studies from a broad range of patient populations, including populations similar to the one in which the intervention will be applied? (Threats to applicability include use of numerous selection criteria, use of run-in periods, or a small ratio of enrolled patients relative to the number approached for possible inclusion.)
  • Is the systematic review or clinical practice guideline based on studies that evaluated the intervention using protocols that can be duplicated in routine practice? (Threats to applicability include prohibition of certain routinely used nontrial interventions, inappropriate comparison treatments such as nonequivalent dosing, evaluation of interventions not acceptable to patients, or use of frequent or intense follow-up, monitoring, or methods to ensure compliance and safety.)
  • Is the intervention as studied in the trials available in routine practice?

In some situations, judging applicability is fairly straightforward. For example, a systematic review found glucosamine superior to placebo for osteoarthritis only in trials evaluating a European, pharmaceutical-grade preparation [15]. Results are less applicable to glucosamine preparations available in the United States, where glucosamine is not regulated as a drug and the content and purity of over-the-counter preparations vary substantially. In other cases, judging applicability requires a more detailed understanding of the particular clinical condition and its management in routine practice [16]. For example, two recent systematic reviews both found acupuncture moderately effective for chronic low back pain compared with no treatment [17,18]. However, acupuncture techniques varied substantially across the trials included in the reviews, effectiveness may depend on the skill or training of the acupuncture provider, and a significant proportion of trials were performed in Asia, where patients may have greater expectations for benefit than in the United States [19]. A reasonable approach for applying these results in the United States might be to recommend the specific acupuncture techniques found effective in the primary studies to patients who express strong conviction in the benefits of acupuncture, provided skilled providers are available locally; in other situations, clinicians might reasonably suggest alternative therapies.

What Is the Clinical Significance of Results?

Systematic reviews and guidelines can generate conclusions or make recommendations based on benefits that are statistically, but not necessarily clinically, significant [9,20]. Factors to consider when judging clinical significance of results include the magnitude of treatment benefits, whether patient-centered clinical outcomes were assessed, whether validated and standardized methods were used to measure outcomes, and whether all important potential outcomes—both beneficial and harmful—were considered (Table 2) [10]. For example, recent systematic reviews of nonsteroidal anti-inflammatory drugs (NSAIDs) for osteoarthritis [21] and exercise for acute low back pain [22] found statistically significant benefits for pain relief compared with no treatment or placebo, but mean differences may not be large enough for most patients to detect (10 or fewer points on a 100-point pain scale) [23,24]. Systematic reviews or clinical practice guidelines that focus on surrogate outcomes (such as physiologic, imaging, or laboratory results) can be misleading because they often do not correlate well with patient-centered outcomes such as pain, functional status, or ability to work [25]. For example, a recent Cochrane review of surgery for degenerative conditions of the spine found fusion with instrumentation associated with higher rates of radiologic fusion than fusion without instrumentation, but no clear difference in rates of good clinical outcomes [26].

Table 2

Factors to consider when assessing clinical relevance of systematic reviews and clinical practice guidelines

  • Are there benefits on outcomes that are patient-centered and clinically relevant?
  • Are net benefits large enough to be clinically important?
  • Were standardized and validated methods used to measure clinical outcomes?
  • Were all important benefits and harms considered?

In pain research, the concept of minimal clinically important differences, or “the smallest change or difference in an outcome measure that is perceived as beneficial and would lead to a change in the patient's medical management”[27], can be helpful for determining whether observed effects are meaningful [28,29]. In addition, absolute measures of risk reduction such as the “number needed to treat” (NNT) can be more useful than relative measures (such as the relative risk reduction) for evaluating clinical relevance of treatment effects by conveying the number of patients necessary to treat in order to achieve one case of clinically relevant benefit [30,31]. NNTs can be particularly informative for comparing effects of different interventions for the same condition. For example, separate systematic reviews conducted by the same investigators estimated odds ratios (a relative measure of benefit) of 6.2 (95% CI 3.0 to 10.6) for >50% pain relief with anticonvulsants vs placebo for painful diabetic neuropathy [32], compared with 3.6 (95% CI 2.5 to 5.2) for antidepressants vs placebo [33]. However, the NNTs to achieve one case of >50% pain relief for the two drug classes were similar (2.5, 95% CI 1.8 to 4.0 vs 2.9, 95% CI 2.4 to 4.0, respectively). Though such indirect comparisons (across different sets of studies) should always be interpreted cautiously [34,35], they can provide useful guidance when direct head-to-head trials comparing interventions are lacking. One caveat in interpreting NNTs is that estimates can vary according to the underlying risk in a population, so it is important to verify that rates of response in the placebo arms are similar across the two sets of trials. In this example, rates of pain relief were comparable at 36% in trials of antidepressants [33] and 43% in trials of anticonvulsants [32].
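To make the arithmetic concrete, the short Python sketch below derives an NNT from a placebo response rate and a pooled odds ratio, using the figures quoted above. The helper name nnt_from_or is ours, and because the published NNTs were computed from pooled raw data rather than from the pooled odds ratios, the results match only approximately.

```python
def nnt_from_or(control_rate: float, odds_ratio: float) -> float:
    """Number needed to treat, derived from a control (placebo) event rate
    and a pooled odds ratio for a beneficial outcome."""
    control_odds = control_rate / (1.0 - control_rate)
    treated_odds = control_odds * odds_ratio
    treated_rate = treated_odds / (1.0 + treated_odds)
    absolute_risk_reduction = treated_rate - control_rate
    return 1.0 / absolute_risk_reduction

# Figures quoted above for >50% pain relief in painful diabetic neuropathy:
print(round(nnt_from_or(0.43, 6.2), 1))  # anticonvulsants: 2.5 (reported 2.5)
print(round(nnt_from_or(0.36, 3.6), 1))  # antidepressants: 3.2 (reported 2.9)
```

The same function also illustrates the caveat about underlying risk: holding the odds ratio fixed while lowering the placebo response rate changes the resulting NNT substantially.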

In order to provide a balanced view of the overall clinical effects of an intervention, systematic reviews and guidelines should consider all important harms as well as benefits [36]. In the case of celecoxib, a cyclo-oxygenase-2 selective NSAID, it would be important to evaluate evidence on pain relief, functional status, and other efficacy outcomes, as well as evidence on gastrointestinal and cardiovascular safety [37]. However, systematic reviews frequently focus only on benefits [38,39], in part because harms data are often sparse or of poor quality [40]. Ideally, a single systematic review would address all important outcomes, but in some cases clinicians and guideline developers may have to consult multiple systematic reviews in order to cover all relevant harms and benefits.


Clinical Decision-Making When Evidence Is Weak

One frequent difficulty in using evidence from systematic reviews of pain interventions is the poor quality of the available studies. On average, randomized controlled trials of low back pain interventions adequately meet only about half of standard quality rating items [41]. Not surprisingly, about two-thirds of systematic reviews of low back pain interventions emphasize the need for more high-quality trials [42]. When evidence is weak or when studies are conflicting or inconsistent, it may not be possible to generate strong evidence-based conclusions about the effectiveness of an intervention [43]. Nonetheless, there are situations in which clinicians may consider recommending a pain intervention based on weak or inconclusive evidence. Such recommendations should always be made cautiously, as there is a well-documented history of pain interventions adopted on the basis of initial, poor-quality evidence, only to be abandoned after subsequent higher-quality studies showed ineffectiveness or even harm [44]. For low back pain, examples of such discarded interventions include invasive procedures such as coccygectomy and sacroiliac joint fusion, as well as noninvasive interventions such as advice to rest in bed [45]. The following questions provide a basic framework of issues for clinicians to consider when systematic reviews or clinical practice guidelines do not provide a clear answer because the evidence is weak or inconclusive (Table 3).

Table 3

Issues to consider when contemplating a pain intervention supported by weak or inconclusive evidence

  • Is there an alternative intervention proven to be effective?
  • Can evidence on the target intervention be reasonably extrapolated from trials in other settings or populations?
  • Is new evidence that may help clarify risks and benefits expected shortly?
  • Is the target intervention associated with significant potential harms or costs?
  • Are persons with appropriate skill and training available to provide the intervention?
  • Does the patient have a strong preference for the target intervention?
  • How willing is the patient to accept uncertainty in estimates of benefits and harms?

Is There an Alternative Intervention Proven to Be Effective?

For some conditions, a number of interventions are available and have been evaluated in high-quality systematic reviews. For example, evidence on transcutaneous electrical nerve stimulation [46] and traction [47] for chronic low back pain is limited and, at best, mixed. On the other hand, there is fairly consistent evidence from a number of trials of at least modest benefits from acupuncture [17,18] and spinal manipulation [48] (as well as certain other interventions). When multiple alternatives are available, clinicians should preferentially choose interventions with at least fair-quality evidence of benefit over interventions with unproven or uncertain benefits.

Can Evidence on the Target Intervention Be Reasonably Extrapolated from Trials in Other Settings or Populations?

Direct evidence on benefits and harms of an intervention may not be available for the exact clinical situation of interest. For example, there are few high-quality trials demonstrating benefits of opioids compared with placebo in patients with low back pain [49]. However, systematic reviews have found opioids consistently more effective than placebo for short-term outcomes in patients with other chronic noncancer pain conditions [50,51]. It would be reasonable to extrapolate evidence on efficacy of opioids from patients with other chronic musculoskeletal or degenerative pain conditions to patients with low back pain, until better studies in that population become available. On the other hand, it would not make sense to extrapolate evidence on efficacy of opioids for acute, postoperative, or cancer pain to patients with chronic low back pain. Risks and benefits of opioids are likely to vary substantially because of differences in the natural history of the underlying pain condition, population characteristics, likelihood of long-term opioid use, and risk of aberrant drug-related behaviors. Evidence from such disparate populations should be considered separately and not used to support use of opioids in patients with low back pain.

Is New Evidence That May Help Clarify Risks and Benefits Expected Shortly?

Evidence on efficacy of pain interventions is constantly being updated by new studies. For example, a recent systematic review included only one small randomized trial comparing surgery vs initial nonsurgical therapy for symptomatic spinal stenosis [26]. Although this trial found surgery associated with moderately superior outcomes after 1 year, differences were attenuated at longer-term follow-up [52]. Furthermore, two large, multicenter trials evaluating surgery vs initial nonsurgical therapy for spinal stenosis (with or without degenerative spondylolisthesis) were known to be in progress at the time the systematic review was published [53,54]. Because new evidence to better inform decision-making was known to be forthcoming, clinicians could have reasonably deferred decisions on surgery for spinal stenosis patients without severe symptoms or a strong preference for surgery until results of these trials became available.

Is the Target Intervention Associated with Significant Potential Harms or Costs?

Clinicians should be particularly cautious about offering interventions supported by weak or inconclusive evidence when there may be significant potential harms or costs, especially when benefits are also uncertain or appear small. For example, vertebral disc replacement with the Charite® and the ProDisc®-II artificial discs for nonspecific low back pain with degenerative disc disease was noninferior to fusion in two published trials [55,56]. However, the fusion technique used in one of these trials is no longer widely used because of frequent failures [55], and even standard fusion as used in the other trial [56] is not clearly superior to intensive, multidisciplinary rehabilitation [57]. In addition, long-term outcomes (and complications) of vertebral disc replacement are unknown, and costs of vertebral disc replacement are considerably higher than fusion [58]. A reasonable approach would be to wait for more evidence before recommending vertebral disc replacement, given the substantially higher costs and uncertain long-term benefits and harms. For certain other interventions, such as glucosamine or chondroitin for osteoarthritis, a time-limited trial may be justified because harms appear minimal and costs relatively low, even though evidence on benefits is inconclusive or shows no clear benefit [11,15,59,60].

Are Persons with Appropriate Skill and Training Available to Provide the Intervention?

Interventions may be supported by only weak evidence because they are new or not widely available. For some emerging interventions, outcomes may vary substantially depending on the skill and training of the providers. In the example of vertebral disc replacement with the Charite® prosthetic disc, procedures performed by surgeons and hospitals with higher procedure volumes are associated with significantly better outcomes [61]. If a decision is made to recommend such an intervention, it is critical for clinicians to ensure that patients are referred to providers or centers with appropriate skill and training.

Does the Patient Have a Strong Preference for the Target Intervention?

Patient preferences or expectations of benefit from an intervention can have a substantial effect on outcomes [62,63]. For example, in a randomized trial of patients undergoing massage or acupuncture for chronic low back pain, patients who expected greater benefits from massage than acupuncture were significantly more likely to experience better outcomes with massage than with acupuncture, and vice versa [19]. Strong patient preference for a pain intervention supported by only weak evidence can be a factor influencing the decision to offer the intervention. However, important concerns about potential risks or costs should not be overridden by patient preferences alone.

How Willing Is the Patient to Accept Uncertainty in Estimates of Benefits and Harms?

Patients are likely to differ in how willing they are to accept uncertainty in estimates of benefits and harms. For example, there is little evidence on long-term risks of abuse or addiction with use of opioid analgesics for chronic noncancer pain [49–51]. Patients with severe pain may be more willing to accept uncertainty in estimates for this important outcome because opioids are among the most potent analgesics available for most types of pain. For example, a trial of opioids for severe low back pain may be indicated if a patient wants to try returning to work or normal function, despite the lack of evidence documenting long-term benefits and risks of opioids specifically for chronic low back pain [49], as long as the patient is not at higher risk for drug abuse, addiction, or other aberrant drug-related behaviors. Similarly, patients who have failed several "proven" interventions may be more apt to accept uncertainty when making decisions about a less proven intervention. It is important for clinicians to clearly discuss gaps in the evidence with patients when considering such interventions, in order to better incorporate patient preferences and values into the decision-making process [64].


When Systematic Reviews or Clinical Practice Guidelines Disagree

High-quality systematic reviews of the same topic can be very helpful for guiding medical decisions when they reach similar conclusions. A more challenging scenario arises when systematic reviews of the same topic, even well-conducted ones, disagree [65]. As more systematic reviews are published, such situations have become increasingly common. One study found conflicting conclusions for 11 of 13 low back pain interventions that had been evaluated in two or more systematic reviews [42]. Clinicians may also face conflicting recommendations from different guidelines. Among eleven guidelines for acute low back pain, for example, two recommended use of muscle relaxants in at least some situations, six recommended against them, and three provided ambiguous recommendations [66].

Choosing which systematic review or clinical practice guideline to trust in the face of these discrepancies can be confusing. The following questions address some important issues for clinicians to consider when working through such situations (Table 4) [65].

Table 4

Issues to consider when systematic reviews or clinical practice guidelines are discordant

  • Which systematic review or clinical practice guideline is more applicable?
  • Which systematic review or clinical practice guideline is most current?
  • Which systematic review or clinical practice guideline is most rigorous?
  • Is discordance due to different methods for rating or synthesizing evidence?
  • Are potential conflicts of interest present that could affect interpretation of evidence?
  • Is discordance due to differences in how outcomes are prioritized or valued?

Which Systematic Review or Clinical Practice Guideline Is More Applicable?

Clinicians should first determine which systematic review or clinical practice guideline most directly addresses their specific clinical situation. For example, two systematic reviews of topical NSAIDs for osteoarthritis came to different conclusions in part because of different criteria for selecting studies and analyzing outcomes. One systematic review included studies of topical salicylates and eltenac gel (no longer available for human use) and evaluated mean differences in effect sizes for pain relief [67]. The other excluded topical salicylates (which are not thought to penetrate into the joint space and have weak evidence of efficacy) and evaluated differences in the proportion of patients classified as "clinical responders" [68]. In this case, the second review, which restricted itself to topical NSAIDs available for human use and reported the proportion of "clinical responders," is more likely to be applicable and clinically relevant [68].

Issues related to applicability may be particularly relevant for clinical practice guidelines, which take into account local circumstances that affect judgments about acceptable side effects, benefits, and costs [69]. In the United States, for example, there was perceived overuse of surgery when the 1994 Agency for Health Care Policy and Research (AHCPR) guidelines for acute low back pain were developed [70]. This could have swayed recommendations toward noninvasive interventions if some data, even if not conclusive, suggested benefit with little evidence of harm. In fact, the AHCPR guidelines recommended spinal manipulation even though contemporaneous high-quality systematic reviews [71,72] came to different conclusions about its benefit. By contrast, guidelines produced in other parts of the world at around the same time, where there may not have been the same concerns about overuse of invasive therapies, either recommended against spinal manipulation (the Netherlands) or made no recommendation (Israel) [73].

Other factors should also be considered when assessing applicability of clinical practice guidelines. For example, recommendations for interventional pain procedures developed by a narrow group of specialists for use in interventional pain centers, where patients typically have more severe disease and have already undergone extensive testing and management, are not likely to be applicable to primary care settings [74]. A guideline developed by a European multidisciplinary panel may in fact be more applicable to U.S. primary care settings [75].

Which Systematic Review or Clinical Practice Guideline Is Most Current?

Conclusions from systematic reviews and recommendations from clinical practice guidelines can rapidly change as the body of literature accumulates. One study found that guidelines issued by the AHCPR started becoming outdated after only 3 years, and half required major updating after 5.8 years [76]. Similarly, one-quarter of systematic reviews require updating after 2 years, and half by 5.5 years [77]. Several factors should be considered when evaluating whether a more recent systematic review or guideline is likely to be more reliable or relevant, including whether new evidence has been published, new interventions have become available, changes have occurred in resources available for health care, or changes have taken place in values placed on different outcomes [77,78].

Which Systematic Review or Clinical Practice Guideline Is Most Rigorous?

Shortcomings in the methods used to identify, select, and rate studies, or in the methods used to analyze and synthesize data, can lead to discordance between systematic reviews or clinical practice guidelines [65]. Several studies have shown that lower-quality systematic reviews of pain interventions are more likely to generate positive conclusions than higher-quality reviews [42,79,80]. Higher-quality and more comprehensive systematic reviews or clinical practice guidelines should therefore be given greater weight than poorer-quality guidance. Methods for distinguishing higher-quality from lower-quality systematic reviews and clinical practice guidelines are reviewed in a separate article [6].

Is Discordance Due to Different Methods for Rating or Synthesizing Evidence?

Even when systematic reviews or clinical practice guidelines identify and include the same (or a similar) body of studies, they may reach discordant conclusions. Differences in how similar bodies of evidence are rated and synthesized are an important source of such discordance [65]. Conclusions of systematic reviews may be particularly sensitive to methods used to rate and synthesize evidence when data are largely from lower-quality trials [81].

Different methods of data synthesis appear to be the critical factor explaining why two earlier systematic reviews of epidural steroid injections for sciatica came to different conclusions [82]. One systematic review, using a quantitative approach, reported a pooled odds ratio of 2.61 (95% CI 1.90 to 3.77) for a positive outcome favoring epidural steroid over placebo injection [83]. The other study, using a “vote-counting” approach, concluded that there is no difference between epidural steroid and placebo injections because six studies found a statistically significant positive effect for steroids and six did not [84]. In general, simple vote counting—or classifying studies as “positive” or “negative” and tallying the number of each—is suboptimal because it ignores the size and direction of effects from individual studies and their confidence intervals, and may not adequately consider effects of study quality [85]. On the other hand, pooling heterogeneous or lower-quality trials with higher-quality trials, as performed in the quantitative meta-analysis of epidural steroids [83], may also be misleading [86]. A better approach would be to perform sensitivity analyses to evaluate stability of conclusions based on the highest-quality trials, or to use a “best evidence” approach emphasizing evidence from the largest, highest-quality, and most homogeneous trials [87]. Unfortunately, a subsequent Cochrane review [88] found only one high-quality trial comparing epidural steroids to placebo injection (finding no differences in outcomes) [89], though other higher-quality randomized trials [90,91] reporting favorable short-term outcomes following epidural steroid injections have since been published.
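To illustrate why the two approaches can diverge, here is a minimal sketch contrasting vote counting with fixed-effect inverse-variance pooling of log odds ratios. The trial data are invented for illustration; they are not the actual epidural steroid studies.

```python
import math

# Hypothetical (responders, total) pairs for treatment and control arms of
# four trials; illustrative numbers only, not the actual epidural steroid data.
studies = [((18, 30), (10, 30)),
           ((12, 25), (9, 25)),
           ((22, 40), (14, 40)),
           ((8, 20), (7, 20))]

def log_or_and_var(treat, control):
    """Log odds ratio and its variance (Woolf method) for one 2x2 table."""
    a, b = treat[0], treat[1] - treat[0]          # responders / nonresponders
    c, d = control[0], control[1] - control[0]
    return math.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

results = [log_or_and_var(t, c) for t, c in studies]

# Vote counting: a trial is "positive" only if its own 95% CI excludes OR = 1.
votes = sum(1 for lo, v in results if lo - 1.96 * math.sqrt(v) > 0)

# Fixed-effect pooling: weight each trial's log OR by the inverse of its variance.
weights = [1.0 / v for _, v in results]
pooled = sum(w * lo for (lo, _), w in zip(results, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
low, high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

print(f"'positive' trials: {votes} of {len(studies)}")             # 1 of 4
print(f"pooled OR {math.exp(pooled):.2f} (95% CI {low:.2f} to {high:.2f})")
# pooled OR 2.04 (95% CI 1.20 to 3.48)
```

Vote counting makes this body of evidence look mostly negative (one of four trials), while pooling shows a statistically significant moderate benefit; as noted above, either summary can mislead if heterogeneity and study quality are ignored.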

Are Potential Conflicts of Interest Present That Could Affect Interpretation of Evidence?

Although conflicts of interest are well recognized as a potential source of bias in primary studies [92–96], their potential effects on the results of systematic reviews have only recently received attention. One study found no differences in conclusions between Cochrane reviews (which receive no industry funding) and meta-analyses of the same drugs with nonprofit or no support, but industry-supported meta-analyses were more likely to reach favorable conclusions, even when estimates of treatment effect were similar [97]. In some cases, industry- and nonindustry-funded reviews may reach opposite conclusions, as in the case of several systematic reviews of cardiovascular risk associated with rofecoxib [98–100].

Investigators or guideline developers may also have conflicts of interest not directly related to funding source, as when there is a vested interest in finding that a certain intervention is effective. For example, one study found that systematic reviews of spinal manipulation with at least one osteopath or chiropractor author were more likely to reach positive conclusions than other systematic reviews, though these reviews also tended to be rated lower in quality [80]. In some cases, these discrepancies may be due to differing interpretations of what counts as "positive" evidence. Recent guidelines developed by the American Society of Interventional Pain Physicians, for example, recommend intra-articular facet joint and sacroiliac joint injections based on "moderate" evidence for the former and "limited" evidence for the latter [74]. As defined in these guidelines, "moderate" evidence does not require even a single properly designed randomized controlled trial, and "limited" evidence requires only nonexperimental studies or conflicting evidence from multiple trials. Such criteria are more generous than usual standards for grading evidence and strength of recommendations [101]. Other guidelines using more stringent criteria for grading recommendations found insufficient evidence to support either intervention [75].

Is Discordance Due to Differences in How Outcomes Are Prioritized or Valued?

In some cases, systematic reviews and clinical practice guidelines may evaluate similar evidence and report comparable results, yet reach discordant conclusions about how clinicians should act on those findings. This can occur because of differences in how outcomes are prioritized or valued. For example, recent guidelines from the American Heart Association [102] recommend short-term opioid analgesics (along with acetaminophen, aspirin, and nonacetylated salicylates) over NSAIDs as first-line analgesics for musculoskeletal symptoms in patients with known cardiovascular disease or at high risk for it, because of the increased risk of myocardial infarction associated with NSAIDs (0.3% to 0.6% per year; RR 1.86, 95% CI 1.33 to 2.59 [103]). These recommendations are likely informed in part by the high priority the American Heart Association places on preventing myocardial infarction. In other guidelines, opioids remain second-line agents, particularly for chronic pain, because their side effects and long-term risks are perceived as offsetting their neutral cardiovascular risk profile and other potential advantages [104,105]. Weighing outcomes to assess the balance of benefits and harms almost always involves subjective judgments. However, clinicians evaluating discordant systematic reviews or guidelines should consider whether the values placed on different outcomes are congruent with the importance they (and their patients) would assign to them.
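As a rough worked example of moving from relative to absolute terms (a sketch only; the baseline annual myocardial infarction risks below are assumptions chosen for illustration, not figures from the guideline), the cited RR of 1.86 reproduces the quoted 0.3% to 0.6% per year excess risk at baseline risks of roughly 0.4% and 0.7% per year:

```python
# Converting a relative risk into absolute excess risk and a number needed
# to harm (NNH). Baseline annual MI risks are illustrative assumptions.
relative_risk = 1.86
for baseline in (0.004, 0.007):                  # assumed annual MI risk
    excess = baseline * (relative_risk - 1.0)    # absolute excess risk per year
    nnh = 1.0 / excess                           # treated patients per one extra MI/yr
    print(f"baseline {baseline:.1%}/yr -> excess {excess:.2%}/yr, NNH ~{nnh:.0f}")
# baseline 0.4%/yr -> excess 0.34%/yr, NNH ~291
# baseline 0.7%/yr -> excess 0.60%/yr, NNH ~166
```

Framing the harm this way (one additional myocardial infarction for every few hundred patients treated for a year) can make the trade-off against opioid-related risks easier to discuss with individual patients.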


Using high-quality systematic reviews and clinical practice guidelines is a good starting point for bringing evidence into pain practice, but it will not eliminate uncertainty or provide answers for all clinical situations. Evidence-based medicine is not intended as a substitute for thoughtful clinical judgment and sound reasoning [106]. Providers should consider whether the findings and recommendations of systematic reviews and clinical practice guidelines are applicable and clinically relevant, and should incorporate individual patient circumstances and preferences into medical decisions. There may be times when clinicians choose to recommend a pain intervention supported by only weak or inconclusive evidence, but such decisions should be limited to circumstances in which effective alternatives are unavailable or have failed, potential (or unknown) harms and costs have been carefully weighed, and patients clearly understand the level of uncertainty involved. Deciding which systematic review or clinical practice guideline to follow when conclusions or recommendations are discordant can be a challenge, but clinicians should be able to work through most situations by considering a few key issues: applicability, currency, quality, methods used to synthesize data, potential conflicts of interest, and the values placed on different outcomes.


The author would like to acknowledge Jayne Schablaske for her assistance with this manuscript, and Laurie Hoyt Huffman for reviewing the manuscript.

