
Computerized Decision-Support Systems for Chronic Pain Management in Primary Care

Meredith Y. Smith MPA, PhD, Judith D. DePue EdD, MPH, Christine Rini PhD
DOI: http://dx.doi.org/10.1111/j.1526-4637.2007.00278.x S155-S166 First published online: 1 October 2007


Objective. Computerized decision-support systems (CDSSs) can offer clinical guidance, as well as promote doctor–patient collaboration and patient self-care. As such, they have great potential for improving chronic pain management, particularly in the primary care setting, where physicians often lack sufficient pain-specific clinical expertise and communication skills. The objective of this study was to examine the use of CDSSs in chronic pain management, and to review the evidence for their feasibility and effectiveness.

Design. A review of the available literature using search terms associated with computerized decision-support and chronic pain management. Major databases searched included: MEDLINE, CINAHL, PsycINFO, HealthSTAR, EMBASE, Cochrane Library, Computer and Information Systems Abstracts, and Electronics and Communications Abstracts. Descriptive and evaluative studies were included.

Results. Nine studies describing eight CDSSs met study inclusion criteria. With two exceptions, each CDSS was specific to one or more pain-related conditions. All were designed to assist clinicians in managing pain medically. Aside from pain status, input specifications differed markedly. Evaluative studies were exclusively feasibility studies and varied widely in design and level of description. All were nonexperimental; most were methodologically weak. Two primary care studies were reported. Patient and clinician acceptability ratings of CDSSs ranged from moderate to high. Due to insufficient data, definitive conclusions concerning the impact of CDSSs on provider performance and patient outcomes were not possible.

Conclusion. Research on CDSSs in chronic pain management is limited. The effects of CDSSs on provider and patient outcomes remain understudied, and their potential to improve doctor–patient collaboration and self-care largely untested.

  • Computerized Decision-Support Systems
  • Expert System
  • Primary Care
  • Chronic Pain
  • Doctor–Patient Communication
  • Disease Management for Chronic Pain


An estimated 9% of adults in America suffer from chronic pain and its sequelae, and over half of these individuals seek treatment for this condition from a primary care physician (PCP) [1]. In recognition of this fact, recent American Pain Society guidelines advocate that PCPs should “… participate in the process of screening, diagnosis, and long-term follow-up treatment of patients who suffer from chronic pain [j]ust as PCPs diagnose and maintain patients with other chronic diseases …”[2]. In order to accomplish this goal, however, PCPs must be equipped not only with the necessary clinical tools and expertise, but also with the communication and related interpersonal skills to build and sustain a strong alliance with their patient. Collaborative management that strengthens and supports self-care is recognized as the most appropriate and cost-effective way to treat chronic pain and is, moreover, the approach most often preferred by both patients and health care providers [2–6].

In reality, however, deficits in physicians' training and knowledge regarding pain management, coupled with time constraints during the primary care visit, frequently prevent PCPs from meeting these dual clinical and psychosocial expectations [5,7–15]. PCPs commonly focus more on technical aspects of care when treating chronic pain patients, and less on promoting patient self-management behaviors [7]. In addition, doctor–patient communication regarding pain is frequently inadequate [16–19]. Consequently, physicians' perceptions of patient pain are often incongruent with patients' self-ratings, treatment goals are frequently developed without patient input, and patient adherence to treatment plans is less than optimal [20–25].

Faced with declining reimbursement rates from both public and private payers, physicians need an alternative that enables them to reconcile the twin imperatives of providing high-quality pain care while maximizing efficiency in their clinical practice. Computerized decision-support systems (CDSSs) offer one potential solution to this dilemma. CDSSs are information systems designed to enhance the quality of clinical decision making and to minimize deviations in clinical performance from accepted professional standards [26]. Individual patient data are entered into the system, whereupon predetermined algorithms, guided by a resident logic library of expert-based clinical data, generate patient-specific recommendations [26]. Data can be collected via a personal computer or a handheld device, such as a Palm Pilot. Typically, output is available in real time for use during the medical visit.

The process involved in using a CDSS based on personal computer technology is illustrated as follows. Upon arrival at a medical appointment, a patient is directed to a computer workstation equipped with a laptop, stylus, and printer. Following instructions on the computer screen, the patient completes an electronic questionnaire containing the following types of items: current pain status (e.g., intensity, location, duration), past and present pain treatment regimens, degree of adherence to prescribed pain medication, side effects of current medication, risk factors for opioid abuse, impact of pain on respondents' quality of life (QOL), psychological status, pain-related coping and self-management strategies, type and degree of social support, personal goals for daily living, and (for initial visit only) demographic information. The data collected are automatically stored electronically and then run through a series of algorithms. In the final step, a report is generated (electronically, in hard-copy format, or both) that contains a synopsis of the patient's pain status, recommendations as to possible improvements in the current pain treatment regimen, key issues to discuss with the patient (e.g., problems with medication adherence or side effects, signs of depression), and a list of patient-targeted suggestions designed to encourage pain self-management (e.g., recommendations to pursue a specific exercise regimen).
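The report-generation step in this kind of workflow can be thought of as a set of simple rules applied to the questionnaire responses. The sketch below is purely illustrative: the field names, thresholds, and messages are invented for this example and are not drawn from any of the systems reviewed here.

```python
# Hypothetical sketch of the CDSS visit workflow described above:
# patient questionnaire responses -> rule-based algorithms -> tailored report.
# All field names, thresholds, and messages are illustrative only.

def generate_report(responses: dict) -> dict:
    """Run questionnaire responses through simple rules and build a report."""
    discussion_points = []
    self_care_tips = []

    # Flag issues for the clinician to raise during the visit.
    if responses.get("pain_intensity", 0) >= 7:
        discussion_points.append("Severe pain reported; review current regimen.")
    if not responses.get("taking_meds_as_prescribed", True):
        discussion_points.append("Possible medication adherence problem.")
    if responses.get("depression_screen_score", 0) >= 10:
        discussion_points.append("Positive depression screen; assess mood.")

    # Patient-targeted self-management suggestions.
    if responses.get("exercise_days_per_week", 0) < 2:
        self_care_tips.append("Encourage a gradual, regular exercise program.")

    return {
        "pain_summary": f"Intensity {responses.get('pain_intensity')}/10, "
                        f"site: {responses.get('pain_location', 'unspecified')}",
        "discussion_points": discussion_points,
        "self_care_tips": self_care_tips,
    }

report = generate_report({
    "pain_intensity": 8,
    "pain_location": "lower back",
    "taking_meds_as_prescribed": False,
    "depression_screen_score": 4,
    "exercise_days_per_week": 0,
})
print(report["discussion_points"])
```

In a deployed system the rule base would of course encode validated clinical algorithms rather than hard-coded thresholds, but the input-rules-report pipeline is the same.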

Both the type and application of CDSSs have been expanding rapidly in recent years. To date, CDSSs have been used to diagnose a variety of medical conditions, to enhance the provision of preventive care (e.g., cancer screening, vaccination), to facilitate disease management, and to assist with drug prescribing [27–32]. Evidence suggests that CDSSs can improve health care practitioner performance and patient outcomes, particularly in the areas of vaccinations, tetanus immunizations, breast cancer screening, colorectal cancer screening, cardiovascular risk reduction, and smoking cessation [26,33–36].

CDSSs can be designed not only to provide clinical guidance, but to capture and integrate patients' perspective on their illness, and to promote positive patient health-related behaviors. Sometimes referred to as “expert systems,” this variant of a CDSS is particularly promising for use in the clinical management of chronic pain for two reasons: (i) pain is a highly subjective experience, hence patient-reported data are essential to obtain, and (ii) successful outcomes of pain management depend at least as much on appropriate self-care (e.g., self-monitoring of pain, adherence to prescribed medications, regular exercise, weight control) as on the quality of the clinical diagnosis or recommended therapeutic regimen [5]. Based on the data entered, the expert system produces tailored recommendations, often derived from behavioral change models, that are specific to the needs of the individual patient [37]. Recommendations are directed at either the clinician or the patient, or separate output can be generated for each party. To date, expert systems have been used widely in the arena of health behavior modification, such as smoking cessation and weight reduction [38–43].

The purpose of this study was to systematically review the research evidence for CDSSs to address the following questions: (i) To what extent have CDSSs been utilized in the context of chronic pain management? (ii) What are the characteristics of these systems? and (iii) To what degree have they been evaluated and in what types of clinical settings?


Data Sources

We conducted an automated literature search using the Ovid search engine. With the assistance of a research librarian, we searched the following databases: MEDLINE (1966 to April 2006), CINAHL (1982 to April 2006), PsycINFO (1967 to April 2006), HealthSTAR (1981 to April 2006), EMBASE, Cochrane Library, Computer and Information Systems Abstracts, Electronics and Communications Abstracts, ProQuest Digital Dissertations, Computer Retrieval of Information on Scientific Projects (CRISP), LISA, ERIC, and Dissertation Abstracts. Key search words employed included the following: computer-generated decision support systems and expert systems. Additional terms included: chronic pain, primary care, tailored reports, personalized computer-based information, disease management for chronic pain, patient goals, pain diagnosis and management, decision support systems, neural networks, and fuzzy logic. We also conducted a manual search to supplement the automated search; the manual search was not limited by time period and included articles that had been referenced in other articles.

Study Selection Criteria

Eligibility for inclusion in the final set included: any studies describing the development and/or application of a CDSS, or expert system in the context of chronic pain management. We defined a CDSS as any electronic system designed to assist in clinical decision making regarding chronic pain management, and in which patient-specific assessments and recommendations were generated for use by a clinician and/or patient [44]. Consistent with this definition, we excluded any CDSSs that examined pain as only one component of an overall assessment of QOL, as well as any that addressed acute pain management only. We also excluded studies that were written in languages other than English.

Results


The cross-database search yielded 70 published articles and three federally funded research studies describing ongoing investigations. No additional studies were identified via the manual search process. Full-text articles were retrieved for all titles considered to be potentially relevant by the authors. Nine studies describing eight discrete CDSSs were identified as meeting our inclusion criteria. Lack of homogeneity among the final set of studies precluded a quantitative meta-analysis of the data. Due to these analytic constraints, we conducted a descriptive literature review only.

As shown in Table 1, all eight CDSSs were designed to assist in the diagnosis and/or management of chronic pain. Two of these systems, the Pain Management Advisor (PMA) and the Diagnostic Headache Diary, were also designed to offer education to the health care provider. One, the PMA, had an interactive capability that permitted users to query the system for explanations, therapeutic rationale, and therapy guidelines. Two other systems, the SymptomReport and the PAINReportIt, featured adjunctive software programs (SymptomConsult, PAINConsultN) that were expressly designed as interactive educational tools for the patient.

Table 1

Key features of clinical decision-support systems (CDSSs) developed for chronic pain management

RHINOS [45]
  • Purpose: To assist physicians in the diagnosis of chronic headaches or facial pain.
  • Description: Based on expert knowledge of headache and facial pain specialists. Uses four sets of conditional, probability-based rules: 1) exclusive rules (i.e., if a patient has disease D, s/he must have symptoms S1, S2, … Sn); 2) inclusive rules; 3) associate rules; and 4) disease image rules.
  • Stage of development: Prototype only; no testing.
  • Hardware & software: Developed in the Prolog-KABA programming language; runs on personal computers with an 8086 CPU and 382 Kbytes of memory.
  • Data input(s): Physician input based on patient interview: patient demographics; onset of headache; history since onset; pain characteristics; neurological signs associated with pain; sleep status; personal and familial history of headache; jolt headache (yes/no); sclerosis of retinal artery.
  • Output(s): Initial output after the first set of screening questions: list of diseases not rejected. Output after additional question(s): list of possible diseases; diagnostic conclusions; explanation of the disease, required examinations, and suggested therapy. A hospital chart can be printed from the input data in natural-language form.
  • Target recipient(s): Physicians.

IVAN [46]
  • Purpose: To provide recommendations for controlling pain and providing symptom relief in cancer, osteo- and rheumatoid arthritis.
  • Description: Case-based reasoning strategy to record and retrieve information stored in an internal knowledge base.
  • Stage of development: Prototype only; no testing.
  • Hardware & software: Written in LPA Prolog for Windows (“WinProlog”); runs on PC Windows 95/NT.
  • Data input(s): Pain symptom checklist; current pain diagnosis; current pain treatment regimen.
  • Output(s): Computer screen display: diagnostic confirmation; description of symptoms that may appear later; treatments proven successful in similar or related cases; possible alternative causes of the pain.
  • Target recipient(s): Physicians and patients.

The Diagnostic Headache Diary [47]
  • Purpose: To educate and provide diagnostic support to primary care providers in order to improve management of headaches.
  • Description: Rule-based expert system using Boolean logic. A set of diagnostic rules is used to determine a diagnosis based upon the data entered in the patient diary. Diary data are transformed into a diagnosis following the International Headache Society's classification.
  • Stage of development: Prototype developed; system-generated diagnoses were validated against physician expert-generated diagnoses.
  • Hardware & software: Stand-alone Windows 95 program written in the Delphi programming language.
  • Data input(s): Patient data; headache diary entries; medications used to alleviate headache.
  • Output(s): Diagnosis of headache type.
  • Target recipient(s): PCPs.

Pain Management Advisor (PMA) (NovaIntelligence Inc., San Diego, CA) [48]
  • Purpose: To enhance primary care providers' (PCPs) management of chronic pain.
  • Description: Relies on rule-based algorithms derived from expert knowledge of pain specialists. The user is asked a series of questions to refine the diagnosis and determine appropriate therapy. Interactive capability (e.g., for explanations, therapeutic rationales, therapy guidelines).
  • Stage of development: Working version developed; some field testing conducted.
  • Hardware & software: Pentium-based PCs running Windows 95; PMA written in Microsoft Visual Basic v. 5.0, run as an expert application in XpertRule; algorithms stored in a Microsoft Access database; Microsoft Help Utility used for explanations and queries.
  • Data input(s): Patient demographics; diagnosis; pain characteristics; laboratory tests and imaging studies; current medications; prior therapies; concurrent disease conditions; allergies; psychological status.
  • Output(s): A prioritized list of recommendations: 1) medical management (pharmacologic and nonpharmacologic management, physical and psychosocial modalities); 2) invasive procedures; 3) referrals.
  • Target recipient(s): PCPs.

SymptomReport and SymptomConsult [49]
  • Purpose: To assist clinicians in assessing cancer-related chronic pain and fatigue, and clarify patients' misbeliefs about pain assessment and management.
  • Description: Not described.
  • Stage of development: Working version developed; field testing conducted.
  • Hardware & software: Microsoft Windows 95/98 personal computers with touch screen (Pen-Tab).
  • Data input(s): Patient self-reported input: patient demographics; 1970 version of the McGill Pain Questionnaire (MPQ); Barriers Questionnaire (BQ); Schwartz Cancer Fatigue Scale (SCFS-6).
  • Output(s): 1) Hard copies of the expert system report given to patient and to clinician; 2) the patient receives educational materials on how to report pain, use pain medications safely, and manage fatigue. Materials are customized to the patient's needs and presented in an interactive, multimedia format; patients have the option to read or listen to information on the computer, print any or all of the materials, or both.
  • Target recipient(s): Oncology nurses; other clinicians.

Decision-support computer program for cancer pain management [50]
  • Purpose: To improve oncology nurses' decision making related to cancer pain management among culturally diverse female oncology patients.
  • Description: Survey data on multicultural cancer pain characteristics were analyzed using fuzzy inference logic to develop four modules: 1) a generic knowledge base; 2) a culture-specific knowledge base; 3) decision making; and 4) self-adaptation. The decision-making module consists of two sets of fuzzy inference logic developed via a genetic algorithm.
  • Stage of development: Not described.
  • Hardware & software: Hardware not described; adaptive fuzzy logic software used to develop and run the knowledge-base generation, decision-making, and self-adaptation modules.
  • Data input(s): Nurse-entered data based on patient interview: patient demographics; pain characteristics.
  • Output(s): Computer screen display of analgesic treatment recommendations based on the World Health Organization (WHO) analgesic ladder.
  • Target recipient(s): Oncology nurses.

PAINReportIt and PAINConsultN [51,52]
  • Purpose: To assist clinicians in assessing chronic pain and to educate patients regarding pain monitoring and management.
  • Description: Not described.
  • Stage of development: Working version developed; field testing conducted.
  • Hardware & software: Microsoft Windows 95/98 personal computers with touch screen (Pen-Tab); data stored in an Access 97 database.
  • Data input(s): Patient self-reported input: patient demographics; McGill Pain Questionnaire; pain status; patient goals for and expectations about pain; type and effectiveness of previous pain treatments.
  • Output(s): Hard copies of the expert system report given to patient and to clinician; on-screen viewing of the report is also possible.
  • Target recipient(s): Oncology nurses and other clinicians.

Touch-screen Computer Assessment of Chronic Low Back Pain [53]
  • Purpose: To collect pain symptom status and other health information from patients with low back pain.
  • Description: Not described.
  • Stage of development: Working version developed; limited field testing.
  • Hardware & software: Web-based system using a Dell Inspiron 1100 laptop with the Microsoft XP operating system; 14″ touch screen (Magic Touch, Keytec).
  • Data input(s): Patient demographics; Oswestry Low Back Pain Disability Index (Version 2); Beck Depression Inventory; MOS Short Form-36 (MOS SF-36).
  • Output(s): Not described.
  • Target recipient(s): Physicians.
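Several of the systems in Table 1 (most explicitly the Diagnostic Headache Diary) are built on Boolean rule bases: each candidate diagnosis is a conjunction of criteria checked against the entered data. The sketch below shows that approach in miniature; the rule contents are simplified placeholders invented for illustration, not the International Headache Society classification.

```python
# Illustrative sketch of a Boolean rule-based classifier of the kind
# described for the Diagnostic Headache Diary. The diagnostic criteria
# below are invented placeholders, not real clinical rules.

RULES = {
    "migraine-like": lambda d: (d["duration_hours"] >= 4
                                and d["unilateral"]
                                and (d["nausea"] or d["photophobia"])),
    "tension-type-like": lambda d: (d["bilateral"]
                                    and not d["nausea"]
                                    and d["intensity"] <= 5),
}

def classify(diary_entry: dict) -> list:
    """Return every rule label whose criteria the diary entry satisfies."""
    return [name for name, rule in RULES.items() if rule(diary_entry)]

entry = {"duration_hours": 6, "unilateral": True, "bilateral": False,
         "nausea": True, "photophobia": False, "intensity": 7}
print(classify(entry))  # → ['migraine-like']
```

A real system of this kind encodes many more rules and resolves overlaps, but the underlying mechanism is this same criteria-matching step.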

All but two of the CDSSs were designed to target a specific type of chronic pain or pain-related condition. These included: headache (2), low back (1), and cancer-related (3). Input specifications also varied widely, both in terms of the type and amount of data required, and in terms of the party responsible for data entry (i.e., physician, other health care provider, or patient). All of the CDSSs required detailed input regarding pain symptomology. Half of the systems reviewed also elicited data on pain medications currently used, and three requested QOL information. Other data, such as psychological status, history of prior therapies (both pharmaceutical and nondrug-related therapies), patient goals for pain care, and barriers to pain management, were specified as required inputs in only two of the CDSSs reviewed. Three of the CDSSs used standard, psychometrically validated instruments (e.g., McGill Pain Questionnaire, Medical Outcomes Study Short Form 36, Oswestry Low Back Pain Disability Index) to collect input data.

The majority of the CDSSs (5 of 8) produced output that was intended for clinician use (e.g., physician or nurse) only. Three targeted both the clinician and the patient. Output varied considerably in terms of content, format, and delivery (e.g., electronic, paper, or both). Several CDSSs scored and summated patient responses on standard pain and QOL-related assessment measures. Based on the published descriptions, at least five of the CDSSs were designed to generate output in real time at the patient's medical visit.

In terms of systems architecture, all CDSSs reviewed were stand-alone, personal computer-based systems. None interfaced with existing electronic medical records systems, pharmacy, appointment scheduling, or laboratory results reporting. Three either were Web-based or had the capacity to use a Web-based platform.

Table 2 summarizes the types of studies conducted to date to evaluate chronic pain CDSSs. Of the eight CDSSs identified, five had published evaluation results. With one exception, all were exclusively feasibility studies. Studies were conducted in both inpatient and outpatient settings. Among the outpatient studies, two had been conducted with PCPs; the remainder involved specialists in tertiary care settings. Study designs used for evaluating these CDSSs varied: two were cross-sectional, two involved immediate pre- and post-assessments of CDSS use, one was longitudinal (12-month follow-up), and one was a focus group. Study sample sizes ranged from 4 to 213, with the majority having 50 or fewer subjects.

Table 2

Evaluation of computerized decision-support systems (CDSSs) in chronic pain management

Nielsen et al. [47] (Diagnostic Headache Diary)
  • Sample: PCPs.
  • Design: Not described.
  • Outcomes assessed: % agreement of computer-generated vs expert physician diagnoses.
  • Results: 100% agreement between computer-generated and expert physician diagnoses.

Knab et al. [48] (Pain Management Advisor)
  • Sample: N = 50 PCPs; N = 50 chronic pain patients.
  • Design: Longitudinal, with 12-month PCP follow-up.
  • Outcomes assessed: Ease of use; medical appropriateness of recommendations; % of physicians adopting Pain Management Advisor treatment recommendations; % of patients referred to pain specialty clinic.
  • Results: Physician adoption of system-generated recommendations: 85% of cases. Average physician time spent per case: 4.9 minutes (SD ± 3.4). Ease of use as rated by physicians: 4.2 (± 2.8 cm) on a scale of 1–10. 25% of patients referred to a pain specialty clinic; average time to pain specialty referral: 3.7 months (SD ± 0.6). 70% of nonreferred patients still receiving computer-recommended treatment 1 year later.

Wilkie et al. [49] (SymptomReport)
  • Sample: N = 41 outpatients with cancer.
  • Design: Cross-sectional; telephone interviews.
  • Outcomes assessed: 13-item patient acceptability scale assessing ease of use of SymptomReport; input completion time; qualitative assessment of degree of communication with health care providers regarding pain and other symptoms.
  • Results: Mean time to complete SymptomReport: 38.2 minutes (SD ± 20.2); mean time to complete SymptomConsult: 20.9 minutes (SD ± 18.6). 71% of participants rated SymptomReport as easy, enjoyable, and informative; 68% reported that the amount and content of their pain-related communication with their doctor had not changed much. Qualitative patient comments: 1) helped them talk more explicitly about pain; 2) gave them greater awareness of pain symptoms; 3) increased understanding of and enhanced compliance regarding symptom management. Patient comments regarding SymptomConsult: 1) contained too much information; 2) not targeted to individual needs; 3) provided no new information.

Wilkie et al. [52] (PAINReportIt)
  • Sample: N = 213, of whom 106 were cancer inpatients, 10 were metastatic cancer outpatients, and 97 were individuals experiencing acute or chronic pain recruited from non-health care settings.
  • Design: Descriptive, cross-sectional study in three settings: 1) tertiary care; 2) radiation oncology clinic; 3) mobile clinic.
  • Outcomes assessed: 13-item scale measuring acceptability of PAINReportIt (i.e., time to complete, ease of use, understandability of directions, ergonomic elements of the system, and completeness of responses).
  • Results: 86% of respondents rated the PAINReportIt as beneficial for pain communication. 100% patient completion rate. Mean time to completion: 15.8 minutes (SD ± 6.7). Mean patient acceptability score: 11.7 (SD ± 1.6), with scores ranging from 6 to 13; 80% of patients rated acceptability above the minimum criterion of 10. User comments: 1) some mechanical difficulties; 2) reactivity to use of the system (e.g., vomiting); 3) preference to relay information directly to the provider.

Huang et al. [51] (PAINReportIt and PAINConsultN)
  • Sample: Pilot study 1: N = 9 patients with bone metastasis-related pain. Pilot study 2: N = 15 patients with cancer and bone metastasis. Physician focus group: N = 4 radiation oncologists.
  • Design: 1) Pilot test 1: feasibility study using a test–retest within-subject design; 2) Pilot test 2: feasibility study using an 11-day test–retest within-subject design; 3) physician focus group.
  • Outcomes assessed: For both pilot studies: acceptability; completeness; time to complete; validity. Physician focus group: receptivity to PAINReportIt and PAINConsultN.
  • Results: Pilot study 1: mean time to complete PAINReportIt at pretest: 12 minutes (SD ± 4); time to complete at post-test: 1–7 minutes per patient; mean acceptability score: 11.2 (SD ± 1.8); 100% completion rate. Pilot study 2: mean time to complete at pretest: 17 minutes (SD ± 6); at post-test: 14 minutes (SD ± 8); mean acceptability score: 12.2 (SD ± 1.3); 100% completion rate. PAINConsultN recommended a median of 4 drugs; physicians prescribed a median of 3 drugs post-use. Patient pain intensity averaged 4 at baseline and 2.7 at post-test (not significant). Focus group: physicians saw value in PAINReportIt: 1) increased efficiency during the clinic visit; 2) supplemented pain service consultation; 3) provided outcome data. PAINConsultN was viewed as a clinical adjunct, but formatting needed improvement.

Koestler et al. [53] (Touch-screen Computer Assessment of Chronic Low Back Pain)
  • Sample: N = 30 low back pain patients.
  • Design: Cross-sectional design in a tertiary care clinic.
  • Outcomes assessed: Patient ratings of ergonomic design, degree of technical difficulties, acceptability, ease of use, and data security.
  • Results: Mean time to complete the 67-item touch-screen assessment: 27.4 minutes (SD ± 13.8).
  • PCP, primary care physician.

Patient acceptability of the CDSS was the single most commonly assessed variable. Evaluations of both the SymptomReport and the PAINReportIt employed a common tool to assess patient acceptability. Results across four pilot tests, involving a total of 254 subjects, consistently showed high acceptability of these two CDSSs and 100% completion rates for data input [49,51,52]. The average completion time ranged from 14 to 38 minutes.

Two studies addressed the issue of medical accuracy of the system-generated recommendation. Both studies examined this issue by comparing system-generated diagnoses and/or treatment recommendations with those generated by physician experts based on a select sample of patient cases. Results showed moderate to high agreement between the system- and expert-generated recommendations [47,48].
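The agreement measure used in these validation comparisons is simple to compute: the proportion of cases on which the system and the expert reach the same conclusion. A minimal sketch, with invented case data:

```python
# Percent agreement between system-generated and expert-generated
# diagnoses on the same set of cases. The diagnoses below are invented
# for illustration.

def percent_agreement(system: list, expert: list) -> float:
    """Percentage of cases where the two diagnosis lists match."""
    assert len(system) == len(expert)
    matches = sum(s == e for s, e in zip(system, expert))
    return 100.0 * matches / len(system)

system_dx = ["migraine", "tension", "migraine", "cluster"]
expert_dx = ["migraine", "tension", "tension", "cluster"]
print(percent_agreement(system_dx, expert_dx))  # → 75.0
```

Note that raw percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are often preferred for this kind of validation.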

Clinician perceptions concerning ease of use and value of a CDSS for chronic pain management were examined in two studies. Overall, physicians found the system to be moderately easy to use and of some clinical worth [48,52]. Knab and colleagues reported that the average clinician time spent per case on the PMA to obtain output was approximately 5 minutes (± 3.4 minutes). In addition, 85% of physicians adopted the PMA-generated recommendations, and 25% of the study patients seen were referred to a pain specialist, with an average time to referral of 3.7 months (± 0.6 months) [48].

Analyses concerning the impact of CDSSs on patient outcomes were limited. Huang and colleagues [51] assessed changes in pain intensity pre- and post-CDSS use in a sample of radiation oncology clinic patients. Although there was a downward trend in pain levels over time, results were not statistically significant, possibly due to the small size of the sample (N = 15). Wilkie and colleagues [49,52] reported qualitative data regarding the impact of the SymptomReport and the PAINReportIt on patient behavior. Results were contradictory. Of the 41 outpatients who used the SymptomReport, approximately 68% stated that it had not affected their pain-related communication. Some, however, felt that they talked more precisely and explicitly about their pain as a result of using it. Other comments associated with use of the SymptomReport included an increased awareness of pain symptoms and greater compliance with pain symptom management. In contrast, 86% of users of the PAINReportIt cited it as beneficial for patient–doctor pain-related communication, saying that it “freed them to describe their pain.”

Discussion


Over the past two decades, there has been a gradual but steady growth in research on the use of CDSSs for chronic pain management. To date, the number of such systems is small but expanding. Advances have also been evident in terms of both the quantity and quality of evaluation studies conducted. While the earliest versions were presented in the literature as prototypes only [45,46], CDSSs developed since 2001 have all undergone some form of field testing. The majority of these studies, however, have been nonexperimental in design, and focused exclusively on process measures, such as patient or clinician ratings of system acceptability and usability. Other salient process measures, such as the degree to which the clinician and/or patient actually reviewed and utilized system output, or had confidence in its accuracy, have not been consistently assessed. Poor usability and practitioner nonacceptance of computer recommendations can serve as significant barriers to system adoption in routine clinical practice [36,54,55].

User preferences regarding the presentation of computer output, including content, formatting (e.g., color, graphics), and length, have not been solicited in most instances either. Similarly, there are few published data concerning technical difficulties (e.g., type and number of system crashes or touch-screen calibration problems) encountered by CDSS users. Both issues have important ramifications for future system refinements [52]. Additionally, there is a paucity of information on contextual circumstances (e.g., presence of a local “champion” of the system) or the processes used to integrate the CDSS into the existing clinical workflow, key considerations for successful system implementation. Not least, testing has been confined almost exclusively to either inpatient or tertiary care settings, with only two studies conducted in the primary care context to date.

The effects of these systems on patient outcomes remain understudied. Two studies reported qualitative data concerning CDSS impact on patients' perceived pain-related communication with their physician; however, sample sizes were small and results inconsistent [49,52]. One study reported system impact on patient pain intensity level over time, but the study lacked adequate statistical power to detect clinically important differences [52]. Other major patient outcomes, such as health care utilization, health care costs, pain relief, pain medication usage, communication with health care provider about pain, functional status, and QOL, have not been examined. One study reported evidence that CDSS use may invoke patient reactivity (e.g., vomiting, intensified pain symptoms). Potential adverse effects on patients should be measured in future investigations [52]. Similarly, there is need for more extensive and consistent examination of system impact on clinician pain management performance.

While we sought to be as comprehensive as possible in our literature search, our review was restricted to only English-language studies. In addition, it is possible that there are other CDSSs under development that we failed to identify. The limited size of the available literature, as well as the methods used in these primary studies, prevented us from conducting a meta-analysis of research findings, and from reaching more definitive conclusions about the impact of these systems on physician performance and patient pain, functioning, and other aspects of QOL. Notably, in all the studies we examined, study investigators and CDSS developers were one and the same, a fact that may have resulted in more positive findings [36]. Lastly, we did not conduct a separate evaluation of the clinical appropriateness of either the CDSS algorithms or treatment recommendations, nor of the underlying logic employed to generate such algorithms.

The potential for these computer-based systems to improve the quality of chronic pain management in the primary care context is substantial. To manage chronic pain effectively, PCPs first need to conduct a comprehensive patient assessment [56]. Information on the patient's pain experience, history of and preferences for pain treatment, psychological status, approach to self-management, and personal goals/priorities are key variables to collect during assessment, as they are critical for making an accurate diagnosis and for developing an appropriate treatment plan to which the patient will adhere [56]. An expert system-type CDSS provides a way to elicit and integrate such patient-specific information in a manner that is convenient and timely for both physicians and patients. Moreover, the ensuing system-generated recommendations are individualized to the needs and circumstances of the specific pain patient per best clinical practices [56].

CDSSs developed for chronic pain management have yet, however, to fulfill this promise. As our review indicates, systems developed thus far have been predominantly biomedical in focus, and designed exclusively to assist physicians and other health care providers in the medical management of pain symptoms (including invasive procedures and referrals). Only a few of these systems have reached a sufficiently advanced stage of development to warrant more rigorous testing in large-scale, randomized controlled trials [26,49,52]. Such trials are imperative for understanding system effects on provider performance and patient outcomes.

Significantly, none of the systems reviewed were integrated with existing electronic records systems, nor did they include reminder or documentation functionalities, features which have all been shown to increase the likelihood of physician adoption [57,58]. This lack of integration may reflect the fact that widespread adoption of electronic records systems by health care institutions has been a relatively recent occurrence. Potentially this trend, coupled with pressures from major accrediting agencies to document the provision of pain screening and treatment, along with the recent publication of primary care pain management guidelines, may serve to spur additional, more rigorous research on the use of CDSSs for chronic pain management in primary care [56,59,60]. Demonstrating the clinical value of these systems is a critical step in convincing health care organizations and clinicians that the benefits of investing in a CDSS for pain management outweigh potential risks. In particular, physicians need to be assured that this type of system can enhance rather than erode their decision-making abilities, and that time spent learning how to use a CDSS yields measurable improvement in patient health and well-being.

