
Computerized Dynamic Assessment of Pain: Comparison of Chronic Pain Patients and Healthy Controls

Robert N. Jamison PhD, Gilbert J. Fanciullo MD, MS, John C. Baird PhD
DOI: http://dx.doi.org/10.1111/j.1526-4637.2004.04032.x. Pages 168–177. First published online: 1 June 2004.

ABSTRACT

Objective Computerized software holds the potential for novel assessment of the pain experience of patients with chronic pain that is not available through traditional paper-and-pencil methods. The aim of this study was to test the feasibility and discriminant validity of a dynamic computer-administered program for the assessment of pain.

Design Three computer-administered programs were created to assess the intensity (dynamic visual analog scale [DVAS]), character (dynamic verbal ratings), and location (dynamic pain drawings) of pain. The programs were administered to 115 chronic pain patients recruited from a hospital-based pain management program and 115 age- and gender-matched healthy individuals without pain. The healthy controls were instructed to respond as if they had chronic pain.

Results Analyses showed pain patient DVAS pain intensity ratings to be significantly higher than ratings by the healthy group. Patients selected more words in describing their pain, rated those words higher, and marked significantly more pain locations than the comparison group. However, no differences were found in the DVAS ratings of emotional impact between patients and healthy individuals.

Conclusions Chronic pain patients were shown to differ from healthy individuals in their assessments of degree of pain intensity and pain location with the use of a novel computerized pain assessment program. Although further investigations are needed, these initial findings support the use of computer methods for the effective assessment of pain.

  • Chronic Pain
  • Computerized Assessment
  • Healthy Controls
  • Interactive Computer

Introduction

Chronic pain is a subjective multidimensional experience that is strongly influenced by attitudes, beliefs, attention, motivation, and personality. The measurement of sensory processes such as chronic pain falls within the domain of psychophysics, defined as the quantitative study of perception [1,2]. In clinical settings, psychophysical methods are commonly applied to the assessment of pain through the use of validated paper measures [3–6]. Over the years, pain assessment and management have relied heavily on data obtained through reliable and valid self-report questionnaires [7–11].

Despite the ease of administration and widespread use of paper-based pain assessment techniques, they have disadvantages. Use of such instruments can lead to noncompliance, missing data, and fabrication of information if the responders have not completed the requested information at the designated times [12–15]. The process of transferring the data from the paper forms to the computer is a potential source of error. A further disadvantage of paper measures for clinical research is the inability of investigators and clinicians to analyze the data until they are entered into the computer database. With the advent of the Joint Commission on Accreditation of Healthcare Organizations Pain Management Standards, which state that all patients have a right to adequate pain assessment, accurate, versatile, and comprehensive pain scales are needed (see http://www.jcaho.org). Electronic pain assessment holds much promise for meeting these needs [16,17].

With the ready availability of laptop and handheld computers and the capacity to capture time-stamped data, more investigators are exploring options of electronic data collection [18,19]. The advantages of using computers are their portability, the ease of data sharing, and the availability of numerous software applications [20]. Ecological Momentary Assessment, the capture of data from individuals in their natural environments using electronic diaries [18,21,22], has been used to assess smoking cessation [23], alcohol consumption [13,24], compliance with inhaled medications [25], episodes of asthma [26], as well as pain among patients with rheumatoid arthritis [27,28] and fibromyalgia [17]. Electronic data entry affords direct transfer from a study participant's device to a central database, and the use of a dynamic display permits a variety of user-friendly data entry elements and formats. There are interactive scales designed for the computer that cannot be used in the same manner with paper. In light of these technological developments, there is enormous potential for a computer system to administer and analyze the pain experiences of patients [29,30].

Although several reports have been published demonstrating patient use of computer techniques to express the intensity and location of their pain, such studies have been limited in their range of methods [19,31]. Often the same self-report questionnaire is simply re-created for the computer screen. The present investigation represents a more comprehensive approach to pain assessment based on dynamic computer-administered methods. Here, we employ both traditional and new techniques for assessing the intensity, character, and location of pain along with its emotional impact. In order to determine the general applicability of the methods and to identify response differences between two markedly different populations, a sample of chronic pain patients and an age- and gender-matched healthy comparison group were tested. The aims of this study were to: 1) Examine the ease of use of a new computer assessment program designed for persons with chronic pain and 2) Compare patients with chronic noncancer pain and healthy individuals asked to imagine that they are in pain. We hypothesized that this program would be quick and easy to administer for both groups and that persons with chronic pain would demonstrate higher pain and emotion intensity ratings, identify more locations of pain, and use more sensory and emotion pain descriptors in describing their pain than persons without chronic pain. Confirmation of these hypothesized differences between patients and healthy comparator individuals would provide evidence for the discriminant validity of the computerized interactive assessment methods.

Methods

The study was approved by the Committee for the Protection of Human Subjects. Two groups participated. The patient group consisted of 115 individuals (69 women and 46 men) who attended an outpatient hospital-based pain management center for treatment of their pain. Most patients reported having chronic low back pain and had experienced pain for an average of 4 years. The majority of the participants were Caucasian and had at minimum a high school education. All participants were asked to sign an informed consent form, and no one was excluded or refused to participate because of a lack of computer skills. A comparison age- and gender-matched group consisted of 115 healthy volunteers without pain (69 women and 46 men). The latter individuals were recruited from a local senior center and health club. Potential participants were asked to volunteer for a study regarding ratings of pain using a computer. Inclusion criteria were the ability to read and understand English and to use the computer display to render judgments. Individuals under the age of 18 or with a mental or physical disability (e.g., poor eyesight) that precluded their understanding of, or performance on, the task were excluded. Potential healthy volunteers were excluded from participation if they answered “yes” to either of the two following questions: 1) Have you suffered from chronic pain during the past three months? 2) Have you been seen as a patient in a pain clinic within the past three years? The healthy volunteers were instructed to give ratings that reflected their personal view about how a person with chronic pain would make such ratings. Participants did not receive financial remuneration.

The assessment battery was implemented using software written by the research team and administered on an iBook Macintosh G3 laptop computer (13-inch screen, resolution 800 × 600 pixels), which was on a table approximately 70 cm from the user's eyes. The computer and screen were positioned to suit the user. Each session began with a practice trial to familiarize the individual with the computer procedures. All items needed to be completed before advancing to the next screen. The software stored ratings in a separate file for later statistical analysis.

Dynamic VAS

Rating scales were presented on the screen in a horizontal orientation. The scales contained eleven bold marks at integer intervals ranging from 0 (left end) to 10 (right end) and minor marks equally spaced between the bold marks. Successive integers from 0–10 appeared above the bold marks. The scale was 14 cm long. The label “none” anchored the low end of the scale and “maximum” anchored the high end. The words “pain intensity on a typical day” appeared immediately above the scale for ratings of pain, and the words “emotional impact on a typical day” appeared in the same location for ratings of emotion. At the top of the screen for the pain rating, the participant was instructed to “indicate the absolute degree of your pain,” and for the rating of emotion the participant was instructed to “indicate the absolute degree of your pain's emotional impact.” Participants made ratings by adjusting the horizontal length of a continuous green bar (1 cm in width), which lengthened or shortened with the use of the left or right arrow keys. The adjustment process continued until the participant was satisfied with the rating. A screen illustration of a hypothetical rating of pain intensity is shown in Figure 1.

Figure 1

Screen illustration showing the rating of DVAS pain intensity. Hypothetical data generated by a single participant.
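To make the DVAS mechanics concrete, the following is a minimal sketch of such a keyboard-adjusted rating display written in Python/Tkinter. It is illustrative only, not the study software: the pixel geometry, the 0.1-unit step per key press, and all class and variable names are assumptions.

```python
# Minimal sketch (not the authors' code) of a dynamic visual analog scale:
# a 0-10 horizontal scale whose green rating bar is lengthened or shortened
# with the left/right arrow keys. Geometry and step size are illustrative.
import tkinter as tk

SCALE_X0, SCALE_X1, SCALE_Y = 80, 640, 220   # scale geometry in pixels
STEP = (SCALE_X1 - SCALE_X0) / 100           # assumed 0.1-unit change per key press

class DynamicVAS(tk.Frame):
    def __init__(self, master, prompt="Indicate the absolute degree of your pain"):
        super().__init__(master)
        self.rating = 0.0                                  # current rating, 0-10
        self.canvas = tk.Canvas(self, width=720, height=300, bg="white")
        self.canvas.pack()
        self.canvas.create_text(360, 40, text=prompt, font=("Helvetica", 14))
        self.canvas.create_text(360, 80, text="pain intensity on a typical day")
        # Scale line, major ticks, and integer labels 0-10
        self.canvas.create_line(SCALE_X0, SCALE_Y, SCALE_X1, SCALE_Y, width=2)
        for i in range(11):
            x = SCALE_X0 + i * (SCALE_X1 - SCALE_X0) / 10
            self.canvas.create_line(x, SCALE_Y - 8, x, SCALE_Y + 8, width=2)
            self.canvas.create_text(x, SCALE_Y - 20, text=str(i))
        self.canvas.create_text(SCALE_X0, SCALE_Y + 25, text="none")
        self.canvas.create_text(SCALE_X1, SCALE_Y + 25, text="maximum")
        # The adjustable green bar (zero length at the start of a trial)
        self.bar = self.canvas.create_rectangle(
            SCALE_X0, SCALE_Y - 45, SCALE_X0, SCALE_Y - 30, fill="green", width=0)
        master.bind("<Left>", lambda e: self.adjust(-1))
        master.bind("<Right>", lambda e: self.adjust(+1))
        master.bind("<Return>", self.record)

    def adjust(self, direction):
        """Lengthen or shorten the bar by one step and update the rating."""
        x0, y0, x1, y1 = self.canvas.coords(self.bar)
        x1 = min(SCALE_X1, max(SCALE_X0, x1 + direction * STEP))
        self.canvas.coords(self.bar, x0, y0, x1, y1)
        self.rating = round(10 * (x1 - SCALE_X0) / (SCALE_X1 - SCALE_X0), 1)

    def record(self, _event):
        """Store the rating (the study software wrote ratings to a file)."""
        print(f"DVAS rating recorded: {self.rating}")
        self.master.destroy()

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Dynamic VAS (illustrative sketch)")
    DynamicVAS(root).pack()
    root.mainloop()
```

The essential behavior reproduced here is the incremental adjustment: the rating is refined with repeated key presses until the participant is satisfied, rather than being placed with a single click.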

Dynamic Verbal Ratings

A novel computer method was employed for collecting ratings of pain and emotion using verbal descriptors. Participants chose adjectives to describe the sensory nature of their pain and, on a separate display, adjectives to describe the emotional impact of their pain. Eleven sensory pain descriptors were used (by permission of the author) from the Short-Form McGill Pain Questionnaire (SF-MPQ) [32,33]: splitting, tender, heavy, aching, hot-burning, gnawing, cramping, sharp, stabbing, shooting, and throbbing. An equal number of affective descriptors were included (five descriptors from Wade et al. [34], three from the McGill Pain Questionnaire [32], and three by consensus of the authors) to describe the emotional impact of the pain: depression, anxiety, frustration, fear, anger, sickness, exhaustion, stress, sadness, disgust, and shame.

All the verbal descriptors of sensory pain and emotional impact initially appeared in a vertical list at the left margin on one of two screens. A measurement scale across the bottom of the screens displayed numerical values (integers 1–10) and verbal anchors at the two ends (labeled least and best). The vertical positions of the descriptors were randomly varied for each participant. The user dynamically moved the words (descriptors) to positions along the scale to indicate the degree to which the word was appropriate (from least to best) in describing their pain (actual or imagined) or in describing the emotional impact of their pain (actual or imagined). The participant was also permitted to leave words at the starting point (zero on the scale). Word movement was accomplished by using the arrow keys on the keyboard. The user pressed the up or down arrow key to indicate the word to be moved (highlighted in blue), and then pressed the left and right arrow keys to move the highlighted word along the horizontal scale. The target word appeared to change position continuously across the screen as the left or right arrow key was depressed.

The movement of a word was accompanied by a corresponding movement in the position of a red arrowhead that slid along the bottom scale. This allowed the participant to see the rating at any given moment. The words were located in their own independent rows above the horizontal scale so that more than one descriptor could receive the same rating without overlap. The method allowed the user to continue manipulating the positions of the words until satisfied with all the ratings. An illustration of the list of pain descriptors at the beginning of a trial and a hypothetical arrangement of descriptors after ratings were made are shown in Figures 2A and 2B. The same procedure was employed for rating the descriptors of emotional impact. Four separate scores were obtained: 1) Number of descriptors chosen for sensory pain; 2) Number of descriptors chosen for emotional impact; 3) Mean ratings of words chosen for sensory pain; and 4) Mean ratings of words chosen for emotional impact.

Figure 2

Screen illustrations of the list of pain descriptors as seen before ratings were given (A), and the location of pain descriptors after ratings were given (B).
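The state behind this display can be captured with a small data model. The sketch below (illustrative Python, not the study code) tracks one rating per descriptor, the currently highlighted word, and the words moved above zero; the class and method names are assumptions.

```python
# Minimal sketch (assumptions, not the study software) of the state behind the
# dynamic verbal-rating display: each descriptor occupies its own row, starts at
# 0, and the currently highlighted word is moved along the 0-10 scale in steps.
from dataclasses import dataclass, field

SENSORY_DESCRIPTORS = ["splitting", "tender", "heavy", "aching", "hot-burning",
                       "gnawing", "cramping", "sharp", "stabbing", "shooting",
                       "throbbing"]

@dataclass
class VerbalRatingScreen:
    words: list[str]
    ratings: dict[str, int] = field(default_factory=dict)
    highlighted: int = 0                       # index of the word being moved

    def __post_init__(self):
        # All words start at the left margin (rating 0 = "not chosen").
        self.ratings = {w: 0 for w in self.words}

    def select(self, offset):
        """Up/down arrow: change which word is highlighted."""
        self.highlighted = (self.highlighted + offset) % len(self.words)

    def move(self, offset):
        """Left/right arrow: slide the highlighted word along the 0-10 scale."""
        word = self.words[self.highlighted]
        self.ratings[word] = min(10, max(0, self.ratings[word] + offset))

    def chosen(self):
        """Descriptors given a rating greater than zero."""
        return {w: r for w, r in self.ratings.items() if r > 0}

# Example: highlight "aching", move it to 7, then list the chosen words.
# (In the study, the vertical order of words was randomized per participant.)
screen = VerbalRatingScreen(SENSORY_DESCRIPTORS)
screen.select(+3)
for _ in range(7):
    screen.move(+1)
print(screen.chosen())       # {'aching': 7}
```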

Dynamic Pain Drawings

Participants marked the locations of their pain (actual or imagined) on outline figures of the human body by moving the cursor and clicking the mouse. Either single locations (pointing and clicking) or entire regions of the body (holding the mouse button down while highlighting areas of the body) could be marked in this manner. A marked location was designated by a filled 0.4 cm × 0.4 cm red square. The total number of possible squares was 267. The program allowed the user to select locations only within or directly on the outline of the human figures. Participants could select up to three different locations as either single locations or larger regions of pain. A single location (or region) was defined by all the locations marked from the time the mouse was depressed until it was released. Participants were able to return and modify the pain drawings if necessary. An illustration of the pain locations produced by a chronic pain patient is presented in Figure 3.

Figure 3

Screen illustration of the location of pain generated by a chronic pain patient.
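A minimal sketch of the pain-drawing data model follows (illustrative Python, not the study software). It assumes the body outline is pre-computed as a set of valid grid squares and treats every press-drag-release sequence as one region, with at most three regions per participant; all names are assumptions.

```python
# Minimal sketch of the pain-drawing data model: a "region" is every square
# marked between one mouse press and release, squares outside the body outline
# are ignored, and at most three regions may be recorded.
class PainDrawing:
    MAX_REGIONS = 3

    def __init__(self, valid_squares):
        self.valid_squares = set(valid_squares)   # squares inside the body outline
        self.regions = []                          # list of sets of marked squares
        self._current = None                       # region being drawn, if any

    def press(self, square):
        """Mouse button pressed: begin a new region (if fewer than three exist)."""
        if len(self.regions) >= self.MAX_REGIONS:
            return
        self._current = set()
        self.drag(square)

    def drag(self, square):
        """Mouse moved while held down: mark squares inside the outline only."""
        if self._current is not None and square in self.valid_squares:
            self._current.add(square)

    def release(self):
        """Mouse button released: close the current region."""
        if self._current:
            self.regions.append(self._current)
        self._current = None

    def marked(self):
        return set().union(*self.regions) if self.regions else set()

    def score(self):
        """Number of marked squares and the percentage of possible squares (267 here)."""
        n = len(self.marked())
        return n, 100.0 * n / len(self.valid_squares)

# Example with a toy 'body' of 267 numbered squares and one small low-back region.
drawing = PainDrawing(valid_squares=range(267))
drawing.press(140)
for sq in (141, 142, 160, 161):
    drawing.drag(sq)
drawing.release()
print(drawing.score())        # (5, 1.87...) -> 5 squares, roughly 1.9% of the body
```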

Each participant completed the three assessment tasks in the same order: 1) Dynamic visual analog scale (VAS); 2) Dynamic verbal ratings; and 3) Dynamic pain drawings. The total number of descriptors with ratings greater than zero was calculated separately for sensory pain and emotional impact. The average ratings of sensory pain intensity and emotional impact were calculated on the basis of data from all 11 descriptors (including those receiving a zero rating). Finally, a percentage of the possible body locations (available squares) marked on the pain diagrams was determined.
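Assuming each session yields a dictionary of descriptor ratings (0–10) for each list and a count of marked squares, the summary scores just described reduce to a few lines. This is a sketch with hypothetical names, not the study code.

```python
# Summary scores described above: descriptor counts (>0), mean ratings over all
# 11 descriptors (zeros included), and percentage of the 267 possible squares.
TOTAL_SQUARES = 267

def summarize(sensory, emotional, squares_marked):
    """Return the summary scores for one participant's session."""
    return {
        # Number of descriptors rated above zero, separately for each list
        "n_sensory_words": sum(1 for r in sensory.values() if r > 0),
        "n_emotion_words": sum(1 for r in emotional.values() if r > 0),
        # Mean rating over all 11 descriptors, including those rated zero
        "mean_sensory": sum(sensory.values()) / len(sensory),
        "mean_emotion": sum(emotional.values()) / len(emotional),
        # Percentage of the possible body locations that were marked
        "pct_locations": 100.0 * squares_marked / TOTAL_SQUARES,
    }
```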

Statistics

All data were analyzed with Statistical Package for the Social Sciences (SPSS) v.11.0. Relationships between demographic characteristics and computer responses were analyzed using Pearson product moment correlations. t-test comparisons were also made for the individual variables between groups. Because this was a preliminary study, no power analyses were conducted.
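For readers who want to reproduce this style of analysis outside SPSS, a minimal SciPy sketch of the two procedures is shown below; the arrays are simulated placeholder data and the variable names are assumptions, not study values.

```python
# Illustrative sketch of the analyses named above (Pearson product moment
# correlations and between-group t-tests) using SciPy rather than SPSS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
patients = rng.normal(6.0, 2.0, 115)     # e.g., DVAS ratings, patient group (simulated)
controls = rng.normal(5.0, 2.0, 115)     # e.g., DVAS ratings, healthy group (simulated)
age = rng.uniform(18, 91, 115)           # e.g., patient ages (simulated)

# Between-group comparison for a single variable
t, p = stats.ttest_ind(patients, controls)
print(f"t-test: t = {t:.2f}, P = {p:.4f}")

# Relationship between a demographic characteristic and a computer response
r, p = stats.pearsonr(age, patients)
print(f"Pearson: r = {r:.2f}, P = {p:.4f}")
```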

Results

The ages of individuals in the patient group ranged from 18–91 years (mean: 50.1 ± 15.8 SD). The ages of individuals in the healthy group ranged from 19–85 years (mean: 49.4 ± 14.6 SD). There was no significant difference between the mean ages of individuals in the two groups. Once participants were recruited and began the assessment procedure, no individuals dropped out or expressed a desire to quit the task. Table 1 presents means and standard deviations of ratings for patients and healthy individuals on each of the variables. Significant differences were found between patients and controls on DVAS pain intensity ratings. Patients also demonstrated significantly higher verbal ratings of descriptors for sensory pain and emotional impact, chose more words to describe their pain, and identified a greater percentage of pain locations than controls. These significance levels held when the data were analyzed nonparametrically. Differences between groups on DVAS ratings of emotional impact were nonsignificant.

Table 1

Means and standard deviations of ratings by patients and healthy controls for pain assessment variables.

Variable                      Pain patients (N=115)    Healthy controls (N=115)    P value
                              Mean    SD               Mean    SD
DVAS rating
  Pain intensity              6.8     2.2              5.7     2.0                 <0.05
  Emotional impact            5.8     2.9              5.4     2.7                 NS
Number of descriptors
  Sensory pain*               3.9     2.4              1.9     1.2                 <0.05
  Emotional impact*           4.2     2.9              2.5     1.9                 <0.05
Ratings of descriptors
  Sensory pain                2.2     3.4              1.0     2.4                 <0.05
  Emotional impact            2.4     3.4              1.2     2.4                 <0.05
Pain diagrams+                7.0     7.1              3.0     4.8                 <0.05

  • DVAS ratings scored on a 0–10 scale.

  • * Average number of words selected with ratings greater than 0 (range 1–11).

  • Ratings of descriptors are the average ratings of verbal descriptors (0–10).

  • + Number of squares marked on pain diagrams (total possible = 267).

Pearson product moment correlations were calculated between the variables in each group. Age was not found to be related to pain assessment. In general, correlations of the pain assessment variables were significant in the predicted direction as shown in Table 2. For the pain patients, DVAS pain rating was significantly related to DVAS emotion rating, number of sensory words chosen, and the mean ratings of the descriptors, but not significantly correlated with the number of emotional impact words chosen or marked areas on the pain diagram. For healthy controls, all variables were correlated with DVAS pain rating. The ratings among descriptor variables (sensory and emotional impact) were highly correlated. Figure 4 presents the correlations obtained between variables for the healthy group as a function of the correlations obtained for the patient group. A strong linear relationship was found between these two data sets (r = 0.92), indicating that the pattern of interrelationships among the variables was similar in the two groups. This value represents a correlation relating two sets of correlations between the variables of interest (all values taken from Table 2).
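This "correlation of correlations" pairs the corresponding below-diagonal entries of the two groups' correlation matrices and correlates them. A small sketch of that computation is shown below; the function name and the toy matrices are illustrative assumptions, not the study data.

```python
# Sketch of the pattern-similarity index summarized in Figure 4: the
# below-diagonal entries of two correlation matrices are paired up and a
# Pearson correlation is computed across those pairs.
import numpy as np
from scipy import stats

def pattern_similarity(corr_a, corr_b):
    """Correlate the lower-triangle entries of two correlation matrices."""
    idx = np.tril_indices_from(corr_a, k=-1)        # below-diagonal positions
    r, _p = stats.pearsonr(corr_a[idx], corr_b[idx])
    return r

# Toy example: two small correlation matrices with a similar pattern.
a = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.6],
              [0.2, 0.6, 1.0]])
b = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.7],
              [0.1, 0.7, 1.0]])
print(pattern_similarity(a, b))
```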

Table 2

Pearson product moment correlations for chronic pain patients (N=115) and healthy controls (N=115).

Patient group

Variable               Pain DVAS   Emotion DVAS   Sensory word #   Sensory mean   Emotion word #   Emotion mean
Emotion DVAS           0.34*
Sensory word number    0.19*       0.33*
Sensory word mean      0.35*       0.34*          0.88*
Emotion word number    0.13        0.54*          0.68*            0.62*
Emotion word mean      0.23*       0.57*          0.61*            0.64*          0.90*
Pain marks total       0.12        0.04           0.24*            0.39*          0.18*            0.25*

Healthy group

Variable               Pain DVAS   Emotion DVAS   Sensory word #   Sensory mean   Emotion word #   Emotion mean
Emotion DVAS           0.59*
Sensory word number    0.25*       0.16
Sensory word mean      0.41*       0.30*          0.89*
Emotion word number    0.27*       0.29*          0.62*            0.60*
Emotion word mean      0.39*       0.42*          0.74*            0.78*          0.84*
Pain marks total       0.19*       0.12           0.17             0.16           0.28*            0.26*

  • * Statistically significant correlation.

Figure 4

Plot of correlations between variables for pain patients and healthy controls. Data taken from Table 2.

Figures 5A and 5B are bar charts showing the mean ratings and standard deviations for patients and controls on appropriateness of the verbal descriptors. The words in each chart are ordered (bottom to top) from the highest to lowest means obtained for the patient group. Both groups rated aching as the most useful and splitting as the least useful sensory pain descriptor. Both groups also rated frustration as the most useful descriptor of emotional impact and shame as the least useful. The ratings by the patient group for individual pain descriptors were found to be significantly higher (P < 0.05, two tailed) than comparable ratings by the healthy group for all descriptors except gnawing and splitting. All the descriptors of emotional impact were rated significantly higher by the patient group than by the healthy group (P < 0.05, two tailed).

Figure 5

Bar chart comparisons between groups of mean ratings and standard deviations for the sensory pain descriptors (A) and the descriptors of emotional impact (B). r values represent overall Pearson product moment correlations between the mean ratings of descriptors of pain and emotion of the pain patients and those of healthy controls.

Both groups demonstrated a similar rank order among the sensory pain and emotional impact descriptors. Pearson product moment correlations between the mean ratings of descriptors of pain and emotion for the two groups were r = 0.82 and r = 0.97, respectively. Both correlations were significant at P < 0.05.

We also examined ratings of descriptors greater than zero for sensory pain and emotional impact, as shown in Figures 6A and 6B. The relationship between ratings by patients and controls was nonsignificant. This finding suggests that there are few differences among ratings of different descriptors once they are judged to be useful in describing pain. Substantial differences in the number of ratings contributing to the values in Figure 6 militated against statistical testing of possible differences between the means, but inspection of the charts suggests relatively small differences in ratings once a descriptor is scored greater than zero.

Figure 6

Bar chart comparisons between groups of mean ratings of sensory pain descriptors (A) and emotional impact descriptors (B) with ratings >0. r values represent correlations between the pain patients and healthy controls. The relationship between ratings by patients and controls was nonsignificant.

The pain location data were analyzed by computing the total number of times each possible location was marked. We then computed the 25th, 50th, and 75th percentiles of frequencies within each location to highlight differences in the frequencies of marks for different parts of the body in the diagrams. For patients, the most frequent pain locations (75–100th percentile) were at the small of the back and at the base of the neck, with secondary (50–75th percentile) peaks in adjacent back areas, on both knees, and in the region of the right wrist. The comparison group demonstrated a wider distribution of pain locations, although the areas most frequently identified were around the neck and low back. Patients marked more than twice as many pain locations on the body as nonpatient controls (Table 1).
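A sketch of this frequency analysis (illustrative Python; the data structure and names are assumptions) tallies the marks per square and derives the quartile cutoffs used to define the percentile bands:

```python
# Count how often each of the 267 squares was marked across participants and
# return the 25th/50th/75th-percentile cutoffs of those counts.
import numpy as np

def location_frequencies(markings, n_squares=267):
    """markings: one set of marked square indices per participant."""
    counts = np.zeros(n_squares, dtype=int)
    for marked in markings:
        counts[list(marked)] += 1
    q25, q50, q75 = np.percentile(counts, [25, 50, 75])
    return counts, (q25, q50, q75)

# Toy usage: three participants marking a few low-back squares.
counts, (q25, q50, q75) = location_frequencies([{140, 141}, {140, 160}, {140}])
print(np.flatnonzero(counts > q75))   # squares marked more often than the 75th-percentile cutoff
```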

Discussion

This is a preliminary, descriptive study of a dynamic computer-assisted assessment of pain using three instruments in one software program. Our results show that the computerized interactive assessment program is easy to administer and complete. All participants understood the task, quickly learned how to rate each pain variable, and finished the assessment program in approximately 10 minutes. Persons with chronic pain reported significantly higher pain intensity, identified a higher percentage of areas of pain, selected more sensory pain and emotional impact descriptors, and rated the intensity of verbal descriptors of their sensory pain and emotional impact higher than did age- and gender-matched healthy individuals, in agreement with the original hypotheses of the study. No differences, however, were found between groups in their DVAS ratings of emotional impact. The rank orders of the mean ratings for different pain descriptors and for different emotion descriptors were also very similar in the two groups.

Overall, healthy participants reported fewer pain symptoms than did chronic pain patients, thus supporting the notion that persons treated for chronic pain can be differentiated from healthy individuals asked to rate an imagined pain problem. The literature suggests that individuals who have had chronic pain tend to endorse more adjectives from the McGill Pain Questionnaire than individuals with an acute pain problem [4]. It has also been demonstrated that the number of verbal descriptors chosen for pain relates to the level of affective distress experienced by chronic pain patients [35,36]. These results offer support for the discriminant validity of the computerized interactive methods for this population of chronic pain patients.

It is assumed that persons who have not experienced severe persistent pain would tend to underestimate the degree of pain in others, as suggested in the literature [37]. Memory for pain is affected by the amount of pain currently experienced [38,39]. If an individual is without pain, she/he tends not to remember the severity of past pain experiences. It would follow that those without pain would identify fewer areas of pain. Chronic pain often has accompanying radicular symptoms that would not be appreciated by those who are not experiencing discomfort. It is tempting to suggest that degree of pain intensity and area of pain could be used to identify those individuals who are not experiencing “real” pain. Unfortunately, little support exists for the differentiation of medical diagnoses based on pain assessment alone, and factors such as a somatization disorder, extreme emotional distress, or a conscious attempt to deceive the clinician can account for differences on pain assessment scales among pain patients.

It is interesting to speculate why patients and healthy individuals used similar words and marked similar locations for pain. One possibility is that all individuals have experienced an acute pain problem sometime in their life, and, although the comparison group did not consider themselves to have a chronic pain problem, their ratings may have been heavily influenced by their own past acute pain experiences or the chronic pain experiences of their acquaintances. In this study, a new list of emotional descriptors was created to match the number of sensory descriptors. Since this list of words has not been validated for chronic pain patients, some words may be less reliable than others in describing the pain experience of pain patients. Although the dynamic verbal ratings employed innovative computer methodology, the placement of each word proved less important than the number of words chosen in distinguishing between the groups. Further study is needed to validate this scale.

There are known benefits of computer programs over paper-and-pencil measures for pain. Data from paper-and-pencil tests for pain can be time-consuming to enter into a computer before the researcher or clinician can search for possible patterns among different aspects of pain ratings. This issue has hampered attempts to seek such patterns in the past. With the online collection of large amounts of data, it is now possible to identify patterns among measures to help health care providers distinguish among diagnoses and the effectiveness of alternative treatment programs. In addition, the existence of a large pool of assessment data would allow these providers to examine aspects of the judgment process itself that might be linked to specific illnesses. Such an undertaking would be impractical with current paper-and-pencil measures. Future software programs with standard statistical and graphics capabilities could supply health care providers with instant access to multiple aspects of self-reported pain. In addition, the clinician could pick and choose among alternative comparisons between a single patient's ratings and those of other individuals of similar medical diagnoses and demographics.

Because participants had no major problems understanding or using the program, the computerized approach may prove especially valuable in a clinic setting where follow-up information can be obtained while the patient is waiting for treatment. What remains to be determined is how variations in the way the program is organized and administered might influence responses. For instance, the order of administration of tests as well as the nature of the input device (e.g., computer mouse vs arrow keys) may influence outcome. It is also not known how the novelty of the program and the presence of a staff assistant influence ratings.

There are several limitations to this study that deserve mention. First, the healthy individuals may have shared important characteristics with the pain patients that were not identified during the initial screening. The exclusion criteria asked the healthy volunteers whether they had chronic pain over the past 3 months and/or had been treated at a pain clinic over the past 3 years. What was not determined was whether they had a history of intermittent pain. Many individuals are treated for pain by a primary care physician without attending a pain center or seeing a pain specialist. Although more complete information about the history of the healthy individuals might prove valuable, the main finding of this research is that such individuals grossly underestimate the magnitude of pain in all its manifestations. Second, no direct comparisons were made between a dynamic rating of pain and similar paper measures. Logistically this would be difficult to do, since the interactive nature of the computer program cannot be replicated on paper. It is thought that the dynamic ratings offer a different dimension to the pain assessment process, although it is difficult to determine empirically what this difference might be. Third, further reliability testing, with test-retest assessment of each scale and validity testing with other well-known measures, is needed. The descriptor words were taken from other known assessment instruments, but the usefulness of each of the words has not been determined. Future studies may examine other sensory and emotional impact descriptors to determine ultimate usefulness. Finally, the items were not time sensitive and no assessment of current pain was obtained. Re-evaluation of the items in which a period of time is given (e.g., over the past 2 weeks) would be important.

Studies are currently under way to examine the psychometric properties and generality of this novel software program. Before advocating widespread adoption of this approach to assessment, one must determine how well the method can discriminate among patients with different diagnoses, as well as the method's reliability and sensitivity to treatment interventions. It would also be useful to have data on concurrent validity as determined by comparison with an established paper assessment instrument and predictive validity to establish usefulness over time. Despite these limitations, the results suggest that this interactive software program has potential utility for patients and clinicians in the assessment of chronic pain.

Acknowledgements

This study was supported in part by a grant from the National Institute of Mental Health (R43 MH62833-01). Special thanks to Jaylyn Olivo for reviewing an earlier draft of this manuscript. A portion of this study was presented at the 22nd Annual Scientific Meeting of the American Pain Society, Chicago, March 2003.

References
