Randomized Controlled Trial
. 2018 May 24;13(5):e0197844.
doi: 10.1371/journal.pone.0197844. eCollection 2018.

Accurate Pain Reporting Training Diminishes the Placebo Response: Results From a Randomised, Double-Blind, Crossover Trial

Roi Treister et al. PLoS One.

Abstract

Analgesic trials frequently fail to demonstrate the efficacy of drugs known to be efficacious. Poor pain reporting accuracy is a possible source of this low assay sensitivity. We report the effects of Accurate Pain Reporting Training (APRT) on the placebo response in a trial of pregabalin for painful diabetic neuropathy. The study was a two-stage, randomized, double-blind trial: in Stage 1 (Training), subjects were randomized to APRT or No Training. The APRT participants received feedback on the accuracy of their pain reports in response to mechanical stimuli, measured by an R-square score. In Stage 2 (Evaluation), all subjects entered a placebo-controlled, crossover trial. Primary (24-h average pain intensity) and secondary (current, 24-h worst, and 24-h walking pain intensity) outcome measures were reported. Fifty-one participants completed the study. APRT patients (n = 28) demonstrated significant (p = 0.036) increases in R-square scores. The APRT group demonstrated a significantly (p = 0.018) lower placebo response (0.29 ± 1.21 vs. 1.48 ± 2.21; mean difference ± SD = -1.19 ± 1.73). No relationships were found between R-square scores and changes in pain intensity in the treatment arm. In summary, our training successfully increased pain reporting accuracy and resulted in a diminished placebo response. Theoretical and practical implications are discussed.
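The R-square accuracy metric described in the abstract can be illustrated with a minimal sketch. This is a hypothetical example, not the study's actual analysis code: it assumes that accuracy is quantified as the coefficient of determination of a simple linear fit of a subject's pain ratings on the applied stimulus intensities, with invented data values.

```python
# Hypothetical sketch of an R-square pain-reporting accuracy score:
# regress pain ratings on stimulus intensities and report R^2.
# Data values below are invented for illustration only.
import statistics

def r_square(stimulus, ratings):
    """Coefficient of determination of a linear fit of ratings on stimulus."""
    mean_x = statistics.fmean(stimulus)
    mean_y = statistics.fmean(ratings)
    sxx = sum((x - mean_x) ** 2 for x in stimulus)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(stimulus, ratings))
    syy = sum((y - mean_y) ** 2 for y in ratings)
    if sxx == 0 or syy == 0:
        return 0.0  # degenerate case: no variance in stimulus or ratings
    # For a simple linear fit this equals the squared Pearson correlation.
    return sxy ** 2 / (sxx * syy)

# Five mechanical stimuli (arbitrary units) and 0-10 pain ratings (invented)
stimuli = [1, 2, 3, 4, 5]
accurate = [1, 3, 4, 6, 8]  # ratings tracking stimulus closely -> high R^2
noisy = [4, 1, 6, 2, 5]     # ratings tracking stimulus poorly -> low R^2
print(round(r_square(stimuli, accurate), 2))  # -> 0.99
print(round(r_square(stimuli, noisy), 2))     # -> 0.05
```

Under this reading, the APRT feedback loop would amount to showing subjects how well their ratings covary with the delivered stimulus intensities across repeated trials.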

Conflict of interest statement

SEH has received research funding from Cerephex, Eli Lily, Forest Laboratories, and Merck; and served as a consultant for Pfizer, Analgesic Solutions, Aptinyx, and deCode Genetics. He is also co-inventor of the MAST pain testing device and a member of Arbor Medical Innovations, LLC (Ann Arbor, MI). GHK is a co-inventor of the MAST and a member of Arbor Medical Innovations (AMI), LLC, which assisted in providing the MAST systems used as part of this study. He has also served as a consultant to Analgesic Solutions and deCode Genetics. NPK is the CEO of Analgesic Solutions, a clinical research and consulting firm with many clients throughout the pharmaceutical industry. RT and OL are employees of Analgesic Solutions. NK and JDS are former employees of Analgesic Solutions. JB and MF are employees of Grunenthal. This does not alter our adherence to PLOS ONE policies on sharing data and materials.

Figures

Fig 1
Fig 1. CONSORT participant flow diagram.
Fig 2
Fig 2. Study design.
The study included 2 phases: An unblinded parallel-design training stage, and a double-blind crossover evaluation stage.
Fig 3
Fig 3. Improved experimental pain reporting accuracy.
* = P<0.05.
Fig 4
Fig 4. The placebo response in the entire cohort, trained and untrained subjects—Primary outcome measure.
The placebo response was calculated as the difference between pain scores in the placebo arm (pre- minus post-treatment). Black bars represent changes in pain in the entire cohort. White and black bars represent changes in pain in the trained (n = 28) and untrained (n = 23) sub-cohorts, respectively. * = P<0.05; error bars are standard error of the mean (SEM).
Fig 5
Fig 5. The placebo response in the entire cohort, trained and untrained subjects—Secondary outcome measures.
The placebo response was calculated as the difference between pain scores in the placebo arm (pre- minus post-treatment). Black bars represent changes in pain in the entire cohort. White and black bars represent changes in pain in the trained (n = 28) and untrained (n = 23) sub-cohorts, respectively. Error bars are standard error of the mean (SEM).



Grant support

Funding for this project was provided by Grunenthal. The funders had no role in data collection and analysis. We wish to thank the patients who participated in the study.