Sensors (Basel). 2019 Oct 17;19(20):4503. doi: 10.3390/s19204503.

Exploring Deep Physiological Models for Nociceptive Pain Recognition


Patrick Thiam et al. Sensors (Basel).

Abstract

Standard feature engineering involves manually designing measurable descriptors based on expert knowledge of the application domain, followed by selecting the best-performing set of designed features for the subsequent optimisation of an inference model. Several studies have shown that this manual process can be efficiently replaced by deep learning approaches, which integrate feature engineering, feature selection and inference model optimisation into a single learning process. In the following work, deep learning architectures are designed for the assessment of measurable physiological channels in order to perform an accurate classification of different levels of artificially induced nociceptive pain. In contrast to previous works, which rely on carefully designed sets of hand-crafted features, the current work aims at building competitive pain intensity inference models through autonomous feature learning based on deep neural networks. The assessment of the designed deep learning architectures is based on the BioVid Heat Pain Database (Part A). Experimental validation demonstrates that the proposed uni-modal architecture for the electrodermal activity (EDA) and the deep fusion approaches significantly outperform previous methods reported in the literature, with respective average performances of 84.57% and 84.40% for the binary classification experiment discriminating between the baseline and the pain tolerance level (T0 vs. T4) in a Leave-One-Subject-Out (LOSO) cross-validation evaluation setting. Moreover, the experimental results clearly show the relevance of the proposed approaches, which also offer more flexibility for transfer learning due to the modular nature of deep neural networks.
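The Leave-One-Subject-Out protocol mentioned above can be sketched as follows: every fold holds out all samples of one subject for testing and trains on the remaining subjects, so performance reflects generalisation to unseen persons. The subject labels and fold iteration below are an illustrative sketch, not the authors' implementation.

```python
from collections import defaultdict

def loso_splits(subject_ids):
    """Generate Leave-One-Subject-Out train/test index splits.

    subject_ids: list where subject_ids[i] is the subject of sample i.
    Yields (held_out_subject, train_indices, test_indices) per fold.
    """
    by_subject = defaultdict(list)
    for idx, subj in enumerate(subject_ids):
        by_subject[subj].append(idx)
    for subj in sorted(by_subject):
        test = by_subject[subj]
        train = [i for i, s in enumerate(subject_ids) if s != subj]
        yield subj, train, test

# One fold per subject; e.g. three subjects yield three folds.
folds = list(loso_splits(["A", "A", "B", "B", "C"]))
```

Averaging the per-fold test scores then gives the reported LOSO performance figure.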

Keywords: convolutional neural networks; information fusion; pain intensity classification; signal processing.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Recorded physiological data. From top to bottom: series of artificially induced pain elicitations (T1, pain threshold temperature; T2, first intermediate elicitation temperature; T3, second intermediate elicitation temperature; T4, pain tolerance temperature); EDA (μS); EMG (μV); and ECG (μV).
Figure 2
Data preprocessing. (a) Signal Segmentation. The classification experiments were performed on windows of length 4.5 s with a temporal shift of 4 s from the elicitations’ onset. (b) The ECG signal was further detrended by subtracting a least-squares polynomial fit from the preprocessed signal.
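The ECG detrending step in Figure 2b, subtracting a least-squares polynomial fit, can be sketched as below. The polynomial order is an illustrative assumption; the paper's caption does not specify it.

```python
import numpy as np

def detrend_polynomial(signal, order=3):
    """Remove slow baseline drift by fitting a least-squares polynomial
    to the signal and subtracting the fitted trend (as in Figure 2b).
    `order` is an assumed parameter, not taken from the paper."""
    t = np.arange(len(signal))
    coeffs = np.polyfit(t, signal, order)   # least-squares fit
    trend = np.polyval(coeffs, t)           # evaluate fitted baseline
    return signal - trend
```

Applied to a signal with a purely linear drift and `order=1`, the result is (numerically) zero everywhere, since the fit recovers the drift exactly.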
Figure 3
Early Fusion Architecture. A 2D representation of the input data is generated by concatenating the three physiological modalities and is subsequently fed into the designed deep architecture.
Figure 4
Late Fusion Architectures. (a) The features extracted by the second fully connected layer are concatenated and fed into the output layer. (b) The final output consists of a weighted average of the outputs of each uni-modal model.
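The weighted-average scheme of Figure 4b can be sketched as follows: each uni-modal model emits class probabilities, which are combined with per-modality weights (the α parameters of Figure 5). Normalising the weights to sum to one is an assumption for the sketch.

```python
import numpy as np

def late_fusion(prob_list, alphas):
    """Weighted average of per-modality class probabilities
    (the Late Fusion (b) scheme).

    prob_list: one (n_samples, n_classes) probability array per modality.
    alphas:    one weight per modality; normalised here to sum to one,
               an illustrative assumption.
    """
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()
    stacked = np.stack(prob_list)                  # (n_modalities, n_samples, n_classes)
    return np.tensordot(alphas, stacked, axes=1)   # (n_samples, n_classes)
```

With equal weights, two modalities that disagree symmetrically fuse to a uniform prediction, which is why the learned α values in Figure 5 matter: they let the more reliable channel (here, EDA) dominate.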
Figure 5
Box plots of the weighting parameters α1, α2 and α3 for the late fusion architecture (Late Fusion (b)), computed during the LOSO cross-validation evaluation of each conducted classification experiment. Within each box plot, the mean and median values of the performed LOSO cross-validation evaluation are depicted with a dot and a horizontal line, respectively.
Figure 6
EDA classification performance. Within each box plot, the mean and median values of the respective performance evaluation metrics are depicted with a dot and a horizontal line, respectively.
Figure 7
Late fusion classification performance (Late Fusion (b)). Within each box plot, the mean and median values of the respective performance evaluation metrics are depicted with a dot and a horizontal line, respectively.
Figure 8
Multi-class classification performance (confusion matrix) of the fusion architecture (Late Fusion (b)). The darker the color, the higher the corresponding performance.

