Background: UK general practitioners largely conduct computer-mediated consultations. Although historically there were many small general practice (GP) computer suppliers, there are now around five widely used electronic patient record (EPR) systems. A new method has been developed for assessing the impact of the computer on doctor-patient interaction through detailed observation of the consultation and of computer use.
Objective: To pilot the latest version of this method by measuring the difference in coding and prescribing times on two different brands of general practice EPR system.
Method: We compared two GP EPR systems by observing their use in real-life consultations. Three video cameras recorded the consultation, and screen-capture software recorded computer activity. We piloted semi-automated user action recording (UAR) software to record mouse and keyboard use, overcoming the limitations of manual measurement. Six trained raters analysed the videos using data-capture software to measure the doctor-patient-computer interactions; we used intraclass correlation coefficients (ICC) to measure inter-rater reliability.
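The abstract does not specify which ICC form was used, so as a minimal illustrative sketch, the following computes a one-way ICC (often called ICC(1,1)): between-target and within-target mean squares are taken from ratings laid out as one row per consultation segment and one column per rater. All names and the sample data are hypothetical.

```python
def icc1(ratings):
    """One-way intraclass correlation ICC(1,1).

    ratings: list of rows, one row per rated target (e.g. a consultation
    segment), one column per rater. Returns a value in (-1, 1], where
    values near 1 indicate high inter-rater reliability.
    """
    n = len(ratings)          # number of targets
    k = len(ratings[0])       # number of raters
    grand = sum(x for row in ratings for x in row) / (n * k)
    row_means = [sum(row) / k for row in ratings]

    # Between-target mean square: spread of target means around the grand mean
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-target mean square: disagreement among raters on the same target
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))

    return (msb - msw) / (msb + (k - 1) * msw)


# Hypothetical example: two raters timing three segments (seconds).
# Perfect agreement gives ICC = 1.0.
print(icc1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # -> 1.0
# A constant one-second offset between raters lowers the one-way ICC.
print(icc1([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]]))  # -> 0.6
```

A two-way model (e.g. ICC(2,1)) would additionally separate out systematic rater bias; the choice of form depends on whether raters are treated as a random sample.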
Results: Raters demonstrated high inter-rater reliability for verbal interactions and prescribing (ICC 0.74 to 0.99), but not for measures of computer use. We therefore used UAR to capture computer use and found it more reliable. Coded data entry time differed between the systems: 6.8 compared with 11.5 seconds (P = 0.006). However, the EPR with the shorter coding time had a longer prescribing time: 27.5 compared with 23.7 seconds (P = 0.64).
Conclusion: This methodological development improves the reliability of our method for measuring the impact of different computer systems on the GP consultation. UAR added objectivity to the observation of doctor-computer interactions. If larger studies were to reproduce the differences between computer systems demonstrated in this pilot, it might become possible to make objective comparisons between systems.