Application of threshold-bias independent analysis to eye-tracking and FROC data

Dev P Chakraborty et al. Acad Radiol. 2012 Dec;19(12):1474-83. doi: 10.1016/j.acra.2012.09.002. Epub 2012 Oct 4.

Abstract

Rationale and objectives: Studies of medical image interpretation have focused on either assessing radiologists' performance using, for example, the receiver operating characteristic (ROC) paradigm, or assessing the interpretive process by analyzing their eye-tracking (ET) data. Analysis of ET data has not benefited from threshold-bias independent figures of merit (FOMs) analogous to the area under the ROC curve. The aim was to demonstrate the feasibility of such FOMs and to measure the agreement between FOMs derived from free-response ROC (FROC) and ET data.

Methods: Eight expert breast radiologists interpreted a case set of 120 two-view mammograms while eye-position data and FROC data were collected continuously during the interpretation interval. Regions that attracted prolonged (>800 ms) visual attention were treated as virtual marks, and ratings based on dwell and approach rate (the inverse of time-to-hit) were assigned to them. The virtual ratings were used to define threshold-bias independent FOMs in a manner analogous to the area under the trapezoidal alternative FROC (AFROC) curve (0 = worst, 1 = best). Agreement at the case level (0.5 = chance, 1 = perfect) was measured using the jackknife, and 95% confidence intervals (CIs) for the FOMs and the agreement measures were estimated using the bootstrap.
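As an illustration only, the sketch below (Python) shows one way a trapezoidal AFROC-like FOM could be computed from such ratings. The function name, the Wilcoxon-style tie handling, and the use of -inf for unmarked lesions and unmarked normal cases are assumptions made for this sketch, not the authors' implementation.

    import numpy as np

    def empirical_afroc_fom(lesion_ratings, normal_case_max_nl):
        """Trapezoidal (empirical) AFROC-like figure of merit.

        lesion_ratings     : one rating per lesion; use -np.inf for lesions
                             that received no (real or virtual) mark.
        normal_case_max_nl : highest NL rating on each lesion-free case;
                             use -np.inf for unmarked normal cases.

        Returns a value between 0 (worst) and 1 (best): the average of a
        Wilcoxon-style comparison of every lesion rating against the
        highest NL rating of every normal case (ties score 0.5).
        """
        lesions = np.asarray(lesion_ratings, dtype=float)
        normals = np.asarray(normal_case_max_nl, dtype=float)
        gt = (lesions[:, None] > normals[None, :]).astype(float)
        eq = (lesions[:, None] == normals[None, :]).astype(float)
        return float((gt + 0.5 * eq).mean())

    # Toy example: three lesions, three normal cases.
    print(empirical_afroc_fom([0.9, -np.inf, 0.7], [0.2, -np.inf, 0.8]))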

Results: The AFROC mark-ratings' FOM was largest at 0.734 (CI 0.65-0.81), followed by the dwell FOM at 0.460 (0.34-0.59) and the approach-rate FOM at 0.336 (0.25-0.46). The differences between the FROC mark-ratings' FOM and the perceptual FOMs were significant (P < .05). All pairwise agreements were significantly better than chance: ratings vs. dwell 0.707 (0.63-0.88), dwell vs. approach-rate 0.703 (0.60-0.79), and ratings vs. approach-rate 0.606 (0.53-0.68). The ratings vs. approach-rate agreement was significantly smaller than the dwell vs. approach-rate agreement (P = .008).
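The bootstrap CIs reported above could, in principle, be obtained by resampling cases with replacement and recomputing the FOM; the sketch below is again illustrative only (the resampling scheme, data layout, and function names are assumptions, not taken from the paper, and it covers only the bootstrap CI step, not the jackknife agreement measure).

    import numpy as np

    def bootstrap_ci(diseased_case_ratings, normal_case_max_nl, fom_fn,
                     n_boot=2000, alpha=0.05, seed=1):
        """Percentile-bootstrap confidence interval for a case-level FOM.

        diseased_case_ratings : list with one array of lesion ratings per
                                diseased case (cases, not lesions, are
                                resampled).
        normal_case_max_nl    : highest NL rating on each normal case.
        fom_fn                : e.g. empirical_afroc_fom from the earlier
                                sketch.
        """
        rng = np.random.default_rng(seed)
        normals = np.asarray(normal_case_max_nl, dtype=float)
        n_dis, n_nor = len(diseased_case_ratings), len(normals)
        foms = []
        for _ in range(n_boot):
            d_idx = rng.integers(0, n_dis, n_dis)   # resample diseased cases
            n_idx = rng.integers(0, n_nor, n_nor)   # resample normal cases
            lesions = np.concatenate(
                [np.atleast_1d(diseased_case_ratings[i]) for i in d_idx])
            foms.append(fom_fn(lesions, normals[n_idx]))
        return tuple(np.percentile(foms, [100 * alpha / 2,
                                          100 * (1 - alpha / 2)]))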

Conclusions: Leveraging methods developed for analyzing observer performance data could complement current ways of analyzing ET data and lead to new insights.


Figures

Fig. 1
Schematic of the data collection and processing used to obtain real and virtual marks: the radiologists interpreted the images on a two-monitor workstation. Concurrently, and for the duration of the interpretation, an ASL eye-position tracking system determined the line of gaze. The ASL fixation and clustering algorithms are described in the text. The proximity criterion, defined as 2.5° of visual angle, is the maximum distance between a lesion center and a mark for the mark to be considered an LL (correct localization). All other marks are non-lesion localizations. ASL = Applied Sciences Laboratory; NL = non-lesion localization; LL = lesion localization.
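A minimal sketch of how the proximity criterion might be applied when classifying a mark is shown below (Python). The pixel pitch and viewing distance are assumed inputs needed to convert a pixel distance into degrees of visual angle; they are not specified in the caption, and the function is illustrative rather than the authors' code.

    import math

    def classify_mark(mark_xy, lesion_centers_xy, pixel_pitch_mm,
                      viewing_distance_mm, criterion_deg=2.5):
        """Classify a (real or virtual) mark as LL or NL using an angular
        proximity criterion.

        Coordinates are in pixels; pixel_pitch_mm and viewing_distance_mm
        convert the pixel distance to degrees of visual angle.
        """
        mx, my = mark_xy
        for lx, ly in lesion_centers_xy:
            dist_mm = math.hypot(mx - lx, my - ly) * pixel_pitch_mm
            angle_deg = math.degrees(math.atan2(dist_mm, viewing_distance_mm))
            if angle_deg <= criterion_deg:
                return "LL"   # within 2.5 degrees of a lesion center
        return "NL"           # all other marks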
Fig. 2
Example of (a) small-clusters in yellow, (b) big-clusters, prior to threshold, in green and (c) big-clusters, after applying the 800 ms threshold, in blue. The green ‘diamond’ symbol marks the location where search started, whereas the red ‘cross’ symbol indicates the locations marked by the radiologist as containing a malignant lesion. The small yellow dots mark the raw eye-position data. The red circles mark the true locations of the lesions in the two views.
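For illustration, the 800 ms dwell threshold described above could be applied to big-clusters along the following lines (Python). The cluster dictionary fields and the rating conventions (dwell as the rating, inverse time-to-hit as the approach rating) are assumptions for this sketch based on the abstract, not the authors' processing code.

    def virtual_marks(big_clusters, dwell_threshold_ms=800):
        """Keep only clusters whose cumulative dwell exceeds the threshold
        and turn them into virtual marks rated by dwell and by approach
        rate (inverse of time-to-hit).

        big_clusters: list of dicts such as
            {"x": 512, "y": 340, "dwell_ms": 1200.0, "time_to_hit_ms": 3500.0}
        """
        marks = []
        for c in big_clusters:
            if c["dwell_ms"] > dwell_threshold_ms:
                marks.append({
                    "x": c["x"],
                    "y": c["y"],
                    "dwell_rating": c["dwell_ms"],                 # longer dwell -> higher rating
                    "approach_rating": 1.0 / c["time_to_hit_ms"],  # faster hit -> higher rating
                })
        return marks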
