Enabling User-Guided Segmentation and Tracking of Surface-Labeled Cells in Time-Lapse Image Sets of Living Tissues

David N Mashburn et al. Cytometry A 81(5):409-18.

Abstract

To study the process of morphogenesis, one often needs to collect and segment time-lapse images of living tissues to accurately track changing cellular morphology. This task typically involves segmenting and tracking tens to hundreds of individual cells over hundreds of image frames, a scale that would certainly benefit from automated routines; however, any automated routine would need to reliably handle a large number of sporadic, yet typical, problems (e.g., illumination inconsistency, photobleaching, rapid cell motions, and drift of focus or of cells moving through the imaging plane). Here, we present a segmentation and cell-tracking approach based on the premise that users know their data best: they can interpret and use image features that are not accounted for in any a priori algorithm design. We have developed a program, SeedWater Segmenter, that combines a fast, parameter-less, automated watershed algorithm with a suite of manual intervention tools, enabling users with little to no specialized knowledge of image processing to efficiently segment images with near-perfect accuracy based on simple user interactions.

Figures

Figure 1. Common segmentation difficulties in confocal images of living, cadherin-GFP-stained fruit fly embryos. The large cells on the right of the image are amnioserosa cells; the smaller ones on the left are epidermal. (A) is an unsegmented image and (B) is the same image with an overlay of seeds (small green squares), generated automatically by application of a Gaussian filter (σ = 2.5 μm), and segment outlines (red lines), generated by a watershed algorithm. The numbered arrows point to several common errors in automatic segmentation. (1) An object obscures the view of the cell edge. (2) A single cell is divided between two seeds, i.e., oversegmentation. (3) Two cells share a single seed, i.e., undersegmentation. (4) A region that should be part of the image background instead receives seeds and is assigned as cells. (5) An area of epidermal cells that is very badly mis-segmented because the Gaussian filter is too large for these smaller cells; the user must decide whether segmentation of this region should be completely reworked manually or skipped altogether. A smaller Gaussian filter (σ = 0.625 μm instead of 2.5 μm) would effectively generate seeds for these smaller cells, but at the expense of severely oversegmenting the amnioserosa cells (into ~10 segments each; not shown). (6) Subcellular regions are misassigned; one can often determine which cells these regions belong to based on other images in the time-lapse set.
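The automatic seeding that produced the overlay in (B) can be approximated by taking the local minima of the Gaussian-smoothed image as seeds. This is a sketch of the idea, with an illustrative function name, not SWS's code:

```python
import numpy as np
from scipy import ndimage

def generate_seeds(img, sigma):
    """Place one seed region at each local minimum of the smoothed image.

    Illustrative only: a sigma matched to the large amnioserosa cells merges
    the minima of the smaller epidermal cells (undersegmentation), while a
    small sigma oversegments the large cells -- the trade-off in Figure 1.
    """
    smoothed = ndimage.gaussian_filter(img, sigma=sigma)
    # A pixel is a local minimum if it equals the minimum of its 3x3 window.
    local_min = smoothed == ndimage.minimum_filter(smoothed, size=3)
    seeds, n_seeds = ndimage.label(local_min)
    return seeds, n_seeds
```

On a toy image consisting of a single bright dividing line, this yields one seed region on each side of the line.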
Figure 2. Adding and deleting seeds manually. (A) Initial automatic segmentation of an image (σ = 2.5 μm). Oversegmented regions with unwanted seeds are circled: the upper circled region highlights a cell at the edge of the imaging plane with a poorly defined boundary; the lower-left circled region has two seeds dividing a single cell. (B) Segmentation after manual removal of unwanted seeds. (C) Segmentation after manual addition of seeds to correct undersegmented regions (cyan fill). Seeds were added for sixteen cells around the margins of the tissue; these cells had been considered part of the background by the automatic algorithm. Seeds were also added for three internal cells that had not automatically received their own seeds.
Figure 3. Adding multiple extra seeds to correct mis-segmentation of cellular subregions. (A) Original image to be segmented. (B, C) Initial segmentation, shown as an overlay with green seeds and red segment boundaries in (B) and as a false-colored cell ID map in (C). The subregion just to the right of the central seed is misassigned to an adjacent cell (blue), and the upper-left boundary of the central cell (pink) is also unsatisfactory. (D, E) By adding a single extra seed, the originally mis-segmented subregion is reassigned to the appropriate cell (pink instead of blue). For the user, this is a two-click process: a left-click on the region that needs to be expanded, followed by a right-click to place the extra seed. The inset to the left of (D) shows a close-up of a remaining problematic boundary (with and without overlay). (F, G) This boundary is improved by adding a polyline of extra seeds (green line in (F), which appears as distinct seeds in the inset). Creating a line of seeds works as above, except that multiple right-clicks are used: a line segment of seeds is added between each successive right-click location. As shown in the inset to the left of (F), the result is a slight improvement in the overlap of the watershed boundary and the imaged cell-cell boundary. The best location for this boundary was determined by visually tracking its motion in previous and subsequent frames of the image stack (not shown).
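The polyline interaction in (F, G) amounts to rasterizing a line of seed pixels between successive click locations. A minimal sketch, using simple linear interpolation and an illustrative function name rather than SWS's actual rasterization:

```python
import numpy as np

def seed_line(p0, p1):
    """Pixel coordinates for a line of seeds between two click locations.

    p0, p1: (row, col) click positions. Each pair of successive right-clicks
    would contribute one such segment to the polyline of seeds.
    """
    (y0, x0), (y1, x1) = p0, p1
    # One pixel per step along the longer axis gives a gap-free line.
    n = int(max(abs(y1 - y0), abs(x1 - x0))) + 1
    ys = np.rint(np.linspace(y0, y1, n)).astype(int)
    xs = np.rint(np.linspace(x0, x1, n)).astype(int)
    return [(int(y), int(x)) for y, x in zip(ys, xs)]
```

Stamping these pixels into the marker image with the target cell's ID, then re-running the watershed, pins the boundary along the drawn line.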
Figure 4. Cell tracking when the frame-to-frame movements of cells are large. (A–C) Complete manually assisted segmentation of a cluster of amnioserosa cells. The segmentation overlay shows seeds (green squares) and segment outlines (red lines); (B) is a close-up of the boxed region and (C) is the corresponding false-colored cell ID map. (D–F) Automatic tracking and segmentation of the next frame, after laser ablation of a central cell and with a large time interval between frames (70 s). The large interval exaggerates cell motion between frames and causes the centroid-based algorithm to track cells improperly in some regions, especially near the bottom middle of the image (zoomed region in (E, F)). The errors are clearly discernible in (F) compared to (C). Note that even in this relatively extreme case, automatic tracking performs very well for most cells in the image, particularly outside the boxed region; tracking generally works well unless a cell moves more than half its diameter between frames. (G–I) Corrected tracking and segmentation after using the "Lasso" and "Move Seeds" tools. The "Lasso" tool works by clicking to form a polygon that encircles multiple seeds; these seeds are then moved with the arrow keys into their proper positions. This seed adjustment is quite fast (a few seconds) because the user can start with bulk motion and then adjust individual seeds as needed.
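The failure mode in (D–F) can be understood from a bare-bones nearest-centroid matcher. The sketch below uses illustrative names and is not SWS's actual routine (which carries each cell's seed forward from its previous centroid before re-running the watershed), but it exhibits the same breakdown when displacements approach the inter-cell spacing:

```python
import numpy as np

def track_by_centroid(prev_centroids, new_centroids):
    """Assign each previous cell to the nearest centroid in the new frame.

    prev_centroids, new_centroids: dicts mapping cell ID -> (y, x).
    When a cell moves more than roughly half its diameter, its old centroid
    can land closer to a neighbor's new centroid, producing mis-tracking.
    """
    mapping = {}
    for cid, (y, x) in prev_centroids.items():
        best, best_d = None, np.inf
        for nid, (ny, nx) in new_centroids.items():
            d = (y - ny) ** 2 + (x - nx) ** 2  # squared distance suffices
            if d < best_d:
                best, best_d = nid, d
        mapping[cid] = best
    return mapping
```

With small motions each cell finds its own successor; with a large jump, two previous cells can claim the same new centroid, which is exactly the kind of error the "Lasso" and "Move Seeds" tools correct.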
Figure 5. Comparison of segmentation speed and accuracy for a typical data set: 190 frames with an average of 64 cells per frame. (A, B) Improved accuracy versus time spent on manual intervention using SWS. Both graphs represent the same data; (B) simply has a tighter zoom on the y-axis to more clearly show the data after 50 minutes. Intermediate segmentations were saved after each change in watershed borders, or at least every 60 s. The accuracy at each intermediate time point was assessed as either the percentage of pixels whose assignment matched the final segmentation or the percentage of cells whose boundaries matched the final segmentation. The "first pass" segmentation was performed with minimal tracking, generally letting the algorithms do the work automatically (50 min), achieving a pixel-based accuracy of 98.6% and a cell-based accuracy of 88%. We then performed three more rounds of manual intervention and adjustment that improved the segmentation and tracking to user-desired accuracy in approximately six hours (~2 min per frame). The efficacy of manual intervention will vary with user experience and imaging quality, but this set is representative. The diminishing returns of continued manual intervention are most evident in the pixel-based comparison, but even this curve is somewhat linear over long time periods because we made successive passes through the entire stack. (C) Distribution of errors over all segmented cells for selected intermediate segmentations and other techniques. Errors are defined as deviations from the final SWS segmentation. The x-axis is a normalized list of cell indices sorted from largest to smallest relative error; the y-axis is the number of erroneous pixels for each cell divided by the average area of all cells. We compared a "Hands Off" SWS segmentation with no user intervention and a "First Pass" SWS segmentation with minimal manual tracking assistance. For comparison, we include a 3D watershed segmentation and SWS on a CLAHE-filtered image set. (D) Distribution of errors over all segmented cells in comparison to a "gold standard" manual segmentation using a vector editor. These comparisons are limited to five evenly spaced frames (every 47th frame of the full data set). Absolute errors are thus compared for SWS segmentation, Packing Analyzer, and 3D watershed segmentation. As a baseline, we include the errors induced by pixelating the vector segmentation.
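The two accuracy measures in (A, B) can be formalized as follows. This is one plausible reading; the paper's exact cell-boundary-match criterion may differ:

```python
import numpy as np

def pixel_accuracy(seg, ref):
    """Fraction of pixels whose cell assignment matches the reference."""
    return float(np.mean(seg == ref))

def cell_accuracy(seg, ref):
    """Fraction of reference cells whose full pixel set matches exactly
    (one plausible reading of 'boundaries matched')."""
    ids = np.unique(ref)
    exact = sum(np.array_equal(seg == cid, ref == cid) for cid in ids)
    return exact / len(ids)
```

Note that a single mislabeled pixel leaves pixel accuracy nearly unchanged but counts two whole cells as wrong (the donor and the recipient), which is why the cell-based curve in (A, B) lags the pixel-based one.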
Figure 6. Time-and-space pair correlation function of triple-junction velocities for a data set with 66 segmented cells. The x-axis is the time separation between points (τ) and the y-axis is the distance between points (Δ). Correlations are normalized so that the peak at (0, 0) has a value of 1. The Δ = 0 axis (autocorrelation function) is plotted above the density plot, and the τ = 0 axis (distance correlation) is plotted to its left; dashed lines in these two plots represent zero correlation. The minima and maxima of the autocorrelation appear as dark and bright spots on the Δ = 0 axis of the density plot, with the first of each occurring at 123 and 267 s, respectively. In the full pair correlation function (density plot), the extrema move to longer time delays as the distance between the pair increases. This wave-like propagation is demarcated by the angled dashed line, which has a slope, and thus velocity, of 0.14 μm/s.
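The Δ = 0 axis of this figure is a standard normalized time autocorrelation. A minimal sketch for a single velocity component, with a synthetic sinusoidal series standing in for measured triple-junction velocities:

```python
import numpy as np

def velocity_autocorrelation(v):
    """Normalized time autocorrelation C(tau) of a velocity series
    (the Delta = 0 axis of the pair correlation), with C(0) = 1."""
    v = v - v.mean()  # correlate fluctuations about the mean
    n = len(v)
    # Average the lagged product over the overlapping window at each lag.
    c = np.array([np.mean(v[: n - t] * v[t:]) for t in range(n)])
    return c / c[0]
```

For an oscillatory velocity signal, the first minimum and first maximum of C(τ) fall at half the period and the full period, respectively, mirroring the dark and bright spots described in the caption.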
