Sensors (Basel). 2012 Dec 21;13(1):119-36. doi: 10.3390/s130100119.

GPS-supported Visual SLAM With a Rigorous Sensor Model for a Panoramic Camera in Outdoor Environments



Yun Shi et al. Sensors (Basel).

Abstract

Accurate localization of moving sensors is essential for many fields, such as robot navigation and urban mapping. In this paper, we present a framework for GPS-supported visual Simultaneous Localization and Mapping with Bundle Adjustment (BA-SLAM) using a rigorous sensor model for a panoramic camera. The rigorous model does not introduce systematic errors and thus represents an improvement over the widely used ideal sensor model. The proposed SLAM does not require additional restrictions, such as loop closing, or additional sensors, such as expensive inertial measurement units. In this paper, the problems of the ideal sensor model for a panoramic camera are analysed, and a rigorous sensor model is established. GPS data are then introduced for global optimization and georeferencing. Using the rigorous sensor model with the geometric observation equations of BA, a GPS-supported BA-SLAM approach that combines ray observations and GPS observations is established. Finally, our method is applied to a set of vehicle-borne panoramic images captured in a campus environment, and several ground control points (GCPs) are used to check the localization accuracy. The results demonstrate that our method reaches an accuracy of several centimetres.
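The core of the approach described above is a joint least-squares problem in which ray observations of tie-points and GPS observations of the camera positions are adjusted together. The following sketch is not the paper's implementation; it assumes a simplified parameterisation (axis-angle rotation plus camera centre) and illustrative names (`residuals`, `obs_ray`, `w_gps`), and only shows how the two observation types can share one bundle-adjustment cost.

```python
# Minimal sketch of a GPS-supported bundle adjustment cost, assuming a
# simplified ray model; names and weights are illustrative, not from the paper.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def residuals(params, n_cams, n_pts, obs_cam, obs_pt, obs_ray, gps_xyz, w_gps):
    poses = params[:n_cams * 6].reshape(n_cams, 6)    # per camera: (rotvec, centre)
    points = params[n_cams * 6:].reshape(n_pts, 3)    # 3D tie-points (world frame)
    res = []
    # Ray observations: predicted unit ray vs. observed unit ray (camera frame).
    for k in range(len(obs_cam)):
        i, j = obs_cam[k], obs_pt[k]
        rot = R.from_rotvec(poses[i, :3])
        d_cam = rot.inv().apply(points[j] - poses[i, 3:])
        res.extend(d_cam / np.linalg.norm(d_cam) - obs_ray[k])
    # GPS observations: camera centre vs. georeferenced GPS position.
    for i in range(n_cams):
        res.extend(w_gps * (poses[i, 3:] - gps_xyz[i]))
    return np.asarray(res)

# Usage (shapes only):
# sol = least_squares(residuals, x0,
#                     args=(n_cams, n_pts, obs_cam, obs_pt, obs_ray, gps_xyz, 1.0))
```

The GPS term is what provides the global optimization and georeferencing mentioned in the abstract; its weight `w_gps` trades off trust in the GPS positions against the ray observations.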

Figures

Figure 1.
An overview of the results of our proposed method. The blue line in the middle represents the trajectory through the Kashiwa campus of the University of Tokyo, and the nearby green circles are the tie-points. The separate images along the route are from the 5 mono-lenses. Green dots indicate correctly matched tie-points with good distributions, and red dots indicate mismatched points that have been excluded by the error detection steps. A 3D view of the results is shown in the top left corner; blue dots represent the position and posture after SLAM, and pink dots represent the GPS route. The two boxes at the bottom right show the zoomed area containing the GCPs. The light green corresponding rays intersect correctly in the right box, and the RMSEs of the tie-points and check points both reach an accuracy of several centimetres on a grid with a 0.2 m spacing (left box).
Figure 2.
Representation of a panoramic camera consisting of five mono cameras. The dashed line on the left indicates an ideal ray, corresponding to the ideal sensor model, that passes through the panoramic centre T_S, a point u_s on the panoramic sphere, and the object p_s. In reality, u_s is imaged by the mono camera whose projection centre is T_c; the real ray, shown as the solid line and corresponding to our rigorous sensor model, passes through T_c, u_s and p_c. Two errors are introduced by the ideal model: one is the ray-direction bias, and the other is the position offset of the landmarks.
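As a rough numerical illustration of the ray-direction bias just described, the toy computation below places the panoramic centre at the origin, offsets one mono-camera projection centre by a few centimetres, and compares the directions of the ideal and real rays to a nearby object point. All numbers are invented; only the geometry is the point.

```python
# Toy illustration of the ray-direction bias between the ideal and rigorous
# sensor models; the baseline and object position below are made up.
import numpy as np

T_S = np.array([0.0, 0.0, 0.0])    # panoramic centre (ideal projection centre)
T_c = np.array([0.04, 0.0, 0.0])   # mono-camera projection centre, 4 cm offset
p   = np.array([1.0, 3.0, 0.5])    # object point a few metres away

ideal_ray    = (p - T_S) / np.linalg.norm(p - T_S)   # ideal model: ray from T_S
rigorous_ray = (p - T_c) / np.linalg.norm(p - T_c)   # rigorous model: ray from T_c

bias = np.degrees(np.arccos(np.clip(ideal_ray @ rigorous_ray, -1.0, 1.0)))
print(f"ray-direction bias: {bias:.3f} deg")          # grows for closer objects
```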
Figure 3.
The main workflow of GPS-supported BA-SLAM.
Figure 4.
A 3D view of successfully matched tie-points (green). The points excluded by RANSAC (the first outlier elimination step) and histogram voting (the second step) are shown in red, and those excluded by BA (the third step) are shown in blue. The blue rays represent features that cannot intersect precisely, such as feature 267. Feature 267 may be regarded as correctly matched (right-middle images), but the lack of information about features in the window may introduce a bias of one or more pixels, which causes a slight intersection error (left-middle image). In contrast, feature 245 has a better texture and intersects precisely.
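The caption above names a three-step outlier pipeline (RANSAC, histogram voting, BA). The snippet below sketches only the histogram-voting idea, under a common interpretation of that step: vote on the displacement direction of matches between consecutive frames and keep those in the dominant bin. The function name, bin count and single-bin keep rule are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a histogram-voting style match filter; thresholds are illustrative.
import numpy as np

def histogram_voting_filter(pts_prev, pts_curr, n_bins=36, keep_bins=1):
    d = pts_curr - pts_prev                              # per-match displacement (pixels)
    angles = np.arctan2(d[:, 1], d[:, 0])                # displacement direction
    hist, edges = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    top = np.argsort(hist)[-keep_bins:]                  # dominant direction bin(s)
    bin_idx = np.clip(np.digitize(angles, edges) - 1, 0, n_bins - 1)
    return np.isin(bin_idx, top)                         # boolean inlier mask

# mask = histogram_voting_filter(prev_xy, curr_xy); only matches[mask] go on to BA
```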
Figure 5.
Panoramic image and separate images captured by the Ladybug system. (a) Panoramic image. (b) Images from 6 separate fish-eye lenses. The image aimed at the sky is not used in our SLAM.
Figure 6.
Results of the segmented BA-SLAM and GPS-supported BA-SLAM methods. The yellow line is the trajectory of the unconstrained results after data association and block BA. The start point is located in the correct position, but the trajectory shows a large accumulation of uncertainty in angle and scale. The blue line represents the trajectory after the GPS-supported BA-SLAM method is applied and shows a high level of accuracy. All eight GCPs are located in the enlarged area and are shown in Figure 7.
Figure 7.
The eight GCPs, with an accuracy of up to 2 cm, are used in the experiments to check the accuracy of the GPS-supported BA-SLAM.
Figure 8.
(a) Check errors vs. the number of GPS observations used. “Distance interval n” on the X axis means that one GPS observation is selected every n metres. The check errors at all 8 GCPs increase as the number of GPS observations is reduced but remain below 0.35 m. (b) Check errors vs. the number of gross errors added to the GPS observations. On the X axis, “distance interval n” means that a gross error is added to one GPS observation every n metres. The check errors at all 8 GCPs increase as more GPS observations are given gross errors but remain below 0.4 m.
Figure 9.
(a) Comparison of results between GPS observations at a 50 m interval and all GPS observations. There are in fact two trajectories in different colours, blue and green, which cannot be distinguished at a scale of 20 m; only in the zoomed area, at a scale of 2 m, is the very slight difference visible. (b) Comparison of results between GPS observations with errors introduced every 5 m and all error-free GPS observations. As in (a), the difference between the trajectories is only distinguishable in the zoomed area.


