Sensors (Basel). 2020 Jan 16;20(2):504. doi: 10.3390/s20020504.

Visual Locating of Reactor in an Industrial Environment Using the Composite Method

Chenguang Cao et al. Sensors (Basel). 2020.

Abstract

To achieve automatic unloading of reactors during the sherardizing process, the pose and position of the reactors must be calculated in an industrial environment with varying luminance and floating dust. In this study, the shortcomings of classical image-processing methods and deep-learning methods for locating the reactors are first analyzed. Next, an improved You Only Look Once (YOLO) model is employed to find the region of interest of the handling hole, and a handling-hole corner detection method based on image morphology and a Hough transform is presented. Finally, the position and pose of the reactors are obtained by establishing a 3D handling-hole model according to the principle of binocular stereo vision. To test the performance of the proposed method, an experimental system was set up and experiments were conducted. The results indicate that the proposed location method is effective: with the cameras approximately 5 m from the reactor, the position error is kept within 4.64 mm and the orientation error within 1.68°, meeting the requirements.
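The corner-detection step summarized above (morphological denoising of the handling-hole region of interest, followed by a Hough transform and corner recovery from line intersections) can be sketched with OpenCV as below. This is a minimal illustration under assumed parameters: the kernel size, Canny thresholds, Hough vote threshold, and the helper name detect_corners are placeholders, not the values or code used by the authors.

import cv2
import numpy as np

def detect_corners(roi_gray):
    """Approximate handling-hole corners in a grayscale ROI as the
    intersections of roughly perpendicular Hough lines."""
    # Morphological opening to suppress dust-like speckle noise (assumed 3x3 kernel).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    denoised = cv2.morphologyEx(roi_gray, cv2.MORPH_OPEN, kernel)

    # Edge map feeding the standard (rho, theta) Hough line transform.
    edges = cv2.Canny(denoised, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)
    if lines is None:
        return []

    corners = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i][0], lines[j][0]
            # Only roughly perpendicular line pairs can form a hole corner.
            if abs(np.sin(t1 - t2)) < 0.5:
                continue
            # Solve x*cos(t) + y*sin(t) = r for both lines simultaneously.
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            b = np.array([r1, r2])
            x, y = np.linalg.solve(A, b)
            corners.append((float(x), float(y)))
    return corners

Corners matched between the left and right images can then be triangulated: for a rectified stereo pair with focal length f, baseline B, and disparity d, depth follows Z = f*B/d, which is the binocular stereo principle the abstract refers to.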

Keywords: Hough transform; YOLO; handling-hole; reactor; sherardizing.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Schematic of a new automatic unloading system.
Figure 2. Structure of the automatic unloading system.
Figure 3. Experimental model. (a) front of the model; (b) back of the model.
Figure 4. The results of classical image segmentation: (a) raw image; (b) Otsu; (c) watershed algorithm; (d) histogram segmentation.
Figure 5. The MBR and contour of the handling hole.
Figure 6. Flowchart of the RAL method.
Figure 7. Structure of the YOLO-MobileNet.
Figure 8. The distribution of the handling-hole image set.
Figure 9. Flowchart of the accurate location.
Figure 10. Processing of accurate handling-hole location: (a) ROI; (b) denoised image; (c) result of contour detection; (d) parameter coordinate system; (e) raw line detection result; (f) result of corner extraction.
Figure 11. Schematic of a line.
Figure 12. Principle of binocular stereo vision.
Figure 13. Schematic of the non-collinear problem for matching points.
Figure 14. The categories of training images: (a) low brightness images without dust; (b) low brightness images with dust; (c) high brightness images without dust; (d) high brightness images with dust; (e) normal brightness images without dust; (f) normal brightness images with dust.
Figure 15. The training loss curve.
Figure 16. Detection results of the YOLO-MobileNet model: (a–c) good detection results, (d–f) parameter coordinate system.
Figure 17. Flowchart of the true value calculation.
Figure 18. Measurement error in the experiment: (a) measurement error of distance; (b) measurement error of rotation angle.
Figure 19. Interface of the software.
