OUTLIER DETECTION IN STEREO-VISUAL ODOMETRY BASED ON HIERARCHICAL CLUSTERING

P. A. Pantelyuk, Southern Federal University
Keywords: Navigation, visual odometry, optical flow, outlier detection

Abstract

This paper presents an approach to stereo-visual odometry that does not require explicitly
computing the optical flow. Visual odometry is a method of obtaining navigation information by
processing a sequence of frames from onboard cameras. There are two approaches to processing the
video information: feature-based methods, which rely on well-localized image regions (feature
points), and direct methods, which use all high-contrast pixels. A direct method works with the
intensities of all high-contrast pixels in the image, which makes it possible to avoid the
computational cost of detecting, describing, and matching feature points and to increase the
accuracy of motion estimation. However, methods of this class share a drawback: moving objects in
the frame significantly reduce the accuracy of the estimated motion parameters. Outlier detection
techniques are used to avoid this problem. Classical outlier detection methods such as RANSAC are
poorly applicable here and incur high computational costs because of the computationally expensive
hypothesis-rating function. This work describes and demonstrates an outlier detection approach
based on the hierarchical clustering algorithm, which selects the statistically most probable
solution while bypassing the stage of rating each hypothesis, significantly reducing the
computational complexity. For the hierarchical clustering, a distance measure between hypotheses
with low sensitivity to errors in the estimated motion parameters is proposed. An extension of the
stereo-visual odometry algorithm is also proposed for operation in more difficult visibility
conditions, based on a transition from the intensity representation of the image to a multichannel
binary one. Transforming an image to a multichannel binary representation gives invariance to
changes in image brightness; however, it requires modifying the nonlinear optimization algorithms
to work with binary descriptors. The work shows that the proposed outlier detection algorithm can
operate in real time on mobile devices and can serve as a less resource-intensive replacement for
RANSAC in visual odometry and optical flow estimation problems. Quality metrics of the proposed
solution are demonstrated on the KITTI dataset, and the dependence of the algorithm's performance
on its parameters is given.
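The clustering-based selection described above can be illustrated with a minimal sketch. The paper's actual distance measure between hypotheses (designed for low sensitivity to motion-estimation errors) is not reproduced here; this sketch substitutes a plain Euclidean distance over hypothetical 6-DoF motion vectors and uses single-linkage agglomeration via union-find. The function name and parameters are illustrative, not the paper's API.

```python
import numpy as np

def largest_cluster_estimate(hypotheses, cut_distance):
    """Cluster motion hypotheses (rows: [rx, ry, rz, tx, ty, tz]) by
    single-linkage agglomeration and return the mean of the largest
    cluster -- the statistically most supported motion -- without
    rating every hypothesis against all measurements.
    """
    n = len(hypotheses)
    parent = list(range(n))  # union-find forest

    def find(i):
        # Find the cluster root with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Merge every pair of hypotheses closer than the cut distance.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(hypotheses[i] - hypotheses[j]) < cut_distance:
                parent[find(i)] = find(j)

    # Group members by cluster root and pick the most populated cluster.
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    biggest = max(clusters.values(), key=len)
    return hypotheses[biggest].mean(axis=0), len(biggest)
```

Hypotheses induced by the static scene concentrate around the true camera motion, while those induced by moving objects form smaller, separate clusters, so taking the largest cluster rejects outliers without a RANSAC-style scoring loop.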
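The multichannel binary representation mentioned in the abstract can be illustrated with the classic 3x3 census transform; the exact transform used in the paper is not specified here, so this is a sketch. Each pixel becomes eight binary channels, one per neighbor, set when the neighbor is brighter than the center, which makes the representation depend only on local intensity ordering.

```python
import numpy as np

def census_transform(img):
    """3x3 census transform: encode each interior pixel as 8 binary
    channels, one per neighbor, set when that neighbor is brighter
    than the center pixel. The code depends only on local intensity
    ordering, so it is invariant to monotonic brightness changes.
    """
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    channels = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # skip the center pixel itself
            neighbor = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
            channels.append((neighbor > center).astype(np.uint8))
    return np.stack(channels, axis=-1)  # shape (h-2, w-2, 8)
```

Because any positive affine change of brightness (gain and offset) preserves the ordering of neighboring intensities, the binary channels are unchanged, which is the invariance property the abstract relies on.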

References

1. Howard A. Real-time stereo visual odometry for autonomous ground vehicles, 2008 IEEE/RSJ
International Conference on Intelligent Robots and Systems. IEEE, 2008, pp. 3946-3952.
2. Scaramuzza D., Fraundorfer F. Visual odometry [tutorial], IEEE robotics & automation magazine,
2011, Vol. 18, No. 4, pp. 80-92.
3. Fraundorfer F., Scaramuzza D. Visual odometry: Part II: Matching, robustness, optimization,
and applications, IEEE Robotics & Automation Magazine, 2012, Vol. 19, No. 2, pp. 78-90.
4. Maimone M., Cheng Y., Matthies L. Two years of visual odometry on the Mars Exploration
Rovers, Journal of Field Robotics, 2007, Vol. 24, No. 3, pp. 169-186.
5. Aqel M.O.A. et al. Review of visual odometry: types, approaches, challenges, and applications,
SpringerPlus, 2016, Vol. 5, No. 1, pp. 1-26.
6. Badino H., Yamamoto A., Kanade T. Visual odometry by multi-frame feature integration, Proceedings
of the IEEE International Conference on Computer Vision Workshops, 2013, pp. 222-229.
7. Liu H. et al. Navigational drift analysis for visual odometry, Computing and Informatics, 2014,
Vol. 33, No. 3, pp. 685-706.
8. Usenko V. et al. Direct visual-inertial odometry with stereo cameras, 2016 IEEE International
Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 1885-1892.
9. Derpanis K.G. Overview of the RANSAC Algorithm, Image Rochester NY, 2010, Vol. 4,
No. 1, pp. 2-3.
10. Zuliani M. RANSAC for Dummies, Vision Research Lab, University of California, Santa Barbara,
2009.
11. Nistér D. Preemptive RANSAC for live structure and motion estimation, Machine Vision and
Applications, 2005, Vol. 16, No. 5, pp. 321-329.
12. Raguram R., Frahm J.M., Pollefeys M. A comparative analysis of RANSAC techniques leading
to adaptive real-time random sample consensus, European Conference on Computer Vision.
Springer, Berlin, Heidelberg, 2008, pp. 500-513.
13. Li H. et al. An efficient image matching algorithm based on adaptive threshold and RANSAC,
IEEE Access, 2018, Vol. 6, pp. 66963-66971.
14. Haller I., Nedevschi S. GPU optimization of the SGM stereo algorithm, Proceedings of the
2010 IEEE 6th International Conference on Intelligent Computer Communication and Processing.
IEEE, 2010, pp. 197-202.
15. Piña E. Rotations with Rodrigues’ vector, European Journal of Physics, 2011, Vol. 32, No. 5,
pp. 1171.
16. Suhr J.K. Kanade-Lucas-Tomasi (KLT) feature tracker, Computer Vision (EEE6503), 2009, pp. 9-18.
17. Hafner D., Demetz O., Weickert J. Why is the census transform good for robust optic flow
computation?, International Conference on Scale Space and Variational Methods in Computer
Vision. Springer, Berlin, Heidelberg, 2013, pp. 210-221.
18. Johnson S.C. Hierarchical clustering schemes, Psychometrika, 1967, Vol. 32, No. 3, pp. 241-254.
19. Geiger A. et al. Vision meets robotics: The KITTI dataset, The International Journal of Robotics
Research, 2013, Vol. 32, No. 11, pp. 1231-1237.
Published: 2021-07-18
Section: SECTION II. COMMUNICATION, NAVIGATION AND GUIDANCE