IMAGE MATCHING USING DIFFERENT KEYPOINT TYPES
Abstract
This work presents experiments with various methods of detecting keypoints in images,
followed by describing them with a binary descriptor and matching them by brute-force search.
The paper relies on a method of describing the keypoint neighborhood that builds a binary
string characterizing the brightness changes of pixels in that neighborhood; the string is
obtained by comparing pixel intensities according to a fixed sampling pattern. Today, keypoints
make it possible to build applied methods in various areas of computer vision with strict
requirements on running time and robustness to abrupt scene changes. The paper presents the
results of experiments with keypoints of various classes; the classification is given in Section 1.
The experiments use methods implemented in the OpenCV library, and the paper gives brief
descriptions of the methods used. Section 1 offers a classification of modern types of image
keypoints and briefly describes popular methods for detecting each type. In Section 2, the authors
give a general description of methods for working with image keypoints. Section 3 describes the
experiments on matching keypoints of different types described by a single descriptor and presents
their results. The experiments performed make it possible to identify the strengths and weaknesses
of combinations of different keypoint types when matching them.
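To illustrate the pipeline described above, a minimal sketch in Python with OpenCV is given below: keypoints are detected with one of several interchangeable detectors, described with a single binary descriptor (here ORB's BRIEF-like descriptor, built from pairwise brightness comparisons over a fixed pattern), and matched by brute-force Hamming search. The image file names and the particular detector and descriptor choices are illustrative assumptions and are not necessarily those used in the experiments.

# Sketch of the pipeline from the abstract: detect keypoints, describe them
# with a single binary descriptor, match by brute-force Hamming search.
# File names and detector/descriptor choices are illustrative assumptions.
import cv2

img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

# Interchangeable keypoint detectors (corner-type keypoints in this example).
detector = cv2.FastFeatureDetector_create()    # e.g. FAST corners
# detector = cv2.GFTTDetector_create()         # or Shi-Tomasi corners

kp1 = detector.detect(img1, None)
kp2 = detector.detect(img2, None)

# A single binary descriptor for all keypoint types: ORB's BRIEF-like string
# of pairwise intensity comparisons over a fixed pattern in the patch.
orb = cv2.ORB_create()
kp1, des1 = orb.compute(img1, kp1)
kp2, des2 = orb.compute(img2, kp2)

# Full (brute-force) search over all descriptor pairs with Hamming distance;
# cross-check keeps only mutually nearest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("matches.png", vis)

Swapping the detector line changes the keypoint type being evaluated while the descriptor and the matching stage stay fixed, which mirrors the comparison set-up used in the experiments.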