ISSUES OF NEURAL NETWORK CLUSTERIZER CONSTRUCTION FOR ZIP CODE AUTOMATIC RECOGNITION

  • A.O. Pyavchenko, Southern Federal University
  • A.V. Ilchenko, Southern Federal University
Keywords: Code stamp, postal index for correspondence, preprocessing, convolutional neural networks, supervised learning, neural network clusterizer, modeling, MATLAB package

Abstract

The article discusses topical issues in the development of a neural network clusterizer designed to recognize a handwritten ZIP code on correspondence, written in a simplified code stamp. A structured convolutional neural network serves as the neural network basis for clustering: it forms and clusters the set of classifying features that characterize the characters of the index code. A software model of the neural network clusterizer has been developed that provides character-by-character recognition of a scanned and appropriately preprocessed ZIP code image. The clusterizer was implemented with the Deep Learning Toolbox of the MATLAB R2018b package. The MNIST database of handwritten Arabic digits was used to train the neural network clusterizer. The article presents experimental results showing that the trained software model of the clusterizer recognizes an arbitrary handwritten digit from the validation set with an error rate of 0.72 %. The advantages of the proposed approach are: replacement of the currently used code stamp with a simplified representation of formally unlimited length; the possibility of including in the index handwritten symbols adopted by national postal systems, subject to appropriate prior training of the neural network clusterizer; and the possibility of applying the approach at various stages of correspondence processing, from its reception from citizens to sorting machines in large sorting centers.
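The abstract describes character-by-character recognition of a preprocessed ZIP code image, which implies that the scanned index strip must first be split into individual digit cells before each cell is passed to the clusterizer. The sketch below is an illustrative assumption, not the authors' code: it segments a binarized image into per-character sub-images using gaps in the vertical ink profile, a common preprocessing step for code stamps without fixed cell boundaries.

```python
def segment_digits(image):
    """Split a binarized image (list of rows of 0/1, where 1 = ink)
    into per-character sub-images. Characters are assumed to be
    separated by at least one fully blank column."""
    width = len(image[0])
    # Vertical projection: amount of ink in each column.
    profile = [sum(row[x] for row in image) for x in range(width)]
    segments, start = [], None
    # A trailing sentinel 0 closes the last run of inked columns.
    for x, ink in enumerate(profile + [0]):
        if ink and start is None:
            start = x                      # a character begins
        elif not ink and start is not None:
            # A blank column ends the character: cut out its columns.
            segments.append([row[start:x] for row in image])
            start = None
    return segments


# Toy example: two rows, two "characters" of widths 2 and 3
# separated by two blank columns.
img = [[1, 1, 0, 0, 1, 1, 1],
       [1, 1, 0, 0, 1, 1, 1]]
cells = segment_digits(img)
```

Each element of `cells` would then be rescaled to the input size of the trained network (28x28 for MNIST-trained models) and classified independently, which also accommodates the formally unlimited index length mentioned in the abstract.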

Published
2019-09-23
Section
SECTION I. INFORMATION PROCESSING ALGORITHMS.