DESIGNING MLP AND CNN NEURAL NETWORK MODULES ON FPGA FOR IMAGE CLASSIFICATION TASKS

Abstract

Relevance. The development of machine learning methods and neural network architectures, together with their spread into various industrial sectors, makes their hardware implementation a relevant problem. Using field-programmable gate arrays (FPGAs) in this area increases data processing speed and the adaptability of the implemented algorithms. However, designing neural network architectures on FPGAs involves a number of methodological and technical difficulties, including optimizing parallel computation, managing hardware resources, and ensuring operation under limited computing resources.

Purpose. The purpose of this work is to analyze and compare two neural network architectures, the multilayer perceptron (MLP) and the convolutional neural network (CNN), in the context of their hardware implementation on FPGAs. Particular attention is paid to the trade-off between classification accuracy and the efficient use of limited FPGA hardware resources.

Research methods. Two modules, a perceptron module and a convolutional module, were developed and simulated on a Virtex-7 FPGA. The MNIST dataset, with images reduced to 20×20 pixels, was used. The implementation included quantizing parameters to a fixed-point 16.16 format, optimizing hyperparameters, computing nonlinear functions via lookup tables, and evaluating FPGA resource usage.

Results and discussion. The MLP achieved 93% classification accuracy while using 11% of the logic elements; the CNN achieved 98% accuracy but required significantly more resources. Using internal buffers to store intermediate data in the CNN exceeded the available resources, and the forced transition to external memory increased latency and the number of I/O ports.

Conclusions. The study showed that the choice of architecture depends on priorities: the CNN provides higher accuracy but is less resource-efficient. For embedded systems with memory and power-consumption constraints, a simplified MLP implementation is preferable. The main remaining problems are the lack of internal memory and the high resource intensity of operations, which calls for further research on hardware optimization and adaptive computation control.
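The abstract mentions quantizing network parameters to a fixed-point 16.16 format (16 integer bits, 16 fractional bits). A minimal Python sketch of such Q16.16 arithmetic, assuming signed 32-bit storage with saturation; the function names are illustrative, not from the paper:

```python
def to_q16_16(x: float) -> int:
    """Quantize a float to signed Q16.16 fixed point (scale by 2^16)."""
    raw = int(round(x * (1 << 16)))
    # Saturate to the signed 32-bit range instead of wrapping on overflow.
    return max(-(1 << 31), min((1 << 31) - 1, raw))

def from_q16_16(q: int) -> float:
    """Convert a Q16.16 value back to float (useful for checking quantization error)."""
    return q / (1 << 16)

def q16_16_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values: the product carries 32 fractional bits,
    so shift right by 16 to return to Q16.16."""
    return (a * b) >> 16
```

On the FPGA the same scheme maps to integer multipliers (DSP slices) plus a 16-bit shift; the worst-case quantization error of a single value is half an LSB, i.e. 2^-17.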
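Nonlinear activation functions are computed via lookup tables rather than evaluated directly in hardware. A sketch of the idea for the sigmoid, assuming a 1024-entry table over a saturated input range (the table size and range are illustrative assumptions, not values from the paper):

```python
import math

LUT_SIZE = 1024           # illustrative size; fits comfortably in one block RAM
X_MIN, X_MAX = -8.0, 8.0  # sigmoid is effectively saturated outside this range

# Precompute the table offline; on the FPGA it would be stored in block RAM or LUTs.
SIGMOID_LUT = [
    1.0 / (1.0 + math.exp(-(X_MIN + i * (X_MAX - X_MIN) / (LUT_SIZE - 1))))
    for i in range(LUT_SIZE)
]

def sigmoid_lut(x: float) -> float:
    """Approximate sigmoid by indexing the precomputed table,
    saturating to the end entries outside [X_MIN, X_MAX]."""
    if x <= X_MIN:
        return SIGMOID_LUT[0]
    if x >= X_MAX:
        return SIGMOID_LUT[-1]
    idx = int((x - X_MIN) / (X_MAX - X_MIN) * (LUT_SIZE - 1))
    return SIGMOID_LUT[idx]
```

In hardware the index computation reduces to taking the high-order bits of the fixed-point input, so a table lookup replaces the exponential entirely.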
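The MNIST inputs are reduced from 28×28 to 20×20 pixels. The paper does not state how; one plausible approach, sketched below, is a center crop, since MNIST digits are normalized into a 20×20 box centered in the 28×28 frame:

```python
def center_crop_20x20(img28):
    """Center-crop a 28x28 image (list of 28 rows of 28 values) to 20x20
    by discarding the 4-pixel border on each side. This is one plausible
    reduction; the paper may use a different method (e.g. rescaling)."""
    return [row[4:24] for row in img28[4:24]]
```

The crop shrinks the input layer from 784 to 400 neurons, proportionally reducing the first-layer weight memory on the FPGA.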

Published:

2025-11-10

Section:

SECTION IV. MACHINE LEARNING AND NEURAL NETWORKS

Keywords:

FPGA, neural networks, convolutional networks, multilayer perceptron, quantization, hardware implementation, embedded systems

For citation:

E.V. Melnik, D.E. Blokh, A.I. Bezmeltsev, V.S. Panishchev, S.N. Poltoratsky. DESIGNING MLP AND CNN NEURAL NETWORK MODULES ON FPGA FOR IMAGE CLASSIFICATION TASKS. IZVESTIYA SFedU. ENGINEERING SCIENCES, 2025, No. 5, pp. 214-229.