No. 5 (2025)
Full Issue
SECTION I. INFORMATION PROCESSING ALGORITHMS
-
ROBOT PATH PLANNING FOR MULTI-TARGETS BASED ON A HYBRID OF PRM AND AGA ALGORITHM
Alzubairi Shaymaa M. Jawad Kadhim, A.A. Petunin, S.S. Ukolov (pp. 6-18)
Abstract. Optimal path planning problems for mobile robots have been studied particularly actively over the last decade. The goal is to find an optimal or near-optimal path from a starting terminal to one or more target terminals in an environment with obstacles, minimizing robot travel time, distance traveled, energy costs, or another optimization criterion. In this paper, we propose a hybrid algorithm combining a probabilistic roadmap algorithm (PRM) and an adapted genetic algorithm (AGA) to solve a path planning problem with one or more independent targets. The robot's path length is used as the optimization criterion. Compared with existing approaches used in genetic algorithms (GAs), the proposed approach has two main differences. The first is the environment representation, which relies on image processing and morphological operations and has proven more efficient than methods based on cellular representation; in particular, it eliminates the need to trade off accuracy against the speed of processing geometric information. The second is a new tactic for creating the initial population of the genetic algorithm, which leverages the capabilities of the probabilistic roadmap algorithm to accelerate convergence in the presence of multiple targets. Another key feature of the implementation is the appropriate (for the domain under study) selection of the numerical parameters that determine the characteristics of all stages of the evolutionary strategy, including the time required to complete each stage; this applies in particular to the parameters of the mutation operator and the elitist strategy. The proposed algorithm was tested on two real-world maps with varying levels of complexity.
Its effectiveness was confirmed by comparison with path planning results for test maps obtained using a standard genetic algorithm and an ant colony optimization algorithm. Experimental results demonstrate that the hybrid algorithm expands the capabilities of a conventional genetic algorithm and finds rational path variants with the best objective function value for single and multiple objectives in significantly less time than other traditional GA implementations.
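To make the seeding tactic concrete, here is a minimal, hypothetical Python sketch (not the authors' code): a toy GA whose initial population is built from roadmap-style free-space samples rather than random genomes, with path length as the fitness and elitist selection. The map, samples, and mutation rule are illustrative assumptions.

```python
import math
import random

def path_length(path):
    """Total Euclidean length of a polyline path [(x, y), ...]."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

# Hypothetical roadmap: free-space samples between start and goal.
start, goal = (0.0, 0.0), (10.0, 10.0)
samples = [(2, 5), (5, 2), (5, 8), (8, 5)]

def seed_population(n):
    """PRM-style seeding: individuals start as plausible waypoint chains
    through roadmap samples instead of fully random genomes."""
    pop = []
    for _ in range(n):
        mid = random.sample(samples, random.randint(1, 3))
        pop.append([start] + mid + [goal])
    return pop

def evolve(pop, generations=30):
    for _ in range(generations):
        pop.sort(key=path_length)          # shorter path = fitter
        elite = pop[: len(pop) // 2]       # elitist strategy
        children = []
        for p in elite:
            child = list(p)
            if len(child) > 2 and random.random() < 0.5:
                # mutation: drop a redundant interior waypoint
                child.pop(random.randrange(1, len(child) - 1))
            children.append(child)
        pop = elite + children
    return min(pop, key=path_length)

random.seed(0)
best = evolve(seed_population(20))
```

In an obstacle-free toy map the GA quickly prunes waypoints toward the direct line; in the paper's setting the fitness would additionally penalize collisions detected on the image-based map.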
-
CASCADE CLASSIFICATION ALGORITHM FOR DETECTING MALICIOUS SOFTWARE BY STATIC ANALYSIS
A.V. Kozachok, A.V. Kozachok, S.S. Matovykh (pp. 18-35)
Abstract. A study is presented on the development and experimental validation of a two-level cascade architecture for static classification of executable files in the Portable Executable (PE) format. The aim of the work is to reduce computational costs without compromising malware detection quality. At the first level of the cascade, a decision tree model trained on the ten most informative features is used, providing a high detection recall of 0.990 with an acceptable Type I error rate. The second level is implemented by a random forest model on forty features and is intended to refine the classification, reaching a precision of 0.988 and a recall of 0.987 with an F1 score of 0.988. The classification threshold at the first level was established empirically so as to minimize Type II errors, while at the second level the optimal threshold was determined by the Youden index, which provides a balanced ratio of sensitivity and specificity. Experiments on a representative sample have shown that, with a fraction of malicious samples below 20%, the proposed cascade reduces the average analysis time per object by 5-12% compared to the 40-feature model while maintaining comparable classification quality. The break-even malicious fraction of the cascade, 20.6%, is derived analytically and confirmed by empirical data. The practical significance of the work lies in the possibility of integrating the proposed algorithm into antivirus gateways and endpoint protection tools, where fast response and high detection recall are required during mass scanning of mostly legitimate code.
-
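The Youden-index threshold selection mentioned for the second cascade level can be sketched in a few lines. This is a generic illustration on toy scores, not the paper's classifier: Youden's J = TPR - FPR is evaluated at each candidate threshold and the maximizer is kept.

```python
def youden_threshold(scores, labels):
    """Pick the score threshold maximizing Youden's J = TPR - FPR,
    giving a balanced trade-off of sensitivity and specificity."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy scores from a hypothetical second-level model (1 = malicious).
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.95]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
t, j = youden_threshold(scores, labels)
```

In a real cascade the cheap first-level model would route confidently benign files away before the second-level model and this threshold are ever consulted.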
A METHOD FOR CALCULATING CRYPTOGRAPHIC KEYS FROM A PERSON'S BIOMETRIC DATA BASED ON STABLE TRANSFORMATIONS
I.V. Kaliberda (pp. 36-52)
Abstract. This article addresses the task of converting a person's biometric data into cryptographic keys that provide a high level of security. Biometric data, although unique, does not possess sufficient randomness to create strong cryptographic keys. In addition, key storage issues arise: an attacker can steal the template, and the slightest change in the input data (different lighting, facial expressions) creates a risk of mismatch, leading to a high rate of false rejections. As a solution, a cryptographic key generation method is proposed that combines several key technologies to ensure the efficiency and security of the key creation process. The main stages of the method are described: obtaining a face image, image processing, image analysis with extraction of the necessary features using a convolutional neural network, transformation of the image (feature vector) into a binary string, and stable transformations. Stable transformations are understood here as techniques aimed at protecting biometric data: the use of Reed-Solomon error-correcting codes, the generation of a biometrically dependent key followed by its division into parts according to the classical Shamir scheme, and encryption. The advantages of this approach are theoretically justified in terms of reducing the likelihood of false acceptances and false rejections. The results of experiments on public datasets are presented. It is shown that, compared with classical methods (simple sampling) and some existing schemes (BioHashing without error correction), the proposed solution provides higher accuracy. The presented method provides significant security advantages, making cryptographic systems more suitable for high-security applications.
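As a minimal illustration of the classical Shamir step mentioned in the abstract (a self-contained sketch with toy parameters, not the paper's implementation), the key is embedded as the constant term of a random polynomial over a prime field, and any k shares recover it by Lagrange interpolation at zero:

```python
import random

P = 2**127 - 1  # a Mersenne prime; toy field modulus for this sketch

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

random.seed(1)
key = 123456789              # stand-in for a biometrically derived key
shares = make_shares(key, k=3, n=5)
recovered = reconstruct(shares[:3])
```

In the proposed method this step follows Reed-Solomon error correction, so the share-derived key stays stable under small biometric variations.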
SECTION II. DATA ANALYSIS, MODELING AND CONTROL
-
ON CALCULATING THE MEAN INFECTED TIME USING A DISCRETE MARKOV EPIDEMIOLOGICAL MODEL WITHOUT TREATMENT
A.A. Magazev, A.Y. Nikiforova (pp. 53-63)
Abstract. Modeling the spread of viruses is a relevant research field. There are many "continuous" epidemic models based on systems of differential equations. The disadvantage of such models lies in their error in describing the initial stage of virus propagation and in the fact that they ignore the specific features of inter-individual connections. "Discrete" models, in which time and the numbers of infected and susceptible nodes are discrete values, provide a more accurate picture of the epidemic process. In this work, we study a discrete Markov model for the case when there is no treatment. This is an important case, since it can be viewed either as an approximation to the initial phase of an epidemic or as a model for epidemics of viruses that are difficult to treat. The first section provides a detailed description of the properties of the Markov model used in this study. In the second section, using the Markov approach, we define the mean infected time, i.e. the number of time steps taken to infect all individuals in the population. However, calculating the mean infected time in populations with a large number of individuals (or in networks with a large number of nodes) is a computationally difficult problem, so in the third section we propose a corresponding approximate formula for this parameter. This approximation is designed for conditions of low network connectivity and a low probability of virus spread. In the fourth section, to validate our approximate formula, we compare its results against both exact calculations (using the fundamental matrix M) and data from simulation modeling. For the simulations, we developed a custom C++ console application. Our analysis demonstrates that all three methods yield consistent results under the specified conditions, confirming the practical utility of the simpler approximate formula.
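The exact fundamental-matrix calculation referenced above can be sketched compactly. For an absorbing Markov chain with transient-to-transient matrix Q, the expected number of steps to absorption from each transient state is t = M·1 with M = (I - Q)^-1, i.e. the solution of (I - Q)t = 1. The two-state chain below is a toy example, not the paper's model.

```python
def mean_absorption_time(Q):
    """Expected steps to absorption from each transient state:
    solve (I - Q) t = 1 by Gauss-Jordan elimination with pivoting."""
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                for c in range(col, n + 1):
                    A[r][c] -= f * A[col][c]
    return [A[i][n] / A[i][i] for i in range(n)]

# Toy chain: transient states = "1 infected", "2 infected"; the
# all-infected state is absorbing. Row sums < 1 encode absorption.
beta = 0.5
Q = [[1 - beta, beta * 0.5],   # illustrative transition probabilities
     [0.0,      1 - beta]]
t = mean_absorption_time(Q)
```

For this toy chain t = [3, 2] steps; the cost of inverting (I - Q) for large networks is exactly what motivates the paper's approximate formula.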
-
A METHOD FOR EXPRESS ASSESSMENT OF PI-REGULATOR PARAMETERS FOR APERIODIC TRANSIENT PROCESSES IN AUTOMATIC CONTROL SYSTEMS OF NUCLEAR POWER PLANT UNITS
A.O. Tolokonsky, D.S. Menyuk (pp. 64-71)
Abstract. This article discusses the key aspects of tuning the parameters of automatic regulators used in process control systems, in particular at nuclear power plants (NPPs). The need for fine-tuning regulators is emphasized to ensure the stability, efficiency and safety of these systems. Traditional tuning methods such as the Ziegler-Nichols method and frequency analysis are described; despite their reliability, they require significant time and an accurate mathematical model of the control object. In modern production conditions, where efficiency is important, express methods are relevant for reducing setup time, but their accuracy and versatility remain questionable. Special attention is paid to problems that arise when using real regulators, such as integral saturation (windup) and the periodic invocation of the control algorithm. Integral saturation can degrade the dynamic characteristics of the system and even trigger technological protections, while an incorrect choice of the regulator invocation period can cause loss of system stability. Methods. A method for tuning PI controllers is proposed that takes into account the dynamic characteristics of control objects and the results of experimental studies. Recommendations are given on the choice of the proportional gain and the integration time constant, which make it possible to achieve an aperiodic transient process, minimize the risk of saturation and ensure high control quality. Results. The results of experiments conducted on the UMICON software and hardware complex confirmed the effectiveness of the proposed approach. Conclusion. The developed rules for rapid estimation of regulator parameters make it possible to simplify the tuning process, reduce setup time, and improve the reliability of automatic control systems at nuclear power plants.
This is especially important to ensure the safety and stability of such critical facilities as nuclear power plants.
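A minimal discrete PI loop with clamping anti-windup illustrates the two implementation issues the abstract highlights (integral saturation and the periodic call of the control law). This is a generic textbook sketch with illustrative plant parameters, not the express method proposed in the paper.

```python
def pi_step(e, state, kp, ti, dt, u_min, u_max):
    """One invocation of a discrete PI law u = kp*(e + I/ti) with
    clamping anti-windup: integration is frozen while the output
    saturates, preventing integral windup."""
    integral = state["I"]
    u = kp * (e + integral / ti)
    if u > u_max:
        u = u_max
    elif u < u_min:
        u = u_min
    else:
        state["I"] = integral + e * dt   # integrate only when unsaturated
    return u

# First-order plant dx/dt = (K*u - x)/T driven toward setpoint 1.0.
# With ti = T the closed loop is first-order, i.e. aperiodic.
K, T, dt = 2.0, 5.0, 0.1
x, state = 0.0, {"I": 0.0}
for _ in range(2000):
    u = pi_step(1.0 - x, state, kp=1.0, ti=5.0, dt=dt,
                u_min=0.0, u_max=10.0)
    x += dt * (K * u - x) / T
```

Choosing the integration time constant equal to the dominant plant time constant (as here) is one classical way to obtain the aperiodic transient the abstract targets.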
-
CONTROL OF A MULTI-ROBOT SYSTEM BASED ON HIGHER-ORDER SLIDING MODES
Nandanwar Anuj, L.A. Rybak, D.A. Dyakonov (pp. 72-83)
Abstract. The article addresses the control problem of a second-order multi-agent robotic system with discrete time under network-induced delays. A novel approach to formation control is proposed, based on higher-order sliding mode control and cloud technologies. The interaction between agents is described using graph theory, where the Laplacian matrix represents the communication topology between the agents and the leader. The system dynamics are modeled by motion equations for the position and velocity of each agent. Special attention is paid to the impact of network-induced delays that occur during data transmission from sensors to the controller and from the controller to actuators. A multi-stage state predictor is developed, utilizing prediction methods to compensate for random delays in the network.
The proposed control algorithm ensures rapid convergence of the system to the desired formation even in the presence of significant network delays. For each agent, a sliding surface and a reaching law are defined, taking into account multiple timestamps. A detailed stability analysis of the closed-loop system confirms the asymptotic stability of the developed control algorithm. Simulation results in MATLAB demonstrate the high efficiency of the proposed approach: a system consisting of five followers and one leader achieves the desired formation in 10.3 seconds and successfully maintains it despite random network delays. Compared to traditional first-order control methods, the new approach shows significantly improved performance, particularly in reducing chattering effects in control signals. The use of cloud technologies enables efficient real-time processing of large data volumes and implementation of complex prediction algorithms without overloading the local computational resources of the agents. The obtained results confirm the potential of the proposed approach for controlling multi-agent systems under real-world network constraints. The work also demonstrates the feasibility of using prediction methods to compensate for random packet losses and communication delays, ensuring reliable control and communication in dynamic, unpredictable scenarios.
-
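The graph-theoretic bookkeeping behind such leader-follower schemes is simple to state in code: the Laplacian L = D - A of the follower communication graph, augmented by a diagonal "pinning" term for followers that hear the leader. The ring topology and pinning pattern below are illustrative assumptions, not the paper's configuration.

```python
def laplacian(adj):
    """Graph Laplacian L = D - A of an undirected adjacency matrix."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j]
             for j in range(n)] for i in range(n)]

# Hypothetical topology: five followers in a ring; the leader is
# "pinned" to follower 0 only.
A = [[0, 1, 0, 0, 1],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [1, 0, 0, 1, 0]]
L = laplacian(A)
pin = [1, 0, 0, 0, 0]          # leader-to-follower pinning gains
H = [[L[i][j] + (pin[i] if i == j else 0) for j in range(5)]
     for i in range(5)]        # matrix governing tracking dynamics
```

Formation convergence rates in consensus analyses are typically tied to the eigenvalues of this pinned matrix H; every row of L itself sums to zero by construction.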
NATURAL LANGUAGE CONTROL OF CONSTRUCTION ROBOTIC SYSTEMS
D.G. Makoeva, I.R. Tlupov, A.O. Shogenov (pp. 83-93)
Abstract. The study investigates the potential of natural language control systems for construction robots. It is the lack of reliable natural language processing systems that prevents intelligent robotics from fully realizing its potential. The work provides an overview of modern robotic construction systems used to facilitate and improve construction and engineering processes and tasks; what unites all these systems is the absence of natural language control. In this paper, we present principles, algorithms, and methods that allow an intelligent agent to grasp the context of a situation unfolding in the field of construction and engineering tasks. The approach is based on a multi-agent neurocognitive architecture, which serves as a tool for modeling the automatic interpretation of phrases taken from a limited subset of natural language. In order for an intelligent agent to correctly interpret an incoming message, it must accurately determine the conditions, actions, properties, and relationships present in the "intelligent agent - environment" system. Only then does the agent gain the ability to interpret the context of the current dialogue and generate the statements necessary for designing cooperative behavior aimed at jointly overcoming technical obstacles. One of the most common problems in the rapidly developing field of robotics is the development of a dialogue control system capable of coordinating joint human-machine behavior and interpreting goals and mission conditions stated in natural language. A control system based on natural language is an integral part of an intelligent system whose foundation is a self-organizing multi-agent neurocognitive architecture.
Its main goal is to establish seamless communication within human-machine teams so that they can jointly set, describe and successfully complete complex construction tasks. The fundamental element of the approach is its multi-agent organization, which allows the robot's decision-making system to be flexible and adaptive and to continuously expand its knowledge, generating the questions necessary for further work.
-
THE MODULE FOR PREDICTING CONVERTER PARAMETERS BASED ON SPECIFIED AMPLITUDE-FREQUENCY CHARACTERISTICS
V.I. Shlaev (pp. 93-103)
Abstract. The article discusses the problem of developing converters from specified amplitude-frequency characteristics. The main difficulty is the need to carry out a large number of measurement procedures, changing converter parameters until the required amplitude-frequency characteristics are achieved, which leads to high time and resource costs during development. The main converter parameters affecting the specified amplitude-frequency characteristics are analyzed, along with existing approaches, methods and algorithms for creating converters with the required characteristics. The development of a module for predicting the parameters of electromechanical converters from specified amplitude-frequency characteristics is described. The research objectives include the creation of structural-parametric and mathematical models for calculating converter characteristics at the design stage. An algorithm for training a model on experimental data obtained during measurements is described. The use of machine learning methods to predict parameters minimizes the number of experiments performed and reduces the cost of developing converters. The proposed approach exploits the relationship between the design parameters of the converters and their frequency characteristics. A gradient boosting algorithm is used to increase forecasting accuracy. The stages of data preparation for model training and the training process itself are presented. The results demonstrate a significant reduction in converter modeling time: the module speeds up the process several times over compared with the purely experimental approach, while model-based prediction of characteristics provides comparable accuracy given a sufficient amount of data. The findings confirm the effectiveness of the proposed approach for converter development: it reduces time and financial costs, increases modeling accuracy, and remains applicable under resource constraints.
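The gradient boosting idea used in the module can be shown from scratch in a few dozen lines: each round fits a regression stump to the current residuals and adds it with a learning rate. The one-feature synthetic data and parameter names below are illustrative only, not the paper's dataset or model.

```python
def fit_stump(xs, residuals):
    """Best single-split regression stump minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x < t]
        right = [r for x, r in zip(xs, residuals) if x >= t]
        if not left or not right:
            continue
        lm = sum(left) / len(left)
        rm = sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x < t else rm

def predict(x, base, stumps, lr=0.3):
    return base + lr * sum(s(x) for s in stumps)

def boost(xs, ys, rounds=50, lr=0.3):
    """Gradient boosting for squared loss: each stump fits residuals."""
    base = sum(ys) / len(ys)
    stumps = []
    for _ in range(rounds):
        pred = [predict(x, base, stumps, lr) for x in xs]
        stumps.append(fit_stump(xs, [y - p for y, p in zip(ys, pred)]))
    return base, stumps

# Synthetic "design parameter -> response amplitude" pairs.
xs = [1, 2, 3, 4, 5, 6]
ys = [2.0, 2.1, 3.9, 4.0, 6.1, 6.0]
base, stumps = boost(xs, ys)
```

Production modules would use a library implementation with regularization and multi-feature trees; the residual-fitting loop above is the core mechanism.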
-
UNITARY CODE CONVERTERS FOR HOMOGENEOUS COMPUTING SYSTEMS
E.A. Titenko (pp. 104-115)
Abstract. Relevance. The effective operation of computing systems rests, among other things, on commonly needed auxiliary computations for scheduling parallel calculations and analyzing their results. Converters (formers) of unitary codes, which combine the properties of numerical and symbolic information, are important computing units. The purpose of the work is to create high-performance computing tools for processing unitary codes on a single theoretical basis. Research methods. Known one-dimensional and two-dimensional iterative networks form the basis for creating homogeneous converters of unitary codes that satisfy the necessary and sufficient conditions for organizing parallel calculations. To synthesize unitary code converters, the following processing principles inherent to numbers and strings were identified: bidirectional processing, splitting into many local processes with their own starting points, hierarchy, multifunctionality, and digit/symbol dualism. The described converters use known circuit solutions and introduce new ones: a digital compressor, a generator of a series of logical "1"s, an arbiter, and a threshold element of weight and unitary codes. Results and discussion. Practically significant circuits of direct and inverse "8-4-2-1 - normalized code" converters are created for use in homogeneous computing systems (multiprocessors, associative processors, etc.). Quantitative assessments are carried out for the created converter, a threshold element of weight and unitary codes. This converter is based on the dual interpretation of code elements as both a digit and a symbol, which made it possible to eliminate the linear time dependence of comparing two codes at the final stage of calculations (versus the standard method). It is shown that for unitary codes from 12 to 36 bits in size, the time gain is 14-16%. This effect is obtained by eliminating sequential calculations between the cells of the iterative network. Conclusions. To construct effective time-saving schemes for converting unitary codes, the apparatus of iterative networks was used and developed; on its basis, one-dimensional and two-dimensional iterative networks with regular connections were created, as well as converters based on universal logic modules.
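The conversions and the digit/symbol dual reading discussed above are easy to state behaviorally. The sketch below models a direct "8-4-2-1 to unitary (thermometer) code" converter, its inverse, and the bitwise comparison that needs no sequential carry chain; it describes the logic functionally, not the hardware circuits of the paper.

```python
def to_unitary(value, width):
    """Direct converter: binary (8-4-2-1) value -> unitary code,
    i.e. `value` ones followed by zeros (thermometer code)."""
    assert 0 <= value <= width
    return [1] * value + [0] * (width - value)

def from_unitary(code):
    """Inverse converter: count the leading run of ones."""
    total = 0
    for bit in code:
        if bit == 0:
            break
        total += 1
    return total

def compare_unitary(a, b):
    """Digit/symbol dualism in action: a >= b iff a covers b in every
    position, so all bit positions can be checked in parallel."""
    return all(x >= y for x, y in zip(a, b))
```

In hardware, the positionwise covering test is what removes the linear time dependence the abstract mentions: each cell compares independently, and a wide AND gathers the result.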
SECTION III. ELECTRONICS, NANOTECHNOLOGY AND INSTRUMENTATION
-
FORMATION AND INVESTIGATION OF DOPED ZINC OXIDE MEMRISTIVE FILMS FOR MACHINE VISION SYSTEMS OF ROBOTIC COMPLEXES
Z.E. Vakulov, R.V. Tominov, D.A. Dzyuba, V.A. Smirnov (pp. 116-123)
Abstract. The article presents the results of an investigation into how the synthesis modes of doped zinc oxide thin films grown by pulsed laser deposition influence their morphological and electrophysical characteristics. Experimental studies of the influence of dimensional effects on the resistive switching parameters of memristor structures based on thin films of doped zinc oxide have been carried out. The relationship between the morphological parameters of the films, their thickness, and their resistive switching characteristics has been established. Results are obtained showing how thickness, surface roughness and average grain diameter influence the ratio of the resistances in the high-resistance and low-resistance states, as well as the switching voltages Uset and Ures. It is shown that increasing the thickness of gallium-doped zinc oxide films increases the Uset and Ures voltages, while the dependence of the high-resistance to low-resistance ratio has a complex character, with a maximum observed at a film thickness of about 30 nm. The obtained results allow us to estimate the degree of influence of the structural and morphological parameters of doped zinc oxide films on their resistive switching and to formulate recommendations for obtaining films with the required resistive switching parameters. It was found that by increasing the thickness of gallium-doped zinc oxide films from 11.8±5.1 nm to 55.1±18.4 nm, the charge carrier concentration can be changed from (2.84±0.22)·10¹⁹ cm⁻³ to (1.42±0.13)·10²⁰ cm⁻³, and the charge carrier mobility from 54.48±4.07 cm²/(V·s) to 18.77±0.83 cm²/(V·s). At the same time, increasing the film thickness also increases the resistance in the high-resistance state from 1.38±0.11 MΩ to 62.59±5.4 MΩ and in the low-resistance state from 0.005±0.001 MΩ to 0.041±0.002 MΩ. The results obtained can be used in developing the physical principles for creating an electronic component base for artificial intelligence systems and for manufacturing new nanoelectronic devices and adaptive neuromorphic systems.
-
MODELING THE ELECTRIC FIELD OF A SILICON N-I-P NANOSTRUCTURE
N.M. Bogatov, V.S. Volodin, L.R. Grigoryan, M.S. Kovalenko (pp. 123-133)
Abstract. The distribution of ionized impurities, electrons and holes determines the structure, physical properties and performance characteristics of semiconductor devices. Surface electron states play a detrimental role; the degree of their influence on device characteristics depends on the features of the structure. Reducing the size of semiconductor devices is a modern trend in improving electronics, and the influence of surface states on the properties of nanoscale objects increases as size decreases. The object of the study is the electric field of a silicon n-i-p nanostructure. The purpose of the study is to analyze the influence of surface states on the internal electric field of a silicon n-i-p nanostructure. Research objectives: 1) numerically calculate, taking surface states into account, the potential, the electric field strength, and the donor and acceptor concentrations in a silicon n-i-p nanostructure with a diffusion doping profile; 2) determine the influence of the nanostructure thickness and the density of surface states on the potential and electric field strength; 3) determine the composition of the space charge region of the n-i-p nanostructure when the influence of surface states is minimized. The calculation method is based on the numerical solution of the Poisson equation, taking into account the surface states and boundary conditions, including the condition of overall electroneutrality of the sample. As a result, the distributions of the potential and electric field strength were obtained for different values of the nanostructure thickness and the density of surface states. It is shown that charged surface states change the potential and electric field strength not only in the surface region but also in the bulk of the nanostructure. The field strength in the base increases with decreasing thickness and decreases if the density of surface states exceeds 10¹³ cm⁻². Reducing the density of surface states to 10¹² cm⁻² eliminates the surface potential barrier they create. The space charge region consists of five parts: a region of positive charge created by ionized donors, a region enriched in electrons, a region depleted of charge carriers, a region enriched in holes, and a region of negative charge created by ionized acceptors.
-
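The core numerical step, solving the Poisson equation on a grid, can be illustrated in one dimension. This is a deliberately simplified sketch (dimensionless units, Dirichlet boundaries, no surface states or electroneutrality constraint, all of which the paper's model includes): the discretized d²V/dx² = -ρ yields a tridiagonal system solved by the Thomas algorithm.

```python
def solve_poisson_1d(rho, h, v_left, v_right):
    """Solve d2V/dx2 = -rho on a uniform grid of interior points with
    fixed boundary potentials, via the Thomas (tridiagonal) algorithm."""
    n = len(rho)
    # Discretization: V[i-1] - 2 V[i] + V[i+1] = -rho[i] * h^2
    b = [-2.0] * n                       # main diagonal
    c = [1.0] * n                        # super-diagonal
    d = [-rho[i] * h * h for i in range(n)]
    d[0] -= v_left                       # fold boundary values into RHS
    d[-1] -= v_right
    for i in range(1, n):                # forward elimination
        m = 1.0 / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    v = [0.0] * n                        # back substitution
    v[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        v[i] = (d[i] - c[i] * v[i + 1]) / b[i]
    return v

# Uniform charge density with grounded ends -> parabolic potential
# V(x) = x(1 - x)/2 on the unit interval; 9 interior points, h = 0.1.
v = solve_poisson_1d([1.0] * 9, h=0.1, v_left=0.0, v_right=0.0)
```

Because the exact solution here is quadratic, the second-difference scheme reproduces it to machine precision; surface-state charge would enter the real model through the boundary conditions.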
STUDY OF THE PROPAGATION OF LIGHT WITH A WAVELENGTH OF 1.3 μm IN TWO-DIMENSIONAL GaAs-BASED PHOTONIC CRYSTALS WITH A WAVEGUIDE-MICRORESONATOR CONFIGURATION
Maximilian Pleninger, S.V. Balakirev, M.S. Solodovnik (pp. 133-142)
Abstract. Photonic crystals are semiconductor structures characterized by a periodic variation of dielectric permittivity in space with a period comparable to the wavelength of electromagnetic radiation. Interest in these structures is driven both by the importance of fundamental research into light-matter interactions and by the prospects for applying photonic crystals in optical integrated circuits and next-generation optoelectronic components. This paper presents the results of a study of the propagation of electromagnetic radiation with a wavelength of 1.3 μm in two-dimensional photonic crystals based on gallium arsenide (GaAs). The research is based on a numerical model built in the Comsol Multiphysics 6.1 software package and includes an analysis of the electric field intensity distribution in photonic crystal structures consisting of a waveguide coupled to a hexagonal microcavity (microresonator) with various geometric parameters. The influence of the radius of a deliberately introduced defect in the waveguide region on the efficiency of radiation transfer into the resonator was also analyzed. For the numerical analysis, methods for simulating the propagation of transverse electric waves in two-dimensional photonic crystals with a hexagonal lattice of air holes were employed. The geometric parameters of the basic photonic crystal structure remained constant: the air hole radius was 209 nm and the lattice period 520 nm. The waveguide was formed by removing one row of air holes, while the microresonator was created by forming a hexagonal air cavity near the waveguide. To enhance the coupling efficiency between the waveguide and the resonator, a defect in the form of an air hole with a variable radius was introduced into the structure. Analysis showed that maximum localization of the electromagnetic field in a hexagonal cavity with a diameter of 1.65 μm was achieved when the cavity was positioned two rows of air holes away from the waveguide; increasing this distance reduced the field intensity within the resonator. Introducing the defect significantly enhanced the energy transfer efficiency from the waveguide to the resonator, with the highest integral electric field intensity in the resonator observed for defect radii from 246 to 290 nm. The obtained data can be used in the development of compact optical devices such as lasers, modulators, and switches based on photonic crystals.
-
SIGE BICMOS OUTPUT STAGES OF HIGH-TEMPERATURE OPERATIONAL AMPLIFIERS
A.A. Zhuk, D.V. Kleimenkin, N.N. Prokopenko (pp. 143-159)
Abstract. The development and design of silicon-germanium (SiGe) analog functional units (operational amplifiers, output stages, etc.) is one of the urgent tasks of modern microelectronics. The combined SiGe BiCMOS process makes it possible to unite in a single integrated circuit the advantages of complementary CMOS transistors (low power consumption and high integration density) and n-p-n bipolar heterojunction transistors (HBTs): the ability to operate at high frequencies, low power consumption and hence low intrinsic heat dissipation, high gain, high performance, increased reliability, and relatively low cost. To create a micro-power analog component base operating at high temperatures (up to +250 °C), it is necessary to develop special SiGe BiCMOS circuit solutions that take into account the process limitations on the use of certain types of transistors. Four modifications of buffer amplifiers intended for use as output stages of operational amplifiers and oriented to the SiGe BiCMOS process are investigated; they differ from each other in input and output resistances, static current consumption, the circuitry used to establish the static mode, the maximum amplitudes of positive and negative output voltages, etc. A program for cataloging and visualizing the considered circuits is developed. Examples of computer simulation of static modes and amplitude characteristics in the Cadence electronics design environment at two temperatures, +27 °C and +250 °C, are given. The proposed circuit design solutions are recommended for practical use in microelectronic devices operating at elevated temperatures.
-
SIMULATION AND ANALYSIS OF THE STRESS-STRAIN STATE OF A PRESSURE SENSOR’S ELASTIC MEMBRANE BASED ON “SILICON ON SAPPHIRE” STRUCTURE
S.P. Malyukov, V.D. Mishnev (pp. 159-167)
Abstract. High accuracy and improved performance of pressure sensors are essential to ensure safety, quality and efficiency in various industries and machinery. Using the finite element method (FEM) in the design of pressure sensors makes it possible to improve their accuracy through a deeper analysis of the mechanical and physical processes that arise under pressure loads. The purpose of this work is to build an accurate three-dimensional model of the sensitive element of a pressure sensor and to analyze the stress-strain state of its elastic membrane under loads from 0 to 15 MPa. The main tasks of the work are: studying the properties and parameters of the materials used in the sensitive element of a pressure sensor based on the "silicon on sapphire" structure; obtaining the maximum equivalent stress arising in the elastic membrane of the sensitive element under a pressure load of 125% of the nominal value; and determining the distribution of radial and tangential strains of the elastic membrane and the best placement of resistance strain gauges on the surface of the sensitive element. The research showed that the materials used have good resistance to aggressive environments and can operate over a wide temperature range and under high pressure loads. Based on the simulation results, the maximum equivalent stress was determined and shown not to exceed the ultimate strength of the sensitive membrane, and the distribution of radial and tangential strains over the surface of the sensitive element was obtained, which makes it possible to determine the optimal layout of the strain gauge bridge circuit.
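A quick sanity check that complements such FEM studies is the classical thin-plate estimate (Timoshenko plate theory) for a clamped circular membrane under uniform pressure, whose maximum radial bending stress occurs at the clamped edge: σ = 3pR²/(4t²). The geometry below is purely illustrative; only the 125%-of-15-MPa load follows the abstract.

```python
def clamped_plate_max_stress(p, R, t):
    """Classical thin-plate estimate: maximum radial bending stress of a
    clamped circular plate under uniform pressure p, at the clamped
    edge: sigma = 3 * p * R^2 / (4 * t^2). Valid for t << R."""
    return 3.0 * p * R**2 / (4.0 * t**2)

# 125% of the 15 MPa nominal load on a hypothetical membrane of
# radius 1 mm and thickness 0.1 mm (illustrative, not the paper's).
sigma = clamped_plate_max_stress(p=18.75e6, R=1e-3, t=0.1e-3)
```

Comparing such a closed-form edge stress against the FEM maximum equivalent stress is a common way to validate the mesh and boundary conditions before trusting the full 3D results.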
-
FLOATING–POINT ADDER IN DIGITAL PHOTONIC COMPUTING SYSTEMS
D.А. Sorokin , I.I. Levin168-178Abstract ▼Within the structural computation paradigm proposed by the authors, digital photonic computing systems are expected to employ sequential data processing, which allows for the minimization of operand duty cycle gaps when data is supplied from external memory or other electronic sources to the photonic device. This becomes feasible when the processing time per operand does not exceed the number of clock cycles corresponding to the operand’s bit width. Moreover, sequential digit–wise processing significantly reduces hardware costs associated with dataflow synchronization. The elimination of duty cycle gaps and reduction in structural overhead can substantially enhance the efficiency of digital photonic computing systems relative to their electronic counterparts. However, to enable photonic computational architectures capable of solving complex and computation–intensive problems in domains such as mathematical physics, linear algebra, neural network processing, and others, it is necessary to implement core arithmetic functions in floating–point format. Most of these functions are built around elementary integer addition. In binary systems with sequential processing in least–significant–digit–first order, integer adders are unable to begin producing results until all bits have been processed and carry propagation is complete, thereby doubling the operand duty cycle and increasing latency. To address these issues, this paper proposes the use of a quaternary signed–digit number representation with operands processed in most–significant–digit–first order. This representation enables immediate transmission of the most significant digits of the result to downstream processing units, without waiting for the completion of lower–order digit computation. 
This paper addresses the design of all components of the signed-digit floating-point adder: the exponent difference unit, the mantissa denormalization unit for the operand with the smaller exponent, the mantissa adder, the mantissa normalization unit for the result, and the exponent correction unit. Operational algorithms for these units are presented. The performance of the proposed signed-digit adder has been evaluated on a prototype implemented in a digital photonic logic framework on the reconfigurable “Terzius” computing platform. It is demonstrated that, due to the high clock frequency achievable by digital photonic computing devices, their performance can exceed that of microelectronic devices by nearly two decimal orders of magnitude.
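The carry-limited property of signed-digit arithmetic can be illustrated with a small sketch. This is a generic least-significant-digit-first, Avizienis-style addition in radix 4 with digits in {-3, ..., 3}, not the paper's MSD-first hardware algorithm; the key property is the same, though: transfer digits propagate at most one position, so no long carry chains form.

```python
def sd_add(x, y):
    """Add two quaternary signed-digit numbers (digit lists, least
    significant first, digits in -3..3) without carry propagation chains."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    w, t = [], []                      # interim sums and transfer digits
    for i in range(n):
        z = x[i] + y[i]                # position sum, in [-6, 6]
        tr = 1 if z > 2 else (-1 if z < -2 else 0)
        w.append(z - 4 * tr)           # interim sum, |w| <= 2
        t.append(tr)
    # each result digit combines an interim sum with the transfer from
    # the position below, so |digit| <= 3 and no further carries arise
    return [w[0]] + [w[i] + t[i - 1] for i in range(1, n)] + [t[n - 1]]

def sd_value(d):
    """Numeric value of a quaternary signed-digit list."""
    return sum(di * 4**i for i, di in enumerate(d))
```

Because every output digit depends only on two adjacent positions, an MSD-first hardware version can emit high-order result digits before low-order ones are computed, which is what makes the representation attractive for sequential photonic pipelines.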
-
DISCRETE-ANALOGUE FILTER OF THE SECOND ORDER ON SWITCHED CAPACITORS WITH TUNING OF POLE FREQUENCY BY DIGITAL POTENTIOMETER
D.Y. Denisenko, N.N. Prokopenko, Y.I. Ivanov, D.V. Kuznetsov 179-189. Abstract: A second-order discrete-analogue filter based on two switched capacitors is developed and investigated. The proposed circuit contains two inputs (In_LPF_HPF, In_BPF_NPF) and four outputs (Out_LPF, Out_BPF, Out_HPF, Out_NPF). The filter type (the numerator of the transfer function) is determined by connecting a signal source to the corresponding input of the circuit and taking the signal from the corresponding output. The pole attenuation depends on the resistance of a single resistor, R5, which does not affect the other parameters; therefore, the pole attenuation can be tuned with this resistor. To set the passband gain at a given level, it is appropriate to use resistor R1 in the LPF and HPF, and resistor R2 in the BPF and NPF; changing these resistors does not alter the other parameters of the filter circuit. It is established that the pole frequency depends on the resistance of resistor R8 or on the transmission coefficient Kdp (Kf) of a digital potentiometer, which can be changed via the binary digital code Kf fed to its control inputs, while the other parameters of the filter link do not depend on them; thus, by changing this resistance or the potentiometer's transmission coefficient, the pole frequency can be tuned over a wide range while preserving the other parameters. Computer modelling of the investigated discrete-analogue filter is performed in the Micro-Cap environment. The pulse sequences controlling the electronic switches are given. Graphs of the output voltages at the circuit outputs (Out_LPF, Out_BPF, Out_HPF, Out_NPF) are shown. The use of a digital potentiometer in the filter circuit is highly promising for building adaptive signal processing systems.
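The tuning principle rests on the standard switched-capacitor identity: a capacitor toggled at clock rate f_clk behaves like a resistor of value 1/(f_clk*C). The sketch below is a generic illustration of that identity and of a first-order RC pole estimate; the component values are assumptions, not the paper's circuit:

```python
import math

def sc_equivalent_resistance(C, f_clk):
    # A capacitor C switched at f_clk transfers charge C*V each cycle,
    # so the average current is C*V*f_clk: it emulates R_eq = 1/(f_clk*C)
    return 1.0 / (f_clk * C)

def pole_frequency(R, C):
    # generic RC estimate: f0 = 1 / (2*pi*R*C)
    return 1.0 / (2.0 * math.pi * R * C)

# doubling the clock halves R_eq and doubles the pole frequency,
# while ratios between matched elements (and hence Q) stay fixed
R1 = sc_equivalent_resistance(C=10e-12, f_clk=100e3)   # 1 megaohm
R2 = sc_equivalent_resistance(C=10e-12, f_clk=200e3)   # 0.5 megaohm
```

A digital potentiometer scales one such effective resistance under code control, which is why the pole frequency can be retuned digitally without disturbing the remaining filter parameters.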
-
RECOGNITION AND ADAPTIVE GENERATION OF PSEUDO-RANDOM TESTS OF SEQUENTIAL DIGITAL DEVICES
Y.E. Zinchenko, T.A. Zinchenko 189-204. Abstract: The purpose of this paper is to improve the efficiency of pseudo-random testing of digital devices compared with the conventional approach. To achieve this goal, the following main tasks are solved: analysis of the effectiveness of traditional testing approaches; development of a new approach based on recognition and adaptive pseudo-random testing of digital devices; and development of a testing system based on the proposed approach, followed by experimental studies. The devices under test are sequential digital devices (containing memory elements) implemented as printed circuit boards built from small- and medium-scale integrated circuits. Stuck-at faults are used as the fault model in test synthesis and analysis. The subject of this research is sequential digital devices as diagnostic objects and approaches to their pseudo-random testing. An approach to recognizing and testing sequential digital devices is presented that combines traditional pseudo-random testing of the device under test at the first stage with constructing an "alternative graph" of the device at the second stage and subsequently "wandering" along this graph to improve testing efficiency. Based on the proposed approach, the AGAT system for recognizing and testing digital devices has been developed. Testing can be performed for one device or a group of devices under test on a single computer or within a local network, including "multithreaded" operation exploiting the multi-core processors of the networked personal computers. Extensive studies of the proposed approach and the developed system were carried out on two types of devices under test: the ISCAS'89 benchmark circuits and a set of PCBs from a specialized radio engineering system.
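Pseudo-random test patterns of the kind applied at the first stage are typically produced by a linear-feedback shift register (LFSR). A minimal sketch follows; the 4-bit width and tap positions are illustrative choices, not details of the AGAT system:

```python
def lfsr_patterns(seed, taps, width, count):
    """Fibonacci LFSR: XOR the tap bits, shift left, feed the bit back in.
    With a primitive feedback polynomial the state cycles through all
    2**width - 1 nonzero patterns before repeating."""
    state, out = seed, []
    mask = (1 << width) - 1
    for _ in range(count):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        out.append(state)
    return out

# x^4 + x^3 + 1 (taps at bit positions 3 and 2) is primitive: period 15
patterns = lfsr_patterns(seed=0b0001, taps=(3, 2), width=4, count=15)
```

In a hardware tester the same register drives the device-under-test inputs, while fault coverage analysis decides when purely pseudo-random stimulation stops being productive and the adaptive, graph-guided stage should take over.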
SECTION IV. MACHINE LEARNING AND NEURAL NETWORKS
-
HARDWARE NEURAL NETWORK BASED ON MEMRISTIVE TITANIUM OXIDE STRUCTURES
V.I. Avilov, L.A. Dushina, N.V. Polupanov, V.A. Smirnov 205-214. Abstract: The paper presents the results of manufacturing, training and studying a hardware neural network prototype implemented as a crossbar array of artificial synapses based on memristive nanostructures of electrochemical titanium oxide. A prototype of a fully connected neural network was developed, consisting of four input electrodes, a crossbar array of 16 artificial synapses based on electrochemical titanium oxide nanostructures, and four output electrodes. It is shown that the process of current flow through such a structure fully corresponds to the mathematical model of the neural network. Various implementations of artificial synapses that allow realizing negative "weights" of the neural network were analyzed, and an optimal option was selected. Based on the developed structure, a prototype of a fully connected neural network was manufactured using magnetron sputtering, optical lithography, and nanolithography based on scanning probe microscopy. To train the neural network, an algorithm for switching individual memristors was developed that eliminates parasitic switching of neighboring structures caused by leakage currents. To demonstrate the operation of the manufactured neural network model, a task of classifying two input signals was proposed. To implement negative "weights", each incoming signal was duplicated with negative polarity. The outputs of the trained neural network should register: 1) an excess of the first signal; 2) an excess of the second signal; 3) both signals high. The training and study of the neural network were carried out using the "Neuro InT" hardware and software complex developed by the staff of the Research Laboratory "Neuroelectronics and Memristive Nanomaterials", SFedU.
Study of the neural network model showed that all outputs successfully classify the incoming signals, maximizing the current through the corresponding output for the given input values. The proposed structure can be improved by adding two additional inputs held at constant high positive and negative potentials to implement a "bias" during the operation of the neural network. The obtained results can be used in developing the technological foundations for building hardware neural networks based on memristive titanium oxide nanostructures.
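The current summation the abstract describes is an analogue matrix-vector multiply. A schematic sketch follows, with each input duplicated at negative polarity to realize negative effective weights, as in the prototype; the conductance values are illustrative assumptions:

```python
def crossbar_currents(G, V):
    """Output current on column j: I_j = sum_i G[i][j] * V[i]
    (Ohm's law per cell, Kirchhoff current summation per column)."""
    return [sum(G[i][j] * V[i] for i in range(len(V)))
            for j in range(len(G[0]))]

def with_negative_polarity(V):
    # duplicate each input with inverted sign so the row pair
    # (G_plus, G_minus) realizes an effective weight G_plus - G_minus
    out = []
    for v in V:
        out += [v, -v]
    return out

V = with_negative_polarity([0.3, 0.8])   # -> [0.3, -0.3, 0.8, -0.8]
G = [[1.0, 0.2],                         # rows: +in1, -in1, +in2, -in2
     [0.2, 1.0],                         # (arbitrary conductances)
     [0.2, 1.0],
     [1.0, 0.2]]
I = crossbar_currents(G, V)
```

With these conductances the effective weight matrix is [[0.8, -0.8], [-0.8, 0.8]], so the two columns respond to opposite signs of the input difference, which is the essence of the two-signal classification task.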
-
DESIGNING MLP AND CNN NEURAL NETWORK MODULES ON FPGA FOR IMAGE CLASSIFICATION TASKS
E.V. Melnik, D.E. Blokh, A.I. Bezmeltsev, V.S. Panishchev, S.N. Poltoratsky 214-229. Abstract: Relevance. The development of machine learning methods and neural network architectures, and their spread into various industrial sectors, make their hardware implementation a relevant problem. Using field-programmable gate arrays (FPGAs) in this area increases data processing speed and the adaptability of the implemented algorithms. However, designing neural network architectures on FPGAs involves a number of methodological and technical difficulties, including optimizing parallel computing, managing hardware resources, and ensuring operation under limited computing resources. The purpose of this work is to analyze and compare two neural network architectures, the multilayer perceptron (MLP) and the convolutional neural network (CNN), in the context of their hardware implementation on FPGAs. Particular attention is paid to the trade-off between classification accuracy and the efficient use of limited FPGA hardware resources. Research methods.
To achieve the goal, two modules, a perceptron module and a convolutional module, were developed and simulated on a Virtex 7 FPGA. The MNIST dataset, with images reduced to 20×20 pixels, was used. The implementation included quantizing the parameters to a fixed-point 16:16 format, optimizing the hyperparameters, using lookup tables for the nonlinear functions, and evaluating FPGA resource usage. Results and discussion.
MLP achieved 93% accuracy using 11% of the logic elements, while CNN achieved 98% accuracy but required significantly more resources. Using internal buffers to store intermediate data in the CNN exceeded the available resources; the forced transition to external memory increased delays and the number of I/O ports. Conclusions. The study showed that the choice of architecture depends on priorities: CNN provides better accuracy but is less resource-efficient. For embedded systems with memory and power-consumption constraints, a simplified MLP implementation is preferable. The main problems remain the lack of internal memory and the high resource intensity of the operations, which calls for further research into hardware optimization and adaptive computation control.
-
DETECTION OF CYBER INTRUSIONS BASED ON NETWORK TRAFFIC AND USER BEHAVIOR USING THE UNSW-NB15 DATASET
V.A. Chastikova, K.V. Kozachek, E.S. Korobskaya, V.P. Kravtsov 229-243. Abstract: The article focuses on the study of user behavior and the creation of behavioral models, which helps to improve the accuracy of anomaly detection and to quickly identify non-standard network activity.
The purpose of this study is to compare the effectiveness of two machine learning models – the multilayer perceptron (MLP) and the Random Forest algorithm – for detecting cyber intrusions based on the analysis of network traffic and user behavior. Behavioral models make it possible to detect deviations from normal user activity and network interactions, which significantly increases the completeness of cyber intrusion detection. The study used the UNSW-NB15 dataset, which includes current types of attacks and characteristics of both network traffic and user activity. Prior to the implementation of the models, preliminary data processing, feature selection, normalization and coding of categorical features were carried out.
The models were evaluated using metrics such as accuracy, recall, AUC-ROC, precision, F1-score, and others. The results showed that the Random Forest algorithm provides high classification accuracy (95%), while the multilayer perceptron (MLP) achieved outstanding results in AUC-ROC (0.9830) and precision (0.9869). The paper presents an analysis and characterization of methods for analyzing user behavior and classifying network traffic, a comparison of datasets for intrusion detection systems (IDS), and practical recommendations for choosing models depending on operating conditions. The results of the study can be useful in the development of adaptive protection systems that combine high accuracy and speed.
-
NOISE GENERATION METHOD BASED ON A SET OF NOISY IMAGES WITHOUT CLEAN EXAMPLES
A.S. Kovalenko, Y.M. Demyanenko 243-254. Abstract: In this work, a novel method is proposed for noise generation from noisy images that does not require aligned pairs of clean and noisy data. Unlike traditional approaches demanding matched image sets or a priori noise models, the developed technique models complex noise characteristics intrinsic to specific CMOS sensors solely from observed noisy data. Noise synthesis is achieved via a U‑Net‑like generative adversarial architecture based on StyleGANv2, featuring a modified discriminator conditioned on camera parameters and input image metadata. Special emphasis is placed on preserving the spatial–color structure and textural details of each image, enforced through a dedicated loss function that ensures fidelity to the original color rendering and fine-grained patterns. Training of the noise generator is performed without any paired clean and noisy images, which proves particularly valuable when handling real-world datasets acquired from multiple camera models under varied lighting conditions. The experimental section presents a detailed comparative analysis of the synthesized images using PSNR and SSIM metrics, along with an evaluation of the noise distribution based on intensity statistics and spectral characteristics. It is demonstrated that the generated dataset functions effectively as a standalone training corpus for denoising neural networks and, when combined with a real dataset (e.g., SIDD), yields further enhancements in denoising performance. Results indicate that combined training on the union of generated and real examples produces an average PSNR improvement of 1.5 dB compared to existing methods reliant on aligned data. Independence from the specific optical characteristics of any given sensor significantly broadens the method's applicability.
These findings confirm the utility of the proposed approach for realistic noise synthesis and removal in scenarios lacking clean reference images, and they open avenues for future research into adaptive noise-model generation.
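Of the two fidelity metrics used in the comparison, PSNR is the simpler to state. A self-contained sketch over flat 8-bit pixel lists follows (SSIM requires local windowed statistics and is omitted here):

```python
import math

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat pixel sequences: 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    if mse == 0:
        return float("inf")     # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A reported gain of 1.5 dB PSNR corresponds to roughly a 30% reduction in mean squared error, since every halving of MSE adds about 3 dB.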
-
APPLICATION OF COMPUTER VISION TECHNOLOGIES IN VISUAL INFORMATION PROCESSING SYSTEMS
O.B. Lebedev, R.I. Cherkasov 254-276. Abstract: This paper considers the application of artificial intelligence technologies, in particular computer vision, in visual information processing systems. A comprehensive analysis of neural network approaches to computer vision problems is carried out, including a systematization of the key task types: image classification, object detection, and semantic segmentation. The architectural principles of convolutional neural networks are studied in detail, with emphasis on the mechanisms of spatial feature extraction in convolutional layers, the optimization of data representation through pooling operations, and feature transformation in fully connected layers. Particular attention is paid to the evolution of object detection methods, where detection is treated as an extension of classification through the integration of spatial coordinate regression, and detector effectiveness is assessed using the IoU, Precision, Recall and F1-score metrics, demonstrating a fundamental trade-off between localization accuracy and processing speed. The YOLOv7 algorithm is presented as an optimal solution for real-time systems. Its architecture is based on splitting the input image into a grid of S×S cells with direct prediction of the bounding box parameters (center coordinates, width, height) and class probabilities for each cell, as well as the use of specialized layers (SPP, PANet) for multi-scale feature aggregation. The structure of the neural network confirms the effectiveness of the approach, which ensures high performance without a critical loss of accuracy in strategically important applications such as video surveillance, autonomous systems, and augmented reality. A comparative study of one-stage and two-stage detectors was conducted, with their performance assessed by key metrics.
Particular attention is paid to the practical aspects of using computer vision technologies in real visual information processing systems.
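The IoU metric underlying the detector evaluation is straightforward to compute for axis-aligned boxes in (x1, y1, x2, y2) form; a minimal sketch:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

box_a, box_b = (0, 0, 2, 2), (1, 1, 3, 3)  # overlap area 1, union 7
```

A detection typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice), which is how Precision, Recall and F1-score come to depend on localization quality.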
-
PREDICTION OF THE REMAINING USEFUL LIFE OF TECHNOLOGICAL EQUIPMENT USING THE DEEP LEARNING METHOD LSTM
Y.A. Korablev 277-288. Abstract: The relevance of this study stems from the widespread implementation of predictive maintenance systems. In modern industrial settings, accurately predicting the remaining useful life (RUL) of critical equipment is particularly important. However, traditional data analysis methods demonstrate significant limitations when working with multivariate non-stationary time series characterized by high levels of noise and complex nonlinear dependencies. This leads to significant forecast errors, suboptimal repair planning, and an increased risk of sudden failures, which can cause significant economic losses and disrupt production processes. The goal of this study was to develop an improved RUL prediction model based on deep recurrent neural networks. To achieve this goal, the following tasks were sequentially addressed: detailed analysis and multi-stage preprocessing of multivariate monitoring data, and design of a specialized two-layer LSTM architecture with integrated regularization mechanisms. The methods and approaches included a methodology combining cascaded LSTM layers with normalization and dropout regularization. The model was trained on the NASA Turbofan Engine Degradation Simulation dataset using the Adam optimizer and an early-stopping strategy to prevent overfitting. Particular attention was paid to developing specialized preprocessing algorithms that effectively handle noisy time series and preserve long-term dependencies in the data. The main experimental results demonstrate high forecast accuracy. Detailed visual analysis of the time series confirmed the close correspondence of the predicted values with the actual wear trajectories of mechanical components. The findings demonstrate the high practical effectiveness of the developed model for current industrial forecasting problems.
The feasibility of successfully integrating the model into modern predictive maintenance systems for process equipment was established. The practical significance of the work lies in the potential for significant optimization of maintenance costs and minimization of the risk of critical failures. Prospects for further research include the development of hybrid architectures, the integration of attention mechanisms, and the adaptation of the model to various types of industrial equipment.
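Preparing run-to-failure data for an LSTM usually means slicing each unit's sensor history into fixed-length windows labelled with the cycles remaining. A minimal sketch with an optional piecewise-linear RUL cap follows; the window length and cap are illustrative, not the paper's settings:

```python
def make_windows(series, window, max_rul=None):
    """Slice one unit's run-to-failure history (a list of per-cycle
    feature vectors) into overlapping windows, each labelled with the
    number of cycles remaining after the window ends."""
    n = len(series)
    X, y = [], []
    for end in range(window, n + 1):
        X.append(series[end - window:end])
        rul = n - end                    # cycles remaining at this point
        if max_rul is not None:
            rul = min(rul, max_rul)      # piecewise-linear RUL cap
        y.append(rul)
    return X, y

unit = [[float(i)] for i in range(5)]    # toy 5-cycle, 1-sensor history
X, y = make_windows(unit, window=3)      # y == [2, 1, 0]
```

The cap reflects the common observation on turbofan data that early-life sensor readings carry little degradation signal, so RUL labels above some ceiling are clipped to a constant before training.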