No. 4 (2025)

Published: 2025-09-02

SECTION I. INFORMATION PROCESSING ALGORITHMS

  • ESTIMATION OF TIME SPENT ON MULTIPLICATION OF SQUARE BINARY MATRICES OF A DEVICE WITH PIPELINING OF DATA READING FROM SPECIALIZED MULTIPORT MEMORY

    A.V. Bolgak, E.I. Vatutin, D.A. Trokoz
    6-20
    Abstract

    The purpose of this work is to estimate the time costs of multiplying square binary matrices of size n × n by a device that pipelines the operation of reading data from a specialized multiport memory, and to compare them with the time costs of the prototype. This work used methods of mathematical logic, set and graph theory, the theory of discrete systems and computer devices, and finite state machine design theory. As a result of the study, it was shown that pipelining the operation of reading data from specialized multiport memory reduces the time spent on processing square binary matrices of size n ≤ 2048 by up to 206.3 times. It can be seen from the data obtained that the loading and unloading time of the source and result data for the proposed device is significantly higher than the matrix multiplication time, which makes frequent loading and unloading of matrices impractical. For example, when performing the operation of transitive closure of a binary relation represented as a binary matrix, the initial matrix is loaded once, followed by a series of squarings, which is effectively implemented by the proposed device. Based on the obtained results, it can be concluded that the proposed device for multiplying square binary matrices is best suited to workloads, such as transitive closure, in which matrices are loaded and unloaded infrequently.
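
    As a software-level illustration of the repeated-squaring workflow described above (the paper's contribution is a hardware device; this sketch only models the boolean-matrix arithmetic, with rows encoded as integer bitsets for compactness):

```python
def bool_matmul(a, b):
    """Multiply two square binary matrices over the boolean semiring
    (OR of ANDs); each row is encoded as a Python int used as a bitset."""
    n = len(b)
    out = []
    for row in a:
        acc = 0
        for j in range(n):
            if row >> j & 1:       # if a[i][j] == 1 ...
                acc |= b[j]        # ... OR in row j of b
        out.append(acc)
    return out

def transitive_closure(m):
    """Reflexive-transitive closure by repeated squaring of (I | M):
    the matrix is 'loaded' once and then only squared."""
    n = len(m)
    c = [m[i] | (1 << i) for i in range(n)]    # add self-loops (identity)
    steps = max(1, (n - 1).bit_length())       # ceil(log2) squarings suffice
    for _ in range(steps):
        c = bool_matmul(c, c)
    return c
```

A chain 0 → 1 → 2 closes to reach all three vertices from vertex 0 after two squarings.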

  • MULTI-STAGE ANT ALGORITHM OF ONE-DIMENSIONAL PACKING BASED ON EFFICIENT DECISION ENCODING METHODS AND TWO-LEVEL EVOLUTIONARY MEMORY

    M.A. Ganzhur, B.K. Lebedev, O.B. Lebedev
    21-37
    Abstract

    The aim of the work is to develop and study bioinspired search methods for solving problems of one-dimensional packing into identical containers based on efficient algorithms for encoding and decoding solutions, composite criteria and a two-level structure of evolutionary memory. The paper proposes the structure of an ordered code for packing one-dimensional elements into identical containers, the main advantage of which is that one packing solution corresponds to one code and vice versa. The search procedure is based on a modified metaheuristic of the ant algorithm. At each iteration, the one-dimensional packing algorithm has a multistage structure. The stages are performed sequentially one after the other, starting from the first. Each stage Ck includes procedures performed by the agent zk. The number of stages is equal to the number of agents in the population plus the final iteration stage.
    The main task solved by the constructive algorithm at stage Ck is to construct the code Rk for packing a set of elements X into identical containers. The stage is divided into periods according to the number of lists Xjk generated by the agent zk. Each period is divided into steps. In each period, the following tasks are solved sequentially: agent zk constructively generates a set Rk of ordered lists Xjk of one-dimensional packing into identical containers; estimates fjk of the packing of each container Oj by the elements of the list <Xjk> are calculated; the amount λjk of pheromone, proportional to the estimate fjk, is calculated; the estimate Wk = Σj fjk of the one-dimensional packing of the set of elements X into H identical containers is calculated; pheromone is deposited on the edges of graph G corresponding to the list Xjk in the cells of the accumulative memory matrix E of the second level. After all agents zk of the population Z have formed ordered lists Rk, the accumulated pheromone is added to the main memory matrix Φ of the first level. For each Rk, the total indicator Fk of the packing quality of the set of elements X is calculated. The final operation in the iteration is pheromone evaporation on the edges of graph G and fixation of the zk with the best Fk. Experimental studies have been conducted to determine the quality of the method's operation on large-dimensional test sets. To compare the developed algorithm with known methods and approximate algorithms, the authors selected several groups of benchmarks from various sources.
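
    The decoding of an ordered packing code and the per-container estimates can be sketched as follows; the first-fit decoding rule and the squared-fill estimate fj are simplifying assumptions for illustration, not the paper's exact procedures:

```python
def decode_ordered(items, capacity):
    """Decode an ordered list of item sizes into identical containers by
    placing each item, in the given order, into the first container with
    room (first-fit); the order of the code determines the packing."""
    bins = []                          # each bin: list of item sizes
    for s in items:
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:                          # no existing bin fits: open a new one
            bins.append([s])
    return bins

def fill_scores(bins, capacity):
    """Per-container estimate f_j in [0, 1]; squared fill is a common
    bin-packing quality proxy that rewards nearly full containers."""
    return [(sum(b) / capacity) ** 2 for b in bins]
```

For items [6, 4, 5, 5] and capacity 10 this yields two perfectly filled containers, so every estimate equals 1.0 and the total W = Σj fj is maximal for two bins.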

  • ANALYSIS OF STABILITY AND FEATURES OF PRACTICAL IMPLEMENTATION OF HOGENAUER FILTERS AS RECURSIVE DIGITAL FILTERS WITH FINITE IMPULSE RESPONSE

    I.E. Moiseenko, S.P. Tarasov, I.I. Turulin
    37-46
    Abstract

    The article considers the issues of stability of cascaded integrator-comb (CIC) filters used in digital signal processing, including decimation and interpolation. A brief review of modern publications on the architectural optimization of CIC filters is given. The main attention is paid to increasing the resistance of filters to overflow of the bit grid, analyzing their stability, and the method of synthesis of recursive FIR filters (filters with a finite impulse response). For a better understanding of the nature of the stability of CIC filters, the paper presents mathematical calculations illustrating the features of the accumulation of the constant component for various block configurations. A change in the structure of the CIC filter is proposed, consisting in swapping the integrator and comb filter blocks. It is proved that such a change prevents the accumulation of the constant component of the signal in the integrators and, therefore, eliminates overflow of the bit grid due to the accumulation of the constant component in the integrator. This approach is based on the property of linear filters according to which changing the order of inclusion affects neither the transfer function nor the amplitude-frequency characteristic, but in digital implementations it makes it possible to significantly reduce the probability of overflow. The possibilities of hardware and software implementation of such structures are considered from the point of view of minimizing the loss of accuracy and increasing the reliability of digital signal processing systems. It is proposed to use integers or fixed-point numbers to eliminate the accumulation of quantization errors. In addition, a program in Python was developed that implements a CIC filter taking into account the stability to the constant component in the input signal and the exact execution of operations.
The obtained results are compared with modern approaches presented in scientific research in recent years. The proposed solutions can be useful in developing digital filters for systems with limited computing resources and increased stability requirements.
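
    The structural change discussed above can be demonstrated with a minimal integer-arithmetic sketch (a single-stage filter; this is not the paper's implementation). Swapping the blocks leaves the cascade's transfer function unchanged but keeps the integrator state bounded for a constant input:

```python
def comb(x, delay=1):
    """Comb stage y[n] = x[n] - x[n - D], integer arithmetic throughout."""
    buf = [0] * delay
    out = []
    for v in x:
        out.append(v - buf[0])
        buf = buf[1:] + [v]
    return out

def integrator(x):
    """Accumulator y[n] = y[n - 1] + x[n]."""
    acc, out = 0, []
    for v in x:
        acc += v
        out.append(acc)
    return out

def cic_comb_first(x, delay=1):
    """Single-stage CIC with the comb placed BEFORE the integrator: the
    comb removes the constant component, so the accumulator no longer
    grows without bound on a DC input."""
    return integrator(comb(x, delay))
```

For a constant input the comb-first chain holds a constant value instead of accumulating it; with delay D the cascade still computes the usual moving sum of D samples.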

  • FEATURES OF THE FORMATION OF THE PROCESS OF CLASSIFYING THE CONDITION OF A TECHNICAL FACILITY BASED ON THE ANALYSIS OF POINTS IN THE TIME SERIES OF THE PARAMETER

    S.I. Klevtsov
    47-57
    Abstract

    Assessment of the operability of a technical facility in real time is important for the stable and trouble-free operation of the facility. Previously, a classification model for the rate of parameter change was proposed, based on specialized point-cloud processing of a time series segment without trend extraction. However, some of its elements, for example, the exclusion of certain points of the series from the model construction procedure, were not sufficiently justified and amounted to an unobvious attempt to get rid of abnormal values of the time series. Some stages of the model implementation, for example, building an ellipse on a transformed point cloud, require a detailed description, which is important for further model training and classification. In the article, as part of the preliminary data preparation, a procedure is proposed for detecting and screening out abnormal values of the time series of a parameter based on a modification of the Irwin method. In addition, an updated scheme for evaluating the values of the criterion in the classification model for the condition of a technical facility parameter is presented. The ellipse compression ratio is used as the evaluation criterion; the ellipse is built on a cloud of scatter-plot points cut out by a sliding time window from the time series of the parameter. An iterative ellipse construction procedure has been developed for this purpose. The new procedure provides a more informed and accurate assessment of the criterion. Thus, a modified model has been built that will allow real-time assessment of the occurrence of an emergency situation at an early stage of its development.
    The evaluation procedure can be implemented as part of the hardware and software of the monitoring system of a technical facility.
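
    A minimal sketch of outlier screening in the spirit of the Irwin criterion (the critical value and the use of the plain sample standard deviation are illustrative assumptions; the article uses a modified version of the method):

```python
import statistics

def irwin_screen(series, lam_crit=1.5):
    """Flag anomalous points of a time series: the Irwin statistic compares
    each jump |x[i] - x[i-1]| with the sample standard deviation; lam_crit
    is normally a table value depending on n and the confidence level
    (1.5 here is purely illustrative)."""
    sigma = statistics.stdev(series)
    flags = [False]                      # the first point has no predecessor
    for prev, cur in zip(series, series[1:]):
        flags.append(abs(cur - prev) / sigma > lam_crit)
    return flags
```

A single spike in an otherwise smooth series is flagged (both the jump into it and the jump back out exceed the threshold), while small fluctuations pass.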

  • DEVELOPMENT OF A METHODOLOGY FOR INTEGRATING LARGE LANGUAGE MODELS INTO THE PROCESSES OF SECURITY OPERATIONS CENTERS

    V.A. Chastikova, A.S. Bahtin, P.A. Merkulov
    57-69
    Abstract

    The article discusses the importance of integrating large language models (LLMs) into the processes of security operations centers (SOCs) to increase their effectiveness in dealing with growing cyber threats. The aim of the research is to develop a method for incorporating LLMs into SOCs aimed at automating data analysis and incident response processes. The research goals include the theoretical justification for and development of a safe LLM implementation platform, as well as assessing existing SOC processes and technical infrastructure. The article analyses key SOC metrics such as average incident detection time and the number of outstanding incidents, and proposes using the GQM (Goal-Question-Metric) approach to improve these metrics. It also considers the need to assess the risks associated with the use of LLMs, taking into account vulnerabilities and threats, as well as methods for minimizing them, including using the OWASP list of critical vulnerabilities. The article suggests the main stages of system development and implementation, including an inventory of existing resources, analysis of integration complexity and system deployment. Key aspects such as assessing the complexity of integration, operational and supporting factors, as well as the risks associated with introducing new technologies into SOC infrastructure, are considered.
    In conclusion, the relevance of LLM use is emphasized for improving the efficiency and quality of SOC work, contributing to an increased level of information security and faster response to cyberthreats. The introduction of such technologies will allow SOCs not only to respond faster to incidents, but also to improve the accuracy of data analysis and reduce the risks associated with the human factor.
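
    One of the SOC metrics mentioned above, the average incident detection time, can be computed as sketched below; the incident record fields are hypothetical and not tied to any specific SIEM schema:

```python
from datetime import datetime, timedelta

def mean_time_to_detect(incidents):
    """Mean time to detect (MTTD): the average gap between the moment an
    incident occurred and the moment the SOC detected it. The 'occurred'
    and 'detected' field names are illustrative assumptions."""
    gaps = [i["detected"] - i["occurred"] for i in incidents]
    return sum(gaps, timedelta()) / len(gaps)
```

In a GQM breakdown this function answers the question "how quickly do we detect incidents?" for the goal of faster response.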

  • DISTRIBUTED SYSTEM FOR BARCODE RECOGNITION USING NEURAL NETWORKS

    A.Y. Yurchenko, M.Y. Polenov
    70-79
    Abstract

    This work presents a distributed software-hardware system for automated barcode recognition on moving objects in industrial environments. The primary objective of the research is to develop a reliable and adaptive solution capable of consistently reading barcodes regardless of the orientation, speed, or height of objects moving along a conveyor belt. The main focus is not on achieving maximum processing speed, but rather on providing a wide field of view and ensuring reliable recognition of moving objects. Unlike traditional scanners that require precise positioning and expensive hardware, the proposed approach leverages a single network camera and a server equipped with neural processing modules, providing a cost-effective and versatile alternative suitable for a wide range of industrial applications. A key component of the system architecture is a neural image restoration module based on the MPRNet model, which effectively reduces motion blur and optical distortions in video frames. After preprocessing, frames are passed to an object detection module built upon the YOLO architecture, which has been adapted specifically for barcode recognition. Detected barcode data is stored in a database using an ORM interface, enabling seamless integration with existing enterprise systems. To prevent frame loss and maintain high throughput, the system incorporates asynchronous processing mechanisms using multithreading and buffered queues. The relevance of this research stems from the widespread use of barcodes as the primary method of product marking in industrial settings and the increasing demand for automation in product tracking and inventory control. Despite the availability of various vision-based and scanning solutions, most existing systems are not designed to handle unstable or low-quality video streams. The proposed system demonstrates robustness to visual distortions and motion-related artifacts, making it suitable for deployment in real production environments. 
Its affordability and adaptability also open up possibilities for implementation in logistics, warehousing, and supply chain management.
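
    The multithreaded buffering described above can be sketched with a bounded queue; this is an illustrative reduction of the system's pipeline to one producer and one worker:

```python
import queue
import threading

def start_pipeline(frames, process, maxsize=8):
    """Buffered producer/consumer: a capture loop feeds a bounded queue so
    bursts of frames are absorbed instead of lost, while a worker thread
    drains the queue and applies the processing function."""
    q = queue.Queue(maxsize=maxsize)
    results = []

    def worker():
        while True:
            frame = q.get()
            if frame is None:          # sentinel: no more frames
                break
            results.append(process(frame))

    t = threading.Thread(target=worker)
    t.start()
    for f in frames:
        q.put(f)                       # blocks only when the buffer is full
    q.put(None)
    t.join()
    return results
```

In the real system the producer would read frames from the network camera and `process` would chain the restoration and detection modules; here it is any callable.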

  • APPLICATION OF KNAPSACK ALGORITHMS TO PREVENT UNAUTHORIZED EXCHANGE OF INFORMATION BETWEEN USERS OF DIFFERENT LEVELS IN A HIERARCHICAL SYSTEM OF PROTECTION AGAINST UNAUTHORIZED ACCESS

    A.S. Zhuck
    80-91
    Abstract

    The problem of designing a secure system of protection against unauthorized access is considered. In particular, this article considers hierarchical data protection systems with cryptographic key distribution, namely, the problem of organizing access to file storages is considered. Although cryptographic key distribution can ensure the security of information from users who do not have access to it, the hierarchical access control system was not originally designed to solve the problem of protecting information from the dishonest actions of the user himself. Thus, the overall objective of the study is to prevent unauthorized exchange of information between users of different levels of a hierarchical system of protection against unauthorized access with cryptographic key distribution. To achieve the stated goal, the authors previously proposed to use the problems of Diophantine analysis, in particular the knapsack problem. Previously, the authors formulated the properties of the knapsack vector, applicable for improving the hierarchical system of protection against unauthorized access. In this article, the authors present the conditions for the injectivity of knapsack vectors. A comparative analysis of these conditions with the already established injectivity conditions is carried out. The analysis shows the need to formulate such conditions and the applicability of knapsack vectors that satisfy them for improving the hierarchical model of protection against unauthorized access. Based on the specified conditions, this article develops a recursive algorithm for constructing an injective multiplicative knapsack vector. The authors then analyze the possibility of its application for modeling a hierarchical mandatory model of information protection from unauthorized access. The analysis shows how already known algorithms for constructing knapsack vectors can be used as part of the developed algorithm. 
The authors also show where exactly in the developed system it is necessary to apply this algorithm to implement the properties required for hierarchical systems of protection against unauthorized access.
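
    For illustration, the classical superincreasing condition is one well-known sufficient condition for the injectivity of a knapsack vector; the authors' recursive multiplicative construction is different, so the sketch below only demonstrates the injectivity property itself:

```python
def superincreasing_vector(n):
    """Build a vector in which each element strictly exceeds the sum of all
    previous ones; a classical sufficient condition for the subset-sum
    (knapsack) map to be injective."""
    v, total = [], 0
    for _ in range(n):
        nxt = total + 1          # strictly greater than the sum so far
        v.append(nxt)
        total += nxt
    return v

def encode(v, bits):
    """Knapsack map: sum of the components selected by the bit vector."""
    return sum(a for a, b in zip(v, bits) if b)

def decode(v, s):
    """Greedy decoding from the largest element down; it recovers the bit
    vector uniquely exactly because the vector is superincreasing."""
    bits = []
    for a in reversed(v):
        if s >= a:
            bits.append(1)
            s -= a
        else:
            bits.append(0)
    return list(reversed(bits))
```

Because the map is injective, every sum decodes back to exactly one bit vector, which is the property the hierarchical protection scheme relies on.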

SECTION II. DATA ANALYSIS, MODELING AND CONTROL

  • METHOD OF SPATIAL-TEMPORAL SEPARATION OF TRAJECTORIES OF A GROUP OF ROBOTS IN THE PRESENCE OF OBSTACLES

    V.A. Kostyukov
    92-102
    Abstract

    When developing algorithms for planning the paths of robots forming a group, the problem arises of ensuring that they do not collide with each other or with possible obstacles. In addition, the group may be required to maintain a given formation template in those sections of the group's movement where this is possible taking into account obstacles. However, a narrow spatial corridor of permissible movement of the group often forms, which can be caused both by the initial requirements for the trajectory (for example, the condition of its location in a certain vicinity of a given point) and by the presence of obstacles and other interfering effects. The presence of such a restrictive corridor can lead to a forced convergence and even intersection of the spatial trajectories of individual robots in the group. One possible solution to this problem is to specify or adjust the time-parametric representations of these individual trajectories so that two robots whose spatial trajectories approach each other pass their closest points at different times. Moreover, the time interval separating the moments at which these two robots occupy these points should be selected depending on the speed of the robots and their dimensions. The developed method of space-time separation of the trajectories of individual robots in a group is based on this idea. The method involves the formation and solution of a special linear programming problem with respect to the target time moments of previously selected nodes of the spatial trajectory of each slave robot. The limiting factor for changing these moments is the maximum possible speed of the robot. For each robot, a preliminary selection is made of a set of trajectories of other robots in the group from which it must then be separated in space-time. This selection depends on the priority of the robots in the group.
Examples of numerical implementation of the algorithm based on the proposed method are given, confirming its effectiveness.
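
    The core idea, shifting a lower-priority robot's arrival time at a conflict node subject to its maximum speed, can be sketched for a single node pair; the paper formulates this as a linear programming problem over many nodes, and all names below are illustrative:

```python
def time_shift(t_master, t_slave, delta, seg_len, t_prev, v_max):
    """Minimally shift the slave robot's arrival time at the conflict node
    so that |t_slave' - t_master| >= delta, while the speed required on the
    preceding segment (length seg_len, left at time t_prev) stays <= v_max.
    A single-node sketch of the separation constraint, not the full LP."""
    if abs(t_slave - t_master) >= delta:
        return t_slave                       # already separated in time
    candidates = sorted((t_master + delta, t_master - delta),
                        key=lambda c: abs(c - t_slave))
    for cand in candidates:                  # try the smaller shift first
        travel = cand - t_prev
        if travel > 0 and seg_len / travel <= v_max:
            return cand
    raise ValueError("no feasible time shift within the speed limit")
```

If both robots would reach the closest point at t = 10 and a separation of 2 is required, the slave is delayed to t = 12 provided its speed limit allows it.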

  • USING PROJECT PLANNING TOOLS: GANTT CHART AND NETWORK DIAGRAM

    A.A. Bognyukov, D.Y. Zorkin, I.A. Tarasova
    102-110
    Abstract

    An integrative model has been developed that combines calendar planning methods with software functionality (Excel, MS Project) for multi-level project optimization. Central focus is placed on three complementary methodologies: the Gantt chart, network diagram, and critical path analysis, which form the conceptual foundation for effective coordination of project processes. The study details the algorithm for creating a Gantt chart, which visualizes timeframes and task sequences, with emphasis on the functional capabilities of specialized software solutions, including Microsoft Project and Excel, enabling automated construction and adjustment of schedules. Further, the principles of constructing a network diagram, interpreted as a directed graph with edges (tasks) and vertices (events), are elaborated. This approach allows for identifying logical dependencies between project stages and determining the critical path – a sequence of operations with zero time reserves, defining the project’s minimum duration. Practical illustrations of critical path calculations are supported by examples demonstrating its role in optimizing time resources. A key aspect of the study is the analysis of time reserves, aimed at minimizing deadline risks through rational resource reallocation. The methodological framework is supplemented by visualization tools: resource requirement graphs and resource load diagrams, ensuring operational control over material and personnel assets across all project phases. The final element of the planning system is the calendar plan, which structures data on work titles, chronological intervals, and resource intensity. This document serves as an integrative foundation for synchronizing operational activities, ensuring adherence to established deadlines.
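
    The critical path and time reserves discussed above follow from the standard forward/backward pass of the critical path method, sketched here on an activity list with predecessor links (an assumed representation; the article works with event-based network diagrams):

```python
def critical_path(tasks):
    """Critical path method. tasks: {name: (duration, [predecessors])}.
    Returns (project_length, sorted list of zero-float, i.e. critical, tasks)."""
    # forward pass: earliest start / earliest finish
    es, ef = {}, {}
    remaining = dict(tasks)
    while remaining:
        for name, (d, preds) in list(remaining.items()):
            if all(p in ef for p in preds):
                es[name] = max((ef[p] for p in preds), default=0)
                ef[name] = es[name] + d
                del remaining[name]
    length = max(ef.values())
    # backward pass: latest finish, visiting tasks in decreasing EF order
    # (a valid reverse topological order when all durations are positive)
    lf = {name: length for name in tasks}
    for name in sorted(tasks, key=lambda n: -ef[n]):
        d, preds = tasks[name]
        for p in preds:
            lf[p] = min(lf[p], lf[name] - d)
    # zero total float: latest start equals earliest start
    critical = [n for n in tasks if lf[n] - tasks[n][0] == es[n]]
    return length, sorted(critical)
```

For a small network A → {B, C} → D with durations 3, 2, 4, 1 the critical path runs through the longer branch C, and B carries the only time reserve.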

  • A MODEL OF THE RESOURCE ALLOCATION INFORMATION PROCESS IN DYNAMIC DISTRIBUTED COMPUTING ENVIRONMENTS

    A.B. Klimenko
    110-120
    Abstract

    The article considers the issue of modeling the information process of distributing computing resources in geo-distributed heterogeneous dynamic computing environments. The relevance of the work is due to the fact that by now "cloud" data processing systems are becoming insufficient because of the need to process large volumes of data in real time. In this regard, "fog" and "edge" computing are coming into use. This implies localization of data processing in order to reduce processing time on the one hand, while on the other hand, limitations on the computing power of devices lead to the need for distributed solution of computing problems in a heterogeneous, dynamic and geographically distributed environment. This entails the need to develop new methods and algorithms for computing resource allocation, since previously developed methods did not take into account the geographic distribution and dynamics of computing environments. The model of the information process of computing resource allocation proposed in this work includes the parameters of the resource cost of data transfers over the network individually for each node participating in the data transfer route, as well as the process of distribution of computing resources, which is what distinguishes it from analogs. The conducted experimental studies confirm the feasibility of using the proposed model for computing resource allocation in geo-distributed heterogeneous dynamic computing environments. The practical significance lies in reducing the resource intensity of the process of distributing computing resources and of the process of solving a computing problem.
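
    The distinguishing feature of the model, per-node transfer costs along the data route, can be illustrated with a toy node-selection rule; the cost expression and all names are assumptions for illustration, not the paper's model:

```python
def allocate(task_load, nodes, route_cost):
    """Pick the node minimizing compute cost plus the summed per-hop
    transfer cost of the data route to it. nodes maps a node name to its
    computing power; route_cost maps it to a list of per-node hop costs,
    mirroring the model's individual per-node transfer parameters."""
    def total(n):
        return task_load / nodes[n] + sum(route_cost[n])
    return min(nodes, key=total)
```

A distant but powerful node can win over a nearby weak one when its compute saving outweighs the accumulated transfer cost of the longer route.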

  • GROUPING PREDICTORS IN COMBINED PIECEWISE LINEAR REGRESSION

    S.I. Noskov, S.V. Belyaev
    120-127
    Abstract

    The article provides a brief overview of publications on the application of combined structures containing known model forms as constituent elements in mathematical modeling of complex systems. In particular, the following are considered: an algorithm for estimating parameters for creating mathematical models of dynamic systems; structured mathematical models of an oxygen electrode and biological wastewater treatment; a combined model including ion exchange between calcium and copper; a combination of non-standard finite-difference schemes and the Richardson extrapolation method to obtain numerical solutions of two models of biological systems; a mathematical formulation of the problem and a heuristic approach to optimal planning of delivery routes in a multimodal system; a mathematical model for optimizing strategic and tactical decisions in all types of biomass-based supply chains; a method for developing models of various types for elements of chemical-engineering systems taking into account various types of available information and combining these models into a single complex. Two variants of the problem statement for calculating the estimates of the parameters of a combined piecewise linear regression are formulated: with a non-empty and empty intersection of the index sets that define the composition of the independent variables in the linear and piecewise linear components of the model. It is shown that, when the sum of absolute deviations of approximation errors is selected as the loss function, both variants are reduced to linear-Boolean programming problems. Two versions of a combined piecewise linear regression model of revenue of the mining and metallurgical company Severstal are constructed. The following production volumes are used as independent variables of the model: hot-rolled, cold-rolled and galvanized sheet, sheet with another metal coating, sheet with a polymer coating, rolled products, hardware products.

  • NEURAL NETWORK METHOD OF PREDICTIVE CONTROL IN MICROGRIDS WITH MECHATRONIC WIND-GENERATOR SYSTEM

    N.K. Poluyanovich, N.I. Svetlichnyi, O.V. Kachelaev, M.N. Dubyago
    128-144
    Abstract

    The influence of various factors on the accuracy of forecasting wind turbine generator (WTG) power generation is considered. The optimal set of input parameters (day, month, time, wind speed, air temperature, atmospheric pressure and estimated power output of the wind turbine) for forecasting is determined, and the methods of their processing are substantiated. The influence of the individual factors on the accuracy of forecasting the generated power of wind turbines was investigated. Profiles of input data for forecasting the power generation of wind power plants are constructed. The peculiarities of meteorological conditions over a year are considered and frequently occurring wind speed values are determined, among other things, for the selection of an optimal wind turbine. It is shown that the meteorological conditions meet the passport requirements of the WTG selected for the region under consideration. Neural network (NN) models for forecasting the power generation of wind turbines are considered, the optimal NN is selected, its structure is built, and the NN algorithm for forecasting the generated power of wind turbines is developed. The developed mathematical model of wind power generation is aimed at improving accuracy and adaptability by taking into account key dynamic factors (wind speed and change in wind direction, air temperature and density, etc.). The combined wind turbine generation control method (MPPT + Pitch) is chosen to ensure a balance between efficiency and safety. Based on the estimated generated power of wind turbines and meteorological conditions at the location, the neural network model showed high accuracy in predicting the power of wind turbines. It is shown that the selected type of wind turbine combines technological reliability, cost-effectiveness and compliance with modern trends in wind energy.
The NN model allows maintaining a balance between generated and consumed electricity, and, consequently, increases efficiency, reduces parasitic losses in the microgrid, and reduces wear and tear of equipment.
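
    The dependence of generated power on wind speed that underlies such forecasting models is commonly described by a piecewise power curve; the parameter values below are illustrative defaults, not data for the selected WTG:

```python
def wtg_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2000.0):
    """Piecewise wind turbine power curve (kW): zero below cut-in speed and
    at/above cut-out, cubic growth (P ~ v^3) between cut-in and rated
    speed, and constant rated power in between."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    # normalized cubic interpolation between cut-in and rated speed
    return p_rated * ((v**3 - v_in**3) / (v_rated**3 - v_in**3))
```

This is the kind of estimated-power input listed among the forecasting parameters above: it maps a wind-speed forecast to an expected generation profile.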

  • DEVELOPMENT OF A METHOD FOR SOLVING THE PROBLEM OF TASK ALLOCATION IN A MULTI-AGENT SYSTEM

    V.A. Kostyukov, F.A. Houssein
    144-155
    Abstract

    This paper considers the problem of task distribution within a multi-agent system, where each agent is an autonomous robot, and each task corresponds to a point in a two-dimensional environment that one of the agents must visit. This problem is essentially a multi-agent version of the classical traveling salesman problem, where several agents are involved instead of one. Each of them must follow a unique route covering a certain set of points. In this regard, a study of the multi-agent traveling salesman problem is conducted as one of the formats for setting the problem of distributing goals among agents. This problem is of great importance in the field of routing and optimal task distribution. Its solution includes two closely related subproblems: determining the set of points assigned to each agent and constructing the optimal route for visiting them. Three main approaches to solving this problem exist in the scientific literature: the optimization approach, in which both subproblems are solved jointly; the Cluster-First, Route-Second model, in which tasks are first distributed among agents and routes are then built; and the Route-First, Cluster-Second model, which assumes initial optimization of a route through all points with its subsequent division between agents without changing the order of visits.
    In this paper, a hybrid method is proposed that combines elements of the Cluster-First, Route-Second and Route-First, Cluster-Second approaches. The goal is to combine the strengths of both concepts and minimize their drawbacks. To test the effectiveness of the developed method, a comparative study was conducted. The evaluation was carried out according to three main metrics: the time spent on constructing a solution, the total length of all routes, and the maximum route length among all agents. The experimental results showed that the use of the proposed method allows for a reduction in the maximum route length (thereby reducing the load imbalance between agents) by an average of 26%.
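
    A toy version of the Cluster-First, Route-Second scheme mentioned above (nearest-depot clustering followed by a nearest-neighbour route per agent; the paper's hybrid method is more elaborate than this sketch):

```python
import math

def cluster_first_route_second(points, depots):
    """Cluster-First, Route-Second sketch: assign each task point to the
    nearest agent depot, then order each agent's points with a greedy
    nearest-neighbour route starting from its depot."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # cluster step: nearest-depot assignment
    clusters = {d: [] for d in depots}
    for p in points:
        clusters[min(depots, key=lambda d: dist(d, p))].append(p)

    # route step: nearest-neighbour ordering within each cluster
    routes = {}
    for d, pts in clusters.items():
        route, cur, pending = [], d, list(pts)
        while pending:
            nxt = min(pending, key=lambda p: dist(cur, p))
            pending.remove(nxt)
            route.append(nxt)
            cur = nxt
        routes[d] = route
    return routes
```

The load-balance metric from the study, the maximum route length among agents, can then be computed over the returned routes.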

  • METHODS FOR ASSESSING THE TEMPORAL STRUCTURE OF INTERNET DISCUSSIONS BASED ON THE NUMBER AND DURATION OF USER INTERACTIONS

    A.Y. Taranov
    155-162
    Abstract

    The aim of the research is to develop and test methods for assessing the temporal structure of online discussions based on the analysis of the number and duration of user interactions on the internet (in social networks, forums, etc.). The article describes the new methods developed within the scope of this work. Particular attention is given to methods for determining both the intensity and duration of discussions, which enables a more accurate assessment of the real-time dynamics of discussions. The intensity of a discussion is assessed as the ratio of the number of interactions (such as comments, replies and likes) to the duration of the online discussion. Methods for accurately determining the duration of a discussion are proposed, which take into account not only the time since a post was published but also the activity of users during the discussion, making these methods more flexible and precise. The methods were tested using real data from VKontakte communities in the cities of Taganrog and Sarov. The results of the practical study confirmed the existence of expected patterns, such as daily fluctuations in user activity levels and bursts of activity associated with significant social and political events. The developed methods allow for effective analysis of the dynamics of discussion participants' involvement, identifying key moments and significant events in the process of online communication. These methods can be useful in various fields, such as social research, marketing, political analysis, reputation risk management, and others, where the analysis of online activity and involvement is required.
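
    The activity-aware duration and the intensity ratio described above can be sketched as follows; the idle-gap threshold used to consider a discussion closed is an illustrative assumption:

```python
def discussion_metrics(timestamps, idle_gap=3600.0):
    """Duration is measured from the first interaction to the last one
    before a silence longer than idle_gap seconds (activity-aware end,
    rather than simply 'time since the post was published'); intensity is
    the number of interactions per unit of that active duration."""
    ts = sorted(timestamps)
    end, count = ts[0], 1
    for t in ts[1:]:
        if t - end > idle_gap:       # discussion considered closed here
            break
        end, count = t, count + 1
    duration = max(end - ts[0], 1.0)  # guard against division by zero
    return duration, count / duration
```

A stray interaction arriving long after the last burst of activity no longer inflates the duration, so the intensity reflects the live phase of the discussion.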

  • SOLUTION OF THE INVERSE PROBLEM OF SPECTRAL GRAPH THEORY IN THE ABSENCE OF OBSERVABLE VARIABLES

    A.N. Tselykh, V.S. Vasilev, L.A. Tselykh, S.A. Barkovskii
    163-173
    Abstract

    The article is devoted to solving the main inverse problem of spectral graph theory: determining the parameters of a graph from the spectrum of its eigenvalues. The article studies cognitive causal graph models of complex systems with unknown dynamics of variables. Non-stochastic graph models with non-numeric values of nodes and links, as well as poorly defined system factors, are considered. In the absence of initial data, solving the inverse problem for a directed weighted signed graph is significantly complicated. When graphs have the same topology but different weights on arcs, their spectra form a set of fuzzy collinear vectors in the solution space. The straight lines of these vectors diverge in the vector space due to their directionality towards different vertices. The article proposes to use an algorithm that allows one to accurately restore the weights of a cognitive graph when the conditional principal eigenvector and the topological structure of the adjacency matrix are known. This algorithm takes into account an important feature of the adjacency matrix of the graph: the direction of the principal eigenvector towards the target vertex, which allows finding the correct solution from a set of fuzzy collinear vectors in the solution space. To achieve complete restoration of the graph weights with acceptable accuracy, it is proposed to combine the graph spectrum and the effective control model with the combinatorial optimization problem. Having restored the adjacency matrix weights using our approach, we compare them with the given graph. The comparison takes into account such graph parameters as the graph spectrum, similarity coefficients of the restored matrix, and the response and control vectors.
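
    The basic spectral computation underlying the approach, extracting the principal eigenpair of an adjacency matrix, can be sketched with power iteration; the conditional principal eigenvector used in the article involves additional constraints that this sketch does not model:

```python
def principal_eigenvector(adj, iters=200):
    """Power iteration for the principal eigenpair of an adjacency matrix
    given as nested lists. Assumes a nonzero spectral radius; the vector is
    normalized by its largest absolute component at every step."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # current eigenvalue estimate
        v = [x / lam for x in w]
    return lam, v
```

Given this eigenpair and the known topology, the weight-restoration step amounts to solving the constraints of A·v = λ·v for the unknown arc weights.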

SECTION III. ELECTRONICS, NANOTECHNOLOGY AND INSTRUMENTATION

  • FEATURES OF LOW-VOLTAGE DIGITAL CIRCUITS BASED ON 90–20 NM CMOS TECHNOLOGIES

    B.G. Konoplev
    174-181
    Abstract

    To increase energy efficiency, CMOS integrated circuits use the subthreshold mode of operation. The supply voltage is reduced below the threshold voltages of the MOSFETs, so currents decrease and performance drops. However, a reduction in power consumption is often more important than low performance, so subthreshold CMOS integrated circuits find applications where a radical reduction in power consumption is the crucial requirement. Manufacturers now use technologies with minimum feature sizes from 500 to 3 nm, with most products fabricated at the 90–20 nm nodes. The paper analyzes low-voltage circuits based on 90–20 nm technologies to develop recommendations for the design of energy-efficient devices. A technique for determining the key parameters of predictive MOSFET models in the subthreshold mode is considered. Expressions for the characteristics of an inverter in the subthreshold region are obtained. The analysis shows a significant deterioration in the characteristics of CMOS elements in the subthreshold mode as feature sizes shrink below 90 nm. This is explained by the fact that, when developing the 90–20 nm technologies, all measures were aimed at reducing leakage currents in the above-threshold mode to reduce static power consumption. To improve the characteristics of CMOS elements in the subthreshold mode, the design and technology must be optimized to reduce the subthreshold swing and the DIBL coefficient and to increase the characteristic current. The results may be useful to developers of energy-efficient equipment.
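As a rough illustration of why subthreshold operation trades speed for power, here is a first-order sketch (with assumed example parameters, not the paper's predictive models) of the exponential subthreshold current and the resulting gate delay:

```python
import math

# Illustrative sketch (not the paper's model): first-order subthreshold MOSFET
# current and an inverter delay estimate. I0, the ideality factor n and the
# load capacitance are assumed example values, not extracted parameters.

def sub_vt_current(vgs, vth, i0=1e-9, n=1.5, temp=300.0):
    """Subthreshold drain current: I = I0 * exp((Vgs - Vth) / (n * phi_t))."""
    k_over_q = 8.617e-5          # Boltzmann constant over charge, V/K
    phi_t = k_over_q * temp      # thermal voltage, ~25.9 mV at 300 K
    return i0 * math.exp((vgs - vth) / (n * phi_t))

def inverter_delay(vdd, vth, c_load=1e-15, **kw):
    """Rough gate delay t ~ C*Vdd / I_on, with the gate driven at Vdd."""
    return c_load * vdd / sub_vt_current(vdd, vth, **kw)
```

Because the drive current falls exponentially once Vdd drops below Vth, the delay estimate grows by orders of magnitude, which is the performance penalty the abstract describes.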

  • ANALYSIS OF THE EFFECTIVENESS OF TRADITIONAL AND MODERN TECHNOLOGIES FOR MONITORING POWER TRANSMISSION LINES

    О. V. Afanaseva , Т.F. Tulyakov
    182-188
    Abstract

    The article provides a comprehensive analysis of the effectiveness of traditional and modern technologies for monitoring power transmission lines (PTL). Power transmission lines are a critical element of the energy infrastructure, and their reliable operation directly affects economic stability and safety. Traditional monitoring methods, such as visual inspections and mechanical devices, have long remained the main control tools, but their limited accuracy, high dependence on the human factor and inability to promptly detect hidden defects make them less effective in the face of increasing loads on power systems. Modern technologies, including unmanned aerial vehicles (UAVs), the Internet of Things (IoT), automated monitoring systems and digital twins, offer fundamentally new opportunities for monitoring the condition of PTLs. They provide high diagnostic accuracy, continuous data collection in real time, reduced operating costs and increased personnel safety. The article presents a classification of both traditional and modern methods, as well as a comparative analysis of their key parameters: accuracy, response speed, cost, safety and impact on operation. The results of the study demonstrate that modern technologies outperform traditional approaches in all the criteria considered. In particular, the use of IoT and UAVs allows minimizing the human factor, reducing inspection time and increasing data detail. Digital twin systems make it possible to predict possible accidents and optimize scheduled maintenance. However, successful implementation of innovative solutions requires additional investment, personnel training and integration with existing management systems. The conclusion is made about the strategic importance of switching to modern power transmission line monitoring technologies to improve the reliability and sustainability of the energy infrastructure. Despite high initial costs, their long-term benefits, including reduced accidents, resource savings and increased safety, fully justify the investment. The authors emphasize the need for further development of digital technologies in the energy sector to ensure stable and efficient operation of power grids.

  • THEORETICAL AND EXPERIMENTAL STUDY OF TOXICITY OF FLUE GASES OF A BURNER WITH COMBINED FLAME FORMATION

    D. А. Bogdanets , V. I. Grishchenko , К.F. Kalmykova , А.I. Rakhmanov , D.S. Tsymbalov
    189-200
    Abstract

    A convenient and reliable algorithm based on mathematical programming methods is proposed for calculating the chemical composition of flue gases from a hydrocarbon-fueled burner with combined flame formation. The stability of the computational process is ensured by using the equilibrium constant method in order to reduce the dimensionality of the system of governing equations with an appropriate choice of the nomenclature of reagents and base substances. The model considers twenty-two reagents, a significant proportion of which are toxic, and CO, H2, O2 and N2 are selected as base substances.
    The equilibrium constants are calculated based on an original approximation of the temperature dependence of the heat capacity of all components. The proposed approximation, unlike standard polynomial ones, not only provides physically justified limits in the regions of low and high temperatures, but also contains half as many fitting parameters. A technique for calculating these parameters from known tabular values has been developed. Preliminary testing of the model was performed for individual chemical blocks (nomenclature subsets) by comparing the calculation results with data obtained using the JANAF chemical calculator. Calculations of the toxicity of the flue gases of the experimental burner device, with torch formation by an electric arc and a supply of superheated steam, are performed over a range of operating parameters extended beyond the expected operational range. The result is generalized by methods of mathematical programming taking into account the weighted toxicity of the flue gases, and the error of the formula representation of toxicity is estimated. An interpolation algorithm is proposed that allows the weighted toxicity of the flue gases to be specified over the entire range of scenario modeling. The results of computer modeling were compared with the data of technical tests of the experimental burner: their agreement was established within a factor of two, which corresponds to the methodological and instrumental measurement error. The results obtained in the computational experiment allow optimizing the design and operating parameters of innovative burner devices with a combined mechanism of torch formation. Calculations have shown, and experiments have confirmed, that the supply of superheated steam to the combustion zone homogenizes the flame and equalizes the rates of oxidation of hydrogen and carbon in the oil fuel, thereby reducing emissions of incomplete combustion products and nitrogen oxides. The development is proposed for use in the design, technical testing and fine-tuning of promising burner devices running on oil, alcohol and biofuel.
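The equilibrium-constant method referred to above can be illustrated by a minimal sketch with made-up thermodynamic data (the dH and dS values below are placeholders, not the paper's twenty-two-reagent model):

```python
import math

# Minimal sketch of the equilibrium-constant method: K_p(T) = exp(-dG/(R*T))
# for a single reaction, with dG = dH - T*dS. The enthalpy/entropy values used
# in any call are illustrative only; a real model would draw them from tables
# such as JANAF for every reaction in the chosen nomenclature.

R = 8.314  # universal gas constant, J/(mol*K)

def equilibrium_constant(dh, ds, temp):
    """K_p from the Gibbs energy of reaction at temperature temp (K)."""
    dg = dh - temp * ds
    return math.exp(-dg / (R * temp))
```

For an exothermic reaction (dH < 0) the constant falls with temperature, which is the qualitative behavior a flue-gas composition solver must reproduce for each reaction block.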

  • EXPERIMENTAL FACILITY BASED ON ANECHOIC CHAMBER FOR ACTIVE RADIOSENSORY TECHNICAL DIAGNOSTICS

    А.Y. Zvyagin , К.А. Boikov
    201-211
    Abstract

    The article presents the results of the development and experimental study of two designs of anechoic chambers intended for active radiosensory technical diagnostics (ARTD). The aim of the work was to create conditions for obtaining a reliable radio signal profile (SRP) of the object under study with minimal distortion, through effective suppression of external electromagnetic interference and reflections. As part of the study, two configurations of shielded volumes were implemented: the first based on a multilayer foil reflective structure, the second using high-quality Faraday fabric. Comparative testing of the shielding efficiency was carried out for different states of the microwave element, which were changed by altering the circuit of the microwave mixer through desoldering key components. To verify the functional characteristics, a series of measurements was performed by probing the surface of the microwave mixer under study with short pulses (SP). The resulting SRP was recorded using an oscilloscope and a receiving antenna. Pearson correlation analysis was used to process the results, and it proved effective in quantifying the differences between the SRP of an object in good and in defective condition. The experimental data obtained made it possible to evaluate the quality and prospects of the materials by key parameters: the degree of suppression of parasitic signals, resistance to external interference in various operating conditions, mechanical durability under cyclic loads, and the economic feasibility of implementation, taking into account the cost of materials and the complexity of assembly. The results of the study demonstrate the practical applicability of both designs in precision radio measurement tasks, while the choice of a specific material is determined by an optimal compromise between production costs, performance characteristics in various climatic conditions and the required level of shielding for a specific application. The data obtained can be successfully used in the design of both stationary laboratory complexes and mobile ARTD systems in conditions of limited resources, including field measurements and industrial monitoring. The studies have shown the promise of both approaches for achieving maximum shielding characteristics over a wide frequency range.
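The Pearson-correlation step used to compare radio signal profiles can be sketched as follows; the reference and measured profiles and the defect threshold are illustrative assumptions, not the paper's measurement data:

```python
# Sketch of the defect test described above: the Pearson correlation between a
# reference SRP (known-good device) and a measured SRP; a correlation below a
# chosen threshold flags a likely defect.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def is_defective(reference, measured, threshold=0.9):
    """Flag the device if its SRP correlates poorly with the reference SRP."""
    return pearson(reference, measured) < threshold
```

A healthy device reproduces the reference profile (correlation near 1), while desoldered or damaged elements distort the SRP and pull the correlation down.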

  • DEVELOPMENT AND RESEARCH OF THE EQUALIZER CONFIGURATION TO INCREASE NOISE IMMUNITY IN DECAMETER RADIO LINES

    А. I. Rybakov
    212-227
    Abstract

    The problem of increasing the noise immunity of decameter radio lines (DRL) in modern domestic radio communication systems, including short-wave (SW) communication, remains significant and in demand, despite the existence of numerous classical studies. Decameter radio communication systems have been chosen as the object of analysis. Specifically, the R-016 system, used as a prototype, faces limitations such as its frequency range, which reduces its effectiveness under the variable ionospheric conditions that degrade the signal level. Signal processing issues in prototypes can lead to a bit error rate (BER) of 10⁻³, even in the absence of significant interference. The main objective of the study is to evaluate the impact of various factors, such as changing the length of the preamble and implementing adaptive filters, on the noise immunity of such systems. Analysis of the results indicates that increasing the length of the preamble improves the noise immunity of decameter radio lines. The practical significance of the results is that they can be used to improve the noise immunity of existing DFM radio communication systems operating in variable ionospheric conditions. The paper also discusses the scientific basis for selecting an equalizer configuration for decameter radio lines to increase communication range and improve noise immunity. The developed methods provide a scientific basis for selecting effective equalizer settings, which can lead to improved communication range. To test different equalizer configurations in a Rayleigh channel, Simulink simulation is used, which confirms the correctness of the selected parameters. Experimental validation of the decameter radio line model includes the study of various preamble lengths, and the analysis of signal-to-noise ratios (SNR) at the receiver input allows these parameters to be adapted. Thus, the research results show that increasing the preamble length has a positive impact on the system's noise immunity. The work focuses on the modeling and operation of DFM radio lines, and the obtained results have practical significance for the active adaptation of existing radio systems in a changing ionosphere.

  • STUDY OF THE STABILITY OF THE MIMO-OFDM SYSTEM TO ACTIVE INTERFERENCE USING AN ADAPTIVE ALGORITHM FOR PROCESSING SPATIAL-TEMPORAL SIGNALS

    V.P. Fedosov , А. V. Tsirkulenko
    227-236
    Abstract

    Modern communication systems often operate in a complex interference and signal environment, and there are various ways to reduce the error of the recovered signal. Some methods relate directly to the mathematical processing algorithms in the receiver; other approaches are based on spatial filtering of signals. In particular, an approach based on weighted processing of the signals received from different antennas, using the correlation matrix of the input signal, has been actively developed in recent years. It makes more efficient use of the information from the antennas by favoring the antenna with the maximum level of the useful signal and lower levels of noise and interference, which physically corresponds to forming an equivalent radiation pattern of the receiving antenna array with a maximum along the path carrying the strongest useful signal and minima along the others. This approach is of practical interest, especially in systems subject to active interference, for example from electronic warfare stations, as it can improve the quality of signal recovery. It should be noted separately that, in the presence of active interference, the eigenvector for weight processing should be selected by minimizing the RMS error of restoring the pilot tones (which is possible in OFDM): if the eigenvector of the maximum eigenvalue is simply taken, it is unknown whether a large eigenvalue corresponds to the signal or to the interference. This paper presents an experimental study of an adaptive algorithm for processing spatiotemporal signals in a MIMO-OFDM communication system under different levels of active interference from an electronic warfare (EW) station. Experiments are carried out in both the downlink (from the base station to the mobile station) and uplink (from the mobile station to the base station) channels, using adaptation on the BS side, on the MS side, and on both. It is shown that the algorithm can improve the quality of signal processing and reduce the bit error rate over a wide range of signal-to-noise ratios (SNR), even with imperfect channel estimation from the pilot tones.
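A much-simplified, real-valued sketch of eigenvector-based receive weighting (a sample correlation matrix plus power iteration; the pilot-based MMSE eigenvector selection described in the abstract is not modeled here):

```python
# Simplified sketch: estimate the spatial correlation matrix from antenna
# snapshots, extract its dominant eigenvector by power iteration, and use it
# as combining weights. Real systems work with complex baseband samples; this
# toy model is real-valued for brevity.

def correlation_matrix(snapshots):
    """snapshots: list of per-sample antenna vectors; returns R = E[x x^T]."""
    m = len(snapshots[0])
    r = [[0.0] * m for _ in range(m)]
    for x in snapshots:
        for i in range(m):
            for j in range(m):
                r[i][j] += x[i] * x[j]
    n = len(snapshots)
    return [[v / n for v in row] for row in r]

def dominant_eigenvector(r, iters=200):
    """Power iteration: repeatedly apply R and renormalize."""
    m = len(r)
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(r[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

With snapshots dominated by one antenna, the resulting weight vector concentrates on that antenna, which is the "maximum on the strongest path" behavior the abstract describes.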

SECTION IV. MACHINE LEARNING AND NEURAL NETWORKS

  • OPTIMIZATION OF PID PARAMETERS OF SERVO SYSTEMS USING A GENETIC ALGORITHM AND A NEURAL NETWORK CLASSIFIER

    Ahmad Zoualfikar , Y.А. Kravchenko , А.М. Mansour
    237-250
    Abstract

    Machine learning algorithms play a vital role in enhancing the performance of industrial systems, providing high precision and operational efficiency in real time. In servo motor control systems, these algorithms help reduce noise and vibration, improving efficiency and extending equipment lifespan. This article examines various types of noise that occur and their negative impact on industrial processes. The primary research objective is to optimize PID controller parameters in servo systems using a hybrid algorithm combining neural networks and a genetic algorithm. Unlike traditional methods such as genetic algorithms (GA) and particle swarm optimization (PSO), which suffer from slow convergence and risk of motor damage, the proposed solution is based on a control software platform. This platform ensures safe real-time interaction with the servo motor. A CAN Bus-based control system has been developed that enables developers to read all servo motor parameters (speed, current, voltage, encoder position) and to modify PID coefficients with a single click, eliminating the need for manual tuning as in MOTO-MASTER. The implementation of the developed control system allowed the use of a trained neural classifier to constrain PID parameters within safe limits, reducing the search space and accelerating the optimization process. Experimental results on SPH-S servo motors demonstrated a significant reduction in noise and mechanical vibrations during real-time operation while maintaining stability across a wide speed range (0–1500 rpm).
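The gating idea, a classifier constraining the genetic search to safe PID gains, can be sketched as follows; here a simple range check stands in for the trained neural classifier, and the fitness function is a toy cost, not the SPH-S servo response:

```python
import random

# Toy sketch of the hybrid scheme: a genetic algorithm searches PID gains,
# and a stand-in "classifier" (a safe-range check, in place of the trained
# neural network) rejects candidates outside safe limits before evaluation.
# The safe ranges, fitness and optimum are illustrative assumptions.

SAFE = {"kp": (0.0, 10.0), "ki": (0.0, 5.0), "kd": (0.0, 2.0)}

def is_safe(g):                            # stands in for the neural classifier
    return all(lo <= g[k] <= hi for k, (lo, hi) in SAFE.items())

def fitness(g):                            # toy cost: distance to a known optimum
    return -((g["kp"] - 4) ** 2 + (g["ki"] - 1) ** 2 + (g["kd"] - 0.5) ** 2)

def evolve(pop_size=30, gens=40, seed=1):
    rnd = random.Random(seed)
    rand_g = lambda: {k: rnd.uniform(lo, hi) for k, (lo, hi) in SAFE.items()}
    pop = [rand_g() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)        # elitist selection
        children = []
        while len(children) < pop_size // 2:
            a, b = rnd.sample(pop[:10], 2)         # parents from the elite
            child = {k: (a[k] + b[k]) / 2 + rnd.gauss(0, 0.1) for k in SAFE}
            if is_safe(child):                     # classifier gate
                children.append(child)
        pop = pop[:pop_size // 2] + children
    return max(pop, key=fitness)
```

Rejecting unsafe candidates before they are ever "sent to the motor" is what shrinks the search space and removes the damage risk the abstract attributes to plain GA/PSO tuning.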

  • RESEARCH OF MACHINE LEARNING METHODS FOR DETECTING FRAUDULENT WEBSITES

    М.А. Lapina , D. А. Lukyanov , V.G. Lapin , N.N. Kucherov
    250-262
    Abstract

    Every year our lives become more and more connected with large volumes of data that need to be analyzed. As the volume of information grows, its analysis becomes an increasingly large and complex task. In this situation, the problem arises of finding a tool that will help companies and institutions collect, analyze and forecast data. Machine learning is an area of artificial intelligence that finds patterns in data and, based on them, tries to predict outcomes. One application of machine learning is the detection of fraudulent sites. With the development of information technology, digital crimes have become a serious threat to confidential information and user data. Artificial intelligence is able to analyze site parameters and determine the presence of threats to information. The study is aimed at systematizing knowledge about phishing attacks and studying machine learning methods for detecting fraudulent sites. In the course of the study, machine learning methods for detecting phishing sites were developed, and schemes were built for correctly transforming the data before feeding it to the models. Analysis of the data provided in the dataset made it possible to transform it correctly for the models, avoiding errors. The problem of overfitting the machine learning models was also addressed: a detailed study of the dataset made it possible to filter out data that could cause model errors and reduce forecasting quality. The developed methods for detecting phishing attacks using machine learning models were tested on held-out data, and, based on the results obtained, graphs were constructed showing how the accuracy of detecting illegitimate sites changes with the model settings. The study was analyzed and its results summarized.
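A minimal sketch of the general approach (with illustrative lexical features and toy URLs, not the paper's dataset or models): extract a few URL features and fit a tiny logistic-regression classifier:

```python
import math

# Illustrative sketch (not the paper's models): a handful of lexical URL
# features and a tiny logistic-regression classifier trained by gradient
# descent on toy labeled examples. Feature choice and URLs are assumptions.

def features(url):
    return [
        len(url) / 100.0,                      # long URLs are suspicious
        url.count("-") / 5.0,                  # many hyphens in the host
        1.0 if "@" in url else 0.0,            # '@' often hides the real host
        1.0 if not url.startswith("https") else 0.0,  # no TLS
    ]

def train(samples, labels, lr=0.5, epochs=500):
    w = [0.0] * (len(samples[0]) + 1)          # last weight is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x + [1.0]))
            p = 1.0 / (1.0 + math.exp(-z))
            for i, xi in enumerate(x + [1.0]):
                w[i] += lr * (y - p) * xi      # gradient ascent on log-likelihood
    return w

def predict(w, url):
    z = sum(wi * xi for wi, xi in zip(w, features(url) + [1.0]))
    return 1.0 / (1.0 + math.exp(-z)) > 0.5    # True = likely phishing
```

The preprocessing step the abstract emphasizes corresponds to `features`: if the raw data is not transformed consistently at train and prediction time, the model's accuracy degrades exactly as described.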

  • DEVELOPMENT OF AN AUTONOMOUS ROBOT TO PERFORM THE FUNCTIONS OF A SALES CONSULTANT IN RETAIL NETWORKS

    М.А. Khapova , К. C. Bzhikhatlov , L.B. Kokova
    262-272
    Abstract

    The active increase in the share of large chain stores in the retail sector increases the demand for employees of such networks. At the same time, as the turnover of large stores grows, the requirements for the timely display of goods on the shelves grow as well. According to retailers' own estimates, losses from incorrect or untimely display of goods can reach 5% of total annual turnover. Given the significant turnover of large retail chains and noticeable staff turnover, the problem of automating product display in chain stores is highly relevant. This paper presents the results of the development of an autonomous robotic system that can provide continuous monitoring of shelf stocking and timely display of goods. Based on a survey of representatives of large retail chains, the requirements for an autonomous system for monitoring and placing goods in a store are determined: in particular, the requirements for the capabilities of the intelligent robot control system, the design features and hardware implementation of the robots, the requirements for the system of interaction with employees and customers in the store, and preferences regarding the robot's appearance and user interface. Based on the identified requirements, a prototype of an autonomous robot for work in sales areas has been developed. The basis of the robot is a transport module with two motor wheels and a pair of steering wheels, on which an anthropomorphic unit with two manipulators is installed. The manipulators are made in the form of human hands and have a full set of the necessary degrees of freedom. In addition, the article presents the architecture of the autonomous robot control system. The robot is controlled by an intelligent decision-making and control system based on a multi-agent neurocognitive architecture that simulates the processes occurring in the human brain. The design and mechatronic part of the robot were tested in real conditions, in the sales areas of a retail store in Nalchik, in the presence of sales consultants and customers. In the future, work is planned to refine and train the intelligent decision-making system.

  • SPIKING REPRESENTATIONS COMPARISON FOR LOCALIZATION AND NAVIGATION IN THE KEYFRAME MAP

    I.S. Fomin , V.D. Matveev , А.Е. Arkhipov
    273-284
    Abstract

    The task of navigating a mobile robotic platform in a known environment has long been solved efficiently using a flat traversability map built with lidar. Nevertheless, situations regularly arise when, for one reason or another, the platform is not equipped with lidar or other active navigation tools. At the same time, a camera intended for visual monitoring of the situation by the operator is usually installed on the robotic platform, and it can also be used for navigation when the robot moves in a known environment. There are well-known examples of navigation algorithms based on sequences of keyframes, for example visual SLAM, in which various variants of video images (blurred, masked, etc.) are used as keyframes. In this paper, a cognitive (non-metric, non-spatial) map of keyframes, stored as spiking representations of the observed images, is considered as the basis for navigation. The possibility of using neuromorphic information control elements developed at the RTC to compare the current spiking representation with all spiking representations of the key sequence is analyzed. It is shown that such a comparison can determine the keyframe closest to the current one and can also select shift parameters for the spiking representations, which is an analog of localization and navigation on a cognitive map. A software tool for emulating the construction of the map and movement within it is described for experimental testing of the proposed algorithms. Data collection and experimental evaluation of the quality of the localization and navigation algorithms have been performed: several keyframe maps with different patterns of movement between frames were collected. When determining the position of a frame in the map, the accuracy ranged from 70 to 98%; when determining the direction of displacement between frames, the accuracy ranged from 94 to 97%. The results obtained are assessed as sufficient for the tasks assigned to the algorithm.
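In its simplest reading, matching a current frame against the keyframe map is a nearest-neighbor search over binary spike vectors; a sketch with bit overlap as the similarity measure (the RTC neuromorphic comparison elements themselves are not modeled):

```python
# Sketch of keyframe lookup in the cognitive map: each keyframe is stored as a
# binary spiking representation, and the current frame is matched to the map
# by maximum bit overlap (equivalently, minimum Hamming distance on the
# active bits). Vectors and map contents here are illustrative.

def overlap(a, b):
    """Number of positions where both spike trains fire."""
    return sum(1 for x, y in zip(a, b) if x == y == 1)

def localize(current, keyframe_map):
    """Return the index of the keyframe closest to the current representation."""
    return max(range(len(keyframe_map)),
               key=lambda i: overlap(current, keyframe_map[i]))
```

The index returned is the non-metric "position" on the cognitive map; shift estimation between consecutive matches would then give the direction of displacement evaluated in the experiments.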

  • INTEGRATION OF A RECURRENT NEURAL NETWORK TO INCREASE THE FAILURE TOLERANCE OF THE MOISTURE TRANSFER MODEL IN THE SMART GARDEN SYSTEM

    S.S. Obaid , V.А. Pogonin , I.B. Kirina
    284-297
    Abstract

    The paper presents a study on the development and integration of a recurrent neural network (RNN) to improve the accuracy and fault tolerance of a moisture transfer model in a smart garden system. The problem of soil moisture control is becoming especially relevant in modern agricultural and environmental monitoring, where high accuracy is required to manage water resources, forecast crop yields and prevent drought periods. Traditional methods, such as remote sensing and moisture transfer models, have significant limitations: low accuracy, computational complexity, dependence on accurate sensor data and difficulty of application in real field conditions. To solve these problems, the study proposes the use of an RNN, which is able to effectively process time series data and predict soil moisture even in the presence of incomplete, inaccurate or distorted input data. The global soil moisture dataset GSSM and weather data from the Meteostat platform were used as initial data, which made it possible to take into account the climatic features of regions with different soil types. The model includes a long short-term memory (LSTM) layer and a fully connected layer for the final forecast. Particular attention is paid to data preprocessing, including calculating average daily, average monthly and average annual values, as well as data correction taking into account the characteristics of different soil types. The study showed that the developed RNN model is highly resistant to sensor failures, has minimal dependence on the volume of input data and is able to adapt to different climatic and soil conditions. The proposed solution improves the accuracy of soil moisture monitoring in the Smart Garden system, optimizes the use of water resources and increases the stability of the system in the face of changing external factors. Thus, the integration of the RNN opens up new opportunities for the development of agriculture and ecology, ensuring more efficient water resource management and increasing the productivity of agrosystems.
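For reference, a single forward step of the LSTM cell underlying such a model can be sketched with scalar toy weights (not the trained network from the paper):

```python
import math

# Minimal single LSTM cell forward step in plain Python, illustrating the
# recurrent layer used in the soil-moisture model. Weights are scalars for a
# one-dimensional toy state, passed in as a dict of assumed parameters.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One step: gates f, i, o and candidate g from input x and state (h, c)."""
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate state
    c_new = f * c + i * g                               # update cell memory
    h_new = o * math.tanh(c_new)                        # emit hidden state
    return h_new, c_new
```

The cell state c is what lets the model carry soil-moisture history across gaps in the sensor stream: with the forget gate held open, memory persists even when the current input is missing or distorted.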