No. 6 (2025)

Published: 2026-01-03

SECTION I. INFORMATION PROCESSING ALGORITHMS

  • CLUSTERING ALGORITHM FOR LARGE GROUPS OF EXPERTS BASED ON THE INTERPRETIVE STRUCTURAL MODELING METHOD

    E.M. Gerasimenko, P.S. Gerasimenko
    6-21
    Abstract

    This article presents an algorithm for achieving consensus in social networks during large-scale group decision-making with incomplete probabilistic fuzzy information containing elements of uncertainty, which takes into account the trust relationships among experts. A method for clustering experts based on interpretive structural modelling is proposed. It serves both to classify experts and to enhance the efficiency of consensus achievement. The study examines trust propagation and aggregation operators for probabilistic fuzzy information with elements of uncertainty. These operators enable indirect trust assessment and determination of experts' weight coefficients. As a result, it becomes possible to form several subsets of experts and to determine weight coefficients for a large number of experts based on their mutual trust relationships. Based on the clustering of experts and the calculated indirect trust relationships between experts, decision-making in emergency situations is carried out by achieving consensus, taking into account fluctuating probabilistic fuzzy information, and the best evacuation alternative is identified.
    The assessments provided by experts in the form of probabilistic fluctuating fuzzy values allow for effective modelling of doubts, uncertainty, and inconsistencies in expert evaluations when a group of experts or various expert organisations are involved. At the same time, it becomes possible to take into account different expert assessment values in multi-criteria decision-making tasks when experts cannot agree on common membership degrees. The algorithm allows classifying a large group of experts into several subsets based on their social trust relationships. This method prevents the formation of overlapping subsets and does not require pre-setting clustering parameters. It relies exclusively on social trust relationships between experts, thereby avoiding the issue of subjective intervention in the clustering process. Compared to traditional clustering methods, the interpretive structural modelling-based clustering approach effectively reveals the hierarchical structure of relationships among experts. It also minimizes the number of participants in large-scale group decision-making within a social network by reducing the dimensionality of the expert set. Clustering experts based on the interpretive structural modelling method significantly enhances the efficiency and feasibility of large-scale group decision-making.
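
    As an illustration of the clustering step only (a minimal Python/NumPy sketch; the trust matrix, threshold, and function names are invented for the example), experts can be partitioned into levels by interpretive structural modelling: a crisp reachability matrix is obtained from the thresholded trust relations by Boolean transitive closure, and levels are then peeled off using the classical condition on reachability and antecedent sets.

    import numpy as np

    def ism_levels(trust, threshold=0.5):
        """Partition experts into ISM levels from a fuzzy trust matrix.

        trust[i, j] in [0, 1] is expert i's direct trust in expert j.
        The adjacency is thresholded, then closed transitively (Warshall)
        to obtain the reachability matrix, with self-reachability added.
        """
        n = trust.shape[0]
        reach = (trust >= threshold) | np.eye(n, dtype=bool)
        for k in range(n):                        # Boolean transitive closure
            reach |= reach[:, k:k + 1] & reach[k:k + 1, :]

        levels, remaining = [], set(range(n))
        while remaining:
            level = []
            for i in remaining:
                R = {j for j in remaining if reach[i, j]}  # reachability set
                A = {j for j in remaining if reach[j, i]}  # antecedent set
                if R & A == R:                    # classic ISM level condition
                    level.append(i)
            levels.append(sorted(level))
            remaining -= set(level)
        return levels

    # Four experts with asymmetric trust: prints [[3], [2], [1], [0]].
    T = np.array([[0, .8, .2, 0], [0, 0, .7, .6], [0, 0, 0, .9], [0, 0, 0, 0.]])
    print(ism_levels(T))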

  • OPTIMIZATION OF THE COMPUTATIONAL SCHEME FOR THE INTERPOLATION OF DECADAL METEOROLOGICAL DATA BY INVERSE DISTANCE WEIGHTING WITH PARALLEL PROCESSING OF MULTIPLE TIME SLICES

    O.M. Golozubov, A.V. Kozlovskiy, E.V. Melnik, Y.E. Melnik, A.N. Samoylov
    22-32
    Abstract

    The present study is devoted to solving the problem of computational inefficiency in spatial interpolation of large arrays of decadal meteorological data using the inverse distance weighting method. Traditional approaches involving sequential and independent processing of each time slice demonstrate a linear increase in execution time and significant RAM consumption, which becomes a critical barrier to the rapid construction of detailed and geographically linked raster fields in GeoTIFF format. This significantly limits the use of the method in tasks requiring rapid processing of long-term data archives. The purpose of this work is to develop and validate an optimized computational scheme that can radically reduce time costs while maintaining the completeness and accuracy of the results. The key scientific novelty of the proposed approach lies in the fundamental rethinking of the computational process. Instead of repeating identical operations many times, a scheme is proposed based on a single calculation of the full vector of geodetic distances from each grid cell to all weather stations. This most resource-intensive operation is performed only once. Subsequently, the resulting distance vector is applied to all time slices (decades) to calculate the interpolated values, which eliminates the main computational redundancy and ensures a sublinear dependence of processing time on the number of decades. To further improve performance, a parallel processing mechanism is used at the CPU level, implemented by dynamically dividing the raster into independent computing units (batches). The size of the batches is adaptively adjusted taking into account the available RAM, which guarantees the stability and scalability of the solution on systems of various capacities. The testing of the method on real meteorological data for the period 2015-2024 demonstrated a radical reduction in the execution time. In particular, processing ten decadal time slices on a standard laptop takes less than 3.5 minutes, and on a server platform it takes about 3 minutes, which represents a multiple acceleration compared to traditional implementations. Thus, the developed solution makes the operational processing of large spatial and temporal meteorological arrays a reality for a wide range of researchers, opening up new opportunities for climate monitoring, agrometeorology and geoinformation analysis without the need for specialized expensive equipment.
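
    A minimal NumPy sketch of the described scheme (function and variable names invented; haversine stands in for the geodetic distance): the cell-to-station distance matrix, the most expensive part, is computed once, converted into IDW weights, and then applied to all time slices as a single matrix product. Batching over grid cells to bound RAM would wrap this in chunks.

    import numpy as np

    def idw_all_slices(cell_lon, cell_lat, st_lon, st_lat, values, power=2.0):
        """Interpolate every time slice with one precomputed distance matrix.

        cell_* : (C,) grid-cell coordinates, degrees
        st_*   : (S,) station coordinates, degrees
        values : (T, S) one row of station observations per decade
        returns: (T, C) interpolated rasters, one flattened grid per slice
        """
        rad = np.pi / 180.0
        dlat = (cell_lat[:, None] - st_lat[None, :]) * rad
        dlon = (cell_lon[:, None] - st_lon[None, :]) * rad
        a = (np.sin(dlat / 2) ** 2
             + np.cos(cell_lat[:, None] * rad) * np.cos(st_lat[None, :] * rad)
             * np.sin(dlon / 2) ** 2)
        dist = 2 * 6371.0 * np.arcsin(np.sqrt(a))     # haversine, km; (C, S)

        w = 1.0 / np.maximum(dist, 1e-9) ** power     # IDW weights, computed once
        w /= w.sum(axis=1, keepdims=True)
        return values @ w.T                           # all T slices in one product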

  • METHOD FOR GENERATING TOPOLOGICAL CONSTRAINTS OF COMPUTATIONAL STRUCTURES FOR RECONFIGURABLE COMPUTING SYSTEMS

    A.A. Dichenko, I.I. Levin, D.A. Sorokin
    33-46
    Abstract

    For reconfigurable computing systems based on FPGAs, efficient application programs are parallel-pipeline programs that achieve real performance exceeding 50% of the peak. This article addresses the problem of reducing the development time of such programs. The computational structures of these programs utilize a large volume of FPGA resources operating at high clock frequencies. However, simultaneously maximizing both the amount of FPGA resources used and the clock frequency presents a certain contradiction: as resource utilization increases, the placement flexibility of the functional units of the computational structures decreases, and the FPGA switching matrix fails to provide the required signal propagation characteristics when routing information channels between them. Moreover, in modern CAD tools, placement and routing algorithms consider only the architectural and geometric features of the FPGA. Therefore, when a large number of specialized primitives with very limited placement flexibility are used, achieving high clock frequencies in automatic synthesis mode becomes virtually impossible. To address this problem, it is also necessary to consider the information dependencies between the functional units of the computational structures, but the nature of these dependencies in tasks from different subject areas can vary significantly. As a result, developers are often forced to manually place the functional units of the computational structures on the FPGA by creating script-based instructions for topological constraints. In earlier generations of FPGAs, the time required to generate topological constraints was acceptable, as they typically contained only a few hundred specialized primitives. However, in modern FPGAs, the number of such primitives reaches several thousand or even tens of thousands, significantly increasing the development time of efficient application programs. The proposed method makes it possible to automate the process of developing topological constraints for computational structures. The research was carried out during the development of application programs for solving a range of problems based on FFT, AES, and LU decomposition algorithms for the reconfigurable computer “Tertius-2.” As a result of significantly reducing the time required for optimization iterations of computational structures, the total synthesis time was reduced by up to three times.
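
    The abstract does not disclose the method's internals, so purely as an illustration of the artifact it automates (the cell names and the column-placement rule are invented), generated topological constraints are script lines of roughly this kind, here in Vivado XDC form:

    def dsp_column_constraints(cells, x_col=0, y_start=0):
        """Emit XDC LOC lines pinning a chain of DSP cells to one column so
        the information channels between them stay short after routing."""
        lines = []
        for i, cell in enumerate(cells):
            site = f"DSP48E2_X{x_col}Y{y_start + i}"
            lines.append(f"set_property LOC {site} [get_cells {cell}]")
        return "\n".join(lines)

    print(dsp_column_constraints(
        [f"fft_core/butterfly_{i}/mult" for i in range(4)]))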

SECTION II. DATA ANALYSIS, MODELING AND CONTROL

  • ANALYSIS OF THE POSSIBILITY OF USING LARGE LANGUAGE MODELS FOR MONITORING TECHNOLOGICAL TRENDS AND DETERMINING DIRECTIONS FOR THE DEVELOPMENT OF HIGH-TECH ENTERPRISES

    A.M. Belevtsev, A.A. Belevtsev, V.A. Balyberdin
    47-58
    Abstract

    Several foreign countries currently devote considerable attention to developing and applying the network-centric concept of control (NCC) in the military domain. The concept defines the architecture of operations as a composition of three network structures: reconnaissance, information control, and destruction. Analysis shows that radar systems and complexes (RSC) play a major part in the reconnaissance network structure. At present, great efforts are being made to achieve a qualitatively new level of RSC on the basis of new technologies in nanoelectronics, MEMS/NEMS, nanomaterials, and large information networks, which makes forecasting the technological paths toward constructing prospective RSC for military purposes a matter of considerable interest. The paper addresses problems related to the reliability of the estimates obtained when technological trends and technologies are analyzed. The study is carried out using the example of radar complexes (RC) in the NSS. A procedure for forming a hierarchical system of criteria is suggested, and stage-by-stage priority-vector analyses are performed for the development of technological trends and technologies for the sensor grid of the NSS. It is shown that mutual interrelations among the criteria of the technological trends can introduce errors into the constructed estimates: into the priority-vector estimates and into the roadmaps for the radar part of the NSS. It is concluded that raising the reliability of the estimates requires evaluating priorities for technological trends and technologies within a common scheme based on the analytic network process.
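
    As a hedged sketch of the stage-by-stage priority-vector computation the paper starts from (plain NumPy; the comparison matrix is illustrative), the priority vector is the principal eigenvector of a pairwise comparison matrix, and the consistency ratio flags unreliable judgments. Accounting for criteria interrelations, as the paper argues, requires moving from this hierarchy scheme to the analytic network process.

    import numpy as np

    def ahp_priorities(M, iters=100):
        """Priority vector of a pairwise comparison matrix via power
        iteration, plus Saaty's consistency ratio (n >= 3 assumed)."""
        n = M.shape[0]
        w = np.ones(n) / n
        for _ in range(iters):
            w = M @ w
            w /= w.sum()
        lam = (M @ w / w).mean()                       # principal eigenvalue
        ci = (lam - n) / (n - 1)                       # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.24)  # Saaty's random index
        return w, ci / ri

    # Three technological trends compared pairwise (illustrative judgments).
    M = np.array([[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]])
    w, cr = ahp_priorities(M)
    print(w.round(3), f"CR = {cr:.3f}")                # CR < 0.1 is acceptable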

  • MODERN APPROACHES TO NATURAL FIRE MONITORING AND FORECASTING: REVIEW AND CONCEPT OF AUTONOMOUS UAV-BASED SYSTEM

    N.D. Boldyrev, V.V. Gilka, A.S. Kuznetsova, D.A. Morozov
    58-80
    Abstract

    Natural fires cause serious damage to ecosystems, the economy, and public safety every year; timely detection of fires and prediction of their development increases the speed of response to threats and allows for optimal allocation of resources during emergency response. Existing monitoring methods are limited in the speed with which they detect fire outbreaks and track their further spread, which reduces the effectiveness of rescue services. To solve this problem, heterogeneous data sources can be used, including unmanned aerial vehicles (UAVs), distributed sensor networks, mobile field observation systems, ground-based thermal imaging stations, etc., which can contribute to a more accurate analysis of the current situation and improve the reliability of predictive models of fire spread. The aim of the study was to develop a concept for an automated approach to monitoring and predicting wildfires based on unmanned aerial vehicles. We believe that this approach will improve the speed of detecting fire outbreaks and the accuracy of predicting their spread. The tasks include analyzing existing monitoring methods, developing a concept for a system that integrates multispectral imaging, optimized data transmission, automatic segmentation, and forecasting based on machine learning, as well as ensuring interaction between the operator and alert specialists. The work used methods of collecting, analyzing, and transmitting data from UAVs, processing multispectral images, machine learning and neural networks for fire detection, image segmentation algorithms and simulation modeling for fire spread prediction, data visualization to support decision-making by operators and administrators, logging and analysis of results for model training, software engineering, and human-computer interaction technologies. The system will reduce the time required to detect and predict fires, enable operators to launch multiple drones simultaneously, and automate the processing of data received from them. Process automation will reduce emergency response times and staffing levels, improve resource allocation, increase forecast accuracy, and improve the timeliness of emergency service notifications. This will help reduce damage from wildfires and improve the safety of people and ecosystems. Despite the progress made in addressing this challenge, the comprehensive system described in this article does not yet exist in its entirety in Russia, the CIS countries, or in Western and Asian countries. Although individual components, such as UAVs for monitoring and artificial intelligence (AI) for data analysis, are already in active use, there is currently no integrated solution that combines all of these elements: drone control, near real-time fire spread prediction, data transmission, and interaction with emergency services. This concept represents a new approach that could become a breakthrough technology for combating natural disasters.

  • THE METHOD OF PIECEWISE APPROXIMATION OF STATIC CHARACTERISTICS OF DEEP-LYING HYDROLITHOSPHERIC PROCESSES

    I.A. Bondin
    81-89
    Abstract

    This paper addresses the problem of describing the static characteristics of hydrolithospheric processes in deep-lying aquifers using the Essentuki mineral groundwater field, classified as a Category IV deposit in terms of geological complexity, as a case study. It is shown that classical methods for approximating distributed transfer functions, widely applied in the analysis and design of control systems for shallow aquifers at depths of 50–400 m, are not applicable to deep-lying conditions. This limitation is caused by high gas saturation of groundwater, pronounced structural heterogeneity of reservoirs, complex and often nonlinear hydraulic interaction between production and observation wells, as well as spatial and temporal variability of hydrochemical parameters. A modified method of piecewise approximation of static characteristics of hydrolithospheric processes is proposed. The method is based on the separate identification of parameters of approximating elements over individual distance intervals between wells using the results of pumping tests. The approach was implemented for the Cenomanian–Maastrichtian aquifer of the Novoblagodarnensky area of the Essentuki field, where hydraulic interaction coefficients between wells were calculated and static transfer functions were constructed for different spatial intervals, taking into account actual geological and filtration conditions. The results demonstrate that the use of piecewise approximation provides a better agreement with experimental data compared to homogeneous models and allows spatial variability of filtration properties to be taken into account. The obtained results form a methodological basis for forecasting hydrodynamic and gas–hydrochemical changes, assessing the stability of operating regimes, and developing and synthesizing control systems for regulating discharge rates of deep mineral groundwater wells. The practical significance of the study lies in the applicability of the proposed method to substantiating pilot industrial operation parameters, adjusting design solutions, interpreting monitoring and pumping test data, and formulating scientifically grounded recommendations for groundwater abstraction management, reduction of technogenic disturbances, and preservation of the stability of hydrolithospheric systems under conditions of intensive field development.
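
    A minimal sketch of the piecewise idea (NumPy; the data layout and interval edges are invented): the inter-well distance axis is split into intervals, and a separate static gain is identified on each interval by least squares from pumping-test data, instead of fitting one homogeneous model.

    import numpy as np

    def piecewise_static_gains(dist, drawdown, rate, edges):
        """Identify a static transfer gain k(d) on each distance interval.

        dist     : (N,) producer-to-observer distances for N pumping tests
        drawdown : (N,) observed level change in the observation well
        rate     : (N,) pumping rate; per-interval model: drawdown = k * rate
        edges    : interval boundaries, e.g. [0, 300, 700, 1500] metres
        """
        gains = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            m = (dist >= lo) & (dist < hi)
            # Least-squares slope through the origin on this interval.
            k = (rate[m] @ drawdown[m]) / (rate[m] @ rate[m]) if m.any() else np.nan
            gains.append(k)
        return np.array(gains)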

  • THE TECHNIQUE FOR SPLINE APPROXIMATIONS BUILDING IN CONDITIONS OF LIMITED SOURCE DATA

    A.A. Dorofeev
    90-105
    Abstract

    Mathematical modeling is widely applied in many fields of activity, but when the available numerical information is insufficient to give a complete picture of the object, building a mathematical model is difficult or may lead to a knowingly unreliable result. Approaches to modeling under a lack of data exist, but obtaining the desired result may require methods with a complicated mathematical structure. In this regard, the task of adapting mathematical methods to conditions of data scarcity is relevant. This paper discusses the solution of the interpolation problem using spline functions, one of the most widely used tools in mathematical theory and applied mechanics. A technique has been developed for adapting spline methods to data-shortage conditions; it is applied to building a model of the generatrix of a cylinder with elliptical bottoms, ensuring smooth joining of the fragments and the absence of kinks. The precision and smoothness requirements of the model were met by sequential refinement of the numerical data: interpolation nodes were added to the model and their locations corrected. Building and analyzing the model showed that spline methods can be applied to interpolation problems of almost any complexity. The availability of an analytical justification for every step of the modeling process makes it possible to automate the process fully. The problem considered in the article arises in manufacturing products on numerically controlled machines; however, owing to its versatility, the technique can be applied to problems in various fields of activity.
    The practical value of the developed technique lies in its applicability to many practical tasks. Its integration into modern CAD systems and application software packages would expand their functionality by giving the user the ability to impose additional restrictions on the model, which can ensure a high degree of implementation flexibility.
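
    A hedged sketch of the sequential-refinement loop (SciPy; the tolerance and the target profile are invented for the example): interpolation nodes are added where the current spline deviates most from the reference curve until the precision requirement is met, while cubic splines keep the joints smooth by construction.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def refine_spline(f, a, b, tol=1e-4, max_nodes=50):
        """Cubic-spline model of f on [a, b]: insert a node at the point of
        worst deviation until the error tolerance is met (C2 smoothness,
        i.e. the absence of kinks, is inherent to the cubic spline)."""
        x = np.linspace(a, b, 5)                 # scarce initial data
        dense = np.linspace(a, b, 2001)
        while len(x) < max_nodes:
            s = CubicSpline(x, f(x))
            err = np.abs(s(dense) - f(dense))
            i = int(err.argmax())
            if err[i] <= tol:
                break
            x = np.sort(np.append(x, dense[i]))  # add node, keep order
        return CubicSpline(x, f(x))

    # Example profile: quarter of an ellipse (an elliptical bottom section).
    f = lambda t: 0.5 * np.sqrt(np.clip(1.0 - t ** 2, 0.0, None))
    s = refine_spline(f, 0.0, 1.0)
    print(len(s.x), "nodes")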

  • INVESTIGATION OF APPLICABILITY OF MULTIMODEL DATA WAREHOUSES IN GAMING INDUSTRY

    A.A. Koblov, O.M. Romakina, A.S. Klemesheva, A.Z. Arseneva
    105-121
    Abstract

    This paper examines the feasibility and effectiveness of using multi-model databases for storing and processing data in the gaming industry. Modern gaming projects are characterized by highly complex and heterogeneous data: from strictly structured information about players, items, and quests to semi-structured and tightly coupled data, such as recipe systems, dialog trees, clan relationships, and in-game encyclopedias. Existing approaches based on relational or single-model NoSQL storage systems often fail to provide the necessary flexibility, performance, and development ease for such complex scenarios. The aim of this study is to design and comparatively analyze the performance of a multi-model solution for typical gaming mechanics. The authors developed a multi-model storage structure based on the ArangoDB DBMS that integrates document, graph, and key-value data models. The solution architecture encompasses key RPG game components: player and inventory management, quest systems, dialogue, crafting recipes, loot tables, clan relationships, and full-text search of the in-game encyclopedia using ArangoSearch. The experimental section includes a detailed performance comparison of the developed multi-model storage system with the PostgreSQL relational DBMS and the MongoDB document DBMS on realistic datasets and queries. The results demonstrate a significant advantage of the multi-model approach when performing operations that require traversing complex relationships: for example, searching for hostile players through a clan relationship graph in ArangoDB is, on average, 11 times faster than a similar JOIN query in PostgreSQL. However, for scenarios with frequent modifications to linearly organized data (e.g., updating quest status), the multi-model storage system exhibits slightly lower performance compared to the relational model, which, however, is acceptable within the context of the overall game project architecture. The study confirms that multi-model DBMSs, particularly ArangoDB, represent a promising solution for the gaming industry, enabling efficient combination of different data models within a single platform, simplifying development, and achieving high performance on complex data, which is critical for modern multiplayer games.
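
    For illustration only (the database, graph, and collection names are invented), the kind of traversal the comparison measures, i.e. finding players of clans hostile to a given player's clan, reads roughly like this with the python-arango driver:

    from arango import ArangoClient   # python-arango driver

    db = ArangoClient().db("game", username="root", password="secret")

    # player -member_of-> clan -hostile_to-> clan <-member_of- hostile player
    AQL = """
    FOR v IN 3..3 ANY @player GRAPH 'clan_graph'
      FILTER IS_SAME_COLLECTION('players', v) AND v._id != @player
      RETURN DISTINCT v.name
    """
    for name in db.aql.execute(AQL, bind_vars={"player": "players/arthas"}):
        print(name)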

  • STUDY OF POSSIBILITIES OF USING PHOTONIC AND QUANTUM COMPUTING TECHNOLOGIES TO CALCULATE EXACT PROBABILITY DISTRIBUTIONS OF STATISTIC VALUES FROM FINITE DISCRETE SEQUENCES

    A.K. Melnikov
    121-136
    Abstract

    This article explores the feasibility of using photonic and quantum computing technologies to calculate exact probability distributions of discrete sequence statistics, assuming the existence of working hardware prototypes of computing systems and the development of the required quantum algorithms. The performance evaluation of computing systems based on photonic computing technologies is based on materials from the Sarov Scientific Center for Physics and Microphysics of the Russian Academy of Sciences. The performance of a quantum computing system is assessed by comparing the time it takes to solve a boson sampling problem from a given distribution on a computing system with known performance and the time it takes to solve it on a quantum computing system. To assess the feasibility of using photonic and quantum computing technologies to calculate exact distributions, modern methods for calculating them are considered. These methods are based on solving the type multiplicity equation and a system of linear equations in non-negative integers. Analytical expressions determining the computational complexity of these methods are presented. The boundaries of the parameters of exact distributions accessible for calculation using photonic and quantum computing technologies are determined. A comparison of the obtained results with the results of using multiprocessor computing technologies to calculate exact distributions by various methods is presented. The feasibility of using photonic and quantum computing technologies to calculate exact distributions is analyzed by comparing the number of parameter pairs for which exact distributions can be calculated with the total number of distribution parameters within the Fisher region, which corresponds to a fivefold increase in sample size over the alphabet size. An analysis of the data on the number of sample parameters shows that, as the performance of the computing technologies used grows, the ability to calculate exact distributions increases. However, even with the most powerful quantum technologies, this number does not exceed one-tenth of the total number of exact distributions required for statistical analysis of discrete sequences in alphabets of up to 256 characters.

  • USING FUZZY GRAPH INVARIANTS FOR THE STABILITY ANALYSIS OF COMPLEX TRANSPORT SYSTEMS

    I.N. Rozenberg, I.A. Dubchak
    136-145
    Abstract

    This article examines the issues of assessing the sustainability of transport and logistics systems (TLS) under conditions of uncertainty, which play a key role in ensuring the effective functioning of supply chains. The sustainability of systems is analyzed in the context of their ability to adapt to external and internal influences, such as economic fluctuations, changes in demand, natural disasters and technological failures. In this paper, it is proposed to use fuzzy graph invariants, namely, a fuzzy dominating set, to assess and analyze the sustainability of transport and logistics systems under uncertainty. It is shown that a fuzzy dominating set allows solving the problem of placing distribution hubs in a transport and logistics system. Examples of finding fuzzy dominating sets for fuzzy and fuzzy temporal graphs as models of transport and logistics systems are presented. Fuzzy temporal graphs also allow for more adequate modeling and analysis of systems in cases where the time parameter is one of the important factors. The practical significance of the study lies in the possibility of designing a more reliable and adaptive TLS capable of functioning effectively under conditions of uncertainty. The results can be used to optimize logistics processes, reduce costs and increase the sustainability of supply chains. The findings also open prospects for further research in the field of integrating artificial intelligence methods and big data analysis in transport system management. Further research is proposed to be directed at integrating flow optimization methods considering time factors and developing digital twins of TLS.
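
    As a hedged illustration (plain Python; the adjacency degrees and the threshold are invented), a greedy search for a fuzzy dominating set repeatedly picks the node whose fuzzy neighborhood covers the most still-uncovered nodes, one simple way to shortlist distribution-hub locations:

    def greedy_fuzzy_dominating_set(mu, alpha=0.6):
        """mu[i][j] in [0, 1]: fuzzy adjacency (reachability strength) of an
        n-node transport graph; node i dominates j if mu[i][j] >= alpha
        (and always dominates itself). Greedy set-cover heuristic."""
        n = len(mu)
        cover = [{j for j in range(n) if j == i or mu[i][j] >= alpha}
                 for i in range(n)]
        uncovered, hubs = set(range(n)), []
        while uncovered:
            i = max(range(n), key=lambda k: len(cover[k] & uncovered))
            hubs.append(i)
            uncovered -= cover[i]
        return hubs

    mu = [[0, .8, .3, 0], [.8, 0, .7, .2], [.3, .7, 0, .9], [0, .2, .9, 0]]
    print(greedy_fuzzy_dominating_set(mu))   # prints [1, 2]: two hubs suffice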

  • CONSTRUCTING A MATHEMATICAL MODEL OF A VIRUS-PROTECTED INFORMATION SYSTEM BASED ON SIR-MODEL

    E.V. Karachanskaya, O.V. Rybkina
    145-157
    Abstract

    This article presents an analysis of deterministic models of computer virus epidemic propagation (SIR models) and their classification. The main areas of research into these models are highlighted. An analysis of existing stochastic models based on the SIR model and their diversity are presented. A method for constructing a stochastic SIR model based on the classical SIR model, represented by a system of Ito stochastic differential equations with a Wiener process, is proposed. The possibility of constructing a stochastic and deterministic model of an information system protected against computer virus infection with probability 1 is demonstrated: a stochastic model, in which infection by viruses occurs continuously, and a deterministic model, in which the virus is present in the information system. A mathematical stochastic model of an information system protected from computer virus outbreaks is constructed as a system of stochastic differential equations whose first integrals are invariants preserved with probability 1. A certain functional relationship between model variables, maintaining a constant value, is considered as the system's security indicator. Introducing a compensator (program control with probability 1 (PCP1)) into the model allows the specified security indicator, described by the model variables, to be maintained with probability 1. Similarly, based on the proposed algorithm, a deterministic model of an information system protected from computer virus infection is constructed. A control similar to programmed control with probability 1 (PCP1) is introduced into the constructed model, which allows the invariants to be maintained. A distinctive feature of the proposed models is that they preserve invariants associated with the properties that ensure the security of the information system. The behavior of the constructed models is studied using numerical simulation in the MathCad environment. Based on the research results, conclusions were drawn on the possibility of using the proposed method in constructing stochastic models based on other models of epidemic spread, as well as for models of protecting an information system from the spread of a computer virus epidemic.
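
    A minimal sketch of the kind of Ito SIR system the paper builds on (NumPy; all parameters invented), simulated with the Euler-Maruyama scheme; the noise enters the S and I equations with opposite signs, so the simplest invariant, S + I + R, is preserved with probability 1 along the path:

    import numpy as np

    def stochastic_sir(beta=0.5, gamma=0.1, sigma=0.05, dt=1e-3, T=40.0):
        """Euler-Maruyama for an Ito SIR system in which the same noise term
        enters S and I with opposite signs, so S + I + R is an invariant
        preserved with probability 1 (up to floating-point rounding)."""
        rng = np.random.default_rng(0)
        S, I, R = 0.99, 0.01, 0.0
        for _ in range(int(T / dt)):
            dW = rng.normal(0.0, np.sqrt(dt))
            flow = beta * S * I * dt + sigma * S * I * dW   # infection flux
            rec = gamma * I * dt                            # recovery flux
            S, I, R = S - flow, I + flow - rec, R + rec
        return S, I, R

    S, I, R = stochastic_sir()
    print(f"S + I + R = {S + I + R:.12f}")   # stays 1 along the whole path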

  • A TIME SERIES FORECASTING METHOD BASED ON COGNITIVE FUZZY MODELING AND REGRESSION ANALYSIS

    A.I. Guseva, R.M. Romanov
    157-178
    Abstract

    The relevance of the study stems from the low effectiveness of traditional time series forecasting methods under conditions of high uncertainty and limited data, which are typical of weakly formalized systems. The aim of the work is to develop and substantiate a time series forecasting method based on a hybrid approach that integrates cognitive fuzzy modelling, regression analysis, and the analytic network process. Within the study, a systematic review and comparative analysis of existing forecasting methods was carried out, including approaches based on fuzzy logic, neural network and cognitive modelling, as well as ensemble and hybrid methods, and their limitations were identified when dealing with small samples, nonlinear dependencies, and uncertainty. The proposed method includes: the construction of fuzzy cognitive maps, defuzzification of linguistic assessments, clustering of factors, application of the analytic network process to determine priorities, and the formation of a weighted regression model. The model undergoes statistical validation using the R², RMSE, MAE, and MAPE metrics, as well as diagnostic checks of the assumptions underlying regression analysis, including tests for multicollinearity and autocorrelation. Application of the method reduced RMSE from 0.38 to 0.22, MAE from 0.30 to 0.18, and MAPE from 11.65 % to 7.12 %, thereby confirming an improvement in the accuracy and robustness of forecasts under limited data compared with classical multiple regression. The novelty of the proposed method lies in the integration of cognitive modelling, regression analysis, and the analytic network process, whereby the strengths of each component compensate for their individual limitations, providing more accurate and robust forecasting under the uncertainty inherent in the system under study. The practical significance of the work consists in the possibility of applying the proposed method to support decision-making and to enhance the validity of forecasts in various subject domains and situations characterized by a limited number of observations, a substantial role of expert judgments, and a complex structure of causal relationships between indicators over time.

  • FEATURES OF THE AGENT-MODULAR APPROACH APPLICATION IN THE DESIGN AND IMPLEMENTATION OF INFORMATION-COMPUTATIONAL SYSTEMS

    A.F. Zaytsev
    179-189
    Abstract

    Information-computational systems play a key role in processing and analyzing large amounts of data, ensuring the efficient production, provision, and receipt of various digital goods and services. Under conditions of intellectualization and the increasing diversity of functional requirements imposed on information systems, new approaches to their construction and implementation are needed. The paper considers the main features of the proposed agent-modular approach, which supports the construction of flexible and scalable information-computational systems capable of functioning in a distributed environment. The agent-modular approach is a methodological approach to organizing information-computational systems based on integrating methods of system analysis with the agent-based and modular principles of building systems. The aim of the paper is to investigate the theoretical and practical aspects of applying the proposed agent-modular approach to building an information system, to analyze its advantages over other common approaches (object-oriented, component-based, service-oriented), and to present examples of its successful use. To achieve this goal, the following tasks must be solved: to research the theoretical foundations and identify the specific features of practical application of the proposed agent-modular approach; to analyze the differences and describe the advantages of the proposed approach in comparison with other approaches; and to present an example of building an information-computational system using the proposed approach. The following general scientific methods were used in the research: decomposition, analysis, synthesis, comparison, description, formalization, structurization, modeling, and design, as well as the basic principles of the system, modular, and agent-based approaches. Computers and computing software tools were used as research materials. As a result of the study, the main theoretical and practical aspects of the proposed approach were considered. The agent-modular approach can be used to build various information systems at the stages related to their modeling and design. The proposed approach allows describing the structure, functioning, and interaction of the various components of information-computational systems.

  • A STOCHASTIC FRAMEWORK FOR MODELING TRADERS’ COGNITIVE RISK UNDER VOLATILITY IN DECENTRALIZED FINANCIAL MARKETS

    D.G. Veselova, N.E. Sergeev
    189-199
    Abstract

    This study is devoted to the development of a stochastic model of traders’ cognitive risk as a core component of an intelligent decision support system (DSS) for decentralized cryptocurrency markets.
    The relevance of the research is determined by the specific characteristics of the DeFi environment, which include high and nonstationary volatility, the absence of centralized stabilization mechanisms, information asymmetry, and a strong influence of behavioral factors on trading decisions. Under these conditions, traditional deterministic and static DSS frameworks demonstrate limited effectiveness, as they fail to account for the dynamic perception of risk by market participants and the associated cognitive biases.
    The objective of this research is to formalize traders’ cognitive risk as a memory-dependent stochastic process and to integrate the proposed model into the architecture of an adaptive DSS for risk management. To achieve this objective, a stochastic differential equation is developed to describe the dynamics of cognitive risk as a function of market volatility and prevailing market regimes. In addition, a probabilistic transition kernel is introduced to link objective market characteristics with the subjective perception of risk. For parameter estimation, an identification framework based on the Expectation–Maximization algorithm combined with particle filtering is proposed, enabling robust inference in the presence of nonlinear dynamics and latent state variables. The research methodology includes numerical simulations on synthetic data, parameter estimation using real cryptocurrency time series, and validation of the proposed approach through walk-forward and purged K-fold schemes. The quality of probabilistic forecasts is evaluated using the Negative Log-Likelihood (NLL), Brier Score, and Expected Calibration Error (ECE) metrics. Experimental results demonstrate that incorporating the stochastic cognitive layer improves probabilistic forecasting performance by an average of 10–15%, reduces NLL by approximately 8%, decreases the Brier Score by about 11%, and lowers ECE by nearly 35%. Furthermore, the accuracy of predicting key transitions between market regimes increases by 5–7 percentage points. The obtained results confirm the effectiveness of the proposed stochastic cognitive-risk model and demonstrate its applicability for the development of adaptive DSS solutions in the DeFi domain. The proposed framework provides a foundation for further research on predictive models of trader behavior and the design of intelligent risk-management systems for decentralized financial ecosystems.
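
    A toy sketch of the modelling layer (NumPy; the drift, the perception map, and all parameters are invented stand-ins for the paper's stochastic differential equation and transition kernel): cognitive risk mean-reverts toward a perceived level driven by observed volatility, with memory entering through the relaxation rate.

    import numpy as np

    def simulate_cognitive_risk(vol, kappa=2.0, eta=0.8, r0=0.3, dt=1.0 / 1440):
        """Euler-Maruyama path of a mean-reverting cognitive-risk process
        driven by observed volatility vol[t]:
            dR = kappa * (target(vol) - R) dt + eta * vol dW
        target() is a toy stand-in for a transition kernel mapping objective
        volatility to subjectively perceived baseline risk."""
        rng = np.random.default_rng(1)
        target = lambda v: np.tanh(5.0 * v)       # saturating risk perception
        path = np.empty(len(vol))
        r = r0
        for t, v in enumerate(vol):
            r += kappa * (target(v) - r) * dt + eta * v * np.sqrt(dt) * rng.normal()
            path[t] = r = float(np.clip(r, 0.0, 1.0))
        return path

    vol = 0.02 + 0.05 * np.abs(np.sin(np.linspace(0, 8, 2000)))  # synthetic regimes
    risk = simulate_cognitive_risk(vol)
    print(risk[:3].round(4))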

SECTION III. ELECTRONICS, NANOTECHNOLOGY AND INSTRUMENTATION

  • INVESTIGATION OF SYNAPTIC PLASTICITY IN MEMRISTIVE CROSS-POINT STRUCTURES FOR NEUROMORPHIC ROBOTIC SYSTEMS

    R.V. Tominov, Z.E. Vakulov, V.I. Varganov, I.O. Ignatieva, V.A. Smirnov
    200-207
    Abstract

    The results show multilevel resistive switching and synaptic plasticity of a memristive cross-point based on a nanocrystalline zinc oxide film. It is shown that, as the amplitude and duration of the input pulses decrease, the memristive cross-point demonstrates resistive states from 4.27 × 10⁵ Ohm to 8.34 × 10⁷ Ohm. It is shown that the switching energy of some synaptic levels is on the order of picojoules, which is promising for creating compact low-power neuromorphic systems. Thus, nanocrystalline ZnO films are shown to exhibit synaptic plasticity: under applied voltage pulses, their resistive states can be varied over wide limits depending on the pulse amplitude and duration.
    The fabricated memristive cross-point demonstrates paired-pulse facilitation (PPF) at pulse intervals tp from 1 ms to 10 ms and paired-pulse depression (PPD) at tp from 50 ms to 100 ms. Analysis of the experimental PPF and PPD results showed that increasing the number of pulses from 10 to 90 raises the excitatory postsynaptic current (EPSC) from 32 μA to 73 μA for tp = 1 ms, from 31 μA to 59 μA for tp = 5 ms, and from 31 μA to 48 μA for tp = 10 ms, while decreasing the EPSC from 30 μA to 25 μA for tp = 50 ms, from 30 μA to 17 μA for tp = 70 ms, and from 30 μA to 5 μA for tp = 100 ms. It follows from the obtained results that the shorter the interval between pulses, the higher the PPF index. It can thus be concluded that the fabricated memristive cross-point based on nanocrystalline ZnO films imitates the crucial plasticity of the biological synapse, in which PPF and PPD plasticity is determined by the concentration of Ca²⁺ ions and which plays a role in many biological functions of the brain, such as localizing the source of a sound, pattern recognition, associative learning, and filtering out unnecessary information. The obtained results can be used for the hardware implementation of neural networks, neuromorphic structures of robotic complexes, prostheses, and artificial intelligence systems.

  • GENERATION OF COHERENT OPTICAL RADIATION MODULATED BY A QUADRATURE PHASE-SHIFT KEYED RADIO SIGNAL

    A.S. Mamitov, K.E. Rumyantsev
    207-220
    Abstract

    The use of broadband optical amplification, wavelength-division multiplexing, dispersion compensation of optical radiation, and differential phase-shift keying enables data transmission at rates of up to 40 Gbit/s. Prospects for further increasing transmission rates to 100 Gbit/s are associated with the use of multilevel modulation formats for radio signals on multiple subcarrier frequencies, modulation of the radiation from a single optical quantum generator by radio signals on multiple subcarriers, balanced homodyne detection of coherent optical radiation, and digital signal processing. Symbol-based transmission via quadrature phase-shift keying (QPSK) provides high data rates. Prior studies have substantiated the use of a single-sideband coherent optical radiation generation algorithm with subcarrier QPSK modulation. Due to hardware instabilities, amplitude and phase errors may arise, leading to quadrature imbalance. These inaccuracies introduce additional errors during demodulation of the received signal, which can significantly degrade the noise immunity of reception. The aim of this study is to analyze the process of generating single-sideband optical radiation modulated by a QPSK radio signal on a subcarrier frequency using two parallel Mach–Zehnder interferometers. A distinguishing feature of the proposed approach is that the derived mathematical relationships make it possible to subsequently assess the impact of amplitude and phase errors in quadrature signal generation (quadrature imbalance) on reception quality.
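
    A small NumPy sketch of the effect the derived relationships are meant to quantify (parameter values invented): gain and phase errors between the I and Q arms distort the QPSK constellation, which the standard baseband imbalance model makes visible directly through the error-vector magnitude (EVM).

    import numpy as np

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=(10_000, 2))
    s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # unit-power QPSK

    # Standard baseband IQ-imbalance model: gain error g, phase error phi.
    g, phi = 1.1, np.deg2rad(5.0)
    mu = 0.5 * (1 + g * np.exp(1j * phi))
    nu = 0.5 * (1 - g * np.exp(1j * phi))
    s_imb = mu * s + nu * np.conj(s)

    # Error-vector magnitude grows with both the gain and the phase error.
    evm = np.sqrt(np.mean(np.abs(s_imb - s) ** 2) / np.mean(np.abs(s) ** 2))
    print(f"EVM = {100 * evm:.2f} %")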

  • FORMATION OF THE IMPULSE RESPONSE OF A RECURSIVE LOW-PASS FILTER WITH FINITE IMPULSE RESPONSE AS A SUM OF QUASI-HARMONICS OF A TRUNCATED FOURIER SERIES

    D.I. Bakshun, I.I. Turulin
    221-228
    Abstract

    The problem of reducing the number of arithmetic operations in digital filtering algorithms is highly relevant, as it directly impacts power consumption, processing speed, and hardware costs. Under strict power efficiency requirements for mobile and embedded systems, minimizing multiplication and addition operations becomes a critical design factor. This paper presents a method for implementing a recursive filter with a finite impulse response (FIR) based on a truncated sinc function smoothed by a window (weighting function), represented as a sum of quasi-harmonic functions. These quasi-harmonic functions with different frequencies are polynomials of degree r. The study adopts a second-degree polynomial as a baseline and proposes a numerical method for increasing the polynomial order to improve the accuracy of the approximation. Accuracy analysis demonstrates that using 4th- and 6th-order polynomials achieves an approximation error of less than 1%. The coefficients of the non-recursive part of the filter are computed via inverse finite differences of the original FIR impulse response. These coefficients are integers whose values depend on the number of samples (length) of the half-period of the quasi-sinusoidal function, simplifying the implementation of such a recursive FIR (RFIR) filter on a field-programmable gate array (FPGA). Numerical analysis of finite differences for each quasi-sinusoid revealed that quadratic approximation requires only 16 samples but results in relatively high side-lobe levels (–30 dB). Switching to 4th-order approximation increases the number of non-zero coefficients to 20 and significantly reduces the stopband magnitude of the frequency response by 13 dB, reaching –43 dB.
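
    The recursive RFIR structure itself is not reproduced here; as a point of reference (NumPy; tap count and cutoff invented), the sketch below builds the underlying window-smoothed truncated-sinc impulse response and measures the stopband sidelobe level of the kind quoted above.

    import numpy as np

    def windowed_sinc(num_taps=129, fc=0.1):
        """Low-pass FIR prototype: truncated sinc smoothed by a Hann window.
        fc is the cutoff as a fraction of the sampling rate."""
        n = np.arange(num_taps) - (num_taps - 1) / 2
        h = 2 * fc * np.sinc(2 * fc * n) * np.hanning(num_taps)
        return h / h.sum()                       # unit DC gain

    h = windowed_sinc()
    H = np.abs(np.fft.rfft(h, 8192))
    f = np.linspace(0.0, 0.5, len(H))
    worst = 20 * np.log10(H[f > 0.15].max())     # peak stopband sidelobe, dB
    print(f"stopband peak: {worst:.1f} dB")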

SECTION IV. MACHINE LEARNING AND NEURAL NETWORKS

  • PROSPECTS FOR THE APPLICATION OF QUANTUM COMPUTING IN ONBOARD COMPUTING SYSTEMS OF ROBOTIC COMPLEXES

    N.A. Bocharov, N.B. Paramonov
    229-239
    Abstract

    Modern robotic systems are solving increasingly complex tasks, imposing higher demands on the speed and efficiency of onboard computing systems. Traditional methods of increasing performance (scaling hardware, parallel computing, etc.) are approaching their limits, necessitating the search for fundamentally new approaches. Quantum computing is considered a promising direction that could significantly surpass classical computational capabilities in certain tasks. In this regard, the goal of this study is to explore the applicability of quantum computing for onboard computing systems in robotic complexes (RCs). To achieve this goal, a comprehensive analysis of the requirements (performance, energy consumption, size and weight constraints, reliability, etc.) for onboard computing systems of RCs has been conducted. The potential of quantum algorithms in solving typical robotic tasks, including optimization problems and machine learning, has been assessed, followed by simulation modeling and comparison with classical methods. Additionally, current limitations of modern quantum computers (e.g., limited qubit count and decoherence issues) have been examined, and a forecast has been made regarding their development in the coming years based on technological trends. The study confirms the promising application of quantum computing for solving optimization and machine learning problems, which are critical for intelligent RCs. However, current technological limitations (size, operational conditions, and instability of quantum processors) do not yet allow for their direct use onboard. Nevertheless, directions for further research have been proposed, and possible scenarios for the gradual integration of quantum computing into RC architectures over the next 5–15 years have been considered, particularly as quantum processors become more compact and methods for integrating them into onboard systems improve. Thus, as existing barriers are overcome, quantum computers may eventually become an integral part of onboard control systems for RCs, providing a significant leap in their performance.

  • RECOGNITION OF EMOTIONAL STATES IN RUSSIAN SPEECH USING MFCC FUNCTIONS AND THE BLSTM MODEL FOR THE DUSHA DATASET

    P.G. Bukina, A.A. Merinov, S.S. Kharchenko, E.Y. Kostyuchenko
    240-248
    Abstract

    This paper investigates the task of automatic emotion recognition from speech signals using contemporary deep learning techniques. The relevance of this study arises from the increasing demand for intelligent systems capable of assessing human emotional states, with potential applications in medicine, psychology, information systems, and personnel management. The primary objective is to develop an efficient neural network model for emotion recognition in Russian speech that outperforms existing state-of-the-art architectures. The experiments were conducted using the open-source Russian-language dataset Dusha, which contains 300,000 audio recordings. A total of 183,055 samples from the Crowd subset, annotated with four emotional categories (joy, sadness, anger, and neutral state), were used for training. Mel-frequency cepstral coefficients (MFCCs) were extracted as input features (20 coefficients with a 20 ms window and 10 ms overlap), followed by normalization. The baseline architecture employed a bidirectional long short-term memory network (BLSTM), capable of modeling both past and future temporal dependencies. To improve generalization and mitigate overfitting, the model was enhanced with convolutional (CNN) layers, MaxPooling layers, and regularization mechanisms including Dropout and Batch Normalization. The resulting hybrid CNN–BLSTM architecture achieved 62.9% accuracy on the test set, exceeding the baseline performance (56.2%) by 6.7 percentage points. The results were further compared with state-of-the-art architectures such as MobileNetV2, HuBERT, and WavLM. The analysis highlights future directions for improving model performance through structural optimization, class balancing, and incorporation of additional acoustic features.
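
    A hedged sketch matching the described front end and architecture (librosa and Keras; the exact layer sizes are invented): 20 MFCCs with a 20 ms window and 10 ms hop feed a CNN-BLSTM classifier over the four emotion classes.

    import librosa
    import numpy as np
    from tensorflow.keras import layers, models

    def mfcc_features(path, sr=16_000):
        """20 MFCCs with a 20 ms window and a 10 ms hop, normalized per
        coefficient; returns an array of shape (frames, 20)."""
        y, _ = librosa.load(path, sr=sr)
        m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20,
                                 n_fft=int(0.020 * sr),
                                 hop_length=int(0.010 * sr))
        m = (m - m.mean(axis=1, keepdims=True)) / (m.std(axis=1, keepdims=True) + 1e-8)
        return m.T

    model = models.Sequential([
        layers.Input(shape=(None, 20)),
        layers.Conv1D(64, 5, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling1D(2),
        layers.Bidirectional(layers.LSTM(128)),
        layers.Dropout(0.4),
        layers.Dense(4, activation="softmax"),   # joy / sadness / anger / neutral
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])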

  • METHODOLOGY FOR CONSTRUCTING AND EVALUATING AN ONTOLOGICAL PROFILE FOR CONTENT PERSONALIZATION SYSTEMS: STAGES AND EVALUATION CRITERIA

    Z.H. Mohammad
    248-262
    Abstract

    This article presents the development and testing of a methodology for building an ontological profile designed for content personalization systems. It details the modular architecture of a web-based personalization system, illustrating the text processing and analysis methods and algorithms employed at each stage, and provides a step-by-step procedure for ontology creation. The methodology encompasses primary data processing, including the extraction of keywords and phrases, followed by their hierarchical clustering to reveal the semantic structure of the domain. Subsequent stages involve defining thresholds to filter out insignificant connections, and extracting and formalizing relationships between concepts using natural language processing techniques such as word-sense disambiguation and semantic similarity-based relationship extraction. An integrated pipeline was developed to implement this process, combining improved algorithms proposed by the author in previous studies, namely, an algorithm for extracting key phrases from an individual text based on semantic similarity and a modified algorithm for word-sense disambiguation. This pipeline also optimally integrated all necessary natural language processing tools, ensuring the efficient operation of these methods in the process of automatically constructing an ontology from text. The study places particular emphasis on a comprehensive evaluation of the resulting ontology using a specialized set of criteria designed to objectively assess the profile's quality, completeness, and consistency. An important component of the work is a computational experiment that clearly demonstrates the impact of each data processing stage on the final quality and efficacy of the ontology. The results show that the proposed method enables the construction of a practical, scalable, and relevant ontology, suitable for industrial deployment and integration into personalization systems to enhance their accuracy and adaptability.
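
    As an illustration of the hierarchical-clustering stage alone (SciPy; the embedding vectors and the distance threshold are assumptions), extracted key phrases are grouped by semantic similarity, and the resulting groups become candidate ontology concepts:

    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    def cluster_keyphrases(vectors, phrases, max_dist=0.4):
        """Agglomerative (average-linkage) clustering of key-phrase embedding
        vectors by cosine distance; links weaker than the threshold are cut,
        and each remaining group becomes a candidate ontology concept."""
        Z = linkage(pdist(vectors, metric="cosine"), method="average")
        labels = fcluster(Z, t=max_dist, criterion="distance")
        groups = {}
        for phrase, label in zip(phrases, labels):
            groups.setdefault(label, []).append(phrase)
        return list(groups.values())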

  • INTELLIGENT METHODS OF PARAMETRIC FORECASTING AND OPTIMIZATION OF UAV TRAJECTORIES

    V.I. Danilchenko, V.V. Bova
    263-276
    Abstract

    This paper examines the problem of intelligent parametric forecasting and trajectory optimization for unmanned aircraft systems (UAS) using evolutionary algorithms and machine learning methods. The relevance of the study stems from the multi-criteria nature and high complexity of UAS trajectory generation processes, as well as the need for accurate and timely assessment of their flight parameters. This is particularly important for ensuring the reliability, safety, and efficient performance of flight missions in UAS operating conditions, including scenarios related to the operation of critical infrastructure facilities. The objective of the study is to improve the accuracy of trajectory parameter diagnostics and the reliability of parametric forecasting of UAS trajectories under conditions of uncertainty and the multi-criteria nature of the problem. The paper proposes a hybrid approach incorporating a genetic algorithm (GA), particle swarm optimization (PSO), and an XGBoost machine learning model that provides adaptive assessment of the quality of the generated solutions. A computational software package has been implemented, including selection, recombination, mutation, and elite inheritance mechanisms, as well as a machine learning module for validating route trajectories and associated parameters. A computational experiment was conducted, which compared the effectiveness of GA and PSO under various operating scenarios. Testing was performed on industry-specific datasets with varying numbers of iterations. The computational experiment revealed the advantage of the genetic algorithm, namely, a 14–17% improvement in the quality of design solutions. The results of the study demonstrate high adaptability and practical applicability in modeling, parametric forecasting, and routing tasks, and also indicate the potential for integration with intelligent UAS navigation and monitoring systems. The article's materials are of practical interest to specialists in the field of UAS development and operation, as well as to researchers working on multi-criteria route planning, parametric forecasting, and improving the reliability of UAS operations.
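
    A compact sketch of the evolutionary core (NumPy; the fitness function is a toy length-plus-clearance criterion standing in for the paper's multi-criteria one, and the XGBoost validation module is omitted):

    import numpy as np

    rng = np.random.default_rng(0)
    START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 10.0])
    OBSTACLE, RADIUS = np.array([5.0, 5.0]), 2.0

    def fitness(wps):
        """Route cost: path length plus a penalty for violating the minimum
        clearance RADIUS around the obstacle (vertex clearance only, for
        brevity); lower is better."""
        path = np.vstack([START, wps, GOAL])
        length = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
        clearance = np.linalg.norm(path - OBSTACLE, axis=1).min()
        return length + 50.0 * max(0.0, RADIUS - clearance)

    pop = rng.uniform(0, 10, size=(60, 4, 2))           # 60 routes, 4 waypoints
    for _ in range(200):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[scores.argsort()[:10]]              # elite inheritance
        pa = elite[rng.integers(0, 10, 50)]             # selection
        pb = elite[rng.integers(0, 10, 50)]
        children = 0.5 * (pa + pb)                      # arithmetic crossover
        children += rng.normal(0, 0.3, children.shape)  # mutation
        pop = np.vstack([elite, children])
    best = min(pop, key=fitness)
    print(f"best route cost: {fitness(best):.2f}")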

  • NEURAL NETWORK APPROXIMATION OF MODEL-PREDICTIVE CONTROL FOR A DYNAMIC OBJECT STABILIZATION SYSTEM

    B.A. Komarov, S.V. Leonov, T.E. Mamonova
    276-287
    Abstract

    Relevance. When solving problems of stabilization of dynamic objects, classical model predictive control is widely used. It provides high-quality control by solving an optimization problem at each step, but it incurs significant computational costs, which limits its application in real-time systems with high update-frequency requirements. It is therefore relevant to investigate the applicability of a neural network controller, trained on a model predictive controller (MPC), to the problem of stabilizing the position of a dynamic object under limited computational and time resources. Goal. The purpose of the presented work was to develop and study a neural network controller trained on the basis of an MPC to stabilize the position of a dynamic object on a mobile platform. Methods. The work used methods of system analysis and simulation modeling, as well as experimental tests on a bench. Results and conclusions. As part of the study, a neural network controller was developed and trained that approximates the behavior of the MPC using data obtained while controlling a real balancing platform. Training was conducted on the input and output data of the MPC without using the internal model of the system, which made it possible to reproduce the dynamics of the controller at significantly lower computational cost. Experimental results showed that the neural network model provides stabilization quality comparable to the original MPC, while the computation time was reduced from 47 ms to 1.6 ms, a speed-up of about 29 times. The proposed approach demonstrates the potential of neural network control methods for replacing complex optimization-based controllers in systems with limited computing resources.
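
    A minimal sketch of the imitation setup (scikit-learn; the file names and network size are invented): the network is fitted on logged MPC inputs and outputs only, with no access to the internal model, and a single forward pass then replaces the per-step optimization at runtime.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Logged closed-loop data: state x(t) paired with the MPC's control u(t).
    X = np.load("mpc_states.npy")      # (N, n_x): angles, rates, errors, ...
    U = np.load("mpc_controls.npy")    # (N,) single control channel assumed

    net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                       max_iter=2000, early_stopping=True)
    net.fit(X, U)

    # At runtime one forward pass replaces the per-step optimization.
    u_now = net.predict(X[:1])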

  • ANALYSIS OF TRADITIONAL AND NEURAL NETWORK-BASED CONTROL METHODS FOR ELECTRIC DRIVES IN ROBOTICS AND PERSPECTIVES OF HYBRID APPROACHES

    A.I. Tataurov, V.E. Vavilov
    287-298
    Abstract

    The objective of this study is to conduct a comparative analysis of traditional and neural network-based control methods for electric drives in robotics, with an emphasis on identifying their strengths and weaknesses, determining their areas of application, and assessing the prospects for the development of hybrid approaches. Effective control of electric drives is critically important for modern robotic systems, which must demonstrate high performance, reliability, and versatility in various application domains. Specifically, key challenges include high-precision trajectory tracking, energy-efficient control, robust control under uncertainties and disturbances, constraint-aware control, as well as synchronized and coordinated control of multiple electric drives. In this regard, optimizing the control of electric drives to ensure motion accuracy, energy efficiency, and adaptation to changing conditions becomes a top priority. To achieve this goal, the study systematizes and analyzes the characteristics and applications of traditional electric drive control methods, such as PID controllers, Kalman filters, sliding mode control, and model predictive control. It also examines key neural network-based approaches to electric drive control, including feedforward neural networks, recurrent neural networks, radial basis functions, neuro-fuzzy systems, and reinforcement learning. A comparative analysis of these methods is conducted to identify their advantages and limitations based on key parameters such as trajectory tracking accuracy, robustness to disturbances and uncertainties, adaptability to changing operating conditions, and computational complexity. Additionally, the study investigates and assesses the prospects for hybrid electric drive control methods that combine the reliability and control quality of traditional methods in linear and structured environments with the flexibility and adaptability of neural network-based methods in complex and dynamic robotic systems. The study’s key findings indicate that traditional electric drive control methods, such as PID controllers and sliding mode control, remain effective and preferable in linear and well-defined systems due to their simplicity and reliability. At the same time, neural network-based approaches demonstrate significant advantages in controlling complex nonlinear systems, as well as in uncertain conditions requiring adaptation to changing environments. Special attention is given to hybrid control methods, which integrate the strengths of both traditional and neural network-based approaches. These methods are regarded as the most promising and advanced direction, enabling the development of intelligent and robust electric drive control systems capable of operating efficiently in complex and dynamic environments.
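
    For reference, the simplest of the traditional controllers discussed, a discrete PID loop with anti-windup, in the form that hybrid schemes typically pair with a learned component (the gains and the toy drive model are illustrative):

    class PID:
        """Discrete PID controller with a clamped integrator (anti-windup)."""
        def __init__(self, kp, ki, kd, dt, u_max):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.dt, self.u_max = dt, u_max
            self.i = 0.0
            self.prev_e = 0.0

        def step(self, setpoint, measured):
            e = setpoint - measured
            self.i = min(self.u_max, max(-self.u_max, self.i + e * self.dt))
            d = (e - self.prev_e) / self.dt
            self.prev_e = e
            u = self.kp * e + self.ki * self.i + self.kd * d
            return min(self.u_max, max(-self.u_max, u))   # actuator limit

    # Toy first-order drive model: dw/dt = (K * u - w) / tau
    pid = PID(kp=2.0, ki=1.5, kd=0.05, dt=1e-3, u_max=24.0)
    w, dt = 0.0, 1e-3
    for _ in range(3000):
        u = pid.step(100.0, w)                 # track 100 rad/s
        w += dt * (8.0 * u - w) / 0.05
    print(f"speed after 3 s: {w:.1f} rad/s")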