No. 3 (2025)

Published: 2025-07-24

SECTION I. CYBERATTACKS AND THEIR DETECTION

  • MULTIMODAL DATA FEATURE EXTRACTION METHOD FOR NETWORK ATTACK CLASSIFICATION

    A.V. Balyberdin
    6-16
    Abstract

    An intrusion detection system (IDS) is an important component of corporate data network (CDN) protection. An IDS analyzes network traffic and detects network attacks. By detection method, IDS can be classified into signature-based analysis systems, anomaly detection systems (ADS), and hybrid systems combining the two approaches. Recently, anomaly detection systems have been developing actively. For an ADS, a network attack appears as anomalous behavior of network traffic, described by a set of features or event attributes. Modern IDS are based on machine and deep learning methods, so the detection of network attacks and anomalies is formulated as a classification and clustering problem. Solving these problems requires methods for optimizing the feature space of network traffic. The aim of the work is to develop a feature extraction method based on a multimodal representation of network traffic data for classifying network attacks. The paper reviews relevant studies of feature extraction methods from various fields. The objective of the study is to improve classification efficiency using a multimodal representation of network traffic features. The result of the work is a method for extracting data features based on two modalities: a spectral representation of network traffic features and an image feature matrix. The novelty of the presented method lies in applying the windowed Fourier transform to network traffic events, followed by the calculation of spectral features for the discrete signals, as well as transforming data features into an image matrix and expanding it to optimize the feature space using a convolutional neural network (CNN). Evaluation showed that the multimodal method increased classification accuracy for unbalanced classes of network attacks.
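    As a toy illustration of the spectral modality described above, the sketch below slides a Hann window over a discrete event signal and derives two per-frame spectral features from the windowed DFT. The window length, hop size and feature choices are illustrative assumptions, not parameters from the paper.

```python
import cmath
import math

def windowed_spectral_features(signal, win_len=8, hop=4):
    """Slide a Hann window over a discrete event signal and compute,
    for each frame, two simple spectral features: total spectral
    energy and spectral centroid of the one-sided DFT magnitudes."""
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win_len - 1))
            for n in range(win_len)]
    features = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = [signal[start + n] * hann[n] for n in range(win_len)]
        mags = []
        for k in range(win_len // 2 + 1):
            # DFT bin k of the windowed frame
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                    for n in range(win_len))
            mags.append(abs(s))
        energy = sum(m * m for m in mags)
        total = sum(mags)
        centroid = (sum(k * m for k, m in enumerate(mags)) / total
                    if total > 0 else 0.0)
        features.append((energy, centroid))
    return features

# A pure tone at one quarter of the sampling rate (DFT bin 2 of 8)
feats = windowed_spectral_features([0, 1, 0, -1] * 8)
print(len(feats), feats[0])
```

Each frame's feature pair could then populate one row of the multimodal feature matrix.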

  • RESEARCH OF MACHINE LEARNING METHODS FOR DETECTING SPOOFING ATTACKS IN DECENTRALIZED NETWORKS

    M.A. Lapina, R.A. Dymuha, N.N. Kucherov, E.S. Basan
    16-31
    Abstract

    Unmanned aerial vehicles (UAVs) are increasingly present in our lives and are used for purposes such as cargo delivery, monitoring, household management, exploration and entertainment. Along with their growing popularity, the number of people who intentionally interfere with the operation of UAVs or exploit them for their own purposes is also increasing, using various types of attacks to disable or intercept a drone. Spoofing attacks are among the most common and dangerous, as they allow attackers to act unnoticed, faking the identifiers of autonomous aircraft or operators and posing as legitimate participants in the system. The purpose of such attacks may be to intercept control, steal data, commit sabotage, or use the UAV for malicious actions such as espionage or causing damage and malfunctions. Every year these attacks become harder to prevent, as they are difficult to detect and can lead to serious consequences; this motivates detecting spoofing attacks on unmanned vehicles using machine learning. The article discusses spoofing attacks on UAVs, analyzes spoofing on autonomous aircraft, and studies machine learning methods for detecting spoofing attacks on a dataset using the KNIME platform. The results demonstrate that the ensemble-based detection method, with the Tree Ensemble Learner and Random Forest Learner models showing accuracies of 97.110% and 97.039%, respectively, performs best among the methods compared; it can improve the security of unmanned aerial vehicles, reduce the burden on operators and increase the reliability of the system as a whole. In the future, the proposed approach can be extended to detect other types of cyberattacks, making it a more universal method of protection against intruders.
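    The ensemble idea the abstract evaluates can be illustrated with a minimal pure-Python stand-in: a vote over decision stumps trained on two synthetic telemetry features (a position-jump score and a signal-strength anomaly score, both invented here for the sketch). The actual study uses KNIME's Tree Ensemble and Random Forest learners.

```python
import random

def train_stump(rows, labels, feature):
    """Pick the threshold/sign on one feature minimizing training errors."""
    best = (None, None, len(rows) + 1)          # (threshold, sign, errors)
    for t in sorted({r[feature] for r in rows}):
        for sign in (1, -1):
            errs = sum((sign * (r[feature] - t) > 0) != y
                       for r, y in zip(rows, labels))
            if errs < best[2]:
                best = (t, sign, errs)
    return (feature, best[0], best[1])

def stump_predict(stump, row):
    f, t, sign = stump
    return sign * (row[f] - t) > 0

def ensemble_predict(stumps, row):
    votes = sum(stump_predict(s, row) for s in stumps)
    return votes * 2 > len(stumps)              # strict majority vote

random.seed(0)
# Synthetic telemetry: legitimate flights cluster near zero anomaly,
# spoofed flights show large position jumps and signal anomalies.
rows = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(40)]
rows += [(random.gauss(4, 1), random.gauss(3, 1)) for _ in range(40)]
labels = [False] * 40 + [True] * 40
stumps = [train_stump(rows, labels, f) for f in (0, 1)]
acc = sum(ensemble_predict(stumps, r) == y
          for r, y in zip(rows, labels)) / len(rows)
print(round(acc, 2))
```

Real tree ensembles vote over many deep trees rather than two stumps, but the aggregation principle is the same.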

  • A SYSTEM FOR AUTOMATING DOCUMENT FLOW AND MONITORING ECONOMIC SECURITY INCIDENTS BASED ON ARTIFICIAL INTELLIGENCE TECHNOLOGIES

    A.E. Anpilogova, V.A. Anpilogov
    31-41
    Abstract

    Automation of document flow is a key element of process optimization and efficiency improvement. Document-flow automation based on artificial intelligence improves the management of economic security incidents by streamlining work processes and reducing costs. The transition to automated document flow in Russia is complicated by a complex regulatory framework and large-scale implementation costs at enterprises. Automation helps to comply with legal requirements and reduces the risks of legal and financial consequences. Integration of digital signatures increases the efficiency of document approval.
    The implementation of automation systems supports national digital transformation goals. Automation of document flow reduces dependence on paper processes and facilitates the creation of centralized digital repositories. The implementation of document automation systems requires a strategic approach and careful planning. Document automation provides time savings, reduced errors and increased compliance with regulatory standards. The article discusses the theoretical foundations of BPM, integration of digital technologies and regulatory aspects specific to Russia. The proposed system combines monitoring with AI and IoT, provides real-time data processing, automates the creation of legal documents and reports. The workflow automation system is based on data integration, artificial intelligence technologies and seamless solutions. The system combines monitoring technologies, facial recognition and behavior analysis algorithms, a centralized database and a communication module. The system generates reports and legal documents certified by QES and ensures interaction with law enforcement agencies and security services. Implementation results: a 30–40% reduction in operating costs and a 50% reduction in losses. The system complies with digital transformation standards and supports the modernization of the national economy.

  • MODELING OF SECURITY THREATS FOR BUILDING A COMPREHENSIVE INFORMATION PROTECTION SYSTEM AT OBJECT OF INFORMATIZATION

    I.A. Eremin, A.E. Yakushina, I.L. Shcherbov
    41-54
    Abstract

    Within the framework of this study, the typical structure of an informatization facility was analyzed in detail, which allows qualified specialists to better understand which categories of objects and subjects of information processing may be exposed to security threats. The main mechanism for building a comprehensive information security system is the threat model. This model is aimed at identifying potential threats, analyzing them, and minimizing the risks of their implementation and the associated damage to the informatization facility. The study considers the domestic FSTEC knowledge base and the international ATT&CK and CAPEC knowledge bases for building a threat model. They contain comprehensive information about the tactics and techniques used by intruders in attacks on informatization facilities. In the course of the research, the various tactics used by attackers were classified in detail. Special attention was paid to the tactics that determine the entry points of the informatization object, which are used to carry the attack further. In developing an effective threat model, it is advisable to conduct a comprehensive analysis of the data contained in these knowledge bases and then use them jointly when building a threat model for informatization facilities. This approach makes it possible to systematize and structure the information, contributing to a more accurate and well-founded model of how potential threats are realized at different stages of an attack on an informatization facility. To build a comprehensive information security system, a decision support system was considered, and modern research on the methods applied in constructing such systems was analyzed.
    As a result of the work, the relationships between knowledge bases of tactics and techniques and well-known vulnerabilities were shown using an ontology, which makes it possible to build a model of a complex attack and to identify the targets affected by an attacker at its various stages, the criticality of the exploited vulnerability, the platform on which this vulnerability is exploited, and the negative consequences.

SECTION II. METHODS OF PROTECTION AND SECURITY TECHNOLOGIES

  • CLASSIFICATION OF PROCESSING NODES IN BIG DATA SYSTEMS ACCORDING TO THE ZERO TRUST APPROACH

    M.A. Poltavtseva, D.V. Ivanov
    55-62
    Abstract

    Data cybersecurity is one of the most important factors for the successful implementation of the national project ‘Data Economy and Digital Transformation of the State’. The challenges of building secure big data systems lie in their heterogeneous nature, the large number of heterogeneous tools, high connectivity and high trust between distributed components. Reducing internal trust and the attack surface according to the zero-trust approach is necessary to increase the security of such systems with the least impact on their performance. The aim of the paper is to create a method for dynamic classification of nodes and data processing components in heterogeneous big data systems, based on applying different trust-reduction approaches to the objects realising the information processing. The paper considers the zero trust approach as applied to the class of systems under study, as well as the task of extended implementation of the principle of least privilege to reduce the attack surface. The authors present a classification of handler nodes based on their operations with data, unified according to the previously developed conceptual data model. A comparison of nodes and of the security methods applied to them is proposed, based on whether access to data semantics and data components is needed to perform the operations. Based on this classification, a method of dynamic node type determination during system operation is developed for situations where the component composition of a big data processing system changes, as is typical for multi-component distributed highly loaded systems. The results of the work are part of a complex consistency approach to the construction of secure big data processing systems.
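    A schematic, hypothetical sketch of the dynamic classification idea: assign a node a trust class from the set of operations it currently performs, depending on whether those operations need access to data semantics or only to data components. The operation names and class labels below are illustrative, not the authors' taxonomy.

```python
# Operations abstracted by what they must access: data semantics
# (decrypted content) versus data components (structure/metadata only).
NEEDS_SEMANTICS = {"filter", "aggregate", "join"}
NEEDS_COMPONENTS_ONLY = {"route", "replicate", "shard"}

def classify_node(operations):
    """Dynamically assign a trust class to a processing node from the
    set of operations it currently performs; re-run on every change
    of the system's component composition."""
    ops = set(operations)
    if ops & NEEDS_SEMANTICS:
        return "semantic-processor"    # needs content: strongest controls
    if ops & NEEDS_COMPONENTS_ONLY:
        return "structural-processor"  # can work on protected data units
    return "transport-only"            # never touches data: least privilege

print(classify_node(["route", "join"]))   # semantic-processor
print(classify_node(["replicate"]))       # structural-processor
```

A semantic access requirement dominates, since such a node must see plaintext regardless of what else it does.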

  • STOCHASTIC DYNAMIC MODEL OF UNDERWATER WIRELESS SENSOR NETWORK BASED ON LOUVAIN CLUSTERING ALGORITHM

    A.M. Maevsky, V.A. Ryzhov, T.A. Fedorova, I.V. Kozhemyakin, N.M. Burov
    62-81
    Abstract

    Underwater wireless sensor networks (UWSNs) play an important role in monitoring ocean processes, underwater navigation, environmental control and security. However, features of the underwater environment such as high signal attenuation, limited energy resources and a changing network topology create significant challenges for organizing efficient data transmission. To optimize network operation and extend its service life, clustering is used to group nodes, reduce the load on communication channels and improve energy efficiency. However, when network nodes fail, static clustering becomes ineffective, which requires dynamic reclustering. Redistributing node roles and rebuilding the network topology maintains communication stability and minimizes data losses while accounting for the energy balance of the network as a whole. This paper examines modern approaches to clustering and reclustering in UWSNs that take into account the energy balance, node failure probability and interference in the transmission medium. The development of adaptive UWSN control methods is an urgent task aimed at increasing the reliability, energy efficiency and durability of underwater communication networks. The article presents a stochastic cross-level model for dynamic three-dimensional UWSNs of arbitrary topology. The model uses a new clustering/reclustering technique based on the Louvain algorithm, a routing protocol built on Dijkstra's method, and time-division multiple access (TDMA). The proposed UWSN operating model is the basis for the developed simulation complex, which allows assessing the efficiency and reliability of the network, taking into account loss of connectivity and vulnerabilities, for UWSNs of various scales and purposes. As part of the research, a parametric analysis of systematic calculations of the UWSN functional characteristics was performed. The results showed that the proposed simulation model provides an increase in autonomous network operation time and a decrease in the number of lost messages compared to the models of other authors.
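    Louvain clustering, which the model relies on, greedily maximizes Newman modularity Q. A minimal pure-Python computation of Q for a candidate partition of a small sensor graph (the graph and partition are illustrative):

```python
from collections import defaultdict

def modularity(edges, partition):
    """Newman modularity Q of a node partition for an undirected graph
    given as an edge list; the Louvain algorithm greedily maximizes Q."""
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    community = {n: c for c, nodes in enumerate(partition) for n in nodes}
    q = 0.0
    for u, v in edges:
        if community[u] == community[v]:
            q += 1.0 / m                     # intra-community edge term
    for nodes in partition:
        for u in nodes:
            for v in nodes:
                q -= degree[u] * degree[v] / (4.0 * m * m)  # null model
    return q

# Two dense acoustic clusters linked by a single long-range edge
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))
```

The natural two-cluster split scores well above the trivial one-community partition, whose Q is zero by construction.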

  • ADAPTIVE ALGORITHM FOR PROCESSING SPATIAL-TEMPORAL SIGNALS WITH REED-SOLOMON CODING FOR A THREE-DIMENSIONAL MODEL OF A WIRELESS RADIO COMMUNICATION CHANNEL

    V.P. Fedosov, Mohammedtaqi M. Jawad Al-Musawi Wisam, S.V. Kucheryavenko
    81-90
    Abstract

    Reducing the probability of errors in message transmission is important in satellite, wireless and space communication systems. Reducing the probability of bit errors in a wireless communication system is possible by encoding the data being sent. Channel encoding allows detecting and correcting errors in message transmission over a noisy channel. The aim of the work is to study how using Reed-Solomon codes together with a space-time signal processing algorithm in a receiver with an adaptive antenna array increases noise immunity in wireless radio communication systems. In the presence of complex signal propagation paths, this allows performing spatial filtering in channels with reflections. The adaptation method considered in this paper is based on the eigenvectors and eigenvalues of the spatial correlation matrix. For Reed-Solomon codes, the simulation results show a significant decrease in bit error rates due to the correction of transmission errors. By using adaptive algorithms for single-input multiple-output (SIMO-OFDM) and multiple-input multiple-output (MIMO-OFDM) orthogonal frequency division multiplexing systems together with Reed-Solomon coding of the transmitted message, the signal-to-noise ratio gain for a fixed bit error level reached 8 dB and 5 dB, respectively. The results show that the adaptive algorithm combined with a Reed-Solomon code can increase throughput while significantly reducing the error probability. Under conditions of multipath signal propagation, it can be argued that adaptive space-time algorithms improve the noise immunity of the receiving system during signal processing.
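    The error-correction gain of Reed-Solomon coding can be quantified with a standard textbook estimate: an RS(n, k) code corrects up to t = (n - k)/2 symbol errors, so under independent symbol errors the block failure probability is a binomial tail. A short sketch (code parameters illustrative):

```python
from math import comb

def rs_block_failure_prob(n, k, p_sym):
    """Probability that an RS(n, k) code word is uncorrectable, assuming
    independent symbol errors with probability p_sym: more than
    t = (n - k) // 2 symbols of the block are in error."""
    t = (n - k) // 2
    return sum(comb(n, i) * p_sym ** i * (1 - p_sym) ** (n - i)
               for i in range(t + 1, n + 1))

# RS(255, 223) corrects up to 16 symbol errors per 255-symbol block
for p in (0.01, 0.02, 0.05):
    print(p, rs_block_failure_prob(255, 223, p))
```

The steep drop of this tail as the channel (and hence p_sym) improves is what shows up in simulation as the coding gain in required signal-to-noise ratio.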

SECTION III. CRYPTOGRAPHIC SYSTEMS AND ENCRYPTION

  • DEVELOPMENT OF ALPHABETICAL DISSYMMETRIC TRIGRAM CRYPTOSYSTEM BASED ON SOLVING A NORMAL SYSTEM OF DIOPHANTINE EQUATIONS OF THE 5TH DEGREE OF DIMENSION SIX OVER THE RING OF GAUSSIAN INTEGERS

    V.O. Osipyan, E.S. Fursina, E.T. Alghareeb
    91-99
    Abstract

    The aim of the work is to develop a mathematical model of an alphabetic cryptosystem based on a general two-parameter solution of a normal system of Diophantine equations of the fifth degree of dimension six over the ring of Gaussian integers, and to write a program demonstrating the capabilities of such a cryptosystem. The paper implements C. Shannon's idea of building a mathematical model of a cryptosystem containing the Diophantine difficulties encountered in solving normal and other multigrade systems of Diophantine equations (MSDE) of the Tarry-Escott type. Shannon noted that cryptosystems containing Diophantine difficulties have the greatest uncertainty in key selection. The peculiarity of such MSDEs is that no general non-exhaustive methods for solving them are known, which rests on the negative solution of Hilbert's 10th problem on the algorithmic undecidability of an arbitrary Diophantine equation in integers.
    It should also be noted that Diophantine equations are a powerful tool in cryptography due to their complexity, but their use requires a deep understanding of the mathematical apparatus of Diophantine analysis and of possible solution methods to prevent vulnerabilities in such cryptosystems; the solutions are key to ensuring the security and reliability of cryptographic systems based on these equations. We propose strategies and approaches, depending on the dimension and degree of the MSDE, for increasing the resistance of alphabetic information security systems (ISS): increasing the number of parameters in the general parametric solution and taking into account the complexity of the algorithm for solving the system of equations, the complexity of the solution itself, or both. The paper presents a mathematical model of an alphabetic dissymmetric trigram cryptosystem based on a general two-parameter solution of a normal system of Diophantine equations of the fifth degree of dimension six over the ring of Gaussian integers; among the numerical values of its parameters are both the numerical equivalents of elementary messages and the keys, so that to find them an illegitimate user would need to find a general two-parameter solution of the normal system of Diophantine equations.
    The mathematical model of the alphabetic dissymmetric trigram cryptosystem presented in the paper contains Diophantine difficulties and therefore has good cryptographic resistance: an illegitimate user cannot reduce the set of keys to be tried and must solve a system of Diophantine equations over the Gaussian integers, a computationally hard problem without the corresponding secret keys. In addition, three-symbol (trigram) encryption of the plaintext instead of single-character encryption further increases the cryptographic resistance of the system. A software implementation of the cryptosystem in Python is provided.
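    The Tarry-Escott (multigrade) property underlying such systems can be checked numerically. The sketch below verifies a classical integer solution and the fact that translating both sides by a Gaussian integer preserves the property; Python's complex type stands in for Gaussian integers, and the particular solution and shift are illustrative, not the paper's parameters.

```python
def power_sums_equal(xs, ys, max_degree):
    """Check the Tarry-Escott (multigrade) property: equal power sums
    for every exponent 1..max_degree. Works for Gaussian integers,
    represented here as Python complex numbers."""
    return all(sum(x ** s for x in xs) == sum(y ** s for y in ys)
               for s in range(1, max_degree + 1))

a, b = [1, 5, 6], [2, 3, 7]                 # 1+5+6 = 2+3+7, 1+25+36 = 4+9+49
print(power_sums_equal(a, b, 2))            # classic integer solution
shift = 1 + 2j                              # a Gaussian-integer translate
print(power_sums_equal([x + shift for x in a],
                       [y + shift for y in b], 2))
```

Translation invariance follows from the binomial expansion: if all power sums up to degree k agree and the sets have equal size, shifting every element by the same constant preserves the equalities.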

  • DATA ENCRYPTION IN EDMS BASED ON BLOCKCHAIN TECHNOLOGIES

    K.S. Romanenko, E.A. Ishchukova, N.B. Elchaninova
    99-110
    Abstract

    The article discusses the storage of confidential and personal data in electronic document management systems, and in particular the possibility of storing such data in systems based on blockchain technologies. One of the key characteristics of blockchain is the openness of data: all transactions entered into the blockchain are visible to all network participants. This can become a serious problem when storing sensitive data such as personal information, bank details or medical history, since the blockchain platform is open. Various methods are used to hide information, including homomorphic encryption, ZK-SNARKs (zero-knowledge proofs), specialized hardware add-ons, and others. Previously, the authors presented a protocol for storing confidential data in blockchain systems using hybrid encryption. The paper focuses on the use of symmetric cryptography algorithms in conjunction with elliptic curve cryptography, which is widely used in modern blockchain platforms such as Bitcoin and Ethereum. Elliptic curves were chosen for their high cryptographic strength at a relatively short key length, computational efficiency, and low resource requirements, which is especially important for decentralized networks with limited node computing capabilities. The article presents the results of modeling the generation of encrypted confidential data using various encryption algorithms: ECC ElGamal, ECDH-AES, and ECDH-Magma (in CTR and CBC modes). Experiments have shown that the most effective solution is the hybrid ECDH-AES algorithm with AES-NI support, which provides high data processing speed while maintaining a high level of security.
    The analysis suggests that the use of hybrid encryption in blockchain systems strikes a balance between the need to ensure privacy and the preservation of the key benefits of the technology: decentralization, immutability, and transparency for authorized participants. Possible data presentation formats are considered, and an experimental comparison is carried out of the various encryption algorithms that can be used in electronic document management systems based on blockchain technologies.
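    The hybrid ECDH-plus-symmetric-cipher structure compared in the article can be sketched end to end on a deliberately tiny curve. Everything here is illustrative: the curve y^2 = x^3 + 2x + 3 over F_97 is far too small to be secure, and a SHA-256-derived keystream stands in for AES or Magma purely to keep the sketch dependency-free.

```python
import hashlib

# Toy curve y^2 = x^3 + 2x + 3 over F_97 -- illustrative only;
# real systems use curves such as secp256k1.
P, A = 97, 2
G = (3, 6)                                   # a point on the curve

def ec_add(p1, p2):
    if p1 is None: return p2                 # None = point at infinity
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, point):                        # double-and-add
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

def keystream_xor(key_point, data):
    # Hash-derived keystream stands in for AES/Magma in this sketch.
    stream = hashlib.sha256(repr(key_point).encode()).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

d_alice, d_bob = 17, 29                      # private scalars
shared_a = ec_mul(d_alice, ec_mul(d_bob, G))
shared_b = ec_mul(d_bob, ec_mul(d_alice, G))
ct = keystream_xor(shared_a, b"contract #142, rev 3")
print(shared_a == shared_b, keystream_xor(shared_b, ct))
```

Both parties derive the same point because scalar multiplication commutes, which is exactly the property a production ECDH-AES pipeline relies on before handing the derived key to the symmetric cipher.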

  • ESTIMATION OF THE SEARCH TIME FOR KEY COMPONENTS IN A KNOWN PLAINTEXT ATTACK ON THE DOMINGO-FERRER CRYPTOSYSTEM

    L.K. Babenko, V.S. Starodubcev, N.B. Yelchaninova
    110-118
    Abstract

    This paper provides a brief description of the fully homomorphic Domingo-Ferrer cryptosystem and describes the stages of a known-plaintext attack on it. The stage of searching for the key components is analyzed: existing methods for implementing it are described, and the method with minimal computational complexity is identified. The computational complexity and time costs of this method are substantiated by theoretical calculations as well as experimental studies.
    The aim of the study is to evaluate the complexity of the key-component search stage of a known-plaintext attack on the fully homomorphic Domingo-Ferrer cryptosystem using the Gaussian elimination method adapted to solving systems of linear algebraic equations modulo a prime number. The main result of this work is an assessment of the computational complexity of this stage when implemented with Gaussian elimination. The complexity estimate is expressed in the number of basic mathematical operations and is confirmed by a series of experiments, which allows reasonable conclusions about the computational complexity of the method under consideration. The research contributes to the study of the fully homomorphic Domingo-Ferrer cryptosystem, which is based on the integer factorization problem, and has practical significance, as it allows assessing the criticality of a known-plaintext attack on this cryptosystem. The results obtained can serve as a basis for researchers and cryptographers to develop recommendations for choosing the parameters of the Domingo-Ferrer cryptosystem to ensure the necessary level of security in various applications.
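    The Gauss method over a prime modulus that the complexity estimate concerns can be sketched as Gauss-Jordan elimination with modular inverses (the example system is illustrative):

```python
def solve_mod_p(matrix, rhs, p):
    """Solve A x = b over GF(p) by Gauss-Jordan elimination, using
    Fermat's little theorem for inverses: a^(p-2) mod p."""
    n = len(matrix)
    aug = [row[:] + [b] for row, b in zip(matrix, rhs)]  # augmented matrix
    for col in range(n):
        # Find a row with a nonzero pivot in this column and swap it up
        pivot = next(r for r in range(col, n) if aug[r][col] % p != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        inv = pow(aug[col][col], p - 2, p)
        aug[col] = [v * inv % p for v in aug[col]]       # normalize pivot row
        for r in range(n):
            if r != col and aug[r][col]:
                factor = aug[r][col]
                aug[r] = [(v - factor * w) % p
                          for v, w in zip(aug[r], aug[col])]
    return [row[n] for row in aug]

# 2x + 3y = 1, x + 4y = 6 over GF(7)
print(solve_mod_p([[2, 3], [1, 4]], [1, 6], 7))
```

Counting the multiplications, inversions and additions executed here is exactly the kind of basic-operation tally the abstract's complexity estimate is expressed in.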

SECTION IV. MACHINE LEARNING AND DATA PROCESSING

  • REALIZATION OF METHODS FOR SYNCHRONIZATION OF DATA FLOWS IN DIGITAL SIGNAL PROCESSING SYSTEMS

    I.I. Levin , D.S. Buryakov
    119-134
    Abstract

    In digital signal processing applications involving coherent processing of data from a phased antenna array, it is important to ensure the coordinated arrival of digitized data from antenna elements at the processing units. As the number of data transmission channels in DSP complexes grows, the probability of errors in those channels increases significantly, which imposes stricter dependability requirements on the software that provides isochronous data transmission. The paper presents the results of the development and implementation of methods that increase the dependability of isochronous data transmission. A combined method of isochronous data transmission is proposed, characterized by the use of service gaps in the transmission of operand arrays and dynamic compensation of delays in the data channels. The most probable errors occurring during data transmission are identified, and methods for countering them are proposed. A software complex implementing the combined method is described. Using an attributive dependability model, the dependability of the software complex is analyzed. The analysis has shown that the combined method allows quadrupling the number of data transmission channels in the DSP complex at a given dependability level and a fixed time of reliable operation, in comparison with the basic method. With a significant increase in the number of data channels, the given dependability level must still be maintained. For this purpose, a modernized method of isochronous data transmission is proposed, with improved algorithms for checking data integrity, checking the acceptable range of delay mismatch in the data channels, and switching the reference channels. An evaluation of the modernized method's implementation showed that it can support twice as many data channels as the combined method.
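    Dynamic delay compensation, one ingredient of the combined method, can be sketched as buffering faster channels so that samples digitized at the same instant line up across channels. The padding scheme below is an illustrative simplification, not the paper's algorithm.

```python
def align_channels(channels, delays):
    """Pad faster channels so that column j holds samples digitized at
    the same instant in every channel (0 marks 'no data yet'); the
    slowest channel needs no padding."""
    max_d = max(delays)
    aligned = []
    for samples, d in zip(channels, delays):
        pad = max_d - d
        aligned.append([0] * pad + samples)
    width = min(len(c) for c in aligned)     # truncate to common length
    return [c[:width] for c in aligned]

# Channel 0 lags by 2 ticks, channel 1 arrives immediately
print(align_channels([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]], [2, 0]))
```

After alignment, a coherent processing stage can consume one column at a time knowing every entry refers to the same digitization instant.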

  • ALGORITHM FOR TRAINING DATA PREPARATION OF CONVOLUTIONAL NEURAL NETWORKS FOR LETTER AND CHARACTER RECOGNITION

    D.A. Bezuglov, M.S. Mishchenko, S.E. Mishchenko
    134-144
    Abstract

    The accuracy of text image recognition remains limited in practice. This is because the symbol alphabet can include lowercase and uppercase letters with similar fonts, as well as composite characters formed from several simpler ones. To address this, character recognition systems are supplemented with semantic or structural analysis, which significantly complicates the text recognition system. Currently, convolutional neural networks trained on databases of character images are widely used for recognizing single characters. The paper proposes an algorithm in which the image of a single character in the training sample includes fragments of the characters located in the line in close proximity to the recognized character. This expands the set of training images and additionally encodes information about the placement of the symbol in the string, its relative size, and whether it is composite. The formation of images for the training sample simulates the brightness-based segmentation usually used when extracting a symbol for recognition.
    At the same time, the size of the symbol is estimated, it is supplemented with images of neighboring symbols, and the size of the image area to be placed in the training sample is then determined. The resulting image is scaled and cropped so that images of a fixed size arrive at the input of the neural network. To recognize an alphabet that includes uppercase and lowercase characters of the Russian and English alphabets, digits, symbols and punctuation marks, the work proposes using a set of convolutional neural networks, each trained to recognize one character. The symbol is selected by comparing the responses of all the networks and choosing the maximum. The proposed training data preparation algorithm is compared with a well-known algorithm based on images of single characters. It is established that the proposed algorithm more than doubles the recognition accuracy for an alphabet of 138 characters.
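    The data-preparation step, cutting a symbol plus fragments of its neighbors out of a line image and rescaling it to a fixed network input size, can be sketched as follows (the image representation, margin and output size are illustrative assumptions):

```python
def crop_with_context(line_img, left, right, margin, out_w, out_h):
    """Cut a symbol plus fragments of its neighbours out of a text-line
    image (a list of pixel rows), then rescale the patch to a fixed
    input size with nearest-neighbour sampling."""
    h = len(line_img)
    x0 = max(0, left - margin)               # extend past the symbol box
    x1 = min(len(line_img[0]), right + margin)
    patch = [row[x0:x1] for row in line_img]
    w = x1 - x0
    # Nearest-neighbour resize to out_w x out_h
    return [[patch[y * h // out_h][x * w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

# 4x8 line: the symbol occupies columns 3..4, neighbours spill into the margin
line = [[0, 0, 1, 9, 9, 1, 0, 0]] * 4
patch = crop_with_context(line, left=3, right=5, margin=2, out_w=4, out_h=4)
print(patch[0])
```

The fixed output size is what lets every per-character network share one input layer shape regardless of the original symbol's width.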

  • IDENTIFICATION OF KEY TECHNOLOGIES BASED ON COLLECTION AND ANALYSIS OF DATA FROM OPEN RUSSIAN-LANGUAGE SOURCES

    A.G. Bondarenko, A.G. Kravets
    144-159
    Abstract

    This article is devoted to the development and testing of a new approach to the collection, processing and analysis of open Russian-language data for identifying key technological trends. To form and then analyze structured datasets, methods of web scraping, natural language processing and time-series analysis were developed and implemented in software. The approach described in the article has been applied for the first time to extract and structure information from Russian-language scientific articles, news resources and patent documentation. Analysis of the resulting dataset of scientific publications identified the 30 most frequently mentioned bigrams and the same number of trigrams of technological terms. Based on this frequency analysis, key technological terms were identified and then used for complex filtering by key technologies. The complex filtering enabled the search for and collection of Russian-language patents for further analysis. Preprocessing of the obtained patent data produced time series of patent activity. The software system for key technology identification is implemented in JavaScript and Python using the Selenium and BeautifulSoup libraries for web scraping and NLTK and Scikit-learn for text data processing and analysis. The study of the dynamics of key technologies over time revealed periods of intensive patent activity and of declining interest in particular technologies. The results presented in the article provide a basis for the further development of machine learning methods for predicting technological development and identifying promising areas of applied research.
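    The bigram/trigram frequency stage can be sketched with a few lines of standard-library Python; the toy corpus below is invented for illustration, and the simplistic tokenizer stands in for the NLTK pipeline the study actually uses.

```python
from collections import Counter
import re

def top_ngrams(texts, n, k):
    """Count the k most frequent word n-grams across a corpus of
    Russian-language titles/abstracts."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"\w+", text.lower())   # crude tokenization
        counts.update(tuple(words[i:i + n])
                      for i in range(len(words) - n + 1))
    return counts.most_common(k)

docs = [
    "машинное обучение для анализа данных",
    "глубокое машинное обучение",
    "анализ временных рядов патентной активности",
]
print(top_ngrams(docs, 2, 2))
```

The same counter, run per publication year, yields the raw material for the patent-activity time series described above.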

  • DEVELOPMENT OF A CHATBOT FOR CLASSIFICATION AND ANALYSIS OF NATURAL LANGUAGE TEXTS USING LOCAL LARGE LANGUAGE MODELS

    Juman Hussain Mohammad, Y.A. Kravchenko
    159-171
    Abstract

    This paper explores local large language models (LLMs) and their application in text classification tasks, while also comparing their performance with traditional methods. The paper provides a comprehensive review of several key local LLMs, with particular focus on their architectural advantages, characteristics, and application domains. Specifically, we examine models with varying numbers of parameters, their ability to adapt to specialized domains, and their computational requirements when deployed on local hardware. Special emphasis is placed on the trade-offs between performance and resource efficiency. As a practical contribution, we developed a chatbot that utilizes local LLMs (such as DeepSeek, Gemma, and Llama2 via Ollama) to classify incoming texts into predefined categories, demonstrating the operation of these models without cloud computing. The system features a modular architecture that allows for easy integration of new models and comparison of their effectiveness. The computational experiment involves evaluating the accuracy and inference speed of local LLMs compared to simpler methods such as Sentence-BERT, TF-IDF and BoWC, highlighting scenarios in which local models outperform or underperform traditional approaches. Testing was conducted using the benchmark BBC dataset. The results show that language models (including 7-billion parameter models) demonstrate strong and logically consistent classification performance in natural language text processing. However, their results are not perfect for benchmark datasets. Notably, we identified cases where all tested models, including traditional methods, misclassified documents, suggesting potential issues with data labeling. These findings indicate the need to reconsider benchmark labels in standard datasets, particularly for domains with subjective categories where expert evaluations may vary significantly. 
On the other hand, while local LLMs lag behind cloud-based solutions in speed, their advantages in data privacy and offline operation make them suitable for specialized tasks. This is particularly valuable in medical and financial institutions where protection of sensitive information is critical, and where local models can be fine-tuned for specific business processes without the constraints of cloud APIs.
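    For comparison with the traditional baselines mentioned above, a minimal pure-Python TF-IDF classifier using cosine similarity to the nearest training document might look as follows; the toy corpus and categories are invented for the sketch, while the study itself uses Sentence-BERT, TF-IDF and the BBC dataset.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build smoothed TF-IDF vectors (dicts) for a list of token lists."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    return [{w: (c / len(doc)) * math.log((1 + n) / (1 + df[w]))
             for w, c in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

train = [("sport", "team wins league match".split()),
         ("sport", "player scores goal match".split()),
         ("tech", "new chip boosts compute".split()),
         ("tech", "software update adds compute features".split())]
train_vecs = tfidf_vectors([doc for _, doc in train])

def classify(tokens):
    # Recomputing idf with the query included is a simplification here
    query_vec = tfidf_vectors([doc for _, doc in train] + [tokens])[-1]
    best = max(range(len(train)),
               key=lambda i: cosine(train_vecs[i], query_vec))
    return train[best][0]

print(classify("goal in the final match".split()))
```

Baselines of this kind run in microseconds per document, which is the speed side of the trade-off against the local LLMs evaluated in the article.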

  • CLASSIFICATION OF RADAR IMAGES OF MULTI-ROTOR UNMANNED AERIAL VEHICLES USING THE YOLO11 ALGORITHM

    V.A. Derkachev
    171-180
    Abstract

    This article discusses a classifier of radar images of unmanned aerial vehicles based on a neural network built on version 11 of the YOLO algorithm. Detecting and classifying unmanned aerial vehicles has become one of the priority tasks of the present day. The growing number of UAV modifications greatly complicates the use of statistical classification methods and requires new approaches to the classification problem. The development of neural network methods, together with the increasing performance of training computers on the one hand and of embedded solutions on the other, allows aircraft to be classified from radar images in real time. The YOLO11 algorithm makes it possible not only to determine the target class but also to estimate the range to the observed object. The use of radar images is justified because visual observation is not always possible in difficult weather conditions and darkness. To train the neural network, it is proposed to use a set of radar images obtained with the author's data generation model for an arbitrary configuration of unmanned aerial vehicles. A Detection-class YOLO11s network (9.4 million parameters) was trained on a sample of 8192 radar images covering two classes. Training yielded a classification accuracy of 0.99 for the two object classes (on test model data). Tests were conducted on natural data acquired with the TI IWR1642 millimeter-wave radar system, achieving error-free classification on a small sample.

  • METHOD OF AUTOMATIC OPTIMIZATION OF THE FUZZY RULE BASE OF AN INTELLIGENT CONTROLLER BASED ON SUBTRACTIVE CLUSTERING

    A.S. Ignatyeva, V.V. Shadrina, D.S. Ignatyev, A.V. Maksimov
    181-197
    Abstract

    The aim of the work is to develop a method for optimizing the fuzzy rule base of an intelligent controller for a technical object using subtractive clustering. The article gives an overview and brief analysis of the current state of research on optimizing intelligent control systems. To achieve the goal of the study, a hybrid model was developed in which the technical object is controlled by a classical PI controller and by a fuzzy PI controller with a generated Sugeno-type fuzzy inference structure and an adaptive neuro-fuzzy inference system model. This configuration makes it possible to form a fuzzy rule base that does not depend on expert knowledge of the subject area. The article proposes a new method for optimizing the fuzzy controller rule base using clustering methods, in particular subtractive clustering, which reduces the number of fuzzy inference rules and increases the performance of the technical object control system. First, the hybrid model synthesized from the values of the fuzzy and classical controllers was simulated before subtractive clustering was applied. Applying subtractive clustering, according to the method developed in the study, to the values of the classical and fuzzy controllers reduced them by factors of 1.7 and 5.25, respectively. The hybrid model synthesized from the controller values after subtractive clustering was then simulated. The simulation results showed the high efficiency of the proposed method for optimizing the fuzzy controller rule base.
Due to the application of subtractive clustering in the hybrid model for the intelligent controller, it was possible to significantly reduce the number of membership functions required to describe the input linguistic variables (from five to four) and reduce the number of fuzzy logical inference rules (from twenty-five to sixteen). The analysis of the resulting graphs of transient processes obtained for the hybrid models before and after applying subtractive clustering showed that the main indicators of the quality of the control process remain unchanged with a significant reduction in the calculations performed.
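    Subtractive clustering, the general technique named in the abstract, can be sketched as follows (Chiu's formulation; the radii and stopping threshold are illustrative defaults, not the study's settings):

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb_factor=1.5, eps=0.15):
    """Chiu's subtractive clustering: every data point is a candidate
    cluster center; points with many close neighbours get high potential."""
    alpha = 4.0 / ra**2
    beta = 4.0 / (rb_factor * ra)**2
    # pairwise squared distances between all points
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    P = np.exp(-alpha * d2).sum(axis=1)          # initial potentials
    centers = []
    P1 = P.max()
    while True:
        c = int(P.argmax())
        if P[c] < eps * P1:                      # remaining potential too low
            break
        centers.append(X[c])
        # suppress the potential of points near the chosen center
        P = np.maximum(P - P[c] * np.exp(-beta * d2[:, c]), 0.0)
    return np.array(centers)
```

    Each resulting center can seed one membership function per input variable, which is how subtractive clustering reduces the rule count in a Sugeno-type system.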

  • MULTI-AGENT SYSTEM USING ARTIFICIAL INTELLIGENCE TO PROCESS IMAGES FROM THE DRONE'S TECHNICAL VISION CAMERAS

    A.L. Verevkin, I.E. Josephs, V.V. Misyura, L.S. Verevkina
    198-212
    Abstract

    Multi-agent technology combining drones, modern sensors, precise GPS, and artificial intelligence has led to a breakthrough in the field of cyber-physical systems. This article presents a multi-agent system that uses artificial intelligence to process images from technical vision cameras installed on a drone. A block diagram of the multi-agent system was developed around an effective and simple platform, the ARRISE 410 octocopter, an agricultural sprayer drone equipped with: an intelligent control system; an omnidirectional digital microwave radar; a 6-axis high-precision accelerometer; an electronic level for measuring tilt; a real-time first-person-view optical camera; and a control panel with the Light Bridge 2 signal transmission system and a dust- and water-protected remote control. The kit must be supplemented with a hyperspectral (HS) scanning camera, its power module, an interface to the ARRISE 410 drone systems, and an information compression module. The throughput study model is based on the DJI Agras T20 hexacopter with a Mikrotik RB411 5G network card, a Raspberry Pi 3 on-board microcomputer, a 1 Mpix RGB camera (Raspberry Pi OV5647 v1.3), and a Resonon Pika L hyperspectral camera that captures 281 spectral bands at wavelengths from 400 to 1000 nm with a spatial resolution of 900 hyperspectral pixels per image line. The article solves the problem of experimentally and computationally determining the compression required for information obtained from the hyperspectral and optical cameras when it is transmitted through a telecom operator and the Internet for image processing by artificial intelligence.
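    The required compression ratio mentioned at the end of the abstract follows from simple data-rate arithmetic. The sketch below uses the Pika L figures quoted in the text (281 bands, 900 pixels per line); the bit depth, line rate, and link throughput are illustrative assumptions:

```python
def required_compression(bands=281, pixels_per_line=900, bits_per_sample=12,
                         lines_per_second=100, link_mbps=20.0):
    """Raw hyperspectral line-scan data rate (Mbit/s) and the compression
    ratio needed to fit it into the available link throughput.
    bands and pixels_per_line follow the Resonon Pika L figures in the
    abstract; bit depth, line rate and link speed are assumptions."""
    raw_mbps = bands * pixels_per_line * bits_per_sample * lines_per_second / 1e6
    return raw_mbps, raw_mbps / link_mbps
```

    With these assumptions the raw stream is roughly 300 Mbit/s, so a compression factor on the order of 15 would be needed for a 20 Mbit/s uplink, which is why an on-board compression module is part of the kit.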

  • FAILURE PREDICTION USING FACTOR ANALYSIS METHODS

    E.S. Podoplelova
    213-223
    Abstract

    This article discusses a risk assessment method that combines the FMEA (failure mode and effects analysis) methodology with MCDM (multiple criteria decision making) methods. This approach takes into account both expert knowledge and historical data on the operation of the equipment. MCDM methods handle the assessment more flexibly than the standard risk priority number (RPN) calculation, which helps to better assess risks by three criteria: probability of occurrence, difficulty of detection, and severity of consequences. Any of the criteria can be obtained not only through expert assessment but also from recorded equipment operation data. The approach was tested on synthetic open-source data describing the operating modes of production equipment. The task was to predict both the failure itself and its type, and to identify the factors with the greatest influence on failure. For this purpose, data preprocessing was carried out, during which class imbalance had to be eliminated. Several approaches address this problem, either by reducing the dominant class or by generating instances of under-represented classes; in this example, random undersampling of failure-free records was used. AdaBoost, Random Forest, and LinearSVC were then compared as classification algorithms. Since multi-class classification was required, the one-vs-rest strategy was adopted. As a result, a forecasting F-measure of 86% was achieved with the AdaBoost and Random Forest algorithms; LinearSVC turned out to be ineffective. The resulting forecasting model thus recognizes different types of errors, but there is room for improvement, which requires a larger sample with more examples of each failure type.
Based on this, the approach is a promising alternative to purely expert assessment: it improves objectivity and makes it possible to foresee risks and prevent real failures or risk-related incidents.
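    The pipeline described above, random undersampling followed by one-vs-rest classification, can be sketched with scikit-learn on synthetic data. The dataset, class weights, and hyperparameters below are stand-ins, not the study's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

def undersample(X, y, seed=0):
    """Randomly reduce every class to the size of the rarest one."""
    rng = np.random.default_rng(seed)
    groups = {c: np.flatnonzero(y == c) for c in np.unique(y)}
    n_min = min(len(idx) for idx in groups.values())
    keep = np.concatenate([rng.choice(idx, n_min, replace=False)
                           for idx in groups.values()])
    return X[keep], y[keep]

# Imbalanced multi-class data standing in for the failure-type records.
X, y = make_classification(n_samples=3000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
Xb, yb = undersample(X, y)
Xtr, Xte, ytr, yte = train_test_split(Xb, yb, random_state=0, stratify=yb)

clf = OneVsRestClassifier(RandomForestClassifier(random_state=0)).fit(Xtr, ytr)
f1 = f1_score(yte, clf.predict(Xte), average="macro")  # macro F-measure
```

    AdaBoost or LinearSVC could be dropped in as the base estimator in exactly the same way, which is how the three algorithms were compared.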

SECTION V. RISK MODELING AND MANAGEMENT

  • CONSTRUCTION OF AN OPTIMAL CONTROL TRAJECTORY IN AN INTELLIGENT SYSTEM IN THE ABSENCE OF OBSERVABLE VARIABLES

    A.N. Tselykh, V.S. Vasilev, L.A. Tselykh, E.S. Podoplelova
    224-233
    Abstract

    Constructing optimal control in the complete absence of data on the system dynamics is a pressing problem. In this paper, we propose a solution to a finite-horizon linear-quadratic (LQ) problem for a time-invariant system with a graph dynamics matrix. Unlike the classical control problem, stability and complete controllability of the system are not assumed. The construction of the control trajectory is guided by the direction of increase in the change of the state variables over a small number of steps, which is determined by the conditional principal eigenvector of the adjacency matrix of the graph model. Classical optimal control is solved in an autonomous mode and requires complete knowledge of the system dynamics. When such knowledge is unavailable, solving optimal control problems for systems with uncertainty, including discrete linear systems, has attracted considerable interest in recent years. The main approach when complete information about the system is unavailable is optimal control design in which the system parameters are first identified and then an algebraic equation in the dual space is solved. An important difference from the standard discrete control problem is that the control model was modified to estimate changes in the state variables under controls transmitted through the dynamics matrix. The proposed algorithm, using a graph matrix, implements recurrent calculations of the dynamic and adjoint equations, as well as the Powell method for solving a system of linear algebraic equations (SLAE). The authors introduce a new interpretation of the mathematical construction of the system dynamics matrix in a standard discrete control problem on a finite time interval, which can be used to design any controlled dynamic system with unobservable parameters.
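    For contrast with the data-free setting of the paper, the classical finite-horizon LQ solution that does require full knowledge of the dynamics is the backward Riccati recursion. This is a textbook sketch, not the authors' graph-based algorithm:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for x_{k+1} = A x_k + B u_k with cost
    sum(x'Qx + u'Ru) + terminal x'Qf x. Returns feedback gains K_0..K_{N-1}
    so that the optimal control is u_k = -K_k x_k."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)            # Riccati update
        gains.append(K)
    return gains[::-1]                           # reorder from k = 0
```

    The recursion needs (A, B) explicitly at every step, which is exactly the information the paper's setting lacks.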

  • TECHNOLOGICAL SOLUTION FOR FORMING A TRUST INFRASTRUCTURE IN THE DIGITAL RUBLE SECURITY SYSTEM

    A.V. Ivanov, A.V. Tsaregorodtsev, M.V. Valeev
    233-245
    Abstract

    The relevance of the article stems from the digital transformation of the Russian economy, a key direction of which is the development and implementation of digital ruble instruments in the credit and financial sector. Accordingly, the national system should rest on an information technology trust infrastructure within the digital ruble security system. The main functional properties of such a trust infrastructure include identification and authentication mechanisms and secure financial transactions based on protecting the integrity and confidentiality of the data of participants and users of the digital ruble platform. Beyond the technological readiness of the trust infrastructure, public trust in the digital ruble must also be built. These circumstances determined the importance and necessity of developing a technological solution for forming a trust infrastructure in the digital ruble security system. In the course of the study, the following tasks were solved: a theoretical interpretation and empirical operationalization of the basic concepts of the digital ruble trust infrastructure were carried out; its organizational and technological prerequisites were investigated; the structural elements of the basic and role models of the digital ruble infrastructure were clarified; API encryption and tokenization methods were analyzed; and a technological solution for securing the digital ruble trust infrastructure was formulated. Based on the results of the study, a set of measures is proposed to secure access to the digital ruble platform for participants and users via protected channels, to secure access for credit institutions with two-factor authentication, and to protect the privacy of individuals and legal entities on the trust infrastructure.
Of practical importance is the list of works related to deploying certification authorities, information security tools, and cryptographic information protection tools, integrating them with the unified identification and authentication system and the fast payment system, and implementing them in the overall digital ruble system.

  • INFORMATION SECURITY MANAGEMENT IN THE DIGITAL TRANSFORMATION PROCESS: MODELING BASED ON HETEROGENEOUS GRAPHS AND RISK METRICS

    K.V. Yakimenko, V.V. Zolotarev
    246-256
    Abstract

    This study addresses the critical problem of ensuring the information security of organizations during active digital transformation (DT), which inevitably entails a growing attack surface, new vulnerabilities, and risks of destabilizing security systems. The authors propose a process-oriented approach based on modeling business processes (BP) and the IT landscape with heterogeneous graphs. The model represents three key entity types: operations, information systems (IS), and data as objects of protection, together with attributed edges reflecting transmission channels and their security characteristics. This approach ensures complete identification of critical information infrastructure (CII) objects in accordance with FSTEC requirements and allows the analysis of complex relationships in transitional DT states. The study developed a set of key quantitative metrics for information security risk management:
    1. Number of critical paths: reflects the change in the attack surface when information systems and data routes are added or removed.
    2. Node centrality level: identifies the information systems most critical for connectivity and most vulnerable (risk concentration points).
    3. Data distribution index: characterizes the ratio of cloud to local data storage/processing nodes and the associated control and security risks.
    4. Recovery time: evaluates the resilience of business processes to failures and attacks.
    5. Level of protection automation: shows the proportion of automated information security tasks available for rapid response.
    Based on the model and metrics, a dynamic algorithm for managing information security during the DT process is proposed. The algorithm provides: (1) construction of "as-is" and "to-be" graph models of business processes; (2) continuous updating of the current-state model during the transformation; (3) regular calculation of the metrics to assess risks in transition states; (4) updating of the risk list and protective measures based on the metric analysis. The results include practical recommendations on reducing the attack surface, prioritizing the protection of highly critical nodes, and optimizing data distribution with regard to security and fault-tolerance requirements. The proposed approach ensures the transparency and manageability of information security at all stages of digital transformation, increases the resilience of the IT landscape to threats, and supports compliance with regulatory requirements.
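    Two of the proposed metrics, node centrality and the data distribution index, can be sketched in plain Python over an edge-list graph model. The function names and the example graph are illustrative:

```python
def degree_centrality(edges):
    """Degree centrality over an undirected edge list: each node's degree
    divided by the number of other nodes it could touch."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {k: v / (n - 1) for k, v in deg.items()}

def data_distribution_index(node_location):
    """Share of data nodes stored or processed in the cloud."""
    cloud = sum(1 for loc in node_location.values() if loc == "cloud")
    return cloud / len(node_location)
```

    Recomputing such metrics on the "as-is" and "to-be" graphs before and after each transformation step is what makes the risk picture dynamic.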

  • ENERGY MODEL OF THE QUANTUM BACKBONE NETWORK

    A.P. Pljonkin
    256-264
    Abstract

    Quantum communication networks are already being actively deployed in Russia and around the world, and standards in the field of quantum technologies are being developed. Under the roadmap for the development of quantum communications in Russia, the length of quantum networks exceeds 7 thousand km and is planned to exceed 15 thousand km by 2030. Quantum communications today are, in essence, a quantum key distribution technology that is still at the stage of intensive scientific research and development. For backbone quantum networks, secret key distribution requires new implementation approaches, since the use of equipment from various vendors and the length of fiber-optic communication lines impose surmountable restrictions on backbone topology. An important aspect of quantum network design is the calculation of losses in optical communication channels. The attenuation introduced by various passive and active elements is usually calculated individually for each network section and ultimately forms a comprehensive energy model. The article considers several backbone quantum network topologies and presents the calculation of optical losses for their fiber-optic communication channels. A general method for detecting an optical signal in quantum communication networks is also presented. The purpose of the article is a comparative analysis of energy models of backbone quantum networks and the presentation of a variant of implementing a section of an urban quantum network. The work describes the generalized operating principle of a quantum key distribution system in both two-pass and single-pass configurations. The results of the analysis of the energy model and the calculation of average losses in a quantum channel are presented. In conclusion, a possible quantum network topology is proposed for consideration.
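    The per-section loss budget described above reduces to summing the attenuation contributions of the fibre and its passive elements. A minimal sketch, with typical but illustrative per-element values (roughly 0.2 dB/km for standard single-mode fibre):

```python
def link_loss_db(fiber_km, splices=0, connectors=0,
                 fiber_db_per_km=0.2, splice_db=0.05, connector_db=0.3):
    """Total attenuation of one fiber-optic section: distributed fibre loss
    plus the insertion losses of splices and connectors. The default
    per-element figures are typical values, not the article's data."""
    return (fiber_km * fiber_db_per_km
            + splices * splice_db
            + connectors * connector_db)
```

    Summing such budgets over every section of a chosen topology yields the comprehensive energy model the abstract refers to; the residual optical power then determines whether single-photon detection is still feasible on that section.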

  • ON THE SIMILARITY FUNCTION OF GRAPHIC REPRESENTATIONS OF EXECUTABLE FILES IN THE OBFUSCATING TRANSFORMATION EVALUATION MODEL

    P.D. Borisov , Y.V. Kosolapov
    264-273
    Abstract

    Obfuscation of program code is used to complicate its analysis in a model where the analyst has full access to the program. Obfuscation is usually divided into cryptographically secure and heuristically resistant. In the first case, the complexity of analysis is comparable to the difficulty of solving some known mathematical problem. In the second, resistance is usually justified by the lack of effective analysis techniques for the obfuscation method at the time of its creation. Cryptographically secure obfuscation has not yet found practical application, while heuristically resistant obfuscation is widely used. Previously, the authors proposed a model for assessing the efficiency and resistance of heuristic obfuscating transformations based on the use of a similarity function. In this paper, such a similarity function is constructed using machine learning methods based on a comparison of the graphical representations of program executable files. In particular, the comparison is performed by a convolutional network with four convolutional layers, an RMSprop optimizer, an NLLLoss loss function, and a fully connected layer with two outputs. The proposed function is used in the implementation of a model for evaluating the efficiency and resistance of obfuscating transformations. In addition to the similarity function, the implementation of the model includes: a basic set of obfuscating transformations provided by the Hikari obfuscator; a set of obfuscating transformation sequences built from the basic set; a test set of programs for training models based on the CoreUtils, PolyBench and HashCat program sets; an approximation of the most "understandable" version of a program by its smallest version (searched among versions produced with various optimization options of the GCC, Clang and AOCC compilers); and a program deobfuscation scheme based on the LLVM optimizing compiler.
The results of an experimental study with the implemented model showed that it is impractical to use the constructed similarity function in the framework of the evaluation model due to its low accuracy, but it is possible to use it when constructing more complex functions.
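    The byte-to-image step underlying the graphical representation can be sketched in plain Python; the histogram similarity below is a crude stand-in for the paper's trained convolutional similarity function, included only to show the shape of the pipeline:

```python
import math

def bytes_to_matrix(data: bytes, width: int = 16):
    """Grayscale image representation of an executable: each byte becomes
    one pixel, laid out in rows of fixed width (zero-padded at the end)."""
    rows = -(-len(data) // width)                # ceiling division
    padded = data + b"\x00" * (rows * width - len(data))
    return [list(padded[i * width:(i + 1) * width]) for i in range(rows)]

def histogram_similarity(a: bytes, b: bytes) -> float:
    """Cosine similarity of byte-value histograms -- a simple, untrained
    stand-in for a learned CNN similarity function."""
    ha, hb = [0] * 256, [0] * 256
    for x in a:
        ha[x] += 1
    for x in b:
        hb[x] += 1
    dot = sum(p * q for p, q in zip(ha, hb))
    na = math.sqrt(sum(p * p for p in ha))
    nb = math.sqrt(sum(q * q for q in hb))
    return dot / (na * nb) if na and nb else 0.0
```

    In the paper's setup, the matrices produced by the first step would be fed to the four-layer convolutional network rather than compared by histogram.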

  • DETERMINATION OF TARGET COORDINATE ERRORS IN MULTI-POSITION RADAR USING GROUPS OF UNMANNED AIRCRAFT

    I.V. Borisov, A.S. Kuzmenko, V.E. Kuryan, E.M. Levchenko, M.V. Kuryan
    273-284
    Abstract

    The article proposes and develops an algebraic method for determining target coordinates and their errors using a group of unmanned aerial vehicles. The main assumptions of the developed model of the operation of a UAV group are justified in the article: the UAV speeds do not exceed the speed of sound in air, and the target speeds do not exceed the first cosmic (orbital) velocity. Qualitative estimates of the radar signal reception time for a given spatial error of the target coordinates are presented. Conditions on the number of aircraft in the group that increase the accuracy of determining the target's location in space are formulated. The various types of errors arising when a group of aircraft organizes a target search are analyzed. The dependence of the resulting error in calculating the coordinates of the search target on the error in measuring the distance between the aircraft in the group and the target, as a function of their mutual spatial orientation, is investigated. An algorithm has been developed, and calculations and analysis of the results for this task have been carried out. The simulation is based on the proposed algorithm, taking into account random target coordinates in a fixed sector and random errors in the measured distance between the aircraft group and the search object. The results of modeling the influence of the UAV group configuration and the target location on the coordinate determination error are presented. The accuracy of target coordinate determination and the error of the proposed algebraic approach are assessed.
Directions for further research are identified, including estimating the computational load for a large number of targets.
    The scope and effectiveness of the proposed algorithm and method for solving the problem as a whole are determined.
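    One standard algebraic route from range measurements to target coordinates (a generic linearized multilateration sketch, not necessarily the authors' exact formulation) subtracts the first range equation from the others to obtain a linear system:

```python
import numpy as np

def locate_target(anchors, ranges):
    """Linearized multilateration: with anchors p_i (e.g. UAV positions)
    and measured ranges r_i, subtracting the first equation
    ||x - p_0||^2 = r_0^2 from the others cancels the quadratic term in x,
    leaving the linear system 2(p_i - p_0) . x = r_0^2 - r_i^2
    + ||p_i||^2 - ||p_0||^2, solved here in the least-squares sense."""
    p = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (p[1:] - p[0])
    b = r[0]**2 - r[1:]**2 + (p[1:]**2).sum(axis=1) - (p[0]**2).sum()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

    With noisy ranges, the least-squares residual grows as the anchors become closer to collinear, which is the geometric dependence on mutual spatial orientation that the article investigates.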

  • THEORETICAL STUDY OF ESTIMATING THE PROBABILITY OF A CONNECTION IN SYSTEMS WITH BROADBAND SIGNALS AND FHSS

    D.I. Konkov, A.A. Schmidt, D.N. Polyakov, V.R. Bikbulatov
    285-294
    Abstract

    The article is devoted to a theoretical study of the probabilistic characteristics of communication systems with broadband signals and pseudorandom operating-frequency hopping (FHSS) in a complex electromagnetic environment. An analytical tool for calculating the communication probability is presented, taking into account the combined effects of multipath propagation, frequency-selective fading, and intentional interference. The dependence of the communication probability on the signal-to-noise ratio is investigated using integral expressions that account for the normal power distribution at the input of the receiving device. A mathematical analysis of the channel transfer function is performed as a complex characteristic describing the amplitude-frequency and phase distortions during signal propagation. Theoretical models of synchronization processes are developed, covering the stages of signal search, acquisition, and tracking, using the Marcum Q-function to describe the probability of signal detection against Gaussian noise. Methods are proposed for optimizing the key parameters of the FHSS system, such as the frequency hopping period, the total number of frequency channels, and the width of the guard frequency interval. The theoretical foundations of adaptive control based on the maximum likelihood method and recursive filtering for estimating channel parameters are described. The energy efficiency of FHSS systems is studied, taking into account frequency hopping losses and the required adjustment of the signal-to-noise ratio. A comprehensive quality indicator combining the probabilistic, energy, and timing characteristics of the communication system is proposed. Analytical expressions are developed for estimating the synchronization failure rate based on statistical analysis of experimental data and calculation of the covariance matrix of measurement noise.
The expediency of using reference signals to increase the reliability of channel parameter measurements in adaptive system control is justified. Relations are derived for determining the duration of the synchronization window, taking into account the maximum allowable synchronization acquisition time and a margin factor that accounts for possible frequency instabilities of the reference oscillators. The influence of the guard frequency interval on preventing inter-channel interference and ensuring electromagnetic compatibility of neighboring channels is analyzed. The presented theoretical results provide a scientific basis for designing radio systems with increased noise immunity and can be used in the development of adaptive control algorithms for FHSS systems in a dynamically changing electromagnetic environment, balancing the reliability of information transmission against the efficient use of the system's time-frequency resources.
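    A deliberately simplified independence model (not the article's analytical apparatus, which accounts for fading and normally distributed received power) illustrates how the communication probability grows with the number of hops when some channels are jammed:

```python
def comm_probability(n_channels, n_jammed, hops, p_clean=0.95):
    """Probability that at least one of `hops` frequency hops both avoids
    the jammed channels and succeeds over the channel. Assumes hops are
    independent and uniformly distributed over the channel set; p_clean is
    an assumed per-hop success probability on an unjammed channel."""
    p_hop = (1 - n_jammed / n_channels) * p_clean
    return 1 - (1 - p_hop)**hops
```

    Even this crude model reproduces the basic trade-off the article studies: more channels and more hops raise the communication probability, at the cost of hopping overhead and time-frequency resources.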