Izvestiya SFedU
Engineering sciences
ISSN 1999-9429 print
ISSN 2311-3103 online

Search Results

Found 7 items.
  • COMPARATIVE ANALYSIS OF MISSING DATA RECOVERY METHODS

    A.A. Sorokin, A.V. Dagaev, I.M. Borodyansky
    2020-11-22
    Abstract

    In recent decades, methods of system analysis have developed rapidly, driven by the accelerating pace of technical development, the compression of time-critical processes, the rapid growth of accumulated information, and new capabilities of computer technology. These methods include the analysis of large data volumes, data mining, analytical modeling, parallel data processing, neural network methods, forecasting methods, and others. They make it possible to process heterogeneous clusters of information quickly and efficiently, and to accumulate, synthesize, generalize, and classify data. Among them are methods of interpolation and extrapolation of lost, damaged, or missing information, which allow information to be structured, restored, and modeled on the basis of statistical data and mathematical and algorithmic methods. The article therefore addresses the problem of recovering missing data in graphic and complex objects. A review of the literature on the problem is given, covering, among other topics: genetic algorithms for spatial interpolation; the heterogeneity problem in the interpolation of seismic data; the use of spline approximation to calculate the characteristics of nonlinear electronic components; a method for constructing three-dimensional parametric rational bodies using generalized Bezier interpolation, which allows modeling of the body shape and of anisotropic space; methods based on fuzzy linear equations, which are widespread in computer vision; and an adaptive, gradient-based interpolation method that takes the local gradient of the original image into account. The article then compares several common methods of interpolation and data restoration, namely bilinear interpolation and Bezier surfaces. Each method and the features of its application within the experiment are briefly described, and the results of a series of experiments with different numbers of trials are presented. The conclusion summarizes when each of the proposed methods can reasonably be chosen without a long field experiment.
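The bilinear method compared in the article can be illustrated with a minimal sketch; the function and sample values below are illustrative, not taken from the paper's experiments:

```python
def bilinear(q11, q21, q12, q22, x, y):
    """Bilinear interpolation on the unit square.

    q11..q22 are the known values at corners (0,0), (1,0), (0,1), (1,1);
    (x, y) is the point with the missing value, with 0 <= x, y <= 1.
    """
    bottom = q11 * (1 - x) + q21 * x   # interpolate along the row y = 0
    top = q12 * (1 - x) + q22 * x      # interpolate along the row y = 1
    return bottom * (1 - y) + top * y  # blend the two rows

# Restore a missing value at the centre of a 2x2 pixel patch:
print(bilinear(10, 20, 30, 40, 0.5, 0.5))  # 25.0
```

The same four-neighbour weighting, applied per pixel, is what the image-restoration experiments in such comparisons typically run at scale.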

  • TRANSFORMATION AND ANALYSIS OF INFORMATION WHEN CREATING A DATABASE OF PARTICIPANTS OF THE GREAT PATRIOTIC WAR 1941-1945 IN THE MEMORIAL COMPLEX «ROAD OF MEMORY» IN THE MAIN RUSSIAN ARMED FORCES CATHEDRAL ON THE BASIS OF COMPUTER METHODS OF INFORMATION PROCESSING

    S.A. Botsvin, V.A. Khvatkov
    2021-11-14
    Abstract

    Preserving the historical memory of the participants of the Great Patriotic War of 1941–1945 is a task of worldwide significance: it preserves the truth about the most terrible of wars and the feat of our people. In modern conditions, attracting interest in history and traditions, and fostering recognition of one's duty to past generations, requires modern methods. One such method is the transformation of information, which allows information to be presented in the form in which it can be used most effectively. The main goal of transforming historical data is to optimize its representations and formats without changing its information content. The algorithms presented here for transforming and analyzing information when creating a database of participants of the Great Patriotic War were aimed at maximally preserving the historical value and reliability of the information. To achieve this goal, computer methods of information processing are considered for normalizing and consolidating personal data obtained from various sources. The content of archival documents is analyzed, with statistics on the number of documents (records) from the various sources (archives, databases, information resources, etc.), and the procedure for translating information from archival documents into electronic form, applied in practice, is described. Based on this analysis, diagrams of the content of personal information in archival sources are constructed; the stages of systematizing the generalized information array and bringing its records to a single format are presented, as is the procedure for merging and deleting duplicate records. So that it can be reused in other projects, an algorithm for consolidating data obtained from various sources is described in detail, and its block diagram is given. In addition, the applied fuzzy search algorithms, which made it possible to minimize errors in the records, are described, as are image comparison algorithms for finding duplicates among photographs. Together, these algorithms made it possible to bring together information held on various media with different structures and geographical locations. The resulting information resource drastically reduces the resources needed to find information, including information to which access was previously limited or absent. Further improvement of the normalization and consolidation algorithms can serve as a basis for migrating data from outdated to modern systems, and for forming information resources from existing heterogeneous archival collections.
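A common building block for the kind of fuzzy matching of personal records described above is an edit-distance similarity ratio; the sketch below uses the standard-library `difflib`, and the function name, threshold, and sample names are illustrative assumptions, not the project's actual algorithm:

```python
from difflib import SequenceMatcher

def probable_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two normalized name strings as probable duplicates when
    their longest-matching-subsequence ratio exceeds the threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# A typo or transliteration variant still matches; different people do not:
print(probable_duplicate("Ivanov Ivan Ivanovich", "Ivanov Ivan Ivanovih"))  # True
print(probable_duplicate("Ivanov Ivan", "Petrov Petr"))                     # False
```

In practice, such a ratio would be applied after the normalization step the abstract describes, so that formatting differences do not inflate the distance.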

  • DEVELOPMENT OF A DISTRIBUTED CONTROL SYSTEM OF THERMAL PROCESSES IN A HYDRAULIC PRESS

    A.L. Liashenko
    2021-12-24
    Abstract

    The article considers the need to regulate the coolant temperature in hydraulic presses used for hot gluing of plywood, to regulate the pressure in the press channels, and to maintain technological parameters at a given level. The control object is a P-714-B column hydraulic press for hot gluing of plywood, installed at the Ust-Izhora plywood mill, and the article describes this press. To monitor the parameters of the plywood production installation, it is proposed to treat the heating press plates and plywood packages as an object with distributed parameters. To develop a mathematical model of the control object, a functional diagram of the device with its main equipment and coolant flows was drawn up, and a technique was developed for modeling objects of this class as objects with distributed parameters. Consideration of the processes occurring in the channels of the heating plates made it possible to formulate differential equations of motion describing the flow of the working medium in the channel system, and the developed method of mathematically modeling heat propagation in the heating plates and plywood packages yielded a mathematical model of the object under consideration. This model turned out to be quite complex, and the resulting system of partial differential equations cannot be solved analytically (i.e., the transfer function cannot be isolated). For numerical analysis of the control object, a discrete model of the equations and a computational algorithm were therefore compiled. In compiling the discrete models, the problems of “joining” the boundary conditions were solved, the stability of the computational scheme was ensured, and the discretization steps for the spatial variables were selected. Software was specially developed for computer modeling and used to calculate the temperature values at the control points. The resulting mathematical model made it possible to carry out a numerical experiment from which the frequency characteristics of the object under study were obtained; these characteristics were then used in the synthesis of a distributed high-precision controller.
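The kind of discrete model described, with its stability constraint on the computational scheme, can be sketched for the simplest case of one-dimensional heat conduction with an explicit finite-difference scheme; the coefficients and grid below are illustrative, not the press model:

```python
def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step of u_t = alpha * u_xx
    with fixed (Dirichlet) boundary values at both ends."""
    r = alpha * dt / dx**2
    # Classic stability condition for the explicit scheme:
    assert r <= 0.5, "unstable: reduce dt or coarsen dx"
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
    return new

# Toy "heating plate": hot boundaries at 100, cold interior at 20.
u = [100.0] + [20.0] * 9 + [100.0]
for _ in range(200):
    u = heat_step(u, alpha=1e-4, dx=0.01, dt=0.4)
# The interior has warmed toward the boundary temperature.
```

The real press model couples several such equations in space, which is exactly why the authors resort to a discrete scheme instead of an analytic transfer function.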

  • ASSESSMENT OF INFLUENCING FACTORS AND FORECASTING OF POWER CONSUMPTION IN THE REGIONAL POWER SYSTEM, TAKING INTO ACCOUNT ITS OPERATING MODE

    N.K. Poluyanovich, M.N. Dubyago
    2022-05-26
    Abstract

    The article assesses the factors influencing power consumption in a regional power system and forecasts consumption taking the system's operating modes into account. Existing methods of forecasting energy consumption are analyzed, and the choice of a forecasting method based on an artificial neural network is justified. An algorithm for creating a neural network for short-term electrical load prediction is considered. The relevance of the work stems from the requirements of current legislation to forecast electricity consumption in order to maintain a balance between the generating side and electricity consumption. One of the main tasks related to the generation and consumption of electric energy is maintaining this balance of capacities: on the one hand, when the planned load rises, interruptions in the supply of electricity may occur; on the other hand, a decrease in electricity consumption reduces the efficiency of power plants and ultimately increases the cost of electricity both on the wholesale market and for the end user. The developed neural network model reduces the task of short-term forecasting of power consumption to finding a matrix of free coefficients by training on the available statistical data (active and reactive power, ambient temperature, date, and day index). The resulting neural network model for short-term forecasting of power consumption on a section of the district 10 kV electric grid takes into account the following factors: time, meteorological conditions, disconnections of individual power supply lines of cottages, and the operating mode of electricity consumers. Predictive estimates of the power consumption of the power system have been obtained from data on the electricity consumed, the outdoor temperature, the type of day, and other factors. The model for predicting the consumed active and reactive power is quite workable, but at this stage it still has a fairly high forecasting error. To improve accuracy, the database making up the training sample must be enlarged, since the available data currently cover a period of only 3–4 months. The analysis showed that forecasting reactive power consumption causes the greatest difficulties.
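Reducing forecasting to "finding free coefficients by training on statistics" can be shown in miniature with a single linear neuron fitted by gradient descent on synthetic temperature/load data; this is an illustrative toy, not the paper's network or dataset:

```python
import random

# Synthetic training set: load rises when it is cold (heating demand).
random.seed(0)
data = [(t, 50.0 - 0.8 * t + random.gauss(0, 0.5)) for t in range(-20, 21)]

# Fit load ~ w * temperature + b by gradient descent; the "matrix of
# free coefficients" degenerates here to the two numbers w and b.
w, b = 0.0, 0.0
for _ in range(2000):
    gw = gb = 0.0
    for t, y in data:
        err = (w * t + b) - y
        gw += err * t
        gb += err
    # Different learning rates: the w-gradient scales with sum(t**2).
    w -= 1e-4 * gw / len(data)
    b -= 1e-1 * gb / len(data)

print(round(w, 2), round(b, 1))  # close to the true -0.8 and 50.0
```

A real short-term load model adds the day index and other factors as further inputs and replaces the linear neuron with a trained network, but the fitting principle is the same.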

  • METHODS AND ALGORITHMS FOR TEXT DATA CLUSTERING (REVIEW)

    V.V. Bova, Y.A. Kravchenko, S.I. Rodzin
    2022-11-01
    Abstract

    The article deals with one of the important tasks of artificial intelligence: machine processing of natural language. Solving this problem with cluster analysis makes it possible to identify, formalize, and integrate large amounts of linguistic expert information under conditions of information uncertainty and weak structure in the original text resources, which come from various subject areas. Cluster analysis is a powerful tool for exploratory analysis of text data that allows an objective classification of any objects characterized by a number of features and containing hidden patterns. The article reviews and analyzes modern modified algorithms for agglomerative clustering (CURE, ROCK, CHAMELEON), non-hierarchical clustering (PAM, CLARA), and the affine transformation algorithm used at various stages of text data clustering, whose effectiveness is verified by experimental studies. The paper substantiates the requirements for choosing the most efficient clustering method for increasing the efficiency of intelligent processing of linguistic expert information. It also considers methods for visualizing clustering results in order to interpret the cluster structure and the dependencies on a set of text data elements, and graphical means of presenting them in the form of dendrograms, scatterplots, VOS similarity diagrams, and intensity maps. To compare the quality of the algorithms, internal and external performance metrics were used: V-measure, Adjusted Rand index, and Silhouette. The experiments showed that a hybrid approach is needed: when no hypothesis about the initial number of clusters can be put forward, the number of clusters and the distribution of their centers are first selected with a hierarchical approach based on sequentially combining and averaging the characteristics of the closest data in a limited sample; iterative clustering algorithms, which are highly robust to noisy features and outliers, are then connected. Hybridization increases the efficiency of clustering algorithms. The results also showed that, to increase computational efficiency and overcome sensitivity to parameter initialization, metaheuristic approaches should be used to optimize the parameters of the learning model and to search for a globally optimal solution.
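Of the quality metrics named, the silhouette coefficient is the easiest to show in miniature; the pure-Python 1-D sketch below is illustrative and is not the authors' implementation:

```python
from statistics import mean

def silhouette(points, labels):
    """Mean silhouette coefficient for 1-D points: for each point,
    compare its mean intra-cluster distance (a) with the mean distance
    to the nearest other cluster (b); score = (b - a) / max(a, b)."""
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        same = [abs(p - q) for j, (q, m) in enumerate(zip(points, labels))
                if m == lab and j != i]
        a = mean(same) if same else 0.0
        b = min(mean([abs(p - q) for q, m in zip(points, labels) if m == other])
                for other in set(labels) if other != lab)
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return mean(scores)

# Two well-separated clusters score close to the maximum of 1:
print(round(silhouette([1.0, 1.1, 9.0, 9.2], [0, 0, 1, 1]), 2))  # 0.98
```

Silhouette is an internal metric (it needs no ground-truth labels), whereas V-measure and the Adjusted Rand index are external metrics computed against a reference partition, which is why the review uses both kinds.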

  • IDENTIFICATION OF KEY TECHNOLOGIES BASED ON COLLECTION AND ANALYSIS OF DATA FROM OPEN RUSSIAN-LANGUAGE SOURCES

    A.G. Bondarenko, A.G. Kravets
    144-159
    2025-07-24
    Abstract

    This article is devoted to the development and testing of a new approach to collecting, processing, and analyzing open Russian-language data in order to identify key technological trends. To form and then analyze structured datasets, methods of web scraping, natural language processing, and time-series analysis were developed and implemented in software. The approach described in the article has been applied for the first time to extract and structure information from Russian-language scientific articles, news resources, and patent documentation. Analysis of the resulting dataset of scientific publications identified the 30 most frequently mentioned bigrams of technological terms and the same number of trigrams. Based on the frequency analysis of these bigrams and trigrams, key technological terms were identified and then used for complex filtering by key technologies. This filtering made it possible to search for and collect Russian-language patents for further analysis. Preprocessing of the collected patent data produced time series of patent activity. The software system for key technology identification was implemented in JavaScript and Python, using the Selenium and BeautifulSoup libraries for web scraping and NLTK and Scikit-learn for text data processing and analysis. A study of the dynamics of key technologies over time made it possible to identify periods of intensive patent activity and of declining interest in particular technologies. The results presented in the article provide a basis for the further development of machine learning methods for predicting technological development and identifying promising areas of applied research.
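The bigram frequency-analysis step can be sketched with the standard library alone (the article's pipeline uses NLTK and Scikit-learn); the corpus and function name below are illustrative:

```python
import re
from collections import Counter

def top_bigrams(texts, n=3):
    """Count adjacent word pairs across a corpus - the frequency-analysis
    step that precedes filtering by key technological terms."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"\w+", text.lower())
        counts.update(zip(words, words[1:]))
    return counts.most_common(n)

corpus = [
    "machine learning for patent analysis",
    "machine learning and neural networks",
    "patent analysis with machine learning",
]
print(top_bigrams(corpus, 2))
# [(('machine', 'learning'), 3), (('patent', 'analysis'), 2)]
```

The same counting over trigrams (triples of adjacent words) yields the second ranked list the article describes; the most frequent n-grams then serve as the key terms for patent search.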

  • MODERN APPROACHES TO NATURAL FIRE MONITORING AND FORECASTING: REVIEW AND CONCEPT OF AUTONOMOUS UAV-BASED SYSTEM

    N.D. Boldyrev, V.V. Gilka, A.S. Kuznetsova, D.A. Morozov
    58-80
    2025-12-30
    Abstract

    Natural fires cause serious damage to ecosystems, the economy, and public safety every year; timely detection of fires and prediction of their development speeds up the response to threats and allows resources to be allocated optimally during emergency response. Existing monitoring methods are limited in how quickly they detect fire outbreaks and track their further spread, which reduces the effectiveness of rescue services. To solve this problem, heterogeneous data sources can be used, including unmanned aerial vehicles (UAVs), distributed sensor networks, mobile field observation systems, ground-based thermal imaging stations, and others, which can contribute to a more accurate analysis of the current situation and improve the reliability of predictive models of fire spread. The aim of the study was to develop a concept for an automated, UAV-based approach to monitoring and predicting wildfires. We believe this approach will improve both the speed of detecting fire outbreaks and the accuracy of predicting their spread. The tasks include analyzing existing monitoring methods; developing a concept for a system that integrates multispectral imaging, optimized data transmission, automatic segmentation, and machine-learning-based forecasting; and ensuring interaction between the operator and alert specialists. The work used methods of collecting, analyzing, and transmitting data from UAVs; processing of multispectral images; machine learning and neural networks for fire detection; image segmentation algorithms and simulation modeling for fire spread prediction; data visualization to support decision-making by operators and administrators; logging and analysis of results for model training; software engineering; and human-computer interaction technologies.
    The system will reduce the time required to detect and predict fires, enable operators to launch multiple drones simultaneously, and automate the processing of the data received from them. Process automation will reduce emergency response times and staffing requirements, improve resource allocation, increase forecast accuracy, and make emergency service notifications more timely. This will help reduce damage from wildfires and improve the safety of people and ecosystems. Despite progress on this challenge, the comprehensive system described in this article does not yet exist in its entirety in Russia, the CIS countries, or in Western and Asian countries. Although individual components, such as UAVs for monitoring and artificial intelligence (AI) for data analysis, are already in active use, there is currently no integrated solution that combines all the elements: drone control, near-real-time fire spread prediction, data transmission, and interaction with emergency services. The proposed concept represents a new approach that could become a breakthrough technology for combating natural disasters.
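Simulation modeling of fire spread is commonly built on cellular automata; the toy sketch below is an assumption for illustration, not the system described in the article:

```python
import random

def spread(grid, p=1.0, rng=None):
    """One step of a toy fire-spread cellular automaton: a tree (1)
    adjacent to a burning cell (2) ignites with probability p, and
    burning cells burn out (3). Wind and fuel moisture are ignored."""
    rng = rng or random.Random(0)
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                new[r][c] = 3  # burns out this step
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and grid[rr][cc] == 1 and rng.random() < p):
                        new[rr][cc] = 2
    return new

g = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]  # fire starts in the centre
g = spread(g)  # with p=1 all four orthogonal neighbours ignite
```

A predictive system would calibrate the ignition probability per cell from the segmented UAV imagery and terrain data, then run many such steps ahead of real time.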


Address: 347900, Taganrog, Chekhov St., 22, A-211. Phone: +7 (8634) 37-19-80. E-mail: iborodyanskiy@sfedu.ru
Publication is free