ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - К1 quartile
Russian Science Citation Index (RSCI)


Articles of journal issue № 2, 2023.

1. Absolute stability of explicit difference schemes for the heat equation under Fourier–Tikhonov regularization [№ 2, 2023]
Author: Bakhmutsky, M.L.
Visitors: 1459
The paper considers the possibility of constructing a simple and absolutely stable explicit difference scheme for the heat equation. Due to the overly rigid stability condition, the invention of the sweep method for solving SLAE with tridiagonal matrices, and splitting schemes, absolutely stable implicit schemes forced explicit schemes for the heat equation out of programming practice. However, implicit schemes are poorly parallelized. Therefore, programs for solving problems of heat conduction, diffusion, underground hydrodynamics, etc. on huge spatial grids using multiprocessor computing systems require explicit difference schemes. This is especially true for multiprocessor systems of teraflop and higher performance that combine hundreds of processors. In this case, explicit schemes must be absolutely stable or, at the very least, their stability condition must be no more stringent than that for hyperbolic equations. The paper proposes modifications of explicit difference schemes that approximate a parabolic equation and have absolute computational stability. Computational stability of the solution obtained at each time step by the classical explicit scheme is achieved by the fast Fourier transform and subsequent Fourier synthesis with Tikhonov regularization. When calculating the direct and inverse Fourier transforms, the author used the Cooley–Tukey fast Fourier transform algorithm. The paper presents the results of comparing numerical calculations of model problems with analytical solutions. The absolute stability of the proposed explicit schemes for the heat equation allows their wide use for parallel computations.
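
The general idea can be illustrated with a minimal sketch, assuming a periodic one-dimensional grid and an illustrative Tikhonov-type filter 1/(1 + αk²); the regularizer actually used in the paper and the Cooley–Tukey implementation details may differ.

```python
import numpy as np

def explicit_step_fourier_tikhonov(u, dt, dx, kappa, alpha=1e-3):
    """One forward-Euler step for u_t = kappa * u_xx on a periodic grid,
    followed by Fourier synthesis with a Tikhonov-type smoothing filter.
    The filter 1/(1 + alpha*k^2) is an illustrative assumption."""
    # classical explicit scheme
    u_new = u + kappa * dt / dx**2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))
    # damp the high-frequency harmonics that would otherwise grow
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    u_hat = np.fft.fft(u_new) / (1.0 + alpha * k**2)  # regularized Fourier synthesis
    return np.fft.ifft(u_hat).real
```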

2. Aggregation and analysis of information from logistics companies to build a complex cargo transportation route [№ 2, 2023]
Authors: Esin, M.S., Korepanova, A.A., Sabrekov, A.A.
Visitors: 954
The work is devoted to optimizing the construction of transportation routes in the field of cargo logistics. There are cases when cargo transportation between two cities by one transport company is more expensive than transportation by different companies with cargo transshipment at intermediate points. Information about such complex routes is of interest both to transport companies, which can find ways to reduce route costs, and to ordinary users looking for cheaper cargo delivery options. The subject of the study is the automation of building the most profitable complex route for cargo transportation performed by several road and rail carriers and passing through intermediate transshipment (cargo transfer) points. A distinctive feature of the research method used in this work is that it is based on the analysis of data from the calculator websites of carrier companies, which allow extracting transportation cost information dynamically during a query, as well as on heuristic approaches to building a complex route. The authors formulated the criteria for selecting potential transshipment points and their number. The proposed route cost estimation approach was tested on open data of 40 logistics companies, 9 cargo configurations and routes between 171 cities. As a result, the authors proposed and tested a new heuristic algorithm for constructing a complex cargo transportation route and developed a software module. The test results have shown the effectiveness of the algorithm: using the proposed heuristics, in 10% of cases it is possible to build a complex route between cities whose cost is significantly lower than that of a simple route. The theoretical significance of the work lies in the new algorithm for solving the problem of constructing a complex cargo transportation route; the practical significance lies in a new module that will be integrated into the existing logistics service Cargotime.ru.
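
The authors' heuristic limits the set of candidate transshipment points; as a minimal sketch of the underlying search, a complex route can be found as a cheapest chain of single-carrier legs. The flat cost model (no transshipment fees) and all names below are illustrative assumptions, not the paper's algorithm.

```python
import heapq

def cheapest_complex_route(legs, origin, destination):
    """Cheapest chain of carrier legs between two cities (Dijkstra over a city graph).
    `legs` is a list of (city_from, city_to, carrier, cost) tuples, e.g. with costs
    collected from carrier calculator websites; transshipment fees are ignored."""
    graph = {}
    for a, b, carrier, cost in legs:
        graph.setdefault(a, []).append((b, carrier, cost))
    best = {origin: 0.0}
    queue = [(0.0, origin, [])]
    while queue:
        cost, city, path = heapq.heappop(queue)
        if city == destination:
            return cost, path          # total cost and the list of legs used
        if cost > best.get(city, float("inf")):
            continue
        for nxt, carrier, leg_cost in graph.get(city, []):
            new_cost = cost + leg_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [(city, nxt, carrier)]))
    return None                        # no route found
```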

3. An algorithm and software implementation of test object model synthesis based on the solution of the nonparametric identification equation [№ 2, 2023]
Authors: Gusenitsa, Ya.N., Mingachev, E.R., Iskhakov, N.U., Kolokolov, M.I.
Visitors: 1086
The paper considers the development of the theory of testing in general and the experimental-theoretical method in particular. In this context, the authors have developed an algorithm for synthesizing a test object model based on solving the equation of nonparametric identification of a dynamic system using hyperdelta approximation and the Laplace transform. Unlike existing algorithms, it is applicable to input and output signals of arbitrary shape and physical nature. In addition, it does not require large computing resources. Owing to these features, the algorithm enables formalizing a multidimensional relationship between factors and performance characteristics of the test object through repeated use for different input and output signals. The authors have implemented a mathematical library for identifying a test object model and an application with a graphical user interface for automating calculations using the C++ and Python programming languages. The presented software solution is designed similarly to classical machine learning models. To substantiate the possibility of using the developed algorithm, the authors carried out a computational experiment that involved various types of input and output signals (periodic, non-periodic and random) with different hyperdelta approximation accuracy. Based on the results of the computational experiment, the authors have made recommendations on using the algorithm. In particular, they recommend increasing the number of initial moments of the hyperdelta approximation at high amplitudes of the output signal.
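
The initial moments mentioned above are the basic quantities such an approximation is built from; a minimal sketch of computing them for a sampled signal is shown below (the exact formulation used in the paper may differ).

```python
import numpy as np

def initial_moments(t, x, n_moments):
    """First n_moments initial moments m_k = integral of t**k * x(t) dt of a
    sampled signal, computed by the trapezoidal rule."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    return [np.trapz(t**k * x, t) for k in range(n_moments)]
```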

4. Extrinsic calibration of the omnidirectional vision system and 3D reconstruction of an indoor environment [№ 2, 2023]
Author: Kholodilin, I.Yu.
Visitors: 1044
Autonomous indoor navigation of mobile robots has attracted the attention of many computer vision researchers over the years, and a wide variety of approaches and algorithms have been proposed to solve this problem. Proper perception of the environment is an important prerequisite for such robots: they must be able to evaluate the three-dimensional structure of the environment in order to run their algorithms. However, visual sensors such as conventional cameras cannot capture enough information due to their limited viewing angle. This article presents a comprehensive approach to three-dimensional modeling of an indoor environment. The vision system considered in this paper consists of an omnidirectional camera and a structured light source. The omnidirectional camera captures a wide range of information, while the laser beam is easy to detect and extract for further analysis. To obtain reliable measurement results, the vision system must be calibrated; for this purpose, the paper proposes an improved extrinsic calibration method. The paper also considers a 3D reconstruction algorithm for an indoor environment that includes a semantic segmentation neural network. Both the calibration method and the 3D modeling method require only a single input image, which significantly speeds up data processing without losing measurement accuracy. In turn, recent advances in neural networks require a large amount of training data in environments with different conditions, so developing and testing navigation algorithms can be expensive and time-consuming. This article evaluates the proposed methods experimentally using data generated by a previously developed simulator.
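
As a generic illustration of the structured-light part, the sketch below pulls a bright laser stripe out of an image by taking the brightest red pixel in every column; the paper works with omnidirectional images, so its actual extraction runs over the panoramic geometry, and the RGB channel ordering and threshold here are assumptions.

```python
import numpy as np

def extract_laser_pixels(image, threshold=200):
    """Per-column detection of the brightest red pixel, a common way to pull a
    structured-light laser stripe out of a camera frame (RGB image assumed)."""
    red = image[:, :, 0].astype(float)
    rows = np.argmax(red, axis=0)          # brightest row in every column
    cols = np.arange(image.shape[1])
    mask = red[rows, cols] >= threshold    # keep only confident detections
    return np.stack([rows[mask], cols[mask]], axis=1)  # (row, col) pairs
```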

5. Information and software support for the automated system of scientific research on the survivability of gas production facilities [№ 2, 2023]
Author: Valeev, A.F.
Visitors: 1027
The article is devoted to the automation of information processes in scientific research on the survivability of gas production facilities under well flooding conditions. The author proposes the structure of an automated system for scientific research on the survivability of gas production facilities, which includes a mathematical apparatus for modeling reservoir-well objects, watering processes and survivability means, i.e. various technologies for combating watering. The application software available on the market for hydrodynamic modeling or hydraulic calculations does not allow studying the survivability of gas production facilities. Therefore, the author has developed information and software support for an automated scientific research system that allows assessing the survivability of gas production facilities under flooding conditions and helps a specialist make decisions on improving it using well flooding control technologies. The automated system components are based on system analysis, the theory of hydraulics and oil and gas mechanics, object-oriented programming methods, statistical analysis, graph theory, modeling theory, control theory, the nodal analysis method, etc. As a result of predictive modeling, the survivability coefficient of gas production facilities is calculated taking into account efficiency and resource intensity when using survivability means. Based on this criterion, the system software suggests the best technology for combating the watering of a gas production facility.

6. Using three-dimensional data cubes in the implementation of a business intelligence system [№ 2, 2023]
Authors: Chernysh, B.A., Murygin, A.V.
Visitors: 1698
Business analysis is one of the key management tools that allows getting a reliable picture of the current business situation in an enterprise in all areas of its activity. To support this process, any company uses various data as its performance indicators. The data sources are primarily integrated information systems (IIS) of various types (ERP – Enterprise Resource Planning, CRM – Customer Relationship Management, MES – Manufacturing Execution System, etc.). These systems either incorporate business intelligence (BI) tools or use specialized solutions that allow performing complex analytical tasks according to a given formulation. This article discusses the features of both approaches, their advantages and disadvantages, and provides examples of foreign and domestic business analysis products existing on the market. Using the example of the BI module they developed for the IIS SciCMS, the authors propose a method for constructing three-dimensional cubes from the data contained in this system. The article describes the methods and algorithms used, as well as the initial requirements and limitations. The authors have formalized the tasks and considered the mathematical apparatus for constructing multidimensional data models based on information from a fixed set of normalized tables of a relational database. Examples of SQL queries and output data are given. In some cases (working with a non-relational DBMS, the need for precalculated aggregate values, the complexity and high cost of direct SQL queries, etc.), the described method for building multidimensional cubes may not be applicable. In SciCMS this problem is solved by its own data import and transformation module based on an open source library. The article summarizes the main advantages and disadvantages of the proposed approach and the prospects for its use in domestic enterprises.
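
As a minimal sketch of the cube-building step, flat fact rows (for example, the result of an SQL query over joined normalized tables) can be rolled up into a three-dimensional cube keyed by three dimensions; the dimension and measure names below are placeholders, not SciCMS fields.

```python
from collections import defaultdict

def build_cube(rows, dims, measure):
    """Aggregate flat fact rows into a three-dimensional cube:
    cube[d1][d2][d3] = sum of the measure over matching rows.
    `rows` is an iterable of dicts, `dims` a tuple of three dimension keys."""
    d1, d2, d3 = dims
    cube = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))
    for row in rows:
        cube[row[d1]][row[d2]][row[d3]] += row[measure]
    return cube

# Hypothetical usage: group sales by product, region and month.
# cube = build_cube(rows, dims=("product", "region", "month"), measure="amount")
```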

7. T5 language models for text simplification [№ 2, 2023]
Authors: Vasiliev, D.D., Pyataeva, A.V.
Visitors: 1653
The problem of the readability of natural Russian texts is relevant for people with various cognitive impairments and for people with poor language skills, such as labor migrants or children. Texts constantly surround us in real life: various instructions, directions, and recommendations. These texts can be made more accessible to such categories of citizens by using an automated text simplification algorithm. This article uses transformer-based deep neural architectures as the automated simplification algorithm. The following language models were applied: ruT5-base-absum, ruT5-base-paraphraser, ruT5_base_sum_gazeta, ruT5-base. Experimental studies used two data sets: a data set from the Institute of Philology and Language Communication and data from an open GitHub repository. The following set of metrics was used to evaluate the models: BLEU, the Flesch readability index, the automatic readability index, and the sentence length difference. Further, using a test data set, statistical indicators were computed from the listed metrics, which became the basis for comparing algorithms with different training parameters. The authors carried out several experiments with these models using different learning rate values for each dataset, different batch sizes, and the exclusion of an additional dataset from training. Despite the different metrics, the model outputs did not differ much from each other during manual comparison. The results of the experimental studies show the need to increase the data set for model training, to change the model training parameters, or to use other algorithms. This study is the first step towards creating a decision support system for automatic text simplification and requires further development.
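
A minimal sketch of running one of the listed checkpoints with the Hugging Face transformers library is given below; the hub identifier, the generation settings, and the assumption that the checkpoint has already been fine-tuned on a simplification corpus are all illustrative.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Hub identifier is an assumption; the paper lists ruT5-base, ruT5-base-absum,
# ruT5-base-paraphraser and ruT5_base_sum_gazeta among the tested checkpoints.
MODEL_NAME = "ai-forever/ruT5-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def simplify(text: str) -> str:
    """Generate a simplified paraphrase of a Russian sentence with a T5 model."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=128, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```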

8. On-the-fly data clustering for the PostgreSQL database management system [№ 2, 2023]
Author: Tatarnikova, T.M.
Visitors: 1616
The paper substantiates the relevance of real-time data clustering implemented as a dynamically loaded library for the PostgreSQL open-source database management system. It formulates the conditions for performing real-time clustering: sufficient performance, so that the time for determining clusters does not exceed the time for writing data to the table, and a limited amount of data for clustering. PostgreSQL methods are available in the devel library, which makes it possible to interact with data at the internal representation level from other programming languages that perform some operations faster than the SQL query language. The scheme of interaction between the clustering elements includes a database with the dynamically loaded library and the TimescaleDB extension for organizing data storage by the database server; an interpreter, a software layer that translates data from the internal representation into the types of the language used before clustering and, vice versa, translates the clustering results into the internal format for saving them to the database; and a clusterizer, a program that clusters the transmitted data according to an algorithm. The proposed library implements a trigger function, which in fact is an interpreter connecting the clusterizer with the database. If this is the first invocation of the function for the table, the initial centroids are selected in the way the user specified; otherwise, the centroid data is read from the table. The paper demonstrates the library at work: the data set for clustering is randomly generated with a concentration around given centroid coordinates. The library does not limit the user either in the dimension of the points to be distributed among clusters or in the number of tables for inserting data. Due to the computational complexity of the algorithms, there is a limit on the maximum amount of data for clustering.
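
The clusterizer's per-batch work can be illustrated with a short k-means-style pass over newly inserted rows; this is only a sketch of the general technique in Python, whereas the library itself runs inside PostgreSQL as a trigger function operating on the server's internal types.

```python
import numpy as np

def assign_and_update(points, centroids):
    """One k-means-style pass: assign each point to the nearest centroid and
    recompute the centroids. `points` has shape (n, d), `centroids` (k, d)."""
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)                     # nearest centroid per point
    new_centroids = np.array([
        points[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(len(centroids))
    ])
    return labels, new_centroids
```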

9. Neural network tool environment for creating adaptive application program interfaces [№ 2, 2023]
Authors: Tagirova, L.F., Zubkova, T.M.
Visitors: 1732
Software is used in almost all areas of human activity. Erroneous user actions, which often depend on the user's emotional state, can lead to negative consequences, especially in production management, technological processes, design activities, medicine, etc. The article is devoted to personalizing application program interfaces to a user's individual features based on neural network technologies. The novelty of the proposed approach is that the interface prototype is formed by selecting each menu item separately, which allows building a personalized interface. The authors propose a tool environment that includes a set of interface components for dynamically generating a unique interface prototype adapted to each user's features. As a tool for selecting interface components, the authors used a deep neural network in the form of a multilayer perceptron. The input parameters of the neural network are the distinctive features of users; the outputs are the components of the future interface prototype. Professional and psychophysiological characteristics of users, their demographic characteristics, as well as their emotional state, were chosen as the criteria for adapting the interface part of applications. The output parameters are interface components: the font size of text and hyperlinks, the size of and distance between web page elements, the appearance of tooltips and the context menu, messages to the user, the color scheme, the availability of an information search window, etc. As a result, the paper presents a developed tool environment for creating personalized application program interfaces using neural network technologies. While the software runs, users are evaluated by their characteristics using basic IT and psychology tests. To determine the emotional tone, age and gender, the system uses the Python Deepface library, which implements an algorithm based on a trained convolutional neural network. The implementation of the proposed tool environment will ensure comfortable interaction between users and the application.
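
A minimal sketch of such a perceptron is shown below, assuming PyTorch and placeholder dimensions; the encoding of user traits and the set of interface options are not taken from the paper.

```python
import torch
from torch import nn

# Placeholder dimensions: inputs are encoded user traits (professional,
# psychophysiological, demographic, emotional state), outputs are scores for
# interface component options (font size, element spacing, colour scheme, etc.).
N_USER_FEATURES = 16
N_INTERFACE_OPTIONS = 10

mlp = nn.Sequential(
    nn.Linear(N_USER_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, N_INTERFACE_OPTIONS),  # one score per interface option
)

def recommend_components(user_features: torch.Tensor) -> torch.Tensor:
    """Return per-option scores; downstream code picks the best option per menu item."""
    return mlp(user_features)
```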

10. Optimal control of non-linear systems via quadratic criteria with bounded controls [№ 2, 2023]
Authors: Emelyanova, I.I., Pchelintsev, A.N.
Visitors: 1309
The paper suggests a method of constructing an optimal control for a class of nonlinear systems with a quadratic criterion and inequality-type constraints on the controls. This method is a further development of the method of successive approximations suggested in earlier works by a group of authors that includes the authors of the current paper. By modifying this method, the researchers have managed to establish the existence of an optimal control for the problem in question and to synthesize the actual optimal control. The crucial issue in constructing the optimal control is the convergence of the method of successive approximations. Besides, the suggested scheme leads to a computational procedure that involves solving a two-point boundary value problem, which, as is known, causes certain computational difficulties. To avoid these difficulties, the paper includes a modified scheme that converges and provides a control close to the optimal one. It is demonstrated that the developed scheme reduces the initial problem to a sequence of Cauchy problems that can be easily solved using the simplest methods of numerical analysis. To illustrate the suggested method, the paper shows the results of a computational experiment on constructing an optimal control for a controlled system described by the Van der Pol equation. In this case, it turned out that it is the modified scheme that gives the optimal control.
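
Each step of such a scheme amounts to integrating a Cauchy problem for the state under the current control iterate; a minimal sketch for a controlled Van der Pol oscillator is shown below. The way the control enters the equation, the bound |u| ≤ 1 and the specific control iterate are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def controlled_van_der_pol(mu, control):
    """Right-hand side of a controlled Van der Pol oscillator:
    x1' = x2, x2' = mu*(1 - x1^2)*x2 - x1 + u(t), with a bounded scalar control."""
    def rhs(t, x):
        u = np.clip(control(t), -1.0, 1.0)  # enforce |u| <= 1
        return [x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0] + u]
    return rhs

# One Cauchy problem of the successive-approximation loop: integrate the state
# forward in time for the current control iterate.
sol = solve_ivp(controlled_van_der_pol(mu=1.0, control=lambda t: 0.5 * np.cos(t)),
                t_span=(0.0, 10.0), y0=[1.0, 0.0], max_step=0.01)
```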
