ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Articles of journal issue No. 4, 2020.


1. Issues of construction and application of knowledge bases in designing and production of innovative objects [No. 4, 2020]
Authors: R.I. Solnitsev, G.I. Korshunov, Do Xuan Cho, Do Hai Kuan
Visitors: 4903
The paper discusses the principles of building and applying a knowledge base (KB) for innovative objects. The construction and application of a knowledge base in design and production are described using the example of a closed control system for the neutralization of vehicle exhaust gases, which is an innovative object within the Nature–Technogenics control system. The authors substantiate the choice of this closed control system as a reference case for innovative design and production facilities. The design and production process for this object is built on a knowledge base and covers the development of schematic diagrams and circuits, structural solutions and documentation, software, and the production, assembly, and adjustment of prototypes. The knowledge base must contain the data and knowledge needed to design and produce the objects under study. For the circuit design stage, an electrical schematic of the closed exhaust-gas neutralization control system is introduced; it was developed with PCAD design tools in accordance with the established standards stored in the knowledge base. CAD and PPAS tools for the subsequent steps ensure the calculation of IPC standards, also through the knowledge base, which covers the development of design solutions and documentation for the production of printed circuit boards and other elements, as well as PCBA verification. The development of project documentation at later design stages is likewise based on the knowledge base: structural fragments stored in it are reused for new design solutions and for producing design documentation. The paper also proposes a knowledge base management system that lets developers use UML notation to support it: "ordinary" design engineers (circuit engineers, designers, technologists, testers) receive answers to their requests, while experts ensure the input and control of knowledge. The user interface is also presented, with "enter request" and "show answer" fields as well as control and visualization operators. The application of the developed knowledge base to procedures of the circuit design and architectural design stages and to the technological preparation of production is shown in examples. The knowledge base is applied to specific production and design sequences through the user's problem-oriented language for creating requests and answers.
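Purely as an illustration of the request–answer interaction described above (the "enter request" and "show answer" fields), a minimal Python sketch of a knowledge base lookup is given below; the stored entries, keys, and function names are hypothetical and do not come from the paper.

```python
# Toy knowledge base keyed by design-stage requests (hypothetical content).
KNOWLEDGE_BASE = {
    "pcb trace clearance standard": "Use the IPC clearance table stored for the target voltage class.",
    "exhaust neutralization circuit": "Reuse the schematic fragment stored at the circuit design stage.",
}

def show_answer(request: str) -> str:
    """Return the stored answer for a request entered by a design engineer."""
    return KNOWLEDGE_BASE.get(request.lower().strip(), "No answer stored; forward the request to an expert.")

if __name__ == "__main__":
    print(show_answer("PCB trace clearance standard"))
```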

2. Algorithm analysis for multiple marking of percolation clusters with a partial load of computing nodes on supercomputer systems [No. 4, 2020]
Authors: S.Yu. Lapshina, A.N. Sotnikov, V.E. Loginova
Visitors: 5355
The paper considers the behavior of the Parallel Cluster Multiple Marking Technique in the course of simulation experiments on a multi-agent modeling problem with a partial load of the requested computing nodes of modern supercomputer systems installed at the JSCC RAS. The Cluster Multiple Marking Technique is a universal tool that can be used in any field for differentiating large lattice clusters; it receives input data in an application-independent format. At the JSCC RAS, this tool was used to study the spread of epidemics. The technique can also be used to study the behavior of oil reservoirs, the flow of water through porous materials, the spread of forest fires, and much more. In the course of the simulation experiments, the authors applied a version of the Hoshen–Kopelman algorithm for multiple marking of percolation clusters, improved for a multiprocessor system and associated with the label-linking mechanism. The paper provides a comparative analysis of the execution time of the Hoshen–Kopelman multiple labeling algorithm for percolation clusters under partial and full load of computational nodes and various values of input parameters on the four main high-performance computing systems installed at the JSCC RAS: MVS-10P MP2 KNL, MVS-10P OP, MVS-10P Tornado, and MVS-100K.
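For orientation, the classic serial Hoshen–Kopelman labeling that the multiple-marking technique builds on can be sketched as follows (plain Python with union-find on a 2D occupancy lattice; the authors' parallel, multiple-marking variant for supercomputers is not reproduced here, and the lattice size and occupation probability are illustrative).

```python
import numpy as np

def hoshen_kopelman(grid):
    """Label connected clusters of occupied sites (value 1) in a 2D lattice.

    Serial, classic raster-scan version with union-find; a simplified
    illustration only, not the parallel multiple-marking variant.
    """
    labels = np.zeros_like(grid, dtype=int)
    parent = [0]                      # parent[i] = representative of label i

    def find(x):                      # follow links to the root label
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                  # merge two label trees
        parent[find(a)] = find(b)

    next_label = 0
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            if not grid[i, j]:
                continue
            up = labels[i - 1, j] if i > 0 else 0
            left = labels[i, j - 1] if j > 0 else 0
            if up == 0 and left == 0:          # start a new cluster
                next_label += 1
                parent.append(next_label)
                labels[i, j] = next_label
            elif up and left:                  # site bridges two clusters: merge them
                union(up, left)
                labels[i, j] = find(up)
            else:
                labels[i, j] = find(up or left)
    # second pass: flatten every label to its root representative
    for i in range(rows):
        for j in range(cols):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lattice = (rng.random((8, 8)) < 0.5).astype(int)   # site occupation p = 0.5
    print(hoshen_kopelman(lattice))
```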

3. Using decision table transformations when creating the «Detector» intelligent software module for web applications [No. 4, 2020]
Author: A.Yu. Yurin
Visitors: 4484
Creating decision-making modules that include knowledge bases for web applications requires the development of specialized methods and tools. In this regard, model-driven approaches that implement the principles of transformations, generative programming, and visual programming are promising. This paper describes a new specialization of one of these approaches and its application for creating the Detector intelligent software module. This specialization involves the use of decision tables and conceptual models in the form of UML class diagrams for knowledge formalization and representation; a domain-specific language, namely the Rule Visual Modeling Language, for designing logical rules; the Hypertext Preprocessor (PHP) language as a target software platform; and Personal Knowledge Base Designer as the software that implements the approach. The advantage of this approach is the automated creation of web-based decision-making modules based on transformations of conceptual models and decision tables without direct programming (direct manipulation of programming language constructs). The limitations of the approach are related to the class of created systems (PHP web modules), as well as the depth of the implemented logical inference: the decision in the modules is made in one step and does not involve a chain of reasoning. The description of the proposed approach and an example of its application for developing the Detector module are presented. The Detector is intended for decision making when detecting banned messages and clients violating the rules of using a Short Message Service. The applicability of the developed module is shown, as well as an evaluation of the approach based on a time criterion for solving educational (test) problems.
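As a rough illustration of the single-step (non-chaining) inference such a generated module performs, here is a minimal Python sketch of evaluating a decision table; the rule contents and names are hypothetical, and the actual approach generates PHP code from decision tables and UML models via Personal Knowledge Base Designer rather than interpreting a table at run time.

```python
# Minimal single-step decision-table evaluator (illustrative only).
# Each row: a set of condition values plus the decision it yields.
# No chaining: the first matching row gives the final decision.

RULES = [
    {"if": {"contains_banned_word": True},                                "then": "block_message"},
    {"if": {"rate_limit_exceeded": True},                                 "then": "flag_client"},
    {"if": {"contains_banned_word": False, "rate_limit_exceeded": False}, "then": "allow"},
]

def evaluate(facts: dict):
    """Return the decision of the first rule whose conditions all hold, or None."""
    for rule in RULES:
        if all(facts.get(cond) == value for cond, value in rule["if"].items()):
            return rule["then"]
    return None   # no rule fired

if __name__ == "__main__":
    print(evaluate({"contains_banned_word": True, "rate_limit_exceeded": False}))
    # -> block_message
```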

4. Implementation of logical inference in a production expert system using a Rete network and a relational database [No. 4, 2020]
Authors: L.V. Massel, G.V. An, D.V. Pesterev
Visitors: 5538
One of the areas of artificial intelligence is associated with the development of expert systems (ES). Most often, these systems use a knowledge model in the form of rules, called Post's production model; such ES are called production expert systems. The classical algorithm for obtaining a solution in an expert system is sequential logical inference. As the number of rules in the knowledge base grows, inference takes an unacceptably long time, which reduces the possibility of obtaining a solution promptly. To speed up inference, it is proposed to use the Rete network, a logical inference algorithm for production expert systems proposed by Charles Forgy. The Rete network, a pattern matching algorithm, partially solves this problem, but it is desirable to accelerate the conversion of the original rules into the Rete network. To this end, the paper proposes forming and storing the working memory of the inference system of production expert systems based on Rete network technology using a relational data model. The paper presents the architecture of the data and knowledge store of intelligent systems, describes the implementation of an expert system based on the specification of this architecture, and shows the structure of the developed expert system. The approach was tested using cognitive models. A cognitive model is one of the types of semantic models that reflects causal relationships between concepts. It was previously proposed to convert cognitive models into ES production rules to automate the interpretation of cognitive models. The authors illustrate the use of a Rete network for logical inference on productions with the example of a cognitive model of one of the threats to energy security, "Underinvestment in the energy sector". The paper shows that using a Rete network and a relational database for storing the working memory of the inference system in the developed expert system reduces inference time with a large number of rules in the knowledge base compared to the naive search algorithm.
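To give a flavor of keeping production-system working memory in a relational store, here is a minimal Python/SQLite sketch; the table layout, facts, and rule are hypothetical and far simpler than a real Rete network, which additionally caches partial matches between inference cycles.

```python
import sqlite3

# Working memory as a relational table of (object, attribute, value) facts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wm (obj TEXT, attr TEXT, val TEXT)")
conn.executemany(
    "INSERT INTO wm VALUES (?, ?, ?)",
    [
        ("energy_sector", "investment", "low"),
        ("energy_sector", "equipment_wear", "high"),
    ],
)

# One production rule expressed as an SQL match over working memory:
# IF investment is low AND equipment wear is high
# THEN assert a 'capacity_deficit' risk fact.
MATCH_RULE = """
SELECT a.obj FROM wm a JOIN wm b ON a.obj = b.obj
WHERE a.attr = 'investment' AND a.val = 'low'
  AND b.attr = 'equipment_wear' AND b.val = 'high'
"""

for (obj,) in conn.execute(MATCH_RULE).fetchall():
    conn.execute("INSERT INTO wm VALUES (?, 'risk', 'capacity_deficit')", (obj,))

print(conn.execute("SELECT * FROM wm WHERE attr = 'risk'").fetchall())
# -> [('energy_sector', 'risk', 'capacity_deficit')]
```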

5. Implementation of data classification software based on convolutional neural networks and the case-based reasoning approach [No. 4, 2020]
Authors: P.R. Varshavskiy, A.V. Kozhevnikov
Visitors: 5358
This paper is devoted to the implementation of software for data classification using case-based reasoning (CBR) and convolutional neural network (CNN) technology. CBR methods are widely used to find solutions to various problems based on accumulated experience, and CNNs are successfully used in solving classification problems by isolating individual elements and forming high-level features using convolution kernels. One of the necessary conditions for successfully solving a data classification problem is the presence of a correct training dataset. Unfortunately, this condition cannot always be fulfilled (for example, due to the complexity of the objects under consideration or a lack of base information). Due to their ability to accumulate, use, and adapt existing experience, CBR methods can be used to form a training dataset that can then be used by other methods to solve the data classification problem. Thus, the integration of CNN and CBR improves the efficiency of solving the data classification problem. In addition, CBR methods can be applied in areas with unpredictable behavior and can be trained during operation, for example, in the process of training neural networks. This paper proposes a CBR method for CNN training that manages the training process and represents iterations of CNN training as cases. Selecting a training step based on precedents improves the performance of the neural network training algorithm. Based on the proposed methods, a neural network block using a CNN to extend the capability of the CBR system for data classification was implemented in MS Visual Studio in the C# language. To evaluate the effectiveness of the proposed solutions, computational experiments were performed on real data sets.
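A minimal sketch of precedent-based selection of a training step might look as follows (Python; the case representation, features, and stored values are hypothetical, and the paper's own case structure and C# implementation are not reproduced here).

```python
import numpy as np

# Each stored case describes one past training iteration:
# (current loss, gradient norm) -> learning rate that worked well.
CASE_BASE = [
    {"state": np.array([2.30, 1.50]), "lr": 0.10},
    {"state": np.array([0.90, 0.40]), "lr": 0.01},
    {"state": np.array([0.20, 0.05]), "lr": 0.001},
]

def retrieve_learning_rate(loss, grad_norm):
    """Pick the learning rate of the nearest precedent (1-NN retrieval)."""
    query = np.array([loss, grad_norm])
    distances = [np.linalg.norm(query - case["state"]) for case in CASE_BASE]
    return CASE_BASE[int(np.argmin(distances))]["lr"]

def retain(loss, grad_norm, lr):
    """Store a newly observed, successful iteration as a new case."""
    CASE_BASE.append({"state": np.array([loss, grad_norm]), "lr": lr})

if __name__ == "__main__":
    print(retrieve_learning_rate(loss=1.0, grad_norm=0.5))   # -> 0.01
```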

6. Development of theoretical bases for classification and clustering of fuzzy features based on category theory [No. 4, 2020]
Authors: K.D. Rusakov, D.E. Seliverstov, S.Sh. Hill, S.B. Savilkin
Visitors: 5238
The paper provides a rationale for choosing a measure of information uncertainty. It describes a modern approach based on the application of fundamental algebraic constructions of category theory. A feature of the set of equivalence relations is the direct establishment of an equivalence relation between an object and a class. The paper shows that at present there are a number of topical applied problems in the classification field that require a different approach to establishing the equivalence relation, namely the use of a cascade filter model with intermediate states. To justify the measure of uncertainty about an object, the authors propose to use theoretical propositions based on the mathematical apparatus of the theory of ultra-operators. The proposed apparatus also operates with information in terms of definitions of non-elementary information. Its characteristics include the following: it operates not with information itself but with its uncertainties, which are not considered in the apparatus of ultra-operators; in some problems the basic information is a special case of the apparatus of ultra-operators, which simplifies the calculations; the scope is narrowed to numbers (i.e., data sets can only be of a numeric nature, including multidimensional compacts); and operating with numeric information sets in some cases eliminates the need to use lattice concepts (and the corresponding scales) explicitly, allowing implicit operation with infinite lattices. The proposed approach, together with the presented mathematical model and measure of information uncertainty, is an integral part of the developed "Method of classification and clustering of states of complex systems based on the set-theoretic approach" and allows the process of obtaining crisp classes to be considered from the point of view of reducing information entropy using a cascade filter.

7. Optimizing VPN speed to provide the possibility of telework using routers powered by ARM CPUs [No. 4, 2020]
Authors: S.V. Andreev, A.A. Khlupina
Visitors: 3832
This paper is devoted to the problem of optimizing VPN connection speed when using routers with ARM processors. In the current context, many enterprises and institutions around the world face the urgent issue of providing employees, as well as remote branches or units, with access to the resources of the head office local area network. The paper discusses the possibility of connecting employees through an encrypted VPN channel using modern home routers with ARM processors. With this approach, all remote user devices are automatically connected to the local network resources of the head office or enterprise, and there is no need for enterprise IT specialists to configure each user's device individually. The paper considers a solution to a key problem of this approach, namely ensuring the maximum speed of an encrypted VPN connection and, therefore, accelerating the software components of the routers' internal software (firmware) that provide the high-speed encrypted VPN connection. The authors consider optimizing the speed of encryption and decryption algorithms using the features of the device's target processor, such as parallelizing the execution of processor instructions using SIMD (Single Instruction, Multiple Data); generally improving router performance by using optimal compiler options; non-traditional use of PCI hardware encryption devices; and the use of alternative modern virtual private network (VPN) options for routers with a relatively low ARM central processor (CPU) clock frequency but more than two cores, while providing VPN channel multithreading.

8. Lorenz attractor simulation [No. 4, 2020]
Authors: F.V. Filippov, A.M. Struev, A.L. Zolkin
Visitors: 6012
This paper describes a mechanism that allows applying the Scilab system to dynamic system simulation while keeping high accuracy of the obtained data, using the example of building the Lorenz attractor. The Lorenz model is a real physical example of a dynamical system with chaotic behavior, which distinguishes it from other, artificially created systems. Over time, it became clear that the law worked out by Lorenz is extremely important, since it characterizes processes in turbulent flows, in laser physics and hydrodynamic systems, as well as complex processes in biology and chemistry. In the literature dedicated to the numerical study of the Lorenz system with classical values of its parameters, conclusions are often drawn about the structure of the attractor based on data obtained from a computational experiment (for example, the statement that the attractor contains cycles). The program proposed by the authors consists of two main parts. The first part defines the user function solv_lor(n), which describes the system of differential equations simulating the Lorenz attractor; the second part of the listing contains a call to this user function. The paper describes specific changes in the behavior of the Lorenz system for various values of the parameter r, and graphic illustrations reflecting the simulation results for various values of r are also given. Significant changes in the trajectory were found for large values of the parameter. The program also defines the user function Lorenz(t, y), with which numerical methods were used to solve the system of ordinary differential equations. Moreover, the system allows graphical modeling of solutions at a qualitatively high level: a set of graphical tools is provided for dynamic editing of graphs and management of graphic window parameters. The computer experiments carried out confirm the simplicity and convenience of using the Scilab system for modeling dynamic systems while maintaining the high accuracy of the results obtained.
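The paper's listing is written in Scilab and is not reproduced here; for orientation, an analogous minimal sketch of integrating the Lorenz system with the classical parameters (sigma = 10, b = 8/3, variable r) in Python/SciPy is shown below.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, y, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz system: dx/dt, dy/dt, dz/dt."""
    x, yy, z = y
    return [sigma * (yy - x), x * (r - z) - yy, x * yy - b * z]

# Integrate from a point near the origin; r = 28 is the classical chaotic case.
sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0],
                args=(10.0, 28.0, 8.0 / 3.0),
                dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, 40.0, 10000)
x, y, z = sol.sol(t)
print("final state:", x[-1], y[-1], z[-1])

# Optional 3D plot of the attractor (requires matplotlib):
# import matplotlib.pyplot as plt
# ax = plt.figure().add_subplot(projection="3d")
# ax.plot(x, y, z, lw=0.4)
# plt.show()
```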

9. Using cloud-based technologies for the optimization of typical logistics process models [No. 4, 2020]
Authors: A.A. Levchenko, V.V. Taratukhin
Visitors: 3976
The use of standard process models in enterprise resource planning (ERP) system implementation can reduce the time and budget of a project. The problem of optimizing standard process models becomes more significant when SaaS (Software as a Service) technologies are applied. Available standard process models and methods for their optimization do not take into account the specifics of cloud computing and therefore cannot be applied to new projects using SaaS. The use of system analysis techniques and systems theory has allowed the authors to formalize the problem of managing typical process models. The problem is formalized for cases of managing the implementation of automated process control systems using both the classical implementation methodology and the methodology for implementing automated process control systems with SaaS technology. To compare the classical approach and the approach using SaaS technologies, general management theory was applied. In describing the optimization problem, the goal and the criteria for the effectiveness of achieving it are determined, and models are built to justify decision-making. In the models of management systems for the implementation and support of automated process control systems, blocks of control, planning, load distribution, and execution were identified, and the connections of the blocks with each other and with the external environment were described. Recommendations were made for building a simulation model of the procurement process, the model was tested, and a digital procurement process management system was implemented. To justify the economic feasibility of the applied method, value management was applied based on standard models of logistics processes, taking into account the regional specifics of Russia and Japan. The developed technology has been successfully tested at enterprises of the metallurgical industry and the high-tech industry.

10. Neural network development for evaluating the technical condition of a hydro turbine using vibration monitoring [No. 4, 2020]
Authors: A.A. Santalov, V.N. Klyachkin
Visitors: 3973
The prevention of emergencies at technical facilities is largely ensured by diagnostics of their functioning. One of the important problems is diagnosing the technical condition of a hydraulic unit. In the history of hydropower, there are examples where poor-quality diagnostics led to serious accidents. To prevent such situations, vibration monitoring of the hydraulic unit is carried out: the vibration data is sent to a data collection server and transmitted to the control rack, where load adjustment or complete unit shutdown occurs. The need for prompt intervention is determined by many indicators characterizing the quality of functioning of the hydraulic unit. This paper explores the effectiveness of neural network methods for vibration diagnostics of a hydraulic unit. The resulting sample is divided into three parts: training, control, and test. The training part is used to build the neural network model, i.e. the relationship between the indicators of the unit's functioning and its states. The control sample is used to monitor training quality and helps prevent overfitting of the network. The quality of classification is evaluated on the test sample. When cross-validation is used, the original sample is split into several blocks. To assess diagnostic efficiency, three quality criteria were used: the average error on the test sample, AUC, and the F-measure. The practical implementation was carried out using the MATLAB package. For the given set of input data, the best configuration was a neural network of three layers with 18 neurons in each layer, trained with the Levenberg–Marquardt backpropagation algorithm. The average error in recognizing the state of the hydraulic unit using the neural network is 4.85%, the AUC is 0.8833, and the F-measure is 0.8282. Analysis of the effectiveness of the obtained network configuration compared to the network built automatically with the Statistics and Machine Learning Toolbox library showed an increase in the F-measure by 6.7%.
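The paper's model is built in MATLAB with Levenberg–Marquardt training; as a rough Python analogue (different optimizer, synthetic data, hypothetical number of vibration indicators), a three-layer network with 18 neurons per layer and the same quality metrics could be evaluated as follows.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score, roc_auc_score

# Synthetic stand-in for vibration indicators and unit states (0 = normal, 1 = alarm).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 12))                     # 12 hypothetical vibration indicators
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Training / control (validation) / test split, mirroring the paper's setup.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Three hidden layers with 18 neurons each; scikit-learn has no Levenberg–Marquardt,
# so its default Adam optimizer is used instead of the paper's training algorithm.
clf = MLPClassifier(hidden_layer_sizes=(18, 18, 18), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print("control accuracy:", clf.score(X_val, y_val))            # training-quality check
proba = clf.predict_proba(X_test)[:, 1]
pred = clf.predict(X_test)
print("test error:      ", 1.0 - clf.score(X_test, y_test))
print("AUC:             ", roc_auc_score(y_test, proba))
print("F-measure:       ", f1_score(y_test, pred))
```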
