ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)


Journal articles №4 2022

1. Organizational problems of implementing flexible approaches in applied software development [No. 4, 2022]
Authors: Sayapin, O.V. (tow65@yandex.ru) - 27 Central Research Institute of the Ministry of Defense of Russia (Associate Professor, Leading Researcher), Ph.D; Tikhanychev, O.V. (tow65@yandex.ru) - 27 Central Research Institute of the Ministry of Defense of Russia (Senior Researcher), Ph.D; Bezvesilnaya, A.A. (a.bezvesilnaia@amchs.ru) - Civil Defence Academy EMERCOM of Russia (Associate Professor, Head of the Department), Ph.D;
Abstract: The article analyzes the features of cascade (waterfall) and flexible (agile) approaches to organizing applied software development for automated control systems, including their advantages and disadvantages. The most critical shortcomings of the cascade approach are increased development time and poor interaction between the customer and the developer. At the same time, the existing regulatory documents define this approach as the main one. Using general scientific methods of analysis and synthesis, the authors obtain quantitative and qualitative estimates of the time and the expected result of applying cascade and flexible approaches in software development. Based on a comparison of these estimates, the authors conclude that replacing the cascade approach with a flexible one could be a rational solution. At the same time, an analysis of regulatory and technical documentation showed that the use of flexible approaches in developing application programs is hindered by existing organizational problems associated not only with regulatory requirements, but also with the complexity of coordinating the work performed by distributed teams. Based on an analysis of a typical process of developing application software for automated control systems, the authors formulate proposals for replacing cascade approaches with flexible or combined ones. The novelty of the proposed approach lies in its comprehensiveness: its implementation will allow building a development system that increases the interest of all participants in the result and supports a process of continuous refinement of requirements.
Keywords: organizational development problems, decision support, agile, software development, control automation

2. Developing the Expert system as a tool to form encyclopedias and to fill the Common digital space of scientific knowledge [No. 4, 2022]
Authors: S.A. Vlasova (svlasova@jscc.ru) - Joint Supercomputer Center of the Russian Academy of Sciences – JSCC (Leading Researcher), Ph.D; N.E. Kalenov (nkalenov@jscc.ru) - Joint Supercomputer Center of the Russian Academy of Sciences – JSCC (Professor, Chief Researcher), Ph.D; A.N. Sotnikov (asotnikov@iscc.ru) - Joint Supercomputer Center of RAS (Professor), Ph.D;
Abstract: The article presents the results of developing a universal, customizable web-based expert system designed to select, from a variety of objects, those that meet specified criteria to the maximum extent. Each object is assessed by authorized experts, who are selected based on structured data characterizing them that is entered into the system when an expert is registered. The system can be used for expert selection of any objects whose properties are presented in digital form, with the assessment criteria determined by these properties. Expert selection of objects of various kinds is one of the most important tasks in forming the content of the Common digital space of scientific knowledge (CDSSK). The concept of including printed materials in the CDSSK is based on the principle of point reflection (with deep semantic text markup) of the most important fundamental publications in each scientific field. The selection of such publications should be based on the opinion of leading experts in the corresponding field of knowledge. The principle of expert selection is also needed when deciding on the creation of 3D models of museum objects and on reflecting film, photo and audio materials in the CDSSK. The Expertise system can be used for selecting publications to be included in the National Electronic Library (NEL); for selecting authors of articles for the Great Russian Encyclopedia; for reviewing articles submitted to the editorial boards of scientific journals; and for conducting contests of scientific papers, photographs, video materials and other objects whose evaluated properties can be presented on the Internet. Compared to the previous version of the system, described in a 2020 publication, the new version presented in this article has more flexible configuration tools.
It can be configured for the expert review of various object groups. The objects of each group are assessed according to the group's own criteria using its own rating system. A group may comprise fundamental monographs published in different years in the same scientific field; articles proposed for publication in a particular journal; specialists who have expressed a desire to act as authors of commissioned scientific articles; databases reflecting objects of the same type, etc. Each group has its own system of object assessment and its own circle of experts. The result of the system's operation is a set of rating lists of objects based on processed expert assessments. The system also has a special built-in application that allows a user with the "administrator" status to analyze rating lists and the activity of experts. The article provides a detailed description of the system structure, its functionality, and examples of its use.
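The core aggregation step described above, turning per-group expert assessments into a rating list, can be sketched as follows. This is an illustrative reconstruction, not the Expertise system's actual code; the scoring scheme (ranking by mean expert score) and all object names are assumptions.

```python
# Hypothetical sketch: each group of objects is scored by that group's
# own circle of experts, and objects are ranked by mean score.

def rating_list(assessments):
    """assessments: {object_id: [scores from that group's experts]}.
    Returns object ids sorted by mean expert score, highest first."""
    means = {obj: sum(s) / len(s) for obj, s in assessments.items()}
    return sorted(means, key=means.get, reverse=True)

scores = {
    "monograph-A": [5, 4, 5],   # scores from three experts
    "monograph-B": [3, 4, 4],
    "monograph-C": [5, 5, 5],
}
print(rating_list(scores))  # → ['monograph-C', 'monograph-A', 'monograph-B']
```

A real system would additionally weight experts or criteria; the point here is only that each group's ratings are computed independently from its own assessments.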
Keywords: content formation, selection of information resources, the software, WEB technologies, rating list, expert estimation, digital space of scientific knowledge

3. Unification of a data presentation model and format conversion based on a non-relational Neo4j DBMS [No. 4, 2022]
Authors: Eremeev, A.P. (eremeev@appmat.ru) - National Research University “MPEI” (Professor), Ph.D; Paniavin N.A. (paniavinna@mpei.com) - National Research University "MPEI" (Postgraduate Student);
Abstract: Nowadays, due to the digitalization concept, many software tools have appeared, including those using artificial intelligence methods, that process large data streams (big data) of varying degrees of complexity. Voice assistants, chatbots and search recommender systems not only use incoming up-to-date data, but also store and analyze changes in this data, the volume of which is constantly growing. Under the threat of combinatorial explosion, the problems of multidimensional modeling, efficient query processing and extraction of the necessary information arise. This article analyzes the possibility of increasing the efficiency of multidimensional OLAP modeling and temporal data extraction based on built-in software components offered by the non-relational DBMS Neo4j. The choice of a graph DBMS is due to the absence of the need to strictly fix the data structure at the initial stage, as well as to the flexibility of the data representation structure itself, which can change as new information becomes available. Making changes to strict, pre-fixed relational table views is an expensive operation. The typical way to store temporal data (time moments and intervals) is to store timestamps as node attributes. However, this option for storing and handling events may not be effective enough for data representations of large dimension. The experimental results have shown that the graph of a multidimensional data cube can be projected onto coordinate axes in the form of separate temporal slices, where the abscissa axis displays the event start time and the ordinate axis displays its end time. Additional axes can be introduced, if necessary, to determine cause-effect relationships between processes occurring simultaneously. At the same time, the rules of Allen's temporal logic are supported.
The paper also considers the possibility of unifying the representation model for internal data structures of varying complexity based on graphs.
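Events stored as (start, end) timestamp pairs, as in the temporal slices described above, can be compared using Allen's interval relations. The sketch below is an illustration of that idea in plain Python, not the authors' Neo4j implementation, and covers only a few of Allen's thirteen relations.

```python
# Classify the temporal relation of interval a to interval b,
# where each interval is a (start, end) pair with start < end.

def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"           # a ends strictly before b starts
    if a2 == b1:
        return "meets"            # a ends exactly when b starts
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 >= b1 and a2 <= b2:
        return "during"           # a lies inside b (possibly sharing an endpoint)
    return "overlaps-or-other"    # remaining Allen relations, not distinguished here

print(allen_relation((1, 3), (4, 6)))  # → before
print(allen_relation((1, 3), (3, 6)))  # → meets
print(allen_relation((2, 4), (1, 6)))  # → during
```

In a graph DBMS the same checks would typically run as query predicates over the start/end attributes of event nodes.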
Keywords: data model, data analysis, data presentation, database

4. Requirements for the software implementation of the Industrie 4.0 system for creating network enterprises [No. 4, 2022]
Authors: Telnov, Yu.F. (Telnov.YUF@rea.ru) - Plekhanov Russian University of Economics (Professor, Head of the Department), Ph.D; Kazakov, V.A. (Kazakov.VA@rea.ru) - Plekhanov Russian University of Economics (Associate Professor), Ph.D; Danilov, A.V. (Danilov.AV@rea.ru) - Plekhanov Russian University of Economics (Senior Lecturer); A.A. Denisov (aadenisov88@gmail.com) - K.G. Razumovsky Moscow State University of Technology and Management (First Cossack University) (Postgraduate Student);
Abstract: The digital transformation of enterprises based on digital technologies leads to a radical change in business models and to the formation of new organizational and production structures, including network enterprises. Network enterprises are dynamically formed production structures in a business ecosystem that unite many enterprises participating in joint economic activity; in modern conditions they are based on digital platforms. Nowadays, an approach to building digital platforms is actively developing within the framework of Industrie 4.0. The subject of the study is determining the requirements for the software implementation of Industrie 4.0 systems (i4.0 systems) based on digital platforms using multi-agent technologies and an ontological approach. As a research method, the authors propose decomposing the i4.0 system into platform software components and software administrative shells related to managing the resources (assets) of a network enterprise – the i4.0 components. The Reference Architectural Model Industrie 4.0 (RAMI 4.0) is chosen as the basis for building the architecture of the i4.0 system. An ontological approach is proposed for implementing the multi-agent interaction of i4.0 components within the framework of building the value-added chain of a network enterprise. The main results of the study are the formulated requirements for the software implementation of the i4.0 system with regard to forming the i4.0 platform software components and the software administrative shells of the i4.0 components at the levels of the RAMI architecture. As a software mechanism for the interaction of i4.0 components, the paper proposes an algorithm for i4.0-component interaction using a domain ontology.
The software implementation of the formulated requirements for constructing the i4.0 system architecture will increase the flexibility and efficiency of creating and operating value-added chains of network enterprises in the dynamically developing business ecosystem of industrial manufacturing of products and services.
Keywords: domain ontology, requirements for the program implementation, rami architecture, administrative shell (as), industrie 4.0 component (i4.0-component), industrie 4.0 platform (i4.0-platform), industrie 4.0 system (i4.0-system), digital platform, business ecosystem

5. DIY DDoS Protection: operational development and implementation of the service in the National Research Computer Network of Russia [No. 4, 2022]
Authors: Abramov A.G. (abramov@niks.su) - St. Petersburg branch of Joint Supercomputer Center of the Russian Academy of Sciences (Associate Professor, Leading Researcher), Ph.D;
Abstract: Nowadays, protecting the digital infrastructures of organizations and end users from cybersecurity threats that are constantly growing in number and sophistication is receiving increased attention at various levels. An extremely important task is to ensure reliable and effective protection of the critical infrastructures of large telecommunications companies. One of the most common types of cybersecurity threats is the Distributed Denial of Service (DDoS) attack, performed at different levels of network interaction, from infrastructure to applications, and aimed at different resources and services. This paper provides an overview of modern methods and technologies for preventing and mitigating DDoS attacks, with an emphasis on protecting the networks of telecom operators and their users. It discusses methods such as BGP Blackhole and BGP FlowSpec, based on dynamic routing mechanisms and protocols, as well as methods based on intelligent analysis and filtering of network traffic by specialized scrubbing systems. The main technical requirements, quality criteria and some quantitative characteristics of DDoS protection solutions are outlined, with examples of commercial and freely distributed systems. A separate section of the paper is devoted to a detailed description of a relatively simple service for protecting against DDoS attacks. The service was developed and put into operation by specialists of the National Research Computer Network of Russia (NIKS); it is based on real-time processing and analysis of NetFlow data collected from border routers and on the BGP FlowSpec protocol. There is also general information about the hardware and software complex, the architecture and main components of the service, and the software packages and technologies involved, along with some statistics on the results of detecting DDoS attacks in the NIKS network infrastructure.
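The detection idea behind such a service, aggregating NetFlow-like records per destination and flagging destinations whose traffic rate exceeds a threshold, can be sketched as below. This is a minimal illustration, not the NIKS implementation; the record fields and the threshold value are assumptions, and a flagged destination would in practice be translated into a BGP FlowSpec filtering rule rather than just reported.

```python
from collections import defaultdict

def detect_targets(flows, pps_threshold):
    """flows: iterable of (dst_ip, packets, duration_s) records.
    Returns destinations whose aggregate packet rate exceeds the threshold."""
    packets = defaultdict(int)
    seconds = defaultdict(float)
    for dst, pkts, dur in flows:
        packets[dst] += pkts
        seconds[dst] += dur
    return sorted(dst for dst in packets
                  if packets[dst] / max(seconds[dst], 1.0) > pps_threshold)

flows = [
    ("10.0.0.5", 900_000, 10.0),   # ~90 kpps towards one host (attack-like)
    ("10.0.0.7", 5_000, 10.0),     # normal traffic
]
print(detect_targets(flows, pps_threshold=50_000))  # → ['10.0.0.5']
```

Real detectors use sliding windows and per-protocol baselines; the fixed threshold here only illustrates the aggregation step.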
Keywords: national research computer network, bgp flowspec, netflow, network traffic analysis, protection against network attacks, ddos attack, cybersecurity threats, infosecurity, niks, elk stack

6. A GraphHunter software tool for mapping parallel programs to a supercomputer system structure [No. 4, 2022]
Authors: Baranov, A.V. (antbar@mail.ru, abaranov@jscc.ru) - Joint Supercomputer Center of RAS (Associate Professor, Leading Researcher), Ph.D; Kiselev E.A. (kiselev@jscc.ru) - Joint Supercomputer Center of the RAS – branch of Federal State Institution "Scientific Research Institute for System Analysis of the RAS" (Senior Researcher); P.N. Telegin (pnt@jscc.ru) - Joint Supercomputer Center of RAS (Leading Researcher), Ph.D; Sorokin A.A. (rexantmaster@yandex.ru) - MIREA – Russian Technological University (Student);
Abstract: One of the well-known problems in high-performance computing is the optimal mapping of parallel program processes to supercomputer system nodes. Solving this problem minimizes the overhead of information exchanges between the processes of a parallel program and thus increases the performance of computations. When solving a mapping problem, both the supercomputer system and the parallel program are represented as graphs. The paper addresses the mapping problem in relation to a shared-use supercomputer system that handles a queue of parallel programs. After passing through the queue, a previously unknown subset of supercomputer nodes is allocated to the parallel program. In this case, it is necessary to construct a graph of the selected subset of nodes and find a suitable mapping of the parallel program onto this graph in a reasonable time. It is suggested to run the parallel mapping algorithms on the very supercomputer nodes allocated to the parallel program. To study the properties of mapping algorithms, the GraphHunter software tool was developed. This tool makes it possible to conduct experiments with three parallel algorithms: simulated annealing, a genetic algorithm, and their combination. This article discusses the structure of the GraphHunter software tool and presents the results of GraphHunter runs on the MVS-10P OP supercomputer at the Joint Supercomputer Center of the Russian Academy of Sciences.
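The kind of search that simulated annealing performs for this problem can be sketched as follows: minimize total communication cost (traffic volume times inter-node distance) over permutations of processes onto nodes. The graphs, cost model and cooling schedule below are simplified illustrations, not GraphHunter's actual implementation.

```python
import math
import random

def cost(mapping, comm, dist):
    """Total communication cost: traffic volume times inter-node distance."""
    return sum(vol * dist[mapping[p]][mapping[q]]
               for (p, q), vol in comm.items())

def anneal(comm, dist, n_procs, t0=10.0, cooling=0.95, steps=2000, seed=1):
    """Simulated annealing over permutations: swap two processes per step."""
    rng = random.Random(seed)
    mapping = list(range(n_procs))      # process i runs on node mapping[i]
    cur = cost(mapping, comm, dist)
    best_map, best = mapping[:], cur
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n_procs), 2)
        mapping[i], mapping[j] = mapping[j], mapping[i]
        new = cost(mapping, comm, dist)
        if new <= cur or rng.random() < math.exp(-(new - cur) / t):
            cur = new                   # accept the (possibly worse) move
            if cur < best:
                best_map, best = mapping[:], cur
        else:
            mapping[i], mapping[j] = mapping[j], mapping[i]  # undo the swap
        t = max(t * cooling, 1e-9)      # geometric cooling schedule
    return best_map, best

# 4 processes on 4 nodes arranged in a line; distance = hop count.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
comm = {(0, 1): 10, (2, 3): 10, (0, 3): 1}   # traffic volumes between processes
best_map, best = anneal(comm, dist, n_procs=4)
print(best, best_map)
```

A genetic algorithm would explore the same permutation space with crossover and mutation instead of single swaps, which is why the two combine naturally.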
Keywords: high-performance computing, parallel mapping algorithm, simulated annealing, genetic algorithm, job scheduling

7. Development of trusted microprocessor software models and a microprocessor system [No. 4, 2022]
Authors: S.I. Aryashev (aserg@cs.niisi.ras.ru) - Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (SRISA RAS) (Branch Manager), Ph.D; Grevtsev N.A. (ngrevcev@cs.niisi.ras.ru) - Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (SRISA RAS) (Postgraduate Student, Research Associate); P.S. Zubkovsky (zubkovsky@niisi.ras.ru) - Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (SRISA RAS) (Head of Department); Chibisov P.A. (chibisov@cs.niisi.ras.ru) - Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (SRISA RAS) (Senior Researcher), Ph.D; Kuleshov A.S. (rndfax@cs.niisi.ras.ru) - SRISA RAS; Petrov K.A. (petrovk@cs.niisi.ras.ru) - Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (SRISA RAS) (Deputy Head of Department OAVM), Ph.D;
Abstract: When developing a trusted microprocessor for digital control systems with a critical mission (SCM), it is necessary to develop a software model (emulator) of the trusted microprocessor and a system emulator based on it in order to validate the architectural model and to study the possibilities of countering threats. Instruction-based and behavioral microprocessor emulators are tools for modeling the microprocessor architecture and the system as a whole, and they play a fundamental role in various areas of microarchitecture design. Emulators are used as a reference model for functional verification, for assessing the contribution of new microarchitecture-level ideas to overall system performance, and for understanding the behavior of user programs and identifying hardware elements that limit system effectiveness. The paper presents the criteria necessary for creating trusted systems, a developed instruction-based emulator of the trusted microprocessor microarchitecture (vmips), and a behavioral emulator of the microprocessor system architecture (QEMU) based on the trusted microprocessor. There is a demonstration of software that tests the functions of the emulator to ensure that the system's trusted execution environment criteria are fulfilled by countering threats from the FSTEC information security threat data bank. The paper also describes launching a demonstration task in a virtual environment on a virtual programmable logic controller (PLC) with a trusted microprocessor, using SCADA for monitoring and control. Using a virtual PLC in a virtual environment allows testing and debugging, conducting security studies, building models of existing and future nodes, working out various scenarios, and getting complete information about the work progress.
Preliminary testing in a virtual environment also reduces commissioning risks and allows various threat models and countermeasures to be worked out before the microprocessor itself is developed. Based on the results of the work performed, the development of a trusted microprocessor with a MIPS-like architecture for digital SCM control systems is planned.
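An instruction-based emulator of the kind mentioned above models the architecture one instruction at a time with a fetch-decode-execute loop. The toy interpreter below illustrates that principle for a hypothetical three-instruction MIPS-like subset (with register 0 hardwired to zero, as in MIPS); it is unrelated to the authors' vmips instruction set.

```python
def run(program, steps=1000):
    """program: list of instruction tuples; registers r0..r7, r0 always zero."""
    regs, pc = [0] * 8, 0
    for _ in range(steps):              # step limit guards against infinite loops
        if pc >= len(program):
            break
        op, *args = program[pc]
        if op == "addi":                # addi rd, rs, imm : rd = rs + imm
            rd, rs, imm = args
            if rd != 0:
                regs[rd] = regs[rs] + imm
        elif op == "add":               # add rd, rs, rt : rd = rs + rt
            rd, rs, rt = args
            if rd != 0:
                regs[rd] = regs[rs] + regs[rt]
        elif op == "bne":               # bne rs, rt, off : relative branch if rs != rt
            rs, rt, off = args
            if regs[rs] != regs[rt]:
                pc += off
                continue
        pc += 1
    return regs

# Sum 5 + 4 + ... + 1 into r2, using r1 as a loop counter:
prog = [
    ("addi", 1, 0, 5),    # r1 = 5
    ("add", 2, 2, 1),     # r2 += r1
    ("addi", 1, 1, -1),   # r1 -= 1
    ("bne", 1, 0, -2),    # loop back while r1 != 0
]
print(run(prog)[2])  # → 15
```

A behavioral emulator such as QEMU works at a coarser level (translated blocks, device models), but the reference semantics it must preserve are exactly what a loop like this defines.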
Keywords: qemu, mips, virtualization, programmable logic controllers, behavioral emulator, command emulator

8. A software platform demonstrator for configuring ANFIS neural network hyperparameters in fuzzy systems [No. 4, 2022]
Authors: Ivanov V.K. (mtivk@mail.ru) - Tver State Technical University, Ph.D; Palyukh B.V. (pboris@tstu.tver.ru) - Tver State Technical University, Ph.D;
Abstract: This article describes a research demonstrator for the experimental verification and evaluation of fuzzy algorithms and neural networks in an expert system for complex multi-stage technological processes. The purpose of developing the demonstrator is to create a scientific and technical foundation for transferring ready-to-implement solutions to the next project stages. The demonstrator allows assessing the readiness level of the components being developed, conducting research tests, and checking the operability and efficiency of the proposed software implementations at various parameter values and their combinations. Diagnosing the state of a complex multi-stage technological process involves joint primary data processing to obtain, under conditions of uncertainty, the probabilistic characteristics of abnormal critical events, or incidents. The authors propose using a fuzzy neural network trained with data generated by belief functions. This approach makes it possible to significantly speed up calculations and to minimize the resource base. The article focuses on managing the neural network models and training datasets, neural network training and quality control, and technological process diagnostics in various modes. The configurable hyperparameters of the neural network are described in detail, and examples of the diagnostic procedures in various modes are given. It is shown that, with the software diagnostic system functioning in conditions close to real ones, the initial assumptions concerning the reduction in time for detecting and predicting incidents can be verified and experimentally substantiated. In addition, the sets of technological chains that cause incidents can be determined more accurately.
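The keywords mention TSK rules and membership functions; the inference step an ANFIS network implements can be illustrated with a minimal zero-order TSK (Takagi-Sugeno-Kang) example. The membership parameters and rules below are illustrative assumptions, not values from the article; in ANFIS they would be the trainable hyperparameters being configured.

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership function with center c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def tsk(x, rules):
    """Zero-order TSK inference. rules: list of ((center, sigma), consequent).
    Output is the firing-strength-weighted average of rule consequents."""
    w = [gauss(x, c, s) for (c, s), _ in rules]
    return sum(wi * out for wi, (_, out) in zip(w, rules)) / sum(w)

rules = [((0.0, 1.0), 10.0),   # "x is LOW  -> output 10"
         ((5.0, 1.0), 50.0)]   # "x is HIGH -> output 50"
print(round(tsk(0.0, rules), 2))   # dominated by the LOW rule, ≈ 10
print(round(tsk(5.0, rules), 2))   # dominated by the HIGH rule, ≈ 50
```

Training an ANFIS network amounts to fitting the (center, sigma) and consequent parameters of exactly such rules from data.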
Keywords: demonstrator, anfis, fuzzy set function, membership function, technological chain, evidence theory, production rule, fuzzy neural network, fuzzy logic, multistage production process, incident, diagnostic system, tsk

9. Evaluating the capabilities of classical computers in implementing quantum algorithm simulators [No. 4, 2022]
Authors: Zrelov P.V. (zrelov@jinr.ru) - Joint Institute for Nuclear Research, Meshcheryakov Laboratory of Information Technologies, Dubna State University, Institute of the System Analysis and Management, Plekhanov Russian University of Economics (Head of Department), Ph.D; Ivantsova O.V. (ivancova@jinr.ru) - Joint Institute for Nuclear Research, Meshcheryakov Laboratory of Information Technologies, Dubna State University, Institute of the System Analysis and Management (Research Associate); Korenkov V.V. (korenkov@jinr.ru) - Joint Institute for Nuclear Research, Meshcheryakov Laboratory of Information Technologies, Dubna State University, Institute of the System Analysis and Management, Plekhanov Russian University of Economics (Director of the Laboratory), Ph.D; N.V. Ryabov (ryabov_nv95@mail.ru) - Dubna State University – Institute of System Analysis and Control; Ulyanov, S.V. (ulyanovsv46_46@mail.ru) - Dubna State University – Institute of System Analysis and Control, Dubna, Joint Institute for Nuclear Research – Laboratory of Information Technology (Professor), Ph.D;
Abstract: Modern quantum devices have severe limitations on the number of qubits, which restrict the width and depth of quantum circuits, and suffer from strong noise processes that make it difficult to obtain correct results. It is also necessary to design quantum circuits for a particular quantum device, taking into account the coupling between qubits, and to apply quantum error mitigation. These problems can be avoided by using classical computers to simulate quantum computation. Classical computers are used both for quick testing of hypotheses before running on quantum devices and for solving real-world problems. The paper describes the process of designing and efficiently modeling quantum algorithms and approaches to developing quantum search algorithms, in particular Grover's algorithm. The Qiskit and QuEST quantum simulators were used to study the efficiency of using a supercomputer to simulate quantum circuits on CPUs and GPUs, taking a quantum test circuit and Grover's algorithm as examples. The paper also describes the quantum phase estimation algorithm, a basic building block in some quantum algorithms of computational physics and chemistry. This algorithm is simulated using NVIDIA's latest cuQuantum simulator, which allows efficient simulation of quantum circuits on multiple GPUs, significantly increasing speed and allowing the quantum phase estimation algorithm to be executed with sufficient computational accuracy. The paper also notes the difficulties of simulating different algorithms with a large number of qubits or a large circuit depth.
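What a classical statevector simulator does for Grover's algorithm can be shown in a few lines of pure Python for n = 3 qubits (8 basis states) with a single marked state. This is a pedagogical sketch, not Qiskit, QuEST or cuQuantum code; those simulators apply the same linear algebra at far larger scale.

```python
import math

def grover(n, marked, iterations):
    """Statevector simulation of Grover search over N = 2**n states."""
    N = 2 ** n
    amp = [1 / math.sqrt(N)] * N          # uniform superposition (H on all qubits)
    for _ in range(iterations):
        amp[marked] = -amp[marked]        # oracle: phase flip on the marked state
        mean = sum(amp) / N               # diffusion: inversion about the mean
        amp = [2 * mean - a for a in amp]
    return [a * a for a in amp]           # measurement probabilities

# Optimal iteration count is about floor(pi/4 * sqrt(N)) = 2 for N = 8.
probs = grover(3, marked=5, iterations=2)
print(round(probs[5], 3))  # → 0.945 (probability of measuring the marked state)
```

The exponential cost is visible directly: the `amp` list has 2**n entries, which is exactly the memory wall the paper's supercomputer experiments probe.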
Keywords: supercomputer, quantum simulator, quantum phase estimation, grover’s algorithm, quantum computing

10. Modelling a supercomputer job bundling system based on the Alea simulator [No. 4, 2022]
Authors: Baranov, A.V. (antbar@mail.ru, abaranov@jscc.ru) - Joint Supercomputer Center of RAS (Associate Professor, Leading Researcher), Ph.D; D.S. Lyakhovets (anetto@inbox.ru) - Research & Development Institute Kvant (Research Associate);
Abstract: Modern supercomputer job management systems (JMS) are complex software using many different scheduling algorithms with various parameters. We cannot predict or calculate the impact of changing these parameters on JMS quality metrics; for this reason, researchers use simulation modelling to determine optimal JMS parameters. This article discusses the problem of developing a supercomputer job management system model based on the well-known Alea simulator. The object of study is our scheduling algorithm used for developing the supercomputer job bundling system. The algorithm bundles jobs with a long initialization time into groups (packets) according to job type. Initialization is performed once for each group, and then the jobs of the group are executed one after another. By using a bundling system, it is possible to reduce the initialization overhead and increase job scheduling efficiency. We implemented the bundling algorithm as part of the Alea simulator and performed a comparative simulation of the implemented algorithm for various workloads. The comparison involved the FCFS and Backfill scheduling algorithms built into Alea. Several workloads with different intensities were generated for the simulation, and the minimum job initialization share thresholds for these workloads were determined based on the simulation results. Starting from these thresholds, the bundling system noticeably improves scheduling efficiency compared to the FCFS and Backfill algorithms. The study results showed that the developed simulation model can be used as a software tool for a comparative analysis of various supercomputer job scheduling algorithms.
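The effect the bundling system exploits can be shown with a back-of-the-envelope model: jobs of the same type share one initialization instead of paying it per job. The workload and the initialization time below are illustrative numbers, not data from the article's Alea experiments.

```python
def total_time(jobs, init, bundled):
    """jobs: list of (job_type, exec_time) pairs; init: initialization time.
    Without bundling, init is paid once per job; with bundling, once per
    distinct job type (jobs of a group run back to back)."""
    exec_total = sum(t for _, t in jobs)
    inits = len({jt for jt, _ in jobs}) if bundled else len(jobs)
    return exec_total + inits * init

workload = [("A", 10), ("A", 12), ("A", 9), ("B", 20)]
print(total_time(workload, init=30, bundled=False))  # → 171
print(total_time(workload, init=30, bundled=True))   # → 111
```

The gain grows with the share of initialization in total job time, which is why the simulations look for the minimum initialization-share threshold at which bundling beats FCFS and Backfill.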
Keywords: high-performance computing, job management system, simulation, job bundling, alea
