ISSN 0236-235X (P)
ISSN 2311-2735 (E)


Journal articles: issue No. 4, 2019.


21. Approaches to the development and debugging of QEMU simulators using the high-level architecture description language PPDL [No. 4, 2019]
Authors: A.Yu. Drozdov, Yu.N. Fonin, M.N. Perov, A.S. Gerasimov
Visitors: 3537
The paper describes an approach to the development and debugging of simulators based on QEMU (Quick EMUlator) binary translation. The approach relies on PPDL (Processor and Periphery Description Language), a high-level architecture description language. Simulators based on binary translation run several times faster than instruction interpreters while providing a wide range of possibilities for software debugging and for dynamic analysis of applications. Thus, binary translation simulators, and QEMU-based ones in particular, are of high interest both to system-level SoC (System on Chip) developers and to embedded software developers. However, developing a binary translator is a more complicated and more time-consuming task than developing an instruction interpreter. Developing a QEMU simulator involves implementing the instructions of the simulated processor as sequences of so-called TCG micro-operations. TCG micro-operations are not executed directly; rather, they are translated at run time into instructions of the host machine. Therefore, the TCG description of instructions cannot be debugged with standard debuggers. PPDL simplifies QEMU simulator development: from a single PPDL description of a processor, the PPDL compiler generates two simulators, an interpreter and a QEMU component kit. The interpreter is generated as C++ source code, so the PPDL description can be debugged with any debugger, such as gdb or Microsoft Visual Studio. Then, from the same description, the PPDL compiler generates the QEMU description of the processor, representing instructions as sequences of TCG micro-operations. With PPDL, developers avoid debugging the TCG processor description and therefore accelerate the development of a QEMU-based simulator.
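The contrast the abstract draws, interpretation versus translation into micro-operation sequences, can be illustrated with a toy sketch. The two-instruction ISA and all names below are invented for illustration and are only loosely analogous to QEMU's TCG; real TCG ops are translated to host machine code, not Python closures.

```python
# Toy contrast of the two simulator styles: a direct interpreter that
# decodes every instruction on every run, vs. a one-time translation of
# the guest program into a cached list of host-level "micro-operations"
# (here: Python closures) that is then executed without decoding.

def interpret(program, regs):
    """Decode and execute each guest instruction on every pass."""
    for op, dst, src in program:
        if op == "movi":
            regs[dst] = src
        elif op == "addi":
            regs[dst] += src
    return regs

def translate(program):
    """Translate the program once into micro-ops; repeated execution of
    the returned block skips decoding, like a translated basic block."""
    micro_ops = []
    for op, dst, src in program:
        if op == "movi":
            micro_ops.append(lambda r, d=dst, s=src: r.__setitem__(d, s))
        elif op == "addi":
            micro_ops.append(lambda r, d=dst, s=src: r.__setitem__(d, r[d] + s))
    def run_block(regs):
        for mop in micro_ops:
            mop(regs)
        return regs
    return run_block

prog = [("movi", "r0", 5), ("addi", "r0", 3)]
print(interpret(prog, {"r0": 0}))      # {'r0': 8}
block = translate(prog)                # translate once...
print(block({"r0": 0}))                # ...execute many times: {'r0': 8}
```

The sketch also shows why TCG descriptions are hard to debug directly: a source-level debugger steps through the translator, not through the translated micro-op sequence, which is what motivates generating a debuggable C++ interpreter from the same PPDL description.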

22. The development of a fast software implementation of a specialized neural network architecture with sparse connections [No. 4, 2019]
Author: Yu.S. Fedorenko
Visitors: 2409
The paper is devoted to the development of a fast software implementation of a specialized neural network architecture. Feature engineering is one of the most important stages in solving machine learning tasks. Nowadays, algorithms for handcrafted feature selection are losing popularity, giving way to deep neural networks. However, the application of deep models is limited in online learning tasks, as they are not able to learn in real time; they are also difficult to use in high-load systems due to their significant computational complexity. In one of his previous articles, the author proposed a neural network architecture with automatic feature selection and the ability to train in real time. However, the specific sparsity of connections in this architecture complicates its implementation on top of classic deep learning frameworks, so the author decided to write a custom implementation of the proposed architecture. This paper considers the data structures and algorithms developed for this software implementation. It describes sample processing in detail, from the point of view of the program system, during model prediction and training. For a more complete description of the implementation details, UML class, sequence, and activity diagrams are provided. The performance of the developed implementation is compared with implementations of the same architecture based on deep learning frameworks. The analysis has shown that the developed software works an order of magnitude faster than the library-based implementations. The speedup comes from the fact that the developed implementation is optimized for one specific architecture, while the frameworks are designed to work with a wide class of neural networks. In addition, the benchmarks have shown that the developed implementation of the proposed neural network works only 20-30 percent slower than a simple logistic regression model with handcrafted features. Thus, it can be used in high-load systems.
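The kind of optimization the abstract attributes to the custom implementation can be sketched as follows. This is not the paper's actual code: the per-neuron index-list layout, all names, and the example values are assumptions; the point is only that a sparse layer can touch just the existing weights instead of a full dense matrix.

```python
import numpy as np

def sparse_forward(x, in_idx, weights, bias):
    """Forward pass of one sparsely connected layer.
    in_idx[j] lists the input indices feeding output neuron j;
    weights[j] holds the matching weights, so only existing
    connections are ever multiplied (unlike a dense matmul)."""
    out = np.empty(len(in_idx))
    for j, (idx, w) in enumerate(zip(in_idx, weights)):
        out[j] = x[idx] @ w + bias[j]
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
# Neuron 0 sees inputs 0 and 2; neuron 1 sees inputs 1 and 3.
in_idx  = [np.array([0, 2]), np.array([1, 3])]
weights = [np.array([0.5, 0.5]), np.array([1.0, -1.0])]
bias    = np.array([0.0, 1.0])
print(sparse_forward(x, in_idx, weights, bias))  # [ 2. -1.]
```

A dense framework would store and multiply the zero weights as well, which is one plausible source of the order-of-magnitude gap the benchmarks report.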

23. Development of a spiking neural network with the possibility of high-speed training to neutralize DDoS attacks [No. 4, 2019]
Authors: E.V. Palchevsky, O.I. Khristodulo
Visitors: 4217
Effective data accessibility is one of the key challenges in information security, and DDoS attacks often violate information availability. The imperfection of modern methods of protection against attacks by external unauthorized traffic means that many companies with Internet access face the inaccessibility of their own services that provide various services or information, which results in financial losses from equipment downtime. To solve this problem, the authors have developed a spiking neural network for protection against attacks by external unauthorized traffic. Its main advantages are a high self-learning speed and a quick response to DDoS attacks, including previously unknown ones. A new method of spiking neural network self-training is based on the uniform processing of spikes by each neuron. Because of this, the neural network is trained in the shortest possible time and therefore quickly and efficiently filters attacks with external unauthorized traffic. The paper also compares the developed spiking neural network with similar solutions for protection against DDoS attacks; the comparison reveals that the developed network is better optimized for high loads and is able to detect and neutralize DDoS attacks as quickly as possible. The developed spiking neural network was tested both in idle conditions and while protecting against DDoS attacks; load values were obtained on the resources of a computing cluster. Long-term testing of the spiking neural network shows a rather low load on the CPU, RAM, and solid-state drive during massive DDoS attacks. Thus, the optimal load not only increases the availability of each physical server, but also makes it possible to run resource-intensive computing processes simultaneously without any disruption to the functioning of the working environment.
Testing was carried out on the servers of a computing cluster, where the spiking neural network showed stable operation and effectively protected against DDoS attacks.

24. Implementing an expert system to evaluate the innovativeness of technical solutions [No. 4, 2019]
Authors: Ivanov V.K., Obraztsov I.V., Palyukh B.V.
Visitors: 2775
The paper presents a possible solution to the problem of algorithmization for quantifying the innovativeness indicators of technical products, inventions, and technologies. The concepts of technological novelty, relevance, and implementability as components of a product innovation criterion are introduced. The authors propose a model and an algorithm to calculate each of these innovativeness indicators under conditions of incomplete, inaccurate, and sometimes inconsistent initial information. The paper describes the developed specialized software, a promising methodological tool for using interval estimations in accordance with the theory of evidence. These estimations are used in the analysis of complex multicomponent systems and aggregations of large volumes of fuzzy and incomplete data of various structures. The composition and structure of a multi-agent expert system are presented. The purpose of the system is to process groups of measurement results and to estimate the values of objects' innovativeness indicators. The paper defines the active elements of the system, their functionality, roles, interaction order, and input and output interfaces, as well as the general software operation algorithm. It describes the implementation of the software modules and gives an example of solving a specific problem of determining the innovation level of technical products. The developed approach, models, methodology, and software can be used to implement storage technology for the characteristics of objects with significant innovative potential. Formalization of the task's initial data significantly increases the possibility of adapting the proposed methods to various subject areas. It becomes possible to process data of various natures, obtained from expert surveys, from a search system, or even from a measuring device, which increases the practical significance of the presented research.
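The "theory of evidence" the abstract invokes is Dempster-Shafer theory, whose core operation is combining mass functions from independent sources. The sketch below shows the standard Dempster combination rule, not the paper's own algorithm; the two expert mass functions over {I (innovative), N (not innovative)} are invented example data.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination: m(A) is proportional to the sum of
    m1(B)*m2(C) over all pairs with B ∩ C = A, renormalized by 1 - K,
    where K is the total conflicting mass (pairs with B ∩ C = ∅).
    Focal elements are frozensets, masses are floats summing to 1."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two hypothetical expert assessments of one technical solution:
m1 = {frozenset("I"): 0.6, frozenset("IN"): 0.4}                         # expert 1
m2 = {frozenset("I"): 0.5, frozenset("N"): 0.2, frozenset("IN"): 0.3}    # expert 2
fused = combine(m1, m2)
print(fused)  # mass on {'I'} = 0.68/0.88 ≈ 0.773, conflict K = 0.12
```

The belief/plausibility pair derived from such a fused mass function gives exactly the kind of interval estimate of an innovativeness indicator that the abstract mentions.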

25. Industrial technology state management based on a risk criterion [No. 4, 2019]
Authors: S.R. Bakasov, G.N. Sanaeva, V.N. Bogatikov
Visitors: 2056
The paper studies the conceptual formulation of the industrial technology state management problem. It considers managing a potentially hazardous technology for the selective treatment of tail gases from the production of non-concentrated nitric acid; this is a practical application of the state management idea, in which state management is closely connected with industrial systems safety management. One of the problems in the synthesis of industrial technology safety management systems is the presence of uncertainty, both in the knowledge of the physicochemical processes and in the uncertainty associated with the influence of random disturbances. This motivates the development of new methods for the synthesis of technological safety management systems, as well as the improvement of existing ones. As a consequence, methods implementing goal-setting mechanisms and revising control quality criteria become a promising approach for dynamic processes of this kind, which occur in poorly structured and poorly formalized environments; these methods rest upon fundamental background knowledge. Various types of defects are reflected in the state variables of technological processes. Violations can be caused by defects in control systems, in process equipment, and in the technological process itself. All kinds of damage in the technological system (non-compliance of source materials with the requirements, non-compliance with the requirements of regulatory and technical documents, and the human factor) lead to similar results. This indicates both the complexity of the diagnosing procedure and the complexity of forming criteria to assess states. At present, from a management point of view, a technological safety system is a multi-level, hierarchically organized technological system whose main goals are to detect malfunctions in a timely manner and to take measures to eliminate their root causes.
The paper considers the multi-level organization of the technological safety system. As a practical application of the proposed approach, it considers the multi-level organization of the technological safety system for the process of selective treatment of tail gases from the production of weak nitric acid. The authors propose the main technology management criterion (a risk criterion for conducting the technological process) and an impulse model of the criterion. Management is based on predictive control. The developed system made it possible not only to improve economic indicators, but also to reduce air pollution.

26. Evolution and features of hyperconverged infrastructures [No. 4, 2019]
Author: Yu.M. Lisetskiy
Visitors: 4204
The paper considers hyperconverged infrastructures, which are widely used by companies to build a flexible cloud-level IT infrastructure. Such an infrastructure uses only private data centers or clouds and does not use public resources. The paper describes the evolution of hyperconverged infrastructures, their features, and their strong points. The emergence of hyperconverged infrastructures is a logical step forward in the development of IT infrastructures and the next level of converged infrastructures. The concept combines several infrastructure components into a complex that is integrated from the start by means of connecting software, and it develops traditional approaches to building an IT infrastructure. Hyperconverged infrastructures extend the concept of converged infrastructures with modularity: all virtualized computing, network, and storage resources operate autonomously inside separate modules, which are virtualized computing resources. Typically, these modules are grouped to provide fault tolerance, high performance, and flexibility in building resource pools. One essential reason why hyperconverged infrastructures are important is that not all enterprises are ready to migrate their services and applications into a public cloud in order to eliminate the costs of building their own IT infrastructure. However, many of them are interested in taking advantage of cloud technologies in their infrastructures, and hyperconverged infrastructures provide such an opportunity. They are a realistic alternative to leasing cloud services from third-party providers, since hyperconverged infrastructures enable the deployment of private clouds fully under the control of an enterprise. Therefore, hyperconverged infrastructures dominate as a hardware platform for building private clouds and virtualized workplaces and for developing new applications.
