ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)


Articles of journal issue № 4, 2019


11. A method of identifying the technical condition of radio engineering means using artificial neural network technologies [№ 4, 2019]
Authors: R.V. Dopira, A.A. Shvedun, D.V. Yagolnikov, I.E. Yanochkin
Visitors: 5118
Because modern military-grade radio equipment is becoming functionally and technologically more complex, the task of creating systems for functional control and identification of the technical state of radio equipment is increasingly urgent. At present there are no effective, fully automatic systems for identifying the technical state of various types of radio equipment. One way to solve the problem is to create identification systems based on machine learning principles. A distinctive feature of applying trained artificial neural networks to the identification problem is that they build a prototype of the observed situations and generalize over the prevalence and similarity within a variety of same-type radio equipment, while solving the problem efficiently and reliably. The paper presents a method for identifying the technical state of radio equipment using precedent-based (case-based) machine learning of artificial neural networks. It solves the problem of identifying the current class of the radio equipment's technical condition from real-time measurements of the main controlled system parameters. Taking into account the specifics of the problem, the choice of a multilayer feedforward neural network with three hidden layers is substantiated. The number of input-layer neurons is determined by the number of controlled parameters of the main systems of a particular type of radio equipment; the number of output-layer neurons is determined by the number of possible classes of its technical condition. The elementary converters of this network have a sigmoid-type activation function. To train the artificial neural network, the authors used a heuristic modification of the Levenberg-Marquardt algorithm.
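A minimal sketch of the network topology the abstract describes, not the authors' implementation: one input per controlled parameter, three hidden sigmoid layers, one output neuron per technical-condition class. The layer sizes and the argmax readout are illustrative assumptions; the heuristic Levenberg-Marquardt training is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class StateClassifier:
    def __init__(self, n_params, hidden=(32, 24, 16), n_classes=4, rng=None):
        rng = rng or np.random.default_rng(0)
        sizes = (n_params, *hidden, n_classes)
        # Random initial weights; training (e.g., a Levenberg-Marquardt variant)
        # would adjust these to fit labeled measurement data.
        self.layers = [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
                       for a, b in zip(sizes[:-1], sizes[1:])]

    def predict(self, measured_params):
        """Map a vector of controlled parameters to a condition-class index."""
        h = np.asarray(measured_params, dtype=float)
        for w, b in self.layers:
            h = sigmoid(h @ w + b)   # sigmoid-type activation, as in the abstract
        return int(np.argmax(h))     # most likely technical-condition class

# Example: 12 monitored parameters, 4 hypothetical condition classes.
print(StateClassifier(n_params=12).predict(np.random.rand(12)))
```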

12. Approaches to developing and debugging QEMU simulators using the high-level architecture description language PPDL [№ 4, 2019]
Authors: A.Yu. Drozdov, Yu.N. Fonin, M.N. Perov, A.S. Gerasimov
Visitors: 5976
The paper describes an approach to developing and debugging simulators based on QEMU (Quick EMUlator) binary translation. The approach relies on PPDL (Processor and Periphery Description Language), a high-level architecture description language. Simulators based on binary translation run several times faster than instruction interpreters while providing a wide range of possibilities for software debugging as well as for dynamic analysis of applications. Thus, binary translation simulators, and QEMU-based ones in particular, are of high interest both to system-level SoC (System on Chip) developers and to embedded software developers. However, developing binary translators is a more complicated and time-consuming task than developing instruction interpreters. Developing a QEMU simulator assumes implementing the instructions of the simulated processor as sequences of so-called TCG micro-operations. TCG micro-operations are not executed directly; rather, they are binary-translated into instructions of the host machine. Therefore, there is no way to debug the TCG description of instructions using standard debuggers. The PPDL language simplifies QEMU simulator development. From a PPDL description of a processor, the PPDL compiler generates two simulators: an interpreter and a QEMU component kit. The compiler generates the interpreter as C++ source code, so the PPDL description can be debugged with any debugger, such as gdb or Microsoft Visual Studio. Then, from the same description, the PPDL compiler generates the QEMU description of the processor, representing instructions as sequences of TCG micro-operations. Due to PPDL, developers can avoid debugging the TCG processor description and thus accelerate the development of a QEMU-based simulator.
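A purely conceptual sketch of the "one description, two back ends" idea the abstract attributes to PPDL: a single high-level instruction specification is rendered both as plain interpreter code (debuggable with gdb or Visual Studio) and as a sequence of micro-operations for binary translation. The spec format and the emitted strings are invented placeholders, not PPDL syntax or the real QEMU TCG API.

```python
# Hypothetical one-instruction specification.
ADD_SPEC = {"name": "add", "operands": ("rd", "rs1", "rs2"), "op": "+"}

def emit_interpreter(spec):
    """Render the spec as directly executable (and debuggable) interpreter code."""
    rd, rs1, rs2 = spec["operands"]
    return (f"void interp_{spec['name']}(CPUState *s, int {rd}, int {rs1}, int {rs2}) {{\n"
            f"    s->regs[{rd}] = s->regs[{rs1}] {spec['op']} s->regs[{rs2}];\n"
            f"}}\n")

def emit_micro_ops(spec):
    """Render the same spec as a micro-operation sequence for binary translation."""
    rd, rs1, rs2 = spec["operands"]
    # Placeholder micro-op names; a real generator would emit TCG calls instead.
    return [f"load_reg t0, {rs1}", f"load_reg t1, {rs2}",
            f"{spec['name']} t0, t0, t1", f"store_reg {rd}, t0"]

print(emit_interpreter(ADD_SPEC))
print("\n".join(emit_micro_ops(ADD_SPEC)))
```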

13. Smart data collection from distributed data sources [№ 4, 2019]
Author: M.S. Efimova
Visitors: 6569
The paper describes collecting and analyzing data from distributed data sources using the example of analyzing heterogeneous distributed financial information, and it analyzes and compares existing approaches to information collection and analysis. Most existing approaches to this problem require all data to be gathered in a single repository before analysis can be performed. However, such methods imply a delay between the moment the data is generated and the moment the analysis methods are applied to it, due to the need to transfer the data from the source to the storage location. This significantly reduces decision-making efficiency and increases network traffic. In addition, collecting data from all sources can lead to significant costs if access to some of the sources is not free or is limited by a tariff plan. The considered approaches include data warehouses, ETL (extraction, transformation and loading) tools, lambda architectures, cloud computing, fog computing, and distributed data analysis based on the actor model. It is concluded that these approaches do not take into account the cost and priorities of data sources and do not allow accessing them dynamically; therefore, they do not meet all the requirements. The paper proposes and describes a method of smart information collection with dynamic access to data sources depending on the current need, cost and source priority. The proposed method reduces network traffic, speeds up data analysis and cuts the costs associated with accessing data sources.
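A minimal sketch, under assumptions not stated in the abstract, of "smart" source selection: instead of copying everything into one repository, only the sources that can satisfy the current request are queried, highest-priority and cheapest first, until the required fields are covered. The field names, costs and priorities below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    fields: set      # what this source can provide
    cost: float      # access cost (e.g., per request)
    priority: int    # smaller value = higher priority

def select_sources(sources, needed):
    """Greedily pick sources that cover the needed fields at low priority/cost."""
    chosen, remaining = [], set(needed)
    for src in sorted(sources, key=lambda s: (s.priority, s.cost)):
        if remaining & src.fields:
            chosen.append(src)
            remaining -= src.fields
        if not remaining:
            break
    return chosen, remaining   # any leftover fields cannot be covered

sources = [
    Source("exchange_feed", {"price", "volume"}, cost=0.0, priority=1),
    Source("paid_analytics", {"rating", "forecast"}, cost=5.0, priority=2),
    Source("public_registry", {"rating"}, cost=0.0, priority=3),
]
print(select_sources(sources, needed={"price", "rating"}))
```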

14. Implementing an expert system to evaluate the innovativeness of technical solutions [№ 4, 2019]
Authors: V.K. Ivanov, I.V. Obraztsov, B.V. Palyukh
Visitors: 4713
The paper presents a possible solution to the problem of algorithmizing the quantification of innovativeness indicators of technical products, inventions and technologies. The concepts of technological novelty, relevance and implementability are introduced as components of a product innovativeness criterion. The authors propose a model and an algorithm to calculate each of these indicators under conditions of incomplete, inaccurate and sometimes inconsistent initial information. The paper describes the developed specialized software, a promising methodological tool for using interval estimates in accordance with the theory of evidence. These estimates are used in the analysis of complex multicomponent systems and of aggregations of large volumes of fuzzy and incomplete data of various structures. The composition and structure of a multi-agent expert system are presented. The purpose of the system is to process groups of measurement results and to estimate the values of the innovativeness indicators of objects. The paper defines the active elements of the system, their functionality, roles, interaction order, input and output interfaces, as well as the general algorithm of the software operation. It describes the implementation of the software modules and gives an example of solving a specific problem of determining the innovation level of technical products. The developed approach, models, methodology and software can be used to implement a technology for storing the characteristics of objects with significant innovative potential. Formalizing the initial data of the task significantly increases the adaptability of the proposed methods to various subject areas. It becomes possible to process data of various natures obtained from expert surveys, from a search system or even from a measuring device, which increases the practical significance of the presented research.
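An illustrative sketch of Dempster's rule of combination from the theory of evidence the abstract refers to; the focal elements and mass values are hypothetical, and the authors' interval estimates and multi-agent pipeline are not reproduced. Hypotheses are encoded as frozensets over a frame of discernment.

```python
from itertools import product

def combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two experts judging whether a product is "innovative" (I) or "not" (N).
I, N = frozenset({"I"}), frozenset({"N"})
theta = I | N                                        # full frame: undecided mass
m_expert1 = {I: 0.6, theta: 0.4}
m_expert2 = {I: 0.5, N: 0.2, theta: 0.3}
print(combine(m_expert1, m_expert2))
```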

15. A method for the automatic synthesis of fuzzy controllers [№ 4, 2019]
Authors: V.V. Ignatyev, V.V. Solovev, A.A. Vorotova
Visitors: 6182
The paper presents a method for the automatic synthesis of fuzzy controllers based on measured data. When developing fuzzy controllers for technical facility control systems, issues arise related to choosing the number of linguistic variable terms, determining the type of membership functions and creating the rule base. These issues are usually resolved with the help of experts, but this process is quite labour-intensive and time-consuming. One possible solution is the automatic creation of fuzzy controllers from measured data, which can be taken from a real control system or from a simulation model. The authors developed a control system structure in MATLAB Simulink that captures the input and output signals of the controller during simulation and saves them to a file as an array. They also developed an approach to analyzing the data arrays in order to determine the parameters of the input and output variables of a fuzzy controller, and a data clustering mechanism that allows creating a fuzzy rule base. After the data arrays are analyzed, the rules in the base can either be complete duplicates or have the same antecedents and different consequents, which leads to uncertainty. In this regard, an algorithm is proposed for eliminating complete duplicates from the rule base and for averaging the rules with different consequents. Software has been developed in the MATLAB environment that takes the initial data from a technical facility control system with a PI control law, performs clustering and parameterization of the input and output signals, and creates and reduces the rule base. The suggested method of automatic fuzzy controller synthesis can be used to create controllers that replace traditional control laws with intelligent ones.
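A schematic sketch (an assumption, not the authors' MATLAB implementation) of the rule-base reduction step the abstract describes: rules extracted from measured data may repeat or share antecedents with different consequents; exact duplicates collapse into one rule and conflicting consequents are averaged.

```python
from collections import defaultdict

def reduce_rule_base(rules):
    """rules: list of (antecedent_terms, consequent_value) pairs."""
    grouped = defaultdict(list)
    for antecedent, consequent in rules:
        grouped[tuple(antecedent)].append(consequent)
    # One rule per distinct antecedent; conflicting consequents are averaged.
    return [(ant, sum(cons) / len(cons)) for ant, cons in grouped.items()]

raw_rules = [
    (("error:NEG", "d_error:ZERO"), -0.8),
    (("error:NEG", "d_error:ZERO"), -0.8),   # exact duplicate
    (("error:NEG", "d_error:ZERO"), -0.6),   # same antecedent, different consequent
    (("error:POS", "d_error:POS"),   0.9),
]
print(reduce_rule_base(raw_rules))
```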

16. System model verification based on equational characteristics of CTL formulas [№ 4, 2019]
Authors: Korablin Yu.P., Shipov A.A.
Visitors: 5107
The paper proposes and examines the RTL notation, which is based on systems of recursive equations and on the standard semantic definitions of Linear Temporal Logic (LTL) and Computation Tree Logic (CTL). In the authors' previous works, when this notation was still called RLTL, it was shown that it enables easy formulation and verification of LTL properties with respect to system models, including models that are themselves specified in the RLTL notation. The authors then expanded the capabilities of the RLTL notation so that both LTL and CTL expressions could be formulated, which produced the first version of the RTL notation. This article presents the second version of RTL, the result of refining and simplifying the semantic definitions of the notation, which makes its expressions clearer and more readable. The purpose of the article is to demonstrate that the RTL notation can be used as a tool to formulate and verify properties defined by formulas of both LTL and CTL using common axioms and rules, which makes RTL a single, universal notation for these logics. At the same time, RTL can also capture the expressiveness of other temporal logics through minor additions to its basic definitions. This means that in the future RTL may become a full-fledged universal temporal logic that has all the tools and means necessary for implementing all stages of verification.
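For orientation, these are the classical recursive (fixpoint) characterizations of temporal operators that equational notations of this kind build on; they are the standard LTL/CTL equivalences, not the authors' RTL definitions themselves.

```latex
\begin{align*}
  \mathbf{G}\,\varphi &\equiv \varphi \wedge \mathbf{X}\,\mathbf{G}\,\varphi, &
  \mathbf{F}\,\varphi &\equiv \varphi \vee \mathbf{X}\,\mathbf{F}\,\varphi, \\
  \varphi\,\mathbf{U}\,\psi &\equiv \psi \vee \bigl(\varphi \wedge \mathbf{X}(\varphi\,\mathbf{U}\,\psi)\bigr), &
  \mathbf{AG}\,\varphi &\equiv \varphi \wedge \mathbf{AX}\,\mathbf{AG}\,\varphi.
\end{align*}
```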

17. Investigation of the optimal number of processor cores for parallel cluster multiple labeling on supercomputers [№ 4, 2019]
Authors: S.Yu. Lapshina, A.N. Sotnikov, V.E. Loginova, C.Yu. Yudintsev
Visitors: 5610
The article considers the optimal number of processor cores for running the parallel cluster multiple labeling technique in simulation experiments on multi-agent modeling of the spread of mass epidemics on modern supercomputer systems installed at the JSCC RAS. The algorithm can be used in any field as a tool for differentiating large lattice clusters, because its input is given in a format independent of the application. At the JSCC RAS this tool was used to study the spread of epidemics, for which an appropriate multi-agent model was developed. The model considers an abstract disease transmitted by contact. During the simulation, the threshold value of the infection probability (the infection probability itself is a variable parameter) is determined at which the percolation effect appears on the disease distribution grid. If this value is close to the contagiousness index of a particular disease, then there is every chance of an epidemic spreading on a planetary scale. In the course of the simulation experiments, a variant of the parallel cluster multiple labeling technique for Hoshen-Kopelman percolation clusters, based on the label-linking mechanism and also usable in any area as a tool for differentiating large lattice clusters, was improved for a multiprocessor system. The article estimates the execution time of the parallel cluster multiple labeling technique for Hoshen-Kopelman percolation clusters for various values of the input parameters on high-performance computing systems installed at the JSCC RAS: MVS-10P MP2 KNL, MVS-10P OP, MVS-10P Tornado, and MVS-100K.
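A compact sketch of the classical sequential Hoshen-Kopelman labeling that the parallel technique in the article generalizes (the multiprocessor label-linking machinery itself is not reproduced). Occupied lattice sites receive cluster labels, and a union-find structure merges labels whenever two occupied neighbours meet.

```python
def hoshen_kopelman(grid):
    """grid: 2D list of 0/1; returns a same-shaped list of cluster labels."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    labels = [[0] * len(row) for row in grid]
    next_label = 1
    for i, row in enumerate(grid):
        for j, occupied in enumerate(row):
            if not occupied:
                continue
            up = labels[i - 1][j] if i > 0 else 0
            left = labels[i][j - 1] if j > 0 else 0
            if up and left:
                union(up, left)
                labels[i][j] = find(left)
            elif up or left:
                labels[i][j] = find(up or left)
            else:
                parent[next_label] = next_label    # start a new cluster
                labels[i][j] = next_label
                next_label += 1
    # Final pass: replace every label by its root so each cluster has one label.
    return [[find(l) if l else 0 for l in row] for row in labels]

print(hoshen_kopelman([[1, 1, 0, 1],
                       [0, 1, 0, 1],
                       [1, 0, 0, 1]]))
```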

18. Evolution and features of hyperconverged infrastructures [№ 4, 2019]
Author: Yu.M. Lisetskiy
Visitors: 7248
The paper considers hyperconverged infrastructures, which are widely used by companies to build a flexible cloud-level IT infrastructure that relies only on private data centers or clouds and does not use public resources. The paper describes the evolution of hyperconverged infrastructures, their features and their strong points. The emergence of hyperconverged infrastructures is a logical step in the development of IT infrastructures and the next level of converged infrastructures. The concept combines several infrastructure components into a complex that is integrated from the start by means of connecting software, and it develops the traditional approaches to building an IT infrastructure. Hyperconverged infrastructures extend the concept of converged infrastructures with the concept of modularity: all virtualized computing, network and storage resources operate autonomously inside separate modules of virtualized computing resources. Typically, these modules are grouped to provide fault tolerance, high performance and flexibility in building resource pools. One essential reason why hyperconverged infrastructures matter is that not all enterprises are ready to migrate their services and applications into a public cloud in order to eliminate the cost of building their own IT infrastructure. However, many of them want to take advantage of cloud technologies in their own infrastructures, and hyperconverged infrastructures give them that opportunity. They are a realistic alternative to leasing cloud services from third-party providers, since hyperconverged infrastructures enable the deployment of private clouds fully under the control of an enterprise. Therefore, hyperconverged infrastructures dominate as a hardware platform for building private clouds and virtualized workplaces and for developing new applications.

19. Algorithm for identifying parameters of a liquid heating device [№ 4, 2019]
Authors: V.V. Lgotchikov, T.S. Larkina
Visitors: 5171
The paper discusses an algorithm for identifying the parameters of a liquid heating device used to prepare, pasteurize and preserve agricultural products. The control part of the device uses a microcontroller, which enables new consumer properties, namely improved quality of the processed product. The parameters of the liquid heating device are identified programmatically. The authors selected two identification parameters: the active electric power released in the secondary body and the heat capacity of the liquid medium. These parameters cannot be measured directly with sensors. Identification is performed using the Eickhoff algorithm adapted to the process. The performance of the algorithm is confirmed by MATLAB simulation results. The model contains subsystems that solve the equations of electromagnetic and thermal balance for individual elements of the device, the control loops of the temperature control system, and the parameter identification process. It was found that a mathematical model with lumped parameters is a sufficient basis for improving the algorithm of the device together with its control part, which was implemented on the basis of a microcontroller. The proposed modification of the Eickhoff identification algorithm showed good performance for identified quantities of different magnitudes. Regression dependencies were obtained that allow implementing a strategy for adjusting the software part. The choice of the gain coefficients of the residual amplification curves for unobservable process parameters was made easier for known quantization periods of the observed feedback signals.
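A generic illustration, not the authors' modified algorithm, of residual-driven parameter identification of the kind the abstract describes: a model output is compared with the measured signal, and the two unknown parameter estimates are adjusted in proportion to the residual. The first-order thermal model, the "true" parameter values and the gains are all assumptions made for this example.

```python
import numpy as np

def identify(measured_temp, power_on, dt, gain_p=5e-4, gain_c=5.0):
    """Estimate heating power P and heat capacity C from a temperature record."""
    P_hat, C_hat = 500.0, 3000.0            # initial guesses
    T_model = measured_temp[0]
    for k in range(1, len(measured_temp)):
        # Toy model: C * dT/dt = P * u(t), losses neglected, so only the ratio
        # P/C is truly observable here; the estimates converge to a consistent pair.
        T_model += dt * P_hat * power_on[k - 1] / C_hat
        residual = measured_temp[k] - T_model
        # Gradient-style correction of both estimates from the same residual.
        P_hat += gain_p * residual * power_on[k - 1]
        C_hat -= gain_c * residual * power_on[k - 1]
        C_hat = max(C_hat, 1.0)
        T_model = measured_temp[k]          # re-anchor the model on the measurement
    return P_hat, C_hat

# Synthetic "measurement": assumed P = 800 W, C = 4200 J/K, heater always on.
dt, n = 1.0, 600
u = np.ones(n)
T_true = 20.0 + np.cumsum(800.0 / 4200.0 * u) * dt
print(identify(T_true, u, dt))
```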

20. Method of forming a priority list of automated control equipment in special-purpose systems and its software implementation [№ 4, 2019]
Authors: V.L. Lyaskovsky, I.B. Bresler, M.A. Alasheev
Visitors: 4114
The paper considers a method of forming a priority list of control equipment to be fitted with automation tools in distributed information management systems (DIMS) designed for special and military applications, as well as its software implementation as part of a decision-making support system. The need to develop and apply this method arises from the fact that a DIMS, as a rule, is created in several stages over a long time. This is mainly due to the high complexity and cost of developing, manufacturing and supplying automation equipment complexes, as well as to the limited financial resources and the technological and production capabilities of all participants in this process. At the same time, it is intuitively clear that equipping some control units with automation tools can contribute more to improving the efficiency of the entire system than automating others. However, until now there has been no formalized method for substantiating the sequence in which control units are equipped with automation facilities based on their most significant parameters and characteristics. In this regard, developing a method for forming the priority list of DIMS control equipment is an important and practically significant task. The essence of the proposed method lies in the consistent assessment of every unit of control equipment (CE) according to a developed system of classification criteria. All classification criteria are hierarchically interconnected, and their importance decreases from the first to the last one. Applying the method requires collecting, storing and processing arrays of initial data. To make the method more convenient to use and to reduce the information processing time and the number of errors associated with the human factor, the authors developed software implementing the method as an integral part of the developed decision-making support system. The method can be used by contractors and research organizations to substantiate the sequence of work in the course of DIMS development and refinement.
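A minimal sketch (criteria names and scores are hypothetical) of the ranking principle the abstract describes: every unit of control equipment is scored against hierarchically ordered classification criteria whose importance strictly decreases, so the priority list amounts to a lexicographic ordering of the score tuples.

```python
# Ordered from most to least important classification criterion (assumed names).
CRITERIA = ("control_level", "mission_importance", "information_load")

def priority_list(units):
    """units: dict name -> dict of criterion scores (higher = more important)."""
    return sorted(units,
                  key=lambda name: tuple(units[name][c] for c in CRITERIA),
                  reverse=True)

units = {
    "command_post_A": {"control_level": 3, "mission_importance": 2, "information_load": 5},
    "command_post_B": {"control_level": 3, "mission_importance": 3, "information_load": 1},
    "field_node_C":   {"control_level": 1, "mission_importance": 3, "information_load": 4},
}
print(priority_list(units))   # units to be equipped with automation tools first
```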
