Journal articles №4 2019
1. Evolution and features of hyperconverged infrastructures [No. 4, 2019]
Author: Yu.M. Lisetskiy (Iurii.Lisetskyi@snt.ua) - S&T Ukraine Company (Director General), Ph.D;
Abstract: The paper considers hyperconverged infrastructures, which companies widely use to build a flexible cloud-level IT infrastructure that relies only on private data centers or clouds and does not use public resources. The paper describes the evolution of hyperconverged infrastructures, their features and strong points. The emergence of hyperconverged infrastructures is a logical step in the development of IT infrastructures and the next level of converged infrastructures. The concept combines several infrastructure components into a complex that is initially integrated using connection software, and thus develops traditional approaches to building an IT infrastructure. Hyperconverged infrastructures extend the converged-infrastructure concept with modularity: all virtualized computing, network and storage resources operate autonomously inside separate modules, which are typically grouped to provide fault tolerance, high performance and flexibility in building resource pools. One essential reason why hyperconverged infrastructures matter is that not all enterprises are ready to migrate their services and applications to a public cloud in order to eliminate the costs of building their own IT infrastructure. However, many of them want to take advantage of cloud technologies in their infrastructures, and hyperconverged infrastructures provide such an opportunity. They are a realistic alternative to leasing cloud services from third-party providers, since they enable deployment of private clouds fully under the control of an enterprise. Therefore, hyperconverged infrastructures dominate as a hardware platform for building private clouds and virtualized workplaces and for developing new applications.
Keywords: data-processing centre, convergence, IT-infrastructure, virtualization, architecture, modularity, components, servers, Storage System, hyperconvergence
2. Systems model verification based on equational characteristics of CTL formulas [No. 4, 2019]
Authors: Yu.P. Korablin (y.p.k@mail.ru) - Russian State Social University, Ph.D; A.A. Shipov (a-j-a-1@yandex.ru) - Russian State Social University, Ph.D;
Abstract: The paper proposes and examines the RTL notation, which is based on systems of recursive equations and on the standard semantic definitions of Linear Temporal Logic (LTL) and Computation Tree Logic (CTL). In the authors' previous works, when this notation was still called RLTL, it was shown to enable easy formulation and verification of LTL properties with respect to system models, including models that are themselves specified in the RLTL notation. The authors then expanded the capabilities of RLTL so that both LTL and CTL expressions could be formulated, which produced the first version of the RTL notation. This article presents the second version of RTL, the result of refining and simplifying the semantic definitions of the notation, which makes its expressions more visual and readable. The purpose of the article is to demonstrate that RTL can be used as a tool to formulate and verify properties defined by formulas of both LTL and CTL using common axioms and rules, which lets RTL become a single, universal notation for these logics. At the same time, RTL can also capture the expressiveness of other temporal logics through minor additions to its basic definitions, so in the future it may become a full-fledged universal temporal logic with all the tools needed to implement every stage of verification. An illustrative fixed-point sketch follows the keyword list below.
Keywords: verification, model checking, equational characteristic of rtl, temporal logic formulas, ltl, ctl, recursive equation systems
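The equational flavor of such properties can be illustrated with a standard fixed-point computation: the CTL formula AG p is the greatest solution of Z = p ∧ AX Z. The sketch below evaluates it over a toy Kripke structure; it is a generic illustration of equational model checking, not the authors' RTL notation, and the states, transitions and labels are made up.

```python
# Illustrative sketch only: the CTL property AG p evaluated as the greatest
# fixed point of Z = p /\ AX Z over a toy Kripke structure. This is generic
# fixed-point model checking, not the authors' RTL notation.
def ag(states, succ, label, p):
    """States from which p holds along every path (transition relation assumed total)."""
    z = {s for s in states if p in label[s]}        # initial candidate set: all p-states
    while True:
        new_z = {s for s in z if succ[s] <= z}      # keep s only if every successor stays in z
        if new_z == z:
            return z
        z = new_z

states = {"s0", "s1", "s2"}
succ = {"s0": {"s1"}, "s1": {"s0", "s2"}, "s2": {"s2"}}
label = {"s0": {"p"}, "s1": {"p"}, "s2": set()}
print(ag(states, succ, label, "p"))                 # s2 violates p, so the set empties out
```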
3. The method for translating first-order logic formulas into positively constructed formulas [No. 4, 2019]
Authors: A.V. Davydov (andrey.davydov@datadvance.net) - DATADVANCE LLC (Technical Writer); A.A. Larionov (bootfrost@zoho.com) - Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences (Programmer); E.A. Cherkashin (eugeneai@icc.ru) - Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences (Senior Researcher);
Abstract: The paper considers the logic calculus of positively constructed formulas (the PCF calculus) and an automated theorem proving (ATP) method based on it. The PCF calculus was developed and described as a first-order logic formalism in the works of S.N. Vassilyev and A.K. Zherlov as a result of formalizing and solving control theory problems. There are examples of control theory problems solved effectively (in terms of language expressiveness and theorem proving efficiency) using the PCF calculus, such as controlling a group of lifts, pointing a telescope at the center of a planet in an incomplete phase, and mobile robot control. Compared to other logical means for formalizing subject domains and searching for a logical inference, the PCF calculus offers expressiveness combined with compactness of knowledge representation, natural parallelism of processing, large block size and lower combinatorial complexity of inference, high compatibility with heuristics, and rich capabilities for interactive proof. The selected class of formulas makes it possible to build constructive proofs and is much wider than the class of Horn clauses used in Prolog: there are no restrictions in the logical formalization of the axiomatic base of the subject domain, and the target statement is a conjunction of queries (in Prolog terms). To test the ATP software system (prover) based on the PCF calculus, the authors used the TPTP (Thousands of Problems for Theorem Provers) library. The TPTP format has become a standard in the automated reasoning community, so the developed prover naturally needs to accept problems in this format as input. This raises the problem of translating first-order predicate logic formulas presented in the TPTP format into the PCF format, which is nontrivial due to the special structure of PCF calculus formulas. The paper proposes a translation method for the first-order predicate calculus language that is more efficient than the algorithm in the first implementation of the PCF-based prover and preserves the original heuristic knowledge structure, as well as a simplified version of the method for problems presented in the language of clauses. Efficiency here means the number of steps and the length of the obtained formulas. The proposed method was implemented as a software system, a translator of first-order TPTP formulas into the PCF calculus language. The paper presents test results for the developed method, which show that there is a class of first-order formulas that existing ATP systems do not treat as special, while the PCF calculus has special strategies that increase the efficiency of inference search for this class.
Keywords: translation algorithms, automated theorem proving, mathematical logic
4. Smart data collection from distributed data sources [No. 4, 2019]
Author: M.S. Efimova (maria.efimova@hotmail.com) - St. Petersburg Electrotechnical University "LETI" (Postgraduate Student);
Abstract: The paper describes collecting and analyzing data from distributed data sources using the example of analyzing heterogeneous distributed financial information, and analyzes and compares existing approaches to information collection and analysis. Most existing approaches require all data to be collected in a single repository before analysis can be performed. However, such methods imply a delay between the moment the data is generated and the moment analysis methods are applied to it, because the data must be transferred from the source to the storage location. This significantly reduces decision-making efficiency and increases network traffic. In addition, collecting data from all sources can lead to significant costs if access to some sources is not free or is limited by a tariff plan. The considered approaches include data warehouses, ETL (extraction, transformation and loading) tools, lambda architectures, cloud computing, fog computing, and distributed data analysis based on the actor model. It is concluded that these approaches do not take into account the cost and priorities of data sources and do not allow accessing them dynamically, and therefore do not meet all the requirements. The paper proposes and describes a method of smart information collection with dynamic access to data sources depending on the current need, cost and source priority. The proposed method reduces network traffic, speeds up data analysis and reduces the costs associated with accessing data sources. An illustrative sketch of such source selection follows the keyword list below.
Keywords: Internet of things, financial analysis, heterogeneous data, distributed data sources, smart data analysis
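As a purely hypothetical illustration of cost- and priority-aware source selection, the sketch below greedily picks sources to cover the required data items within a budget; the Source fields, the budget model and the greedy rule are assumptions for this sketch, not the method from the paper.

```python
# Hypothetical sketch of cost/priority-aware source selection; all names,
# fields and numbers below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    fields: set          # data items the source can provide
    cost: float          # price (or quota weight) of one query
    priority: int        # smaller value = preferred source

def select_sources(sources, needed, budget):
    """Greedily cover the needed fields, preferring high-priority, cheap sources."""
    chosen, covered, spent = [], set(), 0.0
    for src in sorted(sources, key=lambda s: (s.priority, s.cost)):
        gain = (needed - covered) & src.fields
        if gain and spent + src.cost <= budget:
            chosen.append(src.name)
            covered |= gain
            spent += src.cost
        if covered >= needed:
            break
    return chosen, needed - covered    # selected sources and any uncovered fields

sources = [
    Source("exchange_feed", {"quotes", "volumes"}, cost=5.0, priority=1),
    Source("paid_api",      {"quotes", "ratings"}, cost=20.0, priority=2),
    Source("public_site",   {"ratings"},           cost=0.0, priority=3),
]
print(select_sources(sources, needed={"quotes", "ratings"}, budget=10.0))
```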
5. Investigation of the optimal number of processor cores for parallel cluster multiple labeling on supercomputers [No. 4, 2019]
Authors: S.Yu. Lapshina (lapshina@jscc.ru) - Joint Supercomputer Center of RAS (Head of the Scientific-organizational Department); A.N. Sotnikov (asotnikov@iscc.ru) - Joint Supercomputer Center of RAS (Professor), Ph.D; V.E. Loginova (vl@jscc.ru) - Joint Supercomputer Center of RAS – Branch of Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (JSCC RAS – Branch of SRISA) (Leading Engineer-Programmer); C.Yu. Yudintsev (climenty@jscc.ru) - Joint Supercomputer Center of RAS – Branch of Federal State Institution "Scientific Research Institute for System Analysis of the Russian Academy of Sciences" (JSCC RAS – Branch of SRISA) (Research Associate);
Abstract: The article considers the optimum number of processor cores for launching the Parallel Cluster Multiple Labeling Technique in the course of simulation experiments on multi-agent modeling of the spread of mass epidemics on modern supercomputer systems installed at the JSCC RAS. The algorithm can be used in any field as a tool for differentiating large lattice clusters, because it takes input in a format independent of the application. At the JSCC RAS, this tool was used to study the spread of epidemics, for which an appropriate multi-agent model was developed. The model considers an abstract disease transmitted by contact. During the simulation, the threshold value of the infection probability (the infection probability itself is a variable parameter) is determined at which the percolation effect appears on the disease distribution grid. If this value is close to the contagiousness index of a particular disease, there is every chance of an epidemic spreading on a planetary scale. In the course of the simulation experiments, an improved variant of the Parallel Cluster Multiple Labeling Technique for Hoshen-Kopelman percolation clusters, related to the label-linking mechanism and likewise applicable in any area as a tool for differentiating large lattice clusters, was used on a multiprocessor system. The article provides an estimate of the execution time of the Parallel Cluster Multiple Labeling Technique for Hoshen-Kopelman percolation clusters for various input parameter values on high-performance computing systems installed at the JSCC RAS: MVS-10P MP2 KNL, MVS-10P OP, MVS 10P Tornado, and MVS-100K. A sequential sketch of the underlying labeling idea follows the keyword list below.
Keywords: multi-agent simulation, percolation cluster, parallel cluster multiple labeling technique, high-performance computing systems, processor cores
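For background, the sketch below shows the classic sequential Hoshen-Kopelman cluster labeling with label linking on a small 2D lattice; it only illustrates the idea the parallel multiple-labeling technique builds on, and the grid and code structure are illustrative, not the authors' supercomputer implementation.

```python
# Minimal sequential Hoshen-Kopelman labeling for a 2D occupancy lattice
# (1 = occupied); illustrates the label-linking (union-find) idea only.
def hoshen_kopelman(grid):
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = [0]                      # parent[k] = representative of label k

    def find(k):                      # follow links to the root label
        while parent[k] != k:
            k = parent[k]
        return k

    def union(a, b):
        parent[find(a)] = find(b)

    next_label = 0
    for i in range(rows):
        for j in range(cols):
            if not grid[i][j]:
                continue
            up = labels[i - 1][j] if i > 0 else 0
            left = labels[i][j - 1] if j > 0 else 0
            if not up and not left:              # start of a new cluster
                next_label += 1
                parent.append(next_label)
                labels[i][j] = next_label
            elif up and left:                    # link two provisional labels
                union(up, left)
                labels[i][j] = find(left)
            else:
                labels[i][j] = find(up or left)
    # final pass: replace provisional labels by their representatives
    return [[find(l) if l else 0 for l in row] for row in labels]

grid = [[1, 1, 0],
        [0, 1, 0],
        [1, 0, 1]]
print(hoshen_kopelman(grid))          # three separate clusters on this lattice
```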
6. Methods and tools for modeling supercomputer job management systems [No. 4, 2019]
Authors: A.V. Baranov (antbar@mail.ru, abaranov@jscc.ru) - Joint Supercomputer Center of RAS (Associate Professor, Leading Researcher), Ph.D; D.S. Lyakhovets (anetto@inbox.ru) - Research & Development Institute Kvant (Research Associate);
Abstract: The paper discusses methods and tools for modeling supercomputer job management systems (JMS), such as SLURM, PBS, Moab, and the domestic management system of parallel job passing. The highlighted JMS modeling methods include modeling on a real supercomputer system, JMS modeling on virtual nodes, and simulation modeling. The authors also consider methods and tools for constructing a model job stream. The example of the management system of parallel job passing shows that a full-scale experiment on a real supercomputer cannot be reproduced accurately. The paper investigates the adequacy of JMS models in a broad and in a narrow sense. It is shown that a JMS model that is adequate only in the narrow sense ensures compliance only with interval indicators and cannot be used as a forecast model. To determine adequacy in the broad sense, the authors consider a numerical estimate of the proximity of two event streams: the stream of events from the real supercomputer and the stream of events produced by the JMS model. The normalized Euclidean distance between two vectors corresponding to the compared streams is proposed as the proximity measure; the vectors' dimension equals the number of processed jobs, and the vector components are the job residence times in the JMS. The method of adequacy determination is based on comparing real supercomputer statistics with the results of JMS modeling. The reference value of the adequacy measure is determined as the normalized Euclidean distance between the vectors of job residence times in the real system and in the JMS model. A sketch of this measure follows the keyword list below.
Keywords: model adequacy, simulation, supercomputer job scheduling, job management system, high-performance computing
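The proximity measure itself is easy to state in code. The sketch below computes a normalized Euclidean distance between the vector of observed job residence times and the vector predicted by a model; the abstract does not specify the exact normalization, so dividing by the number of jobs and by a scale factor here is an assumption, and the sample times are made up.

```python
# Sketch of the proximity measure described above: normalized Euclidean
# distance between job residence times on the real system and in the model.
# The exact normalization is an assumption for this sketch.
import math

def normalized_euclidean_distance(real_times, model_times):
    assert len(real_times) == len(model_times)      # one component per processed job
    n = len(real_times)
    scale = max(max(real_times), max(model_times)) or 1.0
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(real_times, model_times)) / n) / scale

real  = [120.0, 340.0, 55.0, 800.0]    # seconds each job spent in the real JMS
model = [110.0, 360.0, 60.0, 780.0]    # residence times produced by the JMS model
print(normalized_euclidean_distance(real, model))   # close to 0 for an adequate model
```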
7. Approaches to developing and debugging QEMU simulators using the high-level architecture description language PPDL [No. 4, 2019]
Authors: A.Yu. Drozdov (alexander.y.drozdov@gmail.com) - Moscow Institute of Physics and Technology (Professor), Ph.D; Yu.N. Fonin (fonin.iun@mipt.ru) - Moscow Institute of Physics and Technology (Research Associate); M.N. Perov (coder@frtk.ru) - Moscow Institute of Physics and Technology (Laboratory Assistant); A.S. Gerasimov (samik.mechanic@gmail.com) - Moscow Institute of Physics and Technology (Laboratory Assistant);
Abstract: The paper describes an approach to developing and debugging simulators based on QEMU (Quick EMUlator) binary translation. The approach relies on PPDL (Processor and Periphery Description Language), a high-level architecture description language. Simulators based on binary translation run several times faster than instruction interpreters while providing a wide range of possibilities for software debugging as well as for dynamic analysis of applications. Thus, binary translation simulators, QEMU-based ones in particular, are of high interest both to system-level SoC (system on chip) developers and to embedded software developers. However, developing binary translators is more complicated and time-consuming than developing instruction interpreters. Developing a QEMU simulator assumes implementing the instructions of the simulated processor as sequences of so-called TCG micro-operations. TCG micro-operations are not executed directly; they are used for binary translation into host machine instructions, so there is no way to debug the TCG description of instructions with standard debuggers. The PPDL language simplifies QEMU simulator development: from a PPDL description of a processor, the PPDL compiler generates two simulators, an interpreter and a QEMU component kit. The compiler generates the interpreter as C++ source code, so the PPDL description can be debugged with any debugger such as gdb or Microsoft Visual Studio. Then, from the same description, the PPDL compiler generates the QEMU description of the processor, representing instructions as sequences of TCG micro-operations. Due to PPDL, developers can avoid debugging the TCG processor description and therefore accelerate development of a QEMU-based simulator. A toy sketch of this two-backend idea follows the keyword list below.
Keywords: architecture description languages, qemu, simulator
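As a toy illustration of generating both an interpreter and a translation backend from a single instruction description (this is neither PPDL, QEMU nor TCG — all names and the micro-op format below are made up), consider:

```python
# Toy illustration only: one high-level description of an instruction drives
# two generated backends, a directly executable interpreter and an emitter of
# pseudo micro-operations for later binary translation.
ADD_DESC = {"name": "add", "dst": "r0", "src1": "r1", "src2": "r2"}

def interpret(desc, regs):
    """Interpreter backend: apply the instruction's semantics to a register file."""
    regs[desc["dst"]] = regs[desc["src1"]] + regs[desc["src2"]]
    return regs

def emit_micro_ops(desc):
    """Translation backend: lower the same description to pseudo micro-ops."""
    return [
        f"load_tmp t0, {desc['src1']}",
        f"load_tmp t1, {desc['src2']}",
        "add_tmp t0, t0, t1",
        f"store_reg {desc['dst']}, t0",
    ]

print(interpret(ADD_DESC, {"r0": 0, "r1": 2, "r2": 3}))   # {'r0': 5, 'r1': 2, 'r2': 3}
print(emit_micro_ops(ADD_DESC))
```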
8. Domain-specific languages for testing web applications [No. 4, 2019]
Authors: V.G. Fedorenkov (vlad.fedorenkov@gmail.com) - The National Research University of Information Technologies, Mechanics and Optics (Student); P.V. Balakshin (pvbalakshin@gmail.com) - The National Research University of Information Technologies, Mechanics and Optics (Associate Professor), Ph.D;
Abstract: The desire to release a high-quality product with minimal errors often confronts developers of both large and small projects with many problems concerning product testing. This work is devoted to finding solutions to these problems. The paper compares the main methods as well as the existing software tools for creating and supporting domain-specific languages aimed at test scripts for testing web application interfaces. It also considers existing tools for working with Selenium, reviews methodologies for writing a DSL (with subsequent selection of the most appropriate one), shows how to implement a DSL prototype based on Selenium, and tests and assesses the applicability of the prototype. It describes the advantages of using a DSL in testing, its functional and non-functional requirements, the developed DSL in a simplified form, and the language structure (Java packages). One of the main criteria for all of the above is the involvement of non-technical specialists at each testing stage (solving the so-called translation problem), which is important for comprehensive testing of a software product. A key feature of the article is the demonstration of a DSL prototype based on Selenium, followed by testing and evaluating the applicability of the implemented prototype. The paper also shows a way to create an additional metaprogramming tool that further simplifies the creation, support, and modification of the developed test scripts. A sketch of a Selenium-backed test DSL follows the keyword list below.
Keywords: selenium, functionality, interface, development, web application, testing, software, dsl
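The fluent-DSL idea can be sketched with the Selenium Python bindings (the paper's DSL is Java-based; the Page class, its method names, and the example URL and selectors below are illustrative assumptions, not the authors' language):

```python
# Minimal sketch of a Selenium-backed test DSL; the wrapper and test data
# are illustrative, not the DSL developed in the paper.
from selenium import webdriver
from selenium.webdriver.common.by import By

class Page:
    """Fluent wrapper that hides WebDriver details from test scenarios."""
    def __init__(self, driver):
        self.driver = driver

    def open(self, url):
        self.driver.get(url)
        return self

    def type_into(self, css_selector, text):
        self.driver.find_element(By.CSS_SELECTOR, css_selector).send_keys(text)
        return self

    def click(self, css_selector):
        self.driver.find_element(By.CSS_SELECTOR, css_selector).click()
        return self

    def should_see(self, expected_text):
        assert expected_text in self.driver.page_source
        return self

if __name__ == "__main__":
    driver = webdriver.Chrome()                  # assumes a local chromedriver
    try:
        (Page(driver)
            .open("https://example.org/login")   # hypothetical page under test
            .type_into("#user", "demo")
            .type_into("#password", "secret")
            .click("#submit")
            .should_see("Welcome"))
    finally:
        driver.quit()
```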
9. Web-robot detection method based on user’s navigation graph [No. 4, 2019]
Authors: A.A. Menshchikov (menshikov@.itmo.ru) - The National Research University of Information Technologies, Mechanics and Optics (Postgraduate Student); Yu.A. Gatchin (od@mail.ifmo.ru) - The National Research University of Information Technologies, Mechanics and Optics (Professor), Ph.D;
Abstract: According to reports of web security companies, every fifth request to a typical website comes from a malicious automated system (a web robot). Web robots already prevail over ordinary users of web resources in terms of traffic volume. They threaten data privacy and copyright, perform unauthorized information gathering, distort statistics, and degrade performance, so there is a need to detect and block their sources. The existing methods and algorithms involve syntactic and analytical processing of web server logs to detect web robots. Such approaches cannot reliably identify web robots that hide their presence and imitate the behavior of legitimate users. This article proposes a web-robot detection method based on the characteristics of the page web graph. The characteristics of the analyzed sessions include not only the features of a user web graph, but also the parameters of each node the user visited (in- and out-degrees, centrality measures, and others). To calculate these characteristics, a page connectivity graph is constructed. Based on the analysis of these parameters, as well as the characteristics of the web robot's behavioral graph, the session is classified. The authors analyze different behavioral patterns, describe the basic principles of extracting the necessary data from web server logs, the method of constructing the connectivity graph, and the most significant features. The paper considers the detection procedure and the selection of an appropriate classification model. For each studied model, the authors select optimal hyperparameters and cross-validate the results. The analysis of the accuracy and precision of the detection shows that using the XGBoost library yields an F1 measure of 0.96. A rough sketch of the feature pipeline follows the keyword list below.
Keywords: information security, graph theory, website graph, web-robot detection, parsers, website protection, infosecurity, web robots
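A rough sketch of such a pipeline is shown below: a page connectivity graph, per-session features derived from the visited nodes (degrees and centrality), and a gradient-boosting classifier. The toy site graph, sessions, feature set and model parameters are assumptions for illustration, not the authors' dataset or configuration.

```python
# Rough sketch of the feature pipeline described above; toy data only.
import networkx as nx
from xgboost import XGBClassifier

# Site page graph built from observed page-to-page transitions (toy example).
site_graph = nx.DiGraph([("/", "/catalog"), ("/catalog", "/item1"),
                         ("/catalog", "/item2"), ("/item1", "/cart")])
centrality = nx.betweenness_centrality(site_graph)

def session_features(pages):
    """Aggregate graph properties of the pages a session visited."""
    visited = [p for p in pages if p in site_graph]
    return [
        len(set(visited)),                                             # distinct pages
        sum(site_graph.in_degree(p) for p in visited) / len(visited),  # mean in-degree
        sum(site_graph.out_degree(p) for p in visited) / len(visited), # mean out-degree
        sum(centrality[p] for p in visited) / len(visited),            # mean centrality
    ]

sessions = [["/", "/catalog", "/item1", "/cart"],       # human-like browsing
            ["/", "/catalog", "/item1", "/item2"],      # human-like browsing
            ["/catalog", "/item1", "/item2", "/cart"],   # crawler sweeping pages
            ["/", "/item1", "/item2", "/cart"]]          # crawler sweeping pages
labels = [0, 0, 1, 1]                                    # 1 = web robot

X = [session_features(s) for s in sessions]
clf = XGBClassifier(n_estimators=50, max_depth=3).fit(X, labels)
print(clf.predict(X))
```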
10. Development of a spiking neural network with the possibility of high-speed training to neutralize DDoS attacks [No. 4, 2019]
Authors: E.V. Palchevsky (teelxp@inbox.ru) - Financial University under the Government of the Russian Federation; O.I. Khristodulo (o-hristodulo@mail.ru) - Ufa State Aviation Technical University (Professor), Ph.D;
Abstract: Effective data accessibility is one of the key challenges in information security, and DDoS attacks often violate information availability. Because modern methods of protection against attacks by external unauthorized traffic are imperfect, many companies with Internet access face the inaccessibility of their own services that provide various services or information, which results in financial losses from equipment downtime. To solve this problem, the authors have developed a spiking neural network for protection against attacks by external unauthorized traffic. The main advantages of the developed spiking neural network are its high self-learning speed and quick response to DDoS attacks (including unknown ones). A new method of spiking neural network self-training is based on uniform processing of spikes by each neuron; as a result, the network is trained in the shortest possible time and therefore quickly and efficiently filters attacks by external unauthorized traffic. The paper also compares the developed spiking neural network with similar solutions for protecting against DDoS attacks; the comparison shows that the developed network is better optimized for high loads and is able to detect and neutralize DDoS attacks as quickly as possible. The developed spiking neural network was tested in idle conditions and while protecting against DDoS attacks, with load values obtained on the resources of a computing cluster. Long-term testing of the spiking neural network shows a rather low load on the central processor, RAM and solid-state drive during massive DDoS attacks. Thus, the optimal load not only increases the availability of each physical server, but also makes it possible to simultaneously run resource-intensive computing processes without disrupting the working environment. Testing was carried out on computing cluster servers, where the spiking neural network showed stable operation and effectively protected against DDoS attacks. A generic spiking-neuron sketch follows the keyword list below.
Keywords: infosecurity, malicious traffic, ddos attacks, neural network self-training, spiking neural network, networks, data transfer, information
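For readers unfamiliar with spiking networks, the sketch below implements a generic leaky integrate-and-fire neuron, the kind of unit such a network is built from; it is not the authors' architecture or training rule, and the constants and input sequence are arbitrary.

```python
# Generic leaky integrate-and-fire neuron, shown only to illustrate the kind
# of spiking unit a spiking neural network is built from.
def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train (0/1 per time step) for a sequence of input currents."""
    potential, spikes = 0.0, []
    for current in input_current:
        potential = leak * potential + current   # integrate input with leakage
        if potential >= threshold:               # fire and reset on crossing the threshold
            spikes.append(1)
            potential = reset
        else:
            spikes.append(0)
    return spikes

# A burst of "packets per time step" (e.g. scaled traffic intensity) drives spikes.
print(lif_neuron([0.2, 0.3, 0.6, 0.9, 0.1, 0.05]))   # -> [0, 0, 1, 0, 0, 0]
```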