ISSN 0236-235X (P)
ISSN 2311-2735 (E)


Journal articles №1 2021

11. Application of high-level synthesis technology and hardware accelerators on FPGA in protein identification [No. 1, 2021]
Authors: G.K. Shmelev (shmelevgk@cps.tver.ru) - R&D Institute Centerprogramsystem (Chief of Department); M.A. Likhachev (likhachevma@cps.tver.ru) - R&D Institute Centerprogramsystem (Chief of Department); Arzhaev V.I. (arzhaeVI@cps.tver.ru) - R&D Institute Centerprogramsystem (Branch Manager), Ph.D;
Abstract: The paper considers the use of high-level synthesis (HLS) technology with FPGA-based hardware accelerators in the protein identification problem. Currently, a significant number of hardware solutions with high performance and bandwidth are designed for various applications. One such solution is hardware computation accelerators based on field-programmable gate arrays (FPGAs), which have a number of advantages over accelerators built on graphics processing units (GPUs) and on application-specific integrated circuits (ASICs). However, the wide application of such devices is hindered by the laboriousness and specificity of the traditional development flow, which relies on specialized programming languages for this type of accelerator. High-level synthesis technology, which employs one of the popular general-purpose programming languages, opens up new horizons for the wide use of such accelerators. This paper describes one embodiment of a computational hardware and software platform using a hardware accelerator on an FPGA. Special attention is paid to the major steps of developing the architecture of applications deployed on the hardware and to the methodology for developing a high-performance computing core of hardware-accelerated software functions. The paper demonstrates the resulting improvement in the computational performance of a de novo peptide sequencing software application and the effectiveness of the chosen hardware platform and development path in comparison with the original software application.
Keywords: de novo sequencing, protein identification, FPGA, hardware acceleration, high-level synthesis
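To picture the kind of compute-heavy inner loop that such an FPGA offload targets, here is a toy spectrum-matching kernel in Python — our own illustration, not the authors' accelerated code. The residue masses are standard monoisotopic values; the spectrum and tolerance are invented:

```python
# Illustrative sketch: the scoring loop that de novo sequencing tools spend
# most of their time in, i.e. the kind of kernel offloaded to an FPGA via HLS.
# Monoisotopic residue masses for a few amino acids (standard values, Da).
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276}

def prefix_masses(peptide):
    """Cumulative masses of peptide prefixes (a simplified b-ion ladder)."""
    total, out = 0.0, []
    for aa in peptide:
        total += RESIDUE_MASS[aa]
        out.append(total)
    return out

def match_score(peptide, spectrum_peaks, tol=0.02):
    """Count theoretical prefix masses that match an observed peak."""
    score = 0
    for m in prefix_masses(peptide):
        if any(abs(m - p) <= tol for p in spectrum_peaks):
            score += 1
    return score

peaks = [57.02, 128.06, 215.09]      # toy observed spectrum
print(match_score("GAS", peaks))     # all three prefixes match -> 3
```

Every candidate peptide is scored against every spectrum, so the loop is embarrassingly parallel — exactly the shape of computation that HLS maps well onto FPGA pipelines.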

12. An intelligent approach to the automation of technological and production processes [No. 1, 2021]
Authors: S.Yu. Ryabov (sergey.u.ryabov@gmail.com) - Synchronoss, Inc. (Lead Business Analyst); Yu.V. Ryabov (ryabov_yuri_atp@mail.ru) - Ufa State Aviation Technical University (Associate Professor), Ph.D;
Abstract: The paper considers an approach to production automation, in particular, to the automated design of technological processes. Data processing in existing systems is reduced to a set of rules, and the executing program behaves like a state machine. This approach obviously has a ceiling. It is proposed to represent the production process as a whole, described by an intelligent model. The adopted model of automation of technological and production processes is based on graph theory and a graph representation of data and knowledge. The graph is considered as a function of time and computation. It is proposed to use a supergraph as a set of abstract and defined nodes with abstract and static relations. Thus, every scenario of physical reality, every manufacturing situation, considered at any scale, is modeled as a subgraph of the supergraph. Akka, an implementation of the actor model of computation, can serve as an intelligent platform for implementing the computations; it enables an intelligent approach to automating production and technological processes. An example of constructing part of a supergraph for machining a part element is considered for a typical machining transition, including the corresponding tool, processing modes, and a measuring tool. The output of such a system is a graph whose vertices and relations describe the knowledge of technological operations or the state of the production process. The result can be transferred to another system for execution, saved in a database, or used to analyze the situation.
Keywords: manufacturing process, technological process, graph, supergraph, intelligent platform, static node, data node, abstract node, calculator
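The supergraph idea can be pictured with a minimal sketch — our illustration, not the authors' Akka-based implementation. Nodes are tagged with the types named in the keywords (abstract, static, data), and one manufacturing situation is obtained as a subgraph:

```python
# A toy supergraph: node names and types are invented for illustration.
supergraph = {
    "nodes": {
        "machining_transition": "abstract",
        "cutting_tool": "static",
        "processing_mode": "data",
        "measuring_tool": "static",
    },
    "edges": [
        ("machining_transition", "cutting_tool"),
        ("machining_transition", "processing_mode"),
        ("machining_transition", "measuring_tool"),
    ],
}

def subgraph(sg, selected):
    """Model one concrete situation: keep selected nodes and edges between them."""
    nodes = {n: t for n, t in sg["nodes"].items() if n in selected}
    edges = [(a, b) for a, b in sg["edges"] if a in nodes and b in nodes]
    return {"nodes": nodes, "edges": edges}

situation = subgraph(supergraph, {"machining_transition", "cutting_tool"})
print(situation["edges"])   # [('machining_transition', 'cutting_tool')]
```

In the paper's terms, each such subgraph would be handed to an actor for computation; here the extraction step alone is shown.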

13. Ontology processing in attributive access control in cyber-physical systems [No. 1, 2021]
Authors: Poltavtseva M.A. (maria.poltavtseva@ibks.icc.spbstu.ru) - Peter the Great Saint-Petersburg Polytechnic University (Associate Professor), Ph.D;
Abstract: The paper is devoted to supporting the processing of large-scale ontologies in a relational server and considers the problem of representing and processing ontologies when implementing attributive (ontological) access control in cyber-physical systems. The relevance of the paper is due to the growth of attacks on industrial cyber-physical systems and the improvement of access control methods. The most promising direction today is attributive access control based on ontologies. On the one hand, distributed large-scale industrial cyber-physical systems use a large and increasing number of attributive access control rules; on the other hand, the techniques for storing and processing such data using specialized technologies must meet information protection requirements. This necessitates the use of advanced (including certified) tools and, consequently, of a relational server for storing and processing the data. Therefore, the problem of finding the most rational representation and processing of access control rules is highly relevant. The paper proposes a method for representing ontological inference rules based on implications of binary trees to support ontologies in the attributive access control problem for cyber-physical systems. A data representation is given, along with an analysis of methods for mapping the information into an industrial relational server. Experimental testing of the representation of ontological inference rules based on implications of binary trees is shown on an example of access control rule support. Based on analytical effort and experimental testing, the most rational solution for this problem is to store the forest of trees using a materialized path.
Keywords: infosecurity, cyber physical system, DBMS, relational model, inference rule, large-scale ontology
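The storage technique the paper settles on — a forest of trees kept as materialized paths in a relational server — can be sketched in a few lines with SQLite standing in for the industrial DBMS. The table layout and rule labels are our own toy examples:

```python
# Materialized-path storage sketch: each tree node stores its full path,
# so an entire subtree of inference rules is one indexed prefix query.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rule_node (path TEXT PRIMARY KEY, label TEXT)")
rows = [
    ("1",     "subject is operator"),
    ("1.1",   "shift is active"),
    ("1.1.1", "grant read access"),
    ("2",     "subject is auditor"),
]
db.executemany("INSERT INTO rule_node VALUES (?, ?)", rows)

# All descendants of node "1" via a single prefix scan on the path index.
subtree = db.execute(
    "SELECT path FROM rule_node WHERE path = '1' OR path LIKE '1.%' ORDER BY path"
).fetchall()
print([p for (p,) in subtree])   # ['1', '1.1', '1.1.1']
```

The appeal for access control is that subtree retrieval needs no recursive queries, which keeps rule evaluation inside the capabilities of certified relational servers.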

14. Developing ontology schemas based on spreadsheet transformation [No. 1, 2021]
Authors: Dorodnykh N.O. (tualatin32@mail.ru) - Institute of system dynamics and control theory SB RAS, Ph.D; Yurin A.Yu. (iskander@irk.ru) - Institute of system dynamics and control theory SB RAS, National Research Irkutsk State Technical University, Ph.D; A.V. Vidiya (vidiya_av@icc.ru) - Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences (Programmer);
Abstract: Using ontologies is a widespread practice in creating intelligent systems and knowledge bases, in particular for the conceptualization and formalization of knowledge. However, most modern approaches and tools provide only manual manipulation of concepts and relationships, which is not always effective. In this regard, using various information sources, including spreadsheets, is relevant for the automated creation of ontologies. This paper describes a method for the automated creation of ontology schemas in the OWL2 DL format based on the analysis and transformation of data extracted from spreadsheets. A feature of the method is the use of an original canonical relational form for the intermediate representation of spreadsheets, which unifies the input data. The method is based on the principles of model transformation and comprises four primary stages: converting the original spreadsheets with an arbitrary layout into the canonical (relational) form; obtaining fragments of the ontology schema; aggregating the separate fragments of the ontology schema; and generating the ontology schema code in the OWL2 DL format. The method is implemented in the form of two software tools integrated by data: TabbyXL, a console Java application for table conversion, and the PKBD.Onto plugin, an extension module for Personal Knowledge Base Designer (software for expert system prototyping). The transformation of a spreadsheet with information about minerals is considered as an illustrative example, and the transformation result is presented as a fragment of an ontology schema. The method and tools are used in the educational process at the Institute of Information Technologies and Data Analysis of the Irkutsk National Research Technical University (INRTU).
Keywords: code generation, transformation of models, owl, ontological schema, conceptual model, canonical spreadsheet, spreadsheet
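The gist of the last two stages — canonical table in, OWL2 axioms out — can be shown with a heavily simplified sketch. The table rows (echoing the paper's minerals example) and the emitted functional-style axioms are our own illustration, not TabbyXL or PKBD.Onto output:

```python
# Canonical (relational) rows: one entity per row, columns already unified.
canonical_rows = [
    {"entity": "Quartz",  "category": "Mineral", "hardness": "7"},
    {"entity": "Calcite", "category": "Mineral", "hardness": "3"},
]

def to_owl(rows):
    """Emit OWL2 functional-style axioms from canonical rows."""
    axioms, seen = [], set()
    for row in rows:
        cls = row["category"]
        if cls not in seen:                 # declare each class once
            axioms.append(f"Declaration(Class(:{cls}))")
            seen.add(cls)
        ind = row["entity"]
        axioms.append(f"ClassAssertion(:{cls} :{ind})")
        axioms.append(f'DataPropertyAssertion(:hardness :{ind} "{row["hardness"]}")')
    return axioms

print(to_owl(canonical_rows)[0])   # Declaration(Class(:Mineral))
```

The real method additionally aggregates schema fragments produced from different tables; here a single table maps straight to axioms.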

15. Semantic analysis of scientific texts: Experience in creating a corpus and building language models [No. 1, 2021]
Authors: E.P. Bruches (bruches@bk.ru) - Novosibirsk State University, A.P. Ershov Institute of Informatics Systems (IIS), Siberian Branch of the Russian Academy of Sciences (Assistant, Postgraduate Student); A.E. Pauls (aleksey.pauls@mail.ru) - Novosibirsk State University (Student); Batura T.V. (tatiana.v.batura@gmail.com) - A.P. Ershov Institute of Informatics Systems (IIS), Siberian Branch of the Russian Academy of Sciences, Ph.D; V.V. Isachenko (vv.isachenko@gmail.com) - A.P. Ershov Institute of Informatics Systems (IIS), Siberian Branch of the Russian Academy of Sciences (Postgraduate Student); D.R. Shsherbatov (d.shsherbatov@g.nsu.ru) - Novosibirsk State University (Student);
Abstract: This paper is devoted to the development of methods for named entity recognition (NER) and relation classification (RC) in scientific texts from the information technology domain. Scientific publications provide valuable information about cutting-edge scientific advances, but efficiently processing the increasing amounts of data is time-consuming, so automatic methods for such information processing require continuous improvement. Modern deep learning methods solve these problems relatively well, but to achieve outstanding quality on data from specific areas of knowledge, the obtained models must be additionally trained on specially prepared datasets. Such collections of scientific texts are available in English and are actively used by the Russian scientific community, but at present no such collections are publicly available in Russian. The paper describes the RuSERRC dataset, which consists of 1600 unlabeled documents and 80 documents labeled with entities and semantic relations (6 relation types are considered). Several modifications of the methods for building models for the Russian language are also proposed. This is especially important, since most existing research focuses on data in English and Chinese, and high-quality models for the Russian language are not always available in the public domain. The paper includes the results of experiments comparing the vocabulary method, RAKE, and methods based on neural networks. The models and datasets are publicly available, and we hope they can be useful for research purposes and for the development of information extraction systems.
Keywords: dataset building, neural network models, relation classification, semantic relation extraction, named entity recognition
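The simplest baseline compared in the paper, the vocabulary (dictionary) method, amounts to marking any occurrence of a known domain term as an entity. A minimal sketch — the term list below is a toy example, not RuSERRC vocabulary:

```python
# Dictionary-based NER baseline: exact lookup of known terms in the text.
VOCAB = {"neural network": "METHOD", "named entity recognition": "TASK"}

def dict_ner(text):
    """Return (term, label, start_offset) for every vocabulary term found."""
    found, low = [], text.lower()
    for term, label in VOCAB.items():
        start = low.find(term)
        while start != -1:
            found.append((term, label, start))
            start = low.find(term, start + 1)
    return sorted(found, key=lambda t: t[2])

hits = dict_ner("Named entity recognition with a neural network baseline.")
print([(t, l) for t, l, _ in hits])
```

Such a matcher has high precision on canonical term forms but misses inflected variants — a real weakness in Russian, which is one reason the paper moves to neural models.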

16. Software environments for studying the basics of neural networks [No. 1, 2021]
Authors: P.Yu. Bogdanov (45bogdanov@gmail.ru) - Russian State Hydrometeorological University (Senior Lecturer); E.V. Kraeva (kate.smitt.by@mail.ru) - Russian State Hydrometeorological University (Assistant); S.A. Verevkin (vrjovkin@rambler.ru) - Russian State Hydrometeorological University (Student); E.D. Poymanova (e.d.poymanova@gmail.com) - St. Petersburg State University of Aerospace Instrumentation (Senior Lecturer); T.M. Tatarnikova (tm-tatarn@yandex.ru) - St. Petersburg State University of Aerospace Instrumentation (Associate Professor, Professor), Ph.D;
Abstract: The paper describes ways and methods of studying and constructing neural networks. It is shown that studying the principles of neural network operation and their application to particular problems is possible only through practice. The paper analyzes various software environments that can be used in laboratory and practical classes for studying and applying neural networks. The modern cloud service Google Colaboratory is highlighted and recommended for teaching the basics of neural networks due to the pre-installed TensorFlow library and libraries for working in Python, free access to graphics processors, the ability to write and execute program code in a browser, and the absence of any special service configuration. Examples of designing neural networks in Colaboratory are considered, in particular, solving recognition and image classification problems and predictive modeling. The authors show that a convolutional neural network, whose characteristic feature is obtaining a map of image features with subsequent convolution, can be used for image recognition and classification. The paper provides code chunks for connecting the necessary libraries, loading datasets, normalizing images, assembling a neural network, and training it. The forecasting problem is considered on the example of a feed-forward neural network trained with the error backpropagation algorithm, the essence of which is to obtain the expected value at the output layer when the corresponding data is fed to the input layer. Backpropagation consists of adjusting the weights that give the greatest correlation between the input dataset and its corresponding result.
Keywords: forecasting problem, classification problem, libraries and programming languages, learning how neural networks work for beginners, software environments, neural network
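The backpropagation idea described above can be demonstrated without any library at all — a dependency-free sketch (our own, not the paper's Colaboratory code) where a one-weight "network" learns y = 2x by moving the weight against the error gradient:

```python
# Minimal backpropagation demo: one weight, squared error, gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, expected output)
w, lr = 0.0, 0.05                              # initial weight, learning rate

for _ in range(200):                           # training epochs
    for x, y in data:
        y_hat = w * x                          # forward pass
        grad = 2 * (y_hat - y) * x             # d(error^2)/dw
        w -= lr * grad                         # backward pass: adjust weight

print(round(w, 3))                             # converges to 2.0
```

The same loop, with the gradient propagated layer by layer via the chain rule, is what TensorFlow automates in the classroom examples the paper recommends.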

17. The adaptation of the LSTM neural network model to solve a complex pattern recognition problem [No. 1, 2021]
Author: V.S. Tormozov (007465@pnu.edu.ru) - Pacific National University (Senior Lecturer);
Abstract: The paper examines the adaptation of a feed-forward artificial neural network model with long short-term memory (LSTM) blocks for a complex pattern recognition problem. For artificial neural networks (ANNs), context can be extracted from the input signal vector and from the weight values of the trained network. However, accounting for a significant volume of context increases the number of neural connections and the complexity of training procedures and network operation. Instead of receiving context from input values, the context can also be temporarily stored in a special memory buffer, from which it can later be extracted and used as a signal in the ANN's operation. This type of memory is called LSTM. The advantage of networks of this type is that they use memory blocks associated with each neuron of the hidden layer, which allows context-related data to be stored when forming recognition patterns. The paper presents a method of linear switching of LSTM units depending on the value of the transmitted signal. A computational experiment was conducted to investigate the effectiveness of the proposed method against a previously developed feed-forward neural network of similar structure. Machine learning was performed for each ANN type on the same sequence of training examples. The test results were compared for a feed-forward ANN, a recurrent neural network (RNN) of similar architecture with the same number of neurons on each layer, and a network of neuromodulating interaction with one feedback delay. The optimization criterion in this case is the error of the neural network on the training sample, consisting of examples not presented in the test. The efficiency of solving the classification problem is evaluated according to two criteria: learning error on the training sample and testing error on the testing sample.
Keywords: artificial network, artificial intelligence, machine learning, pattern recognition, long short-term memory unit
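For reference, the memory block the paper's switching method operates on is the standard LSTM cell. A forward-pass sketch with scalar toy weights (our illustration, not the trained network from the experiment) shows how the cell state retains context after the input goes silent:

```python
# One standard LSTM memory block, forward pass only, scalar weights.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One time step: input, forget, and output gates around a memory cell."""
    i = sigmoid(w["i"] * x + w["ri"] * h_prev)    # input gate
    f = sigmoid(w["f"] * x + w["rf"] * h_prev)    # forget gate
    o = sigmoid(w["o"] * x + w["ro"] * h_prev)    # output gate
    g = math.tanh(w["g"] * x + w["rg"] * h_prev)  # candidate cell value
    c = f * c_prev + i * g                        # cell state carries context
    h = o * math.tanh(c)                          # block output
    return h, c

w = {"i": 0.5, "ri": 0.1, "f": 0.5, "rf": 0.1,
     "o": 0.5, "ro": 0.1, "g": 1.0, "rg": 0.1}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0]:         # a pulse, then silence
    h, c = lstm_step(x, h, c, w)
print(c > 0)                       # the cell still holds part of the pulse
```

It is this retained cell state c that lets context-related data persist while recognition patterns are formed, at the cost of four gate computations per block per step.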

18. iLabit OmViSys: A photorealistic simulator based on an omnidirectional camera and structured light [No. 1, 2021]
Author: I.Yu. Kholodilin (kholodilin.ivan@yandex.ru) - South Ural State University (National Research University) (Postgraduate Student);
Abstract: Following recent advances in neural network learning, driven by the demand for large training datasets, virtual learning has recently attracted a lot of attention from the computer vision community. Today, many virtual simulation environments are available, but most of them are based on a standard camera and are limited to the measurement sensors mounted on the mobile robot. To facilitate data collection in systems that were not previously integrated into existing virtual environments, this paper presents the photorealistic simulator iLabit OmViSys, which includes an omnidirectional camera and a structured light source. An omnidirectional camera and structured light have distinctive advantages compared to other computer vision systems: the omnidirectional camera provides a wide viewing angle in a single shot, and the laser light source is easy to detect, so its information can be extracted from the image for further processing. Developed using Unity, the iLabit OmViSys simulator also integrates mobile robots and elements of the indoor environment, allows generating synthetic photorealistic datasets, and supports communication with third-party programs over the Transmission Control Protocol (TCP). iLabit OmViSys includes three primary screens that allow one to generate data for internal camera calibration, carry out experiments, and take measurements. A distinctive feature of the simulator is its versatility in terms of operating system support: Windows, macOS, and Linux.
Keywords: semantic data, structured light, omnidirectional camera, unity, simulator
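The TCP link to third-party programs mentioned above is a plain request/response exchange. The sketch below shows the shape of such an exchange with a stand-in server; the one-line command protocol is our own invention for illustration — the actual iLabit OmViSys protocol is not documented here:

```python
# Minimal TCP request/response exchange, with a fake "simulator" server.
import socket
import threading

def fake_simulator(server):
    """Stand-in for the simulator: answer one newline-terminated command."""
    conn, _ = server.accept()
    with conn:
        cmd = conn.makefile().readline().strip()
        reply = b"pose 0.0 0.0 0.0\n" if cmd == "get_pose" else b"error\n"
        conn.sendall(reply)

server = socket.create_server(("127.0.0.1", 0))     # bind to any free port
threading.Thread(target=fake_simulator, args=(server,), daemon=True).start()

with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"get_pose\n")
    answer = client.makefile().readline().strip()
print(answer)   # pose 0.0 0.0 0.0
```

Because the transport is bare TCP, any language with sockets can drive the simulator, which is what makes third-party dataset-generation scripts possible.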

19. Adaptive block-term tensor decomposition in visual question answering systems [No. 1, 2021]
Authors: M.N. Favorskaya (info@sibsau.ru) - Reshetnev Siberian State University of Science and Technology (Professor), Ph.D; V.V. Andreev (jcjet88@gmail.com) - Reshetnev Siberian State University of Science and Technology (Postgraduate Student);
Abstract: The paper proposes a method for dimensionality reduction of the internal data representation in deep neural networks used to implement visual question answering (VQA) systems, and reviews the tensor decomposition methods used for this problem in such systems. The task of these systems is to answer an arbitrary text question about a provided image or video sequence. A technical feature of these systems is the need to combine a visual signal (image or video sequence) with input data in text form. The differences between the input modalities make it reasonable to use different deep neural network architectures: most often, a convolutional neural network for image processing and a recurrent neural network for text processing. When the data are combined, the number of model parameters grows so quickly that finding methods for reducing the parameter count remains relevant even with modern equipment and the predicted growth of computational capabilities. Besides the technical limitations, an increase in the number of parameters can reduce the model's ability to extract meaningful features from the training set and increases the likelihood of fitting parameters to insignificant features and noise in the data. The method of adaptive tensor decomposition proposed in the paper optimizes, based on the training data, the number of parameters of the block-term tensor decomposition used for bilinear data fusion. The system was tested and the results were compared with other visual question answering systems that use tensor decomposition methods for dimensionality reduction.
Keywords: deep learning, tensor decomposition, vqa, artificial intelligence, dimensionality reduction
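Why the parameter count "explodes" under bilinear fusion, and what block-term decomposition buys, is plain arithmetic. The dimensions below are typical illustrative values, not the paper's model; the block-term count assumes R blocks with equal ranks, three factor matrices and a small core per block:

```python
# Parameter-count comparison: full bilinear tensor vs. block-term form.
def full_bilinear_params(d_text, d_img, d_out):
    return d_text * d_img * d_out                 # one dense 3-way tensor

def block_term_params(d_text, d_img, d_out, blocks, rank):
    # Per block: three factor matrices plus a rank^3 core tensor.
    per_block = rank * (d_text + d_img + d_out) + rank ** 3
    return blocks * per_block

full = full_bilinear_params(2048, 2048, 3000)     # ~1.3e10 parameters
btd = block_term_params(2048, 2048, 3000, blocks=15, rank=15)
print(full // btd)   # the factorized form is thousands of times smaller
```

The paper's adaptive method goes one step further and lets the training data choose the block ranks rather than fixing them by hand.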

20. Architecture and software implementation of a research testbed for a corporate Wireless Local Area Network [No. 1, 2021]
Authors: L.I. Abrosimov (AbrosimovLI@mpei.ru) - National Research University “Moscow Power Engineering Institute” (Professor), Ph.D; M.A. Orlova (OrlovaMA@mpei.ru) - National Research University “Moscow Power Engineering Institute” (Assistant); H. Khayou (hussein.khayou@gmail.com) - National Research University “Moscow Power Engineering Institute” (Postgraduate Student);
Abstract: The paper presents the architecture and implementation of a research testbed for obtaining and analyzing the probabilistic time characteristics of a corporate Wireless Local Area Network (WLAN). To develop this testbed, the authors obtained mathematical relations for calculating the guaranteed intensity of multimedia traffic. The research testbed architecture includes two independent blocks. The "Simulation testbed" block contains the corporate WLAN description and the multimedia traffic flows in the discrete-event simulation system ns-3. The "Analyzing simulation results" block contains programs for analyzing transmitted traffic files and simulation results and programs for calculating performance characteristics; it is written in Python 3, and the analysis of the transmitted traffic files uses the pyshark library. The paper also contains the analytical equations of the WLAN model used in the "Analyzing simulation results" block. These equations allow determining the maximum intensity of delivered packets for a prescribed guaranteed packet delivery time, for wireless communication channels using a prescribed channel protocol. The software implementation of the research testbed makes it possible to obtain the dependence of the guaranteed multimedia traffic intensity on the specified parameters: the WLAN structure, the settings of the wireless channel protocols, and channel access control. The developed testbed operates in two modes. In the development mode for a new WLAN, when the known parameters are the equipment passport data, the logical characteristics of the protocols, and the expected traffic characteristics, a full set of functional modules and blocks is used, which allows both matching traffic with transmission and processing resources and ensuring the specified WLAN performance.
In operation mode, when monitoring provides the actual characteristics of traffic and protocols, the testbed allows the WLAN administrator to test the WLAN performance and traffic intensity. This mode uses a limited set of modules, requires much less time to evaluate the WLAN performance, provides the ability to adaptively change the WLAN settings, and yields WLAN performance characteristics that meet the QoS requirements.
Keywords: network simulator software, research testbed architecture, wlan, productivity, media access protocol
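The testbed's headline metric — how many packets per second arrive within the guaranteed delivery time — can be computed from per-packet timestamps. A sketch of that post-processing step; the trace below is synthetic, not the ns-3/pyshark output analyzed in the paper:

```python
# Guaranteed traffic intensity from (send, receive) timestamp pairs.
def guaranteed_intensity(timestamps, deadline):
    """Packets per second among those delivered within the deadline."""
    delivered = [(s, r) for s, r in timestamps if r - s <= deadline]
    if not delivered:
        return 0.0
    span = max(r for _, r in delivered) - min(s for s, _ in delivered)
    return len(delivered) / span if span > 0 else float("inf")

# Five packets, times in seconds; the third misses a 10 ms delivery deadline.
trace = [(0.0, 0.004), (0.1, 0.103), (0.2, 0.250), (0.3, 0.305), (0.4, 0.402)]
print(guaranteed_intensity(trace, deadline=0.010))   # 4 packets over 0.402 s
```

Sweeping the offered load in simulation and recomputing this value traces out the dependence of guaranteed intensity on the WLAN parameters that the testbed is built to report.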
