ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)


Articles of journal issue № 4, 2017


11. Using concept maps for rule-based knowledge bases engineering [№ 4, 2017]
Authors: N.O. Dorodnykh, A.Yu. Yurin
Visitors: 11885
Using conceptual models in the form of concept maps for engineering rule-based knowledge bases of intelligent systems remains relevant. This relevance demands the development of specialized algorithms and software. This paper considers an approach to prototyping rule-based knowledge bases of expert systems based on the analysis of IHMC CmapTools concept maps. The approach is based on extracting the structural elements of concept maps from CXL (Concept Mapping Extensible Language) files and transforming them into the elements of a programming language, in particular, the C Language Production System (CLIPS). The paper describes the main stages of the approach and analyzes the constructions of CXL files (in particular, concept-list, linking-phrase-list, connection-list). It also presents an illustrative example of the transformations. A distinctive feature of the proposed approach is using an ontological model as a universal intermediate form of knowledge representation derived from concept maps, which is independent of the knowledge base programming language. Another feature is the authors' graphic notation, the Rule Visual Modeling Language (RVML), which provides visualization and modification of cause-effect relations as logical rules. The considered algorithms are implemented as a part of a software research prototype called the Personal Knowledge Base Designer (PKBD). Currently, it is used in the educational process at the Irkutsk National Research Technical University (INRTU) in the "CASE-tools" and "Software tools of information systems" courses.
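
As a rough illustration of the CXL-to-CLIPS transformation described above, the sketch below parses a simplified, namespace-free CXL fragment and maps each concept → linking phrase → concept chain straight to a CLIPS rule. It deliberately skips the intermediate ontological model and the RVML step, and the causal reading of the linking phrase is an assumption; real CmapTools CXL files are also XML-namespaced.

    # Minimal sketch (Python): simplified CXL fragment -> CLIPS rules.
    import xml.etree.ElementTree as ET

    CXL = """<cmap><map>
      <concept-list>
        <concept id="c1" label="engine-overheats"/>
        <concept id="c2" label="coolant-level-low"/>
      </concept-list>
      <linking-phrase-list>
        <linking-phrase id="l1" label="is caused by"/>
      </linking-phrase-list>
      <connection-list>
        <connection from-id="c1" to-id="l1"/>
        <connection from-id="l1" to-id="c2"/>
      </connection-list>
    </map></cmap>"""

    root = ET.fromstring(CXL)
    labels = {n.get("id"): n.get("label") for n in root.iter("concept")}
    phrases = {n.get("id") for n in root.iter("linking-phrase")}
    sources, targets = {}, {}              # concepts around each linking phrase
    for c in root.iter("connection"):
        f, t = c.get("from-id"), c.get("to-id")
        if t in phrases: sources.setdefault(t, []).append(f)
        if f in phrases: targets.setdefault(f, []).append(t)

    # Assumed reading: "A is caused by B" becomes "if B holds, assert A".
    for p in phrases:
        for a in sources.get(p, []):
            for b in targets.get(p, []):
                print(f"(defrule infer-{labels[a]}\n"
                      f"   ({labels[b]})\n   =>\n   (assert ({labels[a]})))")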

12. A methodical approach to forming functional requirements for a computer attacks protection system for automated control systems and its software implementation [№ 4, 2017]
Author: Drobotun E.B.
Visitors: 6695
One of the main stages of developing and building secured automated control systems for various purposes is forming the requirements for the developed automated system, including security requirements against computer attacks and other information technology impacts. Well-developed and justified functional requirements for a computer attack protection system make it possible, on the one hand, to provide the necessary level of protection of the automated system and, on the other hand, to minimize the consumption of the protected system's computing and human resources, which are limited and finite in any automated system. One possible way to form and justify optimal functional requirements for a computer attack protection system is a risk-oriented approach to forming and reasoning about these requirements. The approach includes identifying the severity and probability of possible security threats against the protected automated system. The article offers a methodical approach to forming functional requirements for computer attack protection systems for automated control systems. It is based on a risk assessment of information security threats in the automated system and threats to its safe operation. Applying the proposed approach will allow forming optimal functional requirements for a computer attack protection system for automated control systems for various purposes, and it will help to achieve optimal resource allocation in an automated system to ensure the functioning of the protection system.
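
A hedged sketch of the risk-oriented idea: each threat is scored by probability and severity, and a protection function is required only when the risk it covers justifies its resource cost. The threat names, weights, coverage sets, and the selection rule below are invented for illustration; the paper's actual risk model may differ.

    # Toy risk-oriented selection of protection functions (illustrative only).
    # risk = probability * severity; a function is required when the total
    # risk it covers exceeds its resource cost times a chosen tradeoff factor.
    threats = {                     # name: (probability 0..1, severity 1..10)
        "network flooding":  (0.6, 5),
        "malware injection": (0.3, 9),
        "privilege abuse":   (0.2, 7),
    }
    functions = {                   # name: (covered threats, resource cost)
        "traffic filtering": ({"network flooding"}, 2.0),
        "integrity control": ({"malware injection", "privilege abuse"}, 3.0),
    }
    risk = {name: p * s for name, (p, s) in threats.items()}
    TRADEOFF = 1.0                  # assumed cost-vs-risk scaling factor
    for name, (covered, cost) in functions.items():
        covered_risk = sum(risk[t] for t in covered)
        verdict = "require" if covered_risk >= TRADEOFF * cost else "skip"
        print(f"{name}: covered risk {covered_risk:.1f}, cost {cost} -> {verdict}")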

13. Ontology design based on non-relational database for intelligent decision support system for medical purposes [№ 4, 2017]
Authors: A.P. Eremeev, S.A. Ivliev
Visitors: 10687
As the volume of data in healthcare systems increases, it becomes possible to use complex methods of software data processing to support decision-making in complex problem situations. Since the environment is strongly heterogeneous (different forms of reporting, file formats, an iterative process of working with the patient), a flexible system is required that will work effectively under the described conditions. The paper presents building and using an ontology based on a non-relational database for intelligent decision support systems aimed at investigating and diagnosing complex pathologies. It also describes constructing a software interface for working with this ontology. The created ontology is oriented both to storing data on patient examinations conducted by a doctor and to medical assessment reports. In this case, a non-relational database allows operating on data in heterogeneous environments (e.g. medical research) more efficiently. The paper discusses the main advantages of non-relational databases over traditional relational databases, such as more convenient data handling within a given ontology, expanding and supplementing it, and extracting the required data upon request. The paper also describes the software implementation of the proposed approach and illustrates its work. Based on the results of the research, the authors suggest ways of further development in this area: generalizing the obtained results to solve other similar problems of medical diagnostics, as well as using new methods, such as semantic search over case records, in the implemented intelligent decision support system.
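
To make the schema-flexibility argument concrete, here is a minimal sketch of ontology fragments as schema-free documents, with an in-memory dict standing in for a document store such as MongoDB. All identifiers, field names, and the find() helper are invented for illustration and are not the paper's actual data model.

    # Sketch: heterogeneous medical records as schema-less documents.
    ontology = {
        "exam:eeg:001": {
            "type": "Examination", "method": "EEG", "patient": "p42",
            "findings": {"alpha_rhythm": "reduced"},   # free-form payload
        },
        "report:001": {
            "type": "AssessmentReport", "about": "exam:eeg:001",
            "conclusion": "suspected focal pathology",
        },
    }

    def find(store, **conditions):
        """Return documents whose top-level fields match all conditions."""
        return [doc for doc in store.values()
                if all(doc.get(k) == v for k, v in conditions.items())]

    for doc in find(ontology, type="Examination", patient="p42"):
        print(doc["method"], doc["findings"])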

14. Program synthesizing based on a graph-analytic model description [№ 4, 2017]
Authors: A.G. Zykov, I.V. Kochetkov, V.I. Polyakov, E.G. Chistikov
Visitors: 8033
The quantity and volume of developed software grow annually, which stimulates developers to create new tools that reduce the time needed to develop the next product, including test automation tools. The demand for new test automation tools increases with the growing number of systems using different programming languages, so the task of searching for universal cross-language testing tools remains highly relevant. The paper considers verification of computing processes based on a graph-analytic model (GAM). The key idea of this approach is that the developed program is converted into a GAM description and compared to the reference GAM description from which it was created. Based on the comparison results, the program is either recognized as correct or sent back for revision. A bottleneck of this approach is developing the program based on a GAM and the potentially iterative nature of the process. The authors suggest a special utility to solve this problem: it synthesizes programs from reference descriptions. The paper considers an algorithm for converting a GAM description object model into a text representation of C# operators and expressions. The research objective is automating the synthesis of C# programs from a group of GAM descriptions of a computing process. Within the research, we have created a tool for transforming GAM descriptions into program source code. We have checked the developed utility on GAM descriptions of an array processing program (sorting, reversal). The synthesized executable module has been successfully tested in the Windows 10 operating system environment. In the future, we plan to develop the utility along with new versions of the description language to extend the capabilities of the synthesized programs.
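
A toy generator in the spirit of the approach: a linear node list standing in for a GAM description is turned into C# source text. The node vocabulary ("declare", "for", ...) is invented here and is not the paper's actual GAM description format.

    # Sketch (Python): a GAM-like node list -> C# statements as text.
    gam = [
        ("declare", "int", "sum", "0"),
        ("for", "i", "0", "a.Length"),
        ("assign", "sum", "sum + a[i]"),
        ("end",),
    ]

    def emit(nodes):
        lines, depth = [], 1
        for node in nodes:
            pad = "    " * depth
            if node[0] == "declare":
                lines.append(f"{pad}{node[1]} {node[2]} = {node[3]};")
            elif node[0] == "assign":
                lines.append(f"{pad}{node[1]} = {node[2]};")
            elif node[0] == "for":
                _, v, lo, hi = node
                lines.append(f"{pad}for (int {v} = {lo}; {v} < {hi}; {v}++)")
                lines.append(pad + "{")
                depth += 1
            elif node[0] == "end":       # close the innermost block
                depth -= 1
                lines.append("    " * depth + "}")
        return "\n".join(lines)

    print("int Sum(int[] a)\n{\n%s\n    return sum;\n}" % emit(gam))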

15. Software suite for modeling a radar recognition system [№ 4, 2017]
Authors: T.V. Kalinin, A.V. Bartzevich, S.A. Petrov, D.V. Khrestinin
Visitors: 8311
The paper presents a software suite for modeling a radar recognition system that, being a statistical mathematical model of the system, evaluates the influence of different factors on its operating efficiency. The design of effective radar recognition systems requires both theoretical research and methods of mathematical and physico-mathematical modeling. Statistical decision theory is a common methodological basis for solving many radar tasks, and solving the problems of recognizing various airborne objects also relies on probability theory and statistics. The statistical model is implemented in MATLAB; the software algorithm has been developed by studying the corresponding theoretical background. The software suite consists of subroutines that follow the operating principle of recognition systems. The first subroutine reflects the processes in the measuring unit of a radar station and allows estimating the influence of the unit's characteristics on the accuracy of measuring target features. The second subroutine simulates the process of radar recognition and makes it possible to rate its effectiveness depending on the selected characteristics of the measuring radar station unit with a given class alphabet and vocabulary of indicators. The third subroutine allows estimating the information capacity of the selected indicators in order to find the most effective set and create an active vocabulary of indicators. The software application has a user-friendly graphical interface and supports conducting research with the results being saved to a file.
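
The core statistical idea can be sketched in a few lines: a measurement error set by the measuring unit degrades class separation, and recognition probability is estimated by Monte Carlo simulation. The two classes, their mean feature values, and the error level below are invented for illustration (the paper's model is in MATLAB and far richer).

    # Sketch: measurement error vs. recognition probability (Monte Carlo).
    import random

    CLASSES = {"aircraft": 10.0, "missile": 2.0}  # mean feature value per class
    SIGMA = 2.5   # std. dev. of the measuring unit's error (assumed)

    def decide(measured):
        # Equal priors and equal variances: choose the nearest class mean.
        return min(CLASSES, key=lambda c: abs(measured - CLASSES[c]))

    def recognition_probability(true_class, trials=100_000):
        hits = sum(decide(random.gauss(CLASSES[true_class], SIGMA)) == true_class
                   for _ in range(trials))
        return hits / trials

    for name in CLASSES:
        print(f"P(correct | {name}) = {recognition_probability(name):.3f}")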

16. A protocol for a decentralized storage with redundant encoding [№ 4, 2017]
Authors: P.K. Karasyuk, D.S. Miginsky
Visitors: 4871
Many distributed data storages use replication, which leads to a significant decrease in effective disk space. Applying redundant coding methods instead of replication for data safety can solve this problem. Due to the CAP theorem, many storages abandon strong consistency in favor of eventual consistency, which is a guarantee that data will become consistent within a finite time after the last external modification. The transition from replication to redundant encoding under the eventual consistency paradigm introduces complexity associated with the need to keep enough mutually consistent fragments of the code words for recovery. The article proposes a Dynamo-based protocol for distributed data storage. It computes object checksums using Reed-Solomon codes and uses them later for recovery if necessary, providing the same level of fault tolerance with lower redundancy. The protocol supports concurrent execution of several read and write operations on the same object. It tracks node failures and considers them in further execution. The protocol tolerates a fixed number of permanent node failures and arbitrary transient failures without data loss or denial of service. The protocol was tested in a distributed environment simulator with preselected scenarios of failures and user messages. The article demonstrates the protocol behavior in some of these scenarios.
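
The "same fault tolerance with lower redundancy" claim rests on standard erasure-coding arithmetic, sketched below with illustrative parameters (the paper's actual code parameters are not given here).

    # Back-of-the-envelope: replication vs. Reed-Solomon RS(k, m).
    def replication(copies):
        # n full copies: survives (copies - 1) node losses at copies-x storage.
        return copies - 1, float(copies)

    def reed_solomon(k, m):
        # RS(k, m): k data + m parity fragments; any k of (k + m) fragments
        # recover the object, so it survives m losses at (k + m) / k overhead.
        return m, (k + m) / k

    print("3-way replication: tolerates %d failures, %.2fx overhead" % replication(3))
    print("RS(k=4, m=2):      tolerates %d failures, %.2fx overhead" % reed_solomon(4, 2))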

17. Applying meta-analysis methods in liver failure diagnosis and treatment [№ 4, 2017]
Authors: B.A. Kobrinsky, A.I. Molodchenkov, N.A. Blagosklonov, A.V. Lukin
Visitors: 8027
The article considers the principles that form the basis of various meta-analysis methods and the main differences between the tools for applying them. It presents and discusses clinical variants of liver failure and their descriptions in the literature of various countries and regions of the world (Russia, Asia, Europe and North America). The authors propose applying a set-theoretic model whose attributes take into account the features of the character and course of the disease given its different nature and at different periods of its progress. The obtained data will be used in a subsequent meta-analysis. This stage is the main one: it forms subgroups of patients based on the similarity of clinical manifestations and the results of applying different treatment regimens. By choosing a meta-analysis method to evaluate the world literature data, this approach will make it possible to offer the most appropriate treatment for a particular patient, depending on the nature of the relevant changes in the diagnosis of a certain form of hepatic insufficiency. It will provide a transition (by analogy) to the directed use of certain medical means.
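
One simple way to read the subgroup-forming step is set-theoretic similarity over patient attribute sets, sketched below with Jaccard similarity. The attributes, the threshold, and the greedy grouping rule are all invented for illustration; the paper's model also accounts for the nature of the disease and the period of its progress.

    # Sketch: grouping patients by similarity of clinical attribute sets.
    patients = {
        "p1": {"jaundice", "ascites", "encephalopathy"},
        "p2": {"jaundice", "ascites"},
        "p3": {"coagulopathy", "encephalopathy"},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    THRESHOLD = 0.5          # assumed similarity cutoff
    groups = []
    for pid, attrs in patients.items():
        for group in groups:  # join the first group the patient fits
            if all(jaccard(attrs, patients[other]) >= THRESHOLD for other in group):
                group.append(pid)
                break
        else:
            groups.append([pid])
    print(groups)            # e.g. [['p1', 'p2'], ['p3']]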

18. Methods of representing text information in automated rubrication of short text documents [№ 4, 2017]
Author: P.Yu. Kozlov
Visitors: 9010
The paper shows that citizens' electronic messages (complaints, appeals, proposals, etc.) have a number of specific features in terms of the possibility of their automated processing: a usually small document size, which makes statistical analysis difficult; a lack of structure, which complicates extracting information; a large number of grammatical and syntactic errors, which requires several additional processing steps; and thesaurus non-stationarity (the composition and importance of words), which depends on the issuance of new normative documents, officials' and politicians' speeches, etc. All this leads to the necessity of using procedures for dynamic classification of rubrics. The paper describes the stages of automated analysis and methods for formalizing text documents. It also proposes a rubrication method that uses the results of the morphological and syntactic stages with modified linguistic markup of text documents. The syntactic parser is MaltParser or LinkGrammar, which build dependency trees for all sentences in a document. The paper shows the standard linguistic markup of MaltParser and LinkGrammar applied to short text documents, as well as a modification of the LinkGrammar markup for rubrication. Using existing software for the additional stages of analysis reveals the problem of diverse linguistic markup. For example, most syntactic parsers represent each output sentence as a dependency tree described by linguistic markup. For further classification and assignment of weighting factors, the linguistic markup should be modified, which increases the dimension of the metric. The developed rubrication method takes into account the expert evaluation of the importance of words for each rubric, as well as the syntactic role of words in sentences. The paper shows a diagram of the process of automated rubrication of complaints and proposals in the developed analysis system. It also describes an experiment confirming that using syntactic parsers in such systems is expedient and increases rubrication accuracy. There are recommendations to improve the accuracy of the developed method and to use fuzzy set theory and cognitive modeling methods to address thesaurus non-stationarity in systems that depend on the issuance of normative documents and officials' speeches.
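
The scoring idea (expert term weights per rubric, boosted by the word's syntactic role from a dependency parse) can be sketched as below. The weights, role factors, and the toy "parsed" message are invented; in the paper this input comes from MaltParser / LinkGrammar markup.

    # Sketch: rubric scoring = expert term weight * syntactic-role factor.
    RUBRIC_WEIGHTS = {                       # expert weights per rubric (invented)
        "housing":   {"roof": 0.9, "leak": 0.8, "repair": 0.5},
        "transport": {"bus": 0.9, "route": 0.7, "repair": 0.3},
    }
    ROLE_FACTOR = {"subject": 1.5, "object": 1.2, "other": 1.0}

    # (lemma, syntactic role) pairs, as a dependency parser might label them.
    parsed_message = [("roof", "subject"), ("leak", "object"), ("repair", "other")]

    def scores(tokens):
        return {rubric: sum(w.get(lemma, 0.0) * ROLE_FACTOR[role]
                            for lemma, role in tokens)
                for rubric, w in RUBRIC_WEIGHTS.items()}

    s = scores(parsed_message)
    print(s, "->", max(s, key=s.get))        # expected winner: "housing"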

19. Mathematical models of rheograms of states in TableCurve 2D/3D programs as a basis of the intelligent system for managing structuring processes of multicomponent elastomer composites [№ 4, 2017]
Authors: A.S. Kuznetsov, V.F. Kornyushko
Visitors: 8023
Nowadays, most elastomeric products are the result of the structuring process: a chemical process of spatial cross-linking and a technological process of creating a finished product from elastomers. Modern industrial elastomer production is a complex multi-stage process; in general, the elastomer production system is an example of a chemical-technological system with a serial connection of elements. The paper considers a unified chemical-technological elastomer production system, as well as the chemical-technological processes of mixing and structuring multicomponent elastomeric composites as components of a chemical-technological system. Nowadays, the requirements for the properties of elastomeric products are becoming increasingly strict. To obtain quality products with the required set of properties, it is necessary to follow the sequence and parameters of all the preparatory and technological operations and stages of rubber production. Improving the quality of finished products is possible by applying methods of control and management of mixing and structuring processes, system analysis of production processes, their detailed verbal and mathematical description, as well as information support for decision making while controlling mixing and vulcanization processes based on the analysis of rheometric curves and information databases. Organizing the elastomeric composite structuring process is impossible without information support based on modern information technologies and systems. The paper considers the constructed intelligent information system for managing complex chemical-technological processes of structuring multicomponent elastomeric composites based on the analysis and modeling of rheographic information. It also provides mathematical modeling of rheograms of the state of multicomponent elastomeric composites using nonlinear model parameters. The paper gives quality criteria for the obtained models, as well as methods for visualizing the main indicators of the multicomponent elastomeric composite structuring process using modern software products. It is shown that analytical tools are an essential part of the integrated intelligent information management system for complex chemical-technological processes of multicomponent elastomeric composite structuring.
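
As an illustration of nonlinear rheogram modeling, the sketch below fits a first-order cure-kinetics curve, S(t) = S_min + (S_max - S_min)(1 - exp(-k(t - t0))) for t >= t0, to synthetic rheometer data and reports R^2 as a simple quality criterion. This is one common rheogram shape; the models actually selected in the TableCurve 2D/3D programs for the paper may differ, and all data here is synthetic.

    # Sketch: nonlinear fit of a rheometer (cure) curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def cure_curve(t, s_min, s_max, k, t0):
        return s_min + (s_max - s_min) * (1 - np.exp(-k * np.clip(t - t0, 0, None)))

    t = np.linspace(0, 30, 60)                     # time, min
    true = cure_curve(t, 2.0, 12.0, 0.35, 3.0)     # "measured" torque, dN*m
    noisy = true + np.random.default_rng(0).normal(0, 0.15, t.size)

    popt, _ = curve_fit(cure_curve, t, noisy, p0=[1, 10, 0.1, 1])
    print("fitted S_min, S_max, k, t0:", np.round(popt, 2))

    # R^2 as a simple quality criterion for the obtained model.
    resid = noisy - cure_curve(t, *popt)
    print("R^2 =", round(1 - resid.var() / noisy.var(), 4))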

20. A method for improving interpretability of regression models based on a three-step building cognition model [№ 4, 2017]
Author: Kulikovskikh I.M.
Visitors: 6906
Increasing the generalization performance of regression models leads to more effective solutions for the problems of recognition, prediction, and extraction of social and engineering behavior strategies. A number of known methods for improving generalization properties are computationally effective; however, they reduce the interpretability of a model and its results. This study approaches the problem by looking at the methods of regression and classification from the points of view of digital filtering and psychometrics. Considering how these areas handle the interpretability problem, the research aims to define a method that improves the interpretability of regression models by promoting a learner's internal uncertainty in machine learning. To solve the problem, the author has developed a three-step model of building cognition. This model reflects direct relations among digital filtering, psychometrics, and machine learning: these research areas employ the same sources of internal uncertainty, which makes it possible to create consistent mathematical models connecting them. For this purpose, the paper considers internal uncertainty from a cognitive point of view as the processes of forgetting and guessing. The findings of this study provide implementations of the following steps in accordance with the three-step model: a filter synthesis step, a psychological assessment step, and an integrated regression/classification step. While the first step models an engineering environment and the second step presents a social environment, the integrated step helps to create a social-engineering environment. In addition, in contrast to the social environment, which may simulate human cognition, the social-engineering environment seems promising for introducing machine cognition. The proposed implementations allow formalizing the method for improving the interpretability of regression models by changing from one kind of cognition to the other.
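
The two uncertainty sources named above (forgetting and guessing) can be combined in a small sketch: a guessing floor in the style of the three-parameter logistic (3PL) item-response model from psychometrics, plus exponential forgetting over elapsed time. The exact models in the paper's three-step scheme are not given here; all parameters below are illustrative.

    # Sketch: response probability with forgetting and guessing.
    import math

    def p_correct(ability, difficulty, guess=0.25, decay=0.1, elapsed=0.0):
        """Probability of a correct response with forgetting and guessing."""
        retained = ability * math.exp(-decay * elapsed)    # forgetting over time
        logistic = 1.0 / (1.0 + math.exp(-(retained - difficulty)))
        return guess + (1.0 - guess) * logistic            # 3PL-style guessing floor

    for days in (0, 5, 20):
        print(f"after {days:2d} days: P = {p_correct(1.5, 0.0, elapsed=days):.3f}")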
