ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)


Next issue: № 2
Publication date: 16 June 2024

Articles of journal issue № 4, 2015

Order result by:
Publication date | Title | Authors

21. A fault injector for testing a SoC processor against single event upsets [№ 4, 2015]
Author: Chekmarev S.A.
Visitors: 7884
Fault injection into microprocessor memory is used to test fault detection and correction facilities; the injected faults simulate the effect of heavy charged particles in space. This paper describes a developed fault injector IP core that introduces single event upsets into the memory of a system-on-a-chip (SoC) microprocessor. The paper presents the injector's block diagram, module structure and state machine, and describes its operation in all injection modes. The injector can introduce faults into the register file, cache and external memory in several modes: with or without stopping the processor, and with fault locations chosen either at random or predetermined by a coverage plan. The fault injector IP core was applied to the LEON3 SoC processor, and the article describes the fault injection procedure for LEON3 memory. On-chip memory is accessed through the DSU, a slave interface on the AMBA AHB bus: using its registers, the fault injector can stop the processor, modify its internal memory contents, and resume the running software. External LEON3 memory is accessed through the memory controller. During an experiment the injector collects statistics on injected and detected faults; analyzing these results leads to a conclusion about the microprocessor's sensitivity to single event upsets in memory. After testing, the fault injector can be removed from the SoC without leaving any traces.
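As a rough illustration of the injection-and-detection idea (a pure software analogy only: the paper's injector is a hardware IP core accessed over the DSU/AMBA AHB, while everything below — the byte-array "memory", the per-byte parity check, the function names — is a hypothetical stand-in):

```python
import random

def inject_seu(memory: bytearray, rng: random.Random) -> int:
    """Flip one randomly chosen bit (a single event upset) in `memory`
    and return the affected address."""
    addr = rng.randrange(len(memory))
    memory[addr] ^= 1 << rng.randrange(8)  # one-hot XOR flips exactly one bit
    return addr

rng = random.Random(42)
mem = bytearray(1024)
parity = [bin(b).count("1") % 2 for b in mem]  # reference parity per byte

detected = 0
for _ in range(100):
    addr = inject_seu(mem, rng)
    if bin(mem[addr]).count("1") % 2 != parity[addr]:
        detected += 1
        parity[addr] = bin(mem[addr]).count("1") % 2  # "scrub": resync after detection
print(detected)  # 100: single-bit parity detects every single upset
```

The experiment mirrors the paper's statistics gathering: injected faults versus detected faults, here with a detection scheme (parity) that catches every single-bit upset by construction.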

22. An approach to software testing management system development [№ 4, 2015]
Authors: Kornyushko V.F., Kostrov A.V., Porodnikova P.A.
Visitors: 9733
The paper considers the problem of developing an approach to controlling testing management system (TMS) development as part of the software development business process management system in a project company. The authors suggest treating software testing as an independent business process, so that the software development management system contains a testing management subsystem. The paper reviews the features of testing management in typical software development models and lists the variants of basic testing process implementation: Review (R), Test Design (D), Test Execution (E) and Test Report (O). The article shows the role of TMS maturity evaluation in development management processes and provides an approach to this evaluation. The approach is based on evaluating TMS maturity for different software development models, in particular by expert assessment; a classification of project management maturity stages serves as the methodological basis. The paper considers the features and applicability of both direct and multi-criteria expert evaluation. The authors suggest mapping a verbal description of TMS maturity stages onto a set of partial quantitative criteria; some of them can be determined by instrumental methods, while the others are assessed by field experts in a multi-criteria procedure. The paper considers single-level algorithms for deriving a global maturity criterion from the set of partial-criterion assessments: computing the length of the assessment vector in Euclidean space, and computing a weighted sum of the partial assessments. It also proposes a two-level ordering of the partial criteria with corresponding algorithms for processing their assessments, as well as a visualization of TMS maturity evaluation results. The described approach allows assessing software TMS maturity and managing its development.
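The two single-level aggregation variants named in the abstract can be sketched in a few lines (the scores, weights and the 0..1 scale below are illustrative assumptions, not values from the paper):

```python
import math

def euclidean_global(scores):
    """Global maturity criterion as the length of the assessment vector in Euclidean space."""
    return math.sqrt(sum(s * s for s in scores))

def weighted_global(scores, weights):
    """Global maturity criterion as the weighted sum of partial-criterion assessments."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical partial-criterion scores on a 0..1 scale and expert weights.
scores = [0.8, 0.6, 0.9, 0.5]
weights = [0.4, 0.2, 0.3, 0.1]
print(round(euclidean_global(scores), 3))          # vector length, ≈ 1.435
print(round(weighted_global(scores, weights), 3))  # weighted sum, 0.76
```

The vector-length variant treats all criteria equally, while the weighted sum lets experts emphasize individual criteria — the choice between them is exactly the kind of decision the paper's two-level ordering is meant to systematize.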

23. The problems of ensuring distributed information systems sustainability [№ 4, 2015]
Author: Yesikov D.O.
Visitors: 8622
The paper considers ways to ensure the sustainability of distributed information systems. It formalizes the corresponding problems as a complex of models: a mathematical optimization model for distributing functional-task software elements over network nodes; a mathematical optimization model for distributing information resources over data storage and processing centers; a mathematical model for determining a reasonable expenditure level for building data storage facilities; a mathematical optimization model of technical means for data storage and processing; and a mathematical optimization model for distributing information resource backups over data storage and processing centers. The article shows that these belong to discrete optimization problems and gives their characteristics. It also proposes a procedure for applying the complex of mathematical models to ensure distributed information systems sustainability, indicating the input and output data of each model. To solve the above problems the author suggests a software package that implements the branch and bound method for discrete optimization problems with Boolean variables, using an algorithm that fixes the branching order of variables on the basis of duality theory. Applying duality theory within the branch and bound method significantly improves the pruning of unpromising options and reduces solution time by a factor of about eight on average compared with the traditional method. The paper discusses practical aspects of using the developed complex of mathematical models during the design, maintenance and update stages of the distributed information system life cycle.
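The core mechanism — branch and bound over Boolean variables with a relaxation-based bound that prunes unpromising branches — can be sketched on a toy 0/1 knapsack problem (this is a generic illustration with an LP-relaxation bound, not the paper's distribution models or its duality-based branching order):

```python
def lp_bound(values, weights, i, value, room):
    """Upper bound from the LP relaxation: greedily fill the remaining room,
    taking a fraction of the first item that does not fit."""
    bound = value
    for v, w in zip(values[i:], weights[i:]):
        if w <= room:
            room -= w
            bound += v
        else:
            bound += v * room / w   # fractional item: allowed in the relaxation only
            break
    return bound

def solve(values, weights, capacity):
    # Sort by value density so the greedy fractional bound is the true LP optimum.
    order = sorted(range(len(values)), key=lambda i: -values[i] / weights[i])
    values = [values[i] for i in order]
    weights = [weights[i] for i in order]
    best = 0

    def dfs(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(values) or lp_bound(values, weights, i, value, room) <= best:
            return                                             # prune: cannot beat incumbent
        if weights[i] <= room:
            dfs(i + 1, value + values[i], room - weights[i])   # branch x_i = 1
        dfs(i + 1, value, room)                                # branch x_i = 0

    dfs(0, 0, capacity)
    return best

print(solve([60, 100, 120], [10, 20, 30], 50))  # optimum value 220
```

The pruning test `bound <= best` is where a tighter bound (such as one derived from duality theory, as in the paper) pays off: the tighter the bound, the more subtrees are cut without enumeration.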

24. Finding solutions with a modified Rete algorithm for fuzzy expert systems [№ 4, 2015]
Authors: Mikhailov I.S., Zaw Min Htike
Visitors: 8854
The paper considers the basic concepts of fuzzy production expert systems, which are based on a set of rules stated in terms of linguistic variables. As a fuzzy inference tool, the authors propose a modification of the Rete algorithm for a fuzzy rule base. The modification accelerates system operation by computing identical conditions shared by several rules only once, and it allows formulating rules and conclusions in a limited natural language. The formal decision tree model of the modified Rete algorithm for a fuzzy production knowledge base consists of a set of condition vertices, solution vertices, relationships between vertices, and relations describing the fuzzy expert system rules. The algorithm processes the rules from the fuzzy rule base and converts them into this formal decision tree model. The modification differs from the classical Rete algorithm in that it operates on fuzzy variables: at each stage, fuzzy truth values of the decision tree vertices are computed using fuzzy operators. Identical conditions are merged during decision tree construction, which makes processing the tree faster than sequentially scanning the expert system rules. The paper describes a working example of a production fuzzy expert system based on the proposed Rete algorithm modification and demonstrates the effectiveness of the proposed method.
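The key efficiency idea — evaluate each distinct fuzzy condition once and share its truth value among all rules that use it — can be sketched as a memoized evaluator (a minimal stand-in for a Rete alpha network, not the authors' implementation; the membership degrees and rule base below are hypothetical):

```python
# Identical fuzzy conditions shared by several rules are evaluated once and
# cached; rule truth is combined with the min t-norm (fuzzy AND).

def make_evaluator(facts):
    cache, calls = {}, [0]
    def truth(condition):
        if condition not in cache:
            calls[0] += 1                        # one real evaluation per distinct condition
            var, term = condition
            cache[condition] = facts[var][term]  # membership degree in [0, 1]
        return cache[condition]
    return truth, calls

# Hypothetical membership degrees produced by a fuzzification step.
facts = {"temperature": {"high": 0.7}, "pressure": {"low": 0.4}}

rules = [
    [("temperature", "high"), ("pressure", "low")],  # IF T is high AND P is low ...
    [("temperature", "high")],                       # reuses the first condition
]

truth, calls = make_evaluator(facts)
firing = [min(truth(c) for c in rule) for rule in rules]
print(firing, calls[0])  # [0.4, 0.7] 2 -- the shared condition was computed once
```

Sequentially scanning the rules would evaluate `("temperature", "high")` twice; the cache performs the same merging of identical condition vertices that the modified Rete decision tree does structurally.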

25. Development and research of a parallel ant colony algorithm for block cryptosystem cryptanalysis [№ 4, 2015]
Authors: Chernyshev Yu.O., Sergeev A.S., Ryazanov A.N., Kapustin S.A.
Visitors: 9847
The article considers the possibility of a parallel implementation of ant colony algorithms for block cryptosystem cryptanalysis. The authors note the relevance of the new scientific direction of "natural computing" and provide a block diagram of DES cryptanalysis using the ant colony method. The parallel version of the cryptanalysis algorithm is described on the basis of a data-logical flow graph and matrices of sequence, logical incompatibility and independence. The minimum number of processors for the cryptanalysis algorithm is determined using a technique that includes finding a maximal set of mutually independent operators in the independence matrix and consecutively adding fictive links to the data-logical graph, provided these links do not increase the critical path length. A distinctive feature of bioinspired cryptanalysis methods is the possibility of using the encryption (decryption) algorithm itself as the objective function for evaluating keys generated by genetic operations. Therefore, when bioinspired methods are used, the process of private key recovery (for example, in type 2 cryptanalysis) depends not on the encryption complexity but on the bioinspired method itself, which must provide a sufficient variety of key generation. This supports the relevance of research into applying bioinspired algorithms to block cryptosystem cryptanalysis.
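The basic scheme — ants construct candidate keys guided by pheromone, and the cipher itself serves as the objective function — can be sketched on a deliberately trivial target (a bitwise-XOR "cipher" stands in for DES; the colony parameters, elitist deposit and every name below are illustrative assumptions, not the paper's algorithm):

```python
import random

def toy_encrypt(plain_bits, key_bits):
    """Stand-in "block cipher": bitwise XOR. The paper targets DES; a real
    cipher would simply replace this function in the objective below."""
    return [p ^ k for p, k in zip(plain_bits, key_bits)]

def aco_keysearch(plain, cipher, n_bits, ants=20, iters=60, rho=0.1, seed=1):
    rng = random.Random(seed)
    tau = [[1.0, 1.0] for _ in range(n_bits)]    # pheromone per (position, bit value)
    best_key, best_fit = [0] * n_bits, -1.0
    for _ in range(iters):
        for _ in range(ants):
            # Each ant builds a candidate key bit by bit, guided by pheromone.
            key = [0 if rng.random() < t0 / (t0 + t1) else 1 for t0, t1 in tau]
            # Objective: how well the guessed key reproduces the known ciphertext.
            fit = sum(c == g for c, g in zip(cipher, toy_encrypt(plain, key))) / n_bits
            if fit > best_fit:
                best_key, best_fit = key, fit
            for i, b in enumerate(key):          # pheromone deposit on chosen bits
                tau[i][b] += fit
        for i, b in enumerate(best_key):         # elitist reinforcement of the best key
            tau[i][b] += best_fit
        for t in tau:                            # evaporation
            t[0] *= 1.0 - rho
            t[1] *= 1.0 - rho
    return best_key, best_fit

rng = random.Random(0)
secret = [rng.randrange(2) for _ in range(32)]
plain = [rng.randrange(2) for _ in range(32)]
cipher = toy_encrypt(plain, secret)
key, fit = aco_keysearch(plain, cipher, 32)
print(fit)  # fraction of ciphertext bits matched by the best recovered key
```

For this XOR toy the fitness decomposes bit by bit, so the colony converges quickly; for a real block cipher the landscape is far harder, which is exactly why the paper studies parallelization of the colony.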

26. Software reliability evaluation by discrete-event simulation methods [№ 4, 2015]
Authors: Butakova M.A., Guda A.N., Chernov A.V., Chubeyko S.V.
Visitors: 6024
The article considers discrete-event modeling and presents its distinctive features compared to other types of modeling. The main difference is the absence of a continuous time reference: it is enough to preserve the sequence of events, and the time interval between events is unimportant. The authors define a discrete-event system model and add model time, which reproduces the course of events. The article addresses the problem of event list generation in two ways: object-based and process-based execution of events. Both are considered in detail with an illustration, an algorithm and a program implementation fragment. Events can be united into groups called processes; process-based modeling is more difficult than object-based because it requires a process scheduler. The article also considers software reliability assessment based on the discrete-event approach, building on the idea of software reliability growth: bug detection is modeled by a stochastic point process, and each detected bug is eliminated, making the software more reliable. Modeling has two parts: generating the processes that imitate introducing bugs into the software, and assessing the reliability of a component software system. The paper considers options for calculating bug probability depending on program structure: sequential, branching, cyclic and parallel; each option has an illustration and a computation scheme. The cyclic structure of a software component reduces to the computation scheme of a sequential one, since a cycle is essentially a repetition of the same sequential structure.
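The event-list mechanism described above can be sketched in a few lines: model time jumps from event to event, and ordering, not duration, drives the simulation (a generic sketch with hypothetical names, not the article's program implementation):

```python
import heapq

class Simulator:
    """Minimal event-list discrete-event simulator: events run strictly in
    timestamp order; model time jumps from one event to the next."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []   # heap of (time, seq, action)
        self._seq = 0      # tie-breaker preserves insertion order at equal times

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            self.clock, _, action = heapq.heappop(self._queue)
            action()

log = []
sim = Simulator()
# Two "processes" interleaved purely through the event list: an event may
# schedule follow-up events, which is how processes are chained.
sim.schedule(5.0, lambda: log.append(("B", sim.clock)))
sim.schedule(1.0, lambda: (log.append(("A", sim.clock)),
                           sim.schedule(3.0, lambda: log.append(("A2", sim.clock)))))
sim.run()
print(log)  # [('A', 1.0), ('A2', 4.0), ('B', 5.0)]
```

Process-based modeling, as the article notes, adds a scheduler on top of exactly this kind of event list so that each process appears to run as a continuous activity.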

27. A methodical framework of analysis and synthesis of secure software development controls [№ 4, 2015]
Authors: Barabanov A.V., Markov A.S., Tsirlov V.L.
Visitors: 11572
The paper considers important issues of standardizing the development of serial secure software products. The authors analyze organizational and technical measures aimed at reducing the number of vulnerabilities introduced when developing and maintaining secure automated system software, and systematize the standards and guidelines for secure software development. They analyze the applicability of existing methodological approaches to secure software development in information security assessment (including software certification), and show the expediency of harmonizing the developed regulations and practical measures with international standards such as ISO 15408 and ISO 12207. The paper introduces the concept of secure software and a basic set of requirements for assessing compliance of the secure software development process; this set of requirements should be based on information security policies and current threats, and an example of the developed requirements is given. The authors present an original conceptual model for the analysis and synthesis of secure software development controls based on the generated set of requirements. The conceptual model gives software developers the ability to make a science-based choice of software development methods, and the authors suggest a general method of selecting a set of secure software development controls. Indirect evidence of the proposed approach's effectiveness is given; it is noted that the approach became the basis for a national standard on secure software development.

28. Equational characteristics of LTL formulas [№ 4, 2015]
Authors: Korablin Yu.P., Shipov A.A.
Visitors: 8869
Day by day, software systems are becoming more and more complex, so useful instruments are needed to check their conformance to specifications, especially for large distributed software systems. Nowadays, mechanisms such as linear temporal logic (LTL) and computational tree logic (CTL) are commonly used to describe the conditions a model must satisfy. However, experience has shown that these mechanisms help formulate only a relatively small set of same-type conditions, which can complicate the verification process or make it ineffective for a particular system model. Correct formulation of the model properties to be verified is the key problem, as the whole verification process depends on it. Thus, to achieve the best results, powerful tools and techniques are required that can clearly formulate a wide class of properties. The article describes a mechanism that significantly extends the set of conditions that can be formulated for verifiable models; this is achieved by expanding the expressivity of LTL using the proposed method, which increases the efficiency of the verification process. The article provides a set of illustrative examples demonstrating the method's practical application, as well as an example of verifying the properties of a particular model on the basis of the proposed method.
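To make the kind of property being discussed concrete, here is a minimal evaluator for LTL formulas over a *finite* trace (standard finite-trace semantics with G, F and U; this illustrates plain LTL, not the equational extension the article proposes, and the formula encoding is an assumption of this sketch):

```python
# Each state in the trace is the set of atomic propositions true there.

def holds(formula, trace, i=0):
    op = formula[0]
    if op == "ap":                          # atomic proposition
        return i < len(trace) and formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "F":                           # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                           # always
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":                           # until
        return any(holds(formula[2], trace, j) and
                   all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(op)

# Response property G(req -> F grant), written via not/and: every request
# is eventually granted.
resp = ("G", ("not", ("and", ("ap", "req"), ("not", ("F", ("ap", "grant"))))))
trace = [{"req"}, set(), {"grant"}, {"req"}, {"grant"}]
print(holds(resp, trace))  # True: both requests are followed by a grant
```

Properties of this response/liveness shape are exactly the "same-type conditions" the abstract mentions; the article's mechanism aims to widen the class of conditions beyond what such templates express directly.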

29. The basic concepts of a semantic library formal model and formalization of its integration process [№ 4, 2015]
Authors: Ataeva O.M., Serebryakov V.A.
Visitors: 7843
Modern technology development leads to a redefinition of the library content concept: content may include not only traditional descriptions of printed publications but also other types of objects, and the content of digital libraries can be linked to physical objects in various ways. This article considers the library as a structured data repository suitable for integration with other data sources. The paper presents the basic concepts for describing such libraries and a thesaurus structure that determines their thematic scope. Concepts such as information resources, attribute sets, information objects and other associated objects form the conceptual basis of the subject domain of the generated semantic library. The thesaurus, in its turn, provides terminological support for these concepts, facilitates navigation in the information system, and supports the process of refining and completing user queries. The article also describes the concepts needed for detailed data integration work, with emphasis on the concepts used in data reduction. An important characteristic of any data set, regardless of its structure, is data quality: based on a data quality assessment it is possible to evaluate objectively the efficiency of the semantic library's processes, the most important of which is data integration with other sources. The formal model of the concepts described in the article is used to describe the library ontology.

30. A software package for social network data analysis [№ 4, 2015]
Authors: Batura T.V., Murzin F.A., Proskuryakov A.V.
Visitors: 9101
The paper focuses on the problems of extracting and processing social network data. It considers various numerical characteristics, relations and sets computed from the raw data; it is important that these characteristics are constructive and can be calculated or built effectively by corresponding algorithms. The amount of data in social networks is large, and it grows further when a distributed extraction and processing system is used. The most significant and difficult problem is therefore to determine which data can be processed effectively enough and is of interest for the stated goals. To solve this task, the authors suggest a method for determining the influence of an audience on individual users, based on the dynamic social impact theory proposed by B. Latané. This approach is also useful for determining the primary source of a piece of information, a problem connected to opinion leader detection: opinion leaders are popular users who shape the opinion of the majority, and the paper describes one method of detecting them. The article provides a detailed description of the developed software, which processes information from the social networks VKontakte and Twitter. It consists of six modules: a data extraction module, a data processing module, a user data change tracking module, a data analysis module, a graph construction module and a data visualization module.
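A toy sketch in the spirit of Latané's social impact theory, where impact grows with the number of sources but sublinearly (impact = average strength × N^t with t < 1); the exponent, the strengths and the scoring scheme below are illustrative assumptions, not the paper's method:

```python
def social_impact(strengths, t=0.5):
    """Impact of a group of sources on a target: average strength times N**t,
    so adding sources helps, but with diminishing returns."""
    n = len(strengths)
    if n == 0:
        return 0.0
    return (sum(strengths) / n) * n ** t

# audience[u] = strengths of the users who repost or react to u.
# A crude opinion leader score combines audience size and audience strength.
audience = {
    "alice": [0.9, 0.8, 0.7, 0.9, 0.6, 0.8],
    "bob":   [0.4, 0.3],
    "carol": [1.0],
}
scores = {u: social_impact(s) for u, s in audience.items()}
leader = max(scores, key=scores.get)
print(leader)  # alice: the largest audience with consistently high strengths
```

The sublinear exponent is the point of the model: a user with one very strong reactor (carol) does not outrank one with a broad, fairly strong audience (alice), which matches the intuition behind opinion leader detection.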
