ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)


Articles of journal issue № 1, 2017.


1. A software agent to determine student’s psychological state in e-learning systems [№ 1, 2017]
Authors: E.L. Khryanin, A.N. Shvetsov
Visitors: 7892
The article considers the use of software agents to assess students’ psychological state in an e-learning system. The hypothesis of the study is that the better the material matches a student’s psychological profile, the faster and more thoroughly it is learned, which calls for an automatic algorithm for material selection. The article describes an e-learning system that has been developed over five years and tested at a state university, and briefly outlines its implementation: the agent interaction scheme, the main database tables, and the backend and frontend. The paper also presents a method and an algorithm for determining a student’s perceptual modality during psychological testing, and uses statistical methods to predict the probability of a student logging in. The authors propose weight coefficients for the frequency of system use, on which the agent that determines the psychological state bases its decisions, and describe an algorithm that automatically decides whether testing is needed. The study involved more than 90 people split into three groups: a control group, a group that received recommended material, and a group whose material was chosen by the agent. The study produced formulas for calculating perceptual modality over several consecutive measurements, with an example of refining the calculation for contradictory data. The experiment showed positive results in the recommendation mode: more than 61 % of students passed the control test and more than half of the group solved a difficult task (about 42 % and 12 % in the control group, respectively). The authors conclude that an agent determining psychological state is worth using in e-learning systems.
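A minimal sketch of the kind of weighted decision rule such an agent could apply; the usage indicators, weight coefficients, and threshold below are illustrative assumptions, not the values proposed in the article.

```python
# Illustrative only: the weights, normalisation caps, and field names are assumptions.
from dataclasses import dataclass

@dataclass
class UsageStats:
    logins_per_week: float      # how often the student enters the system
    avg_session_minutes: float  # average session length
    weeks_since_last_test: int  # time since the last psychological test

def needs_retesting(stats: UsageStats,
                    w_login: float = 0.4,
                    w_session: float = 0.2,
                    w_staleness: float = 0.4,
                    threshold: float = 0.5) -> bool:
    """Combine usage indicators into one score and decide whether the agent
    should schedule a new perceptual-modality test."""
    # Normalise each indicator to [0, 1] with ad-hoc caps (assumed values).
    login_score = min(stats.logins_per_week / 7.0, 1.0)
    session_score = min(stats.avg_session_minutes / 60.0, 1.0)
    staleness_score = min(stats.weeks_since_last_test / 12.0, 1.0)
    score = (w_login * login_score
             + w_session * session_score
             + w_staleness * staleness_score)
    return score >= threshold

print(needs_retesting(UsageStats(3, 25, 10)))  # True with these assumed weights
```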

2. Statistical analysis of test results of products of aeronautical engineering in terms of random censoring [№ 1, 2017]
Authors: L.V. Agamirov, V.L. Agamirov, V.A. Vestyak
Visitors: 7595
The article presents a technique for point and interval estimation of distribution parameters, based on the least squares method, for the statistical analysis of fatigue tests of aircraft structural elements. The technique accounts for censored observations. The relevance of the study stems from the fact that, when estimating the distribution parameters of fatigue properties in the statistical analysis of fatigue tests of aircraft equipment, it is necessary to take into account samples whose tests were stopped before a critical condition was reached. Solving this problem with known methods (such as the maximum likelihood method) is complicated by the non-monotonicity of the objective function, multiple local extrema, and other issues. The first part of the article describes a technique for estimating the distribution parameters of the observed random variables from a complete sample obtained from a repeatedly censored (incomplete) sample by bootstrap simulation based on order statistics. The original randomly censored sample is transformed into a quasi-complete one so that the least squares method, which is applicable only to complete samples, can be used to estimate the distribution parameters. The second part of the article is devoted to constructing confidence limits for a quantile of the observed random variable distribution; in aircraft engineering this is applicable to assessing a guaranteed service life normalized to the lower confidence limit of a durability quantile. The article describes how a repeatedly censored incomplete sample is reduced, in the general case, to an equivalent quasi-complete sample for which the least squares method yields the most stable and efficient estimates with minimum dispersion. Thus, the problem of point and interval estimation of the distribution parameters of fatigue properties of aircraft structural elements under multiply censored observations is solved.
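For context, a minimal probability-plot sketch of the final least-squares step on an already quasi-complete sample; the normal model and the median-rank plotting positions are assumptions, and the article's bootstrap reconstruction of censored observations is not shown.

```python
# Minimal sketch, not the authors' procedure: least-squares fit of (mu, sigma)
# by regressing ordered observations on theoretical quantiles.
import numpy as np
from scipy.stats import norm

def lsq_normal_fit(sample: np.ndarray) -> tuple[float, float]:
    """Estimate (mu, sigma) of a normal law from a complete sample via a
    probability plot and ordinary least squares."""
    x = np.sort(sample)
    n = len(x)
    # Median-rank plotting positions (one common convention; an assumption here).
    p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    z = norm.ppf(p)                  # theoretical standard normal quantiles
    sigma, mu = np.polyfit(z, x, 1)  # x ≈ mu + sigma * z
    return mu, sigma

rng = np.random.default_rng(0)
log_durability = rng.normal(loc=5.0, scale=0.25, size=30)  # e.g. log of cycles to failure
print(lsq_normal_fit(log_durability))
```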

3. Automatic text classification methods [№ 1, 2017]
Author: T.V. Batura
Visitors: 30186
Text classification is one of the main tasks of computational linguistics because it unites a number of other problems: topic identification, authorship attribution, sentiment analysis, etc. Content analysis in telecommunication networks is important for ensuring information security and public safety, since texts may contain illegal information (including data related to terrorism, drug trafficking, and the organization of protest movements and mass riots). This article surveys text classification methods. Its purpose is to compare modern methods for solving the text classification problem, identify current trends, and select the best algorithms for research and commercial applications. The prevailing modern approach to text classification is based on machine learning, and selecting a particular classification method requires taking the characteristics of each algorithm into account. The article describes the most popular algorithms, the experiments carried out with them, and the results of those experiments. The survey was prepared on the basis of publicly available scientific publications from 2011–2016 that are highly regarded by the scientific community. The article analyzes and compares the classification methods by the following characteristics: precision, recall, running time, the ability to run in incremental mode, the amount of preliminary information required for classification, and language independence.
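As a point of reference, a minimal example of one widely used machine-learning pipeline of the kind the survey compares (TF-IDF features plus a linear classifier); the toy corpus, labels, and test phrase are invented for illustration.

```python
# A tiny TF-IDF + logistic regression pipeline; not taken from the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["stock prices fell sharply today",
         "the team won the championship match",
         "central bank raises interest rates",
         "striker scores twice in the final"]
labels = ["finance", "sport", "finance", "sport"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["interest rates and inflation outlook"]))  # likely ['finance']
```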

4. Multiprocessing for spatial reconstruction based on multiple range-scans [№ 1, 2017]
Authors: V.A. Bobkov, A.P. Kudryashov, S.V. Melman
Visitors: 5916
The paper proposes a scheme for multiprocessing large volumes of spatial data on a hybrid computing cluster. The scheme uses a voxel approach for the reconstruction and visualization of 3D models of underwater scenes. Processing proceeds in several steps: loading initial depth maps of various types, constructing a voxel representation of a scalar field, and constructing an isosurface over the voxel space. The authors analyze the computational scheme to identify the most computationally intensive stages and to determine where multiprocessing is feasible. They also consider the hybrid cluster architecture, which combines three levels of parallelism: computing nodes, multi-core CPUs, and GPUs. Two parallel programming technologies are used: MPI and CUDA (parallel computing on GPUs). The proposed distribution of the processing load is based on the nature of each stage and the features of the parallel technologies used. The paper substantiates the implemented scheme with qualitative and quantitative assessments. The implemented data processing scheme provides maximum acceleration of 3D scene reconstruction on the considered computing cluster. The paper presents the results of computational experiments with real data obtained from a RangeVision Premium 5 Mpix scanner. Analysis of the test results confirms that computing performance for this problem can be increased substantially by organizing distributed parallel processing. A similar scheme can be used to solve other problems related to handling large volumes of spatial data.
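A schematic sketch of the node-level (MPI) distribution only, assuming mpi4py; the grid size, number of scans, and fusion rule are placeholders, and the CUDA and multi-core levels described in the paper are not shown.

```python
# Node-level distribution sketch: each rank integrates its share of range scans
# into a local scalar field; partial fields are fused on the root node.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

GRID = (64, 64, 64)  # toy voxel grid

def integrate_depth_map(seed: int) -> np.ndarray:
    """Stand-in for integrating one range scan into a local scalar field."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=GRID).astype(np.float32)

local_field = np.zeros(GRID, dtype=np.float32)
for scan_id in range(rank, 8, size):          # 8 toy scans, round-robin split
    local_field += integrate_depth_map(scan_id)

global_field = np.zeros(GRID, dtype=np.float32) if rank == 0 else None
comm.Reduce(local_field, global_field, op=MPI.SUM, root=0)
if rank == 0:
    print("fused field mean:", float(global_field.mean()))
```

Run, for example, with `mpirun -n 4 python script.py`; the same code also works on a single process.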

5. Intelligent decision support in process scheduling in diversified engineering [№ 1, 2017]
Authors: G.B. Burdo, N.A. Semenov
Visitors: 6829
Over the last fifteen years the structure of machine-building and instrument-making production has undergone major changes because customers require high-tech products to be delivered by specific deadlines. This has forced companies to design and manufacture a large number of different products simultaneously, that is, to diversify. Historically, diversified engineering and instrument-making enterprises have not been equipped with automated tools for managing technological processes effectively. This can be explained by the highly dynamic behaviour of their production systems, the lack of repeatability in production lists and manufacturing situations, and the influence of random factors that disrupt the normal process flow. All this leads to delays and missed product delivery deadlines and, as a result, to a deterioration of the financial and economic performance of enterprises and firms. Creating automated decision support systems within automated technological process control systems is therefore an important problem. Dispatching of technological processes, aimed at bringing them back to the normal schedule, is one of the most important components of such management. This work implements a combined approach to generating control actions. Out of a large number of random disturbances, the automated system records the most important and most probable ones. By comparing and analyzing the planned and actual start and end times of technological process operations and the possible development of the situation (accumulation or reduction of the discrepancy), the system accumulates the results and identifies the most likely causes of plan failure and possible control actions. The analysis is performed using a knowledge base built on production (rule-based) models. The identified causes serve as hints for the second phase, in which, at a predetermined frequency or when an exceptional situation occurs, a group of experts from among the company's employees discusses and evaluates the alternatives. Fuzzy inference yields a weighted assessment of the experts' confidence that the desired result can be achieved by each control action, and the final decision is made.
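An illustrative aggregation of expert confidence over candidate control actions; the actions, expert weights, and confidence degrees are made-up numbers, not the article's fuzzy inference procedure.

```python
# Weighted aggregation of expert confidence; all values below are assumptions.
def weighted_confidence(assessments: dict[str, list[float]],
                        expert_weights: list[float]) -> dict[str, float]:
    """For each candidate control action, combine the experts' confidence
    degrees (in [0, 1]) into a single weighted score."""
    total = sum(expert_weights)
    return {action: sum(w * c for w, c in zip(expert_weights, confidences)) / total
            for action, confidences in assessments.items()}

assessments = {
    "reschedule lagging operations": [0.8, 0.6, 0.7],
    "add an extra shift":            [0.4, 0.5, 0.3],
    "outsource the bottleneck job":  [0.2, 0.3, 0.4],
}
scores = weighted_confidence(assessments, expert_weights=[0.5, 0.3, 0.2])
print(max(scores, key=scores.get))  # action with the highest weighted confidence
```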

6. Object detection algorithm in low image quality photographs [№ 1, 2017]
Author: A.S. Viktorov
Visitors: 9432
The article considers a set of algorithms for recognizing objects of a specified class in low-quality photographs obtained with a low-resolution camera. A special feature of the considered detection method is its ability to detect objects even when their size in the image does not exceed several tens of pixels. Each processed image is scanned with a sliding window of fixed width and height that reads rectangular image regions with a specified overlap between neighboring regions. Every scanned region is first processed by a discriminative autoencoder to extract a feature vector. The extracted vector is then analyzed by a classifier based on a probabilistic multinomial regression model, which checks whether the scanned region contains the object or its parts and calculates the probability of detecting an object of the given class in each scanned region. Based on the scan results, a conclusion is drawn about the presence of the object and its most probable position in the photograph. To compute the boundaries of a detected object more accurately, the detection probability is interpolated for each pixel analyzed for belonging to the object, and the object boundaries are then estimated from the distribution of such pixels in the image. The experiment has shown that using a discriminative autoencoder significantly improves the robustness of the detection algorithm. The article also gives a detailed description of the learning process and of tuning the algorithm parameters. The results of this research can be widely used to automate various processes, for example, to collect and analyze information in various analytical systems.
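A structural sketch of the sliding-window scan with feature extraction and softmax (multinomial regression) scoring; the encoder and classifier weights are random placeholders rather than the trained discriminative autoencoder, and the window size, stride, and detection threshold are assumptions.

```python
# Structural sketch only: random stand-in weights, not trained models.
import numpy as np

rng = np.random.default_rng(0)
WIN, STEP, N_CLASSES, CODE = 16, 8, 3, 32   # window size/stride, classes, code length

W_enc = rng.normal(size=(WIN * WIN, CODE)) * 0.1   # stand-in encoder weights
W_clf = rng.normal(size=(CODE, N_CLASSES)) * 0.1   # stand-in softmax weights

def encode(patch: np.ndarray) -> np.ndarray:
    """Map an image patch to a feature vector (placeholder for the autoencoder)."""
    return np.tanh(patch.flatten() @ W_enc)

def class_probabilities(features: np.ndarray) -> np.ndarray:
    """Multinomial (softmax) regression over the extracted features."""
    logits = features @ W_clf
    e = np.exp(logits - logits.max())
    return e / e.sum()

image = rng.random((64, 64))
for y in range(0, image.shape[0] - WIN + 1, STEP):
    for x in range(0, image.shape[1] - WIN + 1, STEP):
        probs = class_probabilities(encode(image[y:y + WIN, x:x + WIN]))
        if probs.max() > 0.5:                      # assumed detection threshold
            print(f"candidate of class {probs.argmax()} at ({x}, {y}), p={probs.max():.2f}")
```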

7. Implementation of reinforcement learning methods based on temporal differences and a multi-agent approach for real-time intelligent systems [№ 1, 2017]
Authors: A.P. Eremeev, A.A. Kozhukhov
Visitors: 12107
The paper describes the implementation of reinforcement learning methods based on temporal differences and a multi-agent technology. The authors examine the possibility of combining learning methods with statistical and expert forecasting methods for subsequent integration into an instrumental software environment intended for modern and advanced real-time intelligent systems (RT IS), in particular real-time intelligent decision support systems (RT IDSS). Reinforcement learning (RL) methods are analyzed in terms of their use in RT IS, their main components, benefits, and tasks. The paper focuses on RL methods based on temporal differences (TD methods) and presents the corresponding algorithms developed by the authors. The authors consider including RL methods in a multi-agent environment and combining them with statistical and expert forecasting methods within the environment developed for RT IDSS for the control and diagnosis of complex technical objects. The paper proposes an architecture for a forecasting subsystem prototype consisting of an emulator that simulates the state of the environment, a forecasting module, an analysis and decision-making module, and a multi-agent RL module. The forecasting subsystem prototype is implemented in software using a multi-agent approach to solve the problem of expert diagnosis of a complex technological object. Based on the results of testing and validating the developed system, the paper draws conclusions about the efficiency and expediency of including it in an RT IDSS.
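For reference, a minimal TD(0) value-update loop on a toy random-walk chain; it illustrates only the temporal-difference idea, not the paper's multi-agent architecture or forecasting subsystem, and the step size and chain length are arbitrary.

```python
# TD(0) on a 5-state random walk with terminal states at both ends.
import random

N_STATES, ALPHA, GAMMA = 5, 0.1, 1.0          # chain length, step size, discount
V = [0.0] * (N_STATES + 2)                    # values incl. two terminal states

for episode in range(2000):
    s = (N_STATES + 1) // 2                   # start in the middle of the chain
    while s not in (0, N_STATES + 1):         # until a terminal state is reached
        s_next = s + random.choice((-1, 1))
        reward = 1.0 if s_next == N_STATES + 1 else 0.0
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += ALPHA * (reward + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[1:-1]])         # approaches [1/6, 2/6, ..., 5/6]
```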

8. Software interface design using elements of artificial intelligence [№ 1, 2017]
Authors: T.M. Zubkova, E.N. Natochaya
Visitors: 11761
In order to develop high-quality software, all customer requirements must be reflected in the specification so that both customers and developers share a common view of the future software. One way to achieve this mutual understanding is to develop a prototype of the user interface. The article describes methods for selecting an alternative version of an interface template using such artificial intelligence techniques as expert evaluation and fuzzy set theory. Users are divided into five groups based on individual characteristics (novice, regular, experienced, skilled, administrator). The article defines the basic individual characteristics that help to classify users when designing interfaces (computer literacy, system experience, experience with similar programs, typing, thinking, memory, motor skills, blindness, concentration, emotional stability). The paper describes the mathematical support and software for solving the problems of intelligent user interface design. The task is implemented in three stages. The first stage, “Forming and assessing expert group competence”, defines the characteristics of the experts; a quantitative description of these characteristics is based on calculating relative competence coefficients from the experts' statements about the advisory group. The second stage, “Group expert assessment of the object with direct assessment”, determines the recurrence relations for the iterations. The third stage, “Building a fuzzy model on fuzzy binary relations”, operates on two fuzzy sets: a set of user groups and a set of interface templates that are maximally effective for users with the given characteristics. The fuzzy model takes the selected fuzzy sets as input and outputs the degrees to which interface templates match users. The user interface design process is automated on the basis of the proposed methodology in order to improve the objectivity and optimality of the decisions taken by software developers.
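A toy max-min composition over two fuzzy sets of the kind the third stage operates on; the membership degrees and the relation matrix are illustrative, not taken from the article.

```python
# Max-min composition of a fuzzy user profile with a fuzzy group-template relation.
import numpy as np

user_groups = ["novice", "regular", "experienced", "skilled", "administrator"]
templates = ["wizard-style", "standard", "expert console"]

# Fuzzy relation R: degree to which each template suits each user group (assumed).
R = np.array([[0.9, 0.4, 0.1],
              [0.6, 0.8, 0.3],
              [0.3, 0.9, 0.5],
              [0.2, 0.6, 0.8],
              [0.1, 0.4, 0.9]])

# Fuzzy set A: degrees to which the current user belongs to each group (assumed).
A = np.array([0.1, 0.7, 0.4, 0.1, 0.0])

# Max-min composition B = A o R gives the matching degree of every template.
B = np.max(np.minimum(A[:, None], R), axis=0)
print(dict(zip(templates, B.round(2))))
```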

9. Using Bayes' theorem to estimate CMMI® practices implementation [№ 1, 2017]
Authors: G.I. Kozhomberdieva, D.P. Burakov, M.I. Garina
Visitors: 9916
The article is devoted to an expert estimation methodology, based on objective evidence, for appraising the extent to which the practices ensuring achievement of the goals of CMMI® model process areas are implemented. The model was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. Such appraisals are necessary to understand the maturity level of the software development processes in a developer company. When information on the implementation of CMMI® practices is uncertain and/or incomplete, it is reasonable to use a toolkit for decision-making in weakly formalized subject domains; this increases the appraisal team members' degree of confidence in their decisions. In previously published work, the authors considered two approaches to constructing the estimate: fuzzy logic methods and multi-criteria classification methods. This article attempts to make the appraisal procedure even simpler and more flexible, to expand the opportunities for its use, and to increase its objectivity. The proposed approach is based on the well-known Bayes' theorem. The extent of CMMI® practice implementation is estimated via a probability distribution over a set of hypotheses, each assuming that the implementation has reached one of the predefined levels. The Bayesian estimate of the extent of practice implementation is understood as a posterior probability distribution that is revised and refined during the estimation. The conditional probabilities used in calculating the Bayesian estimate show how strongly each hypothesis about the practice implementation level is supported by the obtained objective evidence.
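A schematic Bayesian update over implementation-level hypotheses; the level names, the uniform prior, and the evidence likelihoods are invented for illustration, not taken from the article or the CMMI® appraisal method.

```python
# Sequential application of Bayes' theorem over hypotheses about the
# implementation level of a practice, given pieces of objective evidence.
levels = ["not implemented", "partially implemented",
          "largely implemented", "fully implemented"]

prior = [0.25, 0.25, 0.25, 0.25]            # uniform prior over the hypotheses

# P(evidence | level): how plausible each observed artefact is under each level.
evidence_likelihoods = [
    [0.05, 0.30, 0.60, 0.90],               # e.g. a documented process exists
    [0.10, 0.20, 0.50, 0.80],               # e.g. work products match the process
]

posterior = prior[:]
for lik in evidence_likelihoods:
    unnorm = [p * l for p, l in zip(posterior, lik)]
    z = sum(unnorm)
    posterior = [u / z for u in unnorm]      # Bayes' theorem, then renormalise

print({lvl: round(p, 3) for lvl, p in zip(levels, posterior)})
```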

10. Automated analysis method of short unstructured text documents [№ 1, 2017]
Author: P.Yu. Kozlov
Visitors: 6534
The paper considers the problem of automated analysis of text documents in executive and legislative authorities. It presents a group of characteristics for classifying text documents, their types, and methods of analysis and rubrication, and lists the types of documents that need to be classified. To analyze short unstructured text documents, the author proposes a classification method based on weighting factors, expert information, and fuzzy inference, with a developed probabilistic mathematical model, a training procedure, and an experimentally chosen ratio of weight coefficients. The method must be trained before use: during training, the thesaurus words of each domain are divided into three types (unique, rare, and common) and assigned weights depending on the type. To keep the weight and frequency coefficients up to date, dynamic clustering is proposed. The developed method makes it possible to analyze the documents in question while taking into account changes in the thesaurus headings. The paper presents a scheme of an automatic classification system for unstructured text documents written in natural language. Text documents can be of various types: long, short, and very short; depending on the document type, the system uses the analysis method with the best precision and recall for that type. Parsing is performed with MaltParser trained on the Russian National Corpus. The result of the whole system's work is a knowledge base that includes all extracted knowledge and relations; it is constantly updated and used by employees of the executive and legislative authorities to handle incoming requests.
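A simplified scoring scheme in the spirit of the described weighting idea; the domains, word lists, type weights, and example document are placeholders, not the method's trained parameters.

```python
# Weighted thesaurus matching: unique/rare/common words carry different weights.
TYPE_WEIGHTS = {"unique": 3.0, "rare": 2.0, "common": 1.0}   # assumed ratios

thesaurus = {
    "housing":   {"unique": {"utilities"}, "rare": {"tenant"}, "common": {"payment"}},
    "transport": {"unique": {"tramway"},   "rare": {"route"},  "common": {"payment"}},
}

def classify(text: str) -> str:
    """Return the domain whose weighted thesaurus words best cover the text."""
    tokens = set(text.lower().split())
    scores = {}
    for domain, groups in thesaurus.items():
        scores[domain] = sum(TYPE_WEIGHTS[t] * len(words & tokens)
                             for t, words in groups.items())
    return max(scores, key=scores.get)

print(classify("complaint about utilities payment in the apartment"))  # -> housing
```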
