ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Articles of journal issue № 1, 2017.

1. Automatic syntactic analysis of Chinese sentences with a restricted dictionary [№ 1, 2017]
Authors: Yu Chuqiao, I.A. Bessmertny
Visitors: 7331
The paper considers the problem of natural language processing of Chinese texts. One of the relevant tasks in this area is automatic fact acquisition by a query, since existing automatic translators are useless for this task. The suggested approach includes syntactic analysis of phrases and matching the parts of speech found with a formalized query. The purpose of the study is to extract facts directly from original texts without translation. For this purpose, the paper suggests an approach based on syntactic analysis of sentences in a text with subsequent comparison of the found parts of speech with a formalized subject–object–predicate query. A key feature of the proposed approach is the absence of a phase that segments the character sequence of a sentence into words. The bottleneck of this task is the dictionary, because interpretation of a sentence is impossible if even a single word is missing from it. To eliminate this problem, the authors propose to identify the sentence model by function words, while the limitations of the dictionary can be compensated by automatically building a thesaurus through statistical processing of a document corpus. The suggested approach is tested on a narrow topic, where it demonstrates its robustness. The paper also analyzes the temporal properties of the developed algorithm. Since the proposed algorithm uses direct search, the parsing speed for real tasks could be unacceptably low, which is a subject for further research.
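
As a purely illustrative sketch of this idea (not the authors' implementation), the toy matcher below checks a subject–object–predicate query against an unsegmented Chinese sentence by anchoring on the copula 是 as a function word; the sentence, the query triple, and the single-anchor pattern are all assumptions made for the example.

```python
# A minimal sketch: matching a query against an unsegmented Chinese
# sentence by anchoring on a function word, so no prior word
# segmentation of the character sequence is required.

def match_svo(sentence: str, query: tuple[str, str, str]) -> bool:
    """Return True if the sentence supports the (subject, predicate,
    object) query. The copula '是' serves as the function-word anchor."""
    subj, pred, obj = query
    anchor = "是"  # copula acting as the predicate marker (assumption)
    if pred != anchor or anchor not in sentence:
        return False
    left, _, right = sentence.partition(anchor)
    # Subject and object only need to occur as substrings of the
    # character runs on either side of the anchor.
    return subj in left and obj in right.rstrip("。")

# "北京是中国的首都。" -- "Beijing is the capital of China."
print(match_svo("北京是中国的首都。", ("北京", "是", "首都")))  # True
```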

2. Object detection algorithm for low-quality photographs [№ 1, 2017]
Author: A.S. Viktorov
Visitors: 9432
The article considers a set of algorithms for recognizing objects of a specified class in low-quality photographs obtained with a low-resolution camera. A special feature of the considered object detection method is the ability to detect objects even if their size in the image does not exceed several tens of pixels. Each processed image is scanned with a sliding window of fixed width and height that reads rectangular image regions with a specified overlap between neighboring regions. Each scanned image region is first processed by a discriminative autoencoder that extracts a feature vector from it. The extracted vector is then analyzed by a classifier based on a probabilistic multinomial regression model, which checks whether the scanned region contains the object or its parts. The classifier calculates the probability of detecting an object of a certain class in each scanned image region. Based on the scan results, a conclusion is drawn about the presence of the object and its most probable position in the photograph. To calculate the boundaries of a detected object more accurately, the detection probability is interpolated for each pixel, which is then tested for belonging to the object. After that, the boundaries of the detected object can be estimated from the distribution of such pixels in the image. The experiment has revealed that using a discriminative autoencoder significantly increases the robustness of the detection algorithm. The article also gives a detailed description of the learning process and the adjustment of algorithm parameters. The results of this research can be widely used to automate various processes, for example, to collect and analyze information in various analytical systems.
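
A minimal sketch of the sliding-window scan described above, with the discriminative autoencoder and the multinomial-regression classifier replaced by stand-in functions; the window size, stride, and random classifier weights are illustrative assumptions rather than the article's actual parameters.

```python
import numpy as np

WIN, STRIDE = 32, 16  # assumed window size and overlap step

def encode(region: np.ndarray) -> np.ndarray:
    """Stand-in for the discriminative autoencoder's feature extractor."""
    return region.reshape(-1)[:64] / 255.0

def class_probs(features: np.ndarray) -> np.ndarray:
    """Stand-in for the multinomial (softmax) regression classifier."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, features.size))  # 2 classes: background, object
    z = W @ features
    e = np.exp(z - z.max())
    return e / e.sum()

def scan(image: np.ndarray) -> np.ndarray:
    """Slide a WIN x WIN window over the image and record, per window,
    the probability that it contains the target object."""
    h, w = image.shape
    probs = np.zeros(((h - WIN) // STRIDE + 1, (w - WIN) // STRIDE + 1))
    for i in range(probs.shape[0]):
        for j in range(probs.shape[1]):
            y, x = i * STRIDE, j * STRIDE
            probs[i, j] = class_probs(encode(image[y:y+WIN, x:x+WIN]))[1]
    return probs  # interpolated per pixel to refine object boundaries

print(scan(np.zeros((64, 64))).shape)  # (3, 3) probability grid
```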

3. Intelligent decision support in process scheduling in diversified engineering [№ 1, 2017]
Authors: G.B. Burdo, N.A. Semenov
Visitors: 6828
In the last fifteen years, the structure of machine-building and instrument-making production has undergone major changes due to customers' requirements to receive high-tech products within a certain time. This has forced companies to design and manufacture a large number of different products simultaneously and has led them to diversification. Historically, diversified engineering and instrumentation enterprises were not equipped with automated tools to manage technological processes effectively. This can be explained by the high dynamism of their production systems, the lack of repeatability in product mixes and manufacturing situations, and the influence of random factors that disrupt the normal process flow. All this leads to longer and missed product delivery times and, as a result, to the deterioration of the financial and economic performance of enterprises and firms. In this regard, creating automated decision-making support systems within automated technological process control systems is clearly an important problem. Dispatching of technological processes, which returns them to the normal schedule, is one of the most important components of management. This work implements a combined approach to generating control actions. Out of the large number of possible random disturbances, the automated system records the most important and most probable ones. By comparing and analyzing the planned and actual times (start and end times) of technological process operations and the possible development of the situation (accumulation or reduction of the discrepancy), the system accumulates the results and identifies the most likely causes of plan failure and possible control actions. The analysis is performed using a knowledge base built from production rule models. The identified causes serve as "tips" for the second phase. At this stage, with a predetermined frequency or when an exceptional situation occurs, a group of experts from among company employees discusses and evaluates alternatives. Fuzzy inference produces a weighted assessment of the experts' confidence that the desired result is achievable by executing each control action, and the final decision is made.
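
A minimal sketch of the first, automated phase under stated assumptions: planned and actual operation end times are compared, and simple production rules map the accumulated lag to a likely cause; the thresholds and cause names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    planned_end: float   # hours from shift start
    actual_end: float

RULES = [
    # (condition on lag in hours, most likely cause) -- assumed values
    (lambda lag: lag > 4.0, "equipment failure"),
    (lambda lag: 1.0 < lag <= 4.0, "material or tooling delay"),
    (lambda lag: 0.0 < lag <= 1.0, "normal variation"),
]

def diagnose(ops: list[Operation]) -> list[tuple[str, str]]:
    """Accumulate deviations and return (operation, likely cause) 'tips'
    for the expert-discussion phase."""
    tips = []
    for op in ops:
        lag = op.actual_end - op.planned_end
        for cond, cause in RULES:
            if cond(lag):
                tips.append((op.name, cause))
                break
    return tips

print(diagnose([Operation("milling", 8.0, 13.5),
                Operation("turning", 4.0, 4.5)]))
```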

4. Scene geometry for detector precision improvement [№ 1, 2017]
Authors: E.V. Shalnov, Konushin A.S.
Visitors: 7863
Object detection algorithms are a key component of any intelligent video content analysis system. High computational requirements and the low precision of existing methods restrain the widespread adoption of intelligent video content analysis. The paper introduces a novel algorithm that accelerates existing sliding-window object detectors and increases their precision. The approach is based on the geometric properties of the observed scene. If the camera position in the scene is known, we can determine the feasible sizes of detected objects at each location of an input image. Windows of other sizes cannot correspond to objects in the scene and thus can be skipped, which significantly decreases computation time. The proposed algorithm estimates feasible object sizes for each location of an input image. We apply a neural network (NN) to solve this task. The NN takes camera calibration parameters and window parameters as input and determines whether this configuration is feasible. We train the NN on a synthetic dataset, which allows us to cover a huge range of camera calibration parameters. We apply the NN to construct a map of feasible object sizes for the input scene, so the detector processes only the feasible subset of windows. The performed evaluation reveals that the proposed algorithm accelerates processing by 70 % and increases the precision of a detector.
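
A minimal sketch of the window-filtering idea, with the trained network replaced by the pinhole-geometry check it approximates; the camera height, focal length, object height, and tolerance are assumptions, not values from the paper.

```python
FOCAL_PX = 800.0      # assumed focal length in pixels
CAM_HEIGHT_M = 4.0    # assumed camera height above the ground plane
OBJ_HEIGHT_M = 1.7    # assumed real-world object height
TOL = 0.3             # relative tolerance on the feasible window height

def feasible(row_bottom: int, win_h: int, horizon_row: int) -> bool:
    """Stand-in for the NN: True if a window whose bottom edge sits at
    row_bottom with height win_h is geometrically plausible."""
    if row_bottom <= horizon_row:
        return False  # object would stand above the horizon
    # Distance along the ground implied by the window's foot row.
    depth = FOCAL_PX * CAM_HEIGHT_M / (row_bottom - horizon_row)
    expected_h = FOCAL_PX * OBJ_HEIGHT_M / depth
    return abs(win_h - expected_h) <= TOL * expected_h

# Only feasible windows are passed on to the (expensive) detector.
print(feasible(row_bottom=600, win_h=160, horizon_row=200))  # True
print(feasible(row_bottom=600, win_h=300, horizon_row=200))  # False
```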

5. Using Bayes' theorem to estimate CMMI® practices implementation [№ 1, 2017]
Authors: G.I. Kozhomberdieva, D.P. Burakov, M.I. Garina
Visitors: 9916
The article is devoted to an expert estimation methodology (based on objective evidence) for appraising the extent of implementation of the practices that ensure achievement of the goals of CMMI® model process areas. The model has been developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. Such appraisals are necessary to understand the maturity level of the software development processes in a developer company. In case of uncertainty and/or incompleteness of information on CMMI® practice implementation, it is reasonable to use a toolkit for decision-making in weakly formalized subject domains; it helps to increase confidence in the decisions of the appraisal team members. In previously published work, the authors considered two approaches to constructing the estimate: fuzzy logic methods and multi-criteria classification methods. This article attempts to make the appraisal procedure even simpler and more flexible, to expand the opportunities for its use, and to increase its objectivity. The proposed approach is based on the well-known Bayes' theorem. The extent of CMMI® practice implementation is estimated via a probability distribution over a set of hypotheses, each assuming that the implementation has reached one of the predefined levels. The Bayesian estimate of the extent of practice implementation is understood as the a posteriori probability distribution, which is revised and refined during the estimation. The conditional probability values used when calculating the Bayesian estimate show how strongly the hypotheses about the practice implementation level are supported by the obtained objective evidence.
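
A minimal sketch of the Bayesian estimation described above; the four-level hypothesis set and the likelihoods assigned to each piece of objective evidence are illustrative assumptions, not values from the article.

```python
LEVELS = ["not implemented", "partially", "largely", "fully"]

def bayes_update(prior: list[float], likelihood: list[float]) -> list[float]:
    """Posterior P(level | evidence) ∝ P(evidence | level) * P(level)."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Start from a uniform prior over the four levels and refine it as each
# item of objective evidence is examined by the appraisal team.
posterior = [0.25, 0.25, 0.25, 0.25]
evidence_likelihoods = [
    [0.05, 0.20, 0.40, 0.35],  # e.g. process documentation found
    [0.10, 0.15, 0.35, 0.40],  # e.g. work products match the practice
]
for lik in evidence_likelihoods:
    posterior = bayes_update(posterior, lik)

for level, p in zip(LEVELS, posterior):
    print(f"{level}: {p:.3f}")
```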

6. Automated analysis method for short unstructured text documents [№ 1, 2017]
Author: P.Yu. Kozlov
Visitors: 6534
The paper considers the problem of automated analysis of text documents in the executive and legislative authorities. It describes a group of characteristics for classifying text documents, their types, and methods of analysis and rubrication, and lists the types of documents that need to be classified. To analyze short unstructured text documents, the author proposes a classification method based on weighting factors, expert information, and fuzzy inference, with a developed probabilistic mathematical model, a training procedure, and an experimentally chosen ratio of weight coefficients. The method requires training: during training, the thesaurus words of each domain are divided into three types (unique, rare, and common) and assigned weights depending on their type. To keep the weight and frequency coefficients up to date, dynamic clustering is proposed. The developed method makes it possible to analyze the considered documents while accounting for changes in the rubric thesauri. The paper presents the architecture of an automatic classification system for unstructured text documents written in natural language. Text documents can be of various types: long, short, and very short. Depending on the document type, the system uses the corresponding analysis method that shows the best precision and recall for that type of document. The syntactic parser used is MaltParser, trained on the Russian National Corpus. The result of the whole system's work is a knowledge base that includes all extracted knowledge and relations. The knowledge base is constantly updated and is used by employees of the executive and legislative authorities to handle incoming requests.
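
A minimal sketch of scoring a document against per-domain thesauri with type-dependent word weights; the weight ratio and toy thesauri are assumptions (the article chooses the ratio experimentally).

```python
WEIGHTS = {"unique": 3.0, "rare": 2.0, "common": 1.0}  # assumed ratio

THESAURI = {
    "transport": {"unique": {"tram"}, "rare": {"route"}, "common": {"city"}},
    "housing":   {"unique": {"plumbing"}, "rare": {"repair"}, "common": {"city"}},
}

def classify(text: str) -> str:
    """Score the document against every domain thesaurus and return the
    domain with the highest weighted term count."""
    tokens = text.lower().split()
    scores = {}
    for domain, groups in THESAURI.items():
        scores[domain] = sum(WEIGHTS[kind] * sum(t in words for t in tokens)
                             for kind, words in groups.items())
    return max(scores, key=scores.get)

print(classify("Complaint about the tram route in our city"))  # transport
```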

7. Automatic text classification methods [№ 1, 2017]
Author: T.V. Batura
Visitors: 30186
Text classification is one of the main tasks of computational linguistics because it unites a number of other problems: topic identification, authorship identification, sentiment analysis, etc. Content analysis in telecommunication networks is of great importance for ensuring information security and public safety. Texts may contain illegal information (including data related to terrorism, drug trafficking, or the organization of protest movements and mass riots). This article provides a survey of text classification methods. The purpose of the survey is to compare modern methods for solving the text classification problem, detect trends, and select the best algorithms for use in research and commercial tasks. The dominant modern approach to text classification is based on machine learning methods; selecting a particular classification method should take into account the characteristics of each algorithm. This article describes the most popular algorithms, the experiments carried out with them, and the results of those experiments. The survey was prepared on the basis of scientific publications that are publicly available on the Internet, published between 2011 and 2016, and highly regarded by the scientific community. The article analyzes and compares different classification methods along the following characteristics: precision, recall, running time, the ability of the algorithm to work in incremental mode, the amount of preliminary information necessary for classification, and language independence.
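
As a minimal illustration of the machine-learning approach the survey covers, the sketch below compares two standard classifiers on the same TF-IDF features using scikit-learn; the four-document corpus is an assumption, and real comparisons in the surveyed papers use large labelled collections.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative corpus with two topic labels.
docs = ["stock market falls", "shares and bonds rally",
        "team wins the final", "player scores twice"]
labels = ["finance", "finance", "sport", "sport"]

X = TfidfVectorizer().fit_transform(docs)
for clf in (MultinomialNB(), LogisticRegression()):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.predict(X))
```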

8. Multiprocessing for spatial reconstruction based on multiple range-scans [№ 1, 2017]
Authors: V.A. Bobkov, A.P. Kudryashov, S.V. Melman
Visitors: 5916
The paper proposes a scheme for multiprocessing large volumes of spatial data on a hybrid computing cluster. The scheme uses a voxel approach for the reconstruction and visualization of 3D models of underwater scenes. Processing includes several steps: loading various types of initial depth maps, constructing a voxel representation of the scalar field, and constructing an isosurface over the voxel space. The authors analyze the computational scheme to identify the most computationally intensive stages and to understand where multiprocessing is feasible. They also consider the hybrid computing cluster architecture, which combines three levels of parallelism: computing nodes, multi-core CPUs, and GPUs. Two parallel programming technologies are used: MPI and CUDA (parallel computing on the GPU). The proposed distribution of the processing load is based on the nature of each stage and the features of the parallel technologies used. The paper substantiates the implemented scheme with qualitative and quantitative assessments. The implemented data processing scheme provides maximum acceleration of 3D scene reconstruction on the considered computational cluster. The paper presents the results of computational experiments with real data obtained from a RangeVision Premium 5 Mpix scanner. Analysis of the test results confirms the possibility of fundamentally increasing computing performance for this problem by organizing distributed parallel processing. A similar scheme can be used to solve other problems related to handling large volumes of spatial data.
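
A minimal sketch of the load-distribution idea under stated assumptions: the voxel volume is cut into slabs and each worker fuses depth maps into its own slab of the scalar field independently; Python multiprocessing stands in for the MPI/CUDA levels, and the grid size and fusion stub are invented for illustration.

```python
import numpy as np
from multiprocessing import Pool

VOL = (64, 64, 64)   # assumed voxel grid
SLABS = 4            # one slab per worker / compute node

def fuse_slab(z_range):
    """Stand-in for signed-distance fusion of all depth maps into one slab."""
    z0, z1 = z_range
    slab = np.zeros((VOL[0], VOL[1], z1 - z0), dtype=np.float32)
    # ... integrate each depth map into `slab` here ...
    return slab

if __name__ == "__main__":
    step = VOL[2] // SLABS
    ranges = [(i * step, (i + 1) * step) for i in range(SLABS)]
    with Pool(SLABS) as pool:
        field = np.concatenate(pool.map(fuse_slab, ranges), axis=2)
    print(field.shape)  # full scalar field, ready for isosurface extraction
```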

9. Modelling and simulation of the Black Hole attack on wireless networks [№ 1, 2017]
Authors: V.V. Shakhov, A.N. Yurgenson, O.D. Sokolova
Visitors: 8597
Technologies based on wireless sensor networks (WSN) can be used in a wide range of vital applications. There are several implementations of the Internet of Things architecture based on wireless sensor networks; for example, the core objective of projects in the 7th Framework Programme funded by the European Union was to provide the technical foundation for WSN technology in IoT products and services. As applications based on wireless sensor networks are deployed, security becomes an essential requirement. In this paper the authors discuss the state of the art in WSN security. It is impossible to provide absolute protection and eliminate the consequences of intrusion in all cases; however, an effective set of protection mechanisms can significantly reduce the damage. To achieve this, it is necessary to develop and explore appropriate mathematical models. The paper focuses on the Black Hole attack, one of the most dangerous destructive information impacts: as a result of it, more than 90 % of the information transmitted to the sink may be lost. Direct transmission of information between nodes in a WSN is possible only if they are within each other's radio reach, so unit disk graphs (UDG) are the most appropriate model of communication in such networks. To simulate data transmission by the routing algorithm, a spanning tree of the graph is constructed. The authors have obtained formulas for analytical estimates for some spanning tree structures. To assess the vulnerability of the tree to attacks, the authors use the "normalized number of vertices with lost information": the average number of vertices that lose information divided by the total number of nodes in the tree. The analytical results are consistent with the simulation results. The paper also offers a method of counteracting the Black Hole attack and provides the corresponding performance analysis.
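
A minimal sketch of the vulnerability measure described above: in a routing spanning tree, a Black Hole node silently drops everything forwarded through it, so the vertices losing information are exactly its subtree; the example tree is an assumption.

```python
import networkx as nx

def lost_fraction(tree: nx.DiGraph, sink, blackhole) -> float:
    """Normalized number of vertices with lost information: descendants
    of the black-hole node (plus the node itself) over all non-sink
    nodes of the spanning tree."""
    lost = nx.descendants(tree, blackhole) | {blackhole}
    return len(lost) / (tree.number_of_nodes() - 1)

# Edges point away from the sink; data flows toward it along the tree.
T = nx.DiGraph([(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)])
print(lost_fraction(T, sink=0, blackhole=1))  # 3 of 5 nodes -> 0.6
```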

10. Monitoring the frequency resource of geostationary repeater satellites using covering entropy [№ 1, 2017]
Authors: A.V. Sukhov, V.N. Reshetnikov, S.B. Savilkin
Visitors: 7849
The paper considers radio-frequency spectrum monitoring for repeater satellites in geostationary orbit. It solves the optimization problem of detecting an interference source within a given search time and with a given accuracy of interference source coordinates. The optimization problem is solved in the target information space based on the covering entropy. Unauthorized ground radio transmitters are located by analyzing the signal time delay and the Doppler shift of the signal frequency. The location of an interference source on the Earth's surface can be determined from transmitter signals relayed through a single communications satellite in geostationary orbit. A small Doppler shift of the signal carrier frequency, caused by a small displacement of the satellite in orbit relative to the Earth's surface, can be used to calculate the transmitter location. The paper focuses on the potentially achievable estimation accuracy and the choice of an efficient approach (in the sense of minimum covering entropy) to optimizing the measurement time. The measurement session time, the signal-to-noise ratio, and the measured parameters are interrelated. The relationships between the real and the specified measurement parameters are incorporated into an information measure, the covering entropy (A. Sukhov). The covering entropy characterizes the efficiency of systems that can be represented by a vector of performance indicators in accordance with their intended use: its minimum value of zero means that the regulatory requirements are fulfilled, while positive values characterize the degree of generalized deviation from them. The authors evaluate the potential informational efficiency of determining interference source coordinates using the Doppler frequency shift, based on the covering entropy.
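
A purely illustrative sketch, not Sukhov's actual definition: a covering-entropy-style measure that is zero when every performance indicator meets its requirement and grows as indicators fall short; the indicator names and values are assumptions.

```python
import math

def covering_entropy(actual: dict, required: dict) -> float:
    """Sum over indicators of log(required / actual), counting only the
    indicators that fail to reach their required level (illustrative
    stand-in for the covering entropy measure)."""
    return sum(max(0.0, math.log(required[k] / actual[k])) for k in required)

# Example indicators: localization accuracy ratio and probability of
# detection within the allotted measurement session time.
required = {"accuracy": 1.0, "detection": 0.95}
print(covering_entropy({"accuracy": 1.2, "detection": 0.99}, required))  # 0.0
print(covering_entropy({"accuracy": 0.5, "detection": 0.90}, required))  # > 0
```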
