ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)


Journal articles №1 2017

11. Multiprocessing for spatial reconstruction based on multiple range-scans [№1, 2017]
Authors: V.A. Bobkov (bobkov@iacp.dvo.ru) - Institute of Automation and Control Processes, Far Eastern Branch of RAS (Head of Laboratory), Ph.D; A.P. Kudryashov (kudryashova@dvo.ru) - Institute of Automation and Control Processes, Far Eastern Branch of RAS (Junior Researcher), Ph.D; S.V. Melman (melman@dvo.ru) - Institute of Automation and Control Processes, Far Eastern Branch of RAS (Junior Researcher), Ph.D;
Abstract: The paper proposes a scheme for multiprocessing large volumes of spatial data on a hybrid computing cluster. The scheme uses the voxel approach for reconstruction and visualization of 3D models of underwater scenes. Processing proceeds in several steps: loading various types of initial depth maps, building a voxel representation of a scalar field, and constructing an isosurface from the voxel space. The authors analyze the computational scheme to identify the most computationally intensive stages and to determine where multiprocessing is feasible. They also consider the hybrid computing cluster architecture, which combines three levels of parallelism: computing nodes, multi-core processors, and GPUs. Two parallel programming technologies are used: MPI and CUDA (parallel computing on GPUs). The proposed distribution of the processing load is based on the nature of each stage and the features of the parallel technologies used. The paper substantiates the implemented scheme with qualitative and quantitative assessments. The implemented data processing scheme provides the maximum acceleration of 3D scene reconstruction on the considered computing cluster. The paper presents the results of computational experiments with real data obtained from a RangeVision Premium 5 Mpix scanner. Analysis of the test results confirms that computing performance for this problem can be increased substantially by organizing distributed parallel processing. A similar scheme can be used for other problems that involve handling large volumes of spatial data.
Keywords: 3D-reconstruction, hybrid multiprocessing, voxel approach
Visitors: 5916
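For entry 11, a minimal sketch of the node-level split only: depth maps are distributed across MPI ranks and partial voxel volumes are reduced to one node. The GPU (CUDA) stage is replaced by a CPU stub, and all function names and the scalar-field update are invented for this illustration, not taken from the paper.

```python
# Minimal sketch of node-level work distribution with mpi4py.
# fuse_depth_map_on_gpu stands in for the CUDA stage; names are illustrative.
import numpy as np
from mpi4py import MPI

def fuse_depth_map_on_gpu(volume, depth_map):
    # Placeholder for a CUDA kernel that accumulates a depth map
    # into the voxel scalar field; here just a toy CPU update.
    volume += depth_map.mean()
    return volume

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

depth_maps = [np.random.rand(480, 640) for _ in range(32)]  # toy input
local_volume = np.zeros((64, 64, 64), dtype=np.float32)

# Each rank fuses every size-th depth map into its local voxel volume.
for depth_map in depth_maps[rank::size]:
    local_volume = fuse_depth_map_on_gpu(local_volume, depth_map)

# Reduce partial volumes to rank 0, where the isosurface would be extracted.
volume = comm.reduce(local_volume, op=MPI.SUM, root=0)
if rank == 0:
    print(volume.mean())
```

Run with, e.g., `mpiexec -n 4 python reconstruct.py`; the striped assignment `depth_maps[rank::size]` is one simple way to balance the load across nodes.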

12. Editing and entering information into XML documents of automated information systems [№1, 2017]
Authors: A.N. Trusov (TrusovAlexander@hotmail.com) - Plekhanov Russian University of Economics, Kemerovo Institute (branch); P.Yu. Ivanchenko (Pavel-Ivanchenko@hotmail.com) - Plekhanov Russian University of Economics, Kemerovo Institute (branch); D.A. Katsuro (Davidkacuro@hotmail.com) - Plekhanov Russian University of Economics, Kemerovo Institute (branch);
Abstract: The article considers automated editing of and amending changes to an Extensible Markup Language (XML) configuration file that is protected from external editing in an automated information system (AIS) with financial and analytical content. It describes the basic idea and concept of a module for editing and entering information into the original XML file of an automated system. It considers a method of providing information system functionality to an end user by publishing a web page on the Internet. The paper shows the algorithm of interaction between a user and the program modules. It also describes in detail the technical implementation of the algorithm for editing and automatically changing an AIS configuration file without direct interaction with the software. The article analyzes the configuration file structure in detail and formulates the requirements for its creation. It presents fragments of the configuration file structure formed in the information system, as well as the code referring to a tree element in the XML file. The authors select a suitable software implementation for entering social and economic parameters into a configuration file without interacting with the software product. The described approach is needed when operational processing and visual representation of socio-economic information are required in situational centers that support decision-making in expert analysis of the state and development of socio-economic systems. The authors implemented a software package consisting of the described module and the optimized financial and analytical AIS; it has been tested on socio-economic analysis problems at the situation center for regional socio-economic development of the Kemerovo branch of Plekhanov Russian University of Economics.
Keywords: XML document, web development, situational center, information technologies, automated information system
Visitors: 7387
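For entry 12, a minimal illustration of programmatically changing one value in an XML configuration file, in the spirit of the module described above. The file layout, tag names, and the parameter are invented for this example, not taken from the paper.

```python
# Sketch: update one parameter in an XML configuration file using the
# standard library. Tag names and structure are illustrative only.
import xml.etree.ElementTree as ET

tree = ET.parse("config.xml")
root = tree.getroot()

# Navigate to the tree element holding a (hypothetical) socio-economic
# parameter and replace its value without hand-editing the file.
param = root.find("./parameters/parameter[@name='unemployment_rate']")
if param is not None:
    param.text = "5.2"

tree.write("config.xml", encoding="utf-8", xml_declaration=True)
```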

13. Automatic text classification methods [№1, 2017]
Authors: T.V. Batura (tatiana.v.batura@gmail.com) - A.P. Ershov Institute of Informatics Systems (IIS), Siberian Branch of the Russian Academy of Sciences, Ph.D;
Abstract: Text classification is one of the main tasks of computational linguistics because it unites a number of other problems: topic identification, authorship attribution, sentiment analysis, etc. Content analysis in telecommunication networks is of great importance for ensuring information security and public safety. Texts may contain illegal information (including data related to terrorism, drug trafficking, organization of protest movements, and mass riots). This article provides a survey of text classification methods. The purpose of the survey is to compare modern methods for solving the text classification problem, detect trends, and select the best algorithm for research and commercial use. The dominant modern approach to text classification is based on machine learning methods. Selecting a particular classification method requires taking into account the characteristics of each algorithm. This article describes the most popular algorithms, the experiments carried out with them, and the results of these experiments. The survey is based on scientific publications that are publicly available on the Internet, published in 2011–2016, and highly regarded by the scientific community. The article analyzes and compares the classification methods along the following characteristics: precision, recall, running time, ability to run in incremental mode, the amount of preliminary information necessary for classification, and language independence.
Keywords: text categorization, analysis of text information, data processing, machine learning, neural network, quality of classification
Visitors: 30185
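As a point of reference for the machine-learning approach the survey in entry 13 covers, here is a minimal supervised text classifier (TF-IDF features plus logistic regression). The toy data and labels are invented; this is one representative algorithm of the kind compared, not the survey's recommendation.

```python
# Sketch: a basic machine-learning text classifier of the kind
# compared in the survey. Data and labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["stock prices rose sharply", "the team won the final match",
         "quarterly earnings beat forecasts", "the striker scored twice"]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["market earnings report"]))  # -> ['finance']
```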

14. An automated analysis method for short unstructured text documents [№1, 2017]
Author: P.Yu. Kozlov (originaldod@gmail.com) - Smolensk Branch of the Moscow Power Engineering Institute;
Abstract: The paper considers the problem of automated analysis of text documents in the executive and legislative authorities. It identifies a group of characteristics for classifying text documents, their types, and methods of analysis and rubrication. There is a list of the document types that need to be classified. To analyze short unstructured text documents, the author proposes a classification method based on weighting coefficients, expert information, fuzzy inference with a developed probabilistic mathematical model, a training procedure, and an experimentally chosen ratio of weight coefficients. The developed method requires training. During training, the thesaurus words for each domain are divided into three types: unique, rare, and common. The words are assigned weights depending on their type. To keep the weight and frequency coefficients up to date, dynamic clustering is proposed. The developed method makes it possible to analyze the processed documents while taking into account the variability of thesaurus headings. The paper presents a scheme of an automatic classification system for unstructured text documents written in natural language. Text documents can be of various types: long, short, and very short. Depending on the document type, the system uses the corresponding analysis method that has the best precision and recall for that type of text. Parsing is performed by MaltParser trained on the Russian National Corpus. The result of the whole system's work is a knowledge base that includes all extracted knowledge and relations. The knowledge base is constantly updated and is used by employees of the executive and legislative authorities to handle incoming requests.
Keywords: dynamic thesaurus, short unstructured texts, automated text analysis
Visitors: 6534
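A schematic of the weighting idea in entry 14: domain thesaurus words are split into unique, rare, and common types, each with its own weight, and a document is assigned the domain with the highest weighted score. The weights, domains, and word lists below are invented; the paper's actual coefficients were chosen experimentally.

```python
# Sketch of scoring a short text against domain thesauri with
# per-type weights. Weights and thesauri are illustrative only.
WEIGHTS = {"unique": 3.0, "rare": 2.0, "common": 1.0}

THESAURI = {
    "housing": {"unique": {"tenancy"}, "rare": {"landlord"}, "common": {"payment"}},
    "transport": {"unique": {"tram"}, "rare": {"route"}, "common": {"payment"}},
}

def classify(text):
    tokens = set(text.lower().split())
    scores = {}
    for domain, groups in THESAURI.items():
        # Sum the weight of every thesaurus word found in the text.
        scores[domain] = sum(
            WEIGHTS[kind] * len(tokens & words)
            for kind, words in groups.items()
        )
    return max(scores, key=scores.get)

print(classify("complaint about landlord and tenancy payment"))  # -> housing
```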

15. Scene geometry for detector precision improvement [№1, 2017]
Authors: E.V. Shalnov (eshalnov@graphics.cs.msu.ru) - Lomonosov Moscow State University; A.S. Konushin (ktosh@graphics.cs.msu.ru) - Lomonosov Moscow State University, Ph.D;
Abstract: Object detection algorithms are a key component of any intelligent video content analysis system. High computational requirements and low precision of existing methods restrain widespread adoption of intelligent video content analysis. The paper introduces a novel algorithm that accelerates existing sliding-window object detectors and increases their precision. The approach is based on the geometric properties of the observed scene. If the camera position in the scene is known, we can determine the feasible sizes of detected objects at each location of an input image. Windows of other sizes cannot correspond to objects in the scene and thus can be skipped, which significantly decreases computation time. The proposed algorithm estimates feasible object sizes for each location of an input image. We apply a neural network (NN) to solve this task. The NN takes camera calibration parameters and window parameters as input and determines whether this configuration is feasible. We train the NN on a synthetic dataset, which allows us to cover a huge range of camera calibration parameters. We apply the NN to construct a map of feasible object sizes for the input scene, so the detector processes only the feasible subset of windows. The performed evaluation reveals that the proposed algorithm accelerates processing by 70 % and increases detector precision.
Keywords: neural network, computer vision, object detection, video analytics, computer graphics, pattern recognition
Visitors: 7863
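The core speed-up in entry 15 comes from skipping sliding windows whose size is geometrically infeasible at a given image location. Below is a stub of that filtering step; the feasibility test stands in for the paper's trained neural network and uses a crude ground-plane heuristic with invented thresholds.

```python
# Sketch: prune sliding windows by a feasibility test before running
# the (expensive) detector. The test is a stand-in for the paper's
# neural network; all numbers are illustrative.
def window_feasible(base_y, height, horizon_y=100, scale=0.5):
    # Toy heuristic: objects standing on the ground plane appear
    # larger the further below the horizon their base lies.
    expected = scale * max(0, base_y - horizon_y)
    return 0.5 * expected <= height <= 2.0 * expected

def candidate_windows(img_h, img_w, win_h, win_w, stride=8):
    for y in range(0, img_h - win_h, stride):
        for x in range(0, img_w - win_w, stride):
            if window_feasible(y + win_h, win_h):
                yield (x, y, win_w, win_h)  # only these reach the detector

kept = sum(1 for _ in candidate_windows(480, 640, 64, 32))
print(kept)  # far fewer windows than the full sliding-window grid
```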

16. A software agent to determine a student's psychological state in e-learning systems [№1, 2017]
Authors: E.L. Khryanin (evgeshah@list.ru) - Vologda State University, Bank "Vologzhanin" (Chief Engineer); A.N. Shvetsov (smithv@mail.ru) - Vologda State University (Professor), Ph.D;
Abstract: The article considers the problem of using software agents to assess students' psychological state in an e-learning system. The hypothesis of the study is the following: the more psychologically suitable the material is for a student, the faster and better it is learned. This requires an automatic algorithm for selecting material. The article describes an e-learning system that has been developed over five years and tested at a state university. There is a brief description of the e-learning system implementation, including the agent interaction scheme, the main database tables, and the backend and frontend implementation. The paper also describes a method and an algorithm for determining a student's perceptual modality during psychological testing. Statistical methods are used to predict the probability of a student logging in. The authors propose weight coefficients, based on how frequently students use the e-learning system, that the agent determining their psychological state uses to make decisions. The paper describes the created algorithm for automatically deciding whether testing is needed. The study involved more than 90 people in three groups: a control group, a group with recommended material, and a group with material chosen by the agent. The study produced formulas for calculating perceptual modality over several consecutive measurements, with an example of refining the calculation for contradictory data. The experiment has shown positive results for the recommendation mode: more than 61 % of students passed the control test, and more than half of the group solved a difficult task (about 42 % and 12 % in the control group, respectively). The article concludes that using a psychological state determination agent in e-learning systems is expedient.
Keywords: MVC, MySQL, PHP, learning outcome assessment, perceptual modality, psychological state diagnostics, e-learning management system, intelligent system, agent-oriented approach
Visitors: 7892
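Entry 16 mentions formulas for combining several consecutive perceptual-modality measurements and refining contradictory data. Those formulas are not reproduced here; the sketch below uses a plain exponentially weighted average as one illustrative aggregation rule, not the authors' method.

```python
# Sketch: aggregate repeated modality test scores with an exponential
# moving average so newer measurements dominate. This is an
# illustrative rule, not the formula from the paper.
def aggregate(measurements, alpha=0.6):
    # measurements: list of dicts like {"visual": 0.5, "auditory": 0.3, ...}
    state = dict(measurements[0])
    for m in measurements[1:]:
        for modality, score in m.items():
            state[modality] = alpha * score + (1 - alpha) * state[modality]
    return max(state, key=state.get)  # dominant perceptual modality

tests = [{"visual": 0.6, "auditory": 0.3, "kinesthetic": 0.1},
         {"visual": 0.2, "auditory": 0.7, "kinesthetic": 0.1}]
print(aggregate(tests))  # -> auditory
```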

17. Monitoring the frequency resource of geostationary repeater satellites using covering entropy [№1, 2017]
Authors: A.V. Sukhov (avs57@mail.ru) - Moscow Aviation Institute (National Research University) (Professor), Ph.D; V.N. Reshetnikov (rvn_@mail.ru) - Center of Visualization and Satellite Information Technologies SRISA (Professor, Supervisor), Ph.D; S.B. Savilkin (savilkin@mail.ru) - Moscow Aviation Institute (National Research University) (Associate Professor), Ph.D;
Abstract: The paper considers radio-frequency spectrum monitoring for repeater satellites in geostationary orbits. It solves the optimization problem of detecting an interference source within a given search time and with a given accuracy of the interference source coordinates. The optimization problem is solved in the target information space based on covering entropy. Unauthorized ground radio transmitters are positioned by analyzing the signal time delay and the Doppler shift of the signal frequency. The location of an interference source on the Earth's surface can be determined from transmitter signals relayed through a single communications satellite in geostationary orbit. A small Doppler shift of the signal carrier frequency, caused by a small displacement of the satellite on its orbit relative to the Earth's surface, can be used to calculate the transmitter location. The paper focuses on the potentially achievable estimation accuracy and on choosing an efficient approach (in the sense of minimum covering entropy) to optimizing the measurement time. Measurement session time, the signal-to-noise ratio, and measurement parameters are interrelated. The relationships between real and specified measurement parameters are captured by an information measure, the covering entropy (due to A. Sukhov). The covering entropy characterizes the efficiency of systems that can be represented by a vector of performance indicators in accordance with their intended use. A minimum value of zero means that regulatory requirements are fulfilled; positive values characterize the level of generalized compliance. The authors evaluated the potential information efficiency of determining interference source coordinates using the Doppler frequency shift effect based on covering entropy.
Keywords: spectrum monitoring, covering entropy, geostationary orbit, measuring tools, estimation
Visitors: 7849
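The positioning principle in entry 17 relies on the small Doppler shift induced by residual satellite motion. The relation itself is elementary; below is a sketch of the expected shift for a candidate transmitter position. The geometry is simplified to plain 3D vectors, the numbers are illustrative, and the covering-entropy optimization is not reproduced.

```python
# Sketch: expected Doppler shift of an uplink carrier caused by
# satellite drift relative to a ground transmitter. Values illustrative.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def doppler_shift(tx_pos, sat_pos, sat_vel, carrier_hz):
    # Radial velocity = projection of satellite velocity onto the
    # line of sight from transmitter to satellite.
    los = sat_pos - tx_pos
    v_radial = np.dot(sat_vel, los) / np.linalg.norm(los)
    return carrier_hz * v_radial / C

sat_pos = np.array([42_164e3, 0.0, 0.0])     # ~GEO orbital radius, m
sat_vel = np.array([0.0, 1.5, 0.0])          # small residual drift, m/s
tx_pos = np.array([6_356e3, 500e3, 0.0])     # point near Earth's surface
print(doppler_shift(tx_pos, sat_pos, sat_vel, 14e9))  # ~ -1 Hz
```

Inverting this relation over many measurement epochs, together with the time-delay measurements, constrains the transmitter position on the Earth's surface.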

18. Statistical analysis of test results for aeronautical engineering products under random censoring [№1, 2017]
Authors: L.V. Agamirov (mmk@mati.ru) - MATI (Russian National Research University), Ph.D; V.L. Agamirov (avl095@mail.ru) - Moscow Aviation Institute (National Research University), Ph.D; V.A. Vestyak (kaf311@mai.ru) - Moscow Aviation Institute (National Research University), Ph.D;
Abstract: The article considers a technique for point and interval estimation of distribution parameters in the statistical analysis of fatigue tests of aircraft structural elements based on the least squares method. The technique takes censored observations into account. The relevance of the study stems from the fact that, when estimating the distribution parameters of fatigue property characteristics in a statistical analysis of fatigue tests of aircraft equipment, it is necessary to take into account samples whose tests finished before reaching a critical condition. Solving this problem with known methods (the maximum likelihood method) is complicated by the nonmonotonicity of the objective function, multiple local extrema, etc. The first part of the article is devoted to a technique for estimating the distribution parameters of observed random variables on a complete sample obtained from a repeatedly censored (incomplete) sample by bootstrap simulation based on order statistics. The original randomly censored sample is transformed into a quasicomplete one so that the least squares method, which is applicable only to complete samples, can be used to estimate the distribution parameters. The second part of the article is devoted to constructing confidence limits for a quantile of the observed random variable's distribution. In aircraft engineering this is used to assess a guaranteed service life normalized to the lower confidence limit of a durability quantile. The article presents a technique for reducing a repeatedly censored incomplete sample, in the general case, to an equivalent quasicomplete sample, for which the least squares method can be used to obtain the most stable and efficient estimates with minimum variance. Thus, the problem of point and interval estimation of the distribution parameters of fatigue property characteristics of aircraft structural elements under multiply censored observations is solved.
Keywords: survivability, bootstrap modeling, method of least squares, order statistics, random censoring
Visitors: 7595
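A simplified illustration of the least-squares step entry 18 relies on: once a complete (or quasicomplete) sample is available, distribution parameters can be fit by regressing ordered observations against plotting positions. The Weibull median-rank regression below is the textbook version on invented data; the paper's censored-to-quasicomplete bootstrap transformation is not shown.

```python
# Sketch: least-squares fit of Weibull parameters from a complete
# sample via median-rank regression. Fatigue lives are toy values.
import numpy as np

lives = np.sort(np.array([61e3, 75e3, 88e3, 102e3, 130e3, 161e3]))  # cycles
n = len(lives)
ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank estimates

# Weibull CDF F(t) = 1 - exp(-(t/eta)^beta) linearizes as
# ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta).
x = np.log(lives)
y = np.log(-np.log(1.0 - ranks))
beta, intercept = np.polyfit(x, y, 1)   # slope = shape parameter
eta = np.exp(-intercept / beta)         # scale parameter
print(f"shape beta={beta:.2f}, scale eta={eta:.0f} cycles")
```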

19. An object detection algorithm for low-quality photographs [№1, 2017]
Author: A.S. Viktorov (alsevictor@mail.ru) - Kostroma State University;
Abstract: The article considers a set of algorithms for recognizing objects of a specified class in low-quality photographs obtained from a low-resolution camera. A special feature of the considered detection method is the ability to detect objects even when their size in the image does not exceed several tens of pixels. Each processed image is scanned with a sliding window of fixed width and height that reads rectangular image regions with a specified overlap between neighboring regions. Each scanned image region is first processed by a discriminative autoencoder that extracts a feature vector from it. The extracted vector is then analyzed by a classifier based on a probabilistic multinomial regression model, which checks whether the scanned image region contains an object or its parts. The classifier calculates the probability of detecting an object of a certain class in each scanned image region. Based on the scan results, a conclusion is drawn about the presence of an object and its most probable position in the photograph. To improve the accuracy of computing the detected object's boundaries, the detection probability is interpolated for each pixel analyzed for membership in the object image. After that, based on the distribution of detected pixels in the image, the boundaries of the detected object can be estimated. The experiment has revealed that using a discriminative autoencoder significantly increases the robustness of the detection algorithm. The article also gives a detailed description of the learning process and algorithm parameter adjustment. The results of this research can be widely used to automate various processes, for example, collecting and analyzing information in various analytical systems.
Keywords: cascaded denoising autoencoder, relevance vector machine, neural network, loss function, training sample, feature vector, object detection, likelihood function
Visitors: 9432
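The pipeline in entry 19 is: sliding windows, autoencoder feature extraction, then multinomial (softmax) classification per window. Here is a skeleton of the scan loop with both models replaced by random stand-in weights; shapes, the target-class index, and the threshold are illustrative.

```python
# Sketch of the scan loop: every overlapping window is encoded to a
# feature vector and scored by a softmax classifier. The encoder and
# classifier weights are random stand-ins for trained models.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(32 * 32, 64))   # stand-in autoencoder encoder
W_cls = rng.normal(size=(64, 3))         # stand-in multinomial weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def scan(image, win=32, stride=8):
    detections = []
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            patch = image[y:y + win, x:x + win].reshape(-1)
            features = np.tanh(patch @ W_enc)     # encoder output
            probs = softmax(features @ W_cls)     # class probabilities
            if probs[1] > 0.9:                    # class 1 = target object
                detections.append((x, y, float(probs[1])))
    return detections

print(len(scan(rng.random((96, 96)))))
```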

20. Automatic syntactic analysis of Chinese sentences with a restricted dictionary [№1, 2017]
Authors: Yu Chuqiao (yuchuqiao123@gmail.com) - The National Research University of Information Technologies, Mechanics and Optics; I.A. Bessmertny (bia@cs.ifmo.ru) - The National Research University of Information Technologies, Mechanics and Optics (Professor), Ph.D;
Abstract: The paper considers the problem of natural language processing of Chinese texts. One of the relevant tasks in this area is automatic fact acquisition by a query, since existing automatic translators are useless for this task. The suggested approach includes syntactic analysis of phrases and matching the identified parts of speech against a formalized query. The purpose of the study is to extract facts directly from original texts without translation. For this purpose, the paper suggests an approach based on syntactic analysis of the sentences of a text with subsequent comparison of the found parts of speech with a formalized subject-object-predicate query. A key feature of the proposed approach is the absence of a phase that segments the character sequence of a sentence into words. The bottleneck in this task is the dictionary, because interpretation of a sentence is impossible if even a single word is missing from the dictionary. To eliminate this problem, the authors propose to identify the sentence pattern by function words, while the restricted dictionary can be compensated by automatically building a thesaurus through statistical processing of a document corpus. The suggested approach is tested on a small topic domain, where it demonstrates robustness. There is also an analysis of the temporal properties of the developed algorithm. As the proposed algorithm uses direct search, the parsing speed for real tasks could be unacceptably low; this is a subject for further research.
Keywords: natural language, syntactic analysis, fact extraction, thesaurus, search tree
Visitors: 7331
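A toy version of the matching step in entry 20: a sentence pattern anchored on a function word splits the character sequence into subject, predicate, and object without prior word segmentation. The pattern table and the example sentence are invented for illustration; the paper's actual pattern inventory is richer.

```python
# Sketch: match a subject-predicate-object pattern against a Chinese
# sentence using a function word as an anchor, with no word
# segmentation. The single pattern shown is illustrative only.
PATTERNS = [
    # copula "是" ("to be") splits the sentence into subject and object
    ("是", lambda s, i: {"subject": s[:i], "predicate": "是", "object": s[i + 1:]}),
]

def extract_svo(sentence):
    for anchor, build in PATTERNS:
        i = sentence.find(anchor)
        if i > 0:
            return build(sentence, i)
    return None  # no known sentence pattern matched

print(extract_svo("北京是中国的首都"))
# -> {'subject': '北京', 'predicate': '是', 'object': '中国的首都'}
```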
