Journal articles №2 2016
1. Parallel programming in mathematical suites [№2 2016]
Authors: Chernetsov A.M. (an@ccas.ru) - Dorodnicyn Computing Centre FRC CSC RAS, National Research University “MPEI” (Research Associate, Associate Professor), Ph.D;
Abstract: In recent years, parallel programming tools have been used to solve computationally demanding tasks. Programming models for shared and distributed memory are well known; hybrid models appeared later. However, all these tools assume fairly low-level programming in which the source code is modified significantly. A considerable share of mathematical computations is performed not in algorithmic languages (C/C++, Fortran), but in special mathematical suites such as MATLAB, Maple, Mathematica and Mathcad. The paper discusses parallel programming tools in modern mathematical suites. It gives a short review of how parallel programming tools have developed in the well-known suites MATLAB, Maple, Mathematica and Mathcad. The paper briefly describes the main parallel programming primitives in MATLAB and their MPI analogs, and mentions other parallel programming operators. It describes the different forms of parallelism in Maple (thread programming, the high-level Task Programming Model, parallel programming) and the basic constructions of parallel programming in Mathematica's Wolfram Language, illustrating them with examples. The available possibilities differ from suite to suite; however, any of the problems considered can be solved in each of these suites (except Mathcad).
Keywords: WSTP, MPI, mathematical suites, parallel programming
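The suites' parallel primitives differ, but the common pattern is a data-parallel loop over independent iterations. Below is a minimal sketch of that pattern using Python's standard multiprocessing module, standing in for MATLAB's parfor or Mathematica's ParallelMap; the function square is a made-up placeholder for any expensive, independent per-element computation.

```python
# Data-parallel loop pattern: distribute independent iterations over
# worker processes, then gather the results.
from multiprocessing import Pool

def square(x):
    # placeholder for an expensive, independent computation
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:              # cf. opening a parallel pool in MATLAB
        results = pool.map(square, range(1000))  # cf. parfor / ParallelMap
    print(sum(results))
```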
2. Classification algorithm based on random forest principles for a forecasting problem [№2 2016]
Authors: Kartiev S.B. (mlearningsystems@gmail.com) - Academy of Engineering and Technology Southern Federal University; Kureichik V.M. (kur@tgn.sfedu.ru) - Taganrog Institute of Technology Southern Federal University, Ph.D;
Abstract: This article considers methods of constructing ensembles of models to solve the forecasting problem. One of the major forecasting stages is classification, which contains the basic logic of predictive models. The article describes the “random forest” classification method and presents the pros and cons of the methods used. The authors justify the choice of this method for the forecasting system under development. The paper presents an algorithm for random forest construction based on a combination of decision-making elements and training methods for the generated data structures using a modified random forest (MRF) training algorithm. The fundamental difference of this method is that, for a forecasting task, it finds the optimal class to which the object in question belongs. The paper describes the software implementation in Java using the principles of generic programming, and presents the basic data structure as a UML diagram. The article defines the place of the developed module in a diagnostic system for complex technical systems that supports software system maintenance using modeling principles based on temporal logic. Experimental research shows the efficiency of the described method compared to existing ones: classification quality improved by approximately 5 % compared to previous experiments.
Keywords: temporal logic, forecasting, algorithm, random forest, classification
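For readers unfamiliar with the baseline, the sketch below shows the standard random forest classifier that the MRF algorithm modifies, using scikit-learn on synthetic data; it is not the authors' Java implementation, and the dataset parameters are arbitrary.

```python
# Standard random forest baseline: an ensemble of decision trees
# trained on bootstrap samples, predicting by majority vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```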
3. A model and algorithmization of the assignment problem under additional constraints [№2 2016]
Authors: Kordyukov R.Yu. (romkord@yandex.ru) - Main Department of scientific and research activities and technological support of the advanced technologies of the Ministry of defense of the Russian Federation, Ph.D; Dopira R.V. (rvdopira@yandex.ru) - NPO RusBITex (Professor, Head of Department), Ph.D; Ivanova A.V. (tiki.mikck@yandex.ru) - NPO RusBITex (Junior Researcher); Abu-Abed F.N. (aafares@mail.ru) - Tver State Technical University (Associate Professor, Dean), Ph.D; Martynov D.V. (idpo@tstu.tver.ru) - Tver State Technical University, Ph.D;
Abstract: The article discusses the problem of optimally selecting candidates to work on tender projects based on the financial conditions the candidates offer. It presents the key criteria for selecting suitable candidate applications on the basis of pre-announced standards. The problem is formalized; the objective function minimizes the costs of project implementation. The developed model aggregates the source data and constraints into one system and allows operating on the initial conditions to analyze them. The authors offer a special algorithm for searching for optimal assignment variants based on graph theory, the method of sequential analysis and option screening, and implicit enumeration. The algorithm takes into account the requirements for candidates' applications and works both in the presence of enterprises' maximum and minimum financial constraints and in their absence. It allows selecting performers for a complex project whose successful completion involves many individual projects. The developed software provides tools for creating a list of competitive projects, candidates for their implementation, and their applications for certain types of work, taking into account the existing cost, time and probability limits. The algorithm finds all applications that meet the requirements of the standards and then determines the optimal selection among them, taking into account the performers' possibilities of acquiring the allocated resources.
Keywords: implicit enumeration, graph theory, cost optimization, project distribution, tender, assignment problem
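The core of the problem, before the article's additional financial constraints are imposed, is the classical assignment problem. A minimal sketch with SciPy's Hungarian-algorithm routine, on a made-up cost matrix:

```python
# Classical assignment problem: pick one candidate per project so that
# the total cost is minimal.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],    # cost[i][j] = cost of assigning
                 [2, 0, 5],    # candidate i to project j
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)
print("assignment:", list(zip(rows, cols)),
      "total cost:", cost[rows, cols].sum())
```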
4. Algorithms for equipment reliability test in an automatic control system [№2 2016]
Authors: Rusin A.Yu. (alrus@tvcom) - Tver State Technical University (Associate Professor), Ph.D; Abdulkhamed M. (alrus@tvcom) - Tver State Technical University (Postgraduate Student); Baryshev Ya.V. (alrus@tvcom) - Tver State Technical University (Postgraduate Student);
Abstract: The economic efficiency of an equipment reliability testing system can be improved by reducing running time or the number of specimens. When running time is reduced, the degree of sample censoring increases; reducing the number of specimens decreases the number of recorded equipment running times. Test requirements may be relaxed only if the information processing methods ensure the validity of the calculated reliability characteristics. Testing produces small censored samples of mean time between equipment failures, from which reliability is estimated by the maximum likelihood method. The article presents experimental studies of the precision of maximum likelihood estimates of the exponential distribution parameter on small singly right-censored samples. The authors used computer simulation of censored samples similar to those formed in equipment reliability testing. The experimental data show that most maximum likelihood estimates obtained from small singly right-censored samples deviate significantly from the true values. The work includes regression models that relate the deviation of a maximum likelihood estimate from the true value to parameters characterizing the sample structure; they allow calculating and applying corrections to maximum likelihood estimates. The paper also includes experimental studies of their use: after applying the developed models and correcting the maximum likelihood estimates, estimation accuracy increases. Software has been developed to apply the regression models in practice.
Keywords: software, maximum likelihood method, censored samples, reliability, equipment test, information processing, computer modeling
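For a singly right-censored exponential sample, the textbook maximum likelihood estimate of the failure rate is lambda_hat = r / (sum of observed failure times + (n - r) * t_c), where r failures are observed out of n units and the rest are censored at time t_c. The sketch below simulates such a sample and computes this estimate; it illustrates the estimator the article studies, not the authors' regression-based correction.

```python
# MLE of the exponential failure rate from a singly right-censored
# sample: r observed failure times plus (n - r) units censored at t_c.
import random

def mle_exponential_censored(observed_times, n_total, t_censor):
    r = len(observed_times)
    return r / (sum(observed_times) + (n_total - r) * t_censor)

random.seed(1)
n, true_rate, t_c = 20, 0.5, 2.0
lifetimes = [random.expovariate(true_rate) for _ in range(n)]
observed = [t for t in lifetimes if t <= t_c]   # failures seen before censoring
print("estimate:", mle_exponential_censored(observed, n, t_c),
      "true rate:", true_rate)
```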
5. Approximate reasoning based on temporal fuzzy Bayesian belief networks [№2 2016]
Authors: Borisov V.V. (BYG@yandex.ru) - Smolensk Branch of the Moscow Power Engineering Institute, Ph.D; Zakharov A.S. (auth1989@yandex.ru) - Smolensk Branch of the Moscow Power Engineering Institute;
Abstract: The article considers the problem of modeling approximate reasoning under uncertainty. It describes a temporal fuzzy Bayesian network: a Bayesian belief network in which the preconditions of cause-effect relationships are complex temporal expressions and the truth measure of a statement is a fuzzy probability measure. A temporal fuzzy Bayesian network allows qualitative and quantitative specification of cause-effect relationships, taking into account temporal dependencies under conditions of stochastic and non-stochastic uncertainty. The result of approximate reasoning is the value of the fuzzy probability truth measure of a statement that a network node is in one of its states. The reasoning process is implemented as a sequential transition between moments of time, with probabilistic inference in the temporal fuzzy Bayesian network performed at each moment. When temporal dependencies are present, inference at each moment uses the reasoning results obtained at previous steps. To model approximate reasoning based on a temporal fuzzy Bayesian network, the authors propose a method that determines the values of the fuzzy probability truth measure of statements during forward and backward reasoning, considering complex temporal dependencies. The proposed method is based, first, on transforming a fuzzy Bayesian network with complex temporal statements into a form containing only simple temporal statements; second, on constructing a join tree from the source fuzzy Bayesian network; and third, on calculating the fuzzy probability distribution by transmitting messages between join tree nodes, as well as on a time constraint network for transmitting messages through heterogeneous join tree separators. The paper describes the developed software tools that implement the proposed model and method of approximate reasoning. There are examples of using the developed model and method for analysing the mental and emotional state of patients.
Keywords: temporal fuzzy Bayesian network, fuzzy probability measure, approximate reasoning modeling
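The fuzzy measures and join-tree machinery are beyond a short sketch, but the sequential transition between time moments that the abstract describes has a simple crisp skeleton: a belief over a node's states propagated step by step through a transition model. A toy illustration with made-up probabilities and no fuzziness:

```python
# Crisp skeleton of temporal inference: the belief over a two-state
# node is pushed forward one time step at a time.
transition = {("ok", "ok"): 0.9, ("ok", "fail"): 0.1,
              ("fail", "ok"): 0.2, ("fail", "fail"): 0.8}

belief = {"ok": 1.0, "fail": 0.0}   # initial state is known
for t in range(1, 6):
    belief = {s2: sum(belief[s1] * transition[(s1, s2)] for s1 in belief)
              for s2 in ("ok", "fail")}
    print(f"t={t}: {belief}")
```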
6. A hybrid desktop/cloud platform for design space exploration [№2 2016]
Authors: Prokhorov A.A. (alexander.prokhorov@datadvance.net) - DATADVANCE LLC (Head of Department); Nazarenko A.M. (alexey.nazarenko@datadvance.net) - DATADVANCE LLC (Senior Programmer); Perestoronin N.O. (nikita.perestoronin@datadvance.net) - DATADVANCE LLC (Senior Programmer); Davydov A.V. (andrey.davydov@datadvance.net) - DATADVANCE LLC (Technical Writer);
Abstract: Modern engineering practice shows that simulation-driven design is arguably the most promising method to reduce lead time and development costs. However, its application involves a number of methodological and operational difficulties, so it remains limited and is generally unavailable to smaller companies that lack the required resources. The high entry barrier of this method is a consequence of the complexity and cost of implementing the simulation models required to solve modern multidisciplinary engineering problems. Developing such models requires a high level of expertise in many subject domains, as well as various specialized software products that are usually available only on a commercial basis. Moreover, performing large-scale simulations leads to additional costs for developing and maintaining a high-performance computing system. The paper considers the main issues of performing the large-scale automated simulations that are required when computational methods are applied at early design stages to support the search for new design decisions; by contrast, the more common practice is to use simulation experiments only at the later stage of design validation, which does not require mass calculations. The paper discusses ways of lowering the entry barrier, paying attention to the existing practice of developing integrated solutions accessible to a wide range of users, as well as to the opportunity of at least partially moving simulation experiments into the cloud, which would lower simulation costs. The authors also consider developing hybrid integrated applications based on both cloud and desktop software. The paper formulates the resulting requirements for a process integration and automation platform that supports both cloud and desktop components, allowing the development of hybrid integrated applications aimed at solving classes of similar tasks. It then describes the software architecture developed with regard to these requirements, which minimizes the resources required for implementation because its main components can be used in both the cloud and desktop versions.
Keywords: design process management, integration, cloud computing, engineering automation
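One way to read the architectural requirement is a single solver interface with interchangeable desktop and cloud back ends, so the same workflow runs in either environment. The sketch below is a hypothetical illustration of that design idea; the class names and the job-submission endpoint are invented, not DATADVANCE's actual API.

```python
# One workflow, two interchangeable execution back ends.
from abc import ABC, abstractmethod

class SimulationBackend(ABC):
    @abstractmethod
    def run(self, model: str, params: dict) -> dict: ...

class DesktopBackend(SimulationBackend):
    def run(self, model, params):
        # would invoke a locally installed solver here
        return {"where": "desktop", "params": params}

class CloudBackend(SimulationBackend):
    def __init__(self, url: str):
        self.url = url  # hypothetical HTTPS job-submission endpoint
    def run(self, model, params):
        # would submit the job to self.url and poll for results here
        return {"where": "cloud", "params": params}

def explore(backend: SimulationBackend, cases):
    # a design-space exploration loop stays back-end-agnostic
    return [backend.run("wing_model", c) for c in cases]

print(explore(DesktopBackend(), [{"aoa": 2.0}, {"aoa": 4.0}]))
```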
7. Preprocessing sets of precedents to construct decision functions in classification problems [№2 2016]
Authors: Gdansky N.I. (al-kp@mail.ru) - Moscow Polytechnic University (Professor), Ph.D; Kulikova N.L. (kulikovanl@mpei.ru) - National Research University "Moscow Power Engineering Institute" (Associate Professor), Ph.D; Krasheninnikov A.M. (lifehouse@list.ru) - Russian State Social University (Senior Lecturer);
Abstract: The article considers the important problem of errors in training samples used for the subsequent construction of decision functions from precedents in problems of classifying new objects. The paper investigates the main causes of these errors and their impact on classifier construction. Based on the geometric interpretation of a classification problem, the authors propose methods that not only analyze the quality of a training sample, but also identify possible causes of the errors it contains and perform the correction required for the subsequent construction of an effective classifier. To numerically account for the share of outliers that must be removed or corrected in a training sample, the authors propose using corresponding maximum allowable threshold values, with recommendations for the main subject areas. The precedent analysis algorithm uses a special measure of a single object's proximity to an arbitrary class. It is similar to the nearest neighbor method, with the difference that the neighborhood is determined not by one nearest point but by several points. The complexity of the proposed analysis and correction algorithms is polynomial in the number of points in the training sample: quadratic in the first case and linear in the second. The corrected training set yields smoother class boundaries in the space of feature values; consequently, the set of points better satisfies the compactness hypothesis and gives decision functions with a simpler structure, which require fewer computing operations to solve the classification problem.
Keywords: correction, analysis, erroneous data, precedent, learning sample, decision function, classifier, classification problem
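The neighborhood-based proximity measure the abstract mentions can be pictured with a generic k-nearest-neighbour filter: a point is suspect if most of its nearest neighbours carry a different label. The sketch below is that generic filter on synthetic data, not the authors' exact measure or thresholds.

```python
# Flag points whose k nearest neighbours mostly disagree with their label.
import numpy as np

def suspicious_points(X, y, k=5, threshold=0.5):
    flagged = []
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]   # skip the point itself
        if np.mean(y[neighbours] != y[i]) > threshold:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)   # two half-plane classes
y[0] = 1 - y[0]                 # inject one label error
print(suspicious_points(X, y))  # point 0 is expected to be flagged
```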
8. Methods of automatic ontology construction [№2 2016]
Authors: Platonov A.V. (avplatonov@corp.ifmo.ru) - The National Research University of Information Technologies, Mechanics and Optics; Poleschuk E.A. (eapoleschuk@corp.ifmo.ru) - The National Research University of Information Technologies, Mechanics and Optics;
Abstract: The article describes the process of automatically generating a domain ontology from input text corpora. In particular, it describes processes similar to those of the Biperpedia and BOEMIE Project systems. The paper covers the basic steps of automatic ontology construction: the domain-object extraction process, the concept extraction process (concepts being terms that generalize a set of objects), and the extraction of semantic relations and rules. The paper reviews algorithms for each step of the ontology construction process. For domain-object extraction, it considers the named entity recognition task and regular expression generation based on genetic programming. The authors propose using a sequential pattern mining approach to extract term sequences for object identification. The paper describes the basic steps of concept extraction and reviews the task of extracting concept attributes. The article also describes a lexico-syntactic pattern approach to extracting domain semantic relations, and the authors propose an approach to this task based on association rule mining, as in frequent pattern mining. The paper presents three methods of evaluating ontology learning: a golden sample method, a human evaluation method, and an indirect method using client-application evaluation. It describes the positive and negative aspects of each method and proposes a compromise for estimating model quality.
Keywords: semantic relation extraction, named entity recognition, ontology
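Lexico-syntactic patterns for relation extraction go back to Hearst patterns such as "X such as Y1, Y2 and Y3". Below is a minimal sketch of one such pattern as a regular expression; real systems of the kind surveyed combine many patterns with NER and pattern learning.

```python
# One Hearst-style pattern: "<hypernym>s such as <hyponym list>".
import re

PATTERN = re.compile(r"(\w+(?: \w+)?)s such as ((?:\w+(?:, | and )?)+)")

text = ("The corpus mentions mathematical suites such as "
        "MATLAB, Maple and Mathematica.")
for m in PATTERN.finditer(text):
    hypernym = m.group(1)                        # e.g. "mathematical suite"
    hyponyms = re.split(r", | and ", m.group(2))
    print(hypernym, "->", hyponyms)
```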
9. The method of distributed analysis of verifiable models properties [№2 2016]
Authors: Shipov A.A. (a-j-a-1@yandex.ru) - Russian State Social University, Ph.D;
Abstract: Due to the ever-growing complexity of software systems, useful tools are needed to check that they match their specifications, especially for large distributed software systems. However, verification of such systems often runs into the “combinatorial explosion” problem: the temporal complexity of verification grows sharply with even a modest increase in the size of the verified system. There are methods to mitigate this problem, such as abstraction, interpretation and on-the-fly verification; nevertheless, in practice the existing methods alone are often not enough. Logic suggests that, like the execution of large distributed software systems, the verification process itself should be carried out in a distributed way. The article offers and analyses a method for overcoming the “combinatorial explosion” problem that can complement the existing methods. The idea is to use a distributed verification algorithm over Büchi automata for linear temporal logic (LTL). This algorithm can increase the efficiency and speed of verification by dividing the computations among a number of computing nodes. Although the idea of distributed computation is not new and similar tools are already present in the Spin model checker, the theoretical material of the article is supported by a set of examples showing in practice that the proposed algorithm is more efficient than the one implemented in Spin.
Keywords: CTL, LTL, temporal logic formula, Büchi automaton, Spin, verification
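The core idea of distributed state-space exploration is to assign each state to a computing node, so successor states are handed off to their owners. The toy below models that partitioning in a single process on a made-up transition relation; the article's actual algorithm operates on Büchi automata and LTL properties.

```python
# Toy model of partitioned state-space exploration across nodes.
N_NODES = 4

def owner(state):
    return state % N_NODES   # which node owns this state

def successors(state):
    return [(state * 2) % 17, (state + 3) % 17]   # synthetic graph

queues = {n: ([0] if owner(0) == n else []) for n in range(N_NODES)}
visited = {n: set() for n in range(N_NODES)}

while any(queues.values()):
    for n in range(N_NODES):        # one round of all nodes working
        while queues[n]:
            s = queues[n].pop()
            if s in visited[n]:
                continue
            visited[n].add(s)
            for t in successors(s):
                queues[owner(t)].append(t)   # hand off to the owner
print({n: sorted(v) for n, v in visited.items()})
```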
10. System analysis and decision-making on reengineering of corporate information management systems [№2 2016]
Authors: Shilnikova O.V. (tmo@mite.ru) - Smolensk Branch of the Moscow Power Engineering Institute (Senior Lecturer);
Abstract: Qualitative and quantitative analysis of the functional parameters, performance properties and survivability of a distributed multi-level information management system is performed using computer modeling tools, including simulation. The models take into account the heterogeneity and variability of structures, network bandwidth and distributed database features. In recent years, investigating the properties of evolving information systems in corporate management has become topical. This article considers the evolution of information management systems (IMS). The process of supporting IMS performance is modeled using an optimal composition of the resources required at the first stage. The model takes into account that the system parameters gradually drift far enough from the optimum that the phase trajectory is “attracted” to a stable but non-optimal point in the evolution. As a result, the conditions necessary for reaching the bifurcation point are fulfilled. The article substantiates the hypothesis that a new version of the IMS needs to be released. Bringing the system to an even more efficient state without interrupting the life cycle requires special solutions; one of them is the release of the next IMS version. Scientific research and consulting units in corporations can carry out such systematic analytical studies of an IMS on their own, or commission them from outside organizations (universities or research institutes).
Keywords: strange attractor, life cycle, synergy, corporation, evolution, emergence, embedded Markov chain, simulation model, bifurcation point, attractor, information-control system
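The drift the abstract describes can be pictured with a toy one-dimensional model: the system state relaxes toward a fixed stable point while the actual optimum keeps moving, so the gap between them grows until an intervention (a new version) is warranted. This is purely a conceptual sketch with made-up numbers, not the article's simulation model.

```python
# Toy drift model: the state is attracted to a fixed point while the
# optimum moves away; the growing gap motivates a new IMS version.
attractor, optimum, state = 1.0, 1.0, 1.0
for step in range(1, 11):
    optimum += 0.2                        # requirements keep evolving
    state += 0.5 * (attractor - state)    # dynamics pull the state back
    print(f"step {step}: state={state:.2f}, "
          f"optimum={optimum:.2f}, gap={optimum - state:.2f}")
```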