ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)

Articles of journal issue № 3, 2018

1. Carbonate reservoirs crosswell survey interpretation by a two-rate test using numerical models [No. 3, 2018]
Authors: I.V. Afanaskin, S.G. Volpin, O.V. Lomakina, Yu.M. Shteynberg
The paper describes a two-rate well test method applied under transient flow. Due to recent developments, this method can determine a set of parameters as wide as that of the widespread pressure build-up test. The method typically investigates an area of a few dozen meters around the well, rarely hundreds. The two-rate method does not require well shut-in, which reduces oil output losses during the test. Its disadvantage is that parameter estimates are less precise. The paper shows that, given a significant rate difference between the two modes and a sufficient duration of the second mode, the two-rate method is a reasonable alternative to an interference test. In this case, the investigated area between wells increases to hundreds of meters. The paper describes a dual-porosity mathematical model for well test interpretation in crosswell surveys of fractured carbonate reservoirs; the model is applicable to interpreting crosswell two-rate test data. An original "classic" finite-difference calculation scheme is given for this model. The authors describe a solution of the inverse problem of subsurface hydrodynamics by Newton's method. The method has been applied to a synthetic downhole pressure curve, and corresponding recommendations are given. Several interpretation variants are presented for refining different reservoir parameters. The relative fracture volume and the matrix-fracture diffusivity coefficient have little influence on the late-time pressure of the tested well. It is recommended to include fracture permeability, matrix porosity, and areal fracture anisotropy among the estimated parameters.
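
To make the inverse-problem step concrete, here is a minimal Gauss-Newton fitting sketch in Python: a forward pressure model is matched to an observed downhole pressure curve by iteratively linearizing the residuals. The forward model is a hypothetical stand-in, not the authors' dual-porosity finite-difference simulator, and the two fitted parameters merely play the roles of fracture permeability and matrix porosity.

import numpy as np

def simulate(params, t):
    # Hypothetical forward model: pressure response during a two-rate test.
    k, phi = params  # stand-ins for fracture permeability and matrix porosity
    return 250.0 - k * np.log1p(t) - phi * np.sqrt(t)

def gauss_newton(t, p_obs, params, iters=20, eps=1e-6):
    params = np.asarray(params, dtype=float)
    for _ in range(iters):
        r = simulate(params, t) - p_obs               # residuals
        # Finite-difference Jacobian of residuals w.r.t. parameters.
        J = np.empty((t.size, params.size))
        for j in range(params.size):
            d = np.zeros_like(params); d[j] = eps
            J[:, j] = (simulate(params + d, t) - simulate(params - d, t)) / (2 * eps)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]  # least-squares Newton step
        params += step
        if np.linalg.norm(step) < 1e-8:
            break
    return params

t = np.linspace(0.1, 100.0, 200)
p_obs = simulate(np.array([5.0, 2.0]), t) + np.random.normal(0, 0.05, t.size)
print(gauss_newton(t, p_obs, np.array([1.0, 1.0])))   # recovers approx. [5.0, 2.0]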

2. A mathematical model approach for evaluation of oil recovery changes caused by wettability during in-situ combustion [No. 3, 2018]
Authors: I.V. Afanaskin, A.V. Korolev, V.A. Yudin
The paper considers a problem arising in the simulation of thermal EOR methods: wettability changes caused by formation heating. Thermal EOR methods may raise the formation temperature 100–500 °C above its initial value. As a result of such an artificial temperature distribution, wettability may change in time and space, and thus the fluid flow in the formation, and hence the result of applying thermal EOR methods, may change dramatically. Investigating temperature-induced wettability changes requires complicated experiments that account for the temperature dependence of a large number of physical processes affecting wettability. Before conducting such experiments, it is necessary to estimate how important this factor is when forecasting the results of thermal EOR methods by computer simulation. The article accounts for temperature-induced wettability changes by making the critical points of the relative phase permeability curves, both for oil and for water, temperature-dependent. The authors consider the problem of multi-phase multi-component non-isothermal flow with chemical reactions, taking into account the temperature dependence of relative phase permeabilities. They propose an original finite-difference scheme that is implicit in pressure and explicit in temperature and concentrations. The calculations showed that neglecting wettability changes caused by the variability of the temperature profile in space and time may introduce considerable errors into estimates of cumulative production and oxygen breakthrough time when predicting in-situ combustion results by computer simulation. These errors may reach 20 % in cumulative production and 10 % in breakthrough time. To account for this effect, changes in wettability (and relative permeability) with increasing formation temperature need to be studied experimentally.
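
As an illustration of the key modeling device, the sketch below makes the critical points of Corey-type relative permeability curves temperature-dependent by interpolating endpoint saturations between two temperatures. All endpoint values, exponents, and the linear interpolation in temperature are assumptions for illustration, not the paper's experimental dependencies.

import numpy as np

def lerp(a, b, w):
    return a + (b - a) * w

def rel_perm(Sw, T, T0=50.0, T1=300.0):
    # Endpoints shift with temperature between T0 and T1 (assumed trend).
    w = np.clip((T - T0) / (T1 - T0), 0.0, 1.0)
    Swc = lerp(0.20, 0.30, w)   # connate water saturation rises with T (assumed)
    Sor = lerp(0.35, 0.15, w)   # residual oil saturation falls with T (assumed)
    Se = np.clip((Sw - Swc) / (1.0 - Swc - Sor), 0.0, 1.0)  # normalized saturation
    krw = 0.4 * Se ** 3          # Corey exponents are placeholders
    kro = 0.9 * (1.0 - Se) ** 2
    return krw, kro

print(rel_perm(0.5, 60.0))
print(rel_perm(0.5, 450.0))     # same Sw, different curves after heating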

3. The English auction method for job scheduling with absolute priorities in a distributed computing system [No. 3, 2018]
Authors: A.V. Baranov, V.V. Molokanov, P.N. Telegin, A.I. Tikhomirov
The article considers the problem of job scheduling with absolute priorities in a geographically distributed computing system (GDS), where auction methods can be applied efficiently. Most recent papers use a market model in which the goods traded at auction are computational resources and their owners act as sellers. Buyers are users who participate in the auction to purchase computing resources for executing their jobs. Such a model assumes that customers have certain budgets in nominal or real money, and job priority is determined by the price the user is willing to pay to finish the job by a certain time. The investigated GDS model differs from the known ones in that job priorities are absolute and assigned according to uniform rules; the main goal is the earliest possible execution of high-priority jobs. In this case, the concept of a user budget becomes meaningless, and classic auction models do not work. The authors propose a new approach in which the subjects of auction trades are jobs, and resource owners act as buyers paying for jobs with their available free computing resources. Within this approach, the authors consider the English auction the most suitable method for job scheduling with absolute priorities in a GDS. The main characteristic of the scheduling algorithm based on this method is the duration of an auction. The paper presents an experimental evaluation of the optimal duration of the English auction relative to the average job processing time.
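
A toy sketch of the proposed inversion of roles, under stated assumptions: the job is the auctioned good, and resource owners bid by promising completion times backed by their free resources. The bid semantics, the fixed number of rounds, and the randomly shrinking queue delays are illustrative simplifications of an English auction window, not the paper's algorithm.

import random

def english_auction(job, owners, rounds=5):
    # Each owner's bid: the completion time it can promise for the job.
    best_owner, best_time = None, float("inf")
    for _ in range(rounds):
        for owner in owners:
            if owner["free_nodes"] >= job["nodes"]:
                offer = owner["queue_delay"] + job["runtime"]
                if offer < best_time:           # an improving (outbidding) offer
                    best_owner, best_time = owner["name"], offer
        # In a real system new offers arrive during the auction window;
        # here queue delays simply shrink randomly to emulate outbidding.
        for owner in owners:
            owner["queue_delay"] *= random.uniform(0.7, 1.0)
    return best_owner, best_time

job = {"nodes": 16, "runtime": 3.0}   # node count and runtime in hours
owners = [{"name": "site-A", "free_nodes": 32, "queue_delay": 2.0},
          {"name": "site-B", "free_nodes": 16, "queue_delay": 0.5}]
print(english_auction(job, owners))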

4. The approaches to identification of network flows and organization of traffic routes in a virtual data center based on a neural network [No. 3, 2018]
Authors: I.P. Bolodurina, D.I. Parfenov
Traffic in data transmission networks grows every year, and Big Data traffic currently forms its basis. The purpose of this study is to develop new methods for routing traffic in the overlay networks of virtual data centers. Effective route construction in modern computer networks processing Big Data flows is one of the most important indicators of data center operation. To solve this problem, the authors developed an ensemble of models describing an approach to constructing adaptive routes in the overlay networks of a virtual data center. The novelty of the proposed solution is a hybrid approach that uses data mining methods to manage routing in a virtual data center network, taking into account the status of network nodes, overlay communication channels, and the QoS requirements imposed by traffic flows. The proposed route identification model solves such problems as determining the order of using chains of overlay communication channels in a virtual data center network and establishing rules for providing quality of service for mission-critical traffic. A software-algorithmic solution implemented on the basis of the built models takes the form of a module for a software-defined network controller. The algorithm described in the paper obtains sets of quasi-optimal and optimal routing rules in polynomial time. The effectiveness of the proposed solution is demonstrated by experimental research using the real network infrastructure of a virtual data center. Compared with known routing algorithms, the experiments showed a reduction in network response time, as well as a reduced load on the network nodes processing traffic.
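
The sketch below illustrates the flow-identification idea only in miniature: a linear classifier maps flow features to a traffic class that then selects a routing policy. The features, classes, policies, and hand-set weights are hypothetical; the paper's actual solution is a trained neural-network ensemble inside an SDN controller module.

import numpy as np

W = np.array([[ 2.0, -1.0,  0.5],    # score weights: mission-critical
              [-1.0,  2.0, -0.5],    # bulk Big Data transfer
              [ 0.0,  0.0,  1.5]])   # bursty/interactive
CLASSES = ["mission-critical", "bulk", "interactive"]
POLICY = {"mission-critical": "low-latency overlay path",
          "bulk": "high-throughput path",
          "interactive": "default path"}

def classify(flow):
    # Features: packet rate, mean packet size, burstiness (all assumed).
    x = np.array([flow["pkt_rate"], flow["mean_size"], flow["burstiness"]])
    scores = W @ (x / np.linalg.norm(x))     # normalized linear scores
    return CLASSES[int(np.argmax(scores))]

flow = {"pkt_rate": 900.0, "mean_size": 100.0, "burstiness": 50.0}
cls = classify(flow)
print(cls, "->", POLICY[cls])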

5. A package manager for multiversion applications [No. 3, 2018]
Authors: V.A. Galatenko, M.D. Dzabraev, K.A. Kostyukhin
All software developers eventually face the problem of building and distributing their software products, while also supporting existing products, i.e. replacing old distributions with new ones. A quality distribution tool lets developers deliver their products to a wider range of platforms and provide necessary, timely support for them. The authors consider only UNIX-like systems, most of which include package managers such as dpkg or yum. These package managers follow the standard UNIX software installation concept: programs are installed into standard directories such as /usr/bin, /usr/local/bin, and so on. When a program (package) is updated, common practice is to replace old files with new ones. Such a substitution strategy can be destructive: after a software update, some programs or libraries may stop working, and it is even possible that the package manager itself stops working. Users often need old versions of software for compatibility. In this case, they have to build programs and libraries from source code and install them manually, e.g. via "make install". This kind of installation is irreversible and quite dangerous, since files under the control of a package manager may be deleted or replaced. The authors propose the Nix package manager [1] as a solution to the described problems. Its most important advantage is that it completely excludes destructive actions on its part. This is achieved by installing each package into an isolated location controlled by the package manager.
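
The sketch below shows the core idea that makes Nix non-destructive: every package version is installed under a unique hash-derived prefix, so an update creates a new directory instead of overwriting files in shared locations. The store layout mimics Nix's /nix/store naming, but the hashing shown is a simplification of Nix's actual derivation hashing.

import hashlib

def store_path(name, version, inputs, store="/nix/store"):
    # The hash covers the package identity and its build inputs, so two
    # different builds can never collide on the same installation path.
    key = f"{name}-{version}:{sorted(inputs)}".encode()
    h = hashlib.sha256(key).hexdigest()[:32]
    return f"{store}/{h}-{name}-{version}"

old = store_path("openssl", "1.1.1", ["glibc-2.31"])
new = store_path("openssl", "3.0.2", ["glibc-2.35"])
print(old)
print(new)
assert old != new   # both versions coexist; nothing is overwritten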

6. Research on compression of raster images using artificial neural networks [No. 3, 2018]
Authors: A.A. Genov, K.D. Rusakov, A.A. Moiseev, V.V. Osipov
The modern growth rate of information stored on hard disks and transferred over the Internet and local enterprise networks makes it necessary to solve the problem of compressing, transferring, and storing data. Most of the transferred data is multimedia content. Nowadays, visual information compression algorithms based on neural networks are becoming more popular. Unlike classical algorithms, which are based on eliminating redundancy, these algorithms rely on artificial neural networks. The field is relevant due to the development of mathematical algorithms for network learning, which will improve existing compression methods in the future. An analysis of publications showed that there is currently little information about how the architecture of an artificial neural network influences the learning process and the quality of its work on real multimedia content. An important task is to select the network topology most suitable for compressing visual information. The purpose of the article is to describe the capabilities of one type of artificial neural network, the multilayer perceptron, for compressing and recovering images of an arbitrary type. The paper analyzes artificial neural network topologies, algorithms for training them, and the efficiency of their work. It also describes the "bottleneck" architecture, which is most often used for image compression and recovery. The authors give one way of encoding and decoding the data obtained during network operation. The paper describes a computational experiment and gives its results. The experiment showed that a multilayer perceptron with an input vector of more than eight values was less effective. As a result, the authors propose the network architecture most suitable for practical use.
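
A minimal "bottleneck" multilayer-perceptron autoencoder of the kind discussed, sketched in plain NumPy: 8x8 patches (64 values) are squeezed through an 8-unit hidden layer, giving an 8:1 code. Layer sizes, the learning rate, and the random stand-in training data are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 64, 8                     # 8x8 patch -> 8 coefficients
W1 = rng.normal(0, 0.1, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_in, n_hid)); b2 = np.zeros(n_in)

def forward(x):
    h = np.tanh(W1 @ x + b1)            # encoder output: the bottleneck code
    return h, W2 @ h + b2               # decoder: linear reconstruction

X = rng.random((1000, n_in))            # stand-in for image patches
lr = 0.05
for epoch in range(200):
    for x in X[rng.permutation(100)]:   # small SGD subsample per epoch
        h, y = forward(x)
        e = y - x                       # reconstruction error
        gW2, gb2 = np.outer(e, h), e
        dh = (W2.T @ e) * (1 - h**2)    # backprop through tanh
        gW1, gb1 = np.outer(dh, x), dh
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

h, y = forward(X[0])
print("code length:", h.size, "MSE:", float(np.mean((y - X[0]) ** 2)))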

7. A hybrid algorithm for solving optimization problems of computer-aided design and its software implementation [No. 3, 2018]
Authors: L.A. Gladkov, S.N. Leyba, V.B. Tarasov
The article suggests a hybrid algorithm for solving complex design optimization problems. The algorithm is examined using the example of placing and routing elements of digital electronic circuits. The paper states the problem, defines the constraints on the domain of admissible solutions, and formulates a criterion for estimating solution quality. The authors propose a new hybrid approach to this problem based on a combination of evolutionary search methods, the mathematical apparatus of fuzzy logic, and parallel organization of the computational process. They also propose a modified migration operator for exchanging information between solution populations during parallel computations. The structure of the parallel hybrid algorithm is developed. The paper proposes an implementation of the fuzzy control module based on a multilayer neural network and the Gaussian function, and notes the main differences between the proposed neural network structure and "traditional" neural networks. The basic principles of the fuzzy control unit are formulated. The authors consider the software implementation of the proposed hybrid algorithm in detail and state the requirements for the architecture of the developed program, taking into account the need to support modularity and extensibility of the application. Examples of describing a printed circuit board element based on existing specifications are given. The paper describes the interface structure and the main elements of the graphical interface of the developed application. To assess the quality of the obtained solutions and of the search process in general, the authors suggest using parameters that characterize the dynamics of the mean and best values of the objective function, as well as population diversity. A brief description of computational experiments confirming the effectiveness of the proposed method is given. The paper shows how the probabilities of applying genetic operators depend on control parameter values.
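
The sketch below illustrates the fuzzy control idea with Gaussian membership functions: population diversity and the recent improvement of the objective are fuzzified and then defuzzified into adapted crossover and mutation probabilities. The membership centers, widths, and rule weights are invented for illustration and are not the paper's tuned neural-network module.

import numpy as np

def gauss(x, c, s):
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))   # Gaussian membership

def adapt_rates(diversity, improvement):
    low_div  = gauss(diversity, 0.0, 0.2)
    high_div = gauss(diversity, 1.0, 0.2)
    stalled  = gauss(improvement, 0.0, 0.1)
    # Rule sketch: low diversity or stalled search -> more mutation;
    # high diversity -> favor crossover. Weighted-average defuzzification.
    p_mut = (0.4 * low_div + 0.3 * stalled + 0.05 * high_div) / (low_div + stalled + high_div)
    p_cross = 0.9 - p_mut
    return p_cross, p_mut

print(adapt_rates(diversity=0.1, improvement=0.0))   # stagnating population
print(adapt_rates(diversity=0.9, improvement=0.3))   # healthy, diverse population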

8. Improving logical inference speed of production expert systems using aspect-oriented approach [No. 3, 2018]
Authors: A.A. Goncharov, N.A. Semenov
Expert systems are actively applied in various industries because of their ability to solve problems of data interpretation, diagnosis, monitoring, design, forecasting, planning, and training. Each expert system is based on a knowledge representation model; production models, semantic networks, and frames are the most common, with the production model used most frequently. The paper considers an important shortcoming of production systems: the low efficiency of logical inference compared to other knowledge representation models. It describes a proposed method for increasing the efficiency of logical inference in production systems based on an aspect-oriented approach. The aspect-oriented approach identifies crosscutting functional elements and consolidates them during architecture creation and system implementation. This approach was first introduced in 1997 and remains popular today. As an example, the article provides a set of production rules of an expert system that selects requirements for a given control level according to the guidance document on undeclared (undocumented) features. In this case, facts are represented as control level values, and actions as the requirements for the selected control level. The proposed aspect-oriented approach to organizing production systems increases the speed of logical inference in expert systems. Separating crosscutting facts and actions from the set of production rules into aspects has made it possible to reduce the number of operations when searching for a solution and to eliminate exhaustive search over facts and actions.
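
A toy illustration of the aspect extraction, under stated assumptions: condition fragments shared by many production rules are factored into named aspects that are evaluated once per inference cycle instead of once per rule. The rule content is invented for illustration; the paper's actual rules concern control-level requirements.

ASPECTS = {
    # Crosscutting condition fragments shared by several rules.
    "level_defined":  lambda f: "control_level" in f,
    "high_assurance": lambda f: f.get("control_level", 99) <= 2,
}

RULES = [
    # (required aspects, rule-specific condition, action)
    (["level_defined", "high_assurance"], lambda f: True,
     "require source-code static analysis"),
    (["level_defined"], lambda f: f.get("control_level") == 4,
     "require documentation control only"),
]

def infer(facts):
    # Each aspect is evaluated once, then reused by every rule.
    cache = {name: test(facts) for name, test in ASPECTS.items()}
    return [action for aspects, cond, action in RULES
            if all(cache[a] for a in aspects) and cond(facts)]

print(infer({"control_level": 2}))
print(infer({"control_level": 4}))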

9. IACPaaS cloud platform for the development of intelligent service shells: current state and future evolution [No. 3, 2018]
Authors: V.V. Gribova, A.S. Kleschev, F.M. Moskalenko, V.A. Timchenko, L.A. Fedorischev, E.A. Shalfeeva
The paper describes the main features and functional capabilities of the IACPaaS cloud platform, which provides three cloud service delivery models: PaaS, SaaS, and DaaS. The platform is intended for developing specialized intelligent service shells (i.e., shells oriented to specific domains and/or classes of solved problems), as well as for developing applied intelligent services using such shells. Intelligent service shells are themselves presented as cloud services of the platform. Developing (maintaining) an applied service using a shell reduces to forming (modifying) a knowledge base with the tools provided by the shell and binding it to a problem solver. The problem solver consists of a set of agents: software components that interact with each other by exchanging messages. For knowledge representation, specialized intelligent service shells use a domain-specific conceptual representation defined by the ontology of the domain for which the shell is created. The knowledge base formation tool uses this problem-oriented knowledge representation model (language) to generate a user interface oriented to domain experts. As a result, domain experts can form and maintain knowledge bases and databases within a familiar conceptual framework (without cognitive engineers as intermediaries or additional training) and make no mistakes in using the knowledge representation language. The IACPaaS platform provides a basic (universal) technology and several specialized technologies for developing applied intelligent service shells, together with tools that support these technologies. In addition, there is a technology for interaction between problem solver agents and external software not included in the IACPaaS platform. It is based on the standard mechanism of HTTP request processing and the ability to run external executable files from programs (scripts) located on a web server.
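
The sketch below imitates the message-exchange style of the problem solvers described above: agents are components that react to incoming messages and may post new ones into a shared queue. The agent names, message fields, and dispatch loop are hypothetical, not the platform's actual agent API.

from collections import deque

def kb_agent(msg, post):
    if msg["type"] == "query":
        # Pretend to look the answer up in a knowledge base.
        post({"type": "answer", "to": msg["from"], "body": f"facts about {msg['body']}"})

def ui_agent(msg, post):
    if msg["type"] == "answer":
        print("UI received:", msg["body"])

AGENTS = {"kb": kb_agent, "ui": ui_agent}
queue = deque([{"type": "query", "from": "ui", "to": "kb", "body": "diagnosis X"}])

while queue:                        # simple message dispatch loop
    msg = queue.popleft()
    AGENTS[msg["to"]](msg, queue.append)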

10. An algorithm for information security residual risk assessment taking into account the separation of protection mechanisms by type [No. 3, 2018]
Authors: D.A. Derendyaev, Yu.A. Gatchin, V.A. Bezrukov
Analyzing modern approaches to assessing the risk of information security threats, the authors conclude that most of them do not separate protection mechanisms by type, although such separation would allow a better analysis of an enterprise's existing protection system. The presented algorithm takes this separation into account and considers each type with an emphasis on its features. In the absence of a clear separation of protection mechanisms, it is proposed to divide them into two groups: technical and organizational. To calculate residual risk, the authors take into account additional variables, such as the probability that a protection mechanism operates correctly and the probability that it is overcome when a threat materializes. For technical protection mechanisms, the probability of transition to an inoperative state over time must be taken into account. For organizational measures, it is worth considering the expiration of their validity or their modification due to changing conditions. Such processes are random in nature, so the mathematical apparatus of hidden Markov models and random Markov processes is used to determine their probabilities. The final residual risk indicator is determined using an alternative mathematical model obtained after a full factorial experiment. This model yields more correct values because it considers input parameters at their upper and lower levels. As a result of running the algorithm, residual risk values are determined taking into account the counteraction of each type of protective measure to the threat, which allows identifying the disadvantages of the protection system more precisely.
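
As a minimal sketch of how such an assessment might combine the listed probabilities, the code below reduces a threat probability by each mechanism's effectiveness, where a mechanism is effective only if it operates correctly and is not overcome, and technical mechanisms additionally degrade over time. The multiplicative form and all numeric values are assumptions modeled on the abstract, not the authors' full-factorial model.

def residual_risk(p_threat, mechanisms):
    p = p_threat
    for m in mechanisms:
        # Effective only if it operates correctly AND is not overcome.
        p_effective = m["p_correct"] * (1.0 - m["p_overcome"])
        if m["kind"] == "technical":
            # Technical measures may drift to an inoperative state over time.
            p_effective *= (1.0 - m["p_failure_over_time"])
        p *= (1.0 - p_effective)     # probability the threat survives this measure
    return p

mechanisms = [
    {"kind": "technical", "p_correct": 0.95, "p_overcome": 0.10,
     "p_failure_over_time": 0.05},
    {"kind": "organizational", "p_correct": 0.80, "p_overcome": 0.20},
]
print(round(residual_risk(0.6, mechanisms), 4))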
