ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Articles of journal issue № 3, 2019.

1. Dual porosity model for fractured porous reservoirs development analysis based on the superelement concept [№ 3, 2019]
Authors: I.V. Afanaskin, S.G. Volpin, A.V. Roditelev, A.A. Kolevatov
The main oil field development strategy in Russia is waterflooding (water injection into an oil reservoir for oil displacement and pressure maintenance). Nowadays, most Russian oil fields are at the 3rd and 4th development stages, which means a high water cut of produced liquid (90% and more). The main objective of reservoir engineers is to reduce water production (where possible) and increase oil production. These conditions require significant control and regulation of oil reservoir development. To implement such activities, specialists need a solution for fast simulation of large reservoirs and fast evaluation of multiple development scenarios for testing hypotheses about geological structure, history matching and production optimization. This approach is relevant for fractured porous reservoirs, which have significant heterogeneity of filtration-conductivity properties. This fact causes early water cut growth in producing wells and limits field production project targets. The paper proposes a methodology for numerical simulation of fractured porous oil reservoir development based on the superelement concept. The model simulates two-phase filtration in a dual-porosity reservoir. The numerical scheme is fully explicit. The set of conservation equations is approximated on a superelement grid. This increases calculation speed and simplifies model generation (as cell dimensions are consistent with well spacing). Calculation accuracy is checked by production history matching. The proposed methodology is tested on a real field example and checked by simulation in Rubis (Kappa Engineering). Good matching results have been achieved at the model training and forecast simulation stages.
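For orientation, the equations below sketch a standard dual-porosity two-phase (water/oil) conservation system of the kind such a model approximates; this is the classical shape-factor (Warren–Root type) form in simplified notation (incompressible phases, no capillary pressure) and is not necessarily the authors' exact formulation.

```latex
% Standard dual-porosity two-phase sketch, phase \alpha \in \{w, o\};
% subscripts f and m denote the fracture and matrix continua, \sigma is a shape factor.
\begin{aligned}
&\frac{\partial}{\partial t}\bigl(\phi_f S_{\alpha,f}\bigr)
 - \nabla\!\cdot\!\Bigl(\frac{k_f\,k_{r\alpha}}{\mu_\alpha}\,\nabla p_f\Bigr)
 = -\,q_{\alpha,fm} + q_{\alpha,\mathrm{well}},\\[2pt]
&\frac{\partial}{\partial t}\bigl(\phi_m S_{\alpha,m}\bigr) = q_{\alpha,fm},
\qquad
q_{\alpha,fm} = \sigma\,\frac{k_m\,k_{r\alpha}}{\mu_\alpha}\,\bigl(p_m - p_f\bigr).
\end{aligned}
```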

2. Analysis of color spaces effect on the results of color image processing by equalization algorithms [№ 3, 2019]
Author: M.I. Bald
Color images often lack the necessary level of visual quality. The histogram equalization method is one of the common ways to improve the contrast of a color image. Usually, color image processing with contrast irregularity is performed in the YCbCr color space. However, this color space is not universal for correcting every type of distortion. The use of an inappropriate color space might significantly reduce the quality of color reproduction. The paper presents a comparative analysis of the influence of color spaces on the processing results of histogram equalization algorithms. There is a description of the image structure. The authors consider various color spaces such as RGB, YCbCr, HSV and Lab, their advantages, disadvantages and application areas. There is also a detailed description of the process of direct and inverse transformation of color schemes. The paper proposes a classification of distorted images with contrast irregularity based on their histograms. In order to equalize the contrast, six different histogram equalization algorithms process the image brightness component. For color reproduction analysis, image processing is performed in each of the color spaces. Examination of original and processed images in different color spaces has shown that color representation depends on distortion types and color schemes. Estimation of the results of distorted image processing using quantitative metrics proved to be ineffective due to the high share of noise in an image and the absence of an original undistorted image. Therefore, visual evaluation by a person is used to assess image quality. The paper also describes the research peculiarities. Based on the obtained results, a corresponding color scheme is selected for each type of distorted color image to improve color reproduction. The HSV color space is best for processing high-contrast images, the Lab color space for low-contrast images, the YCbCr system for bright images, and the HSV space for dark images.
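As a minimal illustration of the processing pipeline described above (convert to the chosen color space, equalize only the brightness component, convert back), the sketch below uses OpenCV's global histogram equalization; it is not the authors' code, and the six specific equalization algorithms from the paper are not reproduced.

```python
# Minimal sketch: global histogram equalization applied only to the brightness
# component of a chosen color space, using OpenCV (not the authors' implementation).
import cv2

def equalize_brightness(bgr_image, space="YCrCb"):
    """Equalize the luminance/value channel in the given space and convert back to BGR."""
    if space == "YCrCb":                  # OpenCV's variant of YCbCr, luma is channel 0
        conv, inv, ch = cv2.COLOR_BGR2YCrCb, cv2.COLOR_YCrCb2BGR, 0
    elif space == "HSV":                  # value (brightness) is channel 2
        conv, inv, ch = cv2.COLOR_BGR2HSV, cv2.COLOR_HSV2BGR, 2
    elif space == "Lab":                  # lightness L is channel 0
        conv, inv, ch = cv2.COLOR_BGR2Lab, cv2.COLOR_Lab2BGR, 0
    else:
        raise ValueError("unsupported color space")
    img = cv2.cvtColor(bgr_image, conv)
    channels = list(cv2.split(img))
    channels[ch] = cv2.equalizeHist(channels[ch])   # equalize only the brightness channel
    return cv2.cvtColor(cv2.merge(channels), inv)

# Usage: result = equalize_brightness(cv2.imread("photo.png"), space="HSV")
```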

3. The prototype of an intelligent e-book based on technology of knowledge direct imposition [№ 3, 2019]
Authors: G.B. Bronfeld, D.I. Kirov, V.V. Kondratyev
The paper considers the process of introducing intelligent e-books (IEB). It briefly discusses the basics of creating an IEB in the form of an elinga. The elinga is based on the technology of direct imposition of knowledge (TDIK). TDIK applies a new model of knowledge representation, the molinga, which in fact represents text sentences as short semantic networks. The developed knowledge bases include a large set of molingas. The use of TDIK ensures that the knowledge base contains only sentences with different semantic meanings. A molinga corresponds to the structure of production models; however, it has a core containing a simple sentence with a code description, indicating a confidence factor and postconditions. These postconditions might contain graphical images, data files, or calculation models. The technology is developed in the framework of expert system design; however, each component is implemented differently. As a result, the software package, the elinga, has unique capabilities in comparison with conventional expert systems. Molingas allow applying TDIK, introduced by J. Gray, to numeric data as well. Logical inference is based on the modified modus ponens rule. The process of finding a solution is based on dialog-associative search in the human-computer discourse using intermediate results obtained during logical inference. The paper describes the basic functions of the elinga prototype and its operation modes. The elinga actually implements V. Bush's dream. Based on a fundamentally new technology, this approach allows users to solve more effectively, on a different basis of knowledge integration, various problems that were previously unsolvable or difficult to solve.
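The inference machinery itself is the authors' modified modus ponens over molingas; as a rough, generic illustration of the classical mechanism it builds on, a minimal forward-chaining sketch with confidence factors is given below (the function and data layout are hypothetical).

```python
# Generic forward-chaining sketch via modus ponens with confidence factors;
# this only illustrates the classical mechanism, not the molinga/elinga structures.
def forward_chain(facts, rules):
    """facts: dict fact -> confidence; rules: list of (premises, conclusion, rule_cf)."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion, rule_cf in rules:
            if all(p in facts for p in premises):
                cf = min(facts[p] for p in premises) * rule_cf
                if facts.get(conclusion, 0.0) < cf:
                    facts[conclusion] = cf   # modus ponens: premises + rule => conclusion
                    changed = True
    return facts

# Usage: forward_chain({"A": 0.9}, [(("A",), "B", 0.8), (("B",), "C", 0.5)])
```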

4. Analysis of formulation features of functional requirements to an automated information system [№ 3, 2019]
Authors: R.D. Gutgarts, P.M. Polyakova
The article briefly analyzes typical problems accompanying the stage of requirements identification for automated information systems (AIS). Since a user sees a modern information system in the form of software, the requirements for software functionality can be considered equivalent to the functional requirements for an AIS. The paper considers some well-known approaches to the formulation of requirements for AIS, including functional ones, and reveals their common and original aspects. There are many requirements for a designed system. However, functional requirements are always primary. AIS requirements related to reliability, customizability, technical support, interface organization taking into account error handling, etc., are secondary to the functional ones and are fully determined by them. They also depend on the current level of development of relevant information technologies, including programming technologies. The analysis is based on experts' opinions presented in classical thematic sources. The study has shown that so far the tasks related to the correct formulation of functional requirements for software do not have an unambiguous solution, although attempts to structure and (or) unify them are being made. The paper proposes an approach to the semantic content of functional requirements that takes into account the algorithmic aspect of their further software implementation. It is based on one of the classical control functions (accounting, calculation, analysis, control, regulation) in textual formulation and allows seeing the informational relationship between source data, an algorithm and results. This may be a necessary and sufficient condition that promotes some unification when identifying functional requirements. There is an example illustrating the proposed approach.

5. On the formalization of functional requirements in information system projects [№ 3, 2019]
Authors: R.D. Gutgarts, E.I. Provilkov
There is a lot of scientific and applied research devoted to problems in IT project management. It mostly focuses on the financial aspects and duration of a project. However, functional aspects are often overlooked. This might be explained by the fact that such indicators as money and time can be calculated using appropriate methods and algorithms taking into account various risks. Meanwhile, a reasonable numerical equivalent for determining project functionality still does not exist. When considering such a specialized IT project as the design and development of an information system (or its separate module), the implemented functionality is a fundamental factor that affects all other project indicators. However, fundamental literature and periodical scientific publications pay insufficient attention to functionality as a semantic item for various reasons. Scientists and specialists are mostly interested in managing information system requirements, including functional ones. The formalization of functional requirements is a subject of discussion in the scientific community and is considered in various aspects. However, there are no standardized or unified solutions. The paper considers the issues related to functional requirements, features of their initial formulation, their presentation for discussion with a customer, and their formalization for software implementation in a project. The authors briefly analyze approaches to the formalization of requirements and propose an approach to the formalization of functional requirements, which can be applied to certain types of tasks included in the software that represents an information system as a software product. This may be the first step in creating prerequisites for developing an algorithmic component and, thus, for a more correct calculation of project complexity and more accurate planning of its financial and time costs.

6. Design of the QVT Operational Mappings interpreter for UML Refactoring in terms of the model driven architecture approach [№ 3, 2019]
Authors: O.A. Deryugina, E.V. Kryuchkova
The paper discusses the MDA (Model Driven Architecture) approach, which has been introduced by the OMG consortium and is aimed at automating the software development process. MDA proposes the following steps of software development: design of the Platform Independent Model (PIM), design of the Platform Specific Model (PSM), and development of the Code Model. The paper provides an overview of the MDA standards: XMI (XML Metadata Interchange), which unifies model and metamodel interchange between software products, and QVT (Query/View/Transformation), which describes model query languages. The paper is aimed at the design of the QVT Operational Mappings language interpreter for the UML Refactoring tool. The UML Refactoring tool provides UML class diagram analysis and transformation. Typically, UML class diagrams are used to describe the object-oriented architecture of software. The UML Refactoring tool provides object-oriented metrics calculation (Avg. DIT, Avg. NOC, Avg. CBO, etc.) and searches for the transformations (Interface Insertion, Façade, Strategy) minimizing the refactoring fitness function value chosen by a user. Based on the information about the QVTo language, the Interpreter class has been designed for the UML Refactoring tool. This class translates QVT commands into a sequence of transformations of the UML class diagram, including add class, add attribute to class, add method to class, add interface, add attribute to interface, add method to interface, add package, add class to package, add interface to package, and add package to package. For each transformation, a new class extending the Refactoring.java class has been designed. This class is an input for the Transformator.java class, which calls the execute() method of the Refactoring.java class.
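The interpreter described above is implemented in Java (Refactoring.java, Transformator.java); the sketch below only illustrates the general dispatch pattern of translating textual commands into transformation objects with an execute() method, using hypothetical Python names.

```python
# Illustrative sketch of the dispatch pattern described above (the tool itself is
# written in Java); class and command names here are hypothetical.
class Refactoring:
    def execute(self, model):
        raise NotImplementedError

class AddClass(Refactoring):
    def __init__(self, name):
        self.name = name
    def execute(self, model):
        model.setdefault("classes", []).append(self.name)   # add a class to the diagram model

COMMANDS = {"add_class": AddClass}   # maps parsed QVTo-like commands to transformations

def interpret(script, model):
    """Translate each command line into a transformation and apply it to the model."""
    for line in script.splitlines():
        if not line.strip():
            continue
        name, *args = line.split()
        COMMANDS[name](*args).execute(model)
    return model

# Usage: interpret("add_class Customer", {})  ->  {"classes": ["Customer"]}
```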

7. Development of a database and a converter for retrieval and analysis of specialized data from a medical device [№ 3, 2019]
Authors: A.P. Eremeev, S.A. Ivliev
When developing expert systems, there may be difficulties associated with storage or data exchange formats. There may be situations when the data is stored in a proprietary format, or the exchange files of such systems have a proprietary format. This makes automated data analysis difficult, since the data has to be entered into an expert system manually. However, there are methods that allow converting data into an easy-to-use format. The paper considers the analysis of the binary database files of a medical apparatus for studying complex vision impairment in order to extract biophysical study data for further analysis. Since the standard software does not allow information exchange with external systems in open formats, it is necessary to develop additional methods and software to determine the physical structure of the data for subsequent conversion to an open format. The initial data for the analysis is information about what data are stored in the medical device database, as well as general principles of physical data representation in computer systems. Converter development follows the determination of the data file structure. The converter output files can be used in further neural network training. This approach allows quick creation of a database of samples (precedents), eliminating the need for manual data transfer. The proposed approach can further serve as a basis for data analysis in other similar situations.
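As a sketch of the conversion step (once the physical structure of the records has been determined), the code below parses fixed-size binary records and dumps them to CSV; the field layout used here is purely hypothetical and does not describe the actual device format.

```python
# Hedged sketch of binary record parsing with a known physical layout; the field
# layout (timestamp, channel, float value) is hypothetical, not the device's format.
import csv
import struct

RECORD = struct.Struct("<I H f")   # little-endian: uint32 timestamp, uint16 channel, float32 value

def convert(binary_path, csv_path):
    """Read fixed-size records from a proprietary file and dump them to CSV."""
    with open(binary_path, "rb") as src, open(csv_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["timestamp", "channel", "value"])
        while chunk := src.read(RECORD.size):
            if len(chunk) < RECORD.size:
                break                      # ignore a trailing partial record
            writer.writerow(RECORD.unpack(chunk))
```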

8. Automation of program verification using graph analytical models of a computational process [№ 3, 2019]
Authors: A.G. Zykov, Ya.S. Golovanev, V.I. Polyakov
The constant growth in the volume and quantity of software being created requires the development of new tools that reduce the time for designing and developing the next product. These tools include automated verification tools. Verification of computational processes implemented by software is a complex and time-consuming task. The need for new verification automation tools is increasing due to the growing number of systems using various programming languages and to the requirements for shortening project implementation. The task of creating universal interlanguage verification tools thus remains urgent. The paper discusses a method and means of automating computational process verification based on the description of a graph analytical model. The proposed method assumes that a description in the developed language is recovered from the developed program and then compared with the reference description of the graph analytical model from which the program was created. After that, in automatic mode, the program is either verified and determined to be correct by the comparison results, or detailed information about a mismatch is given, the program source text is modified interactively according to the information received, and the verification process is repeated. The aim of the study is to automate verification of C/C# programs against a group of descriptions of a graph analytical model of a computational process. The study includes developing a tool that converts program source code into descriptions of a graph analytical model and performs automated formal verification of a project. The developed utility was tested on the recovered descriptions of the graph analytical model of C/C# and Java programs for array processing (merge sort, Dijkstra's algorithm). The synthesized executable was successfully tested in the Windows 10 operating system. In the future, it is planned to develop the utility along with new versions of the description language in order to expand the analysis and program verification options.
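As a rough illustration of the comparison step, the sketch below checks a recovered model against a reference one, both represented as labeled edge sets; the actual tool works with its own description language for graph analytical models, which is not reproduced here.

```python
# Generic sketch: compare a recovered model against a reference one as labeled edge
# sets; the real tool uses its own description language for graph analytical models.
def compare_models(reference_edges, recovered_edges):
    """Each model is a set of (from_node, label, to_node) triples."""
    missing = reference_edges - recovered_edges
    extra = recovered_edges - reference_edges
    if not missing and not extra:
        return "program matches the reference model"
    return {"missing_in_program": sorted(missing), "not_in_reference": sorted(extra)}

# Usage:
# ref = {("start", "i=0", "loop"), ("loop", "i<n", "body")}
# rec = {("start", "i=0", "loop")}
# print(compare_models(ref, rec))
```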

9. Reconstruction of urban space texture model based on topographic maps and camera records [№ 3, 2019]
Authors: A.P. Kudryashov, I.V. Solovev
The tasks of reconstructing urban space scenes can use various materials as a data source: satellite images, video series, optical system data, etc. The paper proposes an approach to reconstructing a three-dimensional model of urban space using a method for recognizing service information on a topographic map. Topographic maps are the basic data at all stages of architectural, planning and engineering design. They contain information on building footing geometry and on the position of buildings among other objects. For recognition, the authors use a modified wave algorithm, which allows identifying and recognizing closed contours in an image that are classified into various objects: building contours, labels, service symbols, etc. The paper presents a rationale for the advantages of the considered contour selection algorithm. The paper proposes a method of applying textures to three-dimensional models of buildings. The textures come from photographs of real buildings. It is proposed to use special textures for certain types of buildings in cases when there is no real photograph of the building. Photos are attached to a topographic map using geographic coordinates. The authors also describe a method of binding reconstructed objects to a relief. There is an information system used both for the entire reconstruction process and for solving individual local tasks. The paper gives examples of reconstructing real topographic maps of 1:2000 scale.
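The recognition step relies on a modified wave algorithm; the sketch below shows only the classical wave (BFS) region growing it builds on, applied to a binary raster, and does not reproduce the authors' modifications or the contour classification.

```python
# Classic wave (BFS) region growing on a binary raster; the paper uses a modified
# version of this idea to extract and classify closed contours on topographic maps.
from collections import deque

def wave_region(grid, start):
    """Return all cells of the connected region containing `start` (4-connectivity).
    grid: 2D list of 0/1, where 1 marks ink (contour or symbol) pixels."""
    rows, cols = len(grid), len(grid[0])
    value = grid[start[0]][start[1]]
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen \
                    and grid[nr][nc] == value:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```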

10. A simplified method for skeletonization of non-convex figures [№ 3, 2019]
Author: A.V. Kuchuganov
The approximation of graphic information through the skeletonization of object images is a way to replace objects with simpler and more convenient representations in semantic analysis and image recognition problems. Skeletons are widely used in technical vision systems, content-based image search, geometric modeling and visualization. The most popular approaches are those based on "erosion" (removal of object boundary points) and mathematical ones (based on Voronoi diagrams formed by Delaunay triangulation, inscribed circles, or the wave method). A common disadvantage of the existing skeleton building algorithms is the loss of information about the width of the original figure sections, which is often necessary in image recognition and description tasks. The paper proposes an approach that follows the previously published method of skeletonization based on heuristic rules and consists in sequentially cutting off figure segments with minimal chords in places where the figure border has a negative inflection when traversed counterclockwise. Then segments are constructed connecting the midpoints of the chords of adjacent segments. The segments are combined into chains that form the skeleton of a non-convex figure. In this case, the lengths of the obtained chords carry information about the figure width in the corresponding sections. The experiments were related to two subject areas: processing scanned archival drawings of general engineering parts in order to reuse previously gained design experience and reduce the overall time of design and technological preparation of production, and off-line recognition of continuous handwritten text.
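As a small building block for the described procedure, the sketch below finds vertices with a negative inflection (concave corners) on a counterclockwise contour via the cross-product sign; the chord construction and segment cutting themselves are not reproduced here.

```python
# Building-block sketch: detect negative-inflection (concave) vertices on a
# counterclockwise polygon via the cross-product sign; chord cutting is not shown.
def concave_vertices(polygon):
    """polygon: list of (x, y) vertices in counterclockwise order."""
    concave = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = polygon[i - 1], polygon[i], polygon[(i + 1) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross < 0:                 # right turn on a CCW contour => concave vertex
            concave.append(polygon[i])
    return concave

# Usage: concave_vertices([(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)])  ->  [(2, 2)]
```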
