Journal influence
Higher Attestation Commission (VAK) - K1 quartile
Russian Science Citation Index (RSCI)
Next issue
№4
Publication date:
16 December 2025
Articles of journal issue № 2, 2025.
11. Zero-shot personalized image generation with Stable Diffusion: Training a neural network [№ 2, 2025]
Author: Livshits, G.B.
Visitors: 1275
The paper proposes a method for training additional modules for the Stable Diffusion model to solve the problem of personalized face generation. The method allows using the base diffusion model together with the trained modules in inference mode on new data, without additional training of model elements for each individual example. The trained modules generate face images that preserve the identity of the person in a reference image: the reference photo sets the face position in the frame, while the style and environment change according to the entered text query. The paper details the training process: collection, filtering and processing of pre-training data; the architecture of the trained neural network modules; the use of publicly available pre-trained neural networks for extracting feature representations of input images; a data augmentation method that improves model robustness and the modifiability of face images in generations; and a modified loss function. A comparative analysis of generations produced by the trained model and by models from competing works, on a fixed set of text queries and input face images, demonstrates a significant superiority of the trained model over its competitors. The authors consider metrics such as the cosine similarity of generated images to the corresponding text queries (CLIP text-image score) and the cosine similarity between face templates extracted from generated images and from input images. Generation results illustrating the improved quality of the trained model compared to competing models are also provided.
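Both evaluation metrics named in the abstract reduce to cosine similarity between embedding vectors. A minimal illustrative sketch in Python; the embedding extractors (a CLIP encoder and a face recognition network) are assumed, and the random vectors below merely stand in for their outputs:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; in the evaluation described in the abstract these
# would come from a CLIP encoder (text and image) and a face recognition
# network (face templates), neither of which the abstract names.
rng = np.random.default_rng(0)
text_emb, image_emb = rng.normal(size=512), rng.normal(size=512)
face_gen, face_ref = rng.normal(size=512), rng.normal(size=512)

clip_text_image_score = cosine_similarity(text_emb, image_emb)
identity_score = cosine_similarity(face_gen, face_ref)
```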
12. Software implementation of an algorithm for detecting social engineering attacks using speech patterns [№ 2, 2025]
Author: Zhukova, M.N.
Visitors: 1723
The paper focuses on solving the problem of detecting attacks conducted through a speech channel. It considers approaches to implementing a classifier that determines the signs of such an attack, and discusses possible ways of countering social engineering attacks via the speech channel, such as machine learning and content analysis. Content analysis methods can serve as the classifier of the developed solution: they yield a flexible, easily configurable system suitable for use by various companies. The paper also considers the possibility of using DLP systems to solve the problem. The author comparatively analyzes and tests Python libraries for speech recognition and parsing, and presents implementation schemes of a speech analyzer. The proposed algorithm detects attacks through the speech channel by recognizing speech patterns; it allows detecting an attack conducted through speech interaction between employees and an attacker. Vulnerabilities of the algorithm arising from the Python libraries used are eliminated. The algorithm underlies the developed prototype software solution with a client-server architecture. The author presents implementation options for the software solution and gives its approbation results. A promising application area of the created solution is its integration as a module into DLP solutions; DLP can thus serve as a defense measure against social engineering attacks.
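The detection stage the abstract describes, matching a recognized-speech transcript against configurable speech patterns, can be sketched as follows. The patterns below are invented placeholders, not the author's pattern set, and the speech-to-text step is assumed to have already run:

```python
import re

# Illustrative patterns only: the paper describes a configurable pattern
# set, which is not reproduced in the abstract.
ATTACK_PATTERNS = [
    r"\burgent\w*\b.{0,40}\bpassword\b",
    r"\bconfirm\b.{0,40}\b(account|credentials)\b",
    r"\bdon't tell\b.{0,40}\b(anyone|security)\b",
]

def is_attack(transcript: str) -> bool:
    """Classify a recognized-speech transcript by matching speech patterns."""
    text = transcript.lower()
    return any(re.search(p, text) for p in ATTACK_PATTERNS)

print(is_attack("this is urgent, please read me your password"))  # True
```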
13. Modifying the reasoning algorithm in classification tasks [№ 2, 2025]
Authors: Nikolaev, A.A., Blagosklonov, N.A., Kobrinsky, B.A.
Visitors: 1776
For the user, the explainability of hypotheses in intelligent decision support systems is determined by the reasoning behind them. The paper discusses the importance of reasoning and different approaches to it in intelligent systems. For a given set of hypotheses, the inference rules for similar, difficult-to-recognize diseases or pathological conditions may decompose, which can be due to partially overlapping features (risk factors). At the same time, there arises the problem of forming a ranked list of hypotheses accompanied by all arguments, including arguments of the lowest level, i.e., those related to weak hypotheses. This paper considers a modification of the reasoning algorithm for argumentation-based reasoning. It provides a gentle reduction in the number of hypotheses by selecting one or more leading ones corresponding to the presence of more than one subclass or group of diseases. In addition, the authors use the method of assigning an order relation. They present and justify the modification of the reasoning algorithm within the framework of the previously created knowledge base of an intelligent recommendation system for disease risk assessment, implemented on a heterogeneous semantic network. The algorithm steps are corrected: the solver ranks the issued hypotheses while storing information about all detected arguments regardless of their relevance. The modified solver guards against the possible loss of a relevant hypothesis when several diseases are present at the same time, and provides information about all grounds for multiple hypotheses of different ranks. It enhances the explainability of the issued hypotheses based on features that give grounds for different classified diseases. The authors compare the modified algorithm with other approaches to interpreting issued solutions. The practical significance of the work is in increasing, for the user, the explainability of the leading hypotheses while retrieving the entire set of detected arguments.
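The core behavior of the modified solver, ranking hypotheses while retaining every detected argument, can be sketched as follows. The numeric argument weights are an assumption for illustration; the actual system works over a heterogeneous semantic network:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    name: str
    arguments: list = field(default_factory=list)  # (feature, weight) pairs

    def score(self) -> float:
        return sum(w for _, w in self.arguments)

def rank_hypotheses(hypotheses):
    """Rank hypotheses by total argument weight; every detected argument
    stays attached to its hypothesis, including those of weak (low-rank)
    hypotheses, so no grounds for a diagnosis are lost."""
    return sorted(hypotheses, key=Hypothesis.score, reverse=True)
```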
14. Machining units of small-scale machinery: Automated preparation of operational plans [№ 2, 2025]
Authors: Burdo, G.B., Semenov, N.A.
Visitors: 1252
The paper focuses on developing a methodology for building operational control systems for technological processes in high-variety machine-building production. The technological process control methodology is based on work allocation methods and scheduling theory supplemented with priority schemes. The schemes for calculating plans are dynamic; they are determined from the workload of machine stations and the ratios of equipment utilization factors. Based on the performed research, the authors developed a methodology for making operational plans for machining technological units in small-scale machine-building production. Considering the capabilities of modern information technologies, they proposed a method of recognizing a production situation based on the analysis of equipment utilization ratios. The methodology has a number of advantages: operational plans are calculated with priority scheme systems that take the actual state of technological units into account; equipment utilization and queue lengths for operations are systematically updated to ensure good plan convergence; multi-task service is accounted for; and the time cost of calculating a plan is reduced by eliminating the need for multivariate calculations and their analysis. The work results are practically significant due to the possibility of correct planning and technological process control in small-scale machining production. The paper shows that analyzing equipment utilization rates during technological processes unambiguously characterizes the state and production capabilities of technological units.
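One plausible reading of a dynamic priority scheme driven by equipment utilization factors is sketched below. This is a single illustrative dispatch rule, not the paper's full scheme system:

```python
def pick_next_operation(queue, utilization):
    """One illustrative dispatch rule: among queued operations, prefer the
    one bound for the machine with the lowest current utilization factor.

    queue       -- list of (operation_id, machine_id) pairs
    utilization -- dict: machine_id -> utilization factor in [0, 1]
    """
    return min(queue, key=lambda op: utilization[op[1]])

ops = [("op1", "mill"), ("op2", "lathe"), ("op3", "mill")]
print(pick_next_operation(ops, {"mill": 0.8, "lathe": 0.4}))  # ('op2', 'lathe')
```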
15. Rationale for selecting a normal recovery model to construct a 3D mesh surface [№ 2, 2025]
Authors: Dyachenko, R.A., Savvidi, K.L., Gura, D.A.
Visitors: 1233
The paper discusses the process of finding an approach to defining a normal recovery model for constructing a 3D mesh surface. The approach should reduce the number of holes in the point cloud and the resulting artefacts in the mesh surface. The research object is small architectural forms, in particular genre sculpture. The paper focuses on improving the identification efficiency of 3D objects through the joint use of the Plane normal construction model and the Poisson method of mesh surface reconstruction. While searching for an approach to the problem, the authors use the experimental research method. During the experiment, the point cloud was subjected to subsampling. In the next step, the authors applied three normal reconstruction models to it in order to obtain normal maps and determine the map that most closely matches the study object. Based on the obtained normal maps, three mesh surfaces were reconstructed using the Poisson reconstruction method; this is necessary to determine the best-quality surface by empirical analysis of artefacts and holes. The reconstruction used a new method involving the joint application of a spatial method of point cloud denoising, the Plane model for normal reconstruction, and the subsequent Poisson reconstruction of the mesh surface of a 3D model. The experiment determined the dependence between the setting value of the subsampling procedure and the number of points remaining after its execution; this dependence is presented as a graph. The setting values obtained during the experiment constitute the scientific novelty. The authors obtained three normal maps of the research object and empirically determined the most suitable normal recovery model, Plane. They applied the Poisson surface reconstruction method to the normal maps formed during the experiment in order to obtain polygonal models of the research object. The practical significance lies in improving the efficiency of 3D object recognition in order to obtain high-polygon digital twins.
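A pipeline of this shape (subsampling, denoising, plane-based normal recovery, Poisson reconstruction) can be sketched with the Open3D library; the file name and parameter values are hypothetical, not the settings determined in the authors' experiment, and Open3D's PCA-based normal estimation stands in for the Plane model:

```python
import open3d as o3d

# Pipeline sketch: subsample, denoise, recover normals with a local plane
# fit, then Poisson-reconstruct the mesh surface.
pcd = o3d.io.read_point_cloud("sculpture.ply")                # hypothetical file
pcd = pcd.voxel_down_sample(voxel_size=0.01)                  # subsampling
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
pcd.estimate_normals(                                         # plane-based normals
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("sculpture_mesh.ply", mesh)
```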
16. UtmnTeam web service: Selecting performers for IT projects based on structured and unstructured data about students [№ 2, 2025]
Authors: Melnikova, A.V., Vorobeva, M.S., Plotonenko, Yu.A.
Visitors: 1437
The paper focuses on educational projects in which student teams solve tasks set by teachers. It discusses the development of a web service that assists in selecting performers for IT projects based on data obtained during students' education. The research involves identifying the characteristic aspects of project activities with respect to the educational process and target audience, defining the functionality, selecting the architecture and development technologies, and analyzing and processing structured and unstructured data. Students can use the developed UtmnTeam service to view information about projects; teachers, mentors and administrators, who have access to project creation and participant selection, can use it as well. The solution has a service architecture with modular blocks that implement the main functionality and interact with the client part and the database via an API. The team selection module is based on an algorithm that processes information about students' skills, academic records and experience in previous projects, and also considers the importance of project requirements. A report-processing module based on modern text processing methods provides information about students' skills. Docker containers ensure reliability and scalability and avoid dependency conflicts between different software development tools. The web service allows the user to select a team of students for a project and to assess how well the participants meet the requirements. The authors plan further development of the service by adding new modules, integrating with external data sources, and testing on new data.
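A minimal sketch of how a team selection score over skills and weighted requirements might look; the field names and scoring formula are assumptions for illustration, not the service's actual algorithm:

```python
def candidate_score(skills, requirements):
    """Weighted match between a student's skill levels (0..1) and the
    importance weights of the project requirements."""
    total = sum(requirements.values()) or 1.0
    return sum(w * skills.get(s, 0.0) for s, w in requirements.items()) / total

def select_team(students, requirements, size):
    """Rank candidates by score and take the top `size`."""
    ranked = sorted(students,
                    key=lambda st: candidate_score(st["skills"], requirements),
                    reverse=True)
    return [st["name"] for st in ranked[:size]]

students = [{"name": "A", "skills": {"python": 0.9, "sql": 0.4}},
            {"name": "B", "skills": {"python": 0.3, "sql": 0.8}}]
print(select_team(students, {"python": 1.0, "sql": 0.5}, size=1))  # ['A']
```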
17. Computational mesh decomposition using a genetic algorithm [№ 2, 2025]
Author: Rybakov, A.A.
Visitors: 1772
The paper focuses on the problem of computational mesh decomposition for organizing efficient high-performance computations on multiprocessor computing systems. The author considers the problem in terms of decomposing the mesh's dual graph. To assess the quality of an obtained solution, the author uses a weighted sum of the main decomposition quality indicators: the deviation of domain size from the theoretical average, the length of the largest boundary between domains, and the sum of the boundaries between all domain pairs. The decomposition method is a genetic algorithm whose genotype consists of the base graph vertices from which domains are grown. To build an individual from a genotype, the author uses a fast algorithm that is a rough analog of the bubble growth algorithm. Experiments show that a small genotype allows obtaining a sufficiently high-quality decomposition of the computational mesh in acceptable time, despite the coarseness of the algorithm for generating the individual. When decomposing the test computational mesh, the penalty function value of the population's best individual decreased by 50–75 % in the first 100 epochs of the algorithm. The considered genetic algorithm does not depend on the computational mesh structure, is simple to implement, allows parallel execution, and scales well to large meshes. The algorithm is applicable to computational meshes that change dynamically during computations, since changes in mesh geometry and structure weakly affect the genotype of its decomposition. It can also be used to decompose arbitrary computational meshes, and can be extended with arbitrary operations for generating a decomposition from its genotype.
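The abstract names all the main ingredients: a genotype of base vertices, a fast bubble-growth-like construction of an individual, and a weighted penalty function. A condensed sketch under the assumption of a connected dual graph; the largest-boundary indicator and the evolutionary loop (selection, crossover) are omitted for brevity:

```python
import random
from collections import deque

def grow_domains(adjacency, seeds):
    """Build an individual from a genotype of base vertices: a multi-source
    BFS that grows each domain one frontier vertex per round, a rough
    analog of the bubble growth algorithm. Assumes a connected graph."""
    owner = {v: -1 for v in adjacency}
    queues = []
    for d, s in enumerate(seeds):
        owner[s] = d
        queues.append(deque([s]))
    active = True
    while active:
        active = False
        for d, q in enumerate(queues):
            if q:
                active = True
                v = q.popleft()
                for u in adjacency[v]:
                    if owner[u] == -1:
                        owner[u] = d
                        q.append(u)
    return owner

def penalty(adjacency, owner, ndom, w_size=1.0, w_cut=1.0):
    """Weighted sum of quality indicators: deviation of domain size from
    the theoretical average plus total boundary length (cut edges)."""
    sizes = [0] * ndom
    for v, d in owner.items():
        sizes[d] += 1
    avg = len(owner) / ndom
    size_dev = max(abs(s - avg) for s in sizes)
    cut = sum(owner[v] != owner[u] for v in adjacency for u in adjacency[v]) / 2
    return w_size * size_dev + w_cut * cut

def mutate(genotype, vertices):
    """Point mutation: replace one base vertex with a random vertex."""
    g = list(genotype)
    g[random.randrange(len(g))] = random.choice(vertices)
    return g
```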
18. A supercomputer job management system based on a hierarchical resource management model [№ 2, 2025]
Author: Baranov, A.V.
Visitors: 1931
The paper considers the management of user jobs in an HPC system as the research subject. In this context, a user job is an information object that includes a parallel program with input data and requirements for a parallel resource. A parallel resource is a subset of supercomputer nodes required to run a parallel program for some amount of time. Job management is the assignment of user jobs to dynamically allocated subsets of supercomputer nodes. The article discusses the problems of computing resource management, such as receiving a job input flow, scheduling the job queue, and allocating and releasing parallel resources for jobs. The research methodology consists of building a five-level hierarchical model of supercomputer resource management. The hierarchy levels of the model reflect the degrees of parallelization of the job input flow. The top level is job scheduling in a distributed HPC system by assigning jobs to the queues of individual supercomputers; the lowest level is the optimization (such as vectorization) of program code executed on a single processor core. The article considers the architecture of the domestic parallel job management system SUPPZ, which is based on the proposed hierarchical model. The paper shows the correspondence of the architecture components to the model's hierarchy levels and defines the SUPPZ features. The practical aspect of the study lies in the successful application of SUPPZ to managing the high-performance computing resources of shared-use supercomputer centers. A digital ecosystem of high-performance computing for scientific research has formed around SUPPZ over its years of operation. The article provides, for the first time, generalized statistics on the use of SUPPZ at a number of domestic supercomputers in 2001–2024.
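As a toy stand-in for one level of the hierarchy, single-machine queue scheduling, a greedy allocator might look as follows; SUPPZ itself uses far richer policies than this sketch:

```python
def schedule_greedy(queue, free_nodes):
    """Greedy assignment of queued jobs to idle nodes.

    queue      -- list of jobs, each a dict with 'id' and 'nodes' (node count)
    free_nodes -- set of currently idle node identifiers

    Jobs that do not fit are skipped, letting smaller jobs fill the gaps.
    Returns a list of (job_id, allocated_node_set) pairs.
    """
    placements, free = [], set(free_nodes)
    for job in queue:
        if job["nodes"] <= len(free):
            alloc = {free.pop() for _ in range(job["nodes"])}
            placements.append((job["id"], alloc))
    return placements

jobs = [{"id": "j1", "nodes": 2}, {"id": "j2", "nodes": 4}, {"id": "j3", "nodes": 1}]
print(schedule_greedy(jobs, {"n1", "n2", "n3"}))  # j1 and j3 are placed; j2 waits
```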
19. Knowledge distillation method for language models based on selective intervention in a learning process [№ 2, 2025]
Authors: Tatarnikova, T.M., Mokretsov, N.S.
Visitors: 1040
The paper discusses the problem of optimizing large neural networks using language models as an example. The large size of language models is an obstacle to their practical application under conditions of limited computational resources and memory. One relevant direction in compressing large neural network models is knowledge distillation: the transfer of knowledge from a large teacher model to a smaller student model without significant loss of accuracy. In this case, the student model's own output is used to accelerate learning. Applying this approach reduces the mismatch between the outputs at training time and at model usage time and improves performance. However, this only works for short language model sequences; for long sequences, the problem remains unsolved, as do the problems of inaccurate knowledge transfer and error accumulation. To solve them, the authors propose selective teacher intervention in the student's learning process. The idea is to switch selectively between the student model and the teacher model for generating the next token when significant discrepancies between their probability distributions are detected. The switching decision is based on reaching an exponentially decreasing threshold on the measured divergence between the teacher's and student's probability distributions. This strategy balances the need to train the student on its own data against preventing error accumulation in long sequences. The knowledge distillation method is practically significant due to its applicability to tasks with limited computational resources.
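The switching rule described here can be sketched directly: compare the teacher-student divergence for the next token against an exponentially decaying threshold. A minimal sketch in PyTorch; the `model(ids).logits` interface follows the Hugging Face convention and greedy decoding is a simplification, both assumptions not stated in the abstract:

```python
import torch
import torch.nn.functional as F

def generate_with_intervention(student, teacher, input_ids, steps,
                               tau0=1.0, decay=0.99):
    """Token-by-token generation with selective teacher intervention.

    At step t the KL divergence KL(teacher || student) over next-token
    distributions is compared with an exponentially decaying threshold
    tau = tau0 * decay**t; when it is exceeded, the teacher's token is
    emitted instead of the student's.
    """
    ids = input_ids
    for t in range(steps):
        with torch.no_grad():
            s_logits = student(ids).logits[:, -1, :]   # HF-style interface
            t_logits = teacher(ids).logits[:, -1, :]
        div = F.kl_div(F.log_softmax(s_logits, dim=-1),
                       F.softmax(t_logits, dim=-1),
                       reduction="batchmean")
        tau = tau0 * decay ** t
        source = t_logits if div.item() > tau else s_logits
        next_id = source.argmax(dim=-1, keepdim=True)  # greedy decoding
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```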
20. Developing a temperature field modeling program for layered electric arc welding of metal ware [№ 2, 2025]
Authors: Kakorin, D.D., Margolis, B.I.
Visitors: 1766
The paper gives a brief review of programs for modeling temperature fields in additive manufacturing of metal products and highlights the main drawbacks of the developed models with respect to layered electric arc cladding of filler wire. The authors describe the procedure for identifying heat transfer parameters when modeling the temperature field during layered electric arc welding of metal, and they identify the major and minor mechanisms of heat transfer in steel. They describe the experimental determination of the welded metal's temperature during technological soaking and present the measurement results. Analyzing the change in metal temperature, they highlight the most probable causes of the sharp exponential cooling at the beginning of the soaking stage. Using the initial data from the experiment, they performed mathematical modeling of the temperature field for the stages of layered welding and process soaking using the TempSurfacing and TempRest functions, respectively. The TempDepend function took into account the temperature dependence of the thermal conductivity and heat capacity of steel. The paper presents the modeling results as a temperature field graph along the length and height of the structure. Numerical modeling of the temperature field showed a significant discrepancy between experimental and calculated temperature values during metal layer welding. The main reason for the discrepancy is that the program cannot take into account all the features of heat distribution in a massive body. The authors therefore suggest identifying the weighting coefficients of the base and clad metal, as well as the coefficients of forced convective heat exchange. For this purpose, functions for identifying optimal values of the weighting coefficients for the welding stage and for determining convective heat exchange coefficients for the technological holding stage were added to the program. When modeling the temperature field with the identified heat transfer parameters, the authors obtained minimal deviation of the calculated temperature values from the experimental ones. Thus, they corrected the temperature field modeling program in two-dimensional spatial coordinates for metal products of simple geometry. It takes structure massiveness into account without significantly complicating the working functions or increasing computation time. The temperature field modeling program can be used to develop technological modes for different geometrical characteristics of the cladding structure. Its application reduces the cost of manufacturing prototypes and experimental samples and optimizes the technological soaking time.
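The role of temperature-dependent material properties in the model (the TempDepend function) can be illustrated with an explicit finite-difference step of the 2D heat equation. This is only a sketch: the variable-conductivity term is simplified to k∇²T, boundaries are periodic via np.roll, and the material coefficients are hypothetical, not the paper's identified parameters:

```python
import numpy as np

def step_heat_2d(T, dx, dt, k_of_T, rho, cp_of_T):
    """One explicit finite-difference step of the 2D heat equation with
    temperature-dependent conductivity k(T) and heat capacity cp(T).
    Simplifications: div(k grad T) is approximated as k * laplacian(T),
    and np.roll imposes periodic boundaries."""
    k, cp = k_of_T(T), cp_of_T(T)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    return T + dt * k / (rho * cp) * lap

# Hypothetical steel properties for illustration.
k_of_T = lambda T: 45.0 - 0.01 * T     # thermal conductivity, W/(m*K)
cp_of_T = lambda T: 460.0 + 0.20 * T   # specific heat capacity, J/(kg*K)

T = np.full((50, 50), 20.0)
T[25, 25] = 1500.0                     # hot spot standing in for the arc
for _ in range(100):
    T = step_heat_2d(T, dx=1e-3, dt=1e-5, k_of_T=k_of_T, rho=7850.0, cp_of_T=cp_of_T)
```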
