ISSN 0236-235X (P)
ISSN 2311-2735 (E)

Journal influence

Higher Attestation Commission (VAK) – K1 category
Russian Science Citation Index (RSCI)


Next issue: No. 2
Publication date: 14 June 2026

Latest issue articles

1. Internet of Battlefield Things: Architecture development using intelligent technologies [No. 1 of the year]
Authors: Vinogradov G.P., Konukhov I.A.
This article addresses the challenge of improving the efficiency and speed of decision-making in armed forces under modern conditions. It is demonstrated that this challenge can be addressed through the Internet of Battlefield Things. The subject of the research is the architecture for building the Internet of Battlefield Things, its core components, and a set of information processing algorithms. An analysis is conducted of the premises driving its relevance and the associated problems encountered during its development and implementation. The state-of-the-art analysis reveals that military applications of the Internet of Things require the integration of physical processes during real-time mission execution with software-electronic systems and information technologies, which is achieved through cyber-physical systems. Reactive wireless sensor networks, which provide data collection, edge processing of raw information for military applications, and solution implementation, are proposed as the foundation for such systems. A variant of the reactive sensor network architecture is considered. Options for constructing architectural elements and the most critical algorithms – positioning and target mobility tracking – are presented. An architecture for the sensor node of reactive wireless sensor networks is developed. A node management system employing efficient patterns is proposed. Intelligent approaches, methods, and algorithms for node localization and tracking are developed. Experimental tests demonstrated higher and more stable localization performance across various scenarios using the proposed methods and algorithms compared to known alternatives. To reduce information load and increase maximum network throughput, an architecture utilizing intelligent data filters, edge device management, and upgraded network infrastructure is proposed.
The implementation of the author's approach highlights the need to develop sensor data mining algorithms based on artificial intelligence and data mining methods, as well as the synthesis and generation of knowledge based on onboard awareness and ontologies. To enhance the lifecycle of reactive wireless sensor networks, the development of energy-efficient communication protocols is required.
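
The abstract does not reproduce the authors' intelligent localization algorithms; as a generic illustration of the node-localization problem it refers to, a textbook RSSI-weighted-centroid estimate from anchor nodes might look like this (all names and values are illustrative):

```python
def weighted_centroid(anchors, rssi):
    """Estimate a node position as the RSSI-weighted centroid of anchors.

    anchors: list of (x, y) coordinates of anchor nodes with known positions
    rssi:    received signal strength per anchor in dBm (higher = closer)
    A classical baseline, not the authors' method from the article.
    """
    # Convert dBm to linear power so nearer anchors dominate the estimate
    weights = [10 ** (r / 10.0) for r in rssi]
    total = sum(weights)
    x = sum(w * a[0] for w, a in zip(weights, anchors)) / total
    y = sum(w * a[1] for w, a in zip(weights, anchors)) / total
    return x, y
```

With equal signal strengths the estimate degenerates to the plain centroid; a much stronger signal from one anchor pulls the estimate toward it.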

2. Method of parametric identification of intelligent control of operational variables in the target tracking function of a multifunctional radar system [No. 1 of the year]
Authors: Arzhaev V.I., Likhachev M.A.
This paper presents a model for the parametric identification of intelligent control actions aimed at adaptively maintaining the required quality of multitarget tracking. The method is based on covariance analysis and a quality of service management approach. The model enables the estimation of target tracking errors and the determination of a necessary set of adjustable operational parameters. This is essential for synthesizing optimal intelligent control actions for the corresponding subsystems of the technological hardware within a multifunctional radar system. The proposed functional model integrates four key aspects: tracking function quality metrics, environmental parameters, operational variables, and the characteristics of the technological hardware's design resource. Based on this model, an optimization problem is formalized to maximize overall utility (satisfaction) subject to total resource consumption constraints. This problem is solved using the Q-RAM algorithm and its modifications for highly configurable tasks (FOFT, 2-FOFT, SOFT). Particular attention is given to the adaptive control of the target revisit interval based on covariance analysis of the Kalman filter. This approach enables predicting the growth of trajectory estimate uncertainty and dynamically determining track update priorities. The concept of tracking load is introduced as the ratio of coherent processing time to the update interval, thereby linking energy, temporal, and computational resources to track accuracy and continuity metrics. It is demonstrated that a linear utility function with error thresholds provides flexible specification of tracking quality requirements while accounting for target priorities. A method for the parametric identification of the intelligent tracking controller model is proposed: training datasets are generated based on the numerical solution of the optimization problem to construct a gray-box model using artificial intelligence techniques. 
This solution paves the way for implementing adaptive, self-learning control of digital antenna array parameters and computing platforms in real time.
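
The covariance-analysis idea behind the adaptive revisit interval can be sketched generically. The following minimal 1-D constant-velocity illustration is not the paper's Q-RAM formulation; the model, noise density, and candidate intervals are all assumptions:

```python
import numpy as np

def predicted_pos_var(p0, q, dt):
    """Propagate the position variance of a 1-D constant-velocity Kalman
    filter over a revisit interval dt with no measurement in between.

    p0: 2x2 covariance [[pos, pos-vel], [pos-vel, vel]] after last update
    q:  white-noise-acceleration process-noise spectral density
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    P = F @ p0 @ F.T + Q
    return P[0, 0]

def max_revisit_interval(p0, q, sigma_max, dts):
    """Largest candidate dt that keeps predicted position std below sigma_max."""
    ok = [dt for dt in dts if predicted_pos_var(p0, q, dt) ** 0.5 <= sigma_max]
    return max(ok) if ok else min(dts)

def tracking_load(t_cpi, dt):
    """Tracking load as defined in the abstract: coherent processing time
    divided by the update interval."""
    return t_cpi / dt
```

Uncertainty grows monotonically with the update interval, so the controller can pick the longest interval that still meets the accuracy threshold, minimizing tracking load.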

3. Modeling surface ship motion in navigation simulators [No. 1 of the year]
Authors: Zubkov V.V., Tretyakov A.D., Nesmelov A.V.
This paper addresses the mathematical formulation and software implementation of a motion model for various ship types across potential maneuvering areas, accounting for wind-driven wave effects across a wide range of Beaufort numbers. The proposed mathematical model is based on the numerical solution of simple yet nonlinear ordinary differential equations governing ship motion, specifically center of mass movement and rotational dynamics. The hydrodynamic component of the model avoids the complexity of solving partial differential equations associated with fluid-solid interaction dynamics. Instead, it relies on qualitatively accurate model expressions for positional and damping forces, as well as for forces acting on a plate in a fluid flow. The effects of wave-induced and buoyancy forces, along with their corresponding moments, are computed using general formulas. This computation incorporates detailed hull geometry information and a dynamically updated grid of points representing water surface elevation relative to the still water level at each time step. The mathematical model also accommodates the effects of thrusters and the planing regime for waterjet propelled craft. It offers significant flexibility, as its parameters can be readily and efficiently adjusted to represent different ship types. The model's relative simplicity facilitates the empirical tuning of new parameters when specific modifications to ship dynamics are required. The motion model, including ship roll and pitch dynamics, has been successfully implemented as software code and integrated into a navigation simulator. Validation of the model within the operational navigation simulator has confirmed its effectiveness.
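
The paper's full hydrodynamic model is not given in the abstract. The same numerical approach – step-by-step integration of simple nonlinear motion ODEs – can be illustrated with the classical first-order Nomoto steering equation; parameter values and the Euler integrator are illustrative choices only:

```python
import math

def simulate_turn(K, T, delta, u, dt, steps):
    """Integrate a minimal planar ship model with explicit Euler steps.

    Yaw dynamics follow the first-order Nomoto model  T*r' + r = K*delta;
    the hull advances at constant surge speed u along its heading.
    Returns the track as a list of (x, y, psi) samples.
    """
    x = y = psi = r = 0.0
    track = []
    for _ in range(steps):
        r += dt * (K * delta - r) / T   # yaw-rate response to rudder angle
        psi += dt * r                   # heading
        x += dt * u * math.cos(psi)     # advance along the current heading
        y += dt * u * math.sin(psi)
        track.append((x, y, psi))
    return track
```

With zero rudder the track is a straight line; a constant rudder angle produces a steady turn, mirroring how the navigation-simulator model advances the ship state each frame.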

4. Cognitive representation methods for global training areas in complex ship handling simulators [No. 1 of the year]
Author: Grachev V.G.
This paper examines methods for the automated generation and visualization of extended terrain in macro-regional training areas using visual simulation systems integrated into complex ship handling simulators. Current technological limitations in the cognitive representation of macro-regional training terrain are analyzed, and potential solutions are proposed. It is noted that enhancing the fidelity of environmental modeling in complex ship handling simulators can be achieved through a substantial increase in the computational performance of modeling methods. In this context, the significantly expanded processing power of modern graphics processing units – already integrated into the simulator's visual simulation system but currently underutilized by existing simulator software – is proposed as a key computational resource. The main features of the proposed approach are described, including a parallel recursive subdivision method based on concurrent binary tree techniques, the generation of a locally adaptive three-dimensional terrain fragment model, and view-dependent cognitive representation of extended macro-regional terrain. Optimality criteria for these methods are formulated, and results confirming the efficient utilization of the integrated computational resources of the “CPU – data bus – graphics adapter” system are presented. The algorithmic and software implementation specifics of these methods within the automated generation and visualization pipeline for extended terrains in the visual simulation system are discussed. The new capabilities introduced into complex ship handling simulators by implementing the global training area paradigm enhance both the functional and didactic characteristics of the simulator, as well as its overall technical sophistication.
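
The concurrent-binary-tree method itself is not reproduced in the abstract. The view-dependent subdivision criterion such methods rely on can be illustrated by a toy level-of-detail heuristic; the screen-space error model and every threshold below are assumptions:

```python
def subdivision_depth(span, distance, fov_px, tau_px, max_depth=24):
    """Choose how many binary subdivisions a terrain patch of size `span`
    needs so that its on-screen footprint stays under tau_px pixels.

    fov_px approximates pixels per radian of the camera; distance is the
    viewer-to-patch range. A purely illustrative LOD heuristic.
    """
    depth = 0
    while depth < max_depth:
        screen_px = fov_px * (span / (2 ** depth)) / max(distance, 1e-6)
        if screen_px <= tau_px:
            break
        depth += 1
    return depth
```

Nearby terrain is refined far deeper than distant terrain, which is what keeps an extended macro-regional area renderable within a fixed GPU budget.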

5. Methodology for benchmarking the performance of peptide sequence reconstruction software based on reference peptide-spectrum matches [No. 1 of the year]
Authors: Arzhaev V.I., Skvortsov A.V., Tsygankov R.Y., Belyakova S.N.
This paper addresses the challenge of reconstructing amino acid sequences of primary peptide structures from liquid chromatography-tandem mass spectrometry (LC-MS/MS) data for protein identification in biological and bioanalogous samples lacking prior sequence information. It provides an overview of currently available and emerging software tools developed to address this problem. The object of this research encompasses the algorithms and software tools utilized for peptide sequence reconstruction, with the subject being the accuracy metrics associated with peptide-spectrum matches generated by these tools. A review is conducted of the most widely used and frequently cited de novo peptide sequencing programs. A methodology is proposed for evaluating the correctness of de novo sequencing. This approach is based on adapting a method originally designed for calculating quantitative similarity measures of functional annotation elements in nucleotide sequences – applied at the level of individual nucleotides and gene exon-intron structures – to assess the reliability of amino acid sequence reconstruction by existing software tools operating without prior information. Results are presented from a quality assessment of de novo peptide sequencing programs, performed using a custom-developed software tool in accordance with the statistical metrics defined in the proposed methodology. The proposed methodology, along with its software implementation, provides an objective, reproducible, and practically applicable framework for comparing de novo peptide sequencing tools, selecting them for use in biomedical research, and facilitating their further improvement. This is particularly pertinent given the rapid progress in deep learning applications within proteomics and the continuous emergence of new publicly available software solutions in this field.
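
As a simplified illustration of residue-level agreement between a de novo prediction and a reference peptide-spectrum match (a naive positional comparison, not the paper's adapted methodology):

```python
def residue_metrics(predicted, reference):
    """Position-wise agreement between a de novo predicted peptide and
    the reference sequence from a peptide-spectrum match.

    Returns (recall, precision) over aligned positions. Real benchmarks
    must also handle isobaric residues and alignment shifts, which this
    toy comparison ignores.
    """
    matches = sum(1 for p, r in zip(predicted, reference) if p == r)
    recall = matches / len(reference) if reference else 0.0
    precision = matches / len(predicted) if predicted else 0.0
    return recall, precision
```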

6. Application of container virtualization for implementing parallel computing on supercomputer clusters [No. 1 of the year]
Authors: Kiselev E.A., Yarovoy A.V.
The subject of the research presented in this article is the application of container virtualization technologies in supercomputing systems. The authors conducted a comparison of the most common technologies in the field of high-performance computing: Docker, Singularity/Apptainer, and Podman. The advantages and disadvantages of each technology were analyzed. Based on this analysis, two options for using container virtualization technologies in conjunction with the parallel job management system (SUPPZ), used in domestic supercomputing centers, are proposed. The first option is based on using Singularity containers managed by this SUPPZ without additional orchestration tools. A procedure for creating and launching user jobs as Singularity containers was developed, along with a software tool for automating the creation and deployment of Singularity containers. The second option involves using Podman containers together with the Kubernetes orchestration system to automate the deployment of a virtual high-performance computing infrastructure managed by the SUPPZ. A scalable platform prototype for parallel computing is proposed, within which a set of virtual clusters managed by the SUPPZ with separate entry points for different user groups is created; the main stages of building the prototype are considered. A method for scaling the problem-solving domain of a virtual computing system, utilizing the capabilities of both Kubernetes and the SUPPZ, is proposed. An algorithm for the operation of a software tool for automatic scaling of worker nodes in a virtual computing cluster was developed. The research results can be practically applied in the educational process for disciplines related to parallel computing, as well as in conducting research and development in the field of high-performance computing.
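
A hypothetical sketch of how a batch system might compose the launch command for an MPI job inside a Singularity/Apptainer container (`mpirun` wrapping `singularity exec` is a common hybrid pattern). SUPPZ-specific integration details are not described in the abstract and are not modeled here; every name below is illustrative:

```python
def singularity_job_cmd(image, exe, args=(), nodes=1, ranks_per_node=1):
    """Compose the command line a job manager could use to run an MPI
    application inside a Singularity/Apptainer container image.

    Returns the argument vector; actual submission, bind mounts, and
    scheduler flags are site-specific and omitted.
    """
    total = nodes * ranks_per_node
    cmd = ["mpirun", "-n", str(total),
           "singularity", "exec", image, exe, *args]
    return cmd
```

Building the argument vector separately from execution makes the container launch easy to unit-test and to log before submission.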

7. Applying quantized machine learning models to real-world embedded systems tasks [No. 1 of the year]
Authors: Achkasov A.V., Yagodkin A.S., Makarenko Ph.V.
Quantization is a key technique for optimizing machine learning models for resource-constrained devices in the context of Tiny Machine Learning (TinyML). This paper presents a comprehensive study on the application of quantized TinyML models to various real-world tasks: patient health monitoring, intelligent energy management, predictive maintenance of industrial equipment, and automated monitoring of crop health. The research covers diverse quantization schemes, including uniform fixed-point (8-, 4-, and 2-bit), distribution-aware, mixed-precision, and quantization-aware training. Experiments were conducted using neural network architectures carefully tailored to each specific task: one-dimensional convolutional neural networks for time-series analysis, long short-term memory networks for precise forecasting, hybrid CNN-LSTM networks for comprehensive vibration analysis, and optimized mobile networks for efficient image classification. The results clearly demonstrate that 8-bit quantization provides an optimal balance between computational accuracy and resource efficiency, reducing model size by 75 % with only a minimal accuracy loss of 1–2 percentage points. Innovative mixed-precision schemes, such as top-down and progressive-depth quantization, show superior performance for certain architectures and specific tasks. Importantly, quantization also significantly accelerates inference and substantially reduces power consumption, which is critical for battery-powered autonomous devices. The study offers detailed practical guidelines for selecting optimal quantization schemes for various TinyML applications and outlines promising directions for future research in this rapidly evolving field.
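
The 75% size reduction cited above follows directly from storing float32 weights as int8. A minimal sketch of symmetric uniform 8-bit quantization (the simplest of the schemes listed; the paper's distribution-aware and mixed-precision variants are more involved):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform 8-bit quantization of a weight tensor.

    Returns (q, scale) such that w is approximately q * scale; q is int8,
    so float32 weights shrink by 75% in storage.
    """
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale
```

The reconstruction error per weight is bounded by half the scale step, which is the mechanism behind the "minimal accuracy loss" reported for 8-bit schemes.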

8. Practical aspects of applying the Damerau – Levenshtein distance in text classification tasks [No. 1 of the year]
Authors: Tatarnikova T.M., Milyaev D.R.
This paper proposes a novel text classification method that operates without the need for machine learning or a labeled training dataset. The approach is based on an edit distance metric – specifically, the Damerau – Levenshtein distance – combined with word semantic similarity, weighted edit operations, and term importance ordering. In this study, the proposed method evaluates the proximity of an input text to a reference text preassigned to a specific category. Standard evaluation metrics for text classification are reported, including classifier accuracy, mean absolute error, root mean square error, execution time, and semantic coherence. The method is validated in the domain of public utility services for processing citizen complaints. A step-by-step implementation procedure is provided, covering essential text processing operations and illustrated with practical examples. A fully operational structural and functional diagram for collecting complaints from multiple sources is developed, accompanied by a description of information exchange objects. Comparative analysis with a baseline method that does not account for semantic word similarity demonstrates improved accuracy, reduced string similarity search time, and substantially lower mean absolute and root mean square errors. The proposed method holds potential for deployment in real-world systems that process short texts and where resource-intensive artificial intelligence techniques are not feasible – for instance, in systems for monitoring and handling citizen inquiries.
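
The underlying metric is standard. Below is a plain unit-cost implementation of the Damerau – Levenshtein distance (the optimal string alignment variant), whereas the paper additionally weights edit operations and folds in word-level semantic similarity:

```python
def damerau_levenshtein(a, b):
    """Minimum number of insertions, deletions, substitutions, and
    adjacent transpositions turning string a into string b (unit costs,
    optimal string alignment variant)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]
```

Counting an adjacent transposition as a single operation (rather than two substitutions) is what distinguishes this metric from plain Levenshtein distance and makes it robust to common typing errors in citizen complaints.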

9. Segmentation of binary images based on run-length encoding with variable-length segments [No. 1 of the year]
Authors: Krasnov A.E., Turchinsky K.A.
This research describes the development of a compression method that reduces file size without degrading segmentation accuracy. The scientific novelty of the work lies in the integration of local adaptive filtering and morphological correction with four variants of the Run-Length Encoding (RLE) algorithm (classical, Foreground-Only, differential, and Z-order) and in the use of an optimization function C_total, which simultaneously accounts for code length, (de)coding time, and quality metrics (accuracy, IoU, Dice). Fifty heterogeneous masks (medical and synthetic) were investigated. Prior to encoding, block-based threshold filtering with automatic threshold selection and closing/opening operations were applied to enhance segment connectivity. The performance of the schemes was compared based on average volume savings, IoU, Dice, and CPU time. The optimal combination of local filtering, morphological operations, and differential RLE reduced file sizes by 25–40% compared to global binarization combined with standard RLE, while preserving object contours (IoU = 0.85 – 0.92; Dice = 0.86 – 0.93). For sparse objects, the Foreground-Only mode provided the best compromise, while Z-order achieved the highest compression at the cost of approximately a 3% drop in IoU. The methodology is applicable in biomedicine, video analytics, and depth map storage systems. It allows for customizing filtering and encoding parameters to meet specific bandwidth constraints and accuracy requirements.
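
Two of the four RLE variants compared in the paper have standard textbook forms. A minimal sketch of the classical and Foreground-Only encodings over a flattened binary mask (the exact encodings and byte layouts used in the paper may differ):

```python
def rle_classic(bits):
    """Classical RLE over a flat binary mask: (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(v, n) for v, n in runs]

def rle_foreground(bits):
    """Foreground-Only RLE: (start, length) of each run of ones.

    Background runs are implicit, which shortens the code for sparse
    masks, matching the compromise noted in the abstract."""
    runs, start = [], None
    for i, b in enumerate(bits):
        if b and start is None:
            start = i
        elif not b and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(bits) - start))
    return runs
```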

10. Modeling information processes for interactive digital device control [No. 1 of the year]
Authors: Mitsuk S.V., Skudnev D.M., Kustov D.N.
This article investigates the challenge of implementing interactive modules for interfacing with and controlling digital devices. The module's design is based on reading the spatial position of a control glove to position interconnected elements, such as a computer cursor. Key features of the addressed task include the application of wireless technologies and the structuring of the program for information exchange between devices. This structure encompasses defining the spatial position of the glove-shaped control module, processing user button-press events (e.g., resetting the accelerometer's coordinate axis reference points), and utilizing light indicators for rapid identification of critical device malfunctions. The device features integrated buttons that initiate calibration of the MPU6050 accelerometer. The relevance of this development is driven by the integration of IT into the educational process. Functional software for the module was developed in the Python 3 programming language. It features a graphical user interface (GUI) and includes several sections: control parameter configuration, launch parameter verification, and connection testing. The end-user gains the ability to flexibly configure the program for their specific screen parameters, select a data source, set motion smoothing parameters, and monitor issues during accelerometer operation, such as situations where the sensor processes data incorrectly and sends zero values. GUI elements enable connecting a laptop to the glove, and the program implements user actions in accordance with the glove's position and user commands. Examples illustrating the device's functional capabilities are presented graphically. An analysis of the project's commercialization potential in the educational sector is provided. Based on the research data, development prospects are formulated from both the perspective of the device itself and the broader field.
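
The motion smoothing and zero-value handling mentioned above can be illustrated with a simple exponential moving average. The smoothing constant and the zero-drop rule are illustrative assumptions, not taken from the article's software:

```python
def smooth(samples, alpha=0.3):
    """Exponential moving average to damp accelerometer jitter before
    mapping orientation to cursor coordinates.

    Zero readings are skipped, mimicking the sensor-fault symptom the
    abstract mentions (the sensor sending zero values); alpha controls
    the responsiveness/smoothness trade-off.
    """
    out, prev = [], None
    for s in samples:
        if s == 0:          # skip faulty zero readings
            continue
        prev = s if prev is None else alpha * s + (1 - alpha) * prev
        out.append(prev)
    return out
```

A smaller alpha yields a steadier cursor at the cost of lag, which is exactly the kind of parameter the article exposes to the end-user through its GUI.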
