The quarantine restrictions introduced during the COVID-19 pandemic are necessary to minimize the spread of coronavirus disease. These measures include limiting the number of people in a room, maintaining social distance, and wearing personal protective equipment. Compliance is enforced by technical control staff and the police. However, people are not infallible, and the human factor often introduces errors.
The aim of this work is to develop an automated measuring system for a mechanical gyrocompass, using specially developed hardware and software, in order to simplify operation of the device and minimize observer errors. The developed complex automates only the time method, since the turning-point method requires constant manipulation of the motion screw of the total station. The project is based on an integrated system whose hardware comprises a single-board computer, a camera, and a lens.
This paper examines the main methods and principles of image formation and presents a sign language recognition algorithm based on computer vision, intended to improve communication with people who have hearing and speech impairments. The algorithm effectively recognizes gestures and displays the recognized information as text labels. A system comprising the main modules needed to implement this algorithm has been designed.
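As a minimal illustration of such a recognition module, the sketch below classifies a binary hand mask by comparing simple shape features against gesture templates. The template values, feature choice, and labels are invented for illustration and are not the paper's actual model:

```python
import numpy as np

# Hypothetical gesture templates: (fill ratio, bounding-box aspect ratio)
# per sign label. A real system would learn these from labeled data.
TEMPLATES = {
    "A": np.array([0.30, 1.0]),
    "B": np.array([0.55, 2.0]),
    "C": np.array([0.40, 1.5]),
}

def extract_features(mask):
    """Shape features of a binary hand mask: fill ratio and aspect ratio."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return np.array([mask.sum() / (h * w), h / w])

def classify(mask):
    """Return the label of the nearest template to the mask's features."""
    f = extract_features(mask)
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - f))

# Synthetic tall, fully filled "hand" region: closest to template "B".
mask = np.zeros((60, 40), dtype=np.uint8)
mask[10:50, 15:35] = 1
label = classify(mask)   # "B"
```

A deployed system would replace the hand-crafted features with a learned model and the synthetic mask with a segmented camera frame.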
Detecting objects in a video stream is a typical problem in modern computer vision systems and arises in many application areas. Object detection can be performed both on static images and on frames of a video stream. Essentially, object detection means finding non-uniformities of color and intensity that can be treated as physical objects. In addition, the coordinates, size, and other characteristics of these non-uniformities can be computed and used to solve related computer vision problems such as object identification.
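The description above can be sketched as a threshold followed by a connected-component pass over each frame. The threshold value and 4-connectivity here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from collections import deque

def detect_objects(img, thresh):
    """Find bright non-uniformities: threshold, then label 4-connected
    components and return (x0, y0, x1, y1, pixel_count) per object."""
    binary = img > thresh
    labels = np.zeros(img.shape, dtype=int)
    boxes = []
    next_label = 0
    for y, x in zip(*np.nonzero(binary)):
        if labels[y, x]:
            continue                      # already assigned to a component
        next_label += 1
        labels[y, x] = next_label
        q = deque([(y, x)])
        ys, xs = [y], [x]
        while q:                          # BFS flood fill of one component
            cy, cx = q.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    q.append((ny, nx))
                    ys.append(ny)
                    xs.append(nx)
        boxes.append((min(xs), min(ys), max(xs), max(ys), len(xs)))
    return boxes

# Two synthetic bright blobs on a dark frame.
frame = np.zeros((20, 20))
frame[2:5, 3:7] = 1.0
frame[10:14, 12:15] = 1.0
boxes = detect_objects(frame, 0.5)
```

Each returned tuple carries the coordinates and size mentioned in the abstract, ready to feed an identification stage.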
We propose a structure for a special-purpose processor implementing feature detection in a video stream based on the SURF algorithm, for use in computer vision systems.
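The core computation such a processor would implement is SURF's determinant-of-Hessian response, evaluated with box filters over an integral image. The numpy sketch below shows the standard 9x9 filter layout as a reference model; it does not reproduce the paper's actual hardware structure:

```python
import numpy as np

def integral(img):
    """Integral image: cumulative sums over rows and columns."""
    return np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)

def box(ii, y0, x0, y1, x1):
    """Sum of the image over the inclusive rectangle [y0..y1] x [x0..x1]."""
    s = ii[y1, x1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s

def hessian_response(ii, y, x):
    """Determinant-of-Hessian response with SURF's 9x9 box-filter layout."""
    dyy = (box(ii, y - 4, x - 2, y - 2, x + 2)        # top lobe (+1)
           - 2 * box(ii, y - 1, x - 2, y + 1, x + 2)  # middle lobe (-2)
           + box(ii, y + 2, x - 2, y + 4, x + 2))     # bottom lobe (+1)
    dxx = (box(ii, y - 2, x - 4, y + 2, x - 2)
           - 2 * box(ii, y - 2, x - 1, y + 2, x + 1)
           + box(ii, y - 2, x + 2, y + 2, x + 4))
    dxy = (box(ii, y - 3, x + 1, y - 1, x + 3)        # four diagonal lobes
           + box(ii, y + 1, x - 3, y + 3, x - 1)
           - box(ii, y - 3, x - 3, y - 1, x - 1)
           - box(ii, y + 1, x + 1, y + 3, x + 3))
    return dxx * dyy - (0.9 * dxy) ** 2

# A small bright blob yields a strong response at its center.
img = np.zeros((21, 21))
img[9:12, 9:12] = 1.0
ii = integral(img)
```

Because every box sum costs at most four integral-image lookups, this response is constant-time per pixel, which is what makes it attractive for a hardware pipeline.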
We consider a microcontroller implementation of an algorithm for extracting SURF features of an object in a video stream to be used in a specialized computer vision system.
In this paper, we develop a new approach to detecting fire in images based on convolutional neural networks. We propose a cascade structure that improves recognition in low-resolution images and on objects that can visually resemble flames. We performed an experimental comparison with the modern object detection method Faster R-CNN; the experiments showed that fire recognition performance improved by 20% on average.
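A cascade of this kind can be sketched as a cheap candidate filter followed by a verification stage. In the sketch below the color thresholds are illustrative assumptions, and the second stage is a simple placeholder standing in for the paper's CNN classifier:

```python
import numpy as np

def stage1_color_candidates(img):
    """Cheap first cascade stage: mark flame-like pixels (red-dominant,
    bright). The thresholds are illustrative, not the paper's values."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r > 0.6) & (r > g) & (g > b)

def stage2_verify(mask, min_frac=0.01):
    """Placeholder second stage: accept if enough candidate pixels survive.
    In the paper this stage is a convolutional neural network classifier."""
    return bool(mask.mean() > min_frac)

def detect_fire(img):
    """Run the two cascade stages on an RGB image with values in [0, 1]."""
    return stage2_verify(stage1_color_candidates(img))

# Synthetic test frames: an orange patch vs. a uniform blue sky.
fire_frame = np.zeros((64, 64, 3))
fire_frame[20:30, 20:30] = (0.9, 0.4, 0.1)
sky_frame = np.zeros((64, 64, 3))
sky_frame[:] = (0.1, 0.3, 0.8)
```

The point of the cascade is that the expensive classifier only runs on frames (or regions) the cheap stage flags, which is where the efficiency gain on low-resolution imagery comes from.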
Splines have been used to solve the problems considered. This yields a single model for all three tasks and combines the flexibility of the model with ease of calculation. For filtering and segmentation of acoustic signals, spline filters similar to Savitzky-Golay filters have been used. Varying the widths of the spline fragments makes it possible to obtain different degrees of smoothness and to select fragments with different levels of detail.
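For reference, a Savitzky-Golay-style smoother can be built from a local least-squares polynomial fit. The window width and polynomial order below are illustrative, and this numpy sketch stands in for the paper's spline filters, which are constructed differently:

```python
import numpy as np

def savgol_coeffs(window, order):
    """Savitzky-Golay weights: fit a degree-`order` polynomial over a
    centered window by least squares and read off the value at x = 0."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    return np.linalg.pinv(A)[0]   # row selecting the fitted value at x = 0

def smooth(signal, window=31, order=3):
    """Apply the smoothing weights by convolution (weights are symmetric)."""
    return np.convolve(signal, savgol_coeffs(window, order), mode="same")

# Noisy sine: the local polynomial fit suppresses the noise while
# preserving the slowly varying waveform.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
noisy = clean + rng.normal(0, 0.2, t.size)
smoothed = smooth(noisy)
```

Widening the window trades detail for smoothness, which parallels the abstract's point about choosing spline fragment widths per segment.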
The BOX transformation method maps lines on raster images to the points where they intersect the square circumscribing the image. The mapping is carried out over lines defined by distinct pairs of points and has O(N²) complexity. Pairs of points lying on the same straight line accumulate at a single point of the mapping. This accumulation makes it possible to count the number of points lying on each line and to filter out isolated points. Algorithms for the direct and inverse transformations, together with examples of image conversion, are demonstrated.
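The pairwise accumulation described above can be sketched as follows; the coordinate quantization and square size are illustrative assumptions:

```python
from collections import Counter
from itertools import combinations

def border_intersections(p, q, size):
    """Intersect the line through p and q with the square [0, size]^2,
    quantizing coordinates so collinear pairs hash to the same key."""
    (x1, y1), (x2, y2) = p, q
    dx, dy = x2 - x1, y2 - y1
    hits = set()
    if dx != 0:
        for xb in (0, size):                 # vertical square edges
            y = y1 + (xb - x1) / dx * dy
            if 0 <= y <= size:
                hits.add((xb, round(y, 3)))
    if dy != 0:
        for yb in (0, size):                 # horizontal square edges
            x = x1 + (yb - y1) / dy * dx
            if 0 <= x <= size:
                hits.add((round(x, 3), yb))
    return tuple(sorted(hits))

def box_transform(points, size):
    """Accumulate every point pair at its line's border key: O(N^2) pairs."""
    acc = Counter()
    for p, q in combinations(points, 2):
        acc[border_intersections(p, q, size)] += 1
    return acc

# Four collinear points and one outlier: the dominant line collects
# C(4, 2) = 6 votes, so isolated points can be filtered out.
acc = box_transform([(1, 1), (2, 2), (3, 3), (4, 4), (1, 4)], 8)
```

A point pair's key depends only on the line through it, so the vote count at each key is the number of pairs on that line, which is what drives the filtering step.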
We consider a microcontroller implementation of a histogram-based algorithm for finding object coordinates in monochrome video.
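A projection-histogram locator of this kind can be sketched in a few lines; the threshold and the centroid-of-projections rule are illustrative assumptions about the algorithm:

```python
import numpy as np

def object_center(frame, thresh):
    """Estimate the object center from row/column projection histograms
    of the thresholded monochrome frame."""
    mask = frame > thresh
    total = mask.sum()
    if total == 0:
        return None                  # no object above the threshold
    col_hist = mask.sum(axis=0)      # foreground pixels per column
    row_hist = mask.sum(axis=1)      # foreground pixels per row
    x = int(np.round((col_hist * np.arange(frame.shape[1])).sum() / total))
    y = int(np.round((row_hist * np.arange(frame.shape[0])).sum() / total))
    return x, y

# Synthetic monochrome frame with one bright 5x5 blob.
frame = np.zeros((32, 32))
frame[10:15, 20:25] = 200
center = object_center(frame, 100)   # (22, 12)
```

The two 1-D histograms need only one pass over the frame and a few bytes per row and column, which is what makes the approach fit on a microcontroller.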