Exploiting a computer vision inpainting approach to boost deep learning models

2022, pp. 1–6

1, 2 Ivan Franko National University of Lviv
3 National Technical University “Igor Sikorsky Kyiv Polytechnic Institute”

In today’s world, the amount of available information grows exponentially every day, and most of this data is visual. Correspondingly, the demand for image recognition algorithms is growing. Traditionally, the first approaches to computer vision problems were classical algorithms that did not use machine learning. Such approaches are limited by many factors, above all by the conditions imposed on the input images (shooting angle, lighting, position of objects in the scene, etc.). Classical algorithms alone cannot meet the needs of modern computer vision problems.

Neural network approaches and deep learning models have largely replaced classical algorithms. The greatest advantage of deep neural networks in computer vision tasks is not only the ability to automatically build data processing algorithms that cannot be built in any other way, but also the comprehensiveness of the approach: modern deep neural networks cover all stages of image processing from start to finish. However, this approach is not always optimal. Training such models requires a large amount of annotated data to avoid overfitting. In many settings, the conditions have a significant degree of variability, yet remain limited. In such cases, combining the two computer vision approaches is fruitful: image pre-processing is performed by classical algorithms, and prediction (classification, object detection, etc.) is performed by a neural network.

This article presents an example of using damaged images in a classification task (in the extreme case, the damage reached 60 % of the image area). We show in practice that using classical approaches to restore the damaged areas of an image (inpainting) increased the final accuracy of the model by up to 10 % compared to a baseline model trained under identical conditions on the original data.
