Mathematical Modeling of Hardware-Optical Distortions in Aerial Image Data

2025, pp. 178–189

Affiliation: Interregional Academy of Personnel Management, Institute of Computer and Information Technologies and Design, Department of Computer Information Systems and Technologies, Kyiv, Ukraine

This study presents a formalization of mathematical models of hardware-optical distortions in digital images captured during aerial photography from the onboard systems of Unmanned Aerial Vehicles (UAVs). These distortions significantly affect the accuracy and reliability of automated object detection and classification algorithms in complex outdoor environments. A generalized classification scheme of distortions is proposed that accounts for their origin and divides image degradation into hardware-optical, dynamic, and environmental factors inducing structural instability in the input data.

The research formulates mathematical modeling tasks for the key types of hardware-optical distortions (a compact numerical sketch of all five models follows the abstract):

- spherical aberration is formalized through a spatially dependent point spread function (PSF);
- chromatic aberration is described as a linear displacement of the additive color channels as a function of radial distance from the image center;
- geometric distortion is modeled by a radial coordinate transformation using calibrated lens parameters;
- defocus blur is represented by a spatially variant Gaussian blur kernel incorporating local scene depth;
- sensor noise is modeled as a combination of stochastic processes with normal and Poisson distributions.

The paper substantiates the selection of these mathematical models as a foundation for generating synthetic image datasets for training deep neural architectures, with the goal of enhancing robustness to real-world distortions. A comparative analysis assesses the impact of each distortion type on image quality, information loss, and suitability for further processing, particularly at the stages of segmentation, object detection, and classification under conditions such as variable backgrounds, partial occlusion, or low illumination.

Methodological recommendations are developed for generating training datasets with defined levels of complexity and distortion that reflect the real-world operating conditions of UAV imaging systems, including varying natural lighting, platform instability, vibration, atmospheric scattering, and the design limitations of compact sensors. The study concludes that the use of adaptively generated datasets with a priori modeled distortions significantly improves the robustness, accuracy, and generalization capability of modern neural network models, especially in practical deployments of UAV platforms along active confrontation lines.
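To make the five models above concrete, the following is a minimal NumPy/SciPy sketch, not the study's reference implementation: the spatially dependent PSF of spherical aberration is approximated by radius-weighted blending of a sharp and a blurred copy, the spatially variant defocus kernel by per-pixel selection among a few pre-blurred copies, and all numeric parameters (k1, k2, alpha, photon count, read-noise sigma) are illustrative placeholders rather than calibrated values.

```python
# A minimal sketch of the five distortion models named in the abstract.
# All numeric parameters are illustrative placeholders, not calibrated values.
import numpy as np
from scipy import ndimage


def _radius(h, w):
    """Per-pixel radial distance from the image center, normalized to [0, 1]."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(y - cy, x - cx)
    return r / r.max()


def _radial_remap(channel, scale):
    """Resample a 2-D channel so each output pixel reads the input at
    center + scale * (p - center); `scale` is a scalar or a per-pixel map."""
    h, w = channel.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return ndimage.map_coordinates(
        channel, [(y - cy) * scale + cy, (x - cx) * scale + cx],
        order=1, mode="nearest")


def geometric_distortion(img, k1=-0.15, k2=0.05):
    """Radial lens model r' = r (1 + k1 r^2 + k2 r^4), same for all channels."""
    h, w = img.shape[:2]
    r2 = _radius(h, w) ** 2
    s = 1.0 + k1 * r2 + k2 * r2 ** 2
    return np.dstack([_radial_remap(img[..., c], s) for c in range(img.shape[2])])


def chromatic_aberration(img, alpha=0.004):
    """Lateral CA: displacement grows linearly with radius, i.e. the red and
    blue planes are rescaled by (1 +/- alpha) relative to green."""
    out = img.copy()
    out[..., 0] = _radial_remap(img[..., 0], 1.0 + alpha)  # red plane
    out[..., 2] = _radial_remap(img[..., 2], 1.0 - alpha)  # blue plane
    return out


def spherical_aberration(img, max_sigma=3.0):
    """Spatially dependent PSF, approximated by blending a sharp and a
    Gaussian-blurred copy with a weight that grows toward the corners."""
    h, w = img.shape[:2]
    wgt = _radius(h, w)[..., None]
    soft = ndimage.gaussian_filter(img, sigma=(max_sigma, max_sigma, 0))
    return (1.0 - wgt) * img + wgt * soft


def defocus_blur(img, depth, focus=0.5, gain=4.0, levels=4):
    """Spatially variant Gaussian blur with sigma ~ gain * |depth - focus|,
    realized by picking, per pixel, the nearest of a few pre-blurred copies."""
    sigma = gain * np.abs(depth - focus)
    sigmas = np.linspace(0.0, max(sigma.max(), 1e-6), levels)
    stack = [img] + [ndimage.gaussian_filter(img, sigma=(s, s, 0)) for s in sigmas[1:]]
    idx = np.abs(sigma[..., None] - sigmas).argmin(axis=-1)
    out = np.zeros_like(img)
    for i, layer in enumerate(stack):
        out = np.where((idx == i)[..., None], layer, out)
    return out


def sensor_noise(img, photons=400.0, read_sigma=0.01, rng=None):
    """Signal-dependent shot noise (Poisson) plus additive read noise (normal)."""
    rng = np.random.default_rng(rng)
    shot = rng.poisson(np.clip(img, 0.0, 1.0) * photons) / photons
    return np.clip(shot + rng.normal(0.0, read_sigma, img.shape), 0.0, 1.0)


def degrade(img, depth, rng=None):
    """Full hardware-optical distortion chain for one clean float RGB frame."""
    img = geometric_distortion(img)
    img = chromatic_aberration(img)
    img = spherical_aberration(img)
    img = defocus_blur(img, depth)
    return sensor_noise(img, rng=rng)
```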

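If these models are used to synthesize training data as the abstract recommends, one plausible pattern (assuming the `degrade` function from the sketch above; `clean` and `depth` are hypothetical stand-ins for a real aerial frame and its normalized depth map) is to draw several stochastic realizations per clean frame:

```python
# Hypothetical usage of the sketch above: generate degraded training variants.
import numpy as np

clean = np.random.default_rng(0).random((256, 256, 3))             # stand-in RGB frame in [0, 1]
depth = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))              # stand-in normalized depth map
variants = [degrade(clean, depth, rng=seed) for seed in range(5)]  # five noisy realizations
```

Two design notes on the sketch: applying Poisson shot noise before the additive Gaussian term mirrors the physical signal chain (photon arrival, then readout electronics), and approximating the spatially variant kernels by blending a few pre-blurred copies keeps generation cheap enough for large synthetic datasets.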