Low-rank tensor completion using nonconvex total variation

In this work, we study the tensor completion problem, which aims to predict the missing values in visual data.  To exploit the smoothness structure and the edge-preserving property of visual images, we propose a tensor completion model that promotes gradient sparsity via the $l_0$-norm.  The proposed model combines low-rank matrix factorization, which enforces the low-rankness property, with a nonconvex total variation (TV) regularizer.  We present several experiments demonstrating the performance of our model against popular tensor completion methods in terms of visual and quantitative measures.
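
To make this concrete, one plausible formulation in the spirit described above (our sketch under stated assumptions, not necessarily the authors' exact objective) couples mode-wise matrix factorization with an $l_0$-gradient penalty; the unfolding weights $\alpha_n$, factor ranks $r_n$, and the regularization parameter $\lambda$ below are illustrative choices:
$$
\min_{\{X_n,\,Y_n\},\,\mathcal{T}} \;
\sum_{n=1}^{3} \frac{\alpha_n}{2}\,
\bigl\| X_n Y_n - \mathcal{T}_{(n)} \bigr\|_F^2
\;+\; \lambda\,\bigl\| \nabla \mathcal{T} \bigr\|_0
\quad \text{s.t.} \quad
\mathcal{P}_{\Omega}(\mathcal{T}) = \mathcal{P}_{\Omega}(\mathcal{M}),
$$
where $\mathcal{T}_{(n)}$ denotes the mode-$n$ unfolding of the tensor $\mathcal{T}$, $X_n \in \mathbb{R}^{I_n \times r_n}$ and $Y_n \in \mathbb{R}^{r_n \times \prod_{j \neq n} I_j}$ are the low-rank factors, $\|\nabla \mathcal{T}\|_0$ counts the nonzero entries of the discrete gradient, and $\mathcal{P}_{\Omega}$ restricts to the observed entries of the data tensor $\mathcal{M}$.  Unlike the convex $l_1$-based TV, the $l_0$ gradient term counts rather than sums the gradient magnitudes, which favors piecewise-constant reconstructions with sharp edges, at the price of a nonconvex subproblem that is typically handled with hard-thresholding-type updates.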
