Diabetes prediction using an improved machine learning approach

Mathematical Modeling and Computing, Vol. 8, No. 4, pp. 726–735 (2021)
https://doi.org/10.23939/mmc2021.04.726
Received: May 23, 2021
Accepted: June 07, 2021

LMA FST Beni-Mellal, University Sultan Moulay Slimane, Morocco

This paper deals with a machine-learning model arising in the healthcare sector, namely the prediction of diabetes progression. The model is reformulated as a regularized optimization problem whose fidelity term is the L1 norm and whose minimizer is sought in a reproducing kernel Hilbert space (RKHS). The problem is solved numerically by the Adam method, which outperforms the stochastic gradient descent (SGD) algorithm in the numerical experiments.
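The optimization setting described above — an L1 fidelity term, a regularizer, and the Adam method as the solver — can be illustrated with a minimal sketch. This is not the authors' implementation: the data, the ridge-type regularizer standing in for the RKHS norm, the smoothing of |r| by sqrt(r² + μ²) (which makes the L1 term differentiable), and all parameter values are illustrative assumptions.

```python
import numpy as np

def adam_minimize(grad, w0, lr=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    """Minimal Adam: bias-corrected first/second moment estimates."""
    w = w0.astype(float).copy()
    m = np.zeros_like(w)  # first-moment (mean) estimate
    v = np.zeros_like(w)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)   # bias correction
        v_hat = v / (1 - beta2**t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Toy regression with an L1-type fidelity term:
#   minimize  sum_i |x_i . w - y_i|  +  lam * ||w||^2,
# where |r| is smoothed as sqrt(r^2 + mu^2) so the gradient
# exists everywhere (a standard smoothing device, assumed here).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
mu, lam = 1e-3, 1e-3

def grad(w):
    r = X @ w - y
    return X.T @ (r / np.sqrt(r**2 + mu**2)) + 2 * lam * w

w_hat = adam_minimize(grad, np.zeros(3))
```

The per-coordinate step `lr * m_hat / sqrt(v_hat)` is what distinguishes Adam from plain SGD, whose step is a single global learning rate times the raw gradient; on poorly scaled problems this adaptivity is typically what yields the faster convergence reported in the paper's experiments.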
