Construction of an Optimized Multilayer Neural Network Within a Nonlinear Model of Generalized Error

2021; pp. 53-60
1 Lviv Polytechnic National University
2 Lviv Polytechnic National University, Lviv, Ukraine
3 Lviv Polytechnic National University
4 Lviv Polytechnic National University, Information Systems and Networks Department; Osnabrück University, Institute of Computer Science
5 Drohobych Ivan Franko State Pedagogical University

In this paper, we propose a method for optimizing the structure of a multilayer neural network by minimizing its nonlinear generalized error, based on the minimum description length principle. According to this principle, the generalized error is the sum of the model description error and the error of approximating the data by the neural network in the nonlinear approximation. From the condition of minimizing the generalized network error, expressions are derived for the optimal network size (the number of synaptic connections and the number of neurons in the hidden layers). Graphs are constructed of the generalized network error as a function of the number of synaptic connections between neurons for different numbers of input images and a fixed number of training examples, and of the optimal number of synaptic connections as a function of the number of training examples for different numbers of input images. The complexity of training the neural network is assessed by the ratio of the optimal number of synaptic connections between neurons to the optimal number of neurons in the hidden layers.
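Since the abstract does not reproduce the paper's derived expressions, the following minimal Python sketch only illustrates the minimum description length trade-off it describes: the generalized error is modelled as the sum of a description term that grows with the number of synaptic connections and an approximation term that shrinks with network capacity, and the optimal number of connections is taken as the minimizer of this sum. The functions generalized_error and optimal_connections, and the particular functional forms inside them, are illustrative assumptions rather than the expressions obtained in the paper.

import numpy as np

# Illustrative sketch of the minimum-description-length trade-off.
# The concrete functional forms below are assumptions for demonstration;
# the paper derives its own nonlinear expressions.

def generalized_error(w, n_train, n_input, noise=0.1):
    """Generalized error as the sum of two competing terms:
    - a model description term, growing with the number of synaptic
      connections w relative to the number of training examples n_train;
    - a data approximation term, shrinking as the network (w connections,
      n_input-dimensional inputs) gains capacity."""
    description_error = (w / n_train) * np.log(n_train)   # assumed form
    approximation_error = noise + n_input / np.sqrt(w)    # assumed form
    return description_error + approximation_error

def optimal_connections(n_train, n_input, w_grid=None):
    """Number of connections minimizing the generalized error on a grid,
    mimicking the 'optimal network size' condition."""
    if w_grid is None:
        w_grid = np.arange(1, 20 * n_train)
    errors = generalized_error(w_grid, n_train, n_input)
    return int(w_grid[np.argmin(errors)])

if __name__ == "__main__":
    # Fixed number of training examples, varying number of input images.
    for n_input in (8, 16, 32):
        w_opt = optimal_connections(n_train=500, n_input=n_input)
        print(f"inputs={n_input:3d}  optimal connections = {w_opt}")

Running the sketch for several input dimensions and training-set sizes gives a rough numerical analogue of the plotted dependences described in the abstract, under the assumed functional forms.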
