A model of decentralized control of adaptive data collection processes has been developed on the basis of the equilibrium concept, which is used to study the coordination of joint collective actions, namely, finding an effective scheme for complementing the individual actions of the data collection processes in the absence of a control center. On this basis, a method of decentralized control of adaptive data collection processes in autonomous distributed systems has been developed, combining the equilibrium concept with reinforcement learning based on the normalized exponential function (softmax). The method makes it possible to organize autonomous distributed exploration under dynamic changes in the number of data collection processes and unreliable local information interaction between them. Research and simulation of the developed method have shown that reinforcement learning with softmax action selection finds a solution more effectively than adaptive random search (by 28.3% on average). Using the efficiency retention rate, an estimate was obtained of how the performance of the developed method depends on changes in the number of adaptive data collection processes and in the information interaction channels between them.
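The softmax action-selection rule mentioned above can be illustrated with a short sketch. The Python fragment below is a minimal illustration, not the authors' implementation: the class name SoftmaxAgent, the incremental value update, and all parameter values (temperature, learning rate, placeholder reward) are assumptions introduced here; only the normalized exponential function (softmax) selection rule itself is taken from the text.

```python
import numpy as np

class SoftmaxAgent:
    """Hypothetical sketch of one data collection process that chooses its next
    action with softmax (normalized exponential function) exploration over
    learned action values."""

    def __init__(self, n_actions, temperature=0.5, learning_rate=0.1):
        self.q = np.zeros(n_actions)      # estimated value of each action
        self.temperature = temperature    # lower value -> greedier selection
        self.learning_rate = learning_rate

    def select_action(self, rng):
        # Softmax over action values, shifted by the maximum for numerical stability.
        prefs = self.q / self.temperature
        prefs -= prefs.max()
        probs = np.exp(prefs) / np.exp(prefs).sum()
        return rng.choice(len(self.q), p=probs)

    def update(self, action, reward):
        # Incremental value update from the locally observed reward.
        self.q[action] += self.learning_rate * (reward - self.q[action])

# Illustrative local loop for a single process.
rng = np.random.default_rng(0)
agent = SoftmaxAgent(n_actions=4)
for step in range(100):
    a = agent.select_action(rng)
    reward = rng.random()   # placeholder for the locally measured exploration payoff
    agent.update(a, reward)
```

In a decentralized setting of the kind described above, each data collection process would presumably run such a loop locally, with the reward reflecting its own observed exploration payoff; the temperature parameter then controls the trade-off between exploiting the current estimates and exploring alternative actions.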