Machine learning (ML) in queueing theory combines the predictive and optimization capabilities of ML with the analytical frameworks of queueing models to improve performance in systems such as telecommunications, manufacturing, and service industries. In this paper we give an overview of how ML is applied in queueing theory, highlighting its use cases, benefits, and challenges. We consider a classical GI/G/K-type queueing system, which is nonetheless rather difficult to analyze exactly: it consists of K homogeneous servers, an arbitrary distribution of the interarrival times,
and independent, identically distributed service times, also with an arbitrary distribution. Various simulation techniques are used to obtain the training and test samples needed to apply supervised ML algorithms to regression and classification problems, and some results of the approximate analysis of such a system are used to verify the outcomes. ML algorithms are also used to solve both parametric and dynamic optimization problems; the latter is addressed by means of a reinforcement learning
approach. It is shown that the application of ML in queueing theory is a promising technique for handling the complexity
and stochastic nature of such systems.
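To make the simulation-to-regression pipeline concrete, the sketch below simulates a GI/G/K queue with the Kiefer-Wolfowitz recursion and fits a supervised regressor that predicts the mean waiting time from the system parameters. This is only a minimal illustration, not the paper's actual setup: the choice of lognormal interarrival and gamma service distributions, the parameter ranges, the stability filter, and the random-forest model are all assumptions made here for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulate_gi_g_k(K, arrival_sampler, service_sampler, n_customers, rng):
    """Estimate the mean waiting time in a GI/G/K queue (Kiefer-Wolfowitz recursion)."""
    w = np.zeros(K)          # sorted residual workloads seen by an arriving customer
    total_wait = 0.0
    for _ in range(n_customers):
        total_wait += w[0]               # the arrival joins the least-loaded server
        s = service_sampler(rng)         # its service time
        t = arrival_sampler(rng)         # time until the next arrival
        w[0] += s                        # add the new work to that server
        w = np.sort(np.maximum(w - t, 0.0))  # age all servers by t, keep sorted
    return total_wait / n_customers

# Build a training set: (arrival rate, service rate, K) -> simulated mean waiting time.
# Distributions and ranges below are illustrative assumptions, not the paper's choices.
rng = np.random.default_rng(0)
X, y = [], []
while len(y) < 200:
    lam = rng.uniform(0.5, 2.0)          # nominal arrival rate (lognormal interarrivals)
    mu = rng.uniform(0.8, 2.5)           # nominal service rate (gamma service times)
    K = int(rng.integers(1, 5))
    if lam >= 0.95 * K * mu:             # skip overloaded configurations
        continue
    arr = lambda r, lam=lam: r.lognormal(mean=-np.log(lam), sigma=0.5)
    srv = lambda r, mu=mu: r.gamma(shape=2.0, scale=1.0 / (2.0 * mu))
    X.append([lam, mu, K])
    y.append(simulate_gi_g_k(K, arr, srv, n_customers=5000, rng=rng))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[1.0, 1.5, 2]]))    # predicted mean wait for an unseen parameter set
```

The same simulated samples can be relabeled for classification (e.g., whether the mean waiting time exceeds a threshold), and the trained surrogate can then be queried inside a parametric optimization loop instead of rerunning the simulator.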