In this paper we propose a new machine learning
concept called randomized machine learning, in which model
parameters are assumed random and data are assumed to contain
random errors. This approach differs from “classical” machine
learning in that optimal estimation deals with the probability
density functions of the random parameters and the “worst”
probability density of the random data errors. As the
optimality criterion of estimation, randomized machine learning
employs the generalized information entropy, maximized over a set
defined by a system of empirical balances. We apply this
approach to text classification and dynamic regression problems.
The results illustrate the capabilities of the approach.
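To make the entropy-maximization criterion concrete, here is a minimal sketch of the idea in the simplest discrete setting: a probability distribution over a grid of candidate parameter values is found by maximizing Shannon entropy subject to a single empirical balance (here, a matching condition on the mean). All names and values are illustrative assumptions, not the paper's actual formulation; with one linear constraint, the maximum-entropy solution has the well-known Gibbs (exponential) form, so we only need to solve for one Lagrange multiplier.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical setup (not from the paper): a grid of candidate
# parameter values and one empirical balance -- the observed mean.
grid = np.linspace(0.0, 1.0, 101)   # candidate parameter values
target_mean = 0.3                   # empirical balance to satisfy

def mean_at(lam):
    # With a single linear constraint, the entropy-maximizing
    # distribution takes the Gibbs form p_i ∝ exp(-lam * x_i).
    w = np.exp(-lam * grid)
    p = w / w.sum()
    return p @ grid

# Solve for the multiplier so that the balance condition holds,
# then recover the maximum-entropy density on the grid.
lam = brentq(lambda l: mean_at(l) - target_mean, -100.0, 100.0)
w = np.exp(-lam * grid)
p = w / w.sum()
```

With more balances (one per empirical moment), the same idea generalizes to a system of multipliers found by solving the corresponding dual problem; this sketch is only the one-constraint special case.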