We study adaptive Mirror Descent algorithms for convex stochastic optimization and
multi-armed bandit problems. These recursive algorithms implement a primal-dual method of
gradient type whose parameters are the norms on the primal/dual spaces, a prox-function that
generates the related projection by mapping the dual space onto the given convex feasible set via
the Legendre-Fenchel transform and its gradient, and two real sequences serving as the step size
and a so-called generalized temperature. The latter is updated recursively from the current
observations (i.e., the stochastic sub-gradients of the objective function) and drives the
algorithms to the optimal rate of convergence (up to a logarithmic factor). The effectiveness
of the algorithms is illustrated on the multi-armed bandit problem.
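As a concrete illustration of this scheme, the sketch below instantiates mirror descent for an N-armed bandit with the entropy prox-function, so that the prox-mapping (the gradient of the Legendre-Fenchel transform) reduces to a softmax of the accumulated dual variable, and the dual variable itself is a running sum of importance-weighted loss estimates. This is a minimal sketch under assumed choices: the function name adaptive_md_bandit, the Bernoulli reward model, the uniform exploration parameter gamma, and the square-root temperature schedule are illustrative and are not the exact adaptive rule analyzed in the paper.

```python
import numpy as np


def adaptive_md_bandit(reward_probs, horizon, seed=None):
    """Mirror-descent sketch for an N-armed bandit with the entropy
    prox-function: the prox-mapping is a softmax of the accumulated
    dual variable.  Temperature schedule and exploration mixing are
    illustrative assumptions, not the paper's exact rule."""
    rng = np.random.default_rng(seed)
    n_arms = len(reward_probs)
    zeta = np.zeros(n_arms)     # accumulated stochastic sub-gradients (dual variable)
    grad_sq_sum = 0.0           # running sum of squared estimates
    pulls = np.zeros(n_arms, dtype=int)
    gamma = 0.05                # small uniform exploration (assumed regularization)

    for t in range(1, horizon + 1):
        beta = np.sqrt(1.0 + grad_sq_sum)          # adaptive "temperature"
        # Prox-mapping of the entropy prox-function: softmax(-zeta / beta).
        w = np.exp(-(zeta - zeta.min()) / beta)
        p = (1.0 - gamma) * w / w.sum() + gamma / n_arms

        arm = rng.choice(n_arms, p=p)
        loss = 1.0 - float(rng.random() < reward_probs[arm])   # Bernoulli loss in [0, 1]

        # Importance-weighted (unbiased) loss estimate = stochastic sub-gradient.
        g = np.zeros(n_arms)
        g[arm] = loss / p[arm]

        zeta += g                                  # dual-space gradient step
        grad_sq_sum += float(g @ g)                # feeds the temperature update
        pulls[arm] += 1

    return pulls


if __name__ == "__main__":
    counts = adaptive_md_bandit(reward_probs=[0.3, 0.5, 0.7], horizon=5000, seed=0)
    print("pull counts per arm:", counts)   # the best arm (index 2) should dominate
```

The entropy prox-function is the natural choice on the probability simplex because its dual-to-primal mapping has the closed-form softmax used above; other prox-functions would yield different (generally implicit) projections.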