We introduce the so-called Langevin dynamics associated with a recursive optimization procedure, whether stochastic or not. We focus on the Langevin versions of gradient descent and stochastic gradient descent induced by a smooth coercive potential function, although the approach also applies to the Adam procedure, among others. We first recall, in a nutshell, some basics about invariant distributions of (mean-reverting) Brownian diffusions, with a focus on the Langevin diffusion. We show in particular that its invariant distribution concentrates on the argmin of the potential function as the Brownian noise fades. The Langevin dynamics applied to a gradient descent consists in adding an exogenous noise at the appropriate scale in order to improve the exploration of the state space. Convergence rates are established for the Langevin versions of both the regular and the stochastic gradient descent.
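As an illustration, recall that the Langevin diffusion associated with a potential $f$ reads $dX_t = -\nabla f(X_t)\,dt + \sigma\, dW_t$ and that, under standard assumptions on $f$, its invariant distribution is the Gibbs measure with density proportional to $e^{-2f(x)/\sigma^2}$, which concentrates on $\operatorname{argmin} f$ as $\sigma\to 0$. The following minimal Python sketch simulates the Euler discretization of this diffusion, i.e. a gradient descent step perturbed by an exogenous Gaussian noise of scale $\sigma\sqrt{\gamma}$; the potential, step size and noise level below are illustrative choices of ours, not taken from the text.

import numpy as np

def langevin_gradient_descent(grad_f, x0, gamma=1e-2, sigma=0.5,
                              n_steps=20_000, seed=0):
    """Euler scheme of dX_t = -grad f(X_t) dt + sigma dW_t:
    a gradient descent step plus Gaussian noise of scale sigma*sqrt(gamma)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        # drift: plain gradient descent; diffusion: exogenous exploration noise
        x = x - gamma * grad_f(x) \
              + sigma * np.sqrt(gamma) * rng.standard_normal(x.shape)
    return x

# Hypothetical toy double-well potential f(x) = x^4 - 2x^2 + 0.4x:
# plain gradient descent started at x0 = 1.5 gets trapped in the shallow
# well near x ~ +1, whereas the Langevin noise lets the iterates escape
# and visit the global minimum near x ~ -1.
grad_f = lambda x: 4.0 * x**3 - 4.0 * x + 0.4
print(langevin_gradient_descent(grad_f, x0=np.array([1.5])))

With a constant $\sigma$ the iterates approximately sample the Gibbs measure rather than converge; letting $\sigma$ decrease slowly to $0$ along the iterations, in the spirit of simulated annealing, recovers the concentration on the argmin mentioned above.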