This paper overviews a line of work on using ADMM (the alternating direction method of multipliers, a member of the augmented Lagrangian family of methods) to solve regularized formulations of some classical imaging inverse problems. At the core of this line of work is a way of using ADMM to tackle optimization problems in which the objective function is the sum of two or more convex functions, each of which has a proximity operator that can be computed efficiently. The approach is illustrated on a variety of well-known problems, namely image restoration and reconstruction with linear observations (for example, compressive imaging, image deblurring, image inpainting), possibly contaminated with Gaussian or Poisson noise, using synthesis, analysis, or hybrid regularization, in both unconstrained and constrained regularization/variational formulations. In all these cases, the proposed approach inherits the convergence guarantees of ADMM. Its main computational bottleneck is a matrix inversion, which has often been criticized as a hurdle to be avoided; in contrast, we show that in all the above-mentioned problems this inversion can be carried out very efficiently, and we conjecture that it actually underlies the good empirical performance reported for several instances of this class of methods.
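To make the structure concrete, the following is a minimal sketch (not the paper's own code) of ADMM applied to one instance of the problem class described above: a quadratic data-fidelity term plus an $\ell_1$ regularizer, each handled through its proximity operator. The function names (`admm_lasso`, `soft_threshold`) and the choice of a dense `numpy` inverse are illustrative assumptions; for structured observation operators such as circulant blurs, the same inversion can be performed cheaply in the Fourier domain, which is the efficiency point made in the text.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1 (elementwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    # Sketch of ADMM for min_x 0.5*||A x - b||^2 + lam*||x||_1,
    # split as f(x) + g(z) with the constraint x = z.
    m, n = A.shape
    Atb = A.T @ b
    # The matrix inversion discussed in the text: precomputed once here.
    # (For circulant/structured A this step can be done via the FFT.)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))         # prox of the quadratic term
        z = soft_threshold(x + u, lam / rho)  # prox of the l1 term
        u = u + x - z                         # dual (multiplier) update
    return z
```

Each iteration touches each term of the objective only through its proximity operator, and the only linear-algebra cost is the (precomputed) inversion, mirroring the structure described in the text.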