2015 | Original Paper | Book Chapter
Large Scale Optimization with Proximal Stochastic Newton-Type Gradient Descent
Authors: Ziqiang Shi, Rujie Liu
Published in: Machine Learning and Knowledge Discovery in Databases
In this work, we generalize and unify two recent, quite different works of Jascha [10] and Lee [2] by proposing the proximal stochastic Newton-type gradient (PROXTONE) method for optimizing the sum of two convex functions: one is the average of a huge number of smooth convex functions, and the other is a nonsmooth convex function. PROXTONE incorporates second-order information to obtain stronger convergence results: it achieves a linear convergence rate not only in the value of the objective function, but also for the solution. The proofs are simple and intuitive, and the results and techniques can serve as a starting point for research on proximal stochastic methods that employ second-order information. The methods and principles proposed in this paper can be applied to logistic regression, the training of deep neural networks, and so on. Our numerical experiments show that PROXTONE achieves better computational performance than existing methods.
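To make the setting concrete: the abstract describes composite objectives of the form min_x (1/n) Σ_i f_i(x) + h(x), with each f_i smooth and convex and h convex but nonsmooth. The sketch below is a minimal illustration of a proximal stochastic Newton-type step on L1-regularized logistic regression, not the authors' exact PROXTONE algorithm: it assumes a SAG-style memory of per-sample gradients and a fixed diagonal Hessian surrogate, so the scaled proximal step reduces to a closed-form soft-threshold.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proxtone_sketch(A, b, lam, epochs=20, seed=0):
    """Illustrative proximal stochastic Newton-type iteration for
        min_x (1/n) sum_i log(1 + exp(-b_i * A[i] @ x)) + lam * ||x||_1.

    Simplifications relative to the paper's PROXTONE: stored per-sample
    gradients (SAG-style) and a fixed diagonal Hessian surrogate H, so
    the second-order proximal subproblem has a closed-form solution.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grads = np.zeros((n, d))   # memory of per-sample gradients
    g_avg = np.zeros(d)        # running average of stored gradients
    # Diagonal upper bound on the logistic Hessian: 0.25 * mean_i A[i,j]^2
    H = 0.25 * (A ** 2).mean(axis=0) + 1e-8
    for _ in range(epochs * n):
        i = rng.integers(n)
        margin = b[i] * (A[i] @ x)
        g_i = -b[i] * A[i] / (1.0 + np.exp(margin))  # grad of f_i at x
        g_avg += (g_i - grads[i]) / n                # refresh the average
        grads[i] = g_i
        # Scaled proximal step; elementwise because H is diagonal
        x = soft_threshold(x - g_avg / H, lam / H)
    return x
```

With a diagonal H the subproblem argmin_x { g.T (x - x_k) + 0.5 (x - x_k).T H (x - x_k) + lam ||x||_1 } decouples per coordinate, which is what makes the soft-threshold update exact here; the full method solves a richer second-order subproblem at each step.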