Laboratory of Structural Methods of Data Analysis in Predictive Modeling, Moscow Institute of Physics and Technology
Universal gradient methods for convex optimization problems
In this paper, we present new methods for black-box convex minimization. These methods do not require prior knowledge of the actual level of smoothness of the objective function; the only essential input parameter is the required accuracy of the solution. At the same time, for each particular problem class they automatically ensure the best possible rate of convergence. We confirm our theoretical results with encouraging numerical experiments, which demonstrate that the fast convergence rate typical of smooth optimization problems can sometimes be achieved even on nonsmooth problem instances.
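As a rough illustration of the adaptive idea described in the abstract, the following is a minimal Python sketch of a universal gradient step with backtracking in the Euclidean setting. The names `f`, `grad`, `eps`, and `L0`, the doubling/halving schedule, and the fixed iteration budget are illustrative assumptions, not the paper's exact algorithm; the key point is that the acceptance test is relaxed by the target accuracy `eps` rather than driven by a known Lipschitz constant.

```python
import numpy as np

def universal_gradient_method(f, grad, x0, eps, L0=1.0, max_iter=1000):
    """Minimal sketch of a universal (primal) gradient method.

    Instead of a known Lipschitz constant, a backtracking test with an
    eps/2 slack term is used, so the step size adapts to the unknown
    Holder smoothness of f. Illustrative only, not the paper's method.
    """
    x = np.asarray(x0, dtype=float)
    L = L0
    for _ in range(max_iter):
        g = grad(x)
        while True:
            y = x - g / L  # candidate gradient step with current estimate L
            # Accept if the quadratic model, relaxed by eps/2, upper-bounds f at y.
            if f(y) <= f(x) + g @ (y - x) + 0.5 * L * (y - x) @ (y - x) + 0.5 * eps:
                break
            L *= 2.0  # model violated: local curvature estimate was too small
        x = y
        L = max(L / 2.0, 1e-12)  # let the estimate shrink again between iterations
    return x
```

For instance, on the nonsmooth problem of minimizing the l1-norm one could call `universal_gradient_method(lambda x: np.abs(x).sum(), np.sign, np.ones(5), eps=1e-3)`, using `np.sign` as a subgradient; the eps-relaxed test is what lets the same code run on both smooth and nonsmooth instances.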

Authors: Yurii Nesterov

Date: November 17, 2014

Status: published

Journal: Mathematical Programming

Research directions

Primal-dual subgradient methods

Huge-scale problems

Additional materials