
Open Access 06.09.2021 | Invited Survey

Frank–Wolfe and friends: a journey into projection-free first-order optimization methods

Authors: Immanuel M. Bomze, Francesco Rinaldi, Damiano Zeffiro

Published in: 4OR | Issue 3/2021


Abstract

Invented some 65 years ago in a seminal paper by Marguerite Straus-Frank and Philip Wolfe, the Frank–Wolfe method has recently enjoyed a remarkable revival, fuelled by the need for fast and reliable first-order optimization methods in Data Science and other relevant application areas. This review tries to explain the success of this approach by illustrating its versatility and applicability in a wide range of contexts, combined with an account of recent progress in variants that improve on both the speed and the efficiency of this surprisingly simple principle of first-order optimization.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

In their seminal work (Frank and Wolfe 1956), Marguerite Straus-Frank and Philip Wolfe introduced a first-order algorithm for the minimization of convex quadratic objectives over polytopes, now known as the Frank–Wolfe (FW) method. The main idea of the method is simple: generate a sequence of feasible iterates by moving at every step towards a minimizer of a linearized objective, the so-called FW vertex. Subsequent works, partly motivated by applications in optimal control theory (see Dunn (1979) for references), generalized the method to smooth (possibly non-convex) optimization over closed subsets of Banach spaces admitting a linear minimization oracle (see Demyanov and Rubinov 1970; Dunn and Harshbarger 1978).
Furthermore, while the \({{\mathcal {O}}}(1/k)\) rate in the original article was proved to be optimal when the solution lies on the boundary of the feasible set (Canon and Cullum 1968), improved rates were given in a variety of different settings. In Levitin and Polyak (1966) and Demyanov and Rubinov (1970), a linear convergence rate was proved over strongly convex domains assuming a lower bound on the gradient norm, a result then extended in Dunn (1979) under more general gradient inequalities. In Guélat and Marcotte (1986), linear convergence of the method was proved for strongly convex objectives with the minimum obtained in the relative interior of the feasible set.
The slow convergence behaviour for objectives with solutions on the boundary motivated the introduction of several variants, the most popular being Wolfe’s away step (Wolfe 1970). Wolfe’s idea was to move away from bad vertices whenever a FW step towards good vertices did not lead to sufficient improvement of the objective. This idea was successfully applied in several network equilibrium problems, where linear minimization can be achieved by solving a min-cost flow problem (see Fukushima 1984 and references therein). In Guélat and Marcotte (1986), some ideas already sketched by Wolfe were formalized to prove linear convergence of Wolfe’s away step method and identification of the face containing the solution in finite time, under suitable strict complementarity assumptions.
In recent years, the FW method has regained popularity thanks to its ability to handle the structured constraints appearing in machine learning and data science applications efficiently. Examples include LASSO, SVM training, matrix completion, minimum enclosing ball, density mixture estimation, cluster detection, to name just a few (see Sect. 3 for further details).
One of the main features of the FW algorithm is its ability to naturally identify sparse and structured (approximate) solutions. For instance, if the optimization domain is the simplex, then after k steps the cardinality of the support of the last iterate generated by the method is at most \(k + 1\). Most importantly, in this setting every vertex added to the support at every iteration must be the best possible in some sense, a property that connects the method with many greedy optimization schemes (Clarkson 2010). This makes the FW method pretty efficient on the abovementioned problem class. Indeed, the combination of structured solutions with often noisy data makes the sparse approximations found by the method possibly more desirable than high precision solutions generated by a faster converging approach. In some cases, like in cluster detection (see, e.g., Bomze 1997), finding the support of the solution is actually enough to solve the problem independently from the precision achieved.
Another important feature is that the linear minimization used in the method is often cheaper than the projections required by projected-gradient methods. It is important to notice that, even when these two operations have the same complexity, the constants in the related bounds can differ significantly (see Combettes and Pokutta 2021 for some examples and tests). When dealing with large-scale problems, the FW method hence has a much smaller per-iteration cost than projected-gradient methods. For this reason, FW methods fall into the category of projection-free methods (Lan 2020). Furthermore, the method can be used to approximately solve quadratic subproblems in accelerated schemes, an approach usually referred to as conditional gradient sliding (see, e.g., Carderera and Pokutta 2020; Lan and Zhou 2016).

1.1 Organisation of the paper

The present review is not intended as an exhaustive literature survey, but rather as an advanced tutorial demonstrating the versatility and power of this approach. The article is structured as follows: in Sect. 2, we introduce the classic FW method, together with a general scheme for all the methods we consider. In Sect. 3, we present applications ranging from classic optimization to more recent machine learning problems. In Sect. 4, we review some important stepsizes for first-order methods. In Sect. 5, we discuss the main theoretical results about the FW method and the most popular variants, including the \({{\mathcal {O}}}(1/k)\) convergence rate for convex objectives, affine invariance, the sparse approximation property, and support identification. In Sect. 6 we illustrate some recent improvements on the \({{\mathcal {O}}}(1/k)\) convergence rate. Finally, in Sect. 7 we present recent FW variants fitting different optimization frameworks, in particular block coordinate, distributed, accelerated, and trace norm optimization. We highlight that all the proofs reported in the paper are either seminal ones, or simplified versions of proofs reported in published papers, and we believe they might give some useful technical insights to the interested reader.

1.2 Notation

For any integers a and b, denote by \([{a}\! : \! {b}] = \{ x \text{ integer }: a\le x\le b\}\) the integer range between them. For a set V, the power set \(2^V\) denotes the system of all subsets of V, whereas for any positive integer \(s\in {\mathbb {N}}\) we set \({V\atopwithdelims ()s} := \{ S\in 2^V : |S| = s\}\), with |S| denoting the number of elements in S. Matrices are denoted by capital sans-serif letters (e.g., the zero matrix \({\mathsf {O}}\), or the \(n\times n\) identity matrix \({\mathsf {I}}_n\) with columns \({\mathsf {e}}_i\), the length of which should be clear from the context). The all-ones vector is \({\mathsf {e}}:=\sum _i {\mathsf {e}}_i\in {\mathbb {R}}^n\). Vectors are always denoted by boldface sans-serif letters \({\mathsf {x}}\), and their transpose by \({\mathsf {x}} ^{\intercal }\). The Euclidean norm of \({\mathsf {x}}\) is then \(\Vert {\mathsf {x}} \Vert := \sqrt{{\mathsf {x}} ^{\intercal }{\mathsf {x}}}\), whereas the general p-norm is denoted by \({\Vert {\mathsf {x}} \Vert }_p\) for any \(p\ge 1\) (so \({\Vert {\mathsf {x}} \Vert }_2=\Vert {\mathsf {x}} \Vert \)). By contrast, the so-called zero-norm simply counts the number of nonzero entries:
$$\begin{aligned} {\Vert {\mathsf {x}} \Vert }_0 := |\{ i\in [{1}\! : \! {n}] : x_i\ne 0\}|\, . \end{aligned}$$
For a vector \({\mathsf {d}}\) we denote by \({\widehat{{\mathsf {d}}}} :=\frac{1}{\Vert {\mathsf {d}} \Vert }\,{\mathsf {d}}\) its normalization, with the convention \({\widehat{{\mathsf {d}}}} = {\mathsf {o}}\) if \({\mathsf {d}}= {\mathsf {o}}\). Here \({\mathsf {o}}\) denotes the zero vector. In the context of symmetric matrices, “\({{\,\mathrm{psd}\,}}\)” abbreviates “positive-semidefinite”.

2 Problem and general scheme

We consider the following problem:
$$\begin{aligned} \min _{{\mathsf {x}}\in C} f({\mathsf {x}}) \end{aligned}$$
(1)
where, unless specified otherwise, \(C\) is a convex and compact (i.e. bounded and closed) subset of \({\mathbb {R}}^n\) and f is a differentiable function having Lipschitz continuous gradient with constant \(L>0\):
$$\begin{aligned} \Vert \nabla f({\mathsf {x}}) - \nabla f({\mathsf {y}}) \Vert \le L \Vert {\mathsf {x}}- {\mathsf {y}} \Vert \quad {\text{ for } \text{ all } \{{\mathsf {x}},{\mathsf {y}}\} \subset C\, .} \end{aligned}$$
This is a central property required in the analysis of first-order methods. Such a property indeed implies (and for a convex function is equivalent to) the so-called Descent Lemma (see, e.g., Bertsekas 2015, Proposition 6.1.2), which provides a quadratic upper approximation to the function f. Throughout the article, we denote by \({\mathsf {x}}^*\) a (global) solution to (1) and use the symbol \(f^*:=~f({\mathsf {x}}^*)\) as a shorthand for the corresponding optimal value.
The general scheme of the first-order methods we consider for problem (1), reported in Algorithm 1, is based upon a set \(F({\mathsf {x}},{\mathsf {g}})\) of directions feasible at \({\mathsf {x}}\), defined using first-order local information on f around \({\mathsf {x}}\); in the smooth case, \({\mathsf {g}}=\nabla f({\mathsf {x}})\). From this set, a particular \({\mathsf {d}}\in F({\mathsf {x}},{\mathsf {g}})\) is selected, with the maximal stepsize \(\alpha ^{\max }\) possibly dependent on auxiliary information available to the method (at iteration k, we thus write \(\alpha ^{\max }_k\)), and not always equal to the maximal feasible stepsize.

2.1 The classical Frank–Wolfe method

The classical FW method for minimization of a smooth objective f generates a sequence of feasible points \(\{{\mathsf {x}}_k\}\) following the scheme of Algorithm 2. At iteration k, it moves toward a vertex, i.e., an extreme point, of the feasible set minimizing the scalar product with the current gradient \(\nabla f({\mathsf {x}}_k)\). It therefore makes use of a linear minimization oracle (LMO) for the feasible set \(C\)
$$\begin{aligned} { \text {LMO}_{C}({\mathsf {g}}) \in {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{\mathsf {z}}\in C}} \,{\mathsf {g}}^\intercal {\mathsf {z}} }\, , \end{aligned}$$
(2)
defining the descent direction as
$$\begin{aligned} {\mathsf {d}}_k = {\mathsf {d}}_k^{FW} := {\mathsf {s}}_k - {\mathsf {x}}_k, \ \ {\mathsf {s}}_k \in \text {LMO}_{C}(\nabla f({\mathsf {x}}_k)) \, . \end{aligned}$$
(3)
In particular, the update at step 6 in Algorithm 2 can be written as
$$\begin{aligned} {\mathsf {x}}_{k + 1} = {\mathsf {x}}_k + \alpha _k ({\mathsf {s}}_k - {\mathsf {x}}_k) = \alpha _k {\mathsf {s}}_k + (1 - \alpha _k) {\mathsf {x}}_k \end{aligned}$$
(4)
Since \(\alpha _k \in [0, 1]\), by induction \({\mathsf {x}}_{k + 1}\) can be written as a convex combination of elements in the set \(S_{k + 1} := \{{\mathsf {x}}_0\} \cup \{{\mathsf {s}}_i\}_{0 \le i \le k}\). When \(C = {{\,\mathrm{conv}\,}}(A)\) for a set A of points with some common property, usually called “elementary atoms”, if \({\mathsf {x}}_0 \in A\) then \({\mathsf {x}}_{k}\) can be written as a convex combination of \(k + 1\) elements in A. Note that, due to Carathéodory's theorem, we can even limit the number of occurring atoms to \(\min \{k,n\}+1\). In the rest of the paper the primal gap at iteration k is defined as \(h_k=f({\mathsf {x}}_k)-f^*\).
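To make the scheme concrete, here is a minimal Python/NumPy sketch of the classical FW iteration with the diminishing stepsize (25), using the FW gap of Sect. 5.1 as a stopping criterion. The names frank_wolfe, grad and lmo are illustrative choices of ours, not an implementation from the paper.

    import numpy as np

    def frank_wolfe(grad, lmo, x0, max_iter=1000, tol=1e-6):
        # grad: callable returning the gradient of f at x
        # lmo:  callable returning a vertex s minimizing g^T z over C, as in (2)
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            g = grad(x)
            s = lmo(g)                  # FW vertex
            d = s - x                   # FW direction, Eq. (3)
            gap = -g @ d                # FW gap G(x_k), Eq. (32)
            if gap <= tol:              # approximate stationarity certificate
                break
            alpha = 2.0 / (k + 2.0)     # diminishing stepsize, Eq. (25)
            x = x + alpha * d           # update (4)
        return x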

3 Examples

FW methods and variants are a natural choice for constrained optimization on convex sets admitting a linear minimization oracle that is significantly faster than computing a projection. We present here in particular the traffic assignment problem, submodular optimization, the LASSO problem, matrix completion, adversarial attacks, the minimum enclosing ball problem, SVM training, maximal clique search in graphs, and sparse optimization.

3.1 Traffic assignment

Finding a traffic pattern satisfying the equilibrium conditions in a transportation network is a classic problem in optimization that dates back to Wardrop’s paper (Wardrop 1952). Let \({\mathcal {G}}\) be a network with set of nodes \([{1}\! : \! {n}]\). Let \(\{D(i, j)\}_{i \ne j}\) be demand coefficients, modeling the amount of goods with destination j and origin i. For any i, j with \(i\ne j\), let furthermore \(f_{ij}: {\mathbb {R}}\rightarrow {\mathbb {R}}\) be the non-linear (convex) cost function, and \(x_{ij}^s\) be the flow on link (i, j) with destination s. The traffic assignment problem can be modeled as the following non-linear multicommodity network problem (Fukushima 1984):
$$\begin{aligned} \min \left\{ \sum _{i, j} f_{ij}\left( \sum _s x_{ij}^s\right) : \sum _{i} x_{\ell i}^s - \sum _{j} x^s_{j\ell } = D(\ell , s) \, , \text{ all } \ell \ne s, \, \, x_{ij}^s \ge 0 \right\} \, . \end{aligned}$$
(5)
Then the linearized optimization subproblem necessary to compute the FW vertex takes the form
$$\begin{aligned} \min \left\{ \sum _s\sum _{i,j}c_{ij}x_{ij}^s : \sum _{i} x_{\ell i}^s - \sum _{j} x^s_{j\ell } = D(\ell , s), \, \ell \ne s, \, \, x_{ij}^s \ge 0 \right\} \end{aligned}$$
(6)
and can be split in n shortest paths subproblems, each of the form
$$\begin{aligned} \min \left\{ \sum _{i,j}c_{ij}x_{ij}^s : \sum _{i} x_{\ell i}^s - \sum _{j} x^s_{j\ell } = D(\ell , s), \, \ell \ne s, \, \, x_{ij}^s \ge 0 \right\} \end{aligned}$$
(7)
for a fixed \(s \in [{1}\! : \! {n}]\), with \(c_{ij}\) the first-order derivative of \(f_{ij}\) (see Fukushima 1984 for further details). A number of FW variants were proposed in the literature for efficiently handling this kind of problem (see, e.g., Bertsekas 2015; Fukushima 1984; LeBlanc et al. 1975; Mitradjieva and Lindberg 2013; Weintraub et al. 1985 and references therein for further details). Some of those variants represent a good (if not the best) choice when only low or medium precision is required in the solution of the problem (Perederieieva et al. 2015).
In the more recent work of Joulin et al. (2014), a FW variant also solving a shortest path subproblem at each iteration was applied to image and video co-localization.
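As a concrete illustration of the per-destination subproblem (7), the sketch below routes each demand D(l, s) along a shortest path towards s computed with Dijkstra's algorithm on the linearized costs. It is only a sketch under the assumptions that the linearized costs are nonnegative and that every origin can reach s; the function and argument names are ours, not from the paper.

    import numpy as np
    from scipy.sparse.csgraph import dijkstra

    def shortest_path_lmo(cost, demand, s):
        # cost[i, j] = c_ij > 0 on existing links, 0 (treated as "no link") otherwise
        # demand[l, s] = D(l, s); returns flow with flow[i, j] = x_ij^s solving (7)
        n = cost.shape[0]
        # shortest paths *to* s = shortest paths *from* s on the reversed graph
        _, pred = dijkstra(cost.T, indices=s, return_predecessors=True)
        flow = np.zeros((n, n))
        for l in range(n):
            if l == s or demand[l, s] == 0:
                continue
            i = l
            while i != s:
                j = pred[i]              # successor of i on a shortest path to s
                flow[i, j] += demand[l, s]
                i = j
        return flow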

3.2 Submodular optimization

Given a finite set V, a function \(r: 2^V \rightarrow {\mathbb {R}}\) is said to be submodular if for every \(A, B \subset V\)
$$\begin{aligned} r(A) + r(B) \ge r(A \cup B) + r(A \cap B) \, . \end{aligned}$$
(8)
As is common practice in the optimization literature (see e.g. Bach 2013, Section 2.1), here we always assume \(r(\emptyset ) = 0\). A number of machine learning problems, including image segmentation and sensor placement, can be cast as minimization of a submodular function (see, e.g., Bach 2013; Chakrabarty et al. 2014 and references therein for further details):
$$\begin{aligned} \min _{A \subseteq V} r(A)\, . \end{aligned}$$
(9)
Submodular optimization can also be seen as a more general way to relate combinatorial problems to convexity, for example for structured sparsity (Bach 2013; Jaggi 2013). By a theorem from Fujishige (1980), problem (9) can in turn be reduced to a minimum norm point problem over the base polytope
$$\begin{aligned} B(r) = \left\{ {\mathsf {s}}\in {\mathbb {R}}^V : \sum _{a \in A} s_a \le r(A) \text{ for } \text{ all } A \subseteq V\, , \, \sum _{a \in V} s_a = r(V) \right\} . \end{aligned}$$
(10)
For this polytope, linear optimization can be achieved with a simple greedy algorithm. More precisely, consider the LP
$$\begin{aligned} \max _{{\mathsf {s}}\in B(r)} {\mathsf {w}} ^{\intercal }{\mathsf {s}}\, . \end{aligned}$$
Note that if the equality constraint in (10) is dropped (yielding the submodular polyhedron), the problem becomes unbounded whenever the objective vector \({\mathsf {w}}\) has a negative component. Over B(r), a solution to the LP can be obtained for any \({\mathsf {w}}\) by the greedy algorithm: order \({\mathsf {w}}\) in decreasing manner as \(w_{j_1} \ge w_{j_2} \ge ... \ge w_{j_n}\), and set
$$\begin{aligned} s_{j_k} := r(\{j_{1}, ..., j_{k} \}) - r(\{j_1, ..., j_{k - 1}\}) \, , \end{aligned}$$
(11)
for \(k \in [{1}\! : \! {n}]\). We thus have an LMO with \({\mathcal {O}}(n\log n)\) cost (plus n evaluations of r). This is the reason why FW variants are widely used in the context of submodular optimization; further details can be found in, e.g., Bach (2013), Jaggi (2013).
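A direct implementation of the greedy rule (11) might look as follows; r is assumed to be a Python callable evaluating the submodular function on a set, an interface chosen for this sketch and not taken from the paper.

    import numpy as np

    def greedy_lmo(w, r):
        # maximize w^T s over the base polytope B(r) via the greedy rule (11)
        order = np.argsort(-w)      # j_1, ..., j_n with w_{j_1} >= ... >= w_{j_n}
        s = np.zeros(len(w))
        prefix, prev = set(), 0.0   # r(emptyset) = 0
        for j in order:
            prefix.add(int(j))
            val = r(prefix)
            s[j] = val - prev       # marginal gain of adding j
            prev = val
        return s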

3.3 LASSO problem

The LASSO, proposed by Tibshirani in 1996 (Tibshirani 1996), is a popular tool for sparse linear regression. Given the training set
$$\begin{aligned} T=\{({\mathsf {r}}_i,b_i) \in {\mathbb {R}}^n\times {\mathbb {R}}: i\in [{1}\! : \! {m}]\}\, , \end{aligned}$$
where \({\mathsf {r}}_i ^{\intercal }\) are the rows of an \(m\times n\) matrix \({\mathsf {A}}\), the goal is finding a sparse linear model (i.e., a model with a small number of non-zero parameters) describing the data. This problem is closely connected with the Basis Pursuit Denoising (BPD) problem in signal analysis (see, e.g., Chen et al. 2001). In this case, given a discrete-time input signal \({\mathsf {b}}\), and a dictionary
$$\begin{aligned} \{{\mathsf {a}}_j\in {\mathbb {R}}^m \ : \ j\in [{1}\! : \! {n}] \} \end{aligned}$$
of elementary discrete-time signals, usually called atoms (here \({\mathsf {a}}_j\) are the columns of a matrix \({\mathsf {A}}\)), the goal is finding a sparse linear combination of the atoms that approximates the given signal. From a purely formal point of view, LASSO and BPD problems are equivalent, and both can be formulated as follows:
$$\begin{aligned} \begin{array}{ll} \displaystyle {\min _{{\mathsf {x}}\in {\mathbb {R}}^n}}&{}f({\mathsf {x}}):=\Vert {\mathsf {A}}{\mathsf {x}}-{\mathsf {b}}\Vert _2^2\\ s.t. &{} \Vert {\mathsf {x}}\Vert _1\le \tau \, , \end{array} \end{aligned}$$
(12)
where the parameter \(\tau \) controls the amount of shrinkage that is applied to the model (related to sparsity, i.e., the number of nonzero components in \({\mathsf {x}}\)). The feasible set is
$$\begin{aligned} C=\{{\mathsf {x}}\in {\mathbb {R}}^n : \Vert {\mathsf {x}}\Vert _1\le \tau \}={{\,\mathrm{conv}\,}}\{\pm \tau {\mathsf {e}}_i : \ i\in [{1}\! : \! {n}] \}\, . \end{aligned}$$
Thus we have the following LMO in this case:
$$\begin{aligned} \text {LMO}_C(\nabla f({\mathsf {x}}_k))={{\,\mathrm{sign}\,}}(-\nabla _{i_k} f({\mathsf {x}}_k))\cdot \tau {\mathsf {e}}_{i_k}\, , \end{aligned}$$
with \(i_k \in \displaystyle {\mathop {{{\,\mathrm{arg\,max}\,}}}\limits _{i}} |\nabla _i f({\mathsf {x}}_k)|\). It is easy to see that this LMO has an \({\mathcal {O}}(n)\) cost. The peculiar structure of the problem makes FW variants well-suited for its solution. This is the reason why LASSO/BPD problems were considered in a number of FW-related papers (see, e.g., Jaggi 2011, 2013; Lacoste-Julien and Jaggi 2015; Locatello et al. 2017).
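The LMO above amounts to picking the coordinate with the largest absolute gradient entry, as in the following minimal sketch (the function name is ours):

    import numpy as np

    def lmo_l1_ball(g, tau):
        # minimize g^T z over {z : ||z||_1 <= tau}: a signed, scaled unit vector
        i = int(np.argmax(np.abs(g)))
        s = np.zeros_like(g)
        s[i] = -tau * np.sign(g[i])
        return s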

3.4 Matrix completion

Matrix completion is a widely studied problem that comes up in many areas of science and engineering, including collaborative filtering, machine learning, control, remote sensing, and computer vision (just to name a few; see also Candès and Recht (2009) and references therein). The goal is to retrieve a low rank matrix \({\mathsf {X}}\in {\mathbb {R}}^{n_1 \times n_2}\) from a sparse set of observed matrix entries \(\{U_{ij}\}_{(i,j) \in J}\) with \(J \subset [{1}\! : \! {n_1}] \times [{1}\! : \! {n_2}]\). Thus the problem can be formulated as follows (Freund et al. 2017):
$$\begin{aligned} \begin{array}{ll} \displaystyle {\min _{{\mathsf {X}}\in {\mathbb {R}}^{n_1 \times n_2}}}&{}f({\mathsf {X}}) := \displaystyle \sum _{(i,j)\in J} (X_{ij} - U_{ij})^2\\ \quad s.t. &{} {{\,\mathrm{rank}\,}}({\mathsf {X}})\le \delta , \end{array} \end{aligned}$$
(13)
where the function f is given by the squared loss over the observed entries of the matrix and \(\delta >0\) is a parameter representing our belief about the rank of the matrix we want to reconstruct. In practice, the low rank constraint is relaxed into a nuclear norm ball constraint, where we recall that the nuclear norm \({\Vert {\mathsf {X}} \Vert }_*\) of a matrix \({\mathsf {X}}\) equals the sum of its singular values. Thus we get the following convex optimization problem:
$$\begin{aligned} \begin{array}{ll} \displaystyle {\min _{{\mathsf {X}}\in {\mathbb {R}}^{n_1 \times n_2}}}&{} \displaystyle \sum _{(i,j)\in J}(X_{ij} - U_{ij})^2\\ \quad s.t. &{} {\Vert {\mathsf {X}} \Vert }_*\le \delta \, . \end{array} \end{aligned}$$
(14)
The feasible set is the convex hull of rank-one matrices:
$$\begin{aligned} \begin{array}{rcl} C &{}= &{}\{{\mathsf {X}}\in {\mathbb {R}}^{n_1 \times n_2} : {\Vert {\mathsf {X}}\Vert }_*\le \delta \}\\ &{}=&{} {{\,\mathrm{conv}\,}}\{\delta {\mathsf {u}}{\mathsf {v}} ^{\intercal }:{\mathsf {u}}\in {\mathbb {R}}^{n_1},{\mathsf {v}}\in {\mathbb {R}}^{n_2},\ \Vert {\mathsf {u}}\Vert = \Vert {\mathsf {v}}\Vert =1 \} \, .\end{array} \end{aligned}$$
If we indicate with \({\mathsf {A}}_J\) the matrix that coincides with \({\mathsf {A}}\) on the indices J and is zero otherwise, then we can write \(\nabla f({\mathsf {X}})={2}\,({\mathsf {X}}-{\mathsf {U}})_J\). Thus we have the following LMO in this case:
$$\begin{aligned} \text {LMO}_C(\nabla f({\mathsf {X}}_k)) \in {{\,\mathrm{arg\,min}\,}}\{{{\,\mathrm{tr}\,}}(\nabla f({\mathsf {X}}_k) ^{\intercal }{\mathsf {X}}) : {\Vert {\mathsf {X}} \Vert }_* \le \delta \}\, , \end{aligned}$$
(15)
which boils down to computing the gradient, and the rank-one matrix \(\delta {\mathsf {u}}_1 {\mathsf {v}}_1 ^{\intercal }\), with \({\mathsf {u}}_1, {\mathsf {v}}_1\) the left and right singular vectors corresponding to the top singular value of \(-\nabla f({\mathsf {X}}_k)\). Consequently, the FW method at a given iteration approximately reconstructs the target matrix as a sparse combination of rank-1 matrices. Furthermore, as the gradient matrix is sparse (it only has |J| non-zero entries), storage and approximate singular vector computations can be performed much more efficiently than for dense matrices. A number of FW variants have hence been proposed in the literature for solving this problem (see, e.g., Freund et al. 2017; Jaggi 2011, 2013).
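In practice the LMO (15) is implemented with a sparse top singular pair computation, e.g. along the following lines. This is only a sketch: the helper name and the use of scipy.sparse.linalg.svds are our choices, and an actual implementation would typically keep the rank-one vertex in factored form rather than materializing it.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import svds

    def lmo_nuclear_ball(grad_vals, rows, cols, shape, delta):
        # gradient 2*(X - U)_J stored sparsely by its nonzero entries
        G = csr_matrix((grad_vals, (rows, cols)), shape=shape)
        u, _, vt = svds(G, k=1)                    # top singular pair of G
        return -delta * np.outer(u[:, 0], vt[0])   # minimizes tr(G^T X) over the ball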

3.5 Adversarial attacks in machine learning

Adversarial examples are maliciously perturbed inputs designed to mislead a properly trained learning machine at test time. An adversarial attack hence consists in taking a correctly classified data point \({\mathsf {x}}_0\) and slightly modifying it to create a new data point that leads the considered model to misclassification (see, e.g., Carlini and Wagner 2017; Chen et al. 2017; Goodfellow et al. 2014 for further details). A possible formulation of the problem (see, e.g., Chen et al. 2020; Goodfellow et al. 2014) is given by the so-called maximum allowable \(\ell _p\)-norm attack, that is,
$$\begin{aligned} \begin{aligned}&\min _{{\mathsf {x}}\in {\mathbb {R}}^n} \, f({\mathsf {x}}_0+{\mathsf {x}}) \\&s.t. \quad {\Vert {\mathsf {x}}\Vert }_p \le \varepsilon \, , \end{aligned} \end{aligned}$$
(16)
where f is a suitably chosen attack loss function, \({\mathsf {x}}_0\) is a correctly classified data point, \({\mathsf {x}}\) represents the additive noise/perturbation, \(\varepsilon > 0\) denotes the magnitude of the attack, and \(p\ge 1\). It is easy to see that the LMO has an \({\mathcal {O}}(n)\) cost. If \({\mathsf {x}}_0\) is a feature vector of a dog image correctly classified by our learning machine, our adversarial attack hence suitably perturbs the feature vector (using the noise vector \({\mathsf {x}}\)), thus getting a new feature vector \({\mathsf {x}}_0+{\mathsf {x}}\) classified, e.g., as a cat. In case a target adversarial class is specified by the attacker, we have a targeted attack. In some scenarios, the goal may not be to push \({\mathsf {x}}_0\) to a specific target class, but rather to push it away from its original class. In this case we have a so-called untargeted attack. The attack loss function f will hence be chosen depending on the kind of attack we aim to perform on the considered model. Due to its specific structure, problem (16) can be nicely handled by means of tailored FW variants. Some FW frameworks for adversarial attacks were recently described in, e.g., Chen et al. (2020), Kazemi et al. (2021), Sahu and Kar (2020).
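For the most common choices \(p \in \{1, 2, \infty \}\), the LMO of (16) has a simple closed form, sketched below; the helper is hypothetical and not taken from the cited works.

    import numpy as np

    def lmo_lp_ball(g, eps, p):
        # minimize g^T x over {x : ||x||_p <= eps}
        if p == 1:
            i = int(np.argmax(np.abs(g)))
            s = np.zeros_like(g)
            s[i] = -eps * np.sign(g[i])
            return s
        if p == 2:
            return -eps * g / np.linalg.norm(g)
        if p == np.inf:
            return -eps * np.sign(g)
        raise NotImplementedError("general p: use the dual-norm formula")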

3.6 Minimum enclosing ball

Given a set of points \(P = \{{\mathsf {p}}_1,\ldots , {\mathsf {p}}_n\}\subset {\mathbb {R}}^d\), the minimum enclosing ball problem (MEB, see, e.g., Clarkson 2010; Yıldırım 2008) consists in finding the smallest ball containing P. Such a problem models numerous important applications in clustering, nearest neighbor search, data classification, machine learning, facility location, collision detection, and computer graphics, to name just a few. We refer the reader to Kumar et al. (2003) and the references therein for further details. Denoting by \({\mathsf {c}}\in {\mathbb {R}}^d\) the center and by \(\sqrt{\gamma }\) (with \(\gamma \ge 0\)) the radius of the ball, a convex quadratic formulation for this problem is
$$\begin{aligned}&\min _{({\mathsf {c}},\gamma ) \in {\mathbb {R}}^d\times {\mathbb {R}}} \, \gamma \end{aligned}$$
(17)
$$\begin{aligned}&\quad \ s.t. \, \Vert {\mathsf {p}}_i - {\mathsf {c}} \Vert ^2 \le \gamma \, , \; \text { all } i \in [{1}\! : \! {n}]\, . \end{aligned}$$
(18)
This problem can be formulated via Lagrangian duality as a convex Standard Quadratic Optimization Problem (StQP, see, e.g. Bomze and de Klerk 2002)
$$\begin{aligned} { \min \left\{ {\mathsf {x}}^{\intercal }{\mathsf {A}}^{\intercal }{\mathsf {A}}{\mathsf {x}}- {\mathsf {b}} ^{\intercal }{\mathsf {x}}: {\mathsf {x}}\in \varDelta _{n - 1} \right\} } \end{aligned}$$
(19)
with \({\mathsf {A}}= [{\mathsf {p}}_1, ..., {\mathsf {p}}_n]\) and \({\mathsf {b}} ^{\intercal }= [{\mathsf {p}}_1^{\intercal }{\mathsf {p}}_1, \ldots , {\mathsf {p}}_n^{\intercal }{\mathsf {p}}_n]\). The feasible set is the standard simplex
$$\begin{aligned} \varDelta _{n - 1}:=\{{\mathsf {x}}\in {\mathbb {R}}^n_+ : {\mathsf {e}}^\intercal {\mathsf {x}}=1\}={{\,\mathrm{conv}\,}}\{{\mathsf {e}}_i : i\in [{1}\! : \! {n}] \}\, , \end{aligned}$$
and the LMO is defined as follows:
$$\begin{aligned} \text {LMO}_{\varDelta _{n - 1}}(\nabla f({\mathsf {x}}_k))={\mathsf {e}}_{i_k}, \end{aligned}$$
with \(i_k \in {\mathop {{{\,\mathrm{arg\,min}\,}}}\nolimits _{i}} \nabla _i f({\mathsf {x}}_k)\). It is easy to see that the cost of this LMO is \({\mathcal {O}}(n)\). When applied to (19), the FW method can find an \(\varepsilon \)-cluster in \({{\mathcal {O}}}(\frac{1}{\varepsilon })\) iterations, where an \(\varepsilon \)-cluster is a subset \(P'\) of P such that the MEB of \(P'\) dilated by \(1 + \varepsilon \) contains P (Clarkson 2010). The set \(P'\) is given by the atoms in P selected by the LMO in the first \({{\mathcal {O}}}(\frac{1}{\varepsilon })\) iterations. Further details related to the connections between FW methods and MEB problems can be found in, e.g., Ahipaşaoğlu et al. (2008), Ahipaşaoğlu and Todd (2013), Clarkson (2010) and references therein.
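The simplex LMO and the recovery of the ball from (approximately) optimal simplex weights can be sketched as follows; the relations \({\mathsf {c}}={\mathsf {A}}{\mathsf {x}}\) and \(\gamma ={\mathsf {b}}^{\intercal }{\mathsf {x}}-\Vert {\mathsf {A}}{\mathsf {x}}\Vert ^2\) come from the Lagrangian duality behind (17)–(19), and the function names are ours.

    import numpy as np

    def lmo_simplex(g):
        # minimize g^T z over the standard simplex: vertex e_i with smallest g_i
        s = np.zeros_like(g)
        s[int(np.argmin(g))] = 1.0
        return s

    def ball_from_weights(A, x):
        # A = [p_1, ..., p_n] (d x n), x approximately solving (19)
        c = A @ x                                # center
        b = np.einsum('ij,ij->j', A, A)          # b_i = ||p_i||^2
        return c, b @ x - c @ c                  # center, squared radius gamma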

3.7 Training linear Support Vector Machines

Support Vector Machines (SVMs) represent a very important class of machine learning tools (see, e.g., Vapnik 2013 for further details). Given a labeled set of data points, usually called training set:
$$\begin{aligned} TS=\{({\mathsf {p}}_i, y_i),\ {\mathsf {p}}_i \in {\mathbb {R}}^d,\ y_i \in \{-1, 1\},\ i=1,\dots ,n \}, \end{aligned}$$
the linear SVM training problem consists in finding a linear classifier \({\mathsf {w}}\in {\mathbb {R}}^d\) such that the label \(y_i\) can be deduced with the “highest possible confidence” from \({\mathsf {w}}^{\intercal }{\mathsf {p}}_i\). A convex quadratic formulation for this problem is the following (Clarkson 2010):
$$\begin{aligned} \begin{array}{cl} \displaystyle {\min _{{\mathsf {w}}\in {\mathbb {R}}^d, \rho \in {\mathbb {R}}}}&{}\rho + \frac{\Vert {\mathsf {w}} \Vert ^2}{2}\\ s.t. &{} \rho + y_i\, {\mathsf {w}}^\intercal {\mathsf {p}}_i \ge 0\, , \quad \text {all }i\in [{1}\! : \! {n}]\, , \end{array} \end{aligned}$$
(20)
where the slack variable \(\rho \) stands for the negative margin, and we can have \(\rho < 0\) if and only if there exists an exact linear classifier, i.e. a vector \({\mathsf {w}}\) such that \({{\,\mathrm{sign}\,}}({\mathsf {w}}^\intercal {\mathsf {p}}_i) = y_i\) for all \(i\in [{1}\! : \! {n}]\). The dual of (20) is again an StQP:
$$\begin{aligned} { \min \left\{ {\mathsf {x}}^{\intercal }{\mathsf {A}}^{\intercal }{\mathsf {A}}{\mathsf {x}}: {\mathsf {x}}\in \varDelta _{n - 1} \right\} } \end{aligned}$$
(21)
with \({\mathsf {A}}= [y_1{\mathsf {p}}_1, ..., y_n{\mathsf {p}}_n]\). Notice that problem (21) is equivalent to a Minimum Norm Point (MNP) problem on \( {{\,\mathrm{conv}\,}}\{ y_i{\mathsf {p}}_i : i\in [{1}\! : \! {n}]\}\), see Sect. 7.2 below. Some FW variants (like, e.g., the Pairwise Frank–Wolfe) are closely related to classical working set algorithms, such as the SMO algorithm used to train SVMs (Lacoste-Julien and Jaggi 2015). Further details on FW methods for SVM training problems can be found in, e.g., Clarkson (2010), Jaggi (2011).

3.8 Finding maximal cliques in graphs

In the context of network analysis, the clique model, dating back at least to the work of Luce and Perry (1949) about social networks, refers to subsets with every two elements in a direct relationship. The problem of finding maximal cliques has numerous applications in domains including telecommunication networks, biochemistry, financial networks, and scheduling (see, e.g., Bomze et al. 1999; Wu and Hao 2015). Let \(G=(V,E)\) be a simple undirected graph with V and E its sets of vertices and edges, respectively. A clique in G is a subset \(C\subseteq V\) such that \((i,j)\in E\) for all \(i, j\in C\) with \(i\ne j\). The goal is finding a clique C that is maximal (i.e., not contained in any strictly larger clique). This corresponds to finding a local solution of the following equivalent (this time non-convex) StQP (see, e.g., Bomze 1997; Bomze et al. 1999; Hungerford and Rinaldi 2019 for further details):
$$\begin{aligned} \max \left\{ {\mathsf {x}}^{\intercal }{\mathsf {A}}_G {\mathsf {x}}+\frac{1}{2}\Vert {\mathsf {x}}\Vert ^2 : {\mathsf {x}}\in \varDelta _{n - 1} \right\} \end{aligned}$$
(22)
where \({\mathsf {A}}_G\) is the adjacency matrix of G. Due to the peculiar structure of the problem, FW methods can be fruitfully used to find maximal cliques (see, e.g., Hungerford and Rinaldi 2019).

3.9 Finding sparse points in a set

Given a non-empty polyhedron \(P\subset {\mathbb {R}}^n\), the goal is finding a sparse point \({\mathsf {x}}\in P\) (i.e., a point with as many zero components as possible). This sparse optimization problem can be used to model a number of real-world applications in fields like, e.g., machine learning, pattern recognition and signal processing (see Rinaldi et al. 2010 and references therein). Ideally, what we would like to get is an optimal solution for the following problem:
$$\begin{aligned} \min \left\{ \Vert {\mathsf {x}}\Vert _0 : {\mathsf {x}}\in P\right\} . \end{aligned}$$
(23)
Since the zero norm is non-smooth, a standard procedure is to replace the original formulation (23) with an equivalent concave optimization problem of the form:
$$\begin{aligned} \min \left\{ \displaystyle \sum _{i=1}^n \phi (y_i): {\mathsf {x}}\in P,\, \, -{\mathsf {y}}\le {\mathsf {x}}\le {\mathsf {y}}\right\} , \end{aligned}$$
(24)
where \(\phi :[0, +\infty ) \rightarrow {\mathbb {R}}\) is a suitably chosen smooth concave univariate function bounded from below, like, e.g.,
$$\begin{aligned} \phi (t)=(1-e^{-\alpha t})\, , \end{aligned}$$
with \(\alpha \) a large enough positive parameter (see, e.g., Mangasarian 1996; Rinaldi et al. 2010 for further details). The LMO in this case gives a vertex solution for the linear programming problem:
$$\begin{aligned} \min \left\{ \displaystyle {{\mathsf {c}}}_k^\intercal {\mathsf {y}}: {\mathsf {x}}\in P,\, \, -{\mathsf {y}}\le {\mathsf {x}}\le {\mathsf {y}}\right\} , \end{aligned}$$
with \(({{\mathsf {c}}}_k)_i\) the first-order derivative of \(\phi \) evaluated at \(({{\mathsf {y}}}_k)_i\). Variants of the unit-stepsize FW method have been proposed in the literature (see, e.g., Mangasarian 1996; Rinaldi et al. 2010) to tackle the smooth equivalent formulation (24).
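For a polyhedron given in the illustrative form \(P=\{{\mathsf {x}}: {\mathsf {A}}{\mathsf {x}}\le {\mathsf {b}}\}\) (an assumption of this sketch, the paper keeps P generic), the LMO above is a single LP in the variables \(({\mathsf {x}},{\mathsf {y}})\), e.g.:

    import numpy as np
    from scipy.optimize import linprog

    def lmo_sparse(c_k, A, b):
        # minimize c_k^T y  s.t.  A x <= b,  -y <= x <= y,  variables z = (x, y)
        m, n = A.shape
        obj = np.concatenate([np.zeros(n), c_k])
        A_ub = np.block([[A,          np.zeros((m, n))],
                         [-np.eye(n), -np.eye(n)],       # -x - y <= 0
                         [np.eye(n),  -np.eye(n)]])      #  x - y <= 0
        b_ub = np.concatenate([b, np.zeros(2 * n)])
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(None, None), method="highs")
        return res.x[:n], res.x[n:]                      # vertex (x, y)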

4 Stepsizes

Popular rules for determining the stepsize are:
  • unit stepsize:
    $$\begin{aligned} \alpha _k=1, \end{aligned}$$
    mainly used when the problem has a concave objective function. Finite convergence can be proved, under suitable assumptions, both for the unit-stepsize FW and some of its variants described in the literature (see, e.g., Rinaldi et al. 2010 for further details).
  • diminishing stepsize:
    $$\begin{aligned} \alpha _k = \frac{2}{ k + 2} \, , \end{aligned}$$
    (25)
    mainly used for the classic FW (see, e.g., Freund and Grigas 2016; Jaggi 2013).
  • exact line search:
    $$\begin{aligned} \alpha _k = \min {{\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{\alpha \in [0, \alpha _k^{\max }]}} \varphi (\alpha )} \quad {\text{ with } \varphi (\alpha ):=f({\mathsf {x}}_k + \alpha \, {\mathsf {d}}_k) }\, , \end{aligned}$$
    (26)
    where we pick the smallest minimizer of the function \(\varphi \) for the sake of being well-defined even in rare cases of ties (see, e.g., Bomze et al. 2020; Lacoste-Julien and Jaggi 2015).
  • Armijo line search: the method iteratively shrinks the step size in order to guarantee a sufficient reduction of the objective function. It represents a good way to replace exact line search in cases when it gets too costly. In practice, we fix parameters \(\delta \in (0,1)\) and \(\gamma \in (0,\frac{1}{2})\), then try steps \(\alpha =\delta ^{m}\alpha _k^{\max }\) with \(m\in \{ 0,1,2,\dots \}\) until the sufficient decrease inequality
    $$\begin{aligned} f({\mathsf {x}}_k+\alpha \,{\mathsf {d}}_k)\le f({\mathsf {x}}_k)+\gamma \alpha \, \nabla f({\mathsf {x}}_k)^{\intercal } {\mathsf {d}}_k \end{aligned}$$
    (27)
    holds, and set \(\alpha _k=\alpha \) (see, e.g., Bomze et al. 2019 and references therein).
  • Lipschitz constant dependent step size:
    $$\begin{aligned} \alpha _k = \alpha _k(L) := \min \left\{ -\, \frac{ \nabla f({\mathsf {x}}_k) ^{\intercal }{\mathsf {d}}_k}{L\Vert {\mathsf {d}}_k \Vert ^2}, \alpha _k^{max} \right\} \, , \end{aligned}$$
    (28)
    with L the Lipschitz constant of \(\nabla f\) (see, e.g., Bomze et al. 2020; Pedregosa et al. 2020).
The Lipschitz constant dependent step size can be seen as the minimizer of the quadratic model \(m_k(\cdot ; L)\) overestimating f along the line \({\mathsf {x}}_k+\alpha \, {\mathsf {d}}_k\):
$$\begin{aligned} m_k( \alpha ; L) = f({\mathsf {x}}_k) + \alpha \, \nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k + \frac{L\alpha ^2}{2}\, \Vert {\mathsf {d}}_k \Vert ^2 \ge f({\mathsf {x}}_k + \alpha \, {\mathsf {d}}_k)\, , \end{aligned}$$
(29)
where the inequality follows by the standard Descent Lemma.
In case L is unknown, it is even possible to approximate L using a backtracking line search (see, e.g., Kerdreux et al. 2020; Pedregosa et al. 2020).
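For reference, the Lipschitz-dependent rule (28) and the Armijo rule (27) can be written compactly as in the following sketch; the backtracking loop assumes that \({\mathsf {d}}_k\) is a descent direction, as is the case for the FW direction whenever the FW gap is positive, and the parameter values shown are just admissible examples.

    import numpy as np

    def stepsize_lipschitz(g, d, L, alpha_max):
        # rule (28): minimizer of the quadratic model (29) over [0, alpha_max]
        return min(-(g @ d) / (L * (d @ d)), alpha_max)

    def stepsize_armijo(f, x, g, d, alpha_max, delta=0.5, gamma=0.25):
        # rule (27): backtrack until the sufficient decrease condition holds
        alpha, fx, slope = alpha_max, f(x), g @ d
        while f(x + alpha * d) > fx + gamma * alpha * slope:
            alpha *= delta
        return alpha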
We now report a lower bound for the improvement on the objective obtained with the stepsize (28), often used in the convergence analysis.
Lemma 1
If \(\alpha _k\) is given by (28) and \(\alpha _k < \alpha _k^{\max }\) then
$$\begin{aligned} f({\mathsf {x}}_{k + 1}) \le f({\mathsf {x}}_k) - \frac{1}{2L}(\nabla f({\mathsf {x}}_k)^\intercal {\widehat{{\mathsf {d}}}}_k)^2 \, . \end{aligned}$$
(30)
Proof
We have
$$\begin{aligned} \begin{array}{rcl} f({\mathsf {x}}_k + \alpha _k \, {\mathsf {d}}_k) &{}\le &{}f({\mathsf {x}}_k) + \alpha _k \nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k + \frac{L\alpha _k^2}{2}\, \Vert {\mathsf {d}}_k \Vert ^2 \\ &{}= &{}f({\mathsf {x}}_k) - \frac{(\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k)^2}{2L\Vert {\mathsf {d}}_k \Vert ^2} = f({\mathsf {x}}_k) - \frac{1}{2L}(\nabla f({\mathsf {x}}_k)^\intercal \widehat{{\mathsf {d}}_k})^2 \, , \end{array}\nonumber \\ \end{aligned}$$
(31)
where we used the standard Descent Lemma in the inequality. \(\square \)

5 Properties of the FW method and its variants

5.1 The FW gap

A key parameter often used as a measure of convergence is the FW gap
$$\begin{aligned} G({\mathsf {x}}) = \max _{{\mathsf {s}}\in C} - \nabla f({\mathsf {x}})^\intercal ({\mathsf {s}}- {\mathsf {x}}) \, , \end{aligned}$$
(32)
which is always nonnegative, and equal to 0 exactly at first-order stationary points. This gap is, by definition, readily available during the algorithm. If f is convex, using that \(\nabla f({\mathsf {x}})\) is a subgradient we obtain
$$\begin{aligned} G({\mathsf {x}}) \ge - \nabla f({\mathsf {x}})^\intercal ({\mathsf {x}}^* - {\mathsf {x}}) \ge f({\mathsf {x}}) - f^*\, , \end{aligned}$$
(33)
so that \(G({\mathsf {x}})\) is an upper bound on the optimality gap at \({\mathsf {x}}\). Furthermore, \(G({\mathsf {x}})\) is a special case of the Fenchel duality gap (Lacoste-Julien et al. 2013).
If \(C=\varDelta _{n - 1}\) is the simplex, then G is related to the Wolfe dual as defined in Clarkson (2010). Indeed, this variant of Wolfe’s dual reads
$$\begin{aligned} \begin{aligned} \max \&f({\mathsf {x}}) + \lambda ({\mathsf {e}}^{\intercal }{\mathsf {x}}-1)- {\mathsf {u}}^\intercal {\mathsf {x}} \\ \text {s.t.}\;\;&\nabla _i f({\mathsf {x}}) - u_i + {\lambda } = 0 \, ,\quad i \in [{1}\! : \! {n}] \, , \\&({\mathsf {x}}, {\mathsf {u}}, {\lambda }) \in {\mathbb {R}}^n \times {\mathbb {R}}^n_+ \times {\mathbb {R}}\ \end{aligned} \end{aligned}$$
(34)
and for a fixed \({\mathsf {x}}\in {\mathbb {R}}^n\), the optimal values of \(({\mathsf {u}}, {\lambda })\) are
$$\begin{aligned} {\lambda }_{\mathsf {x}}= - \min _{j} \nabla _j f({\mathsf {x}})\, , \, \ \ u_i({\mathsf {x}}):= \nabla _i f({\mathsf {x}}) - \min _j \nabla _j f({\mathsf {x}}) \ge 0 \, . \end{aligned}$$
Performing maximization in problem (34) iteratively, first for \(({\mathsf {u}},\lambda )\) and then for \({\mathsf {x}}\), this implies that (34) is equivalent to
$$\begin{aligned} \begin{array}{rcl} &{}&{}\max _{{\mathsf {x}}\in {\mathbb {R}}^n}\left[ f({\mathsf {x}}) + \lambda _{\mathsf {x}}({\mathsf {e}} ^{\intercal }{\mathsf {x}}-1) - {\mathsf {u}}({\mathsf {x}}) ^{\intercal }{\mathsf {x}}\right] \\ &{} = &{} \max _{{\mathsf {x}}\in {\mathbb {R}}^n}\left[ f({\mathsf {x}})- \max _j ({\mathsf {x}}-{\mathsf {e}}_j) ^{\intercal }\nabla f({\mathsf {x}}) \right] = \max _{{\mathsf {x}}\in {\mathbb {R}}^n}\left[ f({\mathsf {x}}) - G({\mathsf {x}})\right] \, .\end{array} \end{aligned}$$
(35)
Furthermore, since Slater’s condition is satisfied, strong duality holds by Slater’s theorem (Boyd et al. 2004), resulting in \( G({\mathsf {x}}^*) = 0\) for every solution \({\mathsf {x}}^*\) of the primal problem.
The FW gap is related to several other measures of convergence (see e.g. Lan 2020, Section 7.5.1). First, consider the projected gradient
$$\begin{aligned} {\widetilde{{\mathsf {g}}}}_k := \pi _C({\mathsf {x}}_k - \nabla f({\mathsf {x}}_k)) - {\mathsf {x}}_k \, . \end{aligned}$$
(36)
Here \(\pi _{B}\) denotes the projection onto a convex and closed subset \(B\subseteq {\mathbb {R}}^n\). We have \(\Vert {\widetilde{{\mathsf {g}}}}_k \Vert = 0\) if and only if \({\mathsf {x}}_k\) is stationary, with
$$\begin{aligned} \begin{array}{rcl} \Vert {\widetilde{{\mathsf {g}}}}_k \Vert ^2 &{} = &{} {\widetilde{{\mathsf {g}}}}_k^\intercal {\widetilde{{\mathsf {g}}}}_k \,\le \, {\widetilde{{\mathsf {g}}}}_k^\intercal [({\mathsf {x}}_k - \nabla f({\mathsf {x}}_k) ) - \pi _C({\mathsf {x}}_k - \nabla f({\mathsf {x}}_k))] + {\widetilde{{\mathsf {g}}}}_k^\intercal {\widetilde{{\mathsf {g}}}}_k \\ &{}= &{} -{\widetilde{{\mathsf {g}}}}_k^\intercal \nabla f({\mathsf {x}}_k) \, = \, -(\pi _C({\mathsf {x}}_k - \nabla f({\mathsf {x}}_k)) - {\mathsf {x}}_k) ^\intercal \nabla f({\mathsf {x}}_k) \\ &{}\le &{} \max \limits _{{\mathsf {y}}\in C} - ({\mathsf {y}}- {\mathsf {x}}_k)^\intercal \nabla f({\mathsf {x}}_k) \,=\, G({\mathsf {x}}_k) \, , \end{array} \end{aligned}$$
(37)
where we used \([{\mathsf {y}}- \pi _{C}({\mathsf {x}})]^\intercal [{\mathsf {x}}- \pi _{C}({\mathsf {x}})] \le 0\) in the first inequality, with \({\mathsf {x}}= {\mathsf {x}}_k - \nabla f({\mathsf {x}}_k)\) and \({\mathsf {y}}= {\mathsf {x}}_k\).
Let now \(N_{C}({\mathsf {x}})\) denote the normal cone to \(C\) at a point \({\mathsf {x}}\in C\):
$$\begin{aligned} N_{C}({\mathsf {x}}) := \{{\mathsf {r}}\in {\mathbb {R}}^n : {\mathsf {r}}^\intercal ({\mathsf {y}}- {\mathsf {x}}) \le 0 \; \text{ for } \text{ all } {\mathsf {y}}\in C\} \, . \end{aligned}$$
(38)
First-order stationarity conditions are equivalent to \( - \nabla f({\mathsf {x}}) \in N_{C}({\mathsf {x}})\), or
$$\begin{aligned} {{\,\mathrm{dist}\,}}(N_{C}({\mathsf {x}}), - \nabla f({\mathsf {x}})) = \Vert - \nabla f({\mathsf {x}}) - \pi _{N_{C}({\mathsf {x}})}(-\nabla f({\mathsf {x}})) \Vert = 0 \, . \end{aligned}$$
The FW gap provides a lower bound on the distance from the normal cone \({{\,\mathrm{dist}\,}}(N_{C}({\mathsf {x}}), - \nabla f({\mathsf {x}}))\), inflated by the diameter \(D>0\) of \(C\), as follows:
$$\begin{aligned} \begin{array}{rcl} G({\mathsf {x}}_k) &{}= &{} -({\mathsf {s}}_k - {\mathsf {x}}_k)^\intercal \nabla f({\mathsf {x}}_k) \\ &{}= &{} ({\mathsf {s}}_k - {\mathsf {x}}_k)^\intercal [\pi _{N_{C}({\mathsf {x}}_k)}(-\nabla f({\mathsf {x}}_k)) - (\pi _{N_{C}({\mathsf {x}}_k)}(-\nabla f({\mathsf {x}}_k)) + \nabla f({\mathsf {x}}_k))] \\ &{}\le &{} \Vert {\mathsf {s}}_k - {\mathsf {x}}_k \Vert \, \Vert \pi _{N_{C}({\mathsf {x}}_k)}(-\nabla f({\mathsf {x}}_k)) + \nabla f({\mathsf {x}}_k) \Vert \\ &{}\le &{} D\, {{\,\mathrm{dist}\,}}(N_{C}({\mathsf {x}}_k), -\nabla f({\mathsf {x}}_k)) \, , \end{array} \end{aligned}$$
(39)
where in the first inequality we used \(({\mathsf {s}}_k - {\mathsf {x}}_k)^\intercal [\pi _{N_{C}({\mathsf {x}}_k)}(-\nabla f({\mathsf {x}}_k))] \le 0\) together with the Cauchy-Schwarz inequality, and \(\Vert {\mathsf {s}}_k - {\mathsf {x}}_k \Vert \le D\) in the second.

5.2 \({{\mathcal {O}}}(1/k)\) rate for convex objectives

If f is non-convex, it is possible to prove an \({{\mathcal {O}}}(1/\sqrt{k})\) rate for \(\min _{i \in [1:k]} G({\mathsf {x}}_i)\) (see, e.g., Lacoste-Julien 2016). On the other hand, if f is convex, we have an \({{\mathcal {O}}}(1/k)\) rate on the optimality gap (see, e.g., Frank and Wolfe 1956; Levitin and Polyak 1966) for all the stepsizes discussed in Sect. 4. Here we include a proof for the Lipschitz constant dependent stepsize \(\alpha _k\) given by (28).
Theorem 1
If f is convex and the stepsize is given by (28), then for every \(k \ge 1\)
$$\begin{aligned} f({\mathsf {x}}_k) - f^* \le \frac{2LD^2}{k + 2} \, . \end{aligned}$$
(40)
Before proving the theorem we prove a lemma concerning the decrease of the objective in the case of a full FW step, that is a step with \({\mathsf {d}}_k = {\mathsf {d}}_k^{FW}\) and with \(\alpha _k\) equal to 1, the maximal feasible stepsize.
Lemma 2
If \(\alpha _k = 1\) and \({\mathsf {d}}_k = {\mathsf {d}}_k^{FW}\) then
$$\begin{aligned} f({\mathsf {x}}_{k + 1}) - f^* \le \frac{1}{2} \, \min \left\{ L\Vert {\mathsf {d}}_k \Vert ^2 , f({\mathsf {x}}_k) - f^* \right\} \, . \end{aligned}$$
(41)
Proof
If \(\alpha _k = 1 = \alpha _k^{\max }\) then by Definitions (3) and (32)
$$\begin{aligned} G({\mathsf {x}}_k) = -\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k \ge L \Vert {\mathsf {d}}_k \Vert ^2 \, , \end{aligned}$$
(42)
the last inequality following by Definition (28) and the assumption that \(\alpha _k = 1\). By the standard Descent Lemma it also follows
$$\begin{aligned} f({\mathsf {x}}_{k + 1}) - f^* = f({\mathsf {x}}_k + {\mathsf {d}}_k) - f^* \le f({\mathsf {x}}_k) - f^* + \nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k + \frac{L}{2}\,\Vert {\mathsf {d}}_k \Vert ^2 \, .\nonumber \\ \end{aligned}$$
(43)
Considering the definition of \({\mathsf {d}}_k\) and convexity of f, we get
$$\begin{aligned} f({\mathsf {x}}_k) - f^* + \nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k \le f({\mathsf {x}}_k) - f^* + \nabla f({\mathsf {x}}_k)^\intercal ({\mathsf {x}}^*-{\mathsf {x}}_k) \le 0\, , \end{aligned}$$
so that (43) entails \( f({\mathsf {x}}_{k + 1}) - f^* \le \frac{L}{2}\,\Vert {\mathsf {d}}_k \Vert ^2 \). To conclude, it suffices to apply to the RHS of (43) the inequality
$$\begin{aligned} f({\mathsf {x}}_k) - f^* + \nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k + {\textstyle {\frac{L}{2}}}\,\Vert {\mathsf {d}}_k \Vert ^2 \le f({\mathsf {x}}_k) - f^* -{\textstyle {\frac{1}{2}}}\,G({\mathsf {x}}_k) \le {\textstyle {\frac{f({\mathsf {x}}_k) - f^*}{2}}}\,\nonumber \\ \end{aligned}$$
(44)
where we used (42) in the first inequality and \(G({\mathsf {x}}_k) \ge f({\mathsf {x}}_k) - f^*\) in the second. \(\square \)
We can now proceed with the proof of the main result.
Proof of Theorem 1
For \(k = 0\), if \(\alpha _0 = 1\), then by Lemma 2
$$\begin{aligned} f({\mathsf {x}}_1) - f^* \le \frac{L\Vert {\mathsf {d}}_0 \Vert ^2}{2} \le \frac{ L D^2}{2} \, . \end{aligned}$$
(45)
If \(\alpha _0 < 1\) then
$$\begin{aligned} f({\mathsf {x}}_0) - f^* \le G({\mathsf {x}}_0) < L\Vert {\mathsf {d}}_0 \Vert ^2 \le LD^2 \, . \end{aligned}$$
(46)
Therefore, using Lemma 1 in the second case as in (47) below, in both cases (40) holds for \(k = 1\).
Reasoning by induction, if (40) holds for k with \(\alpha _k = 1\), then the claim is clear by (41). On the other hand, if \(\alpha _k <\alpha _k^{\max }= 1 \) then by Lemma 1, we have
$$\begin{aligned} \begin{array}{rcl} f({\mathsf {x}}_{k + 1}) - f^* &{}\le &{}f({\mathsf {x}}_k) - f^* - \frac{1}{2L}\, (\nabla f(x_k)^\intercal {\widehat{{\mathsf {d}}}}_k)^2\\ &{}\le &{}f({\mathsf {x}}_k) - f^* - \frac{(\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k)^2}{2LD^2} \\ &{}\le &{}f({\mathsf {x}}_k) - f^* - \frac{(f({\mathsf {x}}_k) - f^*)^2}{2LD^2}\\ &{}= &{}(f({\mathsf {x}}_k) - f^*)\left( 1 - \frac{f({\mathsf {x}}_k) - f^*}{2LD^2}\right) \, \le \, \frac{2LD^2}{k + 3} \, , \end{array} \end{aligned}$$
(47)
where we used \(\Vert {\mathsf {d}}_k \Vert \le D\) in the second inequality and \(-\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}}_k = G({\mathsf {x}}_k) \ge f({\mathsf {x}}_k) - f^*\) in the third inequality; the last inequality follows from the induction hypothesis, since the map \(t\mapsto t\left( 1 - \frac{t}{2LD^2}\right) \) is nondecreasing for \(t \le LD^2\) and \((k+1)(k+3)\le (k+2)^2\). \(\square \)
As can be easily seen from the above argument, the \({{\mathcal {O}}}(1/k)\) convergence rate also holds in more abstract settings than \({\mathbb {R}}^n\), e.g. when \(C\) is a convex and weakly compact subset of a Banach space (see, e.g., Demyanov and Rubinov 1970; Dunn and Harshbarger 1978). A generalization to some unbounded sets is given in Ferreira and Sosa (2021). The bound is tight due to a zigzagging behaviour of the method near solutions on the boundary, which rules out a rate of \({{\mathcal {O}}}(1/k^{1 + \delta })\) for any \(\delta > 0\) (see Canon and Cullum 1968 for further details), already when the objective is a strictly convex quadratic function and the domain is a polytope.
Also the minimum FW gap \(\min _{i \in [0: k]} G({\mathsf {x}}_i) \) converges at a rate of \({{\mathcal {O}}}(1/k)\) (see Jaggi 2013; Freund and Grigas 2016). In Freund and Grigas (2016), a broad class of stepsizes is examined, including \(\alpha _k= \frac{1}{k + 1}\) and \(\alpha _k = {\bar{\alpha }}\) constant. For these stepsizes a convergence rate of \({{\mathcal {O}}}\left( \frac{\ln (k)}{k}\right) \) is proved.

5.3 Variants

Active set FW variants mostly aim to improve over the \({{\mathcal {O}}}(1/k)\) rate and also ensure support identification in finite time. They generate a sequence of active sets \(\{A_k\}\), such that \({\mathsf {x}}_k \in {{\,\mathrm{conv}\,}}(A_k)\), and define alternative directions making use of these active sets.
For the pairwise FW (PFW) and the away step FW (AFW) (see Clarkson 2010; Lacoste-Julien and Jaggi 2015) we have that \(A_k\) must always be a subset of \(S_k\), with \({\mathsf {x}}_k\) a convex combination of the elements in \(A_k\). The away vertex \({\mathsf {v}}_k\) is then defined by
$$\begin{aligned} {\mathsf {v}}_k \in {\mathop {{{\,\mathrm{arg\,max}\,}}}\limits _{{\mathsf {y}}\in A_k}} \nabla f({\mathsf {x}}_k)^\intercal {\mathsf {y}} \, . \end{aligned}$$
(48)
The AFW direction, introduced in Wolfe (1970), is hence given by
$$\begin{aligned} \begin{array}{ll} {\mathsf {d}}^{AS}_k &{}= {\mathsf {x}}_k - {\mathsf {v}}_k \\ {\mathsf {d}}_k &{}\in {{\,\mathrm{arg\,max}\,}}\{-\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}} : {\mathsf {d}}\in \{{\mathsf {d}}_k^{AS}, {\mathsf {d}}_k^{FW}\} \} \, , \end{array} \end{aligned}$$
(49)
while the PFW direction, as defined in Lacoste-Julien and Jaggi (2015) and inspired by the early work (Mitchell et al. 1974), is
$$\begin{aligned} {\mathsf {d}}^{PFW}_k ={\mathsf {d}}_k^{FW}+{\mathsf {d}}^{AS}_k= {\mathsf {s}}_k - {\mathsf {v}}_k \, , \end{aligned}$$
(50)
with \({\mathsf {s}}_k\) defined in (3).
The FW method with in-face directions (FDFW) (see Freund et al. 2017; Guélat and Marcotte 1986), also known as Decomposition invariant Conditional Gradient (DiCG) when applied to polytopes (Bashiri and Zhang 2017), is defined exactly as the AFW, but with the minimal face \({\mathcal {F}}({\mathsf {x}}_k)\) of \(C\) containing \({\mathsf {x}}_k\) as the active set. The extended FW (EFW) was introduced in Holloway (1974) and is also known as simplicial decomposition (Von Hohenbalken 1977). At every iteration the method minimizes the objective in the current active set \(A_{k + 1}\)
$$\begin{aligned} {\mathsf {x}}_{k + 1} \in {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{\mathsf {y}}\in {{\,\mathrm{conv}\,}}(A_{k + 1})}} f({\mathsf {y}})\, , \end{aligned}$$
(51)
where \(A_{k + 1} \subseteq A_k \cup \{s_k\}\) (see, e.g., Clarkson 2010, Algorithm 4.2). A more general version of the EFW, only approximately minimizing on the current active set, was introduced in Lacoste-Julien and Jaggi (2015) under the name of fully corrective FW. In Table 1, we report the main features of the classic FW and of the variants under analysis.
Table 1
FW method and variants covered in this review

Variant: FW
Direction: \({\mathsf {d}}_k = {\mathsf {d}}_k^{FW} = {\mathsf {s}}_k - {\mathsf {x}}_k, \quad {\mathsf {s}}_k \in {{\,\mathrm{arg\,min}\,}}\{\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {x}} : {\mathsf {x}}\in C\} \)
Active set: –

Variant: AFW
Direction: \( {\mathsf {d}}_k \in {{\,\mathrm{arg\,max}\,}}\{-\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}} : {\mathsf {d}}\in \{{\mathsf {x}}_k - {\mathsf {v}}_k, {\mathsf {d}}_k^{FW}\} \}\), with \({\mathsf {v}}_k\) as in (48)
Active set: \(A_{k + 1}\subseteq A_k \cup \{{\mathsf {s}}_{k}\}\)

Variant: PFW
Direction: \( {\mathsf {d}}_k = {\mathsf {s}}_k - {\mathsf {v}}_k, \quad {\mathsf {v}}_k \in {{\,\mathrm{arg\,max}\,}}\{\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {v}} : {\mathsf {v}}\in A_k \} \)
Active set: \(A_{k + 1}\subseteq A_k \cup \{{\mathsf {s}}_{k}\}\)

Variant: EFW
Direction: \( {\mathsf {d}}_k = {\mathsf {y}}_k - {\mathsf {x}}_k, \quad {\mathsf {y}}_k \in {{\,\mathrm{arg\,min}\,}}\{f({\mathsf {y}}): {\mathsf {y}}\in {{\,\mathrm{conv}\,}}(A_k) \} \)
Active set: \(A_{k + 1}\subseteq A_k \cup \{{\mathsf {s}}_{k}\}\)

Variant: FDFW
Direction: \( {\mathsf {d}}_k \in {{\,\mathrm{arg\,max}\,}}\{-\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {d}} : {\mathsf {d}}\in \{{\mathsf {x}}_k - {\mathsf {v}}_k, {\mathsf {d}}_k^{FW}\} \}\), with \({\mathsf {v}}_k \in {{\,\mathrm{arg\,max}\,}}\{\nabla f({\mathsf {x}}_k)^\intercal {\mathsf {v}} : {\mathsf {v}}\in A_k \}\)
Active set: \(A_k = {\mathcal {F}}({\mathsf {x}}_k)\)
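On the standard simplex, where the active set can be taken as the support of the current iterate, the AFW direction (48)–(49) and its maximal stepsize can be computed as in the following sketch; lmo denotes a linear minimization oracle for the simplex as in (2), and all names are ours.

    import numpy as np

    def afw_direction(g, x, lmo):
        # returns (d_k, alpha_k^max) for the AFW on the standard simplex
        d_fw = lmo(g) - x                          # FW direction (3)
        active = np.nonzero(x > 0)[0]              # active set A_k = supp(x_k)
        v = active[int(np.argmax(g[active]))]      # away vertex (48)
        d_as = x.copy()
        d_as[v] -= 1.0                             # away direction x_k - e_v
        if -g @ d_fw >= -g @ d_as:                 # pick the steeper direction (49)
            return d_fw, 1.0
        return d_as, x[v] / (1.0 - x[v])           # keep x_k + alpha * d_as feasible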

5.4 Sparse approximation properties

As discussed in the previous section, for the classic FW method and the AFW, PFW, EFW variants, \({\mathsf {x}}_k\) can always be written as a convex combination of elements in \(A_k \subset S_k\), with \(|A_k| \le k + 1\). Even for the FDFW we still have the weaker property that \({\mathsf {x}}_k\) must be an affine combination of elements in \(A_k \subset A\) with \( |A_k| \le k + 1\). It turns out that the convergence rate of methods with this property is \(\varOmega (\frac{1}{k})\) in high dimension. More precisely, if \(C= {{\,\mathrm{conv}\,}}(A)\) with A compact, the \({{\mathcal {O}}}(1/k)\) rate of the classic FW method is worst-case optimal given the sparsity constraint
$$\begin{aligned} x_k \in {{\,\mathrm{aff}\,}}(A_k) \ \text {with }A_k \subset A, \ |A_k| \le k + 1 \, . \end{aligned}$$
(52)
An example where the \({{\mathcal {O}}}(1/k)\) rate is tight was presented in Jaggi (2013). Let \(C= \varDelta _{n - 1}\) and \(f({\mathsf {x}}) = \Vert {\mathsf {x}}- \frac{1}{n}\, {\mathsf {e}} \Vert ^2\). Clearly, \(f^* = 0\) with \({\mathsf {x}}^* = \frac{1}{n}\, {\mathsf {e}}\). Then it is easy to see that \(\min \{f({\mathsf {x}}) - f^* : {\mathsf {x}}\in \varDelta _{n - 1}, \, {\Vert {\mathsf {x}}\Vert }_0 \le k + 1 \} \ge \frac{1}{k + 1} - \frac{1}{n}\) for every \(k \in {\mathbb {N}}\), so that in particular under (52) with \(A = \{{\mathsf {e}}_i : i \in [{1}\! : \! {n}]\}\), the rate of any FW variant must be \(\varOmega (\frac{1}{k})\).
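Indeed, for feasible \({\mathsf {x}}\in \varDelta _{n - 1}\) with \({\Vert {\mathsf {x}}\Vert }_0 \le k + 1\), using \({\mathsf {e}}^\intercal {\mathsf {x}}= 1\) and the Cauchy–Schwarz inequality over the support of \({\mathsf {x}}\),
$$\begin{aligned} f({\mathsf {x}}) = \sum _{i=1}^n \left( x_i - \frac{1}{n}\right) ^2 = \sum _{i=1}^n x_i^2 - \frac{1}{n} \ge \frac{\left( \sum _{i=1}^n x_i\right) ^2}{{\Vert {\mathsf {x}}\Vert }_0} - \frac{1}{n} \ge \frac{1}{k + 1} - \frac{1}{n} \, . \end{aligned}$$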

5.5 Affine invariance

The FW method and the AFW, PFW, EFW are affine invariant (Jaggi 2013). More precisely, let \({\mathsf {P}}\) be a linear transformation, \({\hat{f}}\) be such that \({\hat{f}}({\mathsf {P}}{\mathsf {x}}) = f({\mathsf {x}})\) and \({\hat{C}} = {\mathsf {P}}(C)\). Then for every sequence \(\{{\mathsf {x}}_k\}\) generated by the methods applied to \((f, C)\), the sequence \(\{{\mathsf {y}}_k\} := \{{\mathsf {P}}{\mathsf {x}}_k\}\) can be generated by the FW method with the same stepsizes applied to \(({\hat{f}}, {\hat{C}})\). As a corollary, considering the special case where \({\mathsf {P}}\) is the matrix collecting the elements of A as columns, one can prove results on \(C= \varDelta _{|A| - 1}\) and generalize them to \({\hat{C}}:= {{\,\mathrm{conv}\,}}(A)\) by affine invariance.
An affine invariant convergence rate bound for convex objectives can be given using the curvature constant
$$\begin{aligned} \kappa _{f, C} := \sup \left\{ 2{\textstyle { \frac{f( \alpha {\mathsf {y}}+(1-\alpha ){\mathsf {x}}) - f({\mathsf {x}}) - \alpha \nabla f({\mathsf {x}})^\intercal ({\mathsf {y}}-{\mathsf {x}})}{\alpha ^2}}}: \{{\mathsf {x}},{\mathsf {y}}\}\subset C, \, \alpha \in (0, 1] \right\} \, . \end{aligned}$$
(53)
It is easy to prove that \(\kappa _{f, C} \le LD^2\) if D is the diameter of \(C\). In the special case where \(C= \varDelta _{n - 1}\) and \(f({\mathsf {x}}) = {\mathsf {x}}^\intercal {\tilde{{\mathsf {A}}}}^\intercal {\tilde{{\mathsf {A}}}} {\mathsf {x}}+ {\mathsf {b}} ^{\intercal }{\mathsf {x}}\), then \(\kappa _{f, C} \le {{\,\mathrm{diam}\,}}({\mathsf {A}}\varDelta _{n - 1})^2\) for \({\mathsf {A}} ^{\intercal }= [{\tilde{{\mathsf {A}}}} ^{\intercal }, {\mathsf {b}}]\); see Clarkson (2010).
When the method uses the stepsize sequence (25), it is possible to give the following affine invariant convergence rate bounds (see Freund and Grigas 2016):
$$\begin{aligned} \begin{aligned} f({\mathsf {x}}_k) - f^*&\le \frac{2\kappa _{f, C}}{k + 4} \, , \\ \min _{i \in [1:k]} G({\mathsf {x}}_i)&\le \frac{9\kappa _{f, C}}{2k} \, , \end{aligned} \end{aligned}$$
(54)
thus in particular slightly improving the rate we gave in Theorem 1 since we have that \(\kappa _{f, C} \le LD^2\).

5.6 Support identification for the AFW

It is a classic result that the AFW under some strict complementarity conditions and for strongly convex objectives identifies in finite time the face containing the solution (Guélat and Marcotte 1986). Here we report some explicit bounds for this property proved in Bomze et al. (2020). We first assume that \(C= \varDelta _{n - 1}\), and introduce the multiplier functions
$$\begin{aligned} \lambda _i({\mathsf {x}}) = \nabla f({\mathsf {x}})^\intercal ({\mathsf {e}}_i - {\mathsf {x}}) \end{aligned}$$
(55)
for \(i \in [{1}\! : \! {n}]\). Let \({\mathsf {x}}^*\) be a stationary point for f, with the objective f not necessarily convex. It is easy to check that \(\{\lambda _i({\mathsf {x}}^*)\}_{i \in [1:n]}\) coincide with the Lagrangian multipliers. Furthermore, by complementarity conditions we have \(x^*_i \lambda _i({\mathsf {x}}^*) = 0\) for every \(i \in [{1}\! : \! {n}]\). It follows that the set
$$\begin{aligned} I({\mathsf {x}}^*) := \{i \in [{1}\! : \! {n}] : \lambda _{i}({\mathsf {x}}^*) = 0\} \end{aligned}$$
contains the support of \({\mathsf {x}}^*\),
$$\begin{aligned} {{\,\mathrm{supp}\,}}({\mathsf {x}}^*) :=\{ i\in [{1}\! : \! {n}]: x_i^*>0\}\, . \end{aligned}$$
The next lemma uses the multipliers \(\lambda _i\) and the Lipschitz constant L of \(\nabla f\) to give a lower bound \(r_*\) for the so-called active set radius, defining a neighbourhood of \({\mathsf {x}}^*\): when the algorithm enters this neighbourhood, the active set (the minimal face of \(C\) containing \({\mathsf {x}}^*\)) is identified within a limited number of iterations.
Lemma 3
Let \({\mathsf {x}}^*\) be a stationary point for f on the boundary of \(\varDelta _{n - 1}\), \(\delta _{\min } = \min _{i: \lambda _{i}({\mathsf {x}}^*) > 0} \lambda _i({\mathsf {x}}^*)\) and
$$\begin{aligned} r_* = \frac{\delta _{\min }}{\delta _{\min } + 2L} \, . \end{aligned}$$
(56)
Assume that, for every k such that \({\mathsf {d}}_k = {\mathsf {d}}_k^{{\mathcal {A}}}\), the stepsize \(\alpha _k\) is not smaller than the stepsize given by (28), i.e. \(\alpha _k \ge \alpha _k(L)\).
If \({\Vert {\mathsf {x}}_k - {\mathsf {x}}^*\Vert }_1 < r_*\), then for some
$$\begin{aligned} j \le \min \{ n - |I({\mathsf {x}}^*)|, |{{\,\mathrm{supp}\,}}({\mathsf {x}}_k)| - 1\} \end{aligned}$$
we have \({{\,\mathrm{supp}\,}}({\mathsf {x}}_{k + j}) \subseteq I({\mathsf {x}}^*)\) and \(\Vert {\mathsf {x}}_{k + j} - {\mathsf {x}}^* \Vert _1 < r_*\).
Proof
Follows from (Bomze et al. 2020, Theorem 3.3), since under the assumptions the AFW sets one variable in \({{\,\mathrm{supp}\,}}(x_k)\setminus I({\mathsf {x}}^*)\) to zero at every step without increasing the 1-norm distance from \({\mathsf {x}}^*\). \(\square \)
The above lemma does not require convexity and was applied in Bomze et al. (2020) to derive active set identification bounds in several convex and non-convex settings. Here we focus on the case where the domain \(C= {{\,\mathrm{conv}\,}}(A)\) with \(|A| < + \infty \) is a generic polytope, and where f is \(\mu \)-strongly convex for some \(\mu >0\), i.e.
$$\begin{aligned} f({\mathsf {y}}) \ge f({\mathsf {x}}) + \nabla f({\mathsf {x}})^\intercal ({\mathsf {y}}- {\mathsf {x}}) + \frac{\mu }{2} \Vert {\mathsf {x}}- {\mathsf {y}} \Vert ^2\quad \text{ for } \text{ all } \{{\mathsf {x}}, {\mathsf {y}}\} \subset C\, . \end{aligned}$$
(57)
Let \(E_{C}({\mathsf {x}}^*)\) be the face of \(C\) exposed by \(\nabla f(x^*)\):
$$\begin{aligned} E_{C}({\mathsf {x}}^*) := {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{\mathsf {x}}\in C}} \nabla f({\mathsf {x}}^*)^\intercal {\mathsf {x}} \, . \end{aligned}$$
(58)
Let \(\theta _{A}\) denote the Hoffman constant (see Beck and Shtern 2017) associated with \([{\bar{{\mathsf {A}}}} ^{\intercal }, {\mathsf {I}}_n, {\mathsf {e}}, -{\mathsf {e}}] ^{\intercal }\), with \({\bar{{\mathsf {A}}}}\) the matrix having as columns the elements in A. Finally, consider the function \( {f}_A({\mathsf {y}}) := f({\bar{{\mathsf {A}}}}{\mathsf {y}})\) on \( \varDelta _{|A| - 1}\), let \(L_{A}\) be the Lipschitz constant of \({\nabla } {f_A}\), and define
$$\begin{aligned} { \delta _{\min } := \min _{{\mathsf {a}}\in A \setminus E_{C}({\mathsf {x}}^*)}\nabla f({\mathsf {x}}^*)^\intercal ({\mathsf {a}}- {\mathsf {x}}^*)\quad \text{ and }\quad r_*({\mathsf {x}}^*) := \frac{\delta _{\min }}{\delta _{\min } + 2L_A}\, .} \end{aligned}$$
Using the linear convergence of the AFW for strongly convex objectives (see Sect. 6.1), we have the following result:
Theorem 2
The sequence \(\{{\mathsf {x}}_k\}\) generated by the AFW with \({\mathsf {x}}_0 \in A\) enters \(E_{C}({\mathsf {x}}^*)\) for
$$\begin{aligned} k \ge \max \left\{ 2\frac{\ln (h_0) - \ln (\mu _A r_*({\mathsf {x}}^*)^2/2)}{\ln (1/q)}, 0\right\} \, , \end{aligned}$$
(59)
where \(\mu _A = \frac{\mu }{n\theta _{A}^2}\) and \(q\in (0,1)\) is the constant related to the linear convergence rate of the AFW, i.e. \(h_k\le q^k h_0\) for all k.
Proof of Theorem 2
(sketch) We present an argument in the case \(C= \varDelta _{n - 1}\), \(A = \{e_i\}_{i \in [1:n]}\) which can be easily extended by affine invariance to the general case (see Bomze et al. 2020 for details). In this case \(\theta _{A} \ge 1\) and we can define \({{\bar{\mu }} := {\mu }/n }\ge \mu _{A} \).
To start with, the number of steps needed to reach the condition
$$\begin{aligned} h_k \le \frac{\mu }{2n}r_*({\mathsf {x}}^*)^2 = \frac{{\bar{\mu }}}{2} r_*({\mathsf {x}}^*)^2 \end{aligned}$$
(60)
is at most
$$\begin{aligned} {\bar{k}} = \max \left\{ \left\lceil \frac{\ln (h_0) - \ln ({\bar{\mu }} r_*(x^*)^2/2)}{\ln (1/q)}\right\rceil , 0\right\} \ . \end{aligned}$$
Now we combine \(\sqrt{n}\,\Vert \cdot \Vert \ge {\Vert \cdot \Vert }_1\) with strong convexity and relation (60) to obtain \({\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert }_1 \le r_*({\mathsf {x}}^*)\) for every \(k \ge {\bar{k}}\). Since \({\mathsf {x}}_0\) is a vertex of the simplex, and at every step at most one coordinate is added to the support of the current iterate, \(|{{\,\mathrm{supp}\,}}({\mathsf {x}}_{{\bar{k}}})| \le {\bar{k}} + 1\). The claim follows by applying Lemma 3. \(\square \)
Additional bounds under a quadratic growth condition weaker than strong convexity and strict complementarity are reported in Garber (2020).
Convergence and finite time identification for the PFW and the AFW are proved in Bomze et al. (2019) for a specific class of non-convex minimization problems over the standard simplex, under the additional assumption that the sequence generated has a finite set of limit points. In another line of work, active set identification strategies combined with FW variants have been proposed in Cristofari et al. (2020) and Sun (2020).

5.7 Inexact linear oracle

In many real-world applications, linear subproblems can only be solved approximately. This is the reason why the convergence of FW variants is often analyzed under some error term for the linear minimization oracle (see, e.g., Braun et al. 2019, 2017; Freund and Grigas 2016; Jaggi 2013; Konnov 2018). A common assumption, relaxing the FW vertex exact minimization property, is to have access to a point (usually a vertex) \({\tilde{{\mathsf {s}}}}_k\) such that
$$\begin{aligned} \nabla f({\mathsf {x}}_k)^\intercal ({\tilde{{\mathsf {s}}}}_k-{\mathsf {x}}_k) \le \min _{{\mathsf {s}}\in C} \nabla f({\mathsf {x}}_k)^\intercal ({\mathsf {s}}-{\mathsf {x}}_k) + \delta _k \, , \end{aligned}$$
(61)
for a sequence \(\{\delta _k\}\) of nonnegative approximation errors.
If the sequence \(\{\delta _k\}\) is constant and equal to some \(\delta > 0\), then clearly the FW method cannot, in general, reduce the optimality gap below \(\delta \). At the same time, (Freund and Grigas 2016, Theorem 5.1) implies a rate of \({{\mathcal {O}}}(\frac{1}{k} + \delta )\) if the stepsize \(\alpha _k= \frac{2}{k + 2}\) is used.
The \({{\mathcal {O}}}(1/k)\) rate can instead be retrieved by assuming that \(\{\delta _k\}\) converges to 0 quickly enough, in particular if
$$\begin{aligned} \delta _k = \frac{\delta \kappa _{f, C}}{k + 2} \end{aligned}$$
(62)
for a constant \(\delta > 0 \). Under (62), in Jaggi (2013) a convergence rate of
$$\begin{aligned} f({\mathsf {x}}_k) - f^* \le \frac{2\kappa _{f, C}}{k + 2}(1 + \delta ) \end{aligned}$$
(63)
was proved for the FW method with \(\alpha _k\) given by exact line search or equal to \(\frac{2}{k + 2}\), as well as for the EFW.
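The following sketch (our own illustration; the quadratic objective, the constant absorbing \(\delta \kappa _{f,C}\) in (62) and the iteration budget are arbitrary choices) runs the basic FW method over the simplex with an inexact LMO that returns any \(\delta _k\)-optimal vertex in the sense of (61), using the stepsize \(\frac{2}{k+2}\). The printed FW gaps give an idea of the progress made despite the oracle error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
M = rng.standard_normal((n, n))
Q = M.T @ M / n                       # convex quadratic objective (hypothetical data)
c = rng.standard_normal(n)
grad = lambda x: Q @ x + c

def inexact_lmo(g, delta_k):
    # return any vertex of the simplex that is delta_k-optimal, i.e. satisfies (61)
    candidates = np.where(g <= g.min() + delta_k)[0]
    return rng.choice(candidates)

x = np.zeros(n); x[0] = 1.0
delta = 1.0                           # constant absorbing delta * kappa in the schedule (62)
for k in range(2000):
    g = grad(x)
    if k % 500 == 0:
        print("k =", k, " FW gap =", g @ x - g.min())   # gap computed with the exact LMO
    i = inexact_lmo(g, delta / (k + 2))
    s = np.zeros(n); s[i] = 1.0
    x += (2.0 / (k + 2)) * (s - x)
```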
A variant making use of an approximated linear oracle recycling previous solutions to the linear minimization subproblem is studied in Braun et al. (2019). In Freund and Grigas (2016), Hogan (1971), the analysis of the classic FW method is extended to the case of inexact gradient information. In particular in Freund and Grigas (2016), assuming the availability of the \((\delta , L)\) oracle introduced in Devolder et al. (2014), a convergence rate of \({{\mathcal {O}}}(1/k + \delta k)\) is proved.

6 Improved rates for strongly convex objectives

Table 2
Known convergence rates for the FW method and the variants covered in this review

Method   | Objective | Domain          | Assumptions                                              | Rate
FW       | NC        | Generic         | –                                                        | \({{\mathcal {O}}}(1/\sqrt{k})\)
FW       | C         | Generic         | –                                                        | \({{\mathcal {O}}}(1/k)\)
FW       | SC        | Generic         | \({\mathsf {x}}^* \in {{\,\mathrm{ri}\,}}(C)\)           | Linear
Variants | SC        | Polytope        | –                                                        | Linear
FW       | SC        | Strongly convex | –                                                        | \({{\mathcal {O}}}(1/k^2)\)
FW       | SC        | Strongly convex | \(\min \Vert \nabla f({\mathsf {x}}) \Vert > 0 \)        | Linear

NC, C and SC stand for non-convex, convex and strongly convex respectively

6.1 Linear convergence under an angle condition

In the rest of this section we assume that f is \(\mu \)-strongly convex (57). We also assume that the stepsize is given by exact linesearch or by (28).
Under the strong convexity assumption, an asymptotic linear convergence rate for the FDFW on polytopes was given in the early work Guélat and Marcotte (1986). Furthermore, in Garber and Hazan (2016) a linearly convergent variant was proposed, which however makes use of an additional local linear minimization oracle. See also Table 2 for a list of improvements on the \({\mathcal {O}}(1/k)\) rate under strong convexity.
Recent works obtain linear convergence rates by proving the angle condition
$$\begin{aligned} { - \nabla f({\mathsf {x}}_k)^\intercal {\widehat{{\mathsf {d}}}}_k \ge \frac{\tau }{{\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert }} \, \nabla f({\mathsf {x}}_k)^\intercal ( {\mathsf {x}}_k - {\mathsf {x}}^*)} \end{aligned}$$
(64)
for some \(\tau >0\) and some \({\mathsf {x}}^* \in {\mathop {{{\,\mathrm{arg\,min}\,}}}\nolimits _{{\mathsf {x}}\in C}} f({\mathsf {x}})\). As we shall see in the next lemma, under (64) it is not difficult to prove linear convergence rates in the number of good steps. These are FW steps with \(\alpha _k = 1\) and steps in any descent direction with \(\alpha _k < 1\).
Lemma 4
If the step k is a good step and (64) holds, then
$$\begin{aligned} { h_{k+1} \le \max \left\{ {\textstyle { \frac{1}{2}}, 1 - \frac{\tau ^2\mu }{L}} \right\} h_k \, .} \end{aligned}$$
(65)
Proof
If the step k is a full FW step then Lemma 2 entails \(h_{k+1}\le \frac{1}{2}\, h_k\). In the remaining case, first observe that by strong convexity
$$\begin{aligned} \begin{array}{rcl} f^* &{}= &{}f({\mathsf {x}}^*) \ge f({\mathsf {x}}_k) + \nabla f({\mathsf {x}}_k)^\intercal ({\mathsf {x}}^* - {\mathsf {x}}_k) + \frac{\mu }{2}\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert ^2 \\ &{}\ge &{}\min \limits _{\alpha \in {\mathbb {R}}}\left[ f({\mathsf {x}}_k) + \alpha \nabla f({\mathsf {x}}_k)^\intercal ({\mathsf {x}}^* - {\mathsf {x}}_k) + \frac{\alpha ^2\mu }{2}\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert ^2 \right] \\ &{}= &{}f({\mathsf {x}}_k) - \frac{1}{2\mu {\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert }^2} \left[ \nabla f({\mathsf {x}}_k)^\intercal ( {\mathsf {x}}_k - {\mathsf {x}}^*)\right] ^2 \, , \end{array} \end{aligned}$$
(66)
which means
$$\begin{aligned} h_k \le \frac{1}{2\mu {\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert }^2}\left[ \nabla f({\mathsf {x}}_k)^\intercal ({\mathsf {x}}_k - {\mathsf {x}}^*)\right] ^2 \, .\end{aligned}$$
(67)
We can then proceed using the bound (30) from Lemma 1 in the following way:
$$\begin{aligned} \begin{array}{rcl} h_{k+1} &{}= &{}f({\mathsf {x}}_{k + 1}) - f^* \le f({\mathsf {x}}_k) - f^* - \frac{1}{2L}\left[ \nabla f({\mathsf {x}}_k)^\intercal {\widehat{{\mathsf {d}}}}_k\right] ^2 \\ &{}\le &{} h_k - \frac{\tau ^2}{2L{\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert }^2} \left[ \nabla f({\mathsf {x}}_k)^\intercal ( {\mathsf {x}}_k - {\mathsf {x}}^* ) \right] ^2 \\ &{}\le &{}h_k \left( 1 - \frac{ \tau ^2\mu }{L}\right) \, , \end{array} \end{aligned}$$
(68)
where we used (64) in the second inequality and (67) in the third one. \(\square \)
As a corollary, under (64) we have the rate
$$\begin{aligned} f({\mathsf {x}}_{k}) - f^* {= h_k} \le \max \left\{ {\textstyle { \frac{1}{2}}, 1 - \frac{\tau ^2\mu }{L}}\right\} ^{\gamma (k)} {h_0} \end{aligned}$$
(69)
for any method with non-increasing \(\{f({\mathsf {x}}_k)\}\) following Algorithm 1, with \(\gamma (k) \le k\) an integer denoting the number of good steps up to step k. It turns out that for all the variants introduced in this review we have \(\gamma (k) \ge Tk\) for some constant \(T > 0\). When \({\mathsf {x}}^*\) is in the relative interior of \(C\), the FW method satisfies (64) and we have the following result (see Guélat and Marcotte 1986; Lacoste-Julien and Jaggi 2015):
Theorem 3
If \({\mathsf {x}}^* \in {{\,\mathrm{ri}\,}}(C)\), then
$$\begin{aligned} f({\mathsf {x}}_{k}) - f^* \le \left[ 1 - \frac{\mu }{L} \left( \frac{{{\,\mathrm{dist}\,}}({\mathsf {x}}^*, \partial C)}{D}\right) ^2\right] ^k (f({\mathsf {x}}_0) - f^*) \, . \end{aligned}$$
(70)
Proof
We can assume for simplicity \({{\,\mathrm{int}\,}}(C) \ne \emptyset \), since otherwise we can restrict ourselves to the affine hull of \(C\). Let \(\delta ={{\,\mathrm{dist}\,}}({\mathsf {x}}^*, \partial C)\) and \({\mathsf {g}}= -\nabla f({\mathsf {x}}_k)\). First, by assumption we have \({\mathsf {x}}^* + \delta {\widehat{{\mathsf {g}}}} \in C\). Therefore
$$\begin{aligned} {\mathsf {g}}^\intercal {\mathsf {d}}_k^{FW} \ge {\mathsf {g}}^\intercal (({\mathsf {x}}^*+\delta {\widehat{{\mathsf {g}}}} ) - {\mathsf {x}}_k) = \delta {\mathsf {g}}^\intercal {\widehat{{\mathsf {g}}}} + {\mathsf {g}}^\intercal ({\mathsf {x}}^* - {\mathsf {x}}_k) \ge \delta \Vert {\mathsf {g}} \Vert + f({\mathsf {x}}_k) - f^* \ge \delta \Vert {\mathsf {g}} \Vert \, . \end{aligned}$$
(71)
where we used \({\mathsf {x}}^*+\delta {\widehat{{\mathsf {g}}}} \in C\) in the first inequality and convexity in the second. We can conclude
$$\begin{aligned} {\mathsf {g}}^\intercal \frac{{\mathsf {d}}_k^{FW}}{\Vert {\mathsf {d}}_k^{FW} \Vert } \ge {\mathsf {g}}^\intercal \frac{{\mathsf {d}}_k^{FW}}{D} \ge \frac{\delta }{D} \Vert {\mathsf {g}} \Vert \ge \frac{\delta }{D} {\mathsf {g}}^\intercal \left( \frac{{\mathsf {x}}_k - {\mathsf {x}}^*}{\Vert {\mathsf {x}}_k - {\mathsf {x}}^* \Vert }\right) \, . \end{aligned}$$
(72)
The thesis follows by Lemma 4, noticing that for \(\tau = \frac{{{\,\mathrm{dist}\,}}({\mathsf {x}}^*, \partial C)}{D} \le \frac{1}{2}\) we have \(1 - \tau ^2\frac{\mu }{L} > \frac{1}{2}\). \(\square \)
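As a numerical sanity check of (64) in the setting of Theorem 3, the following sketch (our own toy example) uses a Euclidean ball as \(C\), where the FW vertex has the closed form \(-r\nabla f({\mathsf {x}})/\Vert \nabla f({\mathsf {x}}) \Vert \), and a quadratic with interior minimizer; it verifies on random samples that the FW direction satisfies the angle condition with \(\tau = {{\,\mathrm{dist}\,}}({\mathsf {x}}^*, \partial C)/D\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 10, 1.0
x_star = rng.standard_normal(n)
x_star *= 0.4 * r / np.linalg.norm(x_star)      # interior minimizer, ||x*|| = 0.4 r
tau = (r - np.linalg.norm(x_star)) / (2 * r)    # dist(x*, bd C) / D

worst = np.inf
for _ in range(10000):
    x = rng.standard_normal(n)
    x *= r * rng.uniform() ** (1 / n) / np.linalg.norm(x)   # uniform sample in the ball
    g = x - x_star                               # gradient of f(x) = 0.5*||x - x*||^2
    if np.linalg.norm(g) < 1e-9:
        continue
    s = -r * g / np.linalg.norm(g)               # FW vertex on the ball
    d = s - x
    if np.linalg.norm(d) < 1e-9:
        continue
    lhs = -g @ d / np.linalg.norm(d)             # left-hand side of (64)
    rhs = tau * np.linalg.norm(x - x_star)       # right-hand side of (64) for this f
    worst = min(worst, lhs - rhs)
print("min over samples of lhs - rhs:", worst, "(nonnegative => (64) holds)")
```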
In Lacoste-Julien and Jaggi (2015), the authors proved that the directions generated by the AFW and the PFW on polytopes satisfy condition (64) with \(\tau = {{\,\mathrm{PWidth}\,}}(A)/D\), where \({{\,\mathrm{PWidth}\,}}(A)\) is the pyramidal width of A. While \({{\,\mathrm{PWidth}\,}}(A)\) was originally defined through a rather complex minimax expression, in Peña and Rodriguez (2018) it was later proved that
$$\begin{aligned} {{\,\mathrm{PWidth}\,}}(A) = \min _{F \in {{\,\mathrm{faces}\,}}(C)} {{\,\mathrm{dist}\,}}(F, {{\,\mathrm{conv}\,}}(A \setminus F)) \, . \end{aligned}$$
(73)
This quantity can be explicitly computed in a few special cases. For \(A = \{0, 1\}^n\) we have \({{\,\mathrm{PWidth}\,}}(A) = 1/\sqrt{n}\), while for \(A = \{e_i\}_{i \in [1:n]}\) (so that \(C\) is the \(n - 1\) dimensional simplex)
$$\begin{aligned} {{\,\mathrm{PWidth}\,}}(A) = {\left\{ \begin{array}{ll} \frac{2}{\sqrt{n}} &{}\text { if }n \text { is even} \\ \frac{2}{\sqrt{n - 1/n}} &{}\text { if }n \text { is odd.} \end{array}\right. } \end{aligned}$$
(74)
Angle conditions like (64) with \(\tau \) dependent on the number of vertices used to represent \(x_k\) as a convex combination were given in Bashiri and Zhang (2017) and Beck and Shtern (2017) for the FDFW and the PFW respectively. In particular, in Beck and Shtern (2017) a geometric constant \(\varOmega _{C}\) called vertex-facet distance was defined as
$$\begin{aligned} \varOmega _{C} = \min \{{{\,\mathrm{dist}\,}}({\mathsf {v}}, H) : {\mathsf {v}}\in V(C) \, , H \in {{\mathcal {H}}}(C), \, {\mathsf {v}}\notin H \} \, , \end{aligned}$$
(75)
with \(V(C)\) the set of vertices of \(C\), and \({{\mathcal {H}}}(C)\) the set of supporting hyperplanes of \(C\) (containing a facet of \(C\)). Then condition (64) is satisfied for \(\tau = \varOmega _{C}/s\), with \({\mathsf {d}}_k\) the PFW direction and s the number of points used in the active set \(A_k\).
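The vertex-facet distance (75) is straightforward to evaluate once a list of vertices and a facet description of \(C\) are available. A minimal sketch (our own, on a hypothetical small polytope):

```python
import numpy as np

def vertex_facet_distance(V, Q, b, tol=1e-9):
    """Vertex-facet distance (75): rows of V are vertices; rows q_i of Q with b_i
    describe the facet-defining inequalities q_i^T x <= b_i."""
    omega = np.inf
    for v in V:
        for q, bi in zip(Q, b):
            slack = bi - q @ v
            if slack > tol:                      # vertex v does not lie on this hyperplane
                omega = min(omega, slack / np.linalg.norm(q))
    return omega

# toy example: triangle with vertices (0,0), (2,0), (0,1)
V = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
Q = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 2.0]])   # -x <= 0, -y <= 0, x + 2y <= 2
b = np.array([0.0, 0.0, 2.0])
print(vertex_facet_distance(V, Q, b))                  # -> 2/sqrt(5) ~ 0.894
```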
In Bashiri and Zhang (2017), a geometric constant \(H_s\) was defined depending on the minimum number s of vertices needed to represent the current point \({\mathsf {x}}_k\), as well as on the proper\(^{2}\) inequalities \({\mathsf {q}}_i ^{\intercal }{\mathsf {x}}\le b_i\), \(i \in [{1}\! : \! {m}]\), appearing in a description of \(C\). For each of these inequalities the second gap \(g_i\) was defined as
$$\begin{aligned} g_i = \max _{{\mathsf {v}}\in V(C)} {\mathsf {q}}_i^\intercal {\mathsf {v}} - {{\,\mathrm{secondmax}\,}}_{{\mathsf {v}}\in V(C)} {\mathsf {q}}_i^\intercal {\mathsf {v}} \, ,\quad i\in [{1}\! : \! {m}]\, , \end{aligned}$$
(76)
with the secondmax function giving the second largest value achieved by the argument. Then \(H_s\) is defined as
$$\begin{aligned} H_s := {\max {\textstyle {\left\{ \sum \limits _{j = 1}^n \left( \sum \limits _{i \in S} \frac{q_{ij}}{g_i} \right) ^2 : S \in { {[1:m] }\atopwithdelims ()s} \right\} }}}\, . \end{aligned}$$
(77)
The arguments used in the paper imply that (64) holds with \(\tau = \frac{1}{{2}D\sqrt{H_s}}\) if \({\mathsf {d}}_k\) is an FDFW direction and \({\mathsf {x}}_k\) a convex combination of at most s vertices. We refer the reader to Peña and Rodriguez (2018) and Rademacher and Shu (2020) for additional results on these and related constants.
The linear convergence results for strongly convex objectives are extended to compositions of strongly convex objectives with affine transformations in Beck and Shtern (2017), Lacoste-Julien and Jaggi (2015), Peña and Rodriguez (2018). In Gutman and Pena (2021), the linear convergence results for the AFW and the FW method with minimum in the interior are extended with respect to a generalized condition number \(L_{f, C, D}/\mu _{f, C, D}\), with D a distance function on \(C\).
For the AFW, the PFW and the FDFW, linear rates with no bad steps (\(\gamma (k) = k\)) are given in Rinaldi and Zeffiro (2020) for non-convex objectives satisfying a Kurdyka-Łojasiewicz inequality. In Rinaldi and Zeffiro (2020), condition (64) was also proved for the FW direction and orthographic retractions on some convex sets with smooth boundary. The work Combettes and Pokutta (2020) introduces a new FW variant using a subroutine to align the descent direction with the projection of the negative gradient onto the tangent cone, thus implicitly maximizing \(\tau \) in (64).

6.2 Strongly convex domains

When \(C\) is strongly convex we have a \({{\mathcal {O}}}(1/k^2)\) rate (see, e.g., Garber and Hazan 2015; Kerdreux et al. 2021) for the classic FW method. Furthermore, when \(C\) is \(\beta _{C}\)-strongly convex and \(\Vert \nabla f({\mathsf {x}}) \Vert \ge c > 0\), then we have the linear convergence rate (see Demyanov and Rubinov 1970; Dunn 1979; Kerdreux et al. 2020; Levitin and Polyak 1966)
$$\begin{aligned} {h_{k+1}} \le \max \left\{ {\textstyle {\frac{1}{2}, 1 - \frac{c\beta _{C}}{8L} }}\right\} {h_k} \, . \end{aligned}$$
(78)
Finally, it is possible to interpolate between the \({{\mathcal {O}}}(1/k^2)\) rate of the strongly convex setting and the \({{\mathcal {O}}}(1/k)\) rate of the general convex one by relaxing strong convexity of the objective with Hölderian error bounds (Xu and Yang 2018) and also by relaxing strong convexity of the domain with uniform convexity (Kerdreux et al. 2021).
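A minimal illustration of the linear rate on a strongly convex domain (our own toy example with an assumed quadratic objective; the printed decay is empirical and not the theoretical constant in (78)): we minimize a quadratic whose unconstrained minimizer lies outside the Euclidean ball, so that the gradient norm stays bounded away from zero on the feasible set.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 20, 1.0
H = np.diag(np.linspace(1.0, 2.0, n))                       # Hessian, so L = 2
b = rng.standard_normal(n); b *= 2.0 / np.linalg.norm(b)    # ||b|| = 2 > r, hence
# ||grad f(x)|| >= lambda_min(H) * (||b|| - r) = 1 > 0 on the ball
f = lambda x: 0.5 * (x - b) @ H @ (x - b)
grad = lambda x: H @ (x - b)

x = np.zeros(n)
fvals = []
for k in range(500):
    g = grad(x)
    s = -r * g / np.linalg.norm(g)                          # LMO over the l2 ball of radius r
    d = s - x
    if np.linalg.norm(d) < 1e-12:
        break
    alpha = min(1.0, max(0.0, -(g @ d) / (d @ H @ d)))      # exact line search
    x = x + alpha * d
    fvals.append(f(x))

h = np.array(fvals) - fvals[-1]          # last value used as a proxy for f^*
print([f"{h[k]:.2e}" for k in (0, 10, 20, 30, 40)])         # roughly geometric decay
```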

7 Extensions

7.1 Block coordinate Frank–Wolfe method

The block coordinate FW (BCFW) was introduced in Lacoste-Julien et al. (2013) for block product domains of the form \(C= C^{(1)} \times ... \times C^{(m)} \subseteq {\mathbb {R}}^{n_1 + ... + n_m} \), and applied to structured SVM training. The algorithm operates by selecting a random block and performing a FW step in that block. Formally, for \({\mathsf {s}}\in {\mathbb {R}}^{n_i}\) let \({\mathsf {s}}^{(i)} \in {\mathbb {R}}^n\) be the vector with all blocks equal to \({\mathsf {o}}\) except for the i-th block, which equals \({\mathsf {s}}\). We can write the direction of the BCFW as
$$\begin{aligned} \begin{aligned} {\mathsf {d}}_k&= {\mathsf {s}}_k^{(i)} - {\mathsf {x}}_k \\ {\mathsf {s}}_k&\in {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{\mathsf {s}}\in C^{(i)}}} \nabla f({\mathsf {x}}_k)^\intercal {\mathsf {s}}^{(i)} \end{aligned} \end{aligned}$$
(79)
for a random index \(i \in [{1}\! : \! {m}]\).
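A minimal BCFW sketch (our own illustration, with random data) on a product of \(m\) unit simplices and a convex quadratic; for simplicity the stepsize is chosen by exact line search on the selected block rather than by the schedule used in the cited analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
m, d = 5, 4                                   # m simplex blocks of dimension d
n = m * d
M = rng.standard_normal((n, n))
Q = M.T @ M / n + 0.1 * np.eye(n)             # convex quadratic (hypothetical data)
c = rng.standard_normal(n)
grad = lambda x: Q @ x + c

x = np.zeros(n)
x[np.arange(m) * d] = 1.0                     # start at a vertex of each block

for k in range(2000):
    i = rng.integers(m)                       # random block
    blk = slice(i * d, (i + 1) * d)
    g = grad(x)
    s_blk = np.zeros(d); s_blk[np.argmin(g[blk])] = 1.0     # block FW vertex
    dvec = np.zeros(n); dvec[blk] = s_blk - x[blk]          # direction (79)
    denom = dvec @ Q @ dvec
    alpha = 1.0 if denom <= 0 else min(1.0, max(0.0, -(g @ dvec) / denom))
    x = x + alpha * dvec

# FW gap over the product domain decomposes into per-block gaps
g = grad(x)
gap = sum(g[j*d:(j+1)*d] @ x[j*d:(j+1)*d] - g[j*d:(j+1)*d].min() for j in range(m))
print("FW gap after 2000 BCFW steps:", gap)
```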
In Lacoste-Julien et al. (2013), a convergence rate of
$$\begin{aligned} {\mathbb {E}}[f(x_k)] - f^* \le \frac{{2Km}}{k + 2m} \end{aligned}$$
(80)
is proved, for \(K = h_0 + \kappa _f^{\otimes }\), with \(\kappa _f^{\otimes }\) the product domain curvature constant, defined as \(\kappa _f^{\otimes } = \sum _{i=1}^m \kappa _f^{\otimes , i}\), where \(\kappa _f^{\otimes , i}\) is the curvature constant of the objective with all blocks outside \(C^{(i)}\) kept fixed:
$$\begin{aligned} \kappa _f^{\otimes , i} := { \sup \left\{ 2{\textstyle { \frac{f( {\mathsf {x}}+\alpha {\mathsf {d}}^{(i)}) - f({\mathsf {x}}) - \alpha \nabla f({\mathsf {x}})^\intercal {\mathsf {d}}^{(i)}}{\alpha ^2}}}: {\mathsf {d}}\in C-{\mathsf {x}},\, {\mathsf {x}}\in C, \, \alpha \in (0, 1] \right\} \, .} \end{aligned}$$
(81)
An asynchronous and parallel generalization for this method was given in Wang et al. (2016). This version assumes that a cloud oracle is available, modeling a set of worker nodes each sending information to a server at different times. This information consists of an index i and the following LMO on \(C^{(i)}\):
$$\begin{aligned} {\mathsf {s}}_{(i)} \in {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{\mathsf {s}}\in C^{(i)}}} \nabla f({\mathsf {x}}_{{\widetilde{k}}})^\intercal {\mathsf {s}}^{(i)} \, . \end{aligned}$$
(82)
The algorithm is called asynchronous because \({\widetilde{k}}\) can be smaller than k, modeling a delay in the information sent by the node. Once the server has collected a minibatch S of \(\tau \) distinct indices (overwriting repetitions), the descent direction is defined as
$$\begin{aligned} {\mathsf {d}}_k = \sum _{i \in S} {\mathsf {s}}^{(i)}_{{(i)}} \, . \end{aligned}$$
(83)
If the indices sent by the nodes are i.i.d., then under suitable assumptions on the delay, a convergence rate of
$$\begin{aligned} {\mathbb {E}}[f({\mathsf {x}}_k)] - f^* \le \frac{2mK_{\tau }}{\tau ^2k + 2m} \end{aligned}$$
(84)
can be proved, where \(K_{\tau } = m\kappa _{f, \tau }^{\otimes }(1 + \delta ) + h_0\), with \(\delta \) depending on the delay and \(\kappa _{f, \tau }^{\otimes }\) the average curvature constant over a minibatch, keeping all components outside the minibatch fixed.
In Osokin et al. (2016), several improvements are proposed for the BCFW, including an adaptive criterion to prioritize blocks based on their FW gap, and block coordinate versions of the AFW and the PFW variants.
In Shah et al. (2015), a multi-plane BCFW approach is proposed in the specific case of the structured SVM, based on caching supporting planes in the primal, which correspond to block linear minimizers in the dual. In Berrada et al. (2018), the duality between BCFW and stochastic subgradient descent for structured SVMs is exploited to define a learning rate schedule for neural networks based on a single hyperparameter. The block coordinate approach is extended to the generalized FW in Beck et al. (2015), with coordinates, however, picked in cyclic order.

7.2 Variants for the min-norm point problem

Consider the min-norm point (MNP) problem
$$\begin{aligned} \min _{{\mathsf {x}}\in C} {\Vert {\mathsf {x}}\Vert }_{*} \, , \end{aligned}$$
(85)
with \(C\) a closed convex subset of \({\mathbb {R}}^n\) and \({\Vert \cdot \Vert }_{*}\) a norm on \({\mathbb {R}}^n\). In Wolfe (1976), a FW variant is introduced to solve the problem when \(C\) is a polytope and \({\Vert \cdot \Vert }_*\) is the standard Euclidean norm \(\Vert \cdot \Vert \). Similarly to the variants introduced in Sect. 5.3, it generates a sequence of active sets \(\{A_k\}\) with \( {\mathsf {s}}_k\in A_{k + 1}\). At the step k the norm is minimized on the affine hull \({{\,\mathrm{aff}\,}}(A_k)\) of the current active set \(A_k\), that is
$$\begin{aligned} {\mathsf {v}}_k = {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{\mathsf {y}}\in {{\,\mathrm{aff}\,}}(A_k)}} \Vert {\mathsf {y}} \Vert \, . \end{aligned}$$
(86)
The descent direction \({\mathsf {d}}_k\) is then defined as
$$\begin{aligned} {\mathsf {d}}_k = {\mathsf {v}}_k - {\mathsf {x}}_k\, , \end{aligned}$$
(87)
and the stepsize is given by a tailored linesearch that allows the removal of some of the atoms in the set \(A_k\) (see, e.g. Lacoste-Julien and Jaggi 2015; Wolfe 1976). Whenever \({\mathsf {x}}_{k + 1}\) is in the relative interior of \({{\,\mathrm{conv}\,}}(A_k)\), the FW vertex is added to the active set (that is, \({\mathsf {s}}_k \in A_{k + 1}\)). Otherwise, at least one of the vertices not appearing in a convex representation of \({\mathsf {x}}_k\) is removed. This scheme converges linearly when applied to generic smooth strongly convex objectives (see, e.g., Lacoste-Julien and Jaggi 2015).
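The affine minimization step (86) admits a closed form: writing \({\mathsf {y}}\) as a combination of the atoms in \(A_k\) with weights summing to one, it reduces to an equality-constrained least squares problem. A minimal sketch (our own; the active set below is a hypothetical toy example):

```python
import numpy as np

def affine_min_norm_point(A):
    """Minimize ||y|| over y in aff(columns of A): y = A w with sum(w) = 1.
    Solves the KKT system of the equality-constrained least squares problem."""
    k = A.shape[1]
    G = A.T @ A
    KKT = np.block([[G, np.ones((k, 1))], [np.ones((1, k)), np.zeros((1, 1))]])
    rhs = np.zeros(k + 1); rhs[-1] = 1.0
    sol = np.linalg.lstsq(KKT, rhs, rcond=None)[0]
    w = sol[:k]
    return A @ w, w

# three atoms in R^3 whose affine hull is the plane z = 1
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
v, w = affine_min_norm_point(A)
print("affine minimizer:", v)      # (0, 0, 1)
print("weights:", w)               # (1, 1, -1): a negative weight
```

When some weight is negative, as in this example, the affine minimizer lies outside \({{\,\mathrm{conv}\,}}(A_k)\), which is precisely the situation handled by the tailored linesearch and the atom-removal step described above.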
In Harchaoui et al. (2015), a FW variant is proposed for minimum norm problems of the form
$$\begin{aligned} \min \{{\Vert {\mathsf {x}} \Vert }_* : f({\mathsf {x}}) \le 0, \, {\mathsf {x}}\in K \} \end{aligned}$$
(88)
with K a convex cone, f convex with L-Lipschitz gradient. In particular, the optimization domain is \(C= \{{\mathsf {x}}\in {\mathbb {R}}^n : f({\mathsf {x}}) \le 0\} \cap K\). The technique proposed in the article applies the standard FW method to the problems
$$\begin{aligned} \min \{f({\mathsf {x}}) : {\Vert {\mathsf {x}} \Vert }_* \le \delta _k, \, {\mathsf {x}}\in K\} \, , \end{aligned}$$
with \(\{\delta _k\}\) an increasing sequence convergent to the optimal value \({\bar{\delta }}\) of the problem (88). Let \(C(\delta ) = \{{\mathsf {x}}\in {\mathbb {R}}^n : {\Vert {\mathsf {x}} \Vert }_* \le \delta \} \cap K \) for \(\delta \ge 0\), and let
$$\begin{aligned} {\text {LM}}({\mathsf {r}}) \in {\mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{{{\mathsf {x}}\in C(1)}}} {\mathsf {r}}^\intercal {\mathsf {x}} \, , \end{aligned}$$
so that by homogeneity for every k the linear minimization oracle on \(C(\delta _k)\) is given by
$$\begin{aligned} \text {LMO}_{C(\delta _k)}({\mathsf {r}}) = \delta _k {\text {LM}}({\mathsf {r}}) \, . \end{aligned}$$
(89)
For every k, applying the FW method with suitable stopping conditions an approximate minimizer \({\mathsf {x}}_k\) of \(f( {\mathsf {x}})\) over \(C(\delta _k)\) is generated, with an associated lower bound on the objective, an affine function in \({\mathsf {y}}\):
$$\begin{aligned} f_{k}({\mathsf {y}}) := f({\mathsf {x}}_k) + \nabla f({\mathsf {x}}_k)^\intercal ({\mathsf {y}}- {\mathsf {x}}_k) \, . \end{aligned}$$
(90)
Then the function
$$\begin{aligned} \ell _{k}(\delta ) := \min _{{\mathsf {y}}\in C(\delta )} f_k({\mathsf {y}}) = f_{k}\bigl (\delta \, {\text {LM}}({\mathsf {g}}_k)\bigr ) \quad \text {with } {\mathsf {g}}_k= \nabla f({\mathsf {x}}_k) \end{aligned}$$
(91)
is decreasing and affine in \(\delta \) and satisfies
$$\begin{aligned} \ell _{k}(\delta ) = \min _{{\mathsf {y}}\in C(\delta )} f_k({\mathsf {y}}) \le F(\delta ) : =\min _{{\mathsf {y}}\in C(\delta )} f({\mathsf {y}}) \, . \end{aligned}$$
(92)
Therefore, for
$$\begin{aligned} {\bar{\ell }}_k(\delta ) = \max _{i \in [1:k]} \ell _i(\delta ) \le F(\delta ) \end{aligned}$$
the quantity \(\delta _{k + 1}\) can be defined as \(\min \{\delta \ge 0 : {{{\bar{\ell }}}_{k}(\delta )} \le 0 \}\), which guarantees \(F(\delta _{k + 1}) \ge 0\). A complexity bound of \({{\mathcal {O}}}(\frac{1}{\varepsilon } \ln (\frac{1}{\varepsilon }))\) was given to achieve precision \(\varepsilon \) with this method, with \({{\mathcal {O}}}(1/\varepsilon )\) iterations per subproblem and a sequence \(\{\delta _k\}\) of length at most \({{\mathcal {O}}}(\ln (1/\varepsilon ))\) (see (Harchaoui et al. 2015, Theorem 2) for details).
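The update of \(\delta _{k+1}\) only involves the piecewise affine function \({\bar{\ell }}_k\); a minimal sketch of this step (our own, assuming each lower bound is stored as an intercept-slope pair with nonpositive slope):

```python
def next_delta(bounds):
    """bounds: list of (a_i, b_i) with l_i(delta) = a_i + b_i * delta and b_i <= 0.
    Returns min{delta >= 0 : max_i l_i(delta) <= 0}, assuming it exists."""
    roots = [a / -b for a, b in bounds if b < 0 and a > 0]
    return max(roots, default=0.0)

# three hypothetical lower bounds: 3 - 2*delta, 1 - 0.5*delta, -1 - delta
print(next_delta([(3.0, -2.0), (1.0, -0.5), (-1.0, -1.0)]))   # -> 2.0
```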

7.3 Variants for optimization over the trace norm ball

The FW method has found many applications in optimization problems over the trace norm ball. In this case, as explained in Example 3.4, linear minimization can be carried out by computing the top left and right singular vectors of the matrix \(-\nabla f({\mathsf {X}}_k)\), an operation referred to as 1-SVD (see Allen-Zhu et al. 2017).
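A minimal sketch of the 1-SVD linear minimization oracle for the nuclear norm ball (our own illustration; the random matrix below stands in for a gradient \(\nabla f({\mathsf {X}}_k)\)):

```python
import numpy as np
from scipy.sparse.linalg import svds

def lmo_nuclear_ball(G, delta):
    """Linear minimization over {X : ||X||_* <= delta}: returns -delta * u1 v1^T,
    with (u1, v1) the top singular pair of G (equivalently, of -G up to sign)."""
    u, s, vt = svds(G, k=1)               # top singular triplet via a 1-SVD
    return -delta * np.outer(u[:, 0], vt[0])

rng = np.random.default_rng(5)
G = rng.standard_normal((30, 20))         # stand-in for the gradient
S = lmo_nuclear_ball(G, delta=1.0)
print("trace inner product <G, S> =", np.sum(G * S))     # equals -delta * sigma_max(G)
print("check against full SVD:   ", -np.linalg.svd(G, compute_uv=False)[0])
```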
In the work Freund et al. (2017), the FDFW is applied to the matrix completion problem (13), thus generating a sequence of matrices \(\{{\mathsf {X}}_k\}\) with \({\Vert {\mathsf {X}}_k \Vert }_* \le \delta \) for every k. The method can be implemented efficiently by exploiting the fact that for \({\mathsf {X}}\) on the boundary of the nuclear norm ball, there is a simple expression for the face \({\mathcal {F}}({\mathsf {X}})\). For \({\mathsf {X}}\in {\mathbb {R}}^{m \times n}\) with \({{\,\mathrm{rank}\,}}({\mathsf {X}}) = k\) let \({\mathsf {U}}{\mathsf {D}}{\mathsf {V}}^{\intercal }\) be the thin SVD of \({\mathsf {X}}\), so that \({\mathsf {D}}\in {\mathbb {R}}^{k \times k}\) is the diagonal matrix of nonzero singular values of \({\mathsf {X}}\), with corresponding left and right singular vectors in the columns of \({\mathsf {U}}\in {\mathbb {R}}^{m \times k}\) and \({\mathsf {V}}\in {\mathbb {R}}^{n \times k}\) respectively. If \({\Vert {\mathsf {X}} \Vert }_* = \delta \) then the minimal face of the domain containing \({\mathsf {X}}\) is the set
$$\begin{aligned} {\mathcal {F}}({\mathsf {X}}) = \{{\mathsf {Z}}\in {\mathbb {R}}^{m \times n} : {\mathsf {Z}}= {\mathsf {U}}{\mathsf {M}}{\mathsf {V}}^{\intercal } \text { for } {{\mathsf {M}}={\mathsf {M}} ^{\intercal }\ {{\,\mathrm{psd}\,}}\text { with }} {\Vert {\mathsf {M}} \Vert }_{*} = \delta \} \, . \end{aligned}$$
(93)
It is not difficult to see that, in addition, \({{\,\mathrm{rank}\,}}({\mathsf {X}}_k) \le k + 1\) for every \(k \in {\mathbb {N}}\). Furthermore, the thin SVD of the current iterate \({\mathsf {X}}_k\) can be updated efficiently both after FW steps and after in-face steps. The convergence rate of the FDFW in this setting is still \({{\mathcal {O}}}(1/k)\).
In the recent work Wang et al. (2020), an unbounded variant of the FW method is applied to solve a generalized version of the trace norm ball optimization problem:
$$\begin{aligned} \min _{{\mathsf {X}}\in {\mathbb {R}}^{m \times n}}\{f({\mathsf {X}}) : {\Vert {\mathsf {P}}{\mathsf {X}}{\mathsf {Q}} \Vert }_* \le \delta \} \end{aligned}$$
(94)
with \({\mathsf {P}}, {\mathsf {Q}}\) singular matrices. The main idea of the method is to decompose the domain as the sum \(S + T\) of the kernel T of the linear map \(\varphi _{{\mathsf {P}}, {\mathsf {Q}}}({\mathsf {X}})= {\mathsf {P}}{\mathsf {X}}{\mathsf {Q}}\) and a bounded set \(S \subset T^{\perp }\). Gradient descent steps in the unbounded component T are then alternated with FW steps in the bounded component S. The authors apply this approach to the generalized LASSO as well, using the AFW for the bounded component.
In Allen-Zhu et al. (2017), a variant of the classic FW using k-SVD (computing the top k left and right singular vectors for the SVD) is introduced, and it is proved that it converges linearly for strongly convex objectives when the solution has rank at most k. In Mu et al. (2016), the FW step is combined with a proximal gradient step for a quadratic problem on the product of the nuclear norm ball with the \(\ell _1\) ball. Approaches using an equivalent formulation on the spectrahedron introduced in Jaggi and Sulovský (2010) are analyzed in Ding et al. (2020), Garber (2019).

8 Conclusions

While the concept of the FW method is quite easy to understand, its advantages, witnessed by a multitude of related work, may not be apparent to someone not closely familiar with the subject. Therefore we considered, in Sect. 3, several motivating applications, ranging from classic optimization to more recent machine learning problems. As in any line search-based method, the proper choice of stepsize is an important ingredient for achieving satisfactory performance. In Sect. 4, we reviewed several options for stepsizes in first-order methods, which are closely related both to the theoretical analysis and to practical implementation issues guaranteeing fast convergence. These topics were investigated in more detail in Sect. 5, covering the main results about the FW method and its most popular variants, including the \({{\mathcal {O}}}(1/k)\) convergence rate for convex objectives, affine invariance, the sparse approximation property, and support identification. The account is complemented by a report on recent progress in improving on the \({{\mathcal {O}}}(1/k)\) convergence rate in Sect. 6. The versatility and efficiency of this approach were also illustrated in the final Sect. 7, describing recent FW variants that fit different optimization frameworks and computational environments, in particular block coordinate, distributed, parametrized, and trace norm optimization. Certainly, many other interesting and relevant aspects of FW and friends could not find their way into this review because of space and time limitations, but the authors hope to have convinced readers that FW merits consideration even by non-experts in first-order optimization.

Acknowledgements

The authors would like to thank an anonymous referee for their diligence and the Editors-in-Chief for their trust and encouragement, offering us the opportunity to publish this Invited Review.

Declarations

Conflict of interest

The authors confirm there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Footnotes
1
Details related to the LMO cost can be found in, e.g., Jaggi (2013).
 
2
i.e., those inequalities strictly satisfied for some \({\mathsf {x}}\in C\).
 
References
Ahipaşaoğlu SD, Sun P, Todd MJ (2008) Linear convergence of a modified Frank–Wolfe algorithm for computing minimum-volume enclosing ellipsoids. Optim Methods Soft 23(1):5–19
Ahipaşaoğlu SD, Todd MJ (2013) A modified Frank–Wolfe algorithm for computing minimum-area enclosing ellipsoidal cylinders: Theory and algorithms. Comput Geom 46(5):494–519
Allen-Zhu Z, Hazan E, Hu W, Li Y (2017) Linear convergence of a Frank–Wolfe type algorithm over trace-norm balls. Adv Neural Inf Process Syst 2017:6192–6201
Bach F et al (2013) Learning with submodular functions: A convex optimization perspective. Foundations and Trends\(\textregistered \) Mach Learn 6(2–3):145–373
Bashiri MA, Zhang X (2017) Decomposition-invariant conditional gradient for general polytopes with line search. In: Advances in neural information processing systems, pp 2690–2700
Beck A, Pauwels E, Sabach S (2015) The cyclic block conditional gradient method for convex optimization problems. SIAM J Optim 25(4):2024–2049
Beck A, Shtern S (2017) Linearly convergent away-step conditional gradient for non-strongly convex functions. Math Program 164(1–2):1–27
Berrada L, Zisserman A, Kumar MP (2018) Deep Frank–Wolfe for neural network optimization. In: International conference on learning representations
Bertsekas DP (2015) Convex optimization algorithms. Athena Scientific, Nashua
Bomze IM (1997) Evolution towards the maximum clique. J Global Optim 10(2):143–164
Bomze IM, Budinich M, Pardalos PM, Pelillo M (1999) The maximum clique problem. In: Du D-Z, Pardalos P (eds) Handbook of combinatorial optimization. Springer, pp 1–74
Bomze IM, de Klerk E (2002) Solving standard quadratic optimization problems via linear, semidefinite and copositive programming. J Global Optim 24(2):163–185
Bomze IM, Rinaldi F, Rota Bulò S (2019) First-order methods for the impatient: Support identification in finite time with convergent Frank–Wolfe variants. SIAM J Optim 29(3):2211–2226
Bomze IM, Rinaldi F, Zeffiro D (2020) Active set complexity of the away-step Frank–Wolfe algorithm. SIAM J Optim 30(3):2470–2500
Boyd S, Boyd SP, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge
Braun G, Pokutta S, Tu D, Wright S (2019) Blended conditional gradients. In: International conference on machine learning, PMLR, pp 735–743
Braun G, Pokutta S, Zink D (2017) Lazifying conditional gradient algorithms. In: ICML, pp 566–575
Candès EJ, Recht B (2009) Exact matrix completion via convex optimization. Found Comput Math 9(6):717–772
Canon MD, Cullum CD (1968) A tight upper bound on the rate of convergence of Frank–Wolfe algorithm. SIAM J Control 6(4):509–516
Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In: 2017 IEEE symposium on security and privacy (sp), IEEE, pp 39–57
Chakrabarty D, Jain P, Kothari P (2014) Provable submodular minimization using Wolfe's algorithm. Adv Neural Inform Process Syst 27:802–809
Chen J, Zhou D, Yi J, Gu Q (2020) A Frank–Wolfe framework for efficient and effective adversarial attacks. In: Proceedings of the AAAI conference on artificial intelligence, vol 34, pp 3486–3494
Chen PY, Zhang H, Sharma Y, Yi J, Hsieh CJ (2017) ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: Proceedings of the 10th ACM workshop on artificial intelligence and security, pp 15–26
Chen SS, Donoho DL, Saunders MA (2001) Atomic decomposition by basis pursuit. SIAM Rev 43(1):129–159
Clarkson KL (2010) Coresets, sparse greedy approximation, and the Frank–Wolfe algorithm. ACM Trans Algorithms 6(4):1–30
Combettes C, Pokutta S (2020) Boosting Frank–Wolfe by chasing gradients. In: International conference on machine learning, PMLR, pp 2111–2121
Cristofari A, De Santis M, Lucidi S, Rinaldi F (2020) An active-set algorithmic framework for non-convex optimization problems over the simplex. Comput Optim Appl 77:57–89
Demyanov VF, Rubinov AM (1970) Approximate methods in optimization problems. American Elsevier, New York
Devolder O, Glineur F, Nesterov Y (2014) First-order methods of smooth convex optimization with inexact oracle. Math Program 146(1):37–75
Ding L, Fei Y, Xu Q, Yang C (2020) Spectral Frank–Wolfe algorithm: Strict complementarity and linear convergence. In: International conference on machine learning, PMLR, pp 2535–2544
Dunn JC (1979) Rates of convergence for conditional gradient algorithms near singular and nonsingular extremals. SIAM J Control Optim 17(2):187–211
Dunn JC, Harshbarger S (1978) Conditional gradient algorithms with open loop step size rules. J Math Anal Appl 62(2):432–444
Ferreira O, Sosa W (2021) On the Frank–Wolfe algorithm for non-compact constrained optimization problems. Optimization 1–15
Frank M, Wolfe P (1956) An algorithm for quadratic programming. Naval Res Logist Q 3(1–2):95–110
Freund RM, Grigas P (2016) New analysis and results for the Frank–Wolfe method. Math Program 155(1–2):199–230
Freund RM, Grigas P, Mazumder R (2017) An extended Frank–Wolfe method with in-face directions, and its application to low-rank matrix completion. SIAM J Optim 27(1):319–346
Fujishige S (1980) Lexicographically optimal base of a polymatroid with respect to a weight vector. Math Oper Res 5(2):186–196
Fukushima M (1984) A modified Frank–Wolfe algorithm for solving the traffic assignment problem. Trans Res Part B Methodol 18(2):169–177
Garber D (2019) Linear convergence of Frank–Wolfe for rank-one matrix recovery without strong convexity. arXiv preprint arXiv:1912.01467
Garber D (2020) Revisiting Frank–Wolfe for polytopes: Strict complementarity and sparsity. Adv Neural Inform Process Syst 33:18883–18893
Garber D, Hazan E (2015) Faster rates for the Frank–Wolfe method over strongly-convex sets. ICML 15:541–549
Garber D, Hazan E (2016) A linearly convergent variant of the conditional gradient algorithm under strong convexity, with applications to online and stochastic optimization. SIAM J Optim 26(3):1493–1528
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in neural information processing systems, pp 2672–2680
Guélat J, Marcotte P (1986) Some comments on Wolfe's away step. Math Program 35(1):110–119
Gutman DH, Pena JF (2021) The condition number of a function relative to a set. Math Program 188:255–294
Harchaoui Z, Juditsky A, Nemirovski A (2015) Conditional gradient algorithms for norm-regularized smooth convex optimization. Math Program 152(1):75–112
Hogan WW (1971) Convergence results for some extensions of the Frank–Wolfe method. Tech. rep., California Univ Los Angeles Western Management Science Inst
Holloway CA (1974) An extension of the Frank and Wolfe method of feasible directions. Math Program 6(1):14–27
Hungerford JT, Rinaldi F (2019) A general regularized continuous formulation for the maximum clique problem. Math Oper Res 44(4):1161–1173
Jaggi M (2011) Sparse convex optimization methods for machine learning. Ph.D. thesis, ETH Zurich
Jaggi M (2013) Revisiting Frank–Wolfe: Projection-free sparse convex optimization. ICML 1:427–435
Jaggi M, Sulovský M (2010) A simple algorithm for nuclear norm regularized problems. In: ICML, pp 471–478
Joulin A, Tang K, Fei-Fei L (2014) Efficient image and video co-localization with Frank–Wolfe algorithm. In: European conference on computer vision. Springer, pp 253–268
Kazemi E, Kerdreux T, Wang L (2021) Generating structured adversarial attacks using Frank–Wolfe method. arXiv preprint arXiv:2102.07360
Kerdreux T, d'Aspremont A, Pokutta S (2021) Projection-free optimization on uniformly convex sets. In: International conference on artificial intelligence and statistics, PMLR, pp 19–27
Kerdreux T, Liu L, Lacoste-Julien S, Scieur D (2020) Affine invariant analysis of Frank–Wolfe on strongly convex sets. arXiv preprint arXiv:2011.03351
Konnov I (2018) Simplified versions of the conditional gradient method. Optimization 67(12):2275–2290
Kumar P, Mitchell JS, Yıldırım EA (2003) Approximate minimum enclosing balls in high dimensions using core-sets. J Exp Algorithmics 8:1–1
Lacoste-Julien S, Jaggi M (2015) On the global linear convergence of Frank–Wolfe optimization variants. In: Advances in neural information processing systems, pp 496–504
Lacoste-Julien S, Jaggi M, Schmidt M, Pletscher P (2013) Block-coordinate Frank–Wolfe optimization for structural SVMs. In: Dasgupta S, McAllester D (eds) Proceedings of the 30th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol 28, PMLR, Atlanta, Georgia, USA, pp 53–61
Lan G (2020) First-order and stochastic optimization methods for machine learning. Springer, New York
Lan G, Zhou Y (2016) Conditional gradient sliding for convex optimization. SIAM J Optim 26(2):1379–1409
LeBlanc LJ, Morlok EK, Pierskalla WP (1975) An efficient approach to solving the road network equilibrium traffic assignment problem. Transp Res 9(5):309–318
Levitin ES, Polyak BT (1966) Constrained minimization methods. USSR Comput Math Math Phys 6(5):1–50
Locatello F, Khanna R, Tschannen M, Jaggi M (2017) A unified optimization view on generalized matching pursuit and Frank–Wolfe. In: Artificial intelligence and statistics, PMLR, pp 860–868
Luce RD, Perry AD (1949) A method of matrix analysis of group structure. Psychometrika 14(2):95–116
Mangasarian O (1996) Machine learning via polyhedral concave minimization. Appl Math Parallel Comput. Springer, New York, pp 175–188
Mitchell B, Demyanov VF, Malozemov V (1974) Finding the point of a polyhedron closest to the origin. SIAM J Control 12(1):19–26
Mitradjieva M, Lindberg PO (2013) The stiff is moving–conjugate direction Frank–Wolfe methods with applications to traffic assignment. Transp Sci 47(2):280–293
Mu C, Zhang Y, Wright J, Goldfarb D (2016) Scalable robust matrix recovery: Frank–Wolfe meets proximal methods. SIAM J Sci Comput 38(5):A3291–A3317
Osokin A, Alayrac JB, Lukasewitz I, Dokania P, Lacoste-Julien S (2016) Minding the gaps for block Frank–Wolfe optimization of structured SVMs. In: International conference on machine learning, PMLR, pp 593–602
Peña J, Rodriguez D (2018) Polytope conditioning and linear convergence of the Frank–Wolfe algorithm. Math Oper Res 44(1):1–18
Pedregosa F, Negiar G, Askari A, Jaggi M (2020) Linearly convergent Frank–Wolfe with backtracking line-search. In: International conference on artificial intelligence and statistics, PMLR, pp 1–10
Perederieieva O, Ehrgott M, Raith A, Wang JY (2015) A framework for and empirical study of algorithms for traffic assignment. Comput Oper Res 54:90–107
Rademacher L, Shu C (2020) The smoothed complexity of Frank–Wolfe methods via conditioning of random matrices and polytopes. arXiv preprint arXiv:2009.12685
Rinaldi F, Schoen F, Sciandrone M (2010) Concave programming for minimizing the zero-norm over polyhedral sets. Comput Optim Appl 46(3):467–486
Rinaldi F, Zeffiro D (2020) A unifying framework for the analysis of projection-free first-order methods under a sufficient slope condition. arXiv preprint arXiv:2008.09781
Sahu AK, Kar S (2020) Decentralized zeroth-order constrained stochastic optimization algorithms: Frank–Wolfe and variants with applications to black-box adversarial attacks. Proc IEEE 108(11):1890–1905
Shah N, Kolmogorov V, Lampert CH (2015) A multi-plane block-coordinate Frank–Wolfe algorithm for training structural SVMs with a costly max-oracle. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2737–2745
Sun Y (2020) Safe screening for the generalized conditional gradient method. Image 1:2
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J Roy Stat Soc: Ser B (Methodol) 58(1):267–288
Vapnik V (2013) The nature of statistical learning theory. Springer, New York
Von Hohenbalken B (1977) Simplicial decomposition in nonlinear programming algorithms. Math Program 13(1):49–68
Wang H, Lu H, Mazumder R (2020) Frank–Wolfe methods with an unbounded feasible region and applications to structured learning. arXiv preprint arXiv:2012.15361
Wang YX, Sadhanala V, Dai W, Neiswanger W, Sra S, Xing E (2016) Parallel and distributed block-coordinate Frank–Wolfe algorithms. In: International conference on machine learning, PMLR, pp 1548–1557
Zurück zum Zitat Wardrop JG (1952) Road paper. some theoretical aspects of road traffic research. Proc Inst Civ Eng 1(3):325–362 Wardrop JG (1952) Road paper. some theoretical aspects of road traffic research. Proc Inst Civ Eng 1(3):325–362
Zurück zum Zitat Weintraub A, Ortiz C, González J (1985) Accelerating convergence of the Frank–Wolfe algorithm. Transp Res Part B Methodol 19(2):113–122CrossRef Weintraub A, Ortiz C, González J (1985) Accelerating convergence of the Frank–Wolfe algorithm. Transp Res Part B Methodol 19(2):113–122CrossRef
Zurück zum Zitat Wolfe P (1970) Convergence theory in nonlinear programming. In: Abadie J (ed) Integer and nonlinear programming. North Holland, pp 1–36 Wolfe P (1970) Convergence theory in nonlinear programming. In: Abadie J (ed) Integer and nonlinear programming. North Holland, pp 1–36
Zurück zum Zitat Wolfe P (1976) Finding the nearest point in a polytope. Math Program 11(1):128–149CrossRef Wolfe P (1976) Finding the nearest point in a polytope. Math Program 11(1):128–149CrossRef
Zurück zum Zitat Wu Q, Hao JK (2015) A review on algorithms for maximum clique problems. Eur J Oper Res 242(3):693–709CrossRef Wu Q, Hao JK (2015) A review on algorithms for maximum clique problems. Eur J Oper Res 242(3):693–709CrossRef
Zurück zum Zitat Yıldırım EA (2008) Two algorithms for the minimum enclosing ball problem. SIAM J Optim 19(3):1368–1391CrossRef Yıldırım EA (2008) Two algorithms for the minimum enclosing ball problem. SIAM J Optim 19(3):1368–1391CrossRef