2010 | Original Paper | Book Chapter
The Group-Lasso: ℓ1, ∞ Regularization versus ℓ1,2 Regularization
Authors: Julia E. Vogt, Volker Roth
Published in: Pattern Recognition
Publisher: Springer Berlin Heidelberg
The ℓ1,∞ norm and the ℓ1,2 norm are well-known tools for joint regularization in Group-Lasso methods. While the ℓ1,2 version has been studied in detail, open questions remain regarding the uniqueness of solutions and the efficiency of algorithms for the ℓ1,∞ variant. For the latter, we characterize the conditions for uniqueness of solutions, present a simple test for uniqueness, and derive a highly efficient active set algorithm that can handle input dimensions in the millions. We compare both variants of the Group-Lasso in its two most common application scenarios: one is obtaining sparsity on the level of groups in “standard” prediction problems, the other is multi-task learning, where the aim is to solve many learning problems in parallel that are coupled via the Group-Lasso constraint. We show that both versions perform quite similarly in “standard” applications. However, a very clear distinction between the variants emerges in multi-task settings, where the ℓ1,2 version consistently outperforms the ℓ1,∞ counterpart in terms of prediction accuracy.
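To make the two regularizers concrete, the following is a minimal NumPy sketch of how the ℓ1,2 and ℓ1,∞ group norms are computed on a coefficient matrix; the matrix `W` (rows indexing groups, columns indexing tasks) is purely illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative coefficient matrix: each row is one group of coefficients,
# e.g. one feature shared across three tasks in a multi-task setting.
W = np.array([[1.0, -2.0, 0.5],
              [0.0,  0.0, 0.0],   # a group that is entirely zeroed out
              [3.0,  1.0, -1.0]])

# l1,2 group norm: sum over groups of the Euclidean norm of each row.
l1_2 = np.sum(np.linalg.norm(W, axis=1))

# l1,inf group norm: sum over groups of the largest absolute entry per row.
l1_inf = np.sum(np.max(np.abs(W), axis=1))

print(l1_2, l1_inf)
```

Both norms are sums over groups of a within-group norm, which is why penalizing either one drives entire rows (groups) to zero at once, as in the second row above; they differ only in how coefficients inside a group are aggregated (Euclidean length versus maximum magnitude).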