2006 | Original Paper | Book Chapter
Multi-objective Optimization with the Naive MIDEA
Authors: Peter A.N. Bosman, Dirk Thierens
Published in: Towards a New Evolutionary Computation
Publisher: Springer Berlin Heidelberg
EDAs have been shown to perform well on a wide variety of single-objective optimization problems, for both binary and real-valued variables. In this chapter we look into the extension of the EDA paradigm to multi-objective optimization. To this end, we focus the chapter around the introduction of a simple but effective EDA for multi-objective optimization: the naive MIDEA (mixture-based multi-objective iterated density-estimation evolutionary algorithm). The probabilistic model in this specific algorithm is a mixture distribution in which each component is a univariate factorization. As will be shown in this chapter, mixture distributions allow for widespread exploration of a multi-objective front, whereas most operators focus on a specific part of the front. This widespread exploration aids the preservation of diversity, which is important in multi-objective optimization. To further improve and maintain the diversity obtained by the mixture distribution, the naive MIDEA uses a specialized diversity-preserving selection operator. We verify the effectiveness of the naive MIDEA in two different problem domains and compare it with two other well-known, efficient multi-objective evolutionary algorithms (MOEAs).
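To make the model concrete, the following is a minimal, hedged sketch (not the authors' implementation) of sampling a new binary individual from a mixture of univariate factorizations, the kind of probabilistic model the naive MIDEA uses. The function name and data layout are illustrative assumptions: each mixture component stores one marginal probability per variable, a component is drawn according to its mixture weight, and each bit is then sampled independently from that component's marginals.

```python
import random

def sample_individual(weights, marginals, rng=random):
    """Sample one binary solution from a mixture of univariate factorizations.

    weights:   mixture weights for the components (non-negative, summing to 1)
    marginals: marginals[k][i] is P(x_i = 1) under component k
    """
    # Pick a mixture component in proportion to its weight.
    k = rng.choices(range(len(weights)), weights=weights)[0]
    # Univariate factorization: each variable is sampled independently
    # from its own marginal probability in the chosen component.
    return [1 if rng.random() < p else 0 for p in marginals[k]]
```

In a full EDA loop, the weights and marginals would be re-estimated each generation from the selected individuals (e.g. one component per cluster of the selected set), which is what lets different components cover different parts of the multi-objective front.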