In this paper we propose a new technique and a framework for selecting inlining heuristic constraints, referred to as an inlining vector, for program optimization. The proposed technique uses machine learning to model the correspondence between inlining vectors and performance (measured as program completion time). The automatic selection of a machine learning algorithm to build such a model is part of our technique, and we present a rigorous selection procedure. For a given architecture, the model evaluates the benefit of inlining combined with other global optimizations and selects an inlining vector that, within the limits of the model, minimizes the completion time of a program.
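The core loop described above, learning a model from measured (inlining vector, runtime) pairs and then picking the vector the model predicts to be fastest, can be sketched as follows. This is a minimal illustration only: the knob names and ranges, the synthetic runtime function, and the quadratic least-squares surrogate are all assumptions for the sketch, whereas the actual technique selects among several machine learning algorithms and measures real program runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical inlining knobs (loosely inspired by GCC --param
# limits such as max-inline-insns-auto); ranges are illustrative only.
LO = np.array([100.0, 10.0, 2.0])
HI = np.array([800.0, 80.0, 16.0])

def measured_runtime(v):
    """Synthetic stand-in for benchmarking a program compiled with
    inlining vector v; the real technique measures completion time."""
    opt = np.array([400.0, 40.0, 8.0])  # assumed (unknown) sweet spot
    return 10.0 + np.sum(((v - opt) / opt) ** 2)

# 1. Collect training data: sample inlining vectors, "measure" runtimes.
X = rng.uniform(LO, HI, size=(50, 3))
y = np.array([measured_runtime(v) for v in X])

# 2. Fit a surrogate model of the vector-to-runtime correspondence
#    (quadratic least squares stands in for the selected ML algorithm).
def features(V):
    return np.hstack([np.ones((len(V), 1)), V, V ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Select the inlining vector the model predicts to be fastest.
candidates = rng.uniform(LO, HI, size=(2000, 3))
best = candidates[np.argmin(features(candidates) @ coef)]

baseline = np.array([300.0, 30.0, 8.0])  # stand-in for default settings
print("selected vector:", np.round(best, 1))
print("runtime: selected %.3f vs baseline %.3f"
      % (measured_runtime(best), measured_runtime(baseline)))
```

In the paper's setting, step 1 corresponds to compiling and timing benchmark runs, and step 3 searches the space of inlining vectors rather than a random candidate set.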
We conducted our experiments using the GNU GCC compiler and optimized 22 combinations
from SPEC CINT2006 on the state-of-the-art Intel Xeon Westmere architecture. Compared with the compiler's standard optimization level, our technique yields performance improvements ranging from 2% to 9%.