A fundamental problem of theoretical and practical interest, one that lies at the heart of control theory, is the design of controllers that yield acceptable performance not just for a single plant under known inputs, but rather for a family
of plants under various types of inputs and disturbances. The importance of this problem has long been recognized, and over the years various scientific approaches have been developed and tested. A common initial phase of all these approaches has been the formulation of a mathematically well-defined problem, usually in the form of the optimization of a performance index, which is then followed either by the use of available tools or by the development of the requisite new mathematical tools for the solution of these problems. Two of these approaches, the sensitivity approach and the linear-quadratic-Gaussian (LQG) design, dominated the field in the 1960s and early 1970s, with the former allowing small perturbations around an adopted nominal model and the latter ascribing some statistical description (specifically, Gaussian statistics) to the disturbances or unknown inputs. During this period, the role of game theory
in the design of robust (minimax) controllers was also recognized, with the terminology “minimax controller” adopted from the statistical decision theory of the 1950s. Here the objective is to obtain a design that minimizes a given performance index under the worst possible disturbances or parameter variations (which maximize the same performance index). Since the desired controller will have to have a dynamic structure, operating on the available measurements, this game-theoretic approach naturally leads to the setting of dynamic (or differential) games; but with differential game theory (particularly with regard to information structures) still in its infancy in the 1960s, these initial attempts did not lead to sufficiently general constructive methods for the design of robust controllers.
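In modern notation, the minimax design objective described above can be sketched as follows. This is a schematic only, with illustrative symbols not taken from the original text: $\mu$ denotes an admissible controller, $w$ an admissible disturbance, and $J$ the performance index.

```latex
% Minimax (worst-case) design: the controller minimizes the
% performance index J while the disturbance maximizes it.
J^{*} \;=\; \min_{\mu \in \mathcal{M}} \; \max_{w \in \mathcal{W}} \; J(\mu, w)
```

A controller $\mu^{*}$ attaining this value guarantees $J(\mu^{*}, w) \le J^{*}$ for every admissible disturbance $w \in \mathcal{W}$, which is precisely the worst-case performance guarantee the minimax approach seeks.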