The notion of differential privacy has emerged in the area of statistical databases as a measure of protection of the participants' sensitive information, which can be compromised by selected queries. Differential privacy is usually achieved by mechanisms that add random noise to the query answer. Privacy is thus obtained at the cost of reduced accuracy, and therefore reduced utility, of the answer. Since the utility depends on the user's side information, commonly modelled as a prior distribution, a natural goal is to design mechanisms that are optimal for every prior.
However, it has been shown that such mechanisms do not exist for any query other than (essentially) counting queries ().
Given this negative result, in this paper we consider the problem of identifying a restricted class of priors for which an optimal mechanism does exist.
Given an arbitrary query and a privacy parameter, we geometrically characterise a special region of priors as a convex polytope in the space of priors. We then derive upper bounds on the utility, as well as on the min-entropy leakage, for the priors in this region. Finally, we define a particular mechanism and discuss the conditions for its existence. This mechanism attains the bounds for all priors in the region, and is therefore optimal on the whole region.
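To make the quantities concrete, the following sketch computes the min-entropy leakage of a mechanism, viewed as a channel matrix of conditional probabilities, under a given prior. This is a standard textbook computation, not the paper's construction; the randomized-response-style channel used below is a hypothetical example, and the function name is ours.

```python
import numpy as np

def min_entropy_leakage(prior, channel):
    """Min-entropy leakage of a mechanism under a prior.

    prior:   shape (n,), the user's prior p(x) over secrets.
    channel: shape (n, m), rows are p(y | x) and sum to 1.
    Leakage = log2(posterior vulnerability) - log2(prior vulnerability),
    i.e. H_inf(prior) - H_inf(prior given the output).
    """
    joint = prior[:, None] * channel        # p(x, y)
    post_vuln = joint.max(axis=0).sum()     # expected max posterior mass
    prior_vuln = prior.max()                # best a priori guess
    return np.log2(post_vuln) - np.log2(prior_vuln)

# Hypothetical example: a binary randomized-response-style mechanism
# whose diagonal probability p corresponds to epsilon = 1 (p = e / (1 + e)).
prior = np.array([0.5, 0.5])
p = np.e / (1 + np.e)
channel = np.array([[p, 1 - p],
                    [1 - p, p]])
print(min_entropy_leakage(prior, channel))
```

For the identity channel (no noise) on a uniform prior over n secrets, the leakage is the full log2(n) bits; adding noise shrinks it, which is the utility/privacy trade-off described above.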