2009 | Original Paper | Book Chapter
On the Foundations of Quantitative Information Flow
Author: Geoffrey Smith
Published in: Foundations of Software Science and Computational Structures
Publisher: Springer Berlin Heidelberg
There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and side-channel analysis. Such theories offer an attractive way to relax the standard noninterference properties, letting us tolerate "small" leaks that are necessary in practice. The emerging consensus is that quantitative information flow should be founded on the concepts of Shannon entropy and mutual information. But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat. In this paper, we focus on the threat that an attack will allow the secret to be guessed correctly in one try. With respect to this threat model, we argue that the consensus definitions actually fail to give good security guarantees; the problem is that a random variable can have arbitrarily large Shannon entropy even if it is highly vulnerable to being guessed. We then explore an alternative foundation based on a concept of vulnerability (closely related to Bayes risk), which measures uncertainty using Rényi's min-entropy rather than Shannon entropy.