Abstract
Computational notions of entropy have recently found many applications, including leakage-resilient cryptography, deterministic encryption, and memory delegation. The two main types of results that make computational notions so useful are (1) chain rules, which quantify by how much the computational entropy of a variable decreases when conditioned on some other variable, and (2) transformations, which quantify to what extent one type of entropy implies another.
Such chain rules and transformations typically lose a significant amount in the quality of the entropy, and are the reason why applying these results yields rather weak quantitative security bounds. In this paper we prove, for the first time, lower bounds in this context, showing that existing results for transformations are, unfortunately, basically optimal for non-adaptive black-box reductions (and it is hard to imagine how non-black-box reductions or adaptivity could help here).
A variable X has k bits of HILL entropy of quality \((\epsilon ,s)\) if there exists a variable Y with k bits min-entropy which cannot be distinguished from X with advantage \(\epsilon \) by distinguishing circuits of size s. A weaker notion is Metric entropy, where we switch quantifiers, and only require that for every distinguisher of size s, such a Y exists.
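Min-entropy is the information-theoretic baseline that HILL entropy relaxes: it is determined by the single most likely outcome. A minimal sketch, with a hypothetical `min_entropy` helper (not from the paper), illustrating what "k bits of min-entropy" means for a concrete distribution:

```python
import math

def min_entropy(probs):
    """Min-entropy H_inf(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(probs))

# Uniform over 8 outcomes: 3 bits of min-entropy.
print(min_entropy([1/8] * 8))            # 3.0

# A skewed distribution: one outcome of probability 1/2 caps
# the min-entropy at 1 bit, regardless of the remaining mass.
print(min_entropy([1/2] + [1/14] * 7))   # 1.0
```

Note that min-entropy is governed only by the maximum probability, which is why it can be much smaller than Shannon entropy for skewed distributions.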
We first describe our result concerning transformations. By definition, HILL implies Metric without any loss in quality. Metric entropy often comes up in applications, but must be transformed to HILL for meaningful security guarantees. The best known result states that if a variable X has k bits of Metric entropy of quality \((\epsilon ,s)\), then it has k bits of HILL entropy of quality \((2\epsilon ,s\cdot \epsilon ^2)\). We show that this loss of a factor \({\varOmega }(\epsilon ^{-2})\) in circuit size is necessary. In fact, we show the stronger result that this loss is already necessary when transforming so-called deterministic real-valued Metric entropy to randomised boolean Metric entropy (both these variants of Metric entropy are implied by HILL without loss in quality).
The chain rule for HILL entropy states that if X has k bits of HILL entropy of quality \((\epsilon ,s)\), then for any variable Z of length m, X conditioned on Z has \(k-m\) bits of HILL entropy with quality \((\epsilon ,s\cdot \epsilon ^2/ 2^{m})\). We show that a loss of \({\varOmega }(2^m/\epsilon )\) in circuit size is necessary here. Note that this still leaves a gap of \(\epsilon \) between the known bound and our lower bound.
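To see how severe these quality losses are in practice, here is a small numeric illustration of the transformation and chain-rule bounds stated above. The parameter values (\(\epsilon = 2^{-20}\), \(s = 2^{60}\), \(m = 20\)) are assumptions chosen for illustration only:

```python
# Illustrative parameters (assumed, not from the paper):
# distinguishing advantage eps, circuit size s, conditioning length m.
eps = 2.0 ** -20
s = 2.0 ** 60
m = 20

# Metric -> HILL transformation: quality (eps, s) becomes (2*eps, s * eps^2).
hill_size = s * eps ** 2             # 2^60 * 2^-40 = 2^20

# HILL chain rule: quality (eps, s) becomes (eps, s * eps^2 / 2^m).
chain_size = s * eps ** 2 / 2 ** m   # 2^60 * 2^-40 / 2^20 = 1

print(hill_size, chain_size)         # 1048576.0 1.0
```

Starting from security against circuits of size \(2^{60}\), the transformation only guarantees security against circuits of size \(2^{20}\), and after the chain rule the guarantee degenerates to circuits of size 1, i.e. becomes vacuous. This is the quantitative weakness the lower bounds show to be inherent.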
Their \({\varOmega }(n/\log n)\) lower bound matches existing constructions from regular one-way functions [10]. For general one-way functions this lower bound is still far from the best known construction [28], which makes \({{\tilde{{\varTheta }}}}(n^3)\) calls.
Their question concerned chain rules bounding the worst-case entropy, that is, bounding \({\mathbf {H}}^{\mathsf{{HILL}}}(X|Z=z)\) for every z. Our result, stated for the average-case entropy \({\mathbf {H}}^{\mathsf{{HILL}}}(X|Z)\), is much more general and applies to qualitatively better chain rules obtained by simulator arguments.
It might be hard to find a high min-entropy distribution Y that fools a randomized distinguisher D, but this task can become easy once D’s randomness is fixed.
The class of adversaries here consists of all circuits with at most T gates in total, including oracle gates. Theorem 2 also holds when the circuit size s is much bigger than the total number of oracle gates T (under some assumptions on s, \(\ell \), \(\epsilon \)). For simplicity, we do not state this version.
We use the following version (Hoeffding's inequality): let \(X_i\) for \(i=1,\ldots ,N\) be independent random variables such that \(X_i \in [a_i,b_i]\). Then for any positive t we have \( \Pr _{X_1,\ldots ,X_N}\left[ \sum _{i=1}^{N} X_i - {{\mathrm{{\mathbb {E}}}}}\left[ \sum _{i=1}^{N} X_i \right] \geqslant t \right] \leqslant \exp \left( -\frac{2 t^2}{\sum _{i=1}^{N} (b_i-a_i)^2}\right) \).
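As a sanity check on this inequality, here is a minimal numerical sketch: a hypothetical `hoeffding_bound` helper computes the right-hand side, and sampling N fair coins confirms that the empirical tail probability stays below it. The parameter choices are assumptions for illustration:

```python
import math
import random

def hoeffding_bound(t, ranges):
    """Hoeffding upper bound on Pr[sum X_i - E[sum X_i] >= t]
    for independent X_i in [a_i, b_i]; ranges = [(a_i, b_i), ...]."""
    denom = sum((b - a) ** 2 for a, b in ranges)
    return math.exp(-2 * t ** 2 / denom)

# Empirical check: N fair coins in {0,1}, deviation t above the mean N/2.
random.seed(0)
N, t, trials = 100, 10, 20000
bound = hoeffding_bound(t, [(0, 1)] * N)   # exp(-2*100/100) = exp(-2) ~ 0.135

hits = sum(
    1 for _ in range(trials)
    if sum(random.randint(0, 1) for _ in range(N)) - N / 2 >= t
)
empirical = hits / trials

# The empirical tail probability should not exceed the Hoeffding bound.
assert empirical <= bound
print(empirical, bound)
```

Note that without the minus sign in the exponent, the right-hand side would exceed 1 and the bound would be vacuous, which is how one can quickly sanity-check the sign.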
This can be shown along the lines of the proof that a random exponential-size subset is unconditionally pseudorandom against exponential-size distinguishers; see Goldreich's book “Foundations of Cryptography – Basic Techniques”, Proposition 3.2.3.