Abstract
Memory-hard functions (MHF) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains.
Alwen and Serbinenko [STOC’15] define the cumulative memory complexity (cmc) of a function as the sum (over all time-steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient, as it fails to capture possible time-memory trade-offs: since memory cost does not scale linearly over time, functions with the same cmc can still have very different actual hardware cost.
In this work we address this problem, and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using n steps and \(O(n/\log (n))\) memory, in each step making one query to the (fixed-input length) random oracle, while any algorithm that can make arbitrarily many parallel queries to the random oracle still needs \(\varOmega (n/\log (n))\) memory for \(\varOmega (n)\) steps.
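In the parallel black-pebbling formulation used below, these two measures can be made precise. As a sketch, assuming the standard notation where \(P = (P_0,\ldots ,P_t) \in \varPi ^{\parallel }_{G}\) is a legal parallel pebbling of a DAG G and \(P_i\) is the set of pebbled nodes at step i:
\[ \varPi ^{\parallel }_{cc}(P) = \sum _{i=0}^{t} \left| P_i\right| , \qquad \varPi _{ss}(P,s) = \bigl| \{\, i : \left| P_i\right| \ge s \,\} \bigr| , \qquad \varPi _{ss}(G,s) = \min _{P \in \varPi ^{\parallel }_{G}} \varPi _{ss}(P,s). \]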
As has been done for various notions (including cmc) before, we reduce the task of constructing an MHF with high sustained-memory complexity to proving pebbling lower bounds on DAGs. Our main technical contribution is the construction of a family of DAGs on n nodes with constant indegree and high “sustained-space complexity”, meaning that any parallel black-pebbling strategy requires \(\varOmega (n/\log (n))\) pebbles for at least \(\varOmega (n)\) steps.
Along the way we construct a family of maximally “depth-robust” DAGs with maximum indegree \(O(\log n)\), improving upon the construction of Mahmoody et al. [ITCS’13], which had maximum indegree \(O\left( \log ^2 n \cdot {{\mathsf {polylog}}} (\log n)\right) \).
We typically want a DAG G with \({\mathsf {indeg}} (G)=2\) because the compression function H which is used to label the graph typically maps 2w-bit inputs to w-bit outputs. In this case the labeling function is only valid for graphs with maximum indegree two. If we instead used tricks such as Merkle-Damgård to build a new compression function \(H'\) mapping \(\delta w\)-bit inputs to w-bit outputs, then each pebbling step would actually correspond to \(\left( \delta -1\right) \) calls to the compression function H, which means that each black pebbling step actually takes time \(\left( \delta -1\right) \) on a single-core sequential computer. As a consequence, by considering graphs of degree \(\delta \), we pay an additional factor \((\delta -1)\) in the gap between the naive and adversarial evaluation of the MHF.
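To make the \((\delta -1)\)-call accounting concrete, here is a minimal Python sketch; the helper name md_compress and the use of truncated SHA-256 as a stand-in for the 2w-bit-to-w-bit compression function H are illustrative assumptions, not part of the paper's construction:

```python
import hashlib

W = 32  # w bytes; truncated SHA-256 below stands in for a 2w -> w compression function H


def H(x: bytes) -> bytes:
    """Toy 2w-bit -> w-bit compression function (illustrative only)."""
    assert len(x) == 2 * W
    return hashlib.sha256(x).digest()[:W]


def md_compress(blocks: list[bytes]) -> bytes:
    """Merkle-Damgard chaining: compress delta blocks of w bits into w bits
    using exactly (delta - 1) calls to H."""
    assert len(blocks) >= 2 and all(len(b) == W for b in blocks)
    state = H(blocks[0] + blocks[1])   # 1st call to H
    for b in blocks[2:]:               # (delta - 2) further calls
        state = H(state + b)
    return state                       # total: delta - 1 calls to H
```

Labeling a node from the labels of its \(\delta \) parents thus costs exactly \(\delta -1\) sequential calls to H, which is precisely the factor lost in the naive-versus-adversarial gap.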
Furthermore, even if we restrict our attention to pebblings which finish in time O(n), we still have \(\varPi _{ss}\left( G_n,f(n)\right) \le g(n)\) whenever \(f(n)g(n) \in \omega \left( \frac{n^2 \log \log n}{\log n}\right) \) and \({\mathsf {indeg}} (G_n)\in O(1)\). In particular, Alwen and Blocki [AB16] showed that for any \(G_n\) with \({\mathsf {indeg}} (G_n)\in O(1)\) there is a pebbling \(P = (P_0,\ldots ,P_n) \in \varPi ^{\parallel }_{G_n}\) with \(\varPi ^{\parallel }_{cc}(P) \in O\left( \frac{n^2 \log \log n}{\log n}\right) \). By contrast, the generic pebbling [HPV77] of any DAG with \({\mathsf {indeg}} \in O(1)\) in space \(O\left( n/\log n\right) \) can take exponentially long.
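As a sketch of how these pebbling costs are measured in practice (the never-discard pebbling of a path graph here is an illustrative toy, not one of the pebblings from [AB16]):

```python
def pi_cc(pebbling):
    """Cumulative complexity: total number of pebbles summed over all steps."""
    return sum(len(P_i) for P_i in pebbling)


def pi_ss(pebbling, s):
    """Sustained-space cost: number of steps holding at least s pebbles."""
    return sum(1 for P_i in pebbling if len(P_i) >= s)


# Example: the sequential pebbling of the path graph 1 -> 2 -> ... -> n
# that never discards a pebble, i.e. P_i = {1, ..., i}.
n = 8
pebbling = [set(range(1, i + 1)) for i in range(n + 1)]
print(pi_cc(pebbling))     # 0 + 1 + ... + 8 = n(n+1)/2 = 36
print(pi_ss(pebbling, 4))  # steps 4..8 each hold >= 4 pebbles -> 5
```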
To see this, observe that if \(G_n^\epsilon \) is a \(\delta \)-local expander then \(G_n^\epsilon [\{1,\ldots ,i\}]\) is also a \(\delta \)-local expander. Therefore, Lemmas 5 and 6 imply that \(G_n^\epsilon [\{1,\ldots ,i\}]\) is (ai, bi)-depth robust for any \(a+b \le 1-\epsilon \). Since \(H_i\) is a subgraph of \(G_n^\epsilon [\{1,\ldots ,i\}]\), it must be that \(H_i\) is \(\left( a\left| Good_i \right| ,\left( 1-a\right) \left| Good_i\right| -\epsilon i\right) \)-depth robust. Otherwise, there would be a set \(S \subseteq V(H_i)\) of size \(a\left| Good_i \right| \) such that \({\mathsf {depth}} (H_i-S) < \left( 1-a\right) \left| Good_i \right| -\epsilon i\), which implies that \({\mathsf {depth}} (G_n^\epsilon [\{1,\ldots ,i\}] - S) \le i-|Good_i| + {\mathsf {depth}} (H_i-S) < i -a|Good_i|-\epsilon i\), contradicting the depth-robustness of \(G_n^\epsilon [\{1,\ldots ,i\}]\).
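For intuition, \((e,d)\)-depth robustness can be checked by brute force on small DAGs; the following Python sketch (exponential in e, purely illustrative) computes \({\mathsf {depth}} (G-S)\) as the number of nodes on the longest remaining path and tests every removal set:

```python
from itertools import combinations


def depth(nodes, edges, removed=frozenset()):
    """Number of nodes on the longest path in the DAG after deleting
    `removed`; `nodes` is assumed to be in topological order."""
    longest = {}
    for v in nodes:
        if v in removed:
            continue
        longest[v] = 1 + max(
            (longest[u] for (u, w) in edges if w == v and u in longest),
            default=0,
        )
    return max(longest.values(), default=0)


def is_depth_robust(nodes, edges, e, d):
    """(e, d)-depth robust: every removal of at most e nodes leaves a
    path of at least d nodes (brute force over all removal sets)."""
    return all(
        depth(nodes, edges, frozenset(S)) >= d
        for k in range(e + 1)
        for S in combinations(nodes, k)
    )


# The path graph 1 -> 2 -> ... -> 5 is poorly depth robust: removing the
# middle node already cuts the longest path down to 2 nodes.
nodes = [1, 2, 3, 4, 5]
edges = [(i, i + 1) for i in range(1, 5)]
print(is_depth_robust(nodes, edges, 1, 2))  # True
print(is_depth_robust(nodes, edges, 1, 3))  # False (remove node 3)
```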