
About this Book

Lattices are geometric objects that can be pictorially described as the set of intersection points of an infinite, regular n-dimensional grid. Despite their apparent simplicity, lattices hide a rich combinatorial structure, which has attracted the attention of great mathematicians over the last two centuries. Not surprisingly, lattices have found numerous applications in mathematics and computer science, ranging from number theory and Diophantine approximation to combinatorial optimization and cryptography. The study of lattices, specifically from a computational point of view, was marked by two major breakthroughs: the development of the LLL lattice reduction algorithm by Lenstra, Lenstra and Lovász in the early 80's, and Ajtai's discovery of a connection between the worst-case and average-case hardness of certain lattice problems in the late 90's. The LLL algorithm, despite the relatively poor quality of the solution it gives in the worst case, made it possible to devise polynomial time solutions to many classical problems in computer science. These include solving integer programs in a fixed number of variables, factoring polynomials over the rationals, breaking knapsack-based cryptosystems, and finding solutions to many other Diophantine and cryptanalysis problems.

Table of Contents

Frontmatter

Chapter 1. Basics

Abstract
This book is about algorithmic problems on point lattices, and their computational complexity. In this chapter we give some background about lattices and complexity theory.
Daniele Micciancio, Shafi Goldwasser

Chapter 2. Approximation Algorithms

Abstract
In this chapter we describe efficient algorithms to approximately solve SVP and CVP. For both problems, we solve the search version: we give polynomial time algorithms to find approximately shortest nonzero vectors in a lattice, or lattice vectors approximately closest to a given target point. The approximation factor achieved is exponential in the rank of the lattice. In Section 1 we start with an algorithm to solve SVP in dimension 2. For the special case of 2-dimensional lattices, we are able to solve SVP exactly and find a lattice vector of length \( \left\| a \right\| = {\lambda_1} \). In fact, we can find a lattice basis \( \left[ {a,b} \right] \) with \( \left\| a \right\| = {\lambda_1} \) and \( \left\| b \right\| = {\lambda_2} \). So, the algorithm determines all successive minima of the lattice. The algorithm works for any (efficiently computable) norm \( \left\| \cdot \right\| \), and it is, essentially, the generalization to arbitrary norms of an algorithm of Gauss. Then, in Section 2, we extend Gauss' algorithm to n-dimensional lattices. This is the famous Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm (Lenstra et al., 1982). The extension comes at a price: the LLL algorithm does not find a lattice vector of length \( {\lambda_1} \), but only a \( \gamma (n) = {\left( {2/\sqrt {3} } \right)^n} \) approximation, i.e., a nonzero lattice vector of length at most \( \gamma (n) \cdot {\lambda_1} \). Finally, in Section 3 we use the LLL algorithm to approximately solve CVP.
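As an illustration of the two-dimensional case, Gauss' reduction alternately subtracts an integer multiple of the shorter vector from the longer one and swaps the pair until no progress is possible. A minimal Python sketch for the Euclidean norm (the function name is ours, not the book's):

```python
def gauss_reduce(a, b):
    """Gauss' reduction for rank-2 integer lattices (Euclidean norm).
    Returns a basis (a, b) with ||a|| = lambda_1 and ||b|| = lambda_2."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]
    if norm2(a) > norm2(b):
        a, b = b, a
    while True:
        # nearest integer to the projection coefficient <a,b>/<a,a>
        mu = round((a[0] * b[0] + a[1] * b[1]) / norm2(a))
        b = (b[0] - mu * a[0], b[1] - mu * a[1])
        if norm2(b) >= norm2(a):
            return a, b
        a, b = b, a
```

For example, the basis (4, 1), (7, 2) generates all of Z^2, and the sketch reduces it to a pair of vectors of length 1 each, attaining both successive minima.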
Daniele Micciancio, Shafi Goldwasser

Chapter 3. Closest Vector Problem

Abstract
In Chapter 2 we described algorithms to (approximately) solve SVP and CVP. These algorithms exhibit relatively good performance as far as the running time is concerned. In particular, they terminate within a time bound that is polynomial in the size of the input. However, these algorithms offer very poor guarantees on the quality of the solution returned: the worst-case approximation factor achieved by the best known polynomial time algorithm is essentially exponential in the rank of the lattice. To date no efficient algorithm that provably approximates SVP or CVP within small factors (e.g., factors that are polynomial in the rank of the lattice) is known. In this chapter we start studying lattices from a computational complexity point of view, and, in particular we investigate the hardness of the closest vector problem. We first consider the problem of solving CVP exactly, and prove that this problem is hard for NP. Therefore no efficient algorithm to solve CVP exists, unless P equals NP.
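The NP-hardness proof proceeds by reduction from subset sum. The following toy Python sketch conveys the flavor of such a reduction (this is our simplified illustration, not the exact construction of Theorem 3.1, which additionally scales the first coordinate to enforce exactness):

```python
def subset_sum_to_cvp(weights, s):
    """Map a subset sum instance (weights, s) to a CVP-style instance (cols, t).
    Basis columns are b_i = (a_i, 0, ..., 2, ..., 0), so a binary coefficient
    vector x gives B x = (sum_i x_i a_i, 2 x_1, ..., 2 x_n), which lies at
    squared distance exactly n from t = (s, 1, ..., 1) precisely when x
    encodes a subset of `weights` summing to s.  (Toy version; the book's
    reduction also scales the first row.)"""
    n = len(weights)
    cols = [[weights[i]] + [2 if j == i else 0 for j in range(n)]
            for i in range(n)]
    t = [s] + [1] * n
    return cols, t

def combine(cols, x):
    """Lattice vector sum_i x_i * b_i for integer coefficients x."""
    m = len(cols[0])
    return [sum(c[j] * xi for c, xi in zip(cols, x)) for j in range(m)]
```

For weights (3, 5, 7) and target sum 8, the coefficient vector (1, 1, 0) yields a lattice vector at squared distance 3 = n from t, witnessing the subset {3, 5}.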
Daniele Micciancio, Shafi Goldwasser

Chapter 4. Shortest Vector Problem

Abstract
In this chapter we study the hardness of approximating the shortest vector problem (SVP). Recall that in SVP one is given a matrix \( B \in {\mathbb{Q}^{m \times n}} \), and the goal is to find the shortest nonzero vector in the lattice generated by B. In Chapter 3 we have already studied another important algorithmic problem on lattices: the closest vector problem (CVP). In CVP, in addition to the lattice basis \( B \in {\mathbb{Q}^{m \times n}} \), one is given a target vector \( t \in {\mathbb{Q}^m} \), and the goal is to find the lattice point in L(B) closest to t. In Chapter 3 we showed that the NP-hardness of CVP can be easily established by reduction from subset sum (Theorem 3.1), and even approximating CVP within any constant or “almost polynomial” factors is hard for NP. We also observed that the reduction from subset sum to CVP can be easily adapted to prove that SVP in the \( {\ell_\infty} \) norm is NP-hard (Theorem 3.2). Unfortunately, that simple reduction does not work for any other norm. In this chapter, we investigate the computational complexity of SVP in any \( {\ell_p} \) norm other than \( {\ell_\infty} \), with special attention to the Euclidean norm \( {\ell_2} \). In the rest of this chapter the \( {\ell_2} \) norm is assumed, unless explicitly stated otherwise.
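For reference, the family of norms compared in this chapter can be computed with a trivial helper (ours, not from the book):

```python
def lp_norm(v, p):
    """l_p norm of a vector; p = float('inf') gives the l_infinity (max)
    norm, while p = 2 gives the Euclidean norm assumed in this chapter."""
    if p == float('inf'):
        return max(abs(x) for x in v)
    return sum(abs(x) ** p for x in v) ** (1 / p)
```

For instance, the vector (3, -4) has \( {\ell_1} \) norm 7, \( {\ell_2} \) norm 5, and \( {\ell_\infty} \) norm 4.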
Daniele Micciancio, Shafi Goldwasser

Chapter 5. Sphere Packings

Abstract
In this chapter we study the following question. What is the maximum possible number of lattice points inside an n-dimensional sphere of radius ρ, given that the minimum distance between lattice points (or, equivalently, the length of the shortest nonzero vector in the lattice) is at least λ? Clearly the answer depends on the ratio λ/ρ only, as both the lattice and the sphere can be scaled up or down preserving λ/ρ. If we drop the requirement that the points belong to a lattice, and allow them to be an arbitrary set of points with large minimum distance (say λ = 2), we get the following sphere packing problem (see Figure 5.1): how many unit balls can be packed inside an n-dimensional sphere of radius R = 1 + ρ? Notice that since the unit balls are disjoint, their centers are at distance at least λ = 2 from each other. Moreover, since the unit balls are contained in a sphere of radius 1 + ρ, the centers of the balls are inside a sphere of radius ρ. We want to determine for which values of λ/ρ we can pack exponentially (in n) many points. Here, and in the rest of this chapter, “exponential” means a function of the form \( 2^{n^c} \) for some fixed constant c independent of n.
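As a toy two-dimensional instance of the counting question, one can enumerate the points of the integer lattice Z² (minimum distance λ = 1) inside a disk of radius ρ by brute force (the function name is ours):

```python
def lattice_points_in_disk(rho):
    """Count points of Z^2 (minimum distance lambda = 1) inside a closed
    disk of radius rho centered at the origin, by brute-force enumeration
    over the bounding square."""
    r = int(rho)
    return sum(1
               for x in range(-r, r + 1)
               for y in range(-r, r + 1)
               if x * x + y * y <= rho * rho)
```

For example, the disk of radius 1 contains 5 lattice points and the disk of radius 2 contains 13; how this count grows with the dimension n, for a fixed ratio λ/ρ, is exactly the question studied in this chapter.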
Daniele Micciancio, Shafi Goldwasser

Chapter 6. Low-Degree Hypergraphs

Abstract
The goal of this chapter is to prove Theorem 4.6. The theorem states that if \( Z \subseteq {\left\{ {0,1} \right\}^k} \) is a set of binary vectors, each containing exactly h ones, and \( \left| Z \right| \geqslant h!\,{k^{4\sqrt{h}\,n/\epsilon}} \), then there exists a matrix \( T \in {\left\{ {0,1} \right\}^{n \times k}} \) such that \( {\left\{ {0,1} \right\}^n} \subseteq T(Z) \), where T(Z) denotes the set \( \left\{ {Tz : z \in Z} \right\} \). In other words, for every \( x \in {\left\{ {0,1} \right\}^n} \) there exists a \( z \in Z \) satisfying \( x = Tz \). Moreover, Theorem 4.6 states that if \( T \in {\left\{ {0,1} \right\}^{n \times k}} \) is chosen at random by setting each entry to 1 independently with probability \( p = \epsilon/\left( {4hn} \right) \), then \( {\left\{ {0,1} \right\}^n} \subseteq T(Z) \) with high probability (namely, probability at least \( 1 - 6\epsilon \)). In Chapter 4, Theorem 4.6 is used to prove the NP-hardness of approximating the shortest vector problem under RUR-reductions. However, the theorem has a purely combinatorial interpretation and it is better understood if reformulated in terms of hypergraphs, without any reference to integer lattices or matrices. A hypergraph is a pair (V, Z), where V is a finite set of vertices and Z is a collection of subsets of V, called hyperedges. If all the elements of Z have the same size, then we say that (V, Z) is regular, and the common size of all hyperedges is called the degree of the hypergraph.
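On toy instances, the covering property \( {\left\{ {0,1} \right\}^n} \subseteq T(Z) \) can be checked by brute force; the following sketch (our naming, not the book's) computes all integer products Tz and tests whether every binary vector is hit:

```python
from itertools import product

def covers(T, Z):
    """Check the covering property of Theorem 4.6: every binary vector
    x in {0,1}^n equals the integer matrix-vector product Tz for some z in Z.
    T is given as a list of n rows of length k; Z as a list of k-tuples."""
    n = len(T)
    images = set()
    for z in Z:
        x = tuple(sum(T[i][j] * z[j] for j in range(len(z))) for i in range(n))
        images.add(x)
    return all(x in images for x in product((0, 1), repeat=n))
```

For instance, with n = 1, k = 2, and Z the set of weight-1 vectors {(1, 0), (0, 1)}, the matrix T = [[1, 0]] covers {0, 1}, while T = [[1, 1]] does not (it never produces 0).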
Daniele Micciancio, Shafi Goldwasser

Chapter 7. Basis Reduction Problems

Abstract
In the first chapters of this book we studied the shortest vector problem and the closest vector problem both from an algorithmic and a computational complexity point of view. In fact, the algorithms presented in Chapter 2 to approximately solve SVP and CVP do somehow more than just find an approximately shortest lattice vector, or a lattice vector approximately closest to a given target. For example, the LLL algorithm, on input a lattice basis B, outputs an equivalent basis B′ such that not only is \( b'_1 \) an approximately shortest lattice vector, but also all the other basis vectors \( b'_i \) are not too long. Moreover, LLL reduced bases have relatively good geometric properties that make them useful to solve other lattice problems. In particular, we have seen that if an LLL reduced basis is used, then the nearest plane algorithm always finds a lattice vector approximately closest to any input target point. The problem of finding a “good” basis for a given lattice is generically called the basis reduction problem. Unfortunately, there is no unique, clearly defined notion of what makes a basis good, and several different definitions of reduced basis have been suggested. In this chapter we consider the most important notions of basis reduction, define approximation problems naturally associated to such notions, and study the relations between these and other lattice problems.
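The nearest plane algorithm mentioned above can be sketched in plain Python (real arithmetic, basis given as rows; helper names are ours):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the basis rows (no normalization)."""
    Bs = []
    for b in B:
        v = list(b)
        for bj in Bs:
            m = dot(b, bj) / dot(bj, bj)
            v = [vi - m * bji for vi, bji in zip(v, bj)]
        Bs.append(v)
    return Bs

def nearest_plane(B, t):
    """Babai's nearest plane algorithm: repeatedly round the target onto
    the nearest translated sublattice hyperplane, from the last basis
    vector down to the first, and return the resulting lattice vector.
    The approximation quality depends on how reduced B is (e.g. LLL reduced)."""
    Bs = gram_schmidt(B)
    b = list(t)           # residual target
    v = [0.0] * len(t)    # accumulated lattice vector
    for i in range(len(B) - 1, -1, -1):
        c = round(dot(b, Bs[i]) / dot(Bs[i], Bs[i]))
        v = [vi + c * bi for vi, bi in zip(v, B[i])]
        b = [bi - c * Bi for bi, Bi in zip(b, B[i])]
    return v
```

On the trivially reduced basis of Z², the algorithm simply rounds each coordinate of the target to the nearest integer.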
Daniele Micciancio, Shafi Goldwasser

Chapter 8. Cryptographic Functions

Abstract
Generally speaking, the goal of cryptography is the design of systems that withstand any malicious attempt to subvert them. The archetypical problem in cryptography is that of secret communication: two parties want to communicate with each other, and keep the conversation private, i.e., no one, other than the two legitimate parties, should be able to get any information about the messages being exchanged. This secrecy goal can be achieved if the two parties share a common random key that is known only to them. Then, in order to privately send a message, one can encode it using the key, and send the enciphered message to the other party over a public communication network. The receiver uses the shared key to invert the encoding procedure, and recover the original message. The original message, the enciphered message, and the encoding and decoding processes are usually called cleartext, ciphertext, encryption and decryption. An encryption scheme is secure if recovering (any partial information about) the cleartext from the ciphertext without knowing the secret key is a computationally infeasible task. So, an adversary intercepting the ciphertext won’t learn anything about the message, other than the fact that a message was sent, and possibly the length of the message.
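The textbook example of such a shared-key scheme is the one-time pad, where encryption and decryption are the same XOR operation (a minimal sketch we add for illustration; it is not a scheme from this chapter):

```python
import secrets

def keygen(n):
    """Fresh shared random key of n bytes, known only to the two parties."""
    return secrets.token_bytes(n)

def encrypt(key, cleartext):
    """XOR the cleartext with the key.  Secrecy requires len(key) >=
    len(cleartext) and a fresh key for every message."""
    return bytes(k ^ m for k, m in zip(key, cleartext))

decrypt = encrypt  # XOR is its own inverse
```

The receiver, holding the same key, recovers the cleartext by applying the identical operation to the ciphertext.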
Daniele Micciancio, Shafi Goldwasser

Chapter 9. Interactive Proof Systems

Abstract
A natural question associated with the SVP and the CVP search problems is whether one can recognize the optimality of solutions once they are found.
Daniele Micciancio, Shafi Goldwasser

Backmatter
