## Chapter 1 Lattice

### 1.1 Basic Notions on Lattices

**Definition 1.1.1.** Let $\{\underline{b}_{1},…,\underline{b}_{n}\}$ be a linearly independent set of (row) vectors in $R^{m}$ ($m\geq n$). The lattice generated by $\{\underline{b}_{1},…,\underline{b}_{n}\}$ is the set

$$L=\left\{\sum_{i=1}^{n}a_{i}\underline{b}_{i} : a_{i}\in Z\right\}$$

of integer linear combinations of the $\underline{b}_{i}$. The vectors $\underline{b}_{1},…,\underline{b}_{n}$ are called a lattice basis. The rank of the lattice is n and its dimension is m. If n = m then L is said to be a full-rank lattice.

**Lemma 1.1.2.** Two $n\times m$ matrices B and B' generate the same lattice L if and only if B and B' are related by a unimodular matrix, i.e., $B'=UB$ where U is an $n\times n$ matrix with integer entries and determinant ±1.

**Definition 1.1.3.** Let L be a lattice in $R^{m}$ of rank n with basis matrix B. The **Gram matrix** of B is $BB^{T}$. This is an $n\times n$ matrix whose (i, j)th entry is $\langle\underline{b}_{i},\underline{b}_{j}\rangle$.

**Lemma 1.1.4.** Let L be a lattice in $R^{m}$ of rank n with basis matrix B. Then $det(L)=\sqrt{det(BB^{T})}$.
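
A quick numeric sanity check of Lemma 1.1.4 (the rank-2 basis in $R^{3}$ below is an arbitrary toy example):

```python
from math import isqrt

# A rank-2 lattice in R^3: the rows of B are the basis vectors b_1, b_2.
B = [[1, 0, 1],
     [0, 2, 1]]

def gram(B):
    """Gram matrix B B^T: entry (i, j) is <b_i, b_j>."""
    return [[sum(bi * bj for bi, bj in zip(r, s)) for s in B] for r in B]

G = gram(B)                                    # [[2, 1], [1, 5]]
det_G = G[0][0] * G[1][1] - G[0][1] * G[1][0]  # det(B B^T) = 9
det_L = isqrt(det_G)                           # det(L) = sqrt(9) = 3
```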

**Definition 1.1.5.** Let $L\subset R^{m}$ be a lattice of rank n. The **successive minima** of L are $\lambda_{1},…,\lambda_{n}\in R$ such that, for $1\leq i\leq n$, $\lambda_{i}$ is minimal such that there exist i linearly independent vectors $\underline{v}_{1},…,\underline{v}_{i}\in L$ with $\left|\underline{v}_{j}\right|\leq \lambda_{i}$ for $1\leq j\leq i$. It follows that $0<\lambda_{1}\leq\lambda_{2}\leq…\leq\lambda_{n}$. Sometimes there exists a basis consisting of vectors whose lengths are equal to the successive minima, but not always.

### 1.2 The Hermite and Minkowski Bounds

**Theorem 1.2.1.** (Minkowski) Let L be a lattice of rank n in $R^{n}$ with successive minima $\lambda_{1},…,\lambda_{n}$ for the Euclidean norm. Then

$$\left(\prod_{i=1}^{n}\lambda_{i}\right)^{\frac{1}{n}}\leq\sqrt{n}\ det(L)^{\frac{1}{n}}.$$

In particular, $\lambda_{1}\leq\sqrt{n}\ det(L)^{\frac{1}{n}}$.

### 1.3 Computational Problems in Lattices

**Definition 1.3.1.** Let L be a lattice in $Z^{m}$.

- The shortest vector problem (SVP) is the computational problem: given a basis matrix B for L, compute a non-zero vector $\underline{v}\in L$ such that $\left|\underline{v}\right|$ is minimal (i.e., $\left|\underline{v}\right|=\lambda_{1}$).
- The closest vector problem (CVP) is the computational problem: given a basis matrix B for L and a vector $\underline{\omega}\in Q^{m}$ (one can work with high-precision approximations in $R^{m}$, but this is essentially still working in $Q^{m}$), compute $\underline{v}\in L$ such that $\left|\underline{\omega}-\underline{v}\right|$ is minimal.
- Other variants, such as approximate versions of SVP and CVP.
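
For intuition, both problems can be solved by brute force in a toy two-dimensional lattice (this enumeration is hopeless in cryptographically relevant dimensions; the basis, coefficient range, and target below are our own illustrative choices):

```python
from itertools import product

# A tiny full-rank lattice in Z^2 (illustrative basis). Brute force is only
# feasible because the dimension and coefficient range are tiny.
B = [(2, 0), (1, 2)]

def vec(c):
    """The lattice vector c[0]*b_1 + c[1]*b_2."""
    return tuple(sum(ci * bi[k] for ci, bi in zip(c, B)) for k in range(2))

def dist(u, v=(0, 0)):
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5

# SVP: shortest non-zero lattice vector over coefficients |c_i| <= 5.
shortest = min((vec(c) for c in product(range(-5, 6), repeat=2) if any(c)),
               key=dist)

# CVP: lattice vector closest to a target w.
w = (3, 1)
closest = min((vec(c) for c in product(range(-5, 6), repeat=2)),
              key=lambda v: dist(v, w))
```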

## Chapter 2 Lattice Basis Reduction

### 2.1 LLL-Reduced Lattice Bases

**The Goal of Lattice Reduction.** Obtain a basis B in which the lengths of the Gram-Schmidt vectors do not decrease too quickly.

This roughly means that the basis vectors are somewhat orthogonal to each other.

**Definition 2.1.1.** Let $\{\underline{b}_{1},…,\underline{b}_{n}\}$ be linearly independent in $R^{m}$ and let $\{\tilde{\underline{b}}_{1},…,\tilde{\underline{b}}_{n}\}$ be the Gram-Schmidt orthogonalisation. Let $\mu_{i,j}=\frac{\langle\underline{b}_{i},\tilde{\underline{b}}_{j}\rangle}{\left|\tilde{\underline{b}}_{j}\right|^{2}}$ for $1\leq j<i\leq n$ be the coefficients from the Gram-Schmidt process. Fix $\frac{1}{4}<\delta<1$. The (ordered) basis is **LLL-reduced** (with factor $\delta$) if the following conditions hold:

- (Size reduced) All $|\mu_{i,j}|\leq \frac{1}{2}$.
- (Lovász condition) $(\delta-\mu_{i+1,i}^{2})\left|\tilde{\underline{b}}_{i}\right|^{2}\leq \left|\tilde{\underline{b}}_{i+1}\right|^{2}$ for $1\leq i<n$ (it is traditional to choose $\delta=\frac{3}{4}$).

The size-reduction condition can be enforced by an integer variant of Gaussian elimination (subtract suitable integer multiples of $\underline{b}_{j}$, $j<i$, from $\underline{b}_{i}$), while a violated Lovász condition is repaired by swapping $\underline{b}_{i}$ and $\underline{b}_{i+1}$.
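
These two operations translate directly into the LLL algorithm: size-reduce $\underline{b}_{k}$ against the earlier vectors, then either advance or swap depending on the Lovász test. A minimal exact-arithmetic sketch in Python (textbook version, not optimised; function and variable names are ours):

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """A minimal LLL sketch with exact rational arithmetic."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gso(b):
        """Gram-Schmidt vectors b*_i and coefficients mu[i][j]."""
        bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bs[j]) / dot(bs[j], bs[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bs[j])]
            bs.append(v)
        return bs, mu

    k = 1
    while k < n:
        bs, mu = gso(b)
        # Size reduction: subtract integer multiples of earlier vectors.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bs, mu = gso(b)
        # Lovász condition: advance if it holds, otherwise swap and step back.
        if dot(bs[k], bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bs[k - 1], bs[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]
```

For real workloads one would use a floating-point implementation such as fplll/fpylll rather than this exact-arithmetic sketch.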

### 2.2 Generic Problems that Fall under the Scope of Lattice Reduction

#### 2.2.1 Pre-knowledge

**Lemma 2.2.1.** Assume $\{\underline{b}_{1},…,\underline{b}_{n}\}$ is a family of vectors with integer coefficients in a t-dimensional space, with t < n. Let M denote an upper bound for the absolute values of all coefficients of the various $\underline{b}_{i}$'s. There exists an integer relation

$$\sum_{i=1}^{n}\lambda_{i}\underline{b}_{i}=\underline{0}$$

such that $\max_{i}|\lambda_{i}|\leq B$, where B is of order $(2nM)^{\frac{t}{n-t}}$ (a bound of this shape follows from a pigeonhole argument on the sums $\sum_{i}\lambda_{i}\underline{b}_{i}$ with $0\leq\lambda_{i}\leq B$).

#### 2.2.2 Ordinary Case

Given a family of integer vectors $\{\underline{b}_{1},…,\underline{b}_{n}\}$ in a t-dimensional space (t < n), there exists an integer relation

$$\sum_{i=1}^{n}\lambda_{i}\underline{b}_{i}=\underline{0}.$$

We construct the lattice given by the rows of the following matrix:

$$\begin{pmatrix}K\underline{b}_{1}&1&0&\cdots&0\\K\underline{b}_{2}&0&1&\cdots&0\\\vdots&\vdots&&\ddots&\\K\underline{b}_{n}&0&0&\cdots&1\end{pmatrix}$$

where K is a well chosen constant; each row consists of $K\underline{b}_{i}$ (t coordinates) followed by the i-th unit vector (n coordinates).

K should be large enough to ensure that the first vector of the reduced basis has zero components in its upper part corresponding to the first t coordinates, where t is the dimension of the $\underline{b}_{i}$.
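
A small sketch of the construction (the vectors, the relation, and the value of K are illustrative assumptions): an integer combination carrying a relation zeroes the upper part and leaves the relation coefficients in the lower part.

```python
K = 10**6                      # "well chosen": larger than any expected lower part
b = [(1, 3), (2, 6)]           # toy vectors with the relation 2*b_1 - b_2 = 0
t, n = len(b[0]), len(b)

# Rows of the relation lattice: (K*b_i | e_i).
rows = [[K * x for x in b[i]] + [1 if j == i else 0 for j in range(n)]
        for i in range(n)]

lam = (2, -1)                  # the relation coefficients
v = [sum(lam[i] * rows[i][c] for i in range(n)) for c in range(t + n)]
# v has zero upper part and the relation (2, -1) in its lower part
```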

#### 2.2.3 Modular Case

We now discuss the case of numbers **mod m**. The basic problem is still how one can force lattice reduction to deal with modular relations. The matrix we construct is:

$$\begin{pmatrix}KB&I_{n}\\KmI&0\end{pmatrix}$$

where B is the $n\times t$ matrix whose rows are the $\underline{b}_{i}$ and I is a t-dimensional **identity matrix**, with t the dimension of the $\underline{b}_{i}$. The added rows $Km\underline{e}_{j}$ let the reduction subtract multiples of m in the first t coordinates, i.e., they force reduction of numbers modulo m.
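
A toy sketch of this matrix (here with scalars, i.e. t = 1; modulus, vectors, and K are illustrative assumptions): subtracting the right multiple of the Km row clears the upper part of a modular relation exactly.

```python
K, m = 10**6, 7
b = [(3,), (5,)]               # toy scalars mod m, with 1*3 + 5*5 = 28 ≡ 0 (mod 7)
t, n = 1, 2

# Rows (K*b_i | e_i), plus the extra rows (K*m*e_j | 0).
rows  = [[K * x for x in b[i]] + [1 if j == i else 0 for j in range(n)]
         for i in range(n)]
rows += [[K * m if c == j else 0 for c in range(t)] + [0] * n for j in range(t)]

# Modular relation 1*b_1 + 5*b_2 ≡ 0 (mod m): 28 = 4*7, so subtract 4 times
# the K*m row to clear the upper part.
v = [rows[0][c] + 5 * rows[1][c] - 4 * rows[2][c] for c in range(t + n)]
```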

#### 2.2.4 Knapsack problems

In cryptographic scenarios, we know that a relation exists between the given elements of the knapsack $a_{1},…,a_{n}$ and the target sum $s=\sum^{n}_{i=1}\epsilon_{i}a_{i}$ with $\epsilon_{i}\in\{0,1\}$: the vector $(\epsilon_{1},…,\epsilon_{n},-1)$ satisfies $\sum_{i}\epsilon_{i}a_{i}-s=0$.

The Euclidean size of this relation is about $\sqrt{\alpha n}$, where $\alpha$ is the proportion of ones in the relation. Moreover, the $\epsilon_{i}$ can be treated as random, so we set $\alpha=\frac{1}{2}$ here.

The subset sum problem can be converted into SVP, and then often solved by the LLL algorithm, using the following matrix:

$$\begin{pmatrix}1&&&Ka_{1}\\&\ddots&&\vdots\\&&1&Ka_{n}\\\frac{1}{2}&\cdots&\frac{1}{2}&Ks\end{pmatrix}$$

The reason for using $\frac{1}{2}$ rather than 0 in the last row is that the target vector $\sum_{i}\epsilon_{i}(\text{row } i)-(\text{last row})=(\epsilon_{1}-\frac{1}{2},…,\epsilon_{n}-\frac{1}{2},0)$ then has all entries $\pm\frac{1}{2}$, i.e., it is as short as possible, so that we can find it by solving SVP.

But the density of the knapsack should be low enough:

$$d(a)=\frac{n}{\log_{2}\max_{i}a_{i}}.$$

It is said that "almost all" subset sum problems with d(a) < 0.645 can be solved as SVP by the LLL algorithm, though the difficulty also grows with the size of n.
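
A sketch of the construction on a toy instance (weights, solution, and K are our own illustrative choices): the vector encoding the solution has all upper entries $\pm\frac{1}{2}$ and a zero last coordinate, hence squared length n/4.

```python
from fractions import Fraction

a = [3, 7, 12, 25]             # toy knapsack weights
eps = [1, 0, 1, 1]             # hidden 0/1 solution
s = sum(e * ai for e, ai in zip(eps, a))   # target sum
n, K = len(a), 10**6
half = Fraction(1, 2)

# Rows: (e_i | K*a_i) for each weight, then (1/2, ..., 1/2 | K*s).
rows = [[Fraction(int(i == j)) for j in range(n)] + [Fraction(K * a[i])]
        for i in range(n)]
rows.append([half] * n + [Fraction(K * s)])

# The short vector encoding the solution: sum(eps_i * row_i) - last row.
v = [sum(eps[i] * rows[i][c] for i in range(n)) - rows[n][c]
     for c in range(n + 1)]
```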

### 2.3 Examples

#### 2.3.1 Cryptanalysis of Knuth’s truncated LCG

A linear congruential generator (LCG) is defined by $x_{i+1}=(ax_{i}+b)\bmod m$.

In case all the bits of the successive $x_{i}$'s are announced, the sequence becomes exactly predictable even if the modulus, the multiplier and the increment are not known. This is a result of J. Boyar.

The version we introduce (Knuth's) extends the initial method to the case where a small proportion of the lower bits is discarded. Moreover, the results cover the case where m and a are unknown parameters.

**First step of the algorithm**

We let $\nu$ be the number of bits of the modulus m. If we output a proportion $\alpha$ of the bits, we can write

$$x_{i}=2^{\beta\nu}y_{i}+z_{i},$$

where $\beta=1-\alpha$, $y_{i}$ consists of the leading bits of $x_{i}$ and $z_{i}$ of the trailing bits (so $0\leq z_{i}<2^{\beta\nu}$).
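
The split can be sketched as follows (the generator parameters and the number of discarded bits are illustrative assumptions):

```python
# Toy truncated LCG x_{i+1} = (a*x_i + b) mod m; all parameters illustrative.
a, b, m = 1103515245, 12345, 2**31
beta_bits = 12                      # beta*nu: number of discarded trailing bits

x, xs = 123456789, []
for _ in range(8):
    x = (a * x + b) % m
    xs.append(x)

ys = [xi >> beta_bits for xi in xs]          # announced leading bits y_i
zs = [xi % (1 << beta_bits) for xi in xs]    # hidden trailing bits z_i
# By construction x_i = 2^{beta*nu} * y_i + z_i.
```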

Our algorithm is more accurately described as a sequence of different algorithms depending on a parameter *t*.

We let $V_{i}$ be the element of $Z^{t}$ of consecutive differences of announced values,

$$V_{i}=(y_{i+1}-y_{i},\ y_{i+2}-y_{i+1},\ …,\ y_{i+t}-y_{i+t-1}).$$

Applying the techniques of section 2.2.2, we can find a linear relation

$$\sum_{i}\lambda_{i}V_{i}=\underline{0}$$

whose coefficients are moderate integers. More precisely, it follows from Lemma 2.2.1 that such a relation exists with $|\lambda_{i}|\leq B$, where B involves the bound $M\approx m^{\alpha}$ on the entries of the $V_{i}$; by taking the number n of vectors large enough, B can be made as small as $m^{\delta}$ for any $\delta>0$.

We now consider the (unknown) vectors $W_{i}$ defined by

$$W_{i}=(x_{i+1}-x_{i},\ x_{i+2}-x_{i+1},\ …,\ x_{i+t}-x_{i+t-1})$$

and we let

$$U=\sum_{i}\lambda_{i}W_{i}.$$

Since $x_{i+j+1}-x_{i+j}=a^{j}(x_{i+1}-x_{i})\bmod m$, all the vectors $W_{i}$ belong to the lattice L generated by the rows of the following matrix:

$$\begin{pmatrix}1&a&a^{2}&\cdots&a^{t-1}\\0&m&0&\cdots&0\\\vdots&&\ddots&&\vdots\\0&0&\cdots&0&m\end{pmatrix}$$

It is easily seen that $det(L)=m^{t-1}$, hence the expected size of short vectors in L is around $\sqrt{t}\,m^{\frac{t-1}{t}}$ (Minkowski's theorem). Since U belongs to L and is of size $O(m^{\beta+\delta})$ for any $\delta>0$ (because $\sum_{i}\lambda_{i}V_{i}=0$, each coordinate of U is a combination $\sum_{i}\lambda_{i}(z_{i+j+1}-z_{i+j})$ of small trailing-bit differences), U is unusually short as soon as $\beta<\frac{t-1}{t}$, i.e., $\alpha>\frac{1}{t}$. Such a vector has to be **zero**: heuristically, the $\lambda_{i}$ were produced from the $V_{i}$ by the LLL algorithm, independently of the trailing bits.

Now we have U = 0. Since $W_{i}\equiv(x_{i+1}-x_{i})(1,a,…,a^{t-1})\pmod m$ and $x_{i+1}-x_{i}\equiv a^{i}(x_{1}-x_{0})\pmod m$, we can write U as

$$U\equiv(x_{1}-x_{0})\left(\sum_{i}\lambda_{i}a^{i}\right)(1,a,…,a^{t-1})\pmod m.$$

As soon as $x_{1}-x_{0}$ is prime to m (i.e., unless $x_{0}$ is a bad seed), we get that the polynomial $P(x)=\sum\lambda_{i}x^{i}$ vanishes at a modulo m. This is precisely what we want from the first step.

**Second step of the algorithm**

After applying part 1 several times, we come out with a sequence $P_{1},…,P_{r}$ of polynomials of degree n, each of them vanishing at a modulo m. Now, if we identify polynomials of degree n with elements of $Z^{n+1}$ (via their coefficients), we see that the polynomials that **vanish at a modulo m** form a lattice L generated by the sequence

$$x-a,\ x(x-a),\ …,\ x^{n-1}(x-a),$$

which all vanish at a modulo m, and by the constant polynomial m. A basis is given by the row vectors below (coefficients of $x^{0},…,x^{n}$):

$$\begin{pmatrix}m&0&0&\cdots&0\\-a&1&0&\cdots&0\\0&-a&1&\cdots&0\\\vdots&&\ddots&\ddots&\\0&\cdots&0&-a&1\end{pmatrix}$$

so that $det(L)=m$. Now, if the $P_{i}$'s generate the lattice, then one can apply lattice reduction to them, obtain a basis of the lattice and **compute the determinant**, which equals m. Based on experiments, we claim that such an algorithm actually discloses m.
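
A quick check that this basis behaves as claimed (modulus, multiplier and degree below are illustrative; the determinant routine is generic exact Gaussian elimination):

```python
from fractions import Fraction

m, a, n = 65537, 4321, 5        # illustrative modulus, multiplier and degree

# Rows = coefficient vectors (x^0,...,x^n) of m, x-a, x(x-a), ..., x^{n-1}(x-a).
rows = [[m] + [0] * n]
for i in range(1, n + 1):
    r = [0] * (n + 1)
    r[i - 1], r[i] = -a, 1
    rows.append(r)

def det(M):
    """Determinant via exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    d = Fraction(1)
    for i in range(len(M)):
        p = next(r for r in range(i, len(M)) if M[r][i] != 0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, len(M)):
            f = M[r][i] / M[i][i]
            M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return d
# det(rows) == m, and every basis polynomial vanishes at a modulo m.
```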

We finally say a word on recovering a from m. We twist the lattice L generated by $P_{i}s$ and constant m by multiplying all coefficients of $degree\geq 2$ by a large constant K and we apply lattice reduction. We usually let $K\geq m2^\frac{n}{2}$, then the sublattice of L generated by m and x-a will be disclosed: this is because the shortest two vectors of L is of size m and any polynomial of the lattice with $degree\geq 2$ exceeds this size by a factor $\geq 2^{\frac{n}{2}}$. So we can solve polynomial with $degree\leq 2$ to get a.