Published in: Designs, Codes and Cryptography 6/2023

Open Access 29.01.2023

Unique optima of the Delsarte linear program

Author: Rupert Li



Abstract

The Delsarte linear program is used to bound the size of codes given their block length n and minimal distance d by taking a linear relaxation from codes to quasicodes. We study for which values of \((n,d)\) this linear program has a unique optimum: while we show that it does not always have a unique optimum, we prove that it does if \(d>n/2\) or if \(d \le 2\). Introducing the Krawtchouk decomposition of a quasicode, we prove there exist optima to the \((n,2e)\) and \((n-1,2e-1)\) linear programs that have essentially identical Krawtchouk decompositions, revealing a parity phenomenon among the Delsarte linear programs. We generalize the notion of extending and puncturing codes to quasicodes, from which we see that this parity relationship is given by extending/puncturing. We further characterize these pairs of optima, in particular demonstrating that they exhibit a symmetry property, effectively halving the number of decision variables.
Notes
Communicated by T. Helleseth.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

The practical problem of communicating over a noisy channel, whose study was initiated by Hamming [9], can be modeled by the problem of choosing as many words as possible for our code such that no two words are less than d distance apart. Let \({\mathbb {F}}_q\) denote an alphabet with q elements, and let \(|x-y|\) denote the Hamming distance between words \(x,y \in {\mathbb {F}}_q^n\), i.e., the number of indices for which the words have different letters. This notation suggests \({\mathbb {F}}_q\) has the additional structure of a finite field, which is often used to describe certain particularly elegant codes, but is not necessary for the definition of the problem. However, we will pick a distinguished word \(0 \in {\mathbb {F}}_q^n\) and define the weight of x, denoted |x|, to be \(|x-0|\).
A code of length n is then simply a nonempty subset \({\mathcal {C}}\subseteq {\mathbb {F}}_q^n\). Its minimal distance d is given by
$$\begin{aligned} d = \min _{\begin{array}{c} x,y \in {\mathcal {C}} \\ x \ne y \end{array}} |x-y|. \end{aligned}$$
The value of d corresponds to how error-resistant the code is: if we only use words in \({\mathcal {C}}\) rather than arbitrary words in \({\mathbb {F}}_q^n\), then if up to \(d-1\) letters are changed (perhaps due to an error in storage, or due to communication over a noisy transmission channel), it is impossible for one word to be changed to another valid word, so any such modified word will be detected as a word not in the code. Thus, we say that the code detects \(d-1\) errors, and similarly it corrects \(\lfloor (d-1)/2\rfloor \) errors, in the sense that if at most \(\lfloor (d-1)/2\rfloor \) errors occur, the original word can be determined.
Immediately, a tradeoff manifests itself: in order to transmit more information per word, one would like the code to have larger size \(|{\mathcal {C}}|\), i.e., the number of words in the code, but this causes the minimal distance to decrease, reducing the code’s resistance to errors. Hence, the basic question of codes is, given length n, alphabet size q, and a lower bound for the minimal distance d, what is the maximum possible size of the code?
One may view this problem as the sphere-packing problem in Hamming space \({\mathbb {F}}_q^n\) rather than Euclidean space \({\mathbb {R}}^n\). The sphere-packing problem is the problem of how to most densely pack unit balls into \({\mathbb {R}}^n\) without overlap, or equivalently the problem of picking "as many" points in \({\mathbb {R}}^n\) as possible such that the minimal distance d is at least 2, where "as many" refers to the density of points contained in a closed ball as the radius of this ball goes to infinity. It has been solved for \(n \le 3\) (see Hales [7, 8]), as well as famously for \(n=8\) by Viazovska [12] and for \(n=24\) by Cohn, Kumar, Miller, Radchenko, and Viazovska [2]. One complication that arises in Hamming space compared to Euclidean space is that due to the continuity of \({\mathbb {R}}^n\), the choice of the lower bound on minimal distance d does not matter as the space can be dilated appropriately, but this is not the case in Hamming space, where the choice of d nontrivially changes the structure of the largest codes. For more background on sphere-packing, error-correcting codes, and their connections, we refer readers to Conway and Sloane [3].
We now introduce concepts that will be useful in addressing the question of the largest possible code given n and d, where we fix \(q=2\) for the remainder of the paper. The j-th Krawtchouk polynomial is defined by
$$\begin{aligned} K_j(i;n) = \sum _{k=0}^j (-1)^k \left( {\begin{array}{c}i\\ k\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-k\end{array}}\right) , \end{aligned}$$
which we typically write as \(K_j(i)\) as n will be clear from the context. We will use the convention that \(\left( {\begin{array}{c}a\\ b\end{array}}\right) =0\) if \(0 \le b \le a\) does not hold, so \(K_j(i;n)=0\) if \(j\not \in [0,n]\) or \(i\not \in [0,n]\), where we define \([a,b]=\{x\in {\mathbb {Z}}\mid a\le x \le b\}\). For a word \(x \in {\mathbb {F}}_2^n\) of weight i, the Krawtchouk polynomial \(K_j(i)\) is the sum of \((-1)^{\left<x,y\right>}\) over all words \(y \in {\mathbb {F}}_2^n\) of weight j, where \(\left<x,y\right>\) is the inner product of x and y over the finite field \({\mathbb {F}}_2\), which up to parity equals the number of indices for which x and y are both 1. From this interpretation, we typically consider i and j for \(0 \le i,j \le n\), so the Krawtchouk polynomials can be condensed into the Krawtchouk matrix K, the \((n+1)\times (n+1)\) matrix given by \(K_{ji} = K_j(i)\) for \(0 \le i,j \le n\). Hence, we let \(K_j\) denote the j-th row of K, representing the j-th Krawtchouk polynomial.
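As a sanity check on this sign-sum interpretation, the following Python sketch (function names are ours, not from the paper) compares the defining alternating sum against a brute-force sum of \((-1)^{\left<x,y\right>}\) over all weight-j words:

```python
from itertools import combinations
from math import comb

def binom(a, b):
    # convention from the text: 0 unless 0 <= b <= a
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    # K_j(i; n) via the defining alternating sum
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

def K_bruteforce(j, i, n):
    # sum of (-1)^<x,y> over all weight-j words y, for a fixed weight-i word x
    x = set(range(i))  # place the 1s of x in the first i positions
    return sum((-1) ** len(x & set(y)) for y in combinations(range(n), j))

n = 6
assert all(K(j, i, n) == K_bruteforce(j, i, n)
           for i in range(n + 1) for j in range(n + 1))
```

Note that by symmetry the choice of which i positions carry the 1s of x does not affect the brute-force sum.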
The distance distribution of a code \({\mathcal {C}}\subseteq {\mathbb {F}}_q^n\) is the vector \(A=(A_0,\dots ,A_n)\), where
$$\begin{aligned} A_i = \frac{1}{|{\mathcal {C}}|} \left| \left\{ (x,y)\in {\mathcal {C}}^2 : |x-y|=i\right\} \right| \end{aligned}$$
for \(0 \le i \le n\). The normalization factor of \(|{\mathcal {C}}|^{-1}\) ensures that \(A_0=1\) and \(\sum _{i=0}^n A_i = |{\mathcal {C}}|\). If a lower bound d on the minimal distance is specified, this corresponds to requiring \(A_i = 0\) for all \(i \in [d-1]\), where [a] denotes the set \(\{1,\dots ,a\}\). The support of \({\mathcal {C}}\) is \(S=\{i>0\mid A_i > 0\}\).
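For concreteness, a minimal sketch (ours) computing the distance distribution of the binary repetition code \(\{000,111\}\):

```python
from collections import Counter
from fractions import Fraction

def distance_distribution(code, n):
    # A_i = (1/|C|) * #{(x, y) in C^2 : |x - y| = i}
    counts = Counter(sum(a != b for a, b in zip(x, y)) for x in code for y in code)
    return [Fraction(counts.get(i, 0), len(code)) for i in range(n + 1)]

code = ["000", "111"]  # repetition code: n = 3, minimal distance d = 3
A = distance_distribution(code, 3)
assert A == [1, 0, 0, 1]     # A_0 = 1
assert sum(A) == len(code)   # sum of the A_i equals |C|
```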
Delsarte [4] proved that the distance distribution of any code must satisfy certain inequalities expressed in terms of Krawtchouk polynomials. This yields the Delsarte linear program, which, for a given pair of integers \((n,d)\) such that \(1 \le d \le n\), is given by
$$\begin{aligned} \max \quad&\sum _{i=0}^n A_i \nonumber \\ \text {such that} \quad&\sum _{i=0}^n A_i K_j(i) \ge 0 \quad \text {for all } j \in [n] \nonumber \\&A_i = 0 \quad \text {for all } i \in [d-1] \nonumber \\&A_0 = 1 \nonumber \\&A_i \ge 0 \quad \text {for all } i \in [n]. \end{aligned}$$
(1)
The inequalities in Eq. (1) are referred to as the Delsarte inequalities. Any code \({\mathcal {C}}\subseteq {\mathbb {F}}_2^n\) whose minimal distance is at least d has a distance distribution that is a feasible point of the Delsarte linear program, and as \(|{\mathcal {C}}|=\sum _{i=0}^n A_i\), we find the optimal objective value of the Delsarte linear program is an upper bound on \(|{\mathcal {C}}|\). As an arbitrary feasible solution A may not actually be realized as the distance distribution of a code, we refer to these points \(A=(A_0,\dots ,A_n)\) as quasicodes. Let the feasible region for this linear program, which is a convex polytope in \({\mathbb {R}}^{n-d+1}\) corresponding to variables \(A_d\) through \(A_n\), be denoted by P.
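The constraints of Eq. (1) can be checked mechanically; the following sketch (function names ours, exact integer arithmetic) tests whether a candidate vector lies in the feasible region P for a given d:

```python
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    # Krawtchouk polynomial K_j(i; n)
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

def is_quasicode(A, d):
    # checks every constraint of the Delsarte linear program (1)
    n = len(A) - 1
    return (A[0] == 1
            and all(A[i] == 0 for i in range(1, d))
            and all(A[i] >= 0 for i in range(1, n + 1))
            and all(sum(A[i] * K(j, i, n) for i in range(n + 1)) >= 0
                    for j in range(1, n + 1)))

assert is_quasicode([1, 0, 0, 1], d=3)      # repetition-code distribution
assert not is_quasicode([1, 0, 0, 3], d=3)  # violates the j = 1 Delsarte inequality
```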
The dual of the Delsarte linear program is given by
$$\begin{aligned} \min \quad&\sum _{j=0}^n c_j \left( {\begin{array}{c}n\\ j\end{array}}\right) \\ \text {such that} \quad&\sum _{j=0}^n c_j K_j(i) \le 0 \quad \text {for all } i \in [d,n] \\&c_j \ge 0 \quad \text {for all } j \in [n] \\&c_0 = 1. \end{aligned}$$
By complementary slackness, for any optimal quasicode \(A^*\) and optimal dual solution \(c^*\), for all \(i \in [d,n]\), if \(A^*_i > 0\) then \(\sum _{j=0}^n c^*_j K_j(i) = 0\). And for all \(j \in [n]\), if \(c^*_j > 0\) then \(\sum _{i=0}^n A^*_i K_j(i) = 0\).
The main results of this paper comprise Sects. 4 and 5. We introduce the Krawtchouk decomposition of a quasicode A, which is the vector \(b=(b_0,\dots ,b_n)\) given by
$$\begin{aligned} b_j = \frac{1}{\left( {\begin{array}{c}n\\ j\end{array}}\right) }\sum _{i=0}^n A_i K_j(i), \end{aligned}$$
so that for all i,
$$\begin{aligned} A_i = \left( {\begin{array}{c}n\\ i\end{array}}\right) (b_0 K_0(i) + \cdots + b_n K_n(i))/2^n. \end{aligned}$$
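These two formulas are mutually inverse, by Lemmas 2.1 and 2.3 below; a quick exact-arithmetic round trip (sketch ours) on the repetition-code distribution:

```python
from fractions import Fraction
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

def krawtchouk_decomposition(A):
    # b_j = (1 / C(n, j)) * sum_i A_i K_j(i)
    n = len(A) - 1
    return [sum(A[i] * K(j, i, n) for i in range(n + 1)) / Fraction(comb(n, j))
            for j in range(n + 1)]

def reconstruct(b):
    # A_i = C(n, i) * (b_0 K_0(i) + ... + b_n K_n(i)) / 2^n
    n = len(b) - 1
    return [comb(n, i) * sum(bj * K(j, i, n) for j, bj in enumerate(b)) / Fraction(2 ** n)
            for i in range(n + 1)]

A = [Fraction(x) for x in (1, 0, 0, 1)]  # repetition code, n = 3
b = krawtchouk_decomposition(A)
assert b == [2, 0, 2, 0]
assert reconstruct(b) == A
```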
We prove that there exist optima, i.e., feasible points achieving the optimal value, of the \((n,2e)\) and \((n-1,2e-1)\) Delsarte linear programs whose Krawtchouk decompositions agree on indices 0 through \(n-1\), inclusive. This reveals a previously unseen parity phenomenon in the Delsarte linear program. Moreover, this parity phenomenon manifests from the generalization of extending and puncturing codes to quasicodes. Two common practical operations performed on codes are extending a code by adding a parity check bit and puncturing a code by removing a bit. We introduce the generalization of these two operations to quasicodes, and show that solutions to the \((n-1,2e-1)\) and \((n,2e)\) Delsarte linear programs can be sent to each other by extending and puncturing, while still preserving the optimal value. Thus, extending an \((n-1,2e-1)\) optimum yields an \((n,2e)\) optimum, and puncturing an \((n,2e)\) optimum yields an \((n-1,2e-1)\) optimum. In particular, puncturing corresponds to truncating the Krawtchouk decomposition, proving the parity phenomenon.
The parity phenomenon suggests that quasicodes fundamentally possess the same structure as codes with respect to extending and puncturing. We additionally prove a symmetry phenomenon that shows the structure of even codes persists among quasicodes. When d is even, efficient codes are typically even, meaning the distance distribution A has even support \(S \subset 2{\mathbb {Z}}\). We prove that when d is even, the \((n,d)\) Delsarte linear program always has an even optimum, demonstrating that the evenness structure extends from codes to quasicodes. The quasicode A being even corresponds to the Krawtchouk decomposition b being symmetric, i.e., \(b_j=b_{n-j}\) for all j, and thus this proves our symmetry phenomenon.
In Sect. 2, we provide some preliminary properties of the Krawtchouk polynomials that will be important for later sections. In Sect. 3, we prove that if \(d>n/2\) or if \(d\le 2\), the Delsarte linear program has a unique optimum. We also show that the dual does not have a unique optimum in many cases, and present some examples in which the primal does not have a unique optimum. In Sect. 4, we define the Krawtchouk decomposition of a quasicode and then present the parity and symmetry phenomena. In Sect. 5, we generalize the notion of extending and puncturing codes to quasicodes, allowing us to prove the parity and symmetry phenomena via extending/puncturing quasicodes.

2 Preliminaries of the Krawtchouk polynomials

We now introduce numerous classically known properties of the Krawtchouk polynomials that will be useful throughout the paper.
Lemma 2.1
(Reciprocity of the Krawtchouk polynomials; see [11, p. 152]) For all \(0 \le i,j \le n\), we have \(\left( {\begin{array}{c}n\\ i\end{array}}\right) K_j(i) = \left( {\begin{array}{c}n\\ j\end{array}}\right) K_i(j)\).
The well-established basic properties in Lemma 2.2 directly follow from the definition of the Krawtchouk polynomials and Lemma 2.1.
Lemma 2.2
The Krawtchouk polynomials satisfy the following properties, for all \(0 \le i,j \le n\):
(1)
\(K_0(i) = 1\).
(2)
\(K_j(0) = \left( {\begin{array}{c}n\\ j\end{array}}\right) \).
(3)
\(K_{n-j}(i) = (-1)^i K_j(i)\).
(4)
\(K_j(n-i) = (-1)^j K_j(i)\).
Lemma 2.3
(Orthogonality of the Krawtchouk polynomials; see [11, p. 151]) For all \(0 \le j,k \le n\),
$$\begin{aligned} \frac{1}{2^n} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) K_j(i) K_k(i) = \left( {\begin{array}{c}n\\ j\end{array}}\right) \delta _{jk}, \end{aligned}$$
where \(\delta \) is the Kronecker delta function.
Lemma 2.3 allows one to recover the coefficients of a linear combination of Krawtchouk polynomials: if \(v=(v_0,\dots ,v_n)=b_0K_0 + \cdots + b_n K_n\), then
$$\begin{aligned} \frac{1}{2^n}\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) v_i K_j(i) = \left( {\begin{array}{c}n\\ j\end{array}}\right) b_j. \end{aligned}$$
It also implies \(K^2=2^n I\), as
$$\begin{aligned} (K^2)_{jk} = \sum _{i=0}^n K_j(i) K_i(k) = \sum _{i=0}^n \frac{\left( {\begin{array}{c}n\\ i\end{array}}\right) K_j(i) K_k(i)}{\left( {\begin{array}{c}n\\ k\end{array}}\right) } = 2^n \delta _{jk}. \end{aligned}$$
In particular, K is non-singular and the Krawtchouk polynomials \(K_j\) are linearly independent.
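Both Lemma 2.3 and the identity \(K^2=2^nI\) are easy to confirm numerically for small n (sketch ours):

```python
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

n = 5
Kmat = [[K(j, i, n) for i in range(n + 1)] for j in range(n + 1)]

# orthogonality (Lemma 2.3): sum_i C(n,i) K_j(i) K_k(i) = 2^n C(n,j) delta_jk
for j in range(n + 1):
    for k in range(n + 1):
        s = sum(comb(n, i) * Kmat[j][i] * Kmat[k][i] for i in range(n + 1))
        assert s == (2 ** n * comb(n, j) if j == k else 0)

# consequently K^2 = 2^n I
K2 = [[sum(Kmat[j][m] * Kmat[m][i] for m in range(n + 1)) for i in range(n + 1)]
      for j in range(n + 1)]
assert K2 == [[2 ** n * (i == j) for i in range(n + 1)] for j in range(n + 1)]
```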
The following result gives the column sums of the Krawtchouk matrix K.
Lemma 2.4
([11, p. 153]) For all n,
$$\begin{aligned} \sum _{j=0}^n K_j = \left( 2^n,0,0,\dots ,0\right) . \end{aligned}$$
The Krawtchouk polynomials also satisfy a three-term recurrence.
Lemma 2.5
([11, p. 152]) For all \(0 \le i,j \le n\),
$$\begin{aligned} (j+1)K_{j+1}(i)=(n-2i)K_j(i)-(n-j+1)K_{j-1}(i). \end{aligned}$$
While typically n is kept fixed, we present the following useful recurrence for Krawtchouk polynomials between block lengths n and \(n-1\).
Lemma 2.6
([1, Proposition 2.1(1)]) For all \(n \ge 1\) and \(0 \le i\le n-1\),
$$\begin{aligned} K_j(i;n)=K_j(i;n-1)+K_{j-1}(i;n-1) \end{aligned}$$
for any \(0 \le j \le n\).
In the reverse direction, we provide another relation moving from block length \(n-1\) to n.
Lemma 2.7
([1, Proposition 2.1(4)]) For all \(0 \le i,j \le n-1\),
$$\begin{aligned} 2K_j(i;n-1)=K_j(i;n)+K_j(i+1;n). \end{aligned}$$
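The column sums of Lemma 2.4 and the three recurrences of Lemmas 2.5 through 2.7 can all be verified by brute force (sketch ours; the empty sum makes our K return 0 for \(j<0\), matching the convention \(K_{-1}=0\)):

```python
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    # empty sum for j < 0, so K_{-1}(i; n) = 0 as in Lemma 2.6
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

n = 7
# Lemma 2.4: sum_j K_j = (2^n, 0, ..., 0)
col_sums = [sum(K(j, i, n) for j in range(n + 1)) for i in range(n + 1)]
assert col_sums == [2 ** n] + [0] * n

# Lemma 2.5: (j+1) K_{j+1}(i) = (n-2i) K_j(i) - (n-j+1) K_{j-1}(i)
for i in range(n + 1):
    for j in range(n + 1):
        assert (j + 1) * K(j + 1, i, n) == \
            (n - 2 * i) * K(j, i, n) - (n - j + 1) * K(j - 1, i, n)

# Lemma 2.6: K_j(i; n) = K_j(i; n-1) + K_{j-1}(i; n-1)
for i in range(n):
    for j in range(n + 1):
        assert K(j, i, n) == K(j, i, n - 1) + K(j - 1, i, n - 1)

# Lemma 2.7: 2 K_j(i; n-1) = K_j(i; n) + K_j(i+1; n)
for i in range(n):
    for j in range(n):
        assert 2 * K(j, i, n - 1) == K(j, i, n) + K(j, i + 1, n)
```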
It is known that the magnitude of \(K_j(i)\) over all integers \(0 \le i \le n\) is maximized at \(i=0\) and n (see, for example, Dette [5]). The following result shows that \(i=1\) and \(n-1\) are next largest, i.e., \(|K_j(i)|\) over all \(1 \le i \le n-1\) is maximized at \(i=1\) and \(n-1\), except when \(j=\frac{n}{2}\), in which case \(K_j(1)=\left( {\begin{array}{c}n\\ j\end{array}}\right) \frac{n-2j}{n}=0\).
Lemma 2.8
([6, Corollary 10]) If \(j \ne \frac{n}{2}\), then for all \(i \in [n-1]\),
$$\begin{aligned} |K_j(1)|=\left| \left( {\begin{array}{c}n\\ j\end{array}}\right) \frac{n-2j}{n}\right| \ge |K_j(i)|. \end{aligned}$$
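Both the closed form for \(K_j(1)\) and the maximality claim of Lemma 2.8 can be spot-checked for small n (sketch ours):

```python
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

for n in (7, 8, 9):
    for j in range(n + 1):
        if 2 * j == n:
            continue  # excluded case j = n/2, where K_j(1) = 0
        # closed form K_j(1) = C(n,j)(n-2j)/n, cleared of denominators
        assert n * K(j, 1, n) == comb(n, j) * (n - 2 * j)
        # |K_j(1)| >= |K_j(i)| for all i in [n-1]
        assert all(abs(K(j, 1, n)) >= abs(K(j, i, n)) for i in range(1, n))
```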

3 Uniqueness of optima

3.1 The upper half case

In this section, we prove that the Delsarte linear program has a unique optimum when \(2\lceil d/2\rceil >n-d\), roughly corresponding to d being at least n/2: precisely, \(2d>n\) when d is even, and \(2d\ge n\) when d is odd.
Theorem 3.1
If \(2\lceil d/2\rceil >n-d\), the Delsarte linear program has a unique optimum, namely the quasicode \(A^*\) given by
$$\begin{aligned} A^*_i = {\left\{ \begin{array}{ll} 1 &{} \text {if } i=0 \\ \frac{n}{2d-n} &{} \text {if } i=d \\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
when d is even, and
$$\begin{aligned} A^*_i = {\left\{ \begin{array}{ll} 1 &{} \text {if } i=0 \\ \frac{d+1}{2d-n+1} &{} \text {if } i=d \\ \frac{n-d}{2d-n+1} &{} \text {if } i = d+1 \\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
when d is odd.
Proof
We first address the even d case. The objective value of \(A^*\) is \(\frac{2d}{2d-n}\), and all of the constraints trivially hold except for the Delsarte inequalities, which become
$$\begin{aligned} \sum _{i=0}^n A_i^* K_j(i) = \left( {\begin{array}{c}n\\ j\end{array}}\right) +\frac{n}{2d-n}K_j(d) \ge 0 \end{aligned}$$
for all \(j \in [n]\), or equivalently
$$\begin{aligned} K_j(d) \ge \frac{n-2d}{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) . \end{aligned}$$
Applying reciprocity of the Krawtchouk polynomials yields
$$\begin{aligned} K_d(j) \ge \frac{n-2d}{n}\left( {\begin{array}{c}n\\ d\end{array}}\right) . \end{aligned}$$
As \(2d>n\), Lemma 2.8 implies this inequality holds for all \(j \in [n-1]\). For \(j=n\),
$$\begin{aligned} K_d(n)=(-1)^d K_d(0) = \left( {\begin{array}{c}n\\ d\end{array}}\right)> 0 > \frac{n-2d}{n}\left( {\begin{array}{c}n\\ d\end{array}}\right) , \end{aligned}$$
which completes the proof that \(A^*\) is feasible.
Consider the dual solution \(c^*\) given by
$$\begin{aligned} c^*_j = {\left\{ \begin{array}{ll} 1 &{} \text {if } j = 0 \\ \frac{2d-n+1}{(2d-n)(2d-n+2)} &{} \text {if } j = 1 \\ \frac{1}{(2d-n)(2d-n+2)} &{} \text {if } j = n-1 \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
It has dual objective of \(\frac{2d}{2d-n}\), so demonstrating it is a feasible solution proves \(A^*\) is optimal. As \(2d>n\), we find that \(c_j^* \ge 0\) for all \(j \in [n]\), so it remains to show that for all \(i \in [d,n]\),
$$\begin{aligned} \sum _{j=0}^n c_j^* K_j(i) \le 0. \end{aligned}$$
Note that \(K_0(i)=1\), \(K_1(i)=n-2i\), and \(K_{n-1}(i)=(-1)^i(n-2i)\), yielding
$$\begin{aligned} \sum _{j=0}^n c_j^* K_j(i) = 1 + (n-2i)\frac{2d-n+1}{(2d-n)(2d-n+2)} + (-1)^i (n-2i)\frac{1}{(2d-n)(2d-n+2)}. \end{aligned}$$
When i is even, as \(i \ge d\) this becomes
$$\begin{aligned} 1 + \frac{n-2i}{2d-n} \le 1+\frac{n-2d}{2d-n} = 0, \end{aligned}$$
and when i is odd, we have \(i \ge d+1\) as d is even, so the sum becomes
$$\begin{aligned} 1+\frac{n-2i}{2d-n+2} \le 1+\frac{n-2d-2}{2d+2-n} = 0, \end{aligned}$$
as desired. Notice that these constraints are tight if and only if \(i=d\) or \(d+1\).
Hence, \(A^*\) and \(c^*\) are optimal solutions to the primal and dual, respectively. By complementary slackness, as the dual constraint is strict for \(i > d+1\), any optimum to the primal must satisfy \(A_i = 0\) for all \(i >d+1\). Hence, \(A_0=1\), \(A_d\), and \(A_{d+1}\) are the only potentially nonzero values. As \(c_1^*,c_{n-1}^* > 0\), the Delsarte inequality must be tight for \(j=1\) and \(n-1\) for any primal optimum. These conditions become
$$\begin{aligned} n + (n-2d) A_d + (n-2d-2) A_{d+1}&= 0 \\ n + (n-2d) A_d - (n-2d-2) A_{d+1}&= 0, \end{aligned}$$
which requires \(A_{d+1}=0\) and \(A_d=\frac{n}{2d-n}\), proving that \(A^*\) is the unique optimum.
We now address the odd d case, where \(2d\ge n\). The objective value of \(A^*\) is \(\frac{2d+2}{2d-n+1}\), and all of the constraints trivially hold except for the Delsarte inequalities, which become
$$\begin{aligned} \sum _{i=0}^n A_i^* K_j(i) = \left( {\begin{array}{c}n\\ j\end{array}}\right) +\frac{d+1}{2d-n+1}K_j(d)+\frac{n-d}{2d-n+1}K_j(d+1) \ge 0 \end{aligned}$$
for all \(j \in [n]\). Multiplying by \(\frac{n!}{(d+1)!(n-d)!}\) yields the equivalent inequality
$$\begin{aligned} \left( {\begin{array}{c}n\\ d\end{array}}\right) \frac{K_j(d)}{2d-n+1} + \left( {\begin{array}{c}n\\ d+1\end{array}}\right) \frac{K_j(d+1)}{2d-n+1} \ge -\frac{n!}{(d+1)!(n-d)!}\left( {\begin{array}{c}n\\ j\end{array}}\right) , \end{aligned}$$
and after applying reciprocity of Krawtchouk polynomials, this becomes
$$\begin{aligned} \frac{K_d(j)}{2d-n+1}+\frac{K_{d+1}(j)}{2d-n+1} \ge -\frac{n!}{(d+1)!(n-d)!}. \end{aligned}$$
By Lemma 2.6, this is equivalent to
$$\begin{aligned} K_{d+1}(j;n+1) \ge -\frac{n!(2d-n+1)}{(d+1)!(n-d)!} = \left( {\begin{array}{c}n+1\\ d+1\end{array}}\right) \frac{(n+1)-2(d+1)}{n+1}. \end{aligned}$$
As \(d+1>\frac{n+1}{2}\), Lemma 2.8 implies this inequality holds for all \(j\in [n]\). Hence, \(A^*\) is a feasible solution.
Consider the dual solution \(c^*\) given by
$$\begin{aligned} c_j^* = {\left\{ \begin{array}{ll} 1 &{} \text {if } j = 0 \\ \frac{1}{2d-n+1} &{} \text {if } j\in \{1,n\} \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
It has dual objective of \(\frac{2d+2}{2d-n+1}\), so showing \(c^*\) is a feasible solution proves \(A^*\) is optimal. The assumption that \(2d \ge n\) ensures \(c_j^* \ge 0\) for all j, and so it remains to show that for all \(i \in [d,n]\),
$$\begin{aligned} \sum _{j=0}^n c_j^* K_j(i) \le 0. \end{aligned}$$
We have
$$\begin{aligned} \sum _{j=0}^n c_j^* K_j(i) = 1 + \frac{n-2i}{2d-n+1} + (-1)^i \frac{1}{2d-n+1}. \end{aligned}$$
When i is even, \(i \ge d+1\) so this becomes
$$\begin{aligned} 1 + \frac{n+1-2i}{2d-n+1} \le 1+\frac{n-2d-1}{2d-n+1} = 0, \end{aligned}$$
and when i is odd, as \(i \ge d\) we find
$$\begin{aligned} 1 + \frac{n-1-2i}{2d-n+1} \le 1+\frac{n-2d-1}{2d-n+1} = 0, \end{aligned}$$
as desired. These constraints are tight if and only if \(i=d\) or \(d+1\).
Thus, \(A^*\) and \(c^*\) are optimal solutions to the primal and dual, respectively. By complementary slackness, any primal optimum must additionally have \(A_i=0\) for all \(i>d+1\). As \(c_1^*,c_n^*>0\), the Delsarte inequality must be tight for \(j=1\) and n for any primal optimum, i.e.,
$$\begin{aligned} n + (n-2d) A_d + (n-2d-2) A_{d+1}&= 0 \\ 1 - A_d + A_{d+1}&= 0. \end{aligned}$$
Substituting the latter into the former yields
$$\begin{aligned} n + (n-2d)(A_{d+1}+1)+(n-2d-2)A_{d+1} = 2n-2d+(2n-4d-2)A_{d+1} = 0, \end{aligned}$$
whose solution is \(A_{d+1}=\frac{n-d}{2d-n+1}\). Requiring \(A_d=A_{d+1}+1\) yields \(A^*\) as the unique optimum. \(\square \)
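The optimum of Theorem 3.1 is straightforward to confirm computationally; the sketch below (exact rational arithmetic, names ours) checks primal feasibility and the claimed optimal objective value for a few Plotkin-range pairs:

```python
from fractions import Fraction
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

def theorem_3_1_optimum(n, d):
    # the quasicode A* of Theorem 3.1
    A = [Fraction(0)] * (n + 1)
    A[0] = Fraction(1)
    if d % 2 == 0:
        A[d] = Fraction(n, 2 * d - n)
    else:
        A[d] = Fraction(d + 1, 2 * d - n + 1)
        A[d + 1] = Fraction(n - d, 2 * d - n + 1)
    return A

for n, d in [(7, 4), (7, 5), (10, 6), (9, 5)]:
    assert 2 * ((d + 1) // 2) > n - d  # hypothesis 2*ceil(d/2) > n - d
    A = theorem_3_1_optimum(n, d)
    # all Delsarte inequalities hold
    assert all(sum(A[i] * K(j, i, n) for i in range(n + 1)) >= 0
               for j in range(1, n + 1))
    # claimed optimal objective value
    expected = (Fraction(2 * d, 2 * d - n) if d % 2 == 0
                else Fraction(2 * d + 2, 2 * d - n + 1))
    assert sum(A) == expected
```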
Example 3.2
For an example of a code realizing the optimum from Theorem 3.1, the binary simplex code [11, Ch. 1, §9], which is the dual of a Hamming code, has \(n=2^r-1\) and \(d=2^{r-1}\), where the code is supported at exactly this one distance, i.e., \(S=\{d\}\). It has size \(2^r\), and thus has distance distribution given by \(A_0=1\), \(A_d=2^r-1\), and \(A_i=0\) everywhere else. This matches the even case of Theorem 3.1, and puncturing this code, i.e., removing a bit from the code, corresponds to the odd case of Theorem 3.1.
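The \(r=3\) instance of this example can be generated directly: the simplex code's generator matrix has all nonzero vectors of \({\mathbb {F}}_2^r\) as columns (sketch ours):

```python
from itertools import product

r = 3
n = 2 ** r - 1  # 7
# columns of the generator matrix: binary expansions of 1, ..., 2^r - 1
cols = [tuple((m >> t) & 1 for t in range(r)) for m in range(1, n + 1)]
# codewords x * G over F_2, for all messages x in F_2^r
code = [tuple(sum(xi * ci for xi, ci in zip(x, col)) % 2 for col in cols)
        for x in product((0, 1), repeat=r)]

assert len(set(code)) == 2 ** r  # size 2^r = 8
weights = sorted(sum(w) for w in code)
# every nonzero codeword has weight d = 2^{r-1} = 4, so S = {d}
assert weights == [0] + [2 ** (r - 1)] * (2 ** r - 1)
```

Since the code is linear, its distance distribution equals its weight distribution: \(A_0=1\) and \(A_4=7\), matching \(\frac{n}{2d-n}=7\) in the even case of Theorem 3.1.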
Remark 3.3
The case \(2d>n\) is known as the Plotkin range. For codes, which must have integer size, the Plotkin bound (see [11, p. 43]) upper bounds the size of codes by \(2\left\lfloor \frac{d}{2d-n}\right\rfloor \) when d is even, and \(2\left\lfloor \frac{d+1}{2d-n+1}\right\rfloor \) when d is odd. These are the same bounds as for quasicodes from Theorem 3.1, except rounding down to the nearest even integer. Provided enough Hadamard matrices exist, the Plotkin bound is tight (see [11, p. 50]).
Delsarte [4, Theorem 14] provided an upper bound of \(\frac{2d}{2d-n}\) on the optimal objective value of the Delsarte linear program in the Plotkin range. Theorem 3.1 proves this bound is tight when d is even, and improves the bound to \(\frac{2d+2}{2d-n+1}<\frac{2d}{2d-n}\) when d is odd, which is tight for the optimal objective value of the Delsarte linear program.

3.2 Non-uniqueness of dual optimum

Studying the cases \(d=n\) and \(n-1\), we find that the dual of the Delsarte linear program does not generally have a unique optimum, i.e., a unique feasible point achieving the optimal value, for all pairs (nd).
Proposition 3.4
When \(d=n\), the dual of the Delsarte linear program has a unique optimum if and only if \(n \le 2\).
Proof
By complementary slackness with primal optimum \(A^*=(1,0,0,\dots ,0,1)\) from Theorem 3.1, a dual optimum must have
$$\begin{aligned} \sum _{j=0}^n c_j K_j(n) = 1+\sum _{j=1}^n c_j (-1)^j \left( {\begin{array}{c}n\\ j\end{array}}\right) = 0, \end{aligned}$$
(2)
using properties from Lemma 2.2. As \(c_j \ge 0\) for all j, clearly \(c_j=0\) for any positive even j, as otherwise we can decrease it along with some \(c_j'>0\) for odd \(j'\) to decrease the objective value while preserving the necessary equation, Eq. (2). Thus, any dual optimum must have
$$\begin{aligned} \sum _{\begin{array}{c} 1 \le j \le n \\ j \text { odd} \end{array}} c_j \left( {\begin{array}{c}n\\ j\end{array}}\right) = 1, \end{aligned}$$
(3)
which implies the objective value is 2, which is optimal, so this condition, along with \(c_j=0\) for all positive even j, is necessary and sufficient for a dual optimum.
For \(n \le 2\) this clearly yields a unique dual optimum, but for \(n \ge 3\), Eq. (3) is a single equation in multiple variables, yielding multiple optimal solutions. \(\square \)
Theorem 3.5
When \(d=n-1\), the dual of the Delsarte linear program has a unique optimum if and only if n is even, in which case the unique dual optimum is \(c^*=\left( 1,\frac{1}{n-1},0,0,\dots ,0,\frac{1}{n-1}\right) \).
Proof
As \(d=n-1 \ge 1\), we have \(n \ge 2\). From the proof of Theorem 3.1, we find that, with respect to the unique optimum \(A^*\), the constraint \(\sum _{i=0}^n A^*_i K_j(i) \ge 0\) is strict for all \(j \in [n]\) except \(j=1\) and \(j=2\lfloor n/2\rfloor \). By complementary slackness, any optimal dual solution c can thus only have positive entries \(c_0=1\), \(c_1\), and \(c_{2\lfloor n/2\rfloor }\). Complementary slackness when \(A^*_i > 0\) also yields \(\sum _{j=0}^n c_j K_j(i) = 0\) for \(i\in \{n-1,n\}\) if n is even, and for \(i=n-1\) if n is odd.
For even \(n \ge 2\), implementing these observations from complementary slackness allows the dual to be rewritten as
$$\begin{aligned} \min \quad&1 + nc_1 + c_n \\ \text {such that} \quad&1 - (n-2) c_1 - c_n = 0 \\&1 - n c_1 + c_n = 0 \\&c_1,c_n \ge 0. \end{aligned}$$
The first two constraints have a unique solution \(c_1=c_n=\frac{1}{n-1}\), meaning the unique dual optimum is \(c^*=\left( 1,\frac{1}{n-1},0,0,\dots ,0,\frac{1}{n-1}\right) \).
For odd \(n \ge 3\), we rewrite the dual as
$$\begin{aligned} \min \quad&1 + nc_1 + n c_{n-1} \\ \text {such that} \quad&1 - (n-2) c_1 - (n-2) c_{n-1} = 0 \\&1 - nc_1 + nc_{n-1} \le 0 \\&c_1, c_{n-1} \ge 0. \end{aligned}$$
The first constraint yields \(c_1 + c_{n-1} = \frac{1}{n-2}\), which fixes the objective value to be \(1+\frac{n}{n-2}\), which is the optimal value by Theorem 3.1. So any feasible solution is a dual optimum, and namely substituting \(c_{n-1}=\frac{1}{n-2}-c_1\), the second constraint becomes \(c_1 \ge \frac{n-1}{n(n-2)}\). Combining this with the nonnegativity constraint \(\frac{1}{n-2}-c_1=c_{n-1} \ge 0\) yields
$$\begin{aligned} \frac{n-1}{n}\cdot \frac{1}{n-2} \le c_1 \le \frac{1}{n-2}, \end{aligned}$$
so there are multiple values for \(c_1\), all of which yield a nonnegative value for \(c_{n-1}\). Each of these yields a feasible dual solution, which must then be a dual optimum, so the dual does not have a unique optimum when n is odd. \(\square \)
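The interval of dual optima in the odd case can be confirmed numerically; the sketch below (ours) checks that the endpoints and midpoint of the interval for \(c_1\) all give feasible dual solutions attaining the optimal value \(1+\frac{n}{n-2}\) when \(n=5\):

```python
from fractions import Fraction
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

n, d = 5, 4                        # odd n, d = n - 1
lo = Fraction(n - 1, n * (n - 2))  # lower endpoint (n-1)/(n(n-2))
hi = Fraction(1, n - 2)            # upper endpoint 1/(n-2)
for c1 in (lo, (lo + hi) / 2, hi):
    c = [Fraction(0)] * (n + 1)
    c[0], c[1], c[n - 1] = Fraction(1), c1, Fraction(1, n - 2) - c1
    assert all(cj >= 0 for cj in c)
    # dual feasibility on i in [d, n]
    assert all(sum(c[j] * K(j, i, n) for j in range(n + 1)) <= 0
               for i in range(d, n + 1))
    # each choice attains the optimal value 1 + n/(n-2)
    assert sum(c[j] * comb(n, j) for j in range(n + 1)) == 1 + Fraction(n, n - 2)
```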

3.3 The case \(d=1\)

When \(d=1\), we see that both the primal and dual have unique optima.
Theorem 3.6
When \(d=1\), the Delsarte linear program has a unique optimum, namely the quasicode \(A^*\) given by \(A^*_i = \left( {\begin{array}{c}n\\ i\end{array}}\right) \) for all \(0 \le i \le n\). In addition, the dual linear program has a unique optimum, namely the dual solution \(c^*={\textbf{1}}\).
The fact that \(A_i^*=\left( {\begin{array}{c}n\\ i\end{array}}\right) \) is optimal is well-known; see Levenshtein [10]. For completeness, we prove that it is not only optimal, but also unique, and additionally address the dual linear program.
Proof
Define vectors \(K_j'=(K_j(1),\dots ,K_j(n))\) for all \(1 \le j \le n\). We first show that these vectors are linearly independent. Suppose we have \(b_1,\dots ,b_n\) such that \(b_1K_1'+\cdots +b_nK_n' = {\textbf{0}}\). Define vectors \(K_j=(K_j(0),\dots ,K_j(n))\) for all \(1 \le j \le n\), and let v be the vector
$$\begin{aligned} v=b_1K_1+\cdots +b_nK_n=\left( b_1\left( {\begin{array}{c}n\\ 1\end{array}}\right) +\cdots +b_n\left( {\begin{array}{c}n\\ n\end{array}}\right) ,0,\dots ,0\right) . \end{aligned}$$
By the orthogonality of the Krawtchouk polynomials, for all \(1 \le j \le n\),
$$\begin{aligned} \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( b_1\left( {\begin{array}{c}n\\ 1\end{array}}\right) +\cdots +b_n\left( {\begin{array}{c}n\\ n\end{array}}\right) \right) = \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) v_i K_j(i) = 2^n \left( {\begin{array}{c}n\\ j\end{array}}\right) b_j, \end{aligned}$$
so
$$\begin{aligned} b_1\left( {\begin{array}{c}n\\ 1\end{array}}\right) +\cdots +b_n\left( {\begin{array}{c}n\\ n\end{array}}\right) =2^n b_j \end{aligned}$$
for all j. Hence \(b_1=\cdots =b_n\), and the necessary equation is
$$\begin{aligned} b_1 \left( \left( {\begin{array}{c}n\\ 1\end{array}}\right) +\cdots +\left( {\begin{array}{c}n\\ n\end{array}}\right) \right) = b_1\left( 2^n-1\right) = 2^n b_1, \end{aligned}$$
which yields \(b_1=\cdots =b_n=0\), implying the vectors \(K_j'\) for \(1 \le j \le n\) are linearly independent.
We now show that \(A^*\) is the unique optimum of the primal linear program. The linear independence of the vectors \(K_j'\) implies that there is at most one vertex A of P that has \(A_i > 0\) for all \(i \in [n]\), namely the unique solution to \(\sum _{i=0}^n A_i K_j(i) = 0\) for all \(j \in [n]\) where \(A_0=1\). This point is \(A^*\), as \(\sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) K_j(i)=0\) for all \(j\in [n]\) via orthogonality of the Krawtchouk polynomials, using \(K_0(i)=1\) as the other polynomial. Optimality of \(A^*\) can be shown by finding a dual solution that achieves the same objective value, namely \(2^n\). The dual solution \(c^*={\textbf{1}}\) is such a solution, as it yields an objective value of \(2^n\) and is feasible because for any \(i \in [n]\),
$$\begin{aligned} \sum _{j=0}^n K_j(i) = 0 \end{aligned}$$
by Lemma 2.4.
By complementary slackness with this optimal dual solution \(c^*={\textbf{1}}\), this means the Delsarte inequalities \(\sum _{i=0}^n A_i K_j(i) \ge 0\) must be sharp for all \(j \in [n]\). We know that there is only one solution to this system of equations, which is \(A^*\), so \(A^*\) is the unique optimal quasicode.
To show that \(c^*={\textbf{1}}\) is the unique optimum of the dual linear program, by complementary slackness with optimal primal solution \(A^*\) where \(A^*_i=\left( {\begin{array}{c}n\\ i\end{array}}\right) >0\) for all i, the dual inequalities \(\sum _{j=0}^n c_j K_j(i) \le 0\) for all \(i \in [n]\) must be sharp. Equivalently, we must have \(\sum _{j=1}^n c_j K_j(i) = -1\) for all \(i \in [n]\). The linear independence of vectors \(K_j'\) implies that the \(n\times n\) matrix \(K'\) given by \(K'_{ij}=K_i(j)\) is non-singular, so its columns, which are the vectors \((K_1(i),\dots ,K_n(i))\) for \(1 \le i \le n\), are linearly independent. Thus, the system of equations \(\sum _{j=1}^n c_j K_j(i) = -1\) for all \(i \in [n]\) has a unique solution, which we have already shown is given by \(c_1=\cdots =c_n=1\). Therefore, \(c^*={\textbf{1}}\) is the unique optimum of the dual. \(\square \)
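Both optima of Theorem 3.6 are easy to verify for small n (sketch ours):

```python
from math import comb

def binom(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def K(j, i, n):
    return sum((-1) ** k * binom(i, k) * binom(n - i, j - k) for k in range(j + 1))

n = 6
A = [comb(n, i) for i in range(n + 1)]  # the unique primal optimum for d = 1
assert sum(A) == 2 ** n                 # optimal objective value
# every Delsarte inequality is tight, as forced by complementary slackness
assert all(sum(A[i] * K(j, i, n) for i in range(n + 1)) == 0
           for j in range(1, n + 1))
# the dual optimum c* = 1 is feasible with equality on [n], by Lemma 2.4
assert all(sum(K(j, i, n) for j in range(n + 1)) == 0 for i in range(1, n + 1))
```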

3.4 The case \(d=2\)

In this section we will show that although the primal always has a unique optimum when \(d=2\), the dual almost always does not.
Theorem 3.7
When \(d=2\), the Delsarte linear program has a unique optimum, namely the quasicode \(A^*\) given by \(A^*_i = {\left\{ \begin{array}{ll} \left( {\begin{array}{c}n\\ i\end{array}}\right) &{} i \text { is even} \\ 0 &{} i \text { is odd}\end{array}\right. }\) for all \(0 \le i \le n\).
Similar to the case \(d=1\), it is already known that the stated \(A^*\) is optimal; see Levenshtein [10]. However, the uniqueness of this optimum, as well as the dual optimum addressed in Theorem 3.8, has not been demonstrated previously, so for completeness we provide a full proof of the result.
Proof
Notice that the objective value for \(A^*\) is \(2^{n-1}\). We claim that \(c_j=\frac{n-j}{n}\) for all \(0 \le j \le n\) is a dual solution with objective value \(2^{n-1}\), which proves \(A^*\) is optimal.
To show c is a feasible solution to the dual, we will show that for all \(i \in [2,n]\),
$$\begin{aligned} \sum _{j=0}^n c_j K_j(i) = 0. \end{aligned}$$
Notice that
$$\begin{aligned} \sum _{j=0}^n c_j K_j(i) = \sum _{j=0}^n K_j(i) - \frac{1}{n}\sum _{j=0}^n j K_j(i) = -\frac{1}{n}\sum _{j=0}^n j K_j(i) \end{aligned}$$
by Lemma 2.4, so it is equivalent to show
$$\begin{aligned} \sum _{j=0}^n j K_j(i) = 0. \end{aligned}$$
Directly from the definition of \(K_j(i)\), we see that \(K_0(i)=1\) and \(K_1(i)=n-2i\), meaning
$$\begin{aligned} i=\frac{n}{2}K_0(i)-\frac{1}{2}K_1(i). \end{aligned}$$
Using Lemma 2.2 and then Lemma 2.3, we find
$$\begin{aligned} \sum _{j=0}^n j K_j(i) = \frac{1}{\left( {\begin{array}{c}n\\ i\end{array}}\right) } \sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) j K_i(j) = \frac{1}{\left( {\begin{array}{c}n\\ i\end{array}}\right) } \sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( \frac{n}{2}K_0(j)-\frac{1}{2}K_1(j)\right) K_i(j) = 0 \end{aligned}$$
for all \(i \in [2,n]\), as desired.
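This identity can be checked mechanically for small parameters. The following Python sketch (our illustration, not part of the proof; it assumes the standard sum formula \(K_j(i;n)=\sum _k (-1)^k \left( {\begin{array}{c}i\\ k\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-k\end{array}}\right) \) for the binary Krawtchouk polynomials) verifies the vanishing of \(\sum _j j K_j(i)\) for all \(i \in [2,n]\) and small n.

```python
from math import comb

def krawtchouk(j, i, n):
    """K_j(i; n) for the binary Hamming scheme, via the standard sum formula."""
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

# Check that sum_j j * K_j(i; n) = 0 for all i in [2, n], for small n.
for n in range(2, 10):
    for i in range(2, n + 1):
        assert sum(j * krawtchouk(j, i, n) for j in range(n + 1)) == 0
```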
The objective value of c is
$$\begin{aligned} \sum _{j=0}^n c_j \left( {\begin{array}{c}n\\ j\end{array}}\right) = \sum _{j=0}^n \frac{n-j}{n} \left( {\begin{array}{c}n\\ j\end{array}}\right) = 2^n - \frac{1}{n}\sum _{j=1}^n j\left( {\begin{array}{c}n\\ j\end{array}}\right) = 2^n - \frac{1}{n}\cdot n\cdot 2^{n-1} = 2^{n-1}, \end{aligned}$$
where the combinatorial identity
$$\begin{aligned} \sum _{j=1}^n j\left( {\begin{array}{c}n\\ j\end{array}}\right) = n 2^{n-1} \end{aligned}$$
can be easily seen by noticing both sides count the ways to choose a subset of a set of n elements, where one element of the subset is distinguished: the left hand side first picks the subset of j elements and then picks the distinguished element, while the right hand side picks the distinguished element and then considers whether each of the remaining \(n-1\) elements are in the subset.
By complementary slackness, as \(c_j > 0\) for all \(j \in [n-1]\), any optimal quasicode A must satisfy
$$\begin{aligned} \sum _{i=0}^n A_i K_j(i) = 0 \end{aligned}$$
for all \(j \in [n-1]\). Note that \(A^*\) satisfies these conditions, because expressing \(A^*\) by
$$\begin{aligned} A^*_i = \left( {\begin{array}{c}n\\ i\end{array}}\right) (K_0(i)+K_n(i))/2 \end{aligned}$$
yields
$$\begin{aligned} \sum _{i=0}^n A^*_i K_j(i) = \frac{1}{2} \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) (K_0(i)+K_n(i))K_j(i) = 0 \end{aligned}$$
by Lemma 2.3 for all \(j \in [n-1]\). We have \(n-1\) linear equations, one for each \(j\in [n-1]\), that must be satisfied for any optimal quasicode A; given that \(A_1=0\), we also have \(n-1\) variables, \(A_2,\dots ,A_n\). Let \(K_j^{(2)}=\left( K_j(2),\dots ,K_j(n)\right) \) for \(j \in [n-1]\). If the vectors \(K_j^{(2)}\) for \(j \in [n-1]\) are linearly independent, then this implies there is a unique solution to our system of \(n-1\) linear equations, and thus \(A^*\) is the unique optimal quasicode.
Suppose \(b_1 K_1^{(2)}+\cdots +b_{n-1}K_{n-1}^{(2)} = {\textbf{0}}\). Let \(v=b_1K_1+\cdots +b_{n-1} K_{n-1}\), using the same definition \(K_j=(K_j(0),\dots ,K_j(n))\) as in Theorem 3.6. Using orthogonality of the Krawtchouk polynomials with \(K_0(i)=1\), we find
$$\begin{aligned} 0 = \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) v_i = v_0 + n v_1, \end{aligned}$$
and using \(K_n(i)=(-1)^i\), we find
$$\begin{aligned} 0 = \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) v_i (-1)^i = v_0 - n v_1. \end{aligned}$$
This implies \(v_0=v_1=0\), so \(v={\textbf{0}}\). Hence, for all \(j \in [n-1]\), we find
$$\begin{aligned} 0 = \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) v_i K_j(i) = 2^n \left( {\begin{array}{c}n\\ j\end{array}}\right) b_j, \end{aligned}$$
so \(b_1=\cdots =b_{n-1}=0\), and the vectors \(K_j^{(2)}\) are linearly independent.
This completes the proof that \(A^*\) is the unique optimum. \(\square \)
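As a concrete check of Theorem 3.7, the following sketch verifies with exact rational arithmetic that, for small n, the claimed primal optimum \(A^*\) and the dual solution \(c_j=(n-j)/n\) are both feasible and share the objective value \(2^{n-1}\). This is our own illustration; the helper names are not from the text.

```python
from fractions import Fraction
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def check_d2(n):
    # Primal optimum from Theorem 3.7: A*_i = C(n, i) for even i, else 0.
    A = [comb(n, i) if i % 2 == 0 else 0 for i in range(n + 1)]
    assert sum(A) == 2 ** (n - 1)                    # objective value
    assert A[1] == 0                                 # distance constraint (d = 2)
    for j in range(1, n + 1):                        # Delsarte inequalities
        assert sum(A[i] * krawtchouk(j, i, n) for i in range(n + 1)) >= 0
    # Dual solution c_j = (n - j)/n.
    c = [Fraction(n - j, n) for j in range(n + 1)]
    assert c[0] == 1 and all(cj >= 0 for cj in c)
    for i in range(2, n + 1):                        # dual feasibility (equality here)
        assert sum(c[j] * krawtchouk(j, i, n) for j in range(n + 1)) == 0
    assert sum(c[j] * comb(n, j) for j in range(n + 1)) == 2 ** (n - 1)

for n in range(2, 9):
    check_d2(n)
```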
By contrast, the dual almost never has a unique optimum.
Theorem 3.8
When \(d=2\), the dual of the Delsarte linear program has a unique optimum if and only if \(n = 2\).
Proof
Proposition 3.4 implies the result when \(n=d=2\), so it suffices to show that for all \(n \ge 3\), the dual of the Delsarte linear program with \(d=2\) does not have a unique optimal solution.
As the objective value was demonstrated in Theorem 3.7 to be \(2^{n-1}\), a dual solution c is an optimum if and only if the following properties are satisfied:
$$\begin{aligned} \sum _{j=0}^n c_j \left( {\begin{array}{c}n\\ j\end{array}}\right)&= 2^{n-1} \end{aligned}$$
(4)
$$\begin{aligned} \sum _{j=0}^n c_j K_j(i)&\le 0 \quad \text {for all } i \in [2,n] \end{aligned}$$
(5)
$$\begin{aligned} c_j&\ge 0 \quad \text {for all } j \in [n] \end{aligned}$$
(6)
$$\begin{aligned} c_0&= 1. \end{aligned}$$
(7)
Expressing c as a linear combination of the Krawtchouk polynomials, i.e., for all \(0 \le j \le n\),
$$\begin{aligned} c_j = b_0 K_0(j) + \cdots + b_n K_n(j), \end{aligned}$$
then notice that Eq. (4) is equivalent to
$$\begin{aligned} \frac{1}{2} = \frac{1}{2^n}\sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) c_j = \frac{1}{2^n}\sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) c_j K_0(j) = b_0, \end{aligned}$$
using the orthogonality of the Krawtchouk polynomials. By first applying reciprocity and then orthogonality of the Krawtchouk polynomials, we find
$$\begin{aligned} \sum _{j=0}^n c_j K_j(i) = \frac{1}{\left( {\begin{array}{c}n\\ i\end{array}}\right) }\sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) c_j K_i(j) = 2^n b_i. \end{aligned}$$
Therefore, Eq. (5) is equivalent to \(b_i \le 0\) for all \(i \in [2,n]\).
If \(n\ge 3\) is odd, we claim the dual solution c given by
$$\begin{aligned} c_j&= \frac{1}{2}K_0(j) + \frac{1+\varepsilon }{2n}K_1(j)-\frac{\varepsilon }{2}K_n(j) = \frac{1}{2}+\frac{1+\varepsilon }{2}-\frac{1+\varepsilon }{n}j-(-1)^j\frac{\varepsilon }{2}\\&= {\left\{ \begin{array}{ll} 1-\frac{1+\varepsilon }{n}j &{} j \text { is even} \\ (1+\varepsilon )\frac{n-j}{n} &{} j \text { is odd} \end{array}\right. } \end{aligned}$$
for any \(0 \le \varepsilon \le \frac{1}{n-1}\) is an optimal solution. As \(b_0=\frac{1}{2}\) in this case and \(b_i=0\) for all \(i \in [2,n-1]\) and \(b_n=-\varepsilon /2\le 0\), we find Eqs. (4) and (5) hold. We directly verify \(c_0=1\), and thus it remains to show \(c_j \ge 0\) for all \(j \in [n]\). If j is odd, as \(0 \le j \le n\), we find \(c_j \ge 0\). If j is even, then as n is odd, we have \(j \le n-1\), and thus
$$\begin{aligned} c_j = 1-\frac{1+\varepsilon }{n}j \ge 1-(1+\varepsilon )\frac{n-1}{n} = \frac{1-(n-1)\varepsilon }{n} \ge 0, \end{aligned}$$
as \(\varepsilon \le \frac{1}{n-1}\). Hence any \(0\le \varepsilon \le \frac{1}{n-1}\) yields a distinct optimum c, so for odd \(n \ge 3\), there are multiple dual optima.
If \(n \ge 4\) is even, then we claim the dual solution c given by
$$\begin{aligned} c_j = \frac{1}{2}K_0(j) + \frac{1+\varepsilon }{2n}K_1(j)-\frac{\varepsilon }{2n}K_{n-1}(j) = {\left\{ \begin{array}{ll} \frac{n-j}{n} &{} j \text { is even} \\ 1+\varepsilon -\frac{1+2\varepsilon }{n}j &{} j \text { is odd} \end{array}\right. } \end{aligned}$$
for any \(0 \le \varepsilon \le \frac{1}{n-2}\) is an optimal solution. As \(b_0=\frac{1}{2}\) in this case and \(b_i\le 0\) for all \(i \in [2,n]\), we find Eqs. (4) and (5) hold. We directly verify \(c_0 = 1\), and thus it remains to show \(c_j \ge 0\) for all \(j \in [n]\). If j is even, as \(0 \le j \le n\), we find \(c_j \ge 0\). If j is odd, then as n is even, we have \(j \le n-1\), and thus
$$\begin{aligned} c_j = 1 + \varepsilon - \frac{1+2\varepsilon }{n}j \ge 1+\varepsilon -(1+2\varepsilon )\frac{n-1}{n} = \frac{1-(n-2)\varepsilon }{n} \ge 0, \end{aligned}$$
as \(\varepsilon \le \frac{1}{n-2}\). Hence any \(0 \le \varepsilon \le \frac{1}{n-2}\) yields a distinct optimum c, so for even \(n \ge 4\), there are multiple dual optima. This completes the proof. \(\square \)
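Both one-parameter families above can be verified mechanically. The sketch below (our illustration, using exact rational arithmetic) confirms that every sampled \(\varepsilon \) in the stated range yields a feasible dual solution with objective value \(2^{n-1}\), for one small odd and one small even n.

```python
from fractions import Fraction
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def dual_family(n, eps):
    """The dual optima of Theorem 3.8 (odd-n and even-n cases)."""
    if n % 2 == 1:
        return [1 - (1 + eps) * j / n if j % 2 == 0
                else (1 + eps) * (n - j) / n for j in range(n + 1)]
    return [Fraction(n - j, n) if j % 2 == 0
            else 1 + eps - (1 + 2 * eps) * j / n for j in range(n + 1)]

def is_feasible(n, c):
    return (c[0] == 1 and all(cj >= 0 for cj in c)
            and all(sum(c[j] * krawtchouk(j, i, n) for j in range(n + 1)) <= 0
                    for i in range(2, n + 1))
            and sum(c[j] * comb(n, j) for j in range(n + 1)) == 2 ** (n - 1))

# For n = 5 the range is eps <= 1/(n-1) = 1/4; for n = 6 it is eps <= 1/(n-2) = 1/4.
for n in (5, 6):
    for eps in (Fraction(0), Fraction(1, 8), Fraction(1, 4)):
        assert is_feasible(n, dual_family(n, eps))
```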

3.5 Non-uniqueness of primal optimum

While our previous results for \(d=1,2\), and for the upper half \(2\lceil d/2\rceil +d>n\) have all had a unique primal optimum, this uniqueness does not hold in general. The smallest n for which the Delsarte linear program does not have a unique optimum is \((n,d)=(17,5)\), and the next smallest cases are (21, 5) and (23, 5). For example, the optimal solutions for (17, 5) are the points on the line segment between the two quasicodes
$$\begin{aligned} \left( 1, 0, 0, 0, 0, 52, \frac{304}{3}, \frac{176}{3}, \frac{250}{3}, \frac{520}{3}, \frac{368}{3}, \frac{112}{3}, 32, 20, 0, 0, 1, 0\right) \end{aligned}$$
and
$$\begin{aligned} \left( 1, 0, 0, 0, 0, 51, \frac{307}{3}, \frac{191}{3}, \frac{235}{3}, \frac{490}{3}, \frac{398}{3}, \frac{142}{3}, 22, 15, 5, 1, 0, 0\right) . \end{aligned}$$
These three cases were found via a computer search solving a system of linear inequalities and equations using the optimal objective value, Delsarte inequalities, and complementary slackness conditions. No other cases exist for \(1 \le d \le n \le 23\). This prompts the question of when the Delsarte linear program has a unique optimum.
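The two optima listed above can be checked directly. The following sketch (our illustration) verifies with exact arithmetic that both quasicodes, and hence every convex combination of them, are feasible for (17, 5) and share the same objective value \(2048/3\).

```python
from fractions import Fraction
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

F = Fraction
A1 = [1, 0, 0, 0, 0, 52, F(304, 3), F(176, 3), F(250, 3), F(520, 3),
      F(368, 3), F(112, 3), 32, 20, 0, 0, 1, 0]
A2 = [1, 0, 0, 0, 0, 51, F(307, 3), F(191, 3), F(235, 3), F(490, 3),
      F(398, 3), F(142, 3), 22, 15, 5, 1, 0, 0]

n, d = 17, 5
for A in (A1, A2):
    assert A[0] == 1 and all(A[i] == 0 for i in range(1, d))   # distance constraints
    for j in range(1, n + 1):                                  # Delsarte inequalities
        assert sum(A[i] * krawtchouk(j, i, n) for i in range(n + 1)) >= 0

assert sum(A1) == sum(A2) == F(2048, 3)   # common objective value
```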
Question 3.9
For which values of \((n,d)\) does the Delsarte linear program have a unique optimum? Similarly, for which values of \((n,d)\) does the dual have a unique optimum?

4 Krawtchouk decomposition of optimal quasicodes

In this section, we further characterize optimal quasicodes by first applying a transformation to the quasicodes. Suppose A is a quasicode of the Delsarte linear program for a given pair of values \((n,d)\). Let \(A'\) be the vector given by \(A_i'=A_i\cdot 2^n/\left( {\begin{array}{c}n\\ i\end{array}}\right) \) for \(0 \le i \le n\). If A is the distance distribution of a code \({\mathcal {C}}\), then \(A'_i\) is \(4^n/|{\mathcal {C}}|\) times the proportion of ordered pairs \((x,y)\) of words \(x,y\in {\mathbb {F}}_2^n\) a Hamming distance i apart whose two elements x and y are both in \({\mathcal {C}}\). Then there is a unique vector b such that \(A'=b K\), given by \(b = \frac{1}{2^n}A' K\), or equivalently
$$\begin{aligned} b_j = \sum _{i=0}^n \frac{A_i K_i(j)}{\left( {\begin{array}{c}n\\ i\end{array}}\right) } = \frac{1}{\left( {\begin{array}{c}n\\ j\end{array}}\right) }\sum _{i=0}^n A_i K_j(i). \end{aligned}$$
This means for all i,
$$\begin{aligned} A_i = \left( {\begin{array}{c}n\\ i\end{array}}\right) (b_0 K_0(i) + \cdots + b_n K_n(i)) / 2^n. \end{aligned}$$
We will refer to this vector b as the Krawtchouk decomposition of the quasicode A. Then the Delsarte inequalities for A can be rewritten as \(\left( {\begin{array}{c}n\\ j\end{array}}\right) b_j \ge 0\) for all \(j \in [n]\), or equivalently simply \(b_j \ge 0\). And the objective function simplifies to \(b_0\):
$$\begin{aligned} \sum _{i=0}^n A_i&= \frac{1}{2^n}\! \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) \sum _{j=0}^n b_j K_j(i) = \frac{1}{2^n} \!\sum _{j=0}^n b_j \sum _{i=0}^n \left( {\begin{array}{c}n\\ i\end{array}}\right) K_j(i) K_0(i)\! =\! \sum _{j=0}^n b_j \left( {\begin{array}{c}n\\ j\end{array}}\right) \delta _{j0} = b_0. \end{aligned}$$
Hence, the Delsarte linear program can be rephrased using the Krawtchouk decomposition as follows:
$$\begin{aligned} \max \quad&b_0 \\ \text {such that} \quad&\sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) b_j = 2^n \\&\sum _{j=0}^n b_j K_j(i) \ge 0 \quad \text {for all } i \in [n] \\&\sum _{j=0}^n b_j K_j(i) = 0 \quad \text {for all } i \in [d-1] \\&b_j \ge 0 \quad \text {for all } j \in [n]. \end{aligned}$$
Theorem 3.6 shows that the unique optimal quasicode \(A^*\) when \(d=1\) has Krawtchouk decomposition \(b=(2^n,0,0,\dots ,0)\). Similarly, Theorem 3.7 shows that when \(d=2\), the optimum \(A^*\) has Krawtchouk decomposition \(b=(2^{n-1},0,0,\dots ,0,2^{n-1})\).
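The decomposition and its inverse are straightforward to implement. The sketch below (our own illustration; the helper names are ours) round-trips a quasicode through its Krawtchouk decomposition and back, and confirms the stated decompositions of the \(d=1\) and \(d=2\) optima for a small n.

```python
from fractions import Fraction
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def decompose(A, n):
    """Krawtchouk decomposition: b_j = (1/C(n,j)) * sum_i A_i K_j(i)."""
    return [Fraction(1, comb(n, j)) * sum(A[i] * krawtchouk(j, i, n) for i in range(n + 1))
            for j in range(n + 1)]

def recompose(b, n):
    """Inverse map: A_i = C(n,i) * sum_j b_j K_j(i) / 2^n."""
    return [Fraction(comb(n, i), 2 ** n) * sum(b[j] * krawtchouk(j, i, n) for j in range(n + 1))
            for i in range(n + 1)]

n = 6
A_d1 = [comb(n, i) for i in range(n + 1)]                       # d = 1 optimum
A_d2 = [comb(n, i) if i % 2 == 0 else 0 for i in range(n + 1)]  # d = 2 optimum
assert decompose(A_d1, n) == [2 ** n] + [0] * n
assert decompose(A_d2, n) == [2 ** (n - 1)] + [0] * (n - 1) + [2 ** (n - 1)]
assert recompose(decompose(A_d2, n), n) == A_d2
```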
The Krawtchouk decomposition of the unique optimum for the upper half is given in the following result. But first, we will introduce some notation to simplify our future discussions. Define \(h=n-d\), and let \(k=2\lceil d/2\rceil \) be the smallest even integer at least d. Using this notation, the upper half condition simply becomes \(k>h\).
Theorem 4.1
For all \(d \ge 1\) and all \(h \ge 0\) such that \(k=2\lceil d/2\rceil >h\), the unique optimum to the Delsarte linear program with \((n,d)=(d+h,d)\) has Krawtchouk decomposition \(b^*\) given by
$$\begin{aligned} b_j^* = 1 - \frac{K_k(j;h+k)}{K_k(1;h+k)}. \end{aligned}$$
Proof
If d is even, then our expression for \(b^*\) becomes
$$\begin{aligned} b_j^* = 1 - \frac{K_d(j;n)}{K_d(1;n)}. \end{aligned}$$
Then the corresponding quasicode is given by
$$\begin{aligned} A_i = \left( {\begin{array}{c}n\\ i\end{array}}\right) \sum _{j=0}^n b_j^* K_j(i;n)/2^n. \end{aligned}$$
By Lemma 2.4, the constant 1 term in \(b^*\) contributes 1 to \(A_0\) and 0 everywhere else. Thus we expand \(b_j^*\) to yield
$$\begin{aligned} A_i&= \delta _{i0} - \left( {\begin{array}{c}n\\ i\end{array}}\right) \sum _{j=0}^n \frac{K_d(j;n)}{K_d(1;n)}K_j(i;n)/2^n = \delta _{i0} - \sum _{j=0}^n \frac{K_d(j;n)}{K_d(1;n)}\left( {\begin{array}{c}n\\ j\end{array}}\right) K_i(j;n)/2^n\\&= \delta _{i0} - \left( {\begin{array}{c}n\\ d\end{array}}\right) \delta _{id}\frac{1}{K_d(1;n)} = \delta _{i0} + \delta _{id} \frac{n}{2d-n}, \end{aligned}$$
using reciprocity and orthogonality of the Krawtchouk polynomials. But this expression equals the unique primal optimum \(A^*\) as given by Theorem 3.1.
If d is odd, then our expression for \(b^*\) becomes
$$\begin{aligned} b_j^* = 1 - \frac{K_{d+1}(j;n+1)}{K_{d+1}(1;n+1)}, \end{aligned}$$
and the corresponding quasicode is given by
$$\begin{aligned} A_i&= \delta _{i0} - \sum _{j=0}^n \frac{K_{d+1}(j;n+1)}{K_{d+1}(1;n+1)} \left( {\begin{array}{c}n\\ j\end{array}}\right) K_i(j;n)/2^n \\&= \delta _{i0} - \sum _{j=0}^n \frac{K_{d+1}(j;n)+K_d(j;n)}{K_{d+1}(1;n+1)} \left( {\begin{array}{c}n\\ j\end{array}}\right) K_i(j;n)/2^n\\&= \delta _{i0} - \left( {\begin{array}{c}n\\ d+1\end{array}}\right) \frac{\delta _{i,d+1}}{K_{d+1}(1;n+1)} - \left( {\begin{array}{c}n\\ d\end{array}}\right) \frac{\delta _{id}}{K_{d+1}(1;n+1)} \\&= \delta _{i0} + \delta _{id}\frac{d+1}{2d-n+1} + \delta _{i,d+1}\frac{n-d}{2d-n+1}, \end{aligned}$$
where we use Lemma 2.6 to reduce \(K_{d+1}(j;n+1)\) to block length n. This expression equals the unique primal optimum \(A^*\) as given by Theorem 3.1, completing the proof. \(\square \)
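Theorem 4.1 can be checked numerically for small upper-half pairs. The following sketch (our illustration) reconstructs the quasicode from \(b^*\) and compares it against the closed form of Theorem 3.1, for one even and one odd value of d.

```python
from fractions import Fraction
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def quasicode_from_b(b, n):
    return [Fraction(comb(n, i), 2 ** n) * sum(b[j] * krawtchouk(j, i, n) for j in range(n + 1))
            for i in range(n + 1)]

def upper_half_optimum(n, d):
    """Theorem 4.1: b*_j = 1 - K_k(j; h+k)/K_k(1; h+k), with h = n - d, k = 2*ceil(d/2)."""
    h, k = n - d, d + (d % 2)
    denom = krawtchouk(k, 1, h + k)
    return [1 - Fraction(krawtchouk(k, j, h + k), denom) for j in range(n + 1)]

# Even d: (n, d) = (6, 4); expected A_i = delta_{i0} + delta_{id} * n/(2d - n).
assert quasicode_from_b(upper_half_optimum(6, 4), 6) == [1, 0, 0, 0, 3, 0, 0]

# Odd d: (n, d) = (7, 5); expected delta_{i0} + delta_{id}(d+1)/(2d-n+1)
#                                  + delta_{i,d+1}(n-d)/(2d-n+1).
assert quasicode_from_b(upper_half_optimum(7, 5), 7) == \
    [1, 0, 0, 0, 0, Fraction(3, 2), Fraction(1, 2), 0]
```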
The presence of \(k=2\lceil d/2\rceil \), which rounds d up to the nearest even integer, suggests that there are some parity connections between neighboring values of \((n,d)\). In particular, if \(d=2e\) is even, then \((n-1,2e-1)\) and (n, 2e) have the same values of k and h, and thus their unique optima \(b^*\) of the Krawtchouk decomposition LP are the same, up to truncating the nth entry \(b^*_n\) for the \((n-1,2e-1)\) case. This parity phenomenon holds for all \(1 \le d \le n\), not just the upper half, as seen in the following result.
Theorem 4.2
For all positive integers n and e where \(2e \le n\), there exist optima \(A^E\) and \(A^O\) of the (n, 2e) and \((n-1,2e-1)\) Delsarte linear programs, respectively, whose Krawtchouk decompositions agree on indices 0 through \(n-1\), inclusive.
We need additional tools, developed in Section 5, in order to prove Theorem 4.2, so we defer the proof to Section 5.
We also have another result on the symmetry of the optimal Krawtchouk decompositions that is closely related to the aforementioned parity phenomenon. Again, we defer the proof to Section 5.
Theorem 4.3
For all \(1 \le d \le n\) where d is even, the Delsarte linear program for (nd) has an optimum \(A^E\) whose Krawtchouk decomposition \(b^E\) satisfies \(b_j^E=b_{n-j}^E\) for all \(0 \le j \le n\).
We first comment that this symmetry property is equivalent to the quasicodes being even, i.e., having a support contained in the set of even integers. From this interpretation, it is intuitively clear why d must be even. If the symmetry property holds, then for odd i, we find
$$\begin{aligned} A_i = \frac{\left( {\begin{array}{c}n\\ i\end{array}}\right) }{2^n}\sum _{j=0}^n b_j K_j(i) = \frac{\left( {\begin{array}{c}n\\ i\end{array}}\right) }{2^n}\sum _{j=0}^n b_{n-j} (-1)^i K_{n-j}(i) = -\frac{\left( {\begin{array}{c}n\\ i\end{array}}\right) }{2^n}\sum _{j=0}^n b_j K_j(i) = - A_i, \end{aligned}$$
so \(A_i = 0\) for odd i. And if \(A_i = 0\) for all odd i, then
$$\begin{aligned} b_j = \frac{1}{\left( {\begin{array}{c}n\\ j\end{array}}\right) }\sum _{i=0}^n A_i K_j(i) = \frac{1}{\left( {\begin{array}{c}n\\ n-j\end{array}}\right) }\sum _{\begin{array}{c} 0 \le i \le n\\ i \text { even} \end{array}} A_i K_j(i) = \frac{1}{\left( {\begin{array}{c}n\\ n-j\end{array}}\right) }\sum _{\begin{array}{c} 0 \le i \le n\\ i \text { even} \end{array}} A_i K_{n-j}(i) = b_{n-j}. \end{aligned}$$

5 Extending and puncturing quasicodes

Two common practical operations performed on error-correcting codes are extending and puncturing codes, which respectively increase and decrease the block length by one. In this section, we generalize both operations to quasicodes, and use these operations to prove Theorems 4.2 and 4.3 along with a further refinement of these optima. Theorem 5.4 collates all of these results.
For a given code \({\mathcal {C}}\subseteq {\mathbb {F}}_2^n\), we may extend \({\mathcal {C}}\) to a new code \(\mathcal {C'}\subseteq {\mathbb {F}}_2^{n+1}\) by adding a parity check bit, i.e., adding a bit that ensures the sum of the bits is always even. If two codewords in \({\mathcal {C}}\) differ by an odd number of bits, then in \(\mathcal {C'}\) the corresponding codewords differ on their parity check bit as well, increasing their Hamming distance by 1; if two codewords in \({\mathcal {C}}\) differ by an even number of bits, then their Hamming distance is unchanged by extension. Hence, if A is the distance distribution of \({\mathcal {C}}\), then the distance distribution \(A'\) of \({\mathcal {C}}'\) is given by \(A_i'=0\) for all odd i, and \(A_i'=A_i+A_{i-1}\) for all even i. If the minimal distance of \({\mathcal {C}}\) is d, then the minimal distance of \({\mathcal {C}}'\) is \(2\lceil d/2\rceil \), i.e., rounding d up to the nearest even integer. We can naturally generalize the definition of extension to quasicodes by applying the same transformation taking A to \(A'\). We will thus call \(A'\) the extension of the quasicode A. While it is obvious that if \({\mathcal {C}}\) is a code, then \(\mathcal {C'}\) is also a code, it is not immediately clear whether A being a valid quasicode implies \(A'\) is a valid quasicode. However, we will prove that this is indeed true: if A is a feasible solution to the \((n,d)\) Delsarte linear program, then \(A'\) is a feasible solution to the \((n+1,2\lceil d/2\rceil )\) Delsarte linear program. To do so, we first prove the following lemma.
Lemma 5.1
For n a positive integer, \(j \in [n+1]\), and \(0 \le i \le n\), we have \(K_j(2\lceil i/2\rceil ;n+1)=K_j(i;n)+K_{n+1-j}(i;n)\).
Proof
If i is even, this immediately follows from Lemma 2.6. If \(i<n\) is odd, then by Lemmas 2.2 and 2.6,
$$\begin{aligned} K_j(i+1;n+1)&= (-1)^j K_j(n-i;n+1) =(-1)^j (K_j(n-i;n)+K_{j-1}(n-i;n)) \\ {}&= K_j(i;n)-K_{j-1}(i;n) = K_j(i;n)+K_{n+1-j}(i;n) \end{aligned}$$
as desired. If \(i=n\) is odd, then
$$\begin{aligned} K_j(n\!+\!1;n\!+\!1)&=(-1)^j\left( {\begin{array}{c}n\!+\!1\\ j\end{array}}\right) =(-1)^j\left( \left( {\begin{array}{c}n\\ j\end{array}}\right) \!+\!\left( {\begin{array}{c}n\\ j\!-\!1\end{array}}\right) \right) =K_j(n;n)-K_{j-1}(n;n)\\&=K_j(n;n)+K_{n+1-j}(n;n), \end{aligned}$$
completing the proof. \(\square \)
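Lemma 5.1 is easy to verify exhaustively for small block lengths. The following sketch (our illustration) checks the identity for all admissible (j, i) with \(n \le 8\).

```python
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

# Lemma 5.1: K_j(2*ceil(i/2); n+1) = K_j(i; n) + K_{n+1-j}(i; n).
for n in range(1, 9):
    for j in range(1, n + 2):          # j in [n + 1]
        for i in range(0, n + 1):      # 0 <= i <= n
            lhs = krawtchouk(j, 2 * ((i + 1) // 2), n + 1)
            rhs = krawtchouk(j, i, n) + krawtchouk(n + 1 - j, i, n)
            assert lhs == rhs
```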
This allows us to prove the extension of a quasicode is a quasicode.
Proposition 5.2
If A is a feasible solution to the \((n,d)\) Delsarte linear program, then its extension \(A'\) is a feasible solution to the \((n+1,2\lceil d/2\rceil )\) Delsarte linear program.
Proof
The only nontrivial constraints that we need to verify are the Delsarte inequalities. But Lemma 5.1 immediately implies that for all \(j \in [n+1]\),
$$\begin{aligned} \sum _{i=0}^{n+1} A_i' K_j(i;n+1)&\!=\! \sum _{i=0}^n A_i K_j(2\lceil i/2\rceil ;n\!+\!1) \!=\! \sum _{i=0}^n A_i K_j(i;n) \!+\! \sum _{i=0}^n A_i K_{n+1-j}(i;n) \ge 0 \end{aligned}$$
by the Delsarte inequalities for A, where if \(j=n+1\) the first sum vanishes and the second sum becomes simply \(\sum A_i \ge 0\) by the nonnegativity constraints. \(\square \)
Note that this result implies that the optimal objective value of the \((n-1,2e-1)\) Delsarte linear program is at most the optimal objective value of the (n, 2e) Delsarte LP, as any feasible solution of the \((n-1,2e-1)\) LP can be extended to a feasible solution of the (n, 2e) LP with the same objective value.
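As a small worked example (ours, not from the text): extending the full code \({\mathbb {F}}_2^3\), whose distance distribution is \(A_i=\left( {\begin{array}{c}3\\ i\end{array}}\right) \), produces the distance distribution of the length-4 even-weight code, and feasibility is preserved as Proposition 5.2 guarantees.

```python
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def extend(A):
    """Quasicode extension: A'_i = 0 for odd i, A'_i = A_i + A_{i-1} for even i."""
    n = len(A) - 1
    get = lambda i: A[i] if 0 <= i <= n else 0
    return [0 if i % 2 else get(i) + get(i - 1) for i in range(n + 2)]

def delsarte_feasible(A, n, d):
    return (A[0] == 1 and all(A[i] == 0 for i in range(1, d))
            and all(A[i] >= 0 for i in range(n + 1))
            and all(sum(A[i] * krawtchouk(j, i, n) for i in range(n + 1)) >= 0
                    for j in range(1, n + 1)))

A = [comb(3, i) for i in range(4)]        # full code of length 3, d = 1
A_ext = extend(A)
assert A_ext == [1, 0, 6, 0, 1]           # even-weight code of length 4, d = 2
assert delsarte_feasible(A, 3, 1) and delsarte_feasible(A_ext, 4, 2)
```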
We now prove the reverse direction, that any feasible solution of the \((n,d)\) LP can be mapped to a feasible solution of the \((n-1,d-1)\) LP with the same objective value. We do so by generalizing the notion of puncturing from codes to quasicodes. One may puncture a code \({\mathcal {C}}\subseteq {\mathbb {F}}_2^n\) to yield a new code \({\mathcal {C}}'\subseteq {\mathbb {F}}_2^{n-1}\) by simply removing a bit, i.e., an index. It is easy to see that if \({\mathcal {C}}\) had minimal distance \(d\ge 2\), then \({\mathcal {C}}'\) is a valid code with minimal distance at least \(d-1\). If \({\mathcal {C}}\) has support S, then \({\mathcal {C}}'\) has support \(S'\subseteq (S\cup (S-1))\setminus \{n\}\), where \(S-1:=\{i-1\mid i \in S\}\). It is important to note that the behavior of puncturing a code depends on which index is removed: especially if \({\mathcal {C}}\) is not symmetric with respect to its indices, even the distance distribution \(A'\) of \({\mathcal {C}}'\) may depend on the choice of removed index. Our definition of puncturing a quasicode, which need not be realized by a code, conveniently has no such ambiguity. If quasicode A has Krawtchouk decomposition \(b=(b_0,\dots ,b_n)\), then puncturing A yields a quasicode \(A^P\) given by the truncated Krawtchouk decomposition \(b^P=(b_0,\dots ,b_{n-1})\). We now show that if A is a feasible solution to the \((n,d)\) Delsarte linear program and has support S, then \(A^P\) is a feasible solution to the \((n-1,d-1)\) Delsarte linear program with support \(S'\subseteq (S\cup (S-1))\setminus \{n\}\), thus exhibiting the same properties as punctured codes.
Proposition 5.3
If A is a feasible solution to the \((n,d)\) Delsarte linear program for \(d \ge 2\) and has support S, then its punctured quasicode \(A^P\) is a feasible solution to the \((n-1,d-1)\) Delsarte linear program with support \(S'\subseteq (S\cup (S-1))\setminus \{n\}\).
Proof
Let b be the Krawtchouk decomposition of A. We will show \(b^P\) satisfies the necessary constraints. Using Lemma 2.7, for \(0 \le i \le n-1\),
$$\begin{aligned} \sum _{j=0}^{n-1} b_j K_j(i;n-1)&= \frac{1}{2}\sum _{j=0}^{n} b_j (K_j(i;n)+K_j(i+1;n)) - \frac{1}{2}b_n(K_n(i;n)+K_n(i+1;n))\\&= \frac{1}{2}\sum _{j=0}^n b_j(K_j(i;n)+K_j(i+1;n)) - \frac{1}{2}b_n((-1)^i+(-1)^{i+1}) \\&= \frac{1}{2}\sum _{j=0}^n b_jK_j(i;n)+\frac{1}{2}\sum _{j=0}^n b_j K_j(i+1;n). \end{aligned}$$
For \(i=0\), as \(d \ge 2\) this yields
$$\begin{aligned} \sum _{j=0}^{n-1} b_j \left( {\begin{array}{c}n-1\\ j\end{array}}\right) = \frac{1}{2}\sum _{j=0}^n b_j \left( {\begin{array}{c}n\\ j\end{array}}\right) = 2^{n-1}, \end{aligned}$$
using the constraints that b must satisfy for the \((n,d)\) linear program. These constraints for b also imply that for \(i \in [d-2]\), this expression equals 0, and more generally for \(i \in [n-1]\) it is nonnegative. In fact, if positive integer \(i \not \in (S\cup (S-1))\setminus \{n\}\), then as \(A_i=A_{i+1}=0\), this implies \(A_i^P = \frac{\left( {\begin{array}{c}n-1\\ i\end{array}}\right) }{2^{n-1}}\sum b_j K_j(i;n-1) = 0\). Hence, \(b^P\) is a feasible solution to the \((n-1,d-1)\) Delsarte linear program with support \(S'\subseteq (S\cup (S-1))\setminus \{n\}\). \(\square \)
Proposition 5.3 implies that the optimal objective value of the \((n,d)\) Delsarte linear program is at most the optimal objective value of the \((n-1,d-1)\) Delsarte LP, as any feasible solution of the \((n,d)\) LP can be punctured to a feasible solution of the \((n-1,d-1)\) LP with the same objective value, since \(b_0\) is unchanged. Combined with Proposition 5.2, this implies the \((n-1,2e-1)\) and (n, 2e) Delsarte linear programs have the same optimal objective value. This now enables us to prove Theorems 4.2 and 4.3.
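Continuing the extension example above (our illustration): puncturing via the truncated Krawtchouk decomposition inverts the extension of the full length-3 code and preserves the objective value \(b_0\).

```python
from fractions import Fraction
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def decompose(A, n):
    return [Fraction(1, comb(n, j)) * sum(A[i] * krawtchouk(j, i, n) for i in range(n + 1))
            for j in range(n + 1)]

def quasicode_from_b(b, n):
    return [Fraction(comb(n, i), 2 ** n) * sum(b[j] * krawtchouk(j, i, n) for j in range(n + 1))
            for i in range(n + 1)]

def puncture(A, n):
    """Drop b_n from the Krawtchouk decomposition and recompose at length n - 1."""
    return quasicode_from_b(decompose(A, n)[:-1], n - 1)

A_ext = [1, 0, 6, 0, 1]                      # even-weight code of length 4
b = decompose(A_ext, 4)
assert b == [8, 0, 0, 0, 8]                  # symmetric decomposition
assert puncture(A_ext, 4) == [1, 3, 3, 1]    # recovers the full length-3 code
assert sum(puncture(A_ext, 4)) == sum(A_ext) == 8   # objective b_0 preserved
```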
Proof of Theorem 4.3
Consider an optimum \(A^*\) to the \((n-1,2e-1)\) LP. By Proposition 5.2, extending \(A^*\) to \(A^E\) gives an optimal quasicode to the (n, 2e) LP. In particular, extension creates an even quasicode, and thus the corresponding Krawtchouk decomposition \(b^E\) is symmetric. \(\square \)
To prove Theorem 4.2, we simply puncture \(A^E\).
Proof of Theorem 4.2
Puncturing \(A^E\) yields an optimum \(A^O\) to the \((n-1,2e-1)\) LP, and the corresponding Krawtchouk decompositions \(b^E\) and \(b^O\) agree on indices 0 through \(n-1\), inclusive. \(\square \)
In fact, there are further patterns for \(A^E\) and \(A^O\), as seen in the following result.
Theorem 5.4
For all positive integers n and e such that \(2e \le n\), there exist optima \(A^E\) and \(A^O\) of the (n, 2e) and \((n-1,2e-1)\) Delsarte linear programs, respectively, with corresponding Krawtchouk decompositions \(b^E\) and \(b^O\), such that the following properties hold:
(i)
For all \(0 \le j \le n\), we have \(b_j^E=b_{n-j}^E\).
(ii)
\(b^O=(b_0^E,\dots ,b_{n-1}^E)\).
(iii)
For all even \(i \in [n]\), we have \(A_i^E = A_{i-1}^O + A_i^O\) and \(A_{i-1}^O \left( {\begin{array}{c}n-1\\ i\end{array}}\right) =A_i^O\left( {\begin{array}{c}n-1\\ i-1\end{array}}\right) \), where we define \(A_i^O=0\) if \(i\not \in [0,n-1]\).
(iv)
If \(A^O\) is the unique optimum for \((n-1,2e-1)\), then \(A^E\) is the unique optimum for (n, 2e).
Proof
We have already shown we may choose optima \(A^E\) and \(A^O\) so that the first two conditions hold. We now address the third property. We trivially have \(A_0^E = A_0^O = 1\). For even \(i \in [0,n-1]\), using the Krawtchouk recurrence from Lemma 2.6 as well as the symmetry of \(b^E\),
$$\begin{aligned} \sum _{j=0}^n b_j^E K_j(i;n)&= \sum _{j=0}^n b_j^E (K_j(i;n-1)+K_{j-1}(i;n-1)) \\ {}&= \sum _{j=0}^{n} \left( b_j^E K_j(i;n-1) + b_{n-j}^E (-1)^i K_{n-j}(i;n-1)\right) \\&= 2 \sum _{j=0}^{n-1} b_j^E K_j(i;n-1). \end{aligned}$$
For odd \(i \in [n-1]\), a similar argument yields
$$\begin{aligned} 2 \sum _{j=0}^{n-1} b_j^E K_j(i;n-1)&= \sum _{j=0}^n b_j^E K_j(i;n-1) - \sum _{j=0}^n b_{n-j}^E K_{n-j-1}(i;n-1) \\ {}&= \sum _{j=0}^n b_j^E \left( K_j(i;n-1)-K_{j-1}(i;n-1)\right) \\ {}&= \sum _{j=0}^n (-1)^j b_j^E (K_j(n-i-1;n-1)+K_{j-1}(n-i-1;n-1)) \\ {}&= \sum _{j=0}^n (-1)^j b_j^E K_j(n-i-1;n) = \sum _{j=0}^n b_j^E K_j(i+1;n). \end{aligned}$$
Combining these, we find for even \(i \in [n-1]\),
$$\begin{aligned} A_{i-1}^O + A_i^O&= \frac{1}{2^{n-1}}\left( \left( {\begin{array}{c}n-1\\ i-1\end{array}}\right) \sum _{j=0}^{n-1}b_j^E K_j(i-1;n-1) + \left( {\begin{array}{c}n-1\\ i\end{array}}\right) \sum _{j=0}^{n-1} b_j^E K_j(i;n-1)\right) \\&= \frac{1}{2^n}\left( \left( {\begin{array}{c}n-1\\ i-1\end{array}}\right) \sum _{j=0}^n b_j^E K_j(i;n) + \left( {\begin{array}{c}n-1\\ i\end{array}}\right) \sum _{j=0}^n b_j^E K_j(i;n)\right) \\ {}&= \frac{\left( {\begin{array}{c}n\\ i\end{array}}\right) }{2^n}\sum _{j=0}^n b_j^E K_j(i;n) = A_i^E. \end{aligned}$$
Additionally, this shows
$$\begin{aligned} \frac{A_{i-1}^O}{A_i^O}=\frac{\left( {\begin{array}{c}n-1\\ i-1\end{array}}\right) }{\left( {\begin{array}{c}n-1\\ i\end{array}}\right) }, \end{aligned}$$
as desired. If n is odd, the proof is complete; however, if n is even, it remains to show \(A_n^E = A_{n-1}^O\) and \(A_{n-1}^O \left( {\begin{array}{c}n-1\\ n\end{array}}\right) =A_n^O \left( {\begin{array}{c}n-1\\ n-1\end{array}}\right) \). The latter can be immediately addressed because both sides equal zero, as \(\left( {\begin{array}{c}n-1\\ n\end{array}}\right) =A_n^O=0\). For the former, as \(n-1\) is odd, using the same argument as before gives
$$\begin{aligned} A_{n-1}^O = \frac{1}{2^{n-1}}\sum _{j=0}^{n-1} b_j^E K_j(n-1;n-1) = \frac{1}{2^n}\sum _{j=0}^n b_j^E K_j(n;n) = A_n^E, \end{aligned}$$
completing the proof of the third property.
Finally, for the fourth property, if \(A^O\) is the unique optimum for \((n-1,2e-1)\), then consider any optimum \(A^*\) of the (n, 2e) linear program. Puncturing \(A^*\) must yield an optimum for \((n-1,2e-1)\), which can only be \(A^O\). Thus the Krawtchouk decomposition \(b^*\) of \(A^*\) must satisfy \(b^*_i = b_i^O\) for all \(0 \le i \le n-1\). The first constraint of the Krawtchouk decomposition LP then fixes \(b^*_n\) as well, implying there is a unique optimum to the (n, 2e) linear program, which must be \(A^E\). \(\square \)
Hence, extending \(A^O\) yields \(A^E\), and puncturing \(A^E\) yields \(A^O\).
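For the pair (n, 2e) = (4, 2) and \((n-1,2e-1)=(3,1)\), the optima from Theorems 3.6 and 3.7 realize the properties of Theorem 5.4; the sketch below (our illustration) checks properties (i) through (iii) directly.

```python
from fractions import Fraction
from math import comb

def krawtchouk(j, i, n):
    return sum((-1) ** k * comb(i, k) * comb(n - i, j - k) for k in range(j + 1))

def decompose(A, n):
    return [Fraction(1, comb(n, j)) * sum(A[i] * krawtchouk(j, i, n) for i in range(n + 1))
            for j in range(n + 1)]

n = 4
A_E = [1, 0, 6, 0, 1]        # optimum of (4, 2), an even quasicode
A_O = [1, 3, 3, 1]           # optimum of (3, 1)
b_E, b_O = decompose(A_E, n), decompose(A_O, n - 1)

assert all(b_E[j] == b_E[n - j] for j in range(n + 1))      # (i) symmetry
assert b_O == b_E[:-1]                                      # (ii) truncation
AO = lambda t: A_O[t] if 0 <= t <= n - 1 else 0
for i in range(2, n + 1, 2):                                # (iii) even i in [n]
    assert A_E[i] == AO(i - 1) + AO(i)
    assert AO(i - 1) * comb(n - 1, i) == AO(i) * comb(n - 1, i - 1)
```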
The fourth condition implies that if \((n-1,2e-1)\) has a unique optimum, then so does (n, 2e); however, the converse does not hold in general. As shown in Section 3.5, the (18, 6) Delsarte linear program has a unique optimum, but (17, 5) does not.
The third property of Theorem 5.4 follows from the symmetry of \(b^E\), which requires \(b^O\) to satisfy a truncated symmetry. Constraining the (n, 2e) and \((n-1,2e-1)\) Delsarte linear programs to have symmetry and truncated symmetry, respectively, i.e., \(b_j = b_{n-j}\), results in a stronger parity phenomenon: these two symmetry-constrained linear programs are equivalent, i.e., have the same feasible region and objective.
Proposition 5.5
For all positive integers n and e such that \(2e \le n\), the symmetry-constrained (n, 2e) and \((n-1,2e-1)\) Delsarte linear programs, where we add the constraint that \(b_j=b_{n-j}\) for all \(0 \le j \le n\), are equivalent.
Proof
We may write the symmetry-constrained (n, 2e) Delsarte LP as
$$\begin{aligned} \max \quad&b_0 \\ \text {such that} \quad&\sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) b_j = 2^n&(\star ) \\&\sum _{j=0}^n b_j K_j(i;n) \ge 0 \quad \text {for all } i \in [n]&(\star \star ) \\&\sum _{j=0}^n b_j K_j(i;n) = 0 \quad \text {for all } i \in [2e-1]&(\star \star \star ) \\&b_j = b_{n-j} \quad \text {for all } 0 \le j \le n \\&b_j \ge 0 \quad \text {for all } j \in [n]. \end{aligned}$$
Similarly, we impose the symmetry condition on the \((n-1,2e-1)\) Delsarte LP, yielding
$$\begin{aligned} \max \quad&b_0 \\ \text {such that} \quad&\sum _{j=0}^{n-1} \left( {\begin{array}{c}n-1\\ j\end{array}}\right) b_j = 2^{n-1}&(*) \\&\sum _{j=0}^{n-1} b_j K_j(i;n-1) \ge 0 \quad \text {for all } i \in [n-1]&(**) \\&\sum _{j=0}^{n-1} b_j K_j(i;n-1) = 0 \quad \text {for all } i \in [2e-2]&(***) \\&b_j = b_{n-j} \quad \text {for all } j \in [n-1] \\&b_j \ge 0 \quad \text {for all } j \in [n-1]. \end{aligned}$$
In the \((n-1,2e-1)\) Delsarte LP, we may add the variable \(b_n\) with the constraint \(b_n = b_0 \ge 0\) without affecting the LP, allowing the symmetry and non-negativity conditions, i.e., the last two constraints, of the two LPs to be identical, using the same decision variables \(b_0,\dots ,b_n\).
We first show that constraint (\(\star \)) is equivalent to constraint (\(*\)). Using the same argument as in the proof of Theorem 5.4,
$$\begin{aligned} \sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) b_j = \sum _{j=0}^n b_j K_j(0;n) = 2 \sum _{j=0}^{n-1} b_j K_j(0;n-1) = 2 \sum _{j=0}^{n-1} \left( {\begin{array}{c}n-1\\ j\end{array}}\right) b_j, \end{aligned}$$
which implies (\(\star \)) is logically equivalent to (\(*\)).
The (\(\star \star \)) and (\(\star \star \star \)) constraints are structurally identical, so we will address them together. Firstly, for odd \(i \in [n]\), notice that by symmetry,
$$\begin{aligned} \sum _{j=0}^n b_j K_j(i;n) = \sum _{j=0}^n b_{n-j} (-1)^i K_{n-j}(i;n) = - \sum _{j=0}^n b_j K_j(i;n), \end{aligned}$$
which implies this sum must equal 0, and thus (\(\star \star \)) and (\(\star \star \star \)) trivially hold for odd i, and these constraints can be removed from the (n, 2e) LP. For even \(i\in [n-1]\), by the proof of Theorem 5.4,
$$\begin{aligned} \sum _{j=0}^n b_j K_j(i;n)&= 2 \sum _{j=0}^{n-1} b_j K_j(i;n-1), \end{aligned}$$
so for even \(i\in [n-1]\), the (\(\star \star \)) and (\(\star \star \star \)) constraints are equivalent to the corresponding (\(**\)) and (\(***\)) constraints. For odd \(i \in [n-1]\),
$$\begin{aligned} 2 \sum _{j=0}^{n-1} b_j K_j(i;n-1)&= \sum _{j=0}^n b_j K_j(i+1;n). \end{aligned}$$
Thus, the constraint for odd i in the \((n-1,2e-1)\) LP is equivalent to the constraint for even \(i+1\) in the (n, 2e) LP. Hence, using the symmetry constraints, the (\(\star \star \star \)) constraints hold for all \(i\in [2e-1]\) if and only if they hold for all even \(i \in [2e-2]\), which is equivalent to the (\(***\)) constraint holding for all \(i \in [2e-2]\).
The (\(\star \star \)) and (\(**\)) constraints are also equivalent, though we must take more care with the parity of n. If n is even, the relevant (\(\star \star \)) constraints are those for even \(i \in [n]\): the constraints for even \(i\in [n-2]\) are equivalent to the (\(**\)) constraints for \(i \in [n-2]\), and the (\(\star \star \)) constraint for \(i=n\) is equivalent to the (\(**\)) constraint for \(i=n-1\); thus the (\(\star \star \)) constraints are equivalent to the (\(**\)) constraints. If n is odd, the relevant (\(\star \star \)) constraints are again the even ones, i.e., those for even \(i \in [n-1]\), which by the same reasoning are equivalent to the (\(**\)) constraints for all \(i \in [n-1]\).
This shows that the two symmetry-constrained LPs are equivalent. \(\square \)
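The odd-\(i\) vanishing used in the proof, which rests on the reflection identity \(K_{n-j}(i;n) = (-1)^i K_j(i;n)\), can also be sanity-checked numerically. The helper below is an illustrative sketch (names are ours, not from the paper):

```python
from math import comb

def krawtchouk(j, i, n):
    """Binary Krawtchouk polynomial K_j(i; n)."""
    return sum((-1) ** t * comb(i, t) * comb(n - i, j - t) for t in range(j + 1))

def odd_constraint_sum(b, i):
    """sum_j b_j K_j(i; n), which vanishes for odd i when b is symmetric."""
    n = len(b) - 1
    return sum(b[j] * krawtchouk(j, i, n) for j in range(n + 1))

n = 4
b = [1, 2, 3, 2, 1]  # symmetric: b_j = b_{n-j}
# reflection identity K_{n-j}(i; n) = (-1)^i K_j(i; n)
assert all(krawtchouk(n - j, i, n) == (-1) ** i * krawtchouk(j, i, n)
           for i in range(n + 1) for j in range(n + 1))
print([odd_constraint_sum(b, i) for i in (1, 3)])  # [0, 0]
```

As the proof argues, pairing \(j\) with \(n-j\) makes the odd-\(i\) sum its own negation, so those constraints are vacuous in the symmetry-constrained (n, 2e) LP.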

Acknowledgements

We sincerely thank Henry Cohn for his input throughout the research process as well as editing feedback. We are grateful to the anonymous referees who made helpful suggestions.
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Metadata
Title
Unique optima of the Delsarte linear program
Author
Rupert Li
Publication date
29.01.2023
Publisher
Springer US
Published in
Designs, Codes and Cryptography / Issue 6/2023
Print ISSN: 0925-1022
Electronic ISSN: 1573-7586
DOI
https://doi.org/10.1007/s10623-023-01191-y
