This post provides some background for a series of planned follow-up posts on recent progress in constructing maximal- or large-determinant binary matrices.

How big can the determinant of a matrix be if its elements are restricted to the set {0, 1}?  How big can it be if the elements are restricted to the set {1, −1} instead?  These two questions are equivalent.  To see why, consider an n×n matrix with elements taken from {1, −1} that has no elements −1 in its first row or column.  A series of determinant-preserving transformations produces an (n−1)×(n−1) matrix with elements taken from {0, 1}, multiplied by the scalar $(-2)^{n-1}$:

$\left[\begin{array}{c|ccc}1 & 1 & 1 & 1\\ \hline 1 & 1 & 1 & -1\\1 & -1 & -1 & -1\\1 & 1 & -1 & -1\end{array}\right]\rightarrow\left[\begin{array}{c|ccc}1 & 1 & 1 & 1\\ \hline 0 & 0 & 0 & -2\\ 0 & -2 & -2 & -2\\ 0 & 0 & -2 & -2\end{array}\right]\rightarrow(-2)^{n-1}\begin{bmatrix}0 & 0 & 1\\1 & 1 & 1\\ 0 & 1 & 1\end{bmatrix}.$

In the first transformation, the first row was subtracted from each subsequent row; in the second transformation, the first row and column were deleted and the factor −2 was pulled out of each remaining row.
A {1, −1} matrix with no element −1 in row or column 1 is called normalized.  Any {1, −1} matrix may be brought to normalized form by multiplying rows and columns by −1 as necessary.  Normalization changes the determinant by at most a sign.  As a consequence, the maximal determinant of an n×n matrix with elements in {1, −1} equals the maximal determinant of an (n−1)×(n−1) matrix with elements in {0, 1}, multiplied by $2^{n-1}$.
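As a quick illustration, here is a sketch of this reduction in numpy (the helper names are my own), applied to the 4×4 matrix from the example above:

```python
import numpy as np

def normalize(R):
    """Negate rows/columns so that row 1 and column 1 contain no -1."""
    R = R.copy()
    R[R[:, 0] == -1, :] *= -1   # fix column 1
    R[:, R[0, :] == -1] *= -1   # fix row 1
    return R

def to_zero_one(R):
    """Reduce a normalized n x n {1,-1} matrix to an (n-1) x (n-1) {0,1} matrix."""
    S = R - R[0, :]             # subtract row 1 from every row
    return S[1:, 1:] // -2      # delete row/column 1, pull -2 out of each row

n = 4
R = np.array([[1, 1, 1, 1],
              [1, 1, 1, -1],
              [1, -1, -1, -1],
              [1, 1, -1, -1]])
B = to_zero_one(normalize(R))
# det R = (-2)^(n-1) * det B
assert round(np.linalg.det(R)) == (-2) ** (n - 1) * round(np.linalg.det(B))
```

Running this on the matrix above reproduces the {0, 1} matrix shown in the displayed equation.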

There are certain advantages to working with {1, −1} matrices.  Every row and column has vector norm $\sqrt{n}.$  By Hadamard’s determinant bound, the determinant can therefore be at most $n^{n/2}.$  This bound is attained only when the rows are pairwise orthogonal.  Since the inner product of two rows is odd when n is odd, the bound is never attained when n is odd and greater than 1.  We can go beyond this observation with the help of the following result:

Lemma 1.  Let i, j, k be {1, −1} vectors of length n.  Then i·j + i·k + j·k ≡ −n (mod 4).

Proof.  Define

• a = the number of coordinate positions in which i, j, and k have the same sign;
• b = the number of coordinate positions in which i and j have the same sign and k has opposite sign;
• c = the number of coordinate positions in which i and k have the same sign and j has opposite sign;
• d = the number of coordinate positions in which j and k have the same sign and i has opposite sign.

Then

$\begin{aligned}a+b+c+d&=n,\\ a+b-c-d&=i\cdot j,\\ a-b+c-d&=i\cdot k,\\ a-b-c+d&=j\cdot k.\end{aligned}$

Adding the four equations and subtracting n from both sides yields 4a − n = i·j + i·k + j·k.  Since 4a − n ≡ −n (mod 4), this establishes the result.
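A quick numerical sanity check of Lemma 1 (not a substitute for the proof), using numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in range(1, 20):
    # three random {1, -1} vectors of length n
    i, j, k = rng.choice([1, -1], size=(3, n))
    s = int(i @ j + i @ k + j @ k)
    assert (s + n) % 4 == 0   # i.e. i.j + i.k + j.k == -n (mod 4)
```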

A consequence of Lemma 1 is that if j and k are both orthogonal to i then j·k is congruent to −n (mod 4).  If n ≡ 2 (mod 4), this means that j and k are not orthogonal.   In other words, when n ≡ 2 (mod 4) it is impossible to have three pairwise orthogonal rows.  Therefore Hadamard’s bound cannot be attained when n > 2 and n ≡ 2 (mod 4).

This leaves 1, 2, and multiples of 4 as the matrix sizes in which Hadamard’s bound might be attained.  Matrices that do so are known as Hadamard matrices.  It is believed that Hadamard matrices exist of every allowed size.  This is the famous Hadamard conjecture.  (It is not clear that Hadamard actually made this conjecture; it certainly appears in Paley’s work on Hadamard matrices.  Interestingly, Wojtas attributes the conjecture to Sylvester, whose work preceded Hadamard’s by 26 years, but I have not been able to locate any statement concerning Hadamard matrices of non-power-of-2 sizes in Sylvester’s work.)

Can improvements on Hadamard’s bound be obtained when n > 2 and n congruent to 1, 2, or 3 (mod 4)?  The answer is yes, and such improvements were achieved by Barba, Ehlich, and Wojtas.  Let R be an n×n matrix with elements  in {1, −1}.  The key idea is to consider the Gram matrix G formed by the columns of R, that is, the matrix $G=R^TR$.  One could equally well consider the Gram matrix G′ formed by the rows of R, that is $G'=RR^T$.  The two Gram matrices are not equal in general—or even equivalent—and in some contexts it is useful to consider both of them at once.  (We define G and H to be equivalent if there is a permutation, applied both to rows and columns, followed by a set of negations, applied both to rows and columns, that transforms G into H.)

The Gram matrix is symmetric, positive-semidefinite, and has diagonal elements equal to n.  The columns of R may be partitioned into a set A consisting of those columns of R whose inner product with column 1 is congruent to n (mod 4), and a set B, possibly empty, consisting of those columns of R whose inner product with column 1 is congruent to n+2 (mod 4).  (Clearly A is non-empty since column 1 itself is in A.)  Lemma 1 implies that any two columns in A have inner product congruent to n (mod 4), that any two columns in B have inner product congruent to n (mod 4), and that any column of A has inner product congruent to n+2 (mod 4) with any column of B.  We may therefore assume, by reordering columns of R as necessary, that G can be partitioned as

$G=\begin{bmatrix}S & U\\ U^T & T\end{bmatrix},$

where S and T are symmetric, positive-semidefinite matrices with diagonal elements equal to n and off-diagonal elements congruent to n (mod 4), and U is a matrix with elements congruent to n + 2 (mod 4).
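To make the partition concrete, here is a small numpy sketch (the helper name is my own) applied to the 4×4 matrix from the opening example:

```python
import numpy as np

def partition_columns(R):
    """Split column indices into A (inner product with column 1 congruent to
    n mod 4) and B (inner product congruent to n + 2 mod 4)."""
    n = R.shape[0]
    G = R.T @ R
    A = [j for j in range(n) if (G[0, j] - n) % 4 == 0]
    B = [j for j in range(n) if (G[0, j] - n - 2) % 4 == 0]
    return A, B

R = np.array([[1, 1, 1, 1],
              [1, 1, 1, -1],
              [1, -1, -1, -1],
              [1, 1, -1, -1]])
A, B = partition_columns(R)
G = R.T @ R
n = R.shape[0]
# entries of the off-diagonal block U are congruent to n + 2 (mod 4)
for a in A:
    for b in B:
        assert (G[a, b] - n - 2) % 4 == 0
```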

When n is even, this structure of G is independent of the normalization of the columns of R.  When n is odd, however, negating the columns of R in the set B results in a matrix for which all elements of G are congruent to n (mod 4).  Hence, by normalizing the columns of R in a suitable manner, we may always assume B to be empty.  The same normalization may also be carried out for the rows of R, so that the elements of G′ can be assumed congruent to n (mod 4) as well.  With such normalization in rows and columns, R is said to be parity normalized.

We will use the term candidate Gram matrix to denote an n×n symmetric, positive-semidefinite matrix $((g_{ij}))$ such that $g_{ii}=n$ and $g_{ij}+g_{ik}+g_{jk}\equiv -n\pmod{4}$ for all distinct i, j, k.  That is, a candidate Gram matrix satisfies the properties of actual Gram matrices described in the preceding paragraphs, including having elements compatible with Lemma 1.  When n is odd, any candidate Gram matrix is equivalent to a candidate Gram matrix all of whose elements satisfy $g_{ij}\equiv n\pmod{4}$.  When n is even, any candidate Gram matrix is equivalent to a partitioned matrix with blocks S, T, U satisfying the properties described above.  Unless otherwise stated, we will impose these additional requirements from now on.

There are properties of actual Gram matrices that we have not imposed on candidate Gram matrices.  For example, actual Gram matrices must have perfect-square determinant.  They must also satisfy criteria implied by the Hasse–Minkowski theory of rational equivalence of quadratic forms, which will be discussed in a future post.  The criteria that we have imposed are the ones that are locally true, that is, that are true of all of the principal minors of the matrix.  Having perfect-square determinant is not a local property.

One property of actual Gram matrices that candidate Gram matrices automatically do satisfy is that their determinant is divisible by $4^{n-1}$.   This follows for n odd by subtracting row 1 of the candidate Gram matrix from each subsequent row and observing that each row except the first is then divisible by 4.  It follows for n even by subtracting the first row of each partition from each subsequent row in that partition, and then observing that the first row in each partition is divisible by 2, while each subsequent row in each partition is divisible by 4.

Ehlich and Wojtas independently showed in 1964 that when n ≡ 2 (mod 4), the candidate Gram matrix with maximal determinant has S and T both of size n/2, U = 0, and S and T both equivalent to the matrix $(n-2)I_{n/2}+2J_{n/2}$, where $I_j$ is the j×j identity matrix and $J_j$ is the j×j all-ones matrix.  In other words, the candidate Gram matrix with maximal determinant is equivalent to

$G_2(n)=I_2\otimes\left((n-2)I_{n/2}+2J_{n/2}\right).$

I plan to write a follow-up post describing this result in detail. Ehlich also found that a necessary condition for a decomposition $G_2(n)=R^TR$ to exist, with R a {1, −1} matrix,  is that n − 1 be the sum of two squares.  In addition, he produced examples of such matrices R for all n up to 38, excluding 22 and 34, for which examples cannot exist since they do not satisfy the sum-of-two-squares condition.  Ehlich and Wojtas’s result establishes for n ≡ 2 (mod 4) that

$\lvert\det R\rvert\le\sqrt{\det G_2(n)}=(2n-2)(n-2)^{(n-2)/2}.$
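A sketch (numpy; `G2` is my own helper name) that builds $G_2(n)$ and confirms the stated value of $\sqrt{\det G_2(n)}$ for small n:

```python
import numpy as np

def G2(n):
    """The Ehlich-Wojtas optimal candidate Gram matrix for n == 2 (mod 4)."""
    assert n % 4 == 2
    m = n // 2
    block = (n - 2) * np.eye(m) + 2 * np.ones((m, m))
    return np.kron(np.eye(2), block)   # I_2 tensor ((n-2)I + 2J)

for n in [6, 10, 14]:
    bound = (2 * n - 2) * (n - 2) ** ((n - 2) // 2)
    assert round(np.sqrt(np.linalg.det(G2(n)))) == bound
```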

In 1933, Guido Barba published a strengthening of Hadamard’s bound for odd n.  I have not studied the arguments of his article, which is in Italian, in depth, but Barba appears to state that, among symmetric, positive-semidefinite matrices with odd elements and diagonal elements equal to n, the determinant is maximized by matrices equivalent to $G_1(n)=(n-1)I_n+J_n$.  This implies that

$\lvert\det R\rvert\le\sqrt{\det G_1(n)}=\sqrt{2n-1}\,(n-1)^{(n-1)/2}$

for any {1, −1} matrix R of odd size n.  Ehlich, apparently unaware of Barba’s result, reproduced it in his 1964 paper.  Wojtas, on the other hand, cites Barba’s paper and its result.

There can be no {1, −1} matrix R satisfying $R^TR=G_1(n)$ if 2n − 1 is not a perfect square.  Equivalently, there can be no such R if n is not the sum of two consecutive squares.  Supposing that $n=q^2+(q+1)^2$, solutions to $R^TR=G_1(n)$ are equivalent to symmetric balanced incomplete block designs with parameters $(v,k,\lambda)=(n,q^2,q(q-1)/2)$.  Such designs are known for all odd prime q, due to a construction of Brouwer, and for a finite number of other sizes.  The smallest q for which a design is not known is 6, which corresponds to n = 85.  Observe that the values of n for which Barba’s bound is attainable are all congruent to 1 (mod 4).
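As a small computational aside (the helper name is mine), the odd sizes up to 100 at which Barba’s bound could be attained can be listed by testing whether 2n − 1 is a perfect square:

```python
import math

def barba_attainable(limit):
    """Odd n <= limit with 2n - 1 a perfect square, i.e. n = q^2 + (q+1)^2."""
    return [n for n in range(1, limit + 1, 2)
            if math.isqrt(2 * n - 1) ** 2 == 2 * n - 1]

print(barba_attainable(100))   # -> [1, 5, 13, 25, 41, 61, 85]
```

All the listed values are congruent to 1 (mod 4), in line with the observation above.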

That the matrix $G_1(n)$ does not have all elements congruent to n (mod 4) when n is congruent to 3 (mod 4) suggests that Barba’s bound can be improved in this case.  In a sequel to his paper on the n ≡ 2 (mod 4) bound, Ehlich proves that this is in fact so.  He shows that among candidate Gram matrices of size congruent to 3 (mod 4),  the determinant is maximized by particular instances of matrices of a special form that we call Ehlich block form.  Let D(a, b, c, …) denote the block-diagonal matrix whose diagonal blocks are all-ones matrices of sizes a, b, c, …  Then the matrix E(a, b, c, …) = (n − 3)I − J + 4D(a, b, c, …) is said to be of Ehlich block form.  It has diagonal elements n, and off-diagonal elements 3 or −1, with the 3s appearing in blocks along the diagonal of the matrix.  For example,

$E(2,2,2,1)=\begin{bmatrix}7 & 3 & -1 & -1 & -1 & -1 & -1\\ 3 & 7 & -1 & -1 & -1 & -1 & -1\\ -1 & -1 & 7 & 3 & -1 & -1 & -1\\ -1 & -1 & 3 & 7 & -1 & -1 & -1\\ -1 & -1 & -1 & -1 & 7 & 3 & -1\\ -1 & -1 & -1 & -1 & 3 & 7 & -1\\ -1 & -1 & -1 & -1 & -1 & -1 & 7\end{bmatrix}$

is the Gram matrix of the determinant-maximizing {1, −1} matrix of size 7.  Furthermore, Ehlich proved that the optimal block sizes are determined by the following rules:

• there are s blocks of sizes $r=\lfloor n/s\rfloor$ or $r+1$, with v = n − sr of the latter, and u = s − v of the former;
• the number of blocks, s, is given by the following table.
| n | 3 | 7 | 11 | 15 ≤ n ≤ 59 | 63 ≤ n |
|---|---|---|--------|-------------|--------|
| s | 3 | 5 | 5 or 6 | 6 | 7 |

This result implies the bound

$\lvert\det R\rvert\le(n-3)^{(n-s)/2}(n-3+4r)^{u/2}(n+1+4r)^{v/2}\sqrt{1 - \frac{ur}{n-3+4r} - \frac{v(r+1)}{n+1+4r}}.$
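The rules and the bound can be encoded as a short sketch (my own helper `ehlich_bound`, hard-coding the table of s values; for n = 11 the two choices of s give the same value):

```python
import math

def ehlich_bound(n):
    """Ehlich's upper bound on |det R| for an n x n {1,-1} matrix, n == 3 (mod 4)."""
    assert n % 4 == 3
    if n == 3:
        s = 3
    elif n == 7:
        s = 5
    elif n == 11:
        s = 5            # s = 6 yields the same bound
    elif n <= 59:
        s = 6
    else:
        s = 7
    r, v = divmod(n, s)  # v = n - s*r blocks of size r + 1
    u = s - v            # u blocks of size r
    return ((n - 3) ** ((n - s) / 2)
            * (n - 3 + 4 * r) ** (u / 2)
            * (n + 1 + 4 * r) ** (v / 2)
            * math.sqrt(1 - u * r / (n - 3 + 4 * r)
                          - v * (r + 1) / (n + 1 + 4 * r)))
```

For n = 7 this returns $\sqrt{84\cdot 4^6}\approx 586.57$, the square root of the determinant of Ehlich’s optimal candidate Gram matrix of that size.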

The methods that Ehlich used to obtain this result are well worth examining in detail, and I will do so in a subsequent post.

In size 7, Ehlich’s optimal candidate Gram matrix, E(2,2,1,1,1), satisfies $\det E(2,2,1,1,1)/4^6=84$, whereas the Gram matrix, E(2,2,2,1), of the actual maximal-determinant {1, −1} matrix satisfies $\det E(2,2,2,1)/4^6=81$.  Clearly there can be no decomposition $R^TR=E(2,2,1,1,1)$ since 84 is not a perfect square.  Cohn has determined the conditions under which Ehlich’s optimal candidate Gram matrix has perfect-square determinant; Tamura has carried out a detailed analysis using the Hasse–Minkowski criterion of the conditions under which Ehlich’s optimal candidate Gram matrix might decompose as $G=R^TR$.  One consequence is that, apart from n = 3, the smallest n for which a decomposition is possible is 511.  At present, Ehlich’s bound is not known to be attained for any n > 3.  A future post will discuss these analyses.
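These two determinant values are easy to reproduce numerically (numpy; `E` is my own helper name for the Ehlich block-form constructor):

```python
import numpy as np

def E(*blocks):
    """The Ehlich block-form matrix (n-3)I - J + 4D(blocks)."""
    n = sum(blocks)
    D = np.zeros((n, n))
    i = 0
    for b in blocks:
        D[i:i + b, i:i + b] = 1   # all-ones diagonal block of size b
        i += b
    return (n - 3) * np.eye(n) - np.ones((n, n)) + 4 * D

assert round(np.linalg.det(E(2, 2, 1, 1, 1))) == 84 * 4 ** 6
assert round(np.linalg.det(E(2, 2, 2, 1))) == 81 * 4 ** 6   # = 576^2
```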

It is interesting to see how the improved bounds of Barba, Ehlich and Wojtas, and Ehlich compare with the Hadamard bound, $h(n)=n^{n/2}$.  We compute that

• the Barba bound for n ≡ 1 (mod 4) is asymptotically $\sqrt\frac{2}{e}\,h(n)\approx 0.8577639\,h(n);$
• the Ehlich–Wojtas bound for n ≡ 2 (mod 4) is asymptotically $\frac{2}{e}\,h(n)\approx 0.7357589\,h(n);$
• the Ehlich bound for n ≡ 3 (mod 4) is asymptotically $\frac{2\cdot11^3}{7^3}\sqrt\frac{1}{7e^3}\,h(n)\approx 0.6545204\,h(n).$
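These constants can be spot-checked numerically (a sketch; working with logarithms avoids overflow of $n^{n/2}$ at large n):

```python
import math

# logs of h(n) = n^(n/2) and of the Barba and Ehlich-Wojtas bounds
def log_h(n):
    return 0.5 * n * math.log(n)

def log_barba(n):
    return 0.5 * math.log(2 * n - 1) + 0.5 * (n - 1) * math.log(n - 1)

def log_ew(n):
    return math.log(2 * n - 2) + 0.5 * (n - 2) * math.log(n - 2)

n = 10 ** 6 + 1   # a large n == 1 (mod 4)
assert abs(math.exp(log_barba(n) - log_h(n)) - math.sqrt(2 / math.e)) < 1e-5

n = 10 ** 6 + 2   # a large n == 2 (mod 4)
assert abs(math.exp(log_ew(n) - log_h(n)) - 2 / math.e) < 1e-5

# the quoted Ehlich constant
assert abs(2 * 11 ** 3 / 7 ** 3 * math.sqrt(1 / (7 * math.e ** 3)) - 0.6545204) < 1e-6
```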

The Gram matrices of maximal-determinant matrices can be quite a bit more complicated than are the candidate Gram matrices used to obtain the upper bounds.  Consider, for example, n = 9, 11, 17, 19, 21, and 37.  Current record-determinant matrices also provide interesting examples, such as n = 22, 23, 29, and 33.  All known maximal-determinant matrices have the property that their Gram matrices $G=R^TR$ and $G'=RR^T$ are equivalent.  The smallest example of a record-determinant matrix for which this is not the case is n = 45.

References

1. G. Barba, Intorno al teorema di Hadamard sui determinanti a valore massimo, Giorn. Mat. Battaglini 71 (1933) 70–86.
2. A. E. Brouwer, An infinite series of symmetric designs, Math. Centrum Amsterdam Report ZW 202/83 (1983).
3. J. H. E. Cohn, Almost D-optimal designs, Utilitas Math. 57 (2000) 121–128.
4. H. Ehlich, Determinantenabschätzungen für binäre Matrizen, Math. Z. 83 (1964) 123–132.
5. H. Ehlich, Determinantenabschätzungen für binäre Matrizen mit N≡3 mod 4, Math. Z. 84 (1964) 438–447.
6. H. Ehlich and K. Zeller, Binäre Matrizen, Z. Angew. Math. Mech. 42 (1962) T20–T21.
7. J. Hadamard, Résolution d’une question relative aux déterminants, Bull. Sci. Math. 17 (1893) 240–246.
8. R. E. A. C. Paley, On orthogonal matrices, J. Mathematics and Physics 12 (1933) 311–320.
9. J. J. Sylvester, Thoughts on inverse orthogonal matrices, simultaneous sign successions, and tesselated pavements in two or more colours, with applications to Newton’s rule, ornamental tile-work, and the theory of numbers, London Edinburgh and Dublin Philos. Mag. and J. Sci. 34 (1867) 461–475.
10. H. Tamura, D-optimal designs and group divisible designs, Journal of Combinatorial Designs 14 (2006) 451–462.
11. J. Williamson, Determinants whose elements are 0 and 1, Amer. Math. Monthly 53 (1946) 427–434.
12. M. Wojtas, On Hadamard’s inequality for the determinants of order non-divisible by 4, Colloq. Math. 12 (1964) 73–83.