A Putnam problem with a twist


This question is motivated by one of the problems from this year's Putnam Examination, namely:




Problem. Let $S_1, S_2, \dots, S_{2^n-1}$ be the nonempty subsets of $\{1,2,\dots,n\}$ in some order, and let
$M$ be the $(2^n-1) \times (2^n-1)$ matrix whose $(i,j)$ entry is
$$M_{ij} = \begin{cases} 0 & \text{if } S_i \cap S_j = \emptyset; \\
1 & \text{otherwise.}
\end{cases}$$

Calculate the determinant of $M$. Answer: if $n=1$ then $\det M=1$; otherwise $\det M=-1$.




I would like to consider the following variant, which has me puzzled.




Question. Preserving the notation from above, let
$A$ be the matrix whose $(i,j)$ entry is
$$A_{ij} = \begin{cases} 1 & \text{if } \#(S_i \cap S_j) = 1; \\
0 & \text{otherwise.}
\end{cases}$$

If $n>1$, is it true that
$$\det(A)=-\prod_{k=1}^n k^{\binom{n}{k}}\,?$$




Remark. Amusingly, according to the OEIS, the same number counts the "product of the sizes of all the nonempty subsets of $[n]$".



POSTSCRIPT.




If $B$ is the matrix whose $(i,j)$ entry is
$B_{ij} = q^{\#(S_i\cap S_j)}$,
does the following hold?
$$\det(B)=q^n(q-1)^{n(2^{n-1}-1)}.$$
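
For small $n$, both claims are easy to test by brute force. Here is a quick sanity check (an editorial addition, not part of the original post) in Python/SymPy; the helper `nonempty_subsets` is just my own shorthand for enumerating the index sets.

```python
from itertools import combinations
from math import comb, prod
from sympy import Matrix, symbols, expand

q = symbols('q')

def nonempty_subsets(n):
    """Nonempty subsets of {1,...,n} as frozensets, in some fixed order."""
    return [frozenset(c) for k in range(1, n + 1)
            for c in combinations(range(1, n + 1), k)]

# det(A) = -prod_k k^binom(n,k), conjectured for n > 1
for n in range(2, 5):
    P = nonempty_subsets(n)
    A = Matrix(len(P), len(P), lambda i, j: int(len(P[i] & P[j]) == 1))
    assert A.det() == -prod(k**comb(n, k) for k in range(1, n + 1))

# det(B) = q^n (q-1)^(n(2^(n-1)-1)), conjectured in the postscript
for n in range(1, 4):
    P = nonempty_subsets(n)
    B = Matrix(len(P), len(P), lambda i, j: q**len(P[i] & P[j]))
    assert expand(B.det() - q**n * (q - 1)**(n * (2**(n - 1) - 1))) == 0

print("both formulas confirmed for the small cases tested")
```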











nt.number-theory co.combinatorics determinants elementary-proofs






edited Dec 7 '18 at 8:33 by Greg Martin
asked Dec 7 '18 at 4:41 by T. Amdeberhan








  • Nice one! The proof I posted at artofproblemsolving.com/community/u432h1747709p11384734 still works, but you now have to replace $C$ by the matrix $\left( \left|S\right| \cdot \left[S \subseteq T\right] \right)_{\left(S, T\right) \in P \times P}$. – darij grinberg, Dec 7 '18 at 5:14

  • Maybe the eigenvalues of this matrix are equal to $\pm k$ with multiplicities $\binom{n-1}{k}$, $\binom{n-1}{k-1}$? And is there some clear pattern in how the eigenvectors look? – Fedor Petrov, Dec 7 '18 at 6:44

  • @FedorPetrov: That is a fine proposition, worthy of pushing further; I would like to see it explored. – T. Amdeberhan, Dec 9 '18 at 17:50

  • @FedorPetrov: My computations for $n=3$ (using SageMath) don't confirm your conjecture; I'm getting irrational eigenvalues (the characteristic polynomial factors over $\mathbb{Q}$ as $(x^{2} - 2)^{2} \cdot (x^{3} - 3 x^{2} - 5 x + 6)$). Can you confirm or disconfirm this? – darij grinberg, Jan 12 at 23:08

  • As for the matrix from the postscriptum, its characteristic polynomial fails to fully factor even for $n = 2$; it is $(x - q + 1) \cdot (x^{2} + \left(-q^{2} - q - 1\right) x + q^{3} - q^{2})$ in that case. (Computation courtesy of SageMath; don't blame me for the weirdly placed minus signs.) – darij grinberg, Jan 12 at 23:10
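
(Editorial note: the eigenvalue experiments mentioned in the last two comments are easy to reproduce. The short SymPy sketch below simply prints the factored characteristic polynomials, so they can be compared by eye with the factorizations quoted above; the helper name is mine.)

```python
from itertools import combinations
from sympy import Matrix, symbols, factor

x, q = symbols('x q')

def nonempty_subsets(n):
    return [frozenset(c) for k in range(1, n + 1)
            for c in combinations(range(1, n + 1), k)]

# characteristic polynomial of A = ([#(S ∩ T) = 1]) for n = 3
P = nonempty_subsets(3)
A = Matrix(len(P), len(P), lambda i, j: int(len(P[i] & P[j]) == 1))
print(factor(A.charpoly(x).as_expr()))

# characteristic polynomial of B = (q^{#(S ∩ T)}) for n = 2
P = nonempty_subsets(2)
B = Matrix(len(P), len(P), lambda i, j: q**len(P[i] & P[j]))
print(factor(B.charpoly(x).as_expr()))
```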

















2 Answers












$\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\set}[1]{\left\{ #1 \right\}}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\tup}[1]{\left( #1 \right)}
\newcommand{\ive}[1]{\left[ #1 \right]}
\newcommand{\suml}{\sum\limits}
\newcommand{\sumS}{\suml_{S \in P}}
\newcommand{\prodl}{\prod\limits}
\newcommand{\prodS}{\prodl_{S \in P}}
\newcommand{\subs}{\subseteq}
\newcommand{\sups}{\supseteq}$



Here are proofs for both the original question (Theorem 1 below) and for the postscript (Theorem 5 below) that follow my argument at https://artofproblemsolving.com/community/u432h1747709p11384734 as closely as possible. Sorry for their length, much of it due to exposition of matrix folklore. (If you know this, scroll all the way down to section 2.)



1. Matrices indexed by finite sets



We are grown-ups and don't need to index the rows and the columns of a matrix by numbers. We can pick any set $P$, and define a $P \times P$-matrix (say, with rational entries) to be a family $\tup{a_{p, q}}_{\tup{p, q} \in P \times P} \in \QQ^{P \times P}$ of rational numbers indexed by pairs $\tup{p, q} \in P \times P$. When the set $P$ is finite, such a $P \times P$-matrix always has a well-defined determinant, which can be defined by
\begin{equation}
\det \tup{ \tup{a_{p, q}}_{\tup{p, q} \in P \times P} }
= \sum_{\sigma \in S_P} \tup{-1}^{\sigma} \prod_{p \in P} a_{p, \sigma\tup{p}} .
\end{equation}

Here, $S_P$ denotes the set of permutations of $P$, and the notation $\tup{-1}^{\sigma}$ stands for the sign of a permutation $\sigma$. If you think that this is problematic because the sign of a permutation is usually defined using a total order on $P$, then take a break and convince yourself that it does not actually depend on this total order. (This is proven in maximum detail in the solution of Exercise 5.12 of my Notes on the combinatorial fundamentals of algebra, 10th of January 2019.) Determinants of $P \times P$-matrices behave as nicely as determinants of "usual" $n \times n$-matrices, and arguably even better, since it is easier to define minors and cofactors when the rows and columns are indexed by an arbitrary finite set. Do keep in mind that the rows and the columns must be indexed by the same set; we cannot define the determinant of a $P \times Q$-matrix for different $P$ and $Q$, even when $\abs{P} = \abs{Q}$.

Of course, an $n \times n$-matrix (for a nonnegative integer $n$) is the same as an $N \times N$-matrix for $N = \set{1,2,\ldots,n}$. Thus, our notion of $P \times P$-matrices generalizes the classical notion of $n \times n$-matrices. This generalization "respects" standard concepts like determinants -- e.g., the determinant of an $n \times n$-matrix does not change if we instead consider it as an $N \times N$-matrix. Does this generalization exhibit genuinely new behavior or does it just add convenience? Arguably it's the latter, because conversely, you can turn any $P \times P$-matrix (for any finite set $P$) into an $n \times n$-matrix. Namely, if you have a finite set $P$ and a $P \times P$-matrix $A = \tup{a_{p, q}}_{\tup{p, q} \in P \times P} \in \QQ^{P \times P}$, then you can set $n = \abs{P}$ and fix a bijection $\phi : \set{1,2,\ldots,n} \to P$, and form the $n \times n$-matrix $A_\phi := \tup{ a_{\phi\tup{i}, \phi\tup{j}} }_{\tup{i, j} \in \set{1,2,\ldots,n} \times \set{1,2,\ldots,n}} \in \QQ^{n \times n}$, and observe that $A_\phi$ is "basically the same as" $A$ except that the rows and the columns have been reindexed. We shall refer to this operation (going from $A$ to $A_\phi$) as numerical reindexing. So $P \times P$-matrices differ from $n \times n$-matrices only in how we index their rows and columns. Nevertheless they are useful, because often there is no canonical bijection $\phi : \set{1,2,\ldots,n} \to P$ (exhibit A: this Putnam problem), and we are algebraists and want to make as few non-canonical choices as possible.

Whenever $P$ is a finite set, we can multiply two $P \times P$-matrices: The product $AB$ of two $P \times P$-matrices $A = \tup{a_{p, q}}_{\tup{p, q} \in P \times P} \in \QQ^{P \times P}$ and $B = \tup{b_{p, q}}_{\tup{p, q} \in P \times P} \in \QQ^{P \times P}$ is defined to be the $P \times P$-matrix $\tup{ \sum_{r \in P} a_{p, r} b_{r, q} }_{\tup{p, q} \in P \times P} \in \QQ^{P \times P}$. This multiplication behaves just as nicely as the usual multiplication of $n \times n$-matrices. In particular, $\det\tup{AB} = \det A \cdot \det B$ whenever $A$ and $B$ are two $P \times P$-matrices. And of course, you can prove this by numerical reindexing, because numerical reindexing preserves both products and determinants (assuming, of course, that you pick one bijection $\phi : \set{1,2,\ldots,n} \to P$ and stick with it).

If you remember the definition of triangular $n \times n$-matrices, then its generalization to $P \times P$-matrices is straightforward: When $P$ is a totally ordered finite set, we say that a $P \times P$-matrix $A = \tup{a_{p, q}}_{\tup{p, q} \in P \times P}$ is lower-triangular if
\begin{equation}
a_{p, q} = 0 \qquad \text{for all } \tup{p, q} \in P \times P \text{ satisfying } p < q ;
\end{equation}
and we say that a $P \times P$-matrix $A = \tup{a_{p, q}}_{\tup{p, q} \in P \times P}$ is upper-triangular if
\begin{equation}
a_{p, q} = 0 \qquad \text{for all } \tup{p, q} \in P \times P \text{ satisfying } p > q .
\end{equation}
Again, lower-triangular $P \times P$-matrices become lower-triangular $n \times n$-matrices upon numerical reindexing, as long as we make sure to pick our bijection $\phi : \set{1,2,\ldots,n} \to P$ to be an order isomorphism (which, by the way, is unique). So we can derive properties of lower-triangular $P \times P$-matrices from analogous properties of lower-triangular $n \times n$-matrices by numerical reindexing. In particular, we can thus show that the determinant of a lower-triangular $P \times P$-matrix equals the product of its diagonal entries; i.e., if $A = \tup{a_{p, q}}_{\tup{p, q} \in P \times P}$ is a lower-triangular $P \times P$-matrix, then $\det A = \prodl_{p \in P} a_{p, p}$. The same holds for upper-triangular $P \times P$-matrices.
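
(Editorial illustration, not part of the original answer: in code, "numerical reindexing" is just the act of choosing an enumeration of $P$. A $P \times P$-matrix can be stored as a dictionary keyed by pairs of elements of $P$, and any fixed listing of $P$ plays the role of the bijection $\phi$; the helper names below are mine.)

```python
from itertools import combinations
from sympy import Matrix

def nonempty_subsets(n):
    """The index set P: nonempty subsets of {1,...,n} as frozensets."""
    return [frozenset(c) for k in range(1, n + 1)
            for c in combinations(range(1, n + 1), k)]

def reindex(entries, P):
    """Numerical reindexing: turn a 'P x P matrix' (a dict keyed by pairs of
    elements of P) into an ordinary matrix via the chosen enumeration of P."""
    return Matrix(len(P), len(P), lambda i, j: entries[(P[i], P[j])])

P = nonempty_subsets(3)
A = {(S, T): int(len(S & T) == 1) for S in P for T in P}
print(reindex(A, P).det())   # -24, and the value is independent of the enumeration
```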



2. Answering the original question



Now, let's not forget about the question.



Fix a positive integer $n$. Let $N$ be the set $\set{1,2,\ldots,n}$. Let $P$ be the set of all the $2^n - 1$ nonempty subsets of $N$. (We shall later totally order $P$, but for now $P$ is just a finite set.)



We shall also use the Iverson bracket notation, which will save us some ampersands and braces. This allows us to rewrite the definition of $A_{ij}$ in your question as $A_{ij} = \ive{ \abs{ S_i \cap S_j } = 1 }$. But as we said, we drop the numerical indexing, and instead define a matrix whose rows and columns are indexed by $P$ (that is, by all the nonempty subsets of $N$). Thus, the claim of the exercise rewrites as follows:




Theorem 1. Let $A$ be the $P \times P$-matrix $\tup{ \ive{ \abs{ S \cap T } = 1 } }_{\tup{S, T} \in P \times P}$. Then,
\begin{equation}
\det A = \tup{-1}^{\ive{n \neq 1}} \prod_{k=1}^n k^{\dbinom{n}{k}} .
\end{equation}




The key to proving this is the following simple fact:




Lemma 2. Let $S \in P$ and $T \in P$. Then,
\begin{equation}
\ive{ \abs{ S \cap T } = 1 } = \suml_{U \in P} \tup{-1}^{\abs{U} - 1} \abs{U} \cdot \ive{ U \subs S } \ive{ U \subs T } .
\end{equation}




This, in turn, shall be derived from the following binomial identity:




Lemma 3. Let $k$ be a nonnegative integer. Then,
\begin{equation}
\sum_{i=1}^k \tup{-1}^{i-1} i \cdot \dbinom{k}{i} = \ive{k = 1}.
\end{equation}




Proof of Lemma 3 (sketched). If $k = 0$, then this equality holds for obvious reasons (indeed, the left hand side is an empty sum and thus vanishes, while the right hand side vanishes since $k = 0 \neq 1$). Hence, for the rest of this proof, we WLOG assume that $k \neq 0$. Hence, $k$ is a positive integer. Thus, $k-1$ is a nonnegative integer.

It is well-known that $\suml_{i=0}^n \tup{-1}^i \dbinom{n}{i} = \ive{ n = 0 }$ for every nonnegative integer $n$. Applying this to $n = k-1$, we obtain $\suml_{i=0}^{k-1} \tup{-1}^i \dbinom{k-1}{i} = \ive{ k - 1 = 0 } = \ive{k = 1}$. Multiplying both sides of this equality by $k$, we obtain $k \suml_{i=0}^{k-1} \tup{-1}^i \dbinom{k-1}{i} = k \ive{k = 1} = \ive{k = 1}$ (where the last equality sign is easy to check directly). Hence,
\begin{align}
\ive{k = 1}
&= k \suml_{i=0}^{k-1} \tup{-1}^i \dbinom{k-1}{i}
= \suml_{i=0}^{k-1} \tup{-1}^i \cdot \underbrace{k \dbinom{k-1}{i}}_{\substack{= \tup{i+1} \dbinom{k}{i+1} \\ \text{(by a simple computation)}}} \\
&= \suml_{i=0}^{k-1} \tup{-1}^i \cdot \tup{i+1} \dbinom{k}{i+1}
= \suml_{i=1}^{k} \tup{-1}^{i-1} \cdot i \dbinom{k}{i}
\end{align}
(here, we have substituted $i-1$ for $i$ in the sum). This proves Lemma 3. $\blacksquare$
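
(A quick numerical confirmation of Lemma 3, added in editing:)

```python
from math import comb

for k in range(0, 10):
    lhs = sum((-1)**(i - 1) * i * comb(k, i) for i in range(1, k + 1))
    assert lhs == (1 if k == 1 else 0)   # Iverson bracket [k = 1]
```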



Proof of Lemma 2 (sketched). If a subset $U \in P$ does not satisfy $U \subs S$, then the Iverson bracket $\ive{ U \subs S }$ is zero and thus renders the whole product $\tup{-1}^{\abs{U} - 1} \abs{U} \cdot \ive{ U \subs S } \ive{ U \subs T }$ zero. Similarly, this product also is rendered zero when $U \in P$ does not satisfy $U \subs T$. Thus, the product $\tup{-1}^{\abs{U} - 1} \abs{U} \cdot \ive{ U \subs S } \ive{ U \subs T }$ is zero unless $U \in P$ satisfies both $U \subs S$ and $U \subs T$. Hence, the sum $\suml_{U \in P} \tup{-1}^{\abs{U} - 1} \abs{U} \cdot \ive{ U \subs S } \ive{ U \subs T }$ does not change its value when we restrict it to those $U$ that satisfy both $U \subs S$ and $U \subs T$ (because all the addends that we lose are zero anyway). In other words,
\begin{align*}
\suml_{U \in P} \tup{-1}^{\abs{U} - 1} \abs{U} \cdot \ive{ U \subs S } \ive{ U \subs T }
&= \suml_{\substack{U \in P; \\ U \subs S \text{ and } U \subs T}} \tup{-1}^{\abs{U} - 1} \abs{U} \cdot \underbrace{\ive{ U \subs S }}_{=1} \underbrace{\ive{ U \subs T }}_{=1} \\
&= \suml_{\substack{U \in P; \\ U \subs S \text{ and } U \subs T}}
\tup{-1}^{\abs{U} - 1} \abs{U}.
\end{align*}

Now, let $K$ be the intersection $S \cap T$, and let $k = \abs{K}$. Then, the sets $U \in P$ that satisfy both $U \subs S$ and $U \subs T$ are precisely the nonempty subsets of the $k$-element set $K$. Thus, we know how many such sets $U$ exist of each size: there are exactly $\dbinom{k}{1}$ ones of size $1$, exactly $\dbinom{k}{2}$ ones of size $2$, and so on, all the way up to $\dbinom{k}{k}$ ones of size $k$. Hence,
\begin{align*}
\suml_{\substack{U \in P; \\ U \subs S \text{ and } U \subs T}}
\tup{-1}^{\abs{U} - 1} \abs{U}
&= \dbinom{k}{1} \tup{-1}^{1-1} \cdot 1 + \dbinom{k}{2} \tup{-1}^{2-1} \cdot 2 + \cdots + \dbinom{k}{k} \tup{-1}^{k-1} \cdot k \\
&= \sum_{i=1}^k \dbinom{k}{i} \tup{-1}^{i-1} \cdot i
= \sum_{i=1}^k \tup{-1}^{i-1} i \cdot \dbinom{k}{i} \\
&= \ive{k = 1} \qquad \tup{ \text{by Lemma 3} } \\
&= \ive{ \abs{ S \cap T } = 1 }
\end{align*}
(since $k = \abs{K} = \abs{ S \cap T }$).
Combining what we have, we obtain
\begin{equation}
\suml_{U \in P} \tup{-1}^{\abs{U} - 1} \abs{U} \cdot \ive{ U \subs S } \ive{ U \subs T }
= \suml_{\substack{U \in P; \\ U \subs S \text{ and } U \subs T}}
\tup{-1}^{\abs{U} - 1} \abs{U}
= \ive{ \abs{ S \cap T } = 1 } .
\end{equation}
This proves Lemma 2. $\blacksquare$
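
(Likewise, Lemma 2 can be checked by brute force for small $n$; this sketch, added in editing, verifies the identity for every pair $S, T$ when $n = 4$.)

```python
from itertools import combinations

n = 4
P = [frozenset(c) for k in range(1, n + 1)
     for c in combinations(range(1, n + 1), k)]

for S in P:
    for T in P:
        lhs = int(len(S & T) == 1)
        rhs = sum((-1)**(len(U) - 1) * len(U)
                  for U in P if U <= S and U <= T)   # U ranges over nonempty U ⊆ S ∩ T
        assert lhs == rhs
```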




Lemma 4. (a) We have $\sumS \abs{S} = n 2^{n-1}$.

(b) We have $\sumS \tup{ \abs{S} - 1 } = n 2^{n-1} - 2^n + 1$.

(c) We have $\prodS \tup{-1}^{\abs{S} - 1} = \tup{-1}^{\ive{n \neq 1}}$.

(d) We have $\prodS \abs{S} = \prodl_{k=1}^n k^{\dbinom{n}{k}}$.




Proof of Lemma 4. (a) Fix $i \in \set{1,2,\ldots,n}$. Then, exactly $2^{n-1}$ subsets of $\set{1,2,\ldots,n}$ contain $i$ (since we can freely choose which of the other $n-1$ elements such a subset should contain). Of course, all of these $2^{n-1}$ subsets are nonempty, and thus belong to $P$. Hence, exactly $2^{n-1}$ subsets $S \in P$ contain $i$. Hence, the sum $\sumS \ive{ i \in S }$ has exactly $2^{n-1}$ addends equal to $1$, while all its other addends are $0$. Therefore, this sum equals $2^{n-1}$. In other words, we have
\begin{equation}
\sumS \ive{ i \in S } = 2^{n-1} .
\label{darij1.pf.l4.1}
\tag{1}
\end{equation}

Now, forget that we fixed $i$. We thus have proven \eqref{darij1.pf.l4.1} for each $i \in \set{1,2,\ldots,n}$.

But we have $\abs{S} = \suml_{i=1}^n \ive{ i \in S }$ for each subset $S$ of $\set{1,2,\ldots,n}$ (because the sum $\suml_{i=1}^n \ive{ i \in S }$ contains exactly $\abs{S}$ many addends equal to $1$, whereas all its other addends are $0$). Summing up this equation over all $S \in P$, we obtain
\begin{align}
\sumS \abs{S}
&= \sumS \suml_{i=1}^n \ive{ i \in S }
= \suml_{i=1}^n \underbrace{\sumS \ive{ i \in S }}_{\substack{= 2^{n-1} \\ \text{(by \eqref{darij1.pf.l4.1})}}}
= \suml_{i=1}^n 2^{n-1} = n 2^{n-1} .
\end{align}
This proves Lemma 4 (a).

(b) We have $\abs{P} = 2^n - 1$ (since the set $P$ consists of all the $2^n$ subsets of $\set{1,2,\ldots,n}$ apart from the empty set). Now,
\begin{align}
\sumS \tup{ \abs{S} - 1 }
= \underbrace{\sumS \abs{S}}_{\substack{= n 2^{n-1} \\ \text{(by Lemma 4 (a))}}}
- \underbrace{\sumS 1}_{= \abs{P} = 2^n - 1}
= n 2^{n-1} - \tup{2^n - 1} = n 2^{n-1} - 2^n + 1 .
\end{align}
This proves Lemma 4 (b).

(c) If $n \neq 1$, then $n \geq 2$ (since $n$ is a positive integer), and therefore both $2^{n-1}$ and $2^n$ are even integers. Hence, if $n \neq 1$, then $n 2^{n-1} - 2^n + 1 \equiv 1 \mod 2$. On the other hand, if $n = 1$, then $n 2^{n-1} - 2^n + 1 = 1 \cdot 2^{1-1} - 2^1 + 1 = 0 \equiv 0 \mod 2$. Combining the previous two sentences, we conclude that $n 2^{n-1} - 2^n + 1 \equiv \ive{n \neq 1} \mod 2$. Hence, $\tup{-1}^{n 2^{n-1} - 2^n + 1} = \tup{-1}^{\ive{n \neq 1}}$. Now,
\begin{align}
\prodS \tup{-1}^{\abs{S} - 1}
&= \tup{-1}^{\sumS \tup{ \abs{S} - 1 }}
= \tup{-1}^{n 2^{n-1} - 2^n + 1}
\qquad \tup{ \text{by Lemma 4 (b)} } \\
&= \tup{-1}^{\ive{n \neq 1}} .
\end{align}
This proves Lemma 4 (c).

(d) Recall that $P$ is the set of all nonempty subsets of $\set{1,2,\ldots,n}$. Hence, among the elements of $P$ are exactly $\dbinom{n}{1}$ subsets of size $1$, exactly $\dbinom{n}{2}$ subsets of size $2$, and so on, and finally exactly $\dbinom{n}{n}$ subsets of size $n$ (and no other subsets). Therefore, the product $\prodS \abs{S}$ has exactly $\dbinom{n}{1}$ factors equal to $1$, exactly $\dbinom{n}{2}$ factors equal to $2$, and so on, and finally exactly $\dbinom{n}{n}$ factors equal to $n$ (and no other factors). Therefore, this product equals $\prodl_{k=1}^n k^{\dbinom{n}{k}}$. This proves Lemma 4 (d). $\blacksquare$
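
(The four identities of Lemma 4 are equally easy to confirm mechanically; the following check is an editorial addition.)

```python
from itertools import combinations
from math import comb, prod

for n in range(1, 7):
    P = [frozenset(c) for k in range(1, n + 1)
         for c in combinations(range(1, n + 1), k)]
    assert sum(len(S) for S in P) == n * 2**(n - 1)                  # (a)
    assert sum(len(S) - 1 for S in P) == n * 2**(n - 1) - 2**n + 1   # (b)
    assert prod((-1)**(len(S) - 1) for S in P) == (-1)**int(n != 1)  # (c)
    assert prod(len(S) for S in P) == \
        prod(k**comb(n, k) for k in range(1, n + 1))                 # (d)
```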



Proof of Theorem 1 (sketched). Define two $P \times P$-matrices
\begin{align*}
B &= \tup{ \tup{-1}^{\abs{T} - 1} \abs{T} \cdot \ive{ T \subs S } }_{\tup{S, T} \in P \times P} \qquad \text{and} \\
C &= \tup{ \ive{ S \subs T } }_{\tup{S, T} \in P \times P} .
\end{align*}

Lemma 2 says that $A = BC$. (More precisely, Lemma 2 says that the $\tup{S, T}$-th entry of $A$ equals the corresponding entry of $BC$ for all $S \in P$ and $T \in P$; but this yields $A = BC$.) Thus, $\det A = \det\tup{BC} = \det B \cdot \det C$. Now, it remains to find $\det B$ and $\det C$.

Here we shall finally endow our set $P$ with a total ordering. Namely, we pick any total order on $P$ with the property that every pair of two subsets $S \in P$ and $T \in P$ satisfying $S \subs T$ must satisfy $S \leq T$. There are many total orders that do the trick here; for example, we can order them by increasing size (resolving ties arbitrarily). Now, it is easy to see that the $P \times P$-matrix $B$ is lower-triangular. Since the determinant of a lower-triangular $P \times P$-matrix equals the product of its diagonal entries, we thus conclude that
\begin{align}
\det B
&= \prodS \tup{ \tup{-1}^{\abs{S} - 1} \abs{S} \underbrace{\ive{ S \subs S }}_{=1} }
= \prodS \tup{ \tup{-1}^{\abs{S} - 1} \abs{S} } \\
&= \underbrace{\tup{ \prodS \tup{-1}^{\abs{S} - 1} }}_{\substack{= \tup{-1}^{\ive{n \neq 1}} \\ \text{(by Lemma 4 (c))}}}
\tup{ \prodS \abs{S} }
= \tup{-1}^{\ive{n \neq 1}} \tup{ \prodS \abs{S} } .
\end{align}

Likewise, the $P \times P$-matrix $C$ is upper-triangular. Since the determinant of an upper-triangular $P \times P$-matrix equals the product of its diagonal entries, we thus conclude that
\begin{equation}
\det C = \prodS \underbrace{\ive{ S \subs S }}_{=1}
= 1 .
\end{equation}
Now,
\begin{align}
\det A
&= \det B \cdot \underbrace{\det C}_{=1} = \det B = \tup{-1}^{\ive{n \neq 1}} \underbrace{\tup{ \prodS \abs{S} }}_{\substack{= \prodl_{k=1}^n k^{\dbinom{n}{k}} \\ \text{(by Lemma 4 (d))}}} \\
&= \tup{-1}^{\ive{n \neq 1}} \prod_{k=1}^n k^{\dbinom{n}{k}} .
\end{align}
This proves Theorem 1. $\blacksquare$
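
(Editorial sketch: the factorization $A = BC$ can be watched "in action". With $P$ listed by increasing size, $B$ comes out lower-triangular and $C$ upper-triangular with $1$s on the diagonal, exactly as in the proof.)

```python
from itertools import combinations
from sympy import Matrix

n = 3
# list P by increasing size, so S ⊆ T forces S to come no later than T
P = [frozenset(c) for k in range(1, n + 1)
     for c in combinations(range(1, n + 1), k)]
m = len(P)

A = Matrix(m, m, lambda i, j: int(len(P[i] & P[j]) == 1))
B = Matrix(m, m, lambda i, j: (-1)**(len(P[j]) - 1) * len(P[j]) * int(P[j] <= P[i]))
C = Matrix(m, m, lambda i, j: int(P[i] <= P[j]))

assert B * C == A                                                       # Lemma 2
assert all(B[i, j] == 0 for i in range(m) for j in range(m) if i < j)   # B lower-triangular
assert all(C[i, j] == 0 for i in range(m) for j in range(m) if i > j)   # C upper-triangular
assert A.det() == B.det() * C.det() == -24                              # Theorem 1 for n = 3
```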



In hindsight, the above proof could have been restated without introducing $P \times P$-matrices, since picking a total order on a finite set $P$ is equivalent to picking a bijection $\phi : \set{1,2,\ldots,n} \to P$. But I prefer to give the exercise an "invariant" formulation, and $P \times P$-matrices are worth exposing anyway.



A few more telegraphic remarks:




  • The decomposition $A = BC$ constructed in the above proof of Theorem 1 is an LU-decomposition.


  • The solution to the original Putnam question I found first was the same recursive argument appearing a few times in the AoPS thread linked above. But the structure of the recursion foreshadowed the existence of a slicker proof -- the block-row-reduction operations felt like multiplying by a lower-triangular (in a properly chosen order) matrix. So I dug deeper and found the above.


  • This reminds me of Lemma 7.1 of Chris Godsil, An Introduction to the Moebius Function, arXiv:1803.06664v1.



3. Answering the postscriptum



Anyway, I wanted to prove another theorem, from the postscriptum to the original question:




Theorem 5. Let $q \in \QQ$. Let $D$ be the $P \times P$-matrix $\tup{ q^{\abs{ S \cap T }} }_{\tup{S, T} \in P \times P}$. Then,
\begin{equation}
\det D = q^n \tup{q-1}^{n\tup{2^{n-1}-1}} .
\end{equation}
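
(A quick symbolic check of Theorem 5 for small $n$, added in editing:)

```python
from itertools import combinations
from sympy import Matrix, symbols, expand

q = symbols('q')

for n in range(1, 4):
    P = [frozenset(c) for k in range(1, n + 1)
         for c in combinations(range(1, n + 1), k)]
    D = Matrix(len(P), len(P), lambda i, j: q**len(P[i] & P[j]))
    assert expand(D.det() - q**n * (q - 1)**(n * (2**(n - 1) - 1))) == 0
```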




Just as our proof of Theorem 1 relied on Lemma 2, our proof of Theorem 5 will rely on the following lemma:




Lemma 6. Let $S \in P$ and $T \in P$. Let $q \in \QQ$ be nonzero. Then,
\begin{equation}
q^{\abs{ S \cap T }} = \suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T } .
\end{equation}




Note that the assumptions "$S \in P$ and $T \in P$" in Lemma 6 can be replaced by the weaker assumption "$S$ and $T$ are subsets of $N$", but we will only use Lemma 6 in the case when $S$ and $T$ belong to $P$.



Proof of Lemma 6 (sketched). If a subset $U \in P$ does not satisfy $U \sups S$, then the Iverson bracket $\ive{ U \sups S }$ is zero and thus renders the whole product $\tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T }$ zero. Similarly, this product also is rendered zero when $U \in P$ does not satisfy $U \sups T$. Thus, the product $\tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T }$ is zero unless $U \in P$ satisfies both $U \sups S$ and $U \sups T$. Hence, the sum $\suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T }$ does not change its value when we restrict it to those $U$ that satisfy both $U \sups S$ and $U \sups T$ (because all the addends that we lose are zero anyway). In other words,
\begin{align}
& \suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T } \\
&= \suml_{\substack{U \in P; \\ U \sups S \text{ and } U \sups T}} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \underbrace{\ive{ U \sups S }}_{=1} q^{\abs{T}} \underbrace{\ive{ U \sups T }}_{=1} \\
&= \suml_{\substack{U \in P; \\ U \sups S \text{ and } U \sups T}}
\tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} q^{\abs{T}} \\
&= q^{\abs{S}-n} q^{\abs{T}} \suml_{\substack{U \in P; \\ U \sups S \text{ and } U \sups T}}
\tup{q-1}^{n-\abs{U}} .
\label{darij1.pf.l6.1}
\tag{2}
\end{align}

Now, let $G = S \cup T$. This is a nonempty subset of $N$ (since $S$ and $T$ are nonempty subsets of $N$). Let $H = N \setminus G$ be the complement of $G$ in $N$. Let $h = \abs{H}$.

Every subset $U$ of $N$ that satisfies $U \sups S$ and $U \sups T$ is automatically nonempty (since $U$ contains the nonempty set $S$ as a subset), and thus belongs to $P$. Hence, the $U \in P$ that satisfy $U \sups S$ and $U \sups T$ are precisely the subsets $U$ of $N$ that satisfy $U \sups S$ and $U \sups T$. Thus, we have
\begin{align*}
& \set{ U \in P \mid U \sups S \text{ and } U \sups T } \\
= & \set{ U \subs N \mid U \sups S \text{ and } U \sups T } \\
= & \set{ U \subs N \mid U \sups \underbrace{S \cup T}_{=G} } \\
= & \set{ U \subs N \mid U \sups G } \\
= & \set{ U \subs N \mid N \setminus U \subs N \setminus G }
\end{align*}
(because the condition "$U \sups G$" on a subset $U$ of $N$ is equivalent to the condition "$N \setminus U \subs N \setminus G$").
Hence, the summation sign "$\suml_{\substack{U \in P; \\ U \sups S \text{ and } U \sups T}}$" in \eqref{darij1.pf.l6.1} can be rewritten as "$\suml_{\substack{U \subs N; \\ N \setminus U \subs N \setminus G}}$". Thus, \eqref{darij1.pf.l6.1} rewrites as
\begin{align}
& \suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T } \\
&= q^{\abs{S}-n} q^{\abs{T}} \suml_{\substack{U \subs N; \\ N \setminus U \subs N \setminus G}}
\tup{q-1}^{n-\abs{U}} .
\label{darij1.pf.l6.2}
\tag{3}
\end{align}
Every subset $U$ of $N$ satisfies $n - \abs{U} = \abs{N \setminus U}$ (since $U \subs N$ yields $\abs{N \setminus U} = \underbrace{\abs{N}}_{=n} - \abs{U} = n - \abs{U}$). Thus, \eqref{darij1.pf.l6.2} becomes
\begin{align}
& \suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T } \\
&= q^{\abs{S}-n} q^{\abs{T}} \underbrace{\suml_{\substack{U \subs N; \\ N \setminus U \subs N \setminus G}}}_{\substack{= \suml_{\substack{U \subs N; \\ N \setminus U \subs H}} \\ \tup{ \text{since } N \setminus G = H } }}
\underbrace{\tup{q-1}^{n-\abs{U}}}_{\substack{= \tup{q-1}^{\abs{N \setminus U}} \\ \tup{ \text{since } n - \abs{U} = \abs{N \setminus U} } }} \\
&= q^{\abs{S}-n} q^{\abs{T}} \suml_{\substack{U \subs N; \\ N \setminus U \subs H}}
\tup{q-1}^{\abs{N \setminus U}} \\
&= q^{\abs{S}-n} q^{\abs{T}} \suml_{\substack{U \subs N; \\ U \subs H}}
\tup{q-1}^{\abs{U}}
\label{darij1.pf.l6.3}
\tag{4}
\end{align}
(here, we have substituted $U$ for $N \setminus U$ in the sum, since the map
\begin{align}
\set{U \subs N} \to \set{U \subs N}, \qquad
U \mapsto N \setminus U
\end{align}
is a bijection).

The summation sign "$\suml_{\substack{U \subs N; \\ U \subs H}}$" on the right hand side of \eqref{darij1.pf.l6.3} can be replaced by "$\suml_{U \subs H}$", since $H$ is a subset of $N$. Thus, \eqref{darij1.pf.l6.3} rewrites as
\begin{align}
& \suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T } \\
&= q^{\abs{S}-n} q^{\abs{T}} \suml_{U \subs H} \tup{q-1}^{\abs{U}} .
\label{darij1.pf.l6.4}
\tag{5}
\end{align}

The sum on the right hand side of this equality is easy to simplify. The set $H$ has size $\abs{H} = h$. Thus, it has exactly $\dbinom{h}{0}$ subsets of size $0$, exactly $\dbinom{h}{1}$ subsets of size $1$, and so on, all the way up to $\dbinom{h}{h}$ subsets of size $h$. Hence, the sum $\suml_{U \subs H} \tup{q-1}^{\abs{U}}$ has exactly $\dbinom{h}{0}$ many terms equal to $\tup{q-1}^0$, exactly $\dbinom{h}{1}$ many terms equal to $\tup{q-1}^1$, and so on, all the way up to $\dbinom{h}{h}$ many terms equal to $\tup{q-1}^h$. Hence, this sum can be rewritten as follows:
\begin{align}
& \suml_{U \subs H} \tup{q-1}^{\abs{U}} \\
& = \dbinom{h}{0} \tup{q-1}^0 + \dbinom{h}{1} \tup{q-1}^1 + \cdots + \dbinom{h}{h} \tup{q-1}^h \\
& = \sum_{k=0}^h \dbinom{h}{k} \tup{q-1}^k = \tup{\tup{q-1}+1}^h \\
& \qquad \tup{ \text{since the binomial formula yields } \tup{\tup{q-1}+1}^h = \sum_{k=0}^h \dbinom{h}{k} \tup{q-1}^k } \\
& = q^h .
\end{align}
Thus, \eqref{darij1.pf.l6.4} becomes
\begin{align}
& \suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T } \\
&= q^{\abs{S}-n} q^{\abs{T}} \underbrace{\suml_{U \subs H} \tup{q-1}^{\abs{U}}}_{= q^h} \\
&= q^{\abs{S}-n} q^{\abs{T}} q^h = q^{\tup{\abs{S}-n} + \abs{T} + h} .
\label{darij1.pf.l6.5}
\tag{6}
\end{align}

But $H$ is the complement of $G$ in $N$. Hence, $\abs{H} = \underbrace{\abs{N}}_{=n} - \abs{G} = n - \abs{G}$. Therefore, $\abs{G} = n - \underbrace{\abs{H}}_{= h} = n - h$.

Recall the basic fact that $\abs{S} + \abs{T} = \abs{S \cup T} + \abs{S \cap T} = \abs{G} + \abs{S \cap T}$ (since $S \cup T = G$). Solving this equation for $\abs{S \cap T}$, we obtain
\begin{align}
\abs{S \cap T} = \abs{S} + \abs{T} - \underbrace{\abs{G}}_{= n - h} = \abs{S} + \abs{T} - \tup{n - h} = \tup{\abs{S}-n} + \abs{T} + h .
\end{align}
Therefore, $q^{ \abs{S \cap T} } = q^{\tup{\abs{S}-n} + \abs{T} + h}$. Comparing this with \eqref{darij1.pf.l6.5}, we obtain
\begin{equation}
q^{\abs{ S \cap T }} = \suml_{U \in P} \tup{q-1}^{n-\abs{U}} q^{\abs{S}-n} \ive{ U \sups S } q^{\abs{T}} \ive{ U \sups T } .
\end{equation}
This proves Lemma 6. $\blacksquare$



Proof of Theorem 5 (sketched). Both $\det D$ and $q^n \tup{q-1}^{n\tup{2^{n-1}-1}}$ are polynomials in $q$ with integer coefficients (since $2^{n-1}-1$ is a nonnegative integer). Thus, the equality we need to prove is an equality between two polynomials in $q$. Hence, in order to prove it for all $q \in \QQ$, it suffices to prove it for infinitely many values of $q$ (because an equality between two polynomials in $q$ that holds for infinitely many values of $q$ must automatically hold for all $q \in \QQ$). Thus, in particular, it suffices to prove it for all nonzero $q \in \QQ$.
Thus, let us WLOG assume that $q$ is nonzero.
Define two $P \times P$-matrices
\begin{align*}
E &= \tup{ \tup{q-1}^{n-\abs{T}} q^{\abs{S}-n} \ive{ T \sups S } }_{\tup{S, T} \in P \times P} \qquad \text{and} \\
F &= \tup{ q^{\abs{T}} \ive{ S \sups T } }_{\tup{S, T} \in P \times P} .
\end{align*}

Lemma 6 says that $D = EF$. (More precisely, Lemma 6 says that the $\tup{S, T}$-th entry of $D$ equals the corresponding entry of $EF$ for all $S \in P$ and $T \in P$; but this yields $D = EF$.) Thus, $\det D = \det\tup{EF} = \det E \cdot \det F$. Now, it remains to find $\det E$ and $\det F$.

Recall that $\abs{P} = 2^n - 1$ (as we proved during our proof of Lemma 4 (b)). Also, $\sumS \abs{S} = n 2^{n-1}$ (by Lemma 4 (a)).

Endow our set $P$ with the same total ordering that we used in the proof of Theorem 1. Now, it is easy to see that the $P \times P$-matrix $E$ is upper-triangular. Since the determinant of an upper-triangular $P \times P$-matrix equals the product of its diagonal entries, we thus conclude that
\begin{align}
\det E
&= \prodS \tup{ \underbrace{\tup{q-1}^{n-\abs{S}} q^{\abs{S}-n}}_{= \tup{\dfrac{q-1}{q}}^n \tup{\dfrac{q}{q-1}}^{\abs{S}} }
\underbrace{\ive{ S \sups S }}_{= 1} }
= \prodS \tup{ \tup{\dfrac{q-1}{q}}^n \tup{\dfrac{q}{q-1}}^{\abs{S}} } \\
&= \underbrace{ \tup{ \prodS \tup{\dfrac{q-1}{q}}^n } }_{= \tup{\tup{\dfrac{q-1}{q}}^n}^{\abs{P}} }
\underbrace{ \tup{ \prodS \tup{\dfrac{q}{q-1}}^{\abs{S}} } }_{= \tup{\dfrac{q}{q-1}}^{\sumS \abs{S}} } \\
&= \tup{\tup{\dfrac{q-1}{q}}^n}^{\abs{P}} \tup{\dfrac{q}{q-1}}^{\sumS \abs{S}} \\
&= \underbrace{ \tup{\tup{\dfrac{q-1}{q}}^n}^{2^n - 1} }_{= \tup{\dfrac{q-1}{q}}^{n \tup{2^n - 1}} }
\underbrace{ \tup{\dfrac{q}{q-1}}^{n 2^{n-1}} }_{= \tup{\dfrac{q-1}{q}}^{-n 2^{n-1}}} \\
& \qquad \tup{ \text{since } \abs{P} = 2^n - 1
\text{ and } \sumS \abs{S} = n 2^{n-1} } \\
&= \tup{\dfrac{q-1}{q}}^{n \tup{2^n - 1}} \tup{\dfrac{q-1}{q}}^{-n 2^{n-1}} \\
&= \tup{\dfrac{q-1}{q}}^{n \tup{2^n - 1} - n 2^{n-1}}
= \tup{\dfrac{q-1}{q}}^{n 2^{n-1} - n}
\end{align}
(since $n \tup{2^n - 1} - n 2^{n-1} = n 2^{n-1} - n$).

Likewise, the $P \times P$-matrix $F$ is lower-triangular. Since the determinant of a lower-triangular $P \times P$-matrix equals the product of its diagonal entries, we thus conclude that
\begin{align}
\det F
&= \prodS \tup{ q^{\abs{S}}
\underbrace{\ive{ S \sups S }}_{= 1} }
= \prodS q^{\abs{S}}
= q^{\sumS \abs{S}}
= q^{n 2^{n-1}}
\end{align}
(since $\sumS \abs{S} = n 2^{n-1}$).

Now,
\begin{align}
\det D
&= \underbrace{\det E}_{= \tup{\dfrac{q-1}{q}}^{n 2^{n-1} - n}}
\cdot
\underbrace{\det F}_{= q^{n 2^{n-1}}}
= \tup{\dfrac{q-1}{q}}^{n 2^{n-1} - n} q^{n 2^{n-1}} \\
&= \dfrac{\tup{q-1}^{n 2^{n-1} - n}}{q^{n 2^{n-1} - n}} q^{n 2^{n-1}}
= \tup{q-1}^{n 2^{n-1} - n} q^{n 2^{n-1} - \tup{n 2^{n-1} - n}} \\
&= q^{n 2^{n-1} - \tup{n 2^{n-1} - n}} \tup{q-1}^{n 2^{n-1} - n}
= q^n \tup{q-1}^{n \tup{2^{n-1} - 1}}
\end{align}
(since $n 2^{n-1} - \tup{n 2^{n-1} - n} = n$ and $n 2^{n-1} - n = n \tup{2^{n-1} - 1}$).
This proves Theorem 5. $\blacksquare$






  • Lemma 3 is trivial since $i\binom{k}{i}=k\binom{k-1}{i-1}$. – Zhi-Wei Sun, Dec 7 '18 at 6:21

  • @Zhi-WeiSun: I know it is. This is a semi-expository writeup that is supposed to be readable by anyone who understands the problem. – darij grinberg, Dec 7 '18 at 6:38

  • @darijgrinberg: Thank you for the detailed work! – T. Amdeberhan, Dec 9 '18 at 4:36

  • @darij BRAVO for using Iverson notation in a matrix definition! While avoiding the creation of a name, it adds clarity. From a computer point of view we could say "The implementation IS the name." – Jérôme JEAN-CHARLES, Dec 14 '18 at 15:35

  • Another approach to Lemma 3: say $f(x) = (1-x)^k$; then the sum is $f'(1)$. – Jeff Strom, 2 days ago




















Darij already gave a great elementary detailed exposition. I wanted to remark that all such kinds of results (including the postscript) are special cases of a classical result that (at least in the generality of semilattices) goes back to Lindström in Determinants on semilattices. For this question we are in the special case of the Boolean lattice.
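
(Editorial illustration, not part of the original answer. On the full Boolean lattice — all $2^n$ subsets, with $\cap$ as the meet — the factorization is particularly clean: writing $q^{|S \cap T|} = \sum_{U \subseteq S \cap T} (q-1)^{|U|}$ exhibits the matrix as $Z \operatorname{diag}\!\left((q-1)^{|U|}\right) Z^{T}$ with $Z$ the unitriangular zeta matrix, so its determinant is $\prod_{S \subseteq [n]} (q-1)^{|S|} = (q-1)^{n 2^{n-1}}$. The postscript matrix is the principal submatrix indexed by the nonempty subsets, which is where the subtlety discussed in the comments below comes from. The sketch below only checks the full-lattice factorization numerically; for the precise statement and hypotheses of Lindström's theorem, consult the paper itself.)

```python
from itertools import combinations
from sympy import Matrix, symbols, expand

q = symbols('q')

def all_subsets(n):
    return [frozenset(c) for k in range(0, n + 1)
            for c in combinations(range(1, n + 1), k)]

for n in range(1, 4):
    L = all_subsets(n)          # full Boolean lattice, meet = intersection
    M = Matrix(len(L), len(L), lambda i, j: q**len(L[i] & L[j]))
    # diagonal factors sum_{U <= S} (-1)^{|S \ U|} q^{|U|}  (each equals (q-1)^{|S|})
    factored = 1
    for S in L:
        factored *= sum((-1)**(len(S) - len(U)) * q**len(U) for U in L if U <= S)
    assert expand(M.det() - factored) == 0
    assert expand(factored - (q - 1)**(n * 2**(n - 1))) == 0
```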






  • Oh, are you applying Lindström's result with $\cup$ as $\wedge$? – darij grinberg, Dec 7 '18 at 15:17

  • @darijgrinberg: isn't he using $\cap$ as $\wedge$ in the obvious way? – Sam Hopkins, Dec 7 '18 at 16:04

  • @SamHopkins: Because $\cap$ is not always a nonempty subset. This is what confused me, too. But I think $\cup$ can be used perfectly well. – darij grinberg, Dec 7 '18 at 16:15

  • @darijgrinberg: another way to resolve that might be to add a row of all $1$s to the bottom of the matrix (thinking of the last row/column as corresponding to the empty set) by choosing the corresponding $f_i$ to be the constant function $1$. When I expand the determinant about the last row, I will just get the determinant of the $(2^n-1)\times(2^n-1)$ submatrix in the upper left because every other $(2^n-1)\times(2^n-1)$ determinant will have a column of all zeros. – Sam Hopkins, Dec 7 '18 at 16:21

  • @darijgrinberg Yes, for the postscript, I was thinking of the nonempty subsets with $\cup$. We can take the function $f(A,B)=q^{-|A\cup B|}$ and then multiply the row (or column) of $A$ by $q^{|A|}$. – Gjergji Zaimi, Dec 7 '18 at 18:11












20












$begingroup$

$newcommand{QQ}{mathbb{Q}}
newcommand{set}[1]{left{ #1 right}}
newcommand{abs}[1]{left| #1 right|}
newcommand{tup}[1]{left( #1 right)}
newcommand{ive}[1]{left[ #1 right]}
newcommand{suml}{sumlimits}
newcommand{sumS}{suml_{S in P}}
newcommand{prodl}{prodlimits}
newcommand{prodS}{prodl_{S in P}}
newcommand{subs}{subseteq}
newcommand{sups}{supseteq}$



Here are proofs for both the original question (Theorem 1 below) and for the postscript (Theorem 5 below) that follow my argument at https://artofproblemsolving.com/community/u432h1747709p11384734 as closely as possible. Sorry for their length, much of it due to exposition of matrix folklore. (If you know this, scroll all the way down to section 2.)



1. Matrices indexed by finite sets



We are grown-ups and don't need to index the rows and the columns of a matrix by numbers. We can pick any set $P$, and define a $P times P$-matrix (say, with rational entries) to be a family $tup{a_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$ of rational numbers indexed by pairs $tup{p, q} in P times P$. When the set $P$ is finite, such a $P times P$-matrix always has a well-defined determinant, which can be defined by
begin{equation}
det tup{ tup{a_{p, q}}_{tup{p, q} in P times P} }
= sum_{sigma in S_P} tup{-1}^{sigma} prod_{p in P} a_{p, sigmatup{p}} .
end{equation}

Here, $S_P$ denotes the set of permutations of $P$, and the notation $tup{-1}^{sigma}$ stands for the sign of a permutation $sigma$. If you think that this is problematic because the sign of a permutation is usually defined using a total order on $P$, then take a break and convince yourself that it does not actually depend on this total order. (This is proven in maximum detail in the solution of Exercise 5.12 of my Notes on the combinatorial fundamentals of algebra, 10th of January 2019.) Determinants of $P times P$-matrices behave as nicely as determinants of "usual" $n times n$-matrices, and arguably even better, since it is easier to define minors and cofactors when the rows and columns are indexed by an arbitrary finite set. Do keep in mind that the rows and the columns must be indexed by the same set; we cannot define the determinant of a $P times Q$-matrix for different $P$ and $Q$, even when $abs{P} = abs{Q}$.



Of course, an $n times n$-matrix (for a nonnegative integer $n$) is the same as an $N times N$-matrix for $N = set{1,2,ldots,n}$. Thus, our notion of $P times P$-matrices generalizes the classical notion of $n times n$-matrices. This generalization "respects" standard concepts like determinants -- e.g., the determinant of an $n times n$-matrix does not change if we instead consider it as an $N times N$-matrix. Does this generalization exhibit genuinely new behavior or does it just add convenience? Arguably it's the latter, because conversely, you can turn any $P times P$-matrix (for any finite set $P$) into an $n times n$-matrix. Namely, if you have a finite set $P$ and a $P times P$-matrix $A = tup{a_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$, then you can set $n = abs{P}$ and fix a bijection $phi : set{1,2,ldots,n} to P$, and form the $n times n$-matrix $A_phi := tup{ a_{phitup{i}, phitup{j}} }_{tup{i, j} in set{1,2,ldots,n} times set{1,2,ldots,n}} in QQ^{n times n}$, and observe that $A_phi$ is "basically the same as" $A$ except that the rows and the columns have been reindexed. We shall refer to this operation (going from $A$ to $A_phi$) as numerical reindexing. So $P times P$-matrices differ from $n times n$-matrices only in how we index their rows and columns. Nevertheless they are useful, because often there is no canonical bijection $phi : set{1,2,ldots,n} to P$ (exhibit A: this Putnam problem), and we are algebraists and want to make as few non-canonical choices as possible.



Whenever $P$ is a finite set, we can multiply two $P times P$-matrices: The product $AB$ of two $P times P$-matrices $A = tup{a_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$ and $B = tup{b_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$ is defined to be the $P times P$-matrix $tup{ sum_{r in P} a_{p, r} b_{r, q} }_{tup{p, q} in P times P} in QQ^{P times P}$. This multiplication behaves just as nicely as usual multiplication of $n times n$-matrices. In particular, $dettup{AB} = det A cdot det B$ whenever $A$ and $B$ are two $P times P$-matrices. And of course, you can prove this by numerical reindexing, because numerical reindexing preserves both products and determinants (assuming, of course, that you pick one bijection $phi : set{1,2,ldots,n} to P$ and stick with it).



If you remember the definition of triangular $n times n$-matrices, then its generalization to $P times P$-matrices is straightforward: When $P$ is a totally ordered finite set, we say that a $P times P$-matrix $A = tup{a_{p, q}}_{tup{p, q} in P times P}$ is lower-triangular if
begin{equation}
a_{p, q} = 0 qquad text{for all } tup{p, q} in P times P text{ satisfying } p < q ;
end{equation}

and we say that a $P times P$-matrix $A = tup{a_{p, q}}_{tup{p, q} in P times P}$ is upper-triangular if
begin{equation}
a_{p, q} = 0 qquad text{for all } tup{p, q} in P times P text{ satisfying } p > q .
end{equation}

Again, lower-triangular $P times P$-matrices become lower-triangular $n times n$-matrices upon numerical reindexing, as long as we make sure to pick our bijection $phi : set{1,2,ldots,n} to P$ to be an order isomorphism (which, by the way, is unique). So we can derive properties of lower-triangular $P times P$-matrices from analogous properties of lower-triangular $n times n$-matrices by numerical reindexing. In particular, we can thus show that the determinant of a lower-triangular $P times P$-matrix equals the product of its diagonal entries; i.e., if $A = tup{a_{p, q}}_{tup{p, q} in P times P}$ is a lower-triangular $P times P$-matrix, then $det A = prodl_{p in P} a_{p, p}$. The same holds for upper-triangular $P times P$-matrices.



2. Answering the original question



Now, let's not forget about the question.



Fix a positive integer $n$. Let $N$ be the set $set{1,2,ldots,n}$. Let $P$ be the set of all the $2^n - 1$ nonempty subsets of $N$. (We shall later totally order $P$, but for now $P$ is just a finite set.)



We shall also use the Iverson bracket notation, which will save us some ampersands and braces. This allows us to rewrite the definition of $A_{ij}$ in your question as $A_{ij} = ive{ abs{ S_i cap S_j } = 1 }$. But as we said, we drop the numerical indexing, and instead define a matrix whose rows and columns are indexed by $P$ (that is, by all the nonempty subsets of $N$). Thus, the claim of the exercise rewrites as follows:




Theorem 1. Let $A$ be the $P times P$-matrix $tup{ ive{ abs{ S cap T } = 1 } }_{tup{S, T} in P}$. Then,
begin{equation}
det A = tup{-1}^{ive{n neq 1}} prod_{k=1}^n k^{dbinom{n}{k}} .
end{equation}




The key to proving this is the following simple fact:




Lemma 2. Let $S in P$ and $T in P$. Then,
begin{equation}
ive{ abs{ S cap T } = 1 } = suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T } .
end{equation}




This, in turn, shall be derived from the following binomial identity:




Lemma 3. Let $k$ be a nonnegative integer. Then,
begin{equation}
sum_{i=1}^k tup{-1}^{i-1} i cdot dbinom{k}{i} = ive{k = 1}.
end{equation}




Proof of Lemma 3 (sketched). If $k = 0$, then this equality holds for obvious reasons (indeed, the left hand side is an empty sum and thus vanishes, while the right hand side vanishes since $k = 0 neq 1$). Hence, for the rest of this proof, we WLOG assume that $k neq 0$. Hence, $k$ is a positive integer. Thus, $k-1$ is a nonnegative integer.



It is well-known that $suml_{i=0}^n tup{-1}^i dbinom{n}{i} = ive{ n = 0 }$ for every nonnegative integer $n$. Applying this to $n = k-1$, we obtain $suml_{i=0}^{k-1} tup{-1}^i dbinom{k-1}{i} = ive{ k - 1 = 0 } = ive{k = 1}$. Multiplying both sides of this equality by $k$, we obtain $k suml_{i=0}^{k-1} tup{-1}^i dbinom{k-1}{i} = k ive{k = 1} = ive{k = 1}$ (where the last equality sign is easy to check directly). Hence,
begin{align}
ive{k = 1}
&= k suml_{i=0}^{k-1} tup{-1}^i dbinom{k-1}{i}
= suml_{i=0}^{k-1} tup{-1}^i cdot underbrace{k dbinom{k-1}{i}}_{substack{= tup{i+1} dbinom{k}{i+1} \ text{(by a simple computation)}}} \
&= suml_{i=0}^{k-1} tup{-1}^i cdot tup{i+1} dbinom{k}{i+1}
= suml_{i=1}^{k} tup{-1}^{i-1} cdot i dbinom{k}{i}
end{align}

(here, we have substituted $i-1$ for $i$ in the sum). This proves Lemma 3. $blacksquare$



Proof of Lemma 2 (sketched). If a subset $U in P$ does not satisfy $U subs S$, then the Iverson bracket $ive{ U subs S }$ is zero and thus renders the whole product $tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }$ zero. Similarly, this product also is rendered zero when $U in P$ does not satisfy $U subs T$. Thus, the product $tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }$ is zero unless $U in P$ satisfies both $U subs S$ and $U subs T$. Hence, the sum $suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }$ does not change its value when we restrict it to those $U$ that satisfy both $U subs S$ and $U subs T$ (because all the addends that we lose are zero anyway). In other words,
begin{align*}
suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }
&= suml_{substack{U in P; \ U subs S text{ and } U subs T}} tup{-1}^{abs{U} - 1} abs{U} cdot underbrace{ive{ U subs S }}_{=1} underbrace{ive{ U subs T }}_{=1} \
&= suml_{substack{U in P; \ U subs S text{ and } U subs T}}
tup{-1}^{abs{U} - 1} abs{U}.
end{align*}

Now, let $K$ be the intersection $S cap T$, and let $k = abs{K}$. Then, the sets $U in P$ that satisfy both $U subs S$ and $U subs T$ are precisely the nonempty subsets of the $k$-element set $K$. Thus, we know how many such sets $U$ exist of each size: there are exactly $dbinom{k}{1}$ ones of size $1$, exactly $dbinom{k}{2}$ ones of size $2$, and so on, all the way up to $dbinom{k}{k}$ ones of size $k$. Hence,
begin{align*}
suml_{substack{U in P; \ U subs S text{ and } U subs T}}
tup{-1}^{abs{U} - 1} abs{U}
&= dbinom{k}{1} tup{-1}^{1-1} cdot 1 + dbinom{k}{2} tup{-1}^{2-1} cdot 2 + cdots + dbinom{k}{k} tup{-1}^{k-1} cdot k \
&= sum_{i=1}^k dbinom{k}{i} tup{-1}^{i-1} cdot i
= sum_{i=1}^k tup{-1}^{i-1} i cdot dbinom{k}{i} \
&= ive{k = 1} qquad tup{ text{by Lemma 3} } \
&= ive{ abs{ S cap T } = 1 }
end{align*}

(since $k = abs{K} = abs{ S cap T }$).
Combining what we have, we obtain
begin{equation}
suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }
= suml_{substack{U in P; \ U subs S text{ and } U subs T}}
tup{-1}^{abs{U} - 1} abs{U}
= ive{ abs{ S cap T } = 1 } .
end{equation}

This proves Lemma 2. $blacksquare$
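
Again, Lemma 2 is easy to confirm by brute force for small $n$. Here is a short Python sketch doing so for $n leq 5$; the helper nonempty_subsets is an ad-hoc name of mine, not anything standard.

    from itertools import combinations

    def nonempty_subsets(n):
        # All nonempty subsets of {1, ..., n}, as frozensets.
        return [frozenset(c) for r in range(1, n + 1)
                for c in combinations(range(1, n + 1), r)]

    for n in range(1, 6):
        P = nonempty_subsets(n)
        for S in P:
            for T in P:
                # Right hand side of Lemma 2: only the U with U <= S and U <= T contribute.
                rhs = sum((-1) ** (len(U) - 1) * len(U)
                          for U in P if U <= S and U <= T)
                assert rhs == (1 if len(S & T) == 1 else 0)
    print("Lemma 2 checked for n = 1, ..., 5")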




Lemma 4. (a) We have $sumS abs{S} = n 2^{n-1}$.



(b) We have $sumS tup{ abs{S} - 1 } = n 2^{n-1} - 2^n + 1$.



(c) We have $prodS tup{-1}^{abs{S} - 1} = tup{-1}^{ive{n neq 1}}$.



(d) We have $prodS abs{S} = prodl_{k=1}^n k^{dbinom{n}{k}}$.




Proof of Lemma 4. (a) Fix $i in set{1,2,ldots,n}$. Then, exactly $2^{n-1}$ subsets of $set{1,2,ldots,n}$ contain $i$ (since we can freely choose which of the other $n-1$ elements such a subset should contain). Of course, all of these $2^{n-1}$ subsets are nonempty, and thus belong to $P$. Hence, exactly $2^{n-1}$ subsets $S in P$ contain $i$. Hence, the sum $sumS ive{ i in S }$ has exactly $2^{n-1}$ addends equal to $1$, while all its other addends are $0$. Therefore, this sum equals $2^{n-1}$. In other words, we have
begin{equation}
sumS ive{ i in S } = 2^{n-1} .
label{darij1.pf.l4.1}
tag{1}
end{equation}



Now, forget that we fixed $i$. We thus have proven eqref{darij1.pf.l4.1} for each $i in set{1,2,ldots,n}$.



But we have $abs{S} = suml_{i=1}^n ive{ i in S }$ for each subset $S$ of $set{1,2,ldots,n}$ (because the sum $suml_{i=1}^n ive{ i in S }$ contains exactly $abs{S}$ many addends equal to $1$, whereas all its other addends are $0$). Summing up this equation over all $S in P$, we obtain
begin{align}
sumS abs{S}
&= sumS suml_{i=1}^n ive{ i in S }
= suml_{i=1}^n underbrace{sumS ive{ i in S }}_{substack{= 2^{n-1} \ text{(by eqref{darij1.pf.l4.1})}}}
= suml_{i=1}^n 2^{n-1} = n 2^{n-1} .
end{align}

This proves Lemma 4 (a).



(b) We have $abs{P} = 2^n - 1$ (since the set $P$ consists of all the $2^n$ subsets of $set{1,2,ldots,n}$ apart from the empty set). Now,
begin{align}
sumS tup{ abs{S} - 1 }
= underbrace{sumS abs{S}}_{substack{= n 2^{n-1} \ text{(by Lemma 4 (a))}}}
- underbrace{sumS 1}_{= abs{P} = 2^n - 1}
= n 2^{n-1} - tup{2^n - 1} = n 2^{n-1} - 2^n + 1 .
end{align}

This proves Lemma 4 (b).



(c) If $n neq 1$, then $n geq 2$ (since $n$ is a positive integer), and therefore both $2^{n-1}$ and $2^n$ are even integers. Hence, if $n neq 1$, then $n 2^{n-1} - 2^n + 1 equiv 1 mod 2$. On the other hand, if $n = 1$, then $n 2^{n-1} - 2^n + 1 = 1 cdot 2^{1-1} - 2^1 + 1 = 0 equiv 0 mod 2$. Combining the previous two sentences, we conclude that $n 2^{n-1} - 2^n + 1 equiv ive{n neq 1} mod 2$. Hence, $tup{-1}^{n 2^{n-1} - 2^n + 1} = tup{-1}^{ive{n neq 1}}$. Now,
begin{align}
prodS tup{-1}^{abs{S} - 1}
&= tup{-1}^{sumS tup{ abs{S} - 1 }}
= tup{-1}^{n 2^{n-1} - 2^n + 1}
qquad tup{ text{by Lemma 4 (b)} } \
&= tup{-1}^{ive{n neq 1}} .
end{align}

This proves Lemma 4 (c).



(d) Recall that $P$ is the set of all nonempty subsets of $set{1,2,ldots,n}$. Hence, among the elements of $P$ are exactly $dbinom{n}{1}$ subsets of size $1$, exactly $dbinom{n}{2}$ subsets of size $2$, and so on, and finally exactly $dbinom{n}{n}$ subsets of size $n$ (and no other subsets). Therefore, the product $prodS abs{S}$ has exactly $dbinom{n}{1}$ factors equal to $1$, exactly $dbinom{n}{2}$ factors equal to $2$, and so on, and finally exactly $dbinom{n}{n}$ factors equal to $n$ (and no other factors). Therefore, this product equals $prodl_{k=1}^n k^{dbinom{n}{k}}$. This proves Lemma 4 (d). $blacksquare$
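
All four parts of Lemma 4 are just as easy to confirm numerically; here is a Python sketch checking them for $n leq 7$ (it uses math.prod and math.comb, hence Python 3.8+, and the same ad-hoc subset generator as above).

    from math import comb, prod   # Python 3.8+
    from itertools import combinations

    def nonempty_subsets(n):
        # All nonempty subsets of {1, ..., n}.
        return [frozenset(c) for r in range(1, n + 1)
                for c in combinations(range(1, n + 1), r)]

    for n in range(1, 8):
        P = nonempty_subsets(n)
        assert sum(len(S) for S in P) == n * 2 ** (n - 1)                    # (a)
        assert sum(len(S) - 1 for S in P) == n * 2 ** (n - 1) - 2 ** n + 1   # (b)
        assert prod((-1) ** (len(S) - 1) for S in P) == (-1) ** (1 if n != 1 else 0)        # (c)
        assert prod(len(S) for S in P) == prod(k ** comb(n, k) for k in range(1, n + 1))    # (d)
    print("Lemma 4 (a)-(d) checked for n = 1, ..., 7")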



Proof of Theorem 1 (sketched). Define two $P times P$-matrices
begin{align*}
B &= tup{ tup{-1}^{abs{T} - 1} abs{T} cdot ive{ T subs S } }_{tup{S, T} in P times P} qquad text{and} \
C &= tup{ ive{ S subs T } }_{tup{S, T} in P times P} .
end{align*}



Lemma 2 says that $A = BC$. (More precisely, Lemma 2 says that the $tup{S, T}$-th entry of $A$ equals the corresponding entry of $BC$ for all $S in P$ and $T in P$; but this yields $A = BC$.) Thus, $det A = dettup{BC} = det B cdot det C$. Now, it remains to find $det B$ and $det C$.



Here we shall finally endow our set $P$ with a total ordering. Namely, we pick any total order on $P$ with the property that every pair of two subsets $S in P$ and $T in P$ satisfying $S subs T$ must satisfy $S leq T$. There are many total orders that do the trick here; for example, we can order them by increasing size (resolving ties arbitrarily). Now, it is easy to see that the $P times P$-matrix $B$ is lower-triangular. Since the determinant of a lower-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{align}
det B
&= prodS tup{ tup{-1}^{abs{S} - 1} abs{S} underbrace{ive{ S subs S }}_{=1} }
= prodS tup{ tup{-1}^{abs{S} - 1} abs{S} } \
&= underbrace{tup{ prodS tup{-1}^{abs{S} - 1} }}_{substack{= tup{-1}^{ive{n neq 1}} \ text{(by Lemma 4 (c))}}}
tup{ prodS abs{S} }
= tup{-1}^{ive{n neq 1}} tup{ prodS abs{S} } .
end{align}



Likewise, the $P times P$-matrix $C$ is upper-triangular. Since the determinant of an upper-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{equation}
det C = prodS underbrace{ive{ S subs S }}_{=1}
= 1 .
end{equation}

Now,
begin{align}
det A
&= det B cdot underbrace{det C}_{=1} = det B = tup{-1}^{ive{n neq 1}} underbrace{tup{ prodS abs{S} }}_{substack{= prodl_{k=1}^n k^{dbinom{n}{k}} \ text{(by Lemma 4 (d))}}} \
&= tup{-1}^{ive{n neq 1}} prod_{k=1}^n k^{dbinom{n}{k}} .
end{align}

This proves Theorem 1. $blacksquare$
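
Here is a small computational companion to this proof: a sketch assuming SymPy is available for exact determinants (any exact-arithmetic package would do, and the name check_theorem1 is mine). It lists $P$ by increasing size, confirms that $B$ and $C$ are triangular in this order, that $A = BC$, and that the determinant comes out as claimed, for $n leq 4$.

    from math import comb, prod   # Python 3.8+
    from itertools import combinations
    from sympy import Matrix      # only used for exact determinants

    def check_theorem1(n):
        # List P by increasing size, so that T comes no later than S whenever T is a subset of S.
        P = [frozenset(c) for r in range(1, n + 1)
             for c in combinations(range(1, n + 1), r)]
        A = Matrix([[1 if len(S & T) == 1 else 0 for T in P] for S in P])
        B = Matrix([[(-1) ** (len(T) - 1) * len(T) * (1 if T <= S else 0)
                     for T in P] for S in P])
        C = Matrix([[1 if S <= T else 0 for T in P] for S in P])
        assert B.is_lower and C.is_upper   # triangular in the chosen order
        assert B * C == A                  # the factorization from Lemma 2
        expected = (-1) ** (1 if n != 1 else 0) * prod(k ** comb(n, k) for k in range(1, n + 1))
        assert A.det() == expected         # Theorem 1

    for n in range(1, 5):
        check_theorem1(n)
    print("Theorem 1 checked for n = 1, ..., 4")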



In hindsight, the above proof could have been restated without introducing $P times P$-matrices, since picking a total order on a finite set $P$ is equivalent to picking a bijection $phi : set{1,2,ldots,abs{P}} to P$. But I prefer to give the exercise an "invariant" formulation, and $P times P$-matrices are worth exposing anyway.



A few more telegraphic remarks:




  • The decomposition $A = BC$ constructed in the above proof of Theorem 1 is an LU-decomposition.


  • The solution to the original Putnam question I found first was the same recursive argument appearing a few times in the AoPS thread linked above. But the structure of the recursion foreshadowed the existence of a slicker proof -- the block-row-reduction operations felt like multiplying by a lower-triangular (in a properly chosen order) matrix. So I dug deeper and found the above.


  • This reminds me of Lemma 7.1 of Chris Godsil, An Introduction to the Moebius Function, arXiv:1803.06664v1.



3. Answering the postscriptum



Anyway, I wanted to prove another theorem, from the postscriptum to the original question:




Theorem 5. Let $q in QQ$. Let $D$ be the $P times P$-matrix $tup{ q^{abs{ S cap T }} }_{tup{S, T} in P}$. Then,
begin{equation}
det D = q^n tup{q-1}^{ntup{2^{n-1}-1}} .
end{equation}




Just as our proof of Theorem 1 relied on Lemma 2, our proof of Theorem 5 will rely on the following lemma:




Lemma 6. Let $S in P$ and $T in P$. Let $q in QQ$ be nonzero. Then,
begin{equation}
q^{abs{ S cap T }} = suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } .
end{equation}




Note that the assumptions "$S in P$ and $T in P$" in Lemma 6 can be replaced by the weaker assumption "$S$ and $T$ are subsets of $N$, not both empty" (if $S = T = emptyset$, then the right hand side loses the $U = emptyset$ addend and the equality fails); but we will only use Lemma 6 in the case when $S$ and $T$ belong to $P$.



Proof of Lemma 6 (sketched). If a subset $U in P$ does not satisfy $U sups S$, then the Iverson bracket $ive{ U sups S }$ is zero and thus renders the whole product $tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T }$ zero. Similarly, this product also is rendered zero when $U in P$ does not satisfy $U sups T$. Thus, the product $tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T }$ is zero unless $U in P$ satisfies both $U sups S$ and $U sups T$. Hence, the sum $suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T }$ does not change its value when we restrict it to those $U$ that satisfy both $U sups S$ and $U sups T$ (because all the addends that we lose are zero anyway). In other words,
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= suml_{substack{U in P; \ U sups S text{ and } U sups T}} tup{q-1}^{n-abs{U}} q^{abs{S}-n} underbrace{ive{ U sups S }}_{=1} q^{abs{T}} underbrace{ive{ U sups T }}_{=1} \
&= suml_{substack{U in P; \ U sups S text{ and } U sups T}}
tup{q-1}^{n-abs{U}} q^{abs{S}-n} q^{abs{T}} \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U in P; \ U sups S text{ and } U sups T}}
tup{q-1}^{n-abs{U}} .
label{darij1.pf.l6.1}
tag{2}
end{align}



Now, let $G = S cup T$. This is a nonempty subset of $N$ (since $S$ and $T$ are nonempty subsets of $N$). Let $H = N setminus G$ be the complement of $G$ in $N$. Let $h = abs{H}$.



Every subset $U$ of $N$ that satisfies $U sups S$ and $U sups T$ is automatically nonempty (since $U$ contains the nonempty set $S$ as a subset), and thus belongs to $P$. Hence, the $U in P$ that satisfy $U sups S$ and $U sups T$ are precisely the subsets $U$ of $N$ that satisfy $U sups S$ and $U sups T$. Thus, we have
begin{align*}
& set{ U in P mid U sups S text{ and } U sups T } \
= & set{ U subs N mid U sups S text{ and } U sups T } \
= & set{ U subs N mid U sups underbrace{S cup T}_{=G} } \
= & set{ U subs N mid U sups G } \
= & set{ U subs N mid N setminus U subs N setminus G }
end{align*}

(because the condition "$U sups G$" on a subset $U$ of $N$ is equivalent to the condition "$N setminus U subs N setminus G$").
Hence, the summation sign "$suml_{substack{U in P; \ U sups S text{ and } U sups T}}$" in eqref{darij1.pf.l6.1} can be rewritten as "$suml_{substack{U subs N; \ N setminus U subs N setminus G}}$". Thus, eqref{darij1.pf.l6.1} rewrites as
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U subs N; \ N setminus U subs N setminus G}}
tup{q-1}^{n-abs{U}} .
label{darij1.pf.l6.2}
tag{3}
end{align}

Every subset $U$ of $N$ satisfies $n - abs{U} = abs{N setminus U}$ (since $U subs N$ yields $abs{N setminus U} = underbrace{abs{N}}_{=n} - abs{U} = n - abs{U}$). Thus, eqref{darij1.pf.l6.2} becomes
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} underbrace{suml_{substack{U subs N; \ N setminus U subs N setminus G}}}_{substack{= suml_{substack{U subs N; \ N setminus U subs H}} \ tup{ text{since } N setminus G = H } }}
underbrace{tup{q-1}^{n-abs{U}}}_{substack{= tup{q-1}^{abs{N setminus U}} \ tup{ text{since } n - abs{U} = abs{N setminus U} } }} \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U subs N; \ N setminus U subs H}}
tup{q-1}^{abs{N setminus U}} \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U subs N; \ U subs H}}
tup{q-1}^{abs{U}}
label{darij1.pf.l6.3}
tag{4}
end{align}

(here, we have substituted $U$ for $N setminus U$ in the sum, since the map
begin{align}
set{U subs N} to set{U subs N}, qquad
U mapsto N setminus U
end{align}

is a bijection).



The summation sign "$suml_{substack{U subs N; \ U subs H}}$" on the right hand side of eqref{darij1.pf.l6.3} can be replaced by "$suml_{U subs H}$", since $H$ is a subset of $N$. Thus, eqref{darij1.pf.l6.3} rewrites as
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} suml_{U subs H} tup{q-1}^{abs{U}} .
label{darij1.pf.l6.4}
tag{5}
end{align}



The sum on the right hand side of this equality is easy to simplify. The set $H$ has size $abs{H} = h$. Thus, it has exactly $dbinom{h}{0}$ subsets of size $0$, exactly $dbinom{h}{1}$ subsets of size $1$, and so on, all the way up to $dbinom{h}{h}$ subsets of size $h$. Hence, the sum $suml_{U subs H} tup{q-1}^{abs{U}}$ has exactly $dbinom{h}{0}$ many terms equal to $tup{q-1}^0$, exactly $dbinom{h}{1}$ many terms equal to $tup{q-1}^1$, and so on, all the way up to $dbinom{h}{h}$ many terms equal to $tup{q-1}^h$. Hence, this sum can be rewritten as follows:
begin{align}
& suml_{U subs H} tup{q-1}^{abs{U}} \
& = dbinom{h}{0} tup{q-1}^0 + dbinom{h}{1} tup{q-1}^1 + cdots + dbinom{h}{h} tup{q-1}^h \
& = sum_{k=0}^h dbinom{h}{k} tup{q-1}^k = tup{tup{q-1}+1}^h \
& qquad tup{ text{since the binomial formula yields } tup{tup{q-1}+1}^h = sum_{k=0}^h dbinom{h}{k} tup{q-1}^k } \
& = q^h .
end{align}

Thus, eqref{darij1.pf.l6.4} becomes
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} underbrace{suml_{U subs H} tup{q-1}^{abs{U}}}_{= q^h} \
&= q^{abs{S}-n} q^{abs{T}} q^h = q^{tup{abs{S}-n} + abs{T} + h} .
label{darij1.pf.l6.5}
tag{6}
end{align}



But $H$ is the complement of $G$ in $N$. Hence, $abs{H} = underbrace{abs{N}}_{=n} - abs{G} = n - abs{G}$. Therefore, $abs{G} = n - underbrace{abs{H}}_{= h} = n - h$.



Recall the basic fact that $abs{S} + abs{T} = abs{S cup T} + abs{S cap T} = abs{G} + abs{S cap T}$ (since $S cup T = G$). Solving this equation for $abs{S cap T}$, we obtain
begin{align}
abs{S cap T} = abs{S} + abs{T} - underbrace{abs{G}}_{= n - h} = abs{S} + abs{T} - tup{n - h} = tup{abs{S}-n} + abs{T} + h .
end{align}

Therefore, $q^{ abs{S cap T} } = q^{tup{abs{S}-n} + abs{T} + h}$. Comparing this with eqref{darij1.pf.l6.5}, we obtain
begin{equation}
q^{abs{ S cap T }} = suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } .
end{equation}

This proves Lemma 6. $blacksquare$
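
Lemma 6, too, can be confirmed by brute force for small $n$; here is a Python sketch using exact rational arithmetic via fractions.Fraction, with the nonzero value $q = 5/3$ picked arbitrarily (the subset generator is the same ad-hoc helper as before).

    from fractions import Fraction
    from itertools import combinations

    def nonempty_subsets(n):
        return [frozenset(c) for r in range(1, n + 1)
                for c in combinations(range(1, n + 1), r)]

    q = Fraction(5, 3)   # any nonzero rational would do
    for n in range(1, 5):
        P = nonempty_subsets(n)
        for S in P:
            for T in P:
                # Right hand side of Lemma 6: only the U containing both S and T contribute.
                rhs = sum((q - 1) ** (n - len(U)) * q ** (len(S) - n) * q ** len(T)
                          for U in P if U >= S and U >= T)
                assert rhs == q ** len(S & T)
    print("Lemma 6 checked for n = 1, ..., 4 and q = 5/3")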



Proof of Theorem 5 (sketched). Both $det D$ and $q^n tup{q-1}^{ntup{2^{n-1}-1}}$ are polynomials in $q$ with integer coefficients (since $2^{n-1}-1$ is a nonnegative integer). Thus, the equality we need to prove is an equality between two polynomials in $q$. Hence, in order to prove it for all $q in QQ$, it suffices to prove it for infinitely many values of $q$ (because an equality between two polynomials in $q$ that holds for infinitely many values of $q$ must automatically hold for all $q in QQ$). Thus, in particular, it suffices to prove it for all $q in QQ$ satisfying $q neq 0$ and $q neq 1$.
Thus, let us WLOG assume that $q neq 0$ and $q neq 1$ (the latter assumption will let us divide by $q - 1$ in the computation of $det E$ below).
Define two $P times P$-matrices
begin{align*}
E &= tup{ tup{q-1}^{n-abs{T}} q^{abs{S}-n} ive{ T sups S } }_{tup{S, T} in P times P} qquad text{and} \
F &= tup{ q^{abs{T}} ive{ S sups T } }_{tup{S, T} in P times P} .
end{align*}



Lemma 6 says that $D = EF$. (More precisely, Lemma 6 says that the $tup{S, T}$-th entry of $D$ equals the corresponding entry of $EF$ for all $S in P$ and $T in P$; but this yields $D = EF$.) Thus, $det D = dettup{EF} = det E cdot det F$. Now, it remains to find $det E$ and $det F$.



Recall that $abs{P} = 2^n - 1$ (as we proved during our proof of Lemma 4 (b)). Also, $sumS abs{S} = n 2^{n-1}$ (by Lemma 4 (a)).



Endow our set $P$ with the same total ordering that we used in the proof of Theorem 1. Now, it is easy to see that the $P times P$-matrix $E$ is upper-triangular. Since the determinant of an upper-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{align}
det E
&= prodS tup{ underbrace{tup{q-1}^{n-abs{S}} q^{abs{S}-n}}_{= tup{dfrac{q-1}{q}}^n tup{dfrac{q}{q-1}}^{abs{S}} }
underbrace{ive{ S sups S }}_{= 1} }
= prodS tup{ tup{dfrac{q-1}{q}}^n tup{dfrac{q}{q-1}}^{abs{S}} } \
&= underbrace{ tup{ prodS tup{dfrac{q-1}{q}}^n } }_{= tup{tup{dfrac{q-1}{q}}^n}^{abs{P}} }
underbrace{ tup{ prodS tup{dfrac{q}{q-1}}^{abs{S}} } }_{= tup{dfrac{q}{q-1}}^{sumS abs{S}} } \
&= tup{tup{dfrac{q-1}{q}}^n}^{abs{P}} tup{dfrac{q}{q-1}}^{sumS abs{S}} \
&= underbrace{ tup{tup{dfrac{q-1}{q}}^n}^{2^n - 1} }_{= tup{dfrac{q-1}{q}}^{n tup{2^n - 1}} }
underbrace{ tup{dfrac{q}{q-1}}^{n 2^{n-1}} }_{= tup{dfrac{q-1}{q}}^{-n 2^{n-1}}}\
& qquad tup{ text{since } abs{P} = 2^n - 1
text{ and } sumS abs{S} = n 2^{n-1} } \
&= tup{dfrac{q-1}{q}}^{n tup{2^n - 1}} tup{dfrac{q-1}{q}}^{-n 2^{n-1}} \
&= tup{dfrac{q-1}{q}}^{n tup{2^n - 1} - n 2^{n-1}}
= tup{dfrac{q-1}{q}}^{n 2^{n-1} - n}
end{align}

(since $n tup{2^n - 1} - n 2^{n-1} = n 2^{n-1} - n$).



Likewise, the $P times P$-matrix $F$ is lower-triangular. Since the determinant of a lower-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{align}
det F
&= prodS tup{ q^{abs{S}}
underbrace{ive{ S sups S }}_{= 1} }
= prodS q^{abs{S}}
= q^{sumS abs{S}}
= q^{n 2^{n-1}}
end{align}

(since $sumS abs{S} = n 2^{n-1}$).



Now,
begin{align}
det D
&= underbrace{det E}_{= tup{dfrac{q-1}{q}}^{n 2^{n-1} - n}}
cdot
underbrace{det F}_{= q^{n 2^{n-1}}}
= tup{dfrac{q-1}{q}}^{n 2^{n-1} - n} q^{n 2^{n-1}} \
&= dfrac{tup{q-1}^{n 2^{n-1} - n}}{q^{n 2^{n-1} - n}} q^{n 2^{n-1}}
= tup{q-1}^{n 2^{n-1} - n} q^{n 2^{n-1} - tup{n 2^{n-1} - n}} \
&= q^{n 2^{n-1} - tup{n 2^{n-1} - n}} tup{q-1}^{n 2^{n-1} - n}
= q^n tup{q-1}^{n tup{2^{n-1} - 1}}
end{align}

(since $n 2^{n-1} - tup{n 2^{n-1} - n} = n$ and $n 2^{n-1} - n = n tup{2^{n-1} - 1}$).
This proves Theorem 5. $blacksquare$
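
Finally, here is a computational check of Theorem 5 and of the factorization $D = EF$, with $q$ kept symbolic so that the identity is verified as an identity of polynomials in $q$. This again assumes SymPy; the name check_theorem5 is mine. For $n leq 3$ it finishes in a moment.

    from itertools import combinations
    from sympy import Matrix, symbols, simplify

    q = symbols('q')

    def check_theorem5(n):
        # Same size-increasing order on P as in the proof of Theorem 1.
        P = [frozenset(c) for r in range(1, n + 1)
             for c in combinations(range(1, n + 1), r)]
        D = Matrix([[q ** len(S & T) for T in P] for S in P])
        E = Matrix([[(q - 1) ** (n - len(T)) * q ** (len(S) - n) * (1 if T >= S else 0)
                     for T in P] for S in P])
        F = Matrix([[q ** len(T) * (1 if S >= T else 0) for T in P] for S in P])
        assert E.is_upper and F.is_lower   # triangular in the chosen order
        EF = E * F
        for i in range(len(P)):            # D = EF, checked entry by entry (Lemma 6)
            for j in range(len(P)):
                assert simplify(EF[i, j] - D[i, j]) == 0
        expected = q ** n * (q - 1) ** (n * (2 ** (n - 1) - 1))
        assert simplify(D.det() - expected) == 0   # Theorem 5

    for n in range(1, 4):
        check_theorem5(n)
    print("Theorem 5 checked for n = 1, 2, 3")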

















$endgroup$









  • 1




    $begingroup$
    Lemma 3 is trivial since $i binom{k}{i} = k binom{k-1}{i-1}$.
    $endgroup$
    – Zhi-Wei Sun
    Dec 7 '18 at 6:21








  • 5




    $begingroup$
    @Zhi-WeiSun: I know it is. This is a semi-expository writeup that is supposed to be readable by anyone who understands the problem.
    $endgroup$
    – darij grinberg
    Dec 7 '18 at 6:38










  • $begingroup$
    @darijgrinberg: Thank you for the detailed work!
    $endgroup$
    – T. Amdeberhan
    Dec 9 '18 at 4:36










  • $begingroup$
    @darij BRAVO for using Iverson notation in a matrix definition! While avoiding the creation of a name, it adds clarity. From a computer point of view we could say "The implementation IS the name."
    $endgroup$
    – Jérôme JEAN-CHARLES
    Dec 14 '18 at 15:35












  • $begingroup$
    Another approach to Lemma 3: say $f(x) =(1-x)^k$; then the sum is $-f’(1)$.
    $endgroup$
    – Jeff Strom
    2 days ago

















20












20








20





$begingroup$

$newcommand{QQ}{mathbb{Q}}
newcommand{set}[1]{left{ #1 right}}
newcommand{abs}[1]{left| #1 right|}
newcommand{tup}[1]{left( #1 right)}
newcommand{ive}[1]{left[ #1 right]}
newcommand{suml}{sumlimits}
newcommand{sumS}{suml_{S in P}}
newcommand{prodl}{prodlimits}
newcommand{prodS}{prodl_{S in P}}
newcommand{subs}{subseteq}
newcommand{sups}{supseteq}$



Here are proofs for both the original question (Theorem 1 below) and for the postscript (Theorem 5 below) that follow my argument at https://artofproblemsolving.com/community/u432h1747709p11384734 as closely as possible. Sorry for their length, much of it due to exposition of matrix folklore. (If you know this, scroll all the way down to section 2.)



1. Matrices indexed by finite sets



We are grown-ups and don't need to index the rows and the columns of a matrix by numbers. We can pick any set $P$, and define a $P times P$-matrix (say, with rational entries) to be a family $tup{a_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$ of rational numbers indexed by pairs $tup{p, q} in P times P$. When the set $P$ is finite, such a $P times P$-matrix always has a well-defined determinant, which can be defined by
begin{equation}
det tup{ tup{a_{p, q}}_{tup{p, q} in P times P} }
= sum_{sigma in S_P} tup{-1}^{sigma} prod_{p in P} a_{p, sigmatup{p}} .
end{equation}

Here, $S_P$ denotes the set of permutations of $P$, and the notation $tup{-1}^{sigma}$ stands for the sign of a permutation $sigma$. If you think that this is problematic because the sign of a permutation is usually defined using a total order on $P$, then take a break and convince yourself that it does not actually depend on this total order. (This is proven in maximum detail in the solution of Exercise 5.12 of my Notes on the combinatorial fundamentals of algebra, 10th of January 2019.) Determinants of $P times P$-matrices behave as nicely as determinants of "usual" $n times n$-matrices, and arguably even better, since it is easier to define minors and cofactors when the rows and columns are indexed by an arbitrary finite set. Do keep in mind that the rows and the columns must be indexed by the same set; we cannot define the determinant of a $P times Q$-matrix for different $P$ and $Q$, even when $abs{P} = abs{Q}$.



Of course, an $n times n$-matrix (for a nonnegative integer $n$) is the same as an $N times N$-matrix for $N = set{1,2,ldots,n}$. Thus, our notion of $P times P$-matrices generalizes the classical notion of $n times n$-matrices. This generalization "respects" standard concepts like determinants -- e.g., the determinant of an $n times n$-matrix does not change if we instead consider it as an $N times N$-matrix. Does this generalization exhibit genuinely new behavior or does it just add convenience? Arguably it's the latter, because conversely, you can turn any $P times P$-matrix (for any finite set $P$) into an $n times n$-matrix. Namely, if you have a finite set $P$ and a $P times P$-matrix $A = tup{a_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$, then you can set $n = abs{P}$ and fix a bijection $phi : set{1,2,ldots,n} to P$, and form the $n times n$-matrix $A_phi := tup{ a_{phitup{i}, phitup{j}} }_{tup{i, j} in set{1,2,ldots,n} times set{1,2,ldots,n}} in QQ^{n times n}$, and observe that $A_phi$ is "basically the same as" $A$ except that the rows and the columns have been reindexed. We shall refer to this operation (going from $A$ to $A_phi$) as numerical reindexing. So $P times P$-matrices differ from $n times n$-matrices only in how we index their rows and columns. Nevertheless they are useful, because often there is no canonical bijection $phi : set{1,2,ldots,n} to P$ (exhibit A: this Putnam problem), and we are algebraists and want to make as few non-canonical choices as possible.



Whenever $P$ is a finite set, we can multiply two $P times P$-matrices: The product $AB$ of two $P times P$-matrices $A = tup{a_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$ and $B = tup{b_{p, q}}_{tup{p, q} in P times P} in QQ^{P times P}$ is defined to be the $P times P$-matrix $tup{ sum_{r in P} a_{p, r} b_{r, q} }_{tup{p, q} in P times P} in QQ^{P times P}$. This multiplication behaves just as nicely as usual multiplication of $n times n$-matrices. In particular, $dettup{AB} = det A cdot det B$ whenever $A$ and $B$ are two $P times P$-matrices. And of course, you can prove this by numerical reindexing, because numerical reindexing preserves both products and determinants (assuming, of course, that you pick one bijection $phi : set{1,2,ldots,n} to P$ and stick with it).



If you remember the definition of triangular $n times n$-matrices, then its generalization to $P times P$-matrices is straightforward: When $P$ is a totally ordered finite set, we say that a $P times P$-matrix $A = tup{a_{p, q}}_{tup{p, q} in P times P}$ is lower-triangular if
begin{equation}
a_{p, q} = 0 qquad text{for all } tup{p, q} in P times P text{ satisfying } p < q ;
end{equation}

and we say that a $P times P$-matrix $A = tup{a_{p, q}}_{tup{p, q} in P times P}$ is upper-triangular if
begin{equation}
a_{p, q} = 0 qquad text{for all } tup{p, q} in P times P text{ satisfying } p > q .
end{equation}

Again, lower-triangular $P times P$-matrices become lower-triangular $n times n$-matrices upon numerical reindexing, as long as we make sure to pick our bijection $phi : set{1,2,ldots,n} to P$ to be an order isomorphism (which, by the way, is unique). So we can derive properties of lower-triangular $P times P$-matrices from analogous properties of lower-triangular $n times n$-matrices by numerical reindexing. In particular, we can thus show that the determinant of a lower-triangular $P times P$-matrix equals the product of its diagonal entries; i.e., if $A = tup{a_{p, q}}_{tup{p, q} in P times P}$ is a lower-triangular $P times P$-matrix, then $det A = prodl_{p in P} a_{p, p}$. The same holds for upper-triangular $P times P$-matrices.



2. Answering the original question



Now, let's not forget about the question.



Fix a positive integer $n$. Let $N$ be the set $set{1,2,ldots,n}$. Let $P$ be the set of all the $2^n - 1$ nonempty subsets of $N$. (We shall later totally order $P$, but for now $P$ is just a finite set.)



We shall also use the Iverson bracket notation, which will save us some ampersands and braces. This allows us to rewrite the definition of $A_{ij}$ in your question as $A_{ij} = ive{ abs{ S_i cap S_j } = 1 }$. But as we said, we drop the numerical indexing, and instead define a matrix whose rows and columns are indexed by $P$ (that is, by all the nonempty subsets of $N$). Thus, the claim of the exercise rewrites as follows:




Theorem 1. Let $A$ be the $P times P$-matrix $tup{ ive{ abs{ S cap T } = 1 } }_{tup{S, T} in P}$. Then,
begin{equation}
det A = tup{-1}^{ive{n neq 1}} prod_{k=1}^n k^{dbinom{n}{k}} .
end{equation}




The key to proving this is the following simple fact:




Lemma 2. Let $S in P$ and $T in P$. Then,
begin{equation}
ive{ abs{ S cap T } = 1 } = suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T } .
end{equation}




This, in turn, shall be derived from the following binomial identity:




Lemma 3. Let $k$ be a nonnegative integer. Then,
begin{equation}
sum_{i=1}^k tup{-1}^{i-1} i cdot dbinom{k}{i} = ive{k = 1}.
end{equation}




Proof of Lemma 3 (sketched). If $k = 0$, then this equality holds for obvious reasons (indeed, the left hand side is an empty sum and thus vanishes, while the right hand side vanishes since $k = 0 neq 1$). Hence, for the rest of this proof, we WLOG assume that $k neq 0$. Hence, $k$ is a positive integer. Thus, $k-1$ is a nonnegative integer.



It is well-known that $suml_{i=0}^n tup{-1}^i dbinom{n}{i} = ive{ n = 0 }$ for every nonnegative integer $n$. Applying this to $n = k-1$, we obtain $suml_{i=0}^{k-1} tup{-1}^i dbinom{k-1}{i} = ive{ k - 1 = 0 } = ive{k = 1}$. Multiplying both sides of this equality by $k$, we obtain $k suml_{i=0}^{k-1} tup{-1}^i dbinom{k-1}{i} = k ive{k = 1} = ive{k = 1}$ (where the last equality sign is easy to check directly). Hence,
begin{align}
ive{k = 1}
&= k suml_{i=0}^{k-1} tup{-1}^i dbinom{k-1}{i}
= suml_{i=0}^{k-1} tup{-1}^i cdot underbrace{k dbinom{k-1}{i}}_{substack{= tup{i+1} dbinom{k}{i+1} \ text{(by a simple computation)}}} \
&= suml_{i=0}^{k-1} tup{-1}^i cdot tup{i+1} dbinom{k}{i+1}
= suml_{i=1}^{k} tup{-1}^{i-1} cdot i dbinom{k}{i}
end{align}

(here, we have substituted $i-1$ for $i$ in the sum). This proves Lemma 3. $blacksquare$



Proof of Lemma 2 (sketched). If a subset $U in P$ does not satisfy $U subs S$, then the Iverson bracket $ive{ U subs S }$ is zero and thus renders the whole product $tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }$ zero. Similarly, this product also is rendered zero when $U in P$ does not satisfy $U subs T$. Thus, the product $tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }$ is zero unless $U in P$ satisfies both $U subs S$ and $U subs T$. Hence, the sum $suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }$ does not change its value when we restrict it to those $U$ that satisfy both $U subs S$ and $U subs T$ (because all the addends that we lose are zero anyway). In other words,
begin{align*}
suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }
&= suml_{substack{U in P; \ U subs S text{ and } U subs T}} tup{-1}^{abs{U} - 1} abs{U} cdot underbrace{ive{ U subs S }}_{=1} underbrace{ive{ U subs T }}_{=1} \
&= suml_{substack{U in P; \ U subs S text{ and } U subs T}}
tup{-1}^{abs{U} - 1} abs{U}.
end{align*}

Now, let $K$ be the intersection $S cap T$, and let $k = abs{K}$. Then, the sets $U in P$ that satisfy both $U subs S$ and $U subs T$ are precisely the nonempty subsets of the $k$-element set $K$. Thus, we know how many such sets $U$ exist of each size: there are exactly $dbinom{k}{1}$ ones of size $1$, exactly $dbinom{k}{2}$ ones of size $2$, and so on, all the way up to $dbinom{k}{k}$ ones of size $k$. Hence,
begin{align*}
suml_{substack{U in P; \ U subs S text{ and } U subs T}}
tup{-1}^{abs{U} - 1} abs{U}
&= dbinom{k}{1} tup{-1}^{1-1} cdot 1 + dbinom{k}{2} tup{-1}^{2-1} cdot 2 + cdots + dbinom{k}{k} tup{-1}^{k-1} cdot k \
&= sum_{i=1}^k dbinom{k}{i} tup{-1}^{i-1} cdot i
= sum_{i=1}^k tup{-1}^{i-1} i cdot dbinom{k}{i} \
&= ive{k = 1} qquad tup{ text{by Lemma 3} } \
&= ive{ abs{ S cap T } = 1 }
end{align*}

(since $k = abs{K} = abs{ S cap T }$).
Combining what we have, we obtain
begin{equation}
suml_{U in P} tup{-1}^{abs{U} - 1} abs{U} cdot ive{ U subs S } ive{ U subs T }
= suml_{substack{U in P; \ U subs S text{ and } U subs T}}
tup{-1}^{abs{U} - 1} abs{U}
= ive{ abs{ S cap T } = 1 } .
end{equation}

This proves Lemma 2. $blacksquare$
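
(Again only a sanity check, not part of the proof: here is a short Python sketch that verifies the identity of Lemma 2 by brute force for $n = 4$, running over all pairs of nonempty subsets.)

    from itertools import combinations

    n = 4
    # P = all nonempty subsets of {1, 2, ..., n}
    P = [frozenset(c) for r in range(1, n + 1)
         for c in combinations(range(1, n + 1), r)]

    for S in P:
        for T in P:
            lhs = 1 if len(S & T) == 1 else 0
            # right hand side of Lemma 2: sum over nonempty U contained in both S and T
            rhs = sum((-1) ** (len(U) - 1) * len(U)
                      for U in P if U <= S and U <= T)
            assert lhs == rhs
    print("Lemma 2 verified for n =", n)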




Lemma 4. (a) We have $sumS abs{S} = n 2^{n-1}$.



(b) We have $sumS tup{ abs{S} - 1 } = n 2^{n-1} - 2^n + 1$.



(c) We have $prodS tup{-1}^{abs{S} - 1} = tup{-1}^{ive{n neq 1}}$.



(d) We have $prodS abs{S} = prodl_{k=1}^n k^{dbinom{n}{k}}$.




Proof of Lemma 4. (a) Fix $i in set{1,2,ldots,n}$. Then, exactly $2^{n-1}$ subsets of $set{1,2,ldots,n}$ contain $i$ (since we can freely choose which of the other $n-1$ elements such a subset should contain). Of course, all of these $2^{n-1}$ subsets are nonempty, and thus belong to $P$. Hence, exactly $2^{n-1}$ subsets $S in P$ contain $i$. Hence, the sum $sumS ive{ i in S }$ has exactly $2^{n-1}$ addends equal to $1$, while all its other addends are $0$. Therefore, this sum equals $2^{n-1}$. In other words, we have
begin{equation}
sumS ive{ i in S } = 2^{n-1} .
label{darij1.pf.l4.1}
tag{1}
end{equation}



Now, forget that we fixed $i$. We thus have proven eqref{darij1.pf.l4.1} for each $i in set{1,2,ldots,n}$.



But we have $abs{S} = suml_{i=1}^n ive{ i in S }$ for each subset $S$ of $set{1,2,ldots,n}$ (because the sum $suml_{i=1}^n ive{ i in S }$ contains exactly $abs{S}$ many addends equal to $1$, whereas all its other addends are $0$). Summing up this equation over all $S in P$, we obtain
begin{align}
sumS abs{S}
&= sumS suml_{i=1}^n ive{ i in S }
= suml_{i=1}^n underbrace{sumS ive{ i in S }}_{substack{= 2^{n-1} \ text{(by eqref{darij1.pf.l4.1})}}}
= suml_{i=1}^n 2^{n-1} = n 2^{n-1} .
end{align}

This proves Lemma 4 (a).



(b) We have $abs{P} = 2^n - 1$ (since the set $P$ consists of all the $2^n$ subsets of $set{1,2,ldots,n}$ apart from the empty set). Now,
begin{align}
sumS tup{ abs{S} - 1 }
= underbrace{sumS abs{S}}_{substack{= n 2^{n-1} \ text{(by Lemma 4 (a))}}}
- underbrace{sumS 1}_{= abs{P} = 2^n - 1}
= n 2^{n-1} - tup{2^n - 1} = n 2^{n-1} - 2^n + 1 .
end{align}

This proves Lemma 4 (b).



(c) If $n neq 1$, then $n geq 2$ (since $n$ is a positive integer), and therefore both $2^{n-1}$ and $2^n$ are even integers. Hence, if $n neq 1$, then $n 2^{n-1} - 2^n + 1 equiv 1 mod 2$. On the other hand, if $n = 1$, then $n 2^{n-1} - 2^n + 1 = 1 cdot 2^{1-1} - 2^1 + 1 = 0 equiv 0 mod 2$. Combining the previous two sentences, we conclude that $n 2^{n-1} - 2^n + 1 equiv ive{n neq 1} mod 2$. Hence, $tup{-1}^{n 2^{n-1} - 2^n + 1} = tup{-1}^{ive{n neq 1}}$. Now,
begin{align}
prodS tup{-1}^{abs{S} - 1}
&= tup{-1}^{sumS tup{ abs{S} - 1 }}
= tup{-1}^{n 2^{n-1} - 2^n + 1}
qquad tup{ text{by Lemma 4 (b)} } \
&= tup{-1}^{ive{n neq 1}} .
end{align}

This proves Lemma 4 (c).



(d) Recall that $P$ is the set of all nonempty subsets of $set{1,2,ldots,n}$. Hence, among the elements of $P$ are exactly $dbinom{n}{1}$ subsets of size $1$, exactly $dbinom{n}{2}$ subsets of size $2$, and so on, and finally exactly $dbinom{n}{n}$ subsets of size $n$ (and no other subsets). Therefore, the product $prodS abs{S}$ has exactly $dbinom{n}{1}$ factors equal to $1$, exactly $dbinom{n}{2}$ factors equal to $2$, and so on, and finally exactly $dbinom{n}{n}$ factors equal to $n$ (and no other factors). Therefore, this product equals $prodl_{k=1}^n k^{dbinom{n}{k}}$. This proves Lemma 4 (d). $blacksquare$
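
(For the skeptical reader, the four counting identities of Lemma 4 are easy to confirm by machine; here is a minimal Python sketch doing so for $n leq 6$.)

    from itertools import combinations
    from math import comb

    for n in range(1, 7):
        P = [frozenset(c) for r in range(1, n + 1)
             for c in combinations(range(1, n + 1), r)]
        assert sum(len(S) for S in P) == n * 2 ** (n - 1)                   # part (a)
        assert sum(len(S) - 1 for S in P) == n * 2 ** (n - 1) - 2 ** n + 1  # part (b)
        sign, size_product = 1, 1
        for S in P:
            sign *= (-1) ** (len(S) - 1)
            size_product *= len(S)
        assert sign == (-1 if n != 1 else 1)                                # part (c)
        expected = 1
        for k in range(1, n + 1):
            expected *= k ** comb(n, k)
        assert size_product == expected                                     # part (d)
    print("Lemma 4 (a)-(d) verified for n = 1, ..., 6")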



Proof of Theorem 1 (sketched). Define two $P times P$-matrices
begin{align*}
B &= tup{ tup{-1}^{abs{T} - 1} abs{T} cdot ive{ T subs S } }_{tup{S, T} in P times P} qquad text{and} \
C &= tup{ ive{ S subs T } }_{tup{S, T} in P times P} .
end{align*}



Lemma 2 says that $A = BC$. (More precisely, Lemma 2 says that the $tup{S, T}$-th entry of $A$ equals the corresponding entry of $BC$ for all $S in P$ and $T in P$; but this yields $A = BC$.) Thus, $det A = dettup{BC} = det B cdot det C$. Now, it remains to find $det B$ and $det C$.



Here we shall finally endow our set $P$ with a total ordering. Namely, we pick any total order on $P$ with the property that any two subsets $S in P$ and $T in P$ satisfying $S subs T$ must satisfy $S leq T$. There are many total orders that do the trick here; for example, we can order the subsets by increasing size (resolving ties arbitrarily). Now, it is easy to see that the $P times P$-matrix $B$ is lower-triangular. Since the determinant of a lower-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{align}
det B
&= prodS tup{ tup{-1}^{abs{S} - 1} abs{S} underbrace{ive{ S subs S }}_{=1} }
= prodS tup{ tup{-1}^{abs{S} - 1} abs{S} } \
&= underbrace{tup{ prodS tup{-1}^{abs{S} - 1} }}_{substack{= tup{-1}^{ive{n neq 1}} \ text{(by Lemma 4 (c))}}}
tup{ prodS abs{S} }
= tup{-1}^{ive{n neq 1}} tup{ prodS abs{S} } .
end{align}



Likewise, the $P times P$-matrix $C$ is upper-triangular. Since the determinant of an upper-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{equation}
det C = prodS underbrace{ive{ S subs S }}_{=1}
= 1 .
end{equation}

Now,
begin{align}
det A
&= det B cdot underbrace{det C}_{=1} = det B = tup{-1}^{ive{n neq 1}} underbrace{tup{ prodS abs{S} }}_{substack{= prodl_{k=1}^n k^{dbinom{n}{k}} \ text{(by Lemma 4 (d))}}} \
&= tup{-1}^{ive{n neq 1}} prod_{k=1}^n k^{dbinom{n}{k}} .
end{align}

This proves Theorem 1. $blacksquare$
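
(If you want to see the factorization and the determinant formula with your own eyes, here is a short Python sketch; it assumes SymPy is available and plays no role in the proof. It lists $P$ by increasing size, so that $B$ is lower-triangular and $C$ is upper-triangular, checks $A = BC$, and compares $det A$ with the claimed value for $n leq 4$.)

    from itertools import combinations
    from math import comb
    from sympy import Integer, Matrix

    def check_theorem_1(n):
        # P listed by increasing size, so S precedes T whenever S is a proper subset of T
        P = [frozenset(c) for r in range(1, n + 1)
             for c in combinations(range(1, n + 1), r)]
        m = len(P)
        A = Matrix(m, m, lambda i, j: Integer(1) if len(P[i] & P[j]) == 1 else Integer(0))
        B = Matrix(m, m, lambda i, j: Integer((-1) ** (len(P[j]) - 1) * len(P[j]))
                   if P[j] <= P[i] else Integer(0))
        C = Matrix(m, m, lambda i, j: Integer(1) if P[i] <= P[j] else Integer(0))
        assert A == B * C                          # the factorization coming from Lemma 2
        expected = Integer(-1) ** (0 if n == 1 else 1)
        for k in range(1, n + 1):
            expected *= Integer(k) ** comb(n, k)
        assert A.det() == expected
        print("n =", n, ": det A =", A.det())

    for n in range(1, 5):
        check_theorem_1(n)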



In hindsight, the above proof could have been restated without introducing $P times P$-matrices, since picking a total order on a finite set $P$ is equivalent to picking a bijection $phi : set{1,2,ldots,abs{P}} to P$. But I prefer to give the exercise an "invariant" formulation, and $P times P$-matrices are worth exposing anyway.



A few more telegraphic remarks:




  • The decomposition $A = BC$ constructed in the above proof of Theorem 1 is an LU-decomposition.


  • The first solution I found to the original Putnam question was the same recursive argument that appears a few times in the AoPS thread linked above. But the structure of the recursion foreshadowed the existence of a slicker proof -- the block-row-reduction operations felt like multiplying by a matrix that is lower-triangular with respect to a properly chosen order. So I dug deeper and found the above.


  • This reminds me of Lemma 7.1 of Chris Godsil, An Introduction to the Moebius Function, arXiv:1803.06664v1.



3. Answering the postscriptum



Anyway, I wanted to prove another theorem, from the postscriptum to the original question:




Theorem 5. Let $q in QQ$. Let $D$ be the $P times P$-matrix $tup{ q^{abs{ S cap T }} }_{tup{S, T} in P}$. Then,
begin{equation}
det D = q^n tup{q-1}^{ntup{2^{n-1}-1}} .
end{equation}




Just as our proof of Theorem 1 relied on Lemma 2, our proof of Theorem 5 will rely on the following lemma:




Lemma 6. Let $S in P$ and $T in P$. Let $q in QQ$ be nonzero. Then,
begin{equation}
q^{abs{ S cap T }} = suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } .
end{equation}




Note that the assumptions "$S in P$ and $T in P$" in Lemma 6 can be weakened to "$S$ and $T$ are subsets of $N$, at least one of which is nonempty" (the proof below only uses the nonemptiness of one of them); but we will only use Lemma 6 in the case when $S$ and $T$ both belong to $P$. (If $S$ and $T$ are both empty, the right hand side misses the addend for $U = emptyset$, and the identity fails.)



Proof of Lemma 6 (sketched). If a subset $U in P$ does not satisfy $U sups S$, then the Iverson bracket $ive{ U sups S }$ is zero and thus renders the whole product $tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T }$ zero. Similarly, this product also is rendered zero when $U in P$ does not satisfy $U sups T$. Thus, the product $tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T }$ is zero unless $U in P$ satisfies both $U sups S$ and $U sups T$. Hence, the sum $suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T }$ does not change its value when we restrict it to those $U$ that satisfy both $U sups S$ and $U sups T$ (because all the addends that we lose are zero anyway). In other words,
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= suml_{substack{U in P; \ U sups S text{ and } U sups T}} tup{q-1}^{n-abs{U}} q^{abs{S}-n} underbrace{ive{ U sups S }}_{=1} q^{abs{T}} underbrace{ive{ U sups T }}_{=1} \
&= suml_{substack{U in P; \ U sups S text{ and } U sups T}}
tup{q-1}^{n-abs{U}} q^{abs{S}-n} q^{abs{T}} \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U in P; \ U sups S text{ and } U sups T}}
tup{q-1}^{n-abs{U}} .
label{darij1.pf.l6.1}
tag{2}
end{align}



Now, let $G = S cup T$. This is a nonempty subset of $N$ (since $S$ and $T$ are nonempty subsets of $N$). Let $H = N setminus G$ be the complement of $G$ in $N$. Let $h = abs{H}$.



Every subset $U$ of $N$ that satisfies $U sups S$ and $U sups T$ is automatically nonempty (since $U$ contains the nonempty set $S$ as a subset), and thus belongs to $P$. Hence, the $U in P$ that satisfy $U sups S$ and $U sups T$ are precisely the subsets $U$ of $N$ that satisfy $U sups S$ and $U sups T$. Thus, we have
begin{align*}
& set{ U in P mid U sups S text{ and } U sups T } \
= & set{ U subs N mid U sups S text{ and } U sups T } \
= & set{ U subs N mid U sups underbrace{S cup T}_{=G} } \
= & set{ U subs N mid U sups G } \
= & set{ U subs N mid N setminus U subs N setminus G }
end{align*}

(because the condition "$U sups G$" on a subset $U$ of $N$ is equivalent to the condition "$N setminus U subs N setminus G$").
Hence, the summation sign "$suml_{substack{U in P; \ U sups S text{ and } U sups T}}$" in eqref{darij1.pf.l6.1} can be rewritten as "$suml_{substack{U subs N; \ N setminus U subs N setminus G}}$". Thus, eqref{darij1.pf.l6.1} rewrites as
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U subs N; \ N setminus U subs N setminus G}}
tup{q-1}^{n-abs{U}} .
label{darij1.pf.l6.2}
tag{3}
end{align}

Every subset $U$ of $N$ satisfies $n - abs{U} = abs{N setminus U}$ (since $U subs N$ yields $abs{N setminus U} = underbrace{abs{N}}_{=n} - abs{U} = n - abs{U}$). Thus, eqref{darij1.pf.l6.2} becomes
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} underbrace{suml_{substack{U subs N; \ N setminus U subs N setminus G}}}_{substack{= suml_{substack{U subs N; \ N setminus U subs H}} \ tup{ text{since } N setminus G = H } }}
underbrace{tup{q-1}^{n-abs{U}}}_{substack{= tup{q-1}^{abs{N setminus U}} \ tup{ text{since } n - abs{U} = abs{N setminus U} } }} \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U subs N; \ N setminus U subs H}}
tup{q-1}^{abs{N setminus U}} \
&= q^{abs{S}-n} q^{abs{T}} suml_{substack{U subs N; \ U subs H}}
tup{q-1}^{abs{U}}
label{darij1.pf.l6.3}
tag{4}
end{align}

(here, we have substituted $U$ for $N setminus U$ in the sum, since the map
begin{align}
set{U subs N} to set{U subs N}, qquad
U mapsto N setminus U
end{align}

is a bijection).



The summation sign "$suml_{substack{U subs N; \ U subs H}}$" on the right hand side of eqref{darij1.pf.l6.3} can be replaced by "$suml_{U subs H}$", since $H$ is a subset of $N$. Thus, eqref{darij1.pf.l6.3} rewrites as
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} suml_{U subs H} tup{q-1}^{abs{U}} .
label{darij1.pf.l6.4}
tag{5}
end{align}



The sum on the right hand side of this equality is easy to simplify. The set $H$ has size $abs{H} = h$. Thus, it has exactly $dbinom{h}{0}$ subsets of size $0$, exactly $dbinom{h}{1}$ subsets of size $1$, and so on, all the way up to $dbinom{h}{h}$ subsets of size $h$. Hence, the sum $suml_{U subs H} tup{q-1}^{abs{U}}$ has exactly $dbinom{h}{0}$ many terms equal to $tup{q-1}^0$, exactly $dbinom{h}{1}$ many terms equal to $tup{q-1}^1$, and so on, all the way up to $dbinom{h}{h}$ many terms equal to $tup{q-1}^h$. Hence, this sum can be rewritten as follows:
begin{align}
& suml_{U subs H} tup{q-1}^{abs{U}} \
& = dbinom{h}{0} tup{q-1}^0 + dbinom{h}{1} tup{q-1}^1 + cdots + dbinom{h}{h} tup{q-1}^h \
& = sum_{k=0}^h dbinom{h}{k} tup{q-1}^k = tup{tup{q-1}+1}^h \
& qquad tup{ text{since the binomial formula yields } tup{tup{q-1}+1}^h = sum_{k=0}^h dbinom{h}{k} tup{q-1}^k } \
& = q^h .
end{align}

Thus, eqref{darij1.pf.l6.4} becomes
begin{align}
& suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } \
&= q^{abs{S}-n} q^{abs{T}} underbrace{suml_{U subs H} tup{q-1}^{abs{U}}}_{= q^h} \
&= q^{abs{S}-n} q^{abs{T}} q^h = q^{tup{abs{S}-n} + abs{T} + h} .
label{darij1.pf.l6.5}
tag{6}
end{align}



But $H$ is the complement of $G$ in $N$. Hence, $abs{H} = underbrace{abs{N}}_{=n} - abs{G} = n - abs{G}$. Therefore, $abs{G} = n - underbrace{abs{H}}_{= h} = n - h$.



Recall the basic fact that $abs{S} + abs{T} = abs{S cup T} + abs{S cap T} = abs{G} + abs{S cap T}$ (since $S cup T = G$). Solving this equation for $abs{S cap T}$, we obtain
begin{align}
abs{S cap T} = abs{S} + abs{T} - underbrace{abs{G}}_{= n - h} = abs{S} + abs{T} - tup{n - h} = tup{abs{S}-n} + abs{T} + h .
end{align}

Therefore, $q^{ abs{S cap T} } = q^{tup{abs{S}-n} + abs{T} + h}$. Comparing this with eqref{darij1.pf.l6.5}, we obtain
begin{equation}
q^{abs{ S cap T }} = suml_{U in P} tup{q-1}^{n-abs{U}} q^{abs{S}-n} ive{ U sups S } q^{abs{T}} ive{ U sups T } .
end{equation}

This proves Lemma 6. $blacksquare$
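
(A quick computational check of Lemma 6, using exact rational arithmetic; this is only an illustration. It runs over all pairs $S, T in P$ for $n = 3$ at one nonzero rational value of $q$.)

    from fractions import Fraction
    from itertools import combinations

    n = 3
    P = [frozenset(c) for r in range(1, n + 1)
         for c in combinations(range(1, n + 1), r)]
    q = Fraction(7, 3)   # any nonzero rational value works

    for S in P:
        for T in P:
            lhs = q ** len(S & T)
            # right hand side of Lemma 6: sum over U in P containing both S and T
            rhs = sum((q - 1) ** (n - len(U)) * q ** (len(S) - n) * q ** len(T)
                      for U in P if U >= S and U >= T)
            assert lhs == rhs
    print("Lemma 6 verified for n =", n, "and q =", q)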



Proof of Theorem 5 (sketched). Both $det D$ and $q^n tup{q-1}^{ntup{2^{n-1}-1}}$ are polynomials in $q$ with integer coefficients (since $2^{n-1}-1$ is a nonnegative integer). Thus, the equality we need to prove is an equality between two polynomials in $q$. Hence, in order to prove it for all $q in QQ$, it suffices to prove it for infinitely many values of $q$ (because an equality between two polynomials in $q$ that holds for infinitely many values of $q$ must automatically hold for all $q in QQ$). Thus, in particular, it suffices to prove it for all nonzero $q in QQ$.
Hence, let us WLOG assume that $q$ is nonzero.
Define two $P times P$-matrices
begin{align*}
E &= tup{ tup{q-1}^{n-abs{T}} q^{abs{S}-n} ive{ T sups S } }_{tup{S, T} in P times P} qquad text{and} \
F &= tup{ q^{abs{T}} ive{ S sups T } }_{tup{S, T} in P times P} .
end{align*}



Lemma 6 says that $D = EF$. (More precisely, Lemma 6 says that the $tup{S, T}$-th entry of $D$ equals the corresponding entry of $EF$ for all $S in P$ and $T in P$; but this yields $D = EF$.) Thus, $det D = dettup{EF} = det E cdot det F$. Now, it remains to find $det E$ and $det F$.



Recall that $abs{P} = 2^n - 1$ (as we proved during our proof of Lemma 4 (b)). Also, $sumS abs{S} = n 2^{n-1}$ (by Lemma 4 (a)).



Endow our set $P$ with the same total ordering that we used in the proof of Theorem 1. Now, it is easy to see that the $P times P$-matrix $E$ is upper-triangular. Since the determinant of an upper-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{align}
det E
&= prodS tup{ underbrace{tup{q-1}^{n-abs{S}} q^{abs{S}-n}}_{= tup{dfrac{q-1}{q}}^n tup{dfrac{q}{q-1}}^{abs{S}} }
underbrace{ive{ S sups S }}_{= 1} }
= prodS tup{ tup{dfrac{q-1}{q}}^n tup{dfrac{q}{q-1}}^{abs{S}} } \
&= underbrace{ tup{ prodS tup{dfrac{q-1}{q}}^n } }_{= tup{tup{dfrac{q-1}{q}}^n}^{abs{P}} }
underbrace{ tup{ prodS tup{dfrac{q}{q-1}}^{abs{S}} } }_{= tup{dfrac{q}{q-1}}^{sumS abs{S}} } \
&= tup{tup{dfrac{q-1}{q}}^n}^{abs{P}} tup{dfrac{q}{q-1}}^{sumS abs{S}} \
&= underbrace{ tup{tup{dfrac{q-1}{q}}^n}^{2^n - 1} }_{= tup{dfrac{q-1}{q}}^{n tup{2^n - 1}} }
underbrace{ tup{dfrac{q}{q-1}}^{n 2^{n-1}} }_{= tup{dfrac{q-1}{q}}^{-n 2^{n-1}}}\
& qquad tup{ text{since } abs{P} = 2^n - 1
text{ and } sumS abs{S} = n 2^{n-1} } \
&= tup{dfrac{q-1}{q}}^{n tup{2^n - 1}} tup{dfrac{q-1}{q}}^{-n 2^{n-1}} \
&= tup{dfrac{q-1}{q}}^{n tup{2^n - 1} - n 2^{n-1}}
= tup{dfrac{q-1}{q}}^{n 2^{n-1} - n}
end{align}

(since $n tup{2^n - 1} - n 2^{n-1} = n 2^{n-1} - n$).



Likewise, the $P times P$-matrix $F$ is lower-triangular. Since the determinant of a lower-triangular $P times P$-matrix equals the product of its diagonal entries, we thus conclude that
begin{align}
det F
&= prodS tup{ q^{abs{S}}
underbrace{ive{ S sups S }}_{= 1} }
= prodS q^{abs{S}}
= q^{sumS abs{S}}
= q^{n 2^{n-1}}
end{align}

(since $sumS abs{S} = n 2^{n-1}$).



Now,
begin{align}
det D
&= underbrace{det E}_{= tup{dfrac{q-1}{q}}^{n 2^{n-1} - n}}
cdot
underbrace{det F}_{= q^{n 2^{n-1}}}
= tup{dfrac{q-1}{q}}^{n 2^{n-1} - n} q^{n 2^{n-1}} \
&= dfrac{tup{q-1}^{n 2^{n-1} - n}}{q^{n 2^{n-1} - n}} q^{n 2^{n-1}}
= tup{q-1}^{n 2^{n-1} - n} q^{n 2^{n-1} - tup{n 2^{n-1} - n}} \
&= q^{n 2^{n-1} - tup{n 2^{n-1} - n}} tup{q-1}^{n 2^{n-1} - n}
= q^n tup{q-1}^{n tup{2^{n-1} - 1}}
end{align}

(since $n 2^{n-1} - tup{n 2^{n-1} - n} = n$ and $n 2^{n-1} - n = n tup{2^{n-1} - 1}$).
This proves Theorem 5. $blacksquare$
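
(Finally, here is a short symbolic check of Theorem 5 for $n leq 3$, again assuming SymPy is available; it is of course not a substitute for the proof above.)

    from itertools import combinations
    from sympy import Matrix, expand, symbols

    q = symbols('q')

    def check_theorem_5(n):
        P = [frozenset(c) for r in range(1, n + 1)
             for c in combinations(range(1, n + 1), r)]
        m = len(P)
        D = Matrix(m, m, lambda i, j: q ** len(P[i] & P[j]))
        predicted = q ** n * (q - 1) ** (n * (2 ** (n - 1) - 1))
        assert expand(D.det() - predicted) == 0
        print("n =", n, ": det D =", predicted)

    for n in range(1, 4):
        check_theorem_5(n)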






$endgroup$



edited Jan 12 at 20:48

























answered Dec 7 '18 at 5:56









darij grinberg








  • 1




    $begingroup$
    Lemma 3 is trivial since $ibinom ki=kbinom{k-1}{i-1}$.
    $endgroup$
    – Zhi-Wei Sun
    Dec 7 '18 at 6:21








  • 5




    $begingroup$
    @Zhi-WeiSun: I know it is. This is a semi-expository writeup that is supposed to be readable by anyone who understands the problem.
    $endgroup$
    – darij grinberg
    Dec 7 '18 at 6:38










  • $begingroup$
    @darijgrinberg: Thank you for the detailed work!
    $endgroup$
    – T. Amdeberhan
    Dec 9 '18 at 4:36










  • $begingroup$
    @darij BRAVO for using Iverson notation in a matrix definition! While avoiding the creation of a name, it adds clarity. From a computer point of view we could say "The implementation IS the name."
    $endgroup$
    – Jérôme JEAN-CHARLES
    Dec 14 '18 at 15:35












  • $begingroup$
    Another approach to Lemma 3: say $f(x) =(1-x)^k$; then the sum is $-f’(1)$.
    $endgroup$
    – Jeff Strom
    2 days ago
























14












$begingroup$

Darij already gave a great elementary detailed exposition. I wanted to remark that all such kinds of results (including the postscript) are special cases of a classical result that (at least in the generality of semilattices) goes back to Lindström in Determinants on semilattices. For this question we are in the special case of the Boolean lattice.















$endgroup$













  • $begingroup$
    Oh, are you applying Lindström's result with $cup$ as $wedge$?
    $endgroup$
    – darij grinberg
    Dec 7 '18 at 15:17












  • $begingroup$
    @darijgrinberg: isn't he using $cap$ as $wedge$ in the obvious way?
    $endgroup$
    – Sam Hopkins
    Dec 7 '18 at 16:04










  • $begingroup$
    @SamHopkins: Because $cap$ is not always a nonempty subset. This is what confused me, too. But I think $cup$ can be used perfectly well.
    $endgroup$
    – darij grinberg
    Dec 7 '18 at 16:15










  • $begingroup$
    @darijgrinberg: another way to resolve that might be to add a row of all $1$s to the bottom of the matrix (thinking of the last row/column as corresponding to the empty set) by choosing the corresponding $f_i$ to be the constant function $1$. When I expand the determinant about the last row, I will just get the determinant of the $(2^n-1)times(2^n-1)$ submatrix in the upper left because every other $(2^n-1)times(2^n-1)$ determinant will have a column of all zeros.
    $endgroup$
    – Sam Hopkins
    Dec 7 '18 at 16:21






  • 1




    $begingroup$
    @darijgrinberg Yes, for the postscript, I was thinking the nonempty subsets with $cup$. We can take the function $f(A,B)=q^{-|Acup B|}$ and then multiply the row(or column) of A by $q^{|A|}$.
    $endgroup$
    – Gjergji Zaimi
    Dec 7 '18 at 18:11
















answered Dec 7 '18 at 11:03









Gjergji Zaimi











