\documentclass{article}
\usepackage{amsthm, amsmath, amssymb, amsfonts}
\newtheorem{Theorem}{Theorem}
\newtheorem{Case}{Case}
\begin{document}
\section{Introduction}
This is a summary of some work I did in December. I'm not sure if it will be useful in any way. I began working with two-dimensional lattices $a_{ij}$ satisfying
$$\det\begin{pmatrix} a_{(i-1)(j-1)} & a_{(i-1)j} & a_{(i-1)(j+1)} \\
a_{i(j-1)} & a_{ij} & a_{i(j+1)} \\
a_{(i+1)(j-1)} & a_{(i+1)j} & a_{(i+1)(j+1)}
\end{pmatrix}=1$$
for all $i$, $j$. (This was a problem that Prof. Propp had mentioned to me.) Assume that the $a_{ij}$ are given for $i+j=-1$, $0$, $1$ or $2$ (and all $i$), so that the other $a_{ij}$ are rational functions in this infinite family of variables.
In the first section, I show that the $a_{ij}$ are Laurent polynomials in the $a_{ij}$, $i+j=0$ or $1$, and in the two by two minors $a_{(i-1)j}a_{i(j+1)}-a_{ij}a_{(i-1)(j+1)}$, $i+j=0$ or $1$. The way this works is that I introduce a new indexing system where
$$b_{(i+j)(i-j)}=a_{ij}$$
and
$$b_{(i+j)(i-j-1)}=a_{(i-1)j}a_{i(j+1)}-a_{ij}a_{(i-1)(j+1)}.$$
(We can tell which formula to use from the parity of $k+l$ in $b_{kl}$: the first formula applies when $k+l$ is even, the second when it is odd.) We show that we have
$$b_{(k+1)l}b_{(k-1)l}=b_{k(l+1)}b_{k(l-1)}+b_{kl}$$
for all $k$ and $l$. Our claim is then that, if we are given the $b_{0l}$ and the $b_{1l}$, the other $b_{kl}$ are Laurent polynomials in terms of them. We show that this holds and moreover that each $b_{kl}$ is relatively prime to its eight neighbors and is irreducible. Unlike in some previous cases, however, although they are relatively prime they do not generate the unit ideal, which means this property may not be preserved when we take some particular initial values.
As I did not have much luck understanding these objects combinatorially, I decided to switch to the special case $b_{0l}=x$, $b_{1l}=y$ for all $l$. (One should note that this is a very unnatural special case, in terms of the original interpretation of $3 \times 3$ matrices of determinant 1.) This is then the one-dimensional recurrence
$$b_{l+1}b_{l-1}=b_l^2+b_l,\ b_0=x,\ b_1=y.$$
If we define a sequence $c_l$ to be such that
$$b_l=c_l c_{l+1}$$
we get that
$$c_{l+3}c_l-c_{l+2}c_{l+1}=1.$$
This is a recurrence that Gregg has been looking at. Starting with ones, the first few values are
$$1,1,1,2,3,7,11,26,41,97,\ldots$$
Letting $T_{2n}$ be the number of tilings of a $3\times 2n$ rectangle with dominos, I show that these numbers are
$$1, 1, T_0, \frac{T_0+T_2}{2}, T_2, \frac{T_2+T_4}{2}, T_4, \frac{T_4+T_6}{2}, T_6, \frac{T_6+T_8}{2}, \ldots$$
via a bijective proof. The proof is irritating, however, in that I don't precisely get a bijection between the ordered pairs of tilings involved. Most are directly paired off, but at the end I have some exceptions on each side (a small number, but unbounded). I show there are the same number of exceptions, but not in a way that gives a canonical pairing.
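Before the bijective argument, the interleaving claim is easy to test numerically. The sketch below uses the standard linear recurrence $T_{2n+2}=4T_{2n}-T_{2n-2}$ for the $3\times 2n$ domino tiling numbers, a known fact that is assumed here rather than proved:

```python
from fractions import Fraction

# c-sequence: c_{l+3} c_l = c_{l+2} c_{l+1} + 1, starting 1, 1, 1.
c = [Fraction(1), Fraction(1), Fraction(1)]
for l in range(20):
    c.append((c[l + 2] * c[l + 1] + 1) / c[l])

# T[n] = number of domino tilings of a 3 x 2n rectangle, via the standard
# linear recurrence T[n+1] = 4*T[n] - T[n-1], with T[0] = 1, T[1] = 3.
T = [1, 3]
for n in range(1, 12):
    T.append(4 * T[n] - T[n - 1])

# Claimed interleaving: 1, 1, T_0, (T_0+T_2)/2, T_2, (T_2+T_4)/2, T_4, ...
expected = [Fraction(1), Fraction(1)]
for n in range(8):
    expected.append(Fraction(T[n]))
    expected.append(Fraction(T[n] + T[n + 1], 2))
assert c[:len(expected)] == expected
```

This checks the first eighteen terms exactly (rational arithmetic, no floating point).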
\section{Laurentness}
We first verify that the $b_{kl}$ do indeed obey the recurrence
$$b_{(k+1)l}b_{(k-1)l}=b_{k(l+1)}b_{k(l-1)}+b_{kl}$$
when the $a_{ij}$ have the determinant property. When $k \not\equiv l \pmod 2$, this is the definition of the $b$'s. When $k \equiv l \pmod 2$, we are being asked to show that
\begin{multline*}
(a_{(i-1)(j-1)} a_{ij}-a_{(i-1)j} a_{i(j-1)})(a_{ij} a_{(i+1)(j+1)}-a_{i(j+1)} a_{(i+1)j}) - \\
(a_{(i-1)j} a_{i(j+1)}-a_{(i-1)(j+1)} a_{ij})(a_{i(j-1)} a_{(i+1)j}-a_{ij} a_{(i+1)(j-1)})=a_{ij}.
\end{multline*}
It is easy to verify that the left hand side is
$$a_{ij} \det\begin{pmatrix} a_{(i-1)(j-1)} & a_{(i-1)j} & a_{(i-1)(j+1)} \\
a_{i(j-1)} & a_{ij} & a_{i(j+1)} \\
a_{(i+1)(j-1)} & a_{(i+1)j} & a_{(i+1)(j+1)}
\end{pmatrix}$$
which by hypothesis is $a_{ij}$. (This is a special case of Dodgson condensation, for those who know it.)
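For readers who have not seen Dodgson condensation, the identity behind this step can be checked by machine. A minimal sketch, assuming the sympy library is available (the symbols here are generic entries of a $3\times 3$ matrix):

```python
import sympy as sp

# Symbolic 3x3 matrix; check the Dodgson condensation identity
# (NW minor)(SE minor) - (NE minor)(SW minor) = (center entry) * det.
a = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i}{j}'))

def minor2(r, c):
    # 2x2 minor with top-left corner at (r, c)
    return a[r, c] * a[r + 1, c + 1] - a[r, c + 1] * a[r + 1, c]

lhs = minor2(0, 0) * minor2(1, 1) - minor2(0, 1) * minor2(1, 0)
assert sp.expand(lhs - a[1, 1] * a.det()) == 0
```

When the determinant is 1, the right side collapses to the center entry, which is exactly the computation above.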
So, we are considering an infinite grid in which we are given the first two rows and compute the rest by the rule that, whenever five entries are arranged as below,
$$\begin{matrix} & A & \\ B & C & D \\ & E & \end{matrix}$$
we have
$$E=\frac{BD+C}{A}.$$
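This rule is easy to experiment with. The following sketch (assuming sympy) builds rows $2$ through $4$ from symbolic first two rows and checks that, after cancellation, a typical entry has a monomial denominator, which is the Laurentness we are about to prove:

```python
import sympy as sp

# Rows k = 0, 1 are independent variables; later rows come from the rule
# b[k+1][l] = (b[k][l-1]*b[k][l+1] + b[k][l]) / b[k-1][l].
N = 9
grid = [[sp.Symbol(f'u{l}') for l in range(N)],
        [sp.Symbol(f'v{l}') for l in range(N)]]
for k in range(1, 4):
    prev, cur = grid[k - 1], grid[k]
    nxt = [None] * N
    for l in range(k, N - k):
        nxt[l] = sp.cancel((cur[l - 1] * cur[l + 1] + cur[l]) / prev[l])
    grid.append(nxt)

# Laurentness: after cancellation the denominator should be a monomial
# (i.e. not a sum) in the row-0 and row-1 variables.
num, den = sp.fraction(grid[4][4])
assert not sp.expand(den).is_Add
```

Setting every initial variable to $1$ makes each row constant ($1,1,2,6,21,\ldots$), a quick numeric cross-check.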
We now prove that all the $b_{kl}$ are Laurent polynomials in the $b_{0l}$ and $b_{1l}$. We do this by proving the following stronger result by induction on $k$.
\begin{Theorem}
For $k\geq 2$ and any $l$, we have
\begin{enumerate}
\item $b_{kl}$ is a Laurent polynomial in the $b_{0l}$ and $b_{1l}$.
\item If $b_{1m}$ appears in $b_{kl}$ then $l-k+1 \leq m \leq k+l-1$.
\item Each of $b_{1(l-k+1)}$ and $b_{1(k+l-1)}$ appears linearly in $b_{kl}$, with coefficients $b_{(k-1)(l+1)}/b_{0(l-k+2)}$ and $b_{(k-1)(l-1)}/b_{0(k+l-2)}$ respectively.
\item The $b_{kl}$ are nonzero.
\item The $b_{kl}$ are all distinct.
\item The $b_{kl}$ are irreducible (in the ring of Laurent polynomials in the $b_{0l}$ and $b_{1l}$).
\end{enumerate}
\end{Theorem}
\begin{proof}
This theorem may be checked by hand for $k \leq 4$, so we assume $k \geq 5$. Suppose we have proven the theorem for $k-1$ and are now working on $k$. Let $M=b_{kl}$ and label the other relevant variables as below:
$$\begin{matrix}
& & A & & \\
& B & C & D & \\
E & F & G & H & I \\
& J & K & L & \\
& & M & &
\end{matrix}$$
Let $w=b_{0(l-k+2)}$, $x=b_{0(l+k-2)}$, $y=b_{1(l-k+1)}$ and $z=b_{1(l+k-1)}$. That is,
$$\begin{matrix}
& w & \cdots & x & \\
y & & \cdots & & z
\end{matrix}$$
{\bf (1)} We have
$$M=\frac{JL+K}{G}=\frac{ \left( \frac{EG+F}{B} \right) \left( \frac{GI+H}{D} \right) + \frac{FH+G}{C}} {G}$$
Rearranging this gives
$$\frac{EGI+FI+EH}{BD}+\frac{1}{C}+\frac{FHC+BDFH}{BDCG}.$$
The last term is
$$\frac{FH(BD+C)}{BDC(BD+C)/A}=\frac{AFH}{BDC}.$$
So we can write $M$ as a ratio of Laurent polynomials with denominator $G$ and also as a ratio of Laurent polynomials with denominator $BDC$. As $G$, $B$, $C$ and $D$ are distinct and irreducible (by induction), and the ring of Laurent polynomials is a unique factorization domain, $M$ must be a Laurent polynomial.
{\bf (2)} If $m$ does not lie in the given range, by induction it will not appear in $J$, $K$, $L$ or $G$. Thus it will not appear in $M$.
{\bf (3)} Applying the induction hypothesis, the bottom part of the diagram can be written
$$\begin{matrix}
& G & \\
y(G/w)+J_0 & K & z(G/x)+L_0 \\
& M &
\end{matrix}$$
where $y$ and $z$ do not appear in $G$, $K$, $J_0$ or $L_0$. So we get
$$M=yz(G/wx)+z(J_0/x)+y(L_0/w)+(J_0 L_0+K)/G$$
The last term does not contain a $y$ or a $z$ so the coefficients of $y$ and $z$ are
$$zG/wx+L_0/w=L/w \hbox{\ and\ } yG/wx+J_0/x=J/x$$
respectively, as claimed. These coefficients are nonzero by part (4).
{\bf (4)} By induction, $G \neq 0$. Then by the above formula, the coefficient of $yz$ is not zero, so $M \neq 0$.
{\bf (5)} Let $m$ and $n$ be minimal and maximal respectively such that $b_{1m}$ and $b_{1n}$ appear in $b_{kl}$. Then, by parts (2) and (3), $k$ and $l$ can be recovered uniquely as
$$k=(n-m)/2+1,\ l=(m+n)/2.$$
{\bf (6)} We can write $M=ayz+by+cz+d$, where $y$ and $z$ do not appear in the lower-case letters from the beginning of the alphabet. If $M$ factors, it must factor either as $e(fyz+gy+hz+i)$ or as $(ey+f)(gz+h)$. In the first case, this would imply that $a$, $b$, $c$ and $d$ have a nontrivial common factor. But $az+b=L/w$ and $ay+c=J/x$, which are different irreducibles, so this is a contradiction. (Remember that $w$, $x$, $y$ and $z$ are units.) If the second option holds, we should have $ad-bc=0$, but in fact
\begin{multline*}
ad-bc=(G/wx)(J_0 L_0+K)/G-(J_0/x)(L_0/w)= \\
(J_0 L_0+K)/(wx)-(J_0 L_0)/(wx)=K/(wx).
\end{multline*}
By part (4), $K\neq 0$.
\end{proof}
\section{The One Dimensional Case}
We now consider what happens when all the $b_{0l}$ are equal, say to $x$, and all the $b_{1l}$ are equal, say to $y$. Then $b_{kl}$ is independent of $l$; writing $b_k$ for the common value of row $k$, and then renaming the index to $l$, we get
$$b_{l+1}=\frac{b_l^2+b_l}{b_{l-1}}.$$
Define a sequence $c_l$ by $b_l=c_l c_{l+1}$, choosing some initial value for $c_0$ (the rest are then determined by $c_{l+1}=b_l/c_l$). There is obviously some ambiguity as to the best choice of $c_0$. We get
$$(c_{l+2}c_{l+1})(c_{l}c_{l-1})=(c_{l+1}c_{l})^2+c_{l+1}c_l$$
or
$$c_{l+2}c_{l-1}-c_{l+1}c_{l}=1.$$
Now, we can choose $c_0$ freely. The normalization which seems to have the nicest effect is to choose $c_0$ such that $c_0+c_2=2c_1$. This has the nice property that we can prove inductively that $c_{2l}+c_{2l+2}=2 c_{2l+1}$ for all $l$. Thus, we can set $d_l=c_{2l}$ and get the new recurrences
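This step can be checked numerically. The sketch below starts from the hypothetical values $c_0,c_1,c_2=1,2,3$ (any start with $c_0+c_2=2c_1$ would do) and verifies both the propagation of the normalization and the $d$ recurrence in exact rational arithmetic:

```python
from fractions import Fraction

# Start from c_0, c_1, c_2 = 1, 2, 3 (so c_0 + c_2 = 2*c_1 holds) and run
# c_{l+3} = (c_{l+2}*c_{l+1} + 1) / c_l.
c = [Fraction(1), Fraction(2), Fraction(3)]
for l in range(30):
    c.append((c[l + 2] * c[l + 1] + 1) / c[l])

# The normalization propagates: c_{2l} + c_{2l+2} = 2*c_{2l+1} ...
assert all(c[2*l] + c[2*l + 2] == 2 * c[2*l + 1] for l in range(14))

# ... and d_l = c_{2l} satisfies d_{l+1} d_{l-1} = d_l^2 + 2.
d = c[0::2]
assert all(d[l + 1] * d[l - 1] == d[l]**2 + 2 for l in range(1, len(d) - 1))
```

With this start the $d_l$ come out as $1, 3, 11, 41, 153, \ldots$, the $3\times 2n$ tiling numbers again.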
\begin{eqnarray}
d_{l+1}\, \frac{d_l+d_{l-1}}{2}-\frac{d_{l+1}+d_l}{2}\, d_l=1 & \Leftrightarrow & d_{l+1} d_{l-1}=d_l^2+2 \\
\frac{d_{l+1}+d_l}{2}\, d_{l-1}-d_l\, \frac{d_l+d_{l-1}}{2}=1 & \Leftrightarrow & d_{l+1} d_{l-1}=d_l^2+2
\end{eqnarray}
Set $d_0=x$, $d_1=y$. The first several $d_i$ are
\begin{eqnarray}
d_0 &=& x^1 y^0 \\
d_1 &=& x^0 y^1 \\
d_2 &=& x^{-1} y^2 + 2 x^{-1} y^0 \\
d_3 &=& 2 x^0 y^{-1} +4 x^{-2} y^{-1}+4 x^{-2} y^1+x^{-2} y^3 \\
d_4 &=& 12x^{-3}y^0+4x^{-1}y^0+8x^{-3}y^{-2}+8x^{-1} y^{-2}+2x^1 y^{-2}+6 y^2 x^{-3}+y^4 x^{-3}
\end{eqnarray}
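The expansions above can be checked against the recurrence at a few rational points; this is only a spot check, not a verification of the coefficients in general. The functions $d3$ and $d4$ below are transcriptions of the displayed formulas:

```python
from fractions import Fraction

# Recurrence d_{l+1} = (d_l^2 + 2) / d_{l-1}, with d_0 = x, d_1 = y.
def d_seq(x, y, n):
    d = [x, y]
    for l in range(1, n):
        d.append((d[l] ** 2 + 2) / d[l - 1])
    return d

# The Laurent expansions listed above, as explicit functions of x, y.
def d3(x, y):
    return 2 / y + 4 / (x**2 * y) + 4 * y / x**2 + y**3 / x**2

def d4(x, y):
    return (12 / x**3 + 4 / x + 8 / (x**3 * y**2) + 8 / (x * y**2)
            + 2 * x / y**2 + 6 * y**2 / x**3 + y**4 / x**3)

for x, y in [(Fraction(1), Fraction(1)), (Fraction(2), Fraction(3)),
             (Fraction(1, 2), Fraction(5))]:
    d = d_seq(x, y, 4)
    assert d[2] == (y**2 + 2) / x
    assert d[3] == d3(x, y)
    assert d[4] == d4(x, y)
```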
\section{Domino tilings and the $d_i$}
If we set $x=y=1$, the initial $d_i$ are $1,1,3,11,41,153,\ldots$; for $i \geq 1$, this is exactly the number of ways to tile a $3 \times (2i-2)$ rectangle by dominos. The purpose of this section is to give a combinatorial proof that the number of tilings of a rectangle of this form obeys the recurrence of the $d_i$.
\begin{Theorem} Let $A_n$ be the number of ways to tile a $3 \times 2n$ rectangle by dominos. Then
$$A_{n-1} A_{n+1} = A_n^2+2.$$
\end{Theorem}
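Before giving the combinatorial argument, the identity can be confirmed by brute force for small $n$. The sketch below counts tilings with a standard broken-profile dynamic program (this is unrelated to the bijection used in the proof):

```python
from functools import lru_cache

def tilings(rows, cols):
    """Count domino tilings of a rows x cols grid, column by column."""
    @lru_cache(maxsize=None)
    def go(col, mask):
        # mask: cells of the current column already covered by horizontal
        # dominoes protruding from the previous column.
        if col == cols:
            return 1 if mask == 0 else 0
        def fill(row, cur, nxt):
            if row == rows:
                return go(col + 1, nxt)
            if cur & (1 << row):                  # cell already covered
                return fill(row + 1, cur, nxt)
            total = 0
            if row + 1 < rows and not cur & (1 << (row + 1)):
                total += fill(row + 2, cur, nxt)  # vertical domino
            total += fill(row + 1, cur, nxt | (1 << row))  # horizontal
            return total
        return fill(0, mask, 0)
    return go(0, 0)

A = [tilings(3, 2 * n) for n in range(7)]
assert A[:5] == [1, 3, 11, 41, 153]
assert all(A[n - 1] * A[n + 1] == A[n]**2 + 2 for n in range(1, 6))
```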
\begin{proof}
We will draw our tilings as matchings of grid graphs. Consider a $3 \times 2(n+1)$ grid. To every pair of matchings of the $3 \times 2n$ grid, associate a graph on the $3 \times 2(n+1)$ grid by superimposing the two matchings, aligning one with the left and the other with the right of the grid and counting edges with multiplicity. Similarly, given a pair of one matching of the $3 \times 2(n+1)$ grid and one of the $3 \times 2(n-1)$ grid, superimpose them with the $3 \times 2(n-1)$ grid centered. We want to show that we get two more graphs (counted with multiplicity) the second way than the first. Given a graph, we will say that we have found an $LR$ decomposition (left-right) of it if we write it as a superposition of two matchings in the first way, and an $NW$ (narrow-wide) decomposition if we write it in the second way.
Write our grid as
$$\begin{matrix}
A & B & * & \cdots & * & U & V \\
C & D & * & \cdots & * & W & X \\
E & F & * & \cdots & * & Y & Z
\end{matrix}$$
In every case we get a graph (possibly with multiple edges) in which the unlettered vertices have degree 2 and the lettered vertices have degree 1. Thus, it is a union of cycles and paths, where the endpoints of the paths are letters. We will call a path short if it connects letters from the same end of the alphabet and long otherwise. Only $B$, $D$, $F$, $U$, $W$ and $Y$ can be endpoints of long paths. Clearly, the number of long paths is even. Moreover, if we color the grid as a checkerboard, there must be an equal number of black and white endpoints of long paths, on both the left and the right hand side. Putting these restrictions together, we see there are three cases:
\begin{Case} There are no long paths. We claim that such a graph has equally many $LR$ and $NW$ decompositions.
\end{Case}
For each edge, we must decide which of the two graphs being superimposed to put it into. For each cycle of length $>2$, we may choose one edge's placement arbitrarily, and that forces the rest of the cycle. For each short path, the final edge's placement is forced, and that forces the rest of the allocation of the edges of the path. So the number of decompositions is 2 raised to the power of the number of cycles of length greater than 2, which is independent of whether we are looking at $LR$ decompositions or $NW$ decompositions.
\begin{Case} There are two long paths, joining either $(B \leftrightarrow U, D \leftrightarrow W)$ or $(D \leftrightarrow W, F \leftrightarrow Y)$. Then there is exactly one $NW$ decomposition of this graph and no $LR$ decomposition.
\end{Case}
There is no room for any cycles except length 2 cycles. Each edge in a length 2 cycle must appear in both graphs of any decomposition. We can determine the decomposition of the long paths by putting the edges with endpoints in the left lettered columns into the correct graph. This gives us a unique candidate decomposition; we must show that the final edges of the long paths on the right lie in the same halves of the decomposition as the corresponding final edges on the left. This is easy: by looking at the checkerboard coloring, the long paths have an odd number of edges, and since the edges of a path alternate between the two halves, the two final edges of each long path must lie in the same half of the decomposition.
\begin{Case} There are two long paths. They join either $(B \leftrightarrow W, D \leftrightarrow Y)$ or $(D \leftrightarrow U, F \leftrightarrow W)$. In this case, there is one $LR$ decomposition and no $NW$ decomposition.
\end{Case}
Similar to the previous case.
So, we are reduced to showing that the number of routings pairing $(B \leftrightarrow U, D \leftrightarrow W)$ or $(D \leftrightarrow W, F \leftrightarrow Y)$ is two more than the number pairing $(B \leftrightarrow W, D \leftrightarrow Y)$ or $(D \leftrightarrow U, F \leftrightarrow W)$. Consider such a routing. We will call a pair of adjacent columns a switch if the top and bottom edges between them are in the routing; that is, it looks like
$$\begin{matrix}
* & - & * \\
* & & * \\
* & - & *
\end{matrix}$$
(There will also, of course, be some vertical edges in the routing.) The importance of switches is that if we reflect all of the routing to the right of a switch over a horizontal line, we will get another routing, of the opposite type as before. Thus, if we group all routings into equivalence classes under this reflection operation, each class will contribute equally often to Case 2 and Case 3, except for the classes where there are no switches. There are two routings without switches,
$$\begin{matrix}
* & - & * & - & * & \cdots & * & - & * \\
* & - & * & - & * & \cdots & * & - & * \\
* & & * & & * & \cdots & * & & *
\end{matrix}$$
and
$$\begin{matrix}
* & & * & & * & \cdots & * & & * \\
* & - & * & - & * & \cdots & * & - & * \\
* & - & * & - & * & \cdots & * & - & *
\end{matrix}$$
Each of these lies in Case 2, so we are done.
\end{proof}
\end{document}