\documentclass{article}
\title{Comment on ``A=B''}
\author{John Nixon}
\date{March 2005}
\begin{document}
\maketitle
Copyright (c) 2005 John Nixon.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled ``GNU
Free Documentation License''.
\section{Introduction}
Thank you very much for making your book \begin{em}A=B\end{em}
(\verb|http://www.cis.upenn.edu/~wilf/AeqB.html|) freely available on the
Web. I have read most of it, but I have yet to fully comprehend chapter 8.
The equation numbers given here refer to numbered equations in that book.
It is a fascinating account of a fundamentally
important area of mathematics, because of the simplicity of the ideas and the consequent
usefulness of the results. It seems remarkable to me that such progress can be made on
such basic problems, some of which I feel could be understood by grade 12 or at least good
undergraduate students. (I am sure I could have done so.) The essential idea seems to be to take an expression
you wish to simplify, e.g.\ a sum with fixed or variable limits, and look for a linear recurrence
equation or difference equation (DE)
with polynomial coefficients that it satisfies (for a sum with fixed limits this can be deduced
from such an equation satisfied by the summand).
If this DE is not of first order, look for solutions of the DE that are also solutions of a linear first-order DE
with polynomial coefficients; these are hypergeometric solutions and are characterised
by a rational function (the ratio of consecutive terms). If such a solution is found,
it allows a reduction of order in the DE for the other solutions. Hypergeometric
solutions are considered to be ``closed form''; they are usually faster to compute
and equivalent to the original expression that gave rise to the DE. All the methods are completely
algorithmic and lead either to a solution of the desired type or to a proof that no such solution exists.
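To illustrate (a sketch of my own, using the free CAS SymPy rather than Maple or Mathematica; the particular recurrence is the second-order example from SymPy's own documentation, not one from the book):

```python
# Sketch (my own, not from the book): SymPy's rsolve searches for
# hypergeometric solutions of linear recurrences (DEs) with polynomial
# coefficients, in the spirit of the algorithms described above.
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
y = Function('y')

# A second-order DE with polynomial coefficients (the example from the
# SymPy documentation); it has two hypergeometric solutions, 2**n and n!.
eq = (n - 1)*y(n + 2) - (n**2 + 3*n - 2)*y(n + 1) + 2*n*(n + 1)*y(n)

sol = rsolve(eq, y(n))
print(sol)  # a combination of 2**n and factorial(n) with free constants
```

Both $2^{n}$ and $n!$ are hypergeometric, and \verb|rsolve| finds them by a search of this kind.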
I imagine that it is also possible
to look for solutions of linear DEs with polynomial coefficients of
order greater than 2 in the form of solutions of second-order linear DEs, etc.
Another thing I have looked into a little is change of variables to linearize nonlinear DEs.
What is amazing about this mathematics is that, for example, a vast
number of mathematical identities involving binomial coefficients, derived
by Ramanujan and others, can be proved mechanically once the algorithms have been programmed
into computer algebra systems (CAS's) such as Maple and Mathematica.
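For instance (my own toy identity, chosen for brevity; the implementation is SymPy's rather than Maple's or Mathematica's):

```python
# Sketch (my own example): Gosper's algorithm, as implemented in SymPy,
# mechanically finds a closed form for a hypergeometric sum.
from sympy import factorial, simplify, symbols
from sympy.concrete.gosper import gosper_sum

n, k = symbols('n k', integer=True, nonnegative=True)

# Sum_{k=0}^{n} k*k! ; the term ratio t(k+1)/t(k) = (k+1)**2/k is rational,
# so the summand is hypergeometric and Gosper's algorithm applies.
s = gosper_sum(k * factorial(k), (k, 0, n))
print(s)  # equivalent to factorial(n + 1) - 1

# Spot-check against direct summation.
assert s.subs(n, 6) == sum(j * factorial(j) for j in range(7))
```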
I wonder how many areas of applied
mathematics, e.g.\ mathematical physics, will be transformed by the ability now to ``do''
summations/integrations that were formerly considered possible only numerically.
Some examples you give in the book suggest that probability
theory might be such an area, because
binomial and multinomial coefficients occur there frequently. Another may be differential
equations, by exploiting the obvious idea that a differential equation is the limiting
case of a difference equation of the same order. Some of the theory of differential
equations ought to be derivable from this work.
While reading \begin{em}A=B\end{em}, I often had the feeling that I could have done
that if only I had had the idea of treating
hypergeometric sequences as a special
``closed form'' in regard to the solution of difference equations with polynomial
coefficients. The other key idea seems to be Gosper's, namely the factorisation
(5.2.3) of $r(n)$ in the equation (5.2.2), which is $r(n)y(n+1)-y(n)=1$.
Also there seem to be the ``tricks'' (5.2.4) and (5.2.5) and the final ``miracle''
when all the rational solutions of (5.2.6) turn out to be polynomial solutions. Whenever
the argument seems non-obvious like this and I get the feeling ``why did they do that?'', I try to
think it through myself.
In the case of Gosper's argument, equation (5.2.2) can be written as $r(n)=(1+y(n))/y(n+1)$.\pagebreak[4]
Now, treating this for the moment as an equation with $n$ in $C$ (the complex numbers),
it is obvious that if $y()$ has a singular point (goes to infinity) at $z_{0}$ then
\begin{itemize}
\item
$r(z_{0})$ is infinite unless $y(z_{0}+1)$ is infinite, and
\item $r(z_{0}-1)$ is zero unless $y(z_{0}-1)$ is infinite.
\end{itemize}
Similarly if $y(z_{0}-1)$ is infinite then
\begin{itemize}
\item
$r(z_{0}-1)$ is infinite unless $y(z_{0})$ is infinite (which it is) and
\item $r(z_{0}-2)$ is zero unless $y(z_{0}-2)$ is infinite.
\end{itemize}
Because $r()$ is a rational function, it has a finite number of singularities, and so this
argument must eventually terminate, showing that $r(z_0)$ is infinite and $r(z_{0}-h)$ is zero
for some positive integer $h$, provided $z_{0}$ is chosen as the singular point of $y()$ with the
largest real part (ensuring that $y(z_{0}+1)$ is finite).
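As a toy sanity check (my own example, not from the book): take $y(n)=1/(n-z_{0})$, whose only singular point is $z_{0}$. Then
\begin{displaymath}
r(n)=\frac{1+y(n)}{y(n+1)}
=\left(1+\frac{1}{n-z_{0}}\right)(n+1-z_{0})
=\frac{(n-z_{0}+1)^{2}}{n-z_{0}},
\end{displaymath}
so $r(z_{0})$ is indeed infinite and $r(z_{0}-1)=0$, i.e.\ the above holds with $h=1$.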
This behaviour of $r()$ can be removed by extracting from $r()$ the factor
$(n-(z_{0}-h))/(n-z_{0})$.
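To make the telescoping explicit (a short check of my own): with
$c(n)=(n-z_{0})(n-z_{0}+1)\cdots(n-z_{0}+h-1)$,
\begin{displaymath}
\frac{c(n+1)}{c(n)}
=\frac{(n-z_{0}+1)\cdots(n-z_{0}+h)}{(n-z_{0})\cdots(n-z_{0}+h-1)}
=\frac{n-(z_{0}-h)}{n-z_{0}},
\end{displaymath}
so this single extracted factor is already of the form $c(n+1)/c(n)$.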
If necessary this extraction will have to be repeated
until the remaining factor $r'()$ has no singular point that exceeds a zero by an
exact positive integer. The total factor extracted is obviously expressible as $c(n+1)/c(n)$
(by introducing extra factors $(n-(z_{0}-m))$ in the numerator and denominator where $1