SSL Minutes for 3/4/04
Notes: Carl
Snacks: Jim's Birthday Surprise
Next time:
Notes: Hal
Snacks: We forgot to pick someone. (Carl will bring something next time
if no one else has high hopes of providing something)
SSL Starts right up!
Sam has been looking at "A=B", the automatic theorem prover, and suggests
we use it. A small debate arises as to whether we actually know what to
ask it to prove yet:
Hal/Carl -- For the specific case of the Newton recurrence (a=1, b=1, c=1),
it now seems that something went wrong with our empirical explicit formula
using Fibonacci numbers, but we don't understand why yet...
Jim enters with distinguished guest, Rick Kenyon. Before talking about
what we're working on / how things are going, how did VIGRE go on Wednesday?
Sam/Emilie -- Talked mostly about SSL. The big question was, "do you write a
lot of stuff up?" (Our emphasis in SSL has not been as heavily on writing
things up, though.)
Jim says, Gloria says "We love your undergrads!" so that is encouraging.
Next Tuesday: we will be having the interviews again, about how you think
things are going, etc. We won't be assigning time slots though.
[Jim will get more markers for B107]
Emilie -- has a conjecture for the family of recurrences
a(n) a(n-k) = a(n-1) a(n-k+1) + a(n-(k-1)/2) a(n-(k+1)/2)
a(n) = [1/2 (k^2 + 1) + 3k] (a(n-k+1) - a(n-2k+2)) + a(n-3k+3)
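A small numerical sketch (mine, not the group's) for probing the conjecture: the function below generates terms of the quadratic recurrence for odd k, seeded, as an assumption, with k ones; the resulting terms can then be compared against the conjectured linear relation. The name `quad_sequence` and the all-ones seed are illustrative choices.

```python
from fractions import Fraction

def quad_sequence(k, n_terms):
    """Terms of the quadratic recurrence
    a(n) a(n-k) = a(n-1) a(n-k+1) + a(n-(k-1)/2) a(n-(k+1)/2),
    for odd k >= 3, seeded (an assumption) with k ones."""
    assert k % 2 == 1 and k >= 3
    a = [Fraction(1)] * k
    for n in range(k, n_terms):
        a.append((a[n - 1] * a[n - k + 1]
                  + a[n - (k - 1) // 2] * a[n - (k + 1) // 2]) / a[n - k])
    return a

# For k = 3 the recurrence degenerates to a(n) a(n-3) = 2 a(n-1) a(n-2):
seq = quad_sequence(3, 9)
# seq == [1, 1, 1, 2, 4, 16, 64, 512, 4096] (as Fraction values)
```

Using Fraction keeps the division exact, so any failure of integrality (a Somos-style surprise) would show up immediately.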
Jim  Fundamental theorem of linear recurrences says we can write the
solutions to the recurrence in terms of exponential functions in which
the bases of the exponentials are the roots of the characteristic
polynomial. In the case of the Fibonacci numbers, the characteristic
polynomial is x^2 - x - 1 = 0. We will be able to find an explicit
formula for the roots of the characteristic polynomial of the a(n)
recurrence.
m = k - 1
r = 1/2 (k^2 + 1) + 3k
a(n) = r (a(n-m) - a(n-2m)) + a(n-3m)
a~ = r (T^m a~ - T^(2m) a~) + T^(3m) a~
(T^(3m) - r T^(2m) + r T^m - I) a~ = 0~
(Here a~ is the two-sided infinite sequence ..., a(-1), a(0), a(1), ...;
0~ is the all-zeroes sequence; T is the shift operator that sends
a~ to b~ where b(n) = a(n-1); and I is the identity operator that
sends a~ to a~.)
Thm: If p(T) has distinct roots lambda1, lambda2, ... then the general
solution to p(T) (a~) = 0~ is of the form
a(n) = c1 lambda1^n + c2 lambda2^n + ...
(There's a more complicated formula that applies when p(T) has
multiple roots, but we don't need it here.)
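As an illustration of the theorem in the Fibonacci case (a sketch, not from the meeting): the roots of x^2 - x - 1 are phi = (1+sqrt(5))/2 and psi = (1-sqrt(5))/2, and Binet's formula F(n) = (phi^n - psi^n)/sqrt(5) is exactly the promised exponential form.

```python
from math import sqrt

# Roots of the Fibonacci characteristic polynomial x^2 - x - 1:
phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

def binet(n):
    # General solution c1 phi^n + c2 psi^n with c1 = -c2 = 1/sqrt(5);
    # round() absorbs floating-point error for small n.
    return round((phi ** n - psi ** n) / sqrt(5))

fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

assert all(binet(n) == fib[n] for n in range(20))
```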
p(x) = x^(3m) - r x^(2m) + r x^m - 1
     = (x^m - 1) (x^(2m) - (r-1) x^m + 1)
     = (x^m - 1) (x^m - ((r-1) + sqrt(r^2 - 2r - 3))/2)
                 (x^m - ((r-1) - sqrt(r^2 - 2r - 3))/2)
And it's easy to find the roots of these: if we write this as
(x^m  1) (x^m  A) (x^m  B),
then the roots are precisely the numbers of the form zeta,
alpha zeta, and beta zeta, where zeta ranges over all
the complex mth roots of 1, alpha is a fixed mth root of A, and beta
is a fixed mth root of B.
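A hedged numerical check of this factorization (the helper name and the sample values of m and r below are illustrative choices, not from the meeting): A and B are taken as the roots of y^2 - (r-1) y + 1, and both sides are compared at random complex points.

```python
import cmath
import random

def check_factorization(m, r, trials=100):
    """Numerically verify
    x^(3m) - r x^(2m) + r x^m - 1 == (x^m - 1)(x^m - A)(x^m - B),
    where A, B = ((r-1) +/- sqrt(r^2 - 2r - 3)) / 2."""
    disc = cmath.sqrt(r * r - 2 * r - 3)
    A = ((r - 1) + disc) / 2
    B = ((r - 1) - disc) / 2
    for _ in range(trials):
        x = complex(random.uniform(-2, 2), random.uniform(-2, 2))
        lhs = x ** (3 * m) - r * x ** (2 * m) + r * x ** m - 1
        rhs = (x ** m - 1) * (x ** m - A) * (x ** m - B)
        if abs(lhs - rhs) > 1e-6 * max(1.0, abs(lhs)):
            return False
    return True

# e.g. m = 2 (k = 3) with r = 14, and m = 3 (k = 4) with r = 7:
assert check_factorization(2, 14)
assert check_factorization(3, 7)
```

Since A B = 1 and A + B = r - 1, the middle factor x^(2m) - (r-1) x^m + 1 splits as claimed.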
The whole Newton's method problem gets brought up again.
Jim thinks there will be a connection with continued fractions.
To be specific about this, let's define polynomials p_n and q_n
so that what we called P_n before is the same as p_(2^n) and what
we called Q_n before is the same as q_(2^n) (but let's stick to
the P's and p's for now). Newton's algorithm is a way to get
from p_(N) to p_(2N), whereas the continued fraction algorithm
is a way to get from p_(N) to p_(N+1). Newton's algorithm
is a much faster way to get good approximations, but in some
ways its very speed makes it hard to analyze (witness the
trouble we're having proving our formulas). It might be helpful
to come up with a conjectural "continued-fraction" formula
relating p_(N) and p_(N+1).
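A concrete instance of the index-doubling phenomenon (my example, assuming the simplest case p(x) = x^2 - 2, i.e. approximating sqrt(2); not the group's specific p_n, q_n): Newton iterates starting from 1 land exactly on the continued-fraction convergents of sqrt(2), at convergent number 2^n - 1 after n steps.

```python
from fractions import Fraction

def newton_step(x):
    # Newton map for p(x) = x^2 - 2: x -> x - p(x)/p'(x) = (x^2 + 2)/(2x)
    return (x * x + 2) / (2 * x)

x = Fraction(1)
newton_iters = [x]
for _ in range(3):
    x = newton_step(x)
    newton_iters.append(x)
# newton_iters: 1, 3/2, 17/12, 577/408

# Convergents of sqrt(2) = [1; 2, 2, 2, ...] via the standard
# recurrences h_n = 2 h_{n-1} + h_{n-2}, k_n = 2 k_{n-1} + k_{n-2}:
h, k = [1, 3], [1, 2]
for _ in range(6):
    h.append(2 * h[-1] + h[-2])
    k.append(2 * k[-1] + k[-2])
convergents = [Fraction(h[i], k[i]) for i in range(8)]

# Newton iterate n is convergent number 2^n - 1: each Newton step
# roughly doubles the continued-fraction index.
assert newton_iters == [convergents[0], convergents[1],
                        convergents[3], convergents[7]]
```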
[ Jim's birthday cake is excellent! Happy Birthday Jim! ]
[ Sadly, his birthday song wasn't very long... ]
Continuing with the Newton's method theme, Rick suggests looking
at the complex Mobius transformation z -> (az + b)/(cz + d)
that maps the two roots to 0 and infinity AND maps the bisecting
line (in the complex plane) to the unit circle.
Try roots 1, -1
f(z) = (z+1)/(z-1) (sends 1 -> inf, -1 -> 0)
f^(-1)(z) = (z+1)/(z-1) = f(z)
p(x) = x^2 - 1
p'(x) = 2x
z -> z - p(z)/p'(z) = (z^2+1)/(2z)
Applying f^(-1)([f(z)]^2) is equivalent to applying Newton's method.
p(z) = az^2 + bz + c
N_p(z) = z - p(z)/p'(z)
            N_p
     z -----------> N_p(z)
     |                |
   M |                | M
     v       S        v
   zeta ----------> zeta^2

where M is the Mobius transformation that sends r1 -> 0, r2 -> inf,
and S is the squaring map that sends zeta to zeta^2.
N_p = M^(-1) o S o M (here "o" denotes composition of functions)
(N_p)^n = M^(-1) o S^n o M
S^n is the map that sends zeta to zeta^(2^n), which is comparatively
easy to understand; so this could give us a good way to understand the
nth iterate of the Newton map N_p associated with the polynomial p.
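A numerical check of the conjugacy N_p = M^(-1) o S o M in the worked example above (p(z) = z^2 - 1, M(z) = (z+1)/(z-1)); the sample points are arbitrary, and this is a sketch rather than a proof:

```python
import random

# Worked example: p(z) = z^2 - 1, with roots r1 = -1 and r2 = 1.
def M(z):
    # Mobius map with M(-1) = 0 and M(1) = inf
    return (z + 1) / (z - 1)

M_inv = M  # this particular M is an involution, so M^(-1) = M

def N_p(z):
    # Newton map z - p(z)/p'(z) for p(z) = z^2 - 1
    return (z * z + 1) / (2 * z)

random.seed(0)
for _ in range(200):
    z = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    lhs, rhs = N_p(z), M_inv(M(z) ** 2)   # conjugate the squaring map
    assert abs(lhs - rhs) <= 1e-8 * max(1.0, abs(lhs))
```

The same check then gives the nth iterate for free: (N_p)^n(z) agrees with M_inv(M(z) ** (2 ** n)).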
Does something like this work for some cubics?
p = z^2 (z - 1)
N_p(z) = (2z^2 - z) / (3z - 2)  ~~>  z^2 + c
Related to taking iterates of:
  w -> (w^2 + w)/2           }
                             } related by
  z -> z^2 + 1/4             } Mobius
                             } transformations
  z -> (2z^2 - z)/(3z - 2)   }
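A quick check (mine, not the group's) that the quoted formula N_p(z) = (2z^2 - z)/(3z - 2) really is the Newton map of p(z) = z^2 (z - 1), evaluated at random complex sample points:

```python
import random

def p(z):
    return z * z * (z - 1)        # p(z) = z^3 - z^2

def dp(z):
    return 3 * z * z - 2 * z      # p'(z)

def N_p(z):
    # One Newton step for the cubic p
    return z - p(z) / dp(z)

random.seed(1)
for _ in range(200):
    z = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    claimed = (2 * z * z - z) / (3 * z - 2)
    assert abs(N_p(z) - claimed) <= 1e-8 * max(1.0, abs(claimed))
```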
These should be analogous to Newton's method;
Interesting sequences to try..
Are there any instances of Newton's method that are conjugate to z -> z^3 ?
(Rick says No, ... or at least it can't involve simple rational functions.)
The others have been working in B107 on their mathematical monsters.
(Trying to make them into nice, respectable, tame monsters though..)
Martin will be getting Magma tomorrow, OR ELSE BE CAST INTO THE VOLCANO!
But that's how things go here at sizzle :)
A parting observation made at today's SSL session:
"Two is the oddest prime."