The Incomplete Gamma and Confluent Hypergeometric Functions (Part 3)

In the previous post, I confirmed the curious integral

\displaystyle \int_0^z t^{a-1} e^{-t} \, dt = \displaystyle e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s},

where the right-hand side is a special case of the confluent hypergeometric function when a is a positive integer. The confirmation worked by differentiating the right-hand side; however, it felt psychologically unsatisfying — we basically guessed the answer and then checked that it worked.

A seemingly better way to approach the integral is to use the Taylor series representation of e^{-t} to integrate the left-hand side term-by-term:

\displaystyle \int_0^z t^{a-1} e^{-t} \, dt = \int_0^z t^{a-1} \sum_{n=0}^\infty \frac{(-t)^n}{n!} \,  dt

= \displaystyle \sum_{n=0}^\infty \frac{(-1)^n}{n!} \int_0^z t^{a+n-1} \, dt

= \displaystyle \sum_{n=0}^\infty \frac{(-1)^n}{n!} \frac{z^{a+n}}{a+n}.
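Before going further, it’s worth a quick numerical sanity check that this term-by-term series really does equal \gamma(a,z). For positive integer a, repeated integration by parts gives the closed form \gamma(a,z) = (a-1)! \left(1 - e^{-z} \sum_{k=0}^{a-1} z^k/k! \right); a short sketch in Python (my choice of tool here — Mathematica would do just as well; the truncation length is an arbitrary choice):

```python
from math import exp, factorial

def lower_gamma_series(a, z, terms=60):
    """Term-by-term integration result:
    sum_{n >= 0} (-1)^n z^(a+n) / (n! (a+n))."""
    return sum((-1) ** n * z ** (a + n) / (factorial(n) * (a + n))
               for n in range(terms))

def lower_gamma_closed(a, z):
    """For positive integer a, repeated integration by parts gives
    gamma(a, z) = (a-1)! * (1 - e^{-z} * sum_{k=0}^{a-1} z^k / k!)."""
    return factorial(a - 1) * (1 - exp(-z) * sum(z ** k / factorial(k)
                                                 for k in range(a)))

for a in (1, 2, 5):
    for z in (0.5, 1.0, 3.0):
        assert abs(lower_gamma_series(a, z) - lower_gamma_closed(a, z)) < 1e-10
```

The two computations agree to within floating-point error, so the series itself is fine.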

Well, that doesn’t look like the right-hand side of the top equation. However, the right-hand side of the top equation also has an e^{-z} in it. Let’s convert that to its Taylor series expansion as well and then use the Cauchy product formula for multiplying two infinite series:

\displaystyle e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} = \left( \sum_{s=0}^\infty \frac{(-z)^s}{s!} \right) \left( \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} \right)

= \displaystyle z^a \left( \sum_{s=0}^\infty \frac{(-1)^s z^s}{s!} \right) \left( \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^s \right)

= \displaystyle z^a \sum_{n=0}^\infty \sum_{s=0}^n \frac{(-1)^s z^s}{s!} \frac{(a-1)!}{(a+n-s)!} z^{n-s}

= \displaystyle \sum_{n=0}^\infty \sum_{s=0}^n \frac{(-1)^s}{s!} \frac{(a-1)!}{(a+n-s)!} z^{a+n}

Summarizing, apparently the following two infinite series are supposed to be equal:

\displaystyle \sum_{n=0}^\infty \frac{(-1)^n}{n!} \frac{z^{a+n}}{a+n} = \sum_{n=0}^\infty \sum_{s=0}^n \frac{(-1)^s}{s!} \frac{(a-1)!}{(a+n-s)!} z^{a+n},

or, matching coefficients of z^{a+n},

\displaystyle \frac{(-1)^n}{n! (a+n)} = \sum_{s=0}^n \frac{(-1)^s (a-1)!}{s! (a+n-s)!}.

When I first arrived at this equality, my immediate reaction was to throw up my hands and assume I had made a calculation error someplace — I had a hard time believing that this sum from s=0 to s=n could be true. However, after using Mathematica to evaluate the sum for about a dozen different values of n and a, I was able to assure myself that this identity was somehow true.
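For anyone who wants to repeat the experiment without Mathematica, here is the same check as a Python sketch, using exact rational arithmetic so that the equality is literal rather than merely numerical:

```python
from fractions import Fraction
from math import factorial

def lhs(n, a):
    """Left-hand side: (-1)^n / (n! (a+n))."""
    return Fraction((-1) ** n, factorial(n) * (a + n))

def rhs(n, a):
    """Right-hand side: sum_{s=0}^n (-1)^s (a-1)! / (s! (a+n-s)!)."""
    return sum(Fraction((-1) ** s * factorial(a - 1),
                        factorial(s) * factorial(a + n - s))
               for s in range(n + 1))

# Fractions are exact, so == tests true equality, not approximate agreement.
for a in range(1, 8):
    for n in range(12):
        assert lhs(n, a) == rhs(n, a)
```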

But why does this awkward summation work? This is no longer a question about integration: it’s a question about a finite sum with factorials. I continue this exploration in the next post.

The Incomplete Gamma and Confluent Hypergeometric Functions (Part 2)

In this series of posts, I confirm this curious integral:

\displaystyle \int_0^z t^{a-1} e^{-t} \, dt = \displaystyle \frac{z^a e^{-z}}{a} M(1, 1+a, z),

where the confluent hypergeometric function M(a,b,z) is

M(a,b,z) = \displaystyle 1+\sum_{s=1}^\infty \frac{a(a+1)\dots(a+s-1)}{b(b+1)\dots (b+s-1)} \frac{z^s}{s!}.

This integral can be confirmed — unsatisfactorily confirmed, but confirmed — by differentiating the right-hand side. For the sake of simplicity, I restrict my attention to the case when a is a positive integer. To begin, the right-hand side is

\displaystyle \frac{z^a e^{-z}}{a} M(1, 1+a, z) = \displaystyle \frac{z^a e^{-z}}{a} \left[1 + \sum_{s=1}^\infty \frac{1 \cdot 2 \cdot \dots \cdot s}{(a+1)(a+2)\dots (a+s)} \frac{z^s}{s!} \right]

= \displaystyle \frac{z^a e^{-z}}{a} \left[1 + \sum_{s=1}^\infty \frac{1}{(a+1)(a+2)\dots (a+s)} z^s \right]

= \displaystyle \frac{z^a e^{-z}}{a} \left[1 + \sum_{s=1}^\infty \frac{a!}{(a+s)!} z^s \right]

= \displaystyle \frac{z^a e^{-z}}{a} \sum_{s=0}^\infty \frac{a!}{(a+s)!} z^s

= \displaystyle e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s}.

We now differentiate, first by using the Product Rule and then differentiating the series term-by-term (blatantly ignoring the need to confirm that term-by-term differentiation applies to this series):

\displaystyle \frac{d}{dz} \left[\frac{z^a e^{-z}}{a} M(1, 1+a, z) \right] = \displaystyle \frac{d}{dz} \left[ e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} \right]

= -e^{-z} \displaystyle \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} + e^{-z} \frac{d}{dz} \left[  \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} \right]

= -e^{-z} \displaystyle \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} + e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} \frac{d}{dz} z^{a+s}

= -e^{-z} \displaystyle \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} + e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} (a+s) z^{a+s-1}

= -e^{-z} \displaystyle \sum_{s=0}^\infty \frac{(a-1)!}{(a+s)!} z^{a+s} + e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s-1)!} z^{a+s-1}.

We now shift the index of the first series:

\displaystyle \frac{d}{dz} \left[\frac{z^a e^{-z}}{a} M(1, 1+a, z) \right] =-e^{-z} \sum_{s=1}^\infty \frac{(a-1)!}{(a+s-1)!} z^{a+s-1} + e^{-z} \sum_{s=0}^\infty \frac{(a-1)!}{(a+s-1)!} z^{a+s-1}.

By separating the s=0 term of the second series, the right-hand side becomes:

\displaystyle -e^{-z} \sum_{s=1}^\infty \frac{(a-1)!}{(a+s-1)!} z^{a+s-1} + e^{-z} \frac{(a-1)!}{(a-1)!} z^{a-1} + e^{-z} \sum_{s=1}^\infty \frac{(a-1)!}{(a+s-1)!} z^{a+s-1} = e^{-z} z^{a-1}

since the two infinite series cancel. We have thus shown that

\displaystyle \frac{d}{dz} \left[\frac{z^a e^{-z}}{a} M(1, 1+a, z) \right] = e^{-z} z^{a-1}.

Therefore, the right-hand side is an antiderivative of z^{a-1} e^{-z}, and the Fundamental Theorem of Calculus gives:

\displaystyle \int_0^z t^{a-1} e^{-t} \, dt = \left[\frac{t^a e^{-t}}{a} M(1, 1+a, t) \right]_0^z

\displaystyle = \frac{z^a e^{-z}}{a} M(1, 1+a, z) - \frac{0^a e^{0}}{a} M(1, 1+a, 0)

\displaystyle = \frac{z^a e^{-z}}{a} M(1, 1+a, z).
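As one final sanity check, independent of the derivation above, the identity can be tested numerically. The Python sketch below truncates the M series and, for positive integer a, compares against the closed form of the integral obtained by repeated integration by parts; the truncation length and tolerance are arbitrary choices:

```python
from math import exp, factorial

def M(a, b, z, terms=60):
    """Truncated series for the confluent hypergeometric function M(a, b, z)."""
    total = term = 1.0
    for s in range(terms):
        term *= (a + s) * z / ((b + s) * (s + 1))  # ratio of consecutive terms
        total += term
    return total

def lower_gamma(a, z):
    """gamma(a, z) for positive integer a, via repeated integration by parts:
    (a-1)! * (1 - e^{-z} * sum_{k=0}^{a-1} z^k / k!)."""
    return factorial(a - 1) * (1 - exp(-z) * sum(z ** k / factorial(k)
                                                 for k in range(a)))

for a in (1, 2, 4, 7):
    for z in (0.25, 1.0, 2.5):
        assert abs(lower_gamma(a, z) - z ** a * exp(-z) / a * M(1, 1 + a, z)) < 1e-9
```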

While this confirms the equality, this derivation still feels very unsatisfactory — we basically guessed the answer and then confirmed that it worked. In the next few posts, I’ll consider the direct verification of this series.

The Incomplete Gamma and Confluent Hypergeometric Functions (Part 1)

Yes, the title of this post is a mouthful.

While working on a research project, a trail of citations led me to this curious equality in the Digital Library of Mathematical Functions:

\gamma(a,z) = \displaystyle \frac{z^a e^{-z}}{a} M(1, 1+a, z),

where the incomplete gamma function \gamma(a,z) is

\gamma(a,z) = \displaystyle \int_0^z t^{a-1} e^{-t} \, dt

and the confluent hypergeometric function M(a,b,z) is

M(a,b,z) = \displaystyle 1+\sum_{s=1}^\infty \frac{a(a+1)\dots(a+s-1)}{b(b+1)\dots (b+s-1)} \frac{z^s}{s!}.
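For readers who like to verify definitions in code, here is a minimal Python sketch of this series (truncated at an arbitrary cutoff), tested against two special cases that follow directly from the definition:

```python
from math import exp

def M(a, b, z, terms=80):
    """Truncated series for Kummer's confluent hypergeometric function M(a, b, z)."""
    total = term = 1.0
    for s in range(terms):
        term *= (a + s) * z / ((b + s) * (s + 1))  # ratio of consecutive terms
        total += term
    return total

# Known special case: M(1, 2, z) = (e^z - 1) / z.
for z in (0.1, 1.0, 4.0):
    assert abs(M(1, 2, z) - (exp(z) - 1) / z) < 1e-10

# When a = b the numerator and denominator products cancel, leaving e^z.
for z in (0.5, 2.0):
    assert abs(M(3, 3, z) - exp(z)) < 1e-10
```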

While I didn’t doubt that this was true — it has surely been long established — I had an annoying problem: I didn’t really believe it. The gamma function

\Gamma(a) = \displaystyle \int_0^\infty t^{a-1} e^{-t} \, dt

is a well-known function with the famous property that

\Gamma(n+1) = n!

for non-negative integers n; this is often seen in calculus textbooks as an advanced challenge using integration by parts. The incomplete gamma function \gamma(a,z) has the same look as \Gamma(a), except that the range of integration is from 0 to z (and not \infty). The gamma function appears all over the place in mathematics courses.
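This famous property is easy to see in code; a quick check using Python’s standard-library gamma function:

```python
from math import gamma, factorial

# math.gamma implements the gamma function Gamma(x); for non-negative
# integers n, Gamma(n + 1) reproduces n! up to floating-point rounding.
for n in range(15):
    assert abs(gamma(n + 1) - factorial(n)) <= 1e-10 * factorial(n)
```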

The confluent hypergeometric function, on the other hand, typically arises in mathematical physics as the solution of the differential equation

z f''(z) + (b-z) f'(z) - af(z) = 0.

As I’m not a mathematical physicist, I won’t presume to state why this particular differential equation is important — except that it appears to be a niche equation that arises in very specialized applications.

So I had a hard time psychologically accepting that these two functions were in any way related.

While ultimately unimportant for advancing mathematics, this series will be about the journey I took to directly confirm the above equality.

Horrible False Analogy

I had forgotten the precise assumptions on uniform convergence that guarantee that an infinite series can be differentiated term by term, so that one can safely conclude

\displaystyle \frac{d}{dx} \sum_{n=1}^\infty f_n(x) = \sum_{n=1}^\infty f_n'(x).

This was part of my studies in real analysis as a student, so I remembered there was a theorem but I had forgotten the details.

So, like just about everyone else on the planet, I went to Google to refresh my memory even though I knew that searching for mathematical results on Google can be iffy at best.

And I was not disappointed. Behold this laughably horrible false analogy (and even worse graphic) that I found on chegg.com:

Suppose Arti has to plan a birthday party and has lots of work to do like arranging stuff for decorations, planning venue for the party, arranging catering for the party, etc. All these tasks can not be done in one go and so need to be planned. Once the order of the tasks is decided, they are executed step by step so that all the arrangements are made in time and the party is a success.

Similarly, in Mathematics when a long expression needs to be differentiated or integrated, the calculation becomes cumbersome if the expression is considered as a whole but if it is broken down into small expressions, both differentiation and the integration become easy.

Pedagogically, I’m all for using whatever technique an instructor might deem necessary to “sell” abstract mathematical concepts to students. Nevertheless, I’m pretty sure that this particular party-planning analogy has no potency for students who have progressed far enough to rigorously study infinite series.

My Favorite One-Liners: Part 111

I tried a new wise-crack in class recently, and it was a rousing success. My math majors had trouble recalling basic facts about tests for convergent and divergent series, and so I projected onto the front screen the Official Repository of all Knowledge (www.google.com) and searched for “divergent series” to “help” them recall their prior knowledge.

Worked like a charm.

https://www.google.com/search?q=divergent+series

My Favorite One-Liners: Part 35

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Every once in a while, I’ll discuss something in class which is at least tangentially related to an unsolved problem in mathematics. For example, when discussing infinite series, I’ll ask my students to debate whether or not this series converges:

1 + \frac{1}{10} + \frac{1}{100} + \frac{1}{1000} + \dots

Of course, this one converges since it’s an infinite geometric series. Then we’ll move on to an infinite series that is not geometric:

1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \dots,

where the denominators are all perfect squares. I’ll have my students guess about whether or not this one converges. It turns out that it does, and the answer is exactly what my students should expect the answer to be, \pi^2/6.

Then I tell my students, that was a joke (usually to relieved laughter).

Next, I’ll put up the series

1 + \frac{1}{8} + \frac{1}{27} + \frac{1}{64} + \dots,

where the denominators are all perfect cubes. I’ll have my students guess about whether or not this one converges. Usually someone will see that this one has to converge since the previous one converged and the terms of this one are smaller, term by term, than those of the previous series — an intuitive use of the Comparison Test. Then, I’ll ask, what does this converge to?

The answer is, nobody knows. The sum — now known as Apéry’s constant, after Roger Apéry proved in 1979 that it is irrational — can be calculated to very high precision with modern computers, of course, but it’s unknown whether there’s a simple closed-form expression for it.
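Both claims are easy to check numerically; a short Python sketch (the cutoff N is an arbitrary choice):

```python
from math import pi

N = 1_000_000
zeta2 = sum(1.0 / n ** 2 for n in range(1, N + 1))  # sum of reciprocal squares
zeta3 = sum(1.0 / n ** 3 for n in range(1, N + 1))  # sum of reciprocal cubes

# The tail of the squares series is about 1/N, so the partial sum
# sits within roughly 1e-6 of the exact value pi^2 / 6.
assert abs(zeta2 - pi ** 2 / 6) < 2.0 / N

# The cubes series converges to Apery's constant, zeta(3) = 1.2020569...;
# known to be irrational, but with no known simple closed form.
assert abs(zeta3 - 1.2020569031595943) < 1e-8
```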

So, concluding the story whenever I present an unsolved problem, I’ll tell my students,

If you figure out the answer, call me, and call me collect.

My Favorite One-Liners: Part 12

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Often in mathematics, one proof is quite similar to another proof. For example, in Precalculus or Discrete Mathematics, students encounter the theorem

\sum_{k=1}^n (a_k + b_k) = \sum_{k=1}^n a_k + \sum_{k=1}^n b_k.

The formal proof requires mathematical induction, but the “good enough” proof is usually convincing enough for most students, as it’s just the repeated use of the commutative and associative properties to rearrange the terms in the sum:

\sum_{k=1}^n (a_k + b_k)= (a_1 + b_1) + (a_2 + b_2) + \dots + (a_n + b_n)

= (a_1 + a_2 + \dots + a_n) + (b_1 + b_2 + \dots + b_n)

= \sum_{k=1}^n a_k + \sum_{k=1}^n b_k.

Next, I’ll often present the new but closely related theorem

\sum_{k=1}^n (a_k - b_k) = \sum_{k=1}^n a_k -\sum_{k=1}^n b_k.

The proof of this would take roughly the same amount of time as the first proof, but there’s often little pedagogical value in doing all the steps over again in class. So here’s the line I’ll use: “At this point, I invoke the second-most powerful word in mathematics…” and then let them guess what this mysterious word is.

After a few seconds, I tell them the answer: “Similar.” The proof of the second theorem exactly parallels the proof of the first except for some sign changes. So I’ll tell them that mathematicians often use this word in mathematical proofs when it’s dead obvious that the proof can be virtually copied-and-pasted from a previous proof.

Eventually, students will catch on to my deliberate choice of words and ask, “What’s the most powerful word in mathematics?” As any mathematician knows, the most powerful word in mathematics is “Trivial”… the proof is so easy that it’s not necessary to write it down. But I warn my students that they’re not allowed to use this word when answering exam questions.

The third most powerful phrase in mathematics is “It is left for the student,” thus saving the professor from writing down the proof in class and encouraging students to figure out the details on their own.

 

What I Learned by Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post.

When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I don’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites along with the page numbers in the book — while giving the book a very high recommendation.

Part 1: The smallest value of n so that 1 + \frac{1}{2} + \dots + \frac{1}{n} > 100 (page 23).

Part 2: Except for a couple select values of m<n, the sum \frac{1}{m} + \frac{1}{m+1} + \dots + \frac{1}{n} is never an integer (pages 24-25).

Part 3: The sum of the reciprocals of the twin primes converges (page 30).

Part 4: Euler somehow calculated \zeta(26) without a calculator (page 41).

Part 5: The integral called the Sophomore’s Dream (page 44).

Part 6: St. Augustine’s thoughts on mathematicians — in context, astrologers (page 65).

Part 7: The probability that two randomly selected integers have no common factors is 6/\pi^2 (page 68).

Part 8: The series for quickly computing \gamma to high precision (page 89).

Part 9: An observation about the formulas for 1^k + 2^k + \dots + n^k (page 81).

Part 10: A lower bound for the gap between successive primes (page 115).

Part 11: Two generalizations of \gamma (page 117).

Part 12: Relating the harmonic series to meteorological records (page 125).

Part 13: The crossing-the-desert problem (page 127).

Part 14: The worm-on-a-rope problem (page 133).

Part 15: An amazingly nasty formula for the nth prime number (page 168).

Part 16: A heuristic argument for the form of the prime number theorem (page 172).

Part 17: Oops.

Part 18: The Riemann Hypothesis can be stated in a form that can be understood by high school students (page 207).

 

 

Lessons from teaching gifted elementary school students: Index (updated)

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on various lessons I’ve learned while trying to answer the questions posed by gifted elementary school students. (This is updated from my previous index.)

Part 1: A surprising pattern in some consecutive perfect squares.

Part 2: Calculating 2 to a very large exponent.

Part 3a: Calculating 2 to an even larger exponent.

Part 3b: An analysis of just how large this number actually is.

Part 4a: The chance of winning at BINGO in only four turns.

Part 4b: Pedagogical thoughts on one step of the calculation.

Part 4c: A complicated follow-up question.

Part 5a: Exponentiation is to multiplication as multiplication is to addition. So, multiplication is to addition as addition is to what? (I offered the answer of incrementation, but it was rejected: addition requires two inputs, while incrementation only requires one.)

Part 5b: Why there is no binary operation that completes the above analogy.

Part 5c: Knuth’s up-arrow notation for writing very big numbers.

Part 5d: Graham’s number, reputed to be the largest number ever to appear in a mathematical proof.

Part 6a: Calculating (255/256)^x.

Part 6b: Solving (255/256)^x = 1/2 without a calculator.

Part 7a: Estimating the size of a 1000-pound hailstone.

Part 7b: Estimating the size of a 1000-pound hailstone.

Part 8a: Statement of an unusual triangle summing problem.

Part 8b: Solution using binomial coefficients.

Part 8c: Rearranging the series.

Part 8d: Reindexing to further rearrange the series.

Part 8e: Rewriting using binomial coefficients again.

Part 8f: Finally obtaining the numerical answer.

Part 8g: Extracting the square root of the answer by hand.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 18

The Riemann Hypothesis (see here, here, and here) is perhaps the most famous (and also most important) unsolved problem in mathematics. Gamma (page 207) provides a way of writing down this conjecture in a form that uses only notation commonly taught in high school:

If \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \cos(b \ln r) = 0 and \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \sin(b \ln r) = 0 for some pair of real numbers a and b, then a = \frac{1}{2}.

As noted in the book, “It seems extraordinary that the most famous unsolved problem in the whole of mathematics can be phrased so that it involves the simplest of mathematical ideas: summation, trigonometry, logarithms, and [square roots].”
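This form also invites numerical experimentation. The Python sketch below is only an illustration, not evidence for the hypothesis: it plugs in a = 1/2 together with the known imaginary part of the first nontrivial zero of the zeta function, b = 14.134725…, and checks that both sums are numerically zero there. Averaging consecutive partial sums is a crude acceleration trick for alternating series; the cutoff and tolerances are arbitrary choices.

```python
from math import cos, sin, log

def eta_sums(a, b, N=200_000):
    """Averaged partial sums of the two alternating series in the
    statement above (cosine sum, sine sum)."""
    c = s = c_prev = s_prev = 0.0
    for r in range(1, N + 1):
        w = (-1.0 if r % 2 else 1.0) / r ** a  # (-1)^r / r^a
        c_prev, s_prev = c, s
        c += w * cos(b * log(r))
        s += w * sin(b * log(r))
    # Average the last two partial sums to accelerate convergence.
    return (c + c_prev) / 2, (s + s_prev) / 2

# At a = 1/2 and the first zeta zero, both sums vanish (numerically).
c, s = eta_sums(0.5, 14.134725141734693)
assert abs(c) < 1e-3 and abs(s) < 1e-3

# At a value of b that is not a zero, the sums do not both vanish.
c, s = eta_sums(0.5, 10.0)
assert abs(c) > 0.01 or abs(s) > 0.01
```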


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I don’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.