Thoughts on Infinity (Part 3b)

The five most important numbers in mathematics are 0, 1, e, \pi, and i. In sixth place (a distant sixth place) is probably \gamma, the Euler-Mascheroni constant. See Mathworld or Wikipedia for more details. (For example, it’s astounding that we still don’t know if \gamma is irrational or not.)

In yesterday’s post, we saw the curious phenomenon that the commutative and associative laws do not apply to conditionally convergent series or infinite products. In tomorrow’s post, I’ll present another classic example of this phenomenon, due to Cauchy. To prepare for that post, however, we’ll need to see how \gamma arises from a certain conditionally convergent series.

Separately define the even and odd terms of the sequence \{a_n\} by

a_{2n} = \displaystyle \int_n^{n+1} \frac{dx}{x}

and

a_{2n-1} = \displaystyle \frac{1}{n}.

It’s pretty straightforward to show that this sequence is decreasing. The function f(x) = \displaystyle \frac{1}{x} is clearly decreasing for x > 0, and so the maximum value of f(x) on the interval [n,n+1] must occur at the left endpoint, while the minimum value must occur at the right endpoint. Since the length of this interval is 1, we have

\displaystyle \frac{1}{n+1} \cdot 1 < \displaystyle \int_n^{n+1} \frac{dx}{x} < \displaystyle \frac{1}{n} \cdot 1,

or

a_{2n+1} < a_{2n} < a_{2n-1}.

Since the subsequence \{a_{2n-1}\} clearly decreases to 0, this shows the full sequence \{a_n\} is a decreasing sequence with limit 0.
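As a quick sanity check (my own Python sketch, not part of the original argument), we can verify numerically that the terms are strictly decreasing, using the fact that \int_n^{n+1} \frac{dx}{x} = \ln \frac{n+1}{n}:

```python
import math

def a(k):
    """k-th term of the sequence: a_{2n-1} = 1/n (odd k), a_{2n} = ln((n+1)/n) (even k)."""
    if k % 2 == 1:
        n = (k + 1) // 2          # odd index: k = 2n - 1
        return 1.0 / n
    n = k // 2                    # even index: k = 2n
    return math.log((n + 1) / n)  # equals the integral of 1/x from n to n+1

terms = [a(k) for k in range(1, 101)]
# The sequence should be strictly decreasing (and it tends to 0).
assert all(terms[i] > terms[i + 1] for i in range(len(terms) - 1))
```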

By the alternating series test, this implies that the series

\displaystyle \sum_{n=1}^\infty (-1)^{n-1} a_n

converges. The sum of this series is called the Euler-Mascheroni constant \gamma; in other words, the limit of the partial sums is \gamma:

\displaystyle \lim_{M \to \infty} \sum_{n=1}^M (-1)^{n-1} a_n = \gamma.
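Before deriving a cleaner form of this limit, here is a short numerical sketch (mine, not the original post's) of the alternating series converging to \gamma; the decimal value of \gamma is used only for comparison:

```python
import math

def a(k):
    # a_{2n-1} = 1/n for odd k; a_{2n} = ln((n+1)/n) for even k
    return 1.0 / ((k + 1) // 2) if k % 2 == 1 else math.log((k // 2 + 1) / (k // 2))

def partial_sum(M):
    """Partial sum of the alternating series sum_{n=1}^{M} (-1)^(n-1) a_n."""
    return sum((-1) ** (k - 1) * a(k) for k in range(1, M + 1))

EULER_GAMMA = 0.5772156649015329  # known decimal expansion, for comparison only
print(partial_sum(19999))         # close to gamma = 0.57721...
assert abs(partial_sum(19999) - EULER_GAMMA) < 1e-3
```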

Let’s take the upper limit to be an odd number M, where M = 2N-1 and N is an integer. Then by separating the even and odd terms, we obtain

\displaystyle \sum_{n=1}^{2N-1} (-1)^{n-1} a_n = \displaystyle \sum_{n=1}^{N} (-1)^{2n-1-1} a_{2n-1} + \sum_{n=1}^{N-1} (-1)^{2n-1} a_{2n}

= \displaystyle \sum_{n=1}^N a_{2n-1} - \sum_{n=1}^{N-1} a_{2n}

= \displaystyle \sum_{n=1}^N \frac{1}{n} - \sum_{n=1}^{N-1} \int_n^{n+1} \frac{dx}{x}

= \displaystyle \sum_{n=1}^N \frac{1}{n} - \int_1^N \frac{dx}{x}.

Therefore,

\displaystyle \lim_{N \to \infty} \left( \sum_{n=1}^N \frac{1}{n} - \int_1^N \frac{dx}{x} \right) = \gamma.

With this interpretation, the sum can be viewed as the total area of N rectangles of width 1 and heights 1, \frac{1}{2}, \dots, \frac{1}{N}, while the integral is the area under the hyperbola y = \frac{1}{x} from x=1 to x=N. Therefore, \gamma can be viewed as the limiting area of the region between the tops of the rectangles and the hyperbola.

In other words, it’s an amazing fact that while both

\displaystyle \sum_{n=1}^\infty \frac{1}{n}

and

\displaystyle \int_1^\infty \frac{dx}{x}

diverge, somehow the difference

\displaystyle \lim_{N \to \infty} \left(\sum_{n=1}^N \frac{1}{n} - \int_1^N \frac{dx}{x} \right)

converges… and this limit is defined to be the number \gamma.
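This difference is easy to watch converge numerically. The sketch below (my own check, with the known decimal value of \gamma used only for comparison) computes H_N - \ln N for increasing N:

```python
import math

EULER_GAMMA = 0.5772156649015329  # known decimal expansion, for comparison only

def gap(N):
    """H_N - ln N: the harmonic partial sum minus the corresponding integral."""
    return sum(1.0 / n for n in range(1, N + 1)) - math.log(N)

for N in (10, 1000, 100000):
    print(N, gap(N))              # both pieces diverge, but the gap settles down
assert abs(gap(10**6) - EULER_GAMMA) < 1e-5
```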

Thoughts on Infinity (Part 3a)

Last summer, Math With Bad Drawings had a nice series on the notion of infinity that I recommend highly. This topic is a perennial struggle for math majors to grasp, and I like the approach that the author uses to sell this difficult notion.

Part 3 on infinite series and products that are conditionally convergent discusses a head-scratching fact: according to the Riemann series theorem, the commutative and associative laws do not apply to conditionally convergent series.

An infinite series \displaystyle \sum_{n=1}^\infty a_n converges conditionally if it converges to a finite number but \displaystyle \sum_{n=1}^\infty |a_n| diverges. Indeed, by suitably rearranging the terms, the sum can be changed so that the (rearranged) series converges to any finite value. Even worse, the terms can be rearranged so that the sum converges to either \infty or -\infty. (Of course, this can’t happen for finite sums, and rearrangements of an absolutely convergent series do not change the value of the sum.)

I really like Math With Bad Drawing’s treatment of the subject, as it starts with an infinite product for \pi/2:

The top line is correct. However, the bottom line has to be incorrect, since \pi/2 > 1 but each factor on the right-hand side is less than 1. The error, of course, stems from conditional convergence: the factors in the top product cannot be rearranged.

Conditional convergence is typically taught but glossed over in Calculus II, since these rearrangements are such a head-scratching topic. I really like the above example because the flaw in the logic is made evident after only three steps.

In tomorrow’s post, I’ll continue with another example of rearranging the terms in a conditionally convergent series.

Thoughts on Infinity (Part 2b)

Last summer, Math With Bad Drawings had a nice series on the notion of infinity that I recommend highly. This topic is a perennial struggle for math majors to grasp, and I like the approach that the author uses to sell this difficult notion.

Here’s Part 2 on the harmonic series, which is extremely well-written and which I recommend highly. Here’s a brief summary: the infinite harmonic series

\displaystyle \sum_{n=1}^\infty \frac{1}{n}

diverges. However, if you eliminate from the harmonic series all of the fractions whose denominator contains a 9, then the new series converges! This series has been called the Kempner series, named after the mathematician who first published this result about 100 years ago.

Source: http://smbc-comics.com/index.php?id=3777

To prove this, we’ll examine the series whose denominators are between 1 and 8, between 10 and 88, between 100 and 888, etc. First, each of the terms in the partial sum

\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}

is less than or equal to 1, and so the sum of the above eight terms must be less than 8.

Next, each of the terms in the sum

\displaystyle \frac{1}{10} + \frac{1}{11} + \dots + \frac{1}{88}

is at most \displaystyle \frac{1}{10}. Notice that there are 72 terms in this sum, since there are 8 possibilities for the first digit of the denominator (1 through 8) and 9 possibilities for the second digit (0 through 8). So the sum of these 72 terms must be less than \displaystyle 8 \times \frac{9}{10}.
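This counting argument is easy to confirm by brute force; here is a quick Python check (mine, not part of the original post):

```python
# All denominators between 10 and 88 whose digits avoid 9.
no_nine = [n for n in range(10, 89) if '9' not in str(n)]
assert len(no_nine) == 72                  # 8 first digits (1-8) x 9 second digits (0-8)
assert sum(1.0 / n for n in no_nine) < 8 * 9 / 10   # the bound 7.2 from the text
```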

Next, each of the terms in the sum

\displaystyle \frac{1}{100} + \frac{1}{101} + \dots + \frac{1}{888}

is at most \displaystyle \frac{1}{100}. Notice that there are 8 \times 9 \times 9 terms in this sum, since there are 8 possibilities for the first digit of the denominator (1 through 8) and 9 possibilities for each of the second and third digits (0 through 8). So the sum of these 8 \times 9 \times 9 terms must be less than 8 \times \displaystyle \frac{9^2}{100}.

Continuing, we see that the Kempner series is bounded above by

\displaystyle 8 + 8 \times \frac{9}{10} + 8 \times \frac{9^2}{10^2} + \dots

Using the formula for an infinite geometric series, we see that the Kempner series converges, and the sum of the Kempner series must be less than 8 \times \displaystyle \frac{1}{1-9/10} = 80.
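The bound of 80 can also be checked directly by summing the series by brute force, at least over the early terms. The sketch below (my own check) adds up all terms with denominators below one million:

```python
# Partial sum of the Kempner series (denominators with no digit 9) by brute force,
# compared against the geometric-series bound of 80 derived above.
partial = sum(1.0 / n for n in range(1, 10**6) if '9' not in str(n))
print(partial)  # creeps very slowly toward the true sum, known to lie between 22.4 and 23.3
assert partial < 80
```

The extremely slow convergence here is typical: most of the remaining sum comes from astronomically large denominators.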

Using the same type of reasoning, much sharper bounds for the sum of the Kempner series can also be found. This 100-year-old article from the American Mathematical Monthly demonstrates that the sum of the Kempner series is between 22.4 and 23.3.  For more information about approximating the sum of the Kempner series, see Mathworld and Wikipedia.

It should be noted that there’s nothing particularly special about the digit 9 in the above discussion. If all denominators containing the string 314159265, or any other finite string of digits, are eliminated from the harmonic series, then the resulting series will always converge.

Thoughts on Infinity (Part 2a)

Last summer, Math With Bad Drawings had a nice series on the notion of infinity that I recommend highly. This topic is a perennial struggle for math majors to grasp, and I like the approach that the author uses to sell this difficult notion.

Here’s Part 2 on the harmonic series, which is extremely well-written and which I recommend highly. Here’s a brief summary: the infinite harmonic series

\displaystyle \sum_{n=1}^\infty \frac{1}{n}

diverges. This is a perennial head-scratcher for students, as the terms become smaller and smaller yet the infinite series diverges.

To show this, notice that

\displaystyle \frac{1}{3} + \frac{1}{4} > \displaystyle \frac{1}{4} + \frac{1}{4} = \displaystyle \frac{1}{2},

\displaystyle \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} > \displaystyle \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} = \displaystyle \frac{1}{2},

and so on. Therefore,

\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} > \displaystyle 1 + \frac{1}{2} + \frac{1}{2} = 2,

\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} +\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} > \displaystyle 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} = \displaystyle \frac{5}{2},

and, in general,

\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{2^n} \ge \displaystyle 1+\frac{n}{2}.

Since \displaystyle \lim_{n \to \infty} \left(1 + \frac{n}{2} \right) = \infty, we can conclude that the harmonic series diverges.
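The grouping bound above is easy to verify numerically for small n; here is a quick Python check (mine, not part of the original post):

```python
# Verify the grouping bound H_{2^n} >= 1 + n/2 for small n.
def harmonic(N):
    """Partial sum 1 + 1/2 + ... + 1/N of the harmonic series."""
    return sum(1.0 / k for k in range(1, N + 1))

for n in range(1, 15):
    assert harmonic(2 ** n) >= 1 + n / 2   # equality only at n = 1
print(harmonic(2 ** 14))                   # already bigger than 1 + 14/2 = 8
```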

However, here’s an amazing fact which I hadn’t known before the Math With Bad Drawings post: if you eliminate from the harmonic series all of the fractions whose denominator contains a 9, then the new series converges!

I’ll discuss the proof of this fact in tomorrow’s post. Until then, here’s a copy of the comic used in the Math With Bad Drawings post.

Source: http://smbc-comics.com/index.php?id=3777

Win $1,000,000 by Playing Minesweeper!

Yes, you read that headline correctly: it is possible for a mathematician to win $1,000,000 by playing Minesweeper. Here’s how:

First, the source of the $1,000,000. In 2000, the Clay Mathematics Institute created a list of important unsolved problems in mathematics (see also Wikipedia), offering a prize of $1,000,000 for the solution of each. Specifically, according to the rules,

[A] proposed solution must be published in a refereed mathematics publication of worldwide repute… and it must also have general acceptance in the mathematics community two years after.

Of the seven problems on the list, one (the Poincaré conjecture) has been solved so far. Incredibly, the mathematician who produced the proof turned down the $1,000,000. He was also awarded the Fields Medal, considered the Nobel Prize of mathematics, but he refused that as well.

The Millennium Problems parallel the famous 23 Hilbert problems posed by eminent mathematician David Hilbert in 1900. The Hilbert problems were unsolved at the time and were posed as a challenge to the mathematicians of the 20th century. These problems certainly generated a lot of buzz in the mathematical community, and most have been solved since 1900. (Sadly, none of the solvers won $1,000,000 for their work!)

What does this have to do with Minesweeper?

One of the problems is the famed P vs. NP problem. From Clay Mathematics:

Suppose that you are organizing housing accommodations for a group of four hundred university students. Space is limited and only one hundred of the students will receive places in the dormitory. To complicate matters, the Dean has provided you with a list of pairs of incompatible students, and requested that no pair from this list appear in your final choice. This is an example of what computer scientists call an NP-problem, since it is easy to check if a given choice of one hundred students proposed by a coworker is satisfactory (i.e., no pair taken from your coworker’s list also appears on the list from the Dean’s office), however the task of generating such a list from scratch seems to be so hard as to be completely impractical. Indeed, the total number of ways of choosing one hundred students from the four hundred applicants is greater than the number of atoms in the known universe! Thus no future civilization could ever hope to build a supercomputer capable of solving the problem by brute force; that is, by checking every possible combination of 100 students. However, this apparent difficulty may only reflect the lack of ingenuity of your programmer. In fact, one of the outstanding problems in computer science is determining whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure. Problems like the one listed above certainly seem to be of this kind, but so far no one has managed to prove that any of them really are so hard as they appear, i.e., that there really is no feasible way to generate an answer with the help of a computer. Stephen Cook and Leonid Levin formulated the P (i.e., easy to find) versus NP (i.e., easy to check) problem independently in 1971.

In 2000, Richard Kaye (University of Birmingham, UK) connected Minesweeper to the P vs. NP problem. Famed mathematics writer Ian Stewart did a very nice job of describing the connection between the two, so I’ll simply refer to his post for a detailed description. Dr. Kaye’s website also gives a lot of information about the connection.

Proving theorems and special cases: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprised my series on attempting to prove theorems by looking at special cases.

Famous propositions that are true for the first few cases (or many cases) before failing.

Part 1: The proposition “Is n^2 -n + 41 always a prime number?” is true for the first 40 cases but fails with n = 41.

Part 2: The Polya conjecture is true for over 900 million cases before failing.

Part 3: A conjecture about the distribution of prime numbers is true for the first 10^{318} cases before failing.

Mathematical induction

Part 4: Pedagogical thoughts on mathematical induction.

Part 5: More pedagogical thoughts on mathematical induction.

Famous unsolved problems in mathematics

Part 6: The Goldbach conjecture.

Part 7: The twin prime conjecture.

Part 8: The Collatz conjecture.

Part 9: The Riemann hypothesis.

Theorems in secondary mathematics that can be proven using special cases

Part 10: The sum of the angles in a convex polygon with n sides is 180(n-2) degrees.

Part 11: The Law of Cosines.

Part 12: Trig identities for \sin(\alpha \pm \beta) and \cos(\alpha \pm \beta).

Part 13: Uniqueness of logarithms.

Part 14: The Power Rule for derivatives, or \displaystyle \frac{d}{dx} \left(x^n \right) = nx^{n-1}.

Part 15: The Mean Value Theorem.

Part 16: An old calculus problem of mine.

Part 17: Ending thoughts.