What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 5

Check out this lovely integral, dubbed the Sophomore’s Dream, found by Johann Bernoulli in 1697 (Gamma, page 44):

\displaystyle \int_0^1 \frac{dx}{x^x} = \displaystyle \frac{1}{1^1} + \frac{1}{2^2} + \frac{1}{3^3} + \frac{1}{4^4} + \dots.

I’ll refer you to either Wikipedia or MathWorld for the derivation.
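Bernoulli’s identity is also easy to check numerically. Here’s a quick sketch in Python (the midpoint rule and the cutoffs are my own arbitrary choices, not anything from the book):

```python
import math

def sophomores_dream_series(terms=25):
    """Right-hand side: the sum of 1/n^n, which converges very quickly."""
    return sum(1 / n**n for n in range(1, terms + 1))

def sophomores_dream_integral(steps=200_000):
    """Left-hand side: midpoint rule for the integral of x^(-x) over (0, 1).

    Since x^(-x) = exp(-x ln x) tends to 1 as x -> 0+, the integrand is
    perfectly tame near the left endpoint.
    """
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += math.exp(-x * math.log(x))
    return total * h

print(sophomores_dream_series())   # ≈ 1.29128599706...
print(sophomores_dream_integral()) # agrees to many decimal places
```

The midpoint rule conveniently never evaluates the integrand at x = 0, so no special handling of the endpoint is needed.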


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge on the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate-level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 4

For s > 1, Riemann’s famous zeta function is defined by

\zeta(s) = \displaystyle \sum_{n=1}^{\infty} \frac{1}{n^s}.

This is also called a p-series in calculus.

What I didn’t know (Gamma, page 41) is that, in 1748, Leonhard Euler exactly computed this infinite series for s = 26 without a calculator! Here’s the answer:

\displaystyle 1 + \frac{1}{2^{26}} + \frac{1}{3^{26}} + \frac{1}{4^{26}} + \dots = \frac{1,315,862 \pi^{26}}{11,094,481,976,030,578,125}.

I knew that Euler was an amazing human calculator, but I didn’t know he was that amazing.
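Euler’s closed form can be sanity-checked in double precision; the terms 1/n^{26} shrink so fast that a handful of them suffice (the code below is my own illustration, not from the book):

```python
from fractions import Fraction
from math import pi

# Euler's closed form, with the rational factor kept exact until the end.
euler_value = Fraction(1_315_862, 11_094_481_976_030_578_125) * pi**26

# Direct summation of the series; terms beyond n = 20 are far below
# double-precision resolution.
zeta_26 = sum(n**-26 for n in range(1, 21))

print(zeta_26)      # ≈ 1.0000000149
print(euler_value)  # essentially identical
```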


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 3

At the time of this writing, it is unknown if there are infinitely many twin primes, which are prime numbers that differ by 2 (like 3 and 5, 5 and 7, 11 and 13, and 17 and 19), although significant progress has been made in recent years. However, it is known (Gamma, page 30) that the sum of the reciprocals of the twin primes converges:

\displaystyle \left( \frac{1}{3} + \frac{1}{5} \right) + \left( \frac{1}{5} + \frac{1}{7} \right) + \left( \frac{1}{11} + \frac{1}{13} \right) + \left( \frac{1}{17} + \frac{1}{19} \right) + \dots = 1.9021605824\dots.

This constant is known as Brun’s constant (see also MathWorld). In the process of computing this number, the infamous 1994 Pentium FDIV bug was found.

Although this sum converges, it’s still unknown if there are infinitely many twin primes, since a series with infinitely many terms can still converge (like a geometric series).
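To see just how slowly these partial sums creep toward Brun’s constant, here is a small Python experiment (the sieve and the bound of one million are arbitrary choices of mine):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes, returning all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def brun_partial_sum(limit):
    """Sum 1/p + 1/(p+2) over twin-prime pairs (p, p+2) with p+2 <= limit."""
    prime_set = set(primes_up_to(limit))
    return sum(1 / p + 1 / (p + 2) for p in prime_set if p + 2 in prime_set)

# Even with every twin prime below one million, the partial sum is still
# well short of the limiting value 1.9021605824...
print(brun_partial_sum(1_000_000))
```

Note that 5 is counted twice, once in each of the pairs (3, 5) and (5, 7), exactly as in the displayed sum above.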


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 2

Let’s define partial sums of the harmonic series as follows:

H(m,n) = \displaystyle \frac{1}{m} + \frac{1}{m+1} + \frac{1}{m+2} + \dots + \frac{1}{n-1} + \frac{1}{n},

where m < n are positive integers. Here are a couple of facts that I didn’t know before reading Gamma (pages 24-25):

  • H(m,n) is never equal to an integer.
  • The only values of n for which H(1,n) has a terminating decimal expansion are n = 2 and n = 6 (namely H(1,2) = 1.5 and H(1,6) = 2.45).
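Both facts invite experimentation with exact rational arithmetic. Here is a minimal sketch using Python’s Fraction (the search bound of 30 is an arbitrary choice of mine):

```python
from fractions import Fraction

def H(m, n):
    """The partial sum 1/m + 1/(m+1) + ... + 1/n, computed exactly."""
    return sum(Fraction(1, k) for k in range(m, n + 1))

# Spot-check the first fact: no H(m, n) with m < n reduces to an integer.
assert all(H(m, n).denominator != 1
           for m in range(1, 30) for n in range(m + 1, 30))

# The two special values of H(1, n), whose decimal expansions terminate.
print(H(1, 2))  # 3/2  = 1.5
print(H(1, 6))  # 49/20 = 2.45
```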


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 1


It is well known that the harmonic series diverges:

\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \dots = \infty.

This means that, no matter what number N you choose, I can find a number n so that

\displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \dots + \frac{1}{n} > N.

What I didn’t know (p. 23 of Gamma) is that, in 1968, somebody actually figured out the precise number of terms that are needed for the sum on the left hand side to exceed 100. Here’s the answer:

15,092,688,622,113,788,323,693,563,264,538,101,449,859,497.

With one fewer term, the sum is a little less than 100.
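No computer can add roughly 1.5 × 10^{43} terms one at a time, but the standard asymptotic expansion H(n) \approx \ln n + \gamma + 1/(2n) shows that this count sits right at the threshold. A sketch in Python (note that double precision cannot resolve the gap of 1/n \approx 10^{-43} between consecutive partial sums, so this is a consistency check rather than a proof):

```python
import math

# The 1968 count, copied from the text above.
N = 15_092_688_622_113_788_323_693_563_264_538_101_449_859_497

EULER_GAMMA = 0.5772156649015329

# Asymptotic harmonic number: the 1/(2n) term is ~3e-44 and irrelevant here.
H_N = math.log(N) + EULER_GAMMA

print(H_N)  # ≈ 100.0, agreeing to roughly a dozen decimal places
```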

Math Maps The Island of Utopia

Under the category of “Somebody Had To Figure It Out”: Dr. Andrew Simoson of King University (Bristol, Tennessee) used calculus to determine the shape of the island of Utopia in Sir Thomas More’s 500-year-old book, based on the description of the island given in the book’s introduction.

News article: https://www.insidescience.org/news/math-maps-island-thomas-mores-utopia

Paper by Dr. Simoson: http://archive.bridgesmathart.org/2016/bridges2016-65.html

Thoughts on Infinity: Index

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on various lessons I’ve learned while trying to answer the questions posed by gifted elementary school students.

Part 1: Different types of countable sets

Part 2a: Divergence of the harmonic series.

Part 2b: Convergence of the Kempner series.

Part 3a: Conditionally convergent series or products shouldn’t be rearranged.

Part 3b: Definition of the Euler-Mascheroni constant \gamma.

Part 3c: Evaluation of the conditionally convergent series \displaystyle 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots

Part 3d: Confirmation of this evaluation using technology.

Part 3e: Evaluation of a rearrangement of this conditionally convergent series.

Part 3f: Confirmation of this different evaluation using technology.

Part 3g: Closing thoughts.

 

Computing e to Any Power (Part 5)

In this series, I’m exploring the following anecdote from the book Surely You’re Joking, Mr. Feynman!, which I read and re-read when I was young until I almost had the book memorized.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

I mumbled something about how it was easy to calculate e to any power using that series (you just substitute the power for x).

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

“I just summed the series.”

“Nobody can sum the series that fast. You must just happen to know that one. How about e to the 3?”

“Look,” I say. “It’s hard work! Only one a day!”

“Hah! It’s a fake!” they say, happily.

“All right,” I say, “It’s 20.085.”

They look in the book as I put a few more figures on. They’re all excited now, because I got another one right.

Here are these great mathematicians of the day, puzzled at how I can compute e to any power! One of them says, “He just can’t be substituting and summing—it’s too hard. There’s some trick. You couldn’t do just any old number like e to the 1.4.”

I say, “It’s hard work, but for you, OK. It’s 4.05.”

As they’re looking it up, I put on a few more digits and say, “And that’s the last one for the day!” and walk out.

What happened was this: I happened to know three numbers—the logarithm of 10 to the base e (needed to convert numbers from base 10 to base e), which is 2.3026 (so I knew that e to the 2.3 is very close to 10), and because of radioactivity (mean-life and half-life), I knew the log of 2 to the base e, which is .69315 (so I also knew that e to the .7 is nearly equal to 2). I also knew e (to the 1), which is 2.71828.

The first number they gave me was e to the 3.3, which is e to the 2.3—ten—times e, or 27.18. While they were sweating about how I was doing it, I was correcting for the extra .0026—2.3026 is a little high.

I knew I couldn’t do another one; that was sheer luck. But then the guy said e to the 3: that’s e to the 2.3 times e to the .7, or ten times two. So I knew it was 20.something, and while they were worrying how I did it, I adjusted for the .693.

Now I was sure I couldn’t do another one, because the last one was again by sheer luck. But the guy said e to the 1.4, which is e to the .7 times itself. So all I had to do is fix up 4 a little bit!

They never did figure out how I did it.

My students invariably love this story; let’s take a look at the third calculation.

Feynman knew that e^{0.69315} \approx 2, so that

e^{0.69315} e^{0.69315} = e^{1.3863} \approx 2 \times 2 = 4.

Therefore, again using the Taylor series expansion:

e^{1.4} = e^{1.3863} e^{0.0137} = 4 e^{0.0137}

\approx 4 \times (1 + 0.0137)

= 4 + 4 \times 0.0137

\approx 4.05.

Again, I have no idea how he put on a few more digits in his head (other than his sheer brilliance), as this would require knowing the value of \ln 2 to six or seven digits as well as computing the next term in the Taylor series expansion.
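Feynman’s mental arithmetic for e^{1.4} can be replayed in a few lines (a sketch; the variable names and the final rounding are mine):

```python
import math

# Feynman's one memorized input here: ln 2 ≈ 0.69315, so e^1.3863 ≈ 2 × 2.
base = 2 * 2
correction = 1.4 - 2 * 0.69315     # the leftover exponent, ≈ 0.0137

# First-order Taylor approximation: e^x ≈ 1 + x for small x.
estimate = base * (1 + correction)

print(round(estimate, 2))  # 4.05, just as in the story
print(math.exp(1.4))       # ≈ 4.0552, for comparison
```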

Computing e to Any Power (Part 4)

In this series, I’m exploring the anecdote from the book Surely You’re Joking, Mr. Feynman! quoted in full in Part 5 above, which I read and re-read when I was young until I almost had the book memorized.

My students invariably love this story; let’s take a look at the second calculation.

Feynman knew that e^{2.3026} \approx 10 and e^{0.69315} \approx 2, so that

e^{2.3026} e^{0.69315} = e^{2.99575} \approx 10 \times 2 = 20.

Therefore, again using the Taylor series expansion:

e^3 = e^{2.99575} e^{0.00425} = 20 e^{0.00425}

\approx 20 \times (1 + 0.00425)

= 20 + 20 \times 0.00425

= 20.085.

Again, I have no idea how he put on a few more digits in his head (other than his sheer brilliance), as this would require knowing the values of \ln 10 and \ln 2 to six or seven digits as well as computing the next term in the Taylor series expansion:

e^3 = e^{\ln 20} e^{3 - \ln 20}

= 20 e^{0.0042677}

\approx 20 \times \left(1 + 0.0042677 + \frac{0.0042677^2}{2!} \right)

\approx 20.0855361\dots

This compares favorably with the actual answer, e^3 \approx 20.0855369\dots.
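The second-order correction above can be checked directly (a quick sketch; I carry a few more digits of \ln 10 and \ln 2 than the text does):

```python
import math

LN10 = 2.302585093   # ln 10, slightly more precise than Feynman's 2.3026
LN2 = 0.693147181    # ln 2

x = 3 - (LN10 + LN2)          # 3 - ln 20 ≈ 0.0042677

# Second-order Taylor approximation: e^x ≈ 1 + x + x^2/2 for small x.
estimate = 20 * (1 + x + x**2 / 2)

print(estimate)     # ≈ 20.0855367
print(math.exp(3))  # ≈ 20.0855369
```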