My Favorite One-Liners: Part 29

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is one that I’ll use when I need my students to remember something — especially a difficult concept — that somebody else taught them in a previous course.

For example, in my probability class, I’ll introduce the Poisson distribution

P(X = k) = e^{-\mu} \displaystyle \frac{\mu^k}{k!},

where \mu > 0 and the permissible values of k are non-negative integers.

In particular, since these are probabilities and exactly one of these values must be taken, this means that

\displaystyle \sum_{k=0}^\infty e^{-\mu} \frac{\mu^k}{k!} = 1.

At this point, I want students to remember that they’ve actually seen this before, so I replace \mu by x and then multiply both sides by e^x:

\displaystyle \sum_{k=0}^\infty \frac{x^k}{k!} = e^x.

Of course, this is the Taylor series expansion for e^x. However, my experience is that most students have decidedly mixed feelings about Taylor series; often, it’s the last thing that they learn in Calculus II, which means it’s the first thing that they forget when the semester is over. Also, most students have a really hard time with Taylor series when they first learn about them.

So here’s my one-liner that I’ll say at this point: “Does this bring back any bad memories for anyone? Perhaps like an old Spice Girls song?” And this never fails to get an understanding laugh before I remind them about Taylor series.
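For readers who want to see the convergence for themselves, here’s a quick numerical check in Python (my own illustration; the choice \mu = 4 is arbitrary):

```python
# Verify numerically that the Poisson probabilities sum to 1, i.e. that
# sum over k of mu^k / k! equals e^mu. The choice mu = 4 is arbitrary.
from math import exp, factorial

mu = 4.0
total = sum(exp(-mu) * mu**k / factorial(k) for k in range(100))
print(total)  # approximately 1.0, up to floating-point rounding
```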

 

Computing e to Any Power: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprise my series examining one of Richard Feynman’s anecdotes about mentally computing e^x for three different values of x.

Part 1: Feynman’s anecdote.

Part 2: Logarithm and antilogarithm tables from the 1940s.

Part 3: A closer look at Feynman’s computation of e^{3.3}.

Part 4: A closer look at Feynman’s computation of e^{3}.

Part 5: A closer look at Feynman’s computation of e^{1.4}.

 

 

Computing e to Any Power (Part 5)

In this series, I’m exploring the following anecdote from the book Surely You’re Joking, Mr. Feynman!, which I read and re-read when I was young until I almost had the book memorized.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

I mumbled something about how it was easy to calculate e to any power using that series (you just substitute the power for x).

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

“I just summed the series.”

“Nobody can sum the series that fast. You must just happen to know that one. How about e to the 3?”

“Look,” I say. “It’s hard work! Only one a day!”

“Hah! It’s a fake!” they say, happily.

“All right,” I say, “It’s 20.085.”

They look in the book as I put a few more figures on. They’re all excited now, because I got another one right.

Here are these great mathematicians of the day, puzzled at how I can compute e to any power! One of them says, “He just can’t be substituting and summing—it’s too hard. There’s some trick. You couldn’t do just any old number like e to the 1.4.”

I say, “It’s hard work, but for you, OK. It’s 4.05.”

As they’re looking it up, I put on a few more digits and say, “And that’s the last one for the day!” and walk out.

What happened was this: I happened to know three numbers—the logarithm of 10 to the base e (needed to convert numbers from base 10 to base e), which is 2.3026 (so I knew that e to the 2.3 is very close to 10), and because of radioactivity (mean-life and half-life), I knew the log of 2 to the base e, which is .69315 (so I also knew that e to the .7 is nearly equal to 2). I also knew e (to the 1), which is 2.71828.

The first number they gave me was e to the 3.3, which is e to the 2.3—ten—times e, or 27.18. While they were sweating about how I was doing it, I was correcting for the extra .0026—2.3026 is a little high.

I knew I couldn’t do another one; that was sheer luck. But then the guy said e to the 3: that’s e to the 2.3 times e to the .7, or ten times two. So I knew it was 20.something, and while they were worrying how I did it, I adjusted for the .693.

Now I was sure I couldn’t do another one, because the last one was again by sheer luck. But the guy said e to the 1.4, which is e to the .7 times itself. So all I had to do is fix up 4 a little bit!

They never did figure out how I did it.

My students invariably love this story; let’s take a look at the third calculation.

Feynman knew that e^{0.69315} \approx 2, so that

e^{0.69315} e^{0.69315} = e^{1.3863} \approx 2 \times 2 = 4.

Therefore, again using the Taylor series expansion:

e^{1.4} = e^{1.3863} e^{0.0137} = 4 e^{0.0137}

\approx 4 \times (1 + 0.0137)

= 4 + 4 \times 0.0137

\approx 4.05.

Again, I have no idea how he put on a few more digits in his head (other than his sheer brilliance), as this would require knowing the value of \ln 2 to six or seven digits as well as computing the next term in the Taylor series expansion.
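Here is the main calculation as a short Python sketch (my own illustration; only the value \ln 2 \approx 0.69315 is assumed known):

```python
# Feynman's trick for e^{1.4}: since 1.4 = 2 ln 2 + 0.0137, we have
# e^{1.4} = 2 * 2 * e^{0.0137}, and the last factor is about 1 + 0.0137.
import math

ln2 = 0.69315
correction = 1.4 - 2 * ln2        # 0.0137
estimate = 4 * (1 + correction)   # first-order Taylor correction
print(estimate)                   # 4.0548
print(math.exp(1.4))              # 4.0552..., the actual value
```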

Computing e to Any Power (Part 4)

In this series, I’m exploring the following anecdote from the book Surely You’re Joking, Mr. Feynman!, which I read and re-read when I was young until I almost had the book memorized.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

I mumbled something about how it was easy to calculate e to any power using that series (you just substitute the power for x).

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

“I just summed the series.”

“Nobody can sum the series that fast. You must just happen to know that one. How about e to the 3?”

“Look,” I say. “It’s hard work! Only one a day!”

“Hah! It’s a fake!” they say, happily.

“All right,” I say, “It’s 20.085.”

They look in the book as I put a few more figures on. They’re all excited now, because I got another one right.

Here are these great mathematicians of the day, puzzled at how I can compute e to any power! One of them says, “He just can’t be substituting and summing—it’s too hard. There’s some trick. You couldn’t do just any old number like e to the 1.4.”

I say, “It’s hard work, but for you, OK. It’s 4.05.”

As they’re looking it up, I put on a few more digits and say, “And that’s the last one for the day!” and walk out.

What happened was this: I happened to know three numbers—the logarithm of 10 to the base e (needed to convert numbers from base 10 to base e), which is 2.3026 (so I knew that e to the 2.3 is very close to 10), and because of radioactivity (mean-life and half-life), I knew the log of 2 to the base e, which is .69315 (so I also knew that e to the .7 is nearly equal to 2). I also knew e (to the 1), which is 2.71828.

The first number they gave me was e to the 3.3, which is e to the 2.3—ten—times e, or 27.18. While they were sweating about how I was doing it, I was correcting for the extra .0026—2.3026 is a little high.

I knew I couldn’t do another one; that was sheer luck. But then the guy said e to the 3: that’s e to the 2.3 times e to the .7, or ten times two. So I knew it was 20.something, and while they were worrying how I did it, I adjusted for the .693.

Now I was sure I couldn’t do another one, because the last one was again by sheer luck. But the guy said e to the 1.4, which is e to the .7 times itself. So all I had to do is fix up 4 a little bit!

They never did figure out how I did it.

My students invariably love this story; let’s take a look at the second calculation.

Feynman knew that e^{2.3026} \approx 10 and e^{0.69315} \approx 2, so that

e^{2.3026} e^{0.69315} = e^{2.99575} \approx 10 \times 2 = 20.

Therefore, again using the Taylor series expansion:

e^3 = e^{2.99575} e^{0.00425} = 20 e^{0.00425}

\approx 20 \times (1 + 0.00425)

= 20 + 20 \times 0.00425

= 20.085.

Again, I have no idea how he put on a few more digits in his head (other than his sheer brilliance), as this would require knowing the values of \ln 10 and \ln 2 to six or seven digits as well as computing the next term in the Taylor series expansion:

e^3 = e^{\ln 20} e^{3 - \ln 20}

= 20 e^{0.0042677}

\approx 20 \times \left(1 + 0.0042677 + \displaystyle \frac{0.0042677^2}{2!} \right)

\approx 20.0855361\dots

This compares favorably with the actual answer, e^3 \approx 20.0855369\dots.
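The whole refinement fits in a few lines of Python (a sketch of my own; it uses the full-precision value of 3 - \ln 20 rather than the rounded 0.0042677, so the last digits differ slightly from the hand computation above):

```python
# e^3 = 20 * e^{3 - ln 20}, with the small correction factor expanded
# to two terms of its Taylor series.
import math

delta = 3 - math.log(20)                  # 0.0042677...
approx = 20 * (1 + delta + delta**2 / 2)  # two-term Taylor correction
print(approx)                             # 20.0855366...
print(math.exp(3))                        # 20.0855369..., the actual value
```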

Computing e to Any Power (Part 2)

In this series, I’m looking at a wonderful anecdote from Nobel Prize-winning physicist Richard P. Feynman from his book Surely You’re Joking, Mr. Feynman!. This story concerns a time that he computed e^x mentally for a few values of x, much to the astonishment of his companions.

Part of this story directly ties to calculus.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

As noted, this refers to the Taylor series expansion of e^x, which can be used to compute e to any power. The terms get very small very quickly because of the factorials in the denominators, making the series well suited to computing e^x. Indeed, this series is used by modern calculators (with a few tricks to accelerate convergence). In other words, the series from calculus explains how the mysterious “black box” of a graphing calculator actually works.
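For the curious, here’s that recipe as a short Python sketch (my own illustration): each new term is the previous term times x divided by the next integer, exactly as Feynman describes.

```python
# Sum the Taylor series for e^x term by term: each term is the previous
# term multiplied by x and divided by the next integer.
def exp_series(x, n_terms=30):
    term, total = 1.0, 1.0        # the k = 0 term
    for k in range(1, n_terms):
        term *= x / k             # turns x^{k-1}/(k-1)! into x^k/k!
        total += term
    return total

print(exp_series(3.3))  # 27.1126389..., matching the table discussed below
```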

Continuing the story…

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

For now, I’m going to ignore how Feynman did this computation in his head and instead discuss “the table.” The setting for this story was approximately 1940, long before the advent of handheld calculators. I’ll often ask my students, “The Brooklyn Bridge got built. So how did people compute e^x before calculators were invented?” The answer is by Taylor series, which were used to produce tables of values of e^x. So, if someone wanted to find e^{3.3}, they just had a book on the shelf.

For example, the following page comes from the book Marks’ Mechanical Engineers’ Handbook, 6th edition, which was published in 1958 and which I happen to keep on my bookshelf at home.

[Image: a page of tables of the exponential function e^x from Marks’ Mechanical Engineers’ Handbook]

Looking down the fifth and sixth columns of this table, we see that e^{3.3} \approx 27.11. Somebody had computed all of these things (and plenty more) using the Taylor series, and they were compiled into a book and sold to mathematicians, scientists, and engineers.

But what if we needed an approximation more accurate than four significant digits? Back in those days, there were only two options: do the Taylor series yourself, or buy a bigger book with more accurate tables.
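These days, of course, the “bigger book” is just a few more lines of code. Here’s a sketch (my own) that sums the same series in exact rational arithmetic, easily beating four significant digits:

```python
# Sum the series for e^{3.3} using exact rational arithmetic; 60 terms is
# far more than enough, since 3.3^k / k! shrinks rapidly.
from fractions import Fraction

x = Fraction(33, 10)
term, total = Fraction(1), Fraction(1)
for k in range(1, 60):
    term *= x / k
    total += term
print(float(total))  # 27.1126389206..., versus the table's 27.11
```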

Lessons from teaching gifted elementary students (Part 6b)

Every so often, I’ll informally teach a class of gifted elementary-school students. I greatly enjoy interacting with them, and I especially enjoy the questions they pose. Often these children pose questions that no one else will think about, and answering these questions requires a surprising depth of mathematical knowledge.

Here’s a question I once received:

255/256 to what power is equal to 1/2? And please don’t use a calculator.

Here’s how I answered this question without using a calculator… in fact, I answered it without writing anything down at all. I thought of the question as

\displaystyle \left( 1 - \epsilon \right)^x = \displaystyle \frac{1}{2}.

Taking natural logarithms of both sides,

\displaystyle x \ln (1 - \epsilon) = \ln \displaystyle \frac{1}{2}

\displaystyle x \ln (1 - \epsilon) = -\ln 2

I was fortunate that my class chose 1/2, as I had memorized (from reading and re-reading Surely You’re Joking, Mr. Feynman! when I was young) that \ln 2 \approx 0.693. Therefore, we have

x \ln (1 - \epsilon) \approx -0.693.

Next, I used the Taylor series expansion

\ln(1+t) = t - \displaystyle \frac{t^2}{2} + \frac{t^3}{3} - \dots

with t = -\epsilon, discarding the negligible higher-order terms, to reduce this to

-x \epsilon \approx -0.693,

or

x \approx \displaystyle \frac{0.693}{\epsilon}.

For my students’ problem, I had \epsilon = \frac{1}{256}, and so

x \approx 256(0.693).

So all I had left was the small matter of multiplying these two numbers. I thought of this as

x \approx 256(0.7 - 0.007).

Multiplying 256 and 7 in my head took a minute or two:

256 \times 7 = 250 \times 7 + 6 \times 7

= 250 \times (8-1) + 42

= 250 \times 8 - 250 + 42

= 2000 - 250 + 42

= 1750 + 42

= 1792.

Therefore, 256 \times 0.7 = 179.2 and 256 \times 0.007 = 1.792 \approx 1.8, and so I had the answer of

x \approx 179.2 - 1.8 = 177.4 \approx 177.

So, after a couple minutes’ thought, I gave the answer of 177. I knew this would be close, but I had no idea it would be so close to the right answer, as

x = \displaystyle \frac{\displaystyle \ln \frac{1}{2} }{\displaystyle \ln \frac{255}{256}} \approx 177.0988786\dots
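A quick Python check (my own) compares the mental shortcut with the exact answer:

```python
# Compare the shortcut x = ln(2) / epsilon with the exact solution of
# (255/256)^x = 1/2.
import math

exact = math.log(1 / 2) / math.log(255 / 256)
shortcut = 256 * 0.693           # the mental estimate before rounding
print(exact)                     # 177.0988786...
print(shortcut)                  # 177.408
```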

Thoughts on Infinity (Part 3g)

We have seen in recent posts that

\displaystyle 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots = \ln 2.

One way of remembering this fact is by using the Taylor series expansion for \ln(1+x):

\ln(1+x) = x - \displaystyle \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \dots

“Therefore,” the first series can be obtained from the second series by substituting x=1.

I placed “therefore” in quotation marks because this reasoning is completely invalid, even though it happens to stumble across the correct answer in this instance. The radius of convergence for the above Taylor series is 1, which can be verified by using the Ratio Test. So the series converges absolutely for |x| < 1 and diverges for |x| > 1. The boundary |x| = 1, on the other hand, has to be checked separately for convergence.

In other words, plugging in x=1 might be a useful way to remember the formula, but it’s not a proof of the formula and certainly not a technique that I want to encourage students to use!

It’s easy to find examples where just plugging in the boundary point happens to give the correct answer (see above). It’s also easy to find examples where plugging in the boundary point gives an incorrect answer because the series actually diverges: for example, substituting x = -1 into the geometric series

\displaystyle \frac{1}{1-x} = 1 + x + x^2 + x^3 + x^4 + \dots

yields the divergent series 1 - 1 + 1 - 1 + \dots, even though the left-hand side equals \displaystyle \frac{1}{2} at x = -1.
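A quick numerical sketch (my own) makes the contrast vivid: the partial sums of the alternating harmonic series creep toward \ln 2, while the partial sums of the geometric series at x = -1 never settle down.

```python
# Partial sums: the alternating harmonic series versus the geometric
# series evaluated at x = -1.
import math

s = sum((-1) ** (k + 1) / k for k in range(1, 100001))
print(s, math.log(2))  # both near 0.69314; the agreement improves slowly

partials = [sum((-1) ** k for k in range(n)) for n in range(1, 7)]
print(partials)        # [1, 0, 1, 0, 1, 0]: no limit, so the series diverges
```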

However, I’ve been scratching my head to think of an example where plugging in the boundary point gives an incorrect answer because the series converges but converges to a different number. I could’ve sworn that I saw an example like this when I was a calculus student, but I can’t seem to find an example in Apostol’s calculus text. (In hindsight, Abel’s theorem suggests why: if a power series converges at a boundary point, its sum there must agree with the limit of the function from inside the interval of convergence, so no such example can exist when the function extends continuously to the boundary.)

 

How I Impressed My Wife: Part 4h

So far in this series, I have used three different techniques to show that

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x} = \displaystyle \frac{2\pi}{|b|}.

For the third technique, a key step in the calculation was showing that the residue of the function

f(z) = \displaystyle \frac{1}{z^2 + 2\frac{S}{R}z + 1} = \displaystyle \frac{1}{(z-r_1)(z-r_2)}

at the point

r_1 = \displaystyle \frac{-S + \sqrt{S^2 -R^2}}{R}

was equal to

\displaystyle \frac{R}{ 2 \sqrt{S^2-R^2} }.

Initially, I did this by explicitly computing the Laurent series expansion about z = r_1 and identifying the coefficient for the term (z-r_1)^{-1}.

In this post, I’d like to discuss another way that this residue could have been obtained.

Notice that the function f(z) has the form \displaystyle \frac{g(z)}{(z-r) h(z)}, where g and h are differentiable functions such that g(r) \ne 0 and h(r) \ne 0. Therefore, we may rewrite this function using the Taylor series expansion of \displaystyle \frac{g(z)}{h(z)} about z = r:

f(z) = \displaystyle \frac{1}{z-r} \left[ \frac{g(z)}{h(z)} \right]

f(z) = \displaystyle \frac{1}{z-r} \left[ a_0 + a_1 (z-r) + a_2 (z-r)^2 + a_3 (z-r)^3 + \dots \right]

f(z) = \displaystyle \frac{a_0}{z-r} + a_1 + a_2 (z-r) + a_3 (z-r)^2 + \dots

Clearly,

\displaystyle \lim_{z \to r} (z-r) f(z) = \displaystyle \lim_{z \to r} \left[ a_0 + a_1 (z-r) + a_2 (z-r)^2 + a_3 (z-r)^3 + \dots \right] = a_0

Therefore, the residue at z = r can be found by evaluating the limit \displaystyle \lim_{z \to r} (z-r) f(z). Notice that

\displaystyle \lim_{z \to r} (z-r) f(z) = \displaystyle \lim_{z \to r} \frac{(z-r) g(z)}{(z-r) h(z)}

= \displaystyle \lim_{z \to r} \frac{(z-r) g(z)}{H(z)},

where H(z) = (z-r) h(z) is the original denominator of f(z). By L’Hopital’s rule,

a_0 = \displaystyle \lim_{z \to r} \frac{(z-r) g(z)}{H(z)} = \displaystyle \lim_{z \to r} \frac{g(z) + (z-r) g'(z)}{H'(z)} = \displaystyle \frac{g(r)}{H'(r)}.

For the function at hand, g(z) \equiv 1 and H(z) = z^2 + 2\frac{S}{R}z + 1, so that H'(z) = 2z + 2\frac{S}{R}. Therefore, the residue at z = r_1 is equal to

\displaystyle \frac{1}{2r_1+2 \frac{S}{R}} = \displaystyle \frac{1}{2 \displaystyle \frac{-S + \sqrt{S^2 -R^2}}{R} + 2 \frac{S}{R}}

= \displaystyle \frac{1}{ ~ 2 \displaystyle \frac{\sqrt{S^2 -R^2}}{R} ~ }

= \displaystyle \frac{R}{2 \sqrt{S^2-R^2}},

matching the result found earlier.
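This computation is easy to check symbolically; here’s a sketch using the sympy library (my own check, with S and R declared positive so that the simplification goes through cleanly):

```python
# Verify the residue formula a_0 = g(r_1)/H'(r_1) for g == 1 and
# H(z) = z^2 + 2(S/R)z + 1 at r_1 = (-S + sqrt(S^2 - R^2))/R.
import sympy as sp

z = sp.symbols('z')
S, R = sp.symbols('S R', positive=True)
H = z**2 + 2 * (S / R) * z + 1
r1 = (-S + sp.sqrt(S**2 - R**2)) / R

residue = (1 / sp.diff(H, z)).subs(z, r1)
print(sp.simplify(residue))  # R/(2*sqrt(S**2 - R**2))
```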

 

How I Impressed My Wife: Part 4g

So far in this series, I have used three different techniques to show that

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x} = \displaystyle \frac{2\pi}{|b|}.

For the third technique, a key step in the calculation was showing that the residue of the function

f(z) = \displaystyle \frac{1}{z^2 + 2\frac{S}{R}z + 1} = \displaystyle \frac{1}{(z-r_1)(z-r_2)}

at the point

r_1 = \displaystyle \frac{-S + \sqrt{S^2 -R^2}}{R}

was equal to

\displaystyle \frac{R}{ 2 \sqrt{S^2-R^2} }.

Initially, I did this by explicitly computing the Laurent series expansion about z = r_1 and identifying the coefficient for the term (z-r_1)^{-1}.

In this post and the next post, I’d like to discuss alternate ways that this residue could have been obtained.

Notice that the function f(z) has the form \displaystyle \frac{g(z)}{(z-r) h(z)}, where g and h are differentiable functions such that g(r) \ne 0 and h(r) \ne 0. Therefore, we may rewrite this function using the Taylor series expansion of \displaystyle \frac{g(z)}{h(z)} about z = r:

f(z) = \displaystyle \frac{1}{z-r} \left[ \frac{g(z)}{h(z)} \right]

f(z) = \displaystyle \frac{1}{z-r} \left[ a_0 + a_1 (z-r) + a_2 (z-r)^2 + a_3 (z-r)^3 + \dots \right]

f(z) = \displaystyle \frac{a_0}{z-r} + a_1 + a_2 (z-r) + a_3 (z-r)^2 + \dots

Therefore, the residue at z = r is equal to a_0, the constant term in the Taylor expansion of \displaystyle \frac{g(z)}{h(z)} about z = r, so that

a_0 = \displaystyle \frac{g(r)}{h(r)}

For the function at hand, g(z) \equiv 1 and h(z) = z-r_2. Therefore, the residue at z = r_1 is equal to \displaystyle \frac{1}{r_1 - r_2} = \displaystyle \frac{R}{2\sqrt{S^2-R^2}}, matching the result found earlier.
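Again, a short sympy check confirms the algebra (a sketch of my own, using the two roots of z^2 + 2\frac{S}{R}z + 1 given by the quadratic formula):

```python
# The residue a_0 = 1/(r_1 - r_2) for the roots of z^2 + 2(S/R)z + 1.
import sympy as sp

S, R = sp.symbols('S R', positive=True)
root = sp.sqrt(S**2 - R**2)
r1, r2 = (-S + root) / R, (-S - root) / R
print(sp.simplify(1 / (r1 - r2)))  # R/(2*sqrt(S**2 - R**2))
```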

 

Why Does 0.999… = 1? (Index)

I’m using the Twelve Days of Christmas (and perhaps a few extra days besides) to do something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on different techniques that I’ll use to try to convince students that 0.999\dots = 1.

Part 1: Converting the decimal expansion to a fraction, with algebra.

Part 2: Rewriting both sides of the equation 1 = 3 \times \displaystyle \frac{1}{3}.

Part 3: Converting the decimal expansion to a fraction, using infinite series.

Part 4: A proof by contradiction: what number can possibly be between 0.999\dots and 1?

Part 5: Same as Part 4, except by direct reasoning.
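As a quick taste of the technique in Part 3: the decimal 0.999\dots is a geometric series with first term \displaystyle \frac{9}{10} and common ratio \displaystyle \frac{1}{10}, so that

0.999\dots = \displaystyle \sum_{k=1}^\infty \frac{9}{10^k} = \displaystyle \frac{~ 9/10 ~}{1 - 1/10} = 1.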