How I Impressed My Wife: Part 4g

So far in this series, I have used three different techniques to show that

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x} = \displaystyle \frac{2\pi}{|b|}.
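This identity is easy to sanity-check numerically. Here is a quick Python sketch of my own using the midpoint rule (a = 1 and b = 2 are arbitrary test values, not from the original posts; any b \ne 0 should work):

```python
# Numeric sanity check of the closed form Q = 2*pi/|b|.
import math

def Q_numeric(a, b, n=20000):
    """Midpoint rule on [0, 2*pi]; very accurate for smooth periodic integrands."""
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        denom = (math.cos(x) ** 2
                 + 2 * a * math.sin(x) * math.cos(x)
                 + (a * a + b * b) * math.sin(x) ** 2)
        total += h / denom
    return total

print(Q_numeric(1, 2))           # should be close to 2*pi/2 = pi
print(2 * math.pi / abs(2))
```

Because the integrand is smooth and periodic, the midpoint rule converges extremely quickly here, so even modest n gives near machine precision.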

For the third technique, a key step in the calculation was showing that the residue of the function

f(z) = \displaystyle \frac{1}{z^2 + 2\frac{S}{R}z + 1} = \displaystyle \frac{1}{(z-r_1)(z-r_2)}

at the point

r_1 = \displaystyle \frac{-S + \sqrt{S^2 -R^2}}{R}

was equal to

\displaystyle \frac{R}{ 2 \sqrt{S^2-R^2} }.

Initially, I did this by explicitly computing the Laurent series expansion about z = r_1 and identifying the coefficient for the term (z-r_1)^{-1}.

In this post and the next post, I’d like to discuss alternate ways that this residue could have been obtained.

Notice that the function f(z) has the form \displaystyle \frac{g(z)}{(z-r) h(z)}, where g and h are analytic functions with g(r) \ne 0 and h(r) \ne 0. Therefore, we may rewrite this function using the Taylor series expansion of \displaystyle \frac{g(z)}{h(z)} about z = r:

f(z) = \displaystyle \frac{1}{z-r} \left[ \frac{g(z)}{h(z)} \right]

f(z) = \displaystyle \frac{1}{z-r} \left[ a_0 + a_1 (z-r) + a_2 (z-r)^2 + a_3 (z-r)^3 + \dots \right]

f(z) = \displaystyle \frac{a_0}{z-r} + a_1 + a_2 (z-r) + a_3 (z-r)^2 + \dots

Therefore, the residue at z = r is equal to a_0, the constant term in the Taylor expansion of \displaystyle \frac{g(z)}{h(z)} about z = r. In other words,

a_0 = \displaystyle \frac{g(r)}{h(r)}

For the function at hand, g(z) \equiv 1 and h(z) = z-r_2. Therefore, the residue at z = r_1 is equal to \displaystyle \frac{1}{r_1 - r_2}, matching the result found earlier.
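As a quick numerical cross-check (my own addition, with S = 3 and R = 2 as arbitrary test values satisfying S^2 > R^2), the two expressions for the residue agree:

```python
# Numeric check that the residue of f(z) = 1/((z - r1)(z - r2)) at z = r1,
# namely 1/(r1 - r2), equals R/(2*sqrt(S^2 - R^2)).
import cmath

S, R = 3.0, 2.0
root = cmath.sqrt(S * S - R * R)
r1 = (-S + root) / R
r2 = (-S - root) / R

residue_from_roots = 1 / (r1 - r2)   # g(r1)/h(r1) with h(z) = z - r2
closed_form = R / (2 * root)

print(residue_from_roots, closed_form)
```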


Why Does 0.999… = 1? (Index)

I’m using the Twelve Days of Christmas (and perhaps a few extra days besides) to do something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on different techniques that I’ll use to try to convince students that 0.999\dots = 1.

Part 1: Converting the decimal expansion to a fraction, with algebra.

Part 2: Rewriting both sides of the equation 1 = 3 \times \displaystyle \frac{1}{3}.

Part 3: Converting the decimal expansion to a fraction, using infinite series.

Part 4: A proof by contradiction: what number can possibly be between 0.999\dots and 1?

Part 5: Same as Part 4, except by direct reasoning.




Reminding students about Taylor series: Index

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on how I remind students about Taylor series. I often use this series in a class like Differential Equations, when Taylor series are needed but my class has simply forgotten about what a Taylor series is and why it’s important.

Part 1: Introduction – Why a Taylor series is important, and different applications of Taylor series.

Part 2: How I get students to understand the finite Taylor polynomial by solving a simple initial-value problem.

Part 3: Making the jump to an infinite series, and issues about tests of convergence.

Part 4: Application to f(x) = e^x, and a numerical investigation of speed of convergence.

Part 5: Application to f(x) = \displaystyle \frac{1}{1-x} and other related functions, including f(x) = \ln(1+x) and f(x) = \tan^{-1} x.

Part 6: Application to f(x) = \sin x and f(x) = \cos x, and Euler’s formula.




Different definitions of e (Part 12): Numerical computation

In this series of posts, we have seen that the number e can be thought about in three different ways.

1. e defines a region of area 1 under the hyperbola y = 1/x.

2. We have the limits

e = \displaystyle \lim_{h \to 0} (1+h)^{1/h} = \displaystyle \lim_{n \to \infty} \left(1 + \frac{1}{n} \right)^n.

These limits form the logical basis for the continuous compound interest formula.

3. We have also shown that \frac{d}{dx} \left(e^x \right) = e^x. From this derivative, the Taylor series expansion for e^x about x = 0 can be computed:

e^x = \displaystyle \sum_{n=0}^\infty \frac{x^n}{n!}

Therefore, we can let x = 1 to find e:

e = \displaystyle \sum_{n=0}^\infty \frac{1}{n!} = \displaystyle 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \dots


In yesterday’s post, I showed that using the original definition (in terms of an area under a hyperbola) does not lend itself well to numerically approximating e. Let’s now look at the other two methods.

2. The limit e = \displaystyle \lim_{n \to \infty} \left(1 + \frac{1}{n} \right)^n gives a somewhat more tractable way of approximating e, at least with a modern calculator. However, you can probably imagine the fun of trying to use this formula without a calculator.

3. The best way to compute e (or, in general, e^x) is with Taylor series. The fractions \frac{1}{n!} get very small very quickly, leading to rapid convergence. Indeed, with only terms up to 1/6!, this approximation beats the above approximation with n = 1000. Adding just two extra terms comes close to matching the accuracy of the above limit when n = 1,000,000.
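These claims about speed of convergence are easy to check in a few lines of Python (a sketch of my own):

```python
# Comparing the two ways of approximating e: the limit (1 + 1/n)^n
# versus partial sums of the series sum of 1/k!.
import math

def e_limit(n):
    return (1 + 1 / n) ** n

def e_series(terms):
    """Sum of 1/k! for k = 0, ..., terms."""
    total, fact = 0.0, 1
    for k in range(terms + 1):
        if k > 0:
            fact *= k
        total += 1 / fact
    return total

# Terms up to 1/6! already beat the limit with n = 1000:
print(abs(e_series(6) - math.e), abs(e_limit(1000) - math.e))
# Two more terms (up to 1/8!) approach the accuracy of n = 1,000,000:
print(abs(e_series(8) - math.e), abs(e_limit(10 ** 6) - math.e))
```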


More about approximating e^x via Taylor series can be found in my previous post.


Calculators and complex numbers (Part 15)

In this series of posts, I explore properties of complex numbers that explain some surprising answers to exponential and logarithmic problems using a calculator (see video at the bottom of this post). These posts form the basis for a sequence of lectures given to my future secondary teachers.

To begin, we recall that the trigonometric form of a complex number z = a+bi is

z = r(\cos \theta + i \sin \theta)

where r = |z| = \sqrt{a^2 + b^2} and \tan \theta = b/a, with \theta in the appropriate quadrant. As noted before, this is analogous to converting from rectangular coordinates to polar coordinates.

There’s a shorthand notation for the right-hand side (r e^{i \theta}) that, at long last, I will explain in today’s post.

Definition. If z is a complex number, then we define

e^z = \displaystyle \sum_{n=0}^{\infty} \frac{z^n}{n!}

This of course matches the Taylor expansion of e^x for real numbers x.

Theorem. If \theta is a real number, then e^{i \theta} = \cos \theta + i \sin \theta.

e^{i \theta} = \displaystyle \sum_{n=0}^{\infty} \frac{(i \theta)^n}{n!}

= \displaystyle 1 + i\theta + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + \frac{(i\theta)^5}{5!} + \frac{(i\theta)^6}{6!} + \frac{(i\theta)^7}{7!} + \dots

= \displaystyle \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \frac{\theta^6}{6!} + \dots \right) + i \left( \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} + \dots \right)

= \cos \theta + i \sin \theta,

using the Taylor expansions for cosine and sine.

This theorem explains one of the calculator’s results:

e^{i \pi} = \cos \pi + i \sin \pi = -1 + 0i = -1.

That said, you can imagine that finding something like e^{4-2i} would be next to impossible by directly plugging into the series and trying to simplify the answer. The good news is that there’s an easy way to compute e^z for complex numbers z, which we develop in the next few posts.
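Both the theorem and the calculator’s result can be verified by summing the defining series directly. A short Python sketch of my own (truncating at 40 terms, an arbitrary choice; the factorials make convergence very fast):

```python
# Partial sums of the defining series for e^z, applied at z = i*theta,
# converge to cos(theta) + i*sin(theta).  theta = pi/3 is an arbitrary test value.
import math

def exp_series(z, terms=40):
    """Sum of z^k / k! for k = 0, ..., terms - 1."""
    total, term = 0 + 0j, 1 + 0j
    for n in range(1, terms + 1):
        total += term
        term *= z / n        # term becomes z^n / n!
    return total

theta = math.pi / 3
print(exp_series(1j * theta))                          # matches the value below
print(complex(math.cos(theta), math.sin(theta)))
print(exp_series(1j * math.pi))                        # approximately -1
```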


For completeness, here’s the movie that I use to engage my students when I begin this sequence of lectures.



Calculators and complex numbers (Part 14)

In this series of posts, I explore properties of complex numbers that explain some surprising answers to exponential and logarithmic problems using a calculator (see video at the bottom of this post). These posts form the basis for a sequence of lectures given to my future secondary teachers.

Definition. If z is a complex number, then we define

e^z = \displaystyle \sum_{n=0}^{\infty} \frac{z^n}{n!}

Even though this isn’t the usual way of defining the exponential function for real numbers, the good news is that one Law of Exponents remains true. (As we saw in an earlier post in this series, we can’t always assume that the usual Laws of Exponents will remain true when we permit the use of complex numbers.)

Theorem. If z and w are complex numbers, then e^z e^w = e^{z+w}.

I will formally prove this in the next post. Today, I want to talk about the idea behind the proof. Notice that

e^z e^w = \displaystyle \left( 1 + z + \frac{z^2}{2!} +\frac{z^3}{3!} + \frac{z^4}{4!} + \dots \right) \left( 1 + w + \frac{w^2}{2!} + \frac{w^3}{3!} + \frac{w^4}{4!} + \dots \right)

Let’s multiply this out (ugh!), but we’ll only worry about terms where the sum of the exponents of z and w is 4 or less. Here we go…

e^z e^w = \displaystyle 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \frac{z^4}{4!} + \dots

+ \displaystyle w + wz + \frac{wz^2}{2!} + \frac{wz^3}{3!} + \dots

+ \displaystyle \frac{w^2}{2!} + \frac{w^2 z}{2!} + \frac{w^2 z^2}{2! \times 2!} + \dots

+ \displaystyle \frac{w^3}{3!} + \frac{w^3 z}{3!} + \dots

+ \displaystyle \frac{w^4}{4!} + \dots

Next, we rearrange the terms according to the sum of the exponents. For example, the terms with z^3, w z^2, w^2 z, and w^3 are placed together because the sum of the exponents for each of these terms is 3.

e^z e^w = 1

+ z + w

\displaystyle + \frac{z^2}{2} + wz + \frac{w^2}{2}

\displaystyle + \frac{z^3}{6} + \frac{wz^2}{2} + \frac{w^2 z}{2} + \frac{w^3}{6}

\displaystyle + \frac{z^4}{24} + \frac{w z^3}{6} + \frac{w^2 z^2}{4} + \frac{w^3 z}{6} + \frac{w^4}{24} + \dots

For each line, we obtain a common denominator:

e^z e^w = 1

+ z + w

\displaystyle + \frac{z^2 + 2 z w + w^2}{2}

\displaystyle + \frac{z^3 + 3 z^2 w + 3 z w^2 + w^3}{6}

\displaystyle + \frac{z^4+ 4 z^3 w + 6 z^2 w^2 + 4 z w^3 + w^4}{24} + \dots

We recognize the familiar entries of Pascal’s triangle in the coefficients of the numerators, and so it appears that

e^z e^w = 1 + (z+w) + \displaystyle \frac{(z+w)^2}{2!} + \frac{(z+w)^3}{3!} + \frac{(z+w)^4}{4!} + \dots

If the pattern on the right-hand side holds up for exponents greater than 4, this proves that e^z e^w = e^{z+w}.

So that’s the idea of the proof. The formal proof will be presented in the next post.
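The pattern does hold up: the coefficient of w^j z^{n-j} in the product of the two series is 1/(j! (n-j)!), which is exactly the binomial-coefficient pattern in (z+w)^n/n!. A short Python sketch of my own, checking this with exact fractions through total degree 7:

```python
# The regrouping above is a Cauchy product: collecting terms by total degree n,
# the coefficient of w^j z^(n-j) in e^z * e^w should equal the corresponding
# coefficient of (z + w)^n / n!, namely C(n, j) / n!.
from fractions import Fraction
from math import comb, factorial

for n in range(8):
    for j in range(n + 1):
        # coefficient of w^j z^(n-j) in the product of the two series
        product_coeff = Fraction(1, factorial(j)) * Fraction(1, factorial(n - j))
        # coefficient of w^j z^(n-j) in (z + w)^n / n!
        target_coeff = Fraction(comb(n, j), factorial(n))
        assert product_coeff == target_coeff
print("coefficients agree through total degree 7")
```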


For completeness, here’s the movie that I use to engage my students when I begin this sequence of lectures.



Is there an easy function without an easy Taylor series expansion?

After class one day, a student approached me with an interesting question:

Is there an easy function without an easy Taylor expansion?

This question really struck me for several reasons.

  1. Most functions do not have an easy Taylor (or Maclaurin) expansion. After all, the formula for a Taylor expansion involves the nth derivative of the original function, and higher-order derivatives usually get progressively messier with each successive differentiation.
  2. Most of the series expansions that are taught in Calculus II arise from functions that somehow violate the above rule, like f(x) = \sin x, f(x) = \cos x, f(x) = e^x, and f(x) = 1/(1-x).
  3. Therefore, this student was under the misconception that most easy functions have easy Taylor expansions, while in reality most functions do not.

It took me a moment to answer his question, but I answered with f(x) = \tan x. Successively using the Quotient Rule makes the derivatives of \tan x messier and messier, but \tan x definitely qualifies as an easy function that most students have seen since high school. It turns out that the Taylor expansion of f(x) = \tan x can be written as an infinite series using the Bernoulli numbers, but that’s a concept that most calculus students haven’t seen yet.
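The messiness is easy to see by computing the Maclaurin coefficients of \tan x directly. Here is a Python sketch of my own that divides the sine series by the cosine series using exact fractions:

```python
# Maclaurin coefficients of tan x = sin x / cos x, via formal power-series
# long division with exact fractions; note how unruly the fractions become.
from fractions import Fraction
from math import factorial

N = 12
sin_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 else Fraction(0)
         for k in range(N)]
cos_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 0 else Fraction(0)
         for k in range(N)]

# Long division of power series: tan = sin / cos (cos_c[0] == 1).
tan_c = []
rem = sin_c[:]
for k in range(N):
    c = rem[k]
    tan_c.append(c)
    for j in range(N - k):
        rem[k + j] -= c * cos_c[j]

# nonzero coefficients: 1, 1/3, 2/15, 17/315, 62/2835, ...
print([c for c in tan_c if c != 0])
```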

Earlier posts on Taylor series:

Fun lecture on geometric series (Part 2): Ways of counting money

Every once in a while, I’ll give a “fun lecture” to my students. The rules of a “fun lecture” are that I talk about some advanced applications of classroom topics, but I won’t hold them responsible for these ideas on homework and on exams. In other words, they can just enjoy the lecture without being responsible for its content.

This series of posts describes a fun lecture that I’ve given to my Precalculus students after they’ve learned about partial fractions and geometric series.

In the 1949 cartoon “Hare Do,” Bugs Bunny comes across the following sign when trying to buy candy (well, actually, a carrot) from a vending machine. The picture below can be seen at the 2:40 mark of this video:


How many ways are there of expressing 20 cents using pennies, nickels, dimes, and (though not applicable to this problem) quarters? Believe it or not, this is equivalent to the following very complicated multiplication problem:

\left[1 + x + x^2 + x^3 + x^4 + x^5 + \dots \right]

\times \left[1 + x^5 + x^{10} + x^{15} + x^{20} + x^{25} + \dots \right]

\times \left[1 + x^{10} + x^{20} + x^{30} + x^{40} + x^{50} + \dots \right]

\times \left[1 + x^{25} + x^{50} + x^{75} + x^{100} + x^{125} + \dots \right]

On the first line, the exponents are all multiples of 1. On the second line, the exponents are all multiples of 5. On the third line, the exponents are all multiples of 10. On the fourth line, the exponents are all multiples of 25.

How many ways are there of obtaining a term of x^{20} from the product of these four infinite series? I offer a thought bubble if you’d like to think about it before seeing the answer.

There are actually 9 ways. We could choose 1 from the first, second, and fourth lines while choosing x^{20} from the third line. So,

1 \cdot 1 \cdot x^{20} \cdot 1 = x^{20}

There are 8 other ways. For each of these lines, the first term comes from the first infinite series, the second term comes from the second infinite series, and so on.

1 \cdot x^{10} \cdot x^{10} \cdot 1 = x^{20}

1 \cdot x^{20} \cdot 1 \cdot 1 = x^{20}

x^{10} \cdot 1 \cdot x^{10} \cdot 1 = x^{20}

x^{15} \cdot x^5 \cdot 1 \cdot 1 = x^{20}

x^{10} \cdot x^{10} \cdot 1 \cdot 1 = x^{20}

x^5 \cdot x^{15} \cdot 1 \cdot 1 = x^{20}

x^{20} \cdot 1 \cdot 1 \cdot 1 = x^{20}

x^5 \cdot x^5 \cdot x^{10} \cdot 1 = x^{20}

The nice thing is that each of these expressions is conceptually equivalent to a way of expressing 20 cents using pennies, nickels, dimes, and quarters. In each case, the value in parentheses matches an exponent.

  • 1 \cdot 1 \cdot x^{20} \cdot 1 = x^{20}: 2 dimes (20 cents).
  • 1 \cdot x^{10} \cdot x^{10} \cdot 1 = x^{20}: 2 nickels (10 cents) and 1 dime (10 cents)
  • 1 \cdot x^{20} \cdot 1 \cdot 1 = x^{20}: 4 nickels (20 cents)
  • x^{10} \cdot 1 \cdot x^{10} \cdot 1 = x^{20}: 10 pennies (10 cents) and 1 dime (10 cents)
  • x^{15} \cdot x^5 \cdot 1 \cdot 1 = x^{20}: 15 pennies (15 cents) and 1 nickel (5 cents)
  • x^{10} \cdot x^{10} \cdot 1 \cdot 1 = x^{20}: 10 pennies (10 cents) and 2 nickels (10 cents)
  • x^5 \cdot x^{15} \cdot 1 \cdot 1 = x^{20}: 5 pennies (5 cents) and 3 nickels (15 cents)
  • x^{20} \cdot 1 \cdot 1 \cdot 1 = x^{20}: 20 pennies (20 cents)
  • x^5 \cdot x^5 \cdot x^{10} \cdot 1 = x^{20}: 5 pennies (5 cents), 1 nickel (5 cents), and 1 dime (10 cents)

Notice that the last line didn’t appear in the Bugs Bunny cartoon.
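A brute-force enumeration confirms the count of 9 (a Python sketch of my own; each tuple lists the numbers of pennies, nickels, dimes, and quarters):

```python
# Enumerate all ways to make 20 cents from pennies, nickels, dimes, quarters.
ways = [(p, n, d, q)
        for q in range(1)        # 25*q <= 20 forces q = 0
        for d in range(3)
        for n in range(5)
        for p in range(21)
        if p + 5 * n + 10 * d + 25 * q == 20]
for w in ways:
    print(w)
print(len(ways))                 # 9
```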

Using the formula for an infinite geometric series (and assuming -1 < x < 1), we may write the infinite product as

f(x) = \displaystyle \frac{1}{(1-x)(1-x^5)(1-x^{10})(1-x^{25})}

When written as an infinite series — that is, as a Taylor series about x =0 — the coefficients provide the number of ways of expressing that many cents using pennies, nickels, dimes and quarters. This Taylor series can be computed with Mathematica:

Looking at the coefficient of x^{20}, we see that there are indeed 9 ways of expressing 20 cents with pennies, nickels, dimes, and quarters. We also see that there are 242 ways of expressing 1 dollar and 1463 ways of expressing 2 dollars.

The United States also has 50-cent coins and dollar coins, although they are rarely used in circulation. Our answers become slightly different if we permit the use of these larger coins:

Finally, just for the fun of it, the coins in the United Kingdom are worth 1 pence, 2 pence, 5 pence, 10 pence, 20 pence, 50 pence, 100 pence (1 pound), and 200 pence (2 pounds). With these different coins, there are 41 ways of expressing 20 pence, 4563 ways of expressing 1 pound, and 73,682 ways of expressing 2 pounds.
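For readers without Mathematica, the same coefficients can be obtained with the standard coin-change dynamic program; a Python sketch of my own:

```python
# The Taylor coefficients of the generating function, computed by the
# classic coin-change dynamic program.
def change_ways(amount, coins):
    """Number of ways to express `amount` using the given denominations."""
    ways = [1] + [0] * amount
    for c in coins:
        for n in range(c, amount + 1):
            ways[n] += ways[n - c]
    return ways[amount]

us = (1, 5, 10, 25)
uk = (1, 2, 5, 10, 20, 50, 100, 200)
print(change_ways(20, us), change_ways(100, us), change_ways(200, us))   # 9 242 1463
print(change_ways(20, uk), change_ways(100, uk), change_ways(200, uk))   # 41 4563 73682
```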



For more discussion about this application of generating functions — including ways of determining the above coefficients without Mathematica — I’ll refer to the 1000+ results of the following Google search:

FYI, previous posts on an infinite geometric series:

Previous posts on Taylor series:

Formula for an infinite geometric series (Part 11)

Many math majors don’t have immediate recall of the formula for an infinite geometric series. They often can remember that there is a formula, but they can’t recollect the details. While I think it’s OK that they don’t have the formula memorized, I think it’s a real shame that they’re also unaware of where the formula comes from and hence are unable to rederive it if they’ve forgotten it.

In this post, I’d like to give some thoughts about why the formula for an infinite geometric series is important for other areas of mathematics besides Precalculus. (There may be others, but here’s what I can think of in one sitting.)

1. An infinite geometric series is actually a special case of a Taylor series. Therefore, it would be wonderful if students learning Taylor series in Calculus II could relate the new topic (Taylor series) to their previous knowledge (infinite geometric series), which they had already seen in Precalculus.

2. An infinite geometric series is also a special case of the binomial series (1+x)^n, when n does not have to be a positive integer and hence Pascal’s triangle cannot be used to find the expansion.

3. An infinite geometric series is a rare case in which an infinite sum can be found exactly. In Calculus II, a whole battery of tests (e.g., the Root Test, the Ratio Test, the Limit Comparison Test) is introduced to determine whether a series converges or not. In other words, these tests only determine whether an answer exists, without determining what the answer actually is.

Throughout the entire undergraduate curriculum, I’m aware of only four types of series that can actually be evaluated exactly.

  • An infinite geometric series with -1 < r < 1
  • The Taylor series of a real analytic function. (Of course, an infinite geometric series is a special case of a Taylor series.)
  • A telescoping series. For example, using partial fractions and cancelling a bunch of terms, we find that

\displaystyle \sum_{k=1}^\infty \frac{1}{k^2+k} = \displaystyle \sum_{k=1}^\infty \left( \frac{1}{k} - \frac{1}{k+1} \right)

\displaystyle \sum_{k=1}^\infty \frac{1}{k^2+k} = \displaystyle \left( 1 - \frac{1}{2} \right) + \left( \frac{1}{2} - \frac{1}{3} \right) \dots

\displaystyle \sum_{k=1}^\infty \frac{1}{k^2+k} = 1
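In fact the partial sums telescope exactly to 1 - \frac{1}{N+1}, which is easy to confirm with exact rational arithmetic (a Python sketch of my own):

```python
# The partial sums of sum 1/(k^2 + k) telescope exactly to 1 - 1/(N+1).
from fractions import Fraction

def partial_sum(N):
    return sum(Fraction(1, k * k + k) for k in range(1, N + 1))

for N in (10, 100, 1000):
    s = partial_sum(N)
    print(N, s, s == 1 - Fraction(1, N + 1))   # the identity holds exactly
```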

4. Infinite geometric series are essential for proving basic facts about decimal representations that we often take for granted.

5. Properties of an infinite geometric series are needed to find the mean and standard deviation of a geometric random variable, which is used to predict the number of independent trials needed before an event happens. This is used for analyzing the coupon collector’s problem, among other applications.
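Point 5 is also easy to illustrate numerically. Differentiating a geometric series gives E[X] = 1/p and Var(X) = (1-p)/p^2 for a geometric random variable X (the number of independent trials until the first success); here is a Python check of my own, with p = 0.25 chosen arbitrarily:

```python
# Mean and variance of a geometric random variable via its series definition:
# E[X] = sum k*p*(1-p)^(k-1) = 1/p, Var(X) = (1-p)/p^2.
p = 0.25
mean = sum(k * p * (1 - p) ** (k - 1) for k in range(1, 2000))
second_moment = sum(k * k * p * (1 - p) ** (k - 1) for k in range(1, 2000))
variance = second_moment - mean ** 2
print(mean)        # approximately 1/p = 4
print(variance)    # approximately (1-p)/p^2 = 12
```

Truncating at 2000 terms is safe because (1-p)^{k-1} decays geometrically.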

Square roots and logarithms without a calculator (Part 10)

This is the fifth in a series of posts about calculating roots without a calculator, with special consideration to how these tales can engage students more deeply with the secondary mathematics curriculum. Most students today have a hard time believing that square roots can be computed without a calculator, so hopefully these stories will give them some appreciation for their elders.

Today’s story takes us back to a time before the advent of cheap pocket calculators: 1949.

The following story comes from the chapter “Lucky Numbers” of Surely You’re Joking, Mr. Feynman!, a collection of tales by the late Nobel Prize-winning physicist, Richard P. Feynman. Feynman was arguably the greatest American-born physicist — the subject of the excellent biography Genius: The Life and Science of Richard Feynman — and he had a tendency to one-up anyone who tried to one-up him. (He was also a serial philanderer, but that’s another story.) Here’s a story involving how, in the summer of 1949, he calculated \sqrt[3]{1729.03} without a calculator.

The first time I was in Brazil I was eating a noon meal at I don’t know what time — I was always in the restaurants at the wrong time — and I was the only customer in the place. I was eating rice with steak (which I loved), and there were about four waiters standing around.

A Japanese man came into the restaurant. I had seen him before, wandering around; he was trying to sell abacuses. (Note: At the time of this story, before the advent of pocket calculators, the abacus was arguably the world’s most powerful hand-held computational device.) He started to talk to the waiters, and challenged them: He said he could add numbers faster than any of them could do.

The waiters didn’t want to lose face, so they said, “Yeah, yeah. Why don’t you go over and challenge the customer over there?”

The man came over. I protested, “But I don’t speak Portuguese well!”

The waiters laughed. “The numbers are easy,” they said.

They brought me a paper and pencil.

The man asked a waiter to call out some numbers to add. He beat me hollow, because while I was writing the numbers down, he was already adding them as he went along.

I suggested that the waiter write down two identical lists of numbers and hand them to us at the same time. It didn’t make much difference. He still beat me by quite a bit.

However, the man got a little bit excited: he wanted to prove himself some more. “Multiplição!” he said.

Somebody wrote down a problem. He beat me again, but not by much, because I’m pretty good at products.

The man then made a mistake: he proposed we go on to division. What he didn’t realize was, the harder the problem, the better chance I had.

We both did a long division problem. It was a tie.

This bothered the hell out of the Japanese man, because he was apparently well trained on the abacus, and here he was almost beaten by this customer in a restaurant.

“Raios cubicos!” he says with a vengeance. Cube roots! He wants to do cube roots by arithmetic. It’s hard to find a more difficult fundamental problem in arithmetic. It must have been his topnotch exercise in abacus-land.

He writes down a number on some paper— any old number— and I still remember it: 1729.03. He starts working on it, mumbling and grumbling: “Mmmmmmagmmmmbrrr”— he’s working like a demon! He’s poring away, doing this cube root.

Meanwhile I’m just sitting there.

One of the waiters says, “What are you doing?”

I point to my head. “Thinking!” I say. I write down 12 on the paper. After a little while I’ve got 12.002.

The man with the abacus wipes the sweat off his forehead: “Twelve!” he says.

“Oh, no!” I say. “More digits! More digits!” I know that in taking a cube root by arithmetic, each new digit is even more work than the one before. It’s a hard job.

He buries himself again, grunting “Rrrrgrrrrmmmmmm …,” while I add on two more digits. He finally lifts his head to say, “12.0!”

The waiters are all excited and happy. They tell the man, “Look! He does it only by thinking, and you need an abacus! He’s got more digits!”

He was completely washed out, and left, humiliated. The waiters congratulated each other.

How did the customer beat the abacus?

The number was 1729.03. I happened to know that a cubic foot contains 1728 cubic inches, so the answer is a tiny bit more than 12. The excess, 1.03, is only one part in nearly 2000, and I had learned in calculus that for small fractions, the cube root’s excess is one-third of the number’s excess. So all I had to do is find the fraction 1/1728, and multiply by 4 (divide by 3 and multiply by 12). So I was able to pull out a whole lot of digits that way.

A few weeks later, the man came into the cocktail lounge of the hotel I was staying at. He recognized me and came over. “Tell me,” he said, “how were you able to do that cube-root problem so fast?”

I started to explain that it was an approximate method, and had to do with the percentage of error. “Suppose you had given me 28. Now the cube root of 27 is 3 …”

He picks up his abacus: zzzzzzzzzzzzzzz— “Oh yes,” he says.

I realized something: he doesn’t know numbers. With the abacus, you don’t have to memorize a lot of arithmetic combinations; all you have to do is to learn to push the little beads up and down. You don’t have to memorize 9+7=16; you just know that when you add 9, you push a ten’s bead up and pull a one’s bead down. So we’re slower at basic arithmetic, but we know numbers.

Furthermore, the whole idea of an approximate method was beyond him, even though a cube root often cannot be computed exactly by any method. So I never could teach him how I did cube roots or explain how lucky I was that he happened to choose 1729.03.

The key part of the story, “for small fractions, the cube root’s excess is one-third of the number’s excess,” deserves some elaboration, especially since this computational trick isn’t often taught in those terms anymore. If f(x) = (1+x)^n, then f'(x) = n (1+x)^{n-1}, so that f'(0) = n. Since f(0) = 1, the equation of the tangent line to f(x) at x = 0 is

L(x) = f(0) + f'(0) \cdot (x-0) = 1 + nx.

The key observation is that, for x \approx 0, the graph of L(x) will be very close indeed to the graph of f(x). In Calculus I, this is sometimes called the linearization of f at x = 0. In Calculus II, we observe that these are the first two terms in the Taylor series expansion of f about x = 0.

For Feynman’s problem, n = \frac{1}{3}, so that \sqrt[3]{1+x} \approx 1 + \frac{1}{3} x if x \approx 0. Then \sqrt[3]{1729.03} can be rewritten as

\sqrt[3]{1729.03} = \sqrt[3]{1728} \sqrt[3]{ \displaystyle \frac{1729.03}{1728} }

\sqrt[3]{1729.03} = 12 \sqrt[3]{\displaystyle 1 + \frac{1.03}{1728}}

\sqrt[3]{1729.03} \approx 12 \left( 1 + \displaystyle \frac{1}{3} \times \frac{1.03}{1728} \right)

\sqrt[3]{1729.03} \approx 12 + 4 \times \displaystyle \frac{1.03}{1728}

This last equation explains the line “all I had to do is find the fraction 1/1728, and multiply by 4.” With enough patience, the first few digits of the correction can be mentally computed since

\displaystyle \frac{1.03}{500} < \frac{1.03}{432} = 4 \times \frac{1.03}{1728} < \frac{1.03}{400}

\displaystyle \frac{1.03 \times 2}{1000} < 4 \times \frac{1.03}{1728} < \frac{1.03 \times 25}{10000}

0.00206 < 4 \times \displaystyle \frac{1.03}{1728} < 0.002575

So Feynman could determine quickly that the answer was 12.002\hbox{something}.

By the way,

\sqrt[3]{1729.03} \approx 12.00238378\dots

\hbox{Estimate} \approx 12.00238426\dots

So the linearization provides an estimate accurate to eight significant digits. Additional digits could be obtained by using the next term in the Taylor series.
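The whole computation fits in a few lines of Python (my own check of the numbers quoted above):

```python
# Feynman's estimate: cube root of 1729.03 = 12 * (1 + 1.03/1728)^(1/3),
# linearized to 12 + 4 * 1.03/1728.
exact = 1729.03 ** (1 / 3)
estimate = 12 + 4 * 1.03 / 1728
print(exact, estimate)          # agree to about eight significant digits
print(abs(exact - estimate))
```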


I have a similar story to tell. Back in 1996 or 1997, when I first moved to Texas and was making new friends, I quickly discovered that one way to get odd facial expressions out of strangers was by mentioning that I was a math professor. Occasionally, however, someone would test me to see if I really was a math professor. One guy (who is now a good friend; later, we played in the infield together on our church-league softball team) asked me to figure out \sqrt{97} without a calculator — before someone could walk to the next room and return with the calculator. After two seconds of panic, I realized that I was really lucky that he happened to pick a number close to 100. Using the same logic as above,

\sqrt{97} = \sqrt{100} \sqrt{1 - 0.03} \approx 10 \left(1 - \displaystyle \frac{0.03}{2}\right) = 9.85.

Knowing that this came from a linearization and that the tangent line to y = \sqrt{1+x} lies above the curve, I knew that this estimate was too high. But I didn’t have time to work out a correction (besides, I couldn’t remember the full Taylor series off the top of my head), so I answered/guessed 9.849, hoping that I did the arithmetic correctly. You can imagine the amazement when someone punched it into the calculator to get 9.84886\dots
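For the record, here is the \sqrt{97} computation in Python, including the second-order Taylor correction that there was no time to work out on the spot (my own addition, using \sqrt{1+x} \approx 1 + \frac{x}{2} - \frac{x^2}{8}):

```python
# sqrt(97) = 10 * sqrt(1 - 0.03), estimated by Taylor polynomials at 0.
import math

exact = math.sqrt(97)                              # 9.8488578...
linear = 10 * (1 - 0.03 / 2)                       # 9.85 (slightly too high)
quadratic = 10 * (1 - 0.03 / 2 - 0.03 ** 2 / 8)    # adds the second-order term
print(exact, linear, quadratic)
```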