Square roots and logarithms without a calculator (Part 12)

I recently came across the following computational trick: to estimate \sqrt{b}, use

\sqrt{b} \approx \displaystyle \frac{b+a}{2\sqrt{a}},

where a is the closest perfect square to b. For example,

\sqrt{26} \approx \displaystyle \frac{26+25}{2\sqrt{25}} = 5.1.

I had not seen this trick before — at least stated in these terms — and I’m definitely not a fan of computational tricks without an explanation. In this case, the approximation is a straightforward consequence of a technique we teach in calculus. If f(x) = (1+x)^n, then f'(x) = n (1+x)^{n-1}, so that f'(0) = n. Since f(0) = 1, the equation of the tangent line to f(x) at x = 0 is

L(x) = f(0) + f'(0) \cdot (x-0) = 1 + nx.

The key observation is that, for x \approx 0, the graph of L(x) will be very close indeed to the graph of f(x). In Calculus I, this is sometimes called the linearization of f at x = a. In Calculus II, we observe that these are the first two terms in the Taylor series expansion of f about x = a.

For the problem at hand, if n = 1/2, then

\sqrt{1+x} \approx 1 + \displaystyle \frac{x}{2}

if x is close to zero. Therefore, if a is a perfect square close to b so that the relative difference (b-a)/a is small, then

\sqrt{b} = \sqrt{a + b - a}

= \sqrt{a} \sqrt{1 + \displaystyle \frac{b-a}{a}}

\approx \sqrt{a} \displaystyle \left(1 + \frac{b-a}{2a} \right)

= \sqrt{a} \displaystyle \left( \frac{2a + b-a}{2a} \right)

= \sqrt{a} \displaystyle \left( \frac{b+a}{2a} \right)

= \displaystyle \frac{b+a}{2\sqrt{a}}.
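For readers who would like to check the trick numerically, here is a short Python sketch (the function name sqrt_estimate is my own label for the approximation, not standard terminology):

```python
import math

def sqrt_estimate(b, a):
    # Approximate sqrt(b) using the nearest perfect square a:
    # sqrt(b) ~ (b + a) / (2 sqrt(a))
    return (b + a) / (2 * math.sqrt(a))

print(sqrt_estimate(26, 25))   # 5.1
print(math.sqrt(26))           # 5.0990195...
print(sqrt_estimate(66, 64))   # estimating sqrt(66) with a = 64
```

The estimate for \sqrt{26} agrees with the true value to three decimal places, as the derivation above predicts.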

One more thought: All of the above might be a bit much to swallow for a talented but young student who has not yet learned calculus. So here’s another heuristic explanation that does not require calculus: if a \approx b, then the geometric mean \sqrt{ab} will be approximately equal to the arithmetic mean (a+b)/2. That is,

\sqrt{ab} \approx \displaystyle \frac{a+b}{2},

so that

\sqrt{b} \approx \displaystyle \frac{a+b}{2\sqrt{a}}.
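The closeness of the two means is also easy to check numerically; here is a quick sketch using the same numbers as the opening example:

```python
import math

# When a and b are close, the geometric and arithmetic means nearly agree.
a, b = 25, 26
gm = math.sqrt(a * b)   # geometric mean
am = (a + b) / 2        # arithmetic mean
print(gm, am)           # 25.4950..., 25.5
```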

Terrific video on Taylor series

Some time ago, I posted a series on the lecture that I’ll give to students to remind them about Taylor series. I won’t repost the whole thing here, but the basic idea is to motivate the concept inductively, starting with a polynomial, and then reinforce it with both numerical calculation and comparison of graphs.

After giving this lecture recently, one of my students told me about this terrific video on Taylor series that does many of the same things, with the added bonus of engaging animations. I recommend it highly.

Decimal Approximations of Logarithms (Part 1)

My latest article on mathematics education, titled “Developing Intuition for Logarithms,” was published this month in the “My Favorite Lesson” section of the September 2018 issue of the journal Mathematics Teacher. This is a lesson that I taught for years to my Precalculus students, and I teach it currently to math majors who are aspiring high school teachers. Per copyright law, I can’t reproduce the article here, though the gist of the article appeared in an earlier blog post from five years ago.

Rather than repeat the article here, I thought I would write about some extra thoughts on developing intuition for logarithms that, due to space limitations, I was not able to include in the published article.

While some common (i.e., base-10) logarithms work out evenly, like \log_{10} 10,000, most do not. Here is the typical output when a scientific calculator computes a logarithm:

To a student first learning logarithms, the answer is just an apparently random jumble of digits; indeed, it can be proven that the answer is irrational. With a little prompting, a teacher can get his/her students wondering about how people 50 years ago could have figured this out without a calculator. This leads to a natural pedagogical question:

Can good Algebra II students, using only the tools at their disposal, understand how decimal expansions of base-10 logarithms could have been found before computers were invented?

Students who know calculus, of course, can do these computations since

\log_{10} x = \displaystyle \frac{\ln x}{\ln 10},

and the Taylor series

\ln (1+t) = t - \displaystyle \frac{t^2}{2} + \frac{t^3}{3} - \frac{t^4}{4} + \dots,

a standard topic in second-semester calculus, can be used to calculate \ln x for values of x close to 1. However, a calculation using a power series is probably inaccessible to bright Algebra II students, no matter how precocious they are. (Besides, in real life, calculators don’t actually use Taylor series to perform these calculations; see the article CORDIC: How Hand Calculators Calculate, which appeared in College Mathematics Journal, for more details.)

In this series, I’ll discuss a technique that Algebra II students can use to find the decimal expansions of base-10 logarithms to surprisingly high precision using only tools that they’ve learned in Algebra II. This technique won’t be very efficient, but it should be completely accessible to students who are learning about base-10 logarithms for the first time. All that will be required are the Laws of Logarithms and a standard scientific calculator. A little bit of patience can yield the first few decimal places. And either a lot of patience, a teacher who knows how to use Wolfram Alpha appropriately, or a spreadsheet that I wrote can be used to obtain the decimal approximations of logarithms up to the digits displayed on a scientific calculator.

I’ll start this discussion in my next post.

My Favorite One-Liners: Part 104

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

I use today’s quip when discussing the Taylor series expansions for sine and/or cosine:

\sin x = x - \displaystyle \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} \dots

\cos x = 1 - \displaystyle \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} \dots

To try to convince students that these intimidating formulas are indeed correct, I’ll ask them to pull out their calculators and compute the first three terms of the above expansion for x = 0.2, and then compute \sin 0.2. The results:

This generates a pretty predictable reaction, “Whoa; it actually works!” Of course, this shouldn’t be a surprise; calculators actually use the Taylor series expansion (and a few trig identity tricks) when calculating sines and cosines. So, I’ll tell my class,

It’s not like your calculator draws a right triangle, takes out a ruler to measure the lengths of the opposite side and the hypotenuse, and divides to find the sine of an angle.
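The classroom computation is easy to reproduce; here is a quick Python sketch of the three-term partial sum:

```python
import math

x = 0.2
# First three terms of the sine series: x - x^3/3! + x^5/5!
partial = x - x**3 / math.factorial(3) + x**5 / math.factorial(5)
print(partial)       # 0.1986693...
print(math.sin(x))   # 0.1986693...
```

The two values agree to about eight decimal places, since the first omitted term is x^7/7! \approx 2.5 \times 10^{-9}.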


My Favorite One-Liners: Part 29

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is one that I’ll use when I need my students to remember something from a previous course — especially when it’s a difficult concept from a previous course — that somebody else taught them in a previous semester.

For example, in my probability class, I’ll introduce the Poisson distribution

P(X = k) = e^{-\mu} \displaystyle \frac{\mu^k}{k!},

where \mu > 0 and the permissible values of k are non-negative integers.

In particular, since these are probabilities and one and only one of these values can be taken, this means that

\displaystyle \sum_{k=0}^\infty e^{-\mu} \frac{\mu^k}{k!} = 1.

At this point, I want students to remember that they’ve actually seen this before, so I replace \mu by x and then multiply both sides by e^x:

\displaystyle \sum_{k=0}^\infty \frac{x^k}{k!} = e^x.

Of course, this is the Taylor series expansion for e^x. However, my experience is that most students have decidedly mixed feelings about Taylor series; often, it’s the last thing that they learn in Calculus II, which means it’s the first thing that they forget when the semester is over. Also, most students have a really hard time with Taylor series when they first learn about them.
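The fact that the Poisson probabilities sum to 1, via the series for e^x, is easy to check numerically; here is a quick sketch (poisson_pmf is my own helper name):

```python
import math

def poisson_pmf(k, mu):
    # P(X = k) = e^(-mu) * mu^k / k!
    return math.exp(-mu) * mu ** k / math.factorial(k)

mu = 3.5
# The probabilities sum to 1 because sum over k of mu^k / k! equals e^mu.
total = sum(poisson_pmf(k, mu) for k in range(100))
print(total)   # very close to 1.0
```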

So here’s my one-liner that I’ll say at this point: “Does this bring back any bad memories for anyone? Perhaps like an old Spice Girls song?” And this never fails to get an understanding laugh before I remind them about Taylor series.


Computing e to Any Power: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprised my series examining one of Richard Feynman’s anecdotes about mentally computing e^x for three different values of x.

Part 1: Feynman’s anecdote.

Part 2: Logarithm and antilogarithm tables from the 1940s.

Part 3: A closer look at Feynman’s computation of e^{3.3}.

Part 4: A closer look at Feynman’s computation of e^{3}.

Part 5: A closer look at Feynman’s computation of e^{1.4}.


Computing e to Any Power (Part 5)

In this series, I’m exploring the following anecdote from the book Surely You’re Joking, Mr. Feynman!, which I read and re-read when I was young until I almost had the book memorized.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

I mumbled something about how it was easy to calculate e to any power using that series (you just substitute the power for x).

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

“I just summed the series.”

“Nobody can sum the series that fast. You must just happen to know that one. How about e to the 3?”

“Look,” I say. “It’s hard work! Only one a day!”

“Hah! It’s a fake!” they say, happily.

“All right,” I say, “It’s 20.085.”

They look in the book as I put a few more figures on. They’re all excited now, because I got another one right.

Here are these great mathematicians of the day, puzzled at how I can compute e to any power! One of them says, “He just can’t be substituting and summing—it’s too hard. There’s some trick. You couldn’t do just any old number like e to the 1.4.”

I say, “It’s hard work, but for you, OK. It’s 4.05.”

As they’re looking it up, I put on a few more digits and say, “And that’s the last one for the day!” and walk out.

What happened was this: I happened to know three numbers—the logarithm of 10 to the base e (needed to convert numbers from base 10 to base e), which is 2.3026 (so I knew that e to the 2.3 is very close to 10), and because of radioactivity (mean-life and half-life), I knew the log of 2 to the base e, which is .69315 (so I also knew that e to the .7 is nearly equal to 2). I also knew e (to the 1), which is 2.71828.

The first number they gave me was e to the 3.3, which is e to the 2.3—ten—times e, or 27.18. While they were sweating about how I was doing it, I was correcting for the extra .0026—2.3026 is a little high.

I knew I couldn’t do another one; that was sheer luck. But then the guy said e to the 3: that’s e to the 2.3 times e to the .7, or ten times two. So I knew it was 20.something, and while they were worrying how I did it, I adjusted for the .693.

Now I was sure I couldn’t do another one, because the last one was again by sheer luck. But the guy said e to the 1.4, which is e to the .7 times itself. So all I had to do is fix up 4 a little bit!

They never did figure out how I did it.

My students invariably love this story; let’s take a look at the third calculation.

Feynman knew that e^{0.69315} \approx 2, so that

e^{0.69315} e^{0.69315} = e^{1.3863} \approx 2 \times 2 = 4.

Therefore, again using the Taylor series expansion:

e^{1.4} = e^{1.3863} e^{0.0137} = 4 e^{0.0137}

\approx 4 \times (1 + 0.0137)

= 4 + 4 \times 0.0137

\approx 4.05.

Again, I have no idea how he put on a few more digits in his head (other than his sheer brilliance), as this would require knowing the value of \ln 2 to six or seven digits as well as computing the next term in the Taylor series expansion.
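Feynman’s third calculation is easy to replicate in a few lines of Python:

```python
import math

# Feynman's steps: e^{1.4} = e^{1.3863} * e^{0.0137} ~ 4 * (1 + 0.0137),
# using the linear approximation e^t ~ 1 + t for small t.
estimate = 4 * (1 + 0.0137)
print(estimate)        # 4.0548
print(math.exp(1.4))   # 4.0552...
```

Rounded to three significant figures, both give the 4.05 that Feynman announced.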

Computing e to Any Power (Part 4)

In this series, I’m exploring the following anecdote from the book Surely You’re Joking, Mr. Feynman!, which I read and re-read when I was young until I almost had the book memorized.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

I mumbled something about how it was easy to calculate e to any power using that series (you just substitute the power for x).

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

“I just summed the series.”

“Nobody can sum the series that fast. You must just happen to know that one. How about e to the 3?”

“Look,” I say. “It’s hard work! Only one a day!”

“Hah! It’s a fake!” they say, happily.

“All right,” I say, “It’s 20.085.”

They look in the book as I put a few more figures on. They’re all excited now, because I got another one right.

Here are these great mathematicians of the day, puzzled at how I can compute e to any power! One of them says, “He just can’t be substituting and summing—it’s too hard. There’s some trick. You couldn’t do just any old number like e to the 1.4.”

I say, “It’s hard work, but for you, OK. It’s 4.05.”

As they’re looking it up, I put on a few more digits and say, “And that’s the last one for the day!” and walk out.

What happened was this: I happened to know three numbers—the logarithm of 10 to the base e (needed to convert numbers from base 10 to base e), which is 2.3026 (so I knew that e to the 2.3 is very close to 10), and because of radioactivity (mean-life and half-life), I knew the log of 2 to the base e, which is .69315 (so I also knew that e to the .7 is nearly equal to 2). I also knew e (to the 1), which is 2.71828.

The first number they gave me was e to the 3.3, which is e to the 2.3—ten—times e, or 27.18. While they were sweating about how I was doing it, I was correcting for the extra .0026—2.3026 is a little high.

I knew I couldn’t do another one; that was sheer luck. But then the guy said e to the 3: that’s e to the 2.3 times e to the .7, or ten times two. So I knew it was 20.something, and while they were worrying how I did it, I adjusted for the .693.

Now I was sure I couldn’t do another one, because the last one was again by sheer luck. But the guy said e to the 1.4, which is e to the .7 times itself. So all I had to do is fix up 4 a little bit!

They never did figure out how I did it.

My students invariably love this story; let’s take a look at the second calculation.

Feynman knew that e^{2.3026} \approx 10 and e^{0.69315} \approx 2, so that

e^{2.3026} e^{0.69315} = e^{2.99575} \approx 10 \times 2 = 20.

Therefore, again using the Taylor series expansion:

e^3 = e^{2.99575} e^{0.00425} = 20 e^{0.00425}

\approx 20 \times (1 + 0.00425)

= 20 + 20 \times 0.00425

= 20.085.

Again, I have no idea how he put on a few more digits in his head (other than his sheer brilliance), as this would require knowing the values of \ln 10 and \ln 2 to six or seven digits as well as computing the next term in the Taylor series expansion:

e^3 = e^{\ln 20} e^{3 - \ln 20}

= 20 e^{0.0042677\dots}

\approx 20 \displaystyle \left(1 + 0.0042677 + \frac{0.0042677^2}{2!} \right)

\approx 20.0855361\dots

This compares favorably with the actual answer, e^3 \approx 20.0855369\dots.
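The three-term correction is easy to reproduce in a few lines of Python:

```python
import math

# e^3 = e^{ln 20} * e^{3 - ln 20}, with the second factor expanded
# to three terms of its Taylor series
delta = 3 - math.log(20)                # 0.0042677...
estimate = 20 * (1 + delta + delta**2 / 2)
print(estimate)      # 20.0855366...
print(math.exp(3))   # 20.0855369...
```

The two values agree to six decimal places, consistent with the size of the first omitted term, 20 \delta^3/3!.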

Computing e to Any Power (Part 2)

In this series, I’m looking at a wonderful anecdote from Nobel Prize-winning physicist Richard P. Feynman from his book Surely You’re Joking, Mr. Feynman!. This story concerns a time that he computed e^x mentally for a few values of x, much to the astonishment of his companions.

Part of this story directly ties to calculus.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

As noted, this refers to the Taylor series expansion of e^x, which can be used to compute e to any power. The terms get very small very quickly because of the factorials in the denominator, thus lending the series to the computation of e^x. Indeed, this series is used by modern calculators (with a few tricks to accelerate convergence). In other words, the series from calculus explains how the mysterious “black box” of a graphing calculator actually works.
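Feynman’s term-by-term recipe (multiply the previous term by x, divide by the next number) translates directly into code; here is a sketch (exp_series is my own name, and the stopping tolerance is an arbitrary choice):

```python
import math

def exp_series(x, tol=1e-12):
    # Sum 1 + x + x^2/2! + x^3/3! + ..., generating each term from
    # the previous one exactly as Feynman describes: multiply by x
    # and divide by the next number.
    term, total, n = 1.0, 1.0, 1
    while abs(term) > tol:
        term *= x / n
        total += term
        n += 1
    return total

print(exp_series(3.3))   # 27.1126..., the value Feynman announced
print(math.exp(3.3))     # 27.1126...
```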

Continuing the story…

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

For now, I’m going to ignore how Feynman did this computation in his head and instead discuss “the table.” The setting for this story was approximately 1940, long before the advent of handheld calculators. I’ll often ask my students, “The Brooklyn Bridge got built. So how did people compute e^x before calculators were invented?” The answer is by Taylor series, which were used to produce tables of values of e^x. So, if someone wanted to find e^{3.3}, they just had a book on the shelf.

For example, the following page comes from the book Marks’ Mechanical Engineers’ Handbook, 6th edition, which was published in 1958 and which I happen to keep on my bookshelf at home.

[Table of values of e^x from Marks’ Mechanical Engineers’ Handbook]

Looking down the fifth and sixth columns of this table, we see that e^{3.3} \approx 27.11. Somebody had computed all of these things (and plenty more) using the Taylor series, and they were compiled into a book and sold to mathematicians, scientists, and engineers.

But what if we needed an approximation more accurate than four significant digits? Back in those days, there were only two options: do the Taylor series yourself, or buy a bigger book with more accurate tables.

Lessons from teaching gifted elementary students (Part 6b)

Every so often, I’ll informally teach a class of gifted elementary-school students. I greatly enjoy interacting with them, and I especially enjoy the questions they pose. Often these children pose questions that no one else will think about, and answering these questions requires a surprising depth of mathematical knowledge.

Here’s a question I once received:

255/256 to what power is equal to 1/2? And please don’t use a calculator.

Here’s how I answered this question without using a calculator… in fact, I answered it without writing anything down at all. I thought of the question as

\displaystyle \left( 1 - \epsilon \right)^x = \displaystyle \frac{1}{2}.

Taking the natural logarithm of both sides,

\displaystyle x \ln (1 - \epsilon) = \ln \displaystyle \frac{1}{2},

or

\displaystyle x \ln (1 - \epsilon) = -\ln 2.

I was fortunate that my class chose 1/2, as I had memorized (from reading and re-reading Surely You’re Joking, Mr. Feynman! when I was young) that \ln 2 \approx 0.693. Therefore, we have

x \ln (1 - \epsilon) \approx -0.693.

Next, I used the Taylor series expansion

\ln(1+t) = t - \displaystyle \frac{t^2}{2} + \frac{t^3}{3} - \dots

to reduce this to

-x \epsilon \approx -0.693,

or

x \approx \displaystyle \frac{0.693}{\epsilon}.

For my students’ problem, I had \epsilon = \frac{1}{256}, and so

x \approx 256(0.693).

So all I had left was the small matter of multiplying these two numbers. I thought of this as

x \approx 256(0.7 - 0.007).

Multiplying 256 and 7 in my head took a minute or two:

256 \times 7 = 250 \times 7 + 6 \times 7

= 250 \times (8-1) + 42

= 250 \times 8 - 250 + 42

= 2000 - 250 + 42

= 1750 + 42

= 1792.

Therefore, 256 \times 0.7 = 179.2 and 256 \times 0.007 = 1.792 \approx 1.8. Therefore, I had the answer of

x \approx 179.2 - 1.8 = 177.4 \approx 177.

So, after a couple minutes’ thought, I gave the answer of 177. I knew this would be close, but I had no idea it would be so close to the right answer, as

x = \displaystyle \frac{\displaystyle \ln \frac{1}{2} }{\displaystyle \ln \frac{255}{256}} \approx 177.0988786\dots
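Both the exact answer and the mental estimate are easy to reproduce in Python:

```python
import math

# Exact answer to the students' question: (255/256)^x = 1/2
x_exact = math.log(1 / 2) / math.log(255 / 256)
# The mental estimate: x ~ (ln 2) / epsilon with epsilon = 1/256
x_estimate = 256 * 0.693
print(x_exact)      # 177.0988786...
print(x_estimate)   # 177.408
```

The linear approximation \ln(1 - \epsilon) \approx -\epsilon overshoots slightly, which is why the mental estimate of 177.4 sits just above the exact answer.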