My Favorite One-Liners: Part 8

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

At many layers of the mathematics curriculum, students learn that various functions can essentially commute with each other. In other words, the order in which the operations are performed doesn’t affect the final answer. Here’s a partial list off the top of my head:

  1. Arithmetic/Algebra: a \cdot (b + c) = a \cdot b + a \cdot c. This of course is commonly called the distributive property (and not the commutative property), but the essential idea is that the same answer is obtained whether the multiplications are performed first or the addition is performed first.
  2. Algebra: If a,b > 0, then \sqrt{ab} = \sqrt{a} \sqrt{b}.
  3. Algebra: If a,b > 0 and x is any real number, then (ab)^x = a^x b^x.
  4. Precalculus: \displaystyle \sum_{i=1}^n (a_i+b_i) = \displaystyle \sum_{i=1}^n a_i + \sum_{i=1}^n b_i.
  5. Precalculus: \displaystyle \sum_{i=1}^n c a_i = c \displaystyle \sum_{i=1}^n a_i.
  6. Calculus: If f is continuous at an interior point c, then \displaystyle \lim_{x \to c} f(x) = f(c).
  7. Calculus: If f and g are differentiable, then (f+g)' = f' + g'.
  8. Calculus: If f is differentiable and c is a constant, then (cf)' = cf'.
  9. Calculus: If f and g are integrable, then \int (f+g) = \int f + \int g.
  10. Calculus: If f is integrable and c is a constant, then \int cf = c \int f.
  11. Calculus: If f: \mathbb{R}^2 \to \mathbb{R} is integrable, then \iint f(x,y) dx dy = \iint f(x,y) dy dx.
  12. Calculus: For most twice-differentiable functions f: \mathbb{R}^2 \to \mathbb{R} that arise in practice, \displaystyle \frac{\partial^2 f}{\partial x \partial y} = \displaystyle \frac{\partial^2 f}{\partial y \partial x}.
  13. Probability: If X and Y are random variables, then E(X+Y) = E(X) + E(Y).
  14. Probability: If X is a random variable and c is a constant, then E(cX) = c E(X).
  15. Probability: If X and Y are independent random variables, then E(XY) = E(X) E(Y).
  16. Probability: If X and Y are independent random variables, then \hbox{Var}(X+Y) = \hbox{Var}(X) + \hbox{Var}(Y).
  17. Set theory: If A, B, and C are sets, then A \cup (B \cap C) = (A \cup B) \cap (A \cup C).
  18. Set theory: If A, B, and C are sets, then A \cap (B \cup C) = (A \cap B) \cup (A \cap C).

However, there are plenty of instances when two functions do not commute. Most of these, of course, are common mistakes that students make when they first encounter these concepts. Here’s a partial list off the top of my head; a quick numerical spot-check of a few items from both lists follows this second list. (For all of these, the inequality sign means that the two sides do not have to be equal… though there may be special cases when equality happens to hold.)

  1. Algebra: (a+b)^x \ne a^x + b^x if x \ne 1. Important special cases are x = 2, x = 1/2, and x = -1.
  2. Algebra/Precalculus: \log_b(x+y) \ne \log_b x + \log_b y. I call this the third classic blunder.
  3. Precalculus: (f \circ g)(x) \ne (g \circ f)(x).
  4. Precalculus: \sin(x+y) \ne \sin x + \sin y, \cos(x+y) \ne \cos x + \cos y, etc.
  5. Precalculus: \displaystyle \sum_{i=1}^n (a_i b_i) \ne \displaystyle \left(\sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n b_i \right).
  6. Calculus: (fg)' \ne f' \cdot g'.
  7. Calculus: \left( \displaystyle \frac{f}{g} \right)' \ne \displaystyle \frac{f'}{g'}.
  8. Calculus: \int fg \ne \left( \int f \right) \left( \int g \right).
  9. Probability: If X and Y are dependent random variables, then E(XY) \ne E(X) E(Y).
  10. Probability: If X and Y are dependent random variables, then \hbox{Var}(X+Y) \ne \hbox{Var}(X) + \hbox{Var}(Y).
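To make the contrast concrete, here is the promised spot-check: a minimal sketch in Python (my own; the sample values 7, 3, and 2 are arbitrary).

    import math

    a, b, x = 7.0, 3.0, 2.0

    # Item 2 from the first list: square roots commute with multiplication.
    print(math.sqrt(a * b))             # about 4.5826
    print(math.sqrt(a) * math.sqrt(b))  # about 4.5826 (same)

    # Item 1 from the second list: powers do not commute with addition.
    print((a + b) ** x)                 # 100.0
    print(a ** x + b ** x)              # 58.0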

All this to say, it’s a big deal when two functions commute, because this doesn’t happen all the time.

I wish I could remember the speaker’s name, but I heard the following one-liner at a state mathematics conference many years ago, and I’ve used it to great effect in my classes ever since. Whenever I present a property where two functions commute, I’ll say, “In other words, the order of operations does not matter. This is a big deal, because, in real life, the order of operations usually is important. For example, this morning, you probably got dressed and then went outside. The order was important.”


My Favorite One-Liners: Part 1

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

One of the most common student mistakes with logarithms is thinking that

\log_b(x+y) = \log_b x + \log_b y.

Whenever students make this mistake, I call it the Third Classic Blunder. The first classic blunder, of course, is getting into a major land war in Asia. The second classic blunder is getting into a battle of wits with a Sicilian when death is on the line. And the third classic blunder is thinking that \log_b(x+y) somehow simplifies as \log_b x + \log_b y.

Sadly, as the years pass, fewer and fewer students immediately get the cultural reference. On the bright side, it’s also an opportunity to introduce a new generation to one of the great cinematic masterpieces of all time.

One of my colleagues calls this mistake the Universal Distributive Law, where the \log_b distributes just as if x+y were being multiplied by a constant. Other mistakes in this vein include \sqrt{x+y} = \sqrt{x} + \sqrt{y} and (x+y)^2 = x^2 + y^2.

Along the same lines, other classic blunders are thinking that

\left(\log_b x\right)^n  simplifies as  \log_b \left(x^n \right)

and that

\displaystyle \frac{\log_b x}{\log_b y}  simplifies as  \log_b \left( \frac{x}{y} \right).

I’m continually amazed at the number of good students who intellectually know that the above equations are false but panic and use them when solving a problem.
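For skeptical students, nothing beats plugging in actual numbers. Here’s a minimal sketch (my own, in Python) checking each of these blunders with the concrete values x = 1000 and y = 100 in base 10:

    import math

    x, y = 1000.0, 100.0

    # The Third Classic Blunder: log(x + y) vs. log x + log y
    print(math.log10(x + y))              # about 3.041
    print(math.log10(x) + math.log10(y))  # 5.0

    # The Universal Distributive Law, square-root edition
    print(math.sqrt(x + y))               # about 33.17
    print(math.sqrt(x) + math.sqrt(y))    # about 41.62

    # (log x)^n vs. log(x^n), with n = 2
    print(math.log10(x) ** 2)             # 9.0
    print(math.log10(x ** 2))             # 6.0

    # (log x)/(log y) vs. log(x/y)
    print(math.log10(x) / math.log10(y))  # 1.5
    print(math.log10(x / y))              # 1.0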

Computing e to Any Power: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprised my series examining one of Richard Feynman’s anecdotes about mentally computing e^x for three different values of x.

Part 1: Feynman’s anecdote.

Part 2: Logarithm and antilogarithm tables from the 1940s.

Part 3: A closer look at Feynman’s computation of e^{3.3}.

Part 4: A closer look at Feynman’s computation of e^{3}.

Part 5: A closer look at Feynman’s computation of e^{1.4}.


Lessons from teaching gifted elementary school students: Index (updated)

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on various lessons I’ve learned while trying to answer the questions posed by gifted elementary school students. (This is updated from my previous index.)

Part 1: A surprising pattern in some consecutive perfect squares.

Part 2: Calculating 2 to a very large exponent.

Part 3a: Calculating 2 to an even larger exponent.

Part 3b: An analysis of just how large this number actually is.

Part 4a: The chance of winning at BINGO in only four turns.

Part 4b: Pedagogical thoughts on one step of the calculation.

Part 4c: A complicated follow-up question.

Part 5a: Exponentiation is to multiplication as multiplication is to addition. So, multiplication is to addition as addition is to what? (I offered the answer of incrementation, but it was rejected: addition requires two inputs, while incrementation only requires one.)

Part 5b: Why there is no binary operation that completes the above analogy.

Part 5c: Knuth’s up-arrow notation for writing very big numbers.

Part 5d: Graham’s number, reputed to be the largest number ever to appear in a mathematical proof.

Part 6a: Calculating (255/256)^x.

Part 6b: Solving (255/256)^x = 1/2 without a calculator.

Part 7a: Estimating the size of a 1000-pound hailstone.

Part 7b: Estimating the size of a 1000-pound hailstone.

Part 8a: Statement of an unusual triangle summing problem.

Part 8b: Solution using binomial coefficients.

Part 8c: Rearranging the series.

Part 8d: Reindexing to further rearrange the series.

Part 8e: Rewriting using binomial coefficients again.

Part 8f: Finally obtaining the numerical answer.

Part 8g: Extracting the square root of the answer by hand.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 18

The Riemann Hypothesis (see here, here, and here) is perhaps the most famous (and also most important) unsolved problem in mathematics. Gamma (page 207) provides a way of writing down this conjecture in a form that only uses notation that is commonly taught in high school:

If \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \cos(b \ln r) = 0 and \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \sin(b \ln r) = 0 for some pair of real numbers a and b, then a = \frac{1}{2}.

As noted in the book, “It seems extraordinary that the most famous unsolved problem in the whole of mathematics can be phrased so that it involves the simplest of mathematical ideas: summation, trigonometry, logarithms, and [square roots].”
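In fact, the two sums are (up to sign) the real and imaginary parts of \eta(a+bi), where \eta(s) = \displaystyle \sum_{r=1}^\infty \frac{(-1)^{r-1}}{r^s} is the alternating zeta function, so the statement can be tested numerically at any known zero of the Riemann zeta function. Here’s a quick sketch in Python (my own; it assumes the mpmath library, which is not mentioned in the book):

    import mpmath

    # The first nontrivial zero of the Riemann zeta function:
    # a = 1/2 and b = 14.134725...
    s = mpmath.zetazero(1)
    print(s)

    # Both sums vanish exactly when eta(a + bi) = 0, which happens here:
    print(mpmath.altzeta(s))   # essentially 0 + 0j, up to numerical precision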


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 17

Let \pi(n) denote the number of positive prime numbers that are less than or equal to n. The prime number theorem, one of the most celebrated results in analytic number theory, states that

\pi(x) \approx \displaystyle \frac{x}{\ln x}.

This is a very difficult result to prove. However, Gamma (page 172) provides a heuristic argument that suggests that this answer might be halfway reasonable.

Consider all of the integers between 1 and x.

  • About half of these numbers won’t be divisible by 2.
  • Of those that aren’t divisible by 2, about two-thirds won’t be divisible by 3. (This isn’t exactly correct, but it’s good enough for heuristics.)
  • Of those that aren’t divisible by 2 and 3, about four-fifths won’t be divisible by 5.
  • And so on.

If we repeat for all primes less than or equal to \sqrt{x}, we can conclude that the number of prime numbers less than or equal to x is approximately

\pi(x) \approx \displaystyle x \prod_{p \le \sqrt{x}} \left(1 - \frac{1}{p} \right).

From this point, we can use Mertens’ product formula

\displaystyle \lim_{n \to \infty} \frac{1}{\ln n} \prod_{p \le n} \left(1 - \frac{1}{p} \right)^{-1} = e^\gamma

to conclude that

\displaystyle \prod_{p \le n} \left(1 - \frac{1}{p} \right) \approx \displaystyle \frac{e^{-\gamma}}{\ln n}

if n is large. Therefore,

\pi(x) \approx x \displaystyle \frac{e^{-\gamma}}{\ln \sqrt{x}} = 2 e^{-\gamma} \displaystyle \frac{x}{\ln x}.

Though not a formal proof, it’s a fast way to convince students that the unusual fraction \displaystyle \frac{x}{\ln x} ought to appear someplace in the prime number theorem.
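To see how the heuristic fares numerically, here’s a short sketch (my own, in Python; not from Gamma) comparing \pi(x), the product above, and x/\ln x at x = 100,000:

    import math

    def primes_up_to(n):
        """Sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = [False] * len(sieve[p * p::p])
        return [p for p in range(2, n + 1) if sieve[p]]

    x = 100000
    primes = primes_up_to(x)

    product = 1.0
    for p in primes:
        if p * p > x:       # only primes up to sqrt(x)
            break
        product *= 1 - 1 / p

    print(len(primes))      # pi(x) = 9592
    print(x * product)      # about 9700, close to 2 e^{-gamma} x / ln x
    print(x / math.log(x))  # about 8686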


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 14

I hadn’t heard of the worm-on-a-rope problem until I read Gamma (page 133). From Cut-The-Knot:

A worm is at one end of a rubber rope that can be stretched indefinitely. Initially the rope is one kilometer long. The worm crawls along the rope toward the other end at a constant rate of one centimeter per second. At the end of each second the rope is instantly stretched another kilometer. Thus, after the first second the worm has traveled one centimeter, and the length of the rope has become two kilometers. After the second second, the worm has crawled another centimeter and the rope has become three kilometers long, and so on. The stretching is uniform, like the stretching of a rubber band. Only the rope stretches. Units of length and time remain constant.

It turns out that, after n seconds, the fraction of the rope that the worm has traveled is H_n/N, where

H_n = \displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n}

and N is the length of the rope in centimeters. Using the estimate H_n \approx \ln n + \gamma, we see that the worm will reach the end of the rope when

H_n = N

\ln n + \gamma \approx N

\ln n \approx N - \gamma

n \approx e^{N - \gamma}.

If N = 100,000 (since the rope is initially a kilometer long), it will take a really long time for the worm to reach its destination!
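Here’s a small simulation sketch (my own, in Python) confirming the analysis for toy values of N, since N = 100,000 is hopeless to simulate directly:

    import math

    def crossing_time(N):
        """First second n at which H_n = 1 + 1/2 + ... + 1/n reaches N."""
        H, n = 0.0, 0
        while H < N:
            n += 1
            H += 1.0 / n
        return n

    gamma = 0.5772156649
    for N in (3, 5, 10):
        print(N, crossing_time(N), round(math.exp(N - gamma)))
        # prints 3 11 11, then 5 83 83, then 10 12367 12367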


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 10

Suppose p_n is the nth prime number, so that p_{n+1} - p_n is the size of the nth gap between successive prime numbers. It turns out (Gamma, page 115) that there’s an incredible theorem giving a lower bound for how large these gaps can get:

\displaystyle \limsup_{n \to \infty} \frac{(p_{n+1}-p_n) (\ln \ln \ln p_n)^2}{(\ln p_n)(\ln \ln p_n)(\ln \ln \ln \ln p_n)} \ge \displaystyle \frac{4 e^{\gamma}}{c},

where \gamma is the Euler-Mascheroni constant and c is the solution of c = 3 + e^{-c}.

Holy cow, what a formula. Let’s take a look at just a small part of it.

Let’s look at the amazing function f(x) = \ln \ln \ln \ln x, iterating the natural logarithm function four times. This function has a way of converting really large inputs into unimpressive outputs. For example, the canonical “big number” in popular culture is the googolplex, defined as 10^{10^{100}}. Well, it takes some work just to rearrange \displaystyle f \left(10^{10^{100}} \right) in a form suitable for plugging into a calculator:

\displaystyle f \left(10^{10^{100}} \right) = \displaystyle \ln \ln \ln \left( \ln 10^{10^{100}} \right)

= \displaystyle \ln \ln \ln \left( 10^{100} \ln 10 \right)

= \displaystyle \ln \ln \left[ \ln \left(10^{100} \right) + \ln \ln 10 \right]

= \displaystyle \ln \ln \left[ 100 \ln 10 + \ln \ln 10 \right]

= \displaystyle \ln \ln \left[ 100 \ln 10 \left( 1 + \frac{\ln \ln 10}{100 \ln 10} \right) \right]

= \displaystyle \ln \left( \ln [ 100 \ln 10] + \ln \left( 1 + \frac{\ln \ln 10}{100 \ln 10} \right)\right)

\approx 1.6943

after using a calculator.
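Since every quantity in the final line is now of modest size, this is easy to reproduce (a quick sketch of my own, in Python):

    import math

    ln10 = math.log(10)
    inner = math.log(100 * ln10) + math.log(1 + math.log(ln10) / (100 * ln10))
    print(math.log(inner))   # about 1.6943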

This function grows extremely slowly. What value of x gives an output of 0? Well:

\ln \ln \ln \ln x = 0

\ln \ln \ln x = 1

\ln \ln x = e

\ln x = e^e

x = e^{e^e} \approx 3,814,279.1
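Both the value and the claim are easy to confirm numerically (my own quick sketch):

    import math

    x = math.exp(math.exp(math.e))                      # e^(e^e)
    print(x)                                            # about 3814279.1
    print(math.log(math.log(math.log(math.log(x)))))   # about 0.0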

What value of x gives an output of 1? Well:

\ln \ln \ln \ln x = 1

\ln \ln \ln x = e

\ln \ln x = e^e

\ln x = e^{e^e}

x = e^{e^{e^e}}

\approx e^{3,814,279.1}

\approx 10^{3,814,279.1 \log_{10} e}

\approx 10^{1,656,520.367636}

\approx 2.3315 \times 10^{1,656,520}

That’s a number with 1,656,521 digits! At the rapid rate of 5 digits per second, it would take over 92 hours (nearly 4 days) just to write out the answer by hand!
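As a sanity check (my own sketch), the base-10 logarithm of this number fits comfortably in ordinary floating point:

    import math

    log10_x = math.exp(math.exp(math.e)) * math.log10(math.e)
    print(log10_x)                   # about 1656520.37
    print(math.floor(log10_x) + 1)   # 1656521 digits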

Finally, how large does x have to be for the output to be 2? As we’ve already seen, it’s going to be larger than a googolplex:

\displaystyle f \left(10^{10^{x}} \right) = 2

\displaystyle \ln \ln \ln \left( \ln 10^{10^{x}} \right) = 2

\displaystyle \ln \ln \ln \left( 10^{x} \ln 10 \right) = 2

\displaystyle \ln \ln \left[ \ln \left(10^{x} \right) + \ln \ln 10 \right] = 2

\displaystyle \ln \ln \left[ x\ln 10 + \ln \ln 10 \right] = 2

\displaystyle \ln \ln \left[ x\ln 10 \left( 1 + \frac{\ln \ln 10}{x\ln 10} \right) \right] = 2

\displaystyle \ln \left( \ln [ x\ln 10] + \ln \left( 1 + \frac{\ln \ln 10}{x \ln 10} \right)\right) = 2

Let’s simplify things slightly by letting y = x \ln 10:

\displaystyle \ln \left( \ln y + \ln \left( 1 + \frac{\ln \ln 10}{y} \right)\right) = 2

\displaystyle \ln y + \ln \left( 1 + \frac{\ln \ln 10}{y} \right) = e^2

This looks like a transcendental equation in y; however, we can estimate that the solution will approximately solve \ln y = e^2, since the second term on the left-hand side is small compared to \ln y. This gives the approximation y = e^{e^2} \approx 1618.18. In fact, since \ln y + \ln \left( 1 + \frac{\ln \ln 10}{y} \right) = \ln ( y + \ln \ln 10 ), the equation can be solved exactly: y = e^{e^2} - \ln \ln 10 \approx 1617.34.

Therefore, x = y / \ln 10 \approx 1617.34 / \ln 10 \approx 702.4, so that

f \left(10^{10^{702.4}} \right) \approx 2.
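A quick check of this solution (my sketch), using the same algebraic reduction to keep all the numbers finite:

    import math

    x = 702.4
    inner = x * math.log(10) + math.log(math.log(10))   # this is ln ln (10^(10^x))
    print(math.log(math.log(inner)))                    # about 2.0, confirming the solution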

One final note: despite what’s taught in high school, mathematicians typically use \log to represent natural logarithms (as opposed to base-10 logarithms), so the above formula is more properly written as

\displaystyle \limsup_{n \to \infty} \frac{(p_{n+1}-p_n) (\log \log \log p_n)^2}{(\log p_n)(\log \log p_n)(\log \log \log \log p_n)} \ge \displaystyle \frac{4 e^{\gamma}}{c}.

And this sets up a standard joke, also printed in Gamma:

Q: What noise does a drowning analytic number theorist make?

A: Log… log… log… log…


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

Computing e to Any Power (Part 5)

In this series, I’m exploring the following anecdote from the book Surely You’re Joking, Mr. Feynman!, which I read and re-read when I was young until I almost had the book memorized.

One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It’s very simple.

When I was a kid I was excited by series, and had played with this thing. I had computed e using that series, and had seen how quickly the new terms became very small.

I mumbled something about how it was easy to calculate e to any power using that series (you just substitute the power for x).

“Oh yeah?” they said. “Well, then what’s e to the 3.3?” said some joker—I think it was Tukey.

I say, “That’s easy. It’s 27.11.”

Tukey knows it isn’t so easy to compute all that in your head. “Hey! How’d you do that?”

Another guy says, “You know Feynman, he’s just faking it. It’s not really right.”

They go to get a table, and while they’re doing that, I put on a few more figures: “27.1126,” I say.

They find it in the table. “It’s right! But how’d you do it!”

“I just summed the series.”

“Nobody can sum the series that fast. You must just happen to know that one. How about e to the 3?”

“Look,” I say. “It’s hard work! Only one a day!”

“Hah! It’s a fake!” they say, happily.

“All right,” I say, “It’s 20.085.”

They look in the book as I put a few more figures on. They’re all excited now, because I got another one right.

Here are these great mathematicians of the day, puzzled at how I can compute e to any power! One of them says, “He just can’t be substituting and summing—it’s too hard. There’s some trick. You couldn’t do just any old number like e to the 1.4.”

I say, “It’s hard work, but for you, OK. It’s 4.05.”

As they’re looking it up, I put on a few more digits and say, “And that’s the last one for the day!” and walk out.

What happened was this: I happened to know three numbers—the logarithm of 10 to the base e (needed to convert numbers from base 10 to base e), which is 2.3026 (so I knew that e to the 2.3 is very close to 10), and because of radioactivity (mean-life and half-life), I knew the log of 2 to the base e, which is .69315 (so I also knew that e to the .7 is nearly equal to 2). I also knew e (to the 1), which is 2.71828.

The first number they gave me was e to the 3.3, which is e to the 2.3—ten—times e, or 27.18. While they were sweating about how I was doing it, I was correcting for the extra .0026—2.3026 is a little high.

I knew I couldn’t do another one; that was sheer luck. But then the guy said e to the 3: that’s e to the 2.3 times e to the .7, or ten times two. So I knew it was 20.something, and while they were worrying how I did it, I adjusted for the .693.

Now I was sure I couldn’t do another one, because the last one was again by sheer luck. But the guy said e to the 1.4, which is e to the .7 times itself. So all I had to do is fix up 4 a little bit!

They never did figure out how I did it.

My students invariably love this story; let’s take a look at the third calculation.

Feynman knew that e^{0.69315} \approx 2, so that

e^{0.69315} e^{0.69315} = e^{1.3863} \approx 2 \times 2 = 4.

Therefore, again using the Taylor series expansion:

e^{1.4} = e^{1.3863} e^{0.0137} = 4 e^{0.0137}

\approx 4 \times (1 + 0.0137)

= 4 + 4 \times 0.0137

\approx 4.05.

Again, I have no idea how he put on a few more digits in his head (other than his sheer brilliance), as this would require knowing the value of \ln 2 to six or seven digits as well as computing the next term in the Taylor series expansion.
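For the record, here’s a quick sketch (my own, in Python) showing how close Feynman’s estimate lands and what the next Taylor term buys:

    import math

    ln2 = 0.69315                   # the value Feynman had memorized
    t = 1.4 - 2 * ln2               # the leftover exponent, 0.0137

    print(4 * (1 + t))              # 4.0548 (first-order estimate)
    print(4 * (1 + t + t * t / 2))  # about 4.05518 (second-order estimate)
    print(math.exp(1.4))            # about 4.05520 (true value)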