# My Favorite One-Liners: Part 50

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s today’s one-liner: “To prove that two things are equal, show that the difference is zero.” This principle is surprisingly handy in the secondary mathematics curriculum. For example, it underlies the proof of the Mean Value Theorem, one of the most important theorems in calculus, which in turn serves as the basis for curve sketching and for the uniqueness of antiderivatives (up to a constant).

And I have a great story that goes along with this principle, from 30 years ago.

I forget the exact question from Apostol’s Calculus, but there was some equation that I had to prove on my weekly homework assignment that, for the life of me, I just couldn’t get. And for no good reason, I had a flash of insight: subtract the right-hand side from the left. While it was very difficult to turn the left side into the right side, it turned out that, for this particular problem, it was very easy to show that the difference was zero. (Again, I wish I could remember exactly which question this was so that I could show this technique and this particular example to my own students.)

So I finished my homework, and I went outside to a local basketball court and worked on my jump shot.

Later that week, I went to class, and there was a great buzz in the air. It took ten seconds to realize that everyone was up in arms about how to do this particular problem. Despite the intervening 30 years, I remember the scene as clear as a bell. I can still hear one of my classmates ask me, “Quintanilla, did you get that one?”

I said with great pride, “Yeah, I got it.” And I showed them my work.

And neither before then nor since have I heard cussing of the intensity that followed.

Truth be told, probably the only reason that I remember this story from my adolescence is that I usually was the one who had to ask for help on the hardest homework problems in that Honors Calculus class. This may have been the one time in that entire two-year calculus sequence that I actually figured out a homework problem that had stumped everybody else.

# My Favorite One-Liners: Part 29

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is one that I’ll use when I need my students to remember something from a previous course — especially when it’s a difficult concept from a previous course — that somebody else taught them in a previous semester.

For example, in my probability class, I’ll introduce the Poisson distribution

$P(X = k) = e^{-\mu} \displaystyle \frac{\mu^k}{k!}$,

where $\mu > 0$ and the permissible values of $k$ are non-negative integers.

In particular, since these are probabilities and exactly one of these values must occur, this means that

$\displaystyle \sum_{k=0}^\infty e^{-\mu} \frac{\mu^k}{k!} = 1$.

At this point, I want students to remember that they’ve actually seen this before, so I replace $\mu$ by $x$ and then multiply both sides by $e^x$:

$\displaystyle \sum_{k=0}^\infty \frac{x^k}{k!} = e^x$.

Of course, this is the Taylor series expansion for $e^x$. However, my experience is that most students have decidedly mixed feelings about Taylor series; often, it’s the last thing that they learn in Calculus II, which means it’s the first thing that they forget when the semester is over. Also, most students have a really hard time with Taylor series when they first learn about them.

So here’s my one-liner that I’ll say at this point: “Does this bring back any bad memories for anyone? Perhaps like an old Spice Girls song?” And this never fails to get an understanding laugh before I remind them about Taylor series.
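As a quick numerical sanity check (my own illustration, not part of the original lesson), the Poisson probabilities really do sum to 1 for any $\mu > 0$, which is just the Taylor series for $e^x$ in disguise:

```python
import math

def poisson_partial_sum(mu, terms=60):
    """Partial sum of the Poisson probabilities e^(-mu) * mu^k / k!."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(terms))

# Multiplying through by e^mu turns this into the Taylor series for e^x at x = mu.
print(poisson_partial_sum(3.5))  # ≈ 1.0
```

Sixty terms is far more than enough for any modest $\mu$; the factorials in the denominator crush the tail very quickly.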

# My Favorite One-Liners: Part 25

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Consider the integral

$\displaystyle \int_0^2 2x(1-x^2)^3 \, dx$

The standard technique — other than multiplying it out — is using the substitution $u = 1-x^2$. With this substitution $du = -2x \, dx$. Also, $x = 0$ corresponds to $u = 1$, while $x = 2$ corresponds to $u = -3$. Therefore,

$\displaystyle\int_0^2 2x(1-x^2)^3 \, dx = - \displaystyle\int_0^2 (-2x)(1-x^2)^3 \, dx = -\displaystyle\int_1^{-3} u^3 \, du$.

My one-liner at this point is to tell my students, “At this point, about 10,000 volts of electricity should be going down your spine.” I’ll use this line when a very unexpected result happens — like a “left” endpoint that’s greater than the “right” endpoint. Naturally, for this problem, the next step — though not logically necessary, it’s psychologically reassuring — is to absorb the negative sign by flipping the endpoints:

$\displaystyle\int_0^2 2x(1-x^2)^3 \, dx = -\displaystyle\int_1^{-3} u^3 \, du = \displaystyle\int_{-3}^1 u^3 \, du$,

and then the calculation can continue.
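For anyone who wants to double-check the substitution, here’s a minimal numerical sketch of my own (the midpoint-rule helper is an assumption, not part of the original worked example):

```python
def midpoint_sum(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The original x-integral and the u-integral after substitution agree.
original = midpoint_sum(lambda x: 2 * x * (1 - x**2)**3, 0, 2)
substituted = midpoint_sum(lambda u: u**3, -3, 1)
print(original, substituted)  # both ≈ -20.0
```

The exact value is $\int_{-3}^1 u^3 \, du = \frac{1}{4} - \frac{81}{4} = -20$, so a negative answer here is no cause for alarm.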

# My Favorite One-Liners: Part 24

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s a problem that could appear in my class in probability or statistics:

Let $f(x) = 3x^2$ be a probability density function for $0 \le x \le 1$. Find $F(x) = P(X \le x)$, the cumulative distribution function of $X$.

A student’s first reaction might be to set up the integral as

$\displaystyle \int_0^x 3x^2 \, dx$

The problem with this set-up, of course, is that the letter $x$ has already been reserved as the right endpoint for this definite integral. Therefore, inside the integral, we should choose any other letter — just not $x$ — as the dummy variable.

Which sets up my one-liner: “In the words of the great philosopher Jean-Luc Picard: Plenty of letters left in the alphabet.”

We then write the integral as something like

$\displaystyle \int_0^x 3t^2 \, dt$

and then get on with the business of finding $F(x)$.
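As a small check of my own (not from the original post), integrating the density numerically with a proper dummy variable matches the closed form $F(x) = x^3$:

```python
def cdf_numeric(x, n=10000):
    """Approximate F(x) = integral from 0 to x of 3 t^2 dt (note the dummy variable t)."""
    h = x / n
    return sum(3 * ((i + 0.5) * h)**2 for i in range(n)) * h

print(cdf_numeric(0.5))  # ≈ 0.125, matching F(x) = x^3
```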

# My Favorite One-Liners: Part 8

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

At many layers of the mathematics curriculum, students learn that various operations can essentially commute with each other. In other words, the order in which the operations are performed doesn’t affect the final answer. Here’s a partial list off the top of my head:

1. Arithmetic/Algebra: $a \cdot (b + c) = a \cdot b + a \cdot c$. This of course is commonly called the distributive property (and not the commutative property), but the essential idea is that the same answer is obtained whether the multiplications are performed first or if the addition is performed first.
2. Algebra: If $a,b > 0$, then $\sqrt{ab} = \sqrt{a} \sqrt{b}$.
3. Algebra: If $a,b > 0$ and $x$ is any real number, then $(ab)^x = a^x b^x$.
4. Precalculus: $\displaystyle \sum_{i=1}^n (a_i+b_i) = \displaystyle \sum_{i=1}^n a_i + \sum_{i=1}^n b_i$.
5. Precalculus: $\displaystyle \sum_{i=1}^n c a_i = c \displaystyle \sum_{i=1}^n a_i$.
6. Calculus: If $f$ is continuous at an interior point $c$, then $\displaystyle \lim_{x \to c} f(x) = f(c)$.
7. Calculus: If $f$ and $g$ are differentiable, then $(f+g)' = f' + g'$.
8. Calculus: If $f$ is differentiable and $c$ is a constant, then $(cf)' = cf'$.
9. Calculus: If $f$ and $g$ are integrable, then $\int (f+g) = \int f + \int g$.
10. Calculus: If $f$ is integrable and $c$ is a constant, then $\int cf = c \int f$.
11. Calculus: If $f: \mathbb{R}^2 \to \mathbb{R}$ is integrable, $\iint f(x,y) dx dy = \iint f(x,y) dy dx$.
12. Calculus: For most twice-differentiable functions $f: \mathbb{R}^2 \to \mathbb{R}$ that arise in practice, $\displaystyle \frac{\partial^2 f}{\partial x \partial y} = \displaystyle \frac{\partial^2 f}{\partial y \partial x}$.
13. Probability: If $X$ and $Y$ are random variables, then $E(X+Y) = E(X) + E(Y)$.
14. Probability: If $X$ is a random variable and $c$ is a constant, then $E(cX) = c E(X)$.
15. Probability: If $X$ and $Y$ are independent random variables, then $E(XY) = E(X) E(Y)$.
16. Probability: If $X$ and $Y$ are independent random variables, then $\hbox{Var}(X+Y) = \hbox{Var}(X) + \hbox{Var}(Y)$.
17. Set theory: If $A$, $B$, and $C$ are sets, then $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$.
18. Set theory: If $A$, $B$, and $C$ are sets, then $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$.

However, there are plenty of instances when two operations do not commute. Most of these, of course, are common mistakes that students make when they first encounter these concepts. Here’s a partial list off the top of my head. (For all of these, the inequality sign means that the two sides do not have to be equal… though there may be special cases when equality happens to hold.)

1. Algebra: $(a+b)^x \ne a^x + b^x$ if $x \ne 1$. Important special cases are $x = 2$, $x = 1/2$, and $x = -1$.
2. Algebra/Precalculus: $\log_b(x+y) \ne \log_b x + \log_b y$. I call this the third classic blunder.
3. Precalculus: $(f \circ g)(x) \ne (g \circ f)(x)$.
4. Precalculus: $\sin(x+y) \ne \sin x + \sin y$, $\cos(x+y) \ne \cos x + \cos y$, etc.
5. Precalculus: $\displaystyle \sum_{i=1}^n (a_i b_i) \ne \displaystyle \left(\sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n b_i \right)$.
6. Calculus: $(fg)' \ne f' \cdot g'$.
7. Calculus: $\left( \displaystyle \frac{f}{g} \right)' \ne \displaystyle \frac{f'}{g'}$.
8. Calculus: $\int fg \ne \left( \int f \right) \left( \int g \right)$.
9. Probability: If $X$ and $Y$ are dependent random variables, then $E(XY) \ne E(X) E(Y)$.
10. Probability: If $X$ and $Y$ are dependent random variables, then $\hbox{Var}(X+Y) \ne \hbox{Var}(X) + \hbox{Var}(Y)$.
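A tiny exact example of item 9, of my own devising (not from the original list): take $X$ uniform on $\{0, 1\}$ and $Y = X$, so the two variables are as dependent as possible. Linearity of expectation still holds, but the expectation of the product does not factor:

```python
# X takes the values 0 and 1 with probability 1/2 each; Y = X (fully dependent).
outcomes = [0, 1]
E_X = sum(outcomes) / len(outcomes)                   # E(X) = 0.5
E_Y = E_X                                             # Y = X, so E(Y) = E(X)
E_XY = sum(x * x for x in outcomes) / len(outcomes)   # E(XY) = E(X^2) = 0.5

# E(X+Y) = E(X) + E(Y) always, but here E(XY) != E(X) E(Y).
print(E_X + E_Y, E_XY, E_X * E_Y)  # 1.0, 0.5, 0.25
```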

All this to say, it’s a big deal when two functions commute, because this doesn’t happen all the time.

I wish I could remember the speaker’s name, but I heard the following one-liner at a state mathematics conference many years ago, and I’ve used it to great effect in my classes ever since. Whenever I present a property where two functions commute, I’ll say, “In other words, the order of operations does not matter. This is a big deal, because, in real life, the order of operations usually is important. For example, this morning, you probably got dressed and then went outside. The order was important.”

# What I Learned by Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post.

When I was researching for my series of posts on conditional convergence, especially examples related to the constant $\gamma$, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate-level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites along with the page numbers in the book — while giving the book a very high recommendation.

Part 1: The smallest value of $n$ so that $1 + \frac{1}{2} + \dots + \frac{1}{n} > 100$ (page 23).

Part 2: Except for a couple of select values of $m$, the sum $\frac{1}{m} + \frac{1}{m+1} + \dots + \frac{1}{n}$ is never an integer (pages 24-25).

Part 3: The sum of the reciprocals of the twin primes converges (page 30).

Part 4: Euler somehow calculated $\zeta(26)$ without a calculator (page 41).

Part 5: The integral called the Sophomore’s Dream (page 44).

Part 6: St. Augustine’s thoughts on mathematicians — in context, astrologers (page 65).

Part 7: The probability that two randomly selected integers have no common factors is $6/\pi^2$ (page 68).

Part 8: The series for quickly computing $\gamma$ to high precision (page 89).

Part 9: An observation about the formulas for $1^k + 2^k + \dots + n^k$ (page 81).

Part 10: A lower bound for the gap between successive primes (page 115).

Part 11: Two generalizations of $\gamma$ (page 117).

Part 12: Relating the harmonic series to meteorological records (page 125).

Part 13: The crossing-the-desert problem (page 127).

Part 14: The worm-on-a-rope problem (page 133).

Part 15: An amazingly nasty formula for the $n$th prime number (page 168).

Part 16: A heuristic argument for the form of the prime number theorem (page 172).

Part 17: Oops.

Part 18: The Riemann Hypothesis can be stated in a form that can be understood by high school students (page 207).

# Computing e to Any Power: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprised my series examining one of Richard Feynman’s anecdotes about mentally computing $e^x$ for three different values of $x$.

Part 1: Feynman’s anecdote.

Part 2: Logarithm and antilogarithm tables from the 1940s.

Part 3: A closer look at Feynman’s computation of $e^{3.3}$.

Part 4: A closer look at Feynman’s computation of $e^{3}$.

Part 5: A closer look at Feynman’s computation of $e^{1.4}$.

# A Natural Function with Discontinuities: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprised my series on a natural function that nevertheless has discontinuities.

Part 1: Introduction

Part 2: Derivation of this piecewise function, beginning.

Part 3: Derivation of the piecewise function, ending.

# What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 11

The Euler-Mascheroni constant $\gamma$ is defined by

$\gamma = \displaystyle \lim_{n \to \infty} \left( \sum_{r=1}^n \frac{1}{r} - \ln n \right)$.

What I didn’t know, until reading Gamma (page 117), is that there are at least two ways to generalize this definition.

First, $\gamma$ may be thought of as

$\gamma = \displaystyle \lim_{n \to \infty} \left( \sum_{r=1}^n \frac{1}{\hbox{length of~} [0,r]} - \ln n \right)$,

and so this can be generalized to two dimensions as follows:

$\delta = \displaystyle \lim_{n \to \infty} \left( \sum_{r=2}^n \frac{1}{\pi (\rho_r)^2} - \ln n \right)$,

where $\rho_r$ is the radius of the smallest disk in the plane containing at least $r$ points $(a,b)$ so that $a$ and $b$ are both integers. This new constant $\delta$ is called the Masser-Gramain constant; like $\gamma$, the exact value isn’t known.

Second, let $f(x) = \displaystyle \frac{1}{x}$. Then $\gamma$ may be written as

$\gamma = \displaystyle \lim_{n \to \infty} \left( \sum_{r=1}^n f(r) - \int_1^n f(x) \, dx \right)$.

Euler (not surprisingly) had the bright idea of changing the function $f(x)$ to any other positive, decreasing function, such as

$f(x) = x^a, \qquad -1 \le a < 0$,

producing Euler’s generalized constants. Alternatively (from Stieltjes), we could choose

$f(x) = \displaystyle \frac{ (\ln x)^m }{x}$.
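These generalized constants can be estimated straight from the limit definition. Here’s a minimal sketch of my own (the function name and the choice $a = -1/2$ are my assumptions, not the book’s); numerically, the limit for $a = -1/2$ matches $\zeta(1/2) + 2 \approx 0.5396$:

```python
def generalized_euler_constant(n, a=-0.5):
    """Estimate lim ( sum_{r=1}^n r^a  -  integral_1^n x^a dx ) for -1 < a < 0."""
    partial_sum = sum(r**a for r in range(1, n + 1))
    integral = (n**(a + 1) - 1) / (a + 1)  # closed form of the integral for a != -1
    return partial_sum - integral

print(generalized_euler_constant(10**6))  # ≈ 0.54
```

Convergence is slow, on the order of $n^{a}$, so large $n$ is needed for more digits.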

When I was researching for my series of posts on conditional convergence, especially examples related to the constant $\gamma$, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate-level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

# What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 8

I had always wondered how the constant $\gamma$ can be computed to high precision. I probably should have known this already, but here’s one way that it can be computed (Gamma, page 89):

$\gamma = \displaystyle \sum_{k=1}^n \frac{1}{k} - \ln n - \frac{1}{2n} + \sum_{k=1}^{\infty} \frac{B_{2k}}{2k \cdot n^{2k}}$,

where $B_{2k}$ is the $2k$th Bernoulli number. (Strictly speaking, the series in $k$ is asymptotic rather than convergent, so in practice it is truncated after a few terms; even so, a modest value of $n$ already yields many digits of accuracy.)
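Here’s a short numerical sketch of my own using the standard Euler-Maclaurin form of this expansion (which carries a $-1/(2n)$ correction term and keeps only the first four terms of the asymptotic Bernoulli series):

```python
import math

# The first few Bernoulli numbers B_2, B_4, B_6, B_8 (hard-coded).
BERNOULLI = {2: 1 / 6, 4: -1 / 30, 6: 1 / 42, 8: -1 / 30}

def euler_mascheroni(n=10):
    """Estimate gamma as H_n - ln n - 1/(2n) + sum of B_{2k} / (2k n^{2k})."""
    h_n = sum(1 / k for k in range(1, n + 1))  # harmonic number H_n
    tail = sum(BERNOULLI[2 * k] / (2 * k * n ** (2 * k)) for k in range(1, 5))
    return h_n - math.log(n) - 1 / (2 * n) + tail

print(euler_mascheroni(10))  # ≈ 0.5772156649
```

Even with $n = 10$, this agrees with $\gamma = 0.5772156649\ldots$ to about eleven decimal places, which illustrates why such expansions are used for high-precision computation.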

When I was researching for my series of posts on conditional convergence, especially examples related to the constant $\gamma$, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate-level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.