My Favorite One-Liners: Part 106

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Years ago, when I first taught Precalculus at the college level, I was starting a section on trigonometry by reminding my students of the acronym SOHCAHTOA for keeping the trig functions straight:

\sin \theta = \displaystyle \frac{\hbox{Opposite}}{\hbox{Hypotenuse}},

\cos \theta = \displaystyle \frac{\hbox{Adjacent}}{\hbox{Hypotenuse}},

\tan \theta = \displaystyle \frac{\hbox{Opposite}}{\hbox{Adjacent}}.

At this point, one of my students volunteered that a previous math teacher had taught her an acrostic to keep these straight: Some Old Hippie Caught Another Hippie Tripping On Acid.

Needless to say, I’ve been passing this pearl of wisdom on to my students ever since.

My Favorite One-Liners: Part 104

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

I use today’s quip when discussing the Taylor series expansions for sine and/or cosine:

\sin x = x - \displaystyle \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} \dots

\cos x = 1 - \displaystyle \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} \dots

To try to convince students that these intimidating formulas are indeed correct, I’ll ask them to pull out their calculators and compute the first three terms of the above expansion for x = 0.2, and then compute \sin 0.2. The results:
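Rounded to ten decimal places, the two computations give

0.2 - \displaystyle \frac{(0.2)^3}{3!} + \frac{(0.2)^5}{5!} \approx 0.1986693333 \qquad \hbox{and} \qquad \sin 0.2 \approx 0.1986693308,

so the three-term partial sum already matches \sin 0.2 to eight decimal places.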

This generates a pretty predictable reaction, “Whoa; it actually works!” Of course, this shouldn’t be a surprise; calculators rely on polynomial approximations much like the Taylor series expansion (together with a few trig identity tricks) when calculating sines and cosines. So, I’ll tell my class,

It’s not like your calculator draws a right triangle, takes out a ruler to measure the lengths of the opposite side and the hypotenuse, and divides to find the sine of an angle.

 

My Favorite One-Liners: Part 100

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Today’s quip is one that I’ll use surprisingly often:

If you ever meet a mathematician at a bar, ask him or her, “What is your favorite application of the Cauchy-Schwarz inequality?”

The point is that the Cauchy-Schwarz inequality arises surprisingly often in the undergraduate mathematics curriculum, and so I make a point to highlight it when I use it. For example, off the top of my head:

1. In trigonometry, the Cauchy-Schwarz inequality states that

|{\bf u} \cdot {\bf v}| \le \; \parallel \!\! {\bf u} \!\! \parallel \cdot \parallel \!\! {\bf v} \!\! \parallel

for all vectors {\bf u} and {\bf v}. Consequently,

-1 \le \displaystyle \frac{ {\bf u} \cdot {\bf v} } {\parallel \!\! {\bf u} \!\! \parallel \cdot \parallel \!\! {\bf v} \!\! \parallel} \le 1,

which means that the angle

\theta = \cos^{-1} \left( \displaystyle \frac{ {\bf u} \cdot {\bf v} } {\parallel \!\! {\bf u} \!\! \parallel \cdot \parallel \!\! {\bf v} \!\! \parallel} \right)

is defined. This is the measure of the angle between the two vectors {\bf u} and {\bf v}.
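For instance, if {\bf u} = \langle 1, 0 \rangle and {\bf v} = \langle 1, 1 \rangle, then {\bf u} \cdot {\bf v} = 1 while the two lengths are 1 and \sqrt{2}, so that

\theta = \cos^{-1} \left( \displaystyle \frac{1}{1 \cdot \sqrt{2}} \right) = \displaystyle \frac{\pi}{4},

matching the 45^\circ angle between these two vectors.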

2. In probability and statistics, the standard deviation of a random variable X is defined as

\hbox{SD}(X) = \sqrt{E(X^2) - [E(X)]^2}.

The Cauchy-Schwarz inequality ensures that the quantity under the square root is nonnegative, so that the standard deviation is actually defined. Also, the Cauchy-Schwarz inequality can be used to show that \hbox{SD}(X) = 0 implies that X is a constant almost surely.
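To see why the quantity under the square root is nonnegative, recall that the Cauchy-Schwarz inequality for random variables states that |E(UV)| \le \sqrt{E(U^2) \, E(V^2)}. Applying this with U = X and V = 1 gives

[E(X)]^2 = [E(X \cdot 1)]^2 \le E(X^2) \, E(1^2) = E(X^2),

so that E(X^2) - [E(X)]^2 \ge 0.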

3. Also in probability and statistics, the correlation between two random variables X and Y must satisfy

-1 \le \hbox{Corr}(X,Y) \le 1.

Furthermore, if \hbox{Corr}(X,Y)=1, then Y= aX +b for some constants a and b, where a > 0. On the other hand, if \hbox{Corr}(X,Y)=-1, then Y= aX +b for some constants a and b, where a < 0.
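For example, if Y = 3X + 2 (and X is not constant), then \hbox{Cov}(X,Y) = 3 \, \hbox{Var}(X) and \hbox{SD}(Y) = 3 \, \hbox{SD}(X), so that

\hbox{Corr}(X,Y) = \displaystyle \frac{3 \, \hbox{Var}(X)}{\hbox{SD}(X) \cdot 3 \, \hbox{SD}(X)} = 1.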

Since I’m a mathematician, I guess my favorite application of the Cauchy-Schwarz inequality appears in my first professional article, where the inequality was used to confirm some new bounds that I derived with my graduate adviser.

My Favorite One-Liners: Part 76

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Here’s a problem that might arise in trigonometry:

Compute \cos \displaystyle \frac{2017\pi}{6}.

To begin, we observe that \displaystyle \frac{2017}{6} = 336 + \displaystyle \frac{1}{6}, so that

\cos \displaystyle \frac{2017\pi}{6} = \cos \left( \displaystyle 336\pi + \frac{\pi}{6} \right).

We then remember that \cos \theta is a periodic function with period 2\pi. This means that we can add or subtract any multiple of 2\pi to the angle, and the value of the function doesn’t change. In particular, -336\pi is a multiple of 2 \pi, so that

\cos \displaystyle \frac{2017\pi}{6} = \cos \left( \displaystyle 336\pi + \frac{\pi}{6} \right)

= \cos \left( \displaystyle 336\pi + \frac{\pi}{6} - 336\pi \right)

= \cos \displaystyle \frac{\pi}{6}

= \displaystyle \frac{\sqrt{3}}{2}.
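A quick sanity check in Python (a throwaway snippet; the last couple of digits may differ slightly because of the floating-point reduction of such a large angle):

import math

# cos has period 2*pi, so cos(2017*pi/6) should equal cos(pi/6) = sqrt(3)/2.
print(math.cos(2017 * math.pi / 6))   # approximately 0.8660254
print(math.sqrt(3) / 2)               # approximately 0.8660254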

Said another way, 336\pi corresponds to 336/2 = 168 complete rotations, and the value of cosine doesn’t change with a complete rotation. So it’s OK to just throw away any even multiple of \pi when computing the sine or cosine of a very large angle. I then tell my class:

In mathematics, there’s a technical term for this idea; it’s called \pi throwing.

My Favorite One-Liners: Part 40

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

In some classes, the Greek letter \phi or \Phi naturally appears. Sometimes, it’s an angle in a triangle or a displacement when graphing a sinusoidal function. Other times, it represents the cumulative distribution function of a standard normal distribution.

Which begs the question, how should a student pronounce this symbol?

I tell my students that this is the Greek letter “phi,” pronounced “fee”. However, other mathematicians may pronounce it as “fie,” rhyming with “high”. Continuing,

Other mathematicians pronounce it as “foe.” Others, as “fum.”

My Favorite One-Liners: Part 8

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

At many layers of the mathematics curriculum, students learn that various functions can essentially commute with each other. In other words, the order in which the operations are performed doesn’t affect the final answer. Here’s a partial list off the top of my head (a quick numerical spot-check of a few of these entries follows the list):

  1. Arithmetic/Algebra: a \cdot (b + c) = a \cdot b + a \cdot c. This of course is commonly called the distributive property (and not the commutative property), but the essential idea is that the same answer is obtained whether the multiplications are performed first or the addition is performed first.
  2. Algebra: If a,b > 0, then \sqrt{ab} = \sqrt{a} \sqrt{b}.
  3. Algebra: If a,b > 0 and x is any real number, then (ab)^x = a^x b^x.
  4. Precalculus: \displaystyle \sum_{i=1}^n (a_i+b_i) = \displaystyle \sum_{i=1}^n a_i + \sum_{i=1}^n b_i.
  5. Precalculus: \displaystyle \sum_{i=1}^n c a_i = c \displaystyle \sum_{i=1}^n a_i.
  6. Calculus: If f is continuous at an interior point c, then \displaystyle \lim_{x \to c} f(x) = f(c).
  7. Calculus: If f and g are differentiable, then (f+g)' = f' + g'.
  8. Calculus: If f is differentiable and c is a constant, then (cf)' = cf'.
  9. Calculus: If f and g are integrable, then \int (f+g) = \int f + \int g.
  10. Calculus: If f is integrable and c is a constant, then \int cf = c \int f.
  11. Calculus: If f: \mathbb{R}^2 \to \mathbb{R} is integrable, \iint f(x,y) dx dy = \iint f(x,y) dy dx.
  12. Calculus: For most twice-differentiable functions f: \mathbb{R}^2 \to \mathbb{R} that arise in practice, \displaystyle \frac{\partial^2 f}{\partial x \partial y} = \displaystyle \frac{\partial^2 f}{\partial y \partial x}.
  13. Probability: If X and Y are random variables, then E(X+Y) = E(X) + E(Y).
  14. Probability: If X is a random variable and c is a constant, then E(cX) = c E(X).
  15. Probability: If X and Y are independent random variables, then E(XY) = E(X) E(Y).
  16. Probability: If X and Y are independent random variables, then \hbox{Var}(X+Y) = \hbox{Var}(X) + \hbox{Var}(Y).
  17. Set theory: If A, B, and C are sets, then A \cup (B \cap C) = (A \cup B) \cap (A \cup C).
  18. Set theory: If A, B, and C are sets, then A \cap (B \cup C) = (A \cap B) \cup (A \cap C).

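Here is a quick numerical spot-check of a few of these entries (a throwaway Python sketch with made-up numbers; it illustrates the identities rather than proving anything):

import random

random.seed(1)
a = [random.random() for _ in range(10)]
b = [random.random() for _ in range(10)]
c = 3.7

# Entries 4 and 5: summation commutes with addition and with scalar multiplication.
print(abs(sum(x + y for x, y in zip(a, b)) - (sum(a) + sum(b))) < 1e-12)   # True
print(abs(sum(c * x for x in a) - c * sum(a)) < 1e-12)                     # True

# Entries 13 and 14: expectation (here, a plain average over equally likely values) is linear.
def E(v):
    # expectation of a value drawn uniformly from the list v
    return sum(v) / len(v)

print(abs(E([x + y for x, y in zip(a, b)]) - (E(a) + E(b))) < 1e-12)       # True
print(abs(E([c * x for x in a]) - c * E(a)) < 1e-12)                       # True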
However, there are plenty of instances when two functions do not commute. Most of these, of course, are common mistakes that students make when they first encounter these concepts. Here’s a partial list off the top of my head; a concrete counterexample for the first two items appears after the list. (For all of these, the inequality sign means that the two sides do not have to be equal… though there may be special cases when equality happens to hold.)

  1. Algebra: (a+b)^x \ne a^x + b^x if x \ne 1. Important special cases are x = 2, x = 1/2, and x = -1.
  2. Algebra/Precalculus: \log_b(x+y) \ne \log_b x + \log_b y. I call this the third classic blunder.
  3. Precalculus: (f \circ g)(x) \ne (g \circ f)(x).
  4. Precalculus: \sin(x+y) \ne \sin x + \sin y, \cos(x+y) \ne \cos x + \cos y, etc.
  5. Precalculus: \displaystyle \sum_{i=1}^n (a_i b_i) \ne \displaystyle \left(\sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n b_i \right).
  6. Calculus: (fg)' \ne f' \cdot g'.
  7. Calculus: \left( \displaystyle \frac{f}{g} \right)' \ne \displaystyle \frac{f'}{g'}.
  8. Calculus: \int fg \ne \left( \int f \right) \left( \int g \right).
  9. Probability: If X and Y are dependent random variables, then E(XY) \ne E(X) E(Y).
  10. Probability: If X and Y are dependent random variables, then \hbox{Var}(X+Y) \ne \hbox{Var}(X) + \hbox{Var}(Y).

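For the first two items above, a single numerical example already shows that the two sides differ: taking a = b = 1 and x = 2 in the first, and x = y = 10 with base 10 in the second,

(1+1)^2 = 4 \ne 2 = 1^2 + 1^2, \qquad \log_{10}(10+10) = \log_{10} 20 \approx 1.301 \ne 2 = \log_{10} 10 + \log_{10} 10.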
All this to say, it’s a big deal when two functions commute, because this doesn’t happen all the time.

I wish I could remember the speaker’s name, but I heard the following one-liner at a state mathematics conference many years ago, and I’ve used it to great effect in my classes ever since. Whenever I present a property where two functions commute, I’ll say, “In other words, the order of operations does not matter. This is a big deal, because, in real life, the order of operations usually is important. For example, this morning, you probably got dressed and then went outside. The order was important.”

 

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 18

The Riemann Hypothesis (see here, here, and here) is perhaps the most famous (and also most important) unsolved problem in mathematics. Gamma (page 207) provides a way of writing down this conjecture in a form that uses only notation commonly taught in high school:

If \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \cos(b \ln r) = 0 and \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \sin(b \ln r) = 0 for some pair of real numbers a and b with 0 < a < 1, then a = \frac{1}{2}.

As noted in the book, “It seems extraordinary that the most famous unsolved problem in the whole of mathematics can be phrased so that it involves the simplest of mathematical ideas: summation, trigonometry, logarithms, and [square roots].”
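Both sums vanish simultaneously exactly when the Dirichlet eta function \eta(a+bi) = \displaystyle \sum_{r=1}^\infty \frac{(-1)^{r-1}}{r^{a+bi}} equals zero, and inside the strip 0 < a < 1 the zeros of \eta coincide with the nontrivial zeros of the Riemann zeta function. Here is a rough numerical illustration of the statement, a Python sketch using the mpmath library:

# rough numerical illustration only
from mpmath import mp, altzeta, zetazero

mp.dps = 25                             # work with 25 significant digits
rho = zetazero(1)                       # first nontrivial zero of zeta: about 0.5 + 14.1347...i
print(rho)
print(altzeta(rho))                     # eta at that zero: zero, up to the working precision
print(altzeta(mp.mpc(0.75, rho.imag)))  # same b but a = 0.75: not zero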


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I don’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 15

I did not know — until I read Gamma (page 168) — that there actually is a formula for generating the nth prime number by directly plugging in n. The catch is that it’s a mess:

p_n = 1 + \displaystyle \sum_{m=1}^{2^n} \left\lfloor n^{1/n} \left( \sum_{i=1}^m \left\lfloor \cos^2 \left( \pi \frac{(i-1)!+1}{i} \right) \right\rfloor \right)^{-1/n} \right\rfloor,

where \lfloor \cdot \rfloor represents the floor function (it appears twice: once around each \cos^2 term and once around the whole summand).

This mathematical curiosity has no practical value, as determining the 10th prime number would require computing 1 + 2 + 3 + \dots + 2^{10} = 524,800 different terms!
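Still, it’s fun to watch the formula work for small n. The following Python sketch (the helper name willans_prime is mine) evaluates the formula exactly by using two integer shortcuts: by Wilson’s theorem, \lfloor \cos^2(\pi((i-1)!+1)/i) \rfloor equals 1 precisely when i = 1 or i is prime, and the outer floor equals 1 precisely when the inner sum is at most n.

# a sketch for illustration; the helper name is not standard
from math import factorial

def willans_prime(n: int) -> int:
    """Return the nth prime by evaluating Willans' formula term by term."""
    total = 0
    for m in range(1, 2**n + 1):
        # S = sum of floor(cos^2(pi ((i-1)!+1)/i)) for i = 1, ..., m,
        # which counts i = 1 together with the primes up to m (Wilson's theorem).
        S = sum(1 for i in range(1, m + 1) if (factorial(i - 1) + 1) % i == 0)
        # floor(n^(1/n) * S^(-1/n)) is 1 when S <= n and 0 otherwise.
        total += 1 if S <= n else 0
    return 1 + total

print([willans_prime(n) for n in range(1, 7)])   # [2, 3, 5, 7, 11, 13]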


When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate level class in analytic number theory. In short, I don’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

A Visual Proof of a Remarkable Trig Identity

Strange but true (try it on a calculator):

\displaystyle \cos \left( \frac{\pi}{9} \right) \cos \left( \frac{2\pi}{9} \right) \cos \left( \frac{4\pi}{9} \right) = \displaystyle \frac{1}{8}.

Richard Feynman learned this from a friend when he was young, and it stuck with him his whole life.

Recently, the American Mathematical Monthly published a visual proof of this identity using a regular 9-gon:

[Figure: the Monthly’s visual proof of the Feynman identity, based on a regular 9-gon]

Source: https://www.facebook.com/AmerMathMonthly/photos/a.250425975006394.53155.241224542593204/1045091252206525/?type=3&theater

This same argument would work for any regular (2^n+1)-gon. For example, a regular pentagon can be used to show that

\displaystyle \cos \left( \frac{\pi}{5} \right)  \cos \left( \frac{2\pi}{5} \right) = \displaystyle \frac{1}{4},

and a regular 17-gon can be used to show that

\displaystyle \cos \left( \frac{\pi}{17} \right) \cos \left( \frac{2\pi}{17} \right) \cos \left( \frac{4\pi}{17} \right) \cos \left( \frac{8\pi}{17} \right) = \displaystyle \frac{1}{16}.
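Even without a picture, the same family of identities can be checked algebraically with the standard telescoping trick: writing N = 2^n+1, multiply the product by \sin \frac{\pi}{N} and repeatedly apply \sin 2x = 2 \sin x \cos x to obtain

\sin \displaystyle \frac{\pi}{N} \prod_{k=0}^{n-1} \cos \frac{2^k \pi}{N} = \displaystyle \frac{1}{2^n} \sin \frac{2^n \pi}{N} = \displaystyle \frac{1}{2^n} \sin \left( \pi - \frac{\pi}{N} \right) = \displaystyle \frac{1}{2^n} \sin \frac{\pi}{N}.

Dividing both sides by \sin \frac{\pi}{N} \ne 0 shows that the product equals \displaystyle \frac{1}{2^n}.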

A natural function with discontinuities (Part 2)

Yesterday, I began a short series motivated by the following article from the American Mathematical Monthly.

[Figure: the American Mathematical Monthly article that motivated this series]

Today, I’d like to talk about how this function was obtained.

If 180^\circ \le \theta \le 360^\circ, then clearly r = R: the original circle of radius R works. Furthermore, any circle that encloses the grey circular region (centered at the origin) must contain the points (-R,0) and (R,0), and the distance between these two points is 2R. Therefore, the diameter of any circle that works must be at least 2R, so a smaller circle can’t work.

[Figure: the case 180^\circ \le \theta \le 360^\circ]

The other extreme is also easy: if \theta =0^\circ, then the “circular region” is really just a single point.

Let’s now take a look at the case 0 < \theta \le 90^\circ. The smallest circle that encloses the grey region must have the points (0,0), (R,0), and (R \cos \theta, R \sin \theta) on its circumference, and so the center of the circle will be equidistant from these three points.

[Figure: the case 0 < \theta \le 90^\circ, with the angle bisector drawn as a dashed line]

The center must be on the angle bisector (the dashed line depicted in the figure), since the bisector is the locus of points equidistant from (R,0) and (R \cos \theta, R \sin \theta). Therefore, we must find the point on the bisector that is equidistant from (0,0) and (R,0). This point, together with (0,0) and (R,0), forms an isosceles triangle, and so the distance r can be found using trigonometry:

\cos \displaystyle \frac{\theta}{2} = \displaystyle \frac{R/2}{r},

or

r = \displaystyle \frac{R}{2} \sec \frac{\theta}{2}.
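As a quick cross-check of this formula, the circumradius of the triangle with vertices (0,0), (R,0), and (R\cos\theta, R\sin\theta) should agree with \frac{R}{2} \sec \frac{\theta}{2}; here is a small Python sketch (the helper name circumradius is mine):

import math

def circumradius(ax, ay, bx, by, cx, cy):
    """Circumradius of the triangle with the given vertices, via (abc) / (4 * area)."""
    a = math.dist((bx, by), (cx, cy))
    b = math.dist((ax, ay), (cx, cy))
    c = math.dist((ax, ay), (bx, by))
    area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2
    return a * b * c / (4 * area)

R = 1.0
for degrees in (30, 60, 90):
    t = math.radians(degrees)
    numeric = circumradius(0, 0, R, 0, R * math.cos(t), R * math.sin(t))
    formula = (R / 2) / math.cos(t / 2)
    print(degrees, round(numeric, 6), round(formula, 6))   # the two values agree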

This logic works up until \theta = 90^\circ, when the isosceles triangle will be a 45-45-90 triangle. However, when \theta > 90^\circ, a different picture will be needed. I’ll consider this in tomorrow’s post.