# Convexity and Orthogonality at Saddle Points

Today, the Texas Section of the Mathematical Association of America is holding its annual conference. Like many other professional conferences these days, this conference will be held virtually, and so my contribution to the conference is saved on YouTube and is available to the public.

Here’s the abstract of my talk: “At a saddle point (like the middle of a Pringles potato chip), the directions of maximum upward concavity and maximum downward concavity are perpendicular. The usual proof requires a fair amount of linear algebra: eigenvectors of different eigenvalues of a real symmetric matrix, like the Hessian, must be orthogonal. For this reason, the orthogonality of these two directions is not often stated in calculus textbooks, let alone proven, when the Second Partial Derivative Test for identifying local extrema and saddle points is discussed. In this talk, we present an elementary proof of the orthogonality of these two directions that requires only ideas from Calculus III and trigonometry. Not surprisingly, this proof can be connected to the usual proof from linear algebra.”

If you have 12 minutes to spare, here’s the talk.

# A Clean Calculus Joke

The change of position over time is velocity.

The change of velocity over time is acceleration.

The change of acceleration over time is jerk.

And the change of jerk over time is an election.

# Differentiation and Integration

As I tell my calculus students, differentiation is a science: there are rules to follow, and if you follow them carefully, you can compute the derivative of anything. (This leads to one of my favorite classroom activities.) Integration, however, is as much art as science; for example, see my series on different techniques for computing

$\displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x}$

The contrast between differentiation and integration was more vividly illustrated in a recent xkcd webcomic:

Source: https://xkcd.com/2117/

# My Favorite One-Liners: Part 117

I absolutely love this joke. The integral looks diabolical but can be computed mentally.

For what it’s worth, even Wolfram Alpha was unable to compute this integral exactly, though it could produce a numerical answer to as many decimal places as needed. Feel free to click the link if you’d like the (highly suggestive) answer.

# My Favorite One-Liners: Part 115

I credit Math With Bad Drawings for this new weapon in my arsenal of awful mathematical puns.

# Adding by a Form of 0 (Part 2)

Often intuitive appeals for the proof of the Product Rule rely on pictures like the following:

The above picture comes from https://mrchasemath.com/2017/04/02/the-product-rule/, which notes the intuitive appeal of the argument but also its lack of rigor.

My preferred technique is to use the above rectangle picture but make it more rigorous. Assuming that the functions $f$ and $g$ are increasing, the difference $f(x+h) g(x+h) - f(x) g(x)$ is exactly equal to the sum of the green and blue areas in the figure below.

In other words,

$f(x+h) g(x+h) - f(x) g(x) = f(x+h) [g(x+h) - g(x)] + [f(x+h) - f(x)] g(x)$,

or

$f(x+h) g(x+h) - f(x) g(x) = f(x+h) g(x+h) - f(x+h) g(x) + f(x+h) g(x) - f(x) g(x)$.

This gives a geometrical way of explaining this otherwise counterintuitive step for students not used to adding by a form of 0. I make a point of noting that we took one term, $f(x+h)$, from the first product $f(x+h) g(x+h)$, while the second term, $g(x)$, came from the second product $f(x) g(x)$. From this, the usual proof of the Product Rule follows:

$[(fg)(x)]' = \displaystyle \lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h}$

$\displaystyle = \lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x+h) g(x) + f(x+h) g(x) - f(x) g(x)}{h}$

$\displaystyle = \lim_{h \to 0} \frac{f(x+h) [g(x+h) - g(x)]}{h} + \lim_{h \to 0} \frac{[f(x+h) - f(x)] g(x)}{h}$

$\displaystyle = \lim_{h \to 0} f(x+h) \frac{g(x+h) - g(x)}{h} + \lim_{h \to 0} \frac{f(x+h) - f(x) }{h} g(x)$

$= f(x)g'(x) + f'(x) g(x)$

For what it’s worth, a Google Images search for proofs of the Product Rule yielded plenty of pictures like the one at the top of this post but did not yield any pictures remotely similar to the green and blue rectangles above. This suggests to me that the above approach of motivating this critical step of this derivation might not be commonly known.

Once students have been introduced to the idea of adding by a form of 0, my experience is that the proof of the Quotient Rule is much more palatable. I’m unaware of a geometric proof that I would be willing to try with students (a description of the best attempt I’ve seen can be found here), and so adding by a form of 0 becomes unavoidable. The proof begins

$\left[\left( \displaystyle \frac{f}{g} \right)(x) \right]' = \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h)}{ g(x+h)} - \frac{f(x)}{ g(x)}}{h}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x) g(x+h)}{ g(x) g(x+h)}}{h}$

$= \displaystyle \lim_{h \to 0} \frac{f(x+h) g(x) - f(x) g(x+h)}{ h g(x) g(x+h)}$.

At this point, I ask my students what we should add and subtract this time to complete the derivation. Given their previous experience with the Product Rule, students are usually quick to choose one factor from the first term and another factor from the second term, usually picking $f(x) g(x)$. In fact, they usually find this step easier than the analogous step in the Product Rule because this expression is more palatable than the slightly more complicated $f(x+h) g(x)$. From here, the rest of the proof follows:

$\left[\left( \displaystyle \frac{f}{g} \right)(x) \right]' = \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x) g(x+h)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x)g(x) + f(x)g(x) - f(x) g(x+h)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x)g(x)}{h} + \frac{f(x)g(x) - f(x) g(x+h)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x)g(x)}{h} - \frac{f(x) g(x+h) - f(x)g(x)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{[f(x+h) - f(x)] g(x)}{h} - \frac{f(x) [g(x+h) - g(x)]}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) - f(x) }{h} g(x) - f(x) \frac{ g(x+h) - g(x)}{h }}{g(x) g(x+h)}$

$= \displaystyle \frac{ f'(x) g(x) - f(x) g'(x)}{g(x)^2}$
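Though it isn’t part of the proofs above, both rules can be sanity-checked numerically with a symmetric difference quotient; the choices of $f = \sin$, $g = \exp$, the point $x = 0.7$, and the step size below are arbitrary.

```python
import math

def deriv(F, x, h=1e-6):
    """Approximate F'(x) with a symmetric difference quotient."""
    return (F(x + h) - F(x - h)) / (2 * h)

f, fp = math.sin, math.cos   # f and its known derivative f'
g, gp = math.exp, math.exp   # g and its known derivative g'
x = 0.7

# Product Rule: (fg)'(x) = f'(x) g(x) + f(x) g'(x)
product_error = abs(deriv(lambda t: f(t) * g(t), x)
                    - (fp(x) * g(x) + f(x) * gp(x)))

# Quotient Rule: (f/g)'(x) = [f'(x) g(x) - f(x) g'(x)] / g(x)^2
quotient_error = abs(deriv(lambda t: f(t) / g(t), x)
                     - (fp(x) * g(x) - f(x) * gp(x)) / g(x) ** 2)

print(product_error < 1e-6, quotient_error < 1e-6)  # True True
```

The symmetric quotient is accurate to roughly $h^2$, so both errors come out many orders of magnitude below the tolerance.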

P.S.

• The website https://mrchasemath.com/2017/04/02/the-product-rule/ also suggests an interesting pedagogical idea: before giving the formal proof of the Product Rule, use a particular function and the limit definition of a derivative so that students can intuitively guess the form of the rule. For example, if $g(x) = x^2$:
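As a sketch of how that computation might go (my own reconstruction, not taken from the linked post), suppose $f$ is differentiable and $g(x) = x^2$. Then

$[f(x) \, x^2]' = \displaystyle \lim_{h \to 0} \frac{f(x+h)(x+h)^2 - f(x) \, x^2}{h}$

$= \displaystyle \lim_{h \to 0} \frac{x^2 \, [f(x+h) - f(x)] + f(x+h)(2xh + h^2)}{h}$

$= \displaystyle x^2 \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} + \lim_{h \to 0} f(x+h)(2x + h)$

$= x^2 f'(x) + 2x \, f(x)$,

which has the form $g(x) f'(x) + g'(x) f(x)$, letting students guess the shape of the Product Rule before seeing its proof.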

# Adding by a Form of 0 (Part 1)

Adding by a form of 0, or adding and subtracting the same quantity, is a common technique in mathematical proofs. For example, this technique is used in the second step of the standard proof of the Product Rule in calculus:

$[(fg)(x)]' = \displaystyle \lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h}$

$\displaystyle = \lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x+h) g(x) + f(x+h) g(x) - f(x) g(x)}{h}$

$\displaystyle = \lim_{h \to 0} \left[ \frac{f(x+h) g(x+h) - f(x+h) g(x)}{h} + \frac{f(x+h) g(x) - f(x) g(x)}{h} \right]$

$\displaystyle = \lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x+h) g(x)}{h} + \lim_{h \to 0} \frac{f(x+h) g(x) - f(x) g(x)}{h}$

$\displaystyle = \lim_{h \to 0} \frac{f(x+h) [g(x+h) - g(x)]}{h} + \lim_{h \to 0} \frac{[f(x+h) - f(x)] g(x)}{h}$

$\displaystyle = \lim_{h \to 0} f(x+h) \frac{g(x+h) - g(x)}{h} + \lim_{h \to 0} \frac{f(x+h) - f(x) }{h} g(x)$

$= f(x)g'(x) + f'(x) g(x)$

Or the proof of the Quotient Rule:

$\left[\left( \displaystyle \frac{f}{g} \right)(x) \right]' = \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h)}{ g(x+h)} - \frac{f(x)}{ g(x)}}{h}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x) g(x+h)}{ g(x) g(x+h)}}{h}$

$= \displaystyle \lim_{h \to 0} \frac{f(x+h) g(x) - f(x) g(x+h)}{ h g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x) g(x+h)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x)g(x) + f(x)g(x) - f(x) g(x+h)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x)g(x)}{h} + \frac{f(x)g(x) - f(x) g(x+h)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) g(x) - f(x)g(x)}{h} - \frac{f(x) g(x+h) - f(x)g(x)}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{[f(x+h) - f(x)] g(x)}{h} - \frac{f(x) [g(x+h) - g(x)]}{h }}{g(x) g(x+h)}$

$= \displaystyle \lim_{h \to 0} \frac{ \displaystyle \frac{f(x+h) - f(x) }{h} g(x) - f(x) \frac{ g(x+h) - g(x)}{h }}{g(x) g(x+h)}$

$= \displaystyle \frac{ f'(x) g(x) - f(x) g'(x)}{g(x)^2}$

This is a technique that we expect math majors to add to their repertoire of techniques as they progress through the curriculum. I forget the exact proof, but I remember that, when I was a student in honors calculus, we had some theorem that required an argument of the form

$|x - y| = |x - A + A - B + B - C + C - D + D - E + E - F + F - y|$

$\le |x - A| + |A - B| + |B - C| + |C - D| + |D - E| + |E - F| + |F - y|$

$\le \displaystyle \frac{\epsilon}{7} + \frac{\epsilon}{7} +\frac{\epsilon}{7} +\frac{\epsilon}{7} +\frac{\epsilon}{7} +\frac{\epsilon}{7} +\frac{\epsilon}{7}$

$= \epsilon$

But while this is a technique that we expect students to master, there’s no doubt that it looks utterly foreign to a student encountering it for the first time. After all, in high school algebra, students are taught to simplify something like $x - A + A - B + B - C + C - D + D - E + E - F + F - y$ into $x-y$. If they were to convert $x-y$ into something more complicated like $x - A + A - B + B - C + C - D + D - E + E - F + F - y$, they would most definitely get points taken off.

In this brief series, I’d like to give some thoughts on getting students comfortable with this technique.

# Richard Feynman’s Integral Trick

“I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. [It] showed how to differentiate parameters under the integral sign — it’s a certain operation. It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. [If] guys at MIT or Princeton had trouble doing a certain integral, [then] I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.” (Surely You’re Joking, Mr. Feynman!)

I read Surely You’re Joking, Mr. Feynman! dozens of times when I was a teenager, and I was always curious about exactly what this integration technique actually was. So I enjoyed reading this article about the Leibniz Integration Rule: https://medium.com/dialogue-and-discourse/richard-feynmans-integral-trick-e7afae85e25c
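To illustrate the technique with a standard textbook example (my choice, not one from the article): let

$I(s) = \displaystyle \int_0^1 \frac{x^s - 1}{\ln x} \, dx$.

The integrand has no elementary antiderivative in $x$, but differentiating under the integral sign with respect to the parameter $s$ gives

$I'(s) = \displaystyle \int_0^1 \frac{\partial}{\partial s} \left( \frac{x^s - 1}{\ln x} \right) \, dx = \int_0^1 x^s \, dx = \frac{1}{s+1}$,

and since $I(0) = 0$, it follows that $I(s) = \ln(s+1)$.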