# Reminding students about Taylor series (Part 3)

Sadly, at least at my university, Taylor series is the topic that is least retained by students years after taking Calculus II. They can remember the rules for integration and differentiation, but their command of Taylor series seems to slip through the cracks. In my opinion, the reason for this lack of retention is completely understandable from a student’s perspective: Taylor series is usually the last topic covered in a semester, and so students learn them quickly for the final and quickly forget about them as soon as the final is over.

Of course, when I need to use Taylor series in an advanced course but my students have completely forgotten this prerequisite knowledge, I have to get them up to speed as soon as possible. Here’s the sequence that I use to accomplish this task. Covering this sequence usually takes me about 30 minutes of class time.

I should emphasize that I present this sequence in an inquiry-based format: I ask leading questions of my students so that the answers of my students are driving the lecture. In other words, I don’t ask my students to simply take dictation. It’s a little hard to describe a question-and-answer format in a blog, but I’ll attempt to do this below.

In the previous post, I described how I lead students to the equations

$f(x) = \displaystyle \sum_{k=0}^n \frac{f^{(k)}(0)}{k!} x^k$,

and

$f(x) = \displaystyle \sum_{k=0}^n \frac{f^{(k)}(a)}{k!} (x-a)^k$,

where $f(x)$ is a polynomial and $a$ can be any number.
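
For readers who like to check such identities computationally, here’s a small Python sketch (my own illustration, with a cubic chosen arbitrarily; it’s not part of the classroom sequence) confirming that a polynomial is exactly reproduced by this finite sum for any center $a$:

```python
# Numerical check: a cubic equals its Taylor expansion about any center a.
from math import factorial

# f(x) = x^3 - 2x + 7 and its derivatives, written out by hand
derivs = [
    lambda x: x**3 - 2*x + 7,   # f
    lambda x: 3*x**2 - 2,       # f'
    lambda x: 6*x,              # f''
    lambda x: 6.0,              # f'''
]

def taylor_poly(x, a):
    """Sum_{k=0}^{3} f^(k)(a)/k! * (x - a)^k."""
    return sum(derivs[k](a) / factorial(k) * (x - a)**k for k in range(4))

for a in (0, 1, -2.5):
    for x in (0.3, 4, -1.7):
        assert abs(taylor_poly(x, a) - derivs[0](x)) < 1e-9
print("cubic matches its Taylor expansion at every tested center")
```

The same check works for any polynomial, as long as the sum runs up to its degree.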

Step 3. What happens if the original function $f(x)$ is not a polynomial? For one thing, the right-hand side can no longer be a finite sum: as long as the sum on the right-hand side stops at some degree $n$, the right-hand side is a polynomial, while the left-hand side is assumed not to be one.

To resolve this, we can cross our fingers and hope that

$f(x) = \displaystyle \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!} x^k$,

or

$f(x) = \displaystyle \sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!} (x-a)^k$.

In other words, let’s make the right-hand side an infinite series, and hope for the best. This is the definition of the Taylor series expansions of $f$.
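
If you’d like to see this “hope for the best” play out numerically, here’s a small Python sketch (my own illustration, not part of the classroom sequence) showing the partial sums of the Taylor series for $e^x$ about $0$ closing in on $e^x$:

```python
# Partial sums of the Taylor series of e^x about 0 approach e^x.
from math import exp, factorial

def taylor_partial_sum(x, n):
    """Sum_{k=0}^{n} x^k / k!  -- the degree-n Taylor polynomial of e^x at 0."""
    return sum(x**k / factorial(k) for k in range(n + 1))

x = 2.0
for n in (2, 5, 10, 20):
    print(n, taylor_partial_sum(x, n), exp(x))
# The partial sums settle down to e^2 = 7.389056... as n grows.
```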

Note: At this point in the review, I can usually see the light go on in my students’ eyes. Usually, they can now recall their work with Taylor series in the past… and they wonder why they weren’t taught this topic inductively (like I’ve tried to do in the above exposition) instead of deductively (like the presentation in most textbooks).

While we’d like to think that the Taylor series expansions always work, there are at least two things that can go wrong.

1. First, the sum on the right is an infinite series, and there’s no guarantee that the series will converge in the first place. There are plenty of examples of series that diverge, like $\displaystyle \sum_{k=0}^\infty \frac{1}{k+1}$.
2. Second, even if the series converges, there’s no guarantee that the series will converge to the “right” answer $f(x)$. The canonical example of this behavior is $f(x) = e^{-1/x^2}$ (with $f(0)$ defined to be $0$), which is so “flat” near $x=0$ that every single derivative of $f$ is equal to $0$ at $x =0$. So its Taylor series about $0$ converges to $0$ everywhere, and not to $f(x)$.
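
Both complications are easy to see numerically. Here’s a short Python sketch (again my own illustration, not from the course) of the divergence in the first complication and the extreme flatness behind the second:

```python
from math import exp, log

# (1) Divergence: the partial sums of sum 1/(k+1) keep growing
#     (roughly like ln n), so the series has no finite value.
def harmonic_partial(n):
    return sum(1.0 / (k + 1) for k in range(n + 1))

for n in (10, 1_000, 100_000):
    print(n, harmonic_partial(n), log(n))

# (2) Flatness: f(x) = exp(-1/x^2) is astonishingly small near 0 --
#     the behavior that forces every derivative at x = 0 to vanish.
for x in (0.5, 0.2, 0.1):
    print(x, exp(-1 / x**2))
# At x = 0.1, f(x) = e^{-100}, which is about 3.7e-44.
```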

For the first complication, there are multiple tests covered in Calculus II, especially the Ratio Test, for determining the values of $x$ at which the series converges. This establishes a radius of convergence for the series.
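
The Ratio Test idea can also be sketched numerically: for a power series $\displaystyle \sum a_k x^k$, the ratio $|a_k/a_{k+1}|$ approximates the radius of convergence for large $k$. The helper below is my own hypothetical illustration, not a standard routine:

```python
# Sketch of the Ratio Test idea: for sum a_k x^k, the radius of
# convergence is the limit of |a_k / a_{k+1}| (when that limit exists).
from math import factorial

def radius_estimate(coeff, k):
    """|a_k / a_{k+1}| for large k approximates the radius of convergence."""
    return abs(coeff(k) / coeff(k + 1))

# e^x: a_k = 1/k!, so the ratio is k+1, which grows without bound:
# the radius of convergence is infinite.
print(radius_estimate(lambda k: 1 / factorial(k), 50))  # approximately 51, and growing with k

# 1/(1-x): a_k = 1, so the ratio is always 1: the radius of convergence is 1.
print(radius_estimate(lambda k: 1.0, 50))
```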

The second complication is far more difficult to address rigorously. The good news is that, for all commonly occurring functions in the secondary mathematics curriculum, the Taylor series of a function converges to the function itself wherever the series converges. So we will happily ignore this complication for the remainder of the presentation.

Indeed, it’s remarkable that the series should converge to $f(x)$ at all. Think about the meaning of the terms on the right-hand side:

1. $f(a)$ is the $y$-coordinate at $x=a$.
2. $f'(a)$ is the slope of the curve at $x=a$.
3. $f''(a)$ is a measure of the concavity of the curve at — you guessed it — $x=a$.
4. $f'''(a)$ is an even more subtle description of the curve… once again, at $x=a$.

In other words, if the Taylor series converges to $f(x)$, then every twist and turn of the function, even at points far away from $x=a$, is encoded somehow in the shape of the curve at the one point $x=a$. So analytic functions (functions whose Taylor series converge to the original function) are indeed quite remarkable.