# Exponential growth and decay (Part 9): Amortization tables

This post is inspired by one of the questions that I pose to our future high school math teachers during our Friday question-and-answer sessions. In these sessions, I play the role of a middle- or high-school student who’s asking a tricky question to his math teacher. Here’s the question:

A student asks, “My father bought a house for $200,000 at 12% interest. He told me that by the time he finishes paying for the house, it will have cost him more than $500,000. How is that possible? 12% of $200,000 is only $24,000.”

Without fail, these future teachers don’t have a good response to this question. Indeed, my experience is that most young adults (including college students) have never used an amortization table, which is the subject of today’s post.

In the past few posts, we have considered the solution of the following recurrence relation, which is often used to model the payment of a mortgage or of credit-card debt:

$A_{n+1} = r A_n - k$

With this difference equation, the rate at which the principal is reduced can be simply computed using Microsoft Excel. This tool is called an amortization schedule or an amortization table; see E-How for instructions on how to build one. Here’s a sample Excel spreadsheet that I’ll be illustrating below: Amortization schedule. My personal experience is that many math majors have never seen such a spreadsheet, even though they are familiar with compound interest problems and certainly have the mathematical tools to understand it.

Here’s a screen capture from the spreadsheet:

The terms of the loan are typed into Cells B1 (length of loan, in years), B2 (annual percentage rate), and B3 (initial principal). Cell B4 is computed from this information using the Microsoft Excel command $\hbox{PMT}$:

$=\hbox{PMT}(\hbox{B2}/12,\hbox{B1}*12,-\hbox{B3})$

This is the amount that must be paid every month in order to pay off the loan in the prescribed number of years. Of course, there is a formula for this:

$M = \displaystyle \frac{Pr}{12 \displaystyle \left[1 - \left( 1 + \frac{r}{12} \right)^{-12t} \right]}$

I won’t go into the derivation of this formula here, as it’s a bit complicated. Notice that this formula does not include escrow, points, closing costs, etc. This is strictly the amount of money that’s needed to pay down the principal.
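To make the formula concrete, here is a minimal Python sketch (not part of the original spreadsheet) that evaluates $M$ for the loan in the opening question; the 30-year term is an assumption for illustration, since the question doesn't state one:

```python
# Monthly payment M = P*r / (12 * [1 - (1 + r/12)^(-12t)]), as above.
def monthly_payment(P, r, t):
    """P: principal, r: annual rate as a decimal, t: term in years."""
    return P * r / (12 * (1 - (1 + r / 12) ** (-12 * t)))

# The opening question's loan, assuming a 30-year term:
M = monthly_payment(200_000, 0.12, 30)
total_paid = M * 12 * 30
print(f"{M:.2f}")           # roughly 2057 dollars per month
print(f"{total_paid:.2f}")  # roughly 740,600 dollars -- well over $500,000
```

This also answers the student's question directly: the 12% is charged each year on the remaining balance, not once, so the total interest over the life of the loan far exceeds $24,000.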

The table, beginning in Row 8 of the above picture, shows how quickly the principal will be paid off. In row 8, the interest that’s paid for that month is computed by

$=\hbox{B8} * \hbox{B\$2}/12$

Therefore, the amount of the monthly payment that actually goes toward paying down the principal will be

$= \hbox{B\$4} - \hbox{C8}$

Column E provides an opportunity to pay something extra each month; more on this later. So, after taking into account the payments in columns D and E, the amount remaining on the loan is recorded in Cell F8:

$= \hbox{B8} - \hbox{D8} - \hbox{E8}$

This amount is then copied into Cell B9, and then the pattern can be filled down.
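The same row logic can be expressed outside of Excel. Here is a Python sketch of the columns described above (the function name and the uniform `extra` payment are my own simplifications; the post's scenario below applies extra payments only during the first year):

```python
def months_to_payoff(P, r, t, extra=0.0):
    """Iterate the spreadsheet's rows; return the number of monthly payments.

    P: principal, r: annual rate (decimal), t: scheduled term in years.
    extra: additional amount paid every month (a simplification -- the
    post's example applies it only in the first 12 months).
    """
    M = P * r / (12 * (1 - (1 + r / 12) ** (-12 * t)))  # cell B4 (PMT)
    balance, month = P, 0
    while balance > 0.005:                # down to the last half-cent
        interest = balance * r / 12       # column C: balance times monthly rate
        principal = M - interest          # column D: payment minus interest
        balance -= principal + extra      # column F: remaining balance
        month += 1
    return month

print(months_to_payoff(200_000, 0.12, 30))             # 360 months
print(months_to_payoff(200_000, 0.12, 30, extra=200))  # noticeably fewer
```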

The yellow graph shows how quickly the balance of the loan is paid off over the length of the loan. A picture is worth a thousand words: in the initial years of the loan, most of the payments are gobbled up by the interest, and so the principal is paid off slowly. Only in the latter years of the loan is the principal paid off quickly.

So, it stands to reason that any extra payments in the initial months and years of the loan can do wonders for paying off the loan quickly. For example, here’s a screenshot of what happens if an extra $200/month is paid only in the first 12 months of the loan: A definite bend in the graph is evident in the initial 12 months until the normal payment is resumed in month 13. As a result of those extra payments, the curve now intersects the horizontal axis around month 340. In other words, 20 fewer months are required to pay off the loan. Stated another way, the extra payments in the first year cost an extra $\$200 \times 12 = \$2400$. However, in the long run, those payments saved about $\$536.82 \times 20 \approx \$10,700$!

# Exponential growth and decay (Part 8): Paying off credit-card debt via recurrence relations

The following problem in differential equations has a very practical application for anyone who has either (1) taken out a loan to buy a house or a car or (2) is trying to pay off credit-card debt. To my surprise, most math majors haven’t thought through the obvious applications of exponential functions as a means of engaging their future students, even though the topic is directly pertinent to their lives (both the students’ and the teachers’).

You have a balance of $2,000 on your credit card. Interest is compounded continuously with a rate of growth of 25% per year. If you pay the minimum amount of $50 per month (or $600 per year), how long will it take for the balance to be paid off?

In previous posts, I approached this problem using differential equations. There’s another way to approach this problem that avoids using calculus that, hypothetically, is within the grasp of talented Precalculus students. Instead of treating this problem as a differential equation, we instead treat it as a first-order difference equation (also called a recurrence relation):

$A_{n+1} = r A_n - k$

The idea is that the amount owed is multiplied by a factor $r$ (which is greater than 1), and from this product the amount paid is deducted. With this approach — and unlike the approach using calculus — the payment period would be each month and not per year. Therefore, we can write

$A_{n+1} = \displaystyle \left( 1 + \frac{0.25}{12} \right) A_n - 50$

Notice that the meaning of the 25% has changed somewhat… it’s no longer the relative rate of growth, as the 25% has been divided equally among the 12 months. The solution of this difference equation is

$A_n = r^n P - k \left( \displaystyle \frac{1 - r^n}{1-r} \right)$
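This closed form (with $A_0 = P$) follows from unrolling the recurrence a few times and summing the resulting geometric series:

$A_1 = rP - k$

$A_2 = r A_1 - k = r^2 P - k(r + 1)$

$A_3 = r A_2 - k = r^3 P - k(r^2 + r + 1)$

$A_n = r^n P - k(r^{n-1} + \dots + r + 1) = r^n P - k \left( \displaystyle \frac{1 - r^n}{1 - r} \right)$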

A great advantage of using a difference equation to solve this problem is that the solution can be easily checked with a simple spreadsheet. (Indeed, pedagogically, I would recommend showing a spreadsheet like this before doing any of the calculations of the previous few posts, so that students can begin to wrap their heads around the notion of a difference equation before the solution is presented.)

To start the spreadsheet, I wrote “Step” in Cell A1 and “Amount” in Cell B1. Then I entered the initial conditions: $0$ in Cell A2 and $2000$ in Cell B2. (In the screenshot below, I changed the format of column B to show dollars and cents.) Next, I entered $=\hbox{A2}+1$ in Cell A3 and

$=\hbox{B2}*(1+0.25/12)-50$

in Cell B3. Finally, I copied the pattern in Cells A3 and B3 downward. Here’s the result:
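The same iteration is easy to reproduce outside of Excel. Here is a minimal Python sketch of the recurrence; the 87-month payoff it reports is my own computed figure, not a number from the spreadsheet screenshot:

```python
# A_{n+1} = (1 + 0.25/12) A_n - 50, iterated until the balance is paid off.
r = 1 + 0.25 / 12   # monthly growth factor
k = 50              # monthly payment
A = 2000.0          # initial balance
n = 0
while A > 0:
    A = r * A - k   # the same formula as cell B3
    n += 1
print(n)  # 87 months -- a little over seven years of minimum payments
```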

After the algebraic solution of the difference equation has been found, it can be added to the spreadsheet in a different column. For example, I added the header “Predicted Amount” in Cell D1. In Cell D2, I typed the formula

$=2000*\hbox{POWER}(1+0.25/12,\hbox{A2})-50*(1-\hbox{POWER}(1+0.25/12,\hbox{A2}))/(1-(1+0.25/12))$

Finally, I copied this pattern down Column D. Here’s the result:

Invariably, when I perform a demonstration like this in class, I elicit a reaction of “Whoa…. it actually works!” Even for a class of math majors. Naturally, I tease them about this… they didn’t believe me when I used algebra, but now it has to be true because the computer says so.

Here’s the spreadsheet that I used to make the above pictures: CreditCardDebt.

# Day One of my Calculus I class: Part 5

In this series of posts, I’d like to describe what I tell my students on the very first day of Calculus I. On this first day, I try to set the table for the topics that will be discussed throughout the semester. I should emphasize that I don’t hold students immediately responsible for the content of this lecture. Instead, this introduction, which usually takes 30-45 minutes, depending on the questions I get, is meant to help my students see the forest for all of the trees. For example, when we start discussing somewhat dry topics like the definition of a continuous function and the Mean Value Theorem, I can always refer back to this initial lecture for why these concepts are ultimately important.

I’ve told students that the topics in Calculus I build upon each other (unlike the topics of Precalculus), but that there are going to be two themes that run throughout the course:

1. Approximating curved things by straight things, and
2. Passing to limits

We are now trying to answer the following problem.

Problem #2. Find the area under the parabola $f(x) = x^2$ between $x=0$ and $x=1$.

Using five rectangles with right endpoints, we find the approximate answer of $0.44$. With ten rectangles, the approximation is $0.385$. With one hundred rectangles (and Microsoft Excel), the approximation is $0.33835$. This last expression was found by evaluating

$0.01[ (0.01)^2 + (0.02)^2 + \dots + (0.99)^2 + 1^2]$

At this juncture, what I’ll do depends on my students’ background. For many years, I had the same group of students for both Precalculus and Calculus I, and so I knew full well that they had seen the formula for $\displaystyle \sum_{k=1}^n k^2$. And so I’d feel comfortable showing my students the contents of this post. However, if I didn’t know for sure that my students had at least seen this formula, I probably would just ask them to guess the limiting answer without doing any of the algebra to follow.

Assuming students have the necessary prerequisite knowledge, I’ll ask, “What happens if we have $n$ rectangles?” Without much difficulty, they’ll see that the rectangles have a common width of $1/n$. The heights of the rectangles take a little more work to determine. I’ll usually work left to right. The leftmost rectangle has right endpoint $x = 1/n$, and so its height is $(1/n)^2$. The next rectangle has a height of $(2/n)^2$, and so on, so we must evaluate

$\displaystyle \frac{1}{n} \left[ \frac{1^2}{n^2} + \frac{2^2}{n^2} + \dots + \frac{n^2}{n^2} \right]$, or

$\displaystyle \frac{1}{n^3} \left[ 1^2 + 2^2 + \dots + n^2 \right]$

I then ask my class, what’s the formula for this sum? Invariably, they’ve forgotten the formula in the five or six weeks between the end of Precalculus and the start of Calculus I, and I’ll tease them about this a bit. Eventually, I’ll give them the answer (or someone volunteers an answer that’s either correct or partially correct):

$\displaystyle \frac{1}{n^3} \times \frac{n(n+1)(2n+1)}{6}$, or $\frac{(n+1)(2n+1)}{6n^2}$.

I’ll then directly verify that our previous numerical work matches this expression by plugging in $n=5$, $n= 10$, and $n = 100$.
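This check is easy to automate. A short Python sketch comparing the right-endpoint Riemann sums with the closed form:

```python
# Right-endpoint Riemann sums for f(x) = x^2 on [0, 1], compared with
# the closed form (n+1)(2n+1)/(6n^2) derived above.
def riemann_sum(n):
    return sum((k / n) ** 2 for k in range(1, n + 1)) / n

def closed_form(n):
    return (n + 1) * (2 * n + 1) / (6 * n ** 2)

for n in (5, 10, 100):
    print(n, riemann_sum(n), closed_form(n))
# n = 5 gives 0.44, n = 10 gives 0.385, n = 100 gives 0.33835,
# matching the earlier numerical work.
```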

I then ask, “What limit do we need to take this time?” Occasionally, I’ll get the incorrect answer of sending $n$ to zero, as students sometimes get mixed up thinking about the width of the rectangles instead of the number of rectangles. Eventually, the class will agree that we should send $n$ to plus infinity. Fortunately, the answer $\displaystyle \frac{(n+1)(2n+1)}{6n^2}$ is an example of a rational function, and so the horizontal asymptote can be immediately determined by dividing the leading coefficients of the numerator and denominator (since both have degree 2). We conclude that the limit is $2/6 = 1/3$, and so that’s the area under the parabola.

# Reminding students about Taylor series (Part 4)

I’m in the middle of a series of posts describing how I remind students about Taylor series. In the previous posts, I described how I lead students to the definition of the Maclaurin series

$f(x) = \displaystyle \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!} x^k$,

which converges to $f(x)$ within some radius of convergence for all functions that commonly appear in the secondary mathematics curriculum.

Step 4. Let’s now get some practice with Maclaurin series. Let’s start with $f(x) = e^x$.

What’s $f(0)$? That’s easy: $f(0) = e^0 = 1$.

Next, to find $f'(0)$, we first find $f'(x)$. What is it? Well, that’s also easy: $f'(x) = \frac{d}{dx} (e^x) = e^x$. So $f'(0)$ is also equal to $1$.

How about $f''(0)$? Yep, it’s also $1$. In fact, it’s clear that $f^{(n)}(0) = 1$ for all $n$, though we’ll skip the formal proof by induction.

Plugging into the above formula, we find that

$e^x = \displaystyle \sum_{k=0}^{\infty} \frac{1}{k!} x^k = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \dots$

It turns out that the radius of convergence for this power series is $\infty$. In other words, the series on the right converges for all values of $x$. We’ll skip the formal proof for review purposes, but this can be checked by using the Ratio Test.
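For completeness, the Ratio Test computation takes only one line: for any fixed $x$,

$\displaystyle \lim_{k \to \infty} \left| \frac{x^{k+1}/(k+1)!}{x^k/k!} \right| = \lim_{k \to \infty} \frac{|x|}{k+1} = 0 < 1$,

so the series converges for every real $x$.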

At this point, students generally feel confident about the mechanics of finding a Taylor series expansion, and that’s a good thing. However, in my experience, their command of Taylor series is still somewhat artificial. They can go through the motions of taking derivatives and finding the Taylor series, but this complicated symbol in $\displaystyle \sum$ notation still doesn’t have much meaning.

So I shift gears somewhat to discuss the rate of convergence. My hope is to deepen students’ knowledge by getting them to believe that $f(x)$ really can be approximated to high precision with only a few terms. Perhaps not surprisingly, the series converges more quickly for small values of $x$ than for large values of $x$.

Pedagogically, I like to use a spreadsheet like Microsoft Excel to demonstrate the rate of convergence. A calculator could be used, but with Excel students can see at a glance how quickly (or slowly) the terms get smaller. I usually construct the spreadsheet in class on the fly (the fill-down feature is really helpful for doing this quickly), with the end product looking something like this:

In this way, students can immediately see that the Taylor series is accurate to four significant digits by going up to the $x^4$ term and that about ten or eleven terms are needed to get a figure that is as accurate as the precision of the computer will allow. In other words, for all practical purposes, an infinite number of terms are not necessary.

In short, this is how a calculator computes $e^x$: adding up the first few terms of a Taylor series. Back in high school, when students hit the $e^x$ button on their calculators, they’ve trusted the result but the mechanics of how the calculator gets the result was shrouded in mystery. No longer.
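The calculator's procedure can be imitated in a few lines of Python (a sketch of the idea, not actual calculator firmware): each term of the series is the previous term times $x/k$, which is exactly what the fill-down column computes.

```python
import math

def exp_taylor(x, terms):
    """Sum the first `terms` terms of the Maclaurin series for e^x."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term          # add x^k / k!
        term *= x / (k + 1)    # next term: multiply by x/(k+1)
    return total

print(exp_taylor(1.0, 20))  # agrees with math.e essentially to machine precision
print(math.e)               # reference value
```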

Then I shift gears by trying a larger value of $x$:

I ask my students the obvious question: What went wrong? They’re usually able to volunteer a few ideas:

• The convergence is slower for larger values of $x$.
• The series will converge, but more terms are needed (and I’ll later use the fill-down feature to get enough terms so that it converges as accurately as double precision will allow).
• The individual terms get bigger until $k=11$ and then start getting smaller. I’ll ask my students why this happens, and I’ll eventually get an explanation like

$\displaystyle \frac{(11.5)^6}{6!} < \frac{(11.5)^6}{6!} \times \frac{11.5}{7} = \frac{(11.5)^7}{7!}$

but

$\displaystyle \frac{(11.5)^{11}}{11!} > \frac{(11.5)^{11}}{11!} \times \frac{11.5}{12} = \frac{(11.5)^{12}}{12!}$

At this point, I’ll mention that calculators use some tricks to speed up convergence. For example, the calculator can simply store a few values of $e^x$ in memory, like $e^{16}$, $e^{8}$, $e^{4}$, $e^{2}$, and $e^{1} = e$. I then ask my class how these could be used to find $e^{11.5}$. After some thought, they will volunteer that

$e^{11.5} = e^8 \cdot e^2 \cdot e \cdot e^{0.5}$.

The first three values don’t need to be computed — they’ve already been stored in memory — while the last value can be computed via Taylor series. Also, since $0.5 < 1$, the series for $e^{0.5}$ will converge pretty quickly. (Some students may volunteer that the above product is logically equivalent to turning $11$ into binary.)
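Here is a Python sketch of this range-reduction trick; the stored values are simulated with `math.exp`, standing in for the calculator's precomputed constants:

```python
import math

# Precomputed constants, standing in for values stored in the calculator.
STORED = {16: math.exp(16), 8: math.exp(8), 4: math.exp(4),
          2: math.exp(2), 1: math.e}

def exp_reduced(x, terms=20):
    """Compute e^x (x >= 0) by peeling off stored powers of e,
    then running a short Taylor series on the small remainder."""
    result = 1.0
    for p in (16, 8, 4, 2, 1):     # greedy binary decomposition of int(x)
        if x >= p:
            result *= STORED[p]
            x -= p
    total, term = 0.0, 1.0         # Maclaurin series on the remainder x < 1
    for k in range(terms):
        total += term
        term *= x / (k + 1)
    return result * total

print(exp_reduced(11.5))   # uses e^8 * e^2 * e * (series for e^0.5)
print(math.exp(11.5))      # reference value
```

Because the remainder is less than 1, the series portion converges quickly, which is the whole point of the trick.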

At this point — after doing these explicit numerical examples — I’ll show graphs of $e^x$ and graphs of the Taylor polynomials of $e^x$, observing that the polynomials get closer and closer to the graph of $e^x$ as more terms are added. (For example, see the graphs on the Wikipedia page for Taylor series, though I prefer to use Mathematica for in-class purposes.) In my opinion, the convergence of the graphs becomes meaningful to students only after doing some numerical examples, as done above.

At this point, I hope my students are familiar with the definition of Taylor (Maclaurin) series, can apply the definition to $e^x$, and have some intuition that the nasty Taylor series expression practically means “add a bunch of terms together until you’re satisfied with the convergence.”

In the next post, we’ll consider another Taylor series which ought to be (but usually isn’t) really familiar to students: an infinite geometric series.

P.S. Here’s the Excel spreadsheet that I used to make the above figures: Taylor.