Integration Using Schwinger Parametrization

I recently read the terrific article Integration Using Schwinger Parametrization, by David M. Bradley, Albert Natian, and Sean M. Stewart in the American Mathematical Monthly. I won’t reproduce the entire article here, but I’ll hit a couple of early highlights.

The basic premise of the article is that a complicated integral can become tractable by changing it into an apparently more complicated double integral. The idea stems from the gamma integral

\Gamma(p) = \displaystyle \int_0^\infty t^{p-1} e^{-t} \, dt,

where \Gamma(p) = (p-1)! if p is a positive integer. If we perform the substitution t = \phi u in the above integral, where \phi > 0 is a quantity independent of t, we obtain

\Gamma(p) = \displaystyle \int_0^\infty (\phi u)^{p-1} e^{-\phi u} \phi \, du = \displaystyle \int_0^\infty \phi^p u^{p-1} e^{-\phi u} \, du,

which may be rewritten as

\displaystyle \frac{1}{\phi^p} = \displaystyle \frac{1}{\Gamma(p)} \int_0^\infty t^{p-1} e^{-\phi t} \, dt

after changing the dummy variable back to t.
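As a quick numerical sanity check of this identity (my own addition, not from the article), here is a short Python sketch that compares both sides using composite Simpson's rule. The test values p = 3 and \phi = 2 are arbitrary choices.

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# Arbitrary test values; the identity should hold for any p > 0, phi > 0.
p, phi = 3, 2.0

# Truncate the infinite upper limit at 40; e^(-2*40) is negligible.
integral = simpson(lambda t: t**(p - 1) * math.exp(-phi * t), 0.0, 40.0)

lhs = 1 / phi**p                  # 1/phi^p
rhs = integral / math.gamma(p)    # (1/Gamma(p)) * integral
print(lhs, rhs)  # both ≈ 0.125
```

Both sides agree to within the quadrature error.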

A simple (!) application of this method is the famous Dirichlet integral

I = \displaystyle \int_0^\infty \frac{\sin x}{x} \, dx

which is pretty much unsolvable using techniques from freshman calculus. However, by substituting \phi = x and p=1 in the above gamma equation, and using the fact that \Gamma(1) = 0! = 1, we obtain

I = \displaystyle \int_0^\infty \sin x \int_0^\infty e^{-xt} \, dt \, dx

= \displaystyle \int_0^\infty \int_0^\infty e^{-xt} \sin x \, dx \, dt

after interchanging the order of integration. The inner integral can be found by integration by parts and is often included in tables of integrals:

I = \displaystyle \int_0^\infty -\left[ \frac{e^{-xt} (\cos x + t \sin x)}{1+t^2} \right]_{x=0}^{x=\infty} \, dt

= \displaystyle \int_0^\infty \left[0 +\frac{e^{0} (\cos 0 + t \sin 0)}{1+t^2} \right] \, dt

= \displaystyle \int_0^\infty \frac{1}{1+t^2} \, dt.

At this point, the integral is now a standard one from freshman calculus:

I = \displaystyle \left[ \tan^{-1} t \right]_0^\infty = \displaystyle \frac{\pi}{2} - 0 = \displaystyle \frac{\pi}{2}.
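For readers who like numerical reassurance, here is a quick Python check (mine, not from the article) that a truncated Dirichlet integral and the transformed t-integral both approach \pi/2. The truncation points are arbitrary; the neglected tails are small.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

sinc = lambda x: math.sin(x) / x if x else 1.0

# Truncated versions of the two integrals; the tails are O(1/200) and O(1/1000).
dirichlet = simpson(sinc, 0.0, 200.0, 20000)
arctan_form = simpson(lambda t: 1.0 / (1.0 + t * t), 0.0, 1000.0, 20000)
print(dirichlet, arctan_form, math.pi / 2)  # all ≈ 1.57
```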

In the article, the authors give many more applications of this method to other integrals, thus illustrating the famous quote, “An idea which can be used only once is a trick. If one can use it more than once it becomes a method.” The authors also add, “We present some examples to illustrate the utility of this technique in the hope that by doing so we may convince the reader that it makes a valuable addition to one’s integration toolkit.” I’m sold.

Solving Problems Submitted to MAA Journals (Part 5e)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

f(x) = \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

By using the Taylor series expansions of \sin x and \cos x and flipping the order of a double sum, I was able to show that

f(x) = -\displaystyle \frac{x \sin x}{2} \qquad \hbox{and} \qquad g(x) = \frac{x\cos x - \sin x}{2}.
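Here is a quick numerical check of both closed forms (my own check, not part of the problem; the test point x = 1.3 and the truncation depth are arbitrary), using truncated versions of the defining series:

```python
import math

def f_partial(x, N=40):
    """Sum over n of (cos x minus the partial Taylor sum of cos x through x^(2n))."""
    total = 0.0
    for n in range(N):
        s = sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n + 1))
        total += math.cos(x) - s
    return total

def g_partial(x, N=40):
    """Sum over n of (sin x minus the partial Taylor sum of sin x through x^(2n+1))."""
    total = 0.0
    for n in range(N):
        s = sum((-1)**k * x**(2*k+1) / math.factorial(2*k+1) for k in range(n + 1))
        total += math.sin(x) - s
    return total

x = 1.3
print(f_partial(x), -x * math.sin(x) / 2)                     # should agree closely
print(g_partial(x), (x * math.cos(x) - math.sin(x)) / 2)      # should agree closely
```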

I immediately got to thinking: there’s nothing particularly special about \sin x and \cos x for this analysis. Is there a way of generalizing this result to all functions with a Taylor series expansion?

Suppose

h(x) = \displaystyle \sum_{k=0}^\infty a_k x^k,

and let’s use the same technique to evaluate

\displaystyle \sum_{n=0}^\infty \left( h(x) - \sum_{k=0}^n a_k x^k \right) = \sum_{n=0}^\infty \sum_{k=n+1}^\infty a_k x^k

= \displaystyle \sum_{k=1}^\infty \sum_{n=0}^{k-1} a_k x^k

= \displaystyle \sum_{k=1}^\infty k a_k x^k

= x \displaystyle \sum_{k=1}^\infty k a_k x^{k-1}

= x \displaystyle \sum_{k=1}^\infty \left(a_k x^k \right)'

= x \displaystyle \left[ (a_0)' +  \sum_{k=1}^\infty \left(a_k x^k \right)' \right]

= x \displaystyle \sum_{k=0}^\infty \left(a_k x^k \right)'

= x \displaystyle \left( \sum_{k=0}^\infty a_k x^k \right)'

= x h'(x).
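As a spot check of this identity (not part of the derivation above), take h(x) = e^x, so that a_k = 1/k! and x h'(x) = x e^x; the test point x = 0.7 is an arbitrary choice.

```python
import math

def tail_sum(x, N=60):
    """Truncated version of the sum over n of (h(x) - S_n(x)) with h(x) = e^x."""
    total, partial = 0.0, 0.0
    for n in range(N):
        partial += x**n / math.factorial(n)   # S_n(x): partial Taylor sum
        total += math.exp(x) - partial        # n-th term h(x) - S_n(x)
    return total

x = 0.7
print(tail_sum(x), x * math.exp(x))  # both ≈ 1.4096
```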

To see why this matches our above results, let’s start with h(x) = \cos x and write out the full Taylor series expansion, including zero coefficients:

\cos x = 1 + 0x - \displaystyle \frac{x^2}{2!} + 0x^3 + \frac{x^4}{4!} + 0x^5 - \frac{x^6}{6!} \dots,

so that

x (\cos x)' = \displaystyle \sum_{n=0}^\infty \left( \cos x - \sum_{k=0}^n a_k x^k \right)

or

-x \sin x= \displaystyle \left(\cos x - 1 \right) + \left(\cos x - 1 + 0x \right) + \left( \cos x -1 + 0x + \frac{x^2}{2!} \right) + \left( \cos x -1 + 0x + \frac{x^2}{2!} + 0x^3 \right)

\displaystyle + \left( \cos x -1 + 0x + \frac{x^2}{2!} + 0x^3 - \frac{x^4}{4!} \right) + \left( \cos x -1 + 0x + \frac{x^2}{2!} + 0x^3 - \frac{x^4}{4!} + 0x^5 \right) \dots

After dropping the zero terms and collecting, we obtain

-x \sin x= \displaystyle 2 \left(\cos x - 1 \right) + 2 \left( \cos x -1 + \frac{x^2}{2!} \right) + 2 \left( \cos x -1 + \frac{x^2}{2!} - \frac{x^4}{4!} \right) \dots

-x \sin x = 2 f(x)

\displaystyle -\frac{x \sin x}{2} = f(x).

A similar calculation would apply to any even function h(x).

We repeat for

h(x) = \sin x = 0 + x + 0x^2 - \displaystyle \frac{x^3}{3!} + 0x^4 + \frac{x^5}{5!} + 0x^6 - \frac{x^7}{7!} \dots,

so that

x (\sin x)' = (\sin x - 0) + (\sin x - 0 - x) + (\sin x - 0 - x + 0x^2)

+ \displaystyle \left( \sin x - 0 - x + 0x^2 + \frac{x^3}{3!} \right) + \left( \sin x - 0 - x + 0x^2 + \frac{x^3}{3!} + 0x^4 \right)

+ \displaystyle \left( \sin x - 0 - x + 0x^2 + \frac{x^3}{3!} + 0x^4 - \frac{x^5}{5!} \right) + \left( \sin x - 0 - x + 0x^2 + \frac{x^3}{3!} + 0x^4 - \frac{x^5}{5!} + 0 x^6 \right) \dots,

or

x\cos x - \sin x = 2(\sin x - x) + \displaystyle 2\left(\sin x - x + \frac{x^3}{3!} \right) + 2 \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \right) \dots

or

x \cos x - \sin x = 2 g(x)

\displaystyle \frac{x \cos x - \sin x}{2} = g(x).

A similar argument applies for any odd function h(x).

Solving Problems Submitted to MAA Journals (Part 5d)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

f(x) = \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

In the previous two posts, I showed that

f(x) = - \displaystyle \frac{x \sin x}{2} \qquad \hbox{and} \qquad g(x) = \displaystyle \frac{x \cos x - \sin x}{2};

the technique that I used was writing f(x) and g(x) as double sums, using the Taylor series expansions of \sin x and \cos x, and then interchanging the order of summation.

In this post, I share an alternate way of solving for f(x) and g(x). I wish I could take credit for this, but I first learned the idea from my daughter. If we differentiate g(x), we obtain

g'(x) = \displaystyle \sum_{n=0}^\infty \left( [\sin x]' - [x]' + \left[\frac{x^3}{3!}\right]' - \left[\frac{x^5}{5!}\right]' \dots + \left[(-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!}\right]' \right)

= \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{3x^2}{3!} - \frac{5x^4}{5!} \dots + (-1)^{n-1} \frac{(2n+1)x^{2n}}{(2n+1)!} \right)

= \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{3x^2}{3 \cdot 2!} - \frac{5x^4}{5 \cdot 4!} \dots + (-1)^{n-1} \frac{(2n+1)x^{2n}}{(2n+1)(2n)!} \right)

= \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

= f(x).
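A finite-difference check in Python (my own; the point x = 0.9, the step size h, and the truncation depth are arbitrary choices) confirms g'(x) = f(x) numerically, using truncated versions of both series:

```python
import math

def partial(x, N, odd):
    """Truncated version of f (odd=False) or g (odd=True)."""
    total = 0.0
    for n in range(N):
        if odd:
            s = sum((-1)**k * x**(2*k+1) / math.factorial(2*k+1) for k in range(n + 1))
            total += math.sin(x) - s
        else:
            s = sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n + 1))
            total += math.cos(x) - s
    return total

x, h, N = 0.9, 1e-5, 40
# Central difference approximation of g'(x).
g_prime = (partial(x + h, N, True) - partial(x - h, N, True)) / (2 * h)
print(g_prime, partial(x, N, False))  # the two values agree
```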

Something similar happens when differentiating the series for f(x); however, it’s not quite so simple because of the -1 term. I begin by separating the n=0 term from the sum, so that a sum from n =1 to \infty remains:

f(x) = \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

= (\cos x - 1) + \displaystyle \sum_{n=1}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right).

I then differentiate as before:

f'(x) = (\cos x - 1)' + \displaystyle \sum_{n=1}^\infty \left( [\cos x - 1]' + \left[ \frac{x^2}{2!} \right]' - \left[ \frac{x^4}{4!} \right]' \dots + \left[ (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right]' \right)

= -\sin x + \displaystyle \sum_{n=1}^\infty \left( -\sin x + \frac{2x}{2!}  - \frac{4x^3}{4!} \dots + (-1)^{n-1} \frac{(2n) x^{2n-1}}{(2n)!} \right)

= -\sin x + \displaystyle \sum_{n=1}^\infty \left( -\sin x + \frac{2x}{2 \cdot 1!}  - \frac{4x^3}{4 \cdot 3!} \dots + (-1)^{n-1} \frac{(2n) x^{2n-1}}{(2n)(2n-1)!} \right)

= -\sin x + \displaystyle \sum_{n=1}^\infty \left( -\sin x + x - \frac{x^3}{3!} + \dots + (-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!} \right)

= -\sin x - \displaystyle \sum_{n=1}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{n-1} \frac{x^{2n-1}}{(2n-1)!} \right).

At this point, we reindex the sum. We make the replacement k = n - 1, so that n = k+1 and k varies from k=0 to \infty. After the replacement, we then change the dummy index from k back to n.

f'(x) = -\sin x - \displaystyle \sum_{k=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{(k+1)-1} \frac{x^{2(k+1)-1}}{(2(k+1)-1)!} \right)

= -\sin x -  \displaystyle \sum_{k=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{k} \frac{x^{2k+1}}{(2k+1)!} \right)

= -\sin x -  \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^{n} \frac{x^{2n+1}}{(2n+1)!} \right).

With a slight alteration to the (-1)^n term, this sum is exactly the definition of g(x):

f'(x)= -\sin x -  \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots - (-1)^1 (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right)

= -\sin x -  \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} + \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right)

= -\sin x - g(x).

Summarizing, we have shown that g'(x) = f(x) and f'(x) = -\sin x - g(x). Differentiating f'(x) a second time, we obtain

f''(x) = -\cos x - g'(x) = -\cos x - f(x)

or

f''(x) + f(x) = -\cos x.

This last equation is a second-order nonhomogeneous linear differential equation with constant coefficients. A particular solution, using the method of undetermined coefficients, must have the form F(x) = Ax\cos x + Bx \sin x. Substituting, we see that

[Ax \cos x + B x \sin x]'' + A x \cos x + Bx \sin x = -\cos x

-2A \sin x - Ax \cos x + 2B \cos x - B x \sin x + Ax \cos x + B x \sin x = -\cos x

-2A \sin x  + 2B \cos x = -\cos x.

We see that A = 0 and B = -1/2, which then lead to the particular solution

F(x) = -\displaystyle \frac{1}{2} x \sin x.
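A quick finite-difference check (not part of the derivation; the test point and step size are arbitrary) confirms that this F indeed satisfies F'' + F = -\cos x:

```python
import math

F = lambda x: -0.5 * x * math.sin(x)   # the claimed particular solution

x, h = 1.1, 1e-4
# Second-difference approximation of F''(x).
F2 = (F(x + h) - 2 * F(x) + F(x - h)) / (h * h)
print(F2 + F(x), -math.cos(x))  # both ≈ -0.4536
```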

Since \cos x and \sin x are solutions of the associated homogeneous equation f''(x) + f(x) = 0, we conclude that

f(x) = c_1 \cos x + c_2 \sin x - \displaystyle \frac{1}{2} x \sin x,

where the values of c_1 and c_2 depend on the initial conditions on f. As it turns out, it is straightforward to compute f(0) and f'(0), so we will choose x=0 for the initial conditions. We observe that f(0) and g(0) are both clearly equal to 0, so that f'(0) = -\sin 0 - g(0) = 0 as well.

The initial condition f(0)=0 clearly implies that c_1 = 0:

f(0) = c_1 \cos 0 + c_2 \sin 0 - \displaystyle \frac{1}{2} \cdot 0 \sin 0

0 = c_1.

To find c_2, we first find f'(x):

f'(x) = c_2 \cos x - \displaystyle \frac{1}{2} \sin x - \frac{1}{2} x \cos x

f'(0) = c_2 \cos 0 - \displaystyle  \frac{1}{2} \sin 0 - \frac{1}{2} \cdot 0 \cos 0

0 = c_2.

Since c_1 = c_2 = 0, we conclude that f(x) = - \displaystyle \frac{1}{2} x \sin x, and so

g(x) = -\sin x - f'(x)

= -\sin x - \displaystyle  \left( -\frac{1}{2} \sin x - \frac{1}{2} x \cos x \right)

= \displaystyle \frac{x \cos x - \sin x}{2}.

Solving Problems Submitted to MAA Journals (Part 5c)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

f(x) = \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

In the previous post, we showed that f(x) = - \frac{1}{2} x \sin x by writing the series as a double sum and then reversing the order of summation. We proceed with very similar logic to evaluate g(x). Since

\sin x = \displaystyle \sum_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}

is the Taylor series expansion of \sin x, we may write g(x) as

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sum_{k=0}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!} - \sum_{k=0}^n (-1)^k \frac{x^{2k+1}}{(2k+1)!} \right)

= \displaystyle \sum_{n=0}^\infty \sum_{k=n+1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}.

As before, we employ one of my favorite techniques from the bag of tricks: reversing the order of summation. Also as before, the inner sum is independent of n, and so it is simply equal to the summand times the number of terms. We see that

g(x) = \displaystyle \sum_{k=1}^\infty \sum_{n=0}^{k-1} (-1)^k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \sum_{k=1}^\infty (-1)^k \cdot k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \cdot 2k \frac{x^{2k+1}}{(2k+1)!}.

At this point, the solution for g(x) diverges from the previous solution for f(x). I want to cancel the factor of 2k in the summand; however, the denominator is

(2k+1)! = (2k+1)(2k)!,

and 2k doesn’t cancel cleanly with (2k+1). Hypothetically, I could cancel as follows:

\displaystyle \frac{2k}{(2k+1)!} = \frac{2k}{(2k+1)(2k)(2k-1)!} = \frac{1}{(2k+1)(2k-1)!},

but that introduces an extra (2k+1) in the denominator that I’d rather avoid.

So, instead, I’ll write 2k as (2k+1)-1 and then distribute and split into two different sums:

g(x) = \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \cdot 2k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k (2k+1-1) \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty \left[ (-1)^k (2k+1) \frac{x^{2k+1}}{(2k+1)!} - (-1)^k \cdot 1 \frac{x^{2k+1}}{(2k+1)!} \right]

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k (2k+1) \frac{x^{2k+1}}{(2k+1)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k  \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k (2k+1) \frac{x^{2k+1}}{(2k+1)(2k)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}.

At this point, I factored out a power of x from the first sum. The two sums are then the Taylor series expansions of \cos x and \sin x with their k=0 terms (namely 1 and x) missing, and the two leftover pieces cancel:

g(x) = \displaystyle \frac{x}{2} \sum_{k=1}^\infty (-1)^k \cdot \frac{x^{2k}}{(2k)!} - \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k+1}}{(2k+1)!}

= \displaystyle \frac{x}{2} \left( \cos x - 1 \right) - \frac{1}{2} \left( \sin x - x \right)

= \displaystyle \frac{x \cos x}{2} - \frac{x}{2} - \frac{\sin x}{2} + \frac{x}{2}

= \displaystyle \frac{x \cos x - \sin x}{2}.
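As a sanity check of the manipulation (my own; the test point x = 2 is an arbitrary choice), the single sum from which this computation started can be compared numerically against the closed form:

```python
import math

def g_single(x, K=40):
    """Truncated version of the sum over k >= 1 of (-1)^k * k * x^(2k+1)/(2k+1)!."""
    return sum((-1)**k * k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(1, K))

x = 2.0
print(g_single(x), (x * math.cos(x) - math.sin(x)) / 2)  # the two values agree
```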

This was sufficiently complicated that I was unable to guess this solution by experimenting with Mathematica; nevertheless, Mathematica can give graphical confirmation of the solution since the graphs of the two expressions overlap perfectly.

Solving Problems Submitted to MAA Journals (Part 5b)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Evaluate the following sums in closed form:

f(x) =  \displaystyle \sum_{n=0}^\infty \left( \cos x - 1 + \frac{x^2}{2!} - \frac{x^4}{4!} \dots + (-1)^{n-1} \frac{x^{2n}}{(2n)!} \right)

and

g(x) = \displaystyle \sum_{n=0}^\infty \left( \sin x - x + \frac{x^3}{3!} - \frac{x^5}{5!} \dots + (-1)^{n-1} \frac{x^{2n+1}}{(2n+1)!} \right).

We start with f(x) and the Taylor series

\cos x = \displaystyle \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k)!}.

With this, f(x) can be written as

f(x) = \displaystyle \sum_{n=0}^\infty \left( \sum_{k=0}^\infty (-1)^k \frac{x^{2k}}{(2k)!} - \sum_{k=0}^n (-1)^k \frac{x^{2k}}{(2k)!} \right)

= \displaystyle \sum_{n=0}^\infty \sum_{k=n+1}^\infty (-1)^k \frac{x^{2k}}{(2k)!}.

At this point, my immediate thought was one of my favorite techniques from the bag of tricks: reversing the order of summation. (Two or three chapters of my Ph.D. thesis derived from knowing when to apply this technique.) We see that

f(x) = \displaystyle \sum_{k=1}^\infty \sum_{n=0}^{k-1} (-1)^k \frac{x^{2k}}{(2k)!}.

At this point, the inner sum is independent of n, and so the inner sum is simply equal to the summand times the number of terms. Since there are k terms for the inner sum (n = 0, 1, \dots, k-1), we see

f(x) =  \displaystyle \sum_{k=1}^\infty (-1)^k \cdot k \frac{x^{2k}}{(2k)!}.

To simplify, we multiply top and bottom by 2 so that the first term of (2k)! cancels:

f(x) = \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \cdot 2k \frac{x^{2k}}{(2k)(2k-1)!}

= \displaystyle \frac{1}{2} \sum_{k=1}^\infty (-1)^k \frac{x^{2k}}{(2k-1)!}.

At this point, I factored out a (-1) and a power of x to make the sum match the Taylor series for \sin x:

f(x) = \displaystyle -\frac{x}{2} \sum_{k=1}^\infty (-1)^{k-1} \frac{x^{2k-1}}{(2k-1)!} = -\frac{x \sin x}{2}.
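Here is a quick Python confirmation (my own; the test point x = 2 is arbitrary) that the single sum obtained after reversing the order of summation matches the closed form:

```python
import math

def f_single(x, K=40):
    """Truncated version of the sum over k >= 1 of (-1)^k * k * x^(2k)/(2k)!."""
    return sum((-1)**k * k * x**(2*k) / math.factorial(2*k)
               for k in range(1, K))

x = 2.0
print(f_single(x), -x * math.sin(x) / 2)  # the two values agree
```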

I was unsurprised but comforted that this matched the guess I had made by experimenting with Mathematica.

Solving Problems Submitted to MAA Journals (Part 4)

The following problem appeared in Volume 96, Issue 3 (2023) of Mathematics Magazine.

Let A_1, \dots, A_n be arbitrary events in a probability field. Denote by B_k the event that at least k of A_1, \dots, A_n occur. Prove that \displaystyle \sum_{k=1}^n P(B_k) = \sum_{k=1}^n P(A_k).

I’ll admit that when I first read this problem, I didn’t believe it. I had to draw a couple of Venn diagrams to convince myself that it actually worked.

Of course, pictures are not proofs, so I started giving the problem more thought.

I wish I could say where I got the inspiration from, but I got the idea to define a new random variable N to be the number of events from A_1, \dots, A_n that occur. With this definition, B_k becomes the event that N \ge k, so that

\displaystyle \sum_{k=1}^n P(B_k) = \sum_{k=1}^n P(N \ge k).

At this point, my Spidey Sense went off: that’s the tail-sum formula for expectation! Since N is a non-negative integer-valued random variable, the mean of N can be computed by

E(N) = \displaystyle \sum_{k=1}^n P(N \ge k).

Said another way, E(N) = \displaystyle \sum_{k=1}^n P(B_k).

Therefore, to solve the problem, it remains to show that \displaystyle \sum_{k=1}^n P(A_k) is also equal to E(N). To do this, I employed the standard technique from the bag of tricks of writing N as the sum of indicator random variables. Define

I_k = \displaystyle \bigg\{ \begin{array}{ll} 1, & A_k \hbox{~occurs} \\ 0, & A_k \hbox{~does not occur} \end{array}

Then N = I_1 + \dots + I_n, so that

E(N) = \displaystyle \sum_{k=1}^n E(I_k) =\sum_{k=1}^n [1 \cdot P(A_k) + 0 \cdot P(A_k^c)] =\sum_{k=1}^n P(A_k).

Equating the two expressions for E(N), we conclude that \displaystyle \sum_{k=1}^n P(B_k)  = \sum_{k=1}^n P(A_k), as claimed.
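The identity can also be checked exactly on a toy example by brute-force enumeration (my own illustration; the sample space and the three events below are arbitrary choices, deliberately dependent):

```python
from fractions import Fraction

omega = range(8)                       # uniform sample space, probability 1/8 each
A = [{0, 1, 2}, {1, 3, 5, 7}, {2, 3}]  # arbitrary (dependent) events A_1, A_2, A_3
n = len(A)

def P(event):
    return Fraction(len(event), len(omega))

# B_k = the set of outcomes belonging to at least k of the A_i
count = {w: sum(w in Ai for Ai in A) for w in omega}
B = [{w for w in omega if count[w] >= k} for k in range(1, n + 1)]

lhs = sum(P(Bk) for Bk in B)
rhs = sum(P(Ak) for Ak in A)
print(lhs, rhs)  # equal fractions
```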

My Favorite One-Liners: Part 20

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Perhaps the world’s most famous infinite series is

S = \displaystyle \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots

as this is the subject of Zeno’s paradox. When I teach infinite series in class, I often engage the students by reminding them about Zeno’s paradox and then showing them this clip from the 1994 movie I.Q.

This clip is almost always a big hit with my students. After showing this clip, I’ll conclude, “When I was single, this was part of my repertoire of pick-up lines… but it never worked.”

Even after showing this clip, some students resist the idea that an infinite series can have a finite answer. For such students, I use a physical demonstration: I walk half-way across the classroom, then a quarter, and so on… until I walk head-first into a wall at full walking speed. The resulting loud thud usually convinces students that an infinite sum can indeed have a finite answer.

For further reading, see my series on arithmetic and geometric series.

My Favorite One-Liners: Part 18

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. This is a quip that I’ll use when a theoretical calculation can be easily confirmed with a calculator.

Sometimes I teach my students how people converted decimal expansions into fractions before there was a button on a calculator to do this for them. For example, let’s convert x = 0.\overline{432} = 0.432432432\dots into a fraction. The first step (from the Bag of Tricks) is to multiply x by 1000:

1000x = 432.432432\dots

x = 0.432432\dots

Notice that the decimal parts of both x and 1000x are the same. Subtracting, the decimal parts cancel, leaving

999x = 432

or

x = \displaystyle \frac{432}{999} = \displaystyle \frac{16}{37}.
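For the record, exact rational arithmetic in Python confirms the computation (this snippet is my own, not part of the classroom routine):

```python
from fractions import Fraction

x = Fraction(432, 999)
print(x)             # 16/37 (Fraction reduces to lowest terms automatically)

# The defining property of the trick: 1000x - x = 432 exactly.
print(1000 * x - x)  # 432
```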

In my experience, most students — even senior math majors who have taken a few theorem-proof classes and hence are no dummies — are a little stunned when they see this procedure for the first time.

To make this more real and believable to them, I then tell them my one-liner: “I can see that no one believes me. OK, let’s try something that you will believe. Pop out your calculators. Then punch in 16 divided by 37.”

Indeed, in my experience, many students really do need this technological confirmation to be psychologically sure that it really did work. Then I’ll tease them that, by pulling out their calculators, I’m trying to speak my students’ language.


See also my fuller post on this topic as well as the index for the entire series.

My Favorite One-Liners: Part 14

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. This quip is similar to the “bag of tricks” one-liner, and I’ll use this one if the “bag of tricks” line is starting to get a little dry.

Sometimes in math, there’s a step in a derivation that, to the novice, appears to make absolutely no sense. For example, to find the antiderivative of \sec x, the first step is far from obvious:

\displaystyle \int \sec x \, dx = \displaystyle \int \sec x \frac{\sec x + \tan x}{\sec x + \tan x} \, dx

While that’s certainly correct, it’s far from obvious to a student that such a “simplification” is actually helpful.
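Completing the trick gives the standard antiderivative \ln|\sec x + \tan x| + C. A quick numerical check (my own, with an arbitrary test point and step size) confirms that its derivative really is \sec x:

```python
import math

# Antiderivative produced by the sec x trick: ln|sec x + tan x|
G = lambda x: math.log(abs(1 / math.cos(x) + math.tan(x)))

x, h = 0.6, 1e-6
# Central difference approximation of G'(x).
deriv = (G(x + h) - G(x - h)) / (2 * h)
print(deriv, 1 / math.cos(x))  # both ≈ 1.2116
```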

To give a simpler example, to convert

x = 0.\overline{432} = 0.432432432\dots

into a fraction, the first step is to multiply x by 1000:

1000x = 432.432432\dots

Students often give skeptical, quizzical, and/or frustrated looks about this non-intuitive next step… they’re thinking, “How did you know to do that?” To lighten the mood, I’ll explain with a big smile that I’m clairvoyant… when I got my Ph.D., I walked across the stage, got my diploma, someone waved a magic wand at me, and poof! I became clairvoyant.

Clairvoyance is wonderful; I highly recommend it.

The joke, of course, is that the only reason that I multiplied by 1000 is that someone figured out that multiplying by 1000 at this juncture would actually be helpful. Subtracting x from 1000x, the decimal parts cancel, leaving

999x = 432

or

x = \displaystyle \frac{432}{999} = \displaystyle \frac{16}{37}.

In my experience, most students — even senior math majors who have taken a few theorem-proof classes and hence are no dummies — are a little stunned when they see this procedure for the first time. I learned this procedure when I was very young; however, in modern times, this procedure appears to be a dying art. I’m guessing that this algorithm is a dying art because of the ease and convenience of modern calculators. As always, I hold my students blameless for the things that they were simply not taught at a younger age, and part of my job is repairing these odd holes in their mathematical backgrounds so that they’ll have their best chance at becoming excellent high school math teachers.

For further reading, here’s my series on rational numbers and decimal expansions.

My Favorite One-Liners: Part 5

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Every once in a while, students encounter a step that seems far from obvious. To give one example, to evaluate the series

\displaystyle \sum_{n=1}^{100} \frac{1}{n^2+n},

the natural first step is to rewrite this as

\displaystyle \sum_{n=1}^{100} \left(\frac{1}{n} - \frac{1}{n+1} \right)

and then use the principle of telescoping series. However, students may wonder how they were ever supposed to think of the first step for themselves.
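A quick Python check (mine, not part of the lesson) confirms the telescoping: the sum collapses to 1 - 1/101 = 100/101.

```python
from fractions import Fraction

# Sum 1/(n^2 + n) for n = 1, ..., 100 with exact rational arithmetic.
total = sum(Fraction(1, n * n + n) for n in range(1, 101))
print(total)  # 100/101
```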

Students often give skeptical, quizzical, and/or frustrated looks about this non-intuitive next step… they’re thinking, “How would I ever have thought to do that on my own?” To allay these concerns, I explain that this step comes from the patented Bag of Tricks. Socrates gave the Bag of Tricks to Plato, Plato gave it to Aristotle, it passed down the generations, my teacher taught the Bag of Tricks to me, and I teach it to my students.

Sadly, there aren’t any videos of Greek philosophers teaching, so I’ll have to settle for this: