The number of digits of n!: Index

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on computing the number of digits in n!.

Part 1: Introduction – my own childhood explorations.

Part 2: Why a power-law fit is inappropriate.

Part 3: The correct answer, using Stirling’s formula.

Part 4: An elementary derivation of the first three significant terms of Stirling’s formula.
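As a footnote to the series, the computation is easy to play with in a few lines of Python. The sketch below (the function names are my own, not from the posts) compares an exact digit count of $n!$ with a count obtained from $\log_{10} n!$, using Python's built-in `lgamma` as a stand-in for the Stirling-style approximation of $\ln n!$ discussed in Part 3.

```python
import math

def digits_exact(n):
    """Count the digits of n! by computing the factorial outright."""
    return len(str(math.factorial(n)))

def digits_via_log(n):
    """Count the digits of n! from log10(n!) = lgamma(n+1)/ln(10),
    without ever forming the huge integer n!."""
    return math.floor(math.lgamma(n + 1) / math.log(10)) + 1

for n in (10, 100, 1000):
    print(n, digits_exact(n), digits_via_log(n))
```

The second function runs in constant time even for enormous $n$, which is the whole point of using Stirling's formula instead of multiplying out the factorial.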

To prove that two things are equal, show that the difference is zero

The title of this post, “To prove that two things are equal, show that the difference is zero,” is surprisingly handy in the secondary mathematics curriculum. For example, it is the basis for the proof of the Mean Value Theorem, one of the most important theorems in calculus, which in turn underpins curve sketching and the uniqueness of antiderivatives (up to a constant).

And I have a great story that goes along with this principle, from 30 years ago.

I forget the exact question from Apostol’s Calculus, but there was some equation that I had to prove on my weekly homework assignment that, for the life of me, I just couldn’t get. And for no good reason, I had a flash of insight: subtract the right-hand side from the left-hand side. While it was very difficult to turn the left side into the right side, it turned out that, for this particular problem, it was very easy to show that the difference was zero. (Again, I wish I could remember exactly which question it was so that I could show this technique and this particular example to my own students.)

So I finished my homework, and I went outside to a local basketball court and worked on my jump shot.

Later that week, I went to class, and there was a great buzz in the air. It took ten seconds to realize that everyone was up in arms about how to do this particular problem. Despite the intervening 30 years, I remember the scene as clear as a bell. I can still hear one of my classmates ask me, “Quintanilla, did you get that one?”

I said with great pride, “Yeah, I got it.” And I showed them my work.

And neither before then nor since have I heard cussing of the intensity that followed.

Truth be told, probably the only reason that I remember this story from my adolescence is that I usually was the one who had to ask for help on the hardest homework problems in that Honors Calculus class. This may have been the one time in that entire two-year calculus sequence that I actually figured out a homework problem that had stumped everybody else.

A 100-Year-Old Computer for Computing Fourier Transforms

From http://www.engineerguy.com/fourier/:

Many famous machines have been built to do math — like Babbage’s Difference Engine for solving polynomials or Leibniz’s Stepped Reckoner for multiplying and dividing — yet none worked as well as Albert Michelson’s harmonic analyzer. This 19th century mechanical marvel does Fourier analysis: it can find the frequency components of a signal using only gears, springs and levers. We discovered this long-forgotten machine locked in a glass case at the University of Illinois. For your enjoyment, we brought it back to life in this book and in a companion video series — all written and created by Bill Hammack, Steve Kranz and Bruce Carpenter.

A free PDF of their book is available at the above link; the book is also available for purchase. Here are the companion videos for the book.
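As a playful footnote: what Michelson’s machine does with gears and springs can be mimicked in a few lines of Python. The sketch below (all names are my own) recovers the Fourier coefficients of a sampled periodic signal with plain finite sums, which is essentially the quantity the analyzer accumulates mechanically.

```python
import math

def fourier_coefficients(samples, k):
    """Approximate the k-th cosine/sine Fourier coefficients of a
    periodic signal from equally spaced samples (a plain finite sum,
    the digital analogue of what the analyzer's linkages add up)."""
    N = len(samples)
    a_k = (2 / N) * sum(s * math.cos(2 * math.pi * k * j / N)
                        for j, s in enumerate(samples))
    b_k = (2 / N) * sum(s * math.sin(2 * math.pi * k * j / N)
                        for j, s in enumerate(samples))
    return a_k, b_k

# A test signal with known components: 3*cos(2t) + 0.5*sin(5t)
N = 200
samples = [3 * math.cos(2 * (2 * math.pi * j / N))
           + 0.5 * math.sin(5 * (2 * math.pi * j / N))
           for j in range(N)]
a2, _ = fourier_coefficients(samples, 2)
_, b5 = fourier_coefficients(samples, 5)
print(round(a2, 6), round(b5, 6))  # recovers ≈ 3.0 and ≈ 0.5
```

The sums pick out each frequency component thanks to the orthogonality of sines and cosines, the same principle the mechanical analyzer exploits.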

Helping Mathematics Students Survive the Post-Calculus Transition

Every so often, I’ll publicize through this blog an interesting article that I’ve found in the mathematics or mathematics education literature that can be freely distributed to the general public. Today, I’d like to highlight Michael J. Cullinane (2011), “Helping Mathematics Students Survive the Post-Calculus Transition,” PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 21:8, 669–684, DOI: 10.1080/10511971003692830.

Here’s the abstract:

Many mathematics students have difficulty making the transition from procedurally oriented courses such as calculus to the more conceptually oriented courses in which they subsequently enroll. What are some of the key “stumbling blocks” for students as they attempt to make this transition? How do differences in faculty expectations for students and student expectations for themselves contribute to the “transition dilemma?” What might faculty incorporate into students’ learning experiences during the transition to help students better navigate the shift from procedural to conceptual, from concrete to abstract? This article offers some lessons learned in connection with these questions.

The full article can be found here: http://dx.doi.org/10.1080/10511971003692830

Schoolhouse Rock and Calculus

After presenting the Fundamental Theorem of Calculus to my calculus students, I make a point of doing the following example in class:

\displaystyle \int_0^4 \frac{1}{4} x^2 \, dx

Hopefully my students are able to produce the correct answer:

\displaystyle \int_0^4 \frac{1}{4} x^2 \, dx = \displaystyle \left[ \frac{x^3}{12} \right]^4_0

= \displaystyle \frac{(4)^3}{12} - \frac{(0)^3}{12}

= \displaystyle \frac{64}{12}

= \displaystyle \frac{16}{3}
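For students who want to check this answer without invoking the Fundamental Theorem, a quick Riemann sum does the trick. Here is a minimal Python sketch (the helper name is my own, purely for illustration):

```python
def midpoint_sum(f, a, b, n=100000):
    """Midpoint Riemann sum approximation of the integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Approximate the integral of (1/4)x^2 from 0 to 4
approx = midpoint_sum(lambda x: 0.25 * x**2, 0, 4)
print(approx, 16 / 3)  # both ≈ 5.333333
```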

Then I tell my students that they’ve probably known the solution of this one since they were kids… and I show them the classic video “Unpack Your Adjectives” from Schoolhouse Rock. They watch this video with no small amount of confusion (“How is this possibly connected to calculus?”)… until I reach the 1:15 mark, when I pause and discuss this children’s cartoon. This never fails to get an enthusiastic response from my students.

If you have no idea what I’m talking about, be sure to watch the first 75 seconds of the video below. I think you’ll be amused.

Inverse Functions: Index

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on the different definitions of inverse functions that appear in Precalculus and Calculus.

Square Roots, nth Roots, and Rational Exponents

Part 1: Simplifying \sqrt{x^2}

Part 2: The difference between \sqrt{t} and solving x^2 = t

Part 3: Definition of an inverse function and the horizontal line test

Part 4: Why extraneous solutions may occur when solving algebra problems involving a square root

Part 5: Defining \sqrt{x}

Part 6: Consequences of the definition of \sqrt{x}: simplifying \sqrt{x^2}

Part 7: Defining \sqrt[n]{x} if n is odd or even

Part 8: Rational exponents if the denominator of the exponent is odd or even

Arcsine

Part 9: There are infinitely many solutions to \sin x = 0.8

Part 10: Defining arcsine with domain [-\pi/2,\pi/2]

Part 11: Pedagogical thoughts on teaching arcsine.

Part 12: Solving SSA triangles: impossible case

Part 13: Solving SSA triangles: one way of getting a unique solution

Part 14: Solving SSA triangles: another way of getting a unique solution

Part 15: Solving SSA triangles: continuation of Part 14

Part 16: Solving SSA triangles: ambiguous case of two solutions

Part 17: Summary of rules for solving SSA triangles

Arccosine

Part 18: Definition for arccosine with domain [0,\pi]

Part 19: The Law of Cosines and solving SSS triangles

Part 20: Identifying impossible triangles with the Law of Cosines

Part 21: The Law of Cosines provides an unambiguous angle, unlike the Law of Sines

Part 22: Finding the angle between two vectors

Part 23: A proof for why the formula in Part 22 works

Arctangent

Part 18: Definition for arctangent with domain (-\pi/2,\pi/2)

Part 24: Finding the angle between two lines

Part 25: A proof for why the formula in Part 24 works.

Arcsecant

Part 26: Defining arcsecant using [0,\pi/2) \cup (\pi/2,\pi]

Part 27: Issues that arise in calculus using the domain [0,\pi/2) \cup (\pi/2,\pi]

Part 28: More issues that arise in calculus using the domain [0,\pi/2) \cup (\pi/2,\pi]

Part 29: Defining arcsecant using [0,\pi/2) \cup [\pi,3\pi/2)

Logarithm

Part 30: Logarithms and complex numbers
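A quick Python illustration of the central point of Parts 9–10 (the function below is my own construction, not from the posts): \sin x = 0.8 has infinitely many solutions, two for every period, and the arcsine function picks out just one of them.

```python
import math

def sin_solutions(t, k_range=range(-2, 3)):
    """Solutions of sin x = t on a few periods:
    x = arcsin(t) + 2*pi*k  or  x = pi - arcsin(t) + 2*pi*k."""
    base = math.asin(t)  # the principal value, in [-pi/2, pi/2]
    sols = []
    for k in k_range:
        sols.append(base + 2 * math.pi * k)
        sols.append(math.pi - base + 2 * math.pi * k)
    return sorted(sols)

solutions = sin_solutions(0.8)
print(len(solutions))  # 10 solutions from just five values of k
for x in solutions:
    assert abs(math.sin(x) - 0.8) < 1e-9  # every one satisfies sin x = 0.8
```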

Different definitions of e: Index

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on the different definitions of e that appear in Precalculus and Calculus.

Part 1: Justification for the formula for discrete compound interest.

Part 2: Pedagogical thoughts on justifying the discrete compound interest formula for students.

Part 3: Application of the discrete compound interest formula as compounding becomes more frequent.

Part 4: Informal definition of e based on a limit of the compound interest formula.

Part 5: Justification for the formula for continuous compound interest.

Part 6: A second derivation of the formula for continuous compound interest by solving a differential equation.

Part 7: A formal justification of the formula from Part 4 using the definition of a derivative.

Part 8: A formal justification of the formula from Part 4 using L’Hopital’s Rule.

Part 9: A formal justification of the continuous compound interest formula as a limit of the discrete compound interest formula.

Part 10: A second formal justification of the continuous compound interest formula as a limit of the discrete compound interest formula.

Part 11: Numerical computation of e using Riemann sums and the Trapezoid Rule to approximate areas under y = 1/x.

Part 12: Numerical computation of e using \displaystyle \left(1 + \frac{1}{n} \right)^n and also Taylor series.
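As a footnote to Parts 11–12, both numerical approaches fit in a few lines of Python (the function names are my own): the compound-interest limit converges quite slowly, while the Taylor series converges very quickly.

```python
import math

def e_compound(n):
    """Compound-interest limit: (1 + 1/n)^n approaches e as n grows."""
    return (1 + 1 / n) ** n

def e_taylor(terms):
    """Partial sum of the Taylor series e = sum of 1/k! for k >= 0."""
    return sum(1 / math.factorial(k) for k in range(terms))

print(e_compound(10**6))  # ≈ 2.71828, still off in the 6th decimal place
print(e_taylor(18))       # agrees with math.e to roughly machine precision
```

Comparing the two makes a nice classroom point: a million compounding periods buy only about five correct digits, while eighteen Taylor terms essentially exhaust the precision of floating-point arithmetic.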