Thoughts on Numerical Integration (Part 14): Right endpoint rule and local rate of convergence

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:

  • Why is numerical integration necessary in the first place?
  • Where do these formulas come from (especially Simpson’s Rule)?
  • How can I do all of these formulas quickly?
  • Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
  • Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
  • Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?

In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and MathWorld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.

In this post, we will perform an error analysis for the right-endpoint rule

\int_a^b f(x) \, dx \approx h \left[f(x_1) + f(x_2) + \dots + f(x_n) \right] \equiv R_n

where n is the number of subintervals and h = (b-a)/n is the width of each subinterval, so that x_k = a + kh (in particular, x_0 = a and x_n = b).
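To make the rule concrete, here is a minimal Python sketch; the function name right_endpoint and the test integrand x^4 on [1, 2] are illustrative choices of mine, not part of the original discussion.

    def right_endpoint(f, a, b, n):
        """Approximate the integral of f on [a, b] with the right-endpoint
        rule R_n: sample f at x_k = a + k*h for k = 1, 2, ..., n."""
        h = (b - a) / n
        return h * sum(f(a + k * h) for k in range(1, n + 1))

    # Illustrative check: the integral of x^4 on [1, 2] is exactly 31/5 = 6.2.
    print(right_endpoint(lambda x: x**4, 1.0, 2.0, 100))  # prints a value near 6.2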

As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum, if we restrict our attention to the special case f(x) = x^k, where k \ge 5 is an integer.

For this special case, the true area under the curve f(x) = x^k on the subinterval [x_i, x_i+h] will be

\displaystyle \int_{x_i}^{x_i+h} x^k \, dx = \frac{1}{k+1} \left[ (x_i+h)^{k+1} - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \left[x_i^{k+1} + {k+1 \choose 1} x_i^k h + {k+1 \choose 2} x_i^{k-1} h^2 + O(h^3) - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \left[ (k+1) x_i^k h + \frac{(k+1)k}{2} x_i^{k-1} h^2 + O(h^3) \right]

= x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2 + O(h^3)

In the above, the shorthand O(h^3) can be formally defined, but here we’ll just take it to mean “terms that have a factor of h^3 or higher that we’re too lazy to write out.” Since h is supposed to be a small number, these terms will be much smaller in magnitude than the terms that have h or h^2 and thus can be safely ignored.
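If you’d like to check this algebra by machine, here is a small SymPy sketch of my own (using the concrete choice k = 5 for illustration; any positive integer exponent behaves the same way) that expands the true area in powers of h:

    import sympy as sp

    x_i, h = sp.symbols('x_i h', positive=True)
    k = 5  # illustrative concrete exponent; the algebra works for any positive integer k

    # True area on [x_i, x_i + h], computed from the antiderivative x^(k+1)/(k+1)
    true_area = ((x_i + h)**(k + 1) - x_i**(k + 1)) / (k + 1)

    # Expand in powers of h, keeping terms through h^2
    print(sp.series(true_area, h, 0, 3))
    # Expected (up to term ordering): h*x_i**5 + 5*h**2*x_i**4/2 + O(h**3),
    # matching x_i^k h + (k/2) x_i^(k-1) h^2 + O(h^3) for k = 5.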

Using only the right endpoint of the subinterval, the right-endpoint approximation of \displaystyle \int_{x_i}^{x_i+h} x^k \, dx is

(x_i+h)^k h = x_i^k h + k x_i^{k-1} h^2 + O(h^3).

Subtracting, the error in this approximation will be equal to

\displaystyle x_i^k h + k x_i^{k-1} h^2 - x_i^k h - \frac{k}{2} x_i^{k-1} h^2 + O(h^3) = \displaystyle \frac{k}{2} x_i^{k-1} h^2 + O(h^3)
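Continuing the hypothetical SymPy check from above (again with the illustrative choice k = 5), subtracting the true area from the right-endpoint rectangle recovers this local error:

    import sympy as sp

    x_i, h = sp.symbols('x_i h', positive=True)
    k = 5  # same illustrative exponent as before

    true_area = ((x_i + h)**(k + 1) - x_i**(k + 1)) / (k + 1)
    right_rect = (x_i + h)**k * h  # right-endpoint rectangle on [x_i, x_i + h]

    # Local error = approximation - true area, expanded in powers of h
    print(sp.series(right_rect - true_area, h, 0, 3))
    # Expected leading term: (k/2) x_i^(k-1) h^2, i.e. 5*h**2*x_i**4/2 for k = 5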

Repeating the logic from the previous post in this series, this local error on [x_i, x_i+h], which is O(h^2), accumulates over the n = (b-a)/h subintervals to give a total error on [a,b] proportional to n \cdot h^2 = (b-a) h. That is, the right-endpoint rule has an error that is approximately linear in h, confirming the numerical observation that we made earlier in this series.
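A quick numerical experiment makes this concrete. The sketch below (reusing my illustrative right_endpoint function and the test integral of x^4 on [1, 2]) doubles n repeatedly; if the global error is linear in h, each doubling of n should cut the error roughly in half:

    def right_endpoint(f, a, b, n):
        h = (b - a) / n
        return h * sum(f(a + k * h) for k in range(1, n + 1))

    exact = 31 / 5  # the integral of x^4 on [1, 2]
    errors = [abs(right_endpoint(lambda x: x**4, 1.0, 2.0, n) - exact)
              for n in (10, 20, 40, 80, 160)]
    for e_coarse, e_fine in zip(errors, errors[1:]):
        print(e_coarse / e_fine)  # each ratio should be close to 2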
