Thoughts on Numerical Integration: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The links below show my series on numerical integration.

Part 1 and Part 2: Introduction

Part 3: Derivation of left, right, and midpoint rules

Part 4: Derivation of Trapezoid Rule

Part 5: Derivation of Simpson’s Rule

Part 6: Connection between the Midpoint Rule, the Trapezoid Rule, and Simpson’s Rule

Part 7: Implementation of numerical integration using Microsoft Excel

Part 8, Part 9, Part 10, Part 11: Numerical exploration of error analysis

Part 12 and Part 13: Left endpoint rule and rate of convergence

Part 14 and Part 15: Right endpoint rule and rate of convergence

Part 16 and Part 17: Midpoint Rule and rate of convergence

Part 18 and Part 19: Trapezoid Rule and rate of convergence

Part 20 and Part 21: Simpson’s Rule and rate of convergence

Part 22: Comparison of these results to theorems found in textbooks

Part 23: Return to Part 2 and accuracy of normalcdf function on TI calculators

Parabolic Properties from Pieces of String

I am pleased to announce that my latest paper, “Parabolic Properties from Pieces of String,” has now been published in Math Horizons. This was a really fun project for me. As I describe in the paper, I started wondering if it was possible to convince a student who hadn’t learned calculus yet that string art from two line segments traces a parabola. Not only was I able to come up with a way of demonstrating this without calculus, but I was also able to (1) prove that a quadratic polynomial satisfies the focus-directrix property of a parabola, which is the reverse of the usual logic when students learn conic sections, and (2) prove the reflective property of parabolas. I was really pleased with the final result, and am very happy that this was accepted for publication.

Due to copyright restrictions, I’m not permitted to freely distribute the final, published version of my article. However, I am able to share the following version of the article.

The above PDF file is an Accepted Manuscript of an article published by Taylor & Francis in Math Horizons on February 24, 2022, available online: Full article: Parabolic Properties from Pieces of String (tandfonline.com).

Square roots and logarithms without a calculator (Part 12)

I recently came across the following computational trick: to estimate \sqrt{b}, use

\sqrt{b} \approx \displaystyle \frac{b+a}{2\sqrt{a}},

where a is the closest perfect square to b. For example,

\sqrt{26} \approx \displaystyle \frac{26+25}{2\sqrt{25}} = 5.1.

I had not seen this trick before — at least stated in these terms — and I’m definitely not a fan of computational tricks without an explanation. In this case, the approximation is a straightforward consequence of a technique we teach in calculus. If f(x) = (1+x)^n, then f'(x) = n (1+x)^{n-1}, so that f'(0) = n. Since f(0) = 1, the equation of the tangent line to f(x) at x = 0 is

L(x) = f(0) + f'(0) \cdot (x-0) = 1 + nx.

The key observation is that, for x \approx 0, the graph of L(x) will be very close indeed to the graph of f(x). In Calculus I, this is sometimes called the linearization of f at x = 0. In Calculus II, we observe that these are the first two terms in the Taylor series expansion of f about x = 0.

For the problem at hand, if n = 1/2, then

\sqrt{1+x} \approx 1 + \displaystyle \frac{x}{2}

if x is close to zero. Therefore, if a is a perfect square close to b so that the relative difference (b-a)/a is small, then

\sqrt{b} = \sqrt{a + b - a}

= \sqrt{a} \sqrt{1 + \displaystyle \frac{b-a}{a}}

\approx \sqrt{a} \displaystyle \left(1 + \frac{b-a}{2a} \right)

= \sqrt{a} \displaystyle \left( \frac{2a + b-a}{2a} \right)

= \sqrt{a} \displaystyle \left( \frac{b+a}{2a} \right)

= \displaystyle \frac{b+a}{2\sqrt{a}}.

One more thought: All of the above might be a bit much to swallow for a talented but young student who has not yet learned calculus. So here’s another heuristic explanation that does not require calculus: if a \approx b, then the geometric mean \sqrt{ab} will be approximately equal to the arithmetic mean (a+b)/2. That is,

\sqrt{ab} \approx \displaystyle \frac{a+b}{2},

so that

\sqrt{b} \approx \displaystyle \frac{a+b}{2\sqrt{a}}.
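For the record, the trick is easy to check with a short program. Here's a Python sketch (the function name and the nearest-square search are my own choices, not part of the original trick):

```python
import math

def sqrt_estimate(b):
    """Estimate sqrt(b) as (b + a)/(2*sqrt(a)), where a is the
    nearest perfect square to b (found by rounding the square root)."""
    a = round(math.sqrt(b)) ** 2
    return (b + a) / (2 * math.sqrt(a))

print(sqrt_estimate(26))                         # 5.1
print(abs(sqrt_estimate(26) - math.sqrt(26)))    # error of about 0.001
```

As the derivation predicts, the estimate is accurate whenever the relative difference (b-a)/a is small.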

A New Derivation of Snell’s Law without Calculus

Last week, I posted that my latest paper, “A New Derivation of Snell’s Law without Calculus,” has now been published in College Mathematics Journal. In that previous post, I didn’t provide the complete exposition because of my understanding of copyright restrictions at that time.

I’ve since received requests for copies of my paper, which prompted me to carefully read the publisher’s copyright restrictions. In a nutshell, I was wrong: I am allowed to widely distribute preprints that did not go through peer review and, with extra restrictions, the accepted manuscript after peer review.

So, anyway, here it is.

The above PDF file is an Accepted Manuscript of an article published by Taylor & Francis in College Mathematics Journal on January 28, 2022, available online: Full article: A New Derivation of Snell’s Law Without Calculus (tandfonline.com).

A New Derivation of Snell’s Law without Calculus

I’m pleased to say that my latest paper, “A New Derivation of Snell’s Law without Calculus,” has now been published in College Mathematics Journal. The article is now available for online access to anyone who has access to the journal — usually, that means members of the Mathematical Association of America or anyone whose employer (say, a university) has institutional access. I expect that it will appear in the printed edition of the journal later this year; however, I haven’t yet been told which issue it will be in.

Because of copyright issues, I can’t reproduce my new derivation of Snell’s Law here on the blog, so let me instead summarize the main idea. Snell’s Law (see Wikipedia) dictates the angle at which light is refracted when it passes from one medium (say, air) into another (say, water). If the velocity of light through air is v_1 while its velocity in water is v_2, then Snell’s Law says that

\displaystyle \frac{\sin \theta_1}{v_1} = \displaystyle \frac{\sin \theta_2}{v_2}
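While I can't reproduce the derivation here, the law itself is easy to play with numerically. Here's a small Python sketch; the angle and velocities are hypothetical values I chose for illustration, using roughly 1.33 for the refractive index of water:

```python
import math

# Snell's Law: sin(theta_1)/v_1 = sin(theta_2)/v_2.
# Illustrative values: light entering water from air at 45 degrees,
# with v_2/v_1 = 1/1.33 (water's refractive index is about 1.33).
theta_1 = math.radians(45)
v1, v2 = 1.0, 1.0 / 1.33
theta_2 = math.asin(math.sin(theta_1) * v2 / v1)
print(math.degrees(theta_2))   # about 32.1 degrees: bent toward the normal
```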

I was asked by a bright student who was learning physics if there was a way to prove Snell’s Law without using calculus. At the time, I was blissfully unaware of Huygens’s Principle (see OpenStax) and I didn’t have a good answer. I had only seen derivations of Snell’s Law using the first-derivative test, which is a standard optimization problem found in most calculus books (again, see Wikipedia) based on Fermat’s Principle that light travels along a path that minimizes time.

Anyway, after a couple of days, I found an elementary proof that does not require calculus. I should warn that the word “elementary” can be a loaded word when used by mathematicians. The proof uses only concepts found in Precalculus, especially rotating a certain hyperbola and carefully examining the domains of two functions. So while the proof does not use calculus, I can’t say that the proof is particularly easy — especially compared to the classical proof using Huygens’s Principle.

That said, I’m pretty sure that my proof is original, and I’m pretty proud of it.

Thoughts on Numerical Integration (Part 23): The normalcdf function on TI calculators

I end this series about numerical integration by returning to the most common (if hidden) application of numerical integration in the secondary mathematics curriculum: finding the area under the normal curve. This is a critically important tool for problems in both probability and statistics; however, the antiderivative of \displaystyle \frac{1}{\sqrt{2\pi}} e^{-x^2/2} cannot be expressed using finitely many elementary functions. Therefore, we must resort to numerical methods instead.

In days of old, of course, students relied on tables in the back of the textbook to find areas under the bell curve, and I suppose that such tables are still being printed. For students with access to modern scientific calculators, however, there’s no need for tables because this is a built-in function on many calculators. For the line of TI calculators, the command is normalcdf.

Unfortunately, it’s a sad (but not well-known) fact of life that the TI-83 and TI-84 calculators are not terribly accurate at computing these areas. For example:

TI-84: \displaystyle \int_0^1 \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx \approx 0.3413447\underline{399}

Correct answer, with Mathematica: 0.3413447\underline{461}\dots

TI-84: \displaystyle \int_1^2 \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx \approx 0.1359051\underline{975}

Correct answer, with Mathematica: 0.1359051\underline{219}\dots

TI-84: \displaystyle \int_2^3 \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx \approx 0.021400\underline{0948}

Correct answer, with Mathematica: 0.021400\underline{2339}\dots

TI-84: \displaystyle \int_3^4 \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx \approx 0.0013182\underline{812}

Correct answer, with Mathematica: 0.0013182\underline{267}\dots

TI-84: \displaystyle \int_4^5 \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx \approx 0.0000313\underline{9892959}

Correct answer, with Mathematica: 0.0000313\underline{84590261}\dots

TI-84: \displaystyle \int_5^6 \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx \approx 2.8\underline{61148776} \times 10^{-7}

Correct answer, with Mathematica: 2.8\underline{56649842}\dots \times 10^{-7}

I don’t presume to know the proprietary algorithm used to implement normalcdf on TI-83 and TI-84 calculators. My honest if brutal assessment is that it’s probably not worth knowing: in the best case (when the endpoints are close to 0), the calculator provides an answer that is accurate to only 7 significant digits while presenting the illusion of a higher degree of accuracy. I can say that Simpson’s Rule with only n = 26 subintervals provides a better approximation to \displaystyle \int_0^1 \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx than the normalcdf function.
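That last claim is easy to verify with a short program. Here's a Python sketch of composite Simpson's Rule applied to the standard normal density (the helper names are mine); the exact value is computed from the error function in Python's standard library:

```python
import math

def phi(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def simpson(f, a, b, n):
    """Composite Simpson's Rule with n subintervals (n must be even)."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

exact = 0.5 * math.erf(1 / math.sqrt(2))   # integral of phi from 0 to 1
approx = simpson(phi, 0, 1, 26)
print(abs(approx - exact))                 # error of roughly 6e-9
```

With only 26 subintervals, the error is already smaller than the error in the normalcdf output quoted above.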

For what it’s worth, I also looked at the accuracy of the NORMSDIST function in Microsoft Excel. This is much better, almost always producing answers that are accurate to 11 or 12 significant digits, which is all that can be realistically expected in floating-point double-precision arithmetic (in which numbers are usually stored accurate to 13 significant digits prior to any computations).

Thoughts on Numerical Integration (Part 22): Comparison to theorems about magnitudes of errors

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:

  • Why is numerical integration necessary in the first place?
  • Where do these formulas come from (especially Simpson’s Rule)?
  • How can I do all of these formulas quickly?
  • Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
  • Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
  • Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?

In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.

In this series, we have shown the following approximations of errors when using various numerical approximations for \int_a^b x^k \, dx. We obtained these approximations using only techniques within the reach of a talented high school student who has mastered Precalculus — especially the Binomial Theorem — and elementary techniques of integration.

As we now present, the formulas that we derived are (of course) easily connected to known theorems for the convergence of these techniques. These proofs, however, require some fairly advanced techniques from calculus. So, while the formulas derived in this series of posts only apply to f(x) = x^k (and, by an easy extension, any polynomial), the formulas that we do obtain easily foreshadow the actual formulas found on Wikipedia or Mathworld or calculus textbooks, thus (hopefully) taking some of the mystery out of these formulas.

Left and right endpoints: Our formula was

E \approx \displaystyle \frac{k}{2} x_*^{k-1} (b-a)h,

where x_* is some number between a and b. By comparison, the actual formula for the error is

E = \displaystyle \frac{f'(x_*) (b-a)^2}{2n} = \frac{f'(x_*)}{2} (b-a)h.

This reduces to the formula that we derived since f'(x) = kx^{k-1}.
 

Midpoint Rule: Our formula was

E \approx \displaystyle \frac{k(k-1)}{24} x_*^{k-2} (b-a)h^2,

where x_* is some number between a and b. By comparison, the actual formula for the error is

E = \displaystyle \frac{f''(x_*) (b-a)^3}{24n^2} = \frac{f''(x_*)}{24} (b-a)h^2.

This reduces to the formula that we derived since f''(x) = k(k-1)x^{k-2}.

Trapezoid Rule: Our formula was

E \approx \displaystyle \frac{k(k-1)}{12} x_*^{k-2} (b-a)h^2,

where x_* is some number between a and b. By comparison, the actual formula for the error is

E = \displaystyle \frac{f''(x_*) (b-a)^3}{12n^2} = \frac{f''(x_*)}{12} (b-a)h^2.

This reduces to the formula that we derived since f''(x) = k(k-1)x^{k-2}.

Simpson’s Rule: Our formula was

E \approx \displaystyle \frac{k(k-1)(k-2)(k-3)}{180} x_*^{k-4} (b-a)h^4,

where x_* is some number between a and b. By comparison, the actual formula for the error is

E = \displaystyle \frac{f^{(4)}(x_*)}{180} (b-a)h^4.

This reduces to the formula that we derived since f^{(4)}(x) = k(k-1)(k-2)(k-3)x^{k-4}.
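These error formulas can also be checked numerically. The Python sketch below (the implementation choices are mine) compares the Midpoint and Trapezoid errors for \int_1^2 x^9 \, dx; the /24 and /12 constants predict that the Trapezoid error should be about twice the Midpoint error, with opposite sign:

```python
def midpoint(f, a, b, n):
    """Composite Midpoint Rule with n subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def trapezoid(f, a, b, n):
    """Composite Trapezoid Rule with n subintervals."""
    h = (b - a) / n
    return (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2) * h

f = lambda x: x ** 9
exact = (2 ** 10 - 1) / 10        # integral of x^9 from 1 to 2 is 102.3
n = 200
e_mid = midpoint(f, 1, 2, n) - exact
e_trap = trapezoid(f, 1, 2, n) - exact
print(e_trap / e_mid)             # close to -2, as the constants predict
```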

Thoughts on Numerical Integration (Part 21): Simpson’s rule and global rate of convergence

In the previous post in this series, we showed that the Simpson’s Rule approximation of \displaystyle \int_{x_i}^{x_i+2h} x^k \, dx has an error of

-\displaystyle \frac{k(k-1)(k-2)(k-3)}{90} x_i^{k-4} h^5 + O(h^6).

In this post, we consider the global error when integrating on the interval [a,b] instead of a subinterval [x_i,x_i+2h]. The total error when approximating \displaystyle \int_a^b x^k \, dx = \int_{x_0}^{x_n} x^k \, dx will be the sum of the errors for the integrals over [x_0,x_2], [x_2,x_4], through [x_{n-2},x_n]. Therefore, the total error will be

E \approx \displaystyle \frac{k(k-1)(k-2)(k-3)}{90} \left(x_0^{k-4} + x_2^{k-4} + \dots + x_{n-2}^{k-4} \right) h^5.

So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier. The figure below shows the Simpson’s Rule approximations to \displaystyle \int_1^2 x^9 \, dx for different numbers of subintervals. If we take n = 100 and h = 0.01, then the error should be approximately equal to

\displaystyle \frac{9 \times 8 \times 7 \times 6}{90} \left(1^5 + 1.02^5 + 1.04^5 + \dots + 1.98^5 \right) (0.01)^5 \approx 0.0000017,

which, as expected, is close to the observed error of 102.3000018 - 102.3 \approx 0.0000018.

Let y_i = x_i^{k-4}, so that the error becomes

E \approx \displaystyle \frac{k(k-1)(k-2)(k-3)}{90} \left(y_0 + y_2 + \dots + y_{n-2} \right) h^5 = \displaystyle \frac{k(k-1)(k-2)(k-3)}{90} \overline{y} \frac{n}{2} h^5,

where \overline{y} = (y_0 + y_2 + \dots + y_{n-2})/(n/2) is the average of the y_i. (We notice that there are only n/2 terms in this sum since we’re adding only the even terms.) Clearly, this average is somewhere between the smallest and the largest of the y_i. Since y = x^{k-4} is a continuous function, that means that there must be some value of x_* between x_0 and x_{n-2} — and therefore between a and b — so that x_*^{k-4} = \overline{y} by the Intermediate Value Theorem. We conclude that the error can be written as

E \approx \displaystyle \frac{k(k-1)(k-2)(k-3)}{180} x_*^{k-4} nh^5,

Finally, since h is the length of one subinterval, we see that nh = b-a is the total length of the interval [a,b]. Therefore,

E \approx \displaystyle \frac{k(k-1)(k-2)(k-3)}{180} x_*^{k-4} (b-a)h^4 \equiv ch^4,

where the constant c is determined by a, b, and k. In other words, for the special case f(x) = x^k, we have established that the error from Simpson’s Rule is approximately quartic in h — without resorting to the generalized mean-value theorem and confirming the numerical observations we made earlier.
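For readers who want to check this on a computer, here is a short Python sketch (the implementation choices are mine) confirming both the size of the error and its quartic decay for \int_1^2 x^9 \, dx:

```python
def simpson(f, a, b, n):
    """Composite Simpson's Rule with n subintervals (n must be even)."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

f = lambda x: x ** 9
exact = 102.3                      # integral of x^9 from 1 to 2
err_100 = simpson(f, 1, 2, 100) - exact
err_200 = simpson(f, 1, 2, 200) - exact
print(err_100)                     # about 1.76e-6, matching the estimate above
print(err_100 / err_200)           # close to 16 = 2^4: quartic convergence
```

Doubling the number of subintervals cuts the error by a factor of about 2^4 = 16, exactly the O(h^4) behavior derived above.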

Thoughts on Numerical Integration (Part 20): Simpson’s rule and local rate of convergence

In this post, we will perform an error analysis for Simpson’s Rule

\int_a^b f(x) \, dx \approx \frac{h}{3} \left[f(x_0) + 4f(x_1) + 2f(x_2) + \dots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n) \right] \equiv S_n

where n is the number of subintervals (which has to be even) and h = (b-a)/n is the width of each subinterval, so that x_i = x_0 + ih.

As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum (especially the Binomial Theorem), if we restrict our attention to the special case f(x) = x^k, where k \ge 5 is a positive integer.

For this special case, the true area under the curve f(x) = x^k on the subinterval [x_i, x_i +h] will be

\displaystyle \int_{x_i}^{x_i+h} x^k \, dx = \frac{1}{k+1} \left[ (x_i+h)^{k+1} - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \left[x_i^{k+1} + {k+1 \choose 1} x_i^k h + {k+1 \choose 2} x_i^{k-1} h^2 + {k+1 \choose 3} x_i^{k-2} h^3 + {k+1 \choose 4} x_i^{k-3} h^4+ {k+1 \choose 5} x_i^{k-4} h^5+ O(h^6) - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \bigg[ (k+1) x_i^k h + \frac{(k+1)k}{2} x_i^{k-1} h^2 + \frac{(k+1)k(k-1)}{6} x_i^{k-2} h^3+ \frac{(k+1)k(k-1)(k-2)}{24} x_i^{k-3} h^4

+ \displaystyle \frac{(k+1)k(k-1)(k-2)(k-3)}{120} x_i^{k-4} h^5 \bigg] + O(h^6)

= x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2 + \frac{k(k-1)}{6} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{24} x_i^{k-3} h^4 + \frac{k(k-1)(k-2)(k-3)}{120} x_i^{k-4} h^5 + O(h^6)

In the above, the shorthand O(h^6) can be formally defined, but here we’ll just take it to mean “terms that have a factor of h^6 or higher that we’re too lazy to write out.” Since h is supposed to be a small number, these terms will be small in magnitude and thus can be safely ignored.
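For readers who’d like to check the algebra with a computer, here’s a quick numerical sketch of the expansion above (the particular values of k, x_i, and h are arbitrary choices of mine):

```python
# Sanity check of the binomial expansion for k = 7, x_i = 1.3, h = 0.01.
# The five written-out terms should agree with the exact integral
# up to the omitted O(h^6) terms.
k, x, h = 7, 1.3, 0.01
exact = ((x + h) ** (k + 1) - x ** (k + 1)) / (k + 1)
series = (x ** k * h
          + k / 2 * x ** (k - 1) * h ** 2
          + k * (k - 1) / 6 * x ** (k - 2) * h ** 3
          + k * (k - 1) * (k - 2) / 24 * x ** (k - 3) * h ** 4
          + k * (k - 1) * (k - 2) * (k - 3) / 120 * x ** (k - 4) * h ** 5)
print(abs(exact - series))   # on the order of h^6 = 1e-12
```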
Earlier in this series, we derived the very convenient relationship S_{2n} = \displaystyle \frac{2}{3} M_n + \frac{1}{3} T_n relating the approximations from Simpson’s Rule, the Midpoint Rule, and the Trapezoid Rule. We now exploit this relationship to approximate \displaystyle \int_{x_i}^{x_i+h} x^k \, dx. Earlier in this series, we found the Midpoint Rule approximation on this subinterval to be

M = x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2  + \frac{k(k-1)}{8} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{48} x_i^{k-3} h^4

\displaystyle + \frac{k(k-1)(k-2)(k-3)}{384} x_i^{k-4} h^5 + O(h^6)

while we found the Trapezoid Rule approximation to be

 T = x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2  + \frac{k(k-1)}{4} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{12} x_i^{k-3} h^4

\displaystyle + \frac{k(k-1)(k-2)(k-3)}{48} x_i^{k-4} h^5 + O(h^6).

Therefore, if there are 2n subintervals, the Simpson’s Rule approximation of \displaystyle \int_{x_i}^{x_i+h} x^k \, dx — that is, the area under the parabola that passes through (x_i, x_i^k), (x_i + h/2, (x_i +h/2)^k), and (x_i + h, (x_i +h)^k) — will be S = \frac{2}{3}M + \frac{1}{3}T. Since

\displaystyle \frac{2}{3} \frac{1}{8} + \frac{1}{3} \frac{1}{4} = \frac{1}{6},

\displaystyle \frac{2}{3} \frac{1}{48} + \frac{1}{3} \frac{1}{12} = \frac{1}{24},

and

\displaystyle \frac{2}{3} \frac{1}{384} + \frac{1}{3} \frac{1}{48} = \frac{5}{576},

we see that

 S = x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2  + \frac{k(k-1)}{6} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{24} x_i^{k-3} h^4

\displaystyle + \frac{5k(k-1)(k-2)(k-3)}{576} x_i^{k-4} h^5 + O(h^6).

We notice that something wonderful just happened: the first four terms of S perfectly match the first four terms of the exact value of the integral! Subtracting from the actual integral, the error in this approximation will be equal to

\displaystyle \frac{k(k-1)(k-2)(k-3)}{120} x_i^{k-4} h^5 - \frac{5k(k-1)(k-2)(k-3)}{576} x_i^{k-4} h^5 + O(h^6)

= -\displaystyle \frac{k(k-1)(k-2)(k-3)}{2880} x_i^{k-4} h^5 + O(h^6)

Before moving on, there’s one minor bookkeeping issue to deal with. We note that this is the error for S_{2n}, where 2n subintervals are used. However, the value of h in this equation arose from T_n and M_n, where only n subintervals are used. So let’s write the error with 2n subintervals as

-\displaystyle \frac{k(k-1)(k-2)(k-3)}{90} x_i^{k-4} \left( \frac{h}{2} \right)^5 + O(h^6),

where h/2 is the width of each of the 2n subintervals. By analogy, we see that the error for n subintervals will be

-\displaystyle \frac{k(k-1)(k-2)(k-3)}{90} x_i^{k-4} h^5 + O(h^6).

But even after adjusting for this constant, we see that this local error behaves like O(h^5), a vast improvement over both the Midpoint Rule and the Trapezoid Rule. This illustrates a general principle of numerical analysis: given two algorithms that are O(h^3), an improved algorithm can typically be made by taking some linear combination of the two algorithms. Usually, the improvement will be to O(h^4); however, in this example, we magically obtained an improvement to O(h^5).
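The relationship S_{2n} = \frac{2}{3} M_n + \frac{1}{3} T_n that drove this derivation can be confirmed numerically. Here’s a short Python sketch (the function names and test values are mine):

```python
def midpoint(f, a, b, n):
    """Composite Midpoint Rule with n subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def trapezoid(f, a, b, n):
    """Composite Trapezoid Rule with n subintervals."""
    h = (b - a) / n
    return (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2) * h

def simpson(f, a, b, n):
    """Composite Simpson's Rule with n subintervals (n must be even)."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

f = lambda x: x ** 9
s = simpson(f, 1, 2, 20)                                   # S_{2n} with 2n = 20
combo = 2 / 3 * midpoint(f, 1, 2, 10) + 1 / 3 * trapezoid(f, 1, 2, 10)
print(abs(s - combo))        # zero up to rounding: S_2n = (2/3)M_n + (1/3)T_n
```

The identity holds exactly (not just approximately), so the printed difference is pure floating-point rounding.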

Thoughts on Numerical Integration (Part 19): Trapezoid rule and global rate of convergence

In the previous post, we showed that the Trapezoid Rule approximation of \displaystyle \int_{x_i}^{x_i+h} x^k \, dx  has error

\displaystyle \frac{k(k-1)}{12} x_i^{k-2} h^3 + O(h^4)

In this post, we consider the global error when integrating on the interval [a,b] instead of a subinterval [x_i,x_i+h]. The logic is almost a perfect copy-and-paste from the analysis used for the Midpoint Rule. The total error when approximating \displaystyle \int_a^b x^k \, dx = \int_{x_0}^{x_n} x^k \, dx will be the sum of the errors for the integrals over [x_0,x_1], [x_1,x_2], through [x_{n-1},x_n]. Therefore, the total error will be

E \approx \displaystyle \frac{k(k-1)}{12} \left(x_0^{k-2} + x_1^{k-2} + \dots + x_{n-1}^{k-2} \right) h^3.

So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier. The figure below shows the Trapezoid Rule approximations to \displaystyle \int_1^2 x^9 \, dx for different numbers of subintervals. If we take n = 100 and h = 0.01, then the error should be approximately equal to

\displaystyle \frac{9 \times 8}{12} \left(1^7 + 1.01^7 + \dots + 1.99^7 \right) (0.01)^3 \approx 0.0187462,

which, as expected, is close to the actual error of 102.3191246 - 102.3 \approx 0.0191246.

Let y_i = x_i^{k-2}, so that the error becomes

E \approx \displaystyle \frac{k(k-1)}{12} \left(y_0 + y_1 + \dots + y_{n-1} \right) h^3 = \displaystyle \frac{k(k-1)}{12} \overline{y} n h^3,

where \overline{y} = (y_0 + y_1 + \dots + y_{n-1})/n is the average of the y_i. Clearly, this average is somewhere between the smallest and the largest of the y_i. Since y = x^{k-2} is a continuous function, that means that there must be some value of x_* between x_0 and x_{n-1} — and therefore between a and b — so that x_*^{k-2} = \overline{y} by the Intermediate Value Theorem. We conclude that the error can be written as

E \approx \displaystyle \frac{k(k-1)}{12} x_*^{k-2} nh^3,

Finally, since h is the length of one subinterval, we see that nh = b-a is the total length of the interval [a,b]. Therefore,

E \approx \displaystyle \frac{k(k-1)}{12} x_*^{k-2} (b-a)h^2 \equiv ch^2,

where the constant c is determined by a, b, and k. In other words, for the special case f(x) = x^k, we have established that the error from the Trapezoid Rule is approximately quadratic in h — without resorting to the generalized mean-value theorem and confirming the numerical observations we made earlier.
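As a final numerical check, here is a short Python sketch (the implementation choices are mine) confirming both the size of the Trapezoid Rule error computed above and its quadratic decay for \int_1^2 x^9 \, dx:

```python
def trapezoid(f, a, b, n):
    """Composite Trapezoid Rule with n subintervals."""
    h = (b - a) / n
    return (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2) * h

f = lambda x: x ** 9
exact = 102.3                     # integral of x^9 from 1 to 2
err_100 = trapezoid(f, 1, 2, 100) - exact
err_200 = trapezoid(f, 1, 2, 200) - exact
print(err_100)                    # about 0.0191246, the observed error above
print(err_100 / err_200)          # close to 4 = 2^2: quadratic convergence
```

Doubling the number of subintervals cuts the error by a factor of about 4, exactly the O(h^2) behavior derived above.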