Engaging students: Making and interpreting bar charts, frequency charts, pie charts, and histograms

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment is not to devise a full-blown lesson plan on the topic. Instead, I ask my students to think about three different ways of getting their students interested in the topic in the first place.

I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course).

This student submission comes from my former student Taylor Bigelow. Her topic, from Pre-Algebra: making and interpreting bar charts, frequency charts, pie charts, and histograms.


How could you as a teacher create an activity or project that involves your topic?

Charts allow for a lot of fun class activities. For example, we can have students collect their own data in a table and create charts from that data. For my activity, I will give them all dice, which they should be very familiar with, and have them roll the dice 20 times and keep track in a table of how many times each number comes up. From that table, they will make their own bar charts, frequency charts, and pie charts. After they roll their dice and make their charts, they will then answer questions interpreting the charts. This tests their ability to understand data and to make all the different types of charts.
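To preview what the class data might look like, here is a minimal sketch in Python (the choice of matplotlib for charting is my own assumption; any charting tool would do) that simulates the 20 rolls and draws the resulting bar chart:

import random
from collections import Counter

import matplotlib.pyplot as plt

# Simulate 20 rolls of a fair six-sided die and tally the frequencies,
# just as the students would record them in their tables.
rolls = [random.randint(1, 6) for _ in range(20)]
counts = Counter(rolls)

faces = list(range(1, 7))
frequencies = [counts.get(face, 0) for face in faces]

# Draw the bar chart the students would make from the frequency table.
plt.bar(faces, frequencies)
plt.xlabel("Face of the die")
plt.ylabel("Frequency in 20 rolls")
plt.title("Results of 20 dice rolls")
plt.show()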


How has this topic appeared in the news?

Charts are all over the news, especially recently. There were pie charts and frequency charts everywhere during the election cycle, and with COVID, bar charts of COVID data are everywhere we look. An easy engagement activity for this topic would be to make observations about the types of graphs that students will see all the time during election seasons and might even be familiar with. First, we will ask the students what kinds of news can benefit from graphs, and in what news they have seen graphs recently. I expect answers such as elections, COVID, and economics. Then we can look at some of the graphs that usually show up around election cycles. We will take a minute as a class to discuss what we notice about the graphs and what they mean, using questions like “What type of graph is this?”, “What are the variables in this graph?”, and “What information do you get from this graph?” This will show the students that being able to read these graphs has real-life applications, and it also teaches them what important things to look for in graphs during class time and homework.


How can technology be used to effectively engage students with this topic?

Technology is very useful for making graphs, and being able to make and manipulate graphs can help students understand how to interpret the information given in graphs. Google Sheets or Excel can both be used to make and manipulate graphs. For this activity, we would give the students some sample data, have them enter it into an online spreadsheet, and then have them make an appropriate graph to show the data. They would then answer questions about this graph, like “Why did you choose this type of graph to represent the data?”, “What is the independent variable and what is the dependent variable?”, “What observations can you make about this graph?”, and “What would happen if you changed X to be # instead, or if you added more information?”, along with other questions, especially about graphs with multiple variables. This helps students see how different information can be represented and lets them experiment with the information on their own, while the questions steer them toward the ideas the teacher wants them to learn.
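As a rough sketch of the same idea outside a spreadsheet, the sample data can also be entered and charted in a few lines of Python (the categories and values below are made up purely for illustration):

import matplotlib.pyplot as plt

# Hypothetical sample data the teacher might hand out.
categories = ["Walk", "Bus", "Car", "Bike"]
students = [5, 9, 12, 4]

# A pie chart is one appropriate choice here, since the data are
# parts of a whole (every student uses exactly one mode of transport).
plt.pie(students, labels=categories, autopct="%1.0f%%")
plt.title("How students get to school (sample data)")
plt.show()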

Thoughts on Numerical Integration (Part 18): Trapezoid rule and local rate of convergence

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
  • Why is numerical integration necessary in the first place?
  • Where do these formulas come from (especially Simpson’s Rule)?
  • How can I do all of these formulas quickly?
  • Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
  • Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
  • Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and MathWorld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In this post, we will perform an error analysis for the Trapezoid Rule

\int_a^b f(x) \, dx \approx \frac{h}{2} \left[f(x_0) + 2f(x_1) + 2f(x_2) + \dots + 2f(x_{n-1}) +f(x_n) \right] \equiv T_n

where n is the number of subintervals and h = (b-a)/n is the width of each subinterval, so that x_k = x_0 + kh.
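For readers who want to experiment numerically alongside the algebra below, here is a minimal Python sketch of T_n (the function name and the test integrand are my own choices):

def trapezoid_rule(f, a, b, n):
    # Approximate the integral of f on [a, b] with the Trapezoid Rule T_n.
    h = (b - a) / n
    total = f(a) + f(b)            # endpoint terms get weight 1
    for k in range(1, n):
        total += 2 * f(a + k * h)  # interior terms get weight 2
    return (h / 2) * total

# Example: T_100 for the integral of x^9 on [1, 2]; the exact value is 102.3.
print(trapezoid_rule(lambda x: x**9, 1, 2, 100))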
As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum (especially the Binomial Theorem), if we restrict our attention to the special case f(x) = x^k, where k \ge 5 is a positive integer.

For this special case, the true area under the curve f(x) = x^k on the subinterval [x_i, x_i +h] will be

\displaystyle \int_{x_i}^{x_i+h} x^k \, dx = \frac{1}{k+1} \left[ (x_i+h)^{k+1} - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \left[x_i^{k+1} + {k+1 \choose 1} x_i^k h + {k+1 \choose 2} x_i^{k-1} h^2 + {k+1 \choose 3} x_i^{k-2} h^3 + {k+1 \choose 4} x_i^{k-3} h^4+ {k+1 \choose 5} x_i^{k-4} h^5+ O(h^6) - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \bigg[ (k+1) x_i^k h + \frac{(k+1)k}{2} x_i^{k-1} h^2 + \frac{(k+1)k(k-1)}{6} x_i^{k-2} h^3+ \frac{(k+1)k(k-1)(k-2)}{24} x_i^{k-3} h^4

+ \displaystyle \frac{(k+1)k(k-1)(k-2)(k-3)}{120} x_i^{k-4} h^5 \bigg] + O(h^6)

= x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2 + \frac{k(k-1)}{6} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{24} x_i^{k-3} h^4 + \frac{k(k-1)(k-2)(k-3)}{120} x_i^{k-4} h^5 + O(h^6)

In the above, the shorthand O(h^6) can be formally defined, but here we’ll just take it to mean “terms that have a factor of h^6 or higher that we’re too lazy to write out.” Since h is supposed to be a small number, these terms will be small in magnitude and thus can be safely ignored. I wrote the above formula to include terms up to and including h^5 because I’ll need this later in this series of posts. For now, looking only at the Trapezoid Rule, it will suffice to write this integral as

\displaystyle \int_{x_i}^{x_i+h} x^k \, dx =x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2 + \frac{k(k-1)}{6} x_i^{k-2} h^3 + O(h^4).

Using the Trapezoid Rule, we approximate \displaystyle \int_{x_i}^{x_i+h} x^k \, dx as \displaystyle \frac{h}{2} \left[x_i^k + (x_i + h)^k \right], using the width h and the bases x_i^k and (x_i + h)^k of the trapezoid. Using the Binomial Theorem, this expands as

 x_i^k h + \displaystyle {k \choose 1} x_i^{k-1} \frac{h^2}{2}  + {k \choose 2} x_i^{k-2} \frac{h^3}{2} + {k \choose 3} x_i^{k-3} \frac{h^4}{2}  + {k \choose 4} x_i^{k-4} \frac{h^5}{2} + O(h^6)

 = x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2  + \frac{k(k-1)}{4} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{12} x_i^{k-3} h^4

\displaystyle + \frac{k(k-1)(k-2)(k-3)}{48} x_i^{k-4} h^5 + O(h^6)

Once again, this is a little bit overkill for the present purposes, but we’ll need this formula later in this series of posts. Truncating somewhat earlier, we find that the Trapezoid Rule for this subinterval gives

x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2  + \displaystyle \frac{k(k-1)}{4} x_i^{k-2} h^3 + O(h^4)

Subtracting from the actual integral, the error in this approximation will be equal to

\displaystyle x_i^k h + \frac{k}{2} x_i^{k-1} h^2 + \frac{k(k-1)}{6} x_i^{k-2} h^3 - x_i^k h - \frac{k}{2} x_i^{k-1} h^2  - \frac{k(k-1)}{4} x_i^{k-2} h^3 + O(h^4)

= \displaystyle -\frac{k(k-1)}{12} x_i^{k-2} h^3 + O(h^4)

In other words, as with the Midpoint Rule, the first two terms x_i^k h and \displaystyle \frac{k}{2} x_i^{k-1} h^2 cancel perfectly, leaving us with a local error on the order of h^3. We also recall from the previous post in this series that the local error from the Midpoint Rule was \displaystyle \frac{k(k-1)}{24} x_i^{k-2} h^3 + O(h^4). In other words, while both the Midpoint Rule and the Trapezoid Rule have local errors on the order of O(h^3), we expect the error in the Midpoint Rule to be about half the magnitude of the error from the Trapezoid Rule (and of opposite sign).
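This prediction is easy to check numerically under the same special case f(x) = x^k (the particular values of k, x_i, and h below are arbitrary choices of mine):

# Compare the local errors of the Midpoint and Trapezoid Rules on a single
# subinterval [x_i, x_i + h] for f(x) = x^k. The exact integral over the
# subinterval is ((x_i + h)^(k+1) - x_i^(k+1)) / (k + 1).
k, x_i, h = 9, 1.3, 0.01

exact = ((x_i + h)**(k + 1) - x_i**(k + 1)) / (k + 1)
midpoint = (x_i + h / 2)**k * h
trapezoid = (h / 2) * (x_i**k + (x_i + h)**k)

err_mid = exact - midpoint
err_trap = exact - trapezoid
# The ratio should be close to -2: the Trapezoid Rule's error is about
# twice the Midpoint Rule's, with the opposite sign.
print(err_mid, err_trap, err_trap / err_mid)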

Thoughts on Numerical Integration (Part 17): Midpoint rule and global rate of convergence

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
  • Why is numerical integration necessary in the first place?
  • Where do these formulas come from (especially Simpson’s Rule)?
  • How can I do all of these formulas quickly?
  • Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
  • Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
  • Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and MathWorld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In the previous post, we showed that the midpoint approximation of \displaystyle \int_{x_i}^{x_i+h} x^k \, dx  has error

\displaystyle \frac{k(k-1)}{24} x_i^{k-2} h^3 + O(h^4).

In this post, we consider the global error when integrating on the interval [a,b] instead of a subinterval [x_i,x_i+h]. The logic for determining the global error is much the same as what we used earlier for the left-endpoint rule. The total error when approximating \displaystyle \int_a^b x^k \, dx = \int_{x_0}^{x_n} x^k \, dx will be the sum of the errors for the integrals over [x_0,x_1], [x_1,x_2], through [x_{n-1},x_n]. Therefore, the total error will be

E \approx \displaystyle \frac{k(k-1)}{24} \left(x_0^{k-2} + x_1^{k-2} + \dots + x_{n-1}^{k-2} \right) h^3.

So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier in this series for the midpoint approximations to \displaystyle \int_1^2 x^9 \, dx with different numbers of subintervals. If we take n = 100 and h = 0.01, then the error should be approximately equal to

\displaystyle \frac{9 \times 8}{24} \left(1^7 + 1.01^7 + \dots + 1.99^7 \right) (0.01)^3 \approx 0.0093731,

which, as expected, is close to the actual error of 102.3 - 102.2904379 \approx 0.00956211.
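That arithmetic can be reproduced in a few lines of Python (a sketch, using the exact value \int_1^2 x^9 \, dx = 102.3):

# Reproduce the predicted and actual errors for the Midpoint Rule applied
# to the integral of x^9 on [1, 2] with n = 100 subintervals.
n, a, b, k = 100, 1.0, 2.0, 9
h = (b - a) / n

# Predicted error: (k(k-1)/24) * (x_0^(k-2) + ... + x_{n-1}^(k-2)) * h^3.
predicted = (k * (k - 1) / 24) * sum((a + i * h)**(k - 2) for i in range(n)) * h**3

# Actual error: exact integral minus the Midpoint Rule approximation.
exact = (b**(k + 1) - a**(k + 1)) / (k + 1)                    # 102.3
midpoint = h * sum((a + (i + 0.5) * h)**k for i in range(n))
print(predicted, exact - midpoint)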
Let y_i = x_i^{k-2}, so that the error becomes

E \approx \displaystyle \frac{k(k-1)}{24} \left(y_0 + y_1 + \dots + y_{n-1} \right) h^3 + O(h^4) = \displaystyle \frac{k(k-1)}{24} \overline{y} n h^3,

where \overline{y} = (y_0 + y_1 + \dots + y_{n-1})/n is the average of the y_i. Clearly, this average is somewhere between the smallest and the largest of the y_i. Since y = x^{k-2} is a continuous function, that means that there must be some value of x_* between x_0 and x_{n-1} — and therefore between a and b — so that x_*^{k-2} = \overline{y} by the Intermediate Value Theorem. We conclude that the error can be written as

E \approx \displaystyle \frac{k(k-1)}{24} x_*^{k-2} nh^3.

Finally, since h is the length of one subinterval, we see that nh = b-a is the total length of the interval [a,b]. Therefore,

E \approx \displaystyle \frac{k(k-1)}{24} x_*^{k-2} (b-a)h^2 \equiv ch^2,

where the constant c is determined by a, b, and k. In other words, for the special case f(x) = x^k, we have established that the error from the Midpoint Rule is approximately quadratic in h, confirming the numerical observations we made earlier, without resorting to the generalized mean-value theorem.
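The quadratic convergence can also be seen directly: doubling n (halving h) should cut the error by a factor of about 4. A quick sketch, again for \int_1^2 x^9 \, dx:

def midpoint_rule(f, a, b, n):
    # Approximate the integral of f on [a, b] with the Midpoint Rule M_n.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = 102.3    # integral of x^9 on [1, 2]
for n in (25, 50, 100, 200):
    # Each doubling of n should divide the error by about 4.
    print(n, exact - midpoint_rule(lambda x: x**9, 1.0, 2.0, n))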

Thoughts on Numerical Integration (Part 16): Midpoint rule and local rate of convergence

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
  • Why is numerical integration necessary in the first place?
  • Where do these formulas come from (especially Simpson’s Rule)?
  • How can I do all of these formulas quickly?
  • Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
  • Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
  • Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and MathWorld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In this post, we will perform an error analysis for the Midpoint Rule

\int_a^b f(x) \, dx \approx h \left[f(c_1) + f(c_2) + \dots + f(c_n) \right] \equiv M_n

where n is the number of subintervals and h = (b-a)/n is the width of each subinterval, so that x_k = x_0 + kh. Also, c_i = (x_{i-1} + x_i)/2 is the midpoint of the ith subinterval.
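In code, M_n takes only a couple of lines once the midpoints c_i are written out; here is a minimal Python sketch (the function name and the test integrand are my own choices):

def midpoint_rule(f, a, b, n):
    # Approximate the integral of f on [a, b] with the Midpoint Rule M_n,
    # sampling f at the midpoint c_i of each subinterval.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Example: M_100 for the integral of x^9 on [1, 2]; the exact value is 102.3.
print(midpoint_rule(lambda x: x**9, 1, 2, 100))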
As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum (especially the Binomial Theorem), if we restrict our attention to the special case f(x) = x^k, where k \ge 5 is a positive integer.

For this special case, the true area under the curve f(x) = x^k on the subinterval [x_i, x_i +h] will be

\displaystyle \int_{x_i}^{x_i+h} x^k \, dx = \frac{1}{k+1} \left[ (x_i+h)^{k+1} - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \left[x_i^{k+1} + {k+1 \choose 1} x_i^k h + {k+1 \choose 2} x_i^{k-1} h^2 + {k+1 \choose 3} x_i^{k-2} h^3 + {k+1 \choose 4} x_i^{k-3} h^4+ {k+1 \choose 5} x_i^{k-4} h^5+ O(h^6) - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \bigg[ (k+1) x_i^k h + \frac{(k+1)k}{2} x_i^{k-1} h^2 + \frac{(k+1)k(k-1)}{6} x_i^{k-2} h^3+ \frac{(k+1)k(k-1)(k-2)}{24} x_i^{k-3} h^4

+ \displaystyle \frac{(k+1)k(k-1)(k-2)(k-3)}{120} x_i^{k-4} h^5 \bigg] + O(h^6)

= x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2 + \frac{k(k-1)}{6} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{24} x_i^{k-3} h^4 + \frac{k(k-1)(k-2)(k-3)}{120} x_i^{k-4} h^5 + O(h^6)

In the above, the shorthand O(h^6) can be formally defined, but here we’ll just take it to mean “terms that have a factor of h^6 or higher that we’re too lazy to write out.” Since h is supposed to be a small number, these terms will be small in magnitude and thus can be safely ignored. I wrote the above formula to include terms up to and including h^5 because I’ll need this later in this series of posts. For now, looking only at the Midpoint Rule, it will suffice to write this integral as

\displaystyle \int_{x_i}^{x_i+h} x^k \, dx =x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2 + \frac{k(k-1)}{6} x_i^{k-2} h^3 + O(h^4).

Using the midpoint of the subinterval, the Midpoint Rule approximation of \displaystyle \int_{x_i}^{x_i+h} x^k \, dx is \displaystyle \left(x_i+ \frac{h}{2} \right)^k h. Using the Binomial Theorem, this expands as

 x_i^k h + \displaystyle {k \choose 1} x_i^{k-1} \frac{h^2}{2}  + {k \choose 2} x_i^{k-2} \frac{h^3}{4} + {k \choose 3} x_i^{k-3} \frac{h^4}{8}  + {k \choose 4} x_i^{k-4} \frac{h^5}{16} + O(h^6)

 = x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2  + \frac{k(k-1)}{8} x_i^{k-2} h^3 + \frac{k(k-1)(k-2)}{48} x_i^{k-3} h^4

\displaystyle + \frac{k(k-1)(k-2)(k-3)}{384} x_i^{k-4} h^5 + O(h^6)

Once again, this is a little bit overkill for the present purposes, but we’ll need this formula later in this series of posts. Truncating somewhat earlier, we find that the Midpoint Rule for this subinterval gives

x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2  + \displaystyle \frac{k(k-1)}{8} x_i^{k-2} h^3 + O(h^4)

Subtracting from the actual integral, the error in this approximation will be equal to

\displaystyle x_i^k h + \frac{k}{2} x_i^{k-1} h^2 + \frac{k(k-1)}{6} x_i^{k-2} h^3 - x_i^k h - \frac{k}{2} x_i^{k-1} h^2  - \frac{k(k-1)}{8} x_i^{k-2} h^3 + O(h^4)

= \displaystyle \frac{k(k-1)}{24} x_i^{k-2} h^3 + O(h^4)

In other words, unlike the left-endpoint and right-endpoint approximations, both of the first two terms x_i^k h and \displaystyle \frac{k}{2} x_i^{k-1} h^2 cancel perfectly, leaving us with a local error on the order of h^3.
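The h^3 behavior of this local error is easy to verify numerically: halving h should divide the error by about 8. A short sketch (the values of k and x_i are arbitrary choices of mine):

# Local error of the Midpoint Rule on one subinterval [x_i, x_i + h]
# for f(x) = x^k; the error should shrink by a factor of about 8
# each time h is halved, consistent with an O(h^3) local error.
k, x_i = 9, 1.3

for h in (0.1, 0.05, 0.025):
    exact = ((x_i + h)**(k + 1) - x_i**(k + 1)) / (k + 1)
    midpoint = (x_i + h / 2)**k * h
    print(h, exact - midpoint)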

Thoughts on Numerical Integration (Part 15): Right endpoint rule and global rate of convergence

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
  • Why is numerical integration necessary in the first place?
  • Where do these formulas come from (especially Simpson’s Rule)?
  • How can I do all of these formulas quickly?
  • Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
  • Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
  • Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and MathWorld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In the previous post in this series, we found that the local error of the right endpoint approximation to \displaystyle \int_{x_i}^{x_i+h} x^k \, dx was equal to 

\displaystyle \frac{k}{2} x_i^{k-1} h^2 + O(h^3).

We now consider the global error when integrating over the interval [a,b] and not just a particular subinterval.
The total error when approximating \displaystyle \int_a^b x^k \, dx = \int_{x_0}^{x_n} x^k \, dx will be the sum of the errors for the integrals over [x_0,x_1], [x_1,x_2], through [x_{n-1},x_n]. Therefore, the total error will be

E \approx \displaystyle \frac{k}{2} \left(x_1^{k-1} + x_2^{k-1} + \dots + x_{n}^{k-1} \right) h^2.

So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier in this series for the right-endpoint approximations to \displaystyle \int_1^2 x^9 \, dx with different numbers of subintervals. If we take n = 100 and h = 0.01, then the error should be approximately equal to

\displaystyle \frac{9}{2} \left(1.01^8 + 1.02^8 + \dots + 2^8 \right) (0.01)^2 \approx 2.61276,

which, as expected, is close to the actual error of 104.8741246 - 102.3 \approx 2.57412.
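Here too the arithmetic can be checked with a few lines of Python (a sketch, using the exact value \int_1^2 x^9 \, dx = 102.3):

# Reproduce the predicted and actual errors for the right-endpoint rule
# applied to the integral of x^9 on [1, 2] with n = 100 subintervals.
n, a, b, k = 100, 1.0, 2.0, 9
h = (b - a) / n

# Predicted error: (k/2) * (x_1^(k-1) + ... + x_n^(k-1)) * h^2.
predicted = (k / 2) * sum((a + i * h)**(k - 1) for i in range(1, n + 1)) * h**2

# Actual error: right-endpoint approximation minus the exact integral.
exact = (b**(k + 1) - a**(k + 1)) / (k + 1)                    # 102.3
right = h * sum((a + i * h)**k for i in range(1, n + 1))
print(predicted, right - exact)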
We now perform a more detailed analysis of the global error, which is almost a perfect copy-and-paste from the previous analysis. Let y_i = x_i^{k-1}, so that the error becomes

E \approx \displaystyle \frac{k}{2} \left(y_1 + y_2 + \dots + y_n \right) h^2 + O(h^3) = \displaystyle \frac{k}{2} \overline{y} n h^2,

where \overline{y} = (y_1 + y_2 + \dots + y_{n})/n is the average of the y_i. Clearly, this average is somewhere between the smallest and the largest of the y_i. Since y = x^{k-1} is a continuous function, that means that there must be some value of x_* between x_1 and x_{n} — and therefore between a and b — so that x_*^{k-1} = \overline{y} by the Intermediate Value Theorem. We conclude that the error can be written as

E \approx \displaystyle \frac{k}{2} x_*^{k-1} nh^2.

Finally, since h is the length of one subinterval, we see that nh = b-a is the total length of the interval [a,b]. Therefore,

E \approx \displaystyle \frac{k}{2} x_*^{k-1} (b-a)h \equiv ch,

where the constant c is determined by a, b, and k. In other words, for the special case f(x) = x^k, we have established that the error from the right-endpoint rule is approximately linear in h, without resorting to the generalized mean-value theorem.
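The linear convergence can be seen directly as well: doubling n (halving h) should cut the error roughly in half. A quick sketch, again for \int_1^2 x^9 \, dx:

exact = 102.3    # integral of x^9 on [1, 2]
for n in (25, 50, 100, 200):
    h = 1.0 / n
    # Right-endpoint approximation with n subintervals on [1, 2].
    approx = h * sum((1.0 + i * h)**9 for i in range(1, n + 1))
    # Each doubling of n should divide the error by about 2.
    print(n, approx - exact)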

Thoughts on Numerical Integration (Part 14): Right endpoint rule and local rate of convergence

Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:

  • Why is numerical integration necessary in the first place?
  • Where do these formulas come from (especially Simpson’s Rule)?
  • How can I do all of these formulas quickly?
  • Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
  • Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
  • Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?

In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and MathWorld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.

In this post, we will perform an error analysis for the right-endpoint rule

\int_a^b f(x) \, dx \approx h \left[f(x_1) + f(x_2) + \dots + f(x_n) \right] \equiv R_n

where n is the number of subintervals and h = (b-a)/n is the width of each subinterval, so that x_k = x_0 + kh.
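Here is a minimal Python sketch of R_n for experimenting alongside the algebra below (the function name and the test integrand are my own choices):

def right_endpoint_rule(f, a, b, n):
    # Approximate the integral of f on [a, b] with the right-endpoint
    # rule R_n, sampling f at the right endpoint of each subinterval.
    h = (b - a) / n
    return h * sum(f(a + k * h) for k in range(1, n + 1))

# Example: R_100 for the integral of x^9 on [1, 2]; the exact value is 102.3.
print(right_endpoint_rule(lambda x: x**9, 1, 2, 100))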

As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum, if we restrict our attention to the special case f(x) = x^k, where k \ge 5 is a positive integer.

For this special case, the true area under the curve f(x) = x^k on the subinterval [x_i, x_i +h] will be

\displaystyle \int_{x_i}^{x_i+h} x^k \, dx = \frac{1}{k+1} \left[ (x_i+h)^{k+1} - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \left[x_i^{k+1} + {k+1 \choose 1} x_i^k h + {k+1 \choose 2} x_i^{k-1} h^2 + O(h^3) - x_i^{k+1} \right]

= \displaystyle \frac{1}{k+1} \left[ (k+1) x_i^k h + \frac{(k+1)k}{2} x_i^{k-1} h^2 + O(h^3) \right]

= x_i^k h + \displaystyle \frac{k}{2} x_i^{k-1} h^2 + O(h^3)

In the above, the shorthand O(h^3) can be formally defined, but here we’ll just take it to mean “terms that have a factor of h^3 or higher that we’re too lazy to write out.” Since h is supposed to be a small number, these terms will be much smaller in magnitude than the terms that have h or h^2 and thus can be safely ignored.

Using only the right endpoint of the subinterval, the right-endpoint approximation of \displaystyle \int_{x_i}^{x_i+h} x^k \, dx is

(x_i+h)^k h = x_i^k h + k x_i^{k-1} h^2 + O(h^3).

Subtracting, the error in this approximation will be equal to

\displaystyle x_i^k h + k x_i^{k-1} h^2 - x_i^k h - \frac{k}{2} x_i^{k-1} h^2 + O(h^3) = \displaystyle \frac{k}{2} x_i^{k-1} h^2 + O(h^3)

Repeating the logic from the previous post in this series, this local error on [x_i, x_i+h], which is proportional to h^2, generates a total error on [a,b] that is proportional to h. That is, the right-endpoint rule has an error that is approximately linear in h, confirming the numerical observation that we made earlier in this series.
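The h^2 behavior of the local error can be verified in the same spirit: halving h should divide the single-subinterval error by about 4 (the values of k and x_i below are arbitrary choices of mine):

# Local error of the right-endpoint rule on one subinterval [x_i, x_i + h]
# for f(x) = x^k; the error should shrink by a factor of about 4 each
# time h is halved, consistent with an O(h^2) local error.
k, x_i = 9, 1.3

for h in (0.1, 0.05, 0.025):
    exact = ((x_i + h)**(k + 1) - x_i**(k + 1)) / (k + 1)
    right = (x_i + h)**k * h
    print(h, right - exact)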