The following problem appeared in Volume 97, Issue 3 (2024) of Mathematics Magazine.
Two points $P$ and $Q$ are chosen at random (uniformly) from the interior of a unit circle. What is the probability that the circle whose diameter is segment $\overline{PQ}$ lies entirely in the interior of the unit circle?
Let $D$ be the interior of the circle centered at the origin with radius $1$. Also, let $C$ denote the circle with diameter $\overline{PQ}$, and let $X$ be the distance of $P$ from the origin.
In the previous post, we showed that

$$P(C \subset D \mid X = x) = \sqrt{1 - x^2}.$$
To find $P(C \subset D)$, I will integrate over this conditional probability:

$$P(C \subset D) = \int_0^1 P(C \subset D \mid X = x)\, F'(x)\, dx,$$
where $F$ is the cumulative distribution function of $X$. For $0 \le x \le 1$,

$$F(x) = \frac{\pi x^2}{\pi \cdot 1^2} = x^2.$$
Therefore,

$$P(C \subset D) = \int_0^1 \sqrt{1 - x^2} \cdot 2x \, dx.$$
To calculate this integral, I’ll use the trigonometric substitution $x = \sin\theta$. Then the endpoints $x = 0$ and $x = 1$ become $\theta = 0$ and $\theta = \pi/2$. Also, $dx = \cos\theta\, d\theta$. Therefore,

$$P(C \subset D) = \int_0^{\pi/2} \sqrt{1 - \sin^2\theta}\cdot 2\sin\theta\cos\theta\, d\theta = \int_0^{\pi/2} 2\sin\theta\cos^2\theta\, d\theta = \left[-\frac{2\cos^3\theta}{3}\right]_0^{\pi/2} = \frac{2}{3},$$
confirming the answer I had guessed from simulations.
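Since I mentioned simulations, here is a minimal Monte Carlo sketch in Python (my own illustration; the function names and the number of trials are arbitrary choices) that estimates this probability directly from the geometric condition $|OM| + \frac{1}{2}|PQ| \le 1$, where $M$ is the midpoint of $\overline{PQ}$:

```python
import math
import random

def estimate_probability(trials=1_000_000, seed=42):
    """Monte Carlo estimate: choose P and Q uniformly in the unit disk and test
    whether the circle with diameter PQ lies inside the unit circle."""
    rng = random.Random(seed)

    def random_point_in_disk():
        # Rejection sampling: uniform in the square [-1, 1]^2 until the point lands in the disk.
        while True:
            x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if x * x + y * y <= 1:
                return x, y

    successes = 0
    for _ in range(trials):
        px, py = random_point_in_disk()
        qx, qy = random_point_in_disk()
        # The circle on diameter PQ has center M = midpoint of PQ and radius |PQ|/2;
        # it lies inside the unit circle exactly when |OM| + |PQ|/2 <= 1.
        mx, my = (px + qx) / 2, (py + qy) / 2
        radius = math.dist((px, py), (qx, qy)) / 2
        if math.hypot(mx, my) + radius <= 1:
            successes += 1
    return successes / trials

print(estimate_probability())  # typically prints a value close to 0.667, i.e., 2/3
```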
In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.
We previously showed that if the motion of a planet around the Sun is expressed in polar coordinates $(r, \theta)$, with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

$$\frac{d^2u}{d\theta^2} + u = \frac{1}{\alpha},$$
where $u = \dfrac{1}{r}$ and $\alpha$ is a certain constant. We will also impose the initial condition that the planet is at perihelion (i.e., is closest to the Sun), at a distance of $P$, when $\theta = 0$. This means that $u$ obtains its maximum value of $\dfrac{1}{P}$ when $\theta = 0$. This leads to the two initial conditions

$$u(0) = \frac{1}{P}, \qquad u'(0) = 0;$$

the second equation arises since $u$ has a local extremum at $\theta = 0$.
We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of variation of parameters. First, we solve the associated homogeneous differential equation
$$u'' + u = 0.$$
The characteristic equation of this differential equation is $m^2 + 1 = 0$, which clearly has the two imaginary roots $m = \pm i$. Therefore, two linearly independent solutions of the associated homogeneous equation are $u_1(\theta) = \cos\theta$ and $u_2(\theta) = \sin\theta$.
(As an aside, this is one answer to the common question, “What are complex numbers good for?” The answer is naturally above the heads of Algebra II students when they first encounter the mysterious number $i = \sqrt{-1}$, but complex numbers provide a way of solving the differential equations that model multiple problems in statics and dynamics.)
According to the method of variation of parameters, the general solution of the original nonhomogeneous differential equation

$$u'' + u = g(\theta)$$

is

$$u(\theta) = f_1(\theta)\, u_1(\theta) + f_2(\theta)\, u_2(\theta),$$
where

$$f_1(\theta) = -\int \frac{u_2(\theta)\, g(\theta)}{W(\theta)}\, d\theta,$$

$$f_2(\theta) = \int \frac{u_1(\theta)\, g(\theta)}{W(\theta)}\, d\theta,$$

and $W$ is the Wronskian of $u_1$ and $u_2$, defined by the determinant

$$W(\theta) = \begin{vmatrix} u_1(\theta) & u_2(\theta) \\ u_1'(\theta) & u_2'(\theta) \end{vmatrix} = u_1(\theta)\, u_2'(\theta) - u_2(\theta)\, u_1'(\theta).$$
Well, that’s a mouthful.
Fortunately, for the example at hand, these computations are pretty easy. First, since $u_1(\theta) = \cos\theta$ and $u_2(\theta) = \sin\theta$, we have

$$W(\theta) = \cos\theta \cdot \cos\theta - \sin\theta \cdot (-\sin\theta) = \cos^2\theta + \sin^2\theta = 1$$

from the usual Pythagorean trigonometric identity. Therefore, the denominators in the integrals for $f_1$ and $f_2$ essentially disappear.
Since $g(\theta) = \dfrac{1}{\alpha}$, the integrals for $f_1$ and $f_2$ are straightforward to compute:

$$f_1(\theta) = -\int \frac{\sin\theta}{\alpha}\, d\theta = \frac{\cos\theta}{\alpha} + a,$$

where we use $a$ for the constant of integration instead of the usual $C$. Second,

$$f_2(\theta) = \int \frac{\cos\theta}{\alpha}\, d\theta = \frac{\sin\theta}{\alpha} + b,$$
using $b$ for the constant of integration. Therefore, by variation of parameters, the general solution of the nonhomogeneous differential equation is

$$u(\theta) = \left(\frac{\cos\theta}{\alpha} + a\right)\cos\theta + \left(\frac{\sin\theta}{\alpha} + b\right)\sin\theta = \frac{\cos^2\theta + \sin^2\theta}{\alpha} + a\cos\theta + b\sin\theta = \frac{1}{\alpha} + a\cos\theta + b\sin\theta.$$
Unsurprisingly, this matches the answer in the previous post that was found by the method of undetermined coefficients.
For the sake of completeness, I repeat the argument used in the previous two posts to determine $a$ and $b$. This requires using the initial conditions $u(0) = \dfrac{1}{P}$ and $u'(0) = 0$. From the first initial condition,

$$u(0) = \frac{1}{\alpha} + a\cos 0 + b\sin 0 = \frac{1}{\alpha} + a = \frac{1}{P},$$

so that $a = \dfrac{1}{P} - \dfrac{1}{\alpha}$.
From the second initial condition,

$$u'(0) = -a\sin 0 + b\cos 0 = b = 0.$$
From these two constants, we obtain

$$u(\theta) = \frac{1}{\alpha} + \left(\frac{1}{P} - \frac{1}{\alpha}\right)\cos\theta = \frac{1 + \epsilon\cos\theta}{\alpha},$$

where $\epsilon = \dfrac{\alpha}{P} - 1$.
Finally, since $r = \dfrac{1}{u}$, we see that the planet’s orbit satisfies

$$r = \frac{\alpha}{1 + \epsilon\cos\theta},$$

so that, as shown earlier in this series, the orbit is an ellipse with eccentricity $\epsilon$.
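For readers who like to double-check the symbolic work with software, here is a short SymPy sketch (my own illustration, not part of the original class project) that solves the same initial value problem; the symbols alpha and P play the roles of $\alpha$ and $P$ above:

```python
import sympy as sp

theta = sp.symbols('theta')
alpha, P = sp.symbols('alpha P', positive=True)
u = sp.Function('u')

# Newtonian orbit equation u'' + u = 1/alpha with the perihelion conditions u(0) = 1/P, u'(0) = 0.
ode = sp.Eq(u(theta).diff(theta, 2) + u(theta), 1/alpha)
sol = sp.dsolve(ode, u(theta),
                ics={u(0): 1/P, u(theta).diff(theta).subs(theta, 0): 0})

u_expr = sp.simplify(sol.rhs)
print(u_expr)                 # should be equivalent to 1/alpha + (1/P - 1/alpha)*cos(theta)
print(sp.simplify(1/u_expr))  # r(theta) = 1/u(theta), i.e., alpha/(1 + eps*cos(theta)) with eps = alpha/P - 1
```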
I end this series about numerical integration by returning to the most common (if hidden) application of numerical integration in the secondary mathematics curriculum: finding the area under the normal curve. This is a critically important tool for problems in both probability and statistics; however, the antiderivative of $\dfrac{1}{\sqrt{2\pi}} e^{-x^2/2}$ cannot be expressed using finitely many elementary functions. Therefore, we must resort to numerical methods instead.
In days of old, of course, students relied on tables in the back of the textbook to find areas under the bell curve, and I suppose that such tables are still being printed. For students with access to modern scientific calculators, of course, there’s no need for tables because this is a built-in function on many calculators. For the line of TI calculators, the command is normalcdf.
Unfortunately, it’s a sad (but not well-known) fact of life that the TI-83 and TI-84 calculators are not terribly accurate at computing these areas. For example:
TI-84:
Correct answer, with Mathematica:
TI-84:
Correct answer, with Mathematica:
TI-84:
Correct answer, with Mathematica:
TI-84:
Correct answer, with Mathematica:
TI-84:
Correct answer, with Mathematica:
TI-84:
Correct answer, with Mathematica:
I don’t presume to know the proprietary algorithm used to implement normalcdf on TI-83 and TI-84 calculators. My honest if brutal assessment is that it’s probably not worth knowing: in the best case (when the endpoints are close to 0), the calculator provides an answer that is accurate to only 7 significant digits while presenting the illusion of a higher degree of accuracy. I can say that Simpson’s Rule with only a modest number of subintervals provides a better approximation to these areas than the normalcdf function.
For what it’s worth, I also looked at the accuracy of the NORMSDIST function in Microsoft Excel. This is much better, almost always producing answers that are accurate to 11 or 12 significant digits, which is all that can be realistically expected in floating-point double-precision arithmetic (in which numbers are usually stored accurate to 13 significant digits prior to any computations).
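For teachers who want an independent reference value without Mathematica, the standard normal area between two endpoints can be computed from the error function in Python’s standard library. Here is a minimal sketch (the endpoints in the example are placeholders, not the ones tested above):

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function, Phi(z) = (1 + erf(z/sqrt(2)))/2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_area(a: float, b: float) -> float:
    """Area under the standard normal curve between a and b (what normalcdf(a, b) should return)."""
    return normal_cdf(b) - normal_cdf(a)

# Placeholder example: area between z = 0 and z = 1.
print(normal_area(0.0, 1.0))  # about 0.3413447460685429
# (For areas far out in the tails, a formulation based on math.erfc avoids catastrophic cancellation.)
```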
Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
Why is numerical integration necessary in the first place?
Where do these formulas come from (especially Simpson’s Rule)?
How can I do all of these formulas quickly?
Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In this series, we have shown the following approximations of the errors that arise when using various numerical approximations for $\int_a^b x^k\,dx$. We obtained these approximations using only techniques within the reach of a talented high school student who has mastered Precalculus — especially the Binomial Theorem — and elementary techniques of integration.
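In the notation used throughout these posts ($n$ subintervals of width $h = \frac{b-a}{n}$, and $x^*$ some point in $[a,b]$ supplied by the Intermediate Value Theorem), the headline estimates take roughly the following form; the exact constants, and the sign convention for the error, are derived in the posts below:

$$\text{Midpoint Rule:}\quad E \approx \frac{k(k-1)}{24}(x^*)^{k-2}(b-a)h^2,$$

$$\text{Trapezoid Rule:}\quad E \approx -\frac{k(k-1)}{12}(x^*)^{k-2}(b-a)h^2,$$

$$\text{Simpson's Rule:}\quad E \approx -\frac{k(k-1)(k-2)(k-3)}{180}(x^*)^{k-4}(b-a)h^4.$$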
As we now present, the formulas that we derived are (of course) easily connected to known theorems for the convergence of these techniques. Those proofs, however, require some fairly advanced techniques from calculus. So, while the formulas derived in this series of posts only apply to $f(x) = x^k$ (and, by an easy extension, to any polynomial), the formulas that we do obtain easily foreshadow the actual formulas found on Wikipedia, on Mathworld, or in calculus textbooks, thus (hopefully) taking some of the mystery out of these formulas.
Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
Why is numerical integration necessary in the first place?
Where do these formulas come from (especially Simpson’s Rule)?
How can I do all of these formulas quickly?
Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In the previous post in this series, we showed that the Simpson’s Rule approximation of $\int_{x_{2i}}^{x_{2i+2}} x^k\,dx$ (the integral over a single pair of subintervals) has an error of

$$E_i \approx -\frac{k(k-1)(k-2)(k-3)}{90} x_{2i}^{k-4} h^5.$$
In this post, we consider the global error when integrating on the interval $[a,b]$ instead of a single pair of subintervals $[x_{2i}, x_{2i+2}]$.
The total error when approximating $\int_a^b x^k\,dx$ will be the sum of the errors for the integrals over $[x_0,x_2]$, $[x_2,x_4]$, through $[x_{2n-2},x_{2n}]$. Therefore, the total error will be

$$E \approx -\frac{k(k-1)(k-2)(k-3)}{90}\left(x_0^{k-4} + x_2^{k-4} + \cdots + x_{2n-2}^{k-4}\right)h^5.$$
So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier. The figure below shows the Simpson’s Rule approximations to $\int_a^b x^k\,dx$ for different numbers of subintervals. If we plug the values of $a$, $b$, $k$, and $n$ from that example into this formula, the predicted error is, as expected, close to the observed error.
Let

$$\overline{x^{k-4}} = \frac{x_0^{k-4} + x_2^{k-4} + \cdots + x_{2n-2}^{k-4}}{n},$$

so that the error becomes

$$E \approx -\frac{k(k-1)(k-2)(k-3)}{90}\, n\, \overline{x^{k-4}}\, h^5,$$

where $\overline{x^{k-4}}$ is the average of the $x_{2i}^{k-4}$. (We notice that there are only $n$ terms in this sum since we’re adding only the even-indexed points.) Clearly, this average is somewhere between the smallest and the largest of the $x_{2i}^{k-4}$. Since $x^{k-4}$ is a continuous function, that means that there must be some value of $x^*$ between $x_0$ and $x_{2n-2}$ — and therefore between $a$ and $b$ — so that $(x^*)^{k-4} = \overline{x^{k-4}}$ by the Intermediate Value Theorem. We conclude that the error can be written as

$$E \approx -\frac{k(k-1)(k-2)(k-3)}{90}(x^*)^{k-4}\, n h^5.$$
Finally, since $h$ is the length of one subinterval, we see that $2nh = b-a$ is the total length of the interval $[a,b]$, so that $nh^5 = \frac{(b-a)h^4}{2}$. Therefore,

$$E \approx -\frac{k(k-1)(k-2)(k-3)}{180}(x^*)^{k-4}(b-a)h^4,$$
where the constant in front is determined by $k$, $a$, and $b$. In other words, for the special case $f(x) = x^k$, we have established that the error from Simpson’s Rule is approximately quartic in $h$ — without resorting to the generalized mean-value theorem and confirming the numerical observations we made earlier.
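As a quick numerical illustration of this quartic behavior, here is a short Python sketch (my own example, with the arbitrary choices $k = 5$, $a = 1$, $b = 2$); each doubling of the number of subintervals should divide the Simpson’s Rule error by roughly $2^4 = 16$:

```python
def simpson(f, a, b, n):
    """Composite Simpson's Rule with n subintervals (n must be even)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return total * h / 3

# Arbitrary test case: f(x) = x^5 on [1, 2], whose exact integral is (2^6 - 1^6)/6.
k, a, b = 5, 1.0, 2.0
exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)

previous = None
for n in (4, 8, 16, 32, 64):
    error = exact - simpson(lambda x: x ** k, a, b, n)
    ratio = previous / error if previous else float('nan')
    print(f"n = {n:3d}   error = {error: .3e}   ratio vs previous = {ratio:6.2f}")
    previous = error
# The ratios settle near 16, i.e., the global error behaves like h^4.
```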
Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
Why is numerical integration necessary in the first place?
Where do these formulas come from (especially Simpson’s Rule)?
How can I do all of these formulas quickly?
Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In this post, we will perform an error analysis for Simpson’s Rule

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\right],$$

where $n$ is the number of subintervals (which has to be even) and $h$ is the width of each subinterval, so that $h = \dfrac{b-a}{n}$.
As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum (especially the Binomial Theorem), if we restrict our attention to the special case $f(x) = x^k$, where $k$ is a positive integer.
For this special case, the true area under the curve $y = x^k$ on the subinterval $[x_i, x_i + h]$ will be

$$\int_{x_i}^{x_i+h} x^k\,dx = \frac{(x_i+h)^{k+1} - x_i^{k+1}}{k+1} = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{6}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{24}x_i^{k-3}h^4 + \frac{k(k-1)(k-2)(k-3)}{120}x_i^{k-4}h^5 + O(h^6).$$

In the above, the shorthand $O(h^6)$ can be formally defined, but here we’ll just take it to mean “terms that have a factor of $h^6$ or higher that we’re too lazy to write out.” Since $h$ is supposed to be a small number, these terms will be small in magnitude and thus can be safely ignored.
Earlier in this series, we derived the very convenient relationship $S = \dfrac{2M+T}{3}$ relating the approximations from Simpson’s Rule, the Midpoint Rule, and the Trapezoid Rule. We now exploit this relationship to approximate $\int_{x_i}^{x_i+h} x^k\,dx$. Earlier in this series, we found the Midpoint Rule approximation on this subinterval to be

$$M = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{8}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{48}x_i^{k-3}h^4 + \frac{k(k-1)(k-2)(k-3)}{384}x_i^{k-4}h^5 + O(h^6),$$

while we found the Trapezoid Rule approximation to be

$$T = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{4}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{12}x_i^{k-3}h^4 + \frac{k(k-1)(k-2)(k-3)}{48}x_i^{k-4}h^5 + O(h^6).$$
Therefore, the Simpson’s Rule approximation of $\int_{x_i}^{x_i+h} x^k\,dx$ that uses two subintervals of width $\frac{h}{2}$ — that is, the area under the parabola that passes through $\left(x_i, x_i^k\right)$, $\left(x_i + \frac{h}{2}, \left(x_i + \frac{h}{2}\right)^k\right)$, and $\left(x_i + h, (x_i+h)^k\right)$ — will be $S = \dfrac{2M+T}{3}$. Since

$$\frac{2}{3}\cdot\frac{k(k-1)}{8} + \frac{1}{3}\cdot\frac{k(k-1)}{4} = \frac{k(k-1)}{6},$$

$$\frac{2}{3}\cdot\frac{k(k-1)(k-2)}{48} + \frac{1}{3}\cdot\frac{k(k-1)(k-2)}{12} = \frac{k(k-1)(k-2)}{24},$$

and

$$\frac{2}{3}\cdot\frac{k(k-1)(k-2)(k-3)}{384} + \frac{1}{3}\cdot\frac{k(k-1)(k-2)(k-3)}{48} = \frac{5k(k-1)(k-2)(k-3)}{576},$$

we see that

$$S = \frac{2M+T}{3} = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{6}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{24}x_i^{k-3}h^4 + \frac{5k(k-1)(k-2)(k-3)}{576}x_i^{k-4}h^5 + O(h^6).$$
We notice that something wonderful just happened: the first four terms of $S$ perfectly match the first four terms of the exact value of the integral! Subtracting from the actual integral, the error in this approximation will be equal to

$$E = \frac{k(k-1)(k-2)(k-3)}{120}x_i^{k-4}h^5 - \frac{5k(k-1)(k-2)(k-3)}{576}x_i^{k-4}h^5 + O(h^6) = -\frac{k(k-1)(k-2)(k-3)}{2880}x_i^{k-4}h^5 + O(h^6).$$
Before moving on, there’s one minor bookkeeping issue to deal with. We note that this is the error for the Simpson’s Rule approximation of $\int_{x_i}^{x_i+h} x^k\,dx$, where 2 subintervals are used. However, the value of $h$ in this equation arose from $M$ and $T$, where only 1 subinterval of width $h$ is used. So let’s write the error for a pair of subintervals, each of width $h$, as

$$E \approx -\frac{k(k-1)(k-2)(k-3)}{2880}x_i^{k-4}(2h)^5 = -\frac{k(k-1)(k-2)(k-3)}{90}x_i^{k-4}h^5,$$

where $h$ is now the common width of all of the subintervals. By analogy, we see that the error for the pair of subintervals beginning at $x_{2i}$ will be

$$E_i \approx -\frac{k(k-1)(k-2)(k-3)}{90}x_{2i}^{k-4}h^5.$$
But even after adjusting for this constant, we see that this local error behaves like $O(h^5)$, a vast improvement over both the Midpoint Rule and the Trapezoid Rule, whose local errors are only $O(h^3)$. This illustrates a general principle of numerical analysis: given two algorithms whose errors are $O(h^n)$, an improved algorithm can typically be made by taking some linear combination of the two algorithms. Usually, the improvement will be to $O(h^{n+1})$; however, in this example, we magically obtained an improvement to $O(h^{n+2})$.
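Here is a small Python sketch (with the arbitrary choices $k = 6$ and $x_i = 1$) that checks two things on a single pair of subintervals: the identity $S = \frac{2M+T}{3}$, and the $O(h^5)$ behavior of the local error, so that halving $h$ should divide the error by roughly $2^5 = 32$:

```python
def simpson_pair_error(k, x_i, h):
    """On the pair of subintervals [x_i, x_i + 2h], compare the exact integral of x^k
    with the Simpson's Rule approximation built from the Midpoint and Trapezoid Rules."""
    f = lambda x: x ** k
    exact = ((x_i + 2 * h) ** (k + 1) - x_i ** (k + 1)) / (k + 1)
    M = 2 * h * f(x_i + h)                    # Midpoint Rule on the pair
    T = h * (f(x_i) + f(x_i + 2 * h))         # Trapezoid Rule on the pair
    S = (2 * M + T) / 3                       # Simpson's Rule as the weighted average
    # Check the identity against the usual 1-4-1 form of Simpson's Rule.
    assert abs(S - (h / 3) * (f(x_i) + 4 * f(x_i + h) + f(x_i + 2 * h))) < 1e-12
    return exact - S

k, x_i = 6, 1.0
previous = None
for h in (0.1, 0.05, 0.025, 0.0125):
    err = simpson_pair_error(k, x_i, h)
    ratio = previous / err if previous else float('nan')
    print(f"h = {h:7.4f}   local error = {err: .3e}   ratio vs previous = {ratio:6.2f}")
    previous = err
# Each halving of h divides the local error by roughly 32, consistent with O(h^5).
```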
Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
Why is numerical integration necessary in the first place?
Where do these formulas come from (especially Simpson’s Rule)?
How can I do all of these formulas quickly?
Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In the previous post, we showed that the Trapezoid Rule approximation of $\int_{x_i}^{x_i+h} x^k\,dx$ has error

$$E_i \approx -\frac{k(k-1)}{12} x_i^{k-2} h^3.$$
In this post, we consider the global error when integrating on the interval $[a,b]$ instead of a single subinterval $[x_i, x_i+h]$. The logic is almost a perfect copy-and-paste from the analysis used for the Midpoint Rule.
The total error when approximating $\int_a^b x^k\,dx$ will be the sum of the errors for the integrals over $[x_0,x_1]$, $[x_1,x_2]$, through $[x_{n-1},x_n]$. Therefore, the total error will be

$$E \approx -\frac{k(k-1)}{12}\left(x_0^{k-2} + x_1^{k-2} + \cdots + x_{n-1}^{k-2}\right)h^3.$$
So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier. The figure below shows the Trapezoid Rule approximations to $\int_a^b x^k\,dx$ for different numbers of subintervals. If we plug the values of $a$, $b$, $k$, and $n$ from that example into this formula, the predicted error is, as expected, close to the actual error.
Let

$$\overline{x^{k-2}} = \frac{x_0^{k-2} + x_1^{k-2} + \cdots + x_{n-1}^{k-2}}{n},$$

so that the error becomes

$$E \approx -\frac{k(k-1)}{12}\, n\, \overline{x^{k-2}}\, h^3,$$

where $\overline{x^{k-2}}$ is the average of the $x_i^{k-2}$. Clearly, this average is somewhere between the smallest and the largest of the $x_i^{k-2}$. Since $x^{k-2}$ is a continuous function, that means that there must be some value of $x^*$ between $x_0$ and $x_{n-1}$ — and therefore between $a$ and $b$ — so that $(x^*)^{k-2} = \overline{x^{k-2}}$ by the Intermediate Value Theorem. We conclude that the error can be written as

$$E \approx -\frac{k(k-1)}{12}(x^*)^{k-2}\, n h^3.$$
Finally, since $h$ is the length of one subinterval, we see that $nh = b-a$ is the total length of the interval $[a,b]$, so that $nh^3 = (b-a)h^2$. Therefore,

$$E \approx -\frac{k(k-1)}{12}(x^*)^{k-2}(b-a)h^2,$$
where the constant in front is determined by $k$, $a$, and $b$. In other words, for the special case $f(x) = x^k$, we have established that the error from the Trapezoid Rule is approximately quadratic in $h$ — without resorting to the generalized mean-value theorem and confirming the numerical observations we made earlier.
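As a numerical illustration (my own example, with the arbitrary choices $k = 4$, $a = 1$, $b = 2$, $n = 100$, and with the midpoint of $[a,b]$ standing in for the unknown $x^*$), here is a short Python sketch comparing the observed Trapezoid Rule error with the estimate just derived:

```python
def trapezoid(f, a, b, n):
    """Composite Trapezoid Rule with n subintervals of width h = (b - a)/n."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Arbitrary test case: f(x) = x^4 on [1, 2].
k, a, b, n = 4, 1.0, 2.0, 100
h = (b - a) / n
exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)

observed = exact - trapezoid(lambda x: x ** k, a, b, n)
# Estimate from this post, with x* guessed as the midpoint of [a, b]:
x_star = (a + b) / 2
predicted = -k * (k - 1) / 12 * x_star ** (k - 2) * (b - a) * h ** 2
print(f"observed error  = {observed: .6e}")
print(f"predicted error = {predicted: .6e}")
# The two agree in sign and size (the guess x* = (a+b)/2 is only approximate).
```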
Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
Why is numerical integration necessary in the first place?
Where do these formulas come from (especially Simpson’s Rule)?
How can I do all of these formulas quickly?
Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In this post, we will perform an error analysis for the Trapezoid Rule

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{n-1}) + f(x_n)\right],$$

where $n$ is the number of subintervals and $h$ is the width of each subinterval, so that $h = \dfrac{b-a}{n}$.
As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum (especially the Binomial Theorem), if we restrict our attention to the special case $f(x) = x^k$, where $k$ is a positive integer.
For this special case, the true area under the curve $y = x^k$ on the subinterval $[x_i, x_i + h]$ will be

$$\int_{x_i}^{x_i+h} x^k\,dx = \frac{(x_i+h)^{k+1} - x_i^{k+1}}{k+1} = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{6}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{24}x_i^{k-3}h^4 + \frac{k(k-1)(k-2)(k-3)}{120}x_i^{k-4}h^5 + O(h^6).$$

In the above, the shorthand $O(h^6)$ can be formally defined, but here we’ll just take it to mean “terms that have a factor of $h^6$ or higher that we’re too lazy to write out.” Since $h$ is supposed to be a small number, these terms will be small in magnitude and thus can be safely ignored.
I wrote the above formula to include terms up to and including $h^5$ because I’ll need this later in this series of posts. For now, looking only at the Trapezoid Rule, it will suffice to write this integral as

$$\int_{x_i}^{x_i+h} x^k\,dx = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{6}x_i^{k-2}h^3 + O(h^4).$$
Using the Trapezoid Rule, we approximate $\int_{x_i}^{x_i+h} x^k\,dx$ as $\dfrac{h}{2}\left[x_i^k + (x_i+h)^k\right]$, using the width $h$ and the bases $x_i^k$ and $(x_i+h)^k$ of the trapezoid. Using the Binomial Theorem, this expands as

$$T = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{4}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{12}x_i^{k-3}h^4 + \frac{k(k-1)(k-2)(k-3)}{48}x_i^{k-4}h^5 + O(h^6).$$
Once again, this is a little bit overkill for the present purposes, but we’ll need this formula later in this series of posts. Truncating somewhat earlier, we find that the Trapezoid Rule for this subinterval gives

$$T = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{4}x_i^{k-2}h^3 + O(h^4).$$

Subtracting from the actual integral, the error in this approximation will be equal to

$$E = \frac{k(k-1)}{6}x_i^{k-2}h^3 - \frac{k(k-1)}{4}x_i^{k-2}h^3 + O(h^4) = -\frac{k(k-1)}{12}x_i^{k-2}h^3 + O(h^4).$$
In other words, like the Midpoint Rule, both of the first two terms $x_i^k h$ and $\dfrac{k}{2}x_i^{k-1}h^2$ cancel perfectly, leaving us with a local error on the order of $h^3$.
We also recall, from the previous post in this series, that the local error from the Midpoint Rule was $\dfrac{k(k-1)}{24}x_i^{k-2}h^3 + O(h^4)$. In other words, while both the Midpoint Rule and Trapezoid Rule have local errors on the order of $h^3$, we expect the error in the Midpoint Rule to be about half of the error from the Trapezoid Rule (in absolute value).
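This factor of two is easy to see numerically. Here is a quick Python sketch (with the arbitrary choices $k = 5$, $x_i = 1$, $h = 0.01$) comparing the local errors of the two rules on a single subinterval:

```python
def midpoint_local_error(k, x_i, h):
    """Error of the Midpoint Rule on the single subinterval [x_i, x_i + h] for f(x) = x^k."""
    exact = ((x_i + h) ** (k + 1) - x_i ** (k + 1)) / (k + 1)
    return exact - h * (x_i + h / 2) ** k

def trapezoid_local_error(k, x_i, h):
    """Error of the Trapezoid Rule on the single subinterval [x_i, x_i + h] for f(x) = x^k."""
    exact = ((x_i + h) ** (k + 1) - x_i ** (k + 1)) / (k + 1)
    return exact - (h / 2) * (x_i ** k + (x_i + h) ** k)

k, x_i, h = 5, 1.0, 0.01
mid = midpoint_local_error(k, x_i, h)
trap = trapezoid_local_error(k, x_i, h)
print(f"Midpoint Rule  local error = {mid: .3e}")
print(f"Trapezoid Rule local error = {trap: .3e}")
print(f"|trapezoid| / |midpoint|   = {abs(trap) / abs(mid):.3f}")  # close to 2 (and the signs are opposite)
```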
Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
Why is numerical integration necessary in the first place?
Where do these formulas come from (especially Simpson’s Rule)?
How can I do all of these formulas quickly?
Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In the previous post, we showed that the Midpoint Rule approximation of $\int_{x_i}^{x_i+h} x^k\,dx$ has error

$$E_i \approx \frac{k(k-1)}{24} x_i^{k-2} h^3.$$
In this post, we consider the global error when integrating on the interval $[a,b]$ instead of a single subinterval $[x_i, x_i+h]$. The logic for determining the global error is much the same as what we used earlier for the left-endpoint rule.
The total error when approximating $\int_a^b x^k\,dx$ will be the sum of the errors for the integrals over $[x_0,x_1]$, $[x_1,x_2]$, through $[x_{n-1},x_n]$. Therefore, the total error will be

$$E \approx \frac{k(k-1)}{24}\left(x_0^{k-2} + x_1^{k-2} + \cdots + x_{n-1}^{k-2}\right)h^3.$$
So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier. The figure below shows the Midpoint Rule approximations to $\int_a^b x^k\,dx$ for different numbers of subintervals. If we plug the values of $a$, $b$, $k$, and $n$ from that example into this formula, the predicted error is, as expected, close to the actual error.
Let

$$\overline{x^{k-2}} = \frac{x_0^{k-2} + x_1^{k-2} + \cdots + x_{n-1}^{k-2}}{n},$$

so that the error becomes

$$E \approx \frac{k(k-1)}{24}\, n\, \overline{x^{k-2}}\, h^3,$$

where $\overline{x^{k-2}}$ is the average of the $x_i^{k-2}$. Clearly, this average is somewhere between the smallest and the largest of the $x_i^{k-2}$. Since $x^{k-2}$ is a continuous function, that means that there must be some value of $x^*$ between $x_0$ and $x_{n-1}$ — and therefore between $a$ and $b$ — so that $(x^*)^{k-2} = \overline{x^{k-2}}$ by the Intermediate Value Theorem. We conclude that the error can be written as

$$E \approx \frac{k(k-1)}{24}(x^*)^{k-2}\, n h^3.$$
Finally, since $h$ is the length of one subinterval, we see that $nh = b-a$ is the total length of the interval $[a,b]$, so that $nh^3 = (b-a)h^2$. Therefore,

$$E \approx \frac{k(k-1)}{24}(x^*)^{k-2}(b-a)h^2,$$
where the constant in front is determined by $k$, $a$, and $b$. In other words, for the special case $f(x) = x^k$, we have established that the error from the Midpoint Rule is approximately quadratic in $h$ — without resorting to the generalized mean-value theorem and confirming the numerical observations we made earlier.
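As a quick numerical illustration of this quadratic behavior, here is a short Python sketch (my own example, with the arbitrary choices $k = 3$, $a = 0$, $b = 2$); each doubling of the number of subintervals should divide the Midpoint Rule error by roughly $2^2 = 4$:

```python
def midpoint_rule(f, a, b, n):
    """Composite Midpoint Rule with n subintervals of width h = (b - a)/n."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Arbitrary test case: f(x) = x^3 on [0, 2], whose exact integral is 2^4/4 = 4.
k, a, b = 3, 0.0, 2.0
exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)

previous = None
for n in (10, 20, 40, 80):
    err = exact - midpoint_rule(lambda x: x ** k, a, b, n)
    ratio = previous / err if previous else float('nan')
    print(f"n = {n:3d}   error = {err: .3e}   ratio vs previous = {ratio:5.2f}")
    previous = err
# The ratios settle near 4, i.e., the global error behaves like h^2.
```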
Numerical integration is a standard topic in first-semester calculus. From time to time, I have received questions from students on various aspects of this topic, including:
Why is numerical integration necessary in the first place?
Where do these formulas come from (especially Simpson’s Rule)?
How can I do all of these formulas quickly?
Is there a reason why the Midpoint Rule is better than the Trapezoid Rule?
Is there a reason why both the Midpoint Rule and the Trapezoid Rule converge quadratically?
Is there a reason why Simpson’s Rule converges like the fourth power of the number of subintervals?
In this series, I hope to answer these questions. While these are standard questions in an introductory college course in numerical analysis, and full and rigorous proofs can be found on Wikipedia and Mathworld, I will approach these questions from the point of view of a bright student who is currently enrolled in calculus and hasn’t yet taken real analysis or numerical analysis.
In this post, we will perform an error analysis for the Midpoint Rule

$$\int_a^b f(x)\,dx \approx h\left[f(m_1) + f(m_2) + \cdots + f(m_n)\right],$$

where $n$ is the number of subintervals and $h$ is the width of each subinterval, so that $h = \dfrac{b-a}{n}$. Also, $m_i = \dfrac{x_{i-1}+x_i}{2}$ is the midpoint of the $i$th subinterval.
As noted above, a true exploration of error analysis requires the generalized mean-value theorem, which is perhaps a bit much for a talented high school student learning about this technique for the first time. That said, the ideas behind the proof are accessible to high school students, using only ideas from the secondary curriculum (especially the Binomial Theorem), if we restrict our attention to the special case $f(x) = x^k$, where $k$ is a positive integer.
For this special case, the true area under the curve $y = x^k$ on the subinterval $[x_i, x_i + h]$ will be

$$\int_{x_i}^{x_i+h} x^k\,dx = \frac{(x_i+h)^{k+1} - x_i^{k+1}}{k+1} = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{6}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{24}x_i^{k-3}h^4 + \frac{k(k-1)(k-2)(k-3)}{120}x_i^{k-4}h^5 + O(h^6).$$

In the above, the shorthand $O(h^6)$ can be formally defined, but here we’ll just take it to mean “terms that have a factor of $h^6$ or higher that we’re too lazy to write out.” Since $h$ is supposed to be a small number, these terms will be small in magnitude and thus can be safely ignored.
I wrote the above formula to include terms up to and including $h^5$ because I’ll need this later in this series of posts. For now, looking only at the Midpoint Rule, it will suffice to write this integral as

$$\int_{x_i}^{x_i+h} x^k\,dx = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{6}x_i^{k-2}h^3 + O(h^4).$$
Using the midpoint of the subinterval, the Midpoint Rule approximation of $\int_{x_i}^{x_i+h} x^k\,dx$ is $h\left(x_i + \dfrac{h}{2}\right)^k$. Using the Binomial Theorem, this expands as

$$M = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{8}x_i^{k-2}h^3 + \frac{k(k-1)(k-2)}{48}x_i^{k-3}h^4 + \frac{k(k-1)(k-2)(k-3)}{384}x_i^{k-4}h^5 + O(h^6).$$
Once again, this is a little bit overkill for the present purposes, but we’ll need this formula later in this series of posts. Truncating somewhat earlier, we find that the Midpoint Rule for this subinterval gives

$$M = x_i^k h + \frac{k}{2}x_i^{k-1}h^2 + \frac{k(k-1)}{8}x_i^{k-2}h^3 + O(h^4).$$

Subtracting from the actual integral, the error in this approximation will be equal to

$$E = \frac{k(k-1)}{6}x_i^{k-2}h^3 - \frac{k(k-1)}{8}x_i^{k-2}h^3 + O(h^4) = \frac{k(k-1)}{24}x_i^{k-2}h^3 + O(h^4).$$
In other words, unlike the left-endpoint and right-endpoint approximations, both of the first two terms $x_i^k h$ and $\dfrac{k}{2}x_i^{k-1}h^2$ cancel perfectly, leaving us with a local error on the order of $h^3$.
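Here is a one-subinterval Python sketch (with the arbitrary choices $k = 4$ and $x_i = 2$) checking that the local Midpoint Rule error really does track $\frac{k(k-1)}{24}x_i^{k-2}h^3$ as $h$ shrinks:

```python
def midpoint_local_error(k, x_i, h):
    """Exact integral of x^k on [x_i, x_i + h] minus the Midpoint Rule approximation."""
    exact = ((x_i + h) ** (k + 1) - x_i ** (k + 1)) / (k + 1)
    return exact - h * (x_i + h / 2) ** k

k, x_i = 4, 2.0
for h in (0.1, 0.05, 0.025):
    observed = midpoint_local_error(k, x_i, h)
    predicted = k * (k - 1) / 24 * x_i ** (k - 2) * h ** 3
    print(f"h = {h:6.3f}   observed = {observed: .4e}   predicted k(k-1)/24 * x_i^(k-2) * h^3 = {predicted: .4e}")
# The agreement improves as h shrinks, since the neglected terms are O(h^4).
```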
The logic for determining the global error is much the same as what we used earlier for the left-endpoint rule.
The total error when approximating $\int_a^b x^k\,dx$ will be the sum of the errors for the integrals over $[x_0,x_1]$, $[x_1,x_2]$, through $[x_{n-1},x_n]$. Therefore, the total error will be

$$E \approx \frac{k(k-1)}{24}\left(x_0^{k-2} + x_1^{k-2} + \cdots + x_{n-1}^{k-2}\right)h^3.$$
So that this formula doesn’t appear completely mystical, this actually matches the numerical observations that we made earlier. The figure below shows the Midpoint Rule approximations to $\int_a^b x^k\,dx$ for different numbers of subintervals. If we plug the values of $a$, $b$, $k$, and $n$ from that example into this formula, the predicted error is, as expected, close to the actual error.
Let

$$\overline{x^{k-2}} = \frac{x_0^{k-2} + x_1^{k-2} + \cdots + x_{n-1}^{k-2}}{n},$$

so that the error becomes

$$E \approx \frac{k(k-1)}{24}\, n\, \overline{x^{k-2}}\, h^3,$$

where $\overline{x^{k-2}}$ is the average of the $x_i^{k-2}$. Clearly, this average is somewhere between the smallest and the largest of the $x_i^{k-2}$. Since $x^{k-2}$ is a continuous function, that means that there must be some value of $x^*$ between $x_0$ and $x_{n-1}$ — and therefore between $a$ and $b$ — so that $(x^*)^{k-2} = \overline{x^{k-2}}$ by the Intermediate Value Theorem. We conclude that the error can be written as

$$E \approx \frac{k(k-1)}{24}(x^*)^{k-2}\, n h^3.$$
Finally, since $h$ is the length of one subinterval, we see that $nh = b-a$ is the total length of the interval $[a,b]$, so that $nh^3 = (b-a)h^2$. Therefore,

$$E \approx \frac{k(k-1)}{24}(x^*)^{k-2}(b-a)h^2,$$
where the constant in front is determined by $k$, $a$, and $b$. In other words, for the special case $f(x) = x^k$, we have established that the error from the Midpoint Rule is approximately quadratic in $h$ — without resorting to the generalized mean-value theorem and confirming the numerical observations we made earlier.