Confirming Einstein’s Theory of General Relativity With Calculus, Part 6g: Rationale for Method of Undetermined Coefficients IV

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

In this post, we will use the guesses

u(\theta) = f(\theta) \cos \theta \qquad \hbox{or} \qquad u(\theta) = f(\theta) \sin \theta

that arose from the technique/trick of reduction of order, where f(\theta) is some unknown function, to find the general solution of the differential equation

u^{(4)} + 2u'' + u = 0.

To do this, we will need to use the Product Rule for higher-order derivatives that was derived in the previous post:

(fg)'' = f'' g + 2 f' g' + f g''

and

(fg)^{(4)} = f^{(4)} g + 4 f''' g' + 6 f'' g'' + 4f' g''' + f g^{(4)}.

In these formulas, Pascal’s triangle makes a somewhat surprising appearance; indeed, this pattern can be proven with mathematical induction.

We begin with u(\theta) = f(\theta) \cos \theta. If g(\theta) = \cos \theta, then

g'(\theta) = - \sin \theta,

g''(\theta) = -\cos \theta,

g'''(\theta) = \sin \theta,

g^{(4)}(\theta) = \cos \theta.

Substituting into the fourth-order differential equation, we find the differential equation becomes

(f \cos \theta)^{(4)} + 2 (f \cos \theta)'' + f \cos \theta = 0

f^{(4)} \cos \theta - 4 f''' \sin \theta - 6 f'' \cos \theta + 4 f' \sin \theta + f \cos \theta + 2 f'' \cos \theta - 4 f' \sin \theta - 2 f \cos \theta + f \cos \theta = 0

f^{(4)} \cos \theta - 4 f''' \sin \theta - 6 f'' \cos \theta  + 2 f'' \cos \theta  = 0

f^{(4)} \cos \theta - 4 f''' \sin \theta - 4 f'' \cos \theta = 0

The important observation is that the terms containing f and f' cancelled each other. This new differential equation doesn’t look like much of an improvement over the original fourth-order differential equation, but we can make a key observation: if f'' = 0, then differentiating twice more trivially yields f''' = 0 and f^{(4)} = 0. Said another way: if f'' = 0, then u(\theta) = f(\theta) \cos \theta will be a solution of the original differential equation.
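
(As an aside, this cancellation is easy to double-check with a computer algebra system. The short SymPy sketch below is purely an optional illustration, not part of the derivation; the variable names are my own choices, and any CAS would do.)

```python
import sympy as sp

theta = sp.symbols('theta')
f = sp.Function('f')

u = f(theta) * sp.cos(theta)
lhs = sp.diff(u, theta, 4) + 2*sp.diff(u, theta, 2) + u

# the simplified form claimed above: f'''' cos(theta) - 4 f''' sin(theta) - 4 f'' cos(theta)
claimed = (sp.diff(f(theta), theta, 4)*sp.cos(theta)
           - 4*sp.diff(f(theta), theta, 3)*sp.sin(theta)
           - 4*sp.diff(f(theta), theta, 2)*sp.cos(theta))

print(sp.simplify(lhs - claimed))   # should print 0
```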

Integrating twice, we can find f:

f''(\theta) = 0

f'(\theta) = c_1

f(\theta) = c_1 \theta + c_2.

Therefore, a solution of the original differential equation will be

u(\theta) = c_1 \theta \cos \theta + c_2 \cos \theta.

We now repeat the logic for u(\theta) = f(\theta) \sin \theta:

(f \sin \theta)^{(4)} + 2 (f \sin \theta)'' + f \sin \theta = 0

f^{(4)} \sin \theta + 4 f''' \cos \theta - 6 f'' \sin \theta - 4 f' \cos\theta + f \sin \theta + 2 f'' \sin \theta + 4 f' \cos \theta - 2 f \sin \theta + f \sin \theta = 0

f^{(4)} \sin\theta + 4 f''' \cos \theta - 6 f'' \sin \theta + 2 f'' \sin \theta = 0

f^{(4)} \sin\theta + 4 f''' \cos\theta - 4 f'' \sin\theta = 0.

Once again, a solution of this new differential equation will be f(\theta) = c_3 \theta + c_4, so that f'' = f''' = f^{(4)} = 0. Therefore, another solution of the original differential equation will be

u(\theta) = c_3 \theta \sin \theta + c_4 \sin \theta.

Adding these provides the general solution of the differential equation:

u(\theta) = c_1 \theta \cos \theta + c_2 \cos \theta + c_3 \theta \sin \theta + c_4 \sin \theta.

Except for the order of the constants, this matches the solution that was presented earlier by using techniques taught in a proper course in differential equations.
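
(For readers with a computer algebra system handy, here is a quick and entirely optional SymPy check that this four-parameter family really does satisfy u^{(4)} + 2u'' + u = 0. The code is only a sketch, with symbol names of my own choosing.)

```python
import sympy as sp

theta, c1, c2, c3, c4 = sp.symbols('theta c1 c2 c3 c4')

u = c1*theta*sp.cos(theta) + c2*sp.cos(theta) + c3*theta*sp.sin(theta) + c4*sp.sin(theta)
residual = sp.diff(u, theta, 4) + 2*sp.diff(u, theta, 2) + u

print(sp.simplify(residual))   # should print 0
```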

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6f: Rationale for Method of Undetermined Coefficients III

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

In the previous post, I used a standard technique from differential equations to find the general solution of

u^{(4)} + 2u'' + u = 0

to be

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3 \theta \cos \theta + c_4 \theta \sin \theta.

However, as much as possible in this series, I want to take the perspective of a talented calculus student who has not yet taken differential equations, so that the conclusion above is far from obvious. How could this be reasonably coaxed out of such a student?

To begin, we observe that the characteristic equation is

r^4 + 2r^2 + 1 = 0,

or

(r^2 + 1)^2 = 0.

Clearly this has the same roots as the simpler equation r^2 + 1 = 0, which corresponds to the second-order differential equation u'' + u = 0. We’ve already seen that u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta are solutions of this differential equation; perhaps they might also be solutions of the more complicated differential equation? The answer, of course, is yes:

u_1^{(4)} + 2 u_1'' + u_1 = \cos \theta - 2 \cos \theta + \cos \theta = 0

and

u_2^{(4)} + 2u_2'' + u_2 = \sin \theta - 2 \sin \theta + \sin \theta = 0.

The far trickier part is finding the two additional solutions. To find these, we use a standard trick/technique called reduction of order. In this technique, we guess that any additional solutions must have the form of either

u(\theta) = f(\theta) \cos \theta \qquad \hbox{or} \qquad  u(\theta) = f(\theta) \sin \theta,

where f(\theta) is some unknown function that we’re multiplying by the solutions we already have. We then substitute this into the differential equation u^{(4)} + 2u'' + u = 0 to form a new differential equation for the unknown f, which we can (hopefully) solve.

Doing this will require multiple applications of the Product Rule for differentiation. We already know that

(fg)' = f' g + f g'.

We now differentiate again, using the Product Rule, to find (fg)'':

(fg)'' = ( [fg]')' = (f'g)' + (fg')'

= f''g + f' g' + f' g' + f g''

= f'' g + 2 f' g' + f g''.

We now differentiate twice more to find (fg)^{(4)}:

(fg)''' = ( [fg]'')' = (f''g)' + 2(f'g')' +  (fg'')'

= f'''g + f'' g' + 2f'' g' + 2f' g'' + f' g'' + f g'''

= f''' g + 3 f'' g' + 3 f' g'' + f g'''.

A good student may be able to guess the pattern for the next derivative:

(fg)^{(4)} = ( [fg]''')' = (f'''g)' + 3(f''g')' +3(f'g'')' + (fg''')'

= f^{(4)}g + f''' g' + 3f''' g' + 3f'' g'' + 3f'' g'' + 3f'g''' + f' g''' + f g^{(4)}

= f^{(4)} g + 4 f''' g' + 6 f'' g'' + 4f' g''' + f g^{(4)}.

In this way, Pascal’s triangle makes a somewhat surprising appearance; indeed, this pattern can be proven with mathematical induction.
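
(As an aside, this Pascal’s-triangle pattern for (fg)^{(4)}, which is the n = 4 case of what is often called the general Leibniz rule, can be confirmed for arbitrary f and g with a short, optional SymPy computation like the one below; the symbol names are just my choices for this sketch.)

```python
import sympy as sp

theta = sp.symbols('theta')
f, g = sp.Function('f'), sp.Function('g')

lhs = sp.diff(f(theta)*g(theta), theta, 4)

# binomial-coefficient (Pascal's triangle) formula for the fourth derivative of a product
rhs = sum(sp.binomial(4, k) * sp.diff(f(theta), theta, 4 - k) * sp.diff(g(theta), theta, k)
          for k in range(5))

print(sp.simplify(lhs - rhs))   # should print 0
```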

In the next post, we’ll apply this to the solution of the fourth-order differential equation.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6e: Rationale for Method of Undetermined Coefficients II

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

In the previous post, we derived the method of undetermined coefficients for the simplified differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}.

In this post, we consider the simplified differential equation if the right-hand side has only the fourth term,

u''(\theta) + u(\theta) =  \displaystyle \frac{2\delta \epsilon }{\alpha^2}\cos \theta.

Let v(\theta) =  \displaystyle \frac{2\delta \epsilon }{\alpha^2}\cos \theta. Then v satisfies the new differential equation v'' + v = 0. Since u'' + u = v, we may substitute:

(u''+u)'' + (u'' + u) = 0

u^{(4)} + u'' + u'' + u = 0

u^{(4)} + 2u'' + u = 0.

The characteristic equation of this homogeneous differential equation is r^4 + 2r^2 + 1 = 0, or (r^2+1)^2 = 0. Therefore, r = i and r = -i are both double roots of this quartic equation, and so the general solution for u is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3 \theta \cos \theta + c_4 \theta \sin \theta.

Substituting into the original differential equation will allow for the computation of c_3 and c_4:

u''(\theta) + u(\theta) = -c_1 \cos \theta - c_2 \sin \theta - 2c_3 \sin \theta - c_3 \theta \cos \theta + 2c_4 \cos \theta - c_4 \theta \sin \theta

+   c_1 \cos \theta + c_2 \sin \theta + c_3 \theta \cos \theta + c_4 \theta \sin \theta

\displaystyle \frac{2\delta \epsilon }{\alpha^2}\cos \theta = - 2c_3 \sin \theta+ 2c_4 \cos \theta

Matching coefficients, we see that c_3 = 0 and c_4 = \displaystyle \frac{\delta \epsilon }{\alpha^2}. Therefore,

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + \displaystyle \frac{\delta \epsilon }{\alpha^2} \theta \sin \theta

is the general solution of the simplified differential equation. Setting c_1 = c_2 = 0, we find that

u(\theta) =  \displaystyle \frac{\delta \epsilon }{\alpha^2} \theta \sin \theta

is one particular solution of this simplified differential equation. Not surprisingly, this matches the result that would have been obtained if the method of undetermined coefficients had been blindly followed.
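
(Anyone who wants to verify this particular solution without redoing the differentiation by hand could run an optional SymPy check along the following lines; the names u_p and forcing are mine, and the code is only a sketch.)

```python
import sympy as sp

theta = sp.symbols('theta')
alpha, delta, epsilon = sp.symbols('alpha delta epsilon', positive=True)

u_p = delta*epsilon/alpha**2 * theta * sp.sin(theta)         # the particular solution found above
forcing = 2*delta*epsilon/alpha**2 * sp.cos(theta)           # the right-hand side
print(sp.simplify(sp.diff(u_p, theta, 2) + u_p - forcing))   # should print 0
```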

As we’ll see in a future post, the presence of this \theta \sin \theta term is what predicts the precession of a planet’s orbit under general relativity.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6d: Rationale for Method of Undetermined Coefficients I

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We have shown that the motion of a planet around the Sun, expressed in polar coordinates (r,\theta) with the Sun at the origin, under general relativity follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2,

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of undetermined coefficients. To use this technique, we first expand the right-hand side of the differential equation and then apply a power-reduction trigonometric identity:

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{2\delta  \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos^2 \theta}{\alpha^2}

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2}{\alpha^2} \frac{1 + \cos 2\theta}{2}

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2\delta \epsilon \cos \theta}{\alpha^2} + \frac{\delta \epsilon^2 \cos 2\theta}{2\alpha^2}

This is now in the form for using the method of undetermined coefficients. However, in this series, I’d like to take some time to explain why this technique actually works. To begin, we look at a simplified differential equation using only the first three terms on the right-hand side:

u''(\theta) + u(\theta) = \displaystyle\frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  .

Let v(\theta) =\displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  . Since v is a constant, this function satisfies the simple differential equation v' = 0. Since u''+u=v, we can substitute:

(u'' + u)' = 0

u''' + u' = 0

(We could have more easily said, “Take the derivative of both sides,” but we’ll be using a more complicated form of this technique in future posts.) The characteristic equation of this differential equation is r^3 + r = 0. Factoring, we obtain r(r^2 + 1) = 0, so that the three roots are r = 0 and r = \pm i. Therefore, the general solution of this differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + c_3.

Notice that this matches the outcome of blindly using the method of undetermined coefficients without conceptually understanding why this technique works.

The constants c_1 and c_2 are determined by the initial conditions. To find c_3, we observe

u''(\theta) +u(\theta) =  -c_1 \cos \theta - c_2 \sin \theta +c_1 \cos \theta + c_2 \sin \theta + c_3

\displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}  = c_3.

Therefore, the general solution of this simplified differential equation is

u(\theta) = c_1 \cos \theta + c_2 \sin \theta + \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}.

Furthermore, setting c_1 = c_2 = 0, we see that

u(\theta) = \displaystyle\frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2}

is a particular solution to the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} .
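
(Here, too, a computer algebra system can confirm the claim in a couple of lines. The SymPy snippet below is optional and only a sketch; treating \alpha, \delta, and \epsilon as positive symbols is my own convenience.)

```python
import sympy as sp

theta, c1, c2 = sp.symbols('theta c1 c2')
alpha, delta, epsilon = sp.symbols('alpha delta epsilon', positive=True)

K = 1/alpha + delta/alpha**2 + delta*epsilon**2/(2*alpha**2)   # the constant right-hand side
u = c1*sp.cos(theta) + c2*sp.sin(theta) + K                    # the general solution found above
print(sp.simplify(sp.diff(u, theta, 2) + u - K))               # should print 0
```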

In the next couple of posts, we find the particular solutions associated with the other terms on the right-hand side.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 6b: Checking Solution of New Differential Equation with Calculus

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

In the last post, we showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under general relativity the motion of the planet follows the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1 + \epsilon \cos \theta}{\alpha} \right)^2,

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0,

where u = \displaystyle \frac{1}{r}, \displaystyle \frac{1}{\alpha} = \frac{GMm^2}{\ell^2}, \delta = \displaystyle \frac{3GM}{c^2}, G is the gravitational constant of the universe, m is the mass of the planet, M is the mass of the Sun, \ell is the constant angular momentum of the planet, c is the speed of light, and P is the smallest distance of the planet from the Sun during its orbit (i.e., at perihelion).

I won’t sugar-coat it; the solution is a big mess:

u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} \theta \sin \theta - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta.

That said, it is an elementary, if complicated, exercise in calculus to confirm that this satisfies all three equations above. We’ll start with the second one:

u(0) = \displaystyle \frac{1 + \epsilon \cos 0}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} \cdot 0 \sin 0 - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 0 - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos 0

= \displaystyle \frac{1 + \epsilon}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} - \frac{\epsilon^2 \delta}{6\alpha^2} - \frac{\delta(3+\epsilon^2)}{3\alpha^2}

= \displaystyle \frac{ 6\alpha(1+\epsilon) + 6\delta + 3 \delta \epsilon^2 - \delta \epsilon^2 - 2\delta (3+\epsilon^2)}{6\alpha^2}

= \displaystyle \frac{ 6\alpha(1+\epsilon) + 6\delta + 3 \delta \epsilon^2 - \delta \epsilon^2 - 6\delta - 2\delta \epsilon^2}{6\alpha^2}

= \displaystyle \frac{ 6\alpha(1+\epsilon)}{6\alpha^2}

= \displaystyle \frac{ 1+\epsilon}{\alpha}

= \displaystyle \frac{1}{P},

where in the last step we used the equation P = \displaystyle \frac{\alpha}{1 + \epsilon} that was obtained earlier in this series.

Next, to check the initial condition u'(0) = 0, we differentiate:

u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} \theta \sin \theta - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta

u'(\theta) = \displaystyle -\frac{\epsilon \sin \theta}{\alpha} + \frac{\epsilon \delta}{\alpha^2} (\sin \theta + \theta \cos \theta) + \frac{\epsilon^2 \delta}{3\alpha^2} \sin 2\theta + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \sin\theta

u'(0) = \displaystyle -\frac{\epsilon \sin 0}{\alpha} + \frac{\epsilon \delta}{\alpha^2} (\sin 0 + 0 \cdot \cos 0) + \frac{\epsilon^2 \delta}{3\alpha^2} \sin 0 + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \sin 0 = 0.

Finally, to check the differential equation itself, we compute the second derivative:

u''(\theta) = \displaystyle -\frac{\epsilon \cos \theta}{\alpha} + \frac{\epsilon \delta}{\alpha^2} (2 \cos \theta - \theta \sin \theta) + \frac{2\epsilon^2 \delta}{3\alpha^2} \cos 2\theta + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos\theta.

Adding u''(\theta) and u(\theta), we find

u''(\theta) + u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta - \epsilon \cos \theta}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{\epsilon \delta}{\alpha^2} (\theta \sin \theta + 2 \cos \theta - \theta \sin \theta)

\displaystyle - \frac{\epsilon^2 \delta}{6\alpha^2} \cos 2\theta + \frac{2\epsilon^2 \delta}{3\alpha^2} \cos 2\theta - \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos \theta + \frac{\delta(3+\epsilon^2)}{3\alpha^2} \cos\theta,

which simplifies considerably:

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} + \frac{\delta \epsilon^2}{2\alpha^2} + \frac{2 \epsilon \delta}{\alpha^2} \cos \theta + \frac{\epsilon^2 \delta}{2\alpha^2} \cos 2\theta

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} \left( 1 + \frac{\epsilon^2}{2} + 2 \epsilon \cos \theta + \frac{\epsilon^2}{2} \cos 2\theta \right)

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} \left( 1 + 2 \epsilon \cos \theta + \epsilon^2 \frac{1+\cos 2\theta}{2} \right)

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} \left( 1 + 2 \epsilon \cos \theta + \epsilon^2 \cos^2 \theta \right)

= \displaystyle \frac{1}{\alpha} + \frac{\delta}{\alpha^2} (1 + \epsilon \cos \theta)^2

= \displaystyle \frac{1}{\alpha} + \delta \left( \frac{1+\epsilon \cos \theta}{\alpha} \right)^2,

where we used the power-reduction trigonometric identity

\cos^2 \theta = \displaystyle \frac{1 + \cos 2\theta}{2}

in the second-to-last step.
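
(Because the algebra above is lengthy, some readers may appreciate an independent check. The optional SymPy sketch below verifies the differential equation and both initial conditions, using the fact that 1/P = (1+\epsilon)/\alpha; the variable names and the positivity assumptions on the symbols are mine.)

```python
import sympy as sp

theta = sp.symbols('theta')
alpha, delta, epsilon = sp.symbols('alpha delta epsilon', positive=True)

u = ((1 + epsilon*sp.cos(theta))/alpha + delta/alpha**2 + delta*epsilon**2/(2*alpha**2)
     + epsilon*delta/alpha**2 * theta*sp.sin(theta)
     - epsilon**2*delta/(6*alpha**2) * sp.cos(2*theta)
     - delta*(3 + epsilon**2)/(3*alpha**2) * sp.cos(theta))

rhs = 1/alpha + delta*((1 + epsilon*sp.cos(theta))/alpha)**2

print(sp.simplify(sp.diff(u, theta, 2) + u - rhs))           # should print 0
print(sp.simplify(u.subs(theta, 0) - (1 + epsilon)/alpha))   # should print 0, i.e. u(0) = 1/P
print(sp.diff(u, theta).subs(theta, 0))                      # should print 0
```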

While we have verified the proposed solution of the initial-value problem, and the steps for doing so lie completely within the grasp of a good calculus student, I’ll be the first to say that this solution is somewhat unsatisfying: the solution appeared seemingly out of thin air, and we just checked to see if this mysterious solution actually works. In the next few posts, I’ll discuss how this solution can be derived using standard techniques from first-semester differential equations.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 5d: Deriving Orbits under Newtonian Mechanics Using Variation of Parameters

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We previously showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

where u = 1/r and \alpha is a certain constant. We will also impose the initial condition that the planet is at perihelion (i.e., is closest to the sun), at a distance of P, when \theta = 0. This means that u obtains its maximum value of 1/P when \theta = 0. This leads to the two initial conditions

u(0) = \displaystyle \frac{1}{P} \qquad \hbox{and} \qquad u'(0) = 0;

the second equation arises since u has a local extremum at \theta = 0.

We now take the perspective of a student who is taking a first-semester course in differential equations. There are two standard techniques for solving a second-order non-homogeneous differential equation with constant coefficients. One of these is the method of variation of parameters. First, we solve the associated homogeneous differential equation

u''(\theta) + u(\theta) = 0.

The characteristic equation of this differential equation is r^2 + 1 = 0, which clearly has the two imaginary roots r = \pm i. Therefore, two linearly independent solutions of the associated homogeneous equation are u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta.

(As an aside, this is one answer to the common question, “What are complex numbers good for?” The answer is naturally above the heads of Algebra II students when they first encounter the mysterious number i, but complex numbers provide a way of solving the differential equations that model multiple problems in statics and dynamics.)

According to the method of variation of parameters, the general solution of the original nonhomogeneous differential equation

u''(\theta) + u(\theta) = g(\theta)

is

u(\theta) = f_1(\theta) u_1(\theta) + f_2(\theta) u_2(\theta),

where

f_1(\theta) = -\displaystyle \int \frac{u_2(\theta) g(\theta)}{W(\theta)} d\theta ,

f_2(\theta) = \displaystyle \int \frac{u_1(\theta) g(\theta)}{W(\theta)} d\theta ,

and W(\theta) is the Wronskian of u_1(\theta) and u_2(\theta), defined by the determinant

W(\theta) = \displaystyle \begin{vmatrix} u_1(\theta) & u_2(\theta) \\ u_1'(\theta) & u_2'(\theta) \end{vmatrix}  = u_1(\theta) u_2'(\theta) - u_1'(\theta) u_2(\theta).

Well, that’s a mouthful.

Fortunately, for the example at hand, these computations are pretty easy. First, since u_1(\theta) = \cos \theta and u_2(\theta) = \sin \theta, we have

W(\theta) = (\cos \theta)(\cos \theta) - (\sin \theta)(-\sin \theta) = \cos^2 \theta + \sin^2 \theta = 1

from the usual Pythagorean trigonometric identity. Therefore, the denominators in the integrals for f_1(\theta) and f_2(\theta) essentially disappear.

Since g(\theta) = \displaystyle \frac{1}{\alpha}, the integrals for f_1(\theta) and f_2(\theta) are straightforward to compute:

f_1(\theta) = -\displaystyle \int u_2(\theta) \frac{1}{\alpha} d\theta = -\displaystyle \frac{1}{\alpha} \int \sin \theta \, d\theta = \displaystyle \frac{1}{\alpha}\cos \theta + a,

where we use +a for the constant of integration instead of the usual +C. Second,

f_2(\theta) = \displaystyle \int u_1(\theta)  \frac{1}{\alpha} d\theta = \displaystyle \frac{1}{\alpha} \int \cos \theta \, d\theta = \displaystyle \frac{1}{\alpha}\sin \theta + b,

using +b for the constant of integration. Therefore, by variation of parameters, the general solution of the nonhomogeneous differential equation is

u(\theta) = f_1(\theta) u_1(\theta) + f_2(\theta) u_2(\theta)

= \left( \displaystyle \frac{1}{\alpha}\cos \theta + a \right) \cos \theta + \left( \displaystyle \frac{1}{\alpha}\sin\theta + b \right) \sin \theta

= a \cos \theta + b\sin \theta + \displaystyle \frac{\cos^2 \theta + \sin^2 \theta}{\alpha}

= a \cos \theta + b \sin \theta + \displaystyle \frac{1}{\alpha}.

Unsurprisingly, this matches the answer in the previous post that was found by the method of undetermined coefficients.
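
(The variation-of-parameters recipe above is also easy to mechanize. The following optional SymPy sketch carries out exactly these integrals for this example; SymPy is assumed here only as a convenient checker, and the choice to mark \alpha as a positive symbol is mine.)

```python
import sympy as sp

theta, a, b = sp.symbols('theta a b')
alpha = sp.symbols('alpha', positive=True)

u1, u2 = sp.cos(theta), sp.sin(theta)                            # homogeneous solutions
g = 1/alpha                                                      # right-hand side
W = sp.simplify(u1*sp.diff(u2, theta) - sp.diff(u1, theta)*u2)   # Wronskian; simplifies to 1

f1 = -sp.integrate(u2*g/W, theta) + a                            # +a, +b are the constants of integration
f2 = sp.integrate(u1*g/W, theta) + b

u = sp.simplify(f1*u1 + f2*u2)
print(u)   # should be equivalent to a*cos(theta) + b*sin(theta) + 1/alpha
```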

For the sake of completeness, I repeat the argument used in the previous two posts to determine a and b. This requires using the initial conditions u(0) = \displaystyle \frac{1}{P} and u'(0) = 0. From the first initial condition,

u(0) = a \cos 0 + b \sin 0 + \displaystyle \frac{1}{\alpha}

\displaystyle \frac{1}{P} = a + \frac{1}{\alpha}

\displaystyle \frac{1}{P} - \frac{1}{\alpha} = a

\displaystyle \frac{\alpha - P}{\alpha P} = a

From the second initial condition,

u'(\theta) = -a \sin \theta + b \cos \theta

u'(0) = -a \sin 0 + b \cos 0

0 = b.

From these two constants, we obtain

u(\theta) = \displaystyle \frac{\alpha - P}{\alpha P}  \cos \theta + 0 \sin \theta + \displaystyle \frac{1}{\alpha}

= \displaystyle \frac{1}{\alpha} \left(  1 + \frac{\alpha-P}{P} \cos \theta \right)

= \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha},

where \epsilon = \displaystyle \frac{\alpha - P}{P}.

Finally, since r = 1/u, we see that the planet’s orbit satisfies

r = \displaystyle \frac{\alpha}{1 + \epsilon \cos \theta},

so that, as shown earlier in this series, the orbit is an ellipse with eccentricity \epsilon.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 5b: Deriving Orbits under Newtonian Mechanics with Calculus

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We previously showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

where u = 1/r and \alpha is a certain constant. We will also impose the initial condition that the planet is at perihelion (i.e., is closest to the sun), at a distance of P, when \theta = 0. This means that u obtains its maximum value of 1/P when \theta = 0. This leads to the two initial conditions

u(0) = \displaystyle \frac{1}{P} \qquad \hbox{and} \qquad u'(0) = 0;

the second equation arises since u has a local extremum at \theta = 0.

In the previous post, we confirmed that

u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha}

solved this initial-value problem. However, the solution was unsatisfying because it gave no indication of where this guess might have come from. In this post, I suggest a series of questions that good calculus students could be asked that would hopefully lead them quite naturally to this solution.

Step 1. Let’s make the differential equation simpler, for now, by replacing the right-hand side with 0:

u''(\theta) + u(\theta) = 0,

or

u''(\theta) = -u(\theta).

Can you think of a function or two that, when you differentiate twice, you get the original function back, except with a minus sign in front?

Answer to Step 1. With a little thought, hopefully students can come up with the standard answers of u(\theta) = \cos \theta and u(\theta) = \sin \theta.

Step 2. Using these two answers, can you think of a third function that works?

Answer to Step 2. This is usually the step that students struggle with the most, as they try to think of something completely different that works. This won’t work, but that’s OK… we all learn from our failures. If they can’t figure it out, I’ll give a big hint: “Try multiplying one of these two answers by something.” In time, they’ll see that answers like u(\theta) = 2\cos \theta and u(\theta) = 3\sin \theta work. Once that conceptual barrier is broken, they’ll usually produce the solutions u(\theta) = a \cos \theta and u(\theta) = b \sin \theta.

Step 3. Using these two answers, can you think of anything else that works?

Answer to Step 3. Again, students might struggle as they imagine something else that works. If this goes on for too long, I’ll give a big hint: “Try combining them.” Eventually, we hopefully get to the point that they’ll see that the linear combination u(\theta) = a \cos \theta + b \sin \theta also solves the associated homogeneous differential equation.

Step 4. Let’s now switch back to the original differential equation u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha}. Let’s start simple: u''(\theta) + u(\theta) = 5. Can you think of an easy function that’s a solution?

Answer to Step 4. This might take some experimentation, and students will probably try unnecessarily complicated guesses first. If this goes on for too long, I’ll give a big hint: “Try a constant.” Eventually, they hopefully determine that if u(\theta) = 5 is a constant function, then clearly u'(\theta) = 0 and u''(\theta) = 0, so that u''(\theta) + u(\theta) = 5.

Step 5. Let’s return to u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha}. Any guesses on an answer to this one?

Answer to Step 5. Hopefully, students quickly realize that the constant function u(\theta) = \displaystyle \frac{1}{\alpha} works.

Step 6. Let’s review. We’ve shown that anything of the form u(\theta) = a\cos \theta + b \sin \theta is a solution of u''(\theta) + u(\theta) = 0. We’ve also shown that u(\theta) = \displaystyle\frac{1}{\alpha} is a solution of u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha}. Can you use these two answers to find something else that works?

Answer to Step 6. Hopefully, with the experience learned from Step 3, students will guess that u(\theta) = a\cos \theta + b\sin \theta + \displaystyle \frac{1}{\alpha} will work.
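
(If a computer algebra system is available in class, students can also confirm the Step 6 guess in one line. The SymPy check below is optional and only illustrative; the symbol names are mine.)

```python
import sympy as sp

theta, a, b = sp.symbols('theta a b')
alpha = sp.symbols('alpha', positive=True)

u = a*sp.cos(theta) + b*sp.sin(theta) + 1/alpha          # the guess from Step 6
print(sp.simplify(sp.diff(u, theta, 2) + u - 1/alpha))   # should print 0
```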

Step 7. OK, that solves the differential equation. Any thoughts on how to find the values of a and b so that u(0) = \displaystyle \frac{1}{P} and u'(0) = 0?

Answer to Step 7. Hopefully, students will see that we should just plug into u(\theta):

u(0) = a \cos 0 + b \sin 0 + \displaystyle \frac{1}{\alpha}

\displaystyle \frac{1}{P} = a + \frac{1}{\alpha}

\displaystyle \frac{1}{P} - \frac{1}{\alpha} = a

\displaystyle \frac{\alpha - P}{\alpha P} = a

To find b, we first find u'(\theta) and then substitute \theta = 0:

u'(\theta) = -a \sin \theta + b \cos \theta

u'(0) = -a \sin 0 + b \cos 0

0 = b.

From these two constants, we obtain

u(\theta) = \displaystyle \frac{\alpha - P}{\alpha P}  \cos \theta + 0 \sin \theta + \displaystyle \frac{1}{\alpha}

= \displaystyle \frac{1}{\alpha} \left(  1 + \frac{\alpha-P}{P} \cos \theta \right)

= \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha},

where \epsilon = \displaystyle \frac{\alpha - P}{P}.

Finally, since r = 1/u, we see that the planet’s orbit satisfies

r = \displaystyle \frac{\alpha}{1 + \epsilon \cos \theta},

so that, as shown earlier in this series, the orbit is an ellipse with eccentricity \epsilon.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 5a: Confirming Orbits under Newtonian Mechanics with Calculus

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

We previously showed that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

where u = 1/r and \alpha is a certain constant. We will also impose the initial condition that the planet is at perihelion (i.e., is closest to the sun), at a distance of P, when \theta = 0. This means that u obtains its maximum value of 1/P when \theta = 0. This leads to the two initial conditions

u(0) = \displaystyle \frac{1}{P} \qquad \hbox{and} \qquad u'(0) = 0;

the second equation arises since u has a local extremum at \theta = 0.

In the next few posts, we’ll discuss the solution of this initial-value problem. Today’s post, which would be appropriate for calculus students, is devoted to confirming that

u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha}

solves this initial-value problem, where \epsilon = \displaystyle \frac{\alpha-P}{P}. Since r is the reciprocal of u, we infer that

r = \displaystyle \frac{\alpha}{1 + \epsilon \cos \theta}.

As we’ve already seen in this series, this means that the orbit of the planet is a conic section — either a circle, ellipse, parabola, or hyperbola. Since the orbit of a planet is stable and \epsilon = 0 is extremely unlikely, this means that the planet orbits the Sun in an ellipse, with the Sun at one focus of the ellipse.

So, for a calculus student to verify that planets move in ellipses, one must check that

u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha}

is a solution of the initial-value problem

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

u(0) = \displaystyle \frac{1}{P},

u'(0) = 0.

The second line is easy to check:

u(0) = \displaystyle \frac{1 + \epsilon \cos 0}{\alpha}

= \displaystyle \frac{1 + \epsilon}{\alpha}

= \displaystyle \frac{1 + \displaystyle \frac{\alpha-P}{P}}{\alpha}

= \displaystyle \frac{1}{\alpha} \frac{P + \alpha - P}{P}

= \displaystyle \frac{1}{\alpha} \frac{\alpha}{P}

= \displaystyle \frac{1}{P}.

The third line is also easy to check:

u'(\theta) = \displaystyle \frac{-\epsilon \sin \theta}{\alpha}

u'(0) = \displaystyle \frac{-\epsilon \sin 0}{\alpha} = 0.

To check the first line, we first find u''(\theta):

u''(\theta) = \displaystyle \frac{-\epsilon \cos \theta}{\alpha},

so that

u''(\theta) + u(\theta) = \displaystyle \frac{-\epsilon \cos \theta}{\alpha} + \frac{1 + \epsilon \cos \theta}{\alpha} = \frac{1}{\alpha},

thus confirming that u(\theta) = \displaystyle \frac{1 + \epsilon \cos \theta}{\alpha} solves the initial-value problem.
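
(For completeness, the same verification can be done in a few lines with a computer algebra system. The SymPy sketch below substitutes \epsilon = (\alpha-P)/P and checks all three equations; it is optional, and the symbol names and positivity assumptions are mine.)

```python
import sympy as sp

theta = sp.symbols('theta')
alpha, P = sp.symbols('alpha P', positive=True)
epsilon = (alpha - P)/P

u = (1 + epsilon*sp.cos(theta))/alpha

print(sp.simplify(sp.diff(u, theta, 2) + u - 1/alpha))   # should print 0
print(sp.simplify(u.subs(theta, 0) - 1/P))               # should print 0
print(sp.diff(u, theta).subs(theta, 0))                  # should print 0
```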

While the above calculations are well within the grasp of a good Calculus I student, I’ll be the first to admit that this solution is less than satisfying. We just mysteriously proposed a solution, seemingly out of thin air, and confirmed that it worked. In the next post, I’ll propose a way that calculus students can be led to guess this solution. Then, we’ll talk about finding the solution of this nonhomogeneous initial-value problem using standard techniques from differential equations.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 4b: Acceleration in Polar Coordinates

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

In this part of the series, we will show that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

where u = 1/r and \alpha is a certain constant. Deriving this governing differential equation will require some principles from physics. If you’d rather skip the physics and get to the mathematics, we’ll get to solving this differential equation in the next post.

Part of the derivation of this governing differential equation will involve Newton’s Second Law

{\bf F} = m {\bf a},

where m is the mass of the planet and the force {\bf F} and the acceleration {\bf a} are vectors. In usual rectangular coordinates, the acceleration vector would be expressed as

{\bf a} = x''(t) {\bf i} + y''(t) {\bf j},

where the components of the acceleration in the x- and y-directions are x''(t) and y''(t), and the unit vectors {\bf i} and {\bf j} are perpendicular, pointing in the positive x and positive y directions.

Unfortunately, our problem involves polar coordinates, and rewriting the acceleration vector in polar coordinates, instead of rectangular coordinates, is going to take some work.

Suppose that the position of the planet is (r,\theta) in polar coordinates, so that the position in rectangular coordinates is {\bf r} = (r\cos \theta, r \sin \theta). This may be rewritten as

{\bf r} = r \cos \theta {\bf i} + r \sin \theta {\bf j} = r ( \cos \theta {\bf i} + \sin \theta {\bf j}) = r {\bf u}_r,

where

{\bf u}_r = \cos \theta {\bf i} + \sin \theta {\bf j}

is a unit vector that points away from the origin. We see that this is a unit vector since

\parallel {\bf u}_r \parallel^2 = {\bf u}_r \cdot {\bf u}_r = \cos^2 \theta + \sin^2 \theta = 1.

We also define

{\bf u}_\theta = -\sin \theta {\bf i} + \cos \theta {\bf j}

to be a unit vector that is perpendicular to {\bf u}_r; it turns out that {\bf u}_\theta points in the direction of increasing \theta. To see that {\bf u}_r and {\bf u}_\theta are perpendicular, we observe

{\bf u}_r \cdot {\bf u}_\theta = -\sin \theta \cos \theta + \sin \theta \cos \theta = 0.

Computing the velocity and acceleration vectors in polar coordinates will have a twist that’s not experienced with rectangular coordinates since both {\bf u}_r and {\bf u}_\theta are functions of \theta. Indeed, we have

\displaystyle \frac{d{\bf u}_r}{d\theta} =  \frac{d \cos \theta}{d\theta} {\bf i} + \frac{d\sin \theta}{d\theta} {\bf j} = -\sin \theta {\bf i} + \cos \theta {\bf j} = {\bf u}_\theta.

Furthermore,

\displaystyle \frac{d{\bf u}_\theta}{d\theta} =  -\frac{d \sin \theta}{d\theta} {\bf i} + \frac{d\cos \theta}{d\theta} {\bf j} = -\cos \theta {\bf i} - \sin \theta {\bf j} = -{\bf u}_r.

These two equations will be needed in the derivation below.

We are now in position to express the velocity and acceleration of the orbiting planet in polar coordinates. Clearly, the position of the planet is r {\bf u}_r, or a distance r from the origin in the direction of {\bf u}_r. Therefore, by the Product Rule, the velocity of the planet is

{\bf v} = \displaystyle \frac{d}{dt} (r {\bf u}_r) = \displaystyle \frac{dr}{dt} {\bf u}_r + r \frac{d {\bf u}_r}{dt}

We now apply the Chain Rule to the second term:

{\bf v} = \displaystyle \frac{dr}{dt} {\bf u}_r + r \frac{d {\bf u}_r}{d\theta} \frac{d\theta}{dt}

= \displaystyle \frac{dr}{dt} {\bf u}_r + r \frac{d\theta}{dt} {\bf u}_\theta.

Differentiating a second time with respect to time, and again using the Chain Rule, we find

{\bf a} = \displaystyle \frac{d {\bf v}}{dt} = \displaystyle \frac{d^2r}{dt^2} {\bf u}_r + \frac{dr}{dt} \frac{d{\bf u}_r}{dt} + \frac{dr}{dt} \frac{d\theta}{dt} {\bf u}_\theta + r \frac{d^2\theta}{dt^2} {\bf u}_\theta + r \frac{d\theta}{dt} \frac{d{\bf u}_\theta}{dt}

= \displaystyle \frac{d^2r}{dt^2} {\bf u}_r + \frac{dr}{dt} \frac{d{\bf u}_r}{d\theta} \frac{d\theta}{dt} + \frac{dr}{dt} \frac{d\theta}{dt} {\bf u}_\theta + r \frac{d^2\theta}{dt^2} {\bf u}_\theta +  r \frac{d\theta}{dt} \frac{d{\bf u}_\theta}{d\theta} \frac{d\theta}{dt}

= \displaystyle \frac{d^2r}{dt^2} {\bf u}_r + \frac{dr}{dt} \frac{d\theta}{dt} {\bf u}_\theta  + \frac{dr}{dt} \frac{d\theta}{dt} {\bf u}_\theta + r \frac{d^2\theta}{dt^2} {\bf u}_\theta -  r \left(\frac{d\theta}{dt} \right)^2 {\bf u}_r

= \displaystyle \left[ \frac{d^2r}{dt^2} -  r \left(\frac{d\theta}{dt} \right)^2 \right] {\bf u}_r + \left[ 2\frac{dr}{dt} \frac{d\theta}{dt} + r \frac{d^2\theta}{dt^2} \right] {\bf u}_\theta.
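
(This decomposition can also be checked by differentiating the rectangular coordinates x = r\cos\theta and y = r\sin\theta twice with respect to t and then projecting onto {\bf u}_r and {\bf u}_\theta. The optional SymPy sketch below does exactly that; the names r, th, u_r, and u_th are my own choices for this illustration.)

```python
import sympy as sp

t = sp.symbols('t')
r, th = sp.Function('r'), sp.Function('theta')   # r and theta are both functions of time

# position in rectangular coordinates
x = r(t)*sp.cos(th(t))
y = r(t)*sp.sin(th(t))

# the unit vectors u_r and u_theta
u_r  = sp.Matrix([sp.cos(th(t)), sp.sin(th(t))])
u_th = sp.Matrix([-sp.sin(th(t)), sp.cos(th(t))])

# acceleration vector, obtained by differentiating the position twice
acc = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])

# radial and transverse components, found by dotting with u_r and u_theta
print(sp.simplify(acc.dot(u_r)))    # should reduce to r'' - r*(theta')^2
print(sp.simplify(acc.dot(u_th)))   # should reduce to 2*r'*theta' + r*theta''
```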

This will be needed in the next post, when we use both Newton’s Second Law and Newton’s Law of Gravitation, expressed in polar coordinates.

Confirming Einstein’s Theory of General Relativity With Calculus, Part 4a: Angular Momentum

In this series, I’m discussing how ideas from calculus and precalculus (with a touch of differential equations) can predict the precession in Mercury’s orbit and thus confirm Einstein’s theory of general relativity. The origins of this series came from a class project that I assigned to my Differential Equations students maybe 20 years ago.

In this part of the series, we will show that if the motion of a planet around the Sun is expressed in polar coordinates (r,\theta), with the Sun at the origin, then under Newtonian mechanics (i.e., without general relativity) the motion of the planet follows the differential equation

u''(\theta) + u(\theta) = \displaystyle \frac{1}{\alpha},

where u = 1/r and \alpha is a certain constant. Deriving this governing differential equation will require some principles from physics. If you’d rather skip the physics and get to the mathematics, we’ll get to solving this differential equation in a few posts.

One principle from physics that we’ll need is the Law of Conservation of Angular Momentum. Mathematically, this is expressed by

mr^2 \displaystyle \frac{d\theta}{dt} = \ell,

where \ell is a constant. Of course, this can be written as

\displaystyle \frac{d\theta}{dt} = \displaystyle \frac{\ell}{mr^2};

this will be used a couple times in the derivation below.

As we’ll soon see, we will need to express the second derivative \displaystyle \frac{d^2 r}{d t^2} in a form that depends only on \theta. To do this, we use the Chain Rule to obtain

r' = \displaystyle \frac{dr}{dt}

= \displaystyle \frac{dr}{d\theta} \cdot \frac{d\theta}{dt}

= \displaystyle \frac{\ell}{mr^2} \frac{dr}{d\theta}

= \displaystyle - \frac{\ell}{m} \frac{d}{d\theta} \left( \frac{1}{r} \right).

This last step used the Chain Rule in reverse:

\displaystyle \frac{d}{d\theta} \left( \frac{1}{r} \right) = \frac{d}{dr} \left( \frac{1}{r} \right) \cdot \frac{dr}{d\theta} = -\frac{1}{r^2} \cdot \frac{dr}{d\theta}.

To examine the second derivative \displaystyle \frac{d^2 r}{d t^2}, we again use the Chain Rule:

\displaystyle \frac{d^2 r}{d t^2} = \displaystyle \frac{dr'}{dt}

= \displaystyle \frac{dr'}{d\theta} \cdot \frac{d\theta}{dt}

= \displaystyle \frac{\ell}{mr^2} \frac{dr'}{d\theta}

= \displaystyle \frac{\ell}{mr^2} \frac{d}{d\theta} \left[ \frac{dr}{dt} \right]

= \displaystyle \frac{\ell}{mr^2} \frac{d}{d\theta} \left[ - \frac{\ell}{m} \frac{d}{d\theta} \left( \frac{1}{r} \right) \right]

= \displaystyle - \frac{\ell^2}{m^2r^2} \frac{d}{d\theta} \left[ \frac{d}{d\theta} \left( \frac{1}{r} \right) \right]

= \displaystyle - \frac{\ell^2}{m^2r^2} \frac{d^2}{d\theta^2}  \left( \frac{1}{r} \right) .
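
(The key step here is really the single chain-rule identity \displaystyle \frac{\ell}{mr^2} \frac{dr}{d\theta} = -\frac{\ell}{m} \frac{d}{d\theta} \left( \frac{1}{r} \right); once that is accepted, the rest is factoring out constants. The optional SymPy sketch below checks both the identity and the final formula, with r regarded as an unspecified function of \theta; the variable names are mine.)

```python
import sympy as sp

theta = sp.symbols('theta')
ell, m = sp.symbols('ell m', positive=True)
r = sp.Function('r')   # r regarded as a function of theta

# dr/dt written as a function of theta, using d(theta)/dt = ell/(m r^2)
r_t = -ell/m * sp.diff(1/r(theta), theta)
print(sp.simplify(r_t - ell/(m*r(theta)**2) * sp.diff(r(theta), theta)))   # should print 0

# d^2r/dt^2 = (ell/(m r^2)) d(r')/dtheta, compared with the formula just derived
r_tt = ell/(m*r(theta)**2) * sp.diff(r_t, theta)
claimed = -ell**2/(m**2*r(theta)**2) * sp.diff(1/r(theta), theta, 2)
print(sp.simplify(r_tt - claimed))   # should print 0
```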

While far from obvious now, this will be needed when we rewrite Newton’s Second Law in polar coordinates.