Engaging students: Solving linear systems of equations with matrices

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment was not to devise a full-blown lesson plan on this topic. Instead, I asked my students to think about three different ways of getting their students interested in the topic in the first place.

I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course).

This student submission comes from my former student Danielle Pope. Her topic, from Algebra II: solving linear systems of equations with matrices.


B2. How does this topic extend what your students should have learned in previous courses?

Based on the TEKS, matrices are introduced in Algebra 2. In earlier courses, students learn basic arithmetic in elementary school and how to solve equations in middle school and high school, so by the time they reach Algebra 2 they should be comfortable solving a single equation. This idea is then extended to a system of equations, which can be solved with the help of matrices. A matrix is just an “array of numbers,” which is why this method works for linear systems. Once the matrix is set up, there are two main ways to find the solutions; the one I will discuss is reduced row echelon form. This method of solving systems relies on the basic arithmetic that students already know. There are three row operations: multiplying a row by a constant, switching two rows, and adding a constant multiple of one row to another row. Even though these specific operations are stated for matrices, students have already multiplied constants and variables, swapped terms, and added constants and variables in their previous courses. Matrices just add another layer to those basic arithmetic skills.
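To make this concrete, here is a small worked example (my own illustration, not part of the original submission) showing how the three row operations carry an augmented matrix to reduced row echelon form for the system x + 2y = 5 and 3x + 4y = 6:

\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 6 \end{array}\right] \xrightarrow{R_2 \to R_2 - 3R_1} \left[\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & -2 & -9 \end{array}\right] \xrightarrow{R_2 \to -\frac{1}{2} R_2} \left[\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & 1 & \frac{9}{2} \end{array}\right] \xrightarrow{R_1 \to R_1 - 2R_2} \left[\begin{array}{cc|c} 1 & 0 & -4 \\ 0 & 1 & \frac{9}{2} \end{array}\right]

Reading off the last matrix gives x = -4 and y = \frac{9}{2}, and each arrow uses only arithmetic that students already know.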

 


D4. What are the contributions of various cultures to this topic?

Matrices have been around for much longer than some people may realize. One of the earliest civilizations to which matrices have been traced is the Babylonians; this was just one of their many contributions to mathematics. The Chinese text Nine Chapters of the Mathematical Art, “written during the Han Dynasty in China, gave the first known example of matrix methods.” During the same era, around 200 BC, the Chinese mathematician Liu Hui solved linear equations using matrices. In the 1800s, Germany started taking a closer look at matrices. The German mathematician Carl Jacobi brought the idea of determinants and matrices into the light, and Carl Gauss, another German mathematician, took this idea of determinants and developed it further. It wasn’t until Augustin Cauchy, a French mathematician, used and defined the word determinant that it took on the meaning it has today. James Sylvester, an English mathematician, “used the term matrix in 1850.” Sylvester also worked with the mathematician Arthur Cayley, who “first published an abstract definition of matrix” in his Memoir on the Theory of Matrices in 1858. These definitions are still used today in classrooms to help solve complex systems of equations.

 

E1. How can technology (YouTube, Khan Academy [khanacademy.org], Vi Hart, Geometers Sketchpad, graphing calculators, etc.) be used to effectively engage students with this topic?

In a classroom today, students should have access to a graphing calculator. The matrix features on these calculators can easily check the work of students who are just learning how to row-reduce or solve for determinants and inverse matrices. In the classroom, I would use this technology in a race for the right answer to get students engaged with matrices. Give students an easy two-equation system and have them solve for the variables; with each new problem, add an equation or a variable. While students are solving by hand, the teacher will be using the calculator to see who can get the answer first. Over time the problems will become too daunting to do by hand, so students will be more motivated to learn the faster shortcut using the calculator. Another resource that can be used outside the classroom is Khan Academy’s videos on solving systems of equations with matrices. These videos can fill in any gaps if students have questions at home, and they can also serve as the lecture in a flipped classroom environment.

References

https://www.mathsisfun.com/algebra/systems-linear-equations-matrices.html
http://www.sparknotes.com/math/algebra2/matrices/section4.rhtml
http://math.nie.edu.sg/bwjyeo/it/MathsOnline_AM/livemath/the/IT3AMMatricesHistory.html
http://www.storyofmathematics.com/mathematicians.html
https://www.khanacademy.org/math/precalculus/precalc-matrices/solving-equations-with-inverse-matrices/v/matrix-equations-systems

 

My Favorite One-Liners: Part 28

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s quip is one that I’ll use when simple techniques get used in a complicated way.

Consider the solution of the linear recurrence relation

Q_n = Q_{n-1} + 2 Q_{n-2},

where Q_0 = 1 and Q_1 = 1. With no modesty, I call this one the Quintanilla sequence when I teach my students — the forgotten little brother of the Fibonacci sequence.

To find the solution of this linear recurrence relation, the standard technique — which is a pretty long procedure — is to first solve the characteristic equation. Rewriting the recurrence as Q_n - Q_{n-1} - 2 Q_{n-2} = 0, we obtain the characteristic equation

r^2 - r - 2 = 0

This can be solved by any standard technique at a student’s disposal. If necessary, the quadratic formula can be used. However, for this one, the left-hand side simply factors:

(r-2)(r+1) = 0

r=2 \qquad \hbox{or} \qquad r = -1

(Indeed, I “developed” the Quintanilla equation on purpose, for pedagogical reasons, because its characteristic equation has two fairly simple roots — unlike the characteristic equation for the Fibonacci sequence.)

From these two roots, we can write down the general solution for the linear recurrence relation:

Q_n = \alpha_1 \times 2^n + \alpha_2 \times (-1)^n,

where \alpha_1 and \alpha_2 are constants to be determined. To find these constants, we plug in n =0:

Q_0 = \alpha_1 \times 2^0 + \alpha_2 \times (-1)^0.

We then plug in n =1:

Q_1 = \alpha_1 \times 2^1 + \alpha_2 \times (-1)^1.

Using the initial conditions gives

1 = \alpha_1 + \alpha_2

1 = 2 \alpha_1 - \alpha_2

This is a system of two equations in two unknowns, which can then be solved using any standard technique at the student’s disposal. Students should quickly find that \alpha_1 = 2/3 and \alpha_2 = 1/3, so that

Q_n = \displaystyle \frac{2}{3} \times 2^n + \frac{1}{3} \times (-1)^n = \frac{2^{n+1} + (-1)^n}{3},

which is the final answer.
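For readers who like to double-check such formulas, here is a minimal Python sketch (my own addition, not part of the original lesson) that verifies the closed form against the recurrence for the first several terms:

# Verify Q_n = (2^(n+1) + (-1)^n)/3 against Q_n = Q_{n-1} + 2 Q_{n-2}, with Q_0 = Q_1 = 1.
def q_closed(n):
    return (2 ** (n + 1) + (-1) ** n) // 3

q = [1, 1]
for n in range(2, 20):
    q.append(q[n - 1] + 2 * q[n - 2])

assert all(q_closed(n) == q[n] for n in range(20))
print(q[:8])  # [1, 1, 3, 5, 11, 21, 43, 85]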

Although this is a long procedure, the key steps are actually first taught in Algebra I: solving a quadratic equation and solving a system of two linear equations in two unknowns. So here’s my one-liner to describe this procedure:

This is just an algebra problem on steroids.

Yes, it’s only high school algebra, but used in a creative way that isn’t ordinarily taught when students first learn algebra.

I’ll use this “on steroids” line in any class when a simple technique is used in an unusual — and usually laborious — way to solve a new problem at the post-secondary level.

 

 

The antiderivative of 1/(x^4+1): Part 4

This antiderivative has arguably the highest ratio of “really hard to compute” to “really easy to write”:

\displaystyle \int \frac{1}{x^4 + 1} dx

So far, I’ve shown that the denominator can be factored over the real numbers:

\displaystyle \int \frac{dx}{x^4 + 1} = \displaystyle \int \frac{dx}{\left(x^2 - x \sqrt{2} + 1 \right) \left(x^2 + x \sqrt{2} + 1\right)} ,

so that the technique of partial fractions can be applied. Since both quadratics in the denominator are irreducible (and the degree of the numerator is less than the degree of the denominator), the partial fractions decomposition has the form

\displaystyle \frac{1}{\left(x^2 - x \sqrt{2} + 1 \right) \left(x^2 + x \sqrt{2} + 1\right)} = \displaystyle \frac{Ax + B}{\left(x^2 - x \sqrt{2} + 1 \right)} + \displaystyle \frac{Cx + D}{ \left(x^2 + x \sqrt{2} + 1\right)}

Clearing out the denominators, I get

1 = (Ax + B) \left(x^2 + x \sqrt{2} + 1\right) + (Cx + D) \left(x^2 - x \sqrt{2} + 1\right)

or

1 = Ax^3 + Bx^2 + Ax^2 \sqrt{2} + Bx\sqrt{2} + Ax + B + Cx^3 + Dx^2 - Cx^2 \sqrt{2} - Dx\sqrt{2} + Cx + D

or

0x^3 + 0x^2 + 0x + 1 = (A + C)x^3 + (A \sqrt{2} + B - C \sqrt{2} + D)x^2 + (A + B\sqrt{2} + C - D \sqrt{2} ) x + (B+D)

Matching coefficients yields the following system of four equations in four unknowns:

A + C = 0

A\sqrt{2} + B - C\sqrt{2} + D = 0

A + B \sqrt{2} + C - D\sqrt{2} = 0

B + D = 1

Ordinarily, four-by-four systems of linear equations are somewhat painful to solve, but this system isn’t too bad. Since A + C = 0 from the first equation, the third equation becomes

0 + B \sqrt{2} - D \sqrt{2} = 0, or B = D.

From the fourth equation, I can conclude that B = 1/2 and D = 1/2. The second and third equations then become

A\sqrt{2} + \displaystyle \frac{1}{2} - C\sqrt{2} + \frac{1}{2} = 0

A + \displaystyle \frac{\sqrt{2}}{2} + C - \frac{\sqrt{2}}{2} = 0,

or

A - C = \displaystyle -\frac{\sqrt{2}}{2},

A + C = 0.

Adding the two equations yields 2A = -\displaystyle \frac{\sqrt{2}}{2}, so that A = -\displaystyle \frac{\sqrt{2}}{4} and C = \displaystyle \frac{\sqrt{2}}{4}.
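Before substituting these constants back, here is a short sympy sketch (my own addition, not part of the original derivation) confirming that these four constants really do reproduce 1/(x^4+1):

import sympy as sp

x = sp.symbols('x')
s2 = sp.sqrt(2)
lhs = 1 / (x**4 + 1)
rhs = (-s2/4 * x + sp.Rational(1, 2)) / (x**2 - s2*x + 1) \
    + (s2/4 * x + sp.Rational(1, 2)) / (x**2 + s2*x + 1)
print(sp.simplify(lhs - rhs))  # prints 0, so the decomposition checks out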

Therefore, the integral can be rewritten as

\displaystyle \int \left( \frac{ -\displaystyle \frac{\sqrt{2}}{4} x + \frac{1}{2}}{ x^2 - x \sqrt{2} + 1 } + \frac{ \displaystyle \frac{\sqrt{2}}{4} x + \frac{1}{2}}{ x^2 + x \sqrt{2} + 1 } \right) dx

I’ll start evaluating this integral in tomorrow’s post.

How I Impressed My Wife: Part 5j

Earlier in this series, I gave three different methods of showing that

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x} = \displaystyle \frac{2\pi}{|b|}.

Using the fact that Q is independent of a, I’ll now give a fourth method.
Since Q is independent of a, I can substitute any convenient value of a that I want without changing the value of Q. As shown in previous posts, substituting a =0 yields the following simplification:

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + 2 \cdot 0 \cdot \sin x \cos x + (0^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\pi}^{\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\infty}^{\infty} \frac{ 2(1+u^2) du}{u^4 + (4 b^2 - 2) u^2 + 1}

= \displaystyle \int_{-\infty}^{\infty} \frac{ 2(1+u^2) du}{ ([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)}

= \displaystyle \int_{-\infty}^{\infty} \left[ \frac{1}{[u - \sqrt{1-b^2}]^2 +b^2} + \frac{1}{[u + \sqrt{1-b^2}]^2 +b^2} \right] du

if |b| < 1. (The cases |b| = 1 and |b| > 1 have already been handled earlier in this series.)

To complete the calculation, I employ the now-familiar antiderivative

\displaystyle \int \frac{dx}{x^2 + b^2} = \displaystyle \frac{1}{|b|} \tan^{-1} \left( \frac{x}{b} \right).

Using this antiderivative and a simple substitution, I see that

Q = \displaystyle \left[ \frac{1}{|b|} \tan^{-1} \left( \frac{u - \sqrt{1-b^2}}{b} \right) + \frac{1}{|b|} \tan^{-1} \left( \frac{u + \sqrt{1-b^2}}{b} \right) \right]^{\infty}_{-\infty}

= \displaystyle \frac{1}{|b|} \left[ \left( \frac{\pi}{2} + \frac{\pi}{2} \right) - \left( \frac{-\pi}{2} + \frac{-\pi}{2} \right) \right]

= \displaystyle \frac{2\pi}{|b|}.

This completes the fourth method of evaluating the integral Q, using partial fractions.
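As a quick numerical spot-check (my own addition, using arbitrarily chosen sample values a = 1.3 and b = 0.6 for the |b| < 1 case), scipy’s quadrature agrees with the closed form 2\pi/|b|:

import numpy as np
from scipy.integrate import quad

a, b = 1.3, 0.6  # sample values; any real a and any b with 0 < |b| < 1 should work

def integrand(x):
    return 1.0 / (np.cos(x)**2 + 2*a*np.sin(x)*np.cos(x) + (a**2 + b**2)*np.sin(x)**2)

value, _ = quad(integrand, 0, 2*np.pi)
print(value, 2*np.pi/abs(b))  # both approximately 10.47198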


There’s at least one more way that the integral Q can be calculated, which I’ll begin with tomorrow’s post.

How I Impressed My Wife: Part 5i

Earlier in this series, I gave three different methods of showing that

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x} = \displaystyle \frac{2\pi}{|b|}.

Using the fact that Q is independent of a, I’ll now give a fourth method.
Since Q is independent of a, I can substitute any convenient value of a that I want without changing the value of Q. As shown in previous posts, substituting a =0 yields the following simplification:

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + 2 \cdot 0 \cdot \sin x \cos x + (0^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\pi}^{\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\infty}^{\infty} \frac{ 2(1+u^2) du}{u^4 + (4 b^2 - 2) u^2 + 1}

= \displaystyle \int_{-\infty}^{\infty} \frac{ 2(1+u^2) du}{ ([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)}

if |b| < 1. (The cases |b| = 1 and |b| > 1 have already been handled earlier in this series.)

I will evaluate this integral using partial fractions. The denominator factors as the product of two irreducible quadratics, and so I must solve

\displaystyle \frac{ 2(1+u^2)}{ ([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)} = \displaystyle \frac{Au + B}{[u - \sqrt{1-b^2}]^2 +b^2} + \frac{Cu + D}{[u + \sqrt{1-b^2}]^2 +b^2},

or

\displaystyle \frac{ 2(1+u^2)}{ ([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)} = \displaystyle \frac{Au + B}{u^2 - 2u \sqrt{1-b^2} +1} + \frac{Cu + D}{u^2 + 2u\sqrt{1-b^2} + 1}.

I now clear out the denominators:

2(1+u^2) = (Au+B)(u^2 + 2u\sqrt{1-b^2} + 1) + (Cu+D)(u^2 - 2u\sqrt{1-b^2} + 1)

Now I multiply out the right-hand side:

Au^3 + 2Au^2 \sqrt{1-b^2} + Au + Bu^2 + 2Bu\sqrt{1-b^2} + B

+ Cu^3 - 2Cu^2 \sqrt{1-b^2} + Cu + Du^2 - 2Du\sqrt{1-b^2} + D

Equating this with 2u^2 + 2 and matching coefficients yields the following system of four equations in four unknowns:

A + C = 0

2A\sqrt{1-b^2} + B - 2C \sqrt{1-b^2} + D = 2

A + 2B \sqrt{1-b^2} + C - 2D \sqrt{1-b^2} = 0

B + D = 2

Ordinarily, four-by-four systems of equations are rather painful to solve, but this system isn’t so bad.

From the first equation, I see that C = -A.

From the third equation, I see that

A + 2B \sqrt{1-b^2} + C - 2D \sqrt{1-b^2} = 0

A + 2B \sqrt{1-b^2} - A - 2D \sqrt{1-b^2} = 0

2B \sqrt{1-b^2} - 2D \sqrt{1-b^2} = 0

B - D = 0

B = D.

From the fourth equation, I see that

B + D = 2

B + B = 2

2B = 2

B = 1,

so that D =1 as well. Finally, from the second equation, I see that

2A\sqrt{1-b^2} + B - 2C \sqrt{1-b^2} + D = 2

2A\sqrt{1-b^2} + 1 -2(-A) \sqrt{1-b^2} + 1 = 2

4A\sqrt{1-b^2} = 0

A= 0,

so that C = 0 as well. This yields the partial fractions decomposition

\displaystyle \frac{ 2(1+u^2)}{ ([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)} = \displaystyle \frac{1}{[u - \sqrt{1-b^2}]^2 +b^2} + \frac{1}{[u + \sqrt{1-b^2}]^2 +b^2}.

This can be confirmed by directly adding the fractions on the right-hand side:

\displaystyle \frac{1}{[u - \sqrt{1-b^2}]^2 +b^2} + \frac{1}{[u + \sqrt{1-b^2}]^2 +b^2}

= \displaystyle \frac{[u + \sqrt{1-b^2}]^2 +b^2 + [u - \sqrt{1-b^2}]^2 +b^2}{([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)}

= \displaystyle \frac{u^2 + 2u\sqrt{1-b^2} + 1 - b^2 + b^2 + u^2 - 2u\sqrt{1-b^2} + 1 - b^2 + b^2}{([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)}

= \displaystyle \frac{2u^2 + 2}{([u - \sqrt{1-b^2}]^2 +b^2)([u + \sqrt{1-b^2}]^2 +b^2)}.
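As a side note (my own addition, not part of the original argument), this identity can also be confirmed symbolically for a sample value in the |b| < 1 range, say b = 1/2, with a few lines of sympy:

import sympy as sp

u = sp.symbols('u')
b = sp.Rational(1, 2)          # sample value with |b| < 1
r = sp.sqrt(1 - b**2)
lhs = 2*(1 + u**2) / (((u - r)**2 + b**2) * ((u + r)**2 + b**2))
rhs = 1/((u - r)**2 + b**2) + 1/((u + r)**2 + b**2)
print(sp.simplify(lhs - rhs))  # prints 0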


I’ll continue with this fourth evaluation of the integral, continuing the case |b| < 1, in tomorrow’s post.

How I Impressed My Wife: Part 5g

Earlier in this series, I gave three different methods of showing that

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x} = \displaystyle \frac{2\pi}{|b|}.

Using the fact that Q is independent of a, I’ll now give a fourth method.
Since Q is independent of a, I can substitute any convenient value of a that I want without changing the value of Q. As shown in previous posts, substituting a =0 yields the following simplification:

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + 2 \cdot 0 \cdot \sin x \cos x + (0^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\pi}^{\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\infty}^{\infty} \frac{ 2(1+u^2) du}{u^4 + (4 b^2 - 2) u^2 + 1}

 = \displaystyle \int_{-\infty}^{\infty} \left[ \displaystyle \left( \frac{2 - 2k_1^2}{k_2^2 - k_1^2} \right) \frac{1}{u^2 + k_1^2} + \displaystyle \left( \frac{2 k_2^2 - 2}{k_2^2 - k_1^2} \right) \frac{1}{u^2 + k_2^2} \right] du,

where k_1 and k_2 are the positive numbers so that

k_1^2 = 2b^2 + 2|b| \sqrt{b^2-1} - 1,

k_2^2 = 2b^2 - 2|b| \sqrt{b^2-1} - 1,

so that

u^4 + (4 b^2 - 2) u^2 + 1 = (u^2 + k_1^2) (u^2 + k_2^2)

At this point in the calculation, we can employ the now familiar antiderivative

\displaystyle \int \frac{du}{u^2 +k^2} = \displaystyle \frac{1}{k} \tan^{-1} \left( \frac{u}{k} \right) + C,

so that

Q = \left[ \displaystyle \left( \frac{2 - 2k_1^2}{k_2^2 - k_1^2} \right) \frac{1}{k_1}\tan^{-1} \left( \frac{u}{k_1} \right) + \displaystyle \left( \frac{2 k_2^2 - 2}{k_2^2 - k_1^2} \right) \displaystyle \frac{1}{k_2} \tan^{-1} \left( \frac{u}{k_2} \right) \right]^{\infty}_{-\infty}

= \displaystyle \left( \frac{2 - 2k_1^2}{k_2^2 - k_1^2} \right) \frac{1}{k_1} \left( \frac{\pi}{2} - \frac{-\pi}{2} \right) + \displaystyle \left( \frac{2 k_2^2 - 2}{k_2^2 - k_1^2} \right) \displaystyle \frac{1}{k_2} \left( \frac{\pi}{2} - \frac{-\pi}{2} \right)

= \displaystyle\frac{\pi}{k_2^2-k_1^2} \left( \frac{2 - 2k_1^2}{k_1} + \frac{2 k_2^2 - 2}{k_2} \right)

= \displaystyle\frac{\pi}{k_2^2-k_1^2} \left( \frac{(2 - 2k_1^2)k_2 + (2 k_2^2 - 2)k_1 }{k_1 k_2} \right)

= \displaystyle\frac{\pi}{k_2^2-k_1^2} \left( \frac{2 k_2 - 2k_1^2 k_2 + 2 k_1 k_2^2 - 2 k_1 }{k_1 k_2} \right)

= \displaystyle\frac{2\pi}{k_2^2-k_1^2} \left( \frac{k_2 - k_1^2 k_2 + k_1 k_2^2 - k_1 }{k_1 k_2} \right)

= \displaystyle\frac{2\pi}{k_2^2-k_1^2} \left( \frac{k_2 - k_1 - k_1^2 k_2 + k_1 k_2^2 }{k_1 k_2} \right)

= \displaystyle\frac{2\pi}{(k_2-k_1)(k_2+k_1)} \left( \frac{(k_2 - k_1) + k_1 k_2 (k_2 - k_1)}{k_1 k_2} \right)

= \displaystyle\frac{2\pi}{(k_2-k_1)(k_2+k_1)} \left( \frac{(k_2 - k_1)(1+ k_1 k_2)}{k_1 k_2} \right)

= \displaystyle 2\pi \left( \frac{1+ k_1 k_2}{(k_2+k_1) k_1 k_2} \right).

Now it remains to simplify this fraction. To do this, we note that

u^4 + (4 b^2 - 2) u^2 + 1 = (u^2 + k_1^2) (u^2 + k_2^2),

so that

u^4 + (4b^2 - 2)u^2 + 1 = u^4 + (k_1^2 + k_2^2)u^2 + k_1^2 k_2^2

Matching coefficients, we see that

k_1^2 + k_2^2 = 4b^2 -2,

k_1^2 k_2^2 = 1

Since both k_1 and k_2 are positive, we see that k_1 k_2 = 1 from the last equation. Therefore,

k_1^2 + 2 k_1 k_2 + k_2^2 = 4b^2 -2 + 2

(k_1+k_2)^2 = 4b^2

k_1 + k_2 = 2|b|

Plugging these in, we finally conclude that

Q = \displaystyle 2\pi \left( \frac{1+ 1}{2 |b| \cdot 1} \right) = \displaystyle \frac{2\pi}{|b|},

again matching our earlier result.
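As an optional sanity check (my own addition, using the arbitrary sample value b = 2 for the |b| > 1 case), a few lines of Python confirm both the identities k_1 k_2 = 1 and k_1 + k_2 = 2|b| and the final value of Q:

import math

b = 2.0  # sample value with |b| > 1
k1 = math.sqrt(2*b**2 + 2*abs(b)*math.sqrt(b**2 - 1) - 1)
k2 = math.sqrt(2*b**2 - 2*abs(b)*math.sqrt(b**2 - 1) - 1)
print(k1*k2, k1 + k2)                          # approximately 1.0 and 2|b| = 4.0
Q = 2*math.pi*(1 + k1*k2) / ((k1 + k2)*k1*k2)  # the expression obtained above
print(Q, 2*math.pi/abs(b))                     # both approximately 3.14159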


Using the fourth method, I’ve shown that Q = \displaystyle \frac{2\pi}{|b|} for the cases |b| = 1 and |b| > 1. With tomorrow’s post, I’ll consider the remaining case of |b| < 1.

How I Impressed My Wife: Part 5f

Earlier in this series, I gave three different methods of showing that

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x} = \displaystyle \frac{2\pi}{|b|}.

Using the fact that Q is independent of a, I’ll now give a fourth method.
Since Q is independent of a, I can substitute any convenient value of a that I want without changing the value of Q. As shown in previous posts, substituting a =0 yields the following simplification:

Q = \displaystyle \int_0^{2\pi} \frac{dx}{\cos^2 x + 2 a \sin x \cos x + (a^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + 2 \cdot 0 \cdot \sin x \cos x + (0^2 + b^2) \sin^2 x}

= \displaystyle \int_{0}^{2\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\pi}^{\pi} \frac{dx}{\cos^2 x + b^2 \sin^2 x}

= \displaystyle \int_{-\infty}^{\infty} \frac{ 2(1+u^2) du}{u^4 + (4 b^2 - 2) u^2 + 1}

The four roots of the denominator satisfy

u^2 = \displaystyle 1 - 2b^2 \pm 2|b| \sqrt{b^2 - 1}

In yesterday’s post, I handled the case |b| = 1. In today’s post, I’ll consider the case |b| > 1, so that u^2 is a real number for the four roots of the denominator.

For the sake of simplicity, let me define the positive numbers k_1 and k_2 so that

k_1^2 = 2b^2 + 2|b| \sqrt{b^2-1} - 1,

k_2^2 = 2b^2 - 2|b| \sqrt{b^2-1} - 1.

Clearly 2b^2 + 2|b| \sqrt{b^2-1} - 1 > 0 if |b| > 1, and so we can choose k_1 to be positive. For k_2, notice that

(2b^2 - 1)^2 = 4b^4 - 4b^2 + 1,

while

\left[ 2|b| \sqrt{b^2-1} \right]^2 = 4b^2 (b^2 - 1) = 4b^4 - 4b^2.

Therefore,

(2b^2 - 1)^2 > \left[ 2|b| \sqrt{b^2-1} \right]^2

2b^2 - 1 > 2|b| \sqrt{b^2-1}

2b^2 - 2|b| \sqrt{b^2-1} - 1 > 0

So k_2 can also be chosen to be a positive number.

Using k_1 and k_2, I can write

u^4 + (4 b^2 - 2) u^2 + 1 = (u^2 + k_1^2)(u^2 + k_2^2),

and so the integrand must have the partial fractions decomposition

\displaystyle \frac{ 2(1+u^2)}{u^4 + (4 b^2 - 2) u^2 + 1} = \displaystyle \frac{A}{u^2 + k_1^2} + \displaystyle \frac{B}{u^2 + k_2^2}.

Notice that ordinarily, when the denominator contains an irreducible quadratic, the numerator of the partial fractions decomposition has the form Au + B and not just a constant A. However, since there are no u^3 or u terms in either the numerator or the denominator, I can treat u^2 as the variable for the purposes of the decomposition. Since each denominator on the right-hand side is linear in u^2, it suffices to use constant numerators for the decomposition.

To solve for the constants A and B, I clear out the denominator:

2u^2 + 2 = A \left[ u^2 + k_2^2 \right] + B \left[ u^2 + k_1^2 \right]

Matching coefficients, this yields the system of equations

A + B = 2

A k_2^2 + B k_1^2 = 2

Substituting B = 2-A into the second equation, I get

A k_2^2 + (2-A) k_1^2 = 2

2 k_1^2 + A (k_2^2 - k_1^2) = 2

A (k_2^2 - k_1^2) = 2 - 2k_1^2

A = \displaystyle \frac{2 - 2k_1^2}{k_2^2 - k_1^2}

Therefore,

B = 2 - A = 2 - \displaystyle \frac{2 - 2k_1^2}{k_2^2 - k_1^2}

B = \displaystyle \frac{2(k_2^2 - k_1^2) - (2 - 2k_1^2)}{k_2^2 - k_1^2}

B = \displaystyle \frac{2 k_2^2 - 2}{k_2^2 - k_1^2}

Therefore, the integrand has the partial fractions decomposition

\displaystyle \frac{ 2(1+u^2)}{u^4 + (4 b^2 - 2) u^2 + 1} = \displaystyle \left( \frac{2 - 2k_1^2}{k_2^2 - k_1^2} \right) \frac{1}{u^2 + k_1^2} + \displaystyle \left( \frac{2 k_2^2 - 2}{k_2^2 - k_1^2} \right) \frac{1}{u^2 + k_2^2}
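As a quick numeric spot-check (my own addition, again using the arbitrary sample value b = 2 for the |b| > 1 case), the two sides of this decomposition agree at randomly chosen values of u:

import math, random

b = 2.0  # sample value with |b| > 1
k1sq = 2*b**2 + 2*abs(b)*math.sqrt(b**2 - 1) - 1
k2sq = 2*b**2 - 2*abs(b)*math.sqrt(b**2 - 1) - 1
A = (2 - 2*k1sq) / (k2sq - k1sq)
B = (2*k2sq - 2) / (k2sq - k1sq)
for _ in range(5):
    u = random.uniform(-10, 10)
    lhs = 2*(1 + u**2) / (u**4 + (4*b**2 - 2)*u**2 + 1)
    rhs = A/(u**2 + k1sq) + B/(u**2 + k2sq)
    assert abs(lhs - rhs) < 1e-12
print("partial fractions decomposition confirmed at 5 random points")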


I’ll continue with this fourth evaluation of the integral, continuing the case |b| > 1, in tomorrow’s post.

2048 and algebra: Index

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on using algebra to study the 2048 game… with a special focus on reaching the event horizon of 2048 which cannot be surpassed.

Part 1: Introduction and statement of problem

Part 2: First insight: How points are accumulated in 2048

Part 3: Second insight: The sum of the tiles on the board

Part 4: Algebraic formulation of the two insights

Part 5: Algebraic formulation applied to a more complicated board

Part 6: Algebraic formulation applied to the event horizon of 2048

Part 7: Calculating one of the complicated sums in Part 6 using a finite geometric series

Part 8: Calculating another complicated sum in Part 6 using a finite geometric series

Part 9: Repeating Part 8 by reversing the order of summation in a double sum

Part 10: Estimating the probability of reaching the event horizon in game mode

 

 

 

 

2048 and algebra (Part 10)

In this series of posts, I used algebra to show that 114,795 moves were needed to produce the following final board. This board represents the event horizon of 2048 that cannot be surpassed.

[Screenshot of the final 2048 board]

I reached this board after about four weeks of intermittent doodling. It should be noted that the above game board was accomplished in practice mode, and I needed perhaps a couple thousand undos to offset the bad luck of a tile randomly appearing in an unneeded place.

For what it’s worth, my personal best in game mode was reaching the 8192-tile. I’m convinced that, even with the random placements of the new 2-tiles and 4-tiles, the skilled player can reach the 2048-tile nearly every time and should reach the 4096-tile most of the time.  However, reaching the 8192-tile requires more luck than skill, and reaching the 16384-tile requires an extraordinary amount of luck.

So what are the odds of a skilled player reaching the event horizon in game mode, without the benefit of undoing the previous move? I will employ Fermi estimation to approach this question. Of the approximately 100,000 moves, I estimate that about 5% of the moves require a certain 2-tile or 4-tile to appear at a certain location on the board. For example, in the initial stages of the game, the board is wide open, and it really doesn’t matter a whole lot where the new tiles appear. However, when the board gets quite crowded, it’s essential that new tiles appear in certain places, or else even a highly skilled player will get stuck.

What is the probability of getting the right tile on each of these occasions? Usually it’s quite high (over 90%). But sometimes it’s necessary to get a 4-tile in exactly the right place when there are four blank spaces (estimated probability of 3%). So let’s estimate 10% to be the probability for getting the right tile for all of these occasions. Let’s also assume that the random number generator is indeed random, so that the tiles appear independently of all other tiles.

With these estimates, I can estimate the probability of reaching the event horizon in game mode as \displaystyle \left( \frac{1}{10} \right)^{5000} = \displaystyle \frac{1}{10^{5000}}. While this analysis isn’t foolproof, it sure beats playing the game about 10^{10,000} times and then dividing the number of times the event horizon is reached by the total number of attempts!
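For the record, here is the same back-of-the-envelope arithmetic in a few lines of Python (my own addition); logarithms are used because 0.1**5000 underflows ordinary floating point:

import math

moves = 100_000
critical = 0.05 * moves   # moves that need a specific new tile in a specific place
p_each = 0.10             # estimated chance of getting the right tile on such a move
log10_prob = critical * math.log10(p_each)
print(log10_prob)         # -5000.0, i.e. a probability of about 10**(-5000)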

How small is \displaystyle \frac{1}{10^{5000}}? Since 2^3 \approx 10, this is approximately equal to \displaystyle \frac{1}{2^{15,000}}, and that’s a probability so small that it was reached (and surpassed) when the Heart of Gold spaceship activated the Infinite Improbability Drive in The Hitchhiker’s Guide to the Galaxy. By way of comparison:

  • \displaystyle \frac{1}{2^{276,709}} is the probability that someone stranded in the vacuum of space will be picked up by a starship within 30 seconds.
  • \displaystyle \frac{1}{2^{100,000}} is the probability of skidding down a beam of light… or having a million-gallon vat of custard appearing in the sky and dumping its contents on you without warning.
  • \displaystyle \frac{1}{2^{75,000}} is the probability of a person turning into a penguin.
  • \displaystyle \frac{1}{2^{50,000}} is the probability of having one of your arms suddenly elongate.
  • \displaystyle \frac{1}{2^{20,000}} is the probability of an infinite number of monkeys randomly typing out Hamlet.

These are the events to which the probability of reaching the event horizon in 2048 without any undos should be compared.

 

 

 

 

2048 and algebra (Part 9)

In this series of posts, I consider how algebra can be used to answer a question about the 2048 game: From looking at a screenshot of the final board, can I figure out how many moves were needed to reach the final board? Can I calculate how many new 2-tiles and 4-tiles were introduced to the board throughout the course of this game? In this post, we consider the event horizon of 2048, which I reached after about four weeks of intermittent doodling:

[Screenshot of the final 2048 board]

In yesterday’s post, we developed a system of two equations in two unknowns to solve for t and f, the number of 2-tiles and 4-tiles (respectively) that appeared throughout the course of the game:

2t + 4f = \displaystyle \sum_{n=2}^{17} 2^n.

2t + \displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = 3,867,072

In this post and tomorrow’s post, I consider how the two sums in the above equations can be obtained without directly adding the terms.

In yesterday’s post, we used the formula for the sum of a finite geometric series to calculate the second sum:

\displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = 14 \times 2^{18} + 2^3 = 3,670,024

In this post, I perform this calculation again, except symbolically and more compactly. The key initial steps are writing the series as a double sum and then interchanging the order of summation (much like reversing the order of integration in a double integral). This is a trick that I’ve used again and again in my own research efforts, but it seems that the students that I teach have never learned this trick. Here we go:

\displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = \displaystyle \sum_{n=1}^{15} \sum_{k=1}^n 2^{n+2} = \displaystyle \sum_{k=1}^{15} \sum_{n=k}^{15} 2^{n+2}

The inner sum is a finite geometric series with 15-k+1 terms, common ratio of 2, and initial term 2^{k+2}. Therefore,

\displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = \displaystyle \sum_{k=1}^{15} \frac{ 2^{k+2} \left(1 - 2^{15-k+1} \right) }{1 - 2}

= \displaystyle \sum_{k=1}^{15} \left(2^{18} - 2^{k+2} \right)

= \displaystyle \sum_{k=1}^{15} 2^{18} -\sum_{k=1}^{15} 2^{k+2}

 The first sum is merely the sum of a constant. The second sum is another finite geometric series with 15 terms, common ratio of 2, and initial term 2^3. So

\displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = 15 \times 2^{18} - \displaystyle \frac{ 2^3 \left(1 - 2^{15} \right) }{1 - 2}

\displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = 15 \times 2^{18} - \left( 2^{18} - 2^3 \right)

\displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = 14 \times 2^{18} + 2^3

\displaystyle \sum_{n=1}^{15} n \cdot 2^{n+2} = 3,670,024
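As a closing sanity check (my own addition), a few lines of Python confirm that the direct sum, the interchanged double sum, and the closed form all agree:

direct = sum(n * 2**(n + 2) for n in range(1, 16))
interchanged = sum(sum(2**(n + 2) for n in range(k, 16)) for k in range(1, 16))
closed_form = 14 * 2**18 + 2**3
print(direct, interchanged, closed_form)  # all three equal 3,670,024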