Win $1,000,000 by Playing Minesweeper!

Yes, you read that headline correctly: it is possible for a mathematician to win $1,000,000 by playing Minesweeper. Here’s how:

First, the source of the $1,000,000. In 2000, the Clay Mathematics Institute created a list of important unsolved problems in mathematics (see also Wikipedia), offering each one a prize of $1,000,000 for a solution. Specifically, according to the rules,

[A] proposed solution must be published in a refereed mathematics publication of worldwide repute… and it must also have general acceptance in the mathematics community two years after.

Of the seven problems on the list, one (the Poincaré conjecture) has been solved so far. Incredibly, the mathematician who produced the proof turned down the $1,000,000. He was also awarded the Fields Medal, considered the Nobel Prize of mathematics, but he refused that as well.

The Millennium Problems parallel the famous 23 Hilbert problems posed by eminent mathematician David Hilbert in 1900. The Hilbert problems were unsolved at the time and were posed as a challenge to the mathematicians of the 20th century. These problems certainly generated a lot of buzz in the mathematical community, and most have been solved since 1900. (Sadly, none of the solvers won $1,000,000 for their work!)

What does this have to do with Minesweeper?

One of the problems is the famed P vs. NP problem. From Clay Mathematics:

Suppose that you are organizing housing accommodations for a group of four hundred university students. Space is limited and only one hundred of the students will receive places in the dormitory. To complicate matters, the Dean has provided you with a list of pairs of incompatible students, and requested that no pair from this list appear in your final choice. This is an example of what computer scientists call an NP-problem, since it is easy to check if a given choice of one hundred students proposed by a coworker is satisfactory (i.e., no pair taken from your coworker’s list also appears on the list from the Dean’s office), however the task of generating such a list from scratch seems to be so hard as to be completely impractical. Indeed, the total number of ways of choosing one hundred students from the four hundred applicants is greater than the number of atoms in the known universe! Thus no future civilization could ever hope to build a supercomputer capable of solving the problem by brute force; that is, by checking every possible combination of 100 students. However, this apparent difficulty may only reflect the lack of ingenuity of your programmer. In fact, one of the outstanding problems in computer science is determining whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure. Problems like the one listed above certainly seem to be of this kind, but so far no one has managed to prove that any of them really are so hard as they appear, i.e., that there really is no feasible way to generate an answer with the help of a computer. Stephen Cook and Leonid Levin formulated the P (i.e., easy to find) versus NP (i.e., easy to check) problem independently in 1971.
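To make the “easy to check” half of this example concrete, here is a minimal Python sketch (the function name and the tiny incompatibility list below are my own inventions, not part of Clay’s description). Verifying a proposed choice of students takes time proportional to the length of the Dean’s list, while the number of possible choices of 100 students out of 400 is astronomically large.

```python
from math import comb

def is_satisfactory(selection, incompatible_pairs):
    """Verify a proposed choice of students: no pair from the Dean's list
    may be chosen together. The check takes time proportional to the length
    of the list -- this is the 'easy to check' side of an NP problem."""
    chosen = set(selection)
    return not any(a in chosen and b in chosen for a, b in incompatible_pairs)

# Tiny made-up instance (the real puzzle has 400 students and 100 places).
incompatible = [(1, 2), (3, 7), (5, 9)]

print(is_satisfactory([1, 3, 5, 8], incompatible))  # True: no forbidden pair chosen
print(is_satisfactory([1, 2, 5, 8], incompatible))  # False: students 1 and 2 clash

# The 'hard to solve' side: the number of ways to choose 100 students
# out of 400 is about 2 x 10^96 -- far more than the atoms in the universe.
print(comb(400, 100))
```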

In 2000, Richard Kaye (University of Birmingham, UK) connected Minesweeper to the P vs. NP problem. Famed mathematical journalist Ian Stewart did a very nice job of describing the connection between the two, so I’ll simply refer to his post for a detailed description. Dr. Kaye’s website also gives a lot of information about the connection.

Proving theorems and special cases: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprise my series on attempting to prove theorems by looking at special cases.

Famous propositions that are true for the first few cases (or many cases) before failing.

Part 1: The proposition that n^2 - n + 41 is always a prime number is true for the first 40 cases but fails with n = 41.

Part 2: The Polya conjecture is true for over 900 million cases before failing.

Part 3: A conjecture about the distribution of prime numbers is true for the first 10^{318} cases before failing.

Mathematical induction

Part 4: Pedagogical thoughts on mathematical induction.

Part 5: More pedagogical thoughts on mathematical induction.

Famous unsolved problems in mathematics

Part 6: The Goldbach conjecture.

Part 7: The twin prime conjecture.

Part 8: The Collatz conjecture.

Part 9: The Riemann hypothesis.

Theorems in secondary mathematics that can be proven using special cases

Part 10: The sum of the angles in a convex polygon with n sides is 180(n-2) degrees.

Part 11: The Law of Cosines.

Part 12: Trig identities for \sin(\alpha \pm \beta) and \cos(\alpha \pm \beta).

Part 13: Uniqueness of logarithms.

Part 14: The Power Law for derivatives, or \displaystyle \frac{d}{dx} \left(x^n \right) = nx^{n-1}.

Part 15: The Mean Value Theorem.

Part 16: An old calculus problem of mine.

Part 17: Ending thoughts.

 

Wason Selection Task: Part 3

I recently read about a simple but clever logic puzzle, known as the “Wason selection task,” which is often claimed to be “the single most investigated experimental paradigm in the psychology of reasoning.” More than 90% of Wason’s subjects got the answer wrong when Wason first studied this problem back in the 1960s, and this result has been repeated time and time again by psychologists ever since.

Here’s the puzzle: You are shown four different cards, showing a 5, an 8, a blue card, and a green card. You are told that each card has a number on one side and a color on the other side. You are asked to test the truth of the following statement:

If a card has an even number on one side, then its opposite side is blue.

Question: Which card (or cards) must you turn over to test the truth of this statement?

Interestingly, in the 1980s, a pair of psychologists slightly reworded the Wason selection puzzle in a form that’s logically equivalent, but this rewording caused a much higher rate of correct responses. Here was the rewording:

On this task imagine you are a police officer on duty. It is your job to make sure that people conform to certain rules. The cards in front of you have information about four people sitting at a table. On one side of the card is a person’s age and on the other side of the card is what the person is drinking. Here is a rule: “If a person is drinking beer, then the person must be over 19 years of age.” Select the card or cards that you definitely must turn over to determine whether or not the people are violating the rule.

Four cards are presented:

  • Drinking a beer
  • Drinking a Coke
  • 16 years of age
  • 22 years of age

In this experiment, 29 out of 40 respondents answered correctly. However, when presented with the same task using more abstract language, none of the 40 respondents answered correctly… even though the two puzzles are logically equivalent. Quoting from the above article:

Seventy-five percent of subjects nailed the puzzle when it was presented in this way—revealing what researchers now call a “content effect.” How you dress up the task, in other words, determines its difficulty, despite the fact that it involves the same basic challenge: to see if a rule—if P then Q—has been violated. But why should words matter when it’s the same logical structure that’s always underlying them?

This little study has harrowing implications for those of us who teach mathematical proofs and propositional logic. It’s very easy for people to get some logic questions correct but other logic questions incorrect, even if the puzzles look identical to the mathematician/logician who is posing the questions. Pedagogically, this means that it’s a good idea to use familiar contexts (like rules for underage drinking) to introduce propositional logic. But this comes with a warning: students who answer questions arising from a familiar context correctly may not really understand propositional logic at all when the question is posed more abstractly (as in a mathematical proof).

 

Wason Selection Task: Part 2

I recently read about a simple but clever logic puzzle, known as the “Wason selection task,” which is often claimed to be “the single most investigated experimental paradigm in the psychology of reasoning.”

Here’s the puzzle: You are shown four different cards, showing a 5, an 8, a blue card, and a green card. You are told that each card has a number on one side and a color on the other side. You are asked to test the truth of the following statement:

If a card has an even number on one side, then its opposite side is blue.

Question: Which card (or cards) must you turn over to test the truth of this statement?

https://www.youtube.com/watch?v=qNBzwwLiOUc

The answer is: You must turn over the 8 card and the green card. The following video explains why:

https://www.youtube.com/watch?annotation_id=annotation_602226&feature=iv&src_vid=l1g4TbVOkIs&v=3aAnrkC_-jE

Briefly:

  1. Clearly, you must turn over the 8 card. If the opposite side is not blue, then the proposition is false.
  2. Clearly, the 5 card is not helpful. The statement only tells us something if the card shows an even number.
  3. More subtly, the blue card is not helpful either. The claim is “If even, then blue,” not “If blue, then even.” That second statement is the converse of the first, and a converse is not necessarily equivalent to the original statement.
  4. Finally, the contrapositive of “If even, then blue” is “If not blue, then not even.” Therefore, any card showing a color other than blue (like the green one) must be turned over to check that its hidden number is not even.
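If you’d like to see this logic checked by brute force, here is a short Python sketch (entirely my own illustration, not part of the original task). For each card, it asks whether some possible hidden face could make that card a counterexample to “if even, then blue”; only the 8 card and the green card can.

```python
# Each card shows either a number or a color; its hidden side is the other kind.
visible_faces = [5, 8, "blue", "green"]
possible_hidden_numbers = [5, 8]          # any mix of odd and even numbers works here
possible_hidden_colors = ["blue", "green"]

def must_turn_over(visible):
    """A card must be turned over exactly when some possible hidden face
    would make the card violate the rule 'if even, then blue.'"""
    if isinstance(visible, int):
        # Hidden side is a color: risky only if the visible number is even
        # and the hidden color could turn out not to be blue.
        return visible % 2 == 0 and any(c != "blue" for c in possible_hidden_colors)
    # Hidden side is a number: risky only if the visible color is not blue
    # and the hidden number could turn out to be even.
    return visible != "blue" and any(n % 2 == 0 for n in possible_hidden_numbers)

print([card for card in visible_faces if must_turn_over(card)])  # [8, 'green']
```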


If you got this wrong, you’re in good company. More than 90% of Wason’s subjects got the answer wrong when Wason first studied this problem back in the 1960s, and this result has been repeated time and time again by psychologists ever since.

Speaking for myself, I must admit that I blew it too when I first came across this problem. In the haze of the early morning when I first read this article, I erroneously thought that the 8 card and the blue card had to be turned.

 

Wason Selection Task: Part 1

I recently read about a simple but clever logic puzzle, known as the “Wason selection task,” which is often claimed to be “the single most investigated experimental paradigm in the psychology of reasoning,” in the words of one textbook author.

Here’s the puzzle: You are shown four different cards, showing a 5, an 8, a blue card, and a green card. You are told that each card has a number on one side and a color on the other side. You are asked to test the truth of the following statement:

If a card has an even number on one side, then its opposite side is blue.

Question: Which card (or cards) must you turn over to test the truth of this statement?

https://www.youtube.com/watch?v=qNBzwwLiOUc

I’ll start discussing the answer to this puzzle in tomorrow’s post. If you’re impatient, you can click through the interactive video above or else read the article where I first learned about this puzzle: http://m.nautil.us/blog/the-simple-logical-puzzle-that-shows-how-illogical-people-are (I got the opening sentence of this post from this article).


Lessons from teaching gifted elementary school students (Part 5d)

Every so often, I’ll informally teach a class of gifted elementary-school students. I greatly enjoy interacting with them, and I especially enjoy the questions they pose. Often these children pose questions that no one else will think about, and answering these questions requires a surprising depth of mathematical knowledge.

A bright young student of mine noticed that multiplication is repeated addition:

x \cdot y = x + x + x \dots + x.

Also, exponentiation is repeated multiplication:

x \uparrow y = x^y = x \cdot x \cdot x \dots \cdot x.

The notation x \uparrow y is unorthodox, but it leads to the natural extensions

x \upuparrows y = x \uparrow x \uparrow x \dots \uparrow x,

x \uparrow^3 y = x \upuparrows x \upuparrows x \dots \upuparrows x,

and so on. I’ll refer the interested reader to Wikipedia and Mathworld (and references therein) for more information about Knuth’s up-arrow notation. As we saw in yesterday’s post, these numbers get very, very large very, very quickly.

When I was in elementary school myself, I remember reading in the 1980 Guinness Book of World Records about Graham’s number, which was reported to be the largest number ever used in a serious mathematical proof. Obviously, it’s not the largest number (there is no such thing), but the largest number that actually had some known usefulness. And this number can only be reasonably expressed using Knuth’s up-arrow notation. (Again, see Wikipedia and Mathworld for details.)

From Mathworld, here’s a description of the problem that Graham’s number solves:

Stated colloquially, [consider] every possible committee from some number of people n and enumerating every pair of committees. Now assign each pair of committees to one of two groups, and find N, the smallest n that will guarantee that there are four committees in which all pairs fall in the same group and all the people belong to an even number of committees.

In 1971, Graham and Rothschild proved that there is a solution N, and that N \le F(F(F(F(F(F(F(12))))))), where

F(n) = 2 \uparrow^n 3.

For context, 2 \uparrow^4 3 is absolutely enormous. In yesterday’s post, I showed that 2 \uparrow^3 2 = 4 and that 2 \uparrow^3 4 = 2 \upuparrows 65,536. Therefore,

2 \uparrow^4 3 = 2 \uparrow^3 (2 \uparrow^3 2)

= 2 \uparrow^3 4

= 2 \upuparrows 65,536

= 2 \uparrow 2 \uparrow 2 \uparrow \dots \uparrow 2,

where there are 65,536 2’s in this power tower.

That’s just 2 \uparrow^4 3. Now try to imagine F(12) = 2 \uparrow^{12} 3. That’s a lot of arrows.

Now try to imagine F(F(12)) = 2 \uparrow^{F(12)} 3, which is even more arrows.

Now try to imagine F(F(F(F(F(F(F(12))))))). I bet you can’t. (I sure can’t.)

Graham and Rothschild also helpfully showed that N \ge 6. So somewhere between 6 and Graham’s number lies the true value of N.

A postscript: according to Wikipedia, things have improved somewhat since 1971. The best currently known bounds for N are

13 \le N \le 2 \uparrow^3 6.

Lessons from teaching gifted elementary school students (Part 5c)

Every so often, I’ll informally teach a class of gifted elementary-school students. I greatly enjoy interacting with them, and I especially enjoy the questions they pose. Often these children pose questions that no one else will think about, and answering these questions requires a surprising depth of mathematical knowledge.

A bright young student of mine noticed that multiplication is repeated addition:

x \cdot y = x + x + x \dots + x.

Also, exponentiation is repeated multiplication:

x^y = x \cdot x \cdot x \dots \cdot x.

So, my student asked, why can’t we define an operation that’s repeated exponentiation? Like all good explorers, my student claimed naming rights for this new operation and called it x \hbox{~playa~} y:

x \hbox{~playa~} y = x^{x^{x^{\dots}}}, where x appears y times in the tower.

For example,

4 \hbox{~playa~} 3 = 4^{4^4} = 4^{256} \approx 1.34 \times 10^{154}

Even with small numbers for x and y, x \hbox{~playa~} y gets very large.
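As a quick sanity check on that estimate, here is a small Python sketch (my own code, borrowing my student’s name for the operation). It evaluates x \hbox{~playa~} y by building the tower from the top down and confirms that 4 \hbox{~playa~} 3 = 4^{256} \approx 1.34 \times 10^{154}.

```python
from math import log10, floor

def playa(x, y):
    """Repeated exponentiation (a tower of y copies of x),
    evaluated from the top of the tower down."""
    result = x
    for _ in range(y - 1):
        result = x ** result
    return result

n = playa(4, 3)              # 4^(4^4) = 4^256, an exact (huge) integer
exponent = floor(log10(n))   # power of ten in scientific notation
print(n == 4 ** 256)                                     # True
print(f"about {n / 10**exponent:.2f} x 10^{exponent}")   # about 1.34 x 10^154
```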

Unfortunately for my student, someone came up with this notion already, and it’s called Knuth’s up-arrow notation. I’ll give some description here and refer the interested reader to Wikipedia and Mathworld (and references therein) for more information. Surprisingly, this notion has only become commonplace since 1976 — within my own lifetime.

Let’s define x \uparrow y to be ordinary exponentiation:

x \uparrow y = x \cdot x \cdot x \dots \cdot x.

Let’s now define x \upuparrows y to be the up-arrow operation repeated y times:

x \upuparrows y = x \uparrow x \uparrow x \dots \uparrow x.

In this expression, the order of operations is taken to be right to left.

Numbers constructed by \upuparrows get very, very big very, very quickly. For example:

2 \upuparrows 2 = 2 \uparrow 2 = 2^2 = 4.

Next,

2 \upuparrows 3 = 2 \uparrow (2 \uparrow 2)

= 2 \uparrow (2^2)

= 2 \uparrow 4

= 2^4

= 16

Next,

2 \upuparrows 4 = 2 \uparrow (2 \uparrow (2 \uparrow 2))

= 2 \uparrow 16

= 2^{16}

= 65,536

Next,

2 \upuparrows 5 = 2 \uparrow (2 \uparrow (2 \uparrow (2 \uparrow 2)))

= 2 \uparrow 65,536

= 2^{65,536}

\approx 2.0035 \times 10^{19,728}

Next,

2 \upuparrows 6 = 2 \uparrow (2 \uparrow (2 \uparrow (2 \uparrow (2 \uparrow 2))))

= 2 \uparrow 2^{65,536}

= 2^{2^{65,536}}

\approx 10^{6.031 \times 10^{19,727}}

We see that 2 \upuparrows 6 is already far larger than a googolplex (or 10^{10^{100}}), which is often (and erroneously) held as the gold standard for very large numbers.

I’ll refer the interested reader to a previous post in this series for a description of how logarithms can be used to write something like 2^{65,536} in ordinary scientific notation.
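For readers who want to reproduce these figures themselves, here is a minimal Python sketch of that logarithm trick (my own code, not taken from the earlier post). Write 2^{65,536} as 10^y, where y = 65,536 \log_{10} 2; the integer part of y is the power of ten, and the fractional part gives the mantissa.

```python
from math import log10, floor

def scientific_notation_of_power(base, exponent):
    """Express base**exponent as (mantissa, power of ten) without ever
    computing the full number: base**exponent = 10**(exponent * log10(base))."""
    y = exponent * log10(base)
    power_of_ten = floor(y)
    mantissa = 10 ** (y - power_of_ten)
    return mantissa, power_of_ten

m, p = scientific_notation_of_power(2, 65536)
print(f"2^65,536 is about {m:.4f} x 10^{p}")   # about 2.0035 x 10^19728
```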


Knuth’s up-arrow notation can be further generalized:

x \uparrow^3 y = x \upuparrows x \upuparrows x \dots \upuparrows x,

repeated y times. The numbers x \uparrow^4 y, x \uparrow^5 y, etc., are defined similarly.

These numbers truly become large quickly. For example,

2 \uparrow^3 2 = 2 \upuparrows 2 = 4, from above.

Next,

2 \uparrow^3 3 = 2 \upuparrows (2 \upuparrows 2)

= 2 \upuparrows 4

= 65,536, from above

Next,

2 \uparrow^3 4 = 2 \upuparrows (2 \upuparrows (2 \upuparrows 2))

= 2 \upuparrows 65,536

= 2 \uparrow 2 \uparrow 2 \uparrow \dots \uparrow 2,

where there are 65,536 repeated 2’s on this last line. It’d be nearly impossible to write this number in scientific notation, and we’ve only reached 2 \uparrow^3 4.
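To tie these computations together, here is a small recursive Python implementation of Knuth’s up-arrow notation (my own sketch of the definitions in this post). It reproduces the exact small values computed above; anything much larger will, of course, never fit in a computer’s memory.

```python
def up_arrow(x, n, y):
    """Knuth's up-arrow x ↑^n y: n = 1 is ordinary exponentiation, and
    x ↑^n y = x ↑^(n-1) (x ↑^n (y-1)), evaluated right to left."""
    if n == 1:
        return x ** y
    if y == 1:
        return x
    return up_arrow(x, n - 1, up_arrow(x, n, y - 1))

print(up_arrow(2, 2, 2))   # 2 ↑↑ 2 = 4
print(up_arrow(2, 2, 3))   # 2 ↑↑ 3 = 16
print(up_arrow(2, 2, 4))   # 2 ↑↑ 4 = 65,536
print(up_arrow(2, 3, 2))   # 2 ↑↑↑ 2 = 4
print(up_arrow(2, 3, 3))   # 2 ↑↑↑ 3 = 65,536
# up_arrow(2, 3, 4) would be a power tower of 65,536 2's -- far too large to compute.
```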

Lessons from teaching gifted elementary school students (Part 5b)

Every so often, I’ll informally teach a class of gifted elementary-school students. I greatly enjoy interacting with them, and I especially enjoy the questions they pose. Often these children pose questions that no one else will think about, and answering these questions requires a surprising depth of mathematical knowledge.

Here’s a question I once received (though I probably changed the exact wording somewhat):

Exponentiation is to multiplication as multiplication is to addition. In other words,

x^y = x \cdot x \cdot x \dots \cdot x,

x \cdot y = x + x + x \dots + x,

where the operation is repeated y times.

So, multiplication is to addition as addition is to what?

My kneejerk answer was that there was no answer… while exponents can be thought of as repeated multiplication and multiplication can be thought of as repeated addition, addition can’t be thought of as some other thing being repeated. But it took me a few minutes before I could develop a proof that could be understood by my bright young questioner.

Suppose y = 1. Then the expressions above become

x^1 = x

and

x \cdot 1 = x

However, if addition were the repeated application of some more basic operation, the same pattern would force x + 1 = x. We know full well that

x + 1 \ne x.

Therefore, there can’t be an operation that is to addition as addition is to multiplication, or as multiplication is to exponentiation.