Engaging students: Solving logarithmic equations

In my capstone class for future secondary math teachers, I ask my students to come up with ideas for engaging their students with different topics in the secondary mathematics curriculum. In other words, the point of the assignment is not to devise a full-blown lesson plan on the topic. Instead, I ask my students to think of three different ways of getting their students interested in the topic in the first place.

I plan to share some of the best of these ideas on this blog (after asking my students’ permission, of course).

This student submission again comes from my former student Anna Park. Her topic: how to engage Algebra II or Precalculus students when solving logarithmic equations.

green line

Application:

 

Each student will be given a card with (a) the solution to a logarithmic equation and (b) a new logarithmic equation. The student whose card has a number one on the back begins the game by standing up and reading part (b), the log equation, to the rest of the class. The student holding the matching solution then reads part (a) of his or her card and, if correct, reads part (b), a new log equation. Play continues this way until the last student stands with a new equation whose solution loops back to the first student’s card, which ends the game. This game requires students to solve logarithmic equations and to recognize how to rewrite a logarithmic equation. Before the game begins, students will have an appropriate amount of time to work backwards and find the logarithmic equation that matches their solution.

 

green line

History:

 

John Napier was the mathematician who introduced logarithms. The way he came up with them is fascinating, especially how long it took him to develop the logarithm table. He first published his work on logarithms in 1614 under the title “A Description of the Wonderful Table of Logarithms.” He coined the name logarithm from two Greek words: logos, meaning proportion, and arithmos, meaning number. His discovery was based on imagining two particles traveling along two parallel lines, one of infinite length and the other of finite length. He imagined both particles starting at the same horizontal position with the same velocity. The first particle moves with constant velocity, covering equal distances in equal times, while the second particle’s velocity is proportional to the distance remaining. Napier took the distance not yet covered by the second particle to be the sine, and the distance covered by the first particle to be the logarithm of the sine. As a result, while the sines decrease in geometric proportion, the logarithms increase in arithmetic proportion. He made his logarithm tables by taking increments of arc every minute, listing the sine of each minute of arc, and then the corresponding logarithm. Completing his tables required Napier to compute roughly ten million entries, from which he selected the appropriate values. Napier said that his findings and the completion of this table took him about 20 years, which means he probably started his work around 1594.

Resource: http://www.maa.org/press/periodicals/convergence/logarithms-the-early-history-of-a-familiar-function-john-napier-introduces-logarithms

 

 

green line

Technology:

 

I have found that when it comes to remembering rules, sometimes the cheesiest of songs helps students remember them. A song is also a very good engage before the students start the lesson, and the chorus typically carries the most important content for students to remember. Here are two videos that would help students remember how to compute logarithms.

The first video is a song from YouTube set to Thriller by Michael Jackson. It is produced very well and is engaging throughout.

The second video is a student project on YouTube that sets the rules for computing logarithms to Under the Sea from The Little Mermaid. Though the production isn’t as good as the first video’s, the young girls do a good job of explaining how to solve logarithms.

 

 

My Favorite One-Liners: Part 80

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

Today’s awful pun comes courtesy of Math With Bad Drawings. Suppose we need to solve for x in the following equation:

2^{2x+1} = 3^{x}.

Naturally, the first step is taking the logarithm of both sides. But with which base? There are two reasonable options for most handheld scientific calculators: base-10 and base-e. So I’ll tell the class my preference:

I’m organic; I only use natural logs.
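For the record, either base works: taking logs of both sides gives (2x+1) \ln 2 = x \ln 3, which is linear in x. A quick numerical check (a sketch, not part of the original quip):

```python
import math

# Taking logs of both sides of 2^(2x+1) = 3^x gives
#   (2x + 1) ln 2 = x ln 3,
# which is linear in x, so
#   x = ln 2 / (ln 3 - 2 ln 2).
x = math.log(2) / (math.log(3) - 2 * math.log(2))

# The base doesn't matter: base-10 logs give the same x,
# since the change-of-base factor cancels in the quotient.
x10 = math.log10(2) / (math.log10(3) - 2 * math.log10(2))

print(x)  # about -2.4094
print(abs(2 ** (2 * x + 1) - 3 ** x))  # essentially 0
```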

 

My Favorite One-Liners: Part 68

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

When discussing the Laws of Logarithms, I’ll make a big deal of the fact that one law converts a multiplication problem into a simpler addition problem, while another law converts exponentiation into a simpler multiplication problem.
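Both laws are easy to spot-check numerically; here’s a two-line sketch (the values of a, b, and n are arbitrary):

```python
import math

a, b, n = 7.3, 2.9, 5

# One law converts multiplication into addition...
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
# ...and another converts exponentiation into multiplication.
assert math.isclose(math.log(a ** n), n * math.log(a))
```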

After a few practice problems — and about 3 minutes before the end of class — I’ll inform my class that I’m about to tell the world’s worst math joke. Here it is:

After the flood, the ark landed, and Noah and the animals got out. And God said to Noah, “Go forth, be fruitful, and multiply.” So they disembarked.

Some time later, Noah went walking around and saw the two dogs with their baby puppies and the two cats with their baby kittens. However, he also came across two unhappy, frustrated, and disgruntled snakes. The snakes said to Noah, “We’re having some problems here; would you mind knocking down a tree for us?”

Noah said, “OK,” knocked down a tree, and went off to continue his inspections.

Some time later, Noah returned, and sure enough, the two snakes were surrounded by baby snakes. Noah asked, “What happened?”

The snakes replied, “Well, you see, we’re adders. We need logs to multiply.”

After the laughter and groans subside, I then dismiss my class for the day:

Go forth, and multiply (pointing to the door of the classroom). For most of you, don’t be fruitful yet, but multiply. You’re dismissed.

My Favorite One-Liners: Part 8

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

At many layers of the mathematics curriculum, students learn that various functions can essentially commute with each other. In other words, the order in which the operations are performed doesn’t affect the final answer. Here’s a partial list off the top of my head:

  1. Arithmetic/Algebra: a \cdot (b + c) = a \cdot b + a \cdot c. This of course is commonly called the distributive property (and not the commutative property), but the essential idea is that the same answer is obtained whether the multiplications are performed first or if the addition is performed first.
  2. Algebra: If a,b > 0, then \sqrt{ab} = \sqrt{a} \sqrt{b}.
  3. Algebra: If a,b > 0 and x is any real number, then (ab)^x = a^x b^x.
  4. Precalculus: \displaystyle \sum_{i=1}^n (a_i+b_i) = \displaystyle \sum_{i=1}^n a_i + \sum_{i=1}^n b_i.
  5. Precalculus: \displaystyle \sum_{i=1}^n c a_i = c \displaystyle \sum_{i=1}^n a_i.
  6. Calculus: If f is continuous at an interior point c, then \displaystyle \lim_{x \to c} f(x) = f(c).
  7. Calculus: If f and g are differentiable, then (f+g)' = f' + g'.
  8. Calculus: If f is differentiable and c is a constant, then (cf)' = cf'.
  9. Calculus: If f and g are integrable, then \int (f+g) = \int f + \int g.
  10. Calculus: If f is integrable and c is a constant, then \int cf = c \int f.
  11. Calculus: If f: \mathbb{R}^2 \to \mathbb{R} is integrable, \iint f(x,y) dx dy = \iint f(x,y) dy dx.
  12. Calculus: For most differentiable functions f: \mathbb{R}^2 \to \mathbb{R} that arise in practice, \displaystyle \frac{\partial^2 f}{\partial x \partial y} = \displaystyle \frac{\partial^2 f}{\partial y \partial x}.
  13. Probability: If X and Y are random variables, then E(X+Y) = E(X) + E(Y).
  14. Probability: If X is a random variable and c is a constant, then E(cX) = c E(X).
  15. Probability: If X and Y are independent random variables, then E(XY) = E(X) E(Y).
  16. Probability: If X and Y are independent random variables, then \hbox{Var}(X+Y) = \hbox{Var}(X) + \hbox{Var}(Y).
  17. Set theory: If A, B, and C are sets, then A \cup (B \cap C) = (A \cup B) \cap (A \cup C).
  18. Set theory: If A, B, and C are sets, then A \cap (B \cup C) = (A \cap B) \cup (A \cap C).
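Several of the items above can be spot-checked numerically in a couple of lines (the numbers chosen are arbitrary):

```python
import math

a, b, c, x = 2.0, 3.0, 5.0, 0.5

# 1: the distributive property
assert math.isclose(a * (b + c), a * b + a * c)
# 2: square roots of a product (a, b > 0)
assert math.isclose(math.sqrt(a * b), math.sqrt(a) * math.sqrt(b))
# 3: powers of a product
assert math.isclose((a * b) ** x, a ** x * b ** x)
# 4 and 5: linearity of summation
ai, bi = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert math.isclose(sum(p + q for p, q in zip(ai, bi)), sum(ai) + sum(bi))
assert math.isclose(sum(c * p for p in ai), c * sum(ai))
print("all identities check out")
```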

However, there are plenty of instances when two functions do not commute. Most of these, of course, are common mistakes that students make when they first encounter these concepts. Here’s a partial list off the top of my head. (For all of these, the inequality sign means that the two sides do not have to be equal… though there may be special cases when equality happens to hold.)

  1. Algebra: (a+b)^x \ne a^x + b^x if x \ne 1. Important special cases are x = 2, x = 1/2, and x = -1.
  2. Algebra/Precalculus: \log_b(x+y) \ne \log_b x + \log_b y. I call this the third classic blunder.
  3. Precalculus: (f \circ g)(x) \ne (g \circ f)(x).
  4. Precalculus: \sin(x+y) \ne \sin x + \sin y, \cos(x+y) \ne \cos x + \cos y, etc.
  5. Precalculus: \displaystyle \sum_{i=1}^n (a_i b_i) \ne \displaystyle \left(\sum_{i=1}^n a_i \right) \left( \sum_{i=1}^n b_i \right).
  6. Calculus: (fg)' \ne f' \cdot g'.
  7. Calculus: \left( \displaystyle \frac{f}{g} \right)' \ne \displaystyle \frac{f'}{g'}.
  8. Calculus: \int fg \ne \left( \int f \right) \left( \int g \right).
  9. Probability: If X and Y are dependent random variables, then E(XY) \ne E(X) E(Y).
  10. Probability: If X and Y are dependent random variables, then \hbox{Var}(X+Y) \ne \hbox{Var}(X) + \hbox{Var}(Y).
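Each of these “laws” falls apart on a single numerical counterexample, which can make for a persuasive classroom demonstration (the inputs below are arbitrary):

```python
import math

a, b = 9.0, 16.0

# sqrt(a + b) != sqrt(a) + sqrt(b): sqrt(25) = 5, but 3 + 4 = 7
assert not math.isclose(math.sqrt(a + b), math.sqrt(a) + math.sqrt(b))

# log(x + y) != log(x) + log(y): the "third classic blunder"
assert not math.isclose(math.log(a + b), math.log(a) + math.log(b))

# sin(x + y) != sin(x) + sin(y)
assert not math.isclose(math.sin(1.0 + 2.0), math.sin(1.0) + math.sin(2.0))

# Composition doesn't commute: f(g(3)) = 7 but g(f(3)) = 8
f = lambda t: t + 1
g = lambda t: 2 * t
assert f(g(3)) != g(f(3))
print("every 'law' above fails on these inputs")
```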

All this to say, it’s a big deal when two functions commute, because this doesn’t happen all the time.

green line

I wish I could remember the speaker’s name, but I heard the following one-liner at a state mathematics conference many years ago, and I’ve used it to great effect in my classes ever since. Whenever I present a property where two functions commute, I’ll say, “In other words, the order of operations does not matter. This is a big deal, because, in real life, the order of operations usually is important. For example, this morning, you probably got dressed and then went outside. The order was important.”

 

My Favorite One-Liners: Part 1

In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.

One of the most common student mistakes with logarithms is thinking that

\log_b(x+y) = \log_b x + \log_b y.

Whenever students make this mistake, I call it the Third Classic Blunder. The first classic blunder, of course, is getting into a major land war in Asia. The second classic blunder is getting into a battle of wits with a Sicilian when death is on the line. And the third classic blunder is thinking that \log_b(x+y) somehow simplifies as \log_b x + \log_b y.

Sadly, as the years pass, fewer and fewer students immediately get the cultural reference. On the bright side, it’s also an opportunity to introduce a new generation to one of the great cinematic masterpieces of all time.

One of my colleagues calls this mistake the Universal Distributive Law, where the \log_b distributes just as if x+y were being multiplied by a constant. Other mistakes in this vein include \sqrt{x+y} = \sqrt{x} + \sqrt{y} and (x+y)^2 = x^2 + y^2.

Along the same lines, other classic blunders are thinking that

\left(\log_b x\right)^n  simplifies as  \log_b \left(x^n \right)

and that

\displaystyle \frac{\log_b x}{\log_b y}  simplifies as  \log_b \left( \frac{x}{y} \right).

I’m continually amazed at the number of good students who intellectually know that the above equations are false but panic and use them when solving a problem.
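Small concrete numbers expose both blunders immediately; for instance, with base-10 logs:

```python
import math

x, y, n = 100.0, 10.0, 3

# (log10 x)^n vs log10(x^n): 2^3 = 8, but log10(100^3) = 6
assert not math.isclose(math.log10(x) ** n, math.log10(x ** n))

# (log10 x)/(log10 y) vs log10(x/y): 2/1 = 2, but log10(10) = 1
assert not math.isclose(math.log10(x) / math.log10(y), math.log10(x / y))

# The quotient of logs is actually a change of base: log_y(x)
assert math.isclose(math.log10(x) / math.log10(y), math.log(x, y))
```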

Computing e to Any Power: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprised my series examining one of Richard Feynman’s anecdotes about mentally computing e^x for three different values of x.

Part 1: Feynman’s anecdote.

Part 2: Logarithm and antilogarithm tables from the 1940s.

Part 3: A closer look at Feynman’s computation of e^{3.3}.

Part 4: A closer look at Feynman’s computation of e^{3}.

Part 5: A closer look at Feynman’s computation of e^{1.4}.
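As the linked posts describe, the anecdote turns on two mental anchor values, e^{2.3} \approx 10 and e^{0.7} \approx 2 (from \ln 10 \approx 2.3 and \ln 2 \approx 0.7). A quick sketch of how good those three mental estimates actually are:

```python
import math

# Anchor values: e^2.3 ~ 10 (since ln 10 ~ 2.3)
#                e^0.7 ~ 2  (since ln 2 ~ 0.7)

# e^3.3 = e^2.3 * e ~ 10e
print(10 * math.e, "vs", math.exp(3.3))
# e^3 = e^2.3 * e^0.7 ~ 10 * 2 = 20
print(20, "vs", math.exp(3))
# e^1.4 = (e^0.7)^2 ~ 2^2 = 4
print(4, "vs", math.exp(1.4))
```

All three mental estimates land within about 0.1 of the true values.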

 

 

Lessons from teaching gifted elementary school students: Index (updated)

I’m doing something that I should have done a long time ago: collect past series of posts into a single, easy-to-reference post. The following posts formed my series on various lessons I’ve learned while trying to answer the questions posed by gifted elementary school students. (This is updated from my previous index.)

Part 1: A surprising pattern in some consecutive perfect squares.

Part 2: Calculating 2 to a very large exponent.

Part 3a: Calculating 2 to an even larger exponent.

Part 3b: An analysis of just how large this number actually is.

Part 4a: The chance of winning at BINGO in only four turns.

Part 4b: Pedagogical thoughts on one step of the calculation.

Part 4c: A complicated follow-up question.

Part 5a: Exponentiation is to multiplication as multiplication is to addition. So, multiplication is to addition as addition is to what? (I offered the answer of incrementation, but it was rejected: addition requires two inputs, while incrementation only requires one.)

Part 5b: Why there is no binary operation that completes the above analogy.

Part 5c: Knuth’s up-arrow notation for writing very big numbers.

Part 5d: Graham’s number, reputed to be the largest number ever to appear in a mathematical proof.

Part 6a: Calculating (255/256)^x.

Part 6b: Solving (255/256)^x = 1/2 without a calculator.

Part 7a: Estimating the size of a 1000-pound hailstone.

Part 7b: Estimating the size of a 1000-pound hailstone.

Part 8a: Statement of an unusual triangle-summing problem.

Part 8b: Solution using binomial coefficients.

Part 8c: Rearranging the series.

Part 8d: Reindexing to further rearrange the series.

Part 8e: Rewriting using binomial coefficients again.

Part 8f: Finally obtaining the numerical answer.

Part 8g: Extracting the square root of the answer by hand.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 18

The Riemann Hypothesis (see here, here, and here) is perhaps the most famous (and also most important) unsolved problem in mathematics. Gamma (page 207) provides a way of writing down this conjecture in a form that uses only notation that is commonly taught in high school:

If \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \cos(b \ln r) = 0 and \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \sin(b \ln r) = 0 for some pair of real numbers a and b, then a = \frac{1}{2}.

As noted in the book, “It seems extraordinary that the most famous unsolved problem in the whole of mathematics can be phrased so that it involves the simplest of mathematical ideas: summation, trigonometry, logarithms, and [square roots].”
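The statement can even be explored numerically. As a rough sanity check (a sketch, not a proof of anything): 14.134725 is, to six decimal places, the ordinate of the first nontrivial zero of the zeta function, so with a = 1/2 and b = 14.134725 the partial sums of both series should drift toward 0.

```python
import math

def eta_partial_sums(a, b, n):
    """Return averaged partial sums (at n and n+1 terms, to damp the
    alternating oscillation) of the cosine and sine series above."""
    c = s = 0.0
    snapshots = []
    for r in range(1, n + 2):
        term = (-1.0 if r % 2 else 1.0) * r ** (-a)
        c += term * math.cos(b * math.log(r))
        s += term * math.sin(b * math.log(r))
        if r >= n:
            snapshots.append((c, s))
    (c1, s1), (c2, s2) = snapshots
    return (c1 + c2) / 2, (s1 + s2) / 2

# b = 14.134725 is approximately the ordinate of the first nontrivial
# zero of zeta, so with a = 1/2 both sums should be close to 0.
c, s = eta_partial_sums(0.5, 14.134725, 100_000)
print(c, s)  # both small
```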

green line

When I was researching for my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge for the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate-level class in analytic number theory. In short, I didn’t expect to learn a whole lot when reading a popular science book other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 17

Let \pi(n) denote the number of positive prime numbers that are less than or equal to n. The prime number theorem, one of the most celebrated results in analytic number theory, states that

\pi(x) \approx \displaystyle \frac{x}{\ln x}.

This is a very difficult result to prove. However, Gamma (page 172) provides a heuristic argument that suggests that this answer might be halfway reasonable.

Consider all of the integers between 1 and x.

  • About half of these numbers won’t be divisible by 2.
  • Of those that aren’t divisible by 2, about two-thirds won’t be divisible by 3. (This isn’t exactly correct, but it’s good enough for heuristics.)
  • Of those that aren’t divisible by 2 and 3, about four-fifths won’t be divisible by 5.
  • And so on.

If we repeat for all primes less than or equal to \sqrt{x}, we can conclude that the number of prime numbers less than or equal to x is approximately

\pi(x) \approx \displaystyle x \prod_{p \le \sqrt{x}} \left(1 - \frac{1}{p} \right).

From this point, we can use Mertens’ product formula

\displaystyle \lim_{n \to \infty} \frac{1}{\ln n} \prod_{p \le n} \left(1 - \frac{1}{p} \right)^{-1} = e^\gamma

to conclude that

\displaystyle \prod_{p \le n} \left(1 - \frac{1}{p} \right) \approx \displaystyle \frac{e^{-\gamma}}{\ln n}

if n is large. Therefore,

\pi(x) \approx x \displaystyle \frac{e^{-\gamma}}{\ln \sqrt{x}} = 2 e^{-\gamma} \displaystyle \frac{x}{\ln x}.

Though not a formal proof, it’s a fast way to convince students that the unusual fraction \displaystyle \frac{x}{\ln x} ought to appear someplace in the prime number theorem.
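The heuristic is also easy to test numerically. Here’s a sketch comparing \pi(x), x/\ln x, and the heuristic count x \prod_{p \le \sqrt{x}} (1 - 1/p) at x = 10^6:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, flag in enumerate(sieve) if flag]

x = 10 ** 6
actual = len(primes_up_to(x))   # pi(x) = 78498
pnt = x / math.log(x)           # x / ln x, about 72382

# The heuristic count: x times the product of (1 - 1/p) over p <= sqrt(x),
# which Mertens' formula says is about 2 e^(-gamma) x / ln x.
product = 1.0
for p in primes_up_to(math.isqrt(x)):
    product *= 1 - 1 / p
heuristic = x * product

print(actual, round(pnt), round(heuristic))
```

Both x/\ln x and the heuristic land within a few percent of the true count, which is exactly the kind of ballpark agreement the argument above promises.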

