Combinatorics and Jason’s Deli: Index

I’m doing something that I should have done a long time ago: collecting a series of posts into one single post. The following links comprise my series on an advertisement that I saw in Jason’s Deli.

Part 1: The advertisement for the Jason’s Deli salad bar.

Part 2: Correct calculation of the number of salad bar combinations.

Part 3: Incorrect calculation of how long it would take to eat this many combinations.


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 18

The Riemann Hypothesis (see here, here, and here) is perhaps the most famous (and also most important) unsolved problem in mathematics. Gamma (page 207) provides a way of writing down this conjecture in a form that uses only notation commonly taught in high school:

If \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \cos(b \ln r) = 0 and \displaystyle \sum_{r=1}^\infty \frac{(-1)^r}{r^a} \sin(b \ln r) = 0 for some pair of real numbers a and b with 0 < a < 1, then a = \frac{1}{2}.

As noted in the book, “It seems extraordinary that the most famous unsolved problem in the whole of mathematics can be phrased so that it involves the simplest of mathematical ideas: summation, trigonometry, logarithms, and [square roots].”
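As a sanity check (my own, not from the book), we can evaluate both sums numerically at a = \frac{1}{2}, taking b to be the imaginary part of the first nontrivial zero of the zeta function, b \approx 14.1347251417 (a standard published value, not stated in the excerpt above). A Python sketch:

```python
import math

def rh_sums(a, b, n_terms=200_000):
    """Partial sums of sum_{r>=1} (-1)^r r^{-a} cos(b ln r) and the
    companion sine series.  Averaging the last two partial sums damps
    the alternating tail (a simple Euler-Boole-style acceleration)."""
    cos_sum = sin_sum = 0.0
    prev_cos = prev_sin = 0.0
    for r in range(1, n_terms + 1):
        sign = -1.0 if r % 2 else 1.0      # (-1)^r
        amp = sign * r ** (-a)
        theta = b * math.log(r)
        prev_cos, prev_sin = cos_sum, sin_sum
        cos_sum += amp * math.cos(theta)
        sin_sum += amp * math.sin(theta)
    # average of the last two partial sums
    return (cos_sum + prev_cos) / 2, (sin_sum + prev_sin) / 2

# imaginary part of the first nontrivial zeta zero (assumed known value)
b0 = 14.1347251417
c, s = rh_sums(0.5, b0)
print(c, s)  # both very close to 0
```

At a non-zero of the zeta function, the same two sums come out visibly nonzero, so the check is not vacuous.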

While researching my series of posts on conditional convergence, especially examples related to the constant \gamma, the reference Gamma: Exploring Euler’s Constant by Julian Havil kept popping up. Finally, I decided to splurge on the book, expecting a decent popular account of this number. After all, I’m a professional mathematician, and I took a graduate-level class in analytic number theory. In short, I didn’t expect to learn a whole lot from reading a popular science book, other than perhaps some new pedagogical insights.

Boy, was I wrong. As I turned every page, it seemed I hit a new factoid that I had not known before.

In this series, I’d like to compile some of my favorites — while giving the book a very high recommendation.

What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 17

Let \pi(x) denote the number of positive prime numbers that are less than or equal to x. The prime number theorem, one of the most celebrated results in analytic number theory, states that

\pi(x) \approx \displaystyle \frac{x}{\ln x}.

This is a very difficult result to prove. However, Gamma (page 172) provides a heuristic argument that suggests that this answer might be halfway reasonable.

Consider all of the integers between 1 and x.

  • About half of these numbers won’t be divisible by 2.
  • Of those that aren’t divisible by 2, about two-thirds won’t be divisible by 3. (This isn’t exactly correct, but it’s good enough for heuristics.)
  • Of those that aren’t divisible by 2 or by 3, about four-fifths won’t be divisible by 5.
  • And so on.

If we repeat for all primes less than or equal to \sqrt{x} (any composite number up to x must have a prime factor at most \sqrt{x}), we can conclude that the number of prime numbers less than or equal to x is approximately

\pi(x) \approx \displaystyle x \prod_{p \le \sqrt{x}} \left(1 - \frac{1}{p} \right).

From this point, we can use Mertens’ product formula

\displaystyle \lim_{n \to \infty} \frac{1}{\ln n} \prod_{p \le n} \left(1 - \frac{1}{p} \right)^{-1} = e^\gamma

to conclude that

\displaystyle \prod_{p \le n} \left(1 - \frac{1}{p} \right) \approx \displaystyle \frac{e^{-\gamma}}{\ln n}

if n is large. Therefore,

\pi(x) \approx x \displaystyle \frac{e^{-\gamma}}{\ln \sqrt{x}} = 2 e^{-\gamma} \displaystyle \frac{x}{\ln x}.

Though not a formal proof, this is a fast way to convince students that the unusual fraction \displaystyle \frac{x}{\ln x} ought to appear somewhere in the prime number theorem.
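The heuristic is also easy to test numerically. Here’s a quick sketch (my own, not from the book) comparing the true \pi(x), the sieve product x \prod_{p \le \sqrt{x}} (1 - 1/p), and the Mertens-based estimate 2 e^{-\gamma} x/\ln x at x = 10^6:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return [p for p in range(2, n + 1) if is_prime[p]]

x = 1_000_000
primes = primes_up_to(x)
pi_x = len(primes)                        # true value of pi(x)

product = 1.0
for p in primes:
    if p * p > x:                         # only primes up to sqrt(x)
        break
    product *= 1 - 1 / p
sieve_estimate = x * product              # x * prod_{p <= sqrt(x)} (1 - 1/p)

gamma = 0.5772156649                      # Euler-Mascheroni constant
mertens = 2 * math.exp(-gamma) * x / math.log(x)   # 2 e^{-gamma} x / ln x

print(pi_x, sieve_estimate, mertens)
```

Both estimates land within a few percent of each other and in the right ballpark of \pi(10^6) = 78{,}498, which is exactly what the heuristic promises — no more, no less.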


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 14

I hadn’t heard of the worm-on-a-rope problem until I read Gamma (page 133). From Cut-The-Knot:

A worm is at one end of a rubber rope that can be stretched indefinitely. Initially the rope is one kilometer long. The worm crawls along the rope toward the other end at a constant rate of one centimeter per second. At the end of each second the rope is instantly stretched another kilometer. Thus, after the first second the worm has traveled one centimeter, and the length of the rope has become two kilometers. After the second second, the worm has crawled another centimeter and the rope has become three kilometers long, and so on. The stretching is uniform, like the stretching of a rubber band. Only the rope stretches. Units of length and time remain constant.

It turns out that, after n seconds, the fraction of the rope that the worm has traveled is H_n/N, where

H_n = \displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n}

and N is the length of the rope in centimeters. Using the estimate H_n \approx \ln n + \gamma, we see that the worm will reach the end of the rope when

H_n = N

\ln n + \gamma \approx N

\ln n \approx N - \gamma

n \approx e^{N - \gamma}.

If N = 100,000 (since the rope is initially one kilometer, or 100,000 centimeters, long), it will take a really long time for the worm to reach its destination!
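The harmonic-number analysis can be confirmed by direct simulation. Here’s a sketch (my own toy example, not from the book) using a 10-centimeter rope, small enough that the worm actually finishes:

```python
import math

def seconds_to_finish(N):
    """Simulate a worm crawling 1 cm/s on a rope that starts N cm long
    and is stretched by another N cm at the end of each second.
    Stretching is uniform, so it preserves the *fraction* of the rope
    already covered; second k adds 1/(k*N) to that fraction."""
    fraction, n = 0.0, 0
    while fraction < 1.0:
        n += 1
        fraction += 1.0 / (n * N)   # 1 cm out of the current length n*N cm
    return n

gamma = 0.5772156649
N = 10                        # a toy 10 cm rope (the post's rope has N = 100,000)
n_sim = seconds_to_finish(N)
n_est = math.exp(N - gamma)   # estimate from H_n ≈ ln n + gamma = N
print(n_sim, round(n_est))    # the simulation and the estimate agree closely
```

Even for this tiny rope the worm needs over twelve thousand seconds, which makes it plausible that N = 100{,}000 leads to an astronomically long journey.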


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 13

I hadn’t heard of the crossing-the-desert problem until I read Gamma (page 127). From Wikipedia:

There are n units of fuel stored at a fixed base. The jeep can carry at most 1 unit of fuel at any time, and can travel 1 unit of distance on 1 unit of fuel (the jeep’s fuel consumption is assumed to be constant). At any point in a trip the jeep may leave any amount of fuel that it is carrying at a fuel dump, or may collect any amount of fuel that was left at a fuel dump on a previous trip, as long as its fuel load never exceeds 1 unit…

The jeep must return to the base at the end of every trip except for the final trip, when the jeep travels as far as it can before running out of fuel…

[T]he objective is to maximize the distance traveled by the jeep on its final trip.

The answer is that, with n units of fuel, the jeep can travel a distance of

\displaystyle 1 + \frac{1}{3} + \frac{1}{5} + \dots + \frac{1}{2n-1}.

Since this sum approaches infinity as n gets arbitrarily large, it is possible to cross an arbitrarily long desert according to the rules of this problem.
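Here’s a quick numerical sketch (my own, not from the book) showing how brutally the fuel cost grows with the width of the desert, since the sum grows only like \frac{1}{2} \ln n:

```python
def fuel_needed(distance):
    """Smallest whole number n of fuel units so that
    1 + 1/3 + 1/5 + ... + 1/(2n-1) >= distance."""
    total, n = 0.0, 0
    while total < distance:
        n += 1
        total += 1.0 / (2 * n - 1)   # contribution of the n-th unit of fuel
    return n

# crossing each extra unit of distance multiplies the fuel cost enormously
for d in (2, 3, 4, 5):
    print(d, fuel_needed(d))
```

For example, crossing a desert 2 units wide takes 8 units of fuel, but a desert 3 units wide already takes 57.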


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 12

Let X_1, X_2, X_3, \dots be a sequence of independent and identically distributed random variables, and let R_n be the number of “record highs” up to and including observation n. For example, each X_i can represent the amount of rainfall in a year, where X_1 is the amount of rainfall recorded in the first year that records were kept. As shown in Gamma (page 125), the expected number of record highs is

E[R_n] = H_n = \displaystyle 1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n}.

As noted in Gamma,

Two arbitrary chosen examples are revealing. The Radcliffe Meteorological Station in Oxford has data for rainfall in Oxford between 1767 and 2000 and there are five record years; this is a span of 234 recorded years and H_{234} = 6.03. For Central Park, New York City, between 1835 and 1994 there are six record years over the 160-year period and H_{160} = 5.65, providing good evidence that English weather is that bit more unpredictable.
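A quick Monte Carlo check (my own, not from the book) confirms the harmonic-number formula at the Central Park figure n = 160:

```python
import random

def count_records(xs):
    """Number of record highs in a sequence (the first term always counts)."""
    best = float("-inf")
    records = 0
    for x in xs:
        if x > best:
            best = x
            records += 1
    return records

random.seed(1)
n, trials = 160, 10_000
avg = sum(count_records([random.random() for _ in range(n)])
          for _ in range(trials)) / trials

H_160 = sum(1 / k for k in range(1, n + 1))   # the harmonic number H_160
print(avg, H_160)   # both come out near 5.65
```

The distribution of the rainfall doesn’t matter here, only that the observations are independent and identically distributed, which is why uniform random draws suffice for the simulation.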


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 11

The Euler-Mascheroni constant \gamma is defined by

\gamma = \displaystyle \lim_{n \to \infty} \left( \sum_{r=1}^n \frac{1}{r} - \ln n \right).

What I didn’t know, until reading Gamma (page 117), is that there are at least two ways to generalize this definition.

First, \gamma may be thought of as

\gamma = \displaystyle \lim_{n \to \infty} \left( \sum_{r=1}^n \frac{1}{\hbox{length of~} [0,r]} - \ln n \right),

and so this can be generalized to two dimensions as follows:

\delta = \displaystyle \lim_{n \to \infty} \left( \sum_{r=2}^n \frac{1}{\pi (\rho_r)^2} - \ln n \right),

where \rho_r is the radius of the smallest disk in the plane containing at least r points (a,b) with a and b both integers. This new constant \delta is called the Masser-Gramain constant; like \gamma, its exact value isn’t known.

Second, let f(x) = \displaystyle \frac{1}{x}. Then \gamma may be written as

\gamma = \displaystyle \lim_{n \to \infty} \left( \sum_{r=1}^n f(r) - \int_1^n f(x) \, dx \right).

Euler (not surprisingly) had the bright idea of changing the function f(x) to any other positive, decreasing function, such as

f(x) = x^a, \qquad -1 \le a < 0,

producing Euler’s generalized constants. Alternatively (from Stieltjes), we could choose

f(x) = \displaystyle \frac{ (\ln x)^m }{x}.
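Both of these generalized constants can be approximated directly from the defining limit. In the sketch below (my own, not from the book), m = 0 recovers \gamma itself, and m = 1 gives the first Stieltjes constant, whose standard published value is \gamma_1 \approx -0.0728158:

```python
import math

def gen_constant(m, n=1_000_000):
    """Approximate lim_{n->inf} ( sum_{r=1}^n f(r) - int_1^n f(x) dx )
    for f(x) = (ln x)^m / x; the integral is (ln n)^(m+1) / (m+1).
    m = 0 gives gamma; m >= 1 gives the Stieltjes constants."""
    s = sum(math.log(r) ** m / r for r in range(1, n + 1))
    return s - math.log(n) ** (m + 1) / (m + 1)

print(gen_constant(0))  # ≈ 0.5772 (gamma)
print(gen_constant(1))  # ≈ -0.0728 (first Stieltjes constant)
```

The convergence is slow (the error is on the order of f(n)/2), but n = 10^6 is plenty for a few digits.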


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 10

Suppose p_n is the nth prime number, so that p_{n+1} - p_n is the size of the nth gap between successive prime numbers. It turns out (Gamma, page 115) that there’s an incredible theorem bounding how large these gaps can get:

\displaystyle \limsup_{n \to \infty} \frac{(p_{n+1}-p_n) (\ln \ln \ln p_n)^2}{(\ln p_n)(\ln \ln p_n)(\ln \ln \ln \ln p_n)} \ge \displaystyle \frac{4 e^{\gamma}}{c},

where \gamma is the Euler-Mascheroni constant and c is the solution of c = 3 + e^{-c}.

Holy cow, what a formula. Let’s take a look at just a small part of it.

Let’s look at the amazing function f(x) = \ln \ln \ln \ln x, iterating the natural logarithm function four times. This function has a way of converting really large inputs into unimpressive outputs. For example, the canonical “big number” in popular culture is the googolplex, defined as 10^{10^{100}}. Well, it takes some work just to rearrange \displaystyle f \left(10^{10^{100}} \right) in a form suitable for plugging into a calculator:

\displaystyle f \left(10^{10^{100}} \right) = \displaystyle \ln \ln \ln \left( \ln 10^{10^{100}} \right)

= \displaystyle \ln \ln \ln \left( 10^{100} \ln 10 \right)

= \displaystyle \ln \ln \left[ \ln \left(10^{100} \right) + \ln \ln 10 \right]

= \displaystyle \ln \ln \left[ 100 \ln 10 + \ln \ln 10 \right]

= \displaystyle \ln \ln \left[ 100 \ln 10 \left( 1 + \frac{\ln \ln 10}{100 \ln 10} \right) \right]

= \displaystyle \ln \left( \ln [ 100 \ln 10] + \ln \left( 1 + \frac{\ln \ln 10}{100 \ln 10} \right)\right)

\approx 1.6943

after using a calculator.
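The same rearrangement makes the computation easy to reproduce in code, since the raw input 10^{10^{100}} is far too large to represent directly:

```python
import math

# 10**10**100 overflows any float, but after the algebraic rearrangement
# above, only modest numbers remain:
ln10 = math.log(10)
inner = 100 * ln10 + math.log(math.log(10))   # = ln ln (10^(10^100))
value = math.log(math.log(inner))             # two more logs finish the job
print(value)  # ≈ 1.6943
```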

This function grows extremely slowly. What value of x gives an output of 0? Well:

\ln \ln \ln \ln x = 0

\ln \ln \ln x = 1

\ln \ln x = e

\ln x = e^e

x = e^{e^e} \approx 3,814,279.1

What value of x gives an output of 1? Well:

\ln \ln \ln \ln x = 1

\ln \ln \ln x = e

\ln \ln x = e^e

\ln x = e^{e^e}

x = e^{e^{e^e}}

\approx e^{3,814,279.1}

\approx 10^{3,814,279.1 \log_{10} e}

\approx 10^{1,656,420.367636}

\approx 2.3315 \times 10^{1,656,420}

That’s a number with 1,656,421 digits! At the rapid rate of 5 digits per second, it would take over 92 hours (nearly 4 days) just to write out the answer by hand!

Finally, how large does x have to be for the output to be 2? As we’ve already seen, it’s going to be larger than a googolplex:

\displaystyle f \left(10^{10^{x}} \right) = 2

\displaystyle \ln \ln \ln \left( \ln 10^{10^{x}} \right) = 2

\displaystyle \ln \ln \ln \left( 10^{x} \ln 10 \right) = 2

\displaystyle \ln \ln \left[ \ln \left(10^{x} \right) + \ln \ln 10 \right] = 2

\displaystyle \ln \ln \left[ x\ln 10 + \ln \ln 10 \right] = 2

\displaystyle \ln \ln \left[ x\ln 10 \left( 1 + \frac{\ln \ln 10}{x\ln 10} \right) \right] = 2

\displaystyle \ln \left( \ln [ x\ln 10] + \ln \left( 1 + \frac{\ln \ln 10}{x \ln 10} \right)\right) = 2

Let’s simplify things slightly by letting y = x \ln 10:

\displaystyle \ln \left( \ln y + \ln \left( 1 + \frac{\ln \ln 10}{y} \right)\right) = 2

\displaystyle \ln y + \ln \left( 1 + \frac{\ln \ln 10}{y} \right) = e^2

This is a transcendental equation in y; however, since the second term on the left-hand side is small compared to \ln y, the solution approximately solves \ln y = e^2. This gives the approximation y = e^{e^2} \approx 1618.18. Refining (either with Newton’s method, or by noting that the left-hand side equals exactly \ln(y + \ln \ln 10)) yields the more precise solution y \approx 1617.35.

Therefore, x = y/\ln 10 \approx 702.4, so that

f \left(10^{10^{702.4}} \right) \approx 2.
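This calculation can be verified in a few lines (my own sketch), using the observation that \ln y + \ln(1 + \ln \ln 10/y) = \ln(y + \ln \ln 10), which turns the transcendental equation into a closed form:

```python
import math

ln10 = math.log(10)
lnln10 = math.log(ln10)

# ln(ln y + ln(1 + lnln10/y)) = 2  is the same as  ln(y + lnln10) = e^2,
# so y = e^(e^2) - lnln10 exactly, and x = y / ln 10.
y = math.exp(math.exp(2)) - lnln10
x = y / ln10

# forward check: f(10^(10^x)) = ln ln (x ln 10 + ln ln 10) should equal 2
check = math.log(math.log(x * ln10 + lnln10))
print(x, check)
```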

One final note: despite what’s typically taught in high school, mathematicians usually use \log to denote the natural logarithm (as opposed to the base-10 logarithm), so the above formula is more properly written as

\displaystyle \limsup_{n \to \infty} \frac{(p_{n+1}-p_n) (\log \log \log p_n)^2}{(\log p_n)(\log \log p_n)(\log \log \log \log p_n)} \ge \displaystyle \frac{4 e^{\gamma}}{c}.

And this sets up a standard joke, also printed in Gamma:

Q: What noise does a drowning analytic number theorist make?

A: Log… log… log… log…


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 9

When teaching students mathematical induction, the following sums (well, at least the first two or three) are used as typical examples:

1 + 2 + 3 + \dots + n = \displaystyle \frac{n(n+1)}{2}

1^2 + 2^2 + 3^2 + \dots + n^2 = \displaystyle \frac{n(n+1)(2n+1)}{6}

1^3 + 2^3 + 3^3 + \dots + n^3 = \displaystyle \frac{n^2(n+1)^2}{4}

1^4 + 2^4 + 3^4 + \dots + n^4 = \displaystyle \frac{n(n+1)(2n+1)(3n^2+3n-1)}{30}

What I didn’t know (Gamma, page 81) is that Johann Faulhaber published the following cute result in 1631 (see also Wikipedia): If k is odd, then

1^k + 2^k + 3^k + \dots + n^k = f_k(n(n+1)),

where f_k is a polynomial. For example, to match the above examples, f_1(x) = x/2 and f_3(x) = x^2/4. Furthermore, if k is even, then

1^k + 2^k + 3^k + \dots + n^k = (2n+1) f_k(n(n+1)),

where again f_k is a polynomial. For example, to match the above examples, f_2(x) = x/6 and f_4(x) = x(3x-1)/30.
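Faulhaber’s pattern is easy to verify numerically. A sketch (my own, not from the book) checking all four example polynomials in exact rational arithmetic:

```python
from fractions import Fraction as F

def power_sum(k, n):
    """1^k + 2^k + ... + n^k, computed directly."""
    return sum(r ** k for r in range(1, n + 1))

# the post's example polynomials, as exact rationals
f1 = lambda x: F(x, 2)
f3 = lambda x: F(x, 1) ** 2 / 4
f2 = lambda x: F(x, 6)
f4 = lambda x: F(x, 1) * (3 * x - 1) / 30

for n in range(1, 100):
    m = n * (n + 1)
    assert power_sum(1, n) == f1(m)                 # odd k: f_k(n(n+1))
    assert power_sum(3, n) == f3(m)
    assert power_sum(2, n) == (2 * n + 1) * f2(m)   # even k: (2n+1) f_k(n(n+1))
    assert power_sum(4, n) == (2 * n + 1) * f4(m)
print("Faulhaber forms verified for n = 1..99")
```

Using `Fraction` sidesteps floating-point round-off, so the equalities are checked exactly.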


What I Learned from Reading “Gamma: Exploring Euler’s Constant” by Julian Havil: Part 4

For s > 1, Riemann’s famous zeta function is defined by

\zeta(s) = \displaystyle \sum_{n=1}^{\infty} \frac{1}{n^s}.

This is also called a p-series in calculus.

What I didn’t know (Gamma, page 41) is that, in 1748, Leonhard Euler exactly computed this infinite series for s = 26 without a calculator! Here’s the answer:

\displaystyle 1 + \frac{1}{2^{26}} + \frac{1}{3^{26}} + \frac{1}{4^{26}} + \dots = \frac{1,315,862 \pi^{26}}{11,094,481,976,030,578,125}.

I knew that Euler was an amazing human calculator, but I didn’t know he was that amazing.
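As a quick numerical check of Euler’s closed form (my own, not from the book): the terms of the series decay so fast that a few hundred of them already match the closed form to machine precision.

```python
import math

# Euler's closed form for zeta(26), transcribed from the text above
closed_form = 1315862 * math.pi ** 26 / 11094481976030578125

# direct summation of 1 + 1/2^26 + 1/3^26 + ...
series = sum(1 / n ** 26 for n in range(1, 1000))

print(closed_form, series)   # both ≈ 1.0000000149
```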
