That’s the expansion of the fraction $\frac{1}{10}$ in base 2, as opposed to base 10.
In the previous post, I verified that the above infinite series actually converges to $\frac{1}{10}$:
Still, a curious student may wonder how on earth one could directly convert $\frac{1}{10}$ into binary without knowing the above series ahead of time.
This can be addressed by using the principles that we’ve gleaned in this study of decimal representations, except translating this work into the language of base 2. In the following, I will use the subscripts $10$ and $2$ so that I’m clear about when I’m using decimal and binary, respectively.
To begin, we note that $10 = 2 \times 5$. (In other words, ten is equal to two times five.) So, following Case 3 of the previous post, we will attempt to write the denominator in the form
$2 \times (2^k - 1)$,
so that we need the factor of $5$ to divide $2^k - 1$.
If $k = 1$, then $2^k - 1 = 1$, but $\frac{1}{5}$ is not an integer.
If $k = 2$, then $2^k - 1 = 3$, but $\frac{3}{5}$ is not an integer.
If $k = 3$, then $2^k - 1 = 7$, but $\frac{7}{5}$ is not an integer.
If $k = 4$, then $2^k - 1 = 15$. This time, $\frac{15}{5} = 3$. Written in binary, $3 = 11_2$.
We now return to the binary representation of $\frac{1}{10}$.
Therefore, the binary representation has a delay of one digit and a repeating block of four digits: $\frac{1}{10} = 0.0\overline{0011}_2$.
Naturally, this matches the binary representation given earlier.
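The whole conversion can also be automated. Here’s a quick sketch in Python (my own illustration, not from the original post): doubling a fraction in $[0,1)$ moves its next binary digit into the ones place, so repeated doubling peels off the bits one at a time.

```python
from fractions import Fraction

def binary_digits(frac, n):
    """First n binary digits of a fraction in [0, 1), found by repeated
    doubling: the integer part after each doubling is the next bit."""
    bits = []
    for _ in range(n):
        frac *= 2
        bits.append(int(frac))   # 0 or 1: the integer part
        frac -= int(frac)        # keep only the fractional part
    return "".join(str(b) for b in bits)

print(binary_digits(Fraction(1, 10), 12))   # 000110011001: a one-digit delay, then 0011 repeating
```

Using `Fraction` keeps the arithmetic exact, so the repeating block never drifts the way a floating-point version eventually would.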
In Part 5 of this series, I showed that fractions of the form $\frac{n}{10^d}$, $\frac{n}{10^k - 1}$, and $\frac{n}{10^d (10^k - 1)}$ can be converted into their decimal representations without using long division and without using a calculator.
The amazing thing is that every rational number can be written in one of these three forms. Therefore, after this conversion is made, then the decimal expansion can be found without a calculator.
Case 1. If the denominator has a prime factorization of the form $2^a 5^b$, then $\frac{m}{n}$ can be rewritten in the form $\frac{M}{10^d}$, where $d = \max(a, b)$.
For example,
$\frac{3}{8} = \frac{3}{2^3} \times \frac{5^3}{5^3} = \frac{375}{1000} = 0.375.$
The step of multiplying both sides by a carefully chosen form of $1$ is perhaps unusual, since we’re so accustomed to converting fractions into lowest terms and not making the numerators and denominators larger. This particular form of $1$ was chosen in order to get a power of $10$ in the denominator, thus facilitating the construction of the decimal expansion.
Case 2. If the denominator is neither a multiple of $2$ nor $5$, then $\frac{m}{n}$ can be rewritten in the form $\frac{M}{10^k - 1}$.
For example,
$\frac{1}{7} = \frac{142857}{999999} = 0.\overline{142857}.$
This example wasn’t too difficult since we knew that $999999 = 7 \times 142857$. However, finding the smallest value of $k$ that works can be a difficult task requiring laborious trial and error.
However, we can do even better than that. Using ideas from number theory, it can be proven that $k$ must be a factor of $\phi(n)$, which is the Euler totient function, or the number of integers less than $n$ that are relatively prime with $n$. In the example above, the denominator was $7$, and clearly, if $1 \le m \le 6$, then $\gcd(m, 7) = 1$. Since there are $6$ such numbers, we know that $k$ must be a factor of $6$. In other words, $k$ must be either $1$, $2$, $3$, or $6$, thus considerably reducing the amount of guessing and checking that has to be done. (Of course, for the example above, $6$ was the least value of $k$ that worked.)
In general, if $n = p_1^{e_1} p_2^{e_2} \cdots p_r^{e_r}$ is the prime factorization of $n$, then
$\phi(n) = n \left( 1 - \frac{1}{p_1} \right) \left( 1 - \frac{1}{p_2} \right) \cdots \left( 1 - \frac{1}{p_r} \right).$
For the example above, since $7$ was prime, we have $\phi(7) = 7 - 1 = 6$.
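This divisor hunt is easy to check by brute force. A short sketch (function names are my own): count $\phi(n)$ directly, find the least $k$ with $10^k \equiv 1 \pmod{n}$, and observe that the latter divides the former.

```python
from math import gcd

def phi(n):
    """Euler's totient: how many 1 <= m <= n are relatively prime to n."""
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def least_k(n):
    """Least k with 10**k leaving remainder 1 mod n (n coprime to 10)."""
    k, power = 1, 10 % n
    while power != 1:
        power = (power * 10) % n
        k += 1
    return k

print(phi(7), least_k(7))     # 6 6: for sevenths, k equals phi(7) itself
print(phi(21), least_k(21))   # 12 6: k can also be a proper factor of phi(n)
```

The second example shows why $\phi(n)$ only narrows the search: the true period can be any divisor of $\phi(n)$, not necessarily $\phi(n)$ itself.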
Case 3. Suppose the prime factorization of the denominator both (1) contains $2$ and/or $5$ and also (2) contains another prime other than $2$ and $5$. This is a mixture of Cases 1 and 2, and the fraction can be rewritten in the form $\frac{M}{10^d (10^k - 1)}$.
For example, consider
Following the rule for Case 1, we should multiply by a suitable form of $1$ to get a power of $10$ in the denominator:
Next, we need to multiply by something to get a number of the form $10^k - 1$. Since $37$ is prime, every positive number less than $37$ is relatively prime with $37$, so $\phi(37) = 36$. Therefore, $k$ must be a factor of $36$. So, $k$ must be one of $1$, $2$, $3$, $4$, $6$, $9$, $12$, $18$, and $36$.
(Parenthetically, while we’ve still got some work to do, it’s still pretty impressive that — without doing any real work — we can reduce the choices of $k$ to these nine numbers. In that sense, the use of $\phi(n)$ parallels how the Rational Root Test is used to determine possible roots of polynomials with integer coefficients.)
So let’s try to find the least value of $k$ that works.
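The search itself can be sketched programmatically (the code below is my own illustration): strip the factors of 2 and 5 into the power of 10, then guess and check $k$. For a denominator whose leftover prime is $37$, it finds $k = 3$, since $999 = 27 \times 37$.

```python
def split_denominator(n):
    """Express n as a divisor of 10**d * (10**k - 1): d covers the factors
    of 2 and 5, and k is the least power with the leftover dividing 10**k - 1."""
    twos = fives = 0
    while n % 2 == 0:
        n //= 2
        twos += 1
    while n % 5 == 0:
        n //= 5
        fives += 1
    d = max(twos, fives)       # 10**d soaks up both prime factors
    k = 1
    while (10**k - 1) % n:     # guess and check, as in the text
        k += 1
    return d, k

print(split_denominator(37))   # (0, 3): 999 = 27 * 37
print(split_denominator(56))   # (3, 6): 56 divides 10**3 * 999999
```

The returned pair $(d, k)$ gives the delay length and the repeating-block length of the decimal expansion, matching the three cases above.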
Students are quite accustomed to obtaining the decimal expansion of a fraction by using a calculator. Here’s a technique, not commonly taught I think, for converting certain fractions into a decimal expansion without using long division and without using a calculator. I’ve taught this technique to college students who want to be future high school teachers for several years, and it never fails to surprise.
First off, it’s easy to divide any number by a power of $10$, like $10^k$. For example,
$\frac{1234}{100} = 12.34$
and
$\frac{1234}{100000} = 0.01234.$
What’s less commonly known is that it’s also easy to divide by a number of the form $10^k - 1$, or $99\ldots9$, a numeral with $k$ consecutive $9$s. (This number can be used to prove the divisibility rules for 3 and 9 and is also the subject of one of my best math jokes.) The rule can be illustrated with a calculator:
In other words, if $n < 10^k - 1$, then the decimal expansion of $\frac{n}{10^k - 1}$ is a repeating block of $k$ digits containing the numeral $n$, possibly padded with enough zeroes in front to fill all $k$ digits. For example, $\frac{41}{999} = 0.\overline{041}$.
To prove that this actually works, we notice that
The first line is obtained by multiplying the numerator and denominator by $\frac{1}{10^k}$. The second line is obtained by using the formula for an infinite geometric series in reverse, so that the first term is $\frac{n}{10^k}$ and the common ratio is $\frac{1}{10^k}$. The third line is obtained by converting the series, a sum involving only powers of $\frac{1}{10^k}$, into a decimal expansion.
If $n \ge 10^k - 1$, then the division algorithm must be used to get a numerator that is less than $10^k - 1$. Fortunately, dividing big numbers by $10^k - 1$ is quite easy and can be done without a calculator. For example, let’s find the decimal expansion of one such fraction without a calculator. First,
Therefore,
This can be confirmed with a calculator. Notice that the repeating block doesn’t quite match the digits of the numerator because of the intermediate step of applying the division algorithm.
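Both rules (repeat the zero-padded block; apply the division algorithm first when the numerator is too big) fit in a few lines of code. A sketch, with example numbers of my own choosing:

```python
def nines_expansion(n, k, blocks=4):
    """Decimal expansion of n / (10**k - 1): reduce n with the division
    algorithm, then repeat the k-digit block of the remainder."""
    q, r = divmod(n, 10**k - 1)                 # the division-algorithm step
    return str(q) + "." + str(r).zfill(k) * blocks

print(nines_expansion(41, 3))     # 0.041041041041
print(nines_expansion(1234, 3))   # 1.235235235235
```

The second example shows the phenomenon described above: the repeating block $235$ doesn’t match the digits of the numerator $1234$, because the division algorithm was applied first ($1234 = 1 \times 999 + 235$).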
In the same vein, it’s also straightforward to find the decimal expansion of fractions of the form $\frac{n}{10^d (10^k - 1)}$, so that the denominator is a string of $k$ nines followed by $d$ zeroes. This is especially easy if $n < 10^k - 1$. For example,
On the other hand, if $n \ge 10^k - 1$, then the division algorithm must be applied as before. For example, let’s find one more such decimal expansion. To begin, we need to divide the numerator by $10^k - 1$, as before. Notice that, for this example, an extra iteration of the division algorithm is needed to get a remainder less than $10^k - 1$.
Therefore,
In particular, notice that the three $0$s in the denominator correspond to a delay of length 3, while the $9$s in the denominator correspond to the length of the repeating block.
These can be confirmed with a calculator for students who may be reluctant to believe that decimal expansions can be found without a calculator.
In Part 3 of this series, I considered the conversion of a repeating decimal expansion into a fraction. This was accomplished by an indirect technique which was pulled out of the patented Bag of Tricks. For example, if $x$ is the repeating decimal, we start by computing $10^k x$, where $k$ is the length of the repeating block, and then subtracting.
As mentioned in Part 3, most students are a little bit skeptical that this actually works, and often need to type the final fraction into a calculator to be reassured that the method actually works. Most students are also a little frustrated with this technique because it does come from the Bag of Tricks. After all, the first two steps (setting the decimal equal to $x$ and then multiplying by a power of $10$) are hardly the most intuitive things to do first… unless you’re clairvoyant and know what’s going to happen next.
In this post, I’d like to discuss a more direct way of converting a repeating decimal into a fraction. In my experience, this approach presents a different conceptual barrier to students. This is a more direct approach, and so students are more immediately willing to accept its validity. However, the technique uses the formula for an infinite geometric series, which (unfortunately) most senior math majors cannot instantly recall. They’ve surely seen the formula before, but they’ve probably forgotten it because a few years have passed since they’ve had to extensively use the formula.
Anyway, here’s the method applied to a repeating decimal whose repeating block has three digits. To begin, we recall the meaning of a decimal representation in the first place:
Combining fractions three at a time (matching the length of the repeating block), we get
This is an infinite geometric series whose first term is the repeating block divided by $1000$, and the common ratio that’s multiplied to go from one term to the next is $\frac{1}{1000}$. Using the formula for an infinite geometric series and simplifying, we conclude
For what it’s worth, the decimal representation could have been simplified by using three separate geometric series. Some students find this to be more intuitive, combining the unlike fractions at the final step as opposed to the initial step.
Finally, this direct technique also works for repeating decimals with a delay.
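The geometric-series computation can be carried out exactly with Python’s `fractions` module; below I use $0.\overline{234}$ as a stand-in example of a three-digit repeating block (the block itself is my own choice, for illustration).

```python
from fractions import Fraction

# 0.234234234... : first term 234/1000, common ratio 1/1000
a = Fraction(234, 1000)
r = Fraction(1, 1000)
x = a / (1 - r)            # formula for an infinite geometric series
print(x)                   # 26/111, since 234/999 reduces
```

Because `Fraction` reduces automatically, the final cancellation step that students often miss happens for free.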
In Part 2 of this series, I discussed the process of converting a fraction into its decimal representation. In this post, I consider the reverse: starting with a decimal representation, and ending with a fraction.
Let me say at the onset that the process I’m about to describe appears to be a dying art. When I show this to my math majors who want to be high school teachers, roughly half have either not seen it before or else have no memory of seeing it before. (As always, I hold my students blameless for the things that they were simply not taught at a younger age, and part of my job is repairing these odd holes in their mathematical backgrounds so that they’ll have their best chance at becoming excellent high school math teachers.) I’m guessing that this algorithm is a dying art because of the ease and convenience of modern calculators.
So let me describe how I describe this procedure to my students. To begin, suppose that we’re given a repeating decimal whose repeating block is a single digit. How do we change this into a fraction? Let’s call this number $x$.
I’m now about to do something that, if you don’t know what’s coming next, appears to make no sense. I’m going to multiply $x$ by $10$. Students often give skeptical, quizzical, and/or frustrated looks about this non-intuitive next step… they’re thinking, “How would I ever have thought to do that on my own?” To allay these concerns, I explain that this step comes from the patented Bag of Tricks. Socrates gave the Bag of Tricks to Plato, Plato gave it to Aristotle, it passed down the generations, my teacher taught the Bag of Tricks to me, and I teach it to my students. Multiplying by $10$ on the next step is absolutely not obvious, unless you happen to know via clairvoyance what’s going to come next.
Anyway, let’s write down $x$ and also $10x$.
Notice that the decimal parts of both and are the same. Subtracting, the decimal parts cancel, leaving
or
In my experience, most students — even senior math majors who have taken a few theorem-proof classes and hence are no dummies — are a little stunned when they see this procedure for the first time. To make this more real and believable to them, I then ask them to pop out their calculators to confirm that this actually worked. (Indeed, many students need this confirmation to be psychologically sure that it really did work.)
Then I ask my students, why did we multiply by $10$? They’ll usually give the correct answer: so that the decimal parts will cancel. My follow-up question is, what should we do if the repeating block has two digits? They’ll usually respond that we should multiply by $100$ or, in general, by $10^k$, where $k$ is the length of the repeating block.
This strategy, of course, works for $0.\overline{142857}$, eventually yielding
$x = \frac{142857}{999999} = \frac{1}{7}$
after cancellation.
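The subtraction trick collapses to one formula: $10^k x - x = (10^k - 1)x$ equals the integer formed by the repeating block, so $x$ is the block over $k$ nines. A sketch (my own, not from the post):

```python
from fractions import Fraction

def repeating_to_fraction(block):
    """x = 0.(block) repeating: 10**k * x - x = int(block), so
    x = block / (10**k - 1), which Fraction reduces automatically."""
    k = len(block)
    return Fraction(int(block), 10**k - 1)

print(repeating_to_fraction("142857"))   # 1/7
print(repeating_to_fraction("3"))        # 1/3
```

Taking the block as a string keeps any leading zeroes meaningful when counting $k$.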
The same procedure works for decimal expansions with a delay. This time, I’ll ask them how we should go about changing this into a fraction. I usually get at least one of three responses. I love getting multiple responses, as this gives the students a chance to compare the “different” answers and the different strategies.
Answer #1. Multiply by $1000$ since the repeating pattern starts at the 3rd decimal place.
Answer #2. Multiply by $10$ since the repeating block has length 1.
Answer #3. First multiply by $100$ to get rid of the delay. Then multiply by an extra $10$ since the repeating block has length 1.
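All three answers lead to the same fraction; Answer #3 is the easiest to turn into code. A sketch, using $0.28\overline{3}$ as a stand-in example of my own with a two-digit delay and a one-digit block:

```python
from fractions import Fraction

def delayed_repeating(prefix, block):
    """x = 0.prefix(block): multiplying by 10**d clears the delay, leaving
    prefix + 0.(block); then the pure repeating-block rule applies."""
    d, k = len(prefix), len(block)
    return (Fraction(int(prefix)) + Fraction(int(block), 10**k - 1)) / 10**d

print(delayed_repeating("28", "3"))   # 17/60, i.e. 0.28333...
```
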
The above discussion concerned repeating decimals. For completeness, converting terminating decimals into a fraction is easy. For example,
One more thought. The concept behind Part 2 of this series shows that a rational number of the form $\frac{m}{n}$, where both $m$ and $n$ are integers, must have either a terminating decimal expansion or else a repeating decimal expansion (possibly with a delay). In this post, we went the other direction. Therefore, we have the basis for the following theorem.
Theorem. A number is rational if and only if it has either a terminating decimal expansion or else a repeating decimal expansion.
The contrapositive of this theorem is perhaps intuitively obvious.
Theorem. A number is irrational if and only if it has a non-terminating and non-repeating decimal expansion.
In my experience, most students absolutely believe both of these theorems. For example, most students believe that $\pi$ has a decimal expansion that neither terminates nor repeats. That said, most math majors are surprised to discover that it does take quite a bit of work — like a formal write-up of Parts 2 and 3 of this series — to actually prove this statement from middle-school mathematics.
Let’s take another look at the decimal expansion of 1/7:
This result from a calculator should convince most students that $\frac{1}{7} = 0.\overline{142857}$. After all, there’s a second $1428$ after the first $142857$, and the ending $9$ is consistent with rounding up the $8$ that would otherwise appear (since the next digit of the expansion is a $5$).
So the evidence that $\frac{1}{7} = 0.\overline{142857}$ is persuasive.
But does this prove beyond a shadow of a doubt that this decimal representation is correct?
Sadly, no. Taken by itself, the result of the calculator is also consistent with, to give just one example, the possibility that $\frac{1}{7}$ is exactly equal to the terminating decimal $0.1428571429$, which also would display after 10 decimal places as the result shown above.
In short, the calculator gives evidence that the decimal expansion is correct, but does not prove that it’s correct.
Which leads to the obvious question: how do we prove it?
One method, which used to be taught in elementary school (I honestly don’t know if this is taught anymore), is by traditional long division:
After six steps, we finally get to a remainder that was previously seen (in this case, on the first step). Therefore, we tell students, the subsequent digits have to repeat.
By the way, this is the essence of the proof for why every rational number has either a repeating decimal representation (possibly with a delay) or else a terminating decimal representation. Though a more formal proof would be preferred by professional mathematicians, the idea is simple: in the algorithm for long division for $\frac{m}{n}$, there are only $n$ possible remainders: $0, 1, \ldots, n-1$. So we eventually have to arrive at a remainder that was seen before. If that remainder is $0$, then the decimal representation terminates. Otherwise, the decimal representation repeats itself.
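The pigeonhole argument doubles as an algorithm: track the remainders as you divide, and the first repeated remainder marks the start of the repeating block. A sketch of my own, with the block shown in parentheses:

```python
def long_division(m, n):
    """Decimal expansion of m/n for 0 < m < n, by tracking remainders;
    stops at the first repeated remainder and marks the cycle."""
    digits, seen, r = [], {}, m
    while r and r not in seen:
        seen[r] = len(digits)        # where this remainder first appeared
        r *= 10
        digits.append(str(r // n))
        r %= n
    if r == 0:
        return "0." + "".join(digits)                    # terminating
    i = seen[r]                                          # cycle start
    return "0." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

print(long_division(1, 7))   # 0.(142857)
print(long_division(1, 6))   # 0.1(6)
print(long_division(3, 8))   # 0.375
```

The three outputs exhibit all three behaviors at once: a pure repeating expansion, a repeating expansion with a delay, and a terminating expansion.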
In my experience, every math major that I’ve ever met intuitively knows that the above theorem is true. After all, they’ve worked intensively with decimals since 5th grade and have seen decimals in the lower elementary grades. However, very few can articulate why it’s true.
I’m guessing that not many people ever blocked time out of their busy schedules to purposefully memorize the decimal representation of a fraction. Nevertheless, in my experience, most math majors and math teachers can immediately convert, from memory, most (but not all — more on this later) fractions of the form $\frac{m}{n}$ into its decimal representation as long as the denominator is less than or equal to $10$. They can also go the other direction, mentally recognizing a decimal expansion as a fraction of this form.
This memorization occurs not because of purposeful study but because these fractions arise so commonly from 6th grade through college that good students almost can’t help but memorize them.
Here are the decimal representations of $\frac{m}{n}$, where the fraction is in lowest terms and $1 \le m < n \le 10$.
Like I said, most (but not all) of these have been memorized by math majors and math teachers. The exceptions, not surprisingly, are the fractions with a denominator of $7$.
When I was a child, I read somewhere the following rule for memorizing the decimal expansion of $\frac{1}{7}$. I must have been lucky, because I have yet to meet a student that also saw this rule. The following is not a formal proof of the rule, but it does work for the purposes of memorization.
Step 1. Let’s begin with $\frac{1}{7}$. The decimal expansion can be remembered by repeating “3, 2, 6” along with repeating “up, down.” Repeating both patterns, we get
up 3
down 2
up 6
down 3
up 2
down 6
So,
Start at $1$: $1$
up 3: $1\,4$
down 2: $1\,4\,2$
up 6: $1\,4\,2\,8$
down 3: $1\,4\,2\,8\,5$
up 2: $1\,4\,2\,8\,5\,7$
down 6: back to $1$
The pattern returns back to $1$, and the digits repeat. That’s the decimal expansion: $\frac{1}{7} = 0.\overline{142857}$.
Steps 2-6. For $\frac{m}{7}$ with $2 \le m \le 6$, the digits repeat in the same pattern as $\frac{1}{7}$, just starting at a different place. For example:
For $\frac{2}{7}$, the second smallest of the digits is $2$. So we’ll drop the first $1$ and $4$ and start on $2$: $\frac{2}{7} = 0.\overline{285714}$.
For $\frac{4}{7}$, the fourth smallest of the digits is $5$. So we’ll drop the first $1$, $4$, $2$, and $8$ and start on $5$: $\frac{4}{7} = 0.\overline{571428}$.
P.S. Plenty of math majors (though perhaps not a majority) have also memorized the decimal expansions of $\frac{1}{11}$ and $\frac{1}{12}$. For $\frac{m}{11}$, the rule is multiply $m$ by $9$ to form the two-digit repeating block. In other words:
$1 \times 9 = 09$, and so $\frac{1}{11} = 0.\overline{09}$
$2 \times 9 = 18$, and so $\frac{2}{11} = 0.\overline{18}$
$3 \times 9 = 27$, and so $\frac{3}{11} = 0.\overline{27}$
For twelfths, the only lowest-term fractions are $\frac{1}{12}$, $\frac{5}{12}$, $\frac{7}{12}$, and $\frac{11}{12}$. To begin, the first should be memorized: $\frac{1}{12} = 0.08\overline{3}$.
The others are obtained by addition or subtraction:
$\frac{5}{12} = \frac{1}{12} + \frac{1}{3} = 0.41\overline{6}$
$\frac{7}{12} = \frac{1}{12} + \frac{1}{2} = 0.58\overline{3}$
$\frac{11}{12} = 1 - \frac{1}{12} = 0.91\overline{6}$
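All of these memorized patterns are easy to verify exactly. A quick check of my own using Python’s `fractions`:

```python
from fractions import Fraction

# Each m/7 repeats a six-digit rotation of 142857 (since 999999 = 7 * 142857)
cycle = "142857" * 2
for m in range(1, 7):
    assert str(m * 142857).zfill(6) in cycle

# m/11 repeats the two-digit block 9m, because 9m/99 = m/11
for m in range(1, 11):
    assert Fraction(9 * m, 99) == Fraction(m, 11)

# The twelfths follow from 1/12 by addition and subtraction
assert Fraction(1, 12) + Fraction(1, 3) == Fraction(5, 12)
assert Fraction(1, 12) + Fraction(1, 2) == Fraction(7, 12)
assert 1 - Fraction(1, 12) == Fraction(11, 12)
print("all of the memorized patterns check out")
```

The rotation check for sevenths is the same cyclic-number fact behind the “up, down” mnemonic: every $\frac{m}{7}$ walks the same cycle of digits, just entered at a different point.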
In the previous post, I gave a simple classroom demonstration to illustrate that some calculators only approximate an infinite decimal expansion with a terminating decimal expansion, and hence truncation errors can propagate. This example addresses the common student question, “What’s the big deal if I round off to a few decimal places?”
(For what it’s worth, I’m aware that some current high-end calculators are miniature computer algebra systems and can formally handle an answer of $\frac{1}{3}$ instead of its decimal expansion.)
Students may complain that the above exercise is artificial and unlikely to occur in real life. I would suggest following up with a real-world, non-artificial, and tragic example of an accident that happened in large part due to truncation error. This incident occurred during the first Gulf War in 1991 (perhaps ancient history to today’s students). I’m going to quote directly from the website http://www.ima.umn.edu/~arnold/disasters/patriot.html, published by Dr. Douglas Arnold at the University of Minnesota. Perhaps students don’t need to master the details of this explanation (a binary expansion as opposed to a decimal expansion might be a little abstract), but I think that this example illustrates truncation error vividly.
On February 25, 1991, during the Gulf War, an American Patriot Missile battery in Dharan, Saudi Arabia, failed to track and intercept an incoming Iraqi Scud missile. The Scud struck an American Army barracks, killing 28 soldiers and injuring around 100 other people. A report of the General Accounting office, GAO/IMTEC-92-26, entitled Patriot Missile Defense: Software Problem Led to System Failure at Dhahran, Saudi Arabia reported on the cause of the failure.
It turns out that the cause was an inaccurate calculation of the time since boot due to computer arithmetic errors. Specifically, the time in tenths of second as measured by the system’s internal clock was multiplied by $\frac{1}{10}$ to produce the time in seconds. This calculation was performed using a 24 bit fixed point register. In particular, the value $\frac{1}{10}$, which has a non-terminating binary expansion, was chopped at 24 bits after the radix point. The small chopping error, when multiplied by the large number giving the time in tenths of a second, led to a significant error.
Indeed, the Patriot battery had been up around 100 hours, and an easy calculation shows that the resulting time error due to the magnified chopping error was about $0.34$ seconds.
The number $\frac{1}{10}$ equals
$\frac{1}{2^4} + \frac{1}{2^5} + \frac{1}{2^8} + \frac{1}{2^9} + \frac{1}{2^{12}} + \frac{1}{2^{13}} + \cdots$
In other words, the binary expansion of $\frac{1}{10}$ is
$0.00011001100110011001100110011\ldots_2$
Now the 24 bit register in the Patriot stored instead
$0.00011001100110011001100_2$
introducing an error of
$0.\underbrace{00\cdots0}_{23}\,11001100\ldots_2$
binary, or about $0.000000095$ decimal. Multiplying by the number of tenths of a second in $100$ hours gives
$0.000000095 \times 100 \times 60 \times 60 \times 10 = 0.34.$
A Scud travels at about $1676$ meters per second, and so travels more than half a kilometer in this time. This was far enough that the incoming Scud was outside the “range gate” that the Patriot tracked.
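The quoted numbers can be reproduced with exact rational arithmetic. In the sketch below (my own), I chop $\frac{1}{10}$ after 23 fractional bits to match the stored binary string printed above; the Scud speed of roughly 1676 m/s is the figure commonly cited for this incident.

```python
from fractions import Fraction

tenth = Fraction(1, 10)
# chop the binary expansion after 23 fractional bits (matching the stored
# register value quoted above)
stored = Fraction(int(tenth * 2**23), 2**23)
chop_error = tenth - stored                 # about 0.000000095
ticks = 100 * 60 * 60 * 10                  # tenths of a second in 100 hours
time_error = float(chop_error * ticks)
print(float(chop_error))                    # ~9.5e-08
print(time_error)                           # ~0.34 seconds
print(time_error * 1676)                    # ~575 meters: over half a kilometer
```
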
Ironically, the fact that the bad time calculation had been improved in some parts of the code, but not all, contributed to the problem, since it meant that the inaccuracies did not cancel.
The following paragraph is excerpted from the GAO report.
The range gate’s prediction of where the Scud will next appear is a function of the Scud’s known velocity and the time of the last radar detection. Velocity is a real number that can be expressed as a whole number and a decimal (e.g., 3750.2563…miles per hour). Time is kept continuously by the system’s internal clock in tenths of seconds but is expressed as an integer or whole number (e.g., 32, 33, 34…). The longer the system has been running, the larger the number representing time. To predict where the Scud will next appear, both time and velocity must be expressed as real numbers. Because of the way the Patriot computer performs its calculations and the fact that its registers are only 24 bits long, the conversion of time from an integer to a real number cannot be any more precise than 24 bits. This conversion results in a loss of precision causing a less accurate time calculation. The effect of this inaccuracy on the range gate’s calculation is directly proportional to the target’s velocity and the length of time the system has been running. Consequently, performing the conversion after the Patriot has been running continuously for extended periods causes the range gate to shift away from the center of the target, making it less likely that the target, in this case a Scud, will be successfully intercepted.
A quick note of clarification. To verify the binary expansion of $\frac{1}{10}$, we use the formula for an infinite geometric series:
$\left( \frac{1}{2^4} + \frac{1}{2^5} \right) \cdot \frac{1}{1 - \frac{1}{2^4}} = \frac{3}{32} \cdot \frac{16}{15} = \frac{1}{10}.$
OK, that verifies the answer. Still, a curious student may wonder how on earth one could directly convert $\frac{1}{10}$ into binary without knowing the above series ahead of time. I will address this question in a future post.
Far too often, students settle for a numerical approximation of a solution that can be found exactly. To give an extreme example, I have met quite intelligent college students who were convinced that $\pi$ was literally equal to the calculator display $3.141592654$.
That’s an extreme example of something that nearly all students do — round off a complicated answer to a fixed number of decimal places. In trigonometry, many students will compute an exact value by plugging into a calculator and reporting the first three to six decimal places. This is especially disappointing when there are accessible techniques for getting the exact answer without using a calculator at all.
Unfortunately, even maintaining eight, nine, or ten decimal places of accuracy may not be good enough, as errors tend to propagate as a calculation continues. I’m sure every math teacher has an example where the correct answer was exactly $\displaystyle\frac{3}{2}$ but students returned an answer like $1.499999$ or $1.500001$ because of roundoff errors.
Students may ask, “What’s the big deal if I round off to five decimal places?” Here’s a simple example — which can be quickly demonstrated in a classroom — of how such truncation errors can propagate. I’m going to generate a recursive sequence. I will start with $x_1 = \frac{1}{3}$. Then I will alternate multiplying by $1000$ and then subtracting $333$. More mathematically,
$x_{n+1} = 1000 x_n$ if $n$ is odd, and $x_{n+1} = x_n - 333$ if $n$ is even.
Here’s what happens exactly:
So, repeating these two steps, the sequence alternates between $\frac{1}{3}$ and $\frac{1000}{3}$.
But look what happens if I calculate the first twelve terms of this sequence on a calculator.
Notice that the terms of the sequence eventually become negative, which is clearly incorrect.
So what happened?
This is a natural by-product of the finite storage of a calculator. The calculator doesn’t store infinitely many digits of $\displaystyle \frac{1}{3}$ in memory because a calculator doesn’t possess an infinite amount of memory. Instead, what gets stored is something like the terminating decimal $0.33333333333333$, with about fourteen $3$s. (Of course, only the first ten digits are actually displayed.)
So multiplying by $1000$ and then subtracting $333$ produces a new and different terminating decimal with three fewer $3$s. Do this enough times, and you end up with negative numbers.
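The same collapse can be watched in double-precision floating point. The sketch below is my own reconstruction of the demonstration, reading the “three fewer 3s per round” clue as multiply by $1000$, then subtract $333$; IEEE doubles carry a few more digits than a ten-digit calculator, so the run lasts slightly longer, but the terms have still gone negative by the thirteenth term.

```python
x = 1.0 / 3.0                 # stored as a 53-bit binary approximation
terms = [x]
for n in range(2, 14):
    x = 1000.0 * x if n % 2 == 0 else x - 333.0
    terms.append(x)
for i, t in enumerate(terms, start=1):
    print(f"x{i} = {t}")      # the stored error grows 1000-fold per pair of steps
```

Each multiply-subtract pair magnifies the initial representation error by a factor of $1000$, which is exactly the mechanism behind the Patriot missile failure described in the next post.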
I’m in the middle of a series of posts concerning the elementary operation of computing a square root. This is such an elementary operation because nearly every calculator has a square-root button, and so students today are accustomed to quickly getting an answer without giving much thought to (1) what the answer means or (2) what magic the calculator uses to find square roots.
I like to show my future secondary teachers a brief history on this topic… partially to deepen their knowledge about what they likely think is a simple concept, but also to give them a little appreciation for their elders. Indeed, when I show this method to today’s college students, they are absolutely mystified that a square root can be extracted by hand, without the aid of a calculator.
To begin, let’s again go back to a time before the advent of pocket calculators… say, ancient Rome. (I personally love using Back to the Future for the pedagogical purpose of simulating time travel, but I already used that in the previous post.)
How did previous generations figure out square roots without a calculator? In the previous post, I introduced a trapping method that directly used the definition of a square root, obtaining one digit at a time. Here’s a second trapping method that’s significantly more efficient. As we’ll see, this second method works because of base-10 arithmetic and a very clever use of Algebra I. My understanding is that this procedure was a standard topic in the mathematical training of children as recently as 50 years ago.
Personally, I was taught this method when I was maybe 10 or 11 years old by my math teacher; I don’t doubt that she had to learn to extract square roots by hand when she was a student. Of course, this trapping method fell out of pedagogical favor with the advent of cheap pocket calculators.
I’ll illustrate this method again with the same example. After illustrating the method, I’ll discuss how it works using Algebra I.
1. To begin, we start from the decimal point and group digits in blocks of two. (If the number had had an odd number of digits, then the leading digit would have been in a group by itself.) I start with the first group, $42$. What perfect square is closest to $42$ without going over? Clearly, the answer is $36$. So, mimicking the algorithm for long division:
We’ll place a $6$ over the $42$, signifying that the answer is in the $60$s.
We’ll subtract $36$ from $42$, for an answer of $6$.
2. On the next step, we’ll do a couple of things that are different from ordinary long division:
We’ll bring down the next two digits, appending them to the remainder of $6$.
We’ll double the number currently on top and place the result to the side. In our case $6 \times 2 = 12$.
We’ll place a small blank after the $12$ and under the remainder, forming $12\underline{\ \ }$.
The basic question is: I need $12\underline{\ \ }$, with the blank filled by some digit, times that same digit to be as close to the remainder as possible without going over. I like calling this The Price Is Right problem, since so many games on that game show involve guessing a price without going over the actual price. For example…
$121 \times 1 = 121$: too small
$122 \times 2 = 244$: too small
$123 \times 3 = 369$: too small
$124 \times 4 = 496$: too small
$125 \times 5 = 625$: too big
Based on the above work, the next digit is $4$. We place the $4$ over the next block of digits and subtract $124 \times 4 = 496$ from the remainder. So we will carry the difference into the next step.
3. On the next step, we’ll do a couple of things that are different from ordinary long division:
We’ll bring down the next two digits. On this step, the next two digits are the first two zeroes after the decimal point, so we append $00$ to the remainder.
We’ll double the number currently on top and place the result to the side. In our case $64 \times 2 = 128$.
We’ll place a small blank after the $128$ and under the remainder, forming $128\underline{\ \ }$.
The basic question is: I need $128\underline{\ \ }$, with the blank filled by some digit, times that same digit to be as close to the remainder as possible without going over. For example…
$1281 \times 1 = 1281$: too small
$1282 \times 2 = 2564$: too small
$1283 \times 3 = 3849$: too small
$1284 \times 4 = 5136$: too small
$1285 \times 5 = 6425$: too small
$1286 \times 6 = 7716$: too small
$1287 \times 7 = 9009$: too small
$1288 \times 8 = 10304$: too small
$1289 \times 9 = 11601$: still too small
Based on the above work, the next digit is $9$. We place the $9$ over the next block of digits and subtract $1289 \times 9 = 11601$ from the remainder. So we will carry the difference into the next step.
Then, to quote The King and I, et cetera, et cetera, et cetera. Each step extracts an extra digit of the square root. With a little practice, one gets better at guessing the correct value of the next digit.
A personal story: when I was a teenager and too cheap to buy a magazine, I would extract square roots to kill time while waiting in the airport for a flight to start boarding. My parents hated missing flights, so I was always at the gate with plenty of time to spare… and I could extract about 20 digits of a square root while waiting for the boarding announcement.
So why does this algorithm work? I offer a thought bubble if you’d like to think about it before I give the answer.
P.S. In case anyone complains, the people of ancient Rome could not have performed this algorithm since they used Roman numerals and not a base 10 decimal system.
To see why this works, let’s consider the first two steps of finding the square root of a number $N$. Clearly, the answer lies between $60$ and $70$ somewhere (that was Step 1). So the basic problem is to solve for $x$ if
$\sqrt{N} = 60 + x$,
where $x$ is the excess amount over $60$. Squaring, we obtain
$N = 3600 + 120x + x^2$,
or
$N - 3600 = 120x + x^2$,
or
$(120 + x)x = N - 3600$.
Notice that the right-hand side is $N - 3600$, which was the remainder obtained at the start of Step 2. The left-hand side has the form $12\underline{\ \ }$ times the same blank digit, which was the key part of completing Step 2. So the value of $x$ that gets the left-hand side as close to $N - 3600$ as possible (without going over) will be the next digit in the decimal representation of $\sqrt{N}$.
The logic for the remaining digits is similar.
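The Price Is Right step has a compact algebraic form: if $a$ is the root found so far (ignoring the decimal point), the trial products like $12\underline{\ \ } \times \underline{\ \ }$ are exactly $(20a + d)\,d$. That observation turns the whole procedure into a short program; the sketch below is my own.

```python
def sqrt_digits(n, places):
    """Digit-by-digit square root of a positive integer n: for each block
    of two digits, pick the largest d with (20*root + d)*d <= remainder."""
    digits = str(n)
    if len(digits) % 2:
        digits = "0" + digits                  # group in blocks of two
    blocks = [digits[i:i + 2] for i in range(0, len(digits), 2)]
    blocks += ["00"] * places                  # blocks past the decimal point
    root, rem, out = 0, 0, []
    for b in blocks:
        rem = rem * 100 + int(b)               # bring down two digits
        d = 9
        while (20 * root + d) * d > rem:       # The Price Is Right step
            d -= 1
        rem -= (20 * root + d) * d
        root = root * 10 + d
        out.append(str(d))
    split = len(blocks) - places
    return "".join(out[:split]) + "." + "".join(out[split:])

print(sqrt_digits(2, 8))    # 1.41421356
print(sqrt_digits(42, 4))   # 6.4807
```

Note that $20a + d$ is “double the number on top, then append the blank digit,” which is exactly the rule used in Steps 2 and 3 above.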
I should mention that third roots, fourth roots, etc. can (in principle) be found using algebra to find excess amounts. However, it’s quite a bit more work for these higher roots. For example, to find the cube root of a number $N$ a little more than $1000$, we immediately see that $10^3 = 1000$, so that the answer lies between $10$ and $11$. To find the excess amount over 10, we need to solve
$(10 + x)^3 = N$,
which reduces to
$300x + 30x^2 + x^3 = N - 1000$.
So we then try out values of $x$ so that the left-hand side gets as close to $N - 1000$ as possible without going over.
In closing, in honor of this method, here’s a great compilation of clips from The Price Is Right when the contestant guessed a price that was quite close to the actual price without going over.