The Rocket Scientist Quarterbacking an Unlikely NFL Playoff Contender

I greatly enjoyed this Wall Street Journal article about Joshua Dobbs, currently the quarterback of the Minnesota Vikings. The opening paragraphs:

When quarterback Joshua Dobbs subbed in for the Minnesota Vikings last week and led them to a dramatic victory just days after they traded for him, it amazed his teammates whose names he barely knew when he stepped onto the field. 

It also left his former colleagues dumbfounded—which isn’t exactly easy to do considering they’re rocket scientists. 

“The quickness that he absorbed that playbook is astounding,” says Scott Colloredo, NASA’s Director of Engineering at Florida’s Kennedy Space Center.

Before Dobbs was a journeyman-turned-sensation for the Vikings, he majored in aerospace engineering at the University of Tennessee, where he was the rare SEC football player to take grueling science classes while also preparing to play in the NFL. Even more remarkably, Dobbs has kept up with the science while making millions of dollars in the pros: he has spent two offseasons moonlighting at NASA, where his bosses give him rave reviews and say he was just like any other engineer working on the Artemis program, except for how he made the football fans in the office giddy with excitement. 

His co-workers these days are even happier to have him in their orbit. 

ChatGPT and mathematics

I recently read the delightful blog post ChatGPT Is Not Ready to Teach Geometry (Yet), with the wonderful sub-headline “The viral chatbot is often wrong, but never in doubt. Educators need to tread carefully.” Many thanks to the article AI Bot ChatGPT Needs Some Help With Math Assignments in the Wall Street Journal for directing me to this post. Both of these articles are cited at length below; I recommend both.

In case you’ve been on the moon for the past few months, much digital ink has been spilled about how ChatGPT will affect education. From the blog post:

On HITC, Ellissa Bain described how ChatGPT could “Write Your Papers in Seconds,” explaining that this is “great for students” and “examiners will never know.” It’s not at all great for student learning, but it can definitely fool examiners; a team at the New York Times Upshot confirmed that a small group of experts—including a teacher, a professor, a tutor, and a famous children’s author—often failed to distinguish between ChatGPT’s output and the writing of actual fourth and eighth graders in response to essay prompts from the National Assessment of Educational Progress. In The Atlantic, teacher Daniel Herman pronounced “The end of high-school English,” explaining that “what GPT can produce right now is better than the large majority of [student] writing.” ChatGPT could compare Hamlet to Beloved, illustrate Buddhist ideas using a pet hamster, and write a convincing essay describing how its (fabricated) experience volunteering at a pet shelter had prepared it for success at Stanford. The bot’s ability to fabricate an admissions essay raises the question of whether colleges that no longer require standardized tests will need to drop the essay requirement as well.

Unfortunately, as Paul von Hippel, an associate professor at the University of Texas, discovered, ChatGPT isn’t so great at the Pythagorean theorem. From the blog post:

What are some common misunderstandings of Pythagoras’ theorem?

ChatGPT responded with four examples, which I’ll take just slightly out of order. The first three were howlers:

Some common misunderstandings about Pythagoras’ theorem include:

  1. Believing that it only works for right triangles. While Pythagoras’ theorem is most commonly used to find the length of a missing side in a right triangle, it can also be used to determine whether a triangle is a right triangle. If the lengths of the sides of a triangle satisfy the formula of Pythagoras’ theorem (i.e., if the square of the length of the hypotenuse is equal to the sum of the squares of the other two sides), then the triangle is a right triangle.
  2. Believing that it only works for right triangles with a 90 degree angle: Pythagoras’ theorem is specifically designed for right triangles, which have one 90 degree angle. However, it is possible for a triangle to have two acute angles (angles less than 90 degrees) and still be a right triangle.
  3. Believing that it only works for right triangles with legs that are perpendicular to each other: In a right triangle, the sides that are not the hypotenuse are called the legs. While it is true that the legs of a right triangle are typically perpendicular to each other, this is not a requirement for the theorem to be applicable. As long as a triangle has one 90 degree angle, Pythagoras’ theorem can be used to find the length of a missing side.

It gets better. Dr. von Hippel then asked the following:

Suppose the hypotenuse of a right triangle is 7.1 miles long, and one of the legs is 3.85 miles long. How long is the other leg?

ChatGPT returned a wrong answer: 5.93 miles. (See the blog post for more on this error.)
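For the record, the correct answer follows directly from the theorem:

\[
\sqrt{7.1^2 - 3.85^2} = \sqrt{50.41 - 14.8225} = \sqrt{35.5875} \approx 5.97 \text{ miles}.
\]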

Dr. von Hippel then, with a simple typo, inadvertently asked ChatGPT to solve a triangle that can’t exist:

I wondered if it would recognize a right triangle if I described it indirectly. So I started my next question:

Suppose a triangle has three sides called A, B, and C. A is 7 inches long and B is 7 inches long. The angle between A and C is 45 degrees, and so is the angle between A and B. What is the length of side C?

This was a typo; the 45-degree angle was placed between the wrong two sides. Nevertheless ChatGPT gave an answer:

Since the angle between A and B is 45 degrees, and the angle between A and C is also 45 degrees, the triangle is an isosceles right triangle, where A and B are the legs and C is the hypotenuse….

Dr. von Hippel’s conclusion:

This doesn’t make sense. If A and B are the legs of a right triangle, the angle between them can’t be 45 degrees; it has to be 90. ChatGPT went ahead and calculated the length of C using Pythagoras’ theorem, but it had revealed something important: it didn’t have a coherent internal representation of the triangle that we were talking about. It couldn’t visualize the triangle as you or I can, and it didn’t have any equivalent way to catch errors in verbal descriptions of visual objects.

In short, ChatGPT doesn’t really “get” basic geometry. It can crank out reams of text that use geometric terminology, but it literally doesn’t know what it is talking about. It doesn’t have an internal representation of geometric shapes, and it occasionally makes basic calculation errors…

What is ChatGPT doing? It is bloviating, filling the screen with text that is fluent, persuasive, and sometimes accurate—but it isn’t reliable at all. ChatGPT is often wrong but never in doubt. 
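One way to see the contradiction in the mistyped prompt (this check is mine, not Dr. von Hippel’s): the two given 45-degree angles sit at the two endpoints of side A, so the angle between B and C must be

\[
180^\circ - 45^\circ - 45^\circ = 90^\circ.
\]

That makes A the hypotenuse, and the sides opposite the two equal 45-degree angles must themselves be equal, so C = B = 7 inches. But then the hypotenuse would have to be \( A = 7\sqrt{2} \approx 9.9 \) inches, contradicting the stated length of 7 inches.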

The Wall Street Journal article cited above provides some more howlers along these lines.

So what to make of all this? I like this conclusion from the Wall Street Journal:

Another reason that math instructors are less fussed by this innovation is that they have been here before. The field was upended for the first time decades ago with the general availability of computers and calculators.

Whereas English teachers are only now worrying about computers doing their students’ homework, math teachers have long wrestled with making sure students were actually learning and not just using a calculator. It’s why students have to show their work and take tests on paper.

The broader lesson is that AI, computers and calculators aren’t simply a shortcut. Math tools require math knowledge. A calculator can’t do calculus unless you know what you’re trying to solve. If you don’t know any math, Excel is just a tool for formatting tables with a lot of extra buttons.

Eventually, artificial intelligence will probably get to the point where its mathematics answers are not only confident but correct. A pure large language model might not be up for the job, but the technology will improve. The next generation of AI could combine the language skills of ChatGPT with the math skills of Wolfram Alpha.

In general, however, AI, like calculators and computers, will likely ultimately be most useful for those who already know a field well: They know the questions to ask, how to identify the shortcomings and what to do with the answer. A tool, in other words, for those who know the most math, not the least.

In Memoriam: Harry Lucas, Jr. (1932-2022)

I was saddened to read recently of the passing of Harry Lucas, Jr., a great proponent and benefactor of inquiry-based learning (IBL). In remembering his contributions to the mathematical community, I certainly won’t be able to surpass the eloquent words of Michael Starbird in the June-July issue of MAA Focus.

Instead, I’ll share a little bit about my own interactions with Mr. Lucas. My first administrative position at my university was founding co-director of Teach North Texas, a replication of the pioneering UTeach program at the University of Texas for preparing teachers of secondary mathematics and science. I first met Mr. Lucas at the annual UTeach conference, and, though I don’t remember how it came up, he personally encouraged me to submit a proposal to the Educational Advancement Foundation to fund equipment typically found in physics labs so that our university’s new Functions and Modeling course could get off the ground. Thanks to his generosity, hundreds of UNT students have experienced IBL firsthand early in their mathematical studies, often gaining an eye-opening new perspective on the way that mathematics “should” be taught. At subsequent conferences, Mr. Lucas always took a keen interest in how Teach North Texas was progressing and seemed delighted to hear of our successes.

In the words of Dr. Starbird, “Mr. Lucas is one of very few individuals whose personal vision, decisions, persistence, and encouragement have clearly improved the lives of thousands of students and teachers across the country.” Thank you, Mr. Lucas.

Mathematics is about wonder, creativity and fun, so let’s teach it that way

I enjoyed this opinion piece at phys.org about project-based instruction in mathematics. A sample quote:

Mathematician Jo Boaler from the Stanford Graduate School of Education says that a “wide gulf between real mathematics and school mathematics is at the heart of the math problems we face in school education.”

Of the subject of mathematics, Boaler notes: “Students will typically say it is a subject of calculations, procedures, or rules. But when we ask mathematicians what math is, they will say it is the study of patterns that is an aesthetic, creative, and beautiful subject. Why are these descriptions so different?”

She points out that the same gulf isn’t seen if people ask students and English-literature professors what literature is about.

In the process of constructing the RabbitMath curriculum, problems or activities are included when team members find them engaging and a challenge to their intellect and imagination. Following the analogy with literature, we call the models we are working with mathematical novels.


Students Find Glaring Discrepancy in US News Rankings

Despite its hopelessly flawed methodology, U.S. News & World Report continues to sell magazines with its lists of Top 25 or Top 100 universities in various categories. Some universities that don’t play along, like Reed College, have long suspected that their rankings are penalized. So I enjoyed this press release from Reed College about statistics students who reverse-engineered the rankings to measure the magnitude of this penalty. The results are startling: while Reed was officially ranked #90, the formula should have placed it at about #38. In one glaring example, the magazine underestimated the college’s financial resources by over 100 spots, even though it could have obtained this information from free government databases instead of relying on its own survey.
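The press release doesn’t include the students’ code, but the core technique is standard statistics: regress the published overall scores on the component metrics (most of which are publicly available) to recover the formula’s implicit weights, then compute the score and rank that each school “should” receive. Below is a minimal sketch of that approach; the file name, column names, and list of metrics are hypothetical, and the real methodology involves many more variables.

    # Hypothetical sketch: recover the implicit weights of a published
    # ranking by regression, then re-rank schools with the fitted formula.
    # The file and column names here are invented for illustration.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    df = pd.read_csv("rankings.csv")  # one row per college
    metrics = ["graduation_rate", "faculty_resources",
               "financial_resources", "peer_assessment"]

    # Regressing the published overall score on the component metrics
    # estimates the weight the formula assigns to each metric.
    model = LinearRegression().fit(df[metrics], df["overall_score"])
    print(dict(zip(metrics, model.coef_)))

    # Re-score every school with the recovered formula and compare the
    # implied rank with the published one.
    df["predicted_score"] = model.predict(df[metrics])
    df["implied_rank"] = df["predicted_score"].rank(ascending=False).astype(int)
    print(df[["college", "published_rank", "implied_rank"]])

With the weights recovered, one can also substitute a corrected value for a mis-measured input (such as Reed’s financial resources) and see how far the published rank drifts from what the formula implies.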

What’s bigger: 1/3 pound burgers or 1/4 pound burgers?

I recently enjoyed reading about a marketing campaign of the 1980s that failed in an unanticipated way. Here’s the money quote:

One of the most vivid arithmetic failings displayed by Americans occurred in the early 1980s, when the A&W restaurant chain released a new hamburger to rival the McDonald’s Quarter Pounder. With a third-pound of beef, the A&W burger had more meat than the Quarter Pounder; in taste tests, customers preferred A&W’s burger. And it was less expensive. A lavish A&W television and radio marketing campaign cited these benefits. Yet instead of leaping at the great value, customers snubbed it.

Only when the company held customer focus groups did it become clear why. The Third Pounder presented the American public with a test in fractions. And we failed. Misunderstanding the value of one-third, customers believed they were being overcharged. Why, they asked the researchers, should they pay the same amount for a third of a pound of meat as they did for a quarter-pound of meat at McDonald’s. The “4” in “¼,” larger than the “3” in “⅓,” led them astray.
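Putting the two fractions over a common denominator shows at a glance what the focus groups missed:

\[
\frac{1}{3} = \frac{4}{12} > \frac{3}{12} = \frac{1}{4}.
\]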

Here’s the article: https://gizmodo.com/whats-bigger-1-3-pound-burgers-or-1-4-pound-burgers-1611118517


Learning Math by Seeing It as a Story

I enjoyed this first-person piece about an English teacher who, by grim necessity, found herself thrust into the uncomfortable position of co-teaching trigonometry, and who drew on her training in English to better engage her students.

Some quotes:

My students struggled with the calculations, thinking they just weren’t good at math. Like me, they hated it. What was the point in working and reworking these calculations? What were we trying to figure out anyway? And I originally agreed with them.

Yet trig slowly became my favorite class of the day. After spending years teaching English and reading, I was being challenged to move beyond what I had always been doing. When you’re new to something, you have a fresh perspective. You’re willing to take risks. You’re willing to try anything because you don’t know how something should be done.

And:

I brought in some books from Chris Ferrie’s Baby University series—books like General Relativity for Babies and Optical Physics for Babies. The idea is that you don’t fully know something unless you can break it down so simply that you can explain it to a young child.

That’s the task I gave my students. We started by reading Ferrie’s board books to see how simple language and illustrations could be used to explain complex subjects. Next, students chose a multistep equation they had initially struggled with. Working in pairs or small groups, they talked through their thinking and the steps needed to solve the equation. Their partners were encouraged to ask questions and get clarification so the ideas were explained at the simplest level.

And:

I used story problems as an opportunity to connect math to students’ lives by creating fictional math-based stories. First, students would work in small groups to go through the chapter in their math textbook and collect the story problems, writing them on index cards. Next, students would lay out the cards to see the questions as a whole: Out of 10 or more story problems in the chapter, were there five similar ones they could group together? What problem-solving skills were called for to work on these problems?

When they used creative writing skills to develop math story problems about things they were interested in, students became more engaged. They wanted to read the other groups’ stories and work on the math in them because they had a real investment in the outcome. The stories helped students find motivation because they created an answer to the question “Why do we need to learn this?”