I recently read the delightful blog post ChatGPT Is Not Ready to Teach Geometry (Yet), with the wonderful sub-headline “The viral chatbot is often wrong, but never in doubt. Educators need to tread carefully.” Many thanks to the article AI Bot ChatGPT Needs Some Help With Math Assignments in the Wall Street Journal for directing me to this post. Both articles are quoted at length below; I recommend reading each in full.
In case you’ve been on the moon for the past few months: much digital ink has been spilled about how ChatGPT will affect education. From the blog post:
On HITC, Ellissa Bain described how ChatGPT could “Write Your Papers in Seconds,” explaining that this is “great for students” and “examiners will never know.” It’s not at all great for student learning, but it can definitely fool examiners; a team at the New York Times Upshot confirmed that a small group of experts—including a teacher, a professor, a tutor, and a famous children’s author—often failed to distinguish between ChatGPT’s output and the writing of actual fourth and eighth graders in response to essay prompts from the National Assessment of Educational Progress. In The Atlantic, teacher Daniel Herman pronounced “The end of high-school English,” explaining that “what GPT can produce right now is better than the large majority of [student] writing.” ChatGPT could compare Hamlet to Beloved, illustrate Buddhist ideas using a pet hamster, and write a convincing essay describing how its (fabricated) experience volunteering at a pet shelter had prepared it for success at Stanford. The bot’s ability to fabricate an admissions essay raises the question of whether colleges that no longer require standardized tests will need to drop the essay requirement as well.
Unfortunately, as Paul von Hippel, an associate professor at the University of Texas, discovered, ChatGPT isn’t so great at the Pythagorean theorem. From the blog post:
What are some common misunderstandings of Pythagoras’ theorem?
ChatGPT responded with four examples, which I’ll take just slightly out of order. The first three were howlers:
Some common misunderstandings about Pythagoras’ theorem include:
- Believing that it only works for right triangles. While Pythagoras’ theorem is most commonly used to find the length of a missing side in a right triangle, it can also be used to determine whether a triangle is a right triangle. If the lengths of the sides of a triangle satisfy the formula of Pythagoras’ theorem (i.e., if the square of the length of the hypotenuse is equal to the sum of the squares of the other two sides), then the triangle is a right triangle.
- Believing that it only works for right triangles with a 90 degree angle: Pythagoras’ theorem is specifically designed for right triangles, which have one 90 degree angle. However, it is possible for a triangle to have two acute angles (angles less than 90 degrees) and still be a right triangle.
- Believing that it only works for right triangles with legs that are perpendicular to each other: In a right triangle, the sides that are not the hypotenuse are called the legs. While it is true that the legs of a right triangle are typically perpendicular to each other, this is not a requirement for the theorem to be applicable. As long as a triangle has one 90 degree angle, Pythagoras’ theorem can be used to find the length of a missing side.
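The one correct idea buried in that first “misunderstanding” — using the converse of the theorem to test whether three side lengths form a right triangle — takes only a few lines to state precisely. Here is a minimal sketch (the function name is mine, not from either article):

```python
import math

def is_right_triangle(a, b, c, tol=1e-9):
    """Converse of the Pythagorean theorem: a triangle is right
    if and only if the square of its longest side equals the sum
    of the squares of the other two."""
    x, y, z = sorted((a, b, c))  # z is the candidate hypotenuse
    return math.isclose(x * x + y * y, z * z, rel_tol=tol)

print(is_right_triangle(3, 4, 5))  # True
print(is_right_triangle(2, 3, 4))  # False
```

Note the floating-point tolerance: an exact `==` comparison would misclassify triangles whose side lengths aren’t representable exactly in binary.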
It gets better. Dr. von Hippel then asked the following:
Suppose the hypotenuse of a right triangle is 7.1 miles long, and one of the legs is 3.85 miles long. How long is the other leg?
ChatGPT returned a wrong answer: 5.93 miles; the correct answer is about 5.97 miles. (See the blog post for more on this error.)
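For the record, the correct answer is a one-liner — one reason math teachers aren’t losing sleep over this particular chatbot:

```python
import math

# Hypotenuse 7.1 miles, one leg 3.85 miles; the other leg
# follows directly from the Pythagorean theorem.
hypotenuse = 7.1
leg = 3.85
other_leg = math.sqrt(hypotenuse**2 - leg**2)
print(round(other_leg, 2))  # 5.97, not ChatGPT's 5.93
```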
Dr. von Hippel then, with a simple typo, inadvertently asked ChatGPT to solve a triangle that can’t be solved:
I wondered if it would recognize a right triangle if I described it indirectly. So I started my next question:
Suppose a triangle has three sides called A, B, and C. A is 7 inches long and B is 7 inches long. The angle between A and C is 45 degrees, and so is the angle between A and B. What is the length of side C?
This was a typo; the 45-degree angle was placed between the wrong two sides. Nevertheless ChatGPT gave an answer:
Since the angle between A and B is 45 degrees, and the angle between A and C is also 45 degrees, the triangle is an isosceles right triangle, where A and B are the legs and C is the hypotenuse….
Dr. von Hippel’s conclusion:
This doesn’t make sense. If A and B are the legs of a right triangle, the angle between them can’t be 45 degrees; it has to be 90. ChatGPT went ahead and calculated the length of C using Pythagoras’ theorem, but it had revealed something important: it didn’t have a coherent internal representation of the triangle that we were talking about. It couldn’t visualize the triangle as you or I can, and it didn’t have any equivalent way to catch errors in verbal descriptions of visual objects.
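The inconsistency is easy to check mechanically. As typed, the two 45° angles both sit at the ends of side A, so the angle between B and C must be 90°, making A the side opposite the right angle. The law of sines then forces B = A·sin(45°)/sin(90°) ≈ 4.95, contradicting the stated B = 7. A quick check (my own, not from the blog post):

```python
import math

# Sides A, B, C; the angle between two sides lies opposite the third side.
A, B = 7.0, 7.0
angle_AC = math.radians(45)               # opposite side B
angle_AB = math.radians(45)               # opposite side C
angle_BC = math.pi - angle_AC - angle_AB  # opposite side A; 90 degrees here

# Law of sines: side / sin(opposite angle) must be the same for all sides.
B_implied = A * math.sin(angle_AC) / math.sin(angle_BC)
print(round(B_implied, 2))  # ~4.95, but the problem states B = 7
```

A solver with any internal model of the triangle would flag this contradiction instead of plowing ahead with a calculation.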
In short, ChatGPT doesn’t really “get” basic geometry. It can crank out reams of text that use geometric terminology, but it literally doesn’t know what it is talking about. It doesn’t have an internal representation of geometric shapes, and it occasionally makes basic calculation errors…
What is ChatGPT doing? It is bloviating, filling the screen with text that is fluent, persuasive, and sometimes accurate—but it isn’t reliable at all. ChatGPT is often wrong but never in doubt.
The Wall Street Journal article cited above provides some more howlers. (The screenshots of those exchanges don’t reproduce here; see the article for a couple of examples.)
So what to make of all this? I like this conclusion from the Wall Street Journal:
Another reason that math instructors are less fussed by this innovation is that they have been here before. The field was upended for the first time decades ago with the general availability of computers and calculators.
Whereas English teachers are only now worrying about computers doing their students’ homework, math teachers have long wrestled with making sure students were actually learning and not just using a calculator. It’s why students have to show their work and take tests on paper.
The broader lesson is that AI, computers and calculators aren’t simply a shortcut. Math tools require math knowledge. A calculator can’t do calculus unless you know what you’re trying to solve. If you don’t know any math, Excel is just a tool for formatting tables with a lot of extra buttons.
Eventually, artificial intelligence will probably get to the point where its mathematics answers are not only confident but correct. A pure large language model might not be up for the job, but the technology will improve. The next generation of AI could combine the language skills of ChatGPT with the math skills of Wolfram Alpha.
In general, however, AI, like calculators and computers, will likely ultimately be most useful for those who already know a field well: They know the questions to ask, how to identify the shortcomings and what to do with the answer. A tool, in other words, for those who know the most math, not the least.