In each ‘Down the AI Rabbit Hole’ post I aim to provide a tip on AI and then a broader perspective. This week, I am struggling because the two are blending together. I will start with a jaw-dropping tip I discovered a few days ago, then turn to its implications and the wider lens.
Listen to Down the AI Rabbit Hole Part 1: Consider What Else ChatGPT Can Do For You
Listen to Down the AI Rabbit Hole Part 2: It Can’t Count but it CAN Code
This week’s tip:
The question you see below is a staple in the early weeks of a calculus course. You do not need to understand the math, but know this: it’s a question I’ve posed for over 25 years, and invariably, some students will resort to the incorrect reasoning presented.
When I submitted this question to ChatGPT 4 (the subscription version) with the prompt “Here is a question from my calculus test and my answer. Why is my answer wrong?”, the response was not only correct but also pitched at precisely the right level, neatly formatted, and cleverly offered a hint rather than the full solution.
Let’s not even discuss how it was able to read my handwriting. In the past, I’ve had students ask why I write “ONE” at the end of some computations, mistakenly reading my scrawled “DNE.” ChatGPT can clearly read my handwriting.
So the tip? Consider exploring what else ChatGPT can do for you. As we pivot to the broader perspective, remember that generative AI represents a ‘Jagged Technological Frontier,’ and that we are underusing it, like playing 1980s Pac-Man on a yet-to-be-released PS6.
A broader perspective
This past weekend, I was the keynote speaker at a mathematics conference in Minnesota, where one of my talks focused on AI. Tailoring the discussion to the audience, I highlighted specific AI applications in mathematics, including the aforementioned example.
Why don’t they just learn LaTeX! 😡
During the presentation, I shared the story of a colleague at a local high school who had to take over someone else’s calculus class for the rest of the semester. The previous instructor left a binder of printed handout notes, but the electronic files were in a dated system not used by mathematicians. My colleague scanned the handouts, uploaded the images to ChatGPT, and had it convert them to LaTeX, a mathematical typesetting system, bypassing the need to retype extensive notes manually.
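To give a sense of what such a conversion produces, here is a hypothetical fragment of the kind of LaTeX output you might get from a scanned calculus handout. This is my own illustration, not my colleague’s actual notes:

```latex
% Hypothetical fragment of converted handout notes -- an illustration,
% not the colleague's actual files.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\textbf{Example.} Evaluate each limit, or state that it does not exist (DNE):
\[
  \lim_{x \to 0} \frac{\sin x}{x} = 1,
  \qquad
  \lim_{x \to 0} \frac{|x|}{x} \ \text{DNE}.
\]
\end{document}
```

Once the notes are in this form, editing them to fit a new class is a matter of minutes, not hours of retyping.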
An assistant professor from a nearby university interjected, somewhat heatedly, “Why don’t they just learn LaTeX!” I clarified that my colleague was proficient in LaTeX; the AI simply expedited the conversion process, allowing for easy editing to tailor the content to their students’ needs.
Striking a nerve
Why did this faculty member respond in this way? After the presentation, the faculty member came to me and said, “I want to apologize for my visceral outburst.” They shared their apprehension about this new technology potentially undermining their teaching methods. However, they acknowledged the need to engage with these tools more openly after hearing the rest of my talk.
Pac-Man on a PS6
I get it. The apprehension is understandable. Technology like this can feel threatening to our educational identities. But the reality is that AI is here to stay. Even if development were halted, the existing models are already remarkably capable, and we’re only scratching the surface of their potential. Ethan Mollick analogizes this situation to having PlayStation 6 technology but using it to play 1980s Pac-Man; we’re not fully utilizing the capabilities at our disposal. And keep in mind, this is the dumbest generative AI will ever be.
Yeah, but it can’t count!
While AI can do jaw-dropping things, it can also be quite stupid. For example, early users are often stymied when they ask it to create a 500-word essay and it produces maybe 300 words or so. And if you just copied and pasted the prompt (cough, cough), the essay will not be that great. Recall this piece from Marc Watkins.
But why only 300 words? You see, as a Large Language Model designed to predict the next word in a sequence, ChatGPT keeps no running tally of what it has written. It cannot count! Seriously? How absurd!
Yes, but it can code 😲
Have you ever seen those little four-by-four sliding block puzzles? For the math conference, I wanted to see if I could code a similar game for my area of research, knot theory. This is something a number of my research students, even the ones in computer science, were unable to do. In about two minutes, ChatGPT created working Python code that did what I asked! The code was about two pages, relatively short. When I asked ChatGPT how many lines of code it created, it wrote another Python program to tell me the block puzzle code took 74 lines!
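ChatGPT’s actual output was specific to knot theory, but a plain four-by-four sliding puzzle gives the flavor of the task. Below is a minimal sketch of such a puzzle in Python. This is my own illustration, not the generated code, and the function names (`find_blank`, `legal_moves`, `slide`, `is_solved`) are my choices:

```python
# A minimal sketch of a 4x4 sliding-block ("15") puzzle.
# The board is a flat list of 16 entries; 0 is the empty square.

def find_blank(board):
    """Return the index of the empty square."""
    return board.index(0)

def legal_moves(board):
    """Return the tiles adjacent to the blank (the tiles that can slide)."""
    b = find_blank(board)
    row, col = divmod(b, 4)
    moves = []
    if row > 0:
        moves.append(board[b - 4])  # tile above the blank
    if row < 3:
        moves.append(board[b + 4])  # tile below the blank
    if col > 0:
        moves.append(board[b - 1])  # tile left of the blank
    if col < 3:
        moves.append(board[b + 1])  # tile right of the blank
    return moves

def slide(board, tile):
    """Slide `tile` into the blank square, returning a new board."""
    if tile not in legal_moves(board):
        raise ValueError(f"tile {tile} is not adjacent to the blank")
    new = board[:]
    b, t = new.index(0), new.index(tile)
    new[b], new[t] = new[t], new[b]
    return new

def is_solved(board):
    """Solved state: tiles 1..15 in order, blank in the last square."""
    return board == list(range(1, 16)) + [0]
```

Even a toy version like this takes a careful human coder a little while to get right, which is what made the two-minute turnaround so striking.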
The Jagged Technological Frontier
What I am experiencing is what Harvard researchers call the Jagged Technological Frontier. After a large study with 758 consultants at the Boston Consulting Group, the authors suggest that AI creates a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty, are outside its current capability.
This is something we are not used to. We would intuitively expect that a technology that can write functioning Python code from a simple user query could easily count. But for the moment, this is not the case. Nevertheless, we must continue to push boundaries to discover what AI can truly achieve and to understand its limitations.
The visceral comment
So, what about the attendee with the visceral comment during my first talk? They approached me after my subsequent talk, which included the coding example, intrigued about how they might apply this to their own research. It was exactly the tool they had been looking for, but they didn’t have the coding skills to move the project forward!
Interested in trying your hand at writing Python with ChatGPT? Check out my column There and Back Again: Traversing the Python Landscape for Applets. I offer a straightforward “how-to-try” guide. No coding experience? No problem. All you need is ChatGPT 4 and a Google account.