Down the AI Rabbit Hole: Writing with AI and Fostering a Playful Mindset

In this week’s ‘Down the AI Rabbit Hole,’ I share insights from Dr. Dominic Ashby’s recent Denison session, “Writing with AI: Building a Constructive Culture.” Plus, I tackle a curious puzzle about generative AI adoption: Why do 60% of faculty steer clear of using it?

Listen to Down the AI Rabbit Hole Part 1: Tips on Writing with AI
Listen to Down the AI Rabbit Hole Part 2: Fostering a Playful Mindset

This week’s tips:
During his recent presentation, “Writing with AI: Building a Constructive Culture,” Dr. Dominic Ashby from Eastern Kentucky University offered several practical tips that almost anyone can use. First, he noted that while AI is good at many tasks, it cannot replace the sophisticated databases and resources available through our library. However, when students try to use those resources, their search skills often fall short.

Tip one: Use AI to create search prompts for respected databases
Do you have a topic you want to research? Use generative AI to generate search prompts for it! Not only will it give you a more complete list of search terms, it will also supply the appropriate Boolean connectors. For example:

Search terms for “Long-Term Consequences of Early Childhood Inequality”:

  • Social class AND early childhood development
  • Socioeconomic status (SES) AND childhood experiences
  • Inequality AND child development outcomes
  • Classism AND children’s well-being (or mental health)
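
If you would rather script this step than work in a chat window, the sketch below shows one way it might look, assuming the openai Python package and an OpenAI-compatible chat API; the model name, prompt wording, and topic are my own illustrative choices, not something from Dominic’s session.

```python
# A minimal sketch of Tip one, assuming the openai Python package and an
# OpenAI-compatible chat API. The model name, prompt wording, and topic are
# illustrative assumptions, not part of the original presentation.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

topic = "long-term consequences of early childhood inequality"

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whichever model you have access to
    messages=[
        {
            "role": "user",
            "content": (
                f"Suggest five library-database search strings for the topic '{topic}'. "
                "Use Boolean connectors (AND, OR) and list one search string per line."
            ),
        }
    ],
)

# The reply is a ready-to-paste list of Boolean search strings.
print(response.choices[0].message.content)
```

The same prompt works verbatim in any chat interface: name your topic and ask for database-ready search strings with Boolean connectors.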

Tip two: Reverse outlining
Want some objective feedback on your paper? Send it to AI and have it generate an outline of the piece. Does the outline reflect everything you intended to cover?
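
For those who like to automate the round trip, here is a minimal sketch of reverse outlining with the openai Python package; the draft file name, model, and prompt wording are assumptions for illustration only.

```python
# A minimal sketch of Tip two (reverse outlining), assuming the openai Python
# package; the draft file name, model, and prompt are illustrative assumptions.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

draft = Path("my_draft.txt").read_text(encoding="utf-8")  # hypothetical draft file

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[
        {
            "role": "user",
            "content": (
                "Create a reverse outline of the draft below: state the main point "
                "of each paragraph, in order, as one bullet per paragraph.\n\n" + draft
            ),
        }
    ],
)

# Compare this outline against what you intended the paper to cover.
print(response.choices[0].message.content)
```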

Tip three: Have AI critique your work via a rubric
This is my own take on Dominic’s tip above. I am writing a workshop proposal for the 2024 POD conference, so I uploaded the proposal call and rubric to AI and then had it assess my proposal against the guidelines and rubric.
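
The same pattern can be scripted, as in the sketch below using the openai Python package; the file names, model, and prompt wording are hypothetical stand-ins rather than the exact steps I took.

```python
# A minimal sketch of Tip three (rubric-based critique), assuming the openai
# Python package; file names, model, and prompt wording are hypothetical.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

proposal = Path("workshop_proposal.txt").read_text(encoding="utf-8")  # hypothetical file
rubric = Path("proposal_call_and_rubric.txt").read_text(encoding="utf-8")  # hypothetical file

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[
        {
            "role": "user",
            "content": (
                "Assess the proposal against the rubric below. For each rubric criterion, "
                "give a brief rating and one concrete suggestion for improvement.\n\n"
                f"RUBRIC AND GUIDELINES:\n{rubric}\n\nPROPOSAL:\n{proposal}"
            ),
        }
    ],
)

print(response.choices[0].message.content)  # criterion-by-criterion feedback
```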

You can find these and other useful approaches in the video recording and in his slide deck, which is replete with links.

Why do 60% of faculty steer clear of using AI?
Here we are, over 16 months since the release of ChatGPT, and 60% of the 120 faculty enrolled in my upcoming workshop report they’ve used generative AI ten times or fewer. The workshop is part of AI Week at a large state university, where I will be presenting “Beyond the Basics: Practical Applications of Generative AI in STEM Teaching” to STEM faculty. But why is the 60% figure so significant?

Why is this number so sticky?
Over the past year, I’ve led numerous AI workshops for more than a thousand faculty members across various institutions and disciplines, and the share reporting minimal use remains stubbornly around 60%. Why does this number refuse to shift? In this piece, I argue that some faculty members have developed a false sense of security: a few experimented with the free version of AI, which is somewhat limited, and now believe they are “safe from AI.” As Ethan Mollick has recently pointed out:

The gap between what experienced AI users know about what AI can do and what inexperienced users assume is a real and growing one. Many would be surprised by the true capabilities of even the current AI systems, and as a result, will be less prepared for future advancements.

Generative AI and the Faculty Identity Crisis
While I was discussing this persistent 60% with a colleague, my son, Bjorn, suggested a new angle. Throughout this technological evolution, I’ve encouraged faculty to adopt this new technology to enhance their teaching and support student learning. Bjorn noted that for many, generative AI poses an existential crisis. Faculty members’ identities are deeply intertwined with their teaching roles, and the thought of incorporating generative AI, which could potentially undermine their position, raises the question: Why engage with technology that might replace me?

Play as a Pathway to Understanding AI
This perspective was eye-opening. On one hand, why should faculty engage with technology that could undermine their value as educators? On the other, how do we bridge Mollick’s widening gap between those who understand and exploit AI’s capabilities and those who do not? How do we encourage faculty to work with this perceived threat in a way that doesn’t feel threatening?


The answer: play. Mollick suggested early on that to truly understand generative AI and its capabilities, you need approximately 10 hours of playtime with it. His suggestion is fundamentally sound, but it can seem daunting. Personally, I struggle with tutorials; I need a goal and a deadline to learn something new, like a piece of software or an app. And what exactly does “10 hours of play” mean? Countless studies emphasize the significance of play in child development, yet we wouldn’t simply turn a group of toddlers loose for 10 hours with instructions to ‘go play.’

A Non-Threatening Approach to AI
I’m not implying that we faculty are akin to toddlers, yet we do require some guidance. My colleague Michael Reder of Connecticut College and I have crafted a four-part, unintimidating approach to interacting with AI:

  1. The Mechanics of Generative AI. Objective: Grasp the fundamentals of generative AI, including neural networks and machine learning.
  2. Exploring the Capabilities of Generative AI. Objective: Enhance understanding of how generative AI can inspire creativity and innovation.
  3. When Can Things Go Wrong? Objective: Identify common AI pitfalls, such as ‘hallucinations,’ anthropomorphism, and biases, to promote a realistic view of AI’s capabilities and limitations.
  4. Navigating the Future with AI. Objective: Develop insight into the evolving AI landscape, anticipate future developments, and learn how to stay informed and adaptable.

Each session will feature playful activities, such as using an AI tool to generate a review, an alternative ending, or a character analysis for a favorite book, TV series, or movie, and then comparing the result with your own interpretation. There will also be engaging follow-up ‘homework’ assignments, such as creating a series of trading cards for the characters.

Fostering a Playful Mindset
The heart of the workshop series is to cultivate a space where engagement with generative AI is not just informative but also enjoyable—a place where play is not a diversion but a pathway to mastery. We stand firm in our conviction that the essence of teaching cannot be supplanted by technology; instead, we see AI as a means to augment the irreplaceable human element that is the hallmark of education. Our goal is to transform trepidation into curiosity and apprehension into strategy, ensuring that faculty remain at the forefront of pedagogical innovation. By the end of our series, we aim to have equipped our faculty not only with a deeper understanding of AI’s capabilities but also with the confidence to use these tools in ways that complement and elevate their innate role as educators.

Join the Denison Workshop
If you are interested in the Denison version of this workshop, please look for the announcement and sign-up in next week’s email from the CfLT.