In this week’s edition of ‘Down the AI Rabbit Hole,’ we explore a new AI-powered search engine that, in some instances, can replace Google. I also give an update on my AI Literacy experiment in my calculus class.
This week’s tip:
You might remember the buzz from last year when Kevin Roose, a columnist for the New York Times, had a bizarre exchange with Microsoft’s generative AI chatbot, which attempted to persuade him to leave his wife. Roose has entered the AI conversation again, this time with an endorsement of Perplexity. Denison’s Educational Technology Service (ETS) highlighted this tool in the last TTT. If you need another reason to try it, in a recent piece in the Times, Roose claims this AI-driven search engine has replaced Google for him – a pretty tall order given Google’s dominance in search.
When you ask a question, Perplexity doesn’t give you back a list of links. Instead, it scours the web for you and uses AI to write a summary of what it finds. These answers are annotated with links to the sources the AI used, which also appear in a panel above the response.
I subscribed to Perplexity Pro, but as of this writing, I haven’t had much of a chance to test it. Still, given the ETS shout-out and the hype from the NYT, it might be worth a look.
Broader perspective: A not-terribly-scientific survey
As I mentioned in my last post, three days a week I teach calculus in my Math 130 course, and on the fourth day, my students and I are attempting to develop an AI Literacy overlay course. After a recent meeting with Denison’s AI working group, it became apparent that we need to involve students in the conversation around AI. To be clear, this is a scary prospect because, unlike in our respective disciplines where we hold expertise, for many of us, our students may know as much about AI as we do, or more.
Regardless, they need to be in the conversation around AI. After this recent meeting, I realized I had set the agenda for my students – mapping out our 14 weeks of instruction and topics without their input. Pot, the kettle just called…
I paused my plans this week and asked my students for their input. To better understand where things stood, I polled them about the AI guidelines in their current courses.
For each of your classes, indicate whether:
- AI is not allowed
- AI is allowed, but only for certain uses
- AI is allowed for almost all uses
- No guidelines were given for the use of AI
The chart above shows the responses from 20 students across 87 classes. As I noted, this survey was created on the fly and was not very scientific. I did not account for overlap, so all 20 students would have responded “allowed for almost all uses” for their course with me. Even with this skewing, in over 50% of their classes, AI is either forbidden or no guidelines were given. Again, let’s take this with a grain of salt: maybe you discussed AI in your syllabus, but the students didn’t read it or don’t recall your guidelines.
Nevertheless, now is a good time to remind your students of your AI policy and your reasons for it. I use Ryan Watkins’ piece to help share my expectations about AI in my course. Even though his advice is from last July (eons in the AI world), it is still sound – particularly his recommendation for ongoing discussions with our students about AI, given its swift evolution.
If you don’t like Watkins’ approach, consider this: I asked my students, “What would you like Denison to do with regard to AI?” (Did I mention I created these questions at the start of class?) The response was clear: students are seeking guidance. They want to know, “How can I use AI, when should I not use AI, and why?” As the graph above illustrates, navigating this new terrain and understanding our expectations pose significant challenges for our students. As we enter the fourth week of the semester, I encourage you to check in with your students about AI. Remind them of your policy and explain your rationale. As the science of learning tells us, a little distributed practice is good!