Restoring Trust in the Age of AI

Trust is a peculiar thing. It is not unique to humans—I think my dog trusts me—but it is clearly something that chatbots do not possess. In his recent piece in The Chronicle, “Why We Should Normalize Open Disclosure of AI Use,” Marc Watkins aptly notes that “Teaching is all about trust, which is difficult to restore once it has been lost.” Generative AI has created a rift in this trust between students and faculty. Are they using it to cheat? Will I be falsely accused? These questions have sparked uncertainty across the nation, uncertainty that can be particularly troubling at a place like Denison, where relationships are at the core of everything we do. To help restore this trust, Watkins argues that the disclosure of generative AI use should be normalized “… as a means of curbing the uncritical adoption of AI and restoring the trust between professors and students.”

Last fall, a colleague who directs the writing center at another small liberal arts college set up a process for students to report their use of generative AI in his writing seminar. Students were asked to submit their prompts, reflect on what they had used and learned, and provide sources to verify that the AI's output was valid. Alternatively, students could choose not to use AI in their writing. To his surprise, “no one was using AI.” When I pointed out the high documentation threshold he had set for AI disclosure and suggested that students might be using AI but simply not reporting it, the possibility gave him serious pause. It was a reminder of how important it is to ensure that our policies don't inadvertently discourage honesty.

This example brought Watkins’ words to mind:
“If we ridicule students for using generative AI openly by grading them differently, questioning their intelligence, or presenting other biases, we risk students hiding their use of AI.”

To curb this secrecy and restore trust, Watkins suggests that we—students, faculty members, and administrators—normalize the disclosure of AI use. For this piece, my own disclosure reads:

“AI Usage Disclosure: This document was created with assistance from the AI tool ChatGPT 4.0. The content has been reviewed and edited by me, Lew Ludwig. For more information on the extent and nature of AI usage, please contact me.”

As we continue to explore the role of AI in our classrooms, I invite all of us to rethink our current approaches and consider how we might adapt together to this rapidly changing landscape. By normalizing the disclosure of AI use, we not only rebuild the trust that is so vital to our educational relationships, but we also model the kind of transparency and ethical behavior that we expect from our students. Let’s take this opportunity to lead by example, fostering an environment where trust, honesty, and learning flourish in the age of AI.

A note on transparency
Here is my writing process:

  1. I write a rough draft of the piece without any assistance from AI.
  2. I provide the draft to ChatGPT 4.0, setting up the context (writing for TTT) and explaining my goal (inviting faculty to rethink or adapt their AI disclosure practices). I then ask the AI for feedback on the tone and content.
  3. I consider the AI’s suggestions and make appropriate edits.
  4. Finally, I ask the AI to suggest “copy edits,” which I review and incorporate as needed.