Tidbit: Adopt or Resist? Beyond the AI Culture Wars

I was talking with Matt Kretchmar after our recent session with Leon Furze on “Understanding GenAI in Education: AI and Assessment.” He was excited by Leon’s approach: moving beyond merely moralizing about student cheating and focusing instead on how to live productively with this new reality. In particular, Leon emphasized a shift in perspective:

In “Validity matters more than cheating,” the authors argue convincingly that the concept of cheating is an unproductive frame for academic integrity, and that we should instead re-centre the concept of “validity” in assessment. Separating the ethical or values-based aspects of cheating – that cheating is wrong or dishonest – from the assurance of learning means we can avoid the “fundamental attribution error” of ascribing cheating to a student’s individual, unethical choice. Instead, we can look for ways in which the system itself might be “wrong,” and not just the student: are the methods of assessment such that “all capable students can complete [the task]”?

In a related piece, Adopt or Resist, our friend Marc Watkins argues that it is time to move beyond extremes. As Marc notes:

When confronted with tools like ChatGPT, faculty members tend to cluster around one of two extremes — uncritical acceptance of AI as inevitable or outright rejection of it as an ethical threat. But clinging to either view obscures the real challenge: how to develop thoughtful, practical approaches to deal with this shifting landscape. Generative AI is unavoidable, but its potential impact in higher ed is far from inevitable. The former speaks to the reality of our technological moment, while the latter to all the hype, much of it a sales pitch and little else. 

Our two faculty learning communities on AI this semester have found that most of us are navigating this middle territory cautiously – neither uncritically accepting what corporate America is dishing out (should we buy an enterprise AI package for the whole campus?) nor burying our heads in the sand over moral concerns (how does AI stack up against Zoom?). Adhering too closely to either extreme risks losing our agency in shaping how AI integrates into our academic practices at Denison. We invite you to join us in this middle ground, where thoughtful engagement with AI can lead to innovative and ethical educational practices, enabling us to make informed decisions about its use and non-use, as argued by Maha Bali.