FORUM: AI and Education

Michael Dieter, Jim Cullen, Una M. Cadegan and Julie Durbin | September 4, 2024

Should teachers regard AI as an invasion? An intervention?

Three semesters into the ChatGPT era, we know that this technology has changed and challenged education. But how? We asked eight educators teaching at a variety of institutions and at different educational levels to tell us what they are doing differently to respond and (perhaps) adapt. How does each of them, as a teacher, intend to approach the reality of AI in this new academic year? Today is the second day of a two-day forum.

***

How honest AI use is shaping my pedagogy

Michael Dieter

I first encountered ChatGPT in late 2022 when one of my students told me that I should look into it and that it was going to become a big thing. At the time I had no idea how much I’d come to embrace this technology in my professional practice. Neither did I realize what mixed emotions would come with it. 

While I’ve had success incorporating the technology into course design, there have been (and are) times when it feels like cheating, and I’ve wondered if I’m less of an educator for it. After all, is it really me doing the work? This is a question that I am confident my students have grappled with as they use generative AI as well. Since its arrival, AI has been the elephant in the room, something students are often hesitant to use or discuss for fear of punishment.

In the spring of 2024, to break the ice in one of my classes, I opted to try the honesty approach. With the hope of spurring conversation, I let my guard down and shared with my students the ChatGPT-fabricated conversation I used when developing one of our course assignments. As is often the case when trying new pedagogical techniques, I was unsure what to expect. I wondered what would happen if students felt slighted, complained, or even refused to do the assignment.

Instead, my students were surprised to learn just how many prompts were needed to get something useful, and how much ChatGPT’s suggestions needed to be edited to fit the context of our course. I was relieved. They asked high-level questions regarding how I created prompts, and how I decided what to keep. I could tell they appreciated the experience, as several of our discussions spilled over beyond class time. Honesty worked; the ice was broken.

This year, honesty will continue to be my approach to using Generative AI and teaching about it. I intend to build into my courses assignments that require the use of AI, but most importantly, I aim to foster conversations with my students about the ethical use of AI. This includes sharing my own red lines for where I do not use it. In continuing to model this level of transparency, I hope to help students form their own moral compasses around AI use. While my classes will be a small part of their journey, they will use AI for the rest of their lives.

Generative AI in education is a reality and we are not going to get anywhere by not talking about it. As academics, we have an important role in facilitating and shaping conversation around this emerging technology, and we should do so from a place of honesty. 

Michael Dieter is Assistant Professor of Education at Trinity Christian College.

***

AI may improve, not erode, academic instruction and standards

Jim Cullen

When OpenAI’s ChatGPT (that’s Generative Pre-trained Transformer) debuted in late 2022—exponentially better than the first GPT of 2018—there was much chatter in the media, as well as in the hallways and at the lunch tables of my school, about what it might mean for American education in general and for teachers like me in particular. As the hubbub began to settle, two things became apparent.

First: Whatever its promise as a tool of instruction or as a labor-saving device, this technology was sooner or later going to become a problem, offering students irresistible shortcuts. My textbook on essay writing has gone through four editions; I can’t see there being a fifth with Large Language Models (LLMs) to do the heavy lifting.

Second: ChatGPT and other LLMs were not yet ready for prime time, whether because of the clunkiness of the answers they delivered or because of their oft-noted tendency to ā€œhallucinateā€ falsities. This meant that my colleagues and I could stick our heads in the sand a little while longer.

Time’s up. ChatGPT-4, released this spring, has taken another big leap forward, as have other LLMs such as Claude. In his new book Co-Intelligence: Living and Working with AI, Wharton professor Ethan Mollick calls this ā€œthe Homework Apocalypse.ā€

For this new academic year, I’m increasingly leaning in an if-you-can’t-beat-’em-join-’em direction. I would be very happy to never have to correct faulty student spelling or punctuation. Wouldn’t it be nice just to consider student writing on the merits of its ideas?

Of course, the rub is: Whose ideas? LLMs can only rehash what’s already out there, and there may be limits to their voraciousness as copyright holders begin to push back on the use of their content. But originality has never really been the hallmark of student thinking and writing in any case. It’s really a matter of how to organize information, and the information one gets is a function of the questions one asks. That means knowing which questions to ask and getting better at figuring out how to do that. These are the kinds of critical thinking skills we want to foster and foreground.

I’m moving toward a model in which I say to students: Hey, use AI in any form you want, and hand in what you’d like. But I’m going to hold you responsible for anything that bears your name, and I’m going to expect you to be able to answer any questions I may have for you on the spot, whether as part of an oral exam or a presentation to the class. 

Now, there are some real problems of implementation here. One is structuring class and assessment time to make such measures possible. Another is that they will pose a significant challenge for many students, whether because they have secretly been relying on crutches (like tutors) all along, or because these intellectual challenges are hard to meet in any case. Administrators, unions, faculty and families will no doubt find reasons to object as part of a long tradition of mandating mediocrity. But there’s little question that humans who manage these kinds of skills will be those with the brightest futures in the economy—insofar as any of us has one.

One thing is certain: The time for rethinking education in the face of this major technological development is now.

Jim Cullen teaches history at Greenwich Country Day School in Greenwich, Connecticut. His books include The American Dream: A Short History of an Idea that Shaped a Nation and Bridge & Tunnel Boys: Bruce Springsteen, Billy Joel, and the Metropolitan Sound of the American Century.

***

What to say about AI in a course syllabus 

Una M. Cadegan

One of the main tasks of teaching, in my mind, is to help students see invisible things—not in the metaphysical sense (that’s a different department) but rather the structures that shape their everyday lives as students, citizens, and inhabitants of a shared planet.

I don’t think this task has ever been easy—and I talk with students about the disruption and upset most new technologies caused when they were introduced. But what I think is new about the present moment is that in the age of the app, they (we) experience a seamless, faceless interaction designed to entice and engage. Asking them to gain knowledge of the processes by which such devices and systems get into their hands and, most important, of the human beings affected by those processes is a bigger task than one class can achieve. But it’s the overall goal I had in mind when, last year, I decided for the first time to include the following policy in my course syllabi:

A note on ChatGPT and other AI writing assistants: These tools are very new, and their implications for education are not yet clear. In this class, for now, you are not permitted to use them in the preparation of any of your assignments, either reading or writing, for four reasons: 

  • In my experience so far, they are at least as likely to make your work worse as they are to improve it. Let me emphasize this: you will write worse papers if you use any AI tool than if you work to express yourself clearly. 
  • A major reason for taking courses in the humanities as a university student is to develop your voice as a writer and thinker. These tools by definition replace a distinctive individual voice with a generic substitute. They will not help you with the skills the course is intended to develop. 
  • The massive databases on which these models are built use the work of writers without attribution or compensation—in other words, they steal it, violating a foundational principle of academic integrity. In addition, they often rely on the underpaid and exploited labor of people in other countries to make the databases usable.
  • Finally, we are coming to realize that these systems use enormous amounts of energy, and may entirely cancel out current efforts to limit the damage from climate change if not implemented carefully and cooperatively.

When I introduce this policy in class, I emphasize the following: (1) While I realize they will have to learn how to use these tools as part of their professional education, my course has different goals. (2) While this is how the ethical issue looks to me, other professors will establish different policies. I respect those differences and mean no criticism of other instructors’ decisions.

One syllabus can’t do much, but it does reflect my commitment to help students understand complex systems that affect them—and that can be affected by them as they move into the civic and professional world. 

Una M. Cadegan is Professor of History at the University of Dayton and author of All Good Books Are Catholic Books: Print Culture, Censorship, and Modernity in Twentieth-Century America (Cornell University Press, 2013).

***

The freedom to do one’s own thinking

Julie Durbin

I write for some of the same reasons I talk: to find out what I think. I often don’t know what I think until I have struggled to articulate it in words that I have wrestled with and chosen, on purpose. I write to find out what I think. Can I afford to outsource my thinking?

There may be things that AI helps us do more quickly or more effectively, but my business is helping students think and write. Therefore, my approach—in concert with colleagues—has been primarily to detect and deter use of AI in writing assignments. It has not felt at all clear-cut, and the burden has been significant.

I am taking on a new role this semester as coordinator of first-year English composition at Geneva College and director of our campus writing center. I’ve taught reading and writing courses to academically at-risk students for thirteen years. I am also part of a teaching team that offers the introductory Humanities course in our core curriculum.

In preparation for the challenges of this new academic year, our humanities teaching team is reading a recent essay by Alan Jacobs, ā€œChatbots and the Problems of Life: Resisting the Pedagogy of the Gaps.ā€ Jacobs likens our predicament to an ā€œarms raceā€: technology develops to help students cheat, technology is created to help teachers catch it, and on and on it goes. We all pay for this. As Jacobs laments, ā€œI don’t like this collapse of trust; I don’t like being in a technological arms race with my students.ā€

While lamenting, we still have to figure out what we are going to do. We will continue to use tools like Turnitin to detect plagiarism and use of artificial intelligence in the papers our students write. We attempt workarounds, like essay-writing in class and ever-more-creative assignments. We make policies and try to enforce them. But we are in trouble if we approach this solely as a technological problem with a technological solution.

Every semester our humanities students read Thomas Merton’s letter to Rachel Carson in response to Silent Spring. Though he has pesticides in mind, his words are broadly applicable: ā€œOur remedies are instinctively those which aggravate the sickness: the remedies are expressions of the sickness itself.ā€

Spending more energy on ways to outsmart students feels like an expression of the sickness, which, according to Merton, is ā€œa hatred of lifeā€ in which ā€œin order to ā€˜survive’ we instinctively destroy that on which our survival depends.ā€ If our survival depends on trust, we can’t engage in practices that collapse the ground of trust. With Alan Jacobs, I want to have the courage to ask my students what leads them to turn to the chatbots. 

As I write this response, I remember that writing is hard. The temptation to be relieved of the burden is real. But I want to teach students that writing is a particular kind of hard that is worth it for people who value the freedom to do their own thinking.

Julie Durbin is Associate Professor of Humanities and Writing at Geneva College, where she directs first-year English composition and the writing center. She served with Free Methodist World Missions for ten years in Ukraine and did research on worship in Ukrainian simple churches.
