
FORUM: AI and Education

Peter Slade, Christopher W. Jones, David McFarland, and Nadya Williams | September 3, 2024

What do we do now that robots have entered the classroom?

Three semesters into the ChatGPT era, we know that this technology has changed and challenged education. But how? We asked eight educators teaching at a variety of institutions and at different educational levels to tell us what they are doing differently to respond and (perhaps) adapt. How does each of them, as a teacher, intend to approach the reality of AI in this new academic year? Today is the first day of a two-day forum.

***

The Bridge on the River AI 

Peter Slade

With the flood of AI, I had to redesign the assignments and assessments in my courses this semester. A single conviction informs that redesign: Academic integrity is essentially about the structural integrity of our courses, programs, and academic institutions. The situation is that serious. If my degree-awarding institution is like a bridge, does it have the structural integrity to hold up under the new stresses of AI? To understand academic integrity as structural integrity, we must move on from viewing academic integrity as solely an issue of an individual’s moral failure.

In other words, generative AI presents a new engineering challenge rather than a problem for law enforcement.  

We are used to ensuring academic integrity by detecting and punishing students who cheat. We have developed academic integrity policies, plagiarism detection tools, and committees to judge and prosecute offenders. But this fall, students will use AI to write papers, summarize assignments, answer quizzes, and more, on a scale that will overwhelm these old tools and systems. Anecdotal evidence from colleagues and data from Turnitin suggest that over half of college students use AI to some degree in their writing assignments. And that was last year! Surely the numbers will only climb as students become more familiar with this technology.

I have moved away from designing my course to detect the use of AI. First, I don’t like its effect on my relationship with my students: It turns professors into police officers and courses into crime scenes. Second, the scale of AI’s use means that if I had a decent method of detecting AI and prosecuting students consistently, I would break the machine. There isn’t enough time in my day to assemble academic integrity reports for over half my students; the Registrar’s office doesn’t have time to process the reports; the Academic Integrity Committee doesn’t have time to hear all the appeals. And the university would not survive the financial impact of all the students who get suspended or drop out.

Assuming most students will use AI on out-of-class writing assignments, I’ve concluded that to assess their skill at thinking and organizing thoughts, their writing must be conducted in person. Proctored exams with pen and paper are AI-proof. My department has implemented the policy that all face-to-face classes must include such proctored exams. The blue books are back! (Obviously, proctoring online exams is a greater engineering design problem, but there is a whole tech industry rising to the challenge.)  

Have I stopped requiring research papers? No. I am loath to do that. I still set writing assignments, but I keep my courses AI-resistant by playing with weights and measures. 

Changing the weights of assignments protects the integrity of the final grade. For example, if the in-class exams are worth fifty percent of the final grade and research papers are worth forty percent, an A student must do well on all the assignments. In fact, I discovered last year that students who took AI shortcuts on written assignments invariably did poorly on the exams; their final grades sank without a trace.

I am also changing some of the measures. Half of the points for a research paper are now awarded for presenting that paper in class and then guiding class discussion. I will assign those points based on the student’s understanding of the material in the paper.

I wish everyone the best this semester. I hope the bridge holds.

Peter Slade teaches at Ashland University, is the chair of the Religion Department, and recently served on the institution’s AI Task Force.

***

Rehumanizing the classroom

Christopher W. Jones

As I enter this new semester I’m moving past asking ā€œhow do I stop students from using ChatGPT?ā€ towards asking more fundamental questions: What does ChatGPT do? What is it good for? When students use ChatGPT, is it in the service of a greater good?

In discussion with my colleagues, I have increasingly come to view ChatGPT within a broader context of the increasing dominance of digital simulacra over all aspects of life. For today’s students, the desire to use ChatGPT to write their assignments is not unrelated to other aspects of growing up as digital natives, whether it be smartphone addiction, information overload, or social isolation.

In this world—marked by a decreased ability to tell the difference between what is real and what is imaginary—it is all the more important to recognize how Large Language Models (LLMs) like Claude and ChatGPT work. This means also understanding their limitations: ChatGPT cannot create anything truly original because it is a closed system. It can only imitate human writing through recombining pieces of already existing data. It is therefore ontologically incapable of determining truth or falsehood.

Much has been made of LLMs’ inability to detect sarcasm or parody, but the reasons for this remain underexplored: Language and communication are cultural processes, and interpretation of a message depends on cultural contexts understood by both the recipient and the speaker. As a computer program, ChatGPT has no received culture and therefore cannot communicate. It can offer only a simulacrum of communication.

One-and-a-half years into the ChatGPT era, those of us who teach college can now expect to teach students who have been using ChatGPT during high school (and according to a recent survey by RAND Corp, so have nearly one-fifth of their teachers). As educational technology companies push AI products into schools, the students we receive each fall will have progressively worse skills in reading, writing, and processing information. We are in serious danger of entering a new era of social and educational inequality in which those who learned the old skills of communication, writing, or even programming at a young age will have a lifelong advantage over those who only learned how to use AI to do these things for them. 

This fall, my AI policy is just one part of a group of policies I call ā€œre-humanizing the classroom.ā€ I banned AI and structured assignments to be AI-resistant (more in-class exams and papers with instructions that are difficult for AI to replicate). Also, laptops and cell phones are to be put away during class time—the digital distractions have become too overpowering for most students to successfully self-regulate. 

But encouraging students to grasp the importance of the real over the virtual cannot be achieved only by banning things. As instructors, we must cultivate human connection. In the first week of my survey of world history, a class of forty-one students, I discuss the importance of geography for understanding ancient cultures. At the end of the first day of class, I ask my students to introduce themselves to someone in the class whom they haven’t met, and to find out where that person grew up and from what material their home was built.

For instructors in the humanities, building an institutional culture of relational inquiry into the things that make us human is our solemn duty—and a duty we are well-equipped to fulfill. 

Christopher W. Jones is an assistant professor in the Department of History at Union University and an associate member of the Centre of Excellence in Ancient Near Eastern Empires at the University of Helsinki. He specializes in the history of the Neo-Assyrian Empire and is currently working on a book titled ā€œThe Structure of the Late Assyrian State, 722-612 B.C.ā€

***

Teaching and learning mid-revolution

David McFarland

With the advent and rapid advancement of generative AI technology, there is no denying that we are in the midst of one of the most significant disruptions to teaching and learning in a generation—or perhaps ever. Revolutionary moments have always been subject to hyperbole. The doomsayers and the boosters both get their moment in the early days of any accelerating uncertainty.

Generative AI seems to be no exception when it comes to exaggerated claims, some optimistic and others pessimistic, that flood our newsfeeds and, for educators, pervade our professional development workshops. I’ll attempt to add something as non-alarmist as I can possibly phrase it as I stare down our AI future: The students who will fill my classroom in just a few short years will be unable to recall a world without the type of generative AI tools, such as ChatGPT, with which we’re trying to come to terms as educators. Our students soon might not even be able to imagine such a world.

In no way am I abdicating responsibility for helping students navigate the significant challenges—and dare I say opportunities?—that artificial intelligence will bring to this upcoming semester. We will operate differently, liberated from certain types of schooling drudgery even as pedagogy and assessment possibilities become constrained by ever-present algorithms accessible to students and teachers alike. And I must admit that I find the cat-and-mouse game of authenticating student work—the prevention, detection, and enforcement of consequences for unoriginal work—less interesting than working to mitigate the long-term erosion of the imaginative and creative capacity of learners that is inevitable in the wake of ChatGPT.

If that’s still too curmudgeonly of a ā€œtakeā€ on our artificially intelligent moment, then allow me to prognosticate once more: The dystopian futures lamented by the doomsayers will likely turn out to have been oversold. Likewise, the techno-utopian boosters of these powerful new tools will likely find that they too may need to walk back some of their unbridled enthusiasm for what AI affords education.

My approach is decidedly non-doctrinaire about all of this. Yet I retain convictions about what is in the best interest of students and their learning in all manner of things. For example, we’re about to embark on a significant experiment to ban all smartphones ā€œbell-to-bellā€ at our school, something that is happening in many jurisdictions. Along with my administrators, I believe this to be best practice in light of a growing body of research. My own views on this have shifted to seeing this as a necessary step. Ask me again in two or three years. No doubt our collective convictions about AI and its uses in a classroom will likewise evolve and, I hope, stabilize around a consensus that enhances learning for all.

David McFarland is the Social Studies Department Head at Pacific Academy (Surrey, British Columbia) where he teaches Grades 9 – 12 Social Studies, IB History, and IB Theory of Knowledge. He is currently serving as President of the Western Regional Conference on Faith & History.

***

There’s nothing new under the sun (but old things are still better)

Nadya Williams

In the Fall semester of 1985, driven to despair by the poor writing that plagued undergraduate and graduate papers he was grading, the historian and social critic Christopher Lasch published an early version of Plain Style: A Guide to Written English for the benefit of his students. As he kept tinkering with this guide over the years that followed, a machine emerged that posed a new challenge to the work of writing: the personal computer. 

In his introduction to the updated version of Plain Style, published after Lasch’s death in 1994, Stewart Weaver describes Lasch’s own difficult experience with this innovation: ā€œLasch himself remained unacquainted with the computer until 1989, when at the request of his publisher he tried to shift the enormous, 250,000-word manuscript of The True and Only Heaven onto disks. Big mistake. Far from making things easier for him, the computer inevitably introduced new errors into what had been, he said, ā€˜a perfectly accurate text.ā€™ā€ 

Lasch’s hazing by the newest technology of his day gives one possible route for how to field such innovations: Grin and bear it, noting the (historical) failures of technology to really improve things. That is not the only possible response, however. Wendell Berry, who to this day (as far as I know) has not gotten a computer—a decision that he has written about several times—presents another route. So did C.S. Lewis in 1959. In response to a schoolgirl who asked for writing advice, he noted: ā€œDon’t use a typewriter. It will destroy your sense of rhythm, which still needs years of training.ā€

Lewis and Berry remind us that we can opt out—and opting out of using AI seems much easier than opting out of using a typewriter or a computer. But maybe that’s just my experience speaking. After all, I’m of the computer age; I was required to type out all papers from high school on. 

So what does all this technology do to us as students, teachers, writers? Complaints about AI now echo some of the earlier jeremiads about ways in which the use of a computer for writing would change and distort the creative process. And maybe they are not wrong. 

But the repetitive nature of these well-worn plaints reminds us that such concerns are also nothing new, even if inspired by tech that is more invasive than ever. 

Is it, though?

Last year, my favorite American historian wrote an essay here at Current about the limitations of AI, ā€œWhat if AI Wrote the Gettysburg Address.ā€ This piece is a wonderful reminder of what people can do and AI cannot. Creativity and innovation are ours and cannot be mechanized. But they also require nurturing—which is to say, space and time.

Tradition holds that the great Roman poet Vergil composed one line of poetry per day. How slow that sounds! AI could generate an entire new epic in seconds if you just punch in some parameters for it to work with. But who, other than another robot, would care enough to read the words of a robot? 

Vergil’s Aeneid, however, is a different story. We still read it today.

Nadya Williams is the author of Cultural Christians in the Early Church (Zondervan Academic, 2023), Mothers, Children, and the Body Politic: Ancient Christianity and the Recovery of Human Dignity (forthcoming, IVP Academic, 2024), and Christians Reading Pagans (forthcoming, Zondervan Academic, 2025). She is Managing Editor for Current.
