
Grading robots?

John Fea   |  June 20, 2024

I am heading back to the classroom in August after a year-long sabbatical. I am wondering just how much has changed since I left, especially in terms of student use of AI.

Here is Beth McMurtrie at The Chronicle of Higher Education:

Jeff Wilson is a professor of religious studies at the University of Waterloo. Since ChatGPT appeared on the scene, he has warned his students against using artificial intelligence to do their work. Even so, he says, he saw a “massive” uptick in its use over the past academic year, estimating that about 25 percent of his students at the Canadian institution used generative AI in their assignments.

Some relied on AI to write responses to 150-word prompts. Others used it to complete an experiential-learning assignment, in which they were supposed to do mindfulness meditation, say, and then write about the experience. When he asked why, some students said they knew it was a mistake to do so, but they were pressed for time. A few didn’t know they had used generative AI because it’s embedded in so many other tools, like Grammarly. Others flat-out denied using AI, knowing, Wilson surmises, that it was unlikely they’d be investigated further.

The explosion in AI use, the endless hours spent figuring out whether — as he put it — there was a person on the other side of that paper, and the concern that students who cheat could end up getting the same grades as those who did the work sent Wilson reeling.

“I’ve been teaching at this university for 17 years and suddenly this comes along to devalue everything I’ve done to become a caring, competent instructor, and the students are creating make-work for me,” he says, describing the shift as “devastating.” “I’m grading fake papers instead of playing with my own kids.”

The tension surrounding generative AI in education shows no signs of going away. If anything, faculty members are sorting themselves into two camps. Some, like Wilson, are despairing over its interference with authentic learning, and deeply worried they will have to scuttle the meaningful assignments and assessments they’ve developed over the years because they have become too easily cheatable. Others agree that AI abuse is a problem but focus instead on how AI could enhance learning. Or they have found ways — in the short term, at least — to minimize its abuse while maintaining the integrity of their assignments. (Some argue there’s a third camp: the professors who so far are ignoring AI’s existence.)

Members of both groups, however, agree that administrators need to provide more and better support for faculty members, who remain largely on their own as they try to adapt to this rapidly changing landscape. Professors complain of receiving generic AI guidance that encourages them to experiment with the tools in their teaching but without providing tested examples of how to do so. Others say that unless students confess, it’s often pointless to try to bring forward an academic-integrity case, even with evidence, because of the difficulty in proving AI use.

The lack of guidance and training is of particular concern, experts say, because AI will soon be everywhere. AI tools can now listen to and summarize a lecture, as well as read and summarize long academic articles. “Now we have to start thinking about more than just assessments in AI. We have to think about learning itself,” says Marc Watkins, a lecturer in the department of writing and rhetoric at the University of Mississippi.

Read the rest here.

Filed Under: Way of Improvement Tagged With: artificial intelligence, ChatGPT, teaching


Comments

  1. Storm says

    June 21, 2024 at 11:35 am

John, if your last teaching semester was Spring 23, remember that was the first full semester that ChatGPT even *existed*. I saw some signs that a few of my intro philosophy students were using it that semester for writing assignments on the readings.

Starting at the beginning of last year (F23), I changed one of my main non-exam assignments in my Ethics course: a short (4-5 page) essay applying material from the course to an issue of the student's choice. Everything about the assignment stays the same (including instructions not to use AI bots), but instead of writing the paper, students come to my office for a 10-minute conversation in which they explain the issue and present the argument.

As you well know, you can tell in about 60 seconds whether a student understands the issue and the material. Further, even if they don't understand it coming in, they know a lot more by the time the conversation is over, which is one benefit over getting the same C- (or F) on a submitted paper and not learning much of anything from it.

After two semesters, students like it; I really like having these one-on-one conversations; and though the schedule is intense for a couple of days, it isn't really much more time than grading papers individually, and it comes with a lot of benefits.

This is just one small change, but I don't see any other direction, though I am retiring after next year, so this old dog does not have to face the large-scale revamping …

One more thing: for the last two years I have lectured our entire first-year class on Plato's Allegory of the Cave and Socrates's Apology, using the format of speculating about what Socrates would say about ChatGPT. One question this raises, and one I tried to convey forcefully, is whether this technology means that liberal arts colleges, and the liberal arts, are even necessary anymore. THAT's the question we have to wrestle with as well. Of course the machines don't think, but if what they *can* do will pass our courses, then what do we have to do to teach people to think?