Even at Harvard, academic efforts to combat ChatGPT are mixed

For now, premier US university sees few good options beyond in-class assignments – at least among faculty taking the threat seriously

September 7, 2023

Entering a year in which ChatGPT has been shown to be a formidable disruptive threat to its curriculum, Harvard University has put a priority on trying to make its offerings “AI-proof”. The verdict so far from its dean of undergraduate education: there’s still a way to go.

“I’m finding that the transition is more uneven than I would have guessed,” Amanda Claybaugh, a professor of English, said of her efforts to prevent Harvard students making AI-powered sprints through their coursework.

“Some of our faculty have already reimagined their teaching entirely, while others still haven’t even tried ChatGPT.”

That divide reflects the suddenness with which ChatGPT and similar online systems have made it possible for students worldwide to upload classroom assignments to AI tools that can produce competent and even quality essays.

Harvard got an especially stark warning this summer when one of its undergraduates, Maya Bodnick, ran an experiment in which she gave ChatGPT-generated essays to seven Harvard professors and teaching assistants – matching most of her freshman year in social science and humanities – and found that the papers earned an average grade of 3.57 on a four-point scale.

The result might partly reflect grade inflation at Harvard, but it also suggests that AI-generated essays “can probably get passing grades in liberal arts classes at most universities around the country”, Ms Bodnick said in reporting her findings.

Professor Claybaugh worked with academics this summer on ways to counteract student use of ChatGPT-type technologies – suggesting strategies to professors, but not mandating any. “I trust my colleagues to make the choices that are best for their subject matter and their students,” she said.

Along with taking the formal step of prohibiting their students from using AI systems, some Harvard faculty are planning to reduce or eliminate the use of essays written outside the classroom. It’s unlikely that faculty can rely solely on software that claims to detect AI use by students, because those systems are not reliable, Professor Claybaugh said. “Instead, we need to adapt our assignments so that they remain meaningful in the age of AI,” she said.

The more enduring solutions will likely involve both relatively newer teaching approaches such as active learning and flipped classrooms, where in-class discussion is prioritised, and greater emphasis on the process of writing or problem-solving “rather than simply evaluating the student’s finished product”, Professor Claybaugh said.

Ms Bodnick agreed that her professors had few good options for trying to work with AI. For now, she accepted that the professors would need to base most of their grades on students’ classroom participation and in-class exams. “Which feels really terrible,” she said, “because you definitely have students producing worse-quality work if they can’t spend time on it on their own, or consult more resources.

“Unfortunately, there’s going to have to be some pretty draconian ways to just completely make sure that students aren’t using the technology.”

Professor Claybaugh begins the academic year nervously observing the range of responses among faculty to a clear and widespread need to revise practices: “Some early and eagerly; some not at all,” she said. But, she predicted, “the advent of generative AI will push more of them to do so more quickly”.

And to be clear, Professor Claybaugh said, some of the same variations in adoption rates can be seen among students. “Some use generative AI frequently and comfortably, some tried it and found it unhelpful, and some have never even tried it at all,” she said. “I’m guessing that historians of technology would tell us that it is ever thus.”

paul.basken@timeshighereducation.com
