Less than half of top universities publish AI guidelines

Lack of clear guidelines may put instructors on ‘defensive’ over students’ use of ChatGPT, researchers say

November 20, 2023

Even as artificial intelligence becomes an integral tool for students and academics, less than half of the top 50 universities worldwide have developed publicly available guidelines for its use in academic settings, a study has found.

Only 23 of the 50 institutions, which were chosen based on their performance on Times Higher Education’s 2023 World University Rankings, have listed on their webpages clear directions for instructors on the use of generative AI (GAI) tools in assessments – a “concerning” finding, according to Benjamin Moorhouse, an assistant professor of education at Hong Kong Baptist University who carried out the study, and his co-authors.

“Without clear guidelines, instructors may take a defensive approach to GAI and adopt more in-class assessments or feel frustrated as they struggle to adapt their assessment practices without institutional guidance,” the researchers warn.

Believed to be the first such review of its kind, the study in Computers and Education Open examines the extent to which institutions have created guidance that helps “raise awareness about academic integrity and reduce academic misconduct” amid the broad uptake of game-changing new technology in higher education.

It comes as universities around the globe grapple with how to adapt to the arrival of ChatGPT, which has already changed the nature of coursework and assessments for students and those teaching them. In the 12 months since the introduction of the technology, which prompted bans at some institutions, such tools have become near-ubiquitous in academia.


Speaking with THE, Dr Moorhouse cautioned that the lack of a clear stance on the tools can put universities "in a difficult position", leaving faculty flying blind even as many learners use AI in their coursework.

“I was talking to colleagues at another institution, which has no firm guidelines, and they’re struggling to communicate with students on this,” he said.

So why do so many institutions still lack public guidelines on AI? Dr Moorhouse believed some may be taking a “wait and see” approach.

“It can be hard to be first – also people weren’t sure of the exact effect GAI was going to have until we went through one semester cycle of it,” he said.

Dr Moorhouse noted that some universities, HKBU included, have created guidelines but do not make them publicly available. He said it was not always clear why institutions refrained from publishing internal policies, but believed this could be due to the "sensitivity" around assessment design.

“They might be concerned [about] things in docs they don’t want students to know about, things around AI detection tools,” he said.

While he recognised the need to proceed with caution, he hoped institutions would be more transparent about their policies.

“It’s going to be really important that university instructors become AI-literate,” he said. “We do need to gain those skill sets and our institutions need to support us.”

pola.lem@timeshighereducation.com


Reader's comments (2)

Interesting... I mentioned to one of our librarians that I was putting some notes together on this topic for computer scientists and she nearly bit my hand off asking for a copy to be used university-wide!
It's not entirely surprising that few are out there and fewer say or include anything meaningful, because universities in general aren't great at taking stock of what they already do (in this case, assessments) to gauge an appropriate response. They benchmark by looking at the bigger pieces and strategic decisions made by others to set their own bar, which in this case doesn't yet really exist.

In some ways the lack of guidance on gAI is scary for academics: fear of erosion in the robustness of assessments, unintentional grade inflation (or deflation, by fearfully downplaying good work under a currently unprovable assumption of cheating), or feeling expected to develop new assessment methods to counter gAI without knowing how. But in other ways it's a good thing many universities aren't laying down half-baked, uninformed, poorly thought-through rules that would likely just dumb down how we design the learning experience; it gives us, the academics, room to lead how we use and make sense of this shift.

The most I've seen really "led" by the Russell Group is "empowering" staff to work with it or mitigate it as best they choose, which is really just being non-committal while feeling they had to put something out there. We just need to talk to one another to figure out what will and won't work, and not back-pedal on the evolutions of our assessments we've made in the past decade.