Centaurs, cyborgs and roboprofs

A year after ChatGPT brought generative AI to the masses, we now know it has the potential to change everything – the AI-augmented future is now

November 23, 2023
[Image: a centaur walking across a data background, illustrating the "centaur" and "cyborg" styles of AI-augmented work. Source: iStock montage]

During a discussion at the recent Times Higher Education World Academic Summit, a panellist asked the audience how many of them had used ChatGPT or another generative AI tool in their work this year.

A brief pause followed as a room full of brains generated a response; then a room full of hands went up.

Why, asked the panellist, would you expect your students to be any different?

Like AI itself, we have come a long way very quickly in our attitudes to the use of AI in professional and, yes, educational settings – but, as with AI itself, we also know we have much further to go.

At the start of the year, as jaws dropped – along with a thousand dodgy AI-generated poems – at the uncanny abilities of ChatGPT, it was common to hear calls for bans on its use by students.

Nine months later, however, the questions facing universities are much more fundamental.

Writing for us this week, Mariët Westermann, vice-chancellor of NYU Abu Dhabi, says the question today “is not how to police plagiarism more effectively. The question is whether plagiarism or cheating are even useful categories of pedagogic concern when the world is adopting generative AI at breakneck speed.

“If ChatGPT has undone the time-honoured system of tests, take-home exams, essays and problem sets, how should universities measure, assess and verify learning? On what skills, abilities and dispositions should we actually assess our students? What and how should we be teaching them?”

That this new frontier of user-friendly AI is already changing the game itself is evident in research by academics at NYU Abu Dhabi, which found a “bifurcation” in the response within universities.

The study of 1,600 professors in multiple countries broadly found that academics responded with concern, whereas students embraced it as a tool almost instantly.

Employers and workforces, Westermann writes, are much closer to the students than the faculty, reinforcing the idea that universities must re-engineer their education for a changed world.

Another study highlights quite how changed the world of work is going to be for future graduates.

The Harvard Business School study assessed the impact of using GPT-4 – the most sophisticated offering from ChatGPT’s developer OpenAI – among consultants at Boston Consulting Group.

It found that “consultants using AI were significantly more productive (they completed 12.2 per cent more tasks on average, and completed tasks 25.1 per cent more quickly) and produced significantly higher-quality results (more than 40 per cent higher quality compared to a control group)”.

Performance improved across the spectrum of ability, though the gains were more marginal at the upper end.

In another sign of how quickly a new era of AI-powered work is evolving, the Harvard team identified different methods of using GPT-4 among the consultants, describing one group as “centaurs”, who divided and delegated activities to the AI or to themselves, and another as “cyborgs”, who completely integrated their workflow with the AI tool, interacting with it throughout.

Back on X, the almost stone-age social media platform formerly known as Twitter, generative AI is still being used mainly to poke fun.

An example last week was the use of ChatGPT to vastly improve on a grammatically unhinged letter written by the one-time skills minister Dame Andrea Jenkyns in response to the prime minister Rishi Sunak’s sacking of home secretary Suella Braverman.

Peter Mandler, professor of modern cultural history at the University of Cambridge, posted in response: “Equally amusing and disturbing – have we already reached the point where AI can do a better job than ministers? Does that speak more to the high bar of AI or the low bar of ministers?”

In light of the Harvard study, though, the real question is how all of us should recalibrate both our thinking and our organisations to adopt and adapt to an AI-augmented world most effectively.

“This debate should move beyond the dichotomous decision of adopting or not, and instead focus on the knowledge workflow and the tasks within it, and, in each of them, evaluate the value of using different configurations and combinations of humans and AI,” the paper advises.

This is the challenge facing universities too – to reimagine teaching, assessment and learning for a world of future centaurs and cyborgs, a task that will require academics to be just as AI-augmented as those in their classrooms.

john.gill@timeshighereducation.com
