
It’s time to radicalise the writing proficiency exam

Written exams should generate writing that vibrates with personality, exploration of ideas and an urge to communicate, says Tyler Thier

Tyler Thier
Hofstra University
20 Jul 2023
This may discredit my job as the coordinator of Hofstra University’s institutional writing assessment, but I’ll say it anyway: writing proficiency exams (WPEs) should be abolished. WPEs are often exit examinations in which a student must write a particular type of academic essay (argumentative, expository, etc) that demonstrates the base-level skills they are expected to know at that point in time. These exams can be student-authored portfolios, “diagnostics” resembling pop quizzes or, in Hofstra’s case, a timed prompt with some prework in the week before it’s taken.

But what does proficiency look like? Hofstra’s official rubric lists four dimensions of grading criteria: thesis; evidence; citation; grammar. This can’t be it, though. How about students who find themselves speaking to a non-academic audience? What of the students who wish to express themselves more creatively? Do I even have to mention students whose native language is not English, or whose cultural experiences differ significantly from the conventions of traditional academic composition? Defining proficiency within the staid, five-paragraph formula does not foster critical thinking, nor does it prompt student writers to think rhetorically about diverse audiences – much less to think about themselves as writers. Instead, under this model, proficiency means obedience to the academy.

At Hofstra, the WPE is a graduation requirement for all undergrads, with very few exceptions. This means the university holds ownership of the exam; all I can do is coordinate it and suggest changes. With so little oversight, however, I have found that I can reframe its purpose and publicise it in my own way to students and faculty, which seems especially necessary in the face of AI advancements and my institution’s overbearing response.

To start, the exam I oversee has shifted to a more promising design in recent years. It used to be taken under a two-hour limit, in person, and would present students with a couple of sources connected by topic (for example, plastic waste in oceans). From there, they would craft an argumentative essay about the issue in question.

While the two-hour limit persists, the exam’s focus and options for delivery have changed. Students upload a college-level essay they’ve written and consult that as one of their sources in conjunction with a published essay from a writing studies scholar. The thesis that emerges is supposed to represent the individual student’s relationship to the writing process.

I had nothing to do with this redesign, but I love it: its flexibility, its interrogation of academic writing and the way each student perceives it or has been taught to perceive it. Pushing students to think about their writing education in a more critical way, how they can adapt it to fit their experiences, how they can reject it and find their own modes of expression – this process of experimentation is a hallmark of higher education. This is proficiency.

I hereby commit myself to radicalising this new design even further. I am, first and foremost, an educator, so I’ve approached this new role from a pedagogical standpoint. The exam favours metacognition – understanding one’s writerly choices and how they suit different contexts or audiences. Students are prompted to insert themselves unapologetically into the writing by using the first-person perspective, since this simple reassurance can produce wonderful essays informed by multiculturalism, ESL, race, gender, learning disability and the pedagogy of college writing. When they write this way, students seem more open to debate and critical dialogue between themselves and an external reader; they no longer replicate what they’ve been taught in class assignments but rather use those skills to inform their thinking on something more exciting, distinctly their own and, yes, imperfect.

Not every student is privileged enough to write with perfect grammar and syntax as defined by academia or to develop the expected diction of, say, a distinguished philosophy journal partial to publishing white, native English-speaking academics. In other words, we shouldn’t be assessing for perfect adherence to established forms but rather for curious and engaging approaches to a writing situation.

Destigmatising use of the first person is one thing, but how about the primary point of intimidation: the two-hour time limit? This basic roadblock can very easily revert the exam to an outdated version of itself – students might be too nervous to move beyond the rigid five-paragraph formula, for a start. The rhetorical situation we’re asking them to navigate is one that would never have a two-hour deadline beyond a classroom. This article you’re reading, for one, was given a two-week deadline, so we should ask ourselves what we’re testing students for. Is it the ability to generate informed assertions? Then two hours isn’t enough. Is it the focus to respond to a string of time-sensitive emails, to post an unfolding social media thread, to write a report about breaking news? Then two hours is warranted.

With all that said, the advent of ChatGPT is upon us; my institution is beginning to crack down on the potential for chatbot plagiarism, heightening surveillance of test takers by making in-person exams mandatory. No matter how much pedagogical reasoning and critical theory I toss around, the two-hour limit is here to stay. Yet none of this will deter students from using AI, and as ChatGPT use grows, the exam’s purpose will fade.

The kind of writing produced under these constraints reproduces power structures long embedded in academia, encouraging students to assimilate to dominant standards of speaking and writing in English (like rejecting one’s cultural dialect in favour of “proper” English phrases) and hindering the contribution of fresh knowledge. As David Shariatmadari puts it, when linguistic expectations are subject to hard borders, such as unrealistic time limits or rigid formulae for constructing something as multifaceted as a sentence, then “students who could not master standard English would be at a disadvantage”. When this kind of atmosphere is fostered, we stop being facilitators; we are now gatekeepers.

ChatGPT shouldn’t be used as an excuse for enforcing these “hard borders”, especially since it churns out characterless writing. I’ve tested the AI on its ability to complete the WPE; it can’t speak to students’ individual experience with a writing assignment (it told me those exact words). What’s more, I fed it some of my own writing and asked it to use that along with one of the exam’s secondary readings. It struggled to write beyond a summary of the subject matter, it attributed quotations to me and to the scholarly author that neither of us had written, and it could not think critically about the act of writing. Radicalisation is crucial in this exact moment, when the policing of students who may or may not use AI threatens to make the exam even more intimidating and less inclusive than it has ever been.

The personal dimension is key to the WPE’s future as an assessment that invites enquiry, play and challenging perspectives. How exciting could these essays truly be if we established a realistic time frame, one that mirrors professional workplaces, magazine submissions and other contexts that consider diverse audiences? Suddenly, experimental approaches, creative-academic fusions and more dynamic responses to the prompt become possible.

If WPEs are here to stay, let’s make them a formative stepping stone that offers us one last glimpse into a student’s college journey, one that proves their development in terms of confidence, inquisitiveness and drive for change. WPEs should generate writing that vibrates with personality, exploration of ideas and an urge to communicate. If this isn’t the goal, we’re simply pulling the lever on a conveyor belt – or is AI doing that for us?

Tyler Thier is an adjunct professor and writing administrator at Hofstra University, US. His current research is concerned with writing produced by hate groups, suspected cults and other authoritarian factions. He lives in New York City and is obsessed with carnivorous plants and frogs.

