Here I sit, giving myself over to the arduous process of putting words on a page, a task more daunting than scrubbing my bathtub or reorganizing my entire filing system. And I’m not alone. Many of us would rather endure the most monotonous of tasks than face the blank page. It brings to mind ‘writer’s block’, a term we’ve invented as if to confirm the sheer weight of the task at hand.
And in the midst of this, enter AI tools like ChatGPT, waving a shiny flag of promise. They whisper in our ears that they can optimize this laborious process. But can they really? Sure, they can churn out prose faster than I can type, and it usually passes as human-generated. But let’s not kid ourselves: the quality is far from exceptional. It’s competent, yes, but unremarkable.
I teach writing at the University of California, Los Angeles, and let me tell you, the common sentiment among us faculty is that while ChatGPT can write, it certainly doesn’t write well. Some of us even caution our students against using these tools, poking at their pride: “You could use AI to cheat on your essay, but do you really want a C+?”
But here’s the thing: AI tools are permeating the working world, the world our students will step into once they graduate. So some of us are letting our students use these tools, albeit in a controlled way, framing them as automated writing tutors or advanced grammar checkers. But even the most AI-friendly among us advise students to maintain authorial control and edit any AI-generated content for accuracy, style, and sophistication.
The predictability of AI-generated writing is a result of its algorithmic nature. Tools like ChatGPT, Bard, Bing, and Claude have been trained on vast corpora of human text – books, articles, internet content – and they function like sophisticated autocomplete tools, predicting, one word at a time, what is most likely to come next. In essence, they’re regurgitating patterns they’ve learned.
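To make ‘sophisticated autocomplete’ concrete, here’s a toy sketch in Python. It’s my own illustration, nothing like the scale or transformer architecture behind ChatGPT, but it captures the same basic move: tally which words follow which in a training text, then generate by sampling a likely next word from those tallies.

```python
from collections import Counter, defaultdict
import random

def train(text):
    """Count, for every word, which words follow it and how often."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=12):
    """Extend `start` by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word was never followed by anything
        choices, counts = zip(*followers.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

corpus = ("good writing is clear and bold and good writing is rare "
          "because clear writing is hard and bold writing is risky")
model = train(corpus)
print(generate(model, "good"))
# Possible output: "good writing is clear and bold writing is hard and bold and"
```

Run it a few times and you get exactly the kind of prose this essay describes: locally fluent, entirely pattern-bound, and without a shred of intention.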
But does predictable writing equate to bad writing? It’s time we reflect on what we value in our prose. Good writing isn’t just clear, concise, and grammatically correct. It’s innovative, evokes an emotional response, employs virtuosic syntax and sophisticated diction, and most importantly, it has a strong sense of voice.
However, a question lingers: what makes a strong voice, and why does ChatGPT’s voice so often fall flat? Is it because the tool sticks too closely to the rules, rarely straying from convention unless instructed to? Or is it because it lacks the distinctly human energy that comes from our organic deviations from the norm?
We seem to value a certain amount of idiosyncrasy in our writing, and we even expect it. Conventions can be murky and variable. They evolve over time. And while writing that consistently adheres to convention is easy to read, simply abiding by the rules doesn’t make excellent writing.
Good writing seems to require a balance of conformity and nonconformity, and at times, deliberate rule-breaking. Good writers recognize that grammatical rules are dictated by problematic power structures and are not independent measures of correctness. They assess the rhetorical context for their writing and make deliberate decisions about where to conform and where to stray.
Good writing isn’t about sophisticated sentences or complex ideas; it’s about unifying all elements into a coherent whole. It’s about being intentional in your choices. ChatGPT can recognize grammatical conventions, imitate them, and break them on command. But because it has no intention, it can’t be purposeful in how it adheres to or strays from the rules.
We often say that good writing has a strong sense of voice. A speaking voice can be recognized by its tone and pitch, but what rhetorical features define a writer’s voice on the page? Is it patterns and tics, such as stylistic quirks or a repeated word or sentence structure? Or is it the passages in which a writer conveys strong opinions or a particularly well-defined point of view?
AI tools like ChatGPT can generate text, but they can’t generate a voice. They can’t make intentional choices. They can’t understand the context of their writing. They can’t stray from the rules in a deliberate, nuanced, powerful way. They can’t add layers of meaning. They can’t be creative, expressive, or exploratory. They can’t be human.
So while AI tools may help us save time, they can’t replace the unique, irreplaceable human touch. They can’t replace us. Don’t let them fool you into thinking they can.