Her NY Times Modern Love Essay Was Too Clean to Be Human. I Haven't Touched Mine Since.
AI has me questioning everything I was ever taught about good writing. And that terrifies me.
Writer and editor Becky Tuch’s post on X should have had nothing to do with me. I wasn’t the writer she accused. It wasn’t my Modern Love essay she was calling AI slop. And yet I read it on a Tuesday night and by Wednesday morning I was standing in front of my strategic writing class, teaching students how to write a strong lede for a feature profile, and all I could think was: would this get them flagged?
I have a Modern Love essay sitting in Google Docs. I’ve been editing it daily, the way I teach my students to edit, cutting the fluff, earning every sentence, trusting the reader. It’s the most carefully written thing I’ve produced in years. And now I can’t read a single line of it without hearing Tuch’s voice in my head, asking the question she asked about someone else’s work.
Too clean. Too precise. Too good to be human.
She wasn’t talking about me, but I can’t stop thinking she could be. I don’t use AI to write my articles, but that doesn’t mean I won’t be called out for it.

Then the story got more complicated. The Atlantic reported that Kate Gilgan, the author of the Modern Love essay, confirmed she had used AI tools in her process, not to write the column, she said, but for inspiration and guidance. She’d prompted ChatGPT, Claude, Gemini, and Perplexity to help her stay on topic, using AI as a collaborative editor rather than a content generator. She hadn’t copied and pasted, she said. She hadn’t outsourced the writing. But she had let the machines into her process. The New York Times, for its part, said its ethical-journalism handbook requires freelancers to abide by established journalistic standards and editing processes, and that substantial AI use should be clearly disclosed to readers. It wasn’t.
That distinction between using AI and being AI is exactly the line nobody knows how to draw anymore. And it’s the line I keep tripping over every time I open my own Google Doc.
Modern Love is one of my favorite reads. It’s where real people bare their souls in polished, carefully crafted essays. It’s not a place you’d expect to find slop, artificial or otherwise. And yet here we are, in a time where writing something too professional can get you flagged.
A dear friend of mine had an essay published in Modern Love. I was proud of her in the way you’re proud of someone when you know exactly how hard that is. I’ve done the research. I attended a Modern Love panel at the Tribeca Film Festival that featured its editors. They receive thousands of submissions per year and publish about 52. That’s less than one percent. The odds are not in my favor, but my friend inspired me to try. Somebody must be one of the 52.
But thanks to Tuch’s post, I may never finish my essay because somewhere between her accusation and my own blinking cursor, AI got inside my head and it’s messing with everything I thought I knew about my own writing.
I’ve spent years being shaped by the best. Professors and professional writers who drilled into me that good writing is clean writing. Precise. Concise. Intentional.
Even that staccato last sentence sounds like AI.
I once sat on a panel at a journalism conference in New York with Roy Peter Clark. If you don’t know that name, you should. Clark has been called America’s most influential writing teacher. He’s a senior scholar at the Poynter Institute who has spent decades shaping how journalists and writers think about their craft. His book Writing Tools: 55 Essential Strategies for Every Writer has lived on my desk, in my lecture slides, and in my bones. I’ve taught from it in nearly every magazine writing and newswriting class I’ve ever stood in front of.
His tools are gospel to me. And now, one by one, AI is turning them into evidence.
Begin sentences with subjects and verbs. Activate your verbs. Cut the adverbs. Lead with meaning. Make every word count. Use periods as stop signs. Clean, direct, purposeful sentences.
Sound familiar? It should. It’s also a checklist for what an AI detector flags as suspicious.
The smooth transitions. The active voice. Everything Clark spent a career teaching writers to do is now, in the age of AI, a potential red flag. The fundamentals of good writing look like something a machine would produce.
Here’s the devastating irony: AI learned to write by consuming the best of us. It studied Clark’s tools right alongside us. It read the same essays, the same journalism, the same carefully crafted prose that shaped a generation of writers. And now it mimics it well enough that we can’t tell the difference, or worse, we’ve stopped trying.
So what are we supposed to do? Write badly on purpose? Ignore everything we were taught? Bury the lede just to seem more human?
Clark built his life’s work on the belief that writing is a craft worth learning, worth teaching, worth fighting for. I believe that too. I’ve believed it every time I’ve put his tools in front of a student and said: this is how it’s done.
But now, when a student asks me what good writing looks like, I pause.
Today I have 33 papers to grade, all written by seniors and grad students in my entertainment public relations class. I already know what I’m going to find. Many will have the telltale signs we’ve all been trained to spot: the overly smooth transitions, the suspiciously balanced sentences, the kind of writing that technically says everything and somehow feels like nothing. The em-dashes. So many em-dashes.
But here’s what’s keeping me up at night.
As I work through my undergraduate strategic PR writing class, where students are just starting to learn how to write short, tight press pieces, I find myself lingering over the worst ones. The clunky ones. The ones that miss the mark. And I wonder if these might be the real ones.
Is bad writing the new proof of life?
I brought it up in the entertainment PR class last night and asked my students directly: have any of you had a paper or something you wrote flagged for AI?
Several raised their hands with stories of false accusations: students who swore the work was theirs, only to be told the detector said otherwise. AI detectors, by the way, are wrong more often than anyone wants to admit. OpenAI retired its own detection tool in 2023, citing its low rate of accuracy.
But then one student said something that stopped me cold.
“We know how to tell AI to make our writing sound more amateur.”
I haven’t been able to shake that.
We have arrived at a moment where students are actively prompting AI to write worse, to mimic the stumbles and roughness of someone still learning, because clean, competent writing has become suspicious. The very thing I stand in front of a classroom to teach is now a liability. Polish is a red flag. Everything I learned from brilliant professors and working journalists now must be hidden or dumbed down to pass as human.
I keep coming back to something that should be simple: what does human writing actually look like anymore?
For most of my career, I thought I knew. Human writing is specific. It’s the detail only you would notice, the memory only you could have, the sentence that embarrasses you a little because it’s too true. Human writing leaves marks on you. It has the writer’s fingerprints all over it.
But AI has been trained on all of it. Every feature story, every memoir, every deeply personal Modern Love column ever published. It has consumed the fingerprints. It can do specific. It can write the sentence that feels almost too true.
So where does that leave us?
I’ve been a journalism professor long enough to have watched the entire landscape shift beneath my feet. I watched the internet gut our student newsroom. I watched social media rewrite the rules of storytelling. I watched SEO turn good writers into keyword masters. And every time, we adapted. But this feels different.
Because this time the threat isn’t coming from outside. It’s inside the sentences themselves. It’s in the way a paragraph flows, the way an argument lands, the way a writer knows when to stop. AI has moved into the craft and set up shop, and now none of us can look at a clean piece of writing without wondering who or what is behind it.
That’s not a technology problem. It’s a trust problem.
And once trust goes, everything gets harder. The writer doubts herself. The reader doubts the page. The professor doubts the student. The editor doubts the submission. We are all now suspects.
My undergraduate students are navigating this in a way my generation never had to. They’re learning the craft at the exact moment the craft is being weaponized against them. Write too well and you’ll be accused of cheating. Write badly enough and maybe you’ll pass as real.
What a thing to teach someone.
It is becoming more difficult to stand in front of a classroom and say: here is how you write a strong lede, here is how you cut the fluff from a sentence, here is how you earn a reader’s trust, knowing that those same skills might someday get them flagged, questioned, or dismissed.
I still teach it anyway. Because the alternative, telling students to write worse, to perform imperfection as proof of humanity, is not something I’m willing to do.
My Modern Love essay still sits in Google Docs. I’ll finish it, and I’ll send it in, clean and precise, the way I was taught. I’m not roughing up the edges to prove I’m human. I’m not writing worse to seem more real.
If that makes it suspicious, then the thing we’ve broken isn’t trust in writers. It’s trust in writing itself.


