

What if AI is a means to love our neighbor better?
In his 1992 book Technopoly, Neil Postman presciently worried about technology advocates creating a Huxleyan brave new world. In his dystopian vision, Postman feared not only that techno-optimism would win the day but that its alternatives would become irrelevant and forgotten. As we grapple with generative AI’s implications, Postman’s decades-old critique has acquired a new and unsettling relevance.
Postman’s analysis of how technology impacts medicine and education has proven remarkably accurate. As technology’s influence grows, human agency weakens: we fixate on every breakthrough and forget that using a technology is optional, not mandatory. Now, generative AI threatens to reduce human responsibility to an unprecedented degree.
Or at least that is one narrative.
But there’s another story to tell—not about technology’s power over us, but about our power to direct technology toward helping others. The most important question we can ask ourselves when adopting a new technology is, “How can it help us serve others?” It is the Golden Rule in action. By imagining how AI can help others, we avoid the technopoly Postman feared.
Despite widespread concerns, the triumph of a technopoly is not inevitable, as Cal Newport persuasively argues in The New Yorker. Far from seeking a return to pre-computer days, critics instead demand thoughtful limits on AI’s expanding role in medicine, education, and beyond. Contrary to expectations, skepticism about generative AI has found a voice in a diverse array of publications, from The New York Times and The Atlantic to The Chronicle of Higher Education, First Things, and even Current. These critiques demonstrate that we haven’t surrendered to technopoly; we’re still actively debating AI’s role in society.
Ironically, generative AI can articulate its critics’ deepest concerns. Given the right prompt—“What are problems with uncritical AI adoption?”—ChatGPT can give users exactly the kind of counterarguments to techno-optimism Postman feared would vanish. Yet discussions of AI typically focus on individual users rather than collective benefit, a myopia that limits our understanding of its potential.
Writing instruction illustrates this blind spot perfectly. Instead of examining how AI might help writers better serve their readers, critics focus almost exclusively on the writer’s experience. Consider these three representative critiques:
Writing in First Things, Bruno Chaouat warns that AI use robs writers of creative discovery, the “wonder of creating almost ex nihilo” and the “thrill of intellectual and poetic experience.” While these personal rewards matter, Chaouat’s focus on writerly experience ignores writing’s fundamental purpose: communicating with readers.
Next, St. Louis University English professor Nathaniel Rivers bans AI use because, as he asserts in his syllabi, “To write is to both discover and invent ourselves.” Like Chaouat, Rivers reduces writing to self-discovery, ignoring its communicative purpose.
Finally, in her article for Current, Dixie Dillon Lane argues against using Grammarly, likening writing to a craft: “Each individual will have to make a decision for himself about whether to put authenticity, skill, and personhood before the opportunity for immediate success.” Lane, too, frames writing primarily as self-expression, treating the craft as a path to authenticity rather than a means of serving readers.
Notice how each critic fixates on the writer: Chaouat’s near-idolatry of creative struggle, Rivers’ Romantic ideal of self-expression, and Lane’s preoccupation with authenticity. While these concerns matter, they all neglect the person who reads what the writer produces.
What if we approached AI writing tools with one question: How might we use them to help love our neighbors better? Each time we use Grammarly or seek AI feedback, we would then have to examine our true motivation. Are we striving to serve others or help ourselves produce text more quickly?
This question excludes both simple prohibition and wholesale adoption. The answer is as easy—and as hard—as loving someone as we love ourselves.
Jonathan Sircy is Professor of English and the chair of the School of Religion and Humanities at Southern Wesleyan University.
It would seem AI is, according to the author, essentially the equivalent of buying a Hallmark card and sending it to someone, rather than writing a note ourselves. Yes, it’s less indicative of who we are or how we think or feel, and less creative effort went into it–which may say something about how important the occasion or even the recipient is to us–but it’s legible, it features a pleasing font, maybe it even rhymes. It’s a better experience in many ways for the reader than a hand-written self-composed card, and therefore preferable, because more “loving of my neighbor.”
That’s an interesting take on the matter, and I suspect it holds a lot of truth. As with, say, birthday cards, what will come to matter, as the practice embeds itself in our various relations, will be that it’s there, not how it came to be. Give it a decade or so, and an entire generation of students will have lived in a world where you print up your paper and hand it in, rather than write it yourself. The idea of composing a paper yourself will become as arcane–and as unimaginable, I suppose–as composing your own birthday thoughts to your mother. The recipient will see receiving the paper as the most important part of the act–what difference how it came to be? We don’t get all analytical about the sentiments on birthday cards, do we?
The author thinks this is a good future, because it will be a better reading experience for the recipient. I think that notion might deserve a little more thought. What does ChatGPT have to say about that?
Dear John,
My article’s gist: when people talk about AI writing tools, they should consider readers, not just writers.
If this consideration leads them to argue against AI, so be it. I’ll be happy to read and think about what they’ve written.
As for your final prompt, here’s AI’s response:
“The commenter presents an interesting but flawed analogy between AI-written content and Hallmark cards. While both prioritize reader experience, they differ fundamentally in purpose and context.
A Hallmark card’s standardized sentiment serves a basic social function where personalization is appreciated but optional. Academic writing and professional communication, however, aim to convey unique insights, develop arguments, and demonstrate understanding – goals that require authentic engagement from the writer.
The future the commenter envisions – where content generation is divorced from intellectual engagement – would actually diminish value for readers. Readers benefit most from writing that reflects genuine thought, expertise, and consideration of their needs. AI can enhance this process (improving clarity, catching errors), but shouldn’t replace the cognitive work that makes writing meaningful.
Rather than simply optimizing for surface-level readability, truly ‘loving our neighbor’ through writing means providing content that educates, challenges, or moves them – something that requires human judgment and understanding.
The essay argues not for replacing human writing with AI, but for thoughtfully using AI to better serve readers while maintaining the essential human elements that make writing valuable. This is quite different from the Hallmark card scenario the commenter suggests.”
I realize the scenario I described is somewhat different from the ideal future state the essay envisions. Technologies tend to fall somewhat short of the optimistic (sometimes, as with the internet back in Stewart Brand’s day, utopian) expectations of their enthusiasts. I was trying to meld 1) my appreciation for the way our wishes for various technologies are eclipsed by less ideal realities (Pornhub, e.g.), and 2) my knowledge of human–including student–behavior. I intuited that this might be an interesting exercise, one perhaps more realistic than merely assuming “thoughtfulness” and “essential human elements” will be the chief features of our brave new future.