Rishiraj's blog

Oh, You Hate AI-Generated Content?

There’s a rising frustration in tech writing: “AI writing is soulless. It lacks personality. It lacks the human touch.”

It’s an argument we hear every day in the tech and creator communities. But I think this perspective misses a massive, fundamental shift in how we should evaluate the sharing of knowledge. We are conflating writing style with actual substance.

Here is the controversial truth: If you are judging the value of an article based purely on its prose, you are chasing the wrong human touch.

The Illusion of "Writing Style"

Many people who hate AI content do so because they despise the generic, overly enthusiastic LLM tone (we all know the words: delve, tapestry, revolutionary). They argue that human writers have a distinct "voice."

But let’s be brutally honest: writing style is nothing more than a pattern.

If a developer has written a dozen deep-dive blogs on Hugging Face, I can easily scrape those, fine-tune an open-source model, and generate text that perfectly mimics their syntax, vocabulary, and cadence. I can make an AI sound exactly like you.

If I used a model fine-tuned on your exact writing style to polish my article, would you suddenly like it? If the answer is yes, then your problem isn't with AI—it's with bad prompting. And more importantly, it implies that you value the illusion of a human voice over the information being shared.

The True "Human Touch" is Practical Experience

If writing style can be cloned, what is the irreplaceable human element? Scars from the trenches.

AI models are trained on past data. They know the documentation. But they do not know what happens when you try to run that documentation in a messy, real-world production environment.

Recently, I published a technical guide on Context-Aware Music Retrieval with Yambda and JAX Data Parallelism. I won’t lie: I used AI to help write the final draft. But here is what the AI didn’t do:

It didn’t generate the substance. The draft was filled with dozens of practical, hard-earned tips: observations you can only make by staring at a terminal, running the code, failing, and iterating.

Once I had the logic, the code, and the raw insights, I asked an AI to restructure my chaotic brain-dump into a coherent, engaging blog post.

Why Are We Penalizing the Polish?

If a CEO gives bullet points to a human ghostwriter who turns them into a Forbes article, we applaud the CEO's thought leadership. But if a developer gives their raw, hard-earned technical insights to an AI to format into readable English, it is suddenly dismissed as "AI trash." Why?

We need to stop viewing AI as the author and start viewing it as the editor.

A beautifully written article by a human that shares generic, surface-level information is practically useless. Conversely, an AI-polished article that tells you exactly how to bypass a bug in a brand-new framework is incredibly valuable.

The Bottom Line

We are living in an era where perfectly coherent, beautifully structured English is no longer a scarce commodity. AI has commoditized syntax.

So, what is left? Information, knowledge, and practical execution.

We need to stop playing detective, trying to sniff out whether an AI chose the adjectives in a paragraph. Instead, we should be asking: Did the person who prompted this AI actually build the thing they are talking about? Are the insights real? Does this solve my problem?

If the knowledge is born from human experience, the tool used to communicate it shouldn't diminish its value. Let AI handle the grammar. Let humans handle the breakthroughs.

What are your thoughts? Do you immediately click away if you suspect an article is AI-assisted, even if the technical code inside is sound? Let's debate in the comments.