Science-Adjacent: AI is Ultra-processing Academic Writing and We are on the Verge of Starving

Apr 01, 2026
 

About two weeks ago...

I opened a manuscript to peer review. It was a resubmission, so I settled in with the positive mindset that the authors were well on their way to publication. Then I felt something unsettling: a creeping suspicion, a weird pattern I'd started to notice a few months (and several manuscripts) earlier, but one that hit me immediately this time. The words were there, but what was I missing? As I read through the Introduction, a sense of disorientation set in. Ah, right: I was consuming science-adjacent prose. It hit the surface markers of a well-written paper, but something was missing.

I've been thinking more and more about that missing thing. And I'm ready to talk about it.

 

"Academic voice" is a moving target — and not a neutral one

Let me clarify what I mean when I refer to academic voice. I don't mean grammar and clarity, although those are the starting point. I mean a particular kind of English that's formal, established, citation-heavy, and developed by particular institutions, for particular readers, in particular languages. The academic publishing system has been built around it, and everyone complies.

EFL writers (researchers writing in English as a foreign language) navigate this their entire careers*. They are evaluated, desk-rejected, and penalized not because their ideas are insufficient but because their syntax and word choice don't sound quite 'right.' It's hard to put a finger on it as a peer reviewer, so it defaults into the category of 'you know it when you see it,' which means it's nearly impossible to recommend revisions. If you can't name it, you can't change it. I cannot imagine how frustrating it must be to receive reviewer comments that the paper requires major revision due to lack of scientific language. What does that even mean?

The bias is structural. It's visible in the record of review, yet rarely made explicit in the conversations where it actually matters: editorial decisions, reviewer comments, hiring committees. 'Scientific language' gets questioned, but without a definition, the solution remains equally ambiguous.

*This is where my experience as a reviewer is concentrated, but the pressure to sound 'right' rather than think clearly is not exclusive to those writing in a second language.

All of this is not a digression. It's the whole context I’m asking you to hold onto.

 

AI has become the solution to an unfair problem

Of course it has. If the system punishes your voice and a tool can approximate the acceptable one, adoption is rational. I want to be clear about that before anything else: researchers using AI to navigate a gatekeeping system that was never designed for them are making a reasonable choice in an unreasonable situation. This might be a controversial statement, but you don't need a PhD in behavioural science (like me) to understand the decision-making logic.

Whether you're navigating a system that was never built for your language, or simply one that was never built for your voice, the problem is the same.

However, reasonable choices can have unintended consequences. Ok, now I'm ready to digress…

I've been reading Chris Van Tulleken's Ultra-Processed People. It’s a book about what happens when food gets engineered to hit the sensory markers of nourishment while systematically replacing what nourishment actually does. The fiber is gone. The micronutrients are gone. What remains is the performance of food. Ultra-processed food didn't explode into the market because people made bad choices. It succeeded because the system created conditions where it was the most accessible, affordable, and available option.

I kept thinking about the manuscript I knew I had to review as I swiveled back and forth in my chair. About the slush bucket paradox of it — the density of words, the surface fluency, the 'almost right' verbs; the disquieting hollowness of it.

Then I had an even more disquieting thought: We are producing ultra-processed science. And like UPF, it didn't happen because researchers are lazy or dishonest. The biased system made it the path of least resistance.

 

What gets lost? The fingerprint

Here is what I noticed in that manuscript, and in others in retrospect.

There are a few specific tells.

First, the vocabulary performs expertise without landing it. Words that are almost right — scientific-adjacent, borrowed from the right synonym bank, assembled correctly — but organized so densely and so consistently that they stop signifying and start signaling. Proficiency becomes smoke and mirrors. I find myself stopping on a word and questioning if it's a new term for what was — what I thought was — an established one.

Next, syntactic complexity stands in as a proxy for reasoning. Subordinate clauses are layered on with the promise of building toward an argument. Long, complex sentences are grammatically robust but logically empty. The sentence ends and you realize nothing was actually said. I pause and re-read, questioning my own ability to find the logic. What am I missing?

And then the cruelest tell: eloquence without location. There is no angle. No hesitation. No moment where a particular mind is caught mid-thought. The prose is competent in a way that has no origin. Dare I say it's like a Midwestern American accent? In place and placeless at the same time.

Voice (spoiler alert!) is not innate. It's something you develop — slowly, through the act of writing badly first. Even the best writers must hone their craft. I remember writing my fourth journal paper. After three papers, I was bored with writing the (relatively) same literature review. I decided to challenge myself and begin with my own thoughts rather than copy-pasting and rephrasing my notes. I sent it to my advisor to review, and she told me it was the first time she could hear my voice. Not my sources. Me. I didn't even know what she meant until she said it — I'd been so focused on sounding like the literature that I hadn't noticed when I'd finally stopped. The mirror phase is real and necessary. You start by reflecting everyone you've read. Then, gradually, something shifts. In my case, boredom was my muse.

AI doesn't shortcut that process. It makes the mirror permanent.

It's challenging to get through the mirror phase, and it can take many more than three papers. Every writer moves through this phase—native English speakers included, though the system is far less patient with those for whom English is a second, third, or fourth language. EFL writers aren't behind; they're in a phase every academic writer moves through. The difference is the system doesn't give them the runway. AI offers a shortcut, but the cost is the fingerprint. The fingerprint is proof that a specific mind encountered specific evidence and moved, however slightly, toward something new.

 

Performative language isn't scientific language

This matters beyond aesthetics. Science has a directionality. It moves forward. Even a study that confirms what we already know does so from a new perspective: new data, new context, new limitations. That movement, however incremental, is the mechanism. The smallest study with a clear question and a rigorous method advances knowledge in a way that no synthesis of existing work can replicate. I've written about this elsewhere in the context of small studies. The value was never in the scale; it was in the specificity of the viewpoint.

AI has no perspective. It will always occupy the viewpoint directly at the center of existing knowledge. The average of what already exists. AI is consensus performing as argument. It can hedge without reasoning, cite without reading, conclude without arriving anywhere. The words are doing the right things in the right order, and yet the science is not happening.

That is not a stylistic failure. That is an epistemological one. Should I sound the alarm?

 

A field feeding on itself

Here is where it gets harder to see.

AI-generated prose is entering the published record. It is being indexed, scraped, and ingested as training data for the next generation. More than just stalling, science is feeding on itself. The mirror is no longer just reflecting; it's replicating like a house of mirrors. A house of horrors.

And what it replicates is not neutral. The existing literature was produced by a system already biased toward certain methods, certain institutions, certain languages. AI amplifies what's already there. It doesn't introduce new bias so much as it entrenches existing bias at scale, all while giving it the appearance of synthesis and rigor. This whirlpool has a center of gravity. Everything gets pulled toward it. The undertow is invisible until you're already in it. And then you notice that everything sounds the same, points the same direction, and you can't remember when it started.

This is the part I find hardest to sit with. Not the individual manuscript (remember this started with that manuscript I reviewed a few weeks ago?). Yeah, not just one manuscript, but the dizzying shift I now see coming.

 

I don't know how this ends

I'm not going to tell you to stop using AI. I'm not positioned to make that argument, and I'm not sure it's the right one anyway. AI is here. The bias that made it rational to use when writing journal papers is still here too.

What I will say is this: we are scientists because we seek to find something new. Not necessarily something large, but something we didn't already know: a confirmation, a contradiction, a tiny extension beyond what exists. Something that wasn't in the literature until we put it there. AI can only give us back what's already out there. It has no capacity for the new.

Let me state that again: AI has no capacity for the new.

It can dress the existing record up, smooth it out, make it sound more confident than it is, but it cannot move us forward, because it cannot imagine what does not yet exist.

Peer review is struggling to 'taste' the difference. The nutrition label is missing. We need to know what we are consuming before science ends up feeding on itself.

I don't know what the way forward looks like at a systems level. But I think it starts with naming what we're losing — which is not just voice, not just style, but the specific irreplaceable thing that happens when a located mind works through a hard problem and arrives somewhere it didn't expect.

I went back to that manuscript. I finished the review and recommended major revisions on the scientific content of the paper. I've read enough papers written by EFL authors to see behind the language, although seeing behind the AI is a new skill I'm still developing. Then I asked simply whether AI had written the paper and why it hadn't been acknowledged. Finally, I wrote privately to the Editor to recommend rejecting the paper, and I pointed to the AI policies I suspected the authors had violated.

The alarm has sounded.

 

Footnotes

Don’t forget to check out my new video this month on figure permissions and what academics get wrong.

You can find it in the Publish It! Library 

Or watch on the Publish It! YouTube channel. I upload new content monthly so subscribe if you are interested!

If you are ready to Draft It! check out The Essential 10-Week Workshop – a self-paced course with tutorials, community discussion board, and workbook designed to guide you in preparing your first draft in 10 weeks for submission to a peer reviewed journal!  

And while you are at it—join the Publish It! Community and share your experiences with other academic writers. It’s free!
