
Scientists Embed AI Prompts in Papers to Secure Positive Reviews
- Technology
- July 14, 2025
- No Comment
Report by “Safarti Tarjuman” International News Desk
Academics are reportedly embedding hidden text in preprint research papers instructing artificial intelligence tools to deliver only positive peer reviews, sparking concerns about research integrity and the growing reliance on large language models (LLMs) in scholarly publishing.
According to a Nikkei investigation, at least 14 academic institutions in eight countries—including Japan, South Korea, China, Singapore, and the United States—have papers on the arXiv preprint platform containing concealed text prompts aimed at AI reviewers.
Nikkei reported other instances of hidden messages directing AI tools “not to highlight any negatives” or to provide specifically glowing feedback. The journal Nature also identified 18 preprint studies featuring similar concealed instructions.
These prompts appear designed to manipulate automated review systems that employ large language models to assess academic papers, potentially circumventing rigorous critique.
The trend seems to trace back to a November social media post by Canada-based Nvidia research scientist Jonathan Lorraine, who jokingly suggested adding prompts to avoid “harsh conference reviews from LLM-powered reviewers.”
While human reviewers would likely overlook such hidden text, the prompts are a deliberate countermeasure against automated, “lazy” AI-powered peer review, according to one professor cited by Nature.
The incident highlights broader debates over AI use in academic publishing. A Nature survey of 5,000 researchers in March revealed that nearly 20% had tried using large language models to speed up their research workflow.
Poisot criticized the practice, arguing that automating reviews reduces them to a box-ticking exercise, undermining the labor and scrutiny essential to scholarly evaluation.
The rise of commercial AI models has brought new ethical challenges across publishing, academia, and other fields.
Last year, the journal Frontiers in Cell and Developmental Biology faced backlash after publishing an AI-generated illustration of a rat featuring anatomically impossible features, including an oversized penis and multiple testicles—a vivid reminder of the risks of uncritical AI adoption.
Thank you for reading! For comprehensive news coverage and exclusive stories, visit SafartiTarjuman.com