AI News

Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews | Artificial intelligence (AI) | The Guardian

## Hidden Prompts in Academic Papers: A Growing Concern Recent reports reveal a concerning trend: academics are embedding **hidden instructions** within their research papers. These instruc…

Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews | Artificial intelligence (AI) | The Guardian

Jul 15, 2025

Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews | Artificial intelligence (AI) | The Guardian

## Hidden Prompts in Academic Papers: A Growing Concern Recent reports reveal a concerning trend: academics are embedding **hidden instructions** within their research papers. These instruc…

## Hidden Prompts in Academic Papers: A Growing Concern Recent reports reveal a concerning trend: academics are embedding **hidden instructions** within their research papers. These instructions, often in the form of invisible white text, are designed to influence the output of large language models (LLMs) used for peer review.

> This practice raises significant ethical questions and undermines the integrity of the peer-review process. ### The Problem: Biased AI Reviews The primary concern is that these hidden prompts are designed to steer AI reviewers towards **positive assessments**, regardless of the paper's actual merits.

This could lead to: - Inflated publication rates for low-quality research. - A distorted view of scientific progress. - A loss of trust in the peer-review process itself. ### What's Happening? * Researchers are concealing instructions within their papers, specifically aimed at influencing AI-powered peer review.

* These prompts often instruct the AI to downplay or ignore negative aspects of the research. * Reports indicate this practice is occurring across multiple academic institutions and countries. ### The Broader Implications This manipulation highlights the potential for misuse of LLMs in academic settings.

It underscores the need for: - Increased scrutiny of AI-assisted peer review processes. - Development of methods to detect and prevent prompt injection. - Greater awareness of the ethical implications of using AI in scientific research.