Ex Ante Evaluation of AI-Induced Idea Diversity Collapse
Generative AI is often judged by how well it helps an individual user create a better story, slogan, or idea. However, this focus ignores a hidden problem: when many people use the same AI model, their outputs can become increasingly similar, leading to a "diversity collapse." This paper introduces a new framework to measure this risk before a model is ever released to the public. By comparing AI-generated content against a baseline of human-only work, the researchers provide a way to predict whether a model will cause a population of users to produce redundant, less unique ideas.
Measuring Crowding Without Humans
Current methods for studying diversity collapse are "post hoc," meaning they require expensive and time-consuming studies where humans interact with AI to see what they produce. This paper proposes an "ex ante" (before the fact) protocol that eliminates the need for human-AI interaction data. Instead, it uses model-only generations and compares them to a matched baseline of unaided human work. By treating ideas as "congestible resources"—much like a crowded road—the researchers can calculate an "excess-crowding coefficient" that quantifies how much more similar AI outputs are compared to the natural overlap found in human creativity.
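To make the idea concrete, here is a minimal sketch of how an excess-crowding coefficient could be computed. The specifics are assumptions for illustration, not the paper's actual definitions: ideas are represented as embedding vectors, "crowding" is taken to be mean pairwise cosine similarity, and the coefficient is the gap between the model's crowding and the human baseline's.

```python
# Sketch of an excess-crowding coefficient (illustrative assumptions:
# ideas are embedding vectors; crowding = mean pairwise cosine similarity).
import numpy as np

def mean_pairwise_similarity(embeddings: np.ndarray) -> float:
    """Mean cosine similarity over all distinct pairs of row vectors."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(X)
    # Sum the off-diagonal entries and average over ordered pairs.
    return float((sims.sum() - np.trace(sims)) / (n * (n - 1)))

def excess_crowding(model_emb: np.ndarray, human_emb: np.ndarray) -> float:
    """How much more similar model outputs are than the matched
    human-only baseline; positive values indicate excess crowding."""
    return mean_pairwise_similarity(model_emb) - mean_pairwise_similarity(human_emb)
```

A positive coefficient means the model's outputs overlap more than the natural overlap found in unaided human work; zero or below means no excess crowding under this measure.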
The Parity Condition
The framework establishes a benchmark called the "human-relative diversity ratio," which compares the diversity of a model's outputs to the diversity of the matched human baseline. If this ratio is 1 or higher, the model is at "parity," meaning it introduces no more crowding than humans would naturally produce on their own. If the ratio falls below 1, the model is creating excess crowding. The authors demonstrate that this metric is not just a theoretical number; it directly relates to an "adoption game." As more people use a model that falls below the parity threshold, the value of the ideas it produces drops because they become less unique, creating a "redundancy cost" for the user.
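The parity logic above can be sketched in a few lines. Both functions are toy illustrations under stated assumptions, not the paper's formulas: the ratio is taken as model diversity divided by human-baseline diversity, and the redundancy cost is assumed to scale with the parity gap and the fraction of the population adopting the model.

```python
# Toy parity check and redundancy cost (illustrative assumptions only:
# the actual adoption-game payoff in the paper may differ).

def diversity_ratio(model_diversity: float, human_diversity: float) -> float:
    """Human-relative diversity ratio; 1.0 or above means parity."""
    return model_diversity / human_diversity

def redundancy_cost(ratio: float, adoption_share: float) -> float:
    """Zero at or above parity; below parity, the cost grows with both
    the parity gap and the share of users drawing on the same model."""
    gap = max(0.0, 1.0 - ratio)
    return gap * adoption_share
```

For example, a model whose outputs are only 80% as diverse as the human baseline (ratio 0.8) imposes a growing cost as its adoption share rises, which is the mechanism behind the "redundancy cost" described above.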
Results and Practicality
The researchers tested three frontier Large Language Models (LLMs) across three creative tasks: writing short stories, generating marketing slogans, and finding alternative uses for common objects. They found that all three models fell below the parity threshold, indicating that they consistently produce more crowded, less diverse outputs than humans do when working without AI. Importantly, the study shows that this crowding is not an unchangeable trait of the AI. By adjusting generation protocols—such as changing the model's "temperature" or using persona-mixture prompting—developers can actively reduce crowding.
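One of the mitigations mentioned, persona-mixture prompting, can be sketched as follows. The persona list, prompt template, and random draw here are all hypothetical; the paper's actual protocol may differ. The idea is simply that seeding each generation with a different randomly drawn persona pushes repeated outputs apart.

```python
# Sketch of persona-mixture prompting (hypothetical personas and
# template; illustrates the mitigation idea, not the paper's protocol).
import random

PERSONAS = [
    "a retired marine biologist",
    "a skeptical patent lawyer",
    "a street-food vendor",
    "an avant-garde poet",
]

def persona_prompt(task: str, rng: random.Random) -> str:
    """Prepend a randomly drawn persona so that repeated calls start
    from different viewpoints, reducing crowding across outputs."""
    persona = rng.choice(PERSONAS)
    return f"You are {persona}. {task}"
```

Raising the sampling temperature works on the same principle from a different direction: instead of varying the starting viewpoint, it flattens the model's token distribution so that less likely (and often less generic) continuations are sampled more frequently.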
Why This Matters
This research shifts the conversation around AI creativity from a retrospective diagnosis to an actionable design goal. By providing a standardized way to audit models for diversity collapse during the development phase, the framework allows developers to identify and mitigate crowding risks before deployment. For users, it highlights a crucial trade-off: the benefit of using an AI assistant must be weighed against the potential loss of distinctiveness that occurs when many others are drawing from the same generative source.