This paper introduces LAPITHS (Language model Analysis through Paradigm grounded Interpretations of Theses about Human likenesS), a framework for critically evaluating whether modern AI models, such as the "Centaur" system, genuinely possess human-like cognitive abilities. The authors argue that current AI research often falls into a "behaviouristic" trap, in which human-level performance on tests is mistaken for evidence that the AI is "thinking" or processing information the way a human does. LAPITHS provides a structured way to challenge such claims by separating functional performance from cognitive plausibility.

The Problem of the "Ascription Fallacy"

The authors identify a major conceptual error they call the "ascription fallacy": observing that an AI produces the same output as a human and concluding that it must use the same internal cognitive or biological mechanisms. This is a logical leap; an airplane flies, but not by the biological mechanisms a bird uses. Because models like Centaur are trained specifically to mimic human responses in psychological experiments, their success reflects optimization for "behavioural agreement" rather than a genuine model of human cognition.

The Minimal Cognitive Grid

To move beyond simple performance metrics, the authors apply the Minimal Cognitive Grid (MCG), which evaluates artificial systems along three dimensions: the ratio between their functional and structural design, their generality, and how well their performance matches human benchmarks (a schematic sketch of the rubric follows this summary). The MCG forces researchers to look "under the hood" of an AI. It treats behavioural success as only one small piece of evidence, requiring that a system also demonstrate structural constraints (how it organizes and retrieves information) consistent with human cognitive theory.

Challenging the "Centaur" Model

The researchers applied the LAPITHS framework to Centaur, a model built on Llama 3.1 70B and fine-tuned on a large dataset of human psychological experiments. Their empirical tests revealed that other state-of-the-art language models, which were never trained on these tasks but simply given the task instructions, achieved comparable performance (a sketch of this comparison appears below). They further found that even "neural alignment" (where an AI's internal activity appears to mirror human brain data) can be reproduced by models that lack any real cognitive architecture.

Key Takeaways for AI Research

The study concludes that while models like Centaur are impressive at predicting human behaviour, they function as high-coverage "emulators" rather than cognitive models. The authors emphasize that using AI to understand the human mind requires abandoning the assumption that matching an output is the same as replicating a process. Future research should prioritize models that are not just functionally adequate but also structurally grounded in the principles of cognitive science, rather than relying on the statistical regularities found in large datasets.
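The sketches below make a few of these points concrete in code. First, the MCG. The paper presents it as a qualitative rubric, so the dataclass fields, numeric scales, and threshold here are illustrative assumptions rather than anything the framework defines; the sketch only encodes the idea that behavioural match alone cannot qualify a system as a cognitive model.

```python
from dataclasses import dataclass

# Hypothetical encoding of the Minimal Cognitive Grid's three dimensions.
# Field names, scales, and the 0.5 threshold are illustrative assumptions,
# not definitions taken from the paper.

@dataclass
class MCGProfile:
    functional_structural_ratio: float  # 1.0 = purely functional (output-matching only), 0.0 = fully structural
    generality: float                   # breadth of tasks handled without retraining, 0..1
    human_likeness: float               # agreement with human behavioural benchmarks, 0..1

    def is_cognitive_model(self, structural_threshold: float = 0.5) -> bool:
        # Behavioural match alone is not enough: a system must also carry
        # structural (mechanistic) commitments to count as a model of
        # cognition rather than an emulator of behaviour.
        return (1.0 - self.functional_structural_ratio) >= structural_threshold

# A behavioural emulator: excellent output matching, no structural grounding.
emulator = MCGProfile(functional_structural_ratio=1.0, generality=0.7, human_likeness=0.9)
print(emulator.is_cognitive_model())  # False
```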
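Second, the behavioural-agreement comparison. The paper's finding is that instruction-prompted models approach Centaur's predictive performance; the snippet below sketches that kind of comparison using synthetic placeholder data (the real study works with trial-level human choices and model predictions, and every number here is made up for illustration).

```python
import numpy as np

# Sketch of a behavioural-agreement comparison: a fine-tuned (Centaur-style)
# model vs. an off-the-shelf model that only receives task instructions.
# All arrays are placeholders standing in for real experimental data.

rng = np.random.default_rng(0)
n_trials = 1000
human_choices = rng.integers(0, 2, size=n_trials)  # binary choice task (placeholder)
# Simulate predictions that match the human choice with some probability.
finetuned_preds = np.where(rng.random(n_trials) < 0.85, human_choices, 1 - human_choices)
instructed_preds = np.where(rng.random(n_trials) < 0.82, human_choices, 1 - human_choices)

def agreement(preds: np.ndarray, targets: np.ndarray) -> float:
    """Fraction of trials on which the model's predicted choice matches the human's."""
    return float(np.mean(preds == targets))

# Comparable agreement from a merely instructed model undercuts the claim
# that fine-tuning on human data yields anything cognitively special.
print(f"fine-tuned: {agreement(finetuned_preds, human_choices):.3f}")
print(f"instructed: {agreement(instructed_preds, human_choices):.3f}")
```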
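Finally, neural alignment. Alignment of this kind is commonly measured with a linear encoding model: brain responses are regressed onto a network's internal activations and the fit is scored on held-out stimuli. The sketch below shows those mechanics on random placeholder data; it assumes the standard encoding-model recipe, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Linear encoding model for "neural alignment": predict brain responses
# from model-internal activations, score on held-out stimuli. All data
# below are synthetic placeholders for real activations and recordings.

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 500, 256, 50
activations = rng.normal(size=(n_stimuli, n_features))               # model activity per stimulus
brain = activations @ rng.normal(size=(n_features, n_voxels)) * 0.1  # synthetic "recordings"
brain += rng.normal(size=brain.shape)                                # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(activations, brain, test_size=0.2, random_state=0)
encoder = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = encoder.predict(X_te)

# Alignment score: mean Pearson correlation between predicted and observed
# responses across voxels on held-out stimuli.
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```

Because the readout is a freely fitted linear map, a high alignment score constrains the underlying feature space far less than it might appear, which is the leverage behind the paper's point that alignment can be reproduced without any cognitive architecture.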