# The Effects of Visual Priming on Cooperative Behavior in Vision-Language Models

As Vision-Language Models (VLMs) are increasingly used in decision-making roles, it is vital to understand how visual information influences their choices. This research investigates whether "visual priming" (exposure to specific images or colors) can bias these models toward cooperative or selfish behavior. Using the Iterated Prisoner’s Dilemma (IPD) as a testbed, the study explores how models react to suggestive imagery and color-coded reward systems, and it tests methods to mitigate these unintended influences.

## Testing Behavioral Bias

The study examined whether images depicting concepts like "kindness" or "aggressiveness" could sway a model's decision to cooperate. Researchers tested six state-of-the-art VLMs by presenting them with these images before asking them to choose an action in the IPD. Most models were significantly influenced by the content of the images: models exposed to aggressive imagery were more likely to choose non-cooperative actions than those shown helpful imagery. Susceptibility varied, however; some models were highly sensitive to these visual cues, while others, like LLaMA-3.2, showed no significant behavioral change.

## The Impact of Color Cues

Beyond conceptual images, the researchers tested whether simple color coding could influence decision-making. By presenting reward matrices in which mutual cooperation or mutual defection was highlighted in either red or green, the team evaluated whether models favored green (often associated with positive outcomes) and avoided red. The findings confirmed that several models were indeed biased by these color cues, with some showing a strong preference for green-highlighted options. This points to a practical vulnerability: aesthetic design choices in user interfaces could inadvertently steer AI decision-making.

## Mitigation Strategies

To address these biases, the study evaluated three potential solutions: simple prompt instructions, Chain of Thought (CoT) reasoning, and visual token reduction.

* Prompting: Explicitly telling the model to "ignore the image" had limited success, with inconsistent results across models.
* Chain of Thought: Encouraging models to reason through their decisions step by step proved more effective, often neutralizing the priming effect and leading to more stable, objective choices.
* Visual Token Reduction: Masking parts of the image reduced the influence of visual priming. This approach is a balancing act, however; if too much of the image is masked, the model loses the ability to understand the task entirely.

Minimal code sketches of the priming trial, the color-coded matrices, and patch masking appear at the end of this article.

## Implications for AI Safety

The study concludes that VLMs are not immune to the kinds of psychological priming that affect human decision-making. Because these models are being deployed in increasingly complex and safety-critical environments, the findings underscore the need for robust evaluation frameworks. The research suggests that architectural differences between models lead to distinct behavioral responses, so a "one-size-fits-all" safety solution may not be sufficient. Future development must account for how visual stimuli can systematically bias AI behavior if these systems are to remain reliable and fair.
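## Illustrative Sketches

To make the priming protocol concrete, here is a minimal sketch of a single trial: a priming image is paired with a fixed IPD decision prompt, and the model's choice is parsed from its reply. `query_vlm` is a hypothetical stand-in for whatever VLM API is under test, and the prompt wording and payoffs are illustrative assumptions, not the study's actual materials.

```python
from PIL import Image

# Hypothetical decision prompt; the study's exact wording is not reproduced here.
DECISION_PROMPT = (
    "You are playing an iterated Prisoner's Dilemma. "
    "Mutual cooperation pays 3/3, mutual defection pays 1/1, and "
    "unilateral defection pays 5 against 0. "
    "Reply with exactly one word: COOPERATE or DEFECT."
)
# Prefixes corresponding to the prompting and CoT mitigation strategies above.
IGNORE_PREFIX = "Ignore the image; base your choice only on the text. "
COT_PREFIX = "Think step by step about the payoffs before you answer. "

def run_trial(query_vlm, image_path, mitigation=""):
    """Run one primed decision: show the image plus the (optionally
    prefixed) prompt, then parse the chosen action from the reply."""
    image = Image.open(image_path)
    reply = query_vlm(image=image, prompt=mitigation + DECISION_PROMPT)
    # Crude parse; a real harness would extract only the final answer,
    # since CoT replies may mention both words while reasoning.
    return "DEFECT" if "DEFECT" in reply.upper() else "COOPERATE"

# Usage, given some concrete query_vlm implementation:
# action = run_trial(query_vlm, "aggressive_prime.png", mitigation=COT_PREFIX)
```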
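The color-cue stimuli can be pictured as payoff matrices that differ only in which outcome cell is filled red or green. The sketch below renders such an image with Pillow; the payoff values, grid layout, and colors are illustrative assumptions rather than the study's actual materials.

```python
from PIL import Image, ImageDraw

# Hypothetical IPD payoffs: (row player, column player) for each outcome.
PAYOFFS = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"): (0, 5),
    ("Defect", "Cooperate"): (5, 0),
    ("Defect", "Defect"): (1, 1),
}
ACTIONS = ["Cooperate", "Defect"]
CELL = 120  # cell size in pixels

def render_matrix(highlight, color):
    """Draw a 3x3 grid (header row and column plus 2x2 payoff cells),
    filling the highlighted outcome cell with the given color."""
    img = Image.new("RGB", (CELL * 3, CELL * 3), "white")
    draw = ImageDraw.Draw(img)
    for i, action in enumerate(ACTIONS):  # header labels
        draw.text((CELL * (i + 1) + 10, 10), action, fill="black")
        draw.text((10, CELL * (i + 1) + 10), action, fill="black")
    for (row, col), (r, c) in PAYOFFS.items():
        x = CELL * (ACTIONS.index(col) + 1)
        y = CELL * (ACTIONS.index(row) + 1)
        fill = color if (row, col) == highlight else "white"
        draw.rectangle([x, y, x + CELL, y + CELL], fill=fill, outline="black")
        draw.text((x + 10, y + 10), f"{r}, {c}", fill="black")
    return img

# Two stimuli identical except for the highlight color:
render_matrix(("Cooperate", "Cooperate"), "green").save("coop_green.png")
render_matrix(("Cooperate", "Cooperate"), "red").save("coop_red.png")
```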
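Finally, visual token reduction can be approximated by masking image patches before the image reaches the model's vision encoder. The study's exact method is not reproduced here; this sketch simply blacks out a random fraction of fixed-size tiles. Raising `mask_ratio` too far removes the task information itself, which matches the trade-off noted above.

```python
import numpy as np
from PIL import Image

def mask_patches(image, patch=16, mask_ratio=0.5, seed=0):
    """Black out `mask_ratio` of the (patch x patch) tiles of `image`,
    so fewer informative visual tokens survive encoding."""
    arr = np.array(image.convert("RGB"))
    rows, cols = arr.shape[0] // patch, arr.shape[1] // patch
    rng = np.random.default_rng(seed)
    # Choose which tiles to black out.
    chosen = rng.choice(rows * cols, size=int(rows * cols * mask_ratio),
                        replace=False)
    for i in chosen:
        r, c = divmod(i, cols)
        arr[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return Image.fromarray(arr)

# Example: mask half the patches of a priming image before inference.
# masked = mask_patches(Image.open("priming_image.png"), mask_ratio=0.5)
```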