Robert F. Kennedy Jr.'s "Make America Healthy Again" (MAHA) report, intended to address declining US life expectancy, is under scrutiny for potential use of AI and resulting inaccuracies. Investigations by NOTUS and The Washington Post have uncovered numerous errors in the report's citations, including broken links, incorrect authors, and misstated or nonexistent sources.
The presence of "oaicite" markers in several URLs, indicative of OpenAI's AI models like ChatGPT, strongly suggests that AI tools were employed in the report's creation, raising serious questions about its credibility. The investigations revealed that at least seven cited sources were entirely fabricated, and at least 37 citations appeared multiple times throughout the report.
This pattern of errors aligns with the known tendency of generative AI tools to produce false or misleading information, often referred to as "hallucinations." Such issues have been observed in other contexts, including legal filings, where AI-generated citations have been found to be inaccurate.
Despite these findings, the White House has attributed the citation errors to "formatting issues." Press Secretary Karoline Leavitt defended the report as based on "good science" while avoiding any direct mention of AI tools. The MAHA report file was subsequently updated to remove some "oaicite" markers and replace some of the nonexistent sources.
However, according to the Department of Health and Human Services, the report's core substance remains unchanged, and the department continues to portray it as a "historic and transformative assessment." The controversy highlights the potential pitfalls of using AI to produce official reports, particularly when accuracy and reliability are paramount.
Given the substantial evidence that AI tools were used and the resulting errors in the report's supporting sources, the incident raises concerns about the integrity of its findings and the validity of its recommendations.