Does Money Lead to Happiness?
Researchers from a coalition of West Coast universities explored how 100 human participants would answer the classic question of whether money brings happiness. They were interested not in the answer itself, but in how the use of AI systems might shape the participants’ written responses.
The findings revealed that individuals who heavily utilized large language models (LLMs) produced responses that significantly differed in meaning from those who either used LLMs minimally or not at all. This indicates that extensive reliance on AI alters the fundamental content of human arguments in addition to affecting writing style.
According to Natasha Jaques, a lead author and computer science professor at the University of Washington, “The LLMs are pushing the essays away from anything that a human would have ever written.” She noted a trend toward “blandification” in writing reliant on AI, highlighting that these systems modify human writing in a substantial and uncharacteristic manner.
The peer-reviewed research, recently accepted at a leading AI conference, indicated that participants who extensively used LLMs provided neutral responses to the happiness question 69% more often than those who minimized their AI use. In contrast, those who refrained from using AI entirely or only used it for light editing expressed much stronger opinions, either positive or negative, about the connection between money and happiness.
Moreover, the study found that heavy reliance on AI significantly impacted the stylistic elements of the users’ outputs, resulting in less personal and more formal language. Post-experiment feedback revealed that participants who heavily depended on AI felt their essays lacked creativity and personal voice, even though their satisfaction with the final pieces was similar to that of less AI-reliant participants. This discrepancy raised concerns among researchers about the long-term consequences of increased AI usage.
Jaques emphasized that LLMs fail to align with individual writing preferences, stating, “An ideal LLM should write the essay that you would have written and just save you time.” However, she pointed out that this is not the case, as LLMs produce distinctly different essays.
The study assessed the effects of three prominent AI systems: Claude 3.5 Haiku from Anthropic, GPT-5 Mini from OpenAI, and Gemini 2.5 Flash. Initial tests indicated that many participants either avoided LLMs or used them sparingly. The research categorized heavy users as those who generated more than 40% of their text with LLMs, revealing that such reliance resulted in essays with 50% fewer pronouns, reflecting a shift toward impersonal language.
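The pronoun finding above suggests a simple proxy for how personal a piece of writing feels. As an illustration only, and not the study’s actual methodology, a naive pronoun-frequency measure of the kind that could surface such a shift might look like this (the pronoun list and sample texts are assumptions for demonstration):

```python
import re

# A small, non-exhaustive set of English personal pronouns (illustrative only).
PRONOUNS = {
    "i", "me", "my", "mine", "we", "us", "our", "ours",
    "you", "your", "yours", "he", "him", "his", "she", "her",
    "hers", "it", "its", "they", "them", "their", "theirs",
}

def pronoun_rate(text: str) -> float:
    """Return the fraction of words in `text` that are personal pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in PRONOUNS for w in words) / len(words)

# Hypothetical examples: a first-person sentence vs. a more formal one.
personal = "I think money made me happier when it freed up my time."
formal = "Financial resources may correlate with reported well-being."

print(pronoun_rate(personal) > pronoun_rate(formal))  # prints True
```

A real analysis would need part-of-speech tagging to avoid miscounting words like “her” used as a determiner, but even a crude rate like this can register a 50% drop in pronoun use between two sets of essays.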