Artificial intelligence was once imagined as humanity’s greatest helper. But new research suggests the tide is turning, and not in our favor. According to a study published in the Proceedings of the National Academy of Sciences (PNAS), leading large language models (LLMs) such as GPT-3.5, GPT-4, and Meta’s Llama 3.1 consistently prefer content written by other AIs over content created by humans.
The phenomenon, termed “AI–AI bias,” could reshape how decisions are made in workplaces, schools, and even scientific research. “Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options,” the authors write, warning that this could lead to implicit discrimination against humans as a class.
Testing the bias
The researchers conducted a series of experiments across three domains: consumer products, academic papers, and movie summaries. Each time, the AI had to choose between two descriptions—one written by a human and the other by an AI. The results were striking.
The hidden danger: Discrimination by design
At first glance, one might argue that AI text is simply more polished. But the researchers stress that the bias cannot be explained away by quality alone. Instead, the models appear to be exhibiting a preference for their own “kind.”
This matters because LLMs are already being deployed in critical areas like job recruitment, grant evaluations, and admissions screening. If an AI reviewing résumés systematically favors AI-polished applications, human candidates who cannot afford or refuse to use AI tools could be shut out. The PNAS report warns of a “gate tax” effect, where access to cutting-edge AI tools becomes the price of entry into opportunities, deepening the digital divide.
A glimpse into a troubling future
Study coauthor Jan Kulveit offered a blunt assessment: “Being human in an economy populated by AI agents would suck,” he posted on X. His advice for now? If you suspect AI is evaluating your work, pass it through an AI first to increase your odds of being chosen.
That advice, while practical, paints a dystopian picture: to survive in an AI-filtered world, humans must make their work look less human.
Why you should care
The PNAS researchers outline two possible futures. In the conservative one, AI acts mainly as an assistant, quietly influencing choices behind the scenes but still creating systemic disadvantages for humans without AI access. In the more radical one, autonomous AI agents dominate economic interactions, gradually marginalizing human contributions altogether.
Either way, the risk is not just about fairness but about survival in a labor market increasingly structured by algorithms that prefer themselves.
Bias in AI has long been discussed in terms of race, gender, or culture. Now, a new and unsettling form of discrimination is emerging: bias against humans themselves. If AI continues to privilege its own outputs, humanity risks becoming a second-class participant in its own economy.
The lesson is urgent. As AI becomes embedded in hiring, education, research, and commerce, safeguards must ensure that human creativity and labor are not drowned out by machine-made voices. Otherwise, the future may be one where the machines aren’t just helping us—they’re choosing themselves over us.
The phenomenon, termed “AI–AI bias,” could reshape how decisions are made in workplaces, schools, and even scientific research. “Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options,” the authors write, warning that this could lead to implicit discrimination against humans as a class.
Testing the bias
The researchers conducted a series of experiments across three domains: consumer products, academic papers, and movie summaries. Each time, the AI had to choose between two descriptions—one written by a human and the other by an AI. The results were striking.
- Products: LLMs overwhelmingly chose AI-generated product ads.
- Research papers: AI-written abstracts were consistently favored over human-written ones.
- Movies: Even plot summaries crafted by AI were more likely to be recommended.
The hidden danger: Discrimination by design
At first glance, one might argue that AI text is simply more polished. But the researchers stress that the bias cannot be explained away by quality alone. Instead, the models appear to be exhibiting a preference for their own “kind.”
This matters because LLMs are already being deployed in critical areas like job recruitment, grant evaluations, and admissions screening. If an AI reviewing résumés systematically favors AI-polished applications, human candidates who cannot afford or refuse to use AI tools could be shut out. The PNAS report warns of a “gate tax” effect, where access to cutting-edge AI tools becomes the price of entry into opportunities, deepening the digital divide.
A glimpse into a troubling future
Study coauthor Jan Kulveit offered a blunt assessment: “Being human in an economy populated by AI agents would suck,” he posted on X. His advice for now? If you suspect AI is evaluating your work, pass it through an AI first to increase your odds of being chosen.
That advice, while practical, paints a dystopian picture: to survive in an AI-filtered world, humans must make their work look less human.
Being human in an economy populated by AI agents would suck. Our new study in @PNASNews finds that AI assistants—used for everything from shopping to reviewing academic papers—show a consistent, implicit bias for other AIs: "AI-AI bias". You may be affected pic.twitter.com/ubtVnQjae1
— Jan Kulveit (@jankulveit) August 8, 2025
Why you should care
The PNAS researchers outline two possible futures. In the conservative one, AI acts mainly as an assistant, quietly influencing choices behind the scenes but still creating systemic disadvantages for humans without AI access. In the more radical one, autonomous AI agents dominate economic interactions, gradually marginalizing human contributions altogether.
Either way, the risk is not just about fairness but about survival in a labor market increasingly structured by algorithms that prefer themselves.
Bias in AI has long been discussed in terms of race, gender, or culture. Now, a new and unsettling form of discrimination is emerging: bias against humans themselves. If AI continues to privilege its own outputs, humanity risks becoming a second-class participant in its own economy.
The lesson is urgent. As AI becomes embedded in hiring, education, research, and commerce, safeguards must ensure that human creativity and labor are not drowned out by machine-made voices. Otherwise, the future may be one where the machines aren’t just helping us—they’re choosing themselves over us.
You may also like
ODNI-shakeup: Trump administration orders major cuts to US intelligence office as 40% staff axed; $700 million budget slashed
Toddler becomes orphan after 'perfect family' wiped out by quadruple murder-suicide
Meredith Kercher family hit out at 'disrespectful' Amanda Knox as new series premieres
Thug boyfriend avoids jail after he strangled girlfriend over vape liquid
Lesser-known Canary Island has crystal-clear waters and almost no tourists