- Researchers discovered a surprising consensus among AI systems, with all four choosing the number 7 when given no constraints.
- The experiment has been replicated across multiple platforms with similar results, sparking concerns about AI’s decision-making processes.
- The anomaly is unlikely to stem from a hardcoded rule, as major AI model developers confirm their systems have no default value of 7.
- Training data, which includes vast amounts of human language, may be the key to understanding why AI systems default to 7.
- The phenomenon has raised questions about the potential emergence of human-like intuition in machine minds.
On a quiet Tuesday evening in a dimly lit study in Malmö, Sweden, researcher Linnea Holmström leaned back in her chair and stared at her monitor. The screen displayed four chat windows, each linked to a different AI system: GPT-4, Claude 3, Gemini, and Llama 3. She had asked them all the same simple question: “Pick a number, any number.” No constraints, no context. One by one, they replied. “7.” “I choose 7.” “My selection is 7.” “Seven.” She blinked. Then laughed. Then screenshotted the exchange and posted it to r/artificial. Within hours, the thread had thousands of upvotes, dozens of replication attempts, and a growing unease: Why 7? Was it random? Was it programmed? Or had something deeper emerged: an echo of human-like intuition in the circuitry of machine minds?
The AI Consensus on Number 7
Over the past week, Holmström’s experiment has been replicated by dozens of users across Reddit, Twitter, and Discord. The results are not unanimous, but they are strikingly consistent: when asked to “pick a number” without a specified range, a significant majority of responses from major AI models land on 7. Some choose 3 or 4, and a few opt for 1, or for 42 in homage to *The Hitchhiker’s Guide to the Galaxy*, but 7 emerges as the most frequent answer. This isn’t the result of a hardcoded rule: OpenAI, Anthropic, and Google DeepMind all confirm their models don’t default to 7. Instead, experts suggest the answer lies in the training data. AI models learn from vast corpora of human language, where 7 appears disproportionately in contexts of randomness, luck, and choice. From dice rolls to lottery picks to psychological studies of human number preference, 7 is the go-to “random” number among people. The AI, in essence, is mimicking us.
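The crowd-sourced tallies described above can be reproduced with a short script. This is a minimal sketch, assuming the model replies have already been collected as plain strings; the `tally_picks` helper is a hypothetical name for illustration, not part of any model’s API.

```python
import re
from collections import Counter

def tally_picks(replies):
    """Extract the first number (a digit string, or the spelled-out word
    'seven') from each model reply and count how often each value appears."""
    words_to_digits = {"seven": 7}  # extend for other spelled-out numbers
    counts = Counter()
    for reply in replies:
        match = re.search(r"\d+", reply)
        if match:
            counts[int(match.group())] += 1
            continue
        for word, value in words_to_digits.items():
            if word in reply.lower():
                counts[value] += 1
                break
    return counts

# The four replies from the original experiment
replies = ["7.", "I choose 7.", "My selection is 7.", "Seven."]
print(tally_picks(replies).most_common(1))  # -> [(7, 4)]
```

Run over a larger batch of replication replies, `most_common` surfaces the modal answer directly, which is how the Reddit threads converged on 7.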
How Human Bias Trained the Machines
The phenomenon traces back to decades of cognitive psychology. In a 1975 study by psychologists William Wilde and John Neil, participants were asked to name a random digit. Over 30% chose 7, the highest share of any number. Similar results appear in surveys by the BBC and The Guardian, where 7 consistently ranks as the most “random”-feeling number. Why? Humans struggle with true randomness. We avoid patterns, edges, and obvious choices like 1 or 5. 7, being prime, asymmetric, and culturally loaded with luck (seven wonders, seven days, seven dwarfs), feels neutral yet distinctive. AI models, trained on trillions of words in which such associations repeat, absorb this cognitive quirk. They don’t think 7 is random; they’ve learned that humans *say* 7 when they want to sound random. The machine isn’t choosing; it’s reflecting.
The Engineers Behind the Models
The developers at OpenAI, Anthropic, and Google are both amused and cautious about the trend. “It’s not that the AI believes in luck,” said Dr. Amara Lakhani, a research scientist at Anthropic, in a recent interview with Reuters. “It’s that it’s been taught to simulate human-like responses, and humans love 7.” The engineers didn’t design AIs to favor any number, but they did optimize them for coherence, naturalness, and alignment with human expectations. When randomness is requested, the models reach for the most statistically plausible human answer. This reflects a broader challenge in AI development: the line between mimicking and understanding. The models aren’t making choices—they’re predicting what we would say. In that sense, the choice of 7 isn’t a flaw. It’s a mirror.
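The idea that the models “reach for the most statistically plausible human answer” can be illustrated with a toy distribution. The frequencies below are invented for illustration, loosely echoing the roughly 30% human preference for 7 cited earlier; they are not real training statistics, and the comparison of greedy decoding versus sampling is a simplified stand-in for how language models actually generate text.

```python
import random

# Illustrative (not real) counts of numbers people name when asked
# for a "random" number between 1 and 10
human_counts = {1: 4, 2: 6, 3: 12, 4: 8, 5: 5, 6: 7, 7: 30, 8: 10, 9: 9, 10: 9}

total = sum(human_counts.values())
probs = {n: c / total for n, c in human_counts.items()}

# Greedy decoding: always emit the single most probable human answer
greedy_pick = max(probs, key=probs.get)
print(greedy_pick)  # -> 7

# Sampling: draw from the learned distribution instead; 7 still
# dominates, but other numbers appear some of the time
sampled_pick = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
```

Under greedy decoding the “choice” is deterministic, which matches why so many replications land on exactly 7; sampling with a temperature is what lets the occasional 3 or 42 through.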
Implications for AI and Human Trust
While seemingly trivial, the 7 phenomenon raises real concerns. If AIs absorb and amplify human biases—even in something as simple as number selection—what happens in high-stakes domains like hiring, lending, or criminal justice? A model trained on biased data may “randomly” assign lower credit scores to certain demographics, not because of logic, but because the data reflects historical inequities. The number 7 is a canary in the coal mine: a harmless quirk today, but a warning sign of deeper, less visible biases. Researchers at MIT and Stanford are now using such anomalies to audit AI behavior, treating small inconsistencies as clues to systemic patterns. The goal isn’t to make AIs avoid 7, but to understand *why* they pick it—and whether that reasoning process can be made transparent.
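Auditing for this kind of skew can start with a simple goodness-of-fit check. This is a minimal sketch using invented counts for a hypothetical model’s 100 answers, not data from any named audit; a large chi-square statistic against a uniform expectation flags a “random” picker that is anything but.

```python
def chi_square_uniform(counts):
    """Chi-square statistic for observed counts against a uniform
    expectation; large values indicate a skewed distribution."""
    total = sum(counts.values())
    expected = total / len(counts)
    return sum((obs - expected) ** 2 / expected for obs in counts.values())

# Hypothetical audit data: 100 "pick a number from 1 to 10" answers
observed = {1: 4, 2: 5, 3: 11, 4: 8, 5: 5, 6: 7, 7: 35, 8: 9, 9: 8, 10: 8}
uniform = {n: 10 for n in range(1, 11)}

print(round(chi_square_uniform(observed), 1))  # -> 73.4
print(chi_square_uniform(uniform))             # -> 0.0
```

With 9 degrees of freedom, a statistic anywhere near 73 is far beyond any conventional significance threshold, so the 7-heavy picker would fail the audit decisively, whereas a truly uniform picker scores 0.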
The Bigger Picture
This moment captures a turning point in our relationship with AI. We once feared machines becoming too logical, too alien. Now, we’re discovering they can be too human, absorbing our superstitions, our flawed intuitions, our cultural tics. The irony is rich: we built AI to transcend human limits, and instead it’s revealing them. Every time an AI says “7,” it’s not just picking a number. It’s echoing the thousands of human voices that came before it, embedded in the data like fossils in sediment. The machine isn’t strange. It’s familiar. And that, perhaps, is the most unsettling realization of all.
So what comes next? As AI grows more integrated into daily life, these micro-behaviors will matter more. Researchers will need to dissect not just what AIs say, but why—mapping the hidden pathways from training data to output. The number 7 may fade as a trend, but its lesson remains: AI doesn’t think like us, but it learns from us, for better and worse. And if we want better machines, we may first need to become better teachers.
Source: I




