My favorite conversations are a collision of ideas, held together by far-ranging metaphors, and full of moments that I find myself mulling over for weeks after. For this reason, I've spent the last three weeks with the following thought in mind:
The more altruistic the system, the greater the benefit to cheaters.
This is something of a truism in biology, and I said it casually, without much thought. What started as a contemplation of the relationship between cooperation and opportunity developed into a framework for understanding why some innovations spread like wildfire while others struggle for adoption.
The Hidden Cost of Cooperation
For a pure altruistic system to work, each individual must assume that every other individual is acting for the good of the whole. There is an implicit trust that all are doing their part; that each will give their contributions and receive from others whatever else they need. This fundamental vulnerability, the reliance on universal cooperation, is precisely what creates opportunity for those willing to exploit the system's generosity.
The classic example of this is bees. Within a beehive, there is one queen bee, a large number of worker bees (all of which are females and generally non-reproductive), and drones (males which exist solely for the purpose of mating with the queen).
The queen’s job is to make more bees. Nearly all of her offspring are worker bees, though at some point some larvae will be fed royal jelly so that they develop into queens, which either go start their own hive or replace the queen in the current hive.
The drones exist to mate with the queen, but it's important to note that they actually can't do anything else. So extreme is their specialization that they require the help of worker bees to eat.
And the worker bees. The proverbial busy bees. They produce beeswax to build and maintain the honeycomb, tend to the queen, tend to the brood to ensure the young grow healthy, go out into the world to forage for nectar and pollen, and, of course, defend the hive from attack.
This system is premised on the trust that each bee is doing its job. In theory, you could have a lazy bee in this system that just walks around all day pretending to work and eating, and nothing would stop it. All benefits, no work. A cheater's paradise.
What makes this system so vulnerable is the lack of monitoring mechanisms at the individual level. The hive operates on the assumption that genetic programming and shared evolutionary interests align individual behavior with collective needs. There's no bee supervisor checking whether each worker is meeting quotas or shirking duties. The system's efficiency depends entirely on this implicit trust in universal cooperation.
In fact, I would be remiss not to point out how humans reap a cheater's benefit from bees. Honey, delicious as it is, has a commercial market value that exceeds nine billion dollars a year.
The Protective Price of Solitude
It's harder to be a cheater in lower-trust systems. There's simply more scrutiny, more suspicion, less leeway given. A cheater may find themselves gaining nothing, depending on how early they are caught. It comes down to trust: how much willingness is there to assume that actions were made with good intent?
Let's consider bears. Grizzly bears, to be specific. Not for any biological reason - simply because a grizzly bear once gave me 14 hours of terror in the Colorado wilderness.
Aside from when mother bears are raising their young, during which time they are famously fiercely protective, each bear has to look out for itself. Shelter, food, water, defense, and mating are all handled individually. It's a low-trust system, and one that's very difficult to cheat within. In the bear's world, there is no surplus to exploit. Every calorie counts, every territory must be defended, and every interaction carries potential risk. The harsh economics of individual survival leave no room for the luxury of trust that could be exploited.
And humans as the cheater? The opposite of the situation with honeybees. There are countless stories of humans accidentally seeking shelter in a space already claimed by a bear. In these stories, the humans leave. There's no cheater's benefit to be gained by sharing.
The contrast between these two systems, bees and bears, illustrates a fundamental trade-off in biological and social organization. High-trust, cooperative systems like beehives achieve remarkable efficiency and resilience through specialization and coordination, but they create vulnerabilities that can be exploited. Low-trust, individualistic systems like solitary bears minimize exploitation risk but sacrifice the benefits of collaboration. This same tension plays out in human societies, including in how we approach technological innovation.
Trust Shapes the Experience of Novelty
When we talk about innovation, we often talk about the tolerance for risk and failure required to create an innovation. What we don't talk about enough is the trust required to adopt innovations.
If we go back to the bees and bears, we can see radically different approaches to new things. When bees encounter something new, a few worker bees will go check it out, and depending on how it goes, either more bees will come or they will avoid it. High trust, high curiosity, high rates of investigation. This collective approach to novelty means that individual bees can take risks because the cost of failure is distributed across the hive, while all share the benefits of discovery.
A bear will instead be wary. It will watch from afar, sniff, and see how much information it can learn by observation. Maybe it warrants closer inspection, maybe it doesn't. Low trust, high suspicion, lower rates of investigation. For a solitary bear, the consequences of a poor decision about something new could be fatal, and there's no community to share either the risk or the potential reward.
These different approaches to novelty reflect deeper truths about how trust levels shape innovation adoption. In high-trust systems, early adopters serve as scouts for the larger community; their willingness to take risks benefits everyone through shared learning. In low-trust systems, each individual must personally validate every new element they encounter, dramatically slowing the adoption process.
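To make that contrast concrete, here is a minimal simulation sketch in Python with entirely invented numbers (a population of 1,000, ten scouts, a novelty that turns out to be worthwhile 80% of the time). It is not a model of real bees or bears, just an illustration of how sharing the cost of evaluation changes the economics of trying something new.

```python
import random

def collective_trial(population=1000, scouts=10, p_good=0.8, seed=1):
    """Bee-like system: a handful of scouts evaluate the novelty and
    share the verdict, so the cost of evaluation is paid once for everyone."""
    rng = random.Random(seed)
    good_reports = sum(rng.random() < p_good for _ in range(scouts))
    adopt = good_reports > scouts / 2        # aggregated, shared decision
    adopters = population if adopt else 0
    return adopters, scouts                  # (who adopts, evaluations paid)

def individual_trial(population=1000, p_good=0.8, seed=1):
    """Bear-like system: no shared learning, so every individual must
    personally evaluate the novelty before adopting it."""
    rng = random.Random(seed)
    adopters = sum(rng.random() < p_good for _ in range(population))
    return adopters, population              # every adopter paid an evaluation

if __name__ == "__main__":
    for name, (adopters, evaluations) in [("collective", collective_trial()),
                                          ("individual", individual_trial())]:
        print(f"{name}: {adopters} adopters after {evaluations} evaluations")
```

The toy numbers only matter for the asymmetry they expose: in the high-trust version, ten evaluations can move a thousand individuals; in the low-trust version, each of those thousand individuals pays the evaluation cost alone.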
So, how does all this apply to humans and the adoption of technology? As I began to pull this thread, I discovered that a lot of work has explored the role of trust in the adoption of innovation. It all comes to the same conclusion: trust is a critical part of adopting technology. But I also found the data to have some interesting contradictions.
The Role of Trust in Adopting Innovations
There are a number of surveys that report on trust. In particular, I spent a lot of time with the Edelman Trust Barometer1, an annual global survey performed for the last 25 years, and the Pew Research Center report “Americans’ Trust in One Another”2.
Unsurprisingly, there is a strong correlation between people's overall trust levels and their willingness to trust new business innovations.
In recent years, businesses have been the most trusted type of institution, scoring higher than government, NGOs, and media. The other categories generally rank as neutral or distrusted, while businesses on average enjoy trusted status.
The exact level of trust in businesses varies across industries. For example, the technology industry is more trusted than the financial services industry.
Not only does trust change across industries, but the level of trust in an industry does not necessarily match trust in its innovations. While the financial services industry has only 54% trust, 62% of people in the same survey trusted electronic payments. And in the opposite direction, while 78% of people trusted the technology industry, just 61% trusted cloud computing. There’s a disconnect between the level of institutional trust and trust in a specific innovation.
This disconnect suggests that something crucial is missing from how we understand the relationship between trust and innovation adoption. The surveys capture whether people trust institutions to be competent and honest, but they miss a deeper question: do people trust that adopting an innovation will benefit them personally rather than expose them to exploitation?
The Missing Piece: Umwelt
My first inclination was that social trust plays a role. Where institutional trust asks whether you trust an institution, social trust asks whether you trust other people. It varies drastically from country to country, even from community to community3.
While data on social trust is widely available, there are some methodological constraints to how the adoption of innovation is assessed. Specifically, methodologies tend to conflate governmental policy with individual adoption in their reporting. China, for example, has very high social trust, yet government policies that limit the import of technologies result in low scores for willingness to adopt innovation.
It’s a fundamental challenge: how do you compare things with radically different contexts? You can normalize quantitative data, but only if you understand those contexts well enough to control for them.
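As a toy illustration (Python, with invented scores and two hypothetical countries), normalizing within each context rather than comparing raw numbers is one way to keep a policy ceiling from being misread as a difference in individual willingness. Whether that is legitimate still depends on actually understanding what each context is doing to the numbers.

```python
from statistics import mean, stdev

# Hypothetical adoption scores (0-100). In country_B, import restrictions
# cap what any individual can score, regardless of their willingness.
scores = {
    "country_A": [62, 70, 58, 66],
    "country_B": [31, 35, 28, 33],
}

def normalize_within_context(groups):
    """Z-score each observation against its own context's mean and spread,
    so comparisons reflect relative standing rather than the policy ceiling."""
    normalized = {}
    for name, values in groups.items():
        mu, sigma = mean(values), stdev(values)
        normalized[name] = [round((v - mu) / sigma, 2) for v in values]
    return normalized

print(normalize_within_context(scores))
```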
Biology has a great tool for bridging the chasms caused by different contexts: umwelt.
As I've talked about before, umwelt is the idea that, for a given species or even a specific individual, their worldview is shaped by their sensory systems and the perceptions that follow. This includes how sensations are processed, what patterns are recognized as significant, and what behavioral responses are triggered. When we think about trust and innovation adoption, we're really talking about how different umwelten process the same technological stimulus.
Let's take the following case as an example. I see a ticking object with a flashing LED displaying decreasing numbers. At the level of sensing, I have used my visual and auditory senses to observe the object. At the level of perceiving, I suspect that it's a bomb. The result of my perception is distrust, and I want to be far away from the object.
If my dog is with me, he will observe similarly, possibly with the addition of smelling explosives. But lacking contextual knowledge about the significance of those observations, he may want to get closer to investigate further. The result of his perception is curiosity, and he wants to learn more. The fundamental difference is whether each of us believes we are safe or in imminent danger.
This example illustrates why institutional trust surveys miss the mark when predicting innovation adoption. The surveys ask whether we trust the institution that created the technology, but our actual adoption decision depends on whether our personal umwelt interprets the technology as beneficial or threatening. The same innovation can appear as an opportunity to one person and a risk to another, regardless of their institutional trust levels.
Distributing Risk Reduces Caution
One of the particularities of bees is the extent to which they prioritize the hive over the individual. The hive itself has an umwelt distinct from that of any individual bee. While workers are busy tending to their tasks, collecting information, and sharing it within the hive, the hive itself optimizes for pattern recognition, communication, and collaborative risk assessment. A single hive can have 20,000 to 80,000 bees; just as most of us have singed a few hairs by getting too close to a flame, the risk of losing a few bees for the benefit of informing the hive is acceptable.
This collective umwelt allows for a kind of risk tolerance through distributed sensing and decision-making that no individual bee could achieve alone. The hive can simultaneously explore multiple options, rapidly share information about discoveries, and make decisions based on aggregated data. Individual bees don't need to fully evaluate every risk because they're part of a larger unified network.
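A minimal sketch of that intuition, with made-up numbers: if each scout's read of a new site is noisy, pooling many reads gives the hive a far steadier estimate than any single animal could manage alone, which is part of why individual bees can afford to be less cautious.

```python
import random

def one_read(true_quality, noise=0.3, rng=random):
    """A single individual's imperfect assessment of a new site."""
    return true_quality + rng.gauss(0, noise)

def hive_estimate(true_quality, n_scouts=50, noise=0.3, seed=7):
    """Pool many noisy assessments; the error shrinks roughly as 1/sqrt(n)."""
    rng = random.Random(seed)
    reads = [one_read(true_quality, noise, rng) for _ in range(n_scouts)]
    return sum(reads) / len(reads)

true_quality = 0.6
solo = one_read(true_quality, rng=random.Random(7))
pooled = hive_estimate(true_quality)
print(f"single read:       {solo:+.2f}")
print(f"pooled (50 reads): {pooled:+.2f}   (true value {true_quality})")
```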
Not so for a bear. They prioritize the individual, and their umwelt includes the devastating consequences that even a seemingly minor injury could have. Every sight, sound, and smell observed passes through a perceptual lens that asks the question, "could this hurt or kill me?"
The bear's umwelt is calibrated for individual survival in an environment where mistakes can be fatal and help is unavailable. This creates a fundamentally different relationship with novelty; every new stimulus must be thoroughly evaluated through the lens of personal risk before any exploratory behavior is considered.
Not Bees, Not Bears: The Human Umwelt of Technology
Humans occupy a middle ground between the collective risk-sharing of bees and the radical individualism of bears. We have communities that can provide support and shared learning, but we ultimately experience the consequences of our choices as individuals. This creates a unique umwelt where we must balance collective information (what others say about a technology) with individual risk assessment (what could go wrong for me).
Our human umwelt processes technology adoption through multiple layers of trust assessment that institutional trust surveys don’t capture. Beyond trusting that a technology works as advertised, we must also trust that other users aren't exploiting it to harm us. Trust that our personal data won't be breached or misused. Trust that adopting it won't make us vulnerable to social or economic changes we can't predict. In a sense, we’re asking: "If I join this technological ecosystem, am I entering a high-trust bee-like system where cooperation benefits everyone, or am I making myself vulnerable to cheaters?"
Early technology adopters are a bit like worker bees. They take the risk of being the early explorers and then report their experience back to the collective. This provides valuable information in the form of social trust.
The most powerful social trust is the trust we place in the people we know. The doctor your friend recommends. The photo-sharing app your work friend tells you about. The secure messaging platform that your brother insists on as his exclusive means of communication. We adopt these largely without question because of the weight of social trust.
There’s an intermediate tier too - the people we don’t know but regard more highly than strangers. The people you follow on social media, whom you will probably never meet but who are oh-so relatable. When they recommend a product, you check it out because you are certain they share your values. This world of influencer marketing is worth nearly 30 billion dollars a year. It’s for good reason that some people worry about the impact of all that money on the sincerity of recommendations.
Declining Trust Will Change the Interface of Innovation
This brings me full circle to altruistic systems and cheaters. We are currently in a time when trust in human institutions and social systems is in decline. On the spectrum from bees to bears, we’re moving to a decidedly more bearish position. Like bears, our willingness to adopt innovation will increasingly require a sense of being in control because, in a low-trust environment, every new technology looks like a potential vector for exploitation.
It’s not a coincidence that chat-based interfaces are driving the adoption of LLMs. This familiar format combines easily with our tendency to anthropomorphize, letting us feel like we’re in control of the outcomes. Because the interface feels human, we expect the risks to follow human patterns too. As that proves untrue, we may see trust erode further.
The cheater's advantage in altruistic systems becomes, paradoxically, the innovation barrier in declining-trust societies.
Interestingly, although trust varies from one community to the next, there does not seem to be a pattern distinguishing urban from rural settings.