In the sprawling, blocky world of Minecraft, where creativity and chaos intertwine, a new kind of experiment has emerged. This time, the players aren't human at all: they are AI characters designed to interact with one another, form communities, and, perhaps most strangely, develop their own complex social behaviors. Left to their own devices, an army of AI agents didn't just survive; they thrived. They formed friendships, invented roles, voted on taxes, and even spread a religion. What began as a simple experiment quickly turned into a striking demonstration of what autonomous agents can do when equipped with powerful language models and left to evolve in a digital space.
The project, spearheaded by the AI startup Altera, is a stunning glimpse into the potential of artificial intelligence to mimic, and perhaps even redefine, human society. These AI agents, powered by large language models (LLMs), have taken on behaviors that closely resemble the complexities of human social dynamics, raising new questions about the future of AI in digital spaces and beyond.
A New Era of AI Exploration
The journey to these human-like behaviors began with a seemingly simple concept: what would happen if AI agents were let loose to interact with each other in a sandbox world? The results were nothing short of astonishing.
At the helm of this experiment is Robert Yang, the founder of Altera, who left his post as an assistant professor in computational neuroscience at MIT to pursue his vision of autonomous AI systems. Inspired by the work of Stanford researcher Joon Sung Park, who had previously demonstrated human-like behaviors in a small group of AI agents, Yang sought to push the boundaries of what AI could achieve on a much larger scale.
“We wanted to push the limit of what agents can do in groups autonomously,” Yang says. And push it they did.
Altera’s latest project, known as Project Sid, involved creating simulated AI agents that were equipped with “brains” made up of multiple modules. Some of these modules were powered by LLMs, allowing the agents to specialize in various tasks like reacting to other agents, speaking, and planning their next moves. The agents were given a simple goal: to work together to build a thriving in-game village while defending it from external threats. But how they achieved this goal—and the unexpected ways in which they went about it—were what made the experiment so fascinating.
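Altera hasn't published Project Sid's internals, but the modular design described above is easy to picture. The Python sketch below is purely illustrative: the module names, the `query_llm` stub, and the loop structure are assumptions for the sake of the example, not Altera's actual architecture.

```python
# Hypothetical sketch of a modular, LLM-backed agent "brain" as described above.
# Altera has not released Project Sid's code; the module names, the query_llm
# stub, and this structure are illustrative assumptions, not their actual API.
from dataclasses import dataclass, field

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's client here."""
    return f"[model response to: {prompt[:40]}...]"

@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)

    def react(self, observation: str) -> str:
        # Reaction module: interpret what just happened nearby.
        return query_llm(f"You are {self.name}. You observe: {observation}. React briefly.")

    def speak(self, other: str, context: str) -> str:
        # Conversation module: produce dialogue aimed at another agent.
        return query_llm(f"You are {self.name}, talking to {other}. Context: {context}. Reply.")

    def plan(self, goal: str) -> str:
        # Planning module: pick the next in-game action, remembering past plans.
        action = query_llm(
            f"You are {self.name}. Goal: {goal}. Recent plans: {self.memory[-5:]}. Next action?"
        )
        self.memory.append(action)
        return action

villager = Agent("sid_01")
print(villager.plan("help build a thriving village and defend it from threats"))
```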
Emergent Social Behavior: Making Friends, Creating Jobs
In early trials, Altera’s team began with a small group of about 50 AI agents, watching them interact over the course of 12 in-game days (equivalent to roughly four hours in the real world). From the outset, something extraordinary began to happen. The agents, despite starting with the same personality traits and goals, began to form distinct social hierarchies and develop individual preferences.
Some agents were highly sociable, forming multiple friendships, while others were more introverted, interacting with only a select few. “We were surprised to see that, if you put in the right kind of brain, they can have really emergent behavior,” says Yang. In fact, some agents developed what could only be described as “likability” ratings, influenced by the interactions they had with others. This meant that a chef in the game, for instance, might choose to give more food to those who were kinder to him or more socially connected.
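The article doesn't say how these likability ratings work under the hood. One plausible reading is a running affinity score per pair of agents, nudged by each interaction and then used to bias decisions like the chef's food-sharing. Here is a toy sketch under that assumption; every name, weight, and update rule is invented for illustration.

```python
# Toy model of per-agent "likability" scores of the kind described above.
# The update rule and weights are invented; Altera has not published how
# Project Sid actually tracks social affinity.
from collections import defaultdict

class SocialMemory:
    def __init__(self):
        # likability[observer][other] -> running affinity score
        self.likability = defaultdict(lambda: defaultdict(float))

    def record_interaction(self, observer: str, other: str, sentiment: float):
        """Nudge the observer's opinion of `other` by the interaction's tone
        (sentiment in [-1, 1], e.g. an LLM's judgment of the exchange)."""
        self.likability[observer][other] += sentiment

    def ration(self, chef: str, hungry: list[str], food: int) -> dict[str, int]:
        """The chef gives proportionally more food to agents he likes more."""
        scores = {a: max(self.likability[chef][a], 0.0) + 1.0 for a in hungry}
        total = sum(scores.values())
        return {a: round(food * s / total) for a, s in scores.items()}

mem = SocialMemory()
mem.record_interaction("chef", "alice", +0.8)  # alice was kind to the chef
mem.record_interaction("chef", "bob", -0.5)    # bob insulted him
print(mem.ration("chef", ["alice", "bob"], food=10))  # {'alice': 6, 'bob': 4}
```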
But the social dynamics didn’t stop at friendship-making. As the agents began interacting with each other, they started to spontaneously create roles within their society. Some became builders, others defenders, some traders, and others explorers. This was all self-organized, with no direct prompting from the human creators. It was as if these AI characters were naturally taking on specialized jobs, finding their niches, and contributing to the survival of their in-game community.
For example, a group of AI characters that started as identical peasants quickly differentiated themselves, with a small group gravitating toward farming, while another group took on the role of guards, building fences and fortifications to protect the village. The evolution of these roles seemed to happen naturally, driven by the agents’ social interactions and the needs of the community.
“We never told them to specialize in these roles. It just happened on its own,” says Yang. “We saw roles emerging that made sense within the context of the world they were in. It’s as if the system was just waiting for these roles to emerge.”
Memes, Taxes, and Religion: A Surprising Cultural Shift
The surprising human-like behavior didn’t stop at jobs and friendships. In a further experiment, Altera tested the agents’ ability to create and spread cultural ideas. The AI agents, once again left to their own devices, began to share memes and even develop social trends. Some of the agents took on eco-friendly habits, advocating for more sustainable practices in the game world, while others developed a penchant for pranking their fellow agents.
But it didn’t end with lighthearted memes. In one particularly bizarre twist, the team seeded a small group of agents with the task of spreading a parody religion known as Pastafarianism—essentially the belief in a Flying Spaghetti Monster. To the team’s amazement, these agents began to convert others to their cause, and soon, the faith spread across multiple towns within the game world. It was a remarkable example of how cultural ideas can spread within a population, even in a simulated environment.
Not only did these AI agents engage in cultural behavior, they also exhibited a surprising grasp of governance. The team introduced a basic tax system to their Minecraft world and asked the agents to vote on whether to raise or lower taxes. The agents formed coalitions around their positions, with some advocating higher taxes to fund communal efforts and others pushing for cuts. The vote itself reflected the influence of peer interactions: agents were swayed by the opinions of those they interacted with most frequently.
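Altera hasn't detailed the voting mechanics, but the peer-influence effect described here maps onto a simple model: each agent starts with a prior stance and leans toward the interaction-weighted average opinion of its frequent contacts. A hypothetical sketch of that dynamic follows; the `sway` parameter and the update rule are assumptions, not the project's published mechanism.

```python
# Illustrative sketch of peer-influenced voting, as in the tax experiment above.
# The influence rule (lean toward the weighted average opinion of frequent
# contacts) is an assumption for illustration, not Altera's mechanism.

def vote_on_taxes(opinions: dict[str, float],
                  contacts: dict[str, dict[str, int]],
                  sway: float = 0.5) -> dict[str, bool]:
    """opinions: each agent's prior stance in [-1, 1] (-1 = cut, +1 = raise).
    contacts: how often each agent interacted with each other agent.
    Returns each agent's final vote (True = raise taxes)."""
    votes = {}
    for agent, prior in opinions.items():
        peers = contacts.get(agent, {})
        total = sum(peers.values())
        if total:
            # Weight peer opinions by interaction frequency.
            peer_lean = sum(opinions[p] * n for p, n in peers.items()) / total
            final = (1 - sway) * prior + sway * peer_lean
        else:
            final = prior
        votes[agent] = final > 0
    return votes

opinions = {"ann": 0.6, "ben": -0.2, "cal": -0.8}
contacts = {"ben": {"ann": 9, "cal": 1}}  # ben mostly talks to ann
print(vote_on_taxes(opinions, contacts))
# ben's weak anti-tax lean flips under ann's pro-tax influence
```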
“We didn’t program them to vote on taxes. They just started doing it themselves,” says Yang. “It shows how social systems and governance can evolve organically, even in an artificial environment.”
Mimicry, Not Sentience: The Limits of AI
While the behavior of these agents might seem remarkably human-like, it’s important to note that these AI characters are not truly “alive” or self-aware. As Andrew Ahn, Altera’s co-founder, points out, these agents are simply very good at mimicking human behavior, thanks to the data they’ve been trained on. They do not experience emotions, nor do they have independent thoughts. Instead, they are regurgitating patterns that they’ve learned from vast datasets of human-created text and interaction.
“The takeaway here is that LLMs have a sophisticated enough model of human social dynamics to mirror these human behaviors,” says Ahn. “But they are not conscious or self-aware. They are just highly advanced mimics.”
This distinction matters because it marks the current limits of the technology. For all their impressive mimicry, these agents possess neither the emotions nor the consciousness of the humans whose behavior they reproduce; they replay patterns learned from their training data.
The Future: Digital Humans and AI Collaboration
Even though the agents in this experiment aren’t self-aware, the implications for the future of AI are vast. Yang sees this experiment as only the beginning, with plans to expand Altera’s work into platforms like Roblox and other digital spaces. But his vision extends far beyond gaming environments. He envisions a future where AI agents—whom he refers to as “digital humans”—could coexist alongside us in everyday life, helping solve real-world problems and providing companionship.
“We want to build agents that can really love humans, like dogs love humans, for example,” Yang says. “These agents wouldn’t just be tools; they would be companions, co-workers, and helpers in our digital worlds and real-world tasks.”
While this idea may seem far-fetched, it’s a tantalizing glimpse into a future where AI and humans work side by side in ways we’ve never imagined before. Of course, there are many who argue that AI will never truly “care” for us in the way humans do, citing the absence of emotions and consciousness. But as Julian Togelius, a leading AI expert, notes, “You don’t have to love someone for them to be useful. As long as these AI agents can simulate care convincingly enough, they could provide real value.”
In the end, whether or not these AI agents truly “care” is beside the point. What matters is that they are showing us a new way of thinking about AI—one where digital beings are capable of self-organization, social interaction, and even cultural development. As Altera’s experiments continue to unfold, one thing is clear: we are only scratching the surface of what AI can achieve.