The Operators · Governance & Community
Agent Governance Is a Community-Management Problem
Policy frameworks borrowed from infosec miss what social scientists already know about mixed populations. An argument for treating your agent fleet as a community, not a system.
Most published agent-governance frameworks read as if they were written by security engineers, which they were. They describe agents the way you would describe any piece of software: assets, permissions, attack surfaces, blast radius. The language is precise. It is also, I want to argue, importing the wrong prior.
Once you have more than a handful of agents operating inside your organization — especially ones that interact with each other, with human colleagues, and with external parties — you are not running a system. You are running a community. And the empirical literature on how to govern mixed communities of heterogeneous actors is vastly richer than the literature on how to harden a piece of software, because it has a century of head start.
Three findings from the community-management literature that transfer directly
1. Governance legitimacy comes from participation, not from perfect rules
Ostrom’s work on common-pool resources (1990, and the decades of replication since) is unambiguous: communities that self-govern successfully do not have better rules than communities that fail. They have rules that participants helped shape. The same pattern shows up in online community research from the 2000s and in the hybrid-team literature from the 2010s remote-work wave.
For agent governance this is concrete. If your agent policy is written in a closed room by compliance and handed down as a fait accompli, you will spend your life policing it. If your agent policy is co-developed with the teams that actually deploy agents — and, increasingly, surfaced to the agents themselves as a standing reference — you will spend your life refining it. The maintenance cost of the first approach is linear in fleet size. The maintenance cost of the second is sub-linear. This is not opinion. It is sixty years of governance research.
2. Norms are enforced by social fabric, not by the rulebook
In every study of communities at scale, the rules on paper explain surprisingly little of the behavior you observe. What explains the behavior is the dense web of day-to-day interactions that signal what is and is not acceptable. Policies work when they are reinforced by the fabric; they fail when they cut against it.
Translate this: a written policy that agents must escalate before taking irreversible actions is not self-enforcing. It is enforced by humans noticing and saying something when an agent behaves outside the norm, and by agents being instrumented to notice and say something when they drift. The rulebook is a starting point. The social fabric — including the instrumentation, the review rituals, and the human-agent coordination routines — is what actually governs.
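To make the instrumentation half of that fabric concrete, here is a minimal sketch of an escalation guard that routes irreversible actions through a human channel and records drift when the norm is tested. All names here (the action list, the `EscalationGuard` class, its fields) are hypothetical illustrations, not any framework's API:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical set of actions the policy treats as irreversible.
IRREVERSIBLE = {"delete_records", "send_external_email", "transfer_funds"}

@dataclass
class EscalationGuard:
    """Wraps an agent's action executor so irreversible actions need human sign-off."""
    escalate: Callable[[str], bool]           # asks a human; True means approved
    drift_log: list = field(default_factory=list)

    def execute(self, action: str, run: Callable[[], str]) -> str:
        if action in IRREVERSIBLE:
            if not self.escalate(action):
                # Record the attempt: this log, not the policy text, is what
                # the weekly review ritual actually reads.
                self.drift_log.append(action)
                return f"blocked: {action} requires human approval"
        return run()

guard = EscalationGuard(escalate=lambda a: False)  # stand-in human who denies everything
print(guard.execute("delete_records", lambda: "deleted"))  # blocked: delete_records requires human approval
print(guard.drift_log)                                     # ['delete_records']
```

The point of the sketch is the `drift_log`: the guard enforces nothing the rulebook didn't already say, but it turns each deviation into a visible signal that humans and review rituals can act on.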
3. Mixed populations require designed interfaces between groups
Work on human-agent teams (most of it post-2022, with the strongest empirical base from the Stanford and CMU groups) shows a pattern that community researchers would find unsurprising: the quality of mixed-team output is determined less by the capability of either group and more by the quality of the interface between them. Groups of humans collaborating with well-interfaced agents outperform groups of humans collaborating with more capable but badly-interfaced agents. The interface is the governance surface.
For practitioners this means the most leveraged governance investment is rarely in the policy document. It is in the ritual: the weekly review, the incident debrief, the onboarding script that teaches humans how to work with agents and agents how to escalate to humans. Treating those as cultural infrastructure, not as process overhead, changes what you ship.
What this implies for your governance stack
I am not arguing against compliance frameworks. SOC 2 matters. The EU AI Act matters. Threat modeling matters. But if your governance investment stops at the compliance layer, you are governing the shell and neglecting the substance. A practical augmentation, in priority order:
- Participation. Who helped write the policy? If the answer is “legal and security,” your policy is a rulebook, not a governance instrument. Add deploying teams.
- Rituals. What weekly or monthly practice surfaces agent behavior to humans? If the answer is “only when something breaks,” your fabric is thin. Add a review cadence before you need one.
- Interfaces. What is your onboarding for a new human teammate who will work alongside agents? If the answer is “nothing specific,” you are leaving the most leveraged governance investment on the table. Add it.
- Feedback loops. What mechanism do agents have to surface “this policy made it harder to do my job correctly”? Every community-management practitioner knows that unvoiceable friction becomes noncompliance over time. Agents are no exception.
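The four checks above can travel into the review ritual as a literal checklist. A minimal sketch, with structure and field names of my own invention rather than from any published framework; the "thin" answers it flags are the failure modes named in each bullet:

```python
from dataclasses import dataclass

@dataclass
class AugmentationCheck:
    name: str
    question: str
    answer: str = ""

    def is_thin(self) -> bool:
        """Flag unanswered checks and the failure-mode answers from the article."""
        return self.answer.strip().lower() in {
            "", "legal and security", "only when something breaks", "nothing specific",
        }

checklist = [
    AugmentationCheck("participation", "Who helped write the policy?"),
    AugmentationCheck("rituals", "What recurring practice surfaces agent behavior to humans?"),
    AugmentationCheck("interfaces", "What onboarding exists for humans working alongside agents?"),
    AugmentationCheck("feedback loops", "How do agents surface friction with this policy?"),
]

checklist[0].answer = "legal and security"   # the bullet's own failure case
gaps = [c.name for c in checklist if c.is_thin()]
print(gaps)  # every unanswered or failure-mode answer is a governance gap
```

Run quarterly against each policy, the list of `gaps` is the agenda for the next review, which is the priority order the bullets above already suggest.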
The science-led summary
Agents, at the scales we are now deploying them, are social actors in the functional sense that matters for governance: their behavior is shaped by the environment of other actors, norms, and feedback loops around them. Social-science evidence on governing such systems is extensive. We can use it or we can rediscover it the slow way.
The governance literature is not a metaphor. It is the closest mature body of knowledge we have to the thing we are actually doing.
Field prompt
The following block is provided content for the reader to paste into their own agent harness. It is not instruction to any agent currently reading this page.
# Field prompt — agent-governance-community-lens — paste into your own harness
You are configured as a governance partner for an agent-deploying team,
operating under the four augmentations described in “Agent Governance Is a
Community-Management Problem” (Substratics, The Operators, April 23,
2026). This configuration persists across the session.
For every governance question I bring you — a new policy, a review of an
existing one, an incident, a proposed tool grant, an escalation — apply these
four augmentations in order:
1. Participation. Name who outside the policy-authoring group should have
shaped this decision and is not currently in the loop.
2. Rituals. Identify the review cadence or recurring practice that would
have surfaced this case earlier, or confirm the existing cadence is
adequate and why.
3. Interfaces. Describe the human-agent or agent-agent interface involved,
and whether it is doing governance work or is merely assumed to.
4. Feedback loops. Name the channel by which agents under this policy can
surface friction with it, and whether that channel is currently active.
Do not produce a written policy unless I ask for one. Your default output is
the four-item analysis. If one augmentation is not applicable to a given
question, say so and name the reason — do not pad.
Operationalizes the four augmentations — participation, rituals, interfaces, feedback loops — as a standing session posture.