Eric S. Raymond, known for “The Cathedral and the Bazaar,” asserted on X that project Codes of Conduct are a net negative, that they attract “shit-stirrers,” and that the only policy that works is a one-liner: if someone becomes more annoying than their contribution is worth, remove them.
If you’ve ever tried to keep a project healthy at scale, you know behavior problems are latency spikes: rare, but disruptive enough to distort the whole system. Some developers resonate with Raymond’s “minimal interface” approach because it promises constant-time moderation: quick ejects, low process overhead, and a bias toward making progress. Others argue that without some structure, you simply externalize costs onto newcomers and underrepresented contributors, who leave long before they can demonstrate “merit.”
Survey work (such as a 2021 DEI survey, published as a PDF) and reporting from industry observers have repeatedly flagged that open source participation skews heavily along demographic lines and that negative interactions drive people away, which is the exact failure mode most CoCs aim to address.
If you strip the rhetoric, you’re choosing between two operational models for social coordination in a codebase. A minimal policy treats social friction like a rare outage and optimizes for fast mitigation. A structured policy treats social friction like a recurring reliability concern and invests in process, documentation, and clear escalation paths. Neither model is free. The question is where you want to pay and how predictable you want the outcome to be.
Real-world examples
In 2018 the Linux kernel replaced its “Code of Conflict” with a variant of the Contributor Covenant and documented its Code of Conduct Committee. That move followed a very public reflection by Linus Torvalds on his communication style and a decision to change how the project managed interpersonal issues. The kernel now publishes process and contacts, which is a far more explicit governance surface than Raymond advocates, yet it’s the path the kernel chose. You can read the current kernel CoC page to see how they operationalized it.
Python takes a “show me the runbook” approach. The PSF publishes the Code of Conduct, how to report incidents, and the working group’s enforcement procedures. If you’re a maintainer who wants reproducible moderation, those documents are effectively unit tests for social process. You can see the inputs, the handlers, and the expected outputs. The trade-off is obvious: more text, more committee time, and more expectations on volunteers, but in exchange you minimize ambiguity and reduce unilateral moderation surprises.
In late 2021 Rust’s entire moderation team resigned in protest over governance accountability. Regardless of your view on the merits, it was a live demonstration that process must have clear ownership and authority boundaries. A CoC without a crisp, trusted enforcement architecture can still implode. Conversely, a “one-liner” policy that depends on personal authority can fail in exactly the opposite way when authority is in dispute. Read the resignation coverage and community postmortems to see how ambiguous escalation paths become technical debt.
Kubernetes sits inside the CNCF and uses a layered model: a community CoC, project-level responders, and an umbrella CNCF committee with jurisdiction and escalation rules. It’s bureaucracy, but it’s also a fault-tolerant design: if project moderators have a conflict of interest, there’s an external path. This is almost the opposite of Raymond’s prescription and is the pattern many large multi-org projects converge on because contributors, vendors, and event organizers need a shared playbook.
If you prefer ESR’s minimal policy, implement it like code
If you genuinely want to try the “one-liner” in a small to medium repo, make it explicit, documented, and testable. You’re not dodging governance by being short; you’re choosing a contract. Put it in GOVERNANCE.md and CONTRIBUTING.md, state who decides, and define how you gather signal about “annoying vs. contribution.” Make sure you disclose that decisions can be swift and subjective so contributors can opt in with eyes open.
# CONTRIBUTING excerpt
Policy: If your behavior becomes more disruptive to the project than your contributions justify, the maintainers may remove you from participation.
Decision Authority: The lead maintainer makes the call after a brief internal consult with at least two core contributors.
Signals We Consider: quality/velocity of contributions; issue/PR conduct; support load created for others; history of good faith.
Appeal: You can request a one-time review by two maintainers not involved in the original decision within 14 days.
This still won’t satisfy everyone, but like a small API, it’s predictable enough for adults to decide whether to depend on it. The important part is that you instrument the policy with ownership and process instead of leaving it to vibes. Link the phrasing back to the original quote so readers know the premise they’re accepting.
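The “Signals We Consider” line can even be made concrete. As a toy illustration only (not a real moderation tool; every field name and weight below is invented for the sketch), you could log the signals per contributor and compute a rough disruption-to-contribution ratio, so removal discussions start from written-down inputs rather than vibes:

```python
from dataclasses import dataclass


@dataclass
class ContributorRecord:
    """Toy record of the signals named in the policy (all fields hypothetical)."""
    merged_prs: int = 0                 # quality/velocity proxy
    conduct_flags: int = 0              # flagged issue/PR interactions
    support_hours_created: float = 0.0  # time others spent cleaning up
    good_faith_history: bool = True     # pattern of good-faith engagement


def disruption_ratio(r: ContributorRecord) -> float:
    """Rough 'annoying vs. contribution' signal: higher means more disruptive.

    The weights are arbitrary; the point is to write them down, not to
    claim they are right.
    """
    contribution = max(r.merged_prs, 1)  # avoid dividing by zero for newcomers
    disruption = r.conduct_flags * 2 + r.support_hours_created
    if not r.good_faith_history:
        disruption *= 1.5
    return disruption / contribution
```

A number alone should never trigger removal; per the excerpt above, the lead maintainer still consults at least two core contributors. The value of the sketch is that the inputs to that conversation are explicit and auditable.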
If you prefer a structured CoC, keep it short but routable
A coherent CoC can be concise and still have strong routing. Borrow from Python’s clarity and Kubernetes’ escalation, but keep the surface small. You’ll trade a bit of upfront writing for fewer “what now?” moments later. The following is a compact pattern you can drop into a repo while linking to longer, maintained policies upstream.
# CODE_OF_CONDUCT.md (compact model)
Purpose: Maintain a collaborative, harassment-free environment focused on shipping quality software.
Standards: Be respectful; assume good intent; no harassment; no discrimination; keep critiques technical.
Scope: All project spaces (issues, PRs, forums, events).
Reporting: Email conduct@example.org or use the private form (link). Acknowledge within 72 hours.
Enforcement: Two maintainers review; anyone with a conflict of interest recuses. Sanctions range from warning to removal. A summary is posted (no personal details).
Escalation: If you believe maintainers handled a report in bad faith, escalate to our foundation committee (link) for independent review.
Link the routing targets to living documents run by your foundation or umbrella community to avoid maintaining everything yourself. This mirrors how PSF and CNCF separate local handling from organization-level governance.
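Because the compact model leans on routing links, it is worth checking in CI that the required sections and contact points actually exist before a report ever arrives. A minimal sketch, assuming the section labels from the compact model above (the check logic and thresholds here are illustrative, not a standard tool):

```python
import re

# Section labels from the compact CODE_OF_CONDUCT.md model above.
REQUIRED_SECTIONS = ["Purpose", "Standards", "Scope", "Reporting", "Enforcement", "Escalation"]


def check_coc(text: str) -> list[str]:
    """Return a list of problems found in a CODE_OF_CONDUCT.md body."""
    problems = []
    for section in REQUIRED_SECTIONS:
        # Each routing section should open a line as "Section:".
        if not re.search(rf"^{section}:", text, re.MULTILINE):
            problems.append(f"missing section: {section}")
    # The document must point somewhere: a contact address or a link.
    has_email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text) is not None
    if not has_email and "http" not in text:
        problems.append("no reporting contact or link found")
    return problems
```

Run it against the repo file in CI and fail the build on a non-empty list; that way a refactor that silently drops the escalation link gets caught like any other regression.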
For example, GitHub’s own guidelines and code of conduct documents try to split the difference: encourage open debate, demand respect, and reserve the right to remove content and accounts. Watching how centralized platforms write this language is useful, because they operate at massive scale and absorb real legal risk. If you host your community on their infra, your project norms will inevitably interact with theirs.
How to decide for your project
Think like an engineer and model your likely failure modes. If your contributor graph is small and tight, a terse “we move fast and eject quickly” policy might be a good fit. If you expect a wide surface area of first-timers, corporate contributors, or a lot of community events, you probably need the predictability of a real escalation path. You should also be honest about who bears the cost when your policy fails. The data and reporting on who leaves open source in the face of negative interactions should be part of that decision, even if you ultimately decide on a lighter policy.
Moderation challenges are not hypothetical. From harmful content to supply-chain risks, maintainers are increasingly expected to set rules and enforce them. Even when the topic isn’t interpersonal behavior, communities and platforms are forced into policy. You don’t have to love that trend to plan for it. The open internet copies code faster than anyone can erase it, and that reality bleeds into every governance conversation you’ll have this year.
You don’t have to pick a side in a culture war. You do have to pick an operating model. Raymond’s one-liner is a clean abstraction that prioritizes autonomy and throughput. Structured Codes of Conduct are a reliability play that sacrifice some speed for predictability, inclusivity, and appeal rights. Read the sources, review your contributor map, sketch the failure modes, and then write the smallest policy that will survive the next three incidents. If you make the trade-offs explicit, your contributors will thank you whether you choose the tiny API or the documented runbook.