Community Health Metrics
Also known as: Metrics That Matter for Commons Flourishing

Defining and measuring what constitutes a healthy commons—belonging, agency, equity, resilience, vitality—beyond GDP or efficiency metrics.
> [!NOTE]
> Confidence Rating: ★★★ (Established) This pattern draws on Community Wellbeing.
Section 1: Context
Most commons—whether open-source software collectives, neighborhood land trusts, social movements, or product communities—operate without a shared language for what “health” actually means. They inherit metrics from extractive systems: engagement counts, transaction volume, retention rates, output per unit input. These numbers reveal nothing about whether members feel they belong, whether decisions reflect their actual needs, or whether the system can adapt when conditions shift.
The commons is in a state of invisible fragmentation. A tech community might show high activity metrics while members are burning out. A public service might report efficiency gains while trust erodes. An activist network might grow in size while losing shared purpose. Without health metrics, stewards make decisions blind—optimizing for signals that don’t measure what matters.
Across all contexts—corporate teams stewarding shared knowledge, governments designing public goods, movements building collective power, product platforms hosting communities—the pattern emerges identically: systems thrive when they measure belonging, agency, equity, resilience, and vitality. These aren’t soft metrics; they’re the root system of any commons. A system measuring only outputs while ignoring root health will eventually hollow out.
Section 2: Problem
The core conflict is Individual Agency vs. Collective Coherence.
Each member needs felt autonomy—real influence over decisions that affect them, room to experiment, freedom from surveillance. Simultaneously, the commons needs coherence—shared direction, mutual accountability, nested agreements that hold the system together. When you measure only individual agency (participation rates, voice count, autonomy metrics), you optimize for fragmentation; members pull in uncoordinated directions. When you measure only collective coherence (consensus speed, alignment, compliance), you optimize for conformity; the system ossifies and agency dies.
The tension breaks most commons at the metrics level. Leaders choose: track individual satisfaction or collective output? Track equity gaps or total value creation? This false choice fragments practice. Members sense they’re being measured for something other than their actual flourishing—they’re data points in someone else’s story.
Without shared health metrics, stewards default to extractive measures: How much did we produce? How many did we retain? How fast did we grow? These are velocity questions, not vitality questions. A commons can grow vigorously while dying. Burnout, knowledge hoarding, decision opacity, siloed fiefdoms—all these decay patterns hide under good productivity numbers.
The deepest problem: no mirror exists. The commons cannot see itself. Members cannot name what they’re experiencing or track whether conditions are improving. Stewards cannot distinguish between growth that builds resilience and growth that creates brittleness.
Section 3: Solution
Therefore, co-design and continuously tend a small set of shared health metrics that surface the five conditions of commons flourishing: belonging, agency, equity, resilience, and vitality—and embed the process of measurement itself as a generative practice that deepens collective intelligence.
This pattern works because it inverts the causal direction. Instead of metrics driving behavior, you design measurement as inquiry. Each metric becomes a living question the commons asks itself: “Are we creating conditions where people feel they belong?” “Do decisions actually reflect the agency of those affected?” “Are benefits and burdens equitably distributed?” This inquiry process—done together, regularly, with vulnerability—is itself the health-building mechanism. It surfaces patterns no individual could see alone.
The five dimensions anchor this inquiry:
Belonging: Do members experience genuine welcome? Can they name what they’re part of? Is there space for difference without exile?
Agency: Can members influence decisions affecting them? Do their voices shape direction? Is power visible and contestable?
Equity: Are benefits, burdens, and decision-making power distributed according to need and contribution? Who is invisible in this system?
Resilience: Can the commons absorb shocks? Does it have redundancy, distributed capacity, diverse ways of knowing? Can it learn and adapt?
Vitality: Is there energy, creativity, emergence? Are new capacities being born? Does the system feel alive or mechanical?
This is grounded in Community Wellbeing traditions, which reject the false separation of individual and collective flourishing. A healthy commons is not a sacrifice of self for group—it’s an ecology where individual flourishing and collective coherence feed each other. The metrics reflect this integration.
Measurement becomes a stewardship practice, not a surveillance tool. You’re not measuring to control; you’re measuring to tend together.
Section 4: Implementation
Step 1: Surface the current state through collective listening. Gather the commons (or representative stewards) for a structured conversation: “What would it feel like if we belonged here fully? What would real agency look like for you? Where do you see inequity?” Don’t sanitize these conversations. Record patterns, not consensus. This surfaces what people actually experience, not what leadership assumes.
Step 2: Co-design 2–3 metrics per dimension. For each of the five dimensions, choose 2–3 metrics that are:
- Observable (not inferred): “Members can name three decisions they influenced” (observable) vs. “agency is high” (inferred).
- Actionable: The metric points toward what to change.
- Honest: The metric shows reality, not hope. If belonging is actually fragile, the metric should reveal that.
For corporate contexts (Community Health Metrics for Organizations): Design metrics around team interdependence and decision-making transparency. Example belonging metric: “In the past month, I’ve had a genuine conversation with someone outside my functional silo.” Example agency metric: “When I disagreed with a decision, I understood why it was made the way it was, and my input was genuinely considered.” Track these through brief pulse surveys quarterly.
For government contexts (Community Health Metrics in Public Service): Center metrics on constituent trust and co-design participation. Example resilience metric: “How many distinct pathways exist for citizens to influence policy before implementation?” Example equity metric: “What percentage of decisions that affect neighborhood X included voices from neighborhood X in the design phase?” Use participatory budgeting, citizen assemblies, and service feedback loops as data sources.
For activist contexts (Community Health Metrics for Movements): Emphasize metrics of sustained commitment and distributed power. Example vitality metric: “How many new practices, campaign tactics, or strategies emerged from the edges of the movement in the past quarter?” Example agency metric: “What proportion of decisions about campaign direction included voices from frontline-most-affected communities?” Track through movement retrospectives and distributed leadership rotation.
For tech contexts (Community Health Metrics for Products): Embed health metrics into product health dashboards alongside engagement metrics. Example belonging metric: “What percentage of new members receive a genuine welcome and onboarding from an existing member (not automated)?” Example equity metric: “Whose voices are overrepresented in feature requests? Whose are absent?” Map voice by geography, demographic, tenure, power level.
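The metric catalog Step 2 describes can be sketched as a small data structure rolled up per dimension for the review ritual. This is a minimal illustration assuming a quarterly pulse survey scored 1–5; the class names, scale, and sample questions are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from statistics import mean

# The five dimensions anchoring the inquiry (from Section 3).
DIMENSIONS = ("belonging", "agency", "equity", "resilience", "vitality")

@dataclass
class Metric:
    """One co-designed health metric: an observable, actionable question."""
    dimension: str                               # one of DIMENSIONS
    question: str                                # prompt members answer (1-5 scale assumed)
    responses: list = field(default_factory=list)

    def score(self):
        """Mean response, or None if no one has answered yet."""
        return mean(self.responses) if self.responses else None

def dimension_scores(metrics):
    """Roll individual metrics up into a per-dimension view."""
    out = {}
    for d in DIMENSIONS:
        scores = [m.score() for m in metrics
                  if m.dimension == d and m.score() is not None]
        out[d] = round(mean(scores), 2) if scores else None
    return out

# Illustrative catalog drawn from the corporate examples above.
catalog = [
    Metric("belonging", "In the past month, I've had a genuine conversation "
                        "with someone outside my functional silo.", [4, 5, 3]),
    Metric("agency", "When I disagreed with a decision, my input was "
                     "genuinely considered.", [2, 3, 2]),
]
print(dimension_scores(catalog))
# → {'belonging': 4.0, 'agency': 2.33, 'equity': None, 'resilience': None, 'vitality': None}
```

The `None` entries are deliberate: an unmeasured dimension should show as a visible gap, not silently default to a neutral score.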
Step 3: Establish a measurement cadence and ritual. Monthly or quarterly, gather to review metrics together. This is not a reporting exercise; it’s a diagnostic ritual. Ask: “What do these numbers reveal? What patterns are we missing? What should we change?” Make space for emotional response, not just analytical response. Someone might say, “This metric shows I’m heard, but I don’t feel it.” That gap is the information.
Step 4: Close the feedback loop visibly. When data reveals a problem (equity gap, agency bottleneck, fragile resilience), name the problem publicly and design a response together. Change something based on the metrics. If you measure but don’t act, the commons learns that measurement is theater.
Step 5: Evolve the metrics as the system matures. Every 12 months, revisit: Are these metrics still revealing? Have we optimized the metric itself (game the numbers) without improving the reality? Are there new health questions this commons needs to ask? Retire metrics that have become hollow; grow new ones that surface emerging tensions.
Section 5: Consequences
What flourishes:
This pattern generates collective self-awareness. Members develop a shared language for health. Instead of individual complaints (“I feel unheard”) or vague discontent, the commons can say: “Our agency metrics show decision-making is actually concentrated; let’s redesign how we choose priorities.” Belonging deepens because the act of measuring together—being honest about what’s broken and what’s working—builds trust. People feel seen.
Resilience increases because you’re no longer flying blind. When a shock hits (leadership turnover, budget cut, external threat), you have baseline data on what the commons depended on. You can act from pattern-awareness, not panic. New capacity emerges because the inquiry itself teaches the commons how to think together. Members develop literacy in systems thinking. Distributed leadership becomes possible when decisions are transparent and agency is distributed across the metrics.
What risks emerge:
The primary risk is metric capture: you optimize the number instead of the reality. A belonging metric becomes a box-checking exercise. Agency gets reduced to “were you asked?” without asking whether your answer mattered. The commons starts gaming metrics to look healthy while actually hollowing out. This risk is real because measurement creates incentives.
Mitigation: Build redundancy into metrics. Never rely on a single number. Pair quantitative metrics (e.g., “percentage of decisions with pre-decision input”) with qualitative ones (“do you experience your input shaping outcomes?”). If the numbers diverge, that gap is crucial data.
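One way to operationalize this pairing, as a minimal sketch: treat each health signal as a (quantitative, qualitative) pair on a common scale and flag the signals whose two readings disagree. The scale, threshold, and sample values are illustrative assumptions.

```python
def flag_divergence(pairs, threshold=1.0):
    """
    pairs: dict mapping a signal name to (quantitative, qualitative) scores,
    both normalized to the same 1-5 scale. Returns the signals whose two
    readings disagree by more than `threshold` -- the gap itself is the data.
    """
    return {
        name: round(abs(quant - qual), 2)
        for name, (quant, qual) in pairs.items()
        if abs(quant - qual) > threshold
    }

# Illustrative readings: decisions formally receive input (4.2), but members
# don't experience their input shaping outcomes (2.1).
signals = {
    "agency": (4.2, 2.1),     # diverges: likely performative consultation
    "belonging": (3.8, 3.5),  # consistent: no flag
}
print(flag_divergence(signals))  # → {'agency': 2.1}
```

A flagged signal is not a failing grade; it is an agenda item for the next review ritual.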
A secondary risk: metric burden. Constant measurement can feel like surveillance, especially in activist or vulnerable contexts. Members may experience survey fatigue or feel their labor is being extractively measured.
Mitigation: Keep metrics minimal (2–3 per dimension, not more). Embed measurement into natural rhythms (existing retrospectives, town halls) rather than creating new overhead. Make clear: we measure to tend together, not to judge individuals.
Note the composability score (3.0): This pattern doesn’t always transfer cleanly across nested scales. A health metric at team level may not scale to network level. Design nested metrics deliberately, allowing different scales to ask different questions while staying coherent.
Section 6: Known Uses
The Stocksy Workers’ Cooperative (artist collective, co-ownership model): After three years of rapid growth, Stocksy’s stewards noticed that newer members felt less agency in platform decisions despite formal voting rights. They designed health metrics including “percentage of major decisions where you had a voice before the choice was made” and “do you know how to influence platform direction if you wanted to?” Measurement revealed a belonging gap: newer members didn’t know how to participate, even though structures existed. Stocksy redesigned onboarding and opened decision processes to asynchronous input. The metric became the mirror that showed the gap between formal and felt agency. Within two years, belonging scores for newer members increased measurably, and governance actually deepened because participation became real, not theoretical.
Participatory Budgeting processes (government, Chicago, Paris, New York): Cities adopting PB found that initial metrics (participation rates, budget disbursed) didn’t reflect whether residents actually experienced agency. A government context innovation: they added metrics around “Did you learn something new about how decisions get made?” and “Will you participate again?” and “Do you trust the city more because of this process?” These revealed that high participation didn’t always mean high agency; residents sometimes felt their votes didn’t matter. Cities responded by redesigning feedback loops—showing explicitly how resident votes shaped final budgets, hosting deliberation sessions before voting, building in accountability mechanisms. The health metrics shifted the conversation from “how many participated?” to “did this build democratic capacity?”
The Loomio open-source governance platform (tech community): Loomio measured health metrics of its own user communities and eventually made them visible to users. They track: “Can you name someone who disagrees with you who you still trust?” (belonging through difference), “Have you seen a decision change based on your input?” (agency), “Do newer members participate in shaping the platform?” (equity and vitality). By embedding these metrics visibly into product features, they shifted user expectations. Communities using Loomio started asking themselves similar questions. The metric design became generative—it taught distributed groups how to think about their own health. This is the activist context translated: metrics became a form of collective intelligence-building, not just measurement.
Section 7: Cognitive Era
In an age of AI and distributed intelligence, Community Health Metrics take on new power and new peril.
New leverage: AI can surface patterns in qualitative data (retrospectives, conversation transcripts, governance discussions) that humans would miss. Sentiment analysis, network mapping, and discourse analysis can reveal belonging gaps, equity blind spots, and agency concentrations far faster than manual review. For tech products especially, AI can detect whether community voices are actually shaping platform evolution or merely being listened to performatively.
But new risk: Algorithmic measurement introduces opacity. If an AI system scores “community health,” who validated that the scoring reflects reality? Metrics generated by black-box systems can feel more authoritative than they are, leading to faster metric capture. A community might optimize for an AI-generated health score that measures something real but unintended.
The distinctive challenge: In cognitive-era commons, the question of whose intelligence is actually shaping decisions becomes acute. If AI systems are filtering which voices get amplified, which proposals get surfaced, which patterns get highlighted, then “agency” and “belonging” metrics become measurements of how well the system is manipulating perception, not of actual health. A member might feel heard because AI personalized their experience while actually being increasingly isolated from genuine disagreement.
Implementation shift for tech contexts: Don’t embed health metrics only in automated dashboards. Make the measurement process itself distributed. Train community moderators and stewards to read health signals. Pair AI-generated patterns with human validation. Create feedback loops where communities can contest the AI’s reading of their health. The metric design itself should be decentralized—not a single algorithm scoring community health, but distributed conversation about what signals matter most to this particular commons.
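A minimal sketch of the human-validation pairing described above: an AI-surfaced pattern reaches the shared dashboard only after a quorum of stewards validates it, and any member can contest a reading, which sends it back to collective discussion. The class, statuses, and quorum size are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SurfacedPattern:
    """A health signal proposed by an AI pass over qualitative data."""
    description: str
    validations: set = field(default_factory=set)   # steward names
    contests: list = field(default_factory=list)    # member objections

    def status(self, quorum=2):
        if self.contests:
            return "contested"   # returns to collective discussion
        if len(self.validations) >= quorum:
            return "validated"   # may appear on the shared dashboard
        return "pending"         # the AI reading alone is never authoritative

p = SurfacedPattern("Newer members' proposals rarely reach a decision.")
p.validations.add("steward_a")
print(p.status())   # pending: one validation, quorum of two
p.validations.add("steward_b")
print(p.status())   # validated
p.contests.append("This reflects a tooling gap, not exclusion.")
print(p.status())   # contested
```

The design choice to let a single contest override a validated status is deliberate: it keeps the cost of disputing the machine's reading low, which is exactly the feedback loop the text calls for.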
Critical move: Measure the measurement system itself. Ask: “Is our health metric helping us see reality, or is it making us legible to machines in ways that actually degrade our autonomy?” This is especially important in tech contexts where the commons’s data becomes valuable and the temptation to extract it grows.
Section 8: Vitality
Signs of life:
- Members use the language of metrics unprompted. In conversations, retrospectives, and decisions, people reference the health metrics naturally: “This decision feels like it’s eroding our agency metric—let’s pause.” Language shapes thought; when metrics are alive, they’re woven into how the commons thinks.
- The metrics change. A living system evolves. If the same metrics are used year after year without evolution, they’ve become hollow. Vitality shows as regular redesign: “This metric used to matter; now it’s masking a new question we need to ask.”
- Disagreement surfaces visibly. Members dispute what the metrics reveal. “That belonging score doesn’t match my experience—here’s why.” This conflict is health. It means people trust the process enough to argue with it, and the commons is learning from disagreement.
- Behavior shifts based on metrics. This isn’t gaming the numbers; it’s genuine change. A resilience metric reveals that knowledge is too concentrated; the commons deliberately distributes it. An equity metric shows voice concentration; decision-making gets redesigned. The metrics drive adaptation because they’re trusted mirrors.
Signs of decay:
- Metrics are only reported, never discussed. The dashboard exists; the conversation doesn’t. Numbers get posted; nothing changes. The commons is no longer asking, “What does this reveal?”
- Gaming appears. Members start optimizing the metric instead of the reality. Belonging scores improve while actual community fragmentation deepens. People participate in surveys because they’re required, not because the inquiry feels genuine. The metric has become decorative.
- Silent collapse in one dimension while others stay high. Agency scores are fine, but resilience is fragmenting invisibly—knowledge hoards, only a few people know how to renew membership, no succession plan exists. This happens when metrics stop being a system and become isolated numbers. Vitality requires seeing the relationships between dimensions.
- New members can’t explain what the metrics mean. When belonging erodes, new members aren’t initiated into the health-inquiry practice. They’re targets of measurement, not participants in it. This signals that the practice has become centralized, something done to the commons rather than something the commons does together.
When to replant:
Redesign this practice when the metrics have become external (something leadership tracks) rather than internal (something the commons owns). This happens around 18 months if you’re not intentional. The rhythm of replanting: every 12–18 months, gather to resurrect the inquiry. Ask: “What are we still measuring that no longer matters? What new health questions are emerging? Do we still believe in these metrics, or have we been performing agreement?” Replanting happens through genuine collective redesign, not through leadership announcement. Give the redesign time; let conflict surface. A commons that argues about its own health metrics is a commons learning to think together.