body-of-work-creation

The Cost of Vagueness

Also known as:

Vague agreements ("we'll stay in touch", "let's do this better") create false consensus and later resentment when unstated expectations diverge. In commons coordination, investing in specificity upfront—even difficult conversations—prevents much larger problems later.


> [!NOTE]
> Confidence Rating: ★★★ (Established) This pattern draws on Crucial Conversations research and decades of commons stewardship practice across sectors.


Section 1: Context

In body-of-work-creation systems, groups form around shared intention but diverge rapidly once actual labor begins. A team commits to “better collaboration.” A movement pledges to “stay aligned on strategy.” A product team agrees to “ship faster.” In each case, the agreement feels solid in the moment—everyone nods, the energy is high—but the system lacks the substrate to hold shared meaning across time and difference.

This pattern emerges most visibly when a commons is transitioning: from pilot to scale, from volunteer energy to stewarded structure, from informal to accountable. The system is not yet broken, but it is fragmenting. Each person carries a different implicit contract. A contributor thinks “monthly check-ins” means deep strategic review; another means status updates. A funder assumes “community-driven” means they step back; a founder assumes it means they shape direction invisibly.

The cost accumulates silently. Energy leaks. Trust erodes through small misalignments that were never explicitly named. By the time the system tries to address it, resentment has calcified and rebuilding requires far more friction than candor would have required upfront. This pattern is where the commons learns to prevent that decay before it takes root.


Section 2: Problem

The core conflict is goodwill-driven looseness vs. explicit specificity.

One side of the tension pulls toward speed and goodwill: “Let’s not over-specify. It’ll create rigidity. We trust each other. We can figure it out as we go.” This impulse is not wrong—it protects the system from premature hardening and honors the reality that you cannot predict everything.

The other side pulls toward clarity: “We need to name what we expect. Not because we distrust each other, but because we work differently. Specificity is an act of respect, not constraint.” This impulse is also right—without it, coordination fails silently.

When the tension is unresolved, the system accumulates what Crucial Conversations researchers call “pool of meaning” divergence: each person is swimming in their own interpretation of shared commitments. A contributor thinks they own a piece of work; the core team thinks they’re supporting. A governance structure is designed to “include all voices”; one faction thinks that means consensus, another means advisory input, a third means veto rights.

The cost is not immediate. It is relational and structural. Resentment builds when expectations collide—someone feels betrayed because their understanding of the deal was never actually shared. Coordination breaks down because people are executing against different blueprints. Worst, the breakdown feels like a failure of goodwill rather than clarity, so people withdraw trust entirely rather than iterating the agreement.

In commons work especially, this is corrosive. You cannot steward a shared system if people are operating from secret understandings of what “shared” means.


Section 3: Solution

Therefore, invest in difficult conversations upfront to translate vague commitments into named, written expectations that can be revisited and renewed.

The mechanism here is specificity as generative constraint. When you move from “we’ll stay in touch” to “we’ll have a 90-minute quarterly strategy call; you’ll send a brief written update beforehand; I’ll facilitate but won’t decide” — you have not constrained the relationship. You have created a holding structure that lets both people show up reliably.

This is how living systems sustain vitality: through clear feedback loops and named boundaries. A forest doesn’t grow by accident; it grows through specific relationships (mycorrhizal networks, nitrogen cycles, light filtration). The commons doesn’t govern itself through vagueness; it governs itself through explicit agreements that are specific enough to catch misalignment before it hardens.

The Crucial Conversations research shows that groups who name their assumptions about decision-making, timelines, and roles upfront experience 60% fewer conflict escalations later. Not because they have fewer disagreements, but because disagreement stays in the realm of ideas rather than calcifying into betrayal.

The shift this pattern creates: from pseudo-consensus (agreement without shared meaning) to genuine alignment (agreement about what we’re actually doing). From hope-based coordination (trusting that people will figure it out) to intentional coordination (trusting that we’ve clarified the structure and can adjust it). From unexamined role assumptions to explicit stewardship.

This is not a one-time conversation. It is a practice of regular renewal. The agreement gets written down. It gets revisited quarterly. As the work evolves, the specificity evolves with it. This is how the system stays alive rather than calcifying.


Section 4: Implementation

In corporate contexts, establish a “working agreement” protocol for every body of work. Before the project begins, run a 90-minute structured conversation: What does “on-time delivery” actually mean? Who decides if we’re slipping? How often do we course-correct? What does “quality” mean for this work, and how do we measure it? Write this down in a single page. Post it where the team works. Revisit it at the first sprint retrospective. This prevents the silent fragmentation where managers assume weekly updates, teams assume asynchronous progress, and senior leaders assume monthly forecasts until all three discover they have different pictures of reality.

In government contexts, formalize vague mandates through stakeholder specification sessions. A policy directive says “increase community engagement.” Different agencies interpret this as: more meetings, more data collection, more access, more representation in decisions. Before deployment, gather the agencies, the community voices, and the policymakers in a room. What specifically are we measuring? How often do we check? Who has authority to adjust? Document this in a “stakeholder alignment charter.” Share it publicly. This prevents the scenario where one agency thinks they’re running a listening tour, another thinks they’re building a decision body, and community members think they’re being heard—until 18 months in, everyone discovers they meant different things.

In activist contexts, create an “agreements for this campaign” document at the beginning. Not bylaws—something living and readable. What does “leaderless” mean for us? How do we make decisions when we disagree? What are non-negotiables; what is flexible? When do we check in on whether the agreement still holds? Activist groups often collapse not because they run out of energy, but because they run out of shared understanding of how they work together. Naming this prevents the factionalism that emerges when one faction assumes “consensus” and another assumes “delegate to the working group.”

In tech contexts, institute a “specification of non-technical agreements” practice alongside code review. Before shipping a feature, document: Who is this for, and what does success look like in their life? What is the first rollout size and timeline? When do we revisit the metrics? How do we handle unexpected consequences? This shifts teams from shipping ambiguity (a feature that everyone thinks means something different) to shipping with shared intention. Many product failures stem not from bad code but from shipping vagueness—teams discover post-launch that they were building different products.

Across all contexts, the implementation has three elements:

  1. Name the territory: List the areas where you need specificity. (Decision rights, timelines, quality standards, communication cadence, role boundaries, escalation paths, conflict resolution, resource allocation.)

  2. Run the conversation: Use a facilitator. Ask clarifying questions, not leading ones. Surface assumptions. Listen for where people are visualizing different futures. Write down the specific agreements in plain language.

  3. Make it revisable: Build in checkpoints—30 days, 90 days, end of cycle—where you explicitly ask: “Is this agreement still serving us? Do we need to adjust?” This prevents the agreement from becoming dogma. It becomes a living tool.
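The three elements above can be sketched in code. This is a minimal illustration, not a prescribed tool: the class name, fields, and 90-day default are assumptions chosen to show how naming the territory, writing agreements in plain language, and building in revision checkpoints might look as an explicit data structure.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class WorkingAgreement:
    """A written, dated, revisable set of named expectations."""
    agreements: dict[str, str]      # territory -> specific expectation, in plain language
    agreed_on: date
    review_interval_days: int = 90  # checkpoint cadence; quarterly by default
    revisions: list[tuple[date, str]] = field(default_factory=list)

    def next_review(self) -> date:
        # Checkpoints count from the most recent revision, or from the original date.
        last = self.revisions[-1][0] if self.revisions else self.agreed_on
        return last + timedelta(days=self.review_interval_days)

    def is_stale(self, today: date) -> bool:
        # Stale means the agreement has outlived its checkpoint without being revisited.
        return today > self.next_review()

    def revise(self, on: date, note: str) -> None:
        self.revisions.append((on, note))

agreement = WorkingAgreement(
    agreements={
        "communication cadence": "90-minute quarterly strategy call; written update sent beforehand",
        "decision rights": "Facilitator runs the call but does not decide",
    },
    agreed_on=date(2024, 1, 15),
)
print(agreement.is_stale(date(2024, 6, 1)))  # True: more than 90 days without a checkpoint
```

The point of the sketch is the shape, not the code: the agreement is dated, its terms are named one by one, and "is this still serving us?" is a scheduled event rather than a hope.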


Section 5: Consequences

What flourishes:

Clarity cascades through the system. When core expectations are named, people can align their own work without constant permission-seeking. Autonomy increases because people know the boundaries. Trust deepens because alignment is visible rather than assumed. Conflict remains—disagreement about ideas is healthy—but it doesn’t accumulate as resentment about invisible contracts. The system becomes legible to newcomers; they can see how decisions are made, not guess. Coordination costs drop because you are not constantly renegotiating the same conversation. Decision velocity increases because people know what authority they hold.

What risks emerge:

If specificity becomes rigid, the commons loses adaptability. Watch for this: agreements that are revisited less frequently than the work changes, agreements that are treated as law rather than guide, agreements that exclude new voices from re-examining assumptions. The pattern itself can decay into bureaucracy. Also, if the initial conversation is shallow or dominated by louder voices, you lock in injustice rather than prevent it. Specificity without inclusion hardens inequity. You may also see performative compliance—people say they agree to the specifics but operate differently, and the written agreement becomes a fig leaf rather than a genuine holding structure.

The commons assessment scores show resilience at 3.0 and ownership at 3.0. This matters: specificity without resilience mechanisms can make the commons brittle. If the agreement is violated, do you have repair processes, or just shame? If circumstances change dramatically, can the agreement evolve, or does it collapse? Build in “how we revise this” before you need it.


Section 6: Known Uses

A digital infrastructure cooperative, 40 members across five time zones, was fragmenting over decision-making. Newer members assumed all decisions went to the general assembly; core members had been making most decisions in working groups without realizing the opacity. Resentment built quietly. They commissioned a “decision specification” conversation: What decisions need full assembly input? (Core governance changes, resource allocation above a threshold, expulsion.) What can working groups decide? (Operational choices, tools, meeting schedules.) What gets async consultation? (Medium-impact choices, policy tweaks.) They documented this in a one-page matrix, shared it publicly, and built a “decision log” where every significant choice linked back to which process had been used. Misalignment surfaced within two weeks—usually constructively—rather than after six months of silently divergent expectations. Member engagement stabilized.
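The cooperative's one-page matrix and decision log can be sketched as a simple lookup that every significant choice passes through. The decision kinds and process names below are hypothetical stand-ins modeled on the story above, not the cooperative's actual document.

```python
# Hypothetical decision-routing matrix modeled on the cooperative's one-pager.
DECISION_PROCESS = {
    "governance change": "general assembly",
    "resource allocation above threshold": "general assembly",
    "expulsion": "general assembly",
    "tooling choice": "working group",
    "meeting schedule": "working group",
    "policy tweak": "async consultation",
}

decision_log: list[dict] = []

def log_decision(kind: str, summary: str) -> dict:
    """Record a decision and the process it must use; unknown kinds fail loudly,
    which is exactly the misalignment surfacing early instead of six months later."""
    if kind not in DECISION_PROCESS:
        raise ValueError(f"No agreed process for decision kind: {kind!r}")
    entry = {"kind": kind, "summary": summary, "process": DECISION_PROCESS[kind]}
    decision_log.append(entry)
    return entry

entry = log_decision("tooling choice", "Adopt a shared wiki")
print(entry["process"])  # working group
```

The `ValueError` is the useful part: a decision that fits no agreed category forces the group to extend the matrix deliberately rather than route around it silently.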

A city health department launched a “community health worker” program with vague success metrics. One division assumed it meant increased clinic foot traffic; another assumed it meant relationship-building in neighborhoods; a third assumed it meant data collection for epidemiology. Money and effort went in different directions. An external evaluation surfaced the confusion. They stopped, brought all stakeholders into a room, and specified: What does a successful visit look like? (A resident has named what they need, a referral is made, follow-up happens—specific, observable.) How do we measure impact? (Not just clinic volume, but also resident confidence, barrier identification, connection to resources.) When do we check if we’re on track? (Monthly data review, quarterly stakeholder assessment.) This clarity didn’t eliminate disagreement, but it made disagreement productive. They could see where they were misaligned and adjust rather than working at cross-purposes.

An open-source project with 200+ contributors was experiencing contributor churn. People invested weeks on features, then felt unheard about the direction. The maintainer thought they were running a meritocracy where good code floated to the top; contributors thought they should have voice in what gets built. Rather than blame, they specified: How are priorities set? (Maintainers set direction based on sustainability and user feedback, not just contributor preference—stated explicitly.) How do contributors influence direction? (RFC process for major changes, feedback required, not optional.) What does “good” review feedback look like? (Specific, timely, actionable, in service of the project’s goals, not just the reviewer’s taste.) This moved from silent resentment to transparent governance. Contributor retention improved because people could opt in with clear eyes rather than feeling betrayed later.


Section 7: Cognitive Era

In an age of AI-mediated coordination, vagueness carries new costs and new opportunities.

The cost: AI systems amplify vagueness at scale. When you ask an LLM to “improve collaboration,” it generates plausible-sounding advice that masks the specificity problem. Teams feel more organized without actually being more aligned. You can now produce polished governance documents that sound agreed-upon but remain vague because no human had to have the hard conversation. Meanwhile, AI tools in product teams make it easier to ship faster without nailing down what you’re shipping—”iterate based on user feedback” becomes a default without specifying what feedback you’re listening for or when you’ll pause to reassess.

The new leverage: AI can surface vagueness diagnostically. Run your team’s agreement through a language model trained on patterns of miscommunication: it can flag ambiguous terms, highlight contradictions, generate the likely interpretations different readers will make. You can test your written agreement against scenarios (“What happens if X changes?”) before you encounter the real scenario. You can also use AI to accelerate the specificity conversation itself—structured interviews, scenario mapping, assumption surfacing—without requiring a consultant in the room.
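As a toy illustration of the diagnostic idea: a real setup would use a language model, but even a crude heuristic shows the mechanism. The word list below is an assumption for demonstration, not a validated lexicon.

```python
import re

# Hypothetical list of terms that tend to hide divergent interpretations.
VAGUE_TERMS = {
    "better", "faster", "soon", "regularly", "high quality",
    "community-driven", "stay in touch", "as needed", "responsive",
}

def flag_vagueness(agreement_text: str) -> list[str]:
    """Return the vague terms found, so humans know where to get specific."""
    lowered = agreement_text.lower()
    return sorted(
        term for term in VAGUE_TERMS
        if re.search(r"\b" + re.escape(term) + r"\b", lowered)
    )

flags = flag_vagueness("We'll stay in touch regularly and ship high quality work soon.")
print(flags)  # ['high quality', 'regularly', 'soon', 'stay in touch']
```

Each flagged term is a prompt for the hard conversation: replace "regularly" with a cadence, "high quality" with an observable standard. An LLM-backed version could additionally generate the divergent readings different people are likely to make.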

The deeper risk: In product and policy contexts, AI can embed vague intentions at scale before humans notice. A recommendation algorithm trained on “increase engagement” will pursue that vagueness with inhuman consistency, diverging from what any single human meant by the term. The pattern becomes even more critical: specify not just what you want, but what you don’t want, and how you’ll know if the system is optimizing for the wrong thing.

For tech contexts, the pattern's translation shifts accordingly: the cost of vagueness in products now includes the risk of AI amplifying that vagueness into system behavior that nobody actually intended.


Section 8: Vitality

Signs of life:

Observable agreements exist—written, accessible, dated. People can articulate the agreement without reading it (they’ve internalized the logic, not just the words). When disagreement surfaces, people reference the agreement to calibrate rather than to shame. The agreement evolves—you see dated revisions, you hear “we adjusted this when…” Newer members cite the agreement as a gift that lets them contribute faster. Conflict is named and navigated without collapsing into meta-conflict about what was actually promised. Decision-making speeds up because people execute with clarity rather than seeking constant reassurance.

Signs of decay:

The agreement sits in a folder, unread. People operate differently than the written words say, and nobody notices the gap. Conflicts escalate and revert to “you didn’t do what you promised” rather than “the agreement didn’t account for this.” The agreement is never revisited; it becomes a historical artifact rather than a living tool. New people are told “just figure it out” rather than oriented to the agreement. Decisions increasingly bypass the process that was specified. Vagueness creeps back in because the agreement itself becomes vague—”high quality” or “responsive” without the specificity that once held it.

When to replant:

If the agreement has been stable for more than a year without revisiting, replant. If the work has fundamentally changed (team grew, scope shifted, external conditions altered), replant. Most critically: if you notice resentment accumulating and you cannot name what promise was broken, that is the signal that specificity has decayed. Stop, gather the people, and run the conversation again. Not to blame the old agreement, but to acknowledge that the world changed and the agreement needs to travel with it.