Recognising Your Own Mental Models
Developing awareness of the invisible frameworks through which you interpret reality—assumptions, beliefs, cognitive shortcuts—is the first step toward adaptive thinking and commons learning.
> [!NOTE]
> Confidence Rating: ★★★ (Established). This pattern draws on Cognitive Science.
Section 1: Context
Across organisations, public institutions, movements, and product teams, decision-making systems operate on unexamined assumptions. A government agency designing public services carries inherited beliefs about citizen behaviour. An activist coalition pursues strategies rooted in untested theories of social change. A product team iterates features based on implicit user profiles no one has articulated. A corporation pursues growth through competitive advantage, rarely questioning whether that frame serves the commons it inhabits.
The system state is fragmentation masked as clarity. Teams believe they are aligned; in fact, they operate from different, invisible maps of reality. Conflicts emerge not from disagreement but from unrecognised difference. When mental models remain opaque—embedded in language, routine, culture—they calcify. The living system becomes brittle: it responds poorly to novelty, defends against challenge, and treats learning as a threat rather than as breath.
In commons contexts especially, this becomes costly. Co-ownership requires explicit negotiation of how we see the world. Collective intelligence cannot emerge if the frameworks through which each person interprets shared work remain hidden. The commons fragments when stakeholders act from radically different mental models without knowing it—each convinced their interpretation is simply “how things are.”
Section 2: Problem
The core conflict is between recognition and the models themselves.
The tension: Mental models are both essential and invisible. We cannot think without them—they are the substrate of cognition, the shortcuts that let us act quickly without recomputing the world from first principles. Yet that same utility makes them transparent to us. We see through them, not at them. We mistake map for territory.
In commons work, this creates a specific rupture. Recognising—the act of surfacing what we believe—requires vulnerability. It means saying aloud what seemed obvious, and discovering it was not obvious to others. It means confronting that what felt like neutral observation was actually interpretation layered with assumption, culture, trauma, privilege.
Models, meanwhile, want to stay hidden. They are efficient precisely because they run unconsciously. To interrogate them feels inefficient, even destabilising. A team operating from an unexamined scarcity model (resources are fixed, competition is inevitable) will experience a proposal for shared abundance as naive. They will defend the model without naming it. The model feels like reality.
When unrecognised, models become prisons disguised as common sense. They exclude perspectives that threaten them. They make certain questions unaskable. They fragment teams because people operate from different invisible maps and cannot negotiate the difference—it is not visible enough to debate.
The break point: systems designed by people operating from unaligned mental models without acknowledging the misalignment. Innovation stalls. Conflict becomes personal rather than structural. The commons cannot cohere.
Section 3: Solution
Therefore, create structured practices through which people regularly externalise, name, and interrogate the assumptions, beliefs, and cognitive shortcuts they are operating from—and make this interrogation a normal, recurring rhythm of collective work, not a one-time event.
This pattern works because it transforms the invisible into material. The moment you name an assumption—write it on a surface, speak it aloud in a group—it shifts from transparent to visible. From “how the world is” to “one way we are choosing to interpret it.” This shift is not intellectual; it is a change in the living system’s capacity to respond.
Mental models are not obstacles to overcome. They are the cognitive roots of any system. But roots need tending. They calcify. New conditions demand new shapes. Recognition creates the condition for roots to stretch, absorb new water, respond to changed soil.
The mechanism has three movements:
First, externalisation. What was implicit becomes explicit. A team engaged in co-ownership names the story they are telling about “what users want” or “how change happens” or “what fairness means.” Written, drawn, spoken aloud—the model becomes a visible object rather than a transparent lens.
Second, comparison. People discover they are operating from different models. Often, this is surprising. The activist, the engineer, the frontline worker, the policy maker—each has been operating logically from their own framework. Naming the difference is not conflict; it is precision.
Third, adaptive inquiry. The group asks: Which model serves our shared purpose here? What assumptions are we testing? What would we need to see to revise this? The model becomes a hypothesis under review rather than a fact to defend. This is where collective intelligence takes root.
This pattern sustains the system’s vitality not by generating new capacity but by keeping existing capacity supple. Like stretching before exertion, it prevents rigidity. It surfaces the conditions under which the system can actually learn.
Section 4: Implementation
1. Create a “models inventory” ritual.
Establish a recurring practice—quarterly, or at the start of a major cycle—where stakeholders list the key assumptions they are operating from. Not grand philosophy; the tactical beliefs that shape daily work.
- For organisations (corporate context): In a product strategy meeting, name explicitly: “We are assuming our customers prioritise price over durability” or “We believe loyalty is built through exclusive benefits.” Write these on a visible surface. Ask: Where did this assumption come from? What evidence supports it? What would contradict it? Have we tested it recently?
- For government: A public service team designing benefits eligibility criteria should surface assumptions like: “People will seek help if they know it exists,” or “Shame is not a barrier to access.” Name who benefits if this assumption is true, and who is invisible if it is false. Test against data from people actually using the service.
- For activist movements: A coalition planning campaign strategy should articulate: “Power holders respond to public pressure,” or “People change their minds through conversation.” Interrogate whether the evidence actually supports this, or whether you are repeating inherited tactics without examining their validity in this new context.
- For product teams: Document the mental model embedded in your product roadmap: “Users want simplicity over feature richness,” or “People are willing to trade privacy for convenience.” Build into your development process a quarterly check: Are we still testing this assumption? Has user behaviour revealed it to be wrong?
2. Map conflicting models in real time.
When disagreement surfaces in meetings, pause and name it as a model conflict rather than a personality conflict or a facts conflict.
Ask: “What different story are we each telling about what’s happening here?” Draw both stories. Show how each logically flows from different assumptions. This move—from “you are wrong” to “we are operating from different maps”—is transformative. It depersonalises and creates the possibility of genuine negotiation.
3. Run an “assumption stress-test.”
For any major decision or design, list the assumptions underlying it. Then stress-test each: What would have to be true for this to fail? What conditions would prove us wrong? Have any of those conditions already emerged? This is not about achieving certainty; it is about knowing what you are betting on.
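The stress-test above can be reduced to a mechanical pass over the inventory. A sketch, assuming each assumption is recorded alongside the condition that would falsify it; all names and example strings are hypothetical:

```python
def stress_test(assumptions: dict[str, str], observed: set[str]) -> list[str]:
    """Return the assumptions whose failure condition has already been
    observed, i.e. the bets the team may have already lost.

    assumptions: maps each assumption to the condition that would prove it wrong
    observed:    conditions the team has actually seen emerge
    """
    return [belief for belief, failure_condition in assumptions.items()
            if failure_condition in observed]

bets = {
    "Power holders respond to public pressure":
        "Three escalations produced no policy movement",
    "People will seek help if they know it exists":
        "Awareness is high but uptake stays below 20%",
}
# Only the second failure condition has actually been seen:
print(stress_test(bets, {"Awareness is high but uptake stays below 20%"}))
# prints ['People will seek help if they know it exists']
```

The hard work is still human: writing failure conditions concrete enough that “have any already emerged?” has a yes/no answer.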
4. Create diversity in the room.
Mental models are shaped by position, experience, and identity. A team homogeneous in background will mistake its shared model for universal truth. Deliberately bring in people from different roles, geographies, and experiences—and structure time for them to surface how they see the situation differently. This is not tokenism; it is cognitive architecture. You are literally expanding the range of interpretations the system can hold.
5. Build a “model revision” cadence.
Do not treat this as a one-time audit. Embed regular moments—monthly reviews, annual retrospectives—where you ask: Which models are still serving us? Which have become obsolete? What new assumptions should we be testing? Make revision visible and normal.
Section 5: Consequences
What flourishes:
When mental models become visible and revisable, adaptive capacity emerges. Teams can learn faster because they are not defending invisible positions; they are testing hypotheses. Decisions become more robust because stakeholders understand what they are assuming and on what grounds. Conflict becomes productive: disagreement becomes data about the diversity of interpretation in the room, rather than a threat to manage. In commons contexts specifically, co-ownership becomes possible because the invisible power dynamics embedded in unexamined models become visible and negotiable. New members integrate faster because the mental models are explicit, not encoded in tribal knowledge.
What risks emerge:
This pattern sustains existing health but does not guarantee adaptive transformation. If recognition becomes routinised—“we had our model meeting, now back to business”—the pattern becomes hollow. The models get named but never actually revised. Teams begin to see the practice as compliance theatre rather than genuine inquiry. Watch for this: if externalising models does not change behaviour, the pattern has decayed into ritual.
Ownership and autonomy both score at 3.0, indicating that this pattern alone does not secure distributed decision-making. Recognising models can become paternalistic: a leadership team names models and then tells others to conform to the “correct” interpretation. The power to define what counts as a valid mental model stays centralised. This pattern must be paired with structures that distribute authority over interpretation.
There is also an emotional risk. Surfacing mental models can trigger defensiveness, especially if someone’s model is implicitly being questioned. “That assumption is wrong” can feel personal. Skilled facilitation is required to hold this tension without collapsing into either false harmony or escalated conflict.
Section 6: Known Uses
1. The US Census Bureau’s cognitive testing program (Cognitive Science, government context):
For decades, Census Bureau analysts assumed people understood questions the way designers intended. The bureau redesigned survey wording based on internal logic. Response rates and data quality remained mediocre. When they shifted to surfacing and testing the mental models people actually used to interpret questions—“What does ‘usually live here’ mean to someone with seasonal housing?”—they discovered profound misalignment. By naming the unexamined model (“people interpret our words as we intend them”) and replacing it with a testable alternative (“people interpret through their own situational logic”), they redesigned questions that actually worked. Data quality improved. This is recognition changing practice.
2. The Evergreen Cooperatives’ model meetings (Activist/Corporate hybrid, Cleveland):
A network of worker-owned businesses discovered that cooperative principles meant nothing if workers operated from different mental models of what “ownership” meant. Some assumed it meant formal voting rights. Others assumed it meant profit-sharing. Others assumed it meant having genuine voice in decisions affecting their daily work. These different models created constant friction. They institutionalised quarterly “theory of change” meetings where workers and management explicitly articulated what they believed made the cooperative actually work. Over time, they built a shared model—not uniformity, but clarity about where they differed and why. This recognition allowed them to design governance structures that honoured multiple valid models rather than pretending consensus existed where it did not.
3. Automattic’s distributed P2 documentation system (Product/Tech context):
The company recognised that its globally distributed teams were operating from drastically different mental models about how decisions get made, what constitutes “done work,” and how to handle disagreement across time zones. Instead of centralising control, they externalised the implicit models in writing—decision logs, “why we did this” documentation, explicit meta-discussion of what assumptions guided each choice. This transformed unexamined models from invisible power dynamics into legible knowledge. New employees could see how thinking worked, not just copy behaviour. Disagreement became visible in the documentation—“here is why we chose X over Y”—and could be revisited when conditions changed.
Section 7: Cognitive Era
In a world of AI systems and distributed intelligence networks, mental models matter differently—and more urgently.
First, AI systems are themselves mental models made explicit. Machine learning algorithms codify patterns and assumptions. When these systems make decisions affecting humans—loan approvals, content moderation, criminal sentencing—the stakes of unrecognised mental models become literal. A product team designing with AI embedded must interrogate not only human mental models but the assumptions baked into training data, loss functions, and deployment contexts. A tech team cannot simply “build and see what happens.” The model is consequential from day one.
Second, AI increases the speed and scale at which unexamined models propagate. A mental model that once shaped decisions for a single team now shapes decisions for millions—instantly. This makes recognition not a nice-to-have but essential infrastructure. You cannot move fast with invisible assumptions at scale; you will crash.
Third, distributed systems and networks make model misalignment catastrophic. When systems are tightly coupled—one person’s decisions directly affect another’s—misalignment creates friction you can see and negotiate. When systems are loosely coupled and globally distributed, misalignment is silent until it produces a crisis. Recognition practices become the connective tissue that prevents fragmentation.
For product teams specifically: The mental models embedded in your product are not neutral. They encode assumptions about how people should behave, what counts as progress, what relationships should look like. When you design with AI, those models scale. A recommendation algorithm that assumes “engagement equals value” will shape billions of information diets. The only way to govern this is to make the model explicit, interrogate its assumptions, and remain open to revision as effects become visible.
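To make “engagement equals value” concrete: in a ranker, the value model can be an explicit, swappable parameter instead of an invisible default. A toy sketch with hypothetical item fields and scoring rules, not a description of any real system:

```python
from typing import Callable

Item = dict  # hypothetical: {"id": str, "clicks": float, "dwell_minutes": float, "user_rated_useful": float}

def engagement_value(item: Item) -> float:
    """Encodes the assumption: engagement equals value."""
    return item["clicks"] + item["dwell_minutes"]

def reported_value(item: Item) -> float:
    """Encodes a rival assumption: value is what users say helped them."""
    return item["user_rated_useful"]

def rank(items: list[Item], value_model: Callable[[Item], float]) -> list[Item]:
    # The mental model is now a named, inspectable argument,
    # not an unexamined default buried inside the scoring code.
    return sorted(items, key=value_model, reverse=True)

feed = [
    {"id": "outrage-bait", "clicks": 9.0, "dwell_minutes": 4.0, "user_rated_useful": 1.0},
    {"id": "how-to-guide", "clicks": 3.0, "dwell_minutes": 2.0, "user_rated_useful": 5.0},
]
print([i["id"] for i in rank(feed, engagement_value)])  # engagement model ranks the bait first
print([i["id"] for i in rank(feed, reported_value)])    # reported-value model reverses the order
```

Swapping `value_model` is exactly the move the pattern describes: the assumption becomes one way of interpreting value among alternatives, open to interrogation and revision.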
The leverage: In complex, distributed systems, recognition is cheaper than recovery. Finding misaligned models early costs time and vulnerability. Letting them propagate costs integrity and trust.
Section 8: Vitality
Signs of life:
- People regularly speak their assumptions aloud without defensiveness. In meetings, you hear: “I was assuming X, but your question makes me wonder if that’s still true.” This is not forced; it is normal.
- Decisions include explicit “what would prove us wrong” statements. Before executing, the team names what evidence would require them to revise course. They are not attached to the plan; they are attached to learning.
- Conflicts get reframed productively. When disagreement surfaces, the first move is “what different mental models are we operating from?” rather than “who is right?” This happens naturally, not through facilitation.
- New members quickly understand the system’s logic. Because mental models are explicit and documented, onboarding is faster. People can see how decisions are made, not just copy behaviour.
Signs of decay:
- Model meetings become checkbox compliance. “We did our quarterly models review” and nothing shifts. The practice is hollow.
- Models stay at the abstract level. Teams name grand philosophies (“we believe in people-centredness”) but never drill into what that actually means for Monday’s decision. The abstraction protects the model from real interrogation.
- Dominant voices define what counts as a valid model. Certain people’s interpretations get validated; others get dismissed as “not aligned with our culture.” Recognition becomes a tool of conformity.
- New conditions emerge and the team does not revise. The market shifted, the community changed, technology enabled new possibilities—but the mental models stay frozen. The team is operating from a map of a territory that no longer exists.
When to replant:
Restart this practice when you notice decisions failing to land, conflict becoming personal rather than structural, or new members struggling to understand why things are done “the way they are.” The pattern needs redesign—not abandonment—when you realise recognition has become routine without revision. Add stakes: tie model revision to actual resource allocation, governance changes, or strategic pivots. Make it matter.