feedback-learning

Intersectional Power Analysis

Also known as:

Understand power at the intersection of multiple oppressions: race, class, gender, sexuality, ability, and more. Recognize compounding disadvantages and complex privilege.


[!NOTE] Confidence Rating: ★★★ (Established). This pattern draws on Kimberlé Crenshaw’s theory of intersectionality.


Section 1: Context

Commons systems are fragmenting precisely where they need integration. A tech platform claims to serve “all users” while its interface accessibility fails disabled people of color. A government agency works to reduce poverty without recognizing how gender, migration status, and ableism compound the disadvantage it’s measuring. An activist coalition builds power while reproducing the same hierarchies it fights. The domain of feedback-learning reveals this: communities gather data, run surveys, hold meetings—but the very people most affected by compounding oppressions are often absent from, or harmed by, how the feedback is gathered and interpreted.

The living system is stagnating because power analysis has become one-dimensional. Traditional power-mapping asks: “Who has it? Who doesn’t?” But real commons operate across interlocking systems of advantage and disadvantage. A working-class Latina engineer in tech has material power in one domain (technical skill, income) and structural powerlessness in another (gender discrimination, migration precarity). When a commons ignores this complexity, it optimizes for those whose identities align with existing privilege structures and perpetuates harm to those navigating multiple, simultaneous marginalities.

The pattern emerges from necessity: organizations, movements, and public agencies are discovering that without intersectional power analysis, their feedback loops calcify. They hear from the comfortable, optimize for the privileged, and call it “working.”


Section 2: Problem

The core conflict is Intersectional vs. Analysis.

“Intersectional” wants wholeness: it insists that power operates at the overlap of race and class and gender and ability and more, all at once, in ways that cannot be separated. It resists the comfort of single-axis thinking. “Analysis” wants clarity: it seeks to isolate variables, measure effects, build tools, scale solutions. Analysis says “let’s break this down”; intersectionality says “the breaking itself is the problem.”

The tension surfaces in real practice. An organization runs a power audit and discovers it has 40% women in senior roles—a win on the gender axis. But 85% of those women are white. The analysis tool was not designed to see that intersection. A government agency disaggregates health outcome data by race, then by income, in separate reports—missing that Black low-income women experience a distinct, non-additive harm. An activist collective talks constantly about intersectionality in their mission statement while their meeting times, language, and accessibility decisions stay frozen, serving able-bodied people with flexible schedules.

When unresolved, this tension produces what practitioners call “analysis paralysis” on one side (too many lenses, no action) or “strategic ignorance” on the other (one axis solved, others worsened). The feedback-learning cycle breaks: communities provide data, practitioners extract meaning through a simplistic lens, and the system optimizes for its own blindness. People at multiple intersections experience the commons as not-for-them, or worse, as actively hostile, even when it claims to be building power.


Section 3: Solution

Therefore, map power by naming the specific intersections where your commons members live, gathering feedback from those intersections directly, and redesigning systems when compounding harms appear.

This pattern works by shifting from “adding up oppressions” to naming the lived ecology of power. Intersectionality is not a checklist (race ☐ class ☐ gender ☐ ability ☐). It is a way of seeing how a person’s experience of the commons is rooted in the soil where multiple systems of advantage and disadvantage meet.

The mechanism has three roots:

First: Specificity over universality. Instead of asking “do we serve low-income people?” ask “do we serve low-income disabled Black mothers?” The more specific the intersection, the more actual people you can see. Generic inclusion fails because it assumes the baseline experience is a white, able-bodied, economically stable person—and everyone else is a deviation. Specificity inverts this: it asks what the commons looks like from the vantage point of greatest complexity.

Second: Feedback from position, not just proximity. Invite people who live at specific intersections to reflect back what they experience in your commons. Not as “representatives of women” or “representatives of poor people” (flattening), but as people navigating specific combinations of advantage and disadvantage. A Deaf immigrant man and a white Deaf woman both navigate deafness but from utterly different positions. Both matter; they are not interchangeable.

Third: Redesign in response, not apology. When intersectional analysis reveals that your feedback process, meeting access, or value distribution harms people at specific intersections, the pattern commits to changing the system, not inviting more harm. This is where it touches commons resilience: the system becomes more robust because it stops optimizing for one invisible baseline.


Section 4: Implementation

The cultivation moves differently across contexts, but each shares a core logic: notice the intersection, gather from it, redesign for it.

In organizations (corporate context): Begin a Power + Positionality Audit. Map your membership, workforce, or user base not by single categories but by intersections. Use this actual distribution as your baseline. Then map decision-making power: Who chairs meetings? Who speaks first? Whose input shifts strategy? You will likely find concentration at one or two privileged intersections. Host feedback gatherings with people from underrepresented intersections—and crucially, keep them separate from dominant groups initially. A Latina immigrant woman may speak differently about workplace barriers in a room with other immigrant women of color than she will in a mixed-power room. Document specific patterns. A tech company discovered through this that their “inclusive hiring” initiatives helped white women and Black men, but Caribbean and Central American women faced higher attrition—a pattern invisible in aggregate data. Redesign hiring and retention around that intersection.
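The cross-tabulation step of the audit can be sketched in a few lines. This is a minimal illustration, not an audit tool: the field names, categories, and records below are entirely hypothetical, and a real audit would draw on your actual membership data and many more axes.

```python
from collections import Counter

# Hypothetical records; field names and values are illustrative only.
members = [
    {"gender": "woman", "race": "Black",  "role": "senior"},
    {"gender": "woman", "race": "white",  "role": "senior"},
    {"gender": "woman", "race": "white",  "role": "senior"},
    {"gender": "man",   "race": "white",  "role": "senior"},
    {"gender": "woman", "race": "Latina", "role": "staff"},
    {"gender": "man",   "race": "Black",  "role": "staff"},
]

def intersection_counts(records, axes):
    """Count people at each combination of axes, not per-axis totals."""
    return Counter(tuple(r[a] for a in axes) for r in records)

# Single-axis view: 4 of 6 members are women -- looks like a win.
by_gender = intersection_counts(members, ["gender"])

# Intersectional view of senior roles shows where power concentrates.
seniors = [r for r in members if r["role"] == "senior"]
by_intersection = intersection_counts(seniors, ["gender", "race"])
for combo, count in by_intersection.most_common():
    print(combo, count)
```

The point of the joint `Counter` key is exactly the pattern's first root: the single-axis tally can look healthy while the intersectional tally reveals that "women in senior roles" means, overwhelmingly, white women.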

In government (public service context): Rebuild how you collect and analyze feedback. Disaggregate data by multiple axes simultaneously. Instead of separate reports on outcomes by race, then by gender, then by disability, create “intersection maps” that show: What is the health outcome for Black disabled women? For Latinx trans men? For Asian elderly people living in poverty? Many agencies resist this, claiming “we don’t have that data.” You do; you are choosing not to disaggregate it that way. Start with one outcome and two identity axes (e.g., health access × race × gender). Hold listening sessions in neighborhoods and community spaces—not government offices. Ask specifically about compounding barriers. A city’s transportation department discovered through intersectional listening that its “accessible bus routes” served disabled white professionals but did not serve disabled immigrant parents because the stops didn’t align with schools, childcare, and informal employment hubs. They redesigned the entire map.
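An "intersection map" is, computationally, just a group-by over the joint key rather than each axis in isolation. A minimal sketch, using made-up records and a hypothetical `access_score` outcome:

```python
from collections import defaultdict

# Hypothetical outcome records; in practice these come from agency data.
records = [
    {"race": "Black", "gender": "woman", "disabled": True,  "access_score": 0.35},
    {"race": "Black", "gender": "woman", "disabled": False, "access_score": 0.60},
    {"race": "white", "gender": "woman", "disabled": True,  "access_score": 0.70},
    {"race": "white", "gender": "man",   "disabled": False, "access_score": 0.85},
]

def intersection_map(rows, axes, outcome):
    """Mean outcome per combination of axes, not per axis in isolation."""
    groups = defaultdict(list)
    for r in rows:
        groups[tuple(r[a] for a in axes)].append(r[outcome])
    return {combo: sum(vals) / len(vals) for combo, vals in groups.items()}

joint = intersection_map(records, ["race", "gender", "disabled"], "access_score")
# Separate per-axis reports would average away the compounded gap that
# the joint key makes visible.
print(joint[("Black", "woman", True)])
```

The design choice worth noting: the dictionary key is the whole tuple. Averaging over race alone, then gender alone, then disability alone would smooth the distinct, non-additive harm at the joint position back into the aggregate.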

In activist movements (activist context): Audit your own structures for intersectional reproduction of harm. Who shows up to your meetings? Whose energy sustains them? Then ask the harder question: Who doesn’t show up, and why? An activist coalition in a major city held its meetings at 7 PM, thinking this was accessible. They discovered through direct conversation that single mothers, shift workers, and people with evening caregiving responsibilities couldn’t come. They shifted to rolling meetings at different times and locations, with childcare and transit support. But note: the shift only happened because they explicitly asked people at those intersections why they weren’t there, not why they “should be.” The pattern requires naming the intersection before inviting participation.

In tech (product context): Integrate intersectional analysis into your design feedback loops. When testing an accessibility feature, don’t just include “disabled people”—include disabled people across intersections: disabled people of color, disabled LGBTQ+ people, disabled elderly people, disabled parents. Document where the design fails for which intersections. A social media platform’s alt-text accessibility feature worked well for users with visual impairments who read English but failed for Deaf and DeafBlind users and for speakers of languages with smaller screen-reader support. Only intersectional testing caught this. Build intersectional persona work into product discovery, not as an afterthought. Your personas should be specific enough to be real: “Maria, 34, Latina immigrant single mother with a chronic disability, works two part-time jobs, uses social media to manage her child’s special education IEP and connect with her community.” Not “person with disability” (generic, invisible).

Across all contexts: Document what you learn. When you discover a compounding harm, write it down. When you redesign in response, track the outcomes. This builds collective knowledge instead of repeating the same blindness cycle.


Section 5: Consequences

What flourishes:

The pattern generates adaptive capacity at exactly the points where commons systems were previously rigid. When you can see power clearly at intersections, you can design for it. The feedback-learning loop comes alive: information flows from positions that were previously invisible, generating new options. Members at underrepresented intersections—once they experience being seen and heard specifically—often become some of the most committed stewards of the commons because they recognize the system is learning to serve them. A tech company that implemented intersectional power analysis found that women of color engineers became active mentors and culture-shifters after years of quiet attrition. The commons becomes more porous and adaptive because it stops forcing people to fit a baseline shape.

What risks emerge:

The pattern can calcify into performative intersectionality—gathering data, writing reports, making no changes. This is where the commons assessment scores warn us. Resilience sits at 3.0 because the pattern sustains existing function without generating new robust structures. If power analysis becomes decoupled from redesign, the commons appears to listen while actually reproducing the same harms. Members experience this as deeper violation than silence: “They asked me to share my pain and did nothing.” The pattern also risks analysis paralysis: if every decision requires examining every intersection, nothing moves. Some commons collapse under the weight of seeing too much complexity without building decision-making capacity to hold it.

A critical risk at this assessment level (3.2 overall): Rigidity through routinization. If Intersectional Power Analysis becomes a checkbox—annual audit, diversity dashboard, done—the pattern loses its living quality. Intersectional power shifts constantly. A person’s position changes. New intersections emerge. The analysis must be continuous, not episodic. The vitality reasoning flags this clearly: the pattern maintains health but doesn’t necessarily generate new adaptive capacity. Watch for hollow practice.


Section 6: Known Uses

The Highlander Center (activist tradition, 1930s–present): Myles Horton and his successors at this legendary adult education institution embedded intersectional power analysis into every workshop long before the term existed. They gathered poor people, Black people, women, working people—then noticed that a poor white Appalachian man’s oppression and a poor Black woman’s oppression were different kinds, shaped by different histories and structures. Their pedagogy explicitly taught people to see this. When the Civil Rights Movement emerged, Highlander was already training leaders to understand that a movement for Black liberation that didn’t also address gender, class, and regional inequality would reproduce existing hierarchies. They trained Rosa Parks, John Lewis, and countless others not just in tactics but in intersectional consciousness. The pattern: they made power visible at intersections by having people sit together across difference and name what they lived.

The National Domestic Workers Alliance (activist, 2007–present): NDWA organized workers at a complex intersection: women of color (largely immigrant, many undocumented) in jobs (housecleaning, nanny work, elder care) that are gendered, racialized, and exploited. Traditional labor organizing assumed a male manufacturing worker. NDWA’s power analysis began with the question: “What does an economy built on our exploitation look like?” They organized at intersections—documented and undocumented, different racial groups, different immigration statuses—and built power differently. They created peer training circles where a Dominican housekeeper and a Filipina nanny and a Guatemalan home care aide could name shared oppression while also acknowledging their different relationships to immigration status and family obligation. They won workplace standards not for “workers” but for women in feminized, racialized care work. The pattern: they gathered feedback and power analysis from the intersection, not about it.

The Equality Labs data initiative (tech context, 2015–present): This initiative uses intersectional power analysis to understand algorithmic discrimination in the U.S. caste system. Most bias research treats race, religion, and disability as separate. Equality Labs collected data and feedback from people at the intersection of caste identity, race, disability, and gender. They discovered that the algorithm bias experienced by a Dalit disabled woman is not the sum of “Dalit bias” + “disability bias”—it is its own distinct harm, shaped by how caste and disability and gender combine in surveillance and employment systems. They published findings that shifted how tech companies think about bias testing. The pattern: they named the intersection, gathered specific feedback, and redesigned the research methodology itself.


Section 7: Cognitive Era

In an age of machine learning and predictive systems, intersectional power analysis becomes both more critical and more fragile.

The risk is severe: AI systems are trained on data that typically flattens power. A hiring algorithm trained on historical data “learns” that women of color are less successful in tech—not because it’s true, but because the system reproduces existing discrimination. The algorithm doesn’t see that a Latina engineer with a disability who left after two years did so because of intersecting harassment and inaccessible infrastructure, not lack of capability. It sees a churn statistic. Without intersectional analysis, AI replicates and accelerates the very blindness the pattern is designed to interrupt.

The opportunity is equally significant: Distributed data systems can now make intersectional analysis visible and continuous in ways that were impossible before. Instead of annual audits, a commons can track how people at specific intersections experience systems in real time. A platform can monitor not just “engagement” but “engagement by Deaf + low-income + person of color” and notice when it drops. A government agency can flag policy outcomes that compound disadvantage automatically. This requires building intersectional analysis into the feedback infrastructure itself, not layering it on top.
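The monitoring move described here — noticing when a specific intersection's metric drops even while the aggregate looks fine — can be sketched as a per-segment comparison against each segment's own baseline. The segment labels, rates, and threshold below are all hypothetical:

```python
def flag_drops(baseline, current, threshold=0.2):
    """Flag intersection segments whose metric fell more than `threshold`
    (as a fraction) below their own baseline. Aggregate averages would
    hide exactly these segment-level drops."""
    flags = []
    for segment, base in baseline.items():
        now = current.get(segment, 0.0)
        if base > 0 and (base - now) / base > threshold:
            flags.append(segment)
    return flags

# Hypothetical weekly engagement rates per intersection segment.
baseline = {
    ("Deaf", "low-income", "person of color"): 0.40,
    ("hearing", "high-income", "white"): 0.55,
}
current = {
    ("Deaf", "low-income", "person of color"): 0.25,  # fell ~37% -- flagged
    ("hearing", "high-income", "white"): 0.54,        # fell ~2% -- fine
}
print(flag_drops(baseline, current))
```

Note the baseline is per segment, not global: a segment that was already under-served should be compared with its own history, otherwise the comparison just re-encodes the privileged baseline the pattern warns against.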

The tech context translation demands this specifically: Products designed for “users” are already being designed for people at intersections. The question is whether the product team sees those intersections. If you’re building any commons in a networked, algorithmic environment, you must audit your feedback systems for intersectional blindness before deployment. You must test with people at specific intersections. You must design for continuous intersectional monitoring. This is not optional; it is foundational. Without it, your digital commons will reproduce the same concentrated privilege as every other system.


Section 8: Vitality

Signs of life:

  1. Specificity appears in language. Conversations shift from “low-income people” to “low-income disabled Latina mothers.” The commons names intersections; people at those intersections recognize themselves.

  2. Feedback loops include people previously absent. It is no longer only the same faces showing up to decisions. New people—particularly those navigating multiple, simultaneous marginalities—become regular contributors to strategy conversations.

  3. Redesigns happen in response to intersectional analysis. Not reports written and shelved. When feedback shows that a particular intersection experiences compounding harm, systems change: meeting times shift, accessibility improves, processes are redesigned, power is redistributed.

  4. The commons learns faster at intersection points. When you pay attention to feedback from people at complex intersections, you spot system failures earlier. These are often the people experiencing the system’s greatest brittleness first.

Signs of decay:

  1. Analysis becomes decoupled from action. The organization runs an intersectional audit annually, produces a report, and makes no structural changes. The pattern becomes performative theater.

  2. Intersectional language enters while intersectional practice stays frozen. Staff talk about “centering intersectionality” while meetings remain inaccessible, power remains concentrated, and the same invisible baseline continues to shape decisions.

  3. Intersectional feedback is gathered from small groups while systems are designed for the majority. You listen deeply to people at margins and then optimize for the center. They feel tokenized.

  4. The work becomes abstract. “Intersectionality” becomes a noun to discuss rather than a living practice. People reference it without doing it. The commons becomes conceptually sophisticated and structurally unchanged.

When to replant:

Restart this practice when you catch your commons becoming rigid around a simplified analysis. The moment you notice decisions being made without seeing power at intersections—or notice people at those intersections withdrawing or speaking cautiously—the pattern needs replanting. The right moment is before decay sets in: when you sense routinization, when language gets ahead of practice, restart the intersectional feedback-gathering immediately. Make it continuous, not episodic. Treat it as the living heartbeat of the commons, not a periodic health check.