Effective Altruism Personal
Applying evidence and reason to personal giving and action choices—by measuring impact, comparing causes, and directing resources toward high-leverage interventions—enables significantly better outcomes on the problems you care about solving.
> [!NOTE]
> Confidence Rating: ★★★ (Established). This pattern draws on Effective Altruism.
Section 1: Context
We inhabit a moment of fractured attention and competing moral claims. A corporate leader faces requests from dozens of charities. A government staffer must allocate a limited policy budget across healthcare, education, climate, and criminal justice. An activist choosing which campaign to join sees a landscape of urgent crises—each demanding full commitment. An engineer exploring “tech for good” finds dozens of problem spaces that look equally noble. The ecosystem is not fragmented from lack of caring; it’s overwhelmed by legitimate need and unclear leverage points. Evidence of what actually works—at what cost, with what durability—is unevenly distributed. Many actors make giving and impact choices based on emotion, proximity, or narrative salience rather than comparative effectiveness. The result: significant resources flow toward visible, familiar problems while neglected, high-leverage interventions remain underfunded. The system sustains itself without generating the adaptive insight needed to shift which problems get tackled and how.
Section 2: Problem
The core conflict is Effective vs. Personal.
The tension runs deep: should you allocate your time, money, and attention based on cold calculation of impact-per-unit-invested, or based on what moves you personally, what your community needs, what feels like it matters to you?
The Effective side says: one life saved in a malaria-endemic region through insecticide nets costs $2,000. One case of blindness prevented costs $15. One child pulled from trafficking costs $8,000. If resources are finite—and they are—shouldn’t you maximise lives improved per dollar, per hour, per career decision? This logic is ruthless. It can point to neglected causes: wild animal suffering, institutional biosecurity, long-termist AI safety. It can reveal that your favourite charity, however sincere, produces half the impact per dollar of a less emotionally resonant alternative.
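To make the arithmetic concrete, here is a minimal sketch of how a fixed budget converts into outcomes at the per-outcome costs quoted above. The budget figure is hypothetical, and treating each cost as a fixed point estimate is a simplifying assumption; real cost-effectiveness estimates carry wide uncertainty ranges.

```python
# Illustrative only: converts a fixed giving budget into expected
# outcomes using the per-outcome costs quoted in the text.

BUDGET = 100_000  # hypothetical annual giving budget, in dollars

# cost per unit outcome, from the figures above
interventions = {
    "insecticide-treated nets (life saved)": 2_000,
    "blindness prevention (case averted)": 15,
    "anti-trafficking (child removed)": 8_000,
}

# Cheapest-per-outcome first: the comparison the Effective side insists on.
for name, cost in sorted(interventions.items(), key=lambda kv: kv[1]):
    outcomes = BUDGET // cost
    print(f"{name}: ~{outcomes} outcomes per ${BUDGET:,}")
```

The point is not the specific numbers but the habit: any two interventions with a cost per outcome can be put on the same axis and compared.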
The Personal side resists: impact is not only measurable. Motivation matters. If you’re forced to work on a cause that leaves you cold, you’ll burn out, drop out, become cynical. Community ties, cultural meaning, local knowledge—these are real goods that spreadsheets cannot capture. Outsourcing moral judgment to an algorithm feels like a betrayal of what it means to act with integrity.
The system breaks when either pole dominates. Pure effectiveness becomes hollow: you optimise for metrics that miss what humans actually value. Pure personal choice becomes parochial: resources cluster around the visible and comfortable, while preventable suffering persists in the margins. Left unresolved, the tension leaves practitioners oscillating between guilt and resentment.
Section 3: Solution
Therefore, practise systematic cause comparison within your authentic commitments, using evidence to sharpen your leverage rather than to override your judgment.
This pattern does not ask you to become a utilitarian calculator. It asks you to bring rigour to the choices you are already making.
Begin with what you care about. What problems actually call to you? What communities do you belong to or feel drawn to serve? This is your rooting—the genuine soil where your motivation grows. This is not to be engineered away. Then apply the discipline of evidence. Within that domain, what interventions produce measurable change? At what cost? What do we know and what remains uncertain? This is where EA reasoning becomes a tool, not a master.
The shift is cognitive: from defending your chosen cause against challenge to comparing approaches within it. A corporate leader committed to global health doesn’t need to abandon that commitment to fund malaria prevention instead of cancer research. But she can ask: within global health, what returns the most improvement per dollar? An activist fired up by criminal justice reform doesn’t pivot to AI safety because the EA calculator says it matters more. But she can ask: which policy campaigns in criminal justice have the strongest evidence of success? Which interventions actually reduce incarceration versus those that merely sound reformist?
This pattern regenerates the system by creating feedback loops. You measure what works. You adjust. You learn. Your giving becomes a living experiment, not a habit. The personal commitment provides staying power; the evidence provides course correction. Over time, both clarity and motivation compound. You develop what the EA tradition calls “moral growth”—not abandonment of who you are, but deepening understanding of how to express your values effectively.
The mechanism is one of integration: treating the effective and personal as partners, not antagonists. This sustains the vitality of the commons because it keeps practitioners engaged and learning, not oscillating between zealotry and burnout.
Section 4: Implementation
For corporate givers: Establish a “cause model” workshop with your leadership and grants team. Map three to five problem domains your firm authentically connects to: a technology company might choose digital literacy, employment for underrepresented populations, and biosecurity. Within each domain, commission or review an evidence summary: what does rigorous evaluation tell us about intervention effectiveness? Ask your grants team to score candidate charities not only on mission alignment but on evidence of impact per dollar deployed. Fund three to five “pilot partners” in the highest-evidence approaches within your chosen domains. Commit to a two-year evaluation cycle: measure results, compare to benchmarks, reinvest in proven approaches, and sunset underperformers. This prevents mission drift while maintaining the personal stake—you’re deepening work you already believe in, not outsourcing judgment.
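The scoring step above can be sketched in a few lines. Everything here is hypothetical—the charity names, the scores, and the weights—and a real version would draw its evidence and impact figures from the commissioned evidence summaries rather than invented numbers:

```python
# A minimal sketch of scoring grant candidates on both mission
# alignment and evidence of impact per dollar. All data is invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    mission_alignment: float    # 0-1, from the leadership workshop
    evidence_strength: float    # 0-1, quality of the impact evidence
    outcomes_per_dollar: float  # measured units of change per dollar

candidates = [
    Candidate("Charity A", 0.9, 0.4, 0.002),
    Candidate("Charity B", 0.6, 0.8, 0.010),
    Candidate("Charity C", 0.7, 0.7, 0.006),
]

# Normalise impact so the weighted criteria are comparable.
max_impact = max(c.outcomes_per_dollar for c in candidates)

def score(c: Candidate, w_align=0.3, w_evidence=0.3, w_impact=0.4) -> float:
    # Alignment keeps the personal stake; evidence and impact-per-dollar
    # supply the discipline. The weights themselves are a policy choice.
    return (w_align * c.mission_alignment
            + w_evidence * c.evidence_strength
            + w_impact * c.outcomes_per_dollar / max_impact)

ranked = sorted(candidates, key=score, reverse=True)
for c in ranked:
    print(f"{c.name}: {score(c):.2f}")
```

Note that mission alignment stays in the formula: the point is to discipline the personal commitment, not to delete it.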
For government policymakers: Build an “impact framework” into your policy development process. When a new program is proposed, ask: what is the evidence of effectiveness from similar interventions? What is the cost per unit outcome? This does not mean paralysis by analysis—it means systematic comparison rather than intuition-based ranking. Allocate 10–15% of your budget to randomised pilots in your high-priority areas. Partner with evaluation firms to rigorously measure outcomes. Use the data to make the case to elected officials: “We tried three approaches to reduce recidivism; this one cut return rates by 18%, not 5%, at similar cost.” Evidence becomes political armour, not obstacle. Your authentic commitments (criminal justice, mental health, education) remain fixed; your method for achieving them becomes evidence-informed.
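The comparison behind a claim like “this one cut return rates by 18%, not 5%, at similar cost” reduces to cost per unit outcome. A minimal sketch, with hypothetical pilot figures in the spirit of the recidivism example:

```python
# Hypothetical pilot results: three approaches, each with a total cost
# and the percentage-point reduction in return rates it achieved.
pilots = [
    {"name": "Approach A", "cost": 2_000_000, "reduction_pts": 18},
    {"name": "Approach B", "cost": 2_100_000, "reduction_pts": 5},
    {"name": "Approach C", "cost": 1_500_000, "reduction_pts": 9},
]

# Cost per percentage point of reduction is the comparison unit.
for p in pilots:
    p["cost_per_point"] = p["cost"] / p["reduction_pts"]

best = min(pilots, key=lambda p: p["cost_per_point"])
print(f"Most cost-effective pilot: {best['name']} "
      f"(${best['cost_per_point']:,.0f} per point)")
```

In practice the randomised pilots supply the reduction figures; the arithmetic is trivial, but only once the pilots exist.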
For activists: Before committing years to a campaign, conduct a “leverage assessment.” Define the problem: homelessness, police violence, housing access. Then research: which interventions addressing this problem have the strongest track record? Interview campaigners who’ve worked on this issue for a decade. Read independent evaluations. Ask yourself: is this campaign designed to address the root cause or to perform activism? Does it actually move the needle? You may still choose a campaign that is harder to measure—community organising, cultural narrative work—but you do so knowing the trade-off, not hiding from it. Some activists now pair their on-the-ground work with data collection partners, creating feedback loops that sharpen strategy.
For engineers in tech: Apply the same comparative thinking you use in systems design to problem selection. When evaluating “tech for good” projects, model the Theory of Change: what specific change do you expect? What evidence supports that causal link? What is the counterfactual—would this have happened anyway? Too many engineers build solutions to problems they’ve imagined rather than diagnosed. Run small pilots: deploy your tool in a genuine context, measure actual use and outcomes, compare against baseline. A team building an education app should know: does it improve learning outcomes relative to the existing alternative? By how much? For which students? Does it sustain engagement beyond month three? Use metrics, but ones that matter. This keeps you grounded in actual problem-solving rather than chasing technical elegance.
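The baseline comparison above can be sketched in a few lines. The learning-gain scores here are invented, standing in for whatever pre/post measure the pilot actually uses:

```python
# A sketch of the pilot comparison: did the tool beat the existing
# alternative, and by how much? Scores are hypothetical test-gain data.
from statistics import mean

baseline_gains = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6]  # existing alternative
pilot_gains    = [5.2, 4.9, 6.1, 5.0, 4.7, 5.6]  # with the new tool

def relative_improvement(pilot, control):
    """Percent improvement of the pilot mean over the control mean."""
    return (mean(pilot) - mean(control)) / mean(control) * 100

uplift = relative_improvement(pilot_gains, baseline_gains)
print(f"Mean gain uplift vs baseline: {uplift:.1f}%")
```

A real pilot would also need sample sizes large enough to rule out noise, and the month-three engagement question requires longitudinal data, not a single snapshot.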
Section 5: Consequences
What flourishes:
Practitioners report increased motivation over time. The initial discomfort of confronting hard trade-offs—“This cause I love produces less impact than I believed”—shifts into clarity and agency. You stop defending indefensible choices and start refining good ones. Teams develop better institutional memory: “We tried three approaches; here’s what we learned.” This becomes a commons asset, passed to new team members. Resource allocation becomes more efficient without becoming dehumanised; you’re buying more real, measured change per dollar spent. Perhaps most vitally, this pattern breeds intellectual humility. When you measure, you see failure. You also see surprise successes. The system becomes adaptive rather than static.
What risks emerge:
Measurement bias is acute. You begin to count what is easy to quantify—lives saved, dollars spent—while underweighting what is hard to measure: cultural shifts, dignity, community cohesion, long-term institutional change. A criminal justice funder obsessed with recidivism rates may miss a program that builds social belonging even if it doesn’t reduce re-arrest. The commons assessment reflects this: resilience sits at 3.0, ownership at 3.0. If practitioners become dependent on external evidence—published studies, randomised trials—they may lose local knowledge and adaptive capacity. A government team may wait for a peer-reviewed evaluation while a grassroots organisation already knows what works in their neighbourhood. Worst-case decay: EA thinking becomes performative. You gather impact data to justify a predetermined choice, then call it evidence-based. The pattern becomes hollow, a box-ticking exercise that sustains the appearance of rigour without its substance.
Section 6: Known Uses
GiveWell and major donors (2010–present): GiveWell, founded on EA principles, publicly ranked charities on evidence of impact, creating massive reallocation of philanthropic dollars. A donor previously giving $50,000 yearly to a beloved local cause, after encountering GiveWell’s research, shifted giving to malaria prevention. The psychological moment mattered: not guilt or coercion, but genuine shift in understanding leverage. Over a decade, GiveWell’s top charities have absorbed hundreds of millions in funding. Some traditional charities initially resisted, then adapted: they began measuring outcomes more rigorously, redesigning programs based on data. Vitality marker: the donors didn’t burn out; they deepened their engagement because they saw evidence that their money was preventing real suffering.
UK government and crime reduction (2013–2020): The UK’s Behavioural Insights Team and the Home Office partnership took specific criminal justice policies—from police stop-and-search practices to rehabilitation programs—and subjected them to randomised trials. When a policy showed weak evidence of impact, they didn’t shutter it from ideology; they redesigned it. One example: “police-led triage” for mental health crises reduced both police time spent and re-crisis rates. The pattern worked because civil servants remained committed to crime reduction (personal commitment) while shifting how they pursued it (evidence discipline). Practitioners reported higher morale: “We’re actually solving the problem, not just performing solutions.”
An engineer’s pivot (Google, 2018): An engineer deeply committed to education access spent two years building an offline learning platform for low-bandwidth regions. After deployment, measurement showed: learners used it, but dropout was high and learning gains were small. Rather than defend the work, the team reanalysed the data, interviewed users, and discovered: students lacked stable electricity and device ownership, not content access. They pivoted entirely to solar microgrids and device refurbishment—utterly different work. The personal commitment (education access) stayed constant; the evidence shifted the mechanism. The engineer described it as “heartbreaking and clarifying”: heartbreaking to abandon a project you’d built, clarifying to finally address the actual constraint.
Section 7: Cognitive Era
In an era of machine learning and distributed intelligence, the pattern accelerates and mutates. AI systems can now analyse thousands of interventions simultaneously, identifying neglected high-impact opportunities humans might miss: an ML model trained on global health data might surface that improving maternal nutrition in region X produces more downstream health gains than three currently-funded programs combined. This is powerful and dangerous.
The risk: you delegate judgment to a system trained on historical data that reflects past biases and gaps. You outsource moral reasoning to an algorithm that cannot feel community, cannot attend to dignity, cannot distinguish between “countable impact” and “actually-matters impact.” In the tech context, engineers now often use EA frameworks to justify AI projects themselves—”This system will prevent x number of harms”—without questioning whether the harms are real or measured well. The commons assessment scores resilience at 3.0 for good reason: dependency on external, algorithmic truth-telling is fragile.
The leverage: if you pair AI-assisted cause comparison with grounded deliberation, you gain adaptive capacity. A foundation can use machine learning to identify high-uncertainty, high-potential-impact areas, then convene practitioner networks to test whether the signal is real. An activist can use AI to model which policy levers actually move the metrics they care about, then deploy on-the-ground intelligence to validate the model. The AI becomes a lens, not a dictator.
The critical shift: bring EA reasoning into AI system design, not after. Engineers must build impact measurement into platforms from day one, not bolt it on. Data governance becomes moral work: who decides what counts as impact? Who is harmed by our metrics? What do we refuse to measure because it cannot be measured? Practitioners in the tech context must learn to question the measurement itself, not just apply it faithfully.
Section 8: Vitality
Signs of life:
A practitioner regularly updates their cause model—not annually, but in response to new evidence. They can articulate both why they chose this domain and how they’d know they’re wrong. They report genuine curiosity about comparative alternatives, not defensiveness. Organisations using this pattern show decreasing staff turnover in high-impact roles; people stay because they see evidence that their work matters. You see practitioners openly discussing failures: “We funded this approach for two years; the data shows it didn’t work; here’s what we’re trying instead.” This transparency—a rarity in most sectors—signals vitality. Communities of practice form: a network of corporate funders, government officials, or activists sharing impact evaluations, learning collectively rather than guarding secrets.
Signs of decay:
A practitioner has not revisited their cause choice in three years, though significant new evidence has emerged. They gather impact metrics but rarely act on them—data gathering becomes performative. Staff express burnout not from hard work but from meaninglessness: “We measure impact, but nothing changes.” Organisations using EA frameworks become brittle around donors who question the evidence: rather than updating, they double down on existing commitments and dismiss alternatives. The pattern has become ritual, not reasoning. You see metrics that are clearly gamed: a program reports 85% success rate on a measure no peer evaluator uses, suggesting the metric was designed to flatter, not inform. Leadership uses impact data selectively, highlighting hits and burying misses. The commons becomes extractive: evidence is gathered but not shared.
When to replant:
If your team’s cause model has been stable for three years despite significant shifts in evidence or capacity, you need replanting. Convene your core team and practitioners from adjacent domains. Rerun your cause comparison from scratch, not to confirm but to genuinely test your current bet. If you find yourself defending your chosen approach rather than refining it, replant. The pattern works only when you’re willing to be wrong in public and adjust accordingly. This moment of replanting—uncomfortable and humbling—is when the pattern regenerates its vitality.