Epistemic Guardrails I Use
It’s easy to believe your own framing. Separate thinking output from identity. Revise without ego.
The Observation
The more you write, the more you develop positions. The more you develop positions, the more you become identified with them. The more you’re identified with them, the harder they are to revise.
This is a trap.
I write to clarify thinking—to take vague intuitions and force them into coherent statements. But coherent statements can become cages. Once I’ve said something publicly, there’s pressure to defend it even when new evidence suggests it’s wrong.
Over time, I’ve developed guardrails—explicit checks that help me notice when thinking has ossified into position-holding.
What Breaks Without Guardrails
When epistemic rigor slips, specific failures emerge:
Confirmation bias. I notice evidence that supports what I already believe. I don’t notice evidence that contradicts it. The filtering is automatic and invisible—I feel like I’m being objective while systematically ignoring inconvenient data.
Motivated reasoning. The conclusion comes first; the argument follows. Instead of asking “what does the evidence suggest?” I ask “how can I justify what I want to believe?” The reasoning looks logical but runs backward.
Identity fusion. My positions become part of who I am. Attacking the position feels like attacking me. I defend ideas that should be updated because updating them feels like losing part of myself.
Tribal capture. Positions align with groups I identify with. Revising them means diverging from the group. The social cost of changing my mind exceeds the epistemic cost of being wrong—so I stay wrong.
Overconfidence. I become more certain than the evidence warrants. Probabilities collapse into certainties. Nuance dissolves into assertion. The world is more complicated than my model, but my model feels complete.
The Guardrails
These are specific checks I try to run. Not always successfully—but the attempt helps.
State the confidence level. Before asserting something, ask: how confident am I actually? Is this “pretty sure” or “definitely true”? Making confidence explicit prevents certainty from creeping in where it doesn’t belong.
Name what would change my mind. For any position, there should be evidence that would cause me to revise it. If I can’t name what that evidence would look like, the position isn’t actually responsive to reality—it’s just something I’ve decided to believe.
Steelman the opposition. Can I articulate the strongest version of the opposing view? Not the weak version that’s easy to dismiss—the version that someone thoughtful would actually hold. If I can’t, I probably don’t understand the issue well enough to have a strong opinion.
Track revision history. What positions have I changed over the past few years? If the answer is “none,” something is wrong. Either I was right about everything before (unlikely) or I’ve stopped updating (more likely).
Check for social pressure. Would I hold this position if everyone I respect disagreed? If not, how much of my belief is actually about the object-level claim versus about group membership?
Notice emotional charge. When someone challenges a position and I feel angry or defensive rather than curious, that’s diagnostic. The emotional reaction suggests the position is doing identity work, not epistemic work.
Separate author from argument. Would I evaluate this argument differently if it came from someone I disagreed with? If yes, I’m reasoning about the source rather than the content.
The Separation Practice
The most useful shift has been learning to separate output from identity.
When I write something, it’s a snapshot of thinking at a particular time. It’s not a permanent statement of who I am. Tomorrow’s evidence might make today’s writing obsolete. That’s fine—that’s how learning works.
This requires deliberate practice. The instinct is to defend past work. The practice is to evaluate past work with the same skepticism I’d apply to someone else’s.
Questions I try to ask:
- If I read this without knowing who wrote it, would I find it convincing?
- What would I think of this argument if someone I disagreed with made it?
- Has anything happened since I wrote this that would change the conclusions?
The answers are often uncomfortable. Sometimes I was wrong. Sometimes I was right but for the wrong reasons. Sometimes I overstated confidence. Sometimes I missed obvious counterarguments.
The Revision Commitment
Being willing to revise is different from being willing to abandon.
Every position should be revisable, but not every challenge warrants revision. The guardrails help distinguish between legitimate updates and noise:
- New evidence that I wasn’t aware of → probably warrants revision
- Same evidence, new interpretation → worth considering
- No new evidence, just disagreement → probably doesn’t warrant revision
- Emotional reaction without substance → definitely doesn’t warrant revision
The goal isn’t to be infinitely malleable. It’s to be responsive to legitimate information while resistant to illegitimate pressure.
The Public Thinking Problem
Writing publicly creates specific challenges for epistemic rigor.
Public positions feel like commitments. Changing them publicly feels like admitting failure. The incentive is to double down rather than update.
I’ve tried to build the opposite norm: revision as strength, not weakness. When I change my mind publicly, I try to explain what changed and why. The goal is to model that updating is normal, not embarrassing.
This is harder than it sounds. The instinct is to quietly shift positions and hope no one notices the inconsistency. Being explicit about revision requires overcoming that instinct.
The Limits of Guardrails
These guardrails aren’t sufficient. They’re reminders, not guarantees.
Even with conscious effort, I have blind spots. I’m embedded in particular communities, exposed to particular information, shaped by particular experiences. The guardrails help at the margin, but they can’t eliminate bias—they can only reduce it.
The appropriate response is humility, not despair. I’m going to be wrong about some things. The goal isn’t to be right about everything; it’s to be the kind of thinker who updates when wrong and holds positions provisionally rather than absolutely.
The Outcome
Separate thinking output from identity. Revise without ego.
Positions should be tools for understanding, not badges of identity. When they become badges, they stop being responsive to evidence and start being self-protective.
The guardrails help me notice when that transition is happening. They don’t prevent it—nothing fully prevents it—but they make correction possible.
What I believe today isn’t what I believed five years ago. Five years from now, I expect it will be different again. That’s not failure. That’s how thinking is supposed to work.