
How We Work

Most information you encounter has been shaped by someone's incentives, filtered through someone's biases, and compressed into someone's narrative. Our method is designed to work around all of that.

00 Anchor in What's Known

Every investigation starts with established ground. What do we already know with high confidence? What's been replicated, tested, and verified across multiple independent sources?

This sounds obvious, but it's where most analysis goes wrong. People start with a hypothesis and look for support. We start with verified facts and build outward. The difference matters because it prevents us from accidentally constructing elaborate arguments on shaky foundations.

01 Observe Without Expectation

Before trying to explain anything, we look at what's actually there. What patterns exist in the data? What keeps showing up? What's conspicuously absent?

This is harder than it sounds. Our brains are wired to see what we expect to see—psychologists call this confirmation bias. So we deliberately slow down this step and ask: "What would I notice if I had no theory at all?" The goal is to let the evidence speak before we start interpreting it.

02 Find Cross-Domain Patterns

Here's something remarkable: the same structural patterns show up in completely unrelated fields. Feedback loops appear in ecosystems, economies, and nervous systems. Phase transitions happen in physics, social movements, and markets. Self-organization shows up in crystal formation, bird flocks, and cities.

When you see the same pattern appearing independently in multiple domains, you're probably looking at something fundamental—a deep structural feature of reality rather than a coincidence. These cross-domain patterns are some of the most reliable signals we have.

Example: We found the same "corruption stack" pattern—selection, training, ideology, guild protection—in academia, media, healthcare, and government. When the same structure appears independently across four different institutions, it's unlikely to be accidental. It points to a deeper mechanism.

03 Build Multiple Independent Paths

A single chain of reasoning is fragile. Break one link and the whole thing falls apart. So instead of relying on one argument, we build multiple independent routes to the same conclusion.

Think of it like triangulation. If historical evidence, mechanistic reasoning, and cross-domain analogy all point in the same direction, your confidence should be much higher than if you only have one of those. When multiple independent paths converge on the same answer, the probability that they're all wrong in the same way drops dramatically.

Example: When we looked at consciousness, we found four independent lines of evidence converging: neural correlates data, information integration theory, the binding problem solution, and anesthesia phenomenology. No single line would be sufficient, but four converging paths significantly increase confidence.
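The triangulation logic above can be made concrete with a toy probability sketch. The per-path error rates below are illustrative assumptions, not measured values; the point is only that when paths fail independently, the chance that all of them are misleading at once shrinks multiplicatively.

```python
def prob_all_paths_wrong(error_rates):
    """Probability that every evidence path is misleading at once.

    Assumes the paths fail independently of each other; if the paths
    share a common source of error, this is an underestimate.
    """
    p = 1.0
    for rate in error_rates:
        p *= rate
    return p

# Illustrative (made-up) per-path error rates for four evidence lines.
paths = {
    "historical evidence": 0.30,
    "mechanistic reasoning": 0.25,
    "cross-domain analogy": 0.40,
    "direct data": 0.20,
}

single_best = min(paths.values())                 # best single path: 20% wrong
combined = prob_all_paths_wrong(paths.values())   # all four wrong together

print(f"best single path wrong: {single_best:.0%}")   # 20%
print(f"all four wrong together: {combined:.1%}")     # 0.6%
```

Note that the independence assumption is doing all the work here, which is exactly why the method insists the paths be genuinely independent rather than restatements of one another.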

04 Test for Coherence

Support for a conclusion isn't binary—it's a gradient. A conclusion can be more or less well-supported, and one of the best ways to evaluate it is coherence testing: does this conclusion fit with other things we're confident about?

If a new finding conflicts with well-established knowledge, that doesn't automatically make it wrong—but it does mean we need stronger evidence before accepting it. The key question is: "If this is true, what else would have to be true? And do we see that?"

This is how we catch ourselves. A conclusion that sounds right but creates contradictions elsewhere is probably missing something important.

05 Distill Into Principles

The final step is compression: turning findings into clear, precise principles that can be tested and applied elsewhere.

A good principle is specific enough to be wrong—and that's what makes it useful. Vague principles ("everything is connected") are unfalsifiable and therefore worthless for actual understanding. We aim for principles that make concrete predictions, can be checked against new data, and generate new insights when applied to unfamiliar domains.

If we can't distill a finding into a testable principle, we probably don't understand it well enough yet.

How We Guard Against Our Own Errors

Every analyst is vulnerable to the same biases they study. Here's how we actively counteract that:

Source independence: We weight evidence from independent sources more heavily than correlated sources. Ten articles citing the same study count as one data point, not ten.

Incentive mapping: Before trusting any source, we ask: what incentives shape this information? An industry-funded study, a politically aligned think tank, a journalist chasing engagement—each has systematic biases we need to account for.

Steelmanning the opposition: Before we conclude anything, we construct the strongest possible counterargument. If we can't steelman the opposing view, we don't understand the topic well enough to take a position.

Null hypothesis thinking: We always ask: what would we expect to see if this hypothesis were false? If the evidence looks the same either way, we don't have evidence at all.

Tribal detection: Are we reaching this conclusion because the evidence supports it, or because it aligns with a group identity? This is the hardest bias to catch, which is why we check for it explicitly.

Explicit confidence levels: We state how confident we are and why. "Decoded" means high confidence from multiple converging paths. "Provisional" means promising but not yet fully verified. This prevents false certainty.
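Several of these guards have a simple Bayesian reading, sketched below with illustrative numbers (the probabilities are assumptions chosen for the example, not real measurements): an observation only counts as evidence to the extent that it is more likely if the hypothesis is true than if it is false. If it looks the same either way, the likelihood ratio is 1 and the update is zero.

```python
def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Bayesian odds update: posterior odds = prior odds * likelihood ratio.

    If the observation is equally likely whether the hypothesis is
    true or false (ratio == 1), the posterior equals the prior:
    it was never evidence at all.
    """
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds (h : not-h) to a probability."""
    return odds / (1.0 + odds)

prior = 1.0  # 1:1 odds, i.e. 50% before seeing anything

# Diagnostic evidence: much more likely if the hypothesis is true.
strong = update_odds(prior, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(f"diagnostic evidence -> {odds_to_prob(strong):.0%}")  # 90%

# Non-evidence: looks the same whether the hypothesis is true or false.
flat = update_odds(prior, p_e_given_h=0.6, p_e_given_not_h=0.6)
print(f"symmetric 'evidence' -> {odds_to_prob(flat):.0%}")   # still 50%
```

This is the null-hypothesis question in code form: before updating, ask what you would expect to see if the hypothesis were false, and update only on the difference.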

Looking for the compressed, technical version of this method? View the agent/research version →