Models Are Maps
In 1933, a London engineering draftsman named Harry Beck redesigned the map of the Underground. The old map had tried to show geographic accuracy—actual distances, real positions of stations relative to the surface streets. It was a mess. Tangled lines, uneven spacing, a web that was technically correct but practically useless for the one thing riders needed: figuring out how to get from here to there.
Beck's radical insight was to throw out geographic truth. He straightened the lines, equalized the spacing between stations, and distorted the distances dramatically. In his version, stations in central London appear spread far apart, while outer stations that are actually miles from each other seem close together. Almost nothing about the map is geographically accurate.
It is, without question, the most useful transit map ever created. Nearly every major metro system in the world now copies its design principles. The map is wrong about London. And it is exactly right for navigating London's trains.
That tension—between being wrong about reality and being useful for navigating it—turns out to be the defining feature of every model, theory, and mental framework we use to understand anything. Understanding this changes how we hold every idea we have.
What Maps Actually Do
A map represents territory. It is not the territory.
This sounds obvious when we're talking about paper maps. A map of London doesn't contain London's buildings, people, weather, or history. It contains symbols—lines, dots, labels—that correspond to selected features of the real city. The symbols are useful because they preserve certain relationships. If station X is two stops north of station Y on the map, it's two stops north on the actual railway.
But a map necessarily leaves out almost everything about its territory. It has to. A map that included every detail of London—every brick, every pedestrian, every shifting cloud—would be the size of London itself. It would be useless. Jorge Luis Borges, the Argentine writer, once wrote a one-paragraph story, "On Exactitude in Science," about an empire whose cartographers made a map so detailed it was exactly the size of the empire. It was, of course, completely impractical. The citizens abandoned it and let it decay.
In other words: the value of a map is not completeness. The value of a map is what it leaves out while preserving what matters for a specific purpose. Compression is the feature, not the flaw.
Every Model Is a Map
This isn't just about cartography. Every model, theory, framework, and concept we use is a map of some territory.
Newton's laws of motion are a map of how physical objects behave. Supply-and-demand curves are a map of market dynamics. The periodic table is a map of chemical elements and their relationships. Your mental model of a close friend—what they care about, how they'll react, what makes them laugh—is a map of their personality.
Each of these preserves certain real relationships while leaving out enormous amounts of detail. Newton's laws capture the behavior of billiard balls beautifully but say nothing about their color or temperature. Supply-and-demand curves describe price movements but ignore the emotional experience of buying and selling. Your mental model of your friend captures broad patterns but misses the nuances that would surprise you at dinner next week.
Every model compresses reality. That compression makes the model usable. And that compression guarantees the model is incomplete.
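The compression argument can be made concrete in a few lines of code. The sketch below (station names and coordinates are invented for illustration) does exactly what Beck did: it discards geographic positions and keeps only connectivity. Most of the information is gone, yet the one relationship a rider needs—which stations connect to which—survives, and routing still works.

```python
# Territory: stations with real-world coordinates (all values invented).
territory = {
    "Acton": {"pos": (51.503, -0.280), "next": ["Baker"]},
    "Baker": {"pos": (51.523, -0.157), "next": ["Acton", "Crown"]},
    "Crown": {"pos": (51.511, -0.119), "next": ["Baker"]},
}

# Map: throw away the positions, keep only adjacency.
# This is lossy compression with a purpose.
transit_map = {name: data["next"] for name, data in territory.items()}

def route(graph, start, goal):
    """Breadth-first search over the compressed map."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no connection exists in the map

print(route(transit_map, "Acton", "Crown"))  # ['Acton', 'Baker', 'Crown']
```

The compressed map cannot answer "how far apart are these stations?"—that information was deliberately destroyed. It answers "how do I get from A to B?" perfectly.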
All Maps Are Wrong
George Box, the British statistician whose work on experimental design and time-series analysis shaped modern statistics, stated this with memorable clarity: "All models are wrong, but some are useful."
This follows directly from the nature of compression. Maps omit information. Omission means the map doesn't fully match the territory. Therefore every model, without exception, is wrong in the sense that it fails to capture the complete reality it represents.
The question, then, is never "Is this model correct?" The answer is always no. The real question is: "Is this model useful enough for my current purpose to justify the distortions it introduces?"
Newton's physics is wrong. It breaks down at very high speeds (where Einstein's relativity takes over) and at very small scales (where quantum mechanics rules). But for building bridges, launching satellites, and designing car engines, Newton's "wrong" model works beautifully. The map is inaccurate at the edges but perfectly adequate for the territory most of us navigate.
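The claim that Newton's map is "inaccurate at the edges but adequate for most of the territory" can be checked numerically. The sketch below compares Newtonian kinetic energy, (1/2)mv², against the relativistic formula, (γ − 1)mc², at increasing fractions of the speed of light:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    """Newtonian kinetic energy: (1/2) m v^2."""
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

m = 1.0  # kg
for frac in (0.001, 0.1, 0.9):
    v = frac * C
    ratio = ke_newton(m, v) / ke_relativistic(m, v)
    print(f"v = {frac}c: Newton/Einstein = {ratio:.4f}")
```

At everyday speeds (a thousandth of light speed is already far faster than any vehicle), the two maps agree to within a millionth. At 0.9c the Newtonian answer is off by a factor of about three. Same territory, and the older map is perfectly serviceable everywhere bridges and satellites live.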
The periodic table is wrong too—it implies cleaner categories than quantum chemistry actually reveals. But for predicting how elements combine and react, it's extraordinarily useful. A chemistry student who refused to learn the periodic table because "it's technically wrong" would be making exactly the error this essay describes: confusing the question of truth with the question of utility.
When We Confuse Map and Territory
Alfred Korzybski, the Polish-American philosopher and engineer who founded the field of general semantics (the study of how language and symbols shape thought), coined the phrase that anchors this entire idea: "The map is not the territory." He said it in 1931, and it remains one of the most important sentences in epistemology.
The trouble begins when we forget it. When we start treating our maps as if they were reality, rather than simplified representations of reality, specific and predictable errors follow.
Reification is the error of treating an abstraction as if it were a concrete thing. "The economy" is a map—a way of summarizing billions of individual transactions, decisions, and relationships. But when policymakers talk about "growing the economy" as if the economy were a single object with a size knob, they're treating the map as territory. The abstraction becomes a thing, and the actual human experiences it was supposed to represent disappear behind the label. GDP goes up; people's lives don't improve. The map looks great. The territory hasn't changed.
Over-extension means using a map beyond the territory it was designed for. Supply-and-demand curves work well for commodity markets—wheat, oil, standardized goods. Applying the same framework to love, friendship, or the meaning of life is stretching the map past its edges. It might generate interesting metaphors, but the actual explanatory power drops to near zero because the underlying assumptions (fungible goods, rational actors, price signals) don't hold in those domains.
Model lock-in is subtler and more dangerous. It happens when a map becomes so familiar that it prevents you from seeing features of the territory that the map doesn't include. If your only model of human behavior is "people are rational agents maximizing utility," you literally cannot see the systematic irrationalities that behavioral economists have documented. The features are right there in the territory, but the map has no symbols for them, so they become invisible. The map doesn't just describe what you see—it constrains what you can see.
Mistaking updates for errors is the most personally relevant confusion. When reality contradicts your model, there are two possibilities: either you misread the territory, or your map needs updating. Map-territory awareness helps you sit with that ambiguity instead of reflexively defending the map. The person who says "that can't be right, it doesn't fit my understanding" might be correctly identifying a misreading—or might be protecting a map that needs revision. Without the map-territory distinction clearly in mind, it's very hard to tell which.
The Power of Multiple Maps
One of the most liberating implications of thinking in maps is that the same territory can have many valid maps, each useful for different purposes.
London has a street map for driving, Beck's Underground map for transit, a population density map for urban planning, a geological map for construction, and a historical map for understanding why the streets curve the way they do. Each is valid. Each is useful for its purpose. None is "the true map of London." Asking which map is the real one is a category error.
The same principle applies to understanding complex phenomena. Human behavior can be mapped by economics (what incentives are operating?), psychology (what cognitive processes are running?), neuroscience (what neural circuits are firing?), sociology (what social forces are in play?), and evolutionary biology (what adaptive pressures shaped this?). Each map highlights different features. Each has blind spots. None is complete.
This insight dissolves many apparently deep disagreements. "Is behavior driven by incentives or by psychology?" is not a real question. It's like asking "Is London better described by its street map or its Underground map?" Both capture something real. Neither captures everything. The question should be: "Which map is most useful for the specific question we're trying to answer right now?"
Holding multiple maps simultaneously—and switching between them depending on purpose—is one of the most powerful cognitive skills available. It requires comfort with the idea that contradictory-seeming descriptions can both be valid, because they're maps of different features of the same territory.
What Makes a Good Map
Not all maps are equally valuable. A good map has four qualities.
Accuracy means the relationships the map preserves actually hold in the territory. If the map says X is connected to Y, they'd better actually be connected. A transit map that shows a nonexistent rail connection is worse than useless.
Appropriate compression means the map is simpler than the territory—that's the whole point—but not so simplified that it loses what matters. A map of London that only shows the Thames is too compressed. A map that shows every individual cobblestone is not compressed enough.
Clear purpose means the map preserves what matters for a specific use case. Beck's map is optimized for "how do I get from A to B on the train?" A map optimized for "how far apart are these stations in real life?" would look completely different and would serve a different need equally well.
Explicit scope means the map signals where it applies and where it doesn't. The best scientific theories come with stated boundary conditions—the ranges of temperature, speed, or scale within which they work. Good mental models do the same: "This framework is useful for understanding X, but don't apply it to Y." Maps without scope declarations get misapplied, and misapplication is one of the most common sources of confused thinking.
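Explicit scope can even be made operational in software. A minimal sketch, assuming an ideal-gas model and an invented validity range (the 200–2000 K bounds here are illustrative, not authoritative): the model declares where it is trusted and refuses to answer outside that range, rather than silently extrapolating.

```python
def ideal_gas_pressure(n_mol, temp_k, volume_m3):
    """Ideal-gas law P = nRT/V, with a declared scope.

    The validity bounds below are illustrative placeholders for
    'the regime where this map is a reasonable one' (dilute gas,
    well above condensation).
    """
    R = 8.314  # gas constant, J/(mol*K)
    if not (200.0 <= temp_k <= 2000.0):
        raise ValueError("outside declared scope: ideal-gas map not trusted here")
    return n_mol * R * temp_k / volume_m3

# Inside scope: one mole at room temperature in ~24.8 liters.
print(ideal_gas_pressure(1.0, 300.0, 0.0248))  # roughly one atmosphere, in Pa
```

Calling the function at 10 K raises an error instead of returning a number that looks precise and means nothing—a scope declaration doing exactly what this section asks of a good map.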
Living with Maps
We cannot navigate without maps. Raw, uncompressed experience—every sensation, every data point, every detail of every moment—would be overwhelming. The brain compresses constantly, building models and frameworks just to function. This is necessary and good. Compression is not the enemy.
But we can hold our maps more lightly. We can remember that every belief, theory, and framework we carry is a representation, not the reality it represents. We can expect our maps to fail when we push them beyond their intended scope. We can maintain multiple maps for the same territory and switch between them as purpose demands. We can update our maps when the territory contradicts them, rather than insisting the territory must be wrong.
And perhaps most importantly, we can ask a question that rarely gets asked: what does this map leave out? Every map omits something. The omissions aren't neutral—they shape what we can see, what questions we think to ask, and what possibilities we consider. Looking at the blank spaces on a map is often more revealing than looking at what's drawn.
This is intellectual humility made operational. Not "I might be wrong" as a polite gesture, but "this is a map, and maps are structurally incomplete" as a genuine description of the situation. The humility follows from the structure, not from modesty. Every belief is a compression. Every compression loses something. Knowing this doesn't make your maps useless—it makes you a better navigator.
How This Was Decoded
This essay synthesizes insights from general semantics (Korzybski's map-territory distinction), philosophy of science (the nature of models, theories, and idealizations), information theory (lossy compression and its trade-offs), and pragmatist philosophy (usefulness as a criterion for evaluating representations). The cross-verification is strong: the map-territory framework applies identically to scientific theories, mental models, ideologies, cultural narratives, and everyday concepts. The universality of the pattern across these domains—each independently described by different fields—is itself convergent evidence that the underlying structure is real.