1
Appearance and Reality
Is there anything you know so firmly that no reasonable person could doubt it?
That sounds like an easy question—until you actually try to answer it carefully. The moment you do, you run straight into the kind of puzzles philosophy is built for. Philosophy, at bottom, is just the attempt to answer the “ultimate” questions we usually glide past. In everyday life—and even in science—we tend to speak confidently and move on. Philosophy slows us down. It asks: What exactly do we mean? What are we assuming? Where are the hidden confusions?
That slowdown matters because ordinary life is full of things we treat as obvious that start to look shaky when you examine them closely. We often talk as if the world is perfectly straightforward, but when you press for precision you find apparent contradictions everywhere. It can take real effort just to figure out what you’re actually justified in believing.
If you’re hunting for certainty, the natural place to start is with what you’re experiencing right now. In some sense, all knowledge has to grow out of experience. But here’s the trap: the moment you try to put into words what your immediate experience tells you, you’re very likely to say something false—or at least something that’s not as solid as it sounds.
Think about a perfectly ordinary moment. Right now, it seems that:
- You’re sitting in a chair at a table of a certain shape.
- On the table you see paper with writing or print.
- If you turn your head, you see buildings, clouds, the sun.
- You believe the sun is about ninety-three million miles away, a hot sphere far larger than Earth, and that because Earth rotates it rises each morning—and will keep doing so for a long time.
- You expect that if another typical person walked into the room, they’d see the same chairs, tables, books, and papers you see.
- You assume the table you see is the same table you feel pressing against your arm.
All of that feels so obvious it seems silly to write it down—unless someone challenges whether you know anything at all. And yet every one of those claims can be doubted in a reasonable way. To defend them, you have to be much more careful than everyday language encourages.
To see why, focus on one object: the table.
At first glance it seems simple. Visually, the table looks oblong, brown, and shiny. By touch, it feels smooth, cool, and hard. If you rap it with your knuckles, it makes a wooden sound. Other people who look, touch, and listen will usually agree. So where’s the problem?
The problem starts the instant you try to be precise.
Color is the easiest place to see the trouble. You might say, “The table is brown.” But look closely: the parts reflecting light look brighter than the rest, and some spots can even look white because of glare. Move your head and the bright patches shift. That means the “pattern” of colors you see depends on where you are standing and how the light hits the surface.
Put several people around the table at once and you get the same result: no two of them will see exactly the same distribution of shades, because no two pairs of eyes occupy the same point in space. Change the viewing angle even slightly, and the reflections change with it.
Most of the time, this doesn’t matter. But to a painter, it matters enormously. Painters have to unlearn the automatic habit of seeing objects as having the color common sense says they “really” have. Instead, they train themselves to notice what the color is doing in the light—what it actually looks like from here, right now.
And with that, we stumble into one of philosophy’s most famously slippery distinctions: appearance versus reality.
- The painter is trained to care about appearance: how things look.
- The practical person and the philosopher both want reality: how things are.
The practical person usually doesn’t worry much about the gap between the two. The philosopher does, precisely because the gap turns out to be hard to describe without confusion.
Back to the table. Once you see how much color varies, it becomes hard to defend the idea that there’s one “true” color that the table really has. Even if you freeze your viewpoint, the color changes under different conditions:
- Under artificial light, it looks different than in sunlight.
- To a color-blind observer, it looks different.
- Through tinted glasses, it looks different.
- In complete darkness, it has no visible color at all—though touch and sound still behave as usual.
So the color you experience isn’t a feature that sits inside the table all by itself. It’s something produced by a relationship among three things: the table, the observer, and the lighting.
When everyday speech talks about “the table’s color,” it usually means something like: the color it tends to look to a normal observer, from an ordinary angle, in typical lighting. That’s a useful shortcut. But the other colors the table presents under other conditions aren’t less “real” in any obvious way. If you don’t want to play favorites, you’re pushed toward an uncomfortable conclusion: the table, considered all by itself, doesn’t have any single, fixed color.
The same story repeats with texture. With your naked eye, the surface may look smooth and uniform, with a visible grain. But under a microscope it turns into a landscape—bumps, ridges, pits, irregularities you couldn’t see before. Which one is the “real” table?
It’s tempting to answer: the microscopic one, because it’s more detailed. But then a stronger microscope reveals more detail and changes the picture again. And now you’re stuck: if you can’t fully trust the naked eye, why trust the microscope? You’re using senses either way, just with a tool in front of them. The confidence you started with begins to drain away.
Shape causes just as much trouble. We’re so practiced at judging “real” shapes that we feel as if we see them directly. But drawing teaches the truth quickly: the same object looks like a different shape from every viewpoint.
A rectangular table, viewed from almost anywhere, doesn’t look like a perfect rectangle. It looks like a shape with two acute angles and two obtuse angles. Parallel sides look like they’re converging into the distance. Equal-length sides can look unequal; the nearer edge looks longer. You usually don’t notice these effects because you’ve learned, without thinking, to reconstruct a stable “real shape” from shifting appearances. That reconstruction is what you care about in practical life.
But notice what that implies: the “real” shape isn’t something you simply see. It’s something you infer from what you see. What your eyes deliver is an ever-changing appearance that shifts as you move around the room.
Touch brings similar problems. Yes, the table feels hard. It resists pressure. But the exact sensation changes with:
- how hard you press, and
- what part of your body you press with.
So even here, the immediate feelings don’t cleanly reveal a single, definite property “in” the table. At most, they might be clues—signs pointing toward some underlying feature that produces the sensations, even though that feature isn’t directly present in any one of them. The same goes, even more obviously, for the sounds you get by knocking on the table.
Put all of this together and a striking claim emerges: the real table, if there is one, is not identical with what you immediately experience through sight, touch, and hearing. What you directly have are colors, shapes, textures, resistances, sounds—the deliverances of your senses. If there’s something “behind” them, you don’t grasp it directly. You only reach it by inference.
And that lands us with two hard questions:
- Is there a real table at all?
- If there is, what kind of thing is it?
To make progress, we need some terms that don’t wobble.
Let’s call sense-data the things you are immediately aware of in sensation: colors, sounds, smells, hardnesses, roughnesses, and so on. And let’s call sensation the experience of being directly aware of those sense-data.
So:
- When you see a color, you have a sensation.
- The color itself—the thing you’re directly aware of—is a sense-datum.
In other words, the sense-datum is the content; the sensation is the awareness of that content.
If you’re going to know anything about the table, it has to be through these sense-data—brownness, oblongness, smoothness, and the rest—that you associate with “the table.” But given everything we’ve just noticed, you can’t simply identify the table with the sense-data. You can’t even safely say the sense-data are straightforward properties of the table. That leaves a problem: what is the relationship between the sense-data and the real table, assuming there is one?
If the real table exists, let’s call it a physical object. And the collection of all physical objects is what we call matter.
So our two questions can be restated more broadly:
- Does matter exist at all?
- If it does, what is it like?
The philosopher who made the case especially vivid that the immediate objects of the senses don’t exist independently of us was Bishop Berkeley (1685–1753). In Three Dialogues between Hylas and Philonous, he tries to show that there is no such thing as matter, and that reality consists only of minds and their ideas. In the dialogues, Hylas begins as a believer in matter, but Philonous relentlessly pushes him into contradictions until denying matter starts to feel—surprisingly—almost like common sense.
Berkeley’s arguments aren’t all equally good. Some are powerful; some are muddled or hair-splitting. Still, his achievement was real and lasting: he showed that rejecting matter isn’t automatically absurd, and that if anything exists independently of us, it can’t be the immediate objects of our sensations.
When people ask “Does matter exist?” they often slide between two different questions without noticing. It’s crucial to separate them.
In everyday thinking, matter usually means something contrasted with mind: something that occupies space and is fundamentally incapable of thought or consciousness. That’s the sense of “matter” Berkeley denies. He doesn’t deny that the sense-data we take as signs of a table are signs of something independent of us. He denies that this “something” is non-mental.
Berkeley agrees there must be something that continues to exist when you leave the room or close your eyes. And he agrees that “seeing the table” gives you good reason to believe something persists even when you aren’t looking. But he insists that this persisting thing can’t be radically different in kind from what is seen, and it can’t exist wholly apart from being perceived—though it must exist independently of your perceiving. That’s why he ends up saying the “real” table is an idea in the mind of God: permanent and independent of us, yet not an unknowable material thing forever beyond direct awareness.
After Berkeley, other philosophers have taken similar routes. They argue that although the table doesn’t depend on me seeing it, it does depend on being seen—or otherwise sensed—by some mind. Not necessarily God’s; sometimes they imagine something like a universal or collective mind. Their motivation is usually the same: they think nothing can count as real, or at least nothing can be known to be real, except minds and their thoughts and feelings.
One common argument goes something like this: “Whatever you can think about is an idea in a mind. Therefore nothing can be thought about except ideas in minds. Anything else is inconceivable; and what’s inconceivable can’t exist.”
The argument is often presented in subtler form, but that’s the basic shape. I believe it’s mistaken. Still, it has been enormously influential. Many philosophers—perhaps a majority—have held that nothing is ultimately real except minds and their ideas. Philosophers of that type are called idealists. When they explain “matter,” they either say, like Berkeley, that it’s really just a collection of ideas, or they say, like Leibniz (1646–1716), that what appears as matter is actually a collection of simple, undeveloped minds.
Here’s the twist: even philosophers who deny matter in the mind-versus-matter sense often accept matter in another sense.
Remember the two questions we started with:
- Is there a real table at all?
- If so, what kind of thing is it?
Both Berkeley and Leibniz answer “yes” to the first. They agree there is a real table. They simply give an unusual answer to the second: Berkeley says it’s certain ideas in God’s mind; Leibniz says it’s a community of souls.
And in fact, most philosophers seem to agree on the point that matters first: there is a real table of some kind. Even if our sense-data—color, shape, smoothness—depend partly on us and on conditions, their presence seems to signal something that exists independently of us. That “something” may be utterly unlike the sense-data we experience, but it is still treated as the cause of those sense-data when we stand in the right relation to it.
That shared belief—that there is a real table, whatever its nature—matters so much that it deserves careful support. Before asking what the real table is like, we should ask why we should believe there is a real table at all. That will be the task of the next chapter.
For now, notice what we’ve learned. Take any ordinary object you think you know through the senses. What the senses give you immediately is not the object “as it is in itself,” separate from you. What they give you are sense-data that appear to depend on the relationship between you and whatever is out there. What you directly see and feel is appearance. You treat it as a sign of a reality behind it.
But once you say that, more questions rush in. If reality isn’t what appears, how could you ever know whether there is any reality at all? And if there is, can you learn anything about what it’s like?
Questions like these can be dizzying. Once you start asking them, you can’t confidently rule out even very strange possibilities. The table you’ve barely thought about your whole life suddenly becomes a mystery with multiple live options. The one thing you do know is modest but unsettling: it isn’t exactly what it seems.
Beyond that, you have wide room to speculate. Leibniz says the table is a society of souls. Berkeley says it’s an idea in the mind of God. Modern science—hardly less astonishing—says it’s a vast swarm of electric charges moving violently.
And doubt adds one more possibility: maybe there’s no table at all.
Philosophy may not answer every question we want answered, but it can do something valuable even when it falls short: it can ask the kinds of questions that make the world feel newly interesting, and reveal the strangeness and wonder that sit just beneath the surface of the most ordinary things in daily life.
2
The Existence of Matter
In this chapter we have to face a question that sounds simple and ends up shaking everything: is there such a thing as matter—in any meaningful sense?
Take the table in front of me. Is it something with its own built-in nature that keeps existing when I stop looking? Or is “the table” just my mind’s ongoing performance—like a dream-table inside an unusually long, unusually consistent dream?
This isn’t a parlor trick. If we can’t be confident that ordinary objects exist independently of us, then we can’t be confident that other people’s bodies exist independently of us either—and that’s disastrous, because our only route to other people’s minds runs through what we observe of their bodies. Push skepticism far enough and you end up with a bleak possibility: maybe the entire outside world is a dream, and I’m the only thing that exists.
That’s not a pleasant thought. And while you can’t strictly prove it false, there’s also no reason at all to think it’s true. The goal of this chapter is to explain why.
A starting point we can actually trust
Before we dive into the fog, we need something solid to stand on. Even if we’re doubting whether the table exists as a physical thing, we’re not doubting the sense-data that made us talk about a table in the first place.
- When I look, a certain color and shape show up in experience.
- When I press, a sensation of hardness shows up in experience.
Whatever else might be questionable, those immediate experiences—those psychological facts—aren’t what we’re challenging right now. In fact, even if the rest of the world turns out to be shaky, at least some of what we directly experience feels absolutely certain.
Descartes and the art of radical doubt
René Descartes (1596–1650), often treated as the founder of modern philosophy, made this idea into a method: systematic doubt. His rule was ruthless: he would accept nothing as true unless he could see it with complete clarity. Anything he could honestly doubt, he would doubt—until he had a reason not to.
So he imagined a worst-case scenario: a cunning demon feeding him an endless stream of convincing illusions. It’s wildly improbable, sure. But Descartes’s point was that as long as it’s possible, doubt about what the senses report is also possible.
But there’s one thing the demon can’t fake: Descartes’s own existence. If he didn’t exist, there would be no one to deceive. The very act of doubting proves there’s a doubter. If he’s having experiences at all, then something exists that is having them. That’s why his famous line lands:
I think, therefore I am.
On that single certainty, he tried to rebuild knowledge from the ground up. And for philosophy, this was a real breakthrough: Descartes showed how far doubt can go, and he highlighted a crucial fact—our subjective experiences are the hardest thing to doubt.
A careful correction: what exactly is certain?
Still, we have to handle Descartes’s slogan with care. “I think, therefore I am” says a bit more than the evidence strictly guarantees.
It’s tempting to assume we’re absolutely certain we’re the same person today that we were yesterday—and in some everyday sense, that’s probably right. But the “real Self,” the permanent “I,” is surprisingly slippery. It’s not obviously more secure than the “real table.”
What’s immediately certain isn’t “I am seeing a brown color.” The certainty is simpler and more direct:
- A brown color is being seen.
That statement does imply something that sees. But it doesn’t automatically give you the full, stable, continuing person we call “I.” For all that immediate certainty tells us, the “something” that experiences the brown color could be momentary—one flash of awareness now, a different flash the next moment.
So the most basic certainty attaches to particular experiences: specific thoughts, feelings, sensations.
And this holds in strange cases too. In dreams and hallucinations, you can be wrong about whether there’s a corresponding physical object, but you’re not wrong about having the experience. If you dream or think you see a ghost, you really do have those sensations—yet we often say no external object matches them.
That means the certainty of our own experiences doesn’t need to shrink just because exceptional cases exist. Whatever its limits, this is at least a firm platform from which to start.
The real problem: do sense-data point to something beyond themselves?
Here’s the central question:
If we’re sure about our sense-data, do we have any reason to treat them as signs of something else—something we can call a physical object?
If we list everything we naturally associate with “the table” in experience—its colors, shapes, hardnesses, sounds—have we captured the whole table? Or is there something more: something not a sense-datum, something that continues when I leave the room?
Common sense answers without hesitation: of course there’s more. After all, tables are the sorts of things you can buy, sell, shove, and throw a tablecloth over. They don’t seem like mere bundles of private impressions.
Think about the tablecloth. If it completely covers the table, I no longer receive any sense-data from the table. If the table were nothing but sense-data, it would literally stop existing at that moment—and then the tablecloth would have to hang there in midair, suspended by a miracle where the table used to be. That seems ridiculous.
But if you want to do philosophy, you have to develop a tolerance for ideas that sound ridiculous at first. Some of them are false. Some of them aren’t. The point is: you don’t get to dismiss a position just because it feels absurd.
Why we want “public” objects, not private experiences
One powerful reason people insist there must be a physical object beyond sense-data is this: we want the same object for different people.
If ten people sit around a dinner table, it sounds crazy to say they aren’t all seeing the same tablecloth, the same knives and forks, the same glasses. But here’s the snag: sense-data are private. What is immediately present to my sight is not immediately present to yours. Even looking at “the same” table, we’re each seeing it from a slightly different angle, under slightly different lighting, with slightly different reflections. So we each get slightly different sense-data.
If there are to be objects that are public—neutral things that many people can in some sense know—then there must be something over and above the particular sense-data appearing in each person’s experience.
So what reason do we have for believing in these public, neutral objects?
The obvious idea—and why it doesn’t fully work
The first answer is the one you probably reached already:
Even if different people see the table a bit differently, they still see broadly similar things, and the differences follow regular patterns (perspective, lighting, reflection). That makes it natural to infer a single, stable object that underlies and explains all these shifting appearances.
It also seems to fit everyday life. I bought my table from the previous occupant of my room. I couldn’t buy his sense-data—those vanished when he left. But I could buy, and did buy, the reliable expectation that if I look in the right place, I’ll have experiences of roughly the same sort.
So we’re inclined to say: because many people have similar sense-data, and because the same person tends to have similar sense-data in the same place over time, we infer a permanent public object that causes or underlies those sense-data.
But there’s a problem. This reasoning quietly assumes what we’re trying to prove—namely, that there are other people.
Other people show up in my life as part of my own sense-data: I see their bodies, hear their voices, watch their expressions. If I don’t yet have reason to believe in physical objects independent of my experience, then I don’t yet have reason to believe other people exist as anything more than characters inside my dream.
So if we’re trying to argue for an external world, we can’t lean on “other people’s testimony,” because that “testimony” is itself just more of my sense-data. It doesn’t give me direct access to their experiences unless we already assume my sense-data point to independent realities.
So we need, if possible, to find features within our own private experience that suggest—at least indirectly—that there is something out there beyond ourselves and our sensations.
The dream hypothesis: possible, but not persuasive
In one strict sense, we have to admit something humbling: we can never prove that anything exists beyond ourselves and our experiences. There’s no logical contradiction in the idea that the world is just me and my thoughts, feelings, and sensations—and everything else is imagination.
Dreams show why. In a dream, an elaborate world can unfold with its own drama and detail. Then you wake up and realize: none of those sense-data corresponded to physical objects the way you assumed they did.
You might object: sometimes waking life can explain the sense-data of a dream. A door slams, and you dream of cannon fire and a naval battle. Sure—there can be a physical cause for the dream sensations. But even then, there isn’t a physical object that matches the dream scene the way a real battle would.
So yes: it’s logically possible that all of life is one long dream we generate ourselves.
But possibility isn’t evidence. And there is no reason at all to think this is true. In fact, as an explanation of our lived experience, it’s less simple than the commonsense view that there are real objects outside us and that their effects on us produce our sensations.
Why the “real objects” hypothesis is simpler
You can see the simplicity advantage with an ordinary example: a cat.
If the cat appears first on the couch and later by the door, the natural explanation is that it moved, passing through a sequence of positions in between.
But if the cat is only a collection of sense-data, then it can’t have been anywhere I didn’t perceive it. That forces a bizarre story: the cat didn’t exist when I wasn’t looking, and then suddenly popped into existence somewhere else.
And it gets worse. If the cat exists whether I see it or not, it makes sense that it gets hungry between meals. But if the cat doesn’t exist when I’m not perceiving it, it’s strange that its appetite would “grow” during periods of nonexistence just as it does during periods of existence. More than that: if the cat is nothing but sense-data, then it can’t be hungry at all—because the only hunger that can be a sense-datum to me is my own hunger.
So the behavior of the sense-data that represent the cat—so easy to understand as “the cat is hungry”—becomes nearly impossible to make sense of if we treat them as nothing more than shifting patches of color and sound. Those patches are no more capable of hunger than a triangle is capable of playing football.
Other minds make the dream hypothesis even clumsier
The cat is challenging enough. Human beings are harder.
When people speak, we hear certain sounds we connect with meanings, and at the same time we see lip movements and facial expressions. It’s extremely difficult to believe that all of that isn’t the outward expression of thought—especially because we know what it’s like from the inside when we make similar sounds to express our thoughts.
Yes, dreams can mimic this, and we can be fooled. But dreams tend to borrow their materials from what we call waking life, and if we assume a real physical world, dreams can often be partly explained by ordinary science.
So, again, the principle of simplicity pushes us toward the natural view: there really are things beyond ourselves and our sense-data, and they exist whether or not we are currently perceiving them.
Instinctive belief—and why we shouldn’t throw it away
Importantly, argument isn’t where our belief in an external world comes from in the first place. We don’t reason our way into it as children. We simply find it already installed in us when we start reflecting. It’s an instinctive belief.
We only start questioning it because of a specific discovery—especially in vision—that trips us up: we instinctively treat the sense-datum itself as though it were the external object. But careful reasoning shows the object can’t be identical with the sense-datum.
That mismatch is obvious with taste, smell, and sound, and only a little surprising with touch. With sight, it’s more unsettling. Still, noticing the mismatch doesn’t destroy our instinctive confidence that there are external objects corresponding to our sense-data.
And because that instinctive belief doesn’t create new problems—because it actually helps us organize and simplify our experience—there’s no strong reason to reject it. So we can accept, with a small leftover doubt inspired by dreams, that the external world really exists and doesn’t depend entirely on our continuing to perceive it.
What this tells us about philosophy itself
The reasoning that gets us here is weaker than we might like. But it’s typical of philosophy, so it’s worth noticing what kind of reasoning it is.
All knowledge, it seems, has to be built on instinctive beliefs. If you reject them all, nothing remains to build with.
But our instinctive beliefs aren’t all equally strong. And many beliefs that feel “instinctive” are actually mixtures—true instincts tangled up with habits and assumptions we picked up along the way.
One of philosophy’s real jobs is to map this landscape:
- Identify our instinctive beliefs and arrange them in a kind of hierarchy, starting with the ones we hold most strongly.
- Separate the core instincts from irrelevant add-ons.
- Present them in a form where they don’t clash, but fit together into a coherent system.
You never have a reason to abandon an instinctive belief except that it conflicts with other beliefs you accept. If your core instincts can be organized so they harmonize, the system deserves acceptance.
Of course, any belief might be wrong. So every belief should be held with at least a small margin of doubt. But notice the constraint: you can’t reject a belief except by appealing to some other belief. That means the best we can do is to organize what we instinctively accept, explore the consequences, and see which beliefs are hardest to give up and which might be modified if conflicts arise.
Working this way, we can build a more orderly, systematic picture of our knowledge. Error always remains possible—but its likelihood shrinks when our beliefs support one another and when we’ve subjected them to careful scrutiny before settling into acceptance.
At minimum, philosophy can do that much. Many philosophers—rightly or wrongly—think it can do more: that it can reveal truths about the universe as a whole, and about ultimate reality, that no other method can reach.
Whether or not that larger ambition succeeds, this more modest role certainly can. And for anyone who has begun to doubt the adequacy of common sense, it’s enough to justify the hard, demanding work that philosophical problems require.
3
The Nature of Matter
In the last chapter, we reached a cautious but practical conclusion. Even though we couldn’t prove it with knock-down logic, it still seems reasonable to believe that our sense-data—the immediate stuff of experience, like the colors and shapes I associate with “my table”—are signs of something that exists independently of me.
Here’s the basic idea. When I look at a table, I experience a bundle of sensations: color, hardness, maybe a faint sheen, maybe a sound if I tap it. But those sensations clearly come and go with my body’s condition:
- The color disappears when I shut my eyes.
- The hardness disappears when I pull my hand away.
- The sound disappears when I stop rapping my knuckles on the surface.
And yet I don’t seriously believe the table itself blinks in and out of existence along with those sensations. I believe the table continues to exist when I’m not perceiving it, and that because it continues to exist, the familiar sensations return when I open my eyes, touch it again, or knock on it again.
So the question for this chapter is straightforward, but slippery: What is this “real table” that supposedly persists whether I perceive it or not? What is it made of, in the deepest sense?
What science says (and what it leaves out)
Physical science offers an answer—unfinished in places and still partly speculative, but serious and worth hearing. Over time, science has drifted toward a powerful simplifying strategy: explain natural phenomena by reducing them to motion, especially the motion of waves.
On this picture:
- Light, heat, and sound are all linked to wave motion traveling from a source to an observer.
- Whatever “carries” the waves is either the ether (as older physics imagined) or ordinary “gross matter.” Either way, it’s what a philosopher would call matter.
- And science, as science, treats matter as having only a small set of usable properties: position in space and the ability to move according to laws of motion.
Science doesn’t insist matter has only those properties. It just says: if matter has anything else, those extra features aren’t doing any work in scientific explanations.
Why “light is a wave” can mislead you
People often say, “Light is a kind of wave motion.” Taken literally, that’s misleading. The light you actually see—the vivid, immediate experience of brightness and color—is not itself a wave motion. It’s something else entirely: something every sighted person knows intimately, and yet something we can’t describe in a way that would let a person born blind understand it.
A wave motion is different. A blind person can grasp wave motion perfectly well. They can learn about space through touch, and they can even feel waves directly, say on a boat at sea. That kind of wave is describable and shareable. But that’s not what we mean by “light” in everyday experience. By “light,” we mean the visual experience itself—exactly the thing a person born blind can’t have, and that sighted people can’t fully translate into words.
So when science talks about light as waves, it’s really saying something more careful:
- The waves are the physical cause of our visual sensations.
- The sensation—the felt experience of light—is not, on the scientific story, a feature of the external world as it exists independent of us. It’s an effect produced in the eyes, nerves, and brain.
And the same style of point applies to other sensations, too. What you immediately experience isn’t what science places “out there” in matter; it’s what science treats as the mind-and-body result of outside causes.
Not just missing colors and sounds—missing “your” space, too
It isn’t only qualities like color and sound that drop out of the scientific picture. Something even more surprising disappears: space as you directly experience it.
Science absolutely requires matter to be located in a space. But that space can’t be exactly the space you see or the space you feel with touch.
For one thing, visual space and tactile space don’t come pre-synchronized. As infants we have to learn—through experience—how what we see lines up with what we can reach and touch. Science, though, needs a space that’s neutral between sight and touch, a single framework in which it can describe positions and motions. So the “space of physics” can’t simply be the private space delivered by any one sense.
There’s another reason. The same object can look differently shaped depending on where you view it from. A coin is the classic example: you know it’s circular, but from an angle it looks oval. When you judge that it’s really circular, you’re already doing something important—you’re distinguishing between:
- the apparent shape (what it looks like from your viewpoint), and
- the real shape (what it is taken to be in itself, regardless of viewpoint).
Science cares about that “real shape.” But a real shape has to exist in a real space—a space that isn’t identical with any individual person’s private field of appearance.
So we’re pushed toward a distinction:
- Apparent space is private, tied to a particular perceiver’s viewpoint and sensory setup.
- Physical space is public, shared, and is where science places physical objects with their intrinsic shapes and motions.
Physical space is related to the spaces we see and feel, but it isn’t identical with either. Exactly how they connect is something we have to figure out, not assume.
Why science needs a public physical space
Earlier we agreed—provisionally—that physical objects aren’t the same kind of thing as our sense-data, but that they can be treated as causes of our sensations. If that’s true, then objects, sense organs, nerves, and brains all have to exist together in one shared physical space.
And once you think that way, a lot of ordinary facts fall neatly into place:
- You feel touch when your body is in physical contact—when your skin and the object are adjacent in physical space.
- You see an object when, in physical space, there isn’t an opaque barrier between it and your eyes.
- You hear, smell, and taste only when physical conditions put the source near enough, or in the right kind of contact, with your body.
To even state how your sensations change with circumstances, you have to treat both the object and your body as occupying positions in the same physical space. In practice, it’s the relative positions that largely determine what sensations you get.
How private spaces line up with public space
Your sense-data live in your private spaces: the space of sight, the space of touch, and the looser “spaces” suggested by smell, hearing, and so on. If there really is one all-inclusive physical space containing physical objects, then the spatial relations among physical objects should roughly correspond to the spatial relations among the sense-data you experience.
That’s not hard to believe. When you see one house as nearer than another on a road, your other senses tend to confirm it: you reach it sooner when you walk. Other people agree. A map agrees. Everything points to a consistent spatial ordering that matches what our sense experience suggests.
So it’s reasonable to assume this: there is a physical space in which physical objects stand in relations—nearer/farther, left/right, aligned/not aligned—that correspond to the relations among our sense-data in private spaces. This is the space that geometry deals with and that physics and astronomy presuppose.
What we can (and can’t) know about physical space
Suppose physical space exists and does correspond in that way to our private spaces. What can we actually know about it?
We can know only what’s needed to preserve the correspondence. That means:
- We don’t know what physical space is like “in itself,” the way we know the look of space in vision or the feel of distance in touch.
- But we can know the structure of relations among objects—how they’re arranged relative to one another.
For example, we can know that during an eclipse the Earth, Moon, and Sun are in a straight line. But we don’t have direct acquaintance with what a physical straight line is like in itself, the way we’re acquainted with a straight line as it appears in our visual field.
So our knowledge is richer about comparative and relational facts—that one distance is greater than another, that points are aligned, that one route is continuous—than it is about the intrinsic “feel” or nature of physical distances. In this respect, our situation is like what a person born blind could learn—through testimony and inference—about visual space: they could learn a lot about its relations, but not the particular qualitative character sighted people directly experience. Likewise, we can know the properties of the relations that preserve the mapping from sense-data, but we can’t know the intrinsic nature of the things that stand in those relations.
Time: public order vs private duration
Time has a similar split.
Our sense of duration—how long something feels—is famously unreliable. Boredom drags. Pain can stretch a minute into an hour. Good company makes hours vanish. Sleep can seem like no time at all.
So if time were just “felt duration,” we’d need the same distinction as with space:
- a private time of how time feels, and
- a public time measured by clocks.
But time also has another aspect: the order of events—before and after. And for that, we don’t have the same reason to split things in two. The sequence in which events seem to occur appears, as far as we can tell, to match the sequence in which they actually occur. There’s no clear argument that the two orders diverge.
Something similar often holds for space. Different viewpoints distort shape, but not necessarily order. A marching regiment might look differently shaped from different angles, but the soldiers still appear arranged in the same sequence along the road. That’s why we tend to treat order as something that tracks reality, while treating shape as something that varies with perspective and only needs to correspond to reality enough to preserve the order.
A crucial warning: perception-time isn’t object-time
Still, saying that the time-order of events “as they seem” matches the time-order “as they are” needs careful handling. Don’t confuse the order of physical events with the order of the sense-data that make up our perceptions of those events.
Take thunder and lightning. As physical events, the lightning flash and the disturbance of the air in which the thunder originates are simultaneous. But your sensation of hearing the thunder comes later—because the sound wave has to travel through the air to you, while the light arrives almost at once.
Or consider sunlight: it takes about eight minutes for light from the Sun to reach Earth. When you look at the Sun, your visual experience corresponds to the physical Sun of eight minutes ago. If the Sun (impossibly, but suppose) had ceased to exist within the last eight minutes, your sense-data of “seeing the Sun” would be unchanged for that interval. This is another vivid reminder that we must distinguish sense-data from physical objects.
What correspondence can tell us about matter—and what it can’t
What we’ve found about space mirrors a general pattern. If one object looks blue and another looks red, it’s reasonable to think there’s some corresponding difference in the physical objects. If two objects both look blue, it’s reasonable to think there’s some corresponding similarity.
But we shouldn’t expect direct acquaintance with whatever physical property produces the “blue” experience or the “red” experience. Science tells us, for example, that the relevant difference is a kind of wave motion. That sounds familiar because we imagine waves in the space we see. Yet the wave motions science is talking about must exist in physical space, which we don’t directly experience. So even the “wave” explanation isn’t as intuitively familiar as it seems at first glance.
And what’s true of color is true, in closely parallel ways, of the rest of our sense-data. The relations among physical objects have many knowable features because they can be mapped onto the relations among sense-data. But the physical objects themselves—their intrinsic nature—remain unknown, at least as far as the senses can take us.
So the question presses again: Is there any other way to discover what physical objects are in themselves?
A tempting hypothesis: maybe objects are sort of like appearances
A natural first guess—especially if you start from vision—is this: even if physical objects can’t be exactly like sense-data (for the reasons we’ve covered), maybe they’re at least roughly similar. Maybe objects really do have colors, and sometimes we see an object as the color it truly is.
On that kind of view, you might say: from different angles the color shifts a bit, so perhaps the “real” color is a kind of average—something like a middle value between the various shades you see from different viewpoints.
This idea can’t be decisively disproved. But it turns out to have no real support.
Here’s why. The color you see depends on the light waves reaching your eye. That means it depends not only on the object, but also on:
- the medium between you and the object (air, haze, smoke, fog),
- and how light is reflected off the object toward your particular line of sight.
Unless the air is perfectly clear, it changes what you see. Strong reflections can change colors dramatically. In other words, the seen color is a property of the light as it arrives at your eye, not simply a property of the object where the light originated.
And there’s an even sharper point: if the right waves reach your eye, you’ll see a certain color whether the source object “has a color” or not. So it’s an unnecessary extra assumption to claim that physical objects literally possess colors. There’s no solid justification for it. The same style of argument applies to the rest of our sense-data.
The idealist challenge (preview)
One last question remains: is there any broad philosophical argument that tells us what matter must be like, if it’s real at all?
Many philosophers—perhaps most—have argued that whatever is real must, in some way, be mental, or at least that anything we can know must be mental in some sense. Philosophers who hold that sort of view are called idealists.
Idealists argue that what appears to us as matter is actually something mental—for example:
- Leibniz’s view: reality is made up of countless more-or-less rudimentary minds.
- Berkeley’s view: what we call material things are really ideas in minds that “perceive” them.
So idealists deny that matter exists as something intrinsically different from mind. At the same time, they don’t deny that our sense-data are signs of something that exists independently of our private sensations.
In the next chapter, we’ll look briefly at the arguments idealists use to defend their view—and at why, in my opinion, those arguments don’t succeed.
4
Idealism
The word “idealism” doesn’t mean exactly the same thing to every philosopher. In this chapter, I’ll use it in a specific way: idealism is the view that whatever exists—or at least whatever we can know exists—must be mental in some sense. A lot of philosophers have held some version of this idea, and they’ve defended it for different reasons. Since it shows up so often (and because it’s genuinely fascinating), even the quickest tour of philosophy needs to say something about it.
If you’re not used to philosophical argument, idealism can sound like obvious nonsense. Common sense draws a sharp line between minds (and the things happening in them) and material objects like tables, chairs, the sun, and the moon. On the everyday picture, the physical world is the kind of thing that could keep going even if no minds existed at all. We also naturally think matter existed for ages before there were any conscious creatures, so it’s hard to treat matter as something produced by mental activity. Still—whether idealism turns out true or false, you don’t refute it just by laughing at it.
Here’s why. Even if physical objects really do exist independently of us, we’ve already seen that they can’t be identical with our sense-data (the immediate colors, sounds, textures, and so on that show up in experience). At best, physical objects would correspond to sense-data the way a catalog corresponds to the items it lists. That means common sense doesn’t actually tell us what physical objects are in their own nature; it just tells us how they show up to us. So if there were strong reasons to think physical objects are, deep down, mental, we wouldn’t be entitled to dismiss that claim merely because it feels weird. The truth about physical reality is likely to be weird. It might even be beyond our reach—but if someone claims they’ve reached it, “That’s strange” isn’t, by itself, a serious objection.
Most arguments for idealism come from epistemology—the theory of knowledge. They start by asking: What conditions must something satisfy for us to be able to know it? The first major attempt to build idealism on that kind of foundation came from Bishop Berkeley.
Berkeley’s first move was this: he argued (often convincingly) that our sense-data can’t exist independently of us. The color you see, the sound you hear, the warmth you feel—those things don’t just float around in the world on their own. At least part of what they are depends on there being seeing, hearing, touching, smelling, or tasting. In that limited sense, sense-data are “in the mind”: if there were no perceiving, those particular sense-data wouldn’t exist. Even if some of Berkeley’s specific arguments overreach, this general point is close to certainly right.
But then Berkeley went further. He claimed that sense-data are the only things whose existence our perceptions can directly guarantee. And from there he slid into a bigger thesis: to be known is to be “in” a mind, and therefore to be known is to be mental. So he concluded:
- We can never know anything except what exists in some mind.
- If something is known but isn’t in my mind, it must be in some other mind.
To follow his reasoning, you need to understand what Berkeley means by the word “idea.” He uses “idea” for anything we know immediately—the kind of thing we’re directly aware of, without inference. Sense-data are his main examples: the particular shade of green you see, the exact sound of a voice you hear, and so on. But he doesn’t limit “ideas” to sense-data. Memories and imaginings count too, because when you remember or imagine, you’re still directly aware of something at that moment. All of these immediate items—sensations, images, memories—Berkeley calls “ideas.”
Now take an ordinary object, like a tree. Berkeley says: when you “perceive” the tree, everything you’re immediately aware of is just a collection of ideas in his sense—colors, shapes, textures, maybe smells. And he argues there’s no good reason to suppose that there’s anything “real” about the tree beyond what is perceived. In his famous slogan, the tree’s being consists in being perceived—its esse is percipi.
Berkeley does not deny that the tree continues to exist when you close your eyes, or when nobody is around. He just explains that persistence differently: the tree keeps existing because God keeps perceiving it. On this view, what we would have called the “physical object” is really a stable system of ideas in God’s mind—ideas broadly similar to ours when we look at the tree, except that God’s are constant and uninterrupted as long as the tree exists. When different people see “the same” tree, Berkeley says it’s because our perceptions are partial, limited participations in God’s perception. So, for Berkeley, reality contains nothing but minds and ideas—and nothing else could ever be known, because whatever is known is necessarily an idea.
This argument contains several mistakes that have mattered a lot in the history of philosophy, so it’s worth exposing them. The first big problem is a confusion encouraged by Berkeley’s use of the word “idea.” In everyday talk, an “idea” sounds like the kind of thing that obviously lives inside someone’s mind. So when you’re told that a tree is made entirely of ideas, it’s natural to imagine the tree must literally be inside minds.
But the phrase “in the mind” is slippery. We say we’re “keeping someone in mind,” but we don’t mean the person is literally inside our head—we mean a thought of the person is present. If someone says, “That errand went completely out of my mind,” they’re not claiming the errand itself used to reside in their mind; they mean the thought of it was there and then stopped being there. In the same way, when Berkeley says the tree must be “in” our minds if we can know it, what he’s really entitled to say is only that a thought of the tree must be in our minds. To claim the tree itself must be in our minds is like claiming the person you’re thinking about is literally inside your head.
That may sound like too crude a mistake for a serious philosopher to make. But the surrounding assumptions make it easier to slip into, and to see how, we need to look more carefully at what an “idea” amounts to.
Before we do that, we have to separate two different questions that Berkeley runs together—one about sense-data, and one about knowledge.
- One question is about the relationship between sense-data and physical objects: Berkeley is largely right that the sense-data involved in seeing a tree are in an important way subjective. They depend not just on the tree but also on us—on our sense organs, our position, the lighting, and so on. And in that sense, those particular sense-data wouldn’t exist if nobody perceived the tree.
- A completely different question is the one Berkeley needs for idealism: whether anything that can be immediately known must therefore be mental. Detailed arguments about how sense-data depend on our bodies and circumstances don’t help with that. To establish idealism, Berkeley would need a general proof that being known shows a thing to be mental. That’s what he thinks he has done. And that is the question we have to focus on now—not the earlier issue of how sense-data differ from physical objects.
Once we adopt Berkeley’s use of “idea,” there are always two distinct ingredients whenever an idea is “before the mind”:
- The object we’re aware of—for example, the particular color of the table.
- The awareness itself—the mental act of noticing or apprehending that color.
The act of awareness is obviously mental. But why should we assume the thing we’re aware of—the color itself, as presented in experience—is mental? Our earlier arguments about color didn’t show that. They showed only that what color appears depends on the relation between our sense organs and the physical object. Put more concretely: a certain color will appear under certain lighting if a normal eye is positioned in a certain place relative to the table. None of that implies that the color is inside the perceiver’s mind.
The pull of Berkeley’s “obviously the color must be in the mind” comes from mixing up these two things: the act of apprehension and the object apprehended. Either one might get labeled an “idea,” and Berkeley likely slid between them. When you focus on the act, it’s easy to agree that “ideas are in the mind,” because the act is in the mind. But then, without noticing the shift, you transfer that claim to the other sense of “idea”—the object presented to the mind—and conclude that whatever we can apprehend must be in the mind. That unnoticed switch in meaning is the core mistake behind Berkeley’s argument.
This distinction between act and object matters enormously. In fact, our whole ability to gain knowledge depends on it. The defining feature of a mind is that it can be acquainted with things other than itself. Acquaintance is a relation between a mind and something that is not the mind; that relation is exactly what makes knowledge possible. So if you insist that whatever we know must be “in the mind,” one of two things happens:
- Either you’re artificially shrinking what a mind can know, by ruling out acquaintance with anything genuinely external; or
- You’re saying something empty—just a tautology—because you really mean only that what is “in the mind” is what is “before the mind,” i.e., what the mind is aware of.
And if you mean the second, you have to allow that something can be “before the mind” in that sense—apprehended by it—without being mental. Once you see what knowledge is, Berkeley’s argument collapses in content as well as in form. His reasons for thinking that the objects we apprehend must be mental turn out to have no force. So Berkeley’s case for idealism can be set aside. The question is whether there are better arguments for it.
One popular argument starts with something that sounds like a self-evident slogan: “We can’t know that anything exists unless we know it.” From this, people infer that anything relevant to our experience must at least be knowable by us. And then they claim: if matter were something we could never become acquainted with, then we could never know matter exists—and so matter would be pointless for us. Finally (often with no clear justification), they add a further step: if something can have no importance for us, then it can’t be real. So, they conclude, matter—unless it’s made of minds or mental contents—must be impossible, a mere phantom.
We can’t fully dismantle this argument here, because doing it properly requires groundwork we haven’t laid yet. But we can spot serious problems immediately.
Start with the last step: there’s no good reason to think that something lacking practical importance for us can’t be real. If we broaden “importance” to include theoretical importance, then sure—everything real matters to us in some way, because anyone who wants the truth about the universe has a stake in whatever the universe contains. But once you include that sort of interest, matter wouldn’t be “unimportant” even if we couldn’t know it exists. We could still suspect it might exist, wonder whether it does, and care about the answer. It would matter precisely because it could either satisfy or frustrate our desire to understand reality.
Next, the supposedly obvious slogan is not obvious—and in fact it’s false—because the word “know” is being used in two different ways.
- Knowing that something is the case: the kind of knowledge opposed to error, the kind involved in beliefs and convictions—what philosophers call judgements. This is knowledge of truths.
- Knowing a thing by direct awareness: the kind of knowing involved when you know a particular sense-datum—what we’ve been calling acquaintance. (This roughly matches the difference between savoir and connaître in French, or wissen and kennen in German.)
If we rewrite the slogan so it keeps its meaning consistent, it turns into: “We can never truly judge that something exists unless we are acquainted with it.” That isn’t a truism at all—it’s plainly false. I’ve never been acquainted with the Emperor of China, yet I can truly judge that he exists.
Someone might respond: “But you judge that because other people are acquainted with him.” That reply misses the point. If the principle were true, I couldn’t know that anyone else is acquainted with him in the first place. And more importantly, there’s no good reason why we couldn’t know that something exists even if nobody is acquainted with it. That claim is crucial, and it needs explanation.
If I’m acquainted with a thing that exists, then that acquaintance gives me knowledge that it exists. But the converse does not hold: it’s not true that whenever I can know that something of a certain kind exists, then I (or someone else) must be acquainted with the thing itself. When I make a true judgement without acquaintance, what’s going on is that the thing is known to me by description. That is: using some general principle, I infer that something fitting a certain description exists, based on things I am acquainted with.
To make that fully clear, we’ll need to sort out the difference between knowledge by acquaintance and knowledge by description, and then ask what sort of certainty—if any—belongs to our knowledge of general principles, compared with the certainty we have about the existence of our own experiences. Those are the topics of the next chapters.
5
Knowledge by Acquaintance and Knowledge by Description
In the last chapter we split knowledge into two big types: knowledge of things and knowledge of truths. Here we’re going to focus only on knowledge of things—and even that divides into two very different kinds.
- Knowledge by acquaintance is the simplest kind. It doesn’t depend on proving anything, and it doesn’t logically require that you already know any “facts” or “truths.”
- Knowledge by description, in contrast, always leans on some truths. It’s built out of what you know about a thing, not a direct encounter with the thing itself.
Before we can go further, we need to be clear about what “acquaintance” and “description” mean.
Knowledge by Acquaintance: Direct Awareness
You’re acquainted with something when you’re directly aware of it—no reasoning step in between, no inference, no “therefore,” and no need to already know a proposition about it.
So, when I’m sitting in front of my table, what I’m directly aware of isn’t “the physical table” as science talks about it. What I’m directly aware of are the sense-data that make up how the table appears to me right now:
- its color,
- its shape,
- its hardness,
- its smoothness,
- and so on.
These are present to me immediately in seeing and touching.
Now, I can say things about the color I’m seeing: “It’s brown,” “It’s dark,” “It’s glossy,” whatever. Those statements might give me truths about the color. But they don’t improve my knowledge of the color itself. As far as the experience goes, I already have it completely the moment I see it. There isn’t some deeper, more perfect “knowledge of that exact shade” waiting to be discovered by adding more sentences about it.
So the sense-data that make up the appearance of my table are things I know directly, just as they are. That’s acquaintance.
Knowledge by Description: Knowing an Object Indirectly
My “knowledge of the table” as a physical object, on the other hand, is not like that. I don’t encounter the physical table directly. What I encounter are sense-data, and then I interpret them.
And notice something important: we can doubt the existence of the table as a physical object without being ridiculous, but we can’t doubt the sense-data in the same way. I can sensibly ask, “Is there really a table there?” (maybe I’m hallucinating, dreaming, being fooled by lighting). But I can’t sensibly ask, “Am I having these visual and tactile appearances right now?” The appearances themselves are immediately given; whatever their cause, their occurrence is not open to doubt.
So the physical table, as I ordinarily mean it, is known in a different way: by description.
A natural description is:
the physical object that causes these sense-data.
That phrase points to the table by using the sense-data as clues. But for that move to work, I need at least one truth connecting the appearances to something beyond them—something like: “These sense-data are caused by a physical object.”
And here’s the key consequence: there isn’t any mental state in which I’m directly aware of the table itself (as a physical object). Strictly speaking, everything I “know of the table” is really a network of truths: truths about appearances, causes, and how these fit together. The table—the thing that supposedly causes the appearances—isn’t directly present to my mind at all.
In cases like this, what I really have is:
- a description,
- plus knowledge that exactly one thing fits that description.
That’s what knowledge by description is.
Acquaintance Is the Foundation
Even though knowledge by description is indirect, it doesn’t float free. Ultimately, all knowledge—both knowledge of things and knowledge of truths—rests on acquaintance somewhere in the background. So if we want a clear picture of what we can know, we have to ask: what kinds of things can we be acquainted with?
Sense-data are the most obvious example. But if that were all, our knowledge would shrink to almost nothing. We would only know what is happening right now in our senses. We couldn’t know the past—not even that there ever was a past. And we couldn’t know truths about our sense-data, because (as we’ll argue later) truth-knowledge requires acquaintance with something importantly different from sense-data: things sometimes called “abstract ideas,” but which we’ll call universals.
So we need to expand the list beyond present sensation.
Acquaintance by Memory
First extension: memory.
It’s obvious that we often remember what we saw or heard. And in those moments, we’re not merely inferring that something happened—we have a direct awareness of what we remember, even though it shows up as past rather than present.
This immediate awareness in memory is the root of everything we know about the past. Without it, we couldn’t even begin to infer the past, because we’d never have any direct contact with “something having been.”
Acquaintance by Introspection (Self-Consciousness)
Next extension: introspection.
We’re not only aware of things; we can also be aware of ourselves being aware. When I see the sun, I can notice that I’m seeing it. Then “my seeing the sun” becomes something I’m directly aware of. Same with wanting food: I can be aware of the desire itself. Same with pleasure, pain, and the general stream of events in my mind.
This sort of acquaintance—call it self-consciousness—is the source of our knowledge of mental life.
But there’s an obvious limit: this immediate access applies only to what happens in my mind. What happens in other people’s minds isn’t given to me directly. I get to them only through their bodies—through the sense-data I have that I associate with them.
And without acquaintance with the contents of our own minds, we couldn’t even form the idea of another mind. We wouldn’t know what “a mind” is supposed to be. So we wouldn’t be able to reach the conclusion that other people have minds at all.
It’s tempting, then, to think that self-consciousness is a key difference between humans and other animals: maybe animals have sense-data, but never notice the fact that they’re having them. Not that they sit around doubting their existence—just that they don’t become conscious of the fact that they have sensations and feelings, and therefore don’t become conscious of themselves as the subjects of those sensations and feelings.
Are We Acquainted With the Self?
Here things get tricky.
We’ve been calling acquaintance with mental contents “self-consciousness,” but that phrase can mislead. What we’re directly aware of are particular thoughts and feelings—not an “I” floating behind them.
When you try to “look inside” and find the self, you seem to run into a specific thought, a specific feeling, a specific desire—not the bare “me” that has them.
Still, there are reasons to suspect we may be acquainted with the self in some sense, even if it’s hard to separate that from everything else. Here’s the kind of reasoning that pushes in that direction.
When I’m acquainted with “my seeing the sun,” I seem to be aware of two things related to each other:
- the sense-datum that represents the sun to me, and
- whatever it is that is seeing that sense-datum.
Acquaintance always looks like a relation: someone is acquainted with something. And when I’m aware of my own act of being acquainted—when I’m acquainted with my acquaintance—it’s clear who the “someone” is: it’s me.
So, when I’m aware of myself seeing the sun, the whole thing I’m directly aware of has the shape:
Self-acquainted-with-sense-datum
On top of that, we can know the truth, “I am acquainted with this sense-datum.” It’s hard to see how we could even understand that sentence—let alone know it—unless we were acquainted with what we mean by “I.”
This doesn’t force us to assume that we’re acquainted with some permanent, unchanging soul that stays identical day after day. But it does suggest that we must be acquainted with whatever it is that does the seeing and does the sensing.
So it seems likely that, in some sense, we’re acquainted with Self as distinct from our particular experiences. But the issue is difficult and arguments can go both ways. For that reason, it’s safer to say: acquaintance with ourselves is probable, not certain.
What We’re Acquainted With (So Far)
Here’s the picture we’ve built about acquaintance with things that exist:
- In sensation, we’re acquainted with the data of the outer senses: sense-data like colors, sounds, textures, and so on.
- In introspection, we’re acquainted with the data of the “inner sense”: thoughts, feelings, desires, and other mental events.
- In memory, we’re acquainted with past items that were once given either in outer sensation or inner sense.
- And it’s likely (though not guaranteed) that we’re acquainted with Self—the “something” that is aware and that desires.
Acquaintance With Universals (Concepts)
So far we’ve talked about particular, existing things. But our acquaintance doesn’t stop there.
We’re also acquainted with what we’ll call universals: general ideas like whiteness, difference, brotherhood, and so on.
In fact, every complete sentence needs at least one universal, because verbs express universal meanings. (We’ll come back to universals in detail later, in Chapter 9.) For now, the important warning is this: don’t assume that everything we can be acquainted with must be a particular object that exists in space and time.
Awareness of universals is called conceiving, and a universal you’re aware of is called a concept.
What We Are Not Acquainted With
Notice what’s missing from the list of things we’re directly acquainted with:
- physical objects (as distinct from sense-data), and
- other people’s minds.
Those are things we know indirectly, by knowledge by description. So now we need to look more closely at what “description” is.
Descriptions: “A …” vs. “The …”
By a description, I mean a phrase of the form:
- “a so-and-so” (like “a man”), or
- “the so-and-so” (like “the man with the iron mask”).
A phrase like “a man” is an ambiguous description. A phrase like “the man with the iron mask” is a definite description.
There are complicated issues about ambiguous descriptions, but they aren’t central here. What we care about is the situation where you know there is one and only one object that fits a definite description, even though you aren’t acquainted with that object. So from here on, when I say “description,” I mean definite description: a singular phrase of the form “the so-and-so.”
Knowing Something by Description
We’ll say an object is known by description when you know that it is “the so-and-so”—meaning: you know there exists exactly one thing with a certain property, and nothing else has it. Usually this also implies you don’t know that object by acquaintance.
For example:
- We know “the man with the iron mask” existed, and we know many truths about him. But we don’t know who he was.
- We know “the candidate who gets the most votes will be elected.” And we may even be acquainted (in the only way you can be acquainted with another person) with the man who will in fact get the most votes. Still, we might not know which candidate he is. That is, we might not know any truth of the form “A is the candidate who gets the most votes,” where A is one of the candidates we can pick out by name.
Let’s name that situation. We have merely descriptive knowledge of “the so-and-so” when:
- we know the so-and-so exists, and
- we might even be acquainted with the object that actually is the so-and-so,
- but we don’t know any proposition of the form “a is the so-and-so,” where a is something we’re acquainted with.
What “The So-and-so Exists” Means
When we say “the so-and-so exists,” we mean: there is exactly one object that is the so-and-so.
And when we say “a is the so-and-so,” we mean:
- a has the property “so-and-so,” and
- nothing else has it.
So:
- “Mr. A is the Unionist candidate for this constituency” means: Mr. A is a Unionist candidate for this constituency, and nobody else is.
- “The Unionist candidate for this constituency exists” means: someone is a Unionist candidate for this constituency, and nobody else is.
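The existence-and-uniqueness reading above can be made mechanical. Here is a minimal sketch (my illustration, not the author's; the candidate names and the `unionist` predicate are invented for the example) that checks “the so-and-so exists” and “a is the so-and-so” over a small finite domain:

```python
# Illustrative sketch only: Russell-style definite descriptions
# checked over a finite domain. Names and predicate are made up.

def the_exists(domain, has_property):
    """'The so-and-so exists': exactly one member has the property."""
    return sum(1 for x in domain if has_property(x)) == 1

def is_the(a, domain, has_property):
    """'a is the so-and-so': a has the property, and nothing else does."""
    return has_property(a) and all(not has_property(x) for x in domain if x != a)

candidates = ["Mr. A", "Mr. B", "Mr. C"]
unionist = lambda x: x == "Mr. A"   # suppose only Mr. A is the Unionist candidate

print(the_exists(candidates, unionist))       # True: exactly one
print(is_the("Mr. A", candidates, unionist))  # True
print(is_the("Mr. B", candidates, unionist))  # False
```

Note that if *two* candidates satisfied the predicate, both checks would fail: uniqueness is part of the meaning, not an afterthought.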
If you’re acquainted with an object that is the so-and-so, then you can know the so-and-so exists. But you can also know the so-and-so exists even if you aren’t acquainted with any object you recognize as the so-and-so—and even if you aren’t acquainted with any object that, as it happens, really is the so-and-so.
Why Proper Names Usually Hide Descriptions
A surprising point: in ordinary life, even proper names usually function like descriptions.
The thought in a person’s mind when they use a name correctly can typically be made explicit only by replacing the name with some description. And that description will vary:
- from person to person, and
- even for the same person at different times.
What stays constant—assuming the name is used correctly—is the object the name refers to. And as long as that reference stays fixed, the particular descriptive route you take usually doesn’t affect whether the proposition you’re expressing is true or false.
Bismarck as an Example
Take some statement about Bismarck.
If it’s possible to have direct acquaintance with oneself, then Bismarck could have used “Bismarck” in the way a name seems to want to be used: simply to point to the very person he was directly acquainted with. If he judged something about himself, he himself might literally be a constituent of that judgment.
But if a friend who knew Bismarck judged something about him, the situation changes. What the friend was directly acquainted with were certain sense-data—visual and auditory appearances—that the friend correctly connected with Bismarck’s body. Bismarck’s body as a physical object, and even more Bismarck’s mind, were known only indirectly: as “the body” and “the mind” associated with those sense-data. That is: they were known by description.
Which features of someone’s appearance pop into the friend’s mind is basically accidental. The crucial point is that the friend knows that different descriptions—different ways of thinking about Bismarck—all point to the same individual, even though the individual himself is not directly presented to the friend’s mind.
Now consider us—people who never knew Bismarck personally. The description in our minds will usually be a loose bundle of historical information, often far more than is strictly needed to pick him out. But to keep things simple, suppose we think of him as:
the first Chancellor of the German Empire.
Notice what’s going on: most of those words are abstract. Even “German” won’t mean the same thing to everyone. For some, it calls up trips to Germany; for others, a map; for others, what they’ve read or heard.
Here’s the deeper point. If we want a description we actually know applies to something real, we’re eventually forced to connect it to some particular we’re acquainted with. This happens anytime we refer to:
- past, present, and future (as opposed to precise dates),
- “here” and “there,”
- or what other people have told us.
In other words, if your knowledge about “the so-and-so” is going to be more than whatever follows logically from the description itself, then the description has to hook into acquaintance somewhere.
Consider “the most long-lived of men.” That description uses only universals. It must apply to someone. But we can’t make any judgments about that person beyond what the description already guarantees.
If, however, we say, “The first Chancellor of the German Empire was an astute diplomat,” then our confidence in that claim depends on something we’re acquainted with—typically testimony we’ve read or heard. And if you look closely at the thought in your mind, it contains one or more particulars (like a specific book, a remembered lecture, a trusted document), and everything else is built out of concepts.
The point matters beyond famous people. Place names—London, England, Europe, the Earth, the Solar System—work the same way. When we use them, we’re usually relying on descriptions that ultimately begin from one or more particulars we’re acquainted with.
I have a hunch that even the “Universe” philosophers talk about in metaphysics still depends, in some way, on a link to particular things. Logic is different. In logic we’re not only interested in what does exist, but in anything that could exist—so we don’t have to point to any actual, real-world individual at all.
Now here’s a subtle but important point: when we say something about a person or thing we know only by description, we usually mean to be saying something about the actual thing itself, not about the description.
Take Bismarck. If we could, we’d like to make a judgment that has Bismarck himself inside it—Bismarck as one of the “ingredients” of what we’re thinking. Only someone acquainted with him can do that. We can’t, because the real Bismarck isn’t directly known to us.
What we can do is this:
- We know there was some individual, call him B, who was Bismarck.
- We also know that B was an astute diplomat.
So the proposition we want to affirm is, roughly: “B was an astute diplomat,” where B means “the actual person who was Bismarck.”
And notice what happens when we change the description. Suppose we describe Bismarck as “the first Chancellor of the German Empire.” Then the proposition we want to affirm can be put like this:
- “The proposition that says, of the actual person who was the first Chancellor of the German Empire, that this person was an astute diplomat.”
We may swap descriptions—“Bismarck,” “the first Chancellor of the German Empire,” “the famous Prussian statesman,” and so on. What makes communication possible is that, as long as the description is true of the same individual, we’re still aiming at one and the same underlying proposition about the real Bismarck. That underlying proposition is what we care about.
But there’s a catch: we aren’t acquainted with that proposition itself. We don’t “have it in hand,” so to speak. We can know that there is such a true proposition about Bismarck, and we can describe it accurately, yet we still don’t directly know the proposition as the sort of thing that includes Bismarck as a constituent.
You can see a whole ladder here—different degrees of distance from direct acquaintance with a particular person:
- Bismarck, as known by people who met him. This is as close as you can get to acquaintance with another human being.
- Bismarck, as known only through history. We still say, quite reasonably, that we know who Bismarck was.
- “The man in the iron mask.” Here we don’t know who the person was at all, even though we might know plenty of true facts about him that don’t follow just from the phrase “wore an iron mask.”
- “The longest-lived man.” At this extreme, we know nothing except what follows logically from the definition—someone lived longer than anyone else.
As the description gets thinner and more abstract, we slide farther away from any contact with a concrete individual.
Something similar happens with universals (general things like properties and relations). Just as many particular individuals are known to us only by description, many universals are too. But even here, the same pattern holds: knowledge about what we know by description ultimately has to be grounded in what we know by acquaintance.
That leads to a core principle for analyzing statements that use descriptions:
Any proposition we can genuinely understand must be built entirely out of parts we are acquainted with.
I’m not going to tackle every objection to that principle right now. For the moment, it’s enough to see why it has to be true in some form. It’s hard to imagine how you could make a judgment—or even entertain a “what if?”—without having some grasp of what you’re judging or hypothesizing about. If our words are going to mean something, and not just be sounds, then the meaning we attach to them must be something we can actually latch onto in acquaintance.
Think about Julius Caesar. When you say, “Julius Caesar was ambitious,” Caesar himself isn’t sitting in your mind—you aren’t acquainted with him. What’s in your mind is some description, such as:
- “the man assassinated on the Ides of March,”
- “the founder of the Roman Empire,” or
- maybe just “the person named ‘Julius Caesar.’”
In that last case, what you’re directly acquainted with is really just the name—the sound or the written shape of the words.
So your statement doesn’t mean quite what it appears to mean on the surface. It means something that involves, not Caesar himself, but a description of Caesar—and that description is made entirely from particulars and universals you are acquainted with.
The big payoff of knowledge by description is that it lets us reach beyond the tiny bubble of our private experience. Even if we can only directly know truths whose components all come from what we’ve encountered in acquaintance, we can still know—by description—things we’ve never encountered at all.
Given how narrow our immediate experience is, that ability is essential. Until we understand it, a huge portion of what we claim to know will feel mysterious—and once something feels mysterious, it also starts to feel shaky and suspect.
6
On Induction
In almost everything we’ve done so far, we’ve been trying to pin down what we actually have as evidence for claims about what exists. What, in the whole universe, do we know exists because we’re directly acquainted with it?
Up to now, the answer has been pretty spare:
- We’re acquainted with our sense-data—the colors, sounds, textures, and so on that show up in experience.
- And, probably, we’re acquainted with ourselves.
Those, at least, we know exist. And when we remember past sense-data, we know those experiences did exist in the past. That’s our starting dataset.
But here’s the problem: if we want to go beyond that tiny private bubble—if we want to know that matter exists, that other people exist, that there was a past before our own memory begins, or that there will be a future at all—we have to be able to infer things from what we’re directly given. And inference needs general principles.
In other words, we need to know something like this: when one kind of thing, A, shows up, it’s a sign that another kind of thing, B, is or was or will be there too. Thunder is a sign that lightning happened a moment earlier. If we didn’t know (or assume) connections like that, we’d be trapped inside our own immediate experience forever. And that experience, as we’ve already noticed, is incredibly limited.
So the question now is simple to ask and hard to answer: Can we legitimately extend our knowledge beyond what we’re directly acquainted with? If we can, how?
A belief we all treat as obvious
Take a case that, in real life, almost nobody doubts: we’re convinced the sun will rise tomorrow. Why are we so sure?
Is that confidence just a mental reflex built out of habit? Or is it something we can defend as a reasonable belief?
It’s not easy to find a perfect “reasonableness test” for beliefs like this. But we can at least do something useful: we can identify what kinds of general assumptions would have to be true for the belief “the sun will rise tomorrow” to count as justified—and the same goes for the thousands of similar expectations that quietly steer our daily choices.
If someone asks why we think the sun will rise tomorrow, we naturally say: “Because it always has.” It rose yesterday. It rose the day before. So we expect it to rise again.
If someone presses harder—why think it’ll keep doing what it’s done until now?—we might reach for physics. We say: the Earth is a rotating body, and rotating bodies don’t just stop unless something interferes. Nothing is going to interfere between now and tomorrow, so the rotation continues, and sunrise follows.
Maybe you could doubt whether we’re absolutely certain nothing will interfere. But that’s not the most interesting doubt. The deeper question is: Why are we confident the laws of motion will still be true tomorrow? The moment that doubt shows up, we’re right back where we started.
The real issue: past success vs. future trust
Why do we believe the laws of motion will keep holding? The only honest answer is: because they’ve held so far, as far back as our evidence reaches.
Sure, we have much more evidence for the laws of motion than for tomorrow’s sunrise, because sunrise is just one specific case among countless cases where those laws appear to work. But that doesn’t touch the heart of the matter.
The real question is this:
Does any amount of past confirmation of a law count as evidence that it will still hold in the future?
If the answer is no, then we’re in trouble fast. We’d have no real reason to expect the sun to rise tomorrow. Or to expect the bread we eat at the next meal not to poison us. Or to expect any of the barely noticed assumptions that keep our lives running.
Notice what we are—and aren’t—asking for here. We don’t need a proof that these expectations must come true. Everyday life doesn’t demand certainty. These expectations are matters of probability. What we want is some reason for thinking they’re likely.
Habit is a fact; justification is a separate question
To tackle this, we have to draw a distinction that saves us from confusion.
First, a psychological fact: experience shows that when we repeatedly observe a regular pattern—two things occurring together, or one following the other—we form an expectation that the same thing will happen next time.
Food that looks a certain way usually tastes a certain way; it’s a nasty surprise when the familiar look comes with a completely different taste. Objects we see become linked, through habit, with expected sensations of touch; part of the “creep factor” of a ghost in many stories is that your eyes tell you it’s there, but your hand meets nothing. And people who travel abroad for the first time can be genuinely shocked to discover that their native language isn’t understood.
And this habit-forming association isn’t just human. Animals do it too. A horse that has been driven down the same route over and over may resist being taken a different way. Pets anticipate food when they see the person who usually feeds them.
We also know how badly these expectations can fail. The man who fed the chicken every day eventually wrung its neck. The chicken’s “rule” had worked perfectly—until it didn’t. A more sophisticated view of nature’s regularities would have served the chicken well.
So yes: these expectations exist, and they’re powerful. Repetition alone pushes both animals and humans toward “it’ll happen again.”
But now comes the philosophical question. The existence of the habit is one thing; the rational standing of the habit is another.
So we have to separate:
- The fact that past regularities cause us to expect future regularities,
- from the question of whether those expectations deserve any weight once we stop and ask whether they’re valid.
Maybe our certainty about tomorrow’s sunrise is no better than the chicken’s certainty about tonight’s dinner.
What “the uniformity of nature” claims
This takes us to the central issue: do we have any reason to believe in what people call the uniformity of nature?
That phrase means: everything that happens is an instance of some general law with no exceptions.
The rough-and-ready expectations we’ve been talking about clearly do have exceptions—hence the disappointments. But science typically assumes, at least as a working method, that if we’ve got a rule with exceptions, we can replace it with a deeper rule that has none.
Take a crude rule like: “unsupported bodies in air fall.” Balloons and airplanes are obvious exceptions. But the more fundamental laws—motion and gravitation—explain not only why most things fall but also how balloons and airplanes can rise. So the deeper laws aren’t refuted by those “exceptions”; they include them.
Similarly, “the sun will rise tomorrow” could turn out false if, say, the Earth collided with some massive object that disrupted its rotation. But even then, the laws of motion and gravitation wouldn’t be broken. That’s what science is aiming for: to identify uniformities—laws like these—that, as far as our experience goes, admit no exceptions.
Science has been impressively successful at this project, and we can grant that these laws have held up to now. But that just brings the original puzzle back in a sharper form:
If a law has always held in the past, do we have any reason to think it will hold in the future?
A tempting argument—and why it fails
People sometimes argue like this: “Of course the future resembles the past. What used to be the future has repeatedly become the past, and when it did, it matched what came before.” In other words, we’ve already seen “the future” turn into “the past,” and it behaved consistently.
But that argument quietly smuggles in the very thing it’s trying to prove. Yes, we have experience of times that were once future—call them past futures. But we don’t have experience of future futures. The question is exactly whether those future futures will resemble the past futures.
You can’t answer that by leaning only on past futures. That’s just going in a circle. So we still need some principle that would justify the claim that the future will follow the same laws as the past.
It’s not only about the future
Also, this isn’t really a future-only problem. The same issue comes up when we extend present-day laws backward into parts of the past we never observed—like in geology, or theories about how the solar system formed.
So the question is broader and cleaner when stated this way:
When two things have been found together again and again, and we’ve never seen one occur without the other, does the appearance of one in a new case give us a good reason to expect the other?
How we answer that determines the legitimacy of almost everything we take for granted: our expectations about the future, the results of induction, and basically the background beliefs daily life runs on.
What induction can (and can’t) give us
We have to admit something right away: the fact that two things have always shown up together in our experience doesn’t logically guarantee they’ll show up together next time. No matter how many confirmations we’ve collected, we can’t squeeze out a demonstrative proof that the next case must match.
The best we can hope for is something weaker but still useful:
- the more often two things have been linked,
- the more probable it becomes that they’ll be linked again,
- and after enough repetitions, that probability can come close to certainty.
But it never reaches absolute certainty, because we know that even after long runs of regularity, failure can happen—like the chicken’s last meal.
So probability is all we should demand.
A common pushback—and two replies
Someone might object: “But nature is governed by law. And sometimes, once you’ve observed enough, you can see that only one law could possibly fit the facts.”
There are two answers.
- Even if there really is a strict law with no exceptions governing the case, we can almost never be sure, in practice, that we’ve found that law rather than a looser rule that only seems exceptionless within our limited experience.
- The claim that nature is governed by law—the “reign of law”—looks itself like a belief supported by induction. We believe it will hold tomorrow and in unobserved parts of the past because it has held in the cases we’ve examined. That means this supposed foundation already depends on the very principle under investigation.
The principle of induction, stated plainly
The principle we’re probing is what we can call the principle of induction. It has two parts:
- If things of type A have been associated with things of type B, and we’ve never found A occurring without B, then the more cases of this association we’ve observed, the more probable it is that when one appears again, the other will too.
- With enough observed associations (and still no counterexamples), the probability of the association in a new case can become almost certain—approaching certainty as closely as you like, though never reaching perfect certainty.
As stated, this principle supports an expectation about a single new case. But we usually want more than that. We also want to believe a general law: that all things of type A are associated with B, as long as we’ve seen enough examples and no failures.
Here’s an important point: the probability of the general law is always lower than the probability of a particular new case. If the general law is true, then the particular case must be true; but the particular case could be true even if the general law is false. Still, repetitions increase the probability of the general law the same way they increase the probability of the particular prediction.
So we can restate the principle for general laws:
- The more cases we’ve observed of A occurring with B (and no failures), the more probable it is that A is always associated with B.
- Given enough such cases (and still no failures), it becomes nearly certain that A is always associated with B, with the probability approaching certainty without limit.
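One classical way to put numbers on this two-part principle is Laplace's rule of succession. This is my illustration, not the author's argument: it assumes a specific probability model the text never mentions, and the figures only show the qualitative behavior the principle describes—probability rising with repetitions, the general law always trailing the particular case, and certainty never being reached:

```python
# Hedged illustration: Laplace's rule of succession, a specific
# probability model assumed for the sake of the example.

def next_case(n):
    """Probability the next A is a B, after n A-with-B cases and no failures."""
    return (n + 1) / (n + 2)

def next_m_cases(n, m):
    """Probability the next m A's are all B's: a stand-in for the general law."""
    return (n + 1) / (n + m + 1)

for n in (1, 10, 100, 10_000):
    print(n, round(next_case(n), 4), round(next_m_cases(n, 50), 4))
# The single-case probability climbs toward 1 but never reaches it,
# and the many-case ("general law") probability is always the lower one.
```

This mirrors the point in the text: confirming instances raise both probabilities, but the law as a whole is always less probable than any one prediction it licenses.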
Why probability depends on what you know
One more crucial caution: probability is always relative to the data you’re using.
In our case, the “data” are just the known instances where A and B occurred together. But there may be other relevant information that would drastically change the probability.
Suppose someone has seen countless white swans and has never seen a non-white one. Using induction, it could be perfectly reasonable, given that data alone, to conclude that it’s probable all swans are white.
That argument isn’t “refuted” by the later discovery of black swans, because unlikely things can happen even when your data made them improbable. What changes is that you now have new data. Or you might have additional background knowledge—say, that color varies widely in many animal species—making any induction about color especially risky. But that would be an extra piece of evidence, not a retroactive proof that the earlier probability estimate (relative to the earlier data) was irrational.
So the mere fact that expectations sometimes fail doesn’t show that inductive reasoning is generally unreliable. It just shows that induction doesn’t deliver certainty.
In that sense, the principle of induction can’t be disproved by experience.
Why experience also can’t prove induction
But here’s the twist: induction can’t be proved by experience either.
Experience might seem to support induction for cases we’ve already checked, because it often “works” there. But the moment we try to justify an inference about unexamined cases—about tomorrow, about unseen regions of the past, about places we’ve never visited—experience alone can’t do it. To move from “it held in the examined cases” to “it will hold in the unexamined cases,” you have to assume the inductive principle in the first place.
That means any attempt to prove induction by appealing to experience is circular: it assumes what it’s trying to establish.
So we face a stark choice:
- either we accept the inductive principle because it seems self-evident in some broader sense,
- or we give up any real justification for expectations about what we haven’t experienced.
And if induction isn’t trustworthy, the consequences are wild. We’d have no reason to expect the sun to rise tomorrow, or bread to nourish us more than a stone, or a jump off a rooftop to end in a fall. If we see what looks like our best friend walking toward us, we’d have no reason to think that body isn’t controlled by the mind of an enemy—or a stranger entirely.
In practice, everything we do rests on associations that have worked before and that we therefore treat as likely to work again. Whether that “likely” is justified depends on the inductive principle.
Science depends on it too
And it’s not just everyday common sense. The big organizing commitments of science—the belief in the reign of law, the belief that every event has a cause—depend on induction just as much.
We believe those general principles because humanity has seen countless cases that fit them and no clear cases that contradict them. But that history doesn’t tell us they’ll hold tomorrow, or in unobserved parts of the past, unless we assume induction.
So any knowledge that uses experience to tell us something about what we haven’t experienced rests on a belief that experience can neither confirm nor refute—yet that belief feels, in ordinary life, as solid and unavoidable as many direct observations.
Understanding the existence and justification of such beliefs—since induction isn’t the only one—opens up some of the hardest and most argued-over problems in philosophy. In the next chapter, we’ll look briefly at what might be said to explain how this kind of knowledge is possible, and what its limits and degree of certainty might be.
7
On our Knowledge of General Principles
In the last chapter we met a strange but unavoidable fact: the principle of induction is essential for any argument that leans on experience, yet you can’t prove it from experience. And still, everyone trusts it—at least in the ordinary, everyday ways we constantly use it.
What’s easy to miss is that induction isn’t the only principle like this. We rely on plenty of other principles that experience can’t settle—principles that can’t be confirmed or refuted just by collecting more observations—yet we use them every time we reason from what we sense to what we conclude.
Some of these principles are, if anything, even more certain than induction. Our confidence in them can be as firm as our confidence that we’re having certain sense-experiences at all. And that matters, because these principles are the “connective tissue” that lets us draw conclusions from what sensation gives us. If we want our conclusions to be true, it’s not enough that our sensory data be accurate; the rules we use to infer from that data have to be accurate too.
The trouble is that these inferential rules often hide in plain sight. They feel so obvious that we agree to them automatically, without even noticing we’re making an assumption. But if we want a serious theory of knowledge, we have to bring these principles into focus—because once we do, they raise some of the hardest questions in philosophy.
How We Learn General Principles
When we come to know a general principle, we usually don’t start with the general statement. We start with a particular case, see that it works, and then realize that the particular details don’t matter. The “this specific example” turns out to be irrelevant; what matters is the underlying pattern, which holds just as well in any case of the same form.
That’s how arithmetic is taught. You first learn “two and two are four” with something concrete—two apples and two apples, say. Then with two books and two books. Eventually you see that the kind of thing doesn’t matter. Once you strip away the irrelevant details, you grasp the general truth: any pair plus any pair makes four.
Logic works the same way. Picture a simple conversation:
Two people are trying to remember the date. One says, “If yesterday was the 15th, then today must be the 16th.” The other agrees. Then the first adds, “And you know yesterday was the 15th because you ate dinner with Jones, and your diary shows that dinner was on the 15th.” The second agrees again—and concludes, “So today is the 16th.”
Nobody finds this hard. And if the premises are true, nobody denies that the conclusion has to be true. But the argument only works because it’s an instance of a much more general logical rule.
A Core Rule of Inference (Implication)
Here’s the general form:
- Suppose you know: if this is true, then that is true.
- Suppose you also know: this is true.
- Then you can conclude: that is true.
When it’s true that “if this is true, then that is true,” we say that “this” implies “that,” and that “that” follows from “this.” So the principle can be stated like this:
- If this implies that, and this is true, then that is true.
Or more compactly:
- Anything implied by a true proposition is true.
- Whatever follows from a true proposition is true.
This isn’t some quirky rule for special cases. It’s built into every demonstration. Any time you use one belief to justify another—any time you say “because X, therefore Y”—you’re leaning on this principle.
And if someone asks, “Why should I accept the results of valid reasoning from true premises?” you eventually have to point back to something like this. There isn’t a deeper proof that doesn’t already assume the very thing you’re trying to justify. The principle feels impossible to doubt, and because it’s so obvious it can seem almost trivial. But it isn’t trivial in philosophy, because it shows something surprising: we can have certainty that doesn’t come from the senses at all.
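This core rule is what logicians call modus ponens. As a purely illustrative aside (not part of the original text), the rule can be written formally; in the Lean proof language, for instance, it is checkable in a single line:

```lean
-- Modus ponens: if "this implies that" (h) and "this" (p) both hold,
-- then "that" holds. P and Q stand for any propositions whatsoever.
example (P Q : Prop) (h : P → Q) (p : P) : Q := h p
```

Notice that the proof is nothing but applying h to p: the rule is so basic that there is no deeper step to appeal to, which is exactly the point made above.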
The ‘Laws of Thought’—And Why the Name Misleads
That principle is only one among many self-evident logical truths. Some of them have to be accepted before any proof or argument can even get started. Once a few are granted, others can be proved—though the simpler ones often feel just as obvious as the starting points.
Tradition has singled out three, for no particularly good reason, and given them a grand title: the Laws of Thought.
They’re usually listed like this:
- Law of identity: “Whatever is, is.”
- Law of contradiction: “Nothing can both be and not be.”
- Law of excluded middle: “Everything must either be or not be.”
They’re good examples of self-evident logical principles, but they’re not uniquely fundamental. The earlier rule—“anything implied by a true proposition is true”—is just as basic.
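As a sketch, the three traditional laws can be put in the same formal notation used for modus ponens above. (The third law is not provable constructively; Lean supplies it as a classical axiom, which is itself a small illustration of how non-trivial these “obvious” principles are.)

```lean
-- Law of identity: whatever is, is.
example (P : Prop) : P → P := fun p => p

-- Law of contradiction: nothing can both be and not be.
-- h.1 asserts P, h.2 asserts not-P; together they yield absurdity.
example (P : Prop) : ¬(P ∧ ¬P) := fun h => h.2 h.1

-- Law of excluded middle: everything must either be or not be.
-- (Taken as a classical axiom rather than proved.)
example (P : Prop) : P ∨ ¬P := Classical.em P
```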
And the label “laws of thought” is actually a bit misleading. What matters isn’t merely that we tend to think in these ways. What matters is that reality itself respects these structures: thinking in line with them is a way of thinking truly. That opens a much larger question, which we’ll have to return to later.
Certainty vs. Probability: Two Kinds of Logical Principles
Not all logical principles deliver certainty. Some rules of reasoning let you show that a conclusion is definitely true if the premise is true. Others let you show only that a conclusion is more or less likely given the premise.
The most important example of this second kind is the inductive principle we discussed before.
Empiricists vs. Rationalists: Who Was Right?
One of the biggest long-running fights in philosophy is between empiricists and rationalists.
- The empiricists (especially the British philosophers Locke, Berkeley, and Hume) argued that all knowledge comes from experience.
- The rationalists (notably the seventeenth-century Continental thinkers Descartes and Leibniz) argued that, in addition to experience, we have certain principles and ideas we know independently of experience—often described as “innate.”
Today we can judge these positions with more confidence. On the main point—the status of logical principles—the rationalists had the better argument. We do know logical principles, and experience can’t prove them, because every proof already presupposes them.
But the empiricists weren’t simply wrong. Even when a piece of knowledge isn’t provable by experience, it may still be triggered by experience. Particular experiences are often what make us notice a general law in the first place. It would be ridiculous to say that babies are born already knowing every truth that can’t be deduced from sensation. So calling logical principles “innate” isn’t very helpful.
A better label is a priori. It’s less misleading and more common in modern philosophy. On this view:
- Experience can prompt us to consider an a priori truth.
- But experience doesn’t prove it.
- Experience simply directs our attention until we “see” the truth without needing sensory evidence as a proof.
What Experience Is Still Needed For: Existence
Here the empiricists had the advantage: you can’t know that something exists without experience.
If you want to establish the existence of something you haven’t directly experienced, your argument must still include, somewhere among its premises, the existence of something you have experienced. Take a mundane example: believing that the Emperor of China exists (or any distant person you’ve never met) rests on testimony. And testimony, when you trace it back, is built out of sensory inputs—things you saw on a page, heard someone say, and so on.
Rationalists sometimes thought they could deduce the existence of real things purely from general reasoning about what “must be.” That seems to be a mistake.
What we can know a priori about existence is, as far as we can tell, always conditional. It tells us things like:
- If one thing exists, then another must exist.
- If one proposition is true, then another must be true.
That’s exactly the shape of the principles we’ve already discussed:
- If this is true, and this implies that, then that is true.
- If two events have repeatedly been connected, they will probably be connected the next time.
So the reach of a priori principles is limited. They can map connections—what follows from what—but they can’t, all by themselves, supply actual existence.
That gives us a useful distinction:
- Knowledge is empirical when it depends wholly or partly on experience.
- Any knowledge that asserts that something exists is empirical.
- The only a priori knowledge involving existence is hypothetical: it links possible or actual things, but doesn’t guarantee that any of them are actually there.
If we know something immediately, we know its existence through experience alone. If we prove something exists without experiencing it directly, then both experience (as a starting point) and a priori principles (as the inferential machinery) are needed.
A Priori Knowledge Beyond Logic: Value
A priori knowledge isn’t limited to logic. One of the most important non-logical examples is ethical value.
This isn’t about what’s useful or what counts as virtuous behavior—those judgments depend on facts and therefore need empirical premises. It’s about judgments of intrinsic value: what is desirable in itself.
Something is useful only because it helps achieve some goal. But if you keep asking “Why does that goal matter?” you eventually reach an end you treat as worthwhile on its own, not as a tool for something else. So judgments about usefulness ultimately depend on judgments about what has intrinsic value.
We make judgments like these all the time:
- Happiness is more desirable than misery.
- Knowledge is more desirable than ignorance.
- Goodwill is more desirable than hatred.
At least some of these judgments must be immediate and a priori. Experience can bring them to our attention—and likely has to, because it’s hard to judge the value of something without encountering something like it. But experience can’t prove them. The mere fact that something exists (or doesn’t) can’t establish that it ought to exist or that it’s bad that it exists.
This is the gateway to ethics, and in particular to the famous problem that you can’t deduce “ought” from “is.” For our purposes here, the key point is simpler: knowledge of intrinsic value is a priori in the same way logic is—a kind of truth that experience can neither prove nor disprove.
Why Mathematics Isn’t Just “Really Reliable Experience”
Pure mathematics is a priori too, just like logic. Empiricist philosophers fought this hard. They claimed experience is as much the source of arithmetic as it is the source of geography. On their view, we repeatedly see two things plus two things make four things, and by induction we conclude that this will always happen.
But if that were really how we knew that two and two are four, we’d reason very differently than we actually do.
Yes, we need some concrete examples at first, because they help us understand what “two” even means in an abstract way—separate from “two coins” or “two books” or “two people.” But once we strip away those irrelevant details, we grasp the general principle directly. At that point, one carefully understood example is enough, because we can see that the example is merely typical; checking more cases adds nothing essential.
Geometry shows the same pattern. To prove something about all triangles, we draw one triangle and reason about it—but we take care not to use any feature that belongs only to that particular drawing. By ignoring the irrelevant quirks, we extract a general truth.
Notice what happens to our certainty. We don’t feel more certain that two and two are four after seeing more examples. Once we see it, our confidence becomes as high as it can be.
And there’s something else: the statement “two and two are four” feels necessary in a way that even our best empirical generalizations don’t.
An empirical generalization—no matter how well supported—still feels like a fact that might have been otherwise. We can imagine a world where it fails, even if it never fails in ours. But with “two and two are four,” we don’t just think it’s true here; we feel it must be true in any possible world. It’s not merely a fact—it’s a necessity that everything actual or possible must obey.
Comparing Math to a True Empirical Generalization
Take a genuinely empirical claim: “All men are mortal.”
We believe it partly because we’ve never observed a counterexample—people don’t live past a certain age—and partly because we have physiological reasons to think the human body must eventually wear out.
But if we set aside physiology and look only at experience, a big difference appears. One clear instance of a man dying wouldn’t satisfy us. We’d want many cases. By contrast, with “two and two are four,” one carefully examined instance can be enough to convince us it must hold in every instance.
Even more, we can admit—if we’re honest—that there’s at least a tiny sliver of doubt about “all men are mortal.” To see this, try imagining two possible worlds:
- One world contains humans who never die.
- Another world contains arithmetic where two and two make five.
The first is weird, but we can at least go along with it in imagination—Swift’s story of the immortal Struldbruggs shows that. The second feels different altogether. A world where two and two make five seems like it would tear up the foundation of thought and leave us unable to trust anything.
So here’s the real picture: in simple mathematical judgments like “two and two are four,” and in many logical judgments as well, we can know the general truth without inferring it from repeated instances. Still, we often need some example to understand what the general statement means.
Why Deduction Can Add Knowledge
This also explains why deduction has real value. It isn’t just induction that matters.
- Induction often moves from particular cases to a general rule, or from one particular to another.
- Deduction can move from general to particular, or from general to general.
Philosophers have argued for ages about whether deduction ever gives genuinely new knowledge. We can now see that sometimes it does.
If you already know the general truth “two and two are four,” and you also know that Brown and Jones are two people, and Robinson and Smith are two more, you can deduce that there are four people total. That conclusion is new in a meaningful sense: the general principle didn’t mention Brown, Jones, Robinson, or Smith, and the particular facts didn’t explicitly state “there are four.” The deduction combines them into a new, more informative statement.
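To make the shape of that deduction concrete, here is a hypothetical sketch in the Lean proof language (the lists and names are illustrative, taken from the example above, not from any formal system in the text):

```lean
-- Two particular groups of two people each.
def group1 : List String := ["Brown", "Jones"]
def group2 : List String := ["Robinson", "Smith"]

-- The general truth 2 + 2 = 4, applied to these particulars:
-- the combined group has four members. Lean checks this by
-- actually computing the length, not by citing the named people.
example : (group1 ++ group2).length = 2 + 2 := rfl
example : (group1 ++ group2).length = 4 := rfl
```

The general principle never mentions Brown or Smith; the particular lists never mention “four.” The conclusion genuinely combines the two.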
But this “newness” is less clear in the classic textbook example:
- All men are mortal.
- Socrates is a man.
- Therefore Socrates is mortal.
In real life, what we know with high confidence is that particular men—call them A, B, C—have died. If Socrates is one of those known cases, it’s silly to take a detour through “all men are mortal” to reach a conclusion you already effectively have. And if Socrates isn’t one of the cases, it’s still better to reason directly from the observed deaths of A, B, C to Socrates than to route the argument through the sweeping generalization “all men are mortal.”
Why? Because on the evidence, it’s more likely that Socrates is mortal than that all men are mortal. If all men are mortal, then Socrates is mortal—but if Socrates is mortal, it doesn’t follow that all men are. So the particular conclusion can have higher probability than the universal statement used to derive it.
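The asymmetry can be put formally: a universal statement yields each of its instances, but no instance yields the universal. A minimal Lean sketch (the predicate names are illustrative):

```lean
-- From "all men are mortal" plus "Socrates is a man,"
-- "Socrates is mortal" follows by instantiating the universal at "Socrates".
example (Man Mortal : String → Prop)
    (h : ∀ x, Man x → Mortal x) (hs : Man "Socrates") :
    Mortal "Socrates" :=
  h "Socrates" hs
```

The reverse direction has no such proof: knowing one instance gives no way to derive the universal claim, which is why the particular conclusion can be better supported than the generalization used to reach it.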
That contrast highlights the key difference we’ve been tracing: between general truths we know a priori, like “two and two are four,” and broad empirical generalizations, like “all men are mortal.”
When you’re dealing with facts you’ve directly observed, induction is the ideal way to argue: you build up from evidence. It’s also the approach that should make you most confident—at least in theory—because a general statement based on experience is always shakier than the concrete cases you used to support it. In other words, every empirical generalization carries more uncertainty than its individual examples.
So far, though, we’ve run into a different kind of knowledge—things we seem to know a priori, without needing to check the world first. That includes the core truths of logic and pure mathematics, and also the basic principles of ethics. Now we hit the next problem: How can that be possible?
More specifically: how can we legitimately know general laws in situations where we haven’t checked every case—and in fact couldn’t check every case, because there are infinitely many? These questions are notoriously hard, and they matter a lot historically. They were first pushed into the spotlight by the German philosopher Immanuel Kant (1724–1804).
8
How A Priori Knowledge Is Possible
Immanuel Kant is usually treated as the heavyweight champion of modern philosophy. He lived through major upheavals—the Seven Years’ War, the French Revolution—yet he spent his life teaching in Königsberg in East Prussia without much interruption. His signature move was what he called “critical” philosophy: start by admitting an obvious fact (we do have knowledge), then ask a sharper question—how is that knowledge even possible? From the answer, Kant tried to draw big conclusions about what the world must be like. You can reasonably doubt whether those metaphysical conclusions really follow. But Kant absolutely deserves credit for two breakthroughs:
- He saw that we have a priori knowledge that is not merely analytic (true only because denying it would be self-contradictory).
- He made the theory of knowledge—the study of what knowledge is and how it works—central to philosophy.
Analytic vs. Synthetic: What’s the Difference?
Before Kant, philosophers mostly assumed a simple rule: if something is knowable a priori (known independently of experience), then it must be analytic.
The easiest way to see what “analytic” means is with examples. If I say:
- “A bald man is a man.”
- “A plane figure is a figure.”
- “A bad poet is a poet.”
I’m not adding any new information. I’m basically unpacking what was already built into the subject. These are analytic judgments because the predicate (“is a man,” “is a figure,” “is a poet”) is contained in the subject itself. You get the predicate just by analyzing the concept you started with.
That’s why these statements are trivial. In real life, nobody says them unless they’re warming up the audience for a rhetorical trick.
And here’s the key point: philosophers before Kant thought all a priori certainty worked like this. If the predicate is already part of the subject, then denying the statement would be a straight-up contradiction. Saying “A bald man is not bald” would both assert and deny baldness of the same person at the same time. That violates the law of contradiction: nothing can both have and not have the same property in the same respect at the same time. So, on the old view, the law of contradiction alone could guarantee every a priori truth.
Hume’s Shock: Cause and Effect Aren’t “Contained” in Each Other
Then David Hume came along. He accepted the standard definition: a priori truths must be analytic. But when he looked carefully, he realized that many claims people had treated as analytic—especially claims about cause and effect—aren’t analytic at all. The link between a cause and its effect isn’t something you can uncover just by inspecting the idea of the cause.
Before Hume, at least the rationalist philosophers believed this: if you knew enough, you could deduce the effect from the cause by pure reasoning. Hume argued (and most people today would agree) that you can’t. No amount of concept-analysis lets you logically extract the future from the present in that way.
From this, Hume drew a stronger and much more controversial conclusion: we can’t know anything a priori about causation at all.
Kant was raised in the rationalist tradition, and Hume’s skepticism rattled him. He went looking for an answer.
Kant’s Big Move: Mathematics Is A Priori but Not Analytic
Kant’s key insight was that not only causation, but even arithmetic and geometry, are synthetic rather than analytic. In other words, in these truths the predicate isn’t sitting inside the subject waiting to be “unpacked.”
His famous example is:
- 7 + 5 = 12
Kant points out—correctly—that you don’t get 12 just by analyzing the ideas 7 and 5. You have to combine them. The result is new information, not a mere restatement. Even the idea of “adding them” doesn’t automatically hand you the specific sum 12 unless you actually carry out the operation.
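As an aside, a modern proof assistant makes this vivid (whether it really bears on Kant’s thesis is debatable; the sketch is only illustrative). Even a machine does not find 12 sitting “inside” the symbols 7 and 5; it must actually carry out the combination:

```lean
-- The proof term `rfl` asks Lean to perform the addition and
-- check that both sides compute to the same numeral; the result
-- is obtained by doing the operation, not by unpacking the symbols.
example : 7 + 5 = 12 := rfl
```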
So Kant concluded: pure mathematics is synthetic yet a priori.
And that creates a new problem. If mathematics is both:
- a priori (known independently of experience), and
- synthetic (not true merely by definition),
then how on earth is it possible?
Why “It Comes from Experience” Doesn’t Work
Kant framed his whole philosophy around a question like this: How is pure mathematics possible? Any philosophy that isn’t pure skepticism owes us some answer.
The strict empiricist answer is: “We get mathematics by induction from repeated experiences.” But we’ve already seen why that doesn’t hold up, for two reasons:
- Induction can’t justify itself. You can’t prove the principle of induction by using induction without going in a circle.
- Mathematics doesn’t gain certainty by piling up examples. The statement “two and two always make four” doesn’t become more certain because you check it in 10,000 cases. In fact, you can see its truth from a single clear instance; enumerating more instances adds nothing.
So mathematical knowledge—and the same goes for logic—can’t be explained the way we explain everyday generalizations like “all humans are mortal,” which are always a bit uncertain because they’re based on experience.
The Weirdness: Experience Is Particular, but Math Is Universal
Here’s the pressure point: experience comes in particular episodes, but a priori knowledge is general.
It is genuinely strange that we seem able to know, ahead of time, truths that will apply to cases we’ve never encountered. We have no idea who will live in London a hundred years from now. But we’re confident that if you pick any two of those people and then pick any other two, you’ll have four people total.
That ability to “see” how things must go in cases we’ve never observed is what needs explaining.
Kant’s solution is fascinating, even if (as I’ll argue) it doesn’t ultimately work. It’s also famously hard, and different interpreters read it differently. So all we can do here is give the bare outline—though even that outline will annoy some Kant experts.
Kant’s Proposal: We Supply the Form of Experience
Kant says every bit of experience has two distinguishable ingredients:
- What comes from the object (what we’ve been calling the “physical object”).
- What comes from us—from the structure of our own minds.
This much fits with the earlier idea that sense-data aren’t simply the physical object itself, but arise from an interaction between the object and the perceiver.
Where Kant becomes distinctive is in how he divides the labor. He argues:
- The raw sensory material—colors, hardness, and so on—comes from the object.
- But the organization of that material comes from us: its arrangement in space and time, plus the network of relations we impose when we compare things, order them, treat one as the cause of another, and so forth.
Why believe that? Kant’s main reason is that we seem to have a priori knowledge about things like:
- space
- time
- causality
- comparison and relation
But we don’t seem to have a priori knowledge about the specific sensory “stuff” that shows up—this color rather than that, this particular texture, this precise temperature.
So Kant says: we can be confident that anything we ever experience will fit our a priori knowledge because the features our a priori knowledge describes are features we ourselves contribute. Nothing can enter experience without being shaped by the mind’s built-in forms and rules.
Phenomena vs. Things-in-Themselves
Kant draws a sharp line:
- The thing in itself (the physical object as it exists independently of our experience) is, he claims, essentially unknowable.
- What we can know is the phenomenon: the object as it appears in experience.
Because a phenomenon is a joint product of the thing in itself and the mind’s structuring activity, it is guaranteed to have the mind-supplied features—space, time, causal order, and so on. That’s why a priori knowledge is reliable within experience.
But Kant insists we must not extend that a priori knowledge beyond experience. A priori truths apply to every actual or possible experience, but they don’t tell us what reality is like “in itself,” outside any possible experience.
That’s how Kant tries to keep the best of both sides:
- The rationalists are right that we have necessary, a priori knowledge.
- The empiricists are right to warn us against pretending that necessity lets us read off the ultimate nature of reality.
A Serious Objection: Kant Doesn’t Actually Secure Certainty
There are plenty of smaller criticisms one can make of Kant. But there’s one big problem that seems fatal for his approach.
What we’re trying to explain is our certainty that facts must always obey logic and arithmetic. Kant says these forms are contributed by us. But that doesn’t explain why they’re guaranteed.
After all, our mental constitution is itself a fact about the world. And facts about the world don’t come with a built-in promise of permanence. If Kant were right, it seems at least possible that tomorrow our nature could change in such a way that two and two would “come out” as five.
That possibility apparently never occurred to him. Yet it undercuts exactly what he wants to protect: the universality and necessity of arithmetic.
You might object that, in Kant’s system, “tomorrow” is already suspect—since he claims time is a form the subject imposes on appearances, and the real self isn’t in time at all. But even then, he still needs the order of appearances over time to be grounded in something behind appearances. That’s enough to make the objection bite: if the mind’s structuring role is just another contingent feature, it can’t deliver the kind of unconditional certainty mathematics seems to have.
A Priori Truths Aren’t Just About Our Minds
On reflection, it becomes hard to avoid another conclusion: if arithmetic is true at all, it must be true whether or not we think about it.
Two physical objects plus two physical objects must make four physical objects—even if, in some bizarre scenario, physical objects could never be experienced. When we say “two and two are four,” we don’t mean “two phenomena plus two phenomena are four phenomena.” We mean it in the broader, straightforward way. And that broader claim seems just as undeniable.
So Kant’s solution doesn’t just fail to explain certainty; it also shrinks the scope of a priori truths too much.
The Temptation: Calling A Priori Principles “Laws of Thought”
Even setting Kant aside, many philosophers have been tempted by a similar idea: that what’s a priori is “in the mind”—that it concerns how we must think, rather than how the world is.
That’s part of why certain principles are traditionally called the laws of thought. In the previous chapter we noted three such principles. The label feels natural, but there are strong reasons to think it’s mistaken.
Take the law of contradiction. It’s often put like this:
- “Nothing can both be and not be.”
What that’s aiming to say is: nothing can simultaneously have and not have the same quality. If a tree is a beech, it can’t also be not-a-beech. If my table is rectangular, it can’t also be not rectangular.
It’s easy to see why people call this a “law of thought.” We don’t usually verify it by looking around. Once we’ve seen that the tree is a beech, we don’t need to stare at it longer to check whether it’s also not a beech. We know by reflection alone that this is impossible.
But that doesn’t mean the law is about thought.
When you believe the law of contradiction, you are not mainly believing: “the human mind is built so that it has to believe this.” That psychological claim—about how our minds are wired—is something you might reach later, after reflecting on your own thinking. And it already presupposes the law of contradiction.
What you originally believe is a claim about things:
- Not “If we think a tree is a beech, we can’t also think it’s not a beech,” but
- “If the tree is a beech, it can’t at the same time be not a beech.”
So the law of contradiction is a fact about reality, not a rule about mental behavior. Yes, believing it is an act of thought. But the law itself isn’t a thought. If the world didn’t obey the law, then our being psychologically compelled to accept it wouldn’t make it true—and that shows it isn’t a mere “law of thought.”
The same point applies to any a priori judgment. When we say “two and two are four,” we are not talking about our mental habits. We are talking about all actual or possible pairs, whatever they may consist of. It may be true that our minds are built to believe this. But that is not what we mean by the statement, and no fact about our psychology could make it true if it weren’t already true.
So if our a priori knowledge isn’t mistaken, it isn’t merely knowledge of the mind’s structure. It applies to whatever the world contains—mental or non-mental.
Where A Priori Knowledge Really Points: Qualities and Relations
The best way to put it seems to be this: our a priori knowledge concerns entities that don’t “exist,” strictly speaking, as physical things or as mental events. These are entities we name with words that aren’t nouns—things like qualities and relations.
For example, suppose I’m in my room. I exist, and my room exists. But does the word “in” name something that exists the way I do? Not exactly. Still, “in” clearly means something: it denotes a relation that holds between me and my room.
That relation is “something” we can understand and reason about. Otherwise we couldn’t even understand the sentence “I am in my room.”
Many philosophers, influenced by Kant, say relations are produced by the mind: things “in themselves” have no relations until the mind connects them in thought and thereby creates the relations it judges them to have.
But this view runs into the same kind of objection as Kant’s.
It’s not my thinking that makes “I am in my room” true. An earwig might be in my room even if nobody—me, the earwig, anyone—knows it. The truth depends only on the earwig and the room, not on any act of awareness.
So relations, as we’ll see more clearly in the next chapter, have to belong to a realm that is neither mental nor physical. And that realm matters enormously for philosophy—especially for understanding how a priori knowledge is possible. In the next chapter we’ll start spelling out what this “world of relations” is like and why it changes the whole picture.
9
The World of Universals
By the end of the last chapter, we’d arrived at a strange conclusion: some things—especially relations—seem to “be” in a way that doesn’t match physical objects, minds, or even sense-data. In this chapter, we need to pin down what kind of being this is, and which kinds of things have it. Let’s start with the second question: what belongs in this category at all?
This problem is ancient. Plato dragged it to the center of philosophy, and his famous “theory of Ideas” was an attempt to solve it. In my view, it’s one of the most successful attempts ever made. What I’m going to defend here is basically Plato’s view, with a few updates that later thinking has forced on us.
How Plato got there is easier to see if we take a familiar concept—say, justice. If you ask, “What is justice?”, the natural move is to look at many just actions and try to identify what they share. Those actions must, in some sense, “have something in common”—a single nature that appears wherever something is just, and nowhere else. That shared nature is justice itself: the pure essence that, when it shows up in the messy world of everyday life, produces the many different just acts we actually see.
The same logic applies to other words we use across many cases. Take whiteness. The word “white” applies to lots of particular things because, in some way, they share a common nature or essence. Plato called that pure shared essence an “Idea” or “Form.” And it’s crucial not to misunderstand him here: Plato’s “Ideas” aren’t little thoughts floating around inside someone’s mind, even though minds can grasp them.
So the Form of justice isn’t the same as any particular just act. It’s something other than the particular things—something those particular things “participate in.” Because it isn’t a particular, it can’t be located in the sensory world the way tables, colors on a wall, or individual events are. And unlike sensory things, it doesn’t come and go. It doesn’t age, decay, or change. It’s simply itself—eternal, fixed, indestructible.
That pushes Plato toward a world beyond the senses: an unchanging realm of Forms that is, for him, more real than the shifting world of everyday experience. The sensory world, on this view, only has whatever thin “borrowed reality” it has because it reflects—or participates in—that deeper realm. When we try to say what a sensible thing is like, we end up describing it by listing the Forms it participates in. Those Forms, then, supply the thing’s whole character.
From there, it’s an easy slide into mysticism: you start imagining that maybe, with some kind of spiritual spotlight, we could see the Forms the way we see colors and shapes, or that the Forms literally live in heaven. Those mystical add-ons are understandable, but they’re not the foundation. The foundation is logical. And it’s the logical core we need to examine.
Unfortunately, the word “idea” now carries associations that badly distort what Plato meant. So instead of “idea,” I’ll use the word universal for the kind of thing Plato was talking about.
Here’s the key contrast:
- A particular is something given in sensation (or at least something of the same general type as what sensation gives us): this specific white sheet of paper, that particular sound, this one person, this moment.
- A universal is something that can be shared by many particulars: what justice has in common across just acts, what whiteness has in common across white things.
Once you look at language with this in mind, a pattern jumps out. Roughly speaking:
- Proper names (“London,” “Edinburgh,” “Charles I”) point to particulars.
- Most other kinds of words—common nouns, adjectives, prepositions, and verbs—point to universals.
- Pronouns (“he,” “this,” “they”) point to particulars, but in an ambiguous way: you only know which particular by the context.
- Even “now” points to a particular—the present moment—but it’s an especially slippery one, because “the present” keeps changing.
In fact, it’s hard to make a sentence at all without using at least one universal. The closest you might get is something like: “I like this.” But even there, the verb “like” is universal. I can like other things; other people can like things. The word applies across cases. So every truth involves universals, and any knowledge of truths requires some acquaintance with universals.
Given that most dictionary words stand for universals, it’s surprising how rarely anyone outside philosophy even notices that universals are “a kind of thing” at all. In ordinary thought, we naturally focus on the words that name particulars. The universal-words fade into the background. And if we’re forced to focus on a universal-word, we tend to treat it as if it were just shorthand for some particular example.
Consider: “Charles I’s head was cut off.” Your mind likely jumps to Charles I, his head, the act of cutting—concrete, particular items. You don’t naturally pause to ask what “head” means in general, or what “cut” expresses as such. Those universal-words feel unfinished on their own. They seem to need a setting before you can “do” anything with them. So we glide past universals, and philosophy is what forces us to stop and look.
Even philosophers have often noticed only some universals. They’ve tended to recognize the universals expressed by adjectives and nouns—qualities and properties—while neglecting the universals expressed by verbs and prepositions, which often express relations. That neglect has mattered enormously. In fact, it’s not too much to say that a lot of metaphysics since Spinoza has been shaped by it.
Here’s the rough mechanism.
- Adjectives and common nouns usually express properties of a single thing (“red,” “heavy,” “human”).
- Verbs and prepositions often express relations between two or more things (“is taller than,” “loves,” “between,” “north of”).
If you overlook relational universals, you start thinking every statement can be analyzed as: “this one thing has a property.” You stop seeing that many statements are fundamentally about connections between things. Push that far enough and you get a dramatic conclusion: there are no real relations between things.
And if there are no relations, you end up with two stark options:
- Maybe there is only one thing in the universe (because plurality without relations becomes hard to make sense of).
- Or maybe there are many things, but they can’t interact at all—because interaction would be a relation, and relations are “impossible.”
The first view—associated with Spinoza, and in more recent times with Bradley and others—is called monism. The second—associated with Leibniz, though less common today—is called monadism, because each isolated “thing” is treated as a self-contained unit, a monad.
Both positions are fascinating. But in my view, both are pushed along by giving far too much attention to one kind of universal (qualities named by nouns and adjectives) while ignoring the equally important kind (relations expressed by verbs and prepositions).
In fact, if someone wanted to deny universals altogether, the situation is a bit ironic. We can’t strictly prove that there must be universals of the “quality” type (like whiteness) in the way some people have wanted to. But we can prove that there must be relations—and relations are universals.
Take whiteness as an example. If you accept that there’s a universal whiteness, you’ll say: things are white because they have the quality of whiteness. Berkeley and Hume fought this hard. They denied the existence of what they called “abstract ideas.” When you try to think of whiteness, they said, you don’t grasp some abstract entity. Instead, you picture a particular white thing—say, a patch of snow or a sheet of paper—and you reason using that mental image, carefully avoiding any step that wouldn’t hold for all white things.
As a description of how we often think, that’s largely right. Geometry shows it clearly: to prove something about all triangles, we draw one triangle and reason about it, making sure we don’t rely on some accidental feature that another triangle might lack. Beginners often draw several triangles—skinny, fat, tilted—just to confirm the reasoning doesn’t secretly depend on one special diagram.
But a serious problem appears as soon as you ask: How do we know that something counts as white, or as a triangle, in the first place?
If you want to avoid the universals whiteness and triangularity, you might pick a particular sample—this particular white patch, this particular triangle—and say: “Anything is white (or triangular) if it resembles my chosen example in the right way.”
But now you’ve smuggled in what you were trying to avoid. The “right kind of resemblance” has to be something that can hold between many different pairs of things. There are many white objects, so the relevant resemblance must apply across countless pairings. And that repeatability—one and the same sort of thing showing up across many cases—is exactly what a universal is.
You might try to dodge this by saying: "Fine, there isn't one resemblance; there's a different resemblance for every pair." But then you immediately face the next question: those many "resemblances"—do they have anything in common? If they do, that commonality is itself a universal. And if you say they don't, you've made the term "resemblance" meaningless. Either way, you end up forced to admit resemblance as a genuine universal relation.
And once you’ve admitted that, it becomes hard to justify elaborate, unnatural theories designed solely to avoid admitting universals like whiteness and triangularity.
Berkeley and Hume missed this response because they focused almost entirely on qualities and ignored relations as universals. In that respect, we have another case where the rationalists look closer to the truth than the empiricists. Still, because rationalists often neglected or denied relations too, their deductions were sometimes even more prone to error than the empiricists’ were.
So far, then, we’ve seen that universals must exist in some sense. Next we need to show that their being is not merely mental. In other words: whatever kind of being universals have, it doesn’t depend on being thought about. It doesn’t depend on being “in” a mind. We touched this at the end of the last chapter, but now we need to face it directly: what sort of being do universals have?
Consider the claim: “Edinburgh is north of London.” That statement involves a relation between two places, and it seems obvious that the relation holds whether or not anyone knows it. When you learn that Edinburgh is north of London, you don’t make it true. You simply grasp a fact that was already the case.
Even if no humans existed, the bit of Earth’s surface where Edinburgh stands would still be north of the bit where London stands. And even if there were no minds anywhere in the universe, the relation would still hold. Many philosophers deny this—some for Berkeley-style reasons, some for Kantian ones—but we’ve already examined those arguments and found them unconvincing. So we’ll assume that the fact “Edinburgh is north of London” doesn’t presuppose anything mental.
But notice what that commits us to. The fact includes the universal relation north of. If the overall fact involves nothing mental, then a constituent part of it can’t secretly be mental either. So we have to say: the relation is not dependent on thought. Like Edinburgh and London themselves, it belongs to an independent reality that thought can discover but does not create.
This leads to a new puzzle, though. The relation north of doesn’t seem to exist the way Edinburgh and London exist. If you ask, “Where is the relation? When does it exist?” the only honest answer is: nowhere and at no time. You can’t point to a spot and say, “The relation is located right there.” It isn’t “in” Edinburgh more than it is “in” London. It connects the two and is neutral between them. And it’s not something we can assign to a particular moment the way we assign a thought, a feeling, a sound, or a flash of color to a time.
Everything we can directly sense—or directly catch through introspection—exists at some particular time. But north of doesn’t. So it’s radically different from ordinary physical or mental items. It isn’t in space or time. It isn’t material. It isn’t a mental event. And yet it is something.
This strange kind of being is exactly why many people end up thinking universals must be mental. After all, we can think about a universal, and that thinking is an ordinary mental event. Suppose you’re thinking about whiteness. In one loose sense, you might say “whiteness is in your mind.”
But that’s sloppy, and the sloppiness matters. What’s actually “in your mind,” strictly speaking, is your act of thinking of whiteness, not whiteness itself. The old ambiguity in the word “idea” fuels the confusion: sometimes “idea” means the object you’re thinking about, and sometimes it means the act of thinking. In the first sense, whiteness can be called an “idea” (an object of thought). But if you slide into the second sense, you start treating whiteness as the mental act itself—and then you conclude whiteness is mental.
That move destroys what makes a universal a universal. One person’s act of thinking is necessarily different from another person’s. Even the same person’s thought at two different times is not the same mental event. So if whiteness were the thought itself, then two people could never think about the same whiteness, and one person could never think about it twice. What different thoughts of whiteness share is not their mental event but their object—and that object is distinct from all the thoughts. So universals aren’t thoughts, even though, when we know them, they are what our thoughts are about.
It helps to reserve the word exist for things that are in time—things for which it makes sense to point to a time when they exist (even if they exist at all times). In that sense, thoughts and feelings exist; minds exist; physical objects exist.
Universals, however, do not exist like that. Instead, we’ll say they subsist, or that they have being—where “being” contrasts with “existence” by being timeless. The world of universals is, therefore, the world of being.
That world is unchanging, strict, exact—catnip for mathematicians and logicians, and for anyone who loves clean structures more than the unpredictability of life. The world of existence, by contrast, is messy and shifting: no crisp edges, no perfect order. But it includes everything we actually live through—thoughts and feelings, the data of sense, physical objects, everything that can help or harm, everything that matters to the value of life and the world.
Depending on your temperament, you might prefer contemplating one world over the other. And the one you don’t prefer may seem to you like a pale imitation of the one you do. But the truth is: both deserve impartial attention. Both are real. Both matter to metaphysics. And once we’ve distinguished them, we immediately have to ask how they relate.
Before we do that, though, we need to look carefully at how we know universals. That will be the task of the next chapter, where we’ll see how this topic connects back to the problem that first pushed us here: the problem of a priori knowledge.
10
On our Knowledge of Universals
At any given moment, a person’s knowledge of universals—just like their knowledge of particular things—falls into three buckets:
- universals you know by acquaintance (directly),
- universals you know only by description (indirectly), and
- universals you don’t know at all (neither directly nor indirectly).
Universals we know by acquaintance: sensible qualities
Start with the simplest case: universals we meet head-on in experience. We’re clearly acquainted with universals like white, red, black, sweet, sour, loud, hard, and so on—qualities that show up in our sense-data.
When you look at a white patch, you’re directly acquainted (first) with that particular patch. But once you’ve seen lots of white patches, you naturally start to notice what they share. You “pull out” what’s common across them—the whiteness—and in doing that, you become acquainted with the universal whiteness itself. The same kind of abstraction works for any similar universal.
Universals of this kind are what we can call sensible qualities. They take less abstraction than almost anything else, and they feel closer to particular experiences than more abstract universals do.
Relations: space, time, and resemblance
Next come relations. The easiest relations to grasp are the ones that hold between parts of a single complex sense-datum.
Take a page in front of you. In one glance, you can take in the whole page as a single experience. And within that experience, you can immediately see relations among its parts: some parts are to the left of others, some are above others, and so on.
Here’s roughly how abstraction works in this case. You encounter many situations where one part is to the left of another. You notice that these situations share something. Then you isolate what that shared “something” is: a particular kind of relation—the one you call being to the left of. That’s how you become acquainted with the universal relation itself.
Time works similarly. Suppose you hear a chime of bells. When the final bell sounds, you can still hold the whole chime in mind, and you can tell that the earlier bells came before the later ones. Memory adds another route: when you remember something, you recognize that what you’re remembering happened before the present moment. From experiences like these, you can abstract the universal relation of before and after, just as you abstracted “to the left of.” So time-relations, like space-relations, belong among the relations we can be acquainted with.
Another relation we learn in much the same way is resemblance (or similarity). If you see two shades of green at the same time, you can see that they resemble each other. If you also see a red shade, you can see something more subtle: the two greens resemble each other more than either resembles the red. That’s how we become acquainted with the universal of resemblance.
Relations between universals can be immediately known
Just as particulars can stand in relations, universals can too—and we can sometimes be directly aware of those relations. A moment ago we noticed that the resemblance between two greens can be greater than the resemblance between a green and a red. That involves the relation greater than holding between two relations.
This takes more abstraction than noticing a color or a sound, but the knowledge still seems immediate—and in at least some cases, just as hard to doubt. So there’s immediate knowledge not only of sense-data, but also of universals.
A priori knowledge: relations of universals
Now we can return to the puzzle of a priori knowledge—the one we set aside when we turned to universals—and handle it more cleanly.
Think about the claim: “two and two are four.” Given what we’ve just said, it’s pretty clear that this statement is about a relation between the universal two and the universal four. That points toward a major thesis:
All a priori knowledge is exclusively about relations between universals.
That matters, because it dissolves much of what initially made a priori knowledge feel mysterious.
At first, though, the thesis might look false in cases where an a priori statement seems to talk about particular things—for example, when it says that every member of one class of particulars belongs to another class, or that anything with one property must also have another.
Even “two and two are four” can be restated in ways that sound like they’re about particular collections:
- “Any two and any other two are four,” or
- “Any collection made up of two twos is a collection of four.”
If statements like these still turn out to be only about universals, then the thesis holds.
A test: what must you understand for the proposition to make sense?
One way to tell what a proposition is really “about” is to ask: What words must I understand—what things must I be acquainted with—in order to grasp what this proposition means?
The moment you genuinely understand a proposition (even before deciding whether it’s true), you must already be acquainted with whatever the proposition is really dealing with. Otherwise you wouldn’t be able to grasp it at all.
Apply that to “two and two are four,” interpreted as “any collection formed of two twos is a collection of four.” You can understand the claim as soon as you understand what collection, two, and four mean.
You do not need to know every pair of things in the universe. In fact, if you did, you could never understand the proposition, because there are infinitely many couples. So even though the general statement would apply to particular couples if there are such couples, the statement itself doesn’t say anything about any specific, real-world couple. It makes a claim about the universal couple (and the universals two and four), not about this couple or that one.
So “two and two are four” deals only with universals. Anyone who’s acquainted with the universals involved—and can “see” the relation the statement asserts—can know it.
And at this point we have to accept something as a fact about our minds: sometimes we really can perceive relations between universals, and therefore sometimes we can know general a priori truths, like those in arithmetic and logic.
Why a priori knowledge doesn’t “predict” experience
Earlier, a priori knowledge looked strange because it seemed to somehow get ahead of experience—almost like it could dictate what experience must be.
But that impression came from a mistake. No fact about anything that can be experienced can be known independently of experience.
We know a priori that two things plus two other things make four things. But we don’t know a priori that if Brown and Jones are two, and Robinson and Smith are two, then Brown and Jones and Robinson and Smith are four.
Why not? Because you can’t even understand that particular claim unless you already know that there are people like Brown, Jones, Robinson, and Smith. And you can only know that through experience. So while the general proposition is a priori, applying it to actual particulars requires experience and brings in an empirical element.
Once you see that, the “mystery” evaporates.
A priori truths vs. empirical generalizations
To sharpen the point, compare a genuine a priori judgment with an empirical generalization like “all men are mortal.”
Just like before, you can understand what the statement means as soon as you understand the universals involved: man and mortal. You don’t have to personally meet every human being to understand the sentence.
So the difference between an a priori general truth and an empirical generalization is not in what the sentence means. It’s in what counts as evidence for it.
In the empirical case, the evidence comes from particular instances. We believe “all men are mortal” because we know of countless cases of people dying and no cases of people living beyond a certain age. We don’t believe it because we directly see some necessary connection between the universal man and the universal mortal.
Now, physiology might someday prove—given broad laws about living bodies—that no organism can last forever. That would give a stronger connection between man and mortality and would let us assert the claim without leaning on the specific evidence of people dying. But that doesn’t change the kind of justification. It just means the generalization has been absorbed into a wider one, supported by a larger (but still empirical) inductive base.
Science advances in large part by making these “subsumptions”—folding narrower generalizations into broader ones. That can raise our confidence. But it doesn’t turn induction into a priori insight. At bottom, the support is still drawn from instances, not from a purely a priori connection of universals like we find in logic and arithmetic.
Two important facts about a priori general propositions
There are two opposite—but equally important—things to notice about a priori general truths.
1) We might reach them first by induction.
Sometimes we first stumble onto an a priori truth by experimenting with many cases, and only later do we recognize the universal connection and prove it.
For example, it’s known that if you draw perpendiculars to the sides of a triangle from the opposite angles, all three perpendiculars meet at a single point. Someone could easily discover this first by drawing lots of triangles, seeing the perpendiculars always meet, and then being pushed by that experience to look for a general proof. Mathematicians live this pattern all the time.
2) We can sometimes know a general truth even when no instance can ever be given.
This is the more philosophically striking point.
We know that any two numbers can be multiplied, producing a third number called their product. We also know that all pairs of integers whose product is less than 100 have actually been multiplied and recorded in the multiplication table.
But the integers are infinite, and only a finite number of pairs have ever been—or ever will be—considered by human beings. So there must be pairs of integers that no human being has ever thought of and never will think of. And all those unthought-of pairs must involve integers whose product is over 100.
So we can state this undeniable general proposition:
“All products of two integers that have never been and never will be thought of by any human being are over 100.”
And here’s the weird twist: by the very terms of the proposition, we can never produce an example. Any pair we actually think of would no longer qualify.
People often deny that this kind of knowledge is possible, because they assume that knowing a general proposition requires knowing instances. But it doesn’t—at least not when the proposition is purely about relations among universals.
And this possibility isn’t some philosophical party trick. It underwrites a lot of what we normally count as knowledge.
Why this matters for physical objects and other minds
Earlier we argued that knowledge of physical objects (as opposed to immediate sense-data) is always reached by inference, and that physical objects themselves are not things we’re directly acquainted with.
That means we can never know a proposition like “this is a physical object” where “this” refers to something immediately given in experience. We can point to the sense-data associated with physical objects, but we can’t point to the physical objects themselves as items of acquaintance. So our knowledge about physical objects is, throughout, the kind of general knowledge for which no direct instance can be produced.
The same is true of our knowledge of other people’s minds—and, more broadly, any class of things where no instance is given to us by acquaintance.
A snapshot of our sources of knowledge
At this stage, we can take stock of where our knowledge seems to come from.
First, distinguish knowledge of things from knowledge of truths. Each has two forms: one immediate, one derivative.
Knowledge of things
- Immediate knowledge (acquaintance)
  - with particulars: sense-data, and (probably) ourselves
  - with universals: at least sensible qualities, space and time relations, similarity, and some abstract logical universals (though we don't have a clean rule for exactly which universals can be known this way)
- Derivative knowledge (knowledge by description)
  - always involves both acquaintance with something and knowledge of truths
Knowledge of truths
- Immediate knowledge (intuitive knowledge): self-evident truths
  - includes truths that simply report what is given in sense
  - includes certain abstract logical and arithmetical principles
  - and, with less certainty, some ethical propositions
- Derivative knowledge of truths
  - everything we can deduce from self-evident truths using self-evident principles of deduction
If this picture is right, then all our knowledge of truths rests on intuitive knowledge. So we need to examine what intuitive knowledge is, and how far it reaches—much as we earlier examined acquaintance.
But truth introduces an extra problem that “knowledge of things” doesn’t: the problem of error. Some of our beliefs are wrong, so we have to ask whether—and how—we can separate real knowledge from mistake.
This problem doesn’t arise with acquaintance itself. Whatever you’re acquainted with—even in dreams or hallucinations—there’s no error as long as you stick to what’s immediately given. Error only appears when you go beyond the immediate object (the sense-datum) and treat it as a sign of some external physical object.
That’s why questions about truth are harder than questions about things. And as the first step in tackling the problems that come with truth, we should look closely at the nature and scope of our intuitive judgments.
11
On Intuitive Knowledge
A lot of people assume that any belief worth having ought to come with a proof—or at least with reasons strong enough to make it very likely true. If you can’t give a reason, the thought goes, then your belief is basically irresponsible.
Most of the time, that instinct is right. Nearly all the everyday things we believe are either:
- conclusions we did draw from other beliefs, or
- conclusions we could draw from other beliefs, even if we never actually have.
Usually the “because” has slipped out of view. We don’t stop before lunch and ask, “Wait—what’s my evidence that this sandwich isn’t poison?” And yet if someone challenged us, we’d feel confident we could supply a sensible answer. In ordinary life, that confidence is usually justified.
Now picture a relentless Socrates who won’t let you off the hook. You give a reason; he asks for a reason for that reason. You answer again; he asks again. If you keep going, you’ll eventually hit a point where you simply can’t find any deeper justification—and where it seems very likely that no deeper justification exists even in principle.
Starting from the everyday beliefs of daily life, this “why?” game can push us back step by step until we reach something like:
- a very general principle, or
- a specific example of such a principle,
that strikes us as blindingly obvious and can’t be derived from anything even more obvious.
In most practical cases—like “my food is probably nourishing and not poisonous”—the chain of reasons eventually lands on the principle of induction (the idea we discussed earlier: roughly, that patterns observed in the past are good guides to what will happen in similar cases). And beyond that, the regress stops. We rely on induction constantly, sometimes deliberately and sometimes without noticing, but there doesn’t seem to be any argument that starts from a simpler self-evident truth and proves induction as a conclusion.
The same kind of limit shows up with other logical principles. We can see that they’re true, and we use them to build proofs. But some of them can’t themselves be proved.
Self-Evidence Isn’t Just “Unprovable Axioms”
Even so, we shouldn’t imagine that self-evidence belongs only to those few general principles that resist proof. Once we accept a certain core set of logical principles, we can deduce other truths from them—and the surprising thing is that the derived propositions often feel just as obvious as the ones we started with.
Arithmetic makes the point vividly. In principle, the whole of arithmetic can be derived from general logical principles. Yet simple arithmetic statements—like “two plus two equals four”—feel just as self-evident as the logical rules they ultimately rest on.
It also seems plausible (though this is more controversial) that there are self-evident ethical principles—for example, “we ought to pursue what is good.”
Why Examples Feel Clearer Than Abstract Principles
There’s another pattern worth noticing: whenever we’re dealing with general principles, concrete familiar examples usually feel more obvious than the abstract rule itself.
Take the law of non-contradiction: nothing can both have a certain property and not have it. Once you understand the statement, it’s clearly true. But it still doesn’t feel quite as immediately obvious as a plain example like: “This rose I’m looking at can’t be both red and not red.”
Of course, there are ways for real life to introduce confusion:
- Parts of the rose might be red and other parts not red. In that case, it’s straightforward that the rose as a whole isn’t simply red.
- Or the rose might be a borderline shade—say, pink—where we genuinely hesitate about whether “red” applies. In that case, the situation would become perfectly definite in theory once we fixed a precise definition of “red.”
Typically, we learn to see the abstract principle through such particular cases. Only people who are very practiced with abstractions can easily grasp a general principle without leaning on examples.
Two Kinds of Intuitive Truths: Principles and Perception
Beyond self-evident general principles, there’s another major source of self-evident truths: what we get immediately from sensation. Let’s call these truths of perception, and the judgments that express them judgments of perception.
But we need to be careful here, because the raw “stuff” of sensation—the sense-data—aren’t the sort of things that can be true or false.
For example: suppose I see a patch of color. That patch just exists in my experience. It isn’t a statement, so it can’t be true or false. What can be true are claims about it:
- that there is such a patch,
- that it has a certain shape,
- that it has a certain brightness,
- that it’s surrounded by certain other colors.
So any self-evident truths drawn from the senses must be different from the sense-data themselves.
Two Kinds of Truths of Perception
It looks like there are two types of self-evident truths of perception (though they may ultimately blend into one another).
- Bare existence judgments. Here we simply affirm that the sense-datum is present, without analyzing it. We see a red patch and judge, "There is that red patch"—or more strictly, "There is that."
- Analytic perception judgments. Here the sensory object is complex, and we mentally pick out features and relate them. If I see a round red patch, I might judge: "That red patch is round." In the experience, there is one sense-datum with both color and shape. The judgment separates those aspects—color and shape—and then recombines them by saying the red color has a round shape.
Another example is a relational judgment like: “This is to the right of that,” when “this” and “that” are seen at the same time. The sense-datum contains elements standing in a relation, and the judgment asserts that relation.
Memory as a Different Kind of Intuition
There’s another group of intuitive judgments that resemble those of sense in being immediate, but are still importantly different: judgments of memory.
Memory is easy to misunderstand because remembering something is often accompanied by an image—like a mental picture. But that image can’t be what memory is. The simplest reason is that the image exists now, in the present, whereas what you remember is recognized as past.
There’s more: we can often compare our image to what we remember and notice, within wide limits, whether the image seems accurate. That would be impossible unless the remembered object itself—distinct from the image—were somehow before the mind.
So the core of memory isn’t the image. The core is this: an object is immediately present to the mind as past.
Without memory in this strict sense, we wouldn’t even know there had ever been a past. We wouldn’t understand the word “past” any more than someone born blind can understand “light.” That’s why there must be intuitive judgments of memory, and why all our knowledge of the past ultimately rests on them.
The Problem: Memory Can Be Wrong
Memory creates an obvious problem: it’s famously unreliable. And if memory can mislead us, doesn’t that cast doubt on intuitive judgments more generally?
This is not a small worry. But we can at least narrow it.
In general, memory is more trustworthy when:
- the original experience was vivid, and
- the memory is close in time.
If the house next door was struck by lightning thirty seconds ago, my memory of the flash and sound is so dependable that it would be absurd to doubt that the flash occurred. The same holds for less dramatic experiences, as long as they’re recent. I’m completely certain that half a minute ago I was sitting in the same chair I’m sitting in now.
As I rewind through the day, though, the certainty fades in a spectrum:
- things I’m fully sure about,
- things I’m almost sure about,
- things I can become sure about by thinking harder and recalling surrounding details, and
- things I’m not sure about at all.
I’m quite sure I ate breakfast this morning. But if I cared as little about breakfast as a stereotypical philosopher is supposed to, I might start to doubt it. As for the breakfast conversation, I can recall some of it easily, some only with effort, some with significant doubt, and some not at all.
So memory doesn’t present itself as “certain” or “uncertain” with nothing in between. It comes in degrees of self-evidence, and those degrees track how trustworthy the memory is.
That gives us a first response to the worry about fallible memory: memory varies in self-evidence, and its reliability varies in the same way, reaching something like perfect self-evidence and perfect trustworthiness in memories of very recent, very vivid events.
What About Confident but False “Memories”?
Still, people sometimes have extremely firm “memories” of things that never happened. What should we say about those?
A likely explanation is that, in such cases, what’s actually present to the mind—the thing remembered in the strict sense—is not the event itself, but something closely associated with it.
A famous story illustrates this: George IV reportedly came to believe that he had been at the Battle of Waterloo, largely because he had asserted it so often. On this account, what he truly remembered was his own repeated assertions. The belief in the alleged event would then be produced by association with those remembered assertions. If so, it wouldn't be a genuine case of memory at all—at least not memory in the strict, immediate sense we're trying to isolate.
It’s plausible that all cases of “false memory” can be handled this way: they turn out not to be memory proper, but beliefs produced by association, suggestion, repetition, or some other mechanism that piggybacks on something genuinely remembered.
Self-Evidence Comes in Degrees
The memory case makes one big point crystal clear: self-evidence has degrees. It isn’t an all-or-nothing stamp. It can fade by steps, from absolute certainty down to a barely noticeable feeling of plausibility.
Roughly speaking:
- Truths of perception and some basic logical principles sit at the very top.
- Truths of immediate memory come close behind.
- The inductive principle has less self-evidence than certain other logical rules, like “whatever follows from a true premise must be true.”
- Memories grow less self-evident as they get older and less vivid.
- Logical and mathematical truths tend, broadly, to feel less self-evident as they become more complex.
- Judgments of intrinsic ethical or aesthetic value may have some self-evidence, but usually not much.
These gradations matter for epistemology because propositions might feel self-evident to some degree without actually being true. If that’s right, then we don’t have to sever all connection between self-evidence and truth. We can say something more subtle: when two claims conflict, we should keep the one that is more self-evident and reject the one that is less.
Maybe “Self-Evidence” Names Two Different Things
One final thought—more a suggestion than a settled conclusion. It seems quite possible that we’ve been packing two different ideas into the word “self-evidence.”
- One idea corresponds to the very highest level of self-evidence and may be an infallible guarantee of truth.
- The other corresponds to the lower levels and provides not a guarantee, but only a stronger or weaker presumption.
For now, that’s as far as we can go. After we’ve examined what truth is, we’ll return to self-evidence again—especially in connection with the distinction between knowledge and error.
12
Truth and Falsehood
When we talk about truths—things you can state, believe, deny, argue about—there’s always an opposite: error. That’s not how it works with our direct, first-hand awareness of things (what philosophers often call acquaintance). With acquaintance, you either are directly aware of something or you aren’t. But you can’t be “directly aware” of something incorrectly. Whatever you’re acquainted with must be something real to your experience. You can reason badly from what you’re acquainted with, sure—but the acquaintance itself isn’t the part that lies.
Belief is different. With beliefs, you can land on either side: you can believe what’s true, and you can believe what’s false. And since people constantly disagree—often with complete confidence—some of those beliefs have to be wrong. That immediately raises a hard question: if false beliefs can feel just as solid as true ones, how do we tell them apart?
That “How do we tell?” question is famously difficult, and there’s no fully satisfying answer. So this chapter starts with an earlier, slightly easier question:
- Not How do we know a belief is true or false?
- But What do we mean when we say a belief is true or false?
Keeping those questions separate matters. If you blur them together, you end up with an answer that doesn’t really answer either.
What Any Theory of Truth Has to Handle
If we’re trying to explain what truth is, there are three basic constraints any decent theory has to meet:
- It must make room for falsehood. Some theories make truth so "automatic" that they can't explain how error is even possible. But any account of belief has to explain both truth and the possibility of getting things wrong. (Again: acquaintance doesn't need this, because it doesn't have an "opposite" in the same way.)
- Truth and falsehood belong to beliefs and statements. Imagine a universe that's nothing but matter moving around—no minds, no assertions, no beliefs. That world could still contain what we might call facts (things that happen), but it wouldn't contain truths or falsehoods in the usual sense, because there would be nothing that could be true or false. Truth and falsehood apply to what is said or believed.
- Yet truth and falsehood depend on something outside the belief. Whether a belief is true isn't something you can read off by inspecting the belief from the inside. If I believe "Charles I died on the scaffold," that belief is true because of an event in history—not because the belief has a special glow of "truthiness." And if I believe "Charles I died in his bed," it's false no matter how vivid, sincere, or carefully formed it is—because history doesn't cooperate. So truth and falsehood are properties of beliefs, but they depend on how those beliefs line up with something beyond themselves.
Correspondence vs. Coherence
That third point pushes many philosophers toward a familiar idea: truth is a kind of correspondence between belief and fact. The trouble is that it’s surprisingly hard to spell out what “correspondence” really amounts to without running into serious objections.
Partly because of those objections—and partly because correspondence seems to make truth feel unreachable (“If truth is matching something outside thought, how could thought ever be sure it has matched it?”)—some philosophers try a different definition: truth as coherence.
On the coherence view, falsehood shows up as a failure to fit into the rest of what we believe, and a truth is something that belongs inside a perfectly consistent, fully complete system—The Truth as a seamless whole.
But two major problems show up fast.
Problem 1: More Than One Coherent Story Can Fit the Same Evidence
There’s no good reason to assume only one coherent set of beliefs is possible. With enough imagination, a novelist could invent a detailed alternative history that meshes perfectly with everything we currently observe—and yet still be totally unlike what actually happened.
Science gives the same lesson in a more disciplined way. Often, two or more hypotheses can explain all the known data. Researchers then look for new evidence that will eliminate all but one option—but there’s no guarantee they’ll always succeed.
Philosophy, too, can offer rival “big pictures” that each seem to accommodate the facts. For instance, maybe life is one long dream and the external world is only as real as dream-objects are. That idea doesn’t obviously contradict what we experience—but we also don’t have a compelling reason to prefer it over common sense, which says other people and things really exist. So coherence can’t define truth if multiple coherent systems are possible.
Problem 2: “Coherence” Secretly Depends on Logic
The coherence theory also assumes we already understand what coherence means. But coherence itself relies on the laws of logic.
Two propositions “cohere” when they can both be true; they “clash” when at least one must be false. To know whether both can be true, you have to rely on logical truths like the law of non-contradiction. “This tree is a beech” and “This tree is not a beech” can’t both be true—not because of some empirical discovery, but because logic rules it out.
And here’s the catch: if you try to test the law of non-contradiction by coherence, you get nowhere. If you simply decide the law is false, then nothing is inconsistent with anything anymore. Coherence only works inside a logical framework; it can’t be used to justify that framework.
For these two reasons, coherence can’t be the meaning of truth—though once you already have a lot of truth in hand, coherence can be a very useful test for spotting error.
Back to Correspondence—and What We Still Owe
So we’re pushed back toward the correspondence idea: truth has to involve matching fact. Now we owe two clarifications:
- What exactly counts as a fact?
- What kind of correspondence has to hold between a belief and a fact for the belief to be true?
Whatever we say has to satisfy our earlier constraints: it must allow for falsehood, treat truth as something beliefs can have, and yet make truth depend on how beliefs relate to things outside the believing mind.
Why Belief Can’t Be a Simple “Mind-to-Object” Link
To make sense of false belief, we can’t treat believing as the mind standing in a relation to a single object—“the thing believed.”
If belief worked that way, it would behave like acquaintance: it would always connect you to something that exists, and so it would always come out true.
Take a famous example. Othello believes (wrongly) that Desdemona loves Cassio. We can’t say his belief is a relation to a single object like “Desdemona’s love for Cassio,” because if there were such an object, the belief would automatically be true. But in the story, there is no such love—so there is no such object for Othello to be related to.
You might try to dodge that by saying the object is not “Desdemona’s love for Cassio” but “that Desdemona loves Cassio.” Yet if Desdemona doesn’t love Cassio, it’s just as strange to treat that as a real object Othello is related to. It’s cleaner to use a theory of belief that doesn’t make belief a relation to one standalone “belief-object.”
Some Relations Need More Than Two Terms
We often picture relations as two-place links: A is next to B, A loves B, and so on. But many relations require three terms, four terms, or more.
- Between needs three: York is between London and Edinburgh. With only London and Edinburgh, nothing could be “between” them.
- Jealousy needs at least three people.
- “A wishes B to promote C’s marriage with D” involves four distinct terms.
So it shouldn’t surprise us if believing is also a relation that ties together more than two things at once.
Belief as a Multi-Part Relation
If we want falsehood to be genuinely possible, it’s better to treat judging or believing as a relation that includes:
- the mind that believes, and
- the objects the belief is about,
all as separate ingredients of a single act of believing.
When Othello believes that Desdemona loves Cassio, the act involves four constituents:
- Othello (the believer),
- Desdemona,
- the relation loving,
- Cassio.
And importantly, Othello doesn’t have the same “believing” relation to each object one-by-one, like three separate links. There is one act of believing that ties all four together in one complex event at a particular moment.
So an act of belief (or judgement) is simply: at a certain time, the relation of believing occurs among a mind and several other terms.
True vs. False: Same Parts, Different Outcome
Now we can say what makes a judgement true or false.
Let’s name the pieces:
- The mind that judges is the subject.
- The other terms it judges about are the objects.
- Together, subject + objects are the constituents of the judgement.
Believing also has a built-in direction (a “sense”). The same ingredients can be arranged differently, and that difference matters. “Desdemona loves Cassio” is not the same as “Cassio loves Desdemona,” even though it involves the same people and the same relation. The act of judging imposes an order on its objects (some languages show this with word endings; English mostly does it with word order).
Now for the key move. When you believe something like “Desdemona loves Cassio,” one of the objects is itself a relation—here, loving. But inside the belief, loving isn’t doing the job of unifying the whole structure. In the belief, loving is just one component—like a brick. The “cement” that holds the belief together is the relation believing.
- If the belief is true, then outside the mind there is a corresponding complex unity made solely from the objects, arranged in the same order, where the object-relation (loving) actually connects the object-terms (Desdemona and Cassio). In other words: there exists a fact like “Desdemona’s love for Cassio.”
- If the belief is false, there is no such object-only complex in the world. There is no fact of “Desdemona’s love for Cassio.”
So:
- A belief is true when it matches an associated complex made from its objects.
- A belief is false when no such matching complex exists.
Assuming (for simplicity) that a belief involves two terms and a relation, ordered by the direction of believing: if those two terms, in that order, really are united by that relation into a fact, the belief is true; if they aren’t, it’s false. That’s the definition we were looking for.
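The definition can be compressed into a schema (the symbols are a modern shorthand of my own, not notation the chapter uses). Write $B(m, a, R, b)$ for "mind $m$ believes that $a$ stands in relation $R$ to $b$," with the order of $a$ and $b$ fixed by the direction of believing. Then:

```latex
% Truth as correspondence, for a two-term belief:
\[
B(m,\, a,\, R,\, b)\ \text{is true}
\iff
\text{the complex } R(a,\, b) \text{ exists}
\]
```

Othello's belief supplies $a = \text{Desdemona}$, $R = \text{loving}$, $b = \text{Cassio}$; since no complex $\text{loving}(\text{Desdemona}, \text{Cassio})$ exists in the play, the belief comes out false, exactly as the definition requires.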
Why Truth Is “About the World,” Even Though Beliefs Live in Minds
Truth and falsehood are properties of beliefs, but in an important sense they’re external properties. The truth of a belief depends on something that doesn’t involve believing at all—usually it doesn’t involve any mind—only the objects the belief is about and whether they form the right kind of complex in reality.
That lets us hold two ideas together without contradiction:
- Beliefs exist only because minds exist.
- But beliefs aren’t true just because minds exist (or because minds feel confident). Their truth depends on how the world is.
We can restate the picture like this: in the belief “Othello believes that Desdemona loves Cassio,” Desdemona and Cassio are the object-terms, and loving is the object-relation.
If there’s a real-world situation—say, “Desdemona loves Cassio”—made up of the same things (Desdemona and Cassio) connected by the same relationship (love) in the same order your belief asserts, then that situation is what we call the fact that matches the belief.
So the rule is simple:
- A belief is true when a matching fact exists.
- A belief is false when no such matching fact exists.
Notice what this implies: minds don’t manufacture truth or falsehood. Minds manufacture beliefs. But once a belief exists, you can’t just will it into being true or false. Whether it’s true depends on the world, not on your thinking about it.
There is one limited exception: beliefs about the future that are partly under your control—like believing you’ll catch a train. In cases like that, your actions can help make the belief come out true. But in general, what makes a belief true is a fact, and that fact usually has nothing to do with the person who holds the belief.
Now that we’ve pinned down what we mean by truth and falsehood, the next question is practical: how can we tell, for any given belief, whether it’s true or false? That’s what the next chapter takes up.
13
Knowledge, Error, and Probable Opinion
In the last chapter we asked what truth and falsehood mean. That matters, but the more urgent question is practical: how do we figure out which of our beliefs are true and which are false?
Some of what we believe is plainly wrong. That fact forces a harder question: how certain can we ever be that this belief, right now, isn’t also wrong? Put bluntly, do we ever genuinely know anything—or do we mostly just get lucky and sometimes believe the truth by accident?
To tackle that, we first need to get clear on what we mean by “knowing.” And it’s trickier than it sounds.
True belief isn’t enough
At first, “knowledge” can look like a simple idea: maybe knowledge is just true belief. If what you believe happens to be true, haven’t you “known” it?
Not really—at least not the way we actually use the word.
Here’s a silly example that makes the point. Suppose someone believes, “The late Prime Minister’s last name started with B.” That’s true, because the late Prime Minister was Sir Henry Campbell Bannerman. But suppose the same person “knows” this only because he wrongly believes the late Prime Minister was Mr. Balfour. He still lands on a true statement (“the last name starts with B”), but we wouldn’t call that knowledge. He got the truth through a mistake.
Or take a newspaper that guesses the outcome of a battle before any reliable report arrives. It might announce the correct result by sheer good fortune, and some readers may believe it. Their belief could be true—but it still wouldn’t count as knowledge, because it wasn’t formed in a way that connects properly to the facts. So:
- A belief can be true yet still not be knowledge if it’s built on something false.
Even correct premises don’t help if your reasoning is broken
There’s another way true belief fails to become knowledge: bad reasoning.
Suppose I know two true things:
- All Greeks are men.
- Socrates was a man.
If I conclude from this that Socrates was a Greek, my premises are true and my conclusion is true—but the conclusion doesn’t actually follow from the premises. So I still don’t know that Socrates was Greek. I’ve stumbled into a truth through a faulty route.
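The failure here is one of logical form, and it can be made explicit (the notation is a modern gloss, not part of the original argument):

```latex
% Invalid inference pattern:
\[
\forall x\,\big(\mathrm{Greek}(x) \rightarrow \mathrm{Man}(x)\big),\quad
\mathrm{Man}(\mathrm{Socrates})
\;\;\not\Rightarrow\;\;
\mathrm{Greek}(\mathrm{Socrates})
\]
% Countermodel: substitute any non-Greek man for Socrates.
% Both premises remain true while the conclusion becomes false,
% which a valid form can never allow.
```

Substituting, say, Confucius for Socrates keeps both premises true while making the conclusion false; a genuinely valid form rules that out. The inference would need the converse premise, that all men are Greeks, which is false.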
So knowledge isn’t just “ending up with a true statement.” The path matters.
“Deduced from true premises” is still not a good definition
You might try to fix things by saying: knowledge is whatever is validly deduced from true premises.
But that doesn’t work either. It’s too broad and too narrow at the same time.
It’s too broad because true premises aren’t enough; the premises must also be known. Our earlier mistaken Balfour-believer might start with the true statement “the late Prime Minister’s name began with B,” and then reason flawlessly from there. But since he doesn’t know that premise (he only believes it for a bad reason), we still won’t credit him with knowledge of whatever conclusions he draws.
So we might revise:
- Knowledge is what is validly deduced from known premises.
But that fix creates a problem: it’s circular. We’re defining knowledge using the word “known.” All we’ve really done is describe one kind of knowledge—what we might call derivative knowledge—in terms of another kind that it depends on, which we might call intuitive knowledge.
A cleaner way to say it is:
- Derivative knowledge is what you can validly infer from premises you know intuitively.
That’s a workable description of derivative knowledge, but it leaves the big question hanging: what exactly is intuitive knowledge?
Derivative knowledge is broader than formal proof
Even before we define intuitive knowledge, we can spot a problem with the strict “only what you actually deduce” picture.
People constantly end up with true beliefs that could be logically inferred from intuitive knowledge, but that aren’t reached by any conscious chain of reasoning.
Reading is a perfect example. If a newspaper reports that the King has died, you’re usually justified in believing it. You’re also justified in believing something even simpler: that the newspaper says the King has died.
But what is the immediate, “given” thing you know here? At the most basic level, you’re aware of sense-data—the visual experience of black marks on paper (or pixels on a screen). That awareness is usually so automatic you barely notice it. Only someone struggling to read—a child sounding out letters, or an adult learning a new script—experiences the slow climb from shapes to meaning.
A fluent reader doesn’t perform a step-by-step logical argument from “I see these shapes” to “this sentence means the King is dead.” The meaning just arrives. And yet it would be ridiculous to insist that fluent readers therefore don’t know what the newspaper says.
So we should widen our account of derivative knowledge. We should count as derivative knowledge a belief that arises from intuitive knowledge—even through habit or association—so long as there is:
- a valid logical connection between the starting point and the belief, and
- the person could recognize that connection if they stopped and reflected.
Psychological inference and logical inference
In everyday thinking, we move from belief to belief in many ways besides formal deduction. The leap from print to meaning is one example. These informal mental transitions can be called psychological inference.
We can accept psychological inference as a source of derivative knowledge when there is a genuine logical inference that mirrors it—one that is discoverable on reflection.
That word “discoverable” is fuzzy. How much reflection counts? How hard is too hard? But this fuzziness is unavoidable, because “knowledge” itself isn’t razor-sharp. It shades gradually into something weaker—probable opinion—and any definition that pretends otherwise will mislead more than it clarifies.
The real problem: intuitive knowledge
The hardest issues don’t come from derivative knowledge. As long as we’re dealing with beliefs that trace back to intuitive knowledge, we have something to check against.
The trouble begins with intuitive beliefs themselves. How do we tell which intuitive beliefs are true and which are mistakes? There’s no simple, universal test. And we shouldn’t expect perfect precision here: nearly everything we count as knowledge carries at least some doubt. Any theory that forgets that is obviously wrong.
Still, we can make progress.
Two ways to know a fact
Our earlier account of truth helps. When a belief is true, there is a corresponding fact—a single complex in which the objects mentioned in the belief are related in the way the belief says they are. A true belief counts as knowledge of that fact if it also meets the further conditions we’ve been discussing.
But there’s another way to “have” a fact besides believing a proposition about it. We can sometimes know a fact through perception, using that word in the broadest sense.
For example: if you know the sunset time, then at that hour you can know the truth “the sun is setting.” That’s knowledge of a truth. But if the sky is clear and you look west and watch the sun dropping, you know the same thing in another way—you know the fact by direct encounter. This is knowledge of things, not just knowledge of statements.
So, in principle, any complex fact can be known in two ways:
- By judgment: you judge that its parts are related in a certain way.
- By acquaintance: you’re directly aware of the complex fact itself—what we can loosely call perception (and it isn’t limited to the five senses).
These two ways behave very differently:
- Acquaintance with a complex fact is possible only if the fact really exists. You can’t be acquainted with a non-existent complex.
- Judgment is always vulnerable to error. You can judge that things are related a certain way even when they aren’t.
Why? Because acquaintance gives you the whole—the parts as actually joined. Judgment, by contrast, can take the parts and the relation separately, and then assert that they fit together. The parts and the relation might be real, but that relation might not link those parts in the way you claim.
Two kinds of “self-evidence”
Earlier we suggested there are two kinds of self-evidence: one that guarantees truth absolutely, and one that only supports a belief to some degree. Now we can separate them clearly.
1) Absolute self-evidence (from acquaintance)
A truth is self-evident in the strongest sense when you are acquainted with the corresponding fact.
Take Othello’s belief that Desdemona loves Cassio. If it were true, the corresponding fact would be “Desdemona’s love for Cassio.” But only Desdemona could ever be acquainted with that exact mental fact. Mental facts, and facts about sense-data, are private in this way: only one person can be directly acquainted with them, so only that person can have this kind of self-evidence about them.
That means:
- No fact about a particular existing thing can be self-evident (in this strongest sense) to more than one person.
Facts about universals are different. Many minds can be acquainted with the same universal—like a particular color as such, or a logical relation—and so many people can be acquainted with a relation between universals. In those cases, the corresponding truths can have absolute self-evidence for multiple people.
Whenever we’re acquainted with a complex fact—certain terms standing in a certain relation—the truth “these terms are so related” has absolute self-evidence. And when a judgment genuinely matches such a fact, it must be true.
So this first kind of self-evidence gives an absolute guarantee of truth.
But here’s the catch: it doesn’t automatically give an absolute guarantee that your specific judgment matches the fact.
Suppose you perceive the sun shining (a single complex scene), and then you form the judgment “the sun is shining.” To make that judgment, you have to analyze the perceived whole: you separate out “the sun” and “shining” as elements. That analytic step can go wrong. So even when the fact is present and self-evident through acquaintance, your judgment about it can still miss the mark. If the judgment does match the fact, it must be true—but you can make mistakes in getting from the perception to the judgment.
2) Gradual self-evidence (in judgments)
The second kind of self-evidence belongs to judgments that aren’t grounded in direct acquaintance with a whole fact. This self-evidence comes in degrees, ranging from rock-solid conviction down to a faint lean.
Think about hearing a sound that slowly fades. Imagine a horse trotting away along a hard road. At first you're completely sure you hear hoofbeats. As the sound recedes, you reach a moment where you wonder: was that real, or just my imagination? Maybe it was a window blind flapping upstairs, or my own heartbeat. Eventually you're unsure whether there is any sound at all; a little later you think you probably hear nothing; finally you know you hear nothing.
Notice what’s changing here. The raw sense-data are what they are. The shift happens in the judgments you form based on those sense-data. Their self-evidence slides continuously from high to low.
The same pattern shows up in vision. Compare a blue shade and a green shade: you can be completely sure they’re different. But if the green is gradually adjusted to become more like the blue—blue-green, then greenish-blue, then blue—there will be a point where you’re unsure you see any difference, and then a point where you know you can’t tell them apart. This happens when tuning an instrument, too, and in any situation with a smooth continuum.
So this second kind of self-evidence is a matter of degree. And it’s reasonable to trust higher degrees more than lower ones.
Why reasoning can fail even with true starting points
Derivative knowledge ultimately depends on premises that have at least some self-evidence, and it also depends on the self-evidence of the connection between each step and the next.
Geometry is a good illustration. It’s not enough for the axioms to feel self-evident. At every step, you also need to “see” that the conclusion really follows. In difficult proofs, that “seeing” can be weak—only barely compelling. And when that connective self-evidence is thin, errors become much more likely.
Knowledge, error, and probable opinion
If we assume that intuitive knowledge is trustworthy in proportion to how self-evident it is, then both intuitive and derivative knowledge come in a spectrum—from the most certain to the barely-more-likely-than-not.
At the top are things like:
- the presence of striking sense-data,
- the simplest truths of logic and arithmetic,
which we can treat as practically certain.
At the bottom are judgments that feel only slightly more plausible than their opposites.
With that in mind, we can sort our confident beliefs like this:
- Knowledge: a firm belief that is true, and is either intuitive or derived (logically or psychologically) from intuitive knowledge in a way that is logically valid.
- Error: a firm belief that is false.
- Probable opinion: everything in between—firm beliefs that don’t qualify as knowledge or error, and hesitant beliefs that trace back to inputs or connections with less-than-maximal self-evidence.
By this standard, a huge portion of what people casually call “knowledge” is really probable opinion.
Coherence as a useful test (but not the definition of truth)
When we’re dealing with probable opinion, one tool becomes extremely helpful: coherence.
We rejected coherence as the definition of truth, because a coherent set of beliefs can still be wrong. But coherence can still serve as a criterion—a practical sign that a belief deserves more confidence.
A group of individually probable opinions becomes more probable when they fit together into a mutually supportive system. That’s how many scientific hypotheses gain credibility: not because any one piece is undeniable, but because the hypothesis slots neatly into a web of other well-supported claims and helps the whole system make sense.
The same goes for broad philosophical hypotheses. A single case may leave them looking shaky. But if they create order and consistency across a large body of probable beliefs, they can start to feel almost certain.
A classic example is the difference between dreams and waking life. If your dreams night after night formed one continuous, coherent world the way your days do, you’d have a hard time knowing which to trust. But in real life, dreams usually don’t cohere with each other or with waking experience. Coherence, in practice, condemns the dream-world and supports the waking one.
Still, coherence has a limit. It can raise probability, sometimes dramatically, but it can’t create absolute certainty from scratch. Unless there is already something certain somewhere in the system, coherence alone will never turn probable opinion into unshakable knowledge.
14
The Limits of Philosophical Knowledge
In everything we’ve said so far about philosophy, we’ve barely touched a huge chunk of what many philosophers spend most of their time writing about. A lot of them—maybe most—claim they can prove, using pure “armchair” reasoning (a priori metaphysics), grand conclusions like:
- the core doctrines of religion
- the universe is rational through and through
- matter is an illusion
- evil isn’t ultimately real
It’s easy to see why this is tempting. For many people, the hope that philosophy can justify these sweeping, comforting theses is what draws them in—and keeps them studying for decades.
But that hope, I think, doesn’t pan out.
It looks like knowledge about the universe as a whole isn’t something we can squeeze out of metaphysics. And those famous proofs that claim “logic itself forces reality to be this way” don’t survive careful inspection. In this chapter, I’m going to sketch how that style of reasoning works, mainly so we can ask a simple question: is any of it actually valid?
Hegel as the great modern example
The standout modern representative of this approach is Georg Wilhelm Friedrich Hegel (1770–1831). His philosophy is notoriously hard, and people disagree about what he really meant. I’ll use an interpretation shared by many commentators—and, importantly, one that gives us a clear and interesting example of the kind of metaphysics we’re testing.
On this reading, Hegel’s central claim is that anything less than the Whole is obviously incomplete. And because it’s incomplete, it can’t truly stand on its own; it depends on the rest of reality to be what it is.
He thinks the metaphysician can do something like what a comparative anatomist does: give them one bone and they can infer, in broad outline, the animal it came from. Likewise, Hegel says, give the philosopher any single “piece” of reality and they can infer, in broad outline, what the entire universe must be like.
The picture is almost mechanical: every part of reality has “hooks” that latch onto the next part; that part hooks into another; and so on until—step by step—the whole universe is logically reconstructed.
Hegel says this isn’t just a claim about the world “out there.” It’s equally true in the world of thought.
Here’s the engine of his theory. Start with an idea that’s abstract or partial—something that leaves things out. If you treat it as complete, Hegel says, you’ll eventually run into contradictions. Those contradictions force the idea to flip into its opposite—its antithesis. To escape the contradiction, you then need a better idea that combines the first idea and its opposite into a new, richer concept: a synthesis.
But that synthesis still isn’t fully complete, so it too generates an antithesis, requiring a new synthesis, and so on. Hegel believes this process keeps advancing until it reaches the Absolute Idea—a concept with no remaining gaps, no opposite, and no need for further development.
And once you have the Absolute Idea, Hegel thinks you have the right conceptual lens to describe Absolute Reality. Everything below that level—every ordinary human concept—only describes the world from a limited viewpoint, the way a close-up photo shows details but misses the full scene.
From there Hegel draws an astonishing conclusion. Reality, in itself, is one single harmonious system:
- not fundamentally in space or time
- not evil in any degree
- entirely rational
- entirely spiritual
If the world we experience seems to contain space, time, matter, conflict, striving, and evil, Hegel says that’s because we’re seeing things in fragments. The contradictions are in our piecemeal view, not in reality itself. If we could see the universe all at once—as we might imagine God seeing it—then space, time, matter, evil, and struggle would simply drop away, replaced by an eternal, perfect, unchanging spiritual unity.
It’s hard not to feel the pull of that vision. There’s something undeniably grand about it—something you might want to be true.
Where the reasoning slips
Still, when you examine the arguments closely, they rely on a lot of confusion and a lot of assumptions that never get justified.
The system rests on one key idea: whatever is incomplete cannot exist independently. If a thing needs other things to “complete” it, then it can’t be self-sufficient—it must depend on those other things in order to be what it is.
That idea is often expressed like this: if a thing has relations to things outside itself, then its very nature must somehow “contain” reference to those outside things. So it couldn’t be what it is unless those other things existed.
A person is the stock example. What you are, we say, depends on your memories, your knowledge, what you love and hate, and so on. Remove the things you remember, know, love, and hate—and you wouldn’t be the same person. So, on this line of thought, a person is clearly a fragment, not a complete reality. Treated as the whole of reality, a person would be self-contradictory.
But this whole way of arguing depends on a slippery word: “nature.”
In practice, “the nature of a thing” here seems to mean all truths about the thing. And once you notice that, the problem becomes easier to see.
Yes: a truth that links one thing to another thing can’t exist if the other thing doesn’t exist. If “A is taller than B” is true, then B has to exist.
But a truth about a thing is not literally a part of the thing itself. Yet under this usage, truths are treated as ingredients in the thing’s “nature,” as though the thing were made of the facts we can state about it.
Now look at what follows if you define “nature” as “all truths about a thing”:
- You can’t know a thing’s nature unless you know all of its relations to everything else in the universe.
- But we often do know things without knowing anything like that.
So either we have to say that we can know a thing even when we don’t know its “nature” in that sense—or else we have to deny obvious cases of knowing.
The real mistake is mixing up two different kinds of knowledge:
- knowledge of things (direct acquaintance)
- knowledge of truths (propositions we can state)
You can be acquainted with something even if you know very few truths about it—at least in principle, you might not know any describable propositions about it at all. Acquaintance doesn’t automatically bring along a complete map of relations.
And while it’s true that being acquainted with something is involved in knowing any truth about it, the reverse doesn’t hold: acquaintance with a thing doesn’t require knowing truths about it, let alone all of them.
So two important points follow:
- Being acquainted with a thing does not logically require knowing its relations to other things.
- Knowing some relations does not require knowing all relations—or knowing the thing’s “nature” as “all truths about it.”
A simple example makes this vivid: I can be directly acquainted with my toothache—my knowledge of it can be as complete as acquaintance ever gets—without knowing what the dentist can tell me about its causes. The dentist might know lots of truths about it without being acquainted with the feeling itself. Different kinds of knowledge, different kinds of completeness.
Once you see this, you can see why Hegel’s leap doesn’t work. The fact that a thing has relations doesn’t show that those relations are logically necessary—that they must exist just because the thing exists. You can’t deduce a thing’s actual web of connections from the bare fact that it is what it is. It only feels deducible because we already know the connections and then smuggle them back in as “part of the thing’s nature.”
What we can’t prove, and what that leaves us with
So we can’t prove that the universe forms a single, perfectly harmonious system of the Hegelian kind. And if we can’t prove that, we also can’t prove the further claims Hegel tries to extract from it—like the unreality of space, time, matter, and evil—because those conclusions depend on the earlier claim that anything “fragmentary” must be somehow not fully real.
What we’re left with is the slower, more modest approach: investigating the world piece by piece. And that means we can’t claim deep knowledge about parts of the universe that lie far beyond our experience.
That’s disappointing if you came to philosophy hoping for cosmic certainty. But it fits the scientific and inductive spirit of the modern age, and it matches what our earlier examination of human knowledge has been pointing toward.
A pattern in metaphysics: “It’s contradictory, so it can’t be real”
Many ambitious metaphysical systems try to operate the same way. They look at some ordinary feature of the world and argue:
- it contains a contradiction,
- therefore it can’t be real,
- therefore reality must be something very different from what it seems.
But the trend of modern thought has increasingly gone the other way: what looked like contradictions often turn out to be illusions, and very little can be proved a priori by insisting on what “must” be.
Space and time as the classic example
Space and time make a great illustration.
On the face of it, both seem:
- infinitely extended
- infinitely divisible
Infinite extent: if you travel in a straight line, it’s hard to believe you’ll reach a final point where there’s nothing beyond—not even empty space. Likewise, if you imagine going backward or forward in time, it’s hard to believe you’ll hit a first or last moment with “no time” beyond it. So space and time look limitless.
Infinite divisibility: between any two points on a line, it seems obvious there must be more points in between, no matter how tiny the distance. You can always halve a segment, then halve it again, without end. Time seems to work the same way: between any two moments, no matter how close, there seem to be more moments in between.
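The divisibility claim can be stated as a one-line density fact (not in the original text, but a standard way to make it precise): for any two distinct points $a < b$ on a line, their midpoint lies strictly between them,

$$
a < \frac{a+b}{2} < b,
$$

so no two points are ever “adjacent,” and repeating the construction places infinitely many points inside any interval, however small.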
But philosophers argued against these appearances. They claimed that infinite collections of things are impossible, and therefore the number of points in space—or instants in time—must be finite. That creates a supposed contradiction: space and time seem infinite, but infinite collections are said to be impossible.
Kant, who brought this contradiction into focus, concluded that space and time can’t belong to reality itself. He declared them subjective—features of how we experience the world, not features of the world as it is. After Kant, many philosophers adopted some version of the idea that space and time are mere appearance.
Then mathematicians changed the story. Thanks especially to Georg Cantor, it became clear that the alleged impossibility of infinite collections was simply wrong. Infinite collections are not self-contradictory. They only clash with certain stubborn habits of thought—mental prejudices we mistake for logical necessities.
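Cantor’s point can be made concrete with a standard example (an illustration, not something argued in the text above): an infinite collection can be matched one-to-one with a proper part of itself without any contradiction. The map $n \mapsto 2n$ pairs every natural number with an even number:

$$
0 \leftrightarrow 0, \quad 1 \leftrightarrow 2, \quad 2 \leftrightarrow 4, \quad 3 \leftrightarrow 6, \quad \dots
$$

What once looked like an impossibility—a whole no “bigger” than one of its parts—turns out to be simply the defining property of infinite sets, not a contradiction.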
Once that was established, a major argument for treating space and time as unreal collapsed. One of the main engines that drove elaborate metaphysical systems lost its fuel.
Logic becomes a liberator, not a prison
Mathematicians didn’t stop at showing that ordinary space is logically possible. They also showed that many different kinds of space are possible, too—at least as far as logic is concerned.
Some of Euclid’s axioms once looked unavoidable, like the rules of thought itself. Philosophers treated them that way. But now we can see that their “obviousness” comes largely from our familiarity with the kind of space we live in, not from any deep a priori guarantee.
By imagining worlds where Euclid’s axioms fail, mathematicians used logic to pry open common-sense assumptions. They demonstrated that alternative geometries are coherent: some very different from ours, some only slightly different.
In fact, some non-Euclidean spaces differ so little from Euclidean space at the scale of everyday measurement that observation may not be able to tell us which one we’re actually in.
So the whole situation flips:
- Before: experience seemed to give us only one kind of space, and logic supposedly proved it impossible.
- Now: logic offers many possible spaces, and experience can only partly decide among them.
Our knowledge of what is becomes less absolute than philosophers once imagined. But our knowledge of what could be expands enormously.
Instead of living inside a small, walled room where everything could be mapped corner to corner, we find ourselves in an open landscape of possibilities. A lot remains unknown—not because the world is too small to understand, but because there’s so much more that could be true than we once realized.
What this means for knowledge in general
What happened with space and time has happened elsewhere, too. The project of telling the universe what it must be like, purely from a priori principles, has broken down.
Logic no longer acts as the bouncer at the door of reality, throwing out possibilities. It becomes something closer to a key that unlocks imagination: it reveals countless alternatives that unreflective common sense never even considers. And then it leaves experience to do the job of choosing among them—where choosing is possible.
So our knowledge of what exists is limited to what we can learn from experience. That doesn’t mean it’s limited to what we directly experience, because—as we’ve seen—there’s also knowledge by description about things we’ve never encountered firsthand.
But knowledge by description always depends on some general linkage—some connection among universals—that lets us infer, from a given piece of data, that something of a certain sort exists.
For physical objects, for example, we rely on a principle like: sense-data are signs of physical objects. That principle is a connection among universals, and without something like it, experience wouldn’t get us from sensations to an external world.
The same kind of dependence shows up in the law of causality, or in more specific principles such as the law of gravitation.
A principle like gravitation isn’t proved in the strict sense. It becomes highly probable through a mix of experience and some a priori ingredient—especially the principle of induction.
So our intuitive knowledge—the ultimate source of all our knowledge of truths—comes in two basic kinds:
- pure empirical knowledge: tells us that particular things exist and gives some of their properties, through direct acquaintance
- pure a priori knowledge: gives us connections among universals, allowing us to reason from the particular facts experience provides
All our derivative knowledge depends on some pure a priori knowledge and usually also on some pure empirical knowledge.
So what makes philosophy different from science?
If all this is right, then philosophical knowledge doesn’t differ from scientific knowledge in some magical way. Philosophy doesn’t have a secret pipeline to truth that science lacks. And its results aren’t radically different in kind from scientific results.
What sets philosophy apart is its role as criticism.
Philosophy critically examines the principles we use in science and in ordinary life. It looks for inconsistencies. It refuses to accept a principle until, after careful examination, no good reason to reject it has appeared.
If the basic principles behind the sciences—stripped of irrelevant details—could really give us knowledge about the universe as a whole, then that knowledge would deserve the same confidence we give to scientific knowledge. But our inquiry hasn’t uncovered any such sweeping, universe-level knowledge. So as far as the bold doctrines of ambitious metaphysicians go, the result is mostly negative.
Still, the overall outcome isn’t purely destructive. Where ordinary claims to knowledge are concerned, philosophy’s critical scrutiny rarely forces us to abandon them: we’ve seldom found a compelling reason to reject them, and we’ve seen no reason to think human beings are incapable of the kind of knowledge we usually assume we have.
A crucial limit: criticism isn’t total skepticism
But if philosophy is a critique of knowledge, it needs a boundary.
If you take the stance of the complete skeptic—trying to stand wholly outside all knowledge and demanding to be forced back in from that outside position—you’re asking for something impossible. That kind of skepticism can never be refuted, because any refutation has to start from some shared piece of knowledge. From pure blank doubt, no argument can even get off the ground.
So philosophical criticism can’t be that kind of total demolition if it’s going to achieve anything. Against absolute skepticism, no logical argument can be offered.
And it’s not hard to see why that kind of skepticism is unreasonable.
Consider Descartes’ famous “methodical doubt,” which helped launch modern philosophy. It isn’t the “stand outside everything” skepticism. It’s exactly the kind of criticism that philosophy should practice: doubt whatever seems doubtful, and pause over each claim to ask whether, on reflection, you can still feel certain you really know it.
Some things—like the existence of our sense-data—seem indubitable no matter how calmly and thoroughly we reflect. In cases like that, philosophical criticism doesn’t demand that we suspend belief.
But other beliefs—like the belief that physical objects exactly resemble our sense-data—often survive only as long as we don’t look too closely. Once we examine them carefully, they can dissolve. When that happens, philosophy tells us to give them up—unless we can find some new and better argument to support them.
Philosophy isn’t in the business of tossing out beliefs just because someone might object to them. If you’ve examined a belief as carefully as you can and it still doesn’t show any real weaknesses, it would be irrational to reject it—and that’s not what philosophy recommends.
What philosophy pushes for is a different kind of criticism: not the reflex that says “deny everything,” but the discipline that says “test what you think you know.” You take each claim that looks like knowledge, weigh it on its own merits, and then keep what still deserves to be called knowledge after that scrutiny.
Even then, you have to admit something uncomfortable: some chance of being wrong never disappears, because humans make mistakes. The best philosophy can honestly promise is this:
- it reduces the risk of error, and
- in some cases it reduces that risk so far that, for practical purposes, it barely matters.
But it can’t deliver perfect immunity from error. In a world where mistakes are unavoidable, that’s simply not on the menu—and no careful, responsible defender of philosophy would pretend otherwise.
15
The Value of Philosophy
We’ve reached the end of our quick—and admittedly incomplete—tour through philosophy’s big problems. So it’s worth asking the obvious closing question: what is philosophy for, and why should anyone study it?
Plenty of people—especially those immersed in science, business, or day-to-day problem solving—suspect philosophy is basically harmless but pointless: clever word games, fussy distinctions, and endless arguments about questions no one can ever truly answer.
That skepticism usually comes from two misunderstandings:
- A narrow idea of what counts as a good life.
- A narrow idea of what kinds of “goods” philosophy is even trying to deliver.
Think about physical science. Thanks to inventions and technology, science helps millions of people who have never opened a physics textbook. We recommend studying science not only because it shapes the student, but because it changes the world.
Philosophy doesn’t work like that. It doesn’t directly produce gadgets, medicines, or bridges. If philosophy has value beyond the people who study it, it’s mostly indirect—through how it changes the minds and lives of those people. So if philosophy is valuable, we should expect to find that value primarily in its effects on the thinker.
Getting Past “Practical” Prejudice
To judge philosophy fairly, we have to shake off the bias of what are often called “practical” people. In this common sense of the word, a “practical” person recognizes material needs—food, shelter, money, health—but forgets that minds need nourishment too.
Even in a world where everyone had enough and where poverty and disease were pushed as low as possible, we’d still have huge work left to do to build a truly worthwhile society. And right now, in the world we actually live in, the goods of the mind matter at least as much as the goods of the body.
That’s where philosophy belongs. Its value lives almost entirely among the mind’s goods—and anyone who feels indifferent to those goods will never be convinced philosophy is anything but wasted time.
Philosophy’s Awkward Record: Knowledge Without Final Answers
Like every serious field of study, philosophy aims at knowledge. But the kind of knowledge it seeks is distinctive:
- Knowledge that gives unity and coherence to the sciences as a whole.
- Knowledge that comes from a critical examination of the foundations of our convictions—our beliefs, biases, and inherited assumptions.
Still, there’s no escaping an uncomfortable fact: philosophy hasn’t achieved “results” in the same way other fields have.
Ask a mathematician, a geologist, or a historian what definite truths their field has established, and they can talk for hours. Ask a philosopher for an equally clear list, and an honest one will admit that philosophy hasn’t produced a settled body of conclusions comparable to the sciences.
But there’s an important reason for that: once a philosophical question becomes answerable with reliable methods, it usually stops being called philosophy and becomes its own science.
- What we now call astronomy was once part of philosophy; Newton titled his masterpiece The Mathematical Principles of Natural Philosophy.
- What we now call psychology was once folded into philosophy as the study of mind.
So a lot of philosophy’s “uncertainty” is an illusion created by the way we label things. The questions that can be answered with precision migrate into the sciences. The questions that can’t—at least not yet—are what remain under the name philosophy.
The Questions That Won’t Leave Us Alone
That explanation is only part of the story. Some questions—especially the ones that cut deepest into our spiritual and existential life—may be unsolvable by human intelligence as it currently exists. Unless our minds someday become something radically different, these may remain permanently open.
Questions like:
- Does the universe have a unified plan or purpose, or is it just a lucky collision of atoms?
- Is consciousness a lasting feature of reality—something that could grow indefinitely into greater wisdom—or is it a brief accident on a tiny planet that won’t stay habitable forever?
- Do good and evil matter to the universe itself, or only to human beings?
Philosophers ask these questions, and different philosophers offer different answers. But even if the true answers exist “out there” to be discovered, philosophy’s proposed answers aren’t demonstrably true in the way scientific conclusions can be.
And yet this doesn’t make philosophy pointless. Even when our chances of settling a question are slim, philosophy’s job is to keep those questions alive:
- to make us feel their weight,
- to explore every possible route toward them,
- and to protect that speculative curiosity about the universe that can wither when we limit ourselves to what can be measured, proven, and filed away as “known.”
Why Philosophy Can’t Be a Proof Machine
Many philosophers have believed that philosophy could deliver strict proofs about the most fundamental issues—especially the claims people care about in religion. To evaluate attempts like that, we have to step back and take stock of human knowledge: its methods, its reach, and its limits.
It would be foolish to speak with absolute certainty here. But if our earlier investigations haven’t misled us, we’re pushed toward a sobering conclusion: we should give up the hope of finding purely philosophical proofs for religious doctrines.
So we shouldn’t treat philosophy’s value as a tidy set of demonstrated answers to ultimate questions. Whatever philosophy is good for, it isn’t primarily a warehouse of final, provable conclusions.
The Value of Philosophy Lies in Its Uncertainty
Oddly enough, philosophy’s value is found largely in the very thing critics complain about: its uncertainty.
A person with no exposure to philosophy often lives inside a mental enclosure built from:
- “common sense,”
- the default assumptions of their era and nation,
- and beliefs absorbed without ever being consciously chosen or examined.
For that person, the world tends to feel obvious and finished—definite, limited, and settled. Everyday objects raise no questions, and unfamiliar possibilities get dismissed with a shrug.
Philosophy breaks that spell.
The moment you start philosophizing, you discover that even ordinary things lead to problems where the best available answers are partial and tentative. Philosophy can’t promise certainty about the doubts it raises. But it can do something just as important: it can open up possibilities.
It stretches the imagination and loosens the grip of custom. In a sense, it makes a trade:
- It reduces our confidence about what things must be.
- But it increases our understanding of what things might be.
It also undercuts the arrogant certainty of people who have never wandered into the freeing territory of doubt. And it keeps wonder alive by letting us see familiar life from an unfamiliar angle.
A Bigger World, A Bigger Self
Beyond revealing hidden possibilities, philosophy has another value—maybe its greatest one. It comes from the scale of what philosophy asks us to contemplate, and from the freedom that comes when you stop making everything about your private life.
The “instinctive” life stays inside a tight circle of personal interests. It may include family and friends, but the wider world matters mainly as a tool or obstacle for what we want. Compared with a philosophical life, that instinctive life feels cramped and feverish.
Your private world is small. It sits inside a vast and powerful universe that will, sooner or later, smash it. If we never expand our interests beyond our own little circle, we end up like soldiers trapped in a fortress under siege—knowing escape is impossible and surrender is inevitable. That mindset produces no peace, only a constant fight between desire and our inability to control what ultimately happens.
If we want a life that is genuinely great and free, we have to find a way out of that prison.
Contemplation as an Escape Route
One escape is philosophical contemplation.
At its widest, contemplation doesn’t carve the universe into two teams—friends and enemies, helpful and hostile, good and bad. It tries to see the whole thing impartially.
At its best, it also doesn’t start with the mission of proving the universe is “like us.” Yes, learning expands the self—it creates a kind of union between self and not-self. But that expansion works best when it isn’t the goal you chase directly.
It happens when the desire for knowledge is the only driving force—when you study without first insisting that reality must have certain comforting features, and instead allow your mind to adjust to the world as it is.
The opposite approach—starting with yourself and trying to show the world is basically compatible with what you already are—comes from self-assertion. And self-assertion, in philosophy as in life, blocks the very growth it claims to want. It turns the world into a means to personal ends. It makes reality smaller than the self, and in doing so, it limits what the self is capable of becoming.
In contemplation, we begin from the not-self. And because the not-self is vast, the boundaries of the self expand. By confronting the universe’s infinity, the mind gains, in its own limited way, a share in that infinity.
Why “Humans Are the Measure” Shrinks the Mind
That’s why philosophies that try to force the universe to fit human categories don’t cultivate greatness of soul.
Knowledge is a union between self and not-self. And like any union, it is weakened by domination—by trying to make the other side conform to you.
There’s a widespread philosophical temptation to say:
- “Human beings are the measure of all things.”
- Truth is something we make.
- Space, time, and universals are just features of the mind.
- If anything exists outside what the mind creates, it’s unknowable and irrelevant to us.
If our earlier reasoning was right, this view is false. But even aside from being false, it drains philosophical contemplation of what makes it valuable, because it chains contemplation to the self.
On that view, “knowledge” isn’t a genuine union with the not-self. It’s a bundle of our own prejudices, habits, and desires—a thick veil we hang between ourselves and the wider world. Someone who finds comfort in that kind of theory is like a person who never leaves home because they’re afraid they won’t be in charge everywhere they go.
What a Free Intellect Tries to Be
Real philosophical contemplation finds satisfaction in every enlargement of the not-self—in anything that makes the objects of thought bigger and, by reflection, makes the thinker bigger too.
Anything personal, private, or self-serving—anything rooted in habit, self-interest, or desire—distorts what we see and weakens the union the mind is trying to achieve. When those personal pressures sit between subject and object, they turn into a prison.
A free intellect aims to see as God might see:
- without being trapped in “here” and “now,”
- without hopes and fears,
- without the weight of inherited beliefs and cultural prejudices,
- calmly and without bias,
- driven solely by the desire to know.
That’s why such an intellect values abstract and universal knowledge—knowledge that doesn’t depend on private biography—more than sensory knowledge, which is inevitably tied to a particular body, a particular viewpoint, and sense organs that distort as much as they reveal.
How Philosophy Changes Action and Emotion
A mind trained in philosophical freedom and impartiality tends to carry some of that mindset into ordinary life—into action and feeling.
It learns to see its goals and desires as small parts of a much larger whole. And once you see your wants as tiny fragments in a world that barely notices any one person’s deeds, you stop clinging to them with desperate insistence.
The same mental quality that, in contemplation, becomes the pure desire for truth shows up elsewhere as:
- justice in action, and
- a universal love in emotion—care that can extend to all people, not only to those we find useful or impressive.
So contemplation enlarges more than our thoughts. It enlarges what we do and who we can love. It makes us citizens of the universe, not just residents of one walled city at war with everything outside it.
And in that wider citizenship lies our real freedom: liberation from the tyranny of narrow hopes and fears.
The Bottom Line
So here’s what philosophy is for.
We don’t study philosophy to collect definite answers, because—most of the time—definite answers to philosophical questions can’t be known to be true. We study it for the sake of the questions themselves:
- because they expand our sense of what’s possible,
- because they enrich the imagination,
- because they weaken the kind of dogmatic certainty that shuts down thought,
- and, above all, because by contemplating a universe so much larger than ourselves, the mind grows larger too—until it becomes capable of a deeper union with reality, which is its highest good.