SECTION I
Of the DIFFERENT SPECIES of PHILOSOPHY
Moral philosophy—what we’d now call the science of human nature—can be written in two very different styles. Each has its own strengths, and each can help us in the same big ways: it can entertain us, teach us, and even push us to become better people.
1) Philosophy for living: the “easy and humane” style
One approach starts from a simple premise: human beings are built to act. We move through life guided not just by logic, but by taste, feeling, and judgment—we chase what looks valuable and back away from what looks harmful, depending on how things appear to us in the moment.
Because virtue is widely treated as the most valuable goal, writers in this style try to make virtue feel irresistible. They borrow the tools of poetry and rhetoric, use clear language, and aim straight at the imagination and the heart. They do things like:
- Pull vivid examples from everyday life.
- Set contrasting characters side by side, so the difference between good and bad behavior is impossible to miss.
- Tempt us toward virtue with the promise of honor, happiness, and a life that feels worth living.
- Then keep us on track with practical rules and memorable models to imitate.
Their goal is emotional as much as intellectual: they want us to feel the gap between vice and virtue, to have our reactions trained and steadied. And if they can bend our hearts toward honesty and genuine honor, they think they’ve done the job.
2) Philosophy for understanding: the “accurate and abstruse” style
The second approach looks at people less as doers and more as thinkers. Its target is the mind’s machinery. Instead of polishing our manners, it tries to sharpen our understanding.
These philosophers treat human nature as a puzzle to be solved. They examine it closely to discover the principles that:
- govern how we reason,
- trigger our emotions,
- and make us approve or disapprove of particular actions, characters, or ideas.
They’re bothered—almost embarrassed—that philosophy still debates morality, logic, and aesthetics without agreeing on their foundations. We talk confidently about true vs. false, right vs. wrong, beautiful vs. ugly, but where do these distinctions come from? What creates them in the first place?
So they take on a hard assignment. They start with concrete cases, extract general rules, then keep climbing toward even more general principles, refusing to stop until they reach the deepest “first principles” that set the limits of inquiry in any science. Their writing can feel abstract, even unreadable to many people. But they aren’t aiming for mass appeal. They want the respect of the learned—and they’ll happily spend a lifetime on the chance to uncover a few buried truths that might educate future generations.
Why most people prefer the “easy” kind
It’s no surprise that the “easy and obvious” style wins the popularity contest. Most people find it not only more enjoyable, but more useful.
That’s because it lives close to real life. It shapes our affections, grabs the motives that actually move us, reforms behavior, and pulls us toward a clearer picture of human excellence. By contrast, the abstract style often loses its grip the moment the philosopher steps out of quiet reflection and back into the noise of real life. In the heat of passion, the push of ambition, the tug of love or anger, the intricate conclusions of deep theory can evaporate. The “profound philosopher” becomes, in practice, just another ordinary person.
Fame tends to follow the readable
There’s another uncomfortable truth: the philosophy written for ordinary human beings usually earns the most durable—and fairest—reputation. Deep abstract systems often enjoy a brief fashion in their own era, then fade with later generations.
One reason is simple: subtle reasoning is easy to derail. A single mistake in a delicate argument tends to generate more mistakes as the thinker follows consequences wherever they lead, even when the conclusion looks bizarre or clashes with common opinion. But the writer who aims to refine common sense, not replace it, has a safety rail. If they slip into error, they don’t tumble far. They can return to ordinary judgment and the mind’s natural sentiments, correct course, and avoid dangerous illusions.
That’s why, the author claims, some writers remain widely admired while others don’t travel well across time and place: Cicero still reads fresh; Aristotle’s glory has withered. La Bruyère crosses borders; Malebranche mostly stays at home. Addison may keep charming readers long after Locke is no longer fashionable.
The ideal human being is not “pure philosopher” or “pure ignoramus”
A person who is only a philosopher often seems unpopular in society—assumed to contribute neither pleasure nor practical advantage, living cut off from people, lost in principles no one else can follow. But the person who is only ignorant is worse off: in a culture where knowledge thrives, nothing signals a cramped, ungenerous mind more clearly than having no taste for learning at all.
The best character, then, sits between these extremes. It combines:
- a real love of books,
- ease in company,
- competence in business and practical life,
- conversational tact and discernment shaped by learning,
- and integrity and precision shaped by sound thinking.
If that’s the goal, then writing in the easy, humane style is especially useful. It doesn’t yank us too far from life, doesn’t demand a hermit’s retreat to understand, and sends the student back into the world stocked with noble sentiments and usable guidance. Done well, it makes:
- virtue feel attractive,
- science feel welcoming,
- conversation feel smarter,
- and solitude feel like pleasure, not exile.
Human nature wants a balanced life
Human beings, the author argues, have three basic pulls:
- We’re reasonable, so we need knowledge the way we need food.
- We’re social, so we need company that’s genuinely enjoyable.
- We’re active, so we have to work, manage responsibilities, and stay engaged.
But each of these has limits. Our understanding is narrow, so learning can’t fully satisfy us—either because our knowledge is never secure enough, or never extensive enough. We can’t always find good company, and even when we can, we can’t always keep the taste for it. And business and labor, necessary as they are, exhaust the mind; we need rest, and we can’t stay forever bent toward care and effort.
So nature points us toward a mixed life—a balanced way of living that doesn’t let any one bias overtake the others and make us unfit for everything else.
It’s as if nature gives this advice: pursue knowledge, but keep it human—keep it connected to action and to society. As for endless depth and obscure speculation, she “forbids” it, and punishes it with melancholy, uncertainty, and the chilly reception our grand “discoveries” often get when we try to share them.
In other words: Be a philosopher—but, with all your philosophy, still be a human being.
So why defend deep, abstract thinking at all?
If most people simply preferred the easy style without sneering at the other, we could let everyone follow their own taste. The problem is that people often go further: they don’t just prefer the readable kind—they reject all profound reasoning outright, especially anything labeled “metaphysics.”
So the author now turns to the defense: what can reasonably be said on behalf of the accurate and abstract approach?
Defense #1: deep analysis supports good popular writing
Start with a practical point: the “accurate and abstract” style is often the hidden foundation of the “easy and humane” one. Without careful analysis, the friendly style can’t reach real precision in its feelings, moral advice, or reasoning.
Think about what literature and refined writing do. They’re basically portraits of human life—people in different situations that trigger our reactions: praise or blame, admiration or ridicule, warmth or disgust. To paint those portraits well, an author needs more than quick wit and good taste. They also need an accurate map of how the mind works:
- how understanding operates,
- how passions move,
- and what kinds of sentiments separate vice from virtue.
This inner investigation can feel unpleasant. But it’s often necessary if you want to describe the outward surfaces of life with real skill.
The author drives the point home with an analogy: the anatomist deals in ugly sights—cutting into the body, exposing what we’d rather not see. But that knowledge helps the painter depict even the most beautiful figure. A painter can dazzle with color and grace, but to make a body look real, they still have to understand muscles, bones, and how every part fits and functions.
Likewise, accuracy helps beauty, and sound reasoning helps delicate feeling. You can’t truly elevate one by trashing the other.
Defense #2: precision improves every practical field
Zoom out to society. In every trade and profession—even the ones most tied to action—an acquired habit of accuracy pushes the work closer to perfection and makes it more useful to the public.
Even if a philosopher lives far from practical affairs, the spirit of philosophy, if cultivated by enough people, spreads through the culture. Over time it injects a kind of correctness into everything:
- Politicians gain sharper foresight and more subtle skill at dividing and balancing power.
- Lawyers become more methodical, with cleaner principles in their arguments.
- Generals grow more disciplined and cautious in strategy and operations.
The author even links this to history: modern governments, more stable than ancient ones, and modern philosophy, more exact than earlier thinking, have improved step by step—and may continue improving the same way.
Defense #3: curiosity is a real, harmless pleasure
Even if abstract studies offered nothing beyond satisfying curiosity, that alone shouldn’t be dismissed. Safe pleasures are rare. Learning is one of the sweetest and least harmful paths through life. Anyone who clears obstacles from that road—or opens a new view—deserves to be called a benefactor.
And yes, this kind of research can be tiring. But some minds are like some bodies: if you’re vigorous, you need hard exercise, and you can even enjoy it. Obscurity hurts—mentally, like darkness hurts the eyes. But the work of dragging an idea into the light can be deeply satisfying.
The strongest objection: obscure metaphysics breeds error—and shelters superstition
Still, the central objection remains: deep abstract philosophy is not only difficult; it can be a factory for uncertainty and mistakes.
Here, the author concedes a lot. This is the most reasonable complaint against a large chunk of metaphysics: much of it isn’t genuinely scientific. It often comes from either:
- human vanity—the urge to punch through questions our minds simply can’t reach, or
- the tricks of superstition—systems that can’t defend themselves honestly, so they throw up thorny, confusing jargon to protect their weakness.
When superstition gets driven out of open debate, it retreats into the intellectual “forest,” hiding in confusing arguments and waiting to ambush any unguarded mind with fear and prejudice. Even a strong opponent gets overwhelmed if they let their attention slip. And plenty of people, out of fear or laziness, swing the gates open and accept these intruders as rightful rulers.
That’s exactly why we shouldn’t quit
But does that mean philosophers should stop investigating and leave superstition entrenched in its hideout? The author argues the opposite. If superstition uses obscurity as a fortress, then we need to take the fight into that fortress.
It’s naïve to think people will eventually abandon these airy speculations just because they’ve been disappointed before. For one thing, many people have a real stake in keeping those topics alive. And more broadly, despair doesn’t belong in science. Even if earlier attempts failed, it’s still reasonable to hope that later generations—through effort, luck, or sharper insight—might discover what earlier minds couldn’t.
Ambitious thinkers won’t stop; they’ll be energized by their predecessors’ failures, imagining that the glory of success is reserved for them alone.
So what’s the only reliable way to clear learning of these dead-end, seductive questions? Study the human mind itself. Do a careful analysis of its powers and limits, and show—plainly and exactly—that it simply isn’t built for certain remote and abstract subjects.
We have to accept this hard work now so we can live more easily later. We have to cultivate true metaphysics carefully in order to destroy the false, counterfeit kind.
Some people protect themselves from bad philosophy through laziness; others are pulled in by curiosity. Sometimes despair wins; later, hope returns. The author’s point is that only accurate and disciplined reasoning works as a universal remedy. It’s the one thing capable of cutting through metaphysical jargon—especially when that jargon is mixed with superstition and dressed up to look like wisdom.
A final payoff: mapping the mind is real progress
And there’s more. Even after we’ve rejected the worst parts of speculative metaphysics, careful inquiry into human nature brings positive benefits.
One striking fact about the mind is that it’s both the closest thing to us and, when we try to study it directly, strangely hard to see. When we reflect on our own mental operations, they can feel hazy. The boundaries between them are hard to draw. The “objects” are too subtle to hold still; you have to catch them quickly, with a kind of penetrating attention that comes partly from natural talent and partly from practice and reflection.
That means a major scientific task is simply learning to:
- identify the mind’s different operations,
- separate them cleanly,
- classify them under the right headings,
- and bring order to what looks like confusion when we first turn inward.
Doing this kind of sorting might not seem impressive when you’re dealing with physical objects. But when the target is the mind—because it’s so hard—this ordering becomes far more valuable. And even if we go no further than drawing this mental geography, this map of the mind’s powers and parts, it’s still satisfying to get that far. In fact, if this subject seems “obvious,” that only makes ignorance of it more embarrassing for anyone claiming to be educated.
This kind of knowledge isn’t “chimerical”
Finally, the author argues, we have no reason to suspect that this science is pure fantasy—unless we embrace a skepticism so extreme it would wreck not just theory, but everyday action.
We can’t seriously doubt that the mind has multiple powers, that these powers are distinct, and that what’s distinct in immediate experience can be separated more clearly through reflection. So there really is truth and falsehood in claims about the mind, and that truth is not beyond human capacity.
Some distinctions are obvious to everyone—like the difference between the will and the understanding, or between imagination and passion. The more refined distinctions are just as real and certain, even if they’re harder to grasp.
Some real successes—especially the later ones—should update our sense of how solid and reliable this whole kind of inquiry can be. After all, we applaud the philosopher who builds a correct model of the solar system, nailing the positions and order of worlds we’ll never touch. So why would we shrug at the people who, with equal care and often real success, map the mind—the thing we’re inside of every moment?
And here’s the bigger hope: if we treat philosophy seriously, work at it patiently, and the public actually values it, why couldn’t it push further and uncover (at least partly) the hidden mechanisms that drive our mental life?
Think about what happened in astronomy. For a long time, astronomers were satisfied with this: start from what you can observe, then infer the true motions, arrangement, and sizes of the heavenly bodies. That was already impressive. But eventually someone came along who did more than describe the pattern—he identified the laws and forces that govern those planetary motions in the first place. Nature has yielded similar breakthroughs in other areas, too. So there’s no good reason to assume we can’t make comparable progress in understanding the mind—so long as we bring the same level of talent, caution, and discipline to the project.
It’s very likely that the mind’s workings form a kind of hierarchy:
- One mental operation depends on another.
- That second one can often be explained in terms of something more general.
- And that, in turn, may rest on an even broader, more universal principle.
How far this can go is hard to say. You probably can’t know in advance—and you might not know even after a serious attempt—exactly where the chain of explanation ends. But one thing is certain: people try to do this sort of theorizing every day, including those who do philosophy carelessly. That’s exactly why the task demands real attention. If a genuine system of principles is within human reach, careful work gives it the best chance of being achieved. And if it isn’t within our reach, careful work at least lets us set it aside with more confidence—rather than giving up out of laziness or confusion.
And to be clear, that “give up” conclusion is nothing to celebrate, and we shouldn’t accept it too quickly. If we assume from the start that no underlying principles can be found, we strip this whole branch of philosophy of much of its beauty and value.
Look at what moral philosophers have done. Faced with the huge variety of actions that win our approval or disgust, they’ve tried to identify some common principle that could explain why our moral reactions vary the way they do. Yes, they sometimes overreach—falling in love with one grand idea and trying to force everything into it. Still, they’re not unreasonable for expecting that there really are some general principles into which virtues and vices can be properly traced.
The same basic ambition shows up elsewhere:
- Critics look for general rules that explain why certain works move us and others don’t.
- Logicians look for general principles of reasoning.
- Politicians look for general principles about how societies hold together and fall apart.
And these efforts haven’t been pointless. They’ve produced real insights. With more time, more precision, and more sustained effort, these fields may move even closer to maturity.
So abandoning the entire search for underlying principles—throwing it all away at once—deserves to be called what it is: more rash, hasty, and dogmatic than even the most bold, confident systems that have tried to dictate their principles to humankind.
“But,” you might say, “this talk about human nature is so abstract. It’s hard to follow.” True. But difficulty isn’t evidence of falsehood. In fact, if these truths have escaped so many wise and profound thinkers for so long, it would be strange if they were obvious. And whatever effort this research demands, the reward can be worth it—not just in usefulness, but in the sheer satisfaction of understanding—if it adds even a little to our knowledge of subjects that matter this much.
Still, we shouldn’t pretend that abstraction is a virtue. If anything, it’s a handicap. The good news is that some of the difficulty may be reduced. With care, good method, and by cutting away unnecessary detail, we can often bring light to subjects from which uncertainty has scared off the thoughtful and obscurity everyone else.
Best of all would be this: to connect the different kinds of philosophy by showing that deep inquiry and clear writing don’t have to be enemies, and neither do truth and novelty. And better still if, by reasoning in this more straightforward way, we can quietly erode the authority of a dark, technical “mystery philosophy” that has too often served as a hiding place for superstition, and a cloak for absurdity, error, and confusion.
SECTION II
Of the ORIGIN of IDEAS
Everyone can tell there’s a big difference between two kinds of mental experience:
- What it’s like to feel something right now—say, the sharp pain of intense heat, or the comfort of mild warmth.
- What it’s like to remember that feeling later, or to imagine it ahead of time.
Memory and imagination can replay what the senses once delivered. But even at their strongest, they don’t fully match the punch of the original experience. At best, a memory can feel so vivid that you might say, “I can almost feel it again.” Still—unless illness or madness blurs the line—your mind doesn’t confuse that replay with the real thing. Poetry can paint gorgeous pictures, but it can’t make you mistake a description for an actual landscape. Even the liveliest thought is dimmer than the dullest sensation.
You can see the same contrast everywhere. Someone who’s actually furious is not in the same mental state as someone merely thinking about anger. If you tell me a person is in love, I understand what you mean and can picture the situation—but I won’t confuse my picture with the real storm of wanting, anxiety, joy, and agitation that love can bring. When we look back on past emotions, our mind can mirror them accurately, but in faded colors. You don’t need any special talent for philosophy to notice this difference.
So here’s a useful way to sort everything the mind perceives. Divide our perceptions into two categories, distinguished by how much force and vividness they have:
- Ideas (or thoughts): the softer, fainter perceptions—what you experience when you reflect, remember, or imagine.
- Impressions: the stronger, brighter perceptions—what you experience when you actually see, hear, feel, love, hate, desire, or choose.
This is not quite the everyday meaning of “impression,” but we don’t have a better umbrella word in ordinary language, probably because most people don’t need it outside of philosophy. By impressions, then, I mean all those vivid mental events that arrive as lived experience. By ideas, I mean the weaker echoes we have when we think back on those experiences.
At first glance, human thought seems wildly free. It slips past human authority, and it even seems to outrun nature itself. The imagination can invent monsters and mash together mismatched parts as easily as it can picture ordinary objects. While the body is stuck on one planet, crawling around with effort, thought can instantly carry us to the edge of the cosmos—or even beyond it, into an imagined chaos where nature dissolves into confusion. We can conceive of things we’ve never seen or heard of. In fact, nothing seems out of reach for thought except what involves a flat contradiction.
But look closer, and this freedom turns out to be more limited than it first appears. The mind’s “creative power” is really just the ability to work with what it has already been given—to combine, rearrange, increase, or decrease the materials supplied by experience and the senses.
A “golden mountain” is simply two familiar ideas—gold and mountain—stitched together. A “virtuous horse” is imaginable because you already have an inner sense of virtue from your own experience, and you can attach that idea to the shape of a horse you know well. More generally:
- The raw materials of thinking come from outer experience (what we sense) and inner experience (what we feel and notice in ourselves).
- The mind then mixes and composes those materials as it pleases.
Put in more technical language: ideas—our fainter perceptions—are copies of impressions—our more vivid ones.
Two arguments support this.
First, when you take any thought you have—no matter how lofty or complicated—and break it down, you always find it resolves into simpler ideas that trace back to earlier feelings or experiences. Even ideas that seem, at first, furthest from sense and experience end up coming from there once you examine them carefully.
Take the idea of God, understood as an infinitely intelligent, wise, and good being. Where does that come from? We start by noticing intelligence, wisdom, and goodness in the operations of our own mind and character. Then we stretch those qualities—without limit—into the infinite. Follow this line of inquiry as far as you like: every idea you inspect will turn out to be copied from some similar impression.
And if anyone claims this isn’t always true, there’s a straightforward way to challenge it: point to an idea that supposedly doesn’t come from impressions. Once that idea is produced, the burden shifts to us to find the corresponding impression—the vivid perception that matches it.
Second, when someone lacks a sense, we see that they also lack the ideas that would have come through that sense. A person born blind can’t form an idea of color; a person born deaf can’t form an idea of sound. Give the blind person sight or the deaf person hearing—open that channel of sensation—and you also open the channel of ideas, and they can grasp these objects without trouble.
The same thing happens when the relevant stimulus has simply never reached the organ. Someone who has never tasted wine can’t know its flavor. And while it’s rare for a human to be completely missing an entire passion that belongs to our species, we still notice weaker versions of the same pattern: a gentle person struggles to imagine deep, settled cruelty or revenge; a selfish person doesn’t easily reach the heights of friendship and generosity. We readily accept that other creatures might have senses we can’t even picture—precisely because the ideas for those senses have never entered our minds in the only way ideas ever get in: through actual sensation and feeling.
There is, however, one puzzling case that suggests a small exception. We can agree that different colors are distinct ideas, even though they resemble one another. If that’s true across different colors, it should also be true across different shades of the same color: each shade is its own distinct idea.
If you deny that, you run into trouble. Because shades can change by tiny steps, you can slide gradually from one shade into something very distant. If none of the intermediate shades are genuinely different, it becomes absurd to say the extremes are different.
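Spelled out (the indexing here is mine, not the text’s): suppose the shades in the gradient are s0, s1, …, sn. If each adjacent pair counted as one and the same idea, identity would chain across the whole sequence:

$$
s_0 = s_1,\quad s_1 = s_2,\quad \ldots,\quad s_{n-1} = s_n \;\Rightarrow\; s_0 = s_n.
$$

Then the darkest and lightest shades would be the same idea, which is plainly false. So each shade must count as a distinct idea.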
Now imagine someone who has had perfect eyesight for thirty years and knows every color and shade—except for one particular shade of blue that, by sheer chance, they’ve never encountered. Lay out every other shade of blue in a smooth gradient from darkest to lightest, leaving out only that missing shade. They would notice a gap: a “blank” where something should be, and a larger jump between the neighboring shades than anywhere else in the sequence.
The question is: could that person, using imagination alone, supply the missing shade—forming the idea of it even though it never came through their senses? Most people will say yes. If so, that would show that simple ideas are not always, in every single case, derived from matching impressions. Still, the case is so unusual that it’s hardly worth building a new theory around it; it doesn’t justify abandoning the general rule.
With that on the table, we reach a principle that’s not only easy to understand, but—if used properly—could make philosophical disputes far clearer and drive out a lot of the foggy language that has long embarrassed metaphysics.
Here is why. Ideas, especially abstract ones, tend to be faint and slippery. The mind grips them weakly. They blur into neighboring ideas that resemble them. And when we use a word often—sometimes without a clear meaning—we start to feel as if we must have a definite idea attached to it, even when we don’t.
Impressions, by contrast—meaning sensations and feelings, outer or inner—arrive with force and vividness. Their boundaries are sharper. It’s much harder to get confused about what you’re experiencing when the experience is right there, strong and clear.
So whenever you suspect that a philosophical term is being used without any real meaning (which happens far too often), here’s a simple test: ask, What impression is this supposed idea derived from? If you can’t point to any impression—any lived sensation or feeling that could have produced the idea—then you have strong reason to think the term is empty.
By forcing ideas into this brighter light, we can reasonably hope to remove many disputes that arise over their nature and reality.
SECTION III
Of the ASSOCIATION of IDEAS
Your mind doesn’t produce thoughts as random, disconnected sparks. There’s a built-in linking principle that ties one idea to the next, so that when ideas show up in memory or imagination, they tend to arrive with a noticeable order.
You can see this most clearly when you’re thinking seriously or trying to explain something out loud. A thought that barges in and derails the ongoing train of ideas stands out immediately—and you usually push it aside as irrelevant. But the same thing is true even when your mind seems least disciplined. In daydreams, in scattered reveries, even in dreams, if you look back carefully you’ll often find that the imagination wasn’t just running loose; there was still some thread connecting the ideas as they followed one another.
The same pattern shows up in ordinary conversation. If you transcribed even the loosest, most freewheeling chat, you’d still be able to spot connections from one turn to the next. And when the transitions seem to make no sense—when someone “changes the subject” so abruptly it feels like the thread snapped—that person can usually explain it afterward: there was an unseen chain of thoughts in their head that gradually carried them from the original topic to the new one.
There’s an even broader hint that this linking tendency is not just personal but human. Across different languages—even among peoples with no obvious connection—words that express complex ideas often line up in surprisingly similar ways. That’s a sign that the simpler ideas packed inside those complex ones are commonly tied together by some general principle that works the same way in all of us.
So yes: ideas connect. That’s obvious. What’s less obvious is that philosophers have rarely tried to do the next step—actually listing and sorting the basic kinds of connections that hold our thoughts together. And that seems like exactly the kind of question worth investigating.
As I see it, there are only three fundamental ways ideas link up:
- Resemblance
- Contiguity (being near each other in time or place)
- Cause and effect
It’s hard to doubt that these really do connect our thoughts. A picture naturally makes you think of the person it depicts. Mention one room in a building and you’re quickly talking about the others. Think about a wound and it’s almost impossible not to think about the pain that comes after it.
The harder claim is that this list is complete—that there are no other basic principles of association beyond these three. It’s difficult to prove that in a way that fully satisfies a reader, or even fully satisfies the person making the list. In cases like this, the best method is more practical than formal: take many examples, ask what exactly links the ideas in each case, and keep refining your explanation until the linking principle is as general as it can be. The more cases you review, and the more carefully you inspect them, the more confident you become that your final list really is complete.
SECTION IV
SCEPTICAL DOUBTS concerning the OPERATIONS of the UNDERSTANDING
PART I — Two Kinds of Knowledge
Everything we can think about, argue about, or investigate falls into two broad categories: relations of ideas and matters of fact.
1) Relations of ideas are the truths you can know just by thinking clearly. This is the territory of geometry, algebra, arithmetic—anything that’s intuitively obvious or demonstrably certain.
For example:
- “The square of the hypotenuse equals the sum of the squares of the other two sides” states a relation among geometric figures.
- “Three times five equals half of thirty” states a relation among numbers.
You don’t need to look out the window, run an experiment, or consult a history book to know these are true. They’re true because of how the ideas are defined and how the logic works. Even if the universe had never contained a perfect triangle or circle, Euclid’s proofs would still hold.
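A minimal restatement of the two examples in modern notation (the side labels a, b, c are mine, with c the hypotenuse):

$$
a^2 + b^2 = c^2, \qquad 3 \times 5 = 15 = \frac{30}{2}.
$$

Denying either equation contradicts the very ideas involved, which is the mark of this first category.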
2) Matters of fact are different. They’re about what exists and what happens in the world. And they can’t be established the same way.
Here’s the key feature: the opposite of any matter of fact is always conceivable. It never contains a contradiction. You can imagine it clearly, even if it’s wildly unlikely.
“The sun won’t rise tomorrow” is perfectly understandable. It’s no contradiction in terms. It’s not like saying “a square circle exists.” So you can’t prove by pure demonstration that the sun must rise tomorrow. If the claim “the sun won’t rise” were demonstrably false, it would involve a contradiction—and we wouldn’t even be able to form the idea distinctly.
So it’s worth asking: what kind of evidence convinces us about reality beyond what we currently sense or what we remember? That is, how do we justify beliefs about things we aren’t perceiving right now?
Philosophers, ancient and modern, haven’t explored this question as carefully as they might have. That means some confusion is excusable—we’re walking on rough ground with few signposts. But the effort is still valuable. Doubt can be healthy here: it sparks curiosity and breaks the lazy, automatic confidence that kills real inquiry. If everyday “common philosophy” has cracks, noticing them shouldn’t discourage us. It should push us to build something clearer and more satisfying.
Why Cause and Effect Runs the Show
Whenever we reason about matters of fact—especially facts not currently present to our senses—we rely on cause and effect. That relation is what lets us move beyond “what I see right now” and “what I remember.”
If you ask someone why they believe something absent—say, “My friend is in the countryside,” or “My friend is in France”—they’ll point to some other fact:
- a letter or message they received,
- the friend’s known plans, promises, or habits.
Or imagine you find a watch, or any complex device, on a deserted island. You immediately conclude that people were there. Why? Because the watch looks like an effect of human design, and you treat it as connected to its cause.
This is the general pattern: we treat a present fact as tied to another fact we infer. If there were nothing binding them, the inference would be pure guesswork.
In the dark, for example, you hear a clear voice and coherent speech. You take that as evidence that a person is nearby. Why? Because articulate speech is an effect tightly linked to human beings.
If you dissect any of these inferences, you’ll find the same engine underneath: cause and effect, whether the link is close or distant, direct or indirect. Even when two things aren’t cause and effect in a straight line, we infer one from the other because they share a cause—like heat and light, which are collateral effects of fire. From one effect, we may reasonably infer the other.
So if we want to understand the kind of evidence that supports matters of fact, we need to answer a deeper question:
How do we come to know cause and effect in the first place?
Cause and Effect Isn’t Read Off Objects by Pure Reason
Here’s a sweeping claim: we never learn cause and effect by a priori reasoning. We learn it entirely from experience, when we notice that certain kinds of things regularly appear together.
Give a person the sharpest mind imaginable, and show them an object they’ve never encountered before. No matter how carefully they examine its visible and tangible qualities, they still won’t be able to “reason out” its causes or predict its effects.
The point is simple: an object’s sensory features don’t announce their consequences.
Even if you imagine Adam—newly created, with perfect rational powers from the start—he couldn’t have inferred:
- from water’s clarity and fluidity that it can suffocate,
- from fire’s warmth and brightness that it can burn and consume.
No object, just by how it looks or feels, reveals what produced it or what it will produce. Without experience, reason can’t leap from “this exists” to “this will happen.”
This is easy to accept when we remember being unfamiliar with something and having no clue what it would do. It’s also easy to accept for odd, surprising events that don’t resemble everyday patterns—no one thinks you can deduce the explosion of gunpowder, or the pull of a magnet, by sheer logic alone.
And when effects depend on complicated inner structure—hidden mechanisms we can’t see—we naturally credit experience for our knowledge. Who can honestly claim to know, purely by reason, why milk and bread nourish humans but not lions or tigers?
Familiarity Tricks Us Into Thinking We “Could’ve Figured It Out”
The temptation arises with the most familiar events—things we’ve seen since childhood, that seem to flow from the “simple” qualities of objects and fit neatly into the ordinary course of nature.
We start to feel as if we could have figured them out without experience. We imagine that if we were dropped into the world fully grown, we’d immediately know that one billiard ball, striking another, will transfer motion. We think we wouldn’t need to wait and see.
But that confidence is mostly habit wearing a mask. Custom doesn’t just cover our ignorance—it hides itself. When it’s strongest, it feels like “obvious reason,” precisely because it’s been reinforced so many times that we no longer notice the training.
Why Pure Reason Can’t Predict a Single Natural Effect
To see this clearly, try a thought experiment. Suppose an object is presented to you, and you’re required—without using any memory, without consulting any past observation—to say what effect will result. How would your mind proceed?
It would have to invent an effect. It would have to imagine something that might follow. And that invention would be arbitrary.
Why? Because the effect is not contained in the cause. It’s a different event entirely. You can inspect the cause as much as you like, but you’ll never find the effect “inside it,” the way you can find logical consequences inside definitions.
Take the classic billiard-ball case:
- The motion of the first ball is one event.
- The motion of the second ball is another event.
Nothing about the first event, considered on its own, logically implies the second. There’s no contradiction in imagining different outcomes.
Or take a stone lifted into the air and released. In practice it falls. But if you look at the situation a priori, what in that setup forces you to imagine downward motion rather than upward motion, or sideways motion, or no motion at all? The scene alone doesn’t dictate the outcome.
And if the initial “guess” about the effect is arbitrary without experience, then so is the supposed necessary tie between cause and effect—the idea that this cause must produce that effect and cannot produce any other.
When I see one billiard ball rolling toward another, even if the idea “the second ball will move” pops into my mind, why couldn’t a hundred other outcomes follow instead?
- Both balls might stay at rest.
- The first might bounce straight back.
- The first might ricochet off in some other direction.
All of those are thinkable. None involves a contradiction. So why privilege one prediction over the rest? A priori reasoning alone can’t supply a foundation for that preference.
Put bluntly:
- Every effect is distinct from its cause.
- Therefore you can’t discover an effect by analyzing the cause in thought alone.
- Even once you imagine a particular effect, the “connection” still looks contingent, because other effects seem just as conceivable.
So without observation and experience, we can’t legitimately determine what will happen—nor can we infer causes from effects.
What Philosophy Can (and Can’t) Do About Ultimate Causes
This also explains why no careful, intellectually modest philosopher claims to reveal the ultimate inner power behind any natural operation—no one can clearly lay out the hidden force that produces even a single effect in the universe.
At best, human reason does something more limited and more realistic: it simplifies. It tries to reduce many specific phenomena to a smaller set of more general causes, drawing on:
- analogy,
- experience,
- observation.
But when we ask what causes those general causes, we hit a wall. The “last springs” of nature are sealed off from us.
We may end up with principles like:
- elasticity,
- gravity,
- cohesion (how parts stick together),
- transfer of motion by impact.
And we should consider ourselves lucky if careful inquiry lets us trace particular phenomena back to these, or close to these. Even the most refined natural philosophy doesn’t eliminate ignorance; it postpones it. And the most refined moral or metaphysical philosophy often does the same, except in a different direction—it reveals just how large the unknown territory is.
In the end, philosophy keeps confronting us with the same uncomfortable lesson: human understanding is limited, and we keep running into that fact no matter how cleverly we try to dodge it.
Why Mathematics Doesn’t Magically Fix This
Even when we bring geometry into natural philosophy, it doesn’t cure the underlying problem. What people call “mixed mathematics” assumes from the start that nature follows certain laws. Then it uses abstract reasoning either:
- to help experience discover those laws, or
- to calculate how known laws play out in specific cases (where distance, quantity, and measurement matter).
For instance, experience teaches a law of motion: the “force” or “moment” of a moving body varies with both how much matter it contains and how fast it moves. That’s why a small force can overcome a huge resistance if we use a machine to increase its speed and gain mechanical advantage.
Geometry then helps us apply the law by giving exact dimensions—how to shape parts, size levers, design mechanisms. But geometry didn’t discover the law. Experience did. No amount of purely abstract reasoning could have delivered that law from scratch.
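As a rough gloss in modern notation (the symbols are mine; the text’s “force” or “moment” of a moving body is close to what we now call momentum), the experienced law is

$$
p = m\,v,
$$

and geometry’s role shows up in something like the lever’s balance condition,

$$
F_1\,d_1 = F_2\,d_2,
$$

where F1 is a small applied force on a long arm d1 and F2 a large resistance on a short arm d2. Experience supplied the law; geometry only supplies the dimensions.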
And the general point remains: when you reason a priori—considering an object only as it appears to the mind, without observation—you can’t arrive at the idea of a distinct effect, much less prove a necessary, unbreakable connection between cause and effect. You’d have to be extraordinarily “psychic” to deduce, by reason alone, that heat produces crystals and cold produces ice if you’d never seen those processes before.
PART II — The Deeper Question: What Grounds Our Inferences From Experience?
So far we still haven’t answered the question in a fully satisfying way. Each answer opens up another question just as demanding.
If you ask, “What is the nature of our reasoning about matters of fact?” the right answer seems to be: it rests on cause and effect.
If you ask, “What is the foundation of our conclusions about cause and effect?” the answer is: experience.
But if you keep pressing—“Then what is the foundation of all conclusions drawn from experience?”—you’ve raised an even harder problem.
This is where philosophers who like to sound impressively certain often get cornered. An inquisitive person can keep pushing them back, retreat after retreat, until they’re forced into a risky dilemma. The smartest defense against that embarrassment is humility: admit the difficulty before someone else points it out. In a strange way, you can earn credibility by being honest about what you don’t know.
In this section, I’ll take on a simpler task. I won’t claim to deliver a full positive theory. I’ll offer a negative answer: even after we’ve had experience of how causes and effects behave, the conclusions we draw from that experience are not based on reasoning—or on any formal process of the understanding. That claim needs explanation and defense.
To start, notice how little nature actually shows us. We’re kept at a distance from the machinery of the world. We learn only a handful of surface features, while the real powers and principles—what makes things produce their effects—stay hidden.
Our senses can tell us the color, weight, and texture of bread. But neither sense nor reason can reveal what it is about bread that makes it able to nourish a human body. We can see and feel bodies in motion. But that astonishing “something” that would keep a moving object going indefinitely—something objects only lose by passing it on to others—is not something we can even form a clear idea of.
And yet, despite our ignorance of these hidden powers, we constantly assume that similar appearances imply similar inner workings. When we see the same kinds of visible qualities, we expect the same unseen powers, and we anticipate the same effects we’ve experienced before.
Put a new object in front of us that matches the color and texture of bread we’ve eaten, and we don’t hesitate. We repeat the experiment. We expect, with confidence, nourishment again.
That mental step—from “looks similar” to “will act similar”—is exactly the process I want to understand. Everyone agrees there’s no known connection between the visible qualities and the hidden powers. So the mind can’t be led to this expectation by anything it understands about the objects’ inner nature.
As far as past experience goes, it can only give us direct, rock-solid knowledge about what it actually covered: these particular things, at that particular time. That much is easy. The real puzzle is why we treat that experience as a passport to the future—and to other things that only seem similar.
Take something ordinary: bread. In the past, the bread I ate nourished me. In plain terms, a thing with those look-and-feel qualities had, at that moment, the hidden ability to feed and sustain my body. But why should that automatically mean that bread will nourish me tomorrow? Or that anything else that looks similar will always come with the same hidden powers? That conclusion isn’t logically forced.
And that’s the point worth slowing down for. Somewhere between these two claims, the mind takes a step:
- “I’ve always found this kind of thing followed by this kind of effect.”
- “So I expect similar things will be followed by similar effects in the future.”
Those aren’t the same statement. The second goes beyond what you’ve directly observed. It’s an inference—a mental move—and it needs an explanation.
If you want to say, “Sure, the second follows from the first,” I won’t argue about what we do in practice. We clearly do make that leap. But if you insist the leap is made by a chain of reasoning, then show me that reasoning. The connection isn’t obvious just by staring at the two propositions. It isn’t intuitively self-evident. So if it’s genuinely produced by argument, there has to be some middle link—some intermediate principle—that lets the mind cross the gap. I can’t see what that link is. And anyone who claims it exists owes us an account of it, because it would be the engine behind basically all our conclusions about matters of fact.
This “negative” challenge should grow more persuasive over time, as smart, careful philosophers keep looking and no one manages to find that missing connecting principle. Still, because this problem is relatively new, it would be unfair to demand that every reader instantly conclude, “I can’t find the argument, therefore it doesn’t exist.” Maybe it’s there and we just haven’t noticed it. That’s why we should attempt a harder project: survey the main kinds of human reasoning and show that none of them can supply the needed argument.
Let’s start with a simple map. All reasoning falls into two broad types:
- Demonstrative reasoning: reasoning about relations of ideas—the kind of thing where denying it produces a contradiction (like mathematics and logic).
- Moral (probable) reasoning: reasoning about matters of fact and real existence—the kind of thing where the opposite is always conceivable.
In our case—using the past to predict the future—there are clearly no demonstrative arguments available. Why? Because it’s never a contradiction to imagine nature changing. There’s no logical impossibility in supposing that something that looks exactly like what we’ve experienced might behave differently next time.
I can easily imagine, for example, something falling from the sky that looks like snow in every visible way but tastes salty or burns like fire. I can also plainly conceive a world where trees flourish in December and January and wither in May and June. These claims are weird, but not self-contradictory. And anything we can conceive clearly without contradiction can’t be ruled out by a priori demonstration.
So if we’re pushed by arguments to rely on experience as a guide to the future, those arguments would have to be of the second type: probable arguments about matters of fact.
But here’s the trap. If you accept this account of probable reasoning, you’ll see there can’t be an argument of that kind either. Why?
Because all reasoning about matters of fact ultimately rests on cause and effect. And our knowledge of cause and effect comes entirely from experience. And every “experimental” conclusion assumes, at bottom, that the future will resemble the past.
So if you try to prove that last assumption—“the future will resemble the past”—by any argument from experience, you end up arguing in a circle. You’d be taking for granted the very thing you’re trying to justify.
What’s really happening is this: our arguments from experience lean on the similarity we notice among natural objects. We see patterns, and that nudges us to expect similar effects from similar-looking causes. No sane person disputes that experience is the great guide of life. But a philosopher is allowed to be curious about why it has such authority—what feature of human nature makes us lean so hard on it.
We can summarize all our experimental reasoning like this:
From causes that appear similar, we expect similar effects.
Now notice something striking. If that conclusion were reached by pure reasoning, it should be just as strong after one instance as after a hundred. Reason doesn’t get “more rational” with repetition in that way. But in real life, we don’t work like that.
Few things look more alike than two eggs. Yet no one expects every egg to taste identical just because they look similar. Only after a long run of consistent experience do we gain real confidence about what a thing will do. So where is the reasoning process that produces a timid conclusion after one case and a confident conclusion after many, when the later cases don’t differ in kind from the first, except by being repeated?
I’m not asking just to be difficult. I’m genuinely asking for the mechanism. I can’t find it. I can’t even picture what it would look like. Still, I’m open to being taught if someone can actually supply it.
Maybe someone will respond: “From many uniform experiences, we infer a connection between the sensible qualities we observe and the secret powers that produce the effect.” But that’s just the same problem dressed in new language. The question comes right back: by what argument do we infer that connection? What’s the middle step—the bridging idea—that links claims so far apart?
It’s openly admitted that the color, texture, and other observable qualities of bread don’t, by themselves, reveal anything about the hidden power to nourish. If they did, we could infer nourishment the first time we saw bread—without experience—which goes against what philosophers and plain facts both tell us.
So we begin in a state of ignorance about the powers of things. What does experience do to fix that? It shows us repeated pairings: certain kinds of objects are regularly followed by certain effects. It teaches us that those objects, at those times, had those powers. Then, when we meet a new object with similar sensible qualities, we expect it to have similar powers and to produce a similar effect. We see something like bread, and we expect nourishment.
But that expectation is exactly the step that needs explaining.
When someone says:
- “In the past, I always found these sensible qualities joined with those secret powers,”
and then says:
- “So similar sensible qualities will always be joined with similar secret powers,”
he hasn’t said the same thing twice. That isn’t a tautology. It’s a leap. You may call it an inference, but you must admit it’s not intuitive, and it’s not demonstrative. So what kind of inference is it?
If you answer, “It’s experimental,” you’ve just assumed what we’re trying to justify. Every inference from experience already depends on the idea that the future will resemble the past, and that similar appearances will come with similar powers. If you seriously entertain the possibility that nature could change—that past regularity might not hold—then experience loses its ability to support any conclusion at all.
So arguments from experience can’t prove that the future resembles the past, because those arguments only work by presupposing that resemblance.
Even if the world has behaved with perfect regularity until today, that fact alone doesn’t logically guarantee it will keep behaving that way tomorrow. You can’t simply say, “I’ve learned the nature of bodies from experience,” as though their inner workings were locked forever. The hidden nature of things—and therefore their effects—could change without any change in the outward qualities we notice. That sometimes happens with particular objects. So why couldn’t it happen more broadly? What logical safeguard rules that out?
At this point someone says, “But your everyday behavior contradicts your doubts.” And it does—if you think my goal is to persuade you to stop trusting experience. That’s not my goal. As a person trying to live and act, I’m completely satisfied: I rely on experience like everyone else. My question is philosophical. I want to understand the foundation of the inference.
So far, nothing I’ve read and no inquiry I’ve made has resolved the difficulty. If I can’t solve it myself, the best I can do is put the problem in public view. Even if we don’t find an answer, we’ll at least become more aware of the limits of what we truly understand.
Now, it would be arrogant to declare, just because I can’t find an argument, that no such argument exists. And even if thinkers have searched for centuries and found nothing, it might still be rash to conclude, with certainty, that the subject is beyond human comprehension. Maybe our survey of our own knowledge is incomplete; maybe our inspection isn’t careful enough.
But in this particular case, there are facts that greatly reduce those worries.
Look at who learns from experience. Not only educated adults, but the most untrained and unreflective people—peasants, infants, even animals—get better at navigating the world by noticing what follows what. A child touches a candle flame, feels pain, and afterward avoids putting his hand near any candle flame. He expects the same effect from a cause that looks the same.
So if you claim that the child reaches this expectation by a process of reasoning, then you should be able to present that reasoning. And you can’t excuse yourself by saying the argument is too subtle to find—because you’re claiming it’s obvious enough for a baby.
If you hesitate even for a moment, or if you end up offering a complicated and “deep” argument, you’ve essentially conceded the whole point: it isn’t reasoning that leads us to suppose the future will resemble the past and to expect similar effects from similar-looking causes.
That is the claim I’ve aimed to establish here. If I’m right, it isn’t some grand new discovery. And if I’m wrong, then I must be a remarkably slow student—because I still can’t uncover an argument that, on this view, should have been familiar to me long before I could even speak.
SECTION V
SCEPTICAL SOLUTION of these DOUBTS
PART I
A love of philosophy, like a love of religion, comes with a built-in risk. Both are supposed to improve us—polish our character, cut down our vices. But handled badly, they can do the opposite: they can become an elegant excuse for whatever we were already inclined to do.
Think about what happens when we chase the ideal of the unshakable sage—someone so “above it all” that their happiness is sealed safely inside their own mind. Push that hard enough and you can end up with something like the philosophy of Epictetus and the Stoics: a sharper, more sophisticated kind of selfishness. You can talk yourself out of virtue as thoroughly as you can talk yourself out of friendship, play, and ordinary human pleasure.
Or consider another common move: you stare intensely at the “vanity” of human life, and you train your mind on how temporary money and status are. That sounds noble. But it can quietly flatter a very different trait—plain old laziness. If you already dislike the noise and grind of public life, “everything is empty anyway” becomes a tidy rational cover for doing nothing, indefinitely, with a clean conscience.
There is, however, one kind of philosophy that’s much less vulnerable to that problem—precisely because it doesn’t naturally team up with any unruly passion or cozy temperament. That is Academic, or Skeptical, philosophy.
The skeptics are forever talking about:
- doubt and suspending judgment
- the danger of rushing to conclusions
- keeping the mind’s investigations within narrow limits
- giving up “high” speculations that don’t connect to everyday life and practice
So this outlook clashes head-on with the mind’s favorite bad habits: sluggish complacency, reckless self-confidence, grandiose pretensions, and gullible superstition. It dampens every passion except one: the love of truth—and that’s the one passion that can’t really be “too strong.”
Given that, it’s strange that skeptical philosophy—harmless in almost every case—has attracted so much loud abuse. But the reason may be exactly what makes it harmless. It flatters no vice, so it recruits few fans. And by challenging so many forms of foolishness, it earns plenty of enemies, who brand it “libertine,” “profane,” and “irreligious.”
Still, we don’t need to worry that this approach—while trying to keep our inquiries tethered to ordinary life—will end up wrecking the reasoning ordinary life depends on, or push doubt so far that it freezes all action along with speculation. Nature will always reclaim the last word. No matter what abstract argument you run, our built-in ways of thinking and living reassert themselves.
For example, suppose we accept the point argued earlier: in every inference from experience, the mind takes a step that no argument of pure understanding can justify. Even if that’s true, there’s no danger that the everyday inferences nearly all knowledge relies on will collapse. If argument doesn’t compel the mind to take that step, then something else must—some other principle with equal weight and authority. And that principle will keep working as long as human nature stays the same. The real question, then, is: what is it? That question is worth pursuing.
Imagine a person with the strongest powers of reasoning and reflection, suddenly dropped into the world for the first time. Right away, they’d notice a stream of events: one thing after another, again and again. But that’s all they could honestly see. They would not, at first, be able to arrive—by reasoning alone—at the idea of cause and effect. The hidden powers that make nature run don’t show up to the senses. And it isn’t rational to say, just because one event preceded another once, that the first caused the second. The pairing might be accidental. There may be no reason at all to infer one from the appearance of the other. In short: without further experience, this person couldn’t responsibly form conclusions about matters of fact beyond what was immediately present to memory and sense.
Now suppose the same person gains experience. They live long enough to see that similar kinds of events are constantly joined—the same patterns repeating. What happens then? The moment they see one event, they immediately infer the other. Yet even with all that experience, they still haven’t uncovered any “secret power” by which one produces the other. And no chain of reasoning forces them to draw the inference. Still, they find themselves drawing it anyway. Even if they became convinced their understanding plays no part in the move, they would keep making it. So some other principle must be doing the work.
That principle is custom, or habit.
Whenever repeating an act creates a tendency to perform it again—without any fresh reasoning pushing us—we call that tendency the effect of custom. Using the word doesn’t magically explain the ultimate reason the tendency exists. It simply names a feature of human nature that everyone recognizes and that we know by what it does. Maybe we can’t dig deeper. Maybe we can’t explain the cause of this cause. We may have to accept it as the final principle we can offer for all conclusions drawn from experience—and be satisfied that we can get this far, rather than sulking because our minds won’t go further.
At minimum, the claim is clear: after we’ve repeatedly observed two kinds of things together—flame with heat, snow with cold—we come to expect one when we perceive the other, and we do so because of custom alone.
And this seems like the only hypothesis that explains a stubborn puzzle: why we can’t draw the inference from a single instance, but we confidently draw it after a thousand. Reason, by itself, doesn’t change that way. The conclusion it reaches from inspecting one circle is the same conclusion it would reach after surveying every circle in the universe. But no one who had seen only one billiard ball move after being struck could infer, from that lone case, that all bodies will move after a similar impulse. So all inferences from experience are effects of custom, not of reasoning.
Custom, then, is the great guide of human life. It’s what makes experience useful. It’s what makes us expect the future to resemble the past. Without custom’s influence, we’d know nothing about matters of fact beyond what’s directly present to memory and sense. We’d have no idea how to fit means to ends, or how to use our natural powers to produce effects. Action would stall. And most speculation would collapse with it.
One more important point: even though our conclusions from experience often carry us far beyond what we currently remember or perceive—out to distant places and remote ages—there must always be some starting fact present to memory or sense.
If someone finds the ruins of grand buildings in a desert, they infer that the land was once cultivated and inhabited by a civilized people. But if nothing like that ever appeared to them—no ruins, no traces—they could never draw that conclusion.
We learn about earlier centuries from history, but notice what that requires: we have to read the books, treat those pages as evidence, and then trace a chain of testimony back toward eyewitnesses and spectators. In every case, if we don’t begin from some fact given to memory or the senses, our reasoning turns purely hypothetical. The links might connect neatly, but the chain would hang in midair—supported by nothing—and could never deliver knowledge of any real existence.
Ask yourself what happens when I challenge you: “Why do you believe that fact you just told me?” You’ll answer with a reason—and that reason will be another fact connected to it. But you can’t keep doing that forever. Eventually you must end with something present to memory or sense, or else admit that your belief has no foundation at all.
So what’s the conclusion of the whole story? It’s simple—though it’s far from what many philosophers expect.
All belief in matters of fact, all belief in real existence, comes from two ingredients:
- some object present to the memory or senses, and
- a customary connection between that object and some other one
In other words: after we’ve repeatedly found flame paired with heat, and snow paired with cold, then when flame or snow appears again, the mind—by custom—expects heat or cold, and believes that such a quality exists and will show itself if we draw nearer.
This belief isn’t something we choose. It’s the unavoidable result of placing the mind in those circumstances. It’s an operation of the mind as natural as love when we’re benefited, or hatred when we’re harmed. These are instincts. No reasoning can manufacture them at will, and no reasoning can fully block them when the situation calls them up.
At this point, we would be well within our rights to stop. In most questions we can’t go even one step beyond this, and in all questions we end up here eventually, no matter how restless our curiosity gets.
Still, curiosity isn’t a vice here—it may even be a virtue—if it pushes us to look more closely at what this belief really is, and what exactly this “customary connection” amounts to. With that closer look, we might find explanations and analogies that satisfy, at least for those who enjoy abstract inquiry and can take pleasure in a kind of careful speculation that still leaves room for doubt. Readers with a different taste can safely skip what follows; the next investigations can be understood without it.
PART II
Nothing is freer than the human imagination. It can’t go beyond the basic stock of ideas supplied by our inner and outer senses—but within that stock it can do almost anything. It can mix and match, combine and separate, divide and recombine, producing every variety of fiction, fantasy, and vision.
It can invent a whole sequence of events with the vividness of real life: assign them a time and place, picture them as existing, and paint every detail as richly as the details of an episode from history that we believe with complete certainty.
So where, exactly, is the difference between mere fiction and genuine belief?
It can’t be simply that belief adds some special extra idea that fiction lacks. If that were all, then because the mind can attach ideas at will, we could just choose to attach this “belief idea” to anything and believe whatever we liked. But daily experience says the opposite. You can easily imagine a creature with a human head and a horse’s body, yet you can’t simply will yourself to believe that such an animal has ever really existed.
So the difference between fiction and belief must lie in a feeling—a particular sentiment that accompanies belief but not fiction, and that doesn’t depend on the will. You can’t summon it on command.
Nature has to produce it, like it produces every other sentiment, and it arises from the mind’s situation at a given moment. When an object is presented to memory or the senses, it immediately—by the force of custom—drives the imagination toward the object usually joined with it. And that resulting conception comes with a distinctive feeling, unlike the mind’s loose daydreaming. That distinctive feeling is the whole essence of belief.
This matters because there is no matter of fact we believe so firmly that we can’t still conceive the opposite. So if belief were just having a certain idea, there would be no difference between the idea we accept and the idea we reject. The difference must be that belief carries a special sentiment that marks one conception off from the other.
If I watch a billiard ball roll toward another on a smooth table, I can easily imagine it stopping dead when it hits. There’s no contradiction in that. Yet that image feels nothing like the image in which I anticipate the impact and the transfer of motion from one ball to the other.
If we tried to define this sentiment, we’d probably find it nearly impossible—like trying to define “cold” or “anger” to a being that has never felt either. Belief is the proper name of the feeling, and no one is confused by the word, because everyone is constantly aware of the experience it points to.
Still, we can try to describe it, hoping to find helpful analogies. Belief is nothing more than a conception that is more vivid, lively, forceful, firm, and steady than anything produced by imagination alone. These terms may sound unphilosophical, but they’re just different ways of pointing at the same mental act—the act that makes realities (or what we take to be realities) feel more present than fictions, makes them weigh more in our thinking, and gives them greater power over our emotions and imagination. If we agree on the thing itself, it’s not worth fighting over the label.
The imagination can do a lot. It can rearrange ideas endlessly. It can picture fictional scenes with full details of time and place, setting them before us in their “true colors,” as if they had actually happened. But imagination by itself can’t produce belief. So belief cannot consist in the special nature of the ideas, or their order. It must consist in how the ideas are conceived—how they feel to the mind.
And I admit: we can’t perfectly explain that feeling. We can use words that gesture toward it. But its proper name is belief, and that is already a term everyone understands in ordinary life. In philosophy, we can go no further than this: belief is a felt difference that distinguishes the mind’s judgments from the imagination’s inventions. It gives certain ideas extra weight and influence. It makes them seem more important. It presses them more firmly into the mind. And it turns them into the principles that govern our actions.
Right now, for instance, I hear the voice of someone I know, seeming to come from the next room. That sensory impression immediately carries my thought to the person—and to the surrounding scene. I picture them as existing at this moment, with the same qualities and relations I previously knew them to have. Those ideas grip my mind more tightly than the idea of an enchanted castle. They feel different. And they have far more power—power to please or to hurt, to bring joy or sorrow.
Let’s step back and take in the whole picture.
Suppose we grant two things:
- Belief isn’t some mysterious extra ingredient in the mind. It’s simply an idea we hold more vividly, more steadily, and with more firmness than the free-floating images we invent in imagination.
- And that extra vividness comes from habit: when something we’re sensing or remembering is regularly linked with something else, the mind learns to slide from the first to the second automatically.
If those assumptions are right, it shouldn’t be hard to spot other mental processes that look the same—and to explain them by even more general principles.
We’ve already noticed that nature has wired our thoughts so that ideas don’t show up alone. The moment one idea appears, it tends to tug a related one in behind it, almost without effort. These links between ideas fall into three basic kinds:
- Resemblance (one thing reminds you of another because it looks or feels like it)
- Contiguity (one thing calls up another because they’re near each other in space or time)
- Causation (one thing points to another because we’ve learned they go together as cause and effect)
These are the only “bonds” we need to explain the ordinary flow of thinking—why our minds move in recognizable trains instead of random sparks.
Now here’s the question that will settle the present difficulty: in all these relations, when one item is in front of the senses or memory, does the mind not only travel to the related idea, but arrive there with extra force and stability—with a livelier conception than it would otherwise have?
We already know that’s what happens with cause and effect, because that’s the familiar engine of belief. If the same thing happens with resemblance and contiguity too, then we can state a broad rule that applies across the mind: a present impression, plus a relation, can “boost” a related idea into something belief-like in vividness.
Resemblance: How a Present Copy “Brightens” an Absent Person
Start with a simple experiment. You see a portrait of a friend who isn’t there. Instantly, your idea of that friend perks up. It becomes more lively. And any emotion attached to that idea—joy, longing, grief—hits harder too.
Two things are working together:
- A relation (the picture resembles the friend), and
- A present impression (the picture is right there in front of you)
Take either one away, and the effect collapses.
- If the picture doesn’t resemble your friend (or wasn’t meant to depict them), it doesn’t even carry your mind to them.
- And if both the picture and the person are absent, then even if you move in thought from one to the other, the idea usually feels dimmer, not sharper.
That’s why, when the portrait is actually in front of you, you can enjoy thinking of your friend through it. But once it’s gone, you’d rather think of the friend directly than by way of a faint, secondhand image that’s now just as absent as the person.
Why Rituals Work (Psychologically, Not Magically)
Religious ceremony—especially in traditions rich with images and gestures—shows the same mental mechanism at work.
Devout people often defend their rituals by saying, in effect: “These outward actions help. They wake up our devotion. Without them, our attention drifts, because we’re trying to focus on things that are distant, invisible, and abstract.”
They’re making a psychological point: when you “stand in” for an unseen object with something you can see and touch—a statue, an icon, a posture, a spoken formula—you make the object of faith feel more present. And because sensible things (things the senses can grab) naturally strike the imagination more strongly than purely intellectual reflections, that vividness spills over onto the related idea.
The takeaway isn’t that the theology is correct. It’s simply this: resemblance plus a present impression commonly makes an associated idea more vivid. And because everyday life is full of images, symbols, and reminders, we have endless chances to observe this principle.
Contiguity: Nearness Makes Ideas Hit Harder
We can strengthen the case by looking at contiguity as well.
Distance, in general, weakens ideas. And as you get physically closer to something, it can influence your mind even before you can actually see it—almost like a faint version of direct perception.
Yes, thinking about one thing can lead you to think about what’s nearby. But there’s a crucial difference: the actual presence of something tends to push the mind along with far greater vividness than mere reflection ever can.
If you’re a few miles from home, anything connected to home feels more immediate than if you’re thousands of miles away. Even at great distance, you can still think of your friends and family. But in that case you’re moving from one idea to another idea, with no sensory foothold. The transition may be smooth, but smoothness alone doesn’t create that extra force. For that, you need some present impression.
Causation: Relics, Effects, and Shorter Chains
No one doubts that causation has a similar effect—really, the same shape of effect—as resemblance and contiguity.
That’s why superstitious people prize relics. A relic functions like a physical “handle” on a revered life. It makes the saint feel closer, more real, more vivid, because it pulls the mind toward the person through a chain of association.
And what kind of relic would work best? Not just anything connected by rumor or distant story, but something that counts—psychologically—as a more direct effect of the person: something they made, used, wore, or moved. A saint’s tool, clothing, or furniture matters to the devotee because it once stood under the saint’s control and was acted on by them. In that sense it’s linked by a shorter chain of consequences than the kinds of long, indirect evidence by which we usually learn about the existence of people we’ve never met. Shorter chain, stronger pull, livelier conception.
A Living “Effect” Works Even Better
Now imagine something even more powerful: you meet the son of a close friend who has been gone for years—or who died long ago. That single sight can bring back the whole person with startling intensity: old conversations, shared habits, familiar jokes, the emotional texture of past intimacy. Memory doesn’t merely return; it returns in brighter colors.
That’s the same principle again: a present object, tied to another by a strong relation, lifts the related idea into a more vivid, steadier form.
Belief Is Already in the Background
There’s an important detail in all these examples: the relation only works because belief is already assumed.
- A portrait affects you only if you already believe your friend existed.
- Nearness to home matters only if you believe home is real.
- A relic stirs devotion only if you accept that the saint was an actual person.
So the mind’s “boosting” effect depends on a prior commitment that the related object is real.
And this is exactly where the sceptical puzzle starts to loosen.
I’m claiming that this kind of belief—when it goes beyond what you’re directly sensing or remembering—has the same basic character and arises from the same causes as the transitions we’ve been describing.
Think about what happens when you toss dry wood into a fire. Your mind immediately moves to the idea that the flames will grow, not die out. That leap from cause to effect does not come from a chain of reasoning. It comes from custom and experience. And because it begins with something present to the senses—the sight of the fire, the feel of the wood—the idea of the effect arrives with more force than any random daydream could ever manage.
The idea doesn’t drift in slowly. It shows up at once. Your thought snaps to it, and it carries along the “energy” of the present impression.
A sharper example: if someone points a sword at your chest, the ideas of injury and pain strike you far more strongly than if someone simply sets a glass of wine in front of you—even if, by coincidence, the same idea of pain popped into your mind after seeing the wine. Why the difference? Not because the idea is logically different, but because in the first case you have:
- a present object that hits the senses hard, and
- a habitual transition to an idea you’ve learned to connect with it
That’s the whole secret. In all our conclusions about matters of fact and existence, the mind does nothing more mysterious than this: it starts from what’s present, and—by custom—slides to what usually goes with it, delivering to the related idea a stronger, steadier kind of conception.
And it’s genuinely satisfying to find everyday analogies that make this operation feel less occult and more intelligible. The general pattern is simple: a present impression gives weight and solidity to the idea it calls up.
A “Pre-Established Harmony” Between Nature and Thought
Seen this way, there’s a kind of pre-established harmony between the way the world unfolds and the way our ideas succeed one another.
We don’t know the hidden powers that govern nature’s sequence of events. But we do find that our minds tend to run in parallel with that sequence: thoughts follow the same tracks that events do.
The bridge between them is custom. And custom isn’t a philosophical luxury—it’s a survival tool. Without it, the presence of an object would never trigger the idea of what usually comes with it. Our knowledge would be trapped inside the tiny circle of immediate sensation and memory. We couldn’t fit means to ends. We couldn’t plan. We couldn’t reliably chase good outcomes or avoid harm.
If you like to marvel at purpose in nature, here’s plenty to marvel at.
Why This Can’t Be Left to Slow, Fallible Reason
One last support for the theory.
This mental operation—inferring like effects from like causes, and like causes from like effects—is so essential to human life that it’s hard to believe nature left it to pure reasoning.
Reason is:
- slow,
- absent (or nearly so) in early childhood,
- and, at every age, prone to mistakes
It seems far more in line with the ordinary “wisdom” of nature that such a necessary function would be secured by something closer to instinct—a built-in tendency that works reliably, shows up early, and doesn’t depend on careful argument.
Nature teaches us to use our limbs without first teaching us anatomy. We move long before we understand muscles and nerves. In the same way, nature implants in us an inner push that carries thought forward in step with the regularities among external objects, even though we remain ignorant of the hidden forces that produce those regular successions in the world.
SECTION VI
Of PROBABILITY
Even if there’s no literal “chance” out in the world, our ignorance can feel exactly like it. When you don’t know the real cause of an event, your mind still has to take a stance—so it forms a belief or expectation with the same psychological flavor we usually label “luck.”
Probability, in the everyday sense, is what you get when the possible outcomes aren’t evenly matched. If more “routes” lead to one result than to another, that result naturally starts to look more credible. Picture a die where four faces show one symbol and the other two faces show a different symbol. Before you roll, it’s more reasonable to expect the four-face symbol to come up than the two-face symbol. Now exaggerate the setup: imagine a die with a thousand faces, where 999 faces show the same mark and only 1 face is different. Your confidence shifts dramatically. You don’t just lean toward the 999-face outcome—you practically settle into it.
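If it helps to see the face-counting made concrete, here’s a minimal sketch in Python (the helper name `chance_of` is invented purely for illustration; nothing in the text above involves code). Treating every face as equally possible, an outcome’s credibility is simply the share of faces that carry it:

```python
from fractions import Fraction

def chance_of(matching_faces: int, total_faces: int) -> Fraction:
    # Every face gets the same initial standing ("chance"),
    # so an outcome's credibility is just its share of the faces.
    return Fraction(matching_faces, total_faces)

print(chance_of(4, 6))       # 2/3      -> a lean toward the majority symbol
print(chance_of(999, 1000))  # 999/1000 -> an expectation you practically settle into
```

Nothing about the mechanism changes between the two dice; only the share of imagined “routes” to each outcome does.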
That’s obvious enough, but it’s worth pausing on what your mind is actually doing.
When you look ahead to the roll, you treat each individual face as equally possible. That equal treatment is what we usually mean by “chance”: every specific outcome in the set gets the same initial standing. But then your mind notices a second fact: one “type” of outcome is represented by more faces than the other. So as you mentally run through the possibilities—face by face—you land on that majority outcome again and again. It shows up more often in your imagination simply because it has more entries in the catalog.
And here’s the key psychological move: the repeated return to the same outcome immediately creates the feeling of belief. Nature has wired us so that when many separate mental “glimpses” converge on the same conclusion, the conclusion starts to feel solid.
If we say that belief is just a more forceful, steadier way of conceiving an object than the airy pictures we knowingly invent, then this mechanism starts to make sense. When the same idea keeps resurfacing from multiple angles, it gets stamped more deeply into the imagination. It gains intensity. It tugs more strongly on our emotions and expectations. In short, repetition across many imagined possibilities produces the reliance—the sense of security—that we call belief or opinion.
The same pattern shows up when we talk about causes, not just dice.
Some causes look perfectly reliable. We’ve never seen them miss:
- Fire burns.
- Water suffocates people.
- A shove or a fall produces motion according to gravity and impact.
Other causes are messier. Experience tells us they don’t always deliver the same result. Rhubarb doesn’t always purge. Opium doesn’t make everyone sleepy.
Of course, when a cause fails to produce its usual effect, scientists typically don’t say “nature is inconsistent.” They assume there were hidden factors—some difference in internal structure or surrounding conditions—that blocked the usual operation. But notice: for the purposes of everyday reasoning, our minds proceed as if we were simply dealing with an irregular cause. We still have to decide what to expect next time.
Because habit trains us to project the past into the future, we treat a perfectly uniform track record as grounds for near-total confidence. When the past has been steady and exceptionless, we anticipate the same effect with the strongest assurance and barely entertain alternatives.
But when a cause that looks the same has produced different effects in the past, the mind can’t help but carry all those past outcomes forward when it imagines the future. You may favor the most common result—and usually you will—but you can’t honestly erase the others. Each outcome keeps its own “weight,” proportional to how often it has shown up.
That’s why, in most of Europe, it’s more likely that there’ll be frost at some point in January than that the entire month will stay mild—though the exact odds swing with climate, and in northern regions the expectation of frost approaches certainty.
So when we “transfer” the past to the future to predict what a cause will do, we don’t transfer a single event. We transfer a whole distribution. We carry forward the full set of outcomes in the same ratios we’ve observed: one outcome might come to mind as if it had happened a hundred times, another as if it had happened ten times, another as if it had happened once. And because many more of these mental recollections pile up behind the most frequent outcome, they reinforce it, intensify its presence in imagination, and generate what we call belief—giving it an advantage over rarer alternatives that return to the mind less often.
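To make the “whole distribution” idea concrete, here’s a small sketch (Python again; the outcome labels and the 100/10/1 counts merely echo the illustrative ratios above, they aren’t data from the text). Past frequencies get carried forward as proportional weights on future expectation:

```python
from collections import Counter

# An invented past record for one "irregular" cause,
# echoing the 100 / 10 / 1 ratios mentioned above.
past_outcomes = Counter({"usual effect": 100, "less common effect": 10, "rare effect": 1})

total = sum(past_outcomes.values())
expectation = {outcome: count / total for outcome, count in past_outcomes.items()}

for outcome, weight in sorted(expectation.items(), key=lambda kv: -kv[1]):
    print(f"{outcome}: {weight:.2f}")
# usual effect: 0.90
# less common effect: 0.09
# rare effect: 0.01
```

The most frequent outcome dominates the mind’s anticipation, but the rarer outcomes keep their proportional share rather than being erased.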
Try to explain this mental operation using any of the standard philosophical systems, and you’ll feel how hard it is to make the story work cleanly. For my part, it’s enough if these remarks spark curiosity and make it clear how incomplete our usual theories still are when they try to explain topics this subtle—and this grand.
SECTION VII
Of the IDEA of NECESSARY Connexion
Part I — Why “power” is so hard to pin down
One reason math feels so much cleaner than moral philosophy is that its ideas are sensory in a way that moral ideas usually aren’t. A circle doesn’t quietly blur into an oval. A hyperbola doesn’t get mistaken for an ellipse. Even the difference between an isosceles and a scalene triangle is sharper than the difference people argue over when they say “virtue” versus “vice,” or “right” versus “wrong.”
In geometry, if you define a term, your mind can reliably swap the definition in whenever the term shows up. And even if you don’t define it, you can often just look at the thing—draw it, picture it, point to it—and keep it steady in your attention.
But try that with the mind’s finer materials:
- subtle feelings
- acts of understanding
- shifting passions
- the little internal “moves” we make when we judge, decide, doubt, or expect
These are real differences. They aren’t imaginary. Still, when you try to inspect them by reflection, they slip away. And you can’t simply “call back” the original inner experience on demand the way you can redraw a triangle. That’s how ambiguity creeps in: we start treating merely similar mental states as if they were identical, and then our conclusions drift farther and farther from what we actually started with.
A fair trade: clarity versus complexity
Even so, it’s not as if math simply wins and moral philosophy simply loses. In a better light, their strengths and weaknesses almost balance.
Yes, geometric ideas are easier to keep clear and fixed. But to reach the deeper truths of geometry, you usually have to:
- follow a much longer chain of reasoning,
- keep more intermediate steps in view, and
- compare ideas that are far apart from each other.
Meanwhile, moral and metaphysical thinking has the opposite pattern. Our ideas here can easily get fuzzy unless we’re careful—but the arguments typically run in shorter stretches. The steps from premise to conclusion are fewer.
In fact, even a simple proposition in Euclid tends to have more moving parts than a sound piece of moral reasoning (so long as that moral reasoning doesn’t wander off into fantasy or clever nonsense). If we can trace the principles of the human mind just a few steps, that’s already real progress—because nature quickly blocks us from getting all the way to the ultimate “why” of causes, and forces us to admit how little we truly know.
So the main obstacles look like this:
- In moral/metaphysical inquiry: the big problem is unclear ideas and slippery words.
- In mathematics: the big problem is the length of the argument and the mental reach required.
- In natural philosophy (physics): progress is often slowed by the lack of the right experiments and observations, which we sometimes stumble upon by luck and can’t always produce on demand, even with careful effort.
And since moral philosophy has, historically, improved less than geometry or physics, it’s reasonable to think that its obstacles require extra care and ability to overcome.
The murkiest ideas: power, force, energy, necessary connection
In metaphysics, nothing is more shadowy—and nothing is more constantly used—than ideas like power, force, energy, and necessary connection. We lean on these words in almost every discussion, yet we rarely stop to ask what, exactly, they mean.
So that’s the goal here: to pin down, as precisely as we can, what these terms amount to, and to clear away at least some of the fog that hangs over this part of philosophy.
One guiding rule: ideas come from impressions
Start with a principle that’s hard to dispute: every idea is ultimately a copy of an impression. Put more plainly: you can’t genuinely think of anything you’ve never first felt—either through the outward senses (seeing, hearing, touching) or through inward awareness (emotions, desires, reflections).
Definitions can help with complex ideas, because a definition is really just a list of the simpler ideas that compose the complex one. But what happens when you keep defining until you reach the simplest ideas—and those simplest ideas still feel unclear? What then?
There’s only one reliable move: go back to the original impression the idea is copied from. Impressions are vivid. They don’t have the same room for ambiguity. And once you have the impression in view, it can illuminate the idea that’s been floating around in the dark.
Think of this as a kind of new “optics” for the moral sciences: a way to magnify our tiniest and most elusive ideas until we can inspect them as clearly as we inspect the most obvious sensations.
So, if we want to understand the idea of power or necessary connection, we should locate its impression—the raw felt experience it’s supposed to come from. And to do that confidently, we should search every place it could plausibly originate.
Looking outward: do we ever see necessary connection?
First, consider external objects and what we call causes. In any single case, we never actually observe a power that binds an effect to a cause. We never catch, with the senses, a “must” that makes the effect inevitable.
What do we observe? Only this: one event follows another.
One billiard ball strikes another; the second moves. That’s all the senses report. And inside the mind, there’s no special additional feeling that announces, “A necessary connection has been detected.” So in any one particular instance of cause and effect, there’s nothing present that could generate the idea of power or necessity.
This matters for another reason. If the power of a cause were visible to the mind—if we could directly grasp its “energy”—then we could predict effects by sheer thought, without needing experience. We could look at a new object and infer what it must do. But that’s not how human understanding works.
Matter never advertises its powers through its sensible qualities. Take the familiar features of bodies:
- solidity
- extension
- motion
Each of these seems complete on its own. None of them, just by being inspected, points to a specific further event that must follow. The universe looks like a constant flow of scenes—one thing after another—while the force that drives the whole machine stays hidden.
We know, for example, that heat regularly accompanies flame. But what is the connection between them? We can’t even form a good guess from the mere look of flame. So the idea of power can’t come from observing bodies in single instances, because bodies never reveal any power that could serve as the original impression behind that idea.
Looking inward: does the will reveal power?
If outward observation doesn’t supply the impression, maybe inward reflection does. A natural thought is: “Surely I experience power in myself. I decide to move my hand, and my hand moves. I decide to imagine something, and the image appears. Isn’t that power, known directly by consciousness?”
That’s the claim. Now let’s test it—starting with the will’s influence over the body.
Volition and bodily motion: we know the sequence, not the power
It’s true that our bodies often move when we choose. We’re constantly aware of that. But notice what sort of knowledge this is: it’s the same kind we have of any natural event. We learn it from experience. We don’t “see” an inner force that guarantees the outcome.
We know the effect follows the will. But the means—the hidden process by which a mere intention produces motion—escapes us completely, and will always escape even our most determined investigation.
Consider why.
1) The mind–body link is a mystery.
What could be more baffling than the union of mind and body—how something we take to be nonmaterial thought can push around matter? If you could move mountains or steer planets by wishing, that wouldn’t be more astonishing in principle. If we truly perceived a power in the will, we would also understand the specific connection between will and motion, and the nature of the mind and body that makes the connection possible. But we don’t.
2) The will’s “authority” is uneven, and we can’t explain the boundaries.
Why can you move your tongue and fingers but not your heart or liver? If we had a direct inner sense of power, this wouldn’t puzzle us. We’d be able to see, without relying on experience, why the will reaches exactly this far and no farther. But we can’t.
3) Paralysis exposes the illusion.
Someone suddenly paralyzed in a leg or arm often tries, at first, to move it as usual. In that moment, he feels as ready to command the limb as a healthy person does. If consciousness were giving us the felt impression of genuine power, it would tell the difference here. And yet it doesn’t. The honest conclusion is that consciousness isn’t revealing a power at all. We learn the will’s influence from experience—experience that teaches only that one event regularly follows another, not what “binds” them together.
4) Anatomy shows a long hidden chain.
We also learn that the immediate target of voluntary motion isn’t the limb itself, but an intricate pathway—muscles, nerves, “animal spirits,” and perhaps even smaller and more unknown intermediaries—through which motion is passed along before the limb moves.
That’s strong evidence that whatever produces the motion is not something we grasp fully by inner awareness. The mind wills one event, and immediately something else happens inside us—something unknown, and not at all the same as what we intended. That produces something else, still unknown, and so on, until the visible movement appears.
But if the original power were actually felt, it would be known. And if it were known, its effect would also be known—because power makes sense only in relation to the effect it produces. Conversely, if we don’t know the effect’s true mechanism, we don’t know the power either, and we certainly don’t feel it.
So, taken together, these points support a clear conclusion: our idea of power is not copied from any inner impression of power we have when we move our bodies. We experience the regular pairing—willing followed by motion—but the “energy” that connects them is as hidden and unintelligible as the power behind any other natural event.
Volition and ideas: is mental self-control any clearer?
Maybe the will doesn’t reveal power through bodily motion, but surely it does when we control our thoughts—when we summon an idea, hold it in focus, examine it, and then dismiss it.
But the same style of reasoning applies here too.
1) To know power, you’d have to know the link between cause and effect.
If “power” and “ability to produce an effect” mean the same thing, then knowing a power would mean knowing what in the cause makes the effect happen—and how. That would require real knowledge of the nature of the soul, the nature of an idea, and how one produces the other.
Yet bringing a new idea into awareness is, in a striking sense, like creating something from nothing. That looks like an immense power—something that would seem beyond any finite being. At the very least, it’s not a power we feel, know, or can even clearly conceive. We experience the result (an idea appearing after an act of will). But the mechanism—the supposed energy—is entirely beyond us.
2) The mind’s control has limits we learn only by experience.
We can’t command everything in ourselves equally. Our control over passions and feelings is weaker than our control over ideas. And even our control over ideas has tight boundaries. Can anyone explain the ultimate reason for those boundaries—why the power fails here but not there—based on pure understanding of causes and effects? We can’t. We only learn the limits by observing what happens.
3) The “same” will works differently at different times.
A healthy person has more self-command than someone sick. Most of us can steer our thoughts better in the morning than late at night, better when fasting than after a heavy meal. Again: do we know why from any inner grasp of power? No. We just notice the pattern.
That strongly suggests there is some hidden structure or mechanism—whether in mind, body, or both—on which these effects depend. And since that structure is unknown to us, the will’s supposed energy remains unknown too.
So, look directly at the act of willing. Turn it over in your mind. Do you find, inside it, anything like the world-making force that could conjure a new idea into existence with a mental “let there be”? Far from feeling that kind of energy, we need experience—steady, repeated experience—to convince us that such extraordinary effects actually follow from a simple act of volition.
A final setup: ordinary explanations and the illusion of “seeing” power
Most people never struggle to explain the familiar operations of nature: heavy objects fall, plants grow, animals reproduce, food nourishes. That’s because they take themselves, in all these cases, to perceive the very force or energy of the cause, the thing that ties it to its effect and guarantees that its operation never fails.
People pick up a mental groove through long habit: when they see a cause show up, they instantly and confidently expect its usual companion to follow. It becomes hard for them to even imagine that anything else could happen.
It’s only when something strange breaks the pattern—an earthquake, a plague, some startling “prodigy”—that people feel stuck. They can’t find the right cause, or make sense of how the effect could have been produced. In moments like that, it’s common to reach for an invisible intelligent agent as the immediate explanation: some mind behind the scenes, stepping in because the ordinary powers of nature don’t seem up to the job.
But philosophers who push the question a little further notice something uncomfortable: the “energy” of a cause is just as mysterious in everyday cases as it is in the spectacular ones. In the end, experience teaches us only one thing—constant conjunction. We learn that certain kinds of events reliably show up together. What we never actually grasp is anything like a necessary connection between them.
From there, a number of philosophers conclude that reason forces them to use—everywhere—the same kind of explanation the public uses only for miracles. They say:
- Mind and intelligence are not just the ultimate source of the world, but the immediate cause of every event in it.
- What we call “causes” in nature are really just occasions.
- The real driver behind every effect is not any power in matter, but a volition of the Supreme Being, who wills that certain events will always be joined together.
So, on this view, when one billiard ball strikes another, it’s not that the first ball transfers motion by its own force (even if that force originally comes from the Author of Nature). Instead, God himself, by a particular act of will, moves the second ball—prompted by the first ball’s impact—according to the general laws he has laid down for governing the universe.
As inquiry goes on, these philosophers add a further point: we don’t just fail to understand how body acts on body. We also fail to understand how mind acts on body, or body on mind. Neither sensation nor consciousness gives us the ultimate principle in either case.
That same ignorance pushes them to the same conclusion again. They claim God is the immediate cause of the union between soul and body. It isn’t that external objects shake our sense organs and thereby produce sensations in the mind; rather, God wills that this motion in an organ will be followed by that sensation. Likewise, they say your will doesn’t contain any real energy that moves your limbs. Your will, by itself, is powerless; God “seconds” it, and commands the motion we mistakenly credit to our own efficacy.
Some even extend the idea inward, to the mind’s private life. Your “mental vision”—your having an idea before you—is treated as a kind of disclosure from the Creator. When you deliberately turn your thoughts to an object and summon its image in imagination, they say it isn’t your will that produces the idea. It’s the universal Creator who presents it to you.
Thus, on this theory, everything is saturated with God. And not satisfied with the claim that nothing exists except by God’s will, and nothing has power except by his permission, they strip nature—and every created thing—of all power, to make dependence on the Deity as immediate as possible.
But they miss what this costs them. The theory actually shrinks, rather than enlarges, the grandeur of the divine attributes it aims to praise. It shows more power, not less, for God to delegate real powers to created things than to personally produce every event by direct volition. And it shows more wisdom to design the world at the start with such foresight that it can, through its own operations, serve the purposes of providence—rather than requiring the Creator at every moment to fine-tune the parts and breathe motion into every wheel of the machine.
If you want a more strictly philosophical rebuttal, two reflections may be enough.
First: this doctrine of the universal energy and operation of the Supreme Being is simply too bold to genuinely convince anyone who appreciates how weak and limited human reason is. Even if the argument chain leading to it were perfectly logical, you should immediately suspect that it has carried you beyond the range of your faculties when it lands on conclusions so extraordinary and so far from ordinary life and experience. You’ve wandered into fairyland long before you reach the final steps. And once you’re there, you have no good reason to trust the everyday methods of inference—analogy, probability, “what usually happens”—that guide us everywhere else. Our measuring line is too short for these depths. Even if we reassure ourselves that we’re guided at each step by something like experience, we should remember: this supposed “experience” has no authority when we apply it to topics that lie entirely outside the sphere of experience.
Second: the arguments for the doctrine don’t seem to have any real force. It’s true that we’re ignorant of how bodies operate on one another; their force is incomprehensible. But aren’t we equally ignorant of how a mind—yes, even the supreme mind—operates on itself or on body? Where, exactly, do we get any idea of that power? We don’t feel it in ourselves as a distinct “power” we can inspect in consciousness. And our only idea of the Supreme Being comes from reflecting on our own faculties. So if ignorance were a good reason to deny energy in matter, it would push us just as strongly toward denying energy in God. We understand the operations of one as little as the operations of the other. Is it really easier to conceive motion arising from volition than from impulse? In both cases, all we truly know is that we are profoundly ignorant.
PART II
To bring this already long argument to a close: we’ve tried every likely source for an idea of power or necessary connection, and we’ve found nothing.
In any single case of bodies interacting, no matter how closely we watch, we discover only this: one event follows another. We never perceive any force by which the cause operates, or any connection tying cause to effect. The same problem appears when we consider mind and body: we observe that bodily motion follows mental willing, but we do not observe—or even form a clear conception of—the tie that binds volition to motion, or the “energy” by which mind produces the effect. And the will’s authority over its own ideas is no clearer.
So, taken as a whole, there seems to be no instance anywhere in nature where a connection is conceivable to us. Events look loose and separate. One happens after another, but we never observe the link. They appear conjoined, but never connected. And since we can’t form an idea of anything that has never shown itself either to the senses or to inward feeling, the conclusion seems unavoidable: we have no idea of power or connection at all, and those words have no meaning—whether in philosophy or in ordinary speech.
Still, there’s one way left to avoid that harsh conclusion, and it points to a source we haven’t yet examined.
When any object or event first appears, no amount of cleverness lets us discover—without experience—what will result from it. We can’t predict beyond what is immediately present to memory and the senses. Even after a single experiment, where we’ve seen one event follow another, we’re not entitled to form a general rule about what happens in similar cases. It’s rightly considered reckless to infer the whole course of nature from one experiment, however careful it was.
But when one type of event has, in every observed case, been joined with another, we stop hesitating. On the appearance of the first, we confidently predict the second—and we rely on the only kind of reasoning that ever gives us assurance about matters of fact. At that point we name the first Cause and the second Effect. We suppose there’s some connection between them, some power in the one that infallibly produces the other, with certainty and necessity.
So the idea of necessary connection arises from many similar instances of constant conjunction. No single instance—no matter how we turn it around in our minds—can ever give us that idea by itself.
What changes, then, when we go from one instance to many? Not the objects, and not what we perceive in them. The only difference is this: after repeated exposure, the mind, by habit, is carried from one event to the expectation of its usual companion, and it comes to believe the companion will occur. That felt connection in the mind—this customary transition of imagination from one object to its attendant—is the impression from which we form our idea of power or necessary connection. That’s it. There is nothing more to it.
Look at the matter from every angle and you won’t find any other origin. This is the sole difference between a single case (which can never supply the idea of connection) and many similar cases (which do).
The first time someone saw motion communicated by impact—two billiard balls colliding—he could not truthfully say the events were connected. He could only say they were conjoined: one followed the other. After watching many such cases, he begins to say they are connected. What produced this new idea? Nothing in the objects themselves, but a change in him: he now feels the connection in his imagination, and can readily predict one event from the appearance of the other.
So when we say one object is connected with another, we mean only that they have acquired a connection in our thought—a link that supports an inference, so that the one becomes evidence for the other’s existence. That conclusion sounds odd, but it rests on solid evidence. And it shouldn’t bother a skeptic. If anything, skeptics should welcome conclusions that reveal how narrow and fragile human reason is.
What could better display the mind’s surprising weakness than this? If there is any relation we most need to understand well, it’s cause and effect. All our reasoning about matters of fact depends on it. It’s how we reach beyond what we currently remember or perceive. The immediate use of every science is to teach us how to control and regulate future events by learning their causes. Our minds work on this relation constantly.
And yet our ideas of it are so imperfect that we can’t give a proper definition of “cause” except by bringing in something that isn’t, strictly speaking, the hidden link we were hoping to describe.
From experience we learn that similar objects are always conjoined with similar effects. So we can define a cause as:
- Definition 1 (regularity): an object followed by another, where all objects similar to the first are followed by objects similar to the second.
- Or, phrased differently: where, if the first object hadn’t occurred, the second would not have occurred.
We also find, again from experience, that when a cause appears, the mind is carried by custom to the idea of the effect. So we can offer another definition:
- Definition 2 (mental transition): an object followed by another, whose appearance always leads the mind to think of the other.
Both definitions rely on circumstances that accompany causation rather than on some deeper “bond” inside it. But we can’t fix that, because we do not have any idea of the bond itself—nor even a clear notion of what it is we’re asking for when we try to conceive it.
Take an example: “the vibration of this string causes this sound.” What do we mean?
We mean either:
- the vibration is followed by the sound, and all similar vibrations have been followed by similar sounds; or
- the vibration is followed by the sound, and when we perceive the vibration, the mind leaps ahead and forms the idea of the sound.
That’s all. We can view cause and effect in either of these two ways, but beyond them we have no idea of the relation.
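Of the two definitions, only the first stays outside the observer’s head, which makes it easy to render as a toy predicate. The sketch below (Python; the function name and the event records are invented for illustration) shows how little Definition 1 actually demands: uniform pairing in the record, and nothing about any hidden bond:

```python
def constantly_conjoined(observations: list[tuple[str, str]],
                         cause: str, effect: str) -> bool:
    # Definition 1, roughly: every observed event of the first kind
    # was followed by an event of the second kind.
    follow_ups = [b for a, b in observations if a == cause]
    return bool(follow_ups) and all(b == effect for b in follow_ups)

history = [("vibration", "sound")] * 1000  # a thousand uniform cases
print(constantly_conjoined(history, "vibration", "sound"))  # True
```

Definition 2 resists the same treatment, and that’s the point: it lives in the observer, naming the habitual leap the mind makes from the first idea to the second.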
To sum up the argument:
- Every idea is copied from some earlier impression or felt experience. If you can’t find the impression, you can be sure there is no idea.
- In any single instance—whether bodies acting on bodies, or minds acting on bodies, or wills directing ideas—nothing gives an impression that could generate an idea of power or necessary connection.
- But when many uniform instances appear, and the same kind of object is always followed by the same kind of event, we begin to speak of cause and connection.
- What’s new is not a discovered force in the objects, but a new feeling in us: a customary transition in thought from one to the other. That feeling is the origin of the idea we were looking for.
Because the idea arises from many instances and not from one, it must come from whatever makes “many instances” different from a single instance. And the only difference is this habit of inference.
The first collision of two billiard balls you ever saw is, in itself, no different from any collision you might see today—except that at first you couldn’t infer the second event from the first, while now, after long uniform experience, you can.

I’m not sure every reader will immediately grasp this reasoning. And I worry that piling on more words or more angles would only make it feel more tangled. In abstract arguments there’s usually one “right” viewpoint that makes the whole thing click, and hitting that does more than any flood of eloquence. That’s the viewpoint we should aim for—and save the rhetorical flourishes for subjects that actually benefit from them.
SECTION VIII
Of LIBERTY and NECESSITY
Part I — Why this debate never ends (until you define your terms)
When people argue passionately about the same question for centuries, you’d think they would at least agree on what the key words mean. After all, defining terms is supposed to be the easy part. Once everyone is using the same definitions, we can stop fighting about language and start fighting about the actual issue.
But the longer you look at real intellectual history, the more you see the opposite pattern: if a controversy drags on for ages and still won’t settle, that’s usually a sign that the words themselves are slippery. The two sides aren’t actually grabbing the same idea when they say the same term. And since human minds are broadly similar—otherwise debate would be pointless—people wouldn’t stay divided for so long on the same question if they were genuinely attaching the same meanings to the key words. Once you can exchange arguments, test each other’s claims, and look for contradictions, a shared meaning normally forces convergence.
There is one exception: debates about things that are simply beyond human reach—grand questions about the origin of the universe, or the hidden “economy” of spirits and invisible minds. There, people can argue forever because the subject gives them nothing solid to check their claims against.
But when the dispute is about something tied to everyday experience, it’s hard to see how it could remain unresolved for so long—unless the combatants are separated by ambiguity, talking past each other instead of actually engaging.
That’s exactly what has happened with the famous fight over liberty and necessity. In fact, the confusion is so deep that I suspect something almost embarrassing: everyone has basically always agreed about the substance, both the learned and the unlearned, and the endless controversy survives mainly because we haven’t pinned down a few clear definitions.
I don’t deny that philosophers have made this topic exhausting. They’ve argued it from every angle and wandered into a maze of dark, technical wordplay. No wonder a sensible reader might want to protect their peace and refuse the whole discussion—what’s the point of a debate that promises neither insight nor enjoyment?
Still, the way I’m going to frame the issue may be worth your attention. It’s not meant to be intricate. It has a bit of novelty, it promises an actual decision, and it won’t demand that you slog through obscure reasoning.
Here’s the claim I’m aiming to defend:
- In any reasonable sense of the terms, people have always agreed about both necessity and liberty.
- The long-running “controversy” has mostly been a dispute over words.
So let’s start with necessity.
What “necessity” means in the physical world
Everyone grants that matter behaves in a lawlike way. In physical nature, causes don’t merely precede effects; they fix them. Given a cause in particular circumstances, the effect is determined so precisely that no different effect could have happened instead. The laws of nature set the direction and amount of motion with such strictness that it’s as fantastical to expect a living animal to pop into existence from two colliding bodies as it is to expect the collision to produce a totally different motion from the one it in fact produces.
So if we want a clear idea of necessity, we should ask: where does that idea come from when we apply it to bodies?
Where our idea of necessity really comes from
Imagine a world where nature never repeats itself—where no two events resemble each other, and every object is completely novel, with nothing like it ever seen before. In that world:
- You could still say this happened after that.
- But you would never say this produced that.
The very notion of cause and effect would never form. Without regular patterns, there’s no basis for inference. Reasoning about nature would collapse; your mind would be limited to what your senses and memory report, with no way to project beyond them.
So our idea of necessity—and of causation—comes entirely from one thing: the uniformity we observe in nature. Similar kinds of events show up together again and again, and the mind, trained by habit, comes to expect one when it sees the other.
That is all the “necessity” we ever actually perceive in matter:
- Constant conjunction: similar objects and events regularly go together.
- Habitual inference: from seeing one, the mind naturally expects the other.
Beyond those two features—regular pairing and the expectation it creates—we have no additional notion of a mysterious binding force or hidden “connection.”
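To make that two-part account concrete, here's a minimal sketch in Python (my own illustration, not anything in the text): a "mind" that stores nothing except how often one kind of event has followed another, and whose "inference" is just the expectation those counts create. Every name in it is invented for the example.

```python
# A toy model of Humean "necessity": counted conjunctions plus the
# habit of inference they create, and nothing else.
from collections import defaultdict

class HabitualMind:
    def __init__(self):
        # counts[cause][effect] = how often 'effect' has followed 'cause'
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, cause, effect):
        """Record one more instance of 'effect' following 'cause'."""
        self.counts[cause][effect] += 1

    def expect(self, cause):
        """On seeing 'cause', return the usual effect and the felt strength
        of the expectation: just relative frequency, with no hidden
        "connection" anywhere in the model."""
        seen = self.counts[cause]
        if not seen:
            return None, 0.0  # a wholly novel object licenses no inference
        effect, n = max(seen.items(), key=lambda kv: kv[1])
        return effect, n / sum(seen.values())

mind = HabitualMind()
for _ in range(100):
    mind.observe("vibration", "sound")   # constant conjunction
print(mind.expect("vibration"))          # ('sound', 1.0): habitual inference
print(mind.expect("comet"))              # (None, 0.0): no experience, no idea
```

Note what the model deliberately leaves out: no variable anywhere represents a force or binding power, yet after a hundred uniform pairings it "expects" the sound with full assurance. That's the whole claim in miniature.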
If that’s necessity, then look at human action
Now notice what follows. If everyone has always admitted that these same two features show up in human life—if human actions display regular patterns, and we routinely infer actions from motives (and motives from actions)—then everyone has always accepted necessity in the only clear sense the word has. And if the debate has persisted anyway, it’s because the sides haven’t been understanding each other.
So let’s look at the first feature: the steady linking of similar events.
The regularity of human behavior across time and culture
It’s widely acknowledged that there’s a strong uniformity in human behavior across nations and eras. Human nature stays recognizably the same in its basic principles. The same kinds of motives tend to produce the same kinds of actions; the same kinds of causes tend to bring about the same kinds of effects.
Consider the familiar drivers of human life:
- ambition
- greed
- self-interest
- vanity
- friendship
- generosity
- concern for the public good
Mixed in different proportions and distributed through society in different ways, these passions have always powered human projects and misdeeds—everything people have ever been observed to do, from the beginning of history to the present.
Want to understand what the Greeks and Romans were like? Study the temper and conduct of the French and English. You won’t go far wrong by transferring most of what you’ve learned from the modern cases to the ancient ones. People are so similar across time and place that, in this respect, history’s main value isn’t to shock us with novelty. It’s to reveal the stable principles of human nature by showing them at work under endlessly varied circumstances.
In that sense, histories of wars, court intrigues, factional fights, and revolutions are like collections of experiments. They supply the raw data from which the politician and moral philosopher build their science—just as the physician or natural philosopher learns about plants, minerals, and other external objects through observation and experiment. The world Aristotle and Hippocrates examined is no more “different” from ours, in its elements, than the humans described by Polybius and Tacitus are different from the humans who govern and struggle today.
Why “totally different humans” would sound like dragons
This also explains why certain reports immediately strike us as fiction. If a traveler came back claiming to have found a people with no greed, no ambition, no revenge—people who knew no pleasures except friendship, generosity, and public spirit—we would detect the lie almost at once. We’d dismiss it with the same confidence we’d use for tales of centaurs, dragons, miracles, and other marvels. Why? Because it clashes with the regular patterns we take to be part of human nature.
And when we want to expose a forged history, one of the strongest methods is to show that the alleged actions contradict what human motives could plausibly produce in those circumstances. Even famous historians become suspect when they attribute “superhuman” qualities to their characters—whether it’s impossible courage or impossible strength—because both violate the same expectation: humans don’t behave like that.
In other words, we instinctively recognize uniformity in human motives and actions just as readily as we recognize uniformity in the physical world.
How experience teaches us to read people
This is also why long experience in life and business actually helps. With time, you learn the “rules” of human nature well enough to guide not only your theories but your choices.
Experience lets us move in two directions:
- From someone’s actions, words, and even gestures, we infer their motives and inclinations.
- From our knowledge of their motives and inclinations, we interpret their actions.
General patterns accumulated over time give us a thread to follow through human complexity. Pretexts and appearances stop fooling us so easily. Public declarations start to look like what they usually are: a flattering gloss painted over a cause. And while we still give virtue and honor their due weight, we never seriously expect perfect selflessness from crowds and parties, seldom from their leaders, and scarcely even from individuals of any station.
If there were no regularity at all in human action—if every “experiment” of this kind were chaotic—then you could never form general observations about people, and even the most careful reflection on your experiences would be useless.
Think of a simple comparison: why is an older farmer usually better at farming than a beginner? Not because the old farmer is magical, but because nature itself is regular. The sun, rain, and soil tend to operate in stable ways in producing crops, and years of experience teach the farmer the rules that govern that process.
Uniform doesn’t mean identical
Still, we shouldn’t expect human regularity to be absolute, as if everyone in the same situation must always act in precisely the same way. Nothing in nature has that kind of perfect uniformity. Human beings differ in temperament, prejudice, education, and opinion, and those differences matter.
In fact, the variety of human conduct is part of what lets us build more general rules. We develop a richer set of practical maxims precisely because we can observe stable tendencies interacting with individual differences.
Look at what we learn from patterned differences:
- If manners vary across times and countries, we see how powerful custom and education are in shaping the mind from childhood into a settled character.
- If the typical behavior of one sex differs from the other, we infer stable differences of character that nature seems to preserve with regularity.
- If the same person changes greatly from childhood to old age, we form general observations about how sentiments and desires evolve across life stages.
- Even when a person is distinctive, their distinctiveness has a kind of consistency—otherwise we could never learn their disposition from observation or adjust our behavior toward them.
“Irregular” actions and the illusion of randomness
Yes, you can sometimes find actions that seem disconnected from any recognizable motive—apparent exceptions to every rule we use to explain conduct. But to judge these properly, it helps to think about “irregular” events in nature.
Not every cause is linked to its usual effect with the same obvious steadiness. Someone working with dead matter can miss their target, just as a politician dealing with thinking people can be disappointed. Things don’t always come out as expected.
Ordinary people, going by first impressions, tend to explain this by saying: “The cause is uncertain—it sometimes just fails, even when nothing blocks it.” Philosophers look deeper. They notice that nature is full of hidden mechanisms—tiny, remote, or complex factors we don’t perceive—and so it’s at least possible that an unexpected outcome doesn’t come from randomness in the cause, but from the quiet operation of other causes pushing in the opposite direction.
With more careful observation, that possibility turns into a rule: when effects conflict, causes conflict too. The contrast in outcomes reveals a contrast in the forces producing them.
A farmer might explain a stopped clock by shrugging: "It doesn't usually keep good time." A clockmaker sees more. The spring and pendulum exert the same force on the gears as ever, but a speck of dust or a tiny misalignment can oppose that force and halt the whole system. From many cases like this, philosophers adopt a general maxim: the link between causes and effects is always equally necessary; when it looks uncertain, that's because hidden opposing causes are interfering.
The body as a complicated machine (and why that matters)
Medicine shows the same point. When the usual signs of health or illness mislead us—when a drug doesn’t work as expected, or some odd outcome follows a familiar cause—experienced physicians don’t conclude that the body has no regular laws. They aren’t tempted to abandon necessity and uniformity.
They know the human body is a highly complex machine, packed with secret powers far beyond our understanding. So it often appears unpredictable to us. But those surface irregularities don’t prove that nature’s laws aren’t being followed internally with great regularity.
Apply the same logic to the mind
A consistent thinker must apply this same reasoning to human choices. The most surprising decisions often make perfect sense once you know the person’s full situation and character.
Someone usually kind snaps at you. Why? Maybe they have a toothache. Maybe they haven’t eaten. A normally sluggish person suddenly looks energetic. Why? Maybe they just got a piece of very good news.
And even when we can’t explain a particular action—not even the person can always account for it—we still recognize a general truth: human character is, to some extent, unstable. For some people, fickleness and caprice are practically a standing feature of their personality. Yet even these apparent irregularities don’t force us to deny that internal motives can operate in steady ways, any more than shifting winds and clouds force us to deny that weather is governed by stable principles we simply don’t yet know how to read.
The punchline so far
So the connection between motives and voluntary actions is not only as regular as the connection between cause and effect anywhere else in nature; it’s also something people have always recognized in everyday life and in philosophy. And since all our predictions about the future rest on past experience—on the assumption that things that have always gone together will continue to go together—it may seem unnecessary to prove that this experienced regularity in human action is one of the sources from which we draw inferences about people.
To view this argument from a few more angles, let's pause briefly on a point we've mostly been taking for granted.
Why “necessity” is already built into ordinary life
Human beings are so deeply interdependent that almost nothing we do is “complete” in isolation. Even the lone craftsperson working in a small shop is relying on other people at every step:
- They count on the legal system to protect their work from theft.
- They count on customers showing up when they bring goods to market at a fair price.
- They count on being able to trade the money they earn for food, tools, and other essentials.
As our lives get more complex—more trades, more relationships, more moving parts—we automatically build more other people’s choices into our plans. And we do it the same way we reason about the physical world: by leaning on experience. We assume that people, like wind and water and fire, will keep operating in broadly familiar ways. A manufacturer counts on workers to do their jobs the way they count on their tools to work; they’d be just as shocked if either suddenly failed for no reason. In fact, this kind of reasoning about other people’s actions is so constant that you almost never stop doing it while you’re awake.
So isn’t it fair to say that, in practice, everyone already accepts necessity—meaning the regular, experience-based connection between motives and actions—whether or not they like the label?
Philosophy depends on it, too
Philosophers aren’t different from the rest of us here. Setting aside the obvious fact that their daily lives assume the same predictability, most intellectual work collapses without it.
- History would be impossible if we couldn’t reasonably trust historians based on what we know about human honesty, bias, and incentives.
- Politics couldn’t be a science if laws and institutions didn’t tend to shape societies in stable, repeatable ways.
- Morality would lose its footing if character didn’t reliably produce certain feelings and those feelings didn’t tend to drive behavior.
- Literary criticism would be nonsense if we couldn’t say a character’s choices are “natural” or “out of character” given their personality and situation.
It’s hard to see how you could do serious thinking—or even effective acting—without treating motives as leading to actions and character as leading to conduct with some regularity.
Moral evidence and physical evidence feel the same to the mind
Notice something else: the way “evidence” works in human affairs blends seamlessly with the way it works in nature. We treat both as one continuous chain.
Picture a prisoner with no money, no allies, and no leverage. They see escape as impossible not only because of the walls and bars, but also because of the jailer’s stubbornness and the guards’ reliability. If they try anything, they’ll work on the iron and stone—not on the “softening” of a person they’ve learned won’t bend.
Now imagine the same prisoner being marched to execution. They predict their death with the same confidence they’d predict the blade’s effect—because they’ve learned the guards won’t suddenly decide to help them flee. Their mind runs through a connected sequence:
- the soldiers refuse to cooperate,
- the executioner does their job,
- the body is harmed in a predictable way,
- blood loss, convulsions, death follow.
That chain contains both natural causes (steel cutting flesh) and voluntary actions (guards refusing, executioner acting). But when your mind moves along the chain, it doesn’t experience a dramatic shift. The expectation feels continuous. The same kind of experienced “linking” produces the same kind of confidence—whether the links are “motive → choice → action” or “impact → motion → injury.”
We can swap vocabulary—call one “physical necessity” and the other something softer—but the mental operation doesn’t change.
Everyday certainty about people can match certainty about physics
If a close friend is wealthy and known to be honest, and they visit you while you’re surrounded by servants, you’re confident they won’t stab you to steal something trivial from your desk. You feel that confidence almost as strongly as you feel confident that your well-built house won’t randomly collapse.
You might object: “But what if they suddenly go insane?” Sure—but the same style of objection applies to the house. “What if an earthquake hits?” You can always invent a freak scenario.
So take a case where "unknown frenzy" doesn't help. You're confident your friend won't put their hand into a fire and hold it there until it burns away. That's not just unlikely; it runs directly against everything we know about human nature. And you can predict it with the same assurance that you can predict a physical outcome, like the fact that someone who jumps from a window with nothing to support them will fall rather than hover in midair.
Or take a very public, very ordinary example: if someone at noon leaves a purse full of gold on a busy sidewalk, they might as well expect it to float away like a feather as expect to find it untouched an hour later. A huge portion of human reasoning consists of exactly these inferences—stronger or weaker depending on how consistent our experience has been of people behaving a certain way in a certain situation.
So why do people resist saying “necessity,” even while living by it?
Here’s a psychological explanation that fits the facts.
When we study bodies and physical causes, even after our strictest analysis, we never actually see a mysterious inner force called “necessary connection.” All we ever observe is this:
- certain kinds of events regularly go together, and
- our mind, by habit, moves from one to expecting the other.
And yet people still feel pulled to believe there’s something more—some hidden glue in nature that ties cause to effect.
Then they turn inward to their own choices. They don’t feel that same “glue” between a motive and an action. The inside experience can feel open-ended. That contrast tempts them to say: “Physical events are necessary; mental events are free.”
But once you accept that, in any area—physical or mental—our whole idea of causation comes from regular conjunction plus the mind’s inference, the difference starts to dissolve. Those two features show up in voluntary action as clearly as anywhere:
- motives and circumstances regularly line up with certain choices, and
- we constantly infer actions from character and situation.
At that point, it becomes natural to say the same kind of necessity applies across the board.
And this is why many philosophers who say they reject necessity in the will don’t really reject it in substance. They disagree mostly over words. In the sense of necessity we’re using here—regular connection supported by experience, plus the inferences built on it—no philosopher has ever truly been able to throw it out.
What some may insist is that, in matter, there’s an additional kind of necessity the mind can perceive—something beyond regularity and inference—and that this extra kind doesn’t exist in human choice. Fine. But if they want that claim taken seriously, they have to do the hard part: define this “extra” necessity clearly and point to where, exactly, we perceive it in physical causation.
Start with the simple case, or the debate never ends
In fact, most people start this whole “liberty vs. necessity” debate from the wrong end. They begin by staring at the soul, the will, the understanding—as if the mind will offer the cleanest, most obvious picture.
A better approach is to start with the simplest subject: brute matter. Ask first whether you can form any clear idea of causation and necessity in physical objects beyond:
- constant conjunction, and
- the mind’s habitual inference from one event to another.
If that really is the entire content of “necessity” in nature—and if those same two elements are plainly present in voluntary action—then the dispute is over, or at least reduced to a fight about terminology.
But as long as we lazily assume we have a richer idea of necessity in the physical world, while admitting we can't find anything like that richness in choice and action, we guarantee endless confusion. The cure is to step back and accept the tight limits of what science gives us about material causes: it gives regularity and inference, not a visible metaphysical chain.

Once you've swallowed that modest conclusion about matter, applying it to the will becomes straightforward. We already treat motives, circumstances, and character as reliably linked to behavior, and we already reason from one to the other every day. At that point, we're simply admitting in words what we've been relying on in practice in every deliberation and every step we take.
What “liberty” can mean, without denying the obvious facts
Now, to continue the reconciliation project on one of metaphysics' most stubborn disputes, we can also show that people have always agreed about liberty in everyday life, just as they've always relied on necessity. The argument here is quick.
Ask what “liberty” could reasonably mean when we’re talking about voluntary action. It can’t mean that actions have no connection to motives, inclinations, and circumstances—so little connection that one doesn’t regularly follow from the other, and so little that we can’t infer one from the other. That’s not just philosophically awkward; it flatly contradicts what everyone observes.
So liberty must mean something more modest and much more familiar: the power to act or not act according to what you decide. In other words:
- if you choose to stay still, you can stay still;
- if you choose to move, you can move.
Call this hypothetical liberty. Almost everyone agrees we have it—except when we’re literally constrained, like a prisoner in chains. And once liberty is defined that way, there’s nothing left to fight about.
Good definitions have two requirements
Any definition of liberty worth using has to satisfy two basic tests:
- it must fit the plain facts of experience, and
- it must be internally consistent.
Meet those requirements, explain the definition clearly, and you’ll find people largely agree.
Liberty opposed to necessity turns into “chance”
It’s widely accepted that nothing exists without a cause, and that “chance,” when you examine it closely, is just a placeholder word for “I don’t know the cause”—not a real power out in nature doing work.
Still, people sometimes claim: some causes are necessary, others are not. This is where definitions do real work. If someone can define cause in a way that does not include any necessary connection to the effect—while also explaining where that idea comes from—then the whole controversy collapses and I’ll concede the point.
But given what we’ve already said, that task can’t be done. If events didn’t regularly go together, we’d never have formed any notion of cause and effect. And it’s that regularity that produces the mind’s inference—the only “connection” we can actually understand. Anyone who tries to define cause while stripping away regular conjunction and inference will end up either speaking nonsense or sneaking the same idea back in with different words.
And if that account of cause is right, then liberty, when it’s defined as the opposite of necessity (rather than the opposite of physical constraint), becomes the same as “chance.” And “chance,” in that strong sense, has no real existence.
Part II — Why "danger to morality/religion" is a bad way to argue, and why necessity actually supports morality
In philosophical debates, one move is extremely common and yet deeply flawed: trying to refute a view by warning that it has dangerous consequences for religion or morals.
If an opinion leads to contradictions or absurdities, that’s a real problem. But “this might have bad effects” doesn’t, by itself, show the opinion is false. Arguments like that don’t help us find truth; they mainly serve to make your opponent look ugly.
I’m saying this as a general rule, not as a cheap trick to score points. In fact, I’m willing to face the challenge directly. The doctrines of necessity and liberty, as explained above, are not only compatible with morality—they’re essential to it.
Two equivalent ways to define necessity
Since necessity is built into our notion of cause, it can be defined in two matching ways:
- as the constant conjunction of similar events, or
- as the inference the mind draws from one event to the other.
These two “definitions” are really the same thing viewed from two angles. And in both senses, necessity has been—quietly but universally—applied to the human will in schools, sermons, and everyday life. No one seriously denies that we can reason about human actions, or that such reasoning rests on observed patterns: similar motives and circumstances tend to produce similar actions.
Where people differ is mostly here:
- some refuse to call this “necessity,” even while fully accepting the pattern; or
- some insist there’s an additional, deeper necessity in matter.
But notice: either difference is irrelevant to morality and religion. The second claim, even if true, matters for physics or metaphysics—not for ethics. And in any case, what we’re attributing to the mind is nothing exotic; it’s exactly what everyone already admits. We are not reshaping the “orthodox” picture of the will. We’re only being more modest about what we pretend to know regarding material causes. So this doctrine is, if anything, remarkably harmless.
Law, responsibility, and the need for regular influence
Every system of law is built on rewards and punishments. That only makes sense if we assume a basic principle: these motives reliably influence people’s minds, often steering them toward good actions and away from bad ones. Call that influence whatever you like. If it regularly goes along with the action, then it functions as a cause—and it counts as an instance of the necessity we’ve been talking about.
Why actions alone can’t be the target of blame
There’s another moral point that’s easy to miss. The proper target of hatred or revenge isn’t an action floating in space; it’s a person—a being with thought and awareness. When a harmful deed triggers anger, it does so because we connect the deed to the agent.
Actions are fleeting. They happen and vanish. If they don’t flow from something stable in the person—some feature of their character or disposition—then they can’t genuinely increase their honor when good, or their disgrace when evil.
The actions themselves might deserve blame—they might violate every rule of morality or religion. But if you deny necessity (and with it, real causes), then the person can’t be held responsible. On that view, the action didn’t come from anything stable in them, and it leaves nothing stable behind. So how could punishment or revenge ever make sense?
In fact, if actions aren’t caused by anything lasting in a person—by character, motives, or dispositions—then after the worst crime imaginable, someone would be just as “pure” as a newborn. Their character wouldn’t be implicated at all, because the action wouldn’t be from the character. And if that’s true, you could never use the wickedness of a deed as evidence that the person is wicked.
But notice how our actual moral practices work.
- We don’t blame people for what they do ignorantly or by accident, even if the consequences are terrible—because those actions come from fleeting circumstances, not from a settled principle in the mind.
- We blame people less for what they do in the heat of the moment than for what they do after deliberation—because a quick temper, even if it’s a real trait, flares up in bursts and doesn’t define the whole person.
- We treat repentance plus real reform as wiping away guilt—because we think the action mattered morally only insofar as it revealed a corrupt inner principle, and once that principle changes, the action stops being good evidence of a present criminal character.
All of this makes sense only if we assume a tight link between actions and the enduring features of a person—between what someone does and the relatively stable causes in them that produced it. In other words, we treat actions as morally significant because they’re signs of character. But if you reject necessity, then actions were never reliable signs of anything stable in the first place. And if they weren’t, they never truly made the person criminal. The whole machinery of praise, blame, punishment, and reform loses its footing.
Liberty is just as necessary for morality
It’s just as easy, and for the same reasons, to show that liberty—in the ordinary sense that everyone agrees on—is also essential to morality. If liberty is missing, then actions can’t properly be called virtuous or vicious, and they can’t reasonably be met with approval or disapproval.
Why? Because actions trigger moral feelings only insofar as they point back to something inside the agent: their character, passions, and affections. If what happened didn’t flow from those inner sources at all—if it came entirely from external violence—then there’s nothing there to praise or blame. You’re not judging the person; you’re just reacting to a collision of forces.
A bigger objection: doesn't necessity make God the author of sin?
I’m not claiming I’ve answered every objection to this account of necessity and liberty. Other problems can be raised from topics I haven’t touched. Here’s one of the most serious.
If voluntary actions follow the same necessary laws as matter, then it seems there’s an unbroken chain of causes—pre-set and pre-determined—stretching from the first cause of everything all the way down to each particular choice any human being ever makes. No contingency anywhere. No genuine “could have done otherwise.” No liberty. While we act, we’re also being acted on.
And if that’s the picture, then the ultimate source of our choices is the Creator of the world—who set the whole machine in motion and placed every part exactly where it had to be, so that every later event would follow by unavoidable necessity. So what follows?
- Either human actions have no moral stain at all, since they ultimately flow from a perfectly good cause; or
- If they do have moral stain, then that stain must rise all the way back to God, since God is the ultimate author of the causal chain.
The analogy is straightforward: if someone sets off a mine, they’re responsible for what happens next whether the fuse is short or long. If there’s a fixed chain of necessary causes, then whoever starts the chain—finite or infinite—authors everything downstream, and so must receive the praise or bear the blame that belongs to those effects.
Our moral thinking seems to enforce this rule whenever we trace consequences in ordinary human cases. And the rule would look even more compelling when applied to the intentions and choices of a being of infinite wisdom and power. A human can plead ignorance or weakness; God can’t. God would have foreseen, ordained, and intended the very actions we rashly call criminal.
So we’d have to conclude either:
- those actions aren’t really criminal, or
- God, not the human, is accountable for them.
But both options sound absurd—and, to many, impious. So, the objection says, the doctrine that leads to them must be false. If a doctrine necessarily implies an absurd conclusion, that absurdity reflects back on the doctrine. In the same way, if criminal actions necessarily follow from an original cause, the criminality would infect the cause.
This objection has two parts, and it helps to separate them:
- If human actions can be traced by a necessary chain back to God, then they can’t be criminal, because they ultimately come from an infinitely perfect being who can intend only what is good.
- Or, if those actions really are criminal, then we must give up God’s perfection and admit that God is the ultimate author of guilt and moral corruption.
Reply to the first part: "but the whole is good" doesn't cancel local evil
The first part has an obvious reply.
Some philosophers, after surveying nature, claim that the universe—taken as a single system—is always arranged with perfect benevolence, and that in the long run the greatest possible happiness will come to all created beings, with no pure, uncompensated misery anywhere. On this view, every physical evil is a necessary ingredient in the best overall plan. Remove it, and you either let in a greater evil or block a greater good. Even God, as a wise designer, couldn’t eliminate these pains without worsening the total outcome.
From this idea, some thinkers—especially the ancient Stoics—built a kind of consolation: what looks like evil to you is actually good for the whole; if you could see the entire system at once, you’d rejoice at every event.
It’s a grand thought. But in real life it doesn’t work.
Try telling someone who’s in the middle of a gout attack that the general laws of nature are beautiful and right—that the “bad humors” are simply moving through the proper channels into the nerves, as part of the optimal design of the universe. You won’t comfort them. You’ll annoy them.
These big, system-level perspectives may briefly please a calm, comfortable person who’s thinking in the abstract. But they don’t stay vivid even in quiet moments, and they collapse immediately when pain and passion show up. Our emotions take a narrower, more natural view: they focus on the beings around us, and they respond to what helps or harms the smaller, human-scale system we live inside.
The same logic applies to moral evil
Moral evil works the same way.
It’s not reasonable to expect remote, cosmic considerations—so weak against physical suffering—to have more power against moral reactions. Human nature is built so that when we see certain traits, dispositions, and actions, we immediately feel approval or blame. These responses aren’t optional extras; they’re part of our basic psychological design.
In general:
- We approve of characters that support peace and safety in society.
- We blame characters that threaten society with harm and disorder.
So it’s natural to think our moral sentiments arise (directly or indirectly) from reflection on these opposing interests.
Now suppose philosophical speculation tells you, “From the viewpoint of the whole, everything is right; even the traits that disrupt society are, overall, beneficial, and just as aligned with nature’s primary plan as the traits that promote human happiness.”
Fine. But can that distant and uncertain theory really outweigh the immediate feelings produced by the natural, close-up view?
If someone steals a large sum from you, do you honestly feel less upset because you’ve been reminded that, in the grand story of the universe, this event might serve the greater good? Of course not. So why expect your moral resentment toward the theft to be incompatible with that speculation?
In fact, the situation is like our judgments of beauty. You can hold on to a philosophical system about how the whole universe is ordered, and still maintain a real distinction between vice and virtue—just as you can still recognize a real distinction between beauty and ugliness. Both distinctions rest on natural human feelings. And no theory, however lofty, has the power to erase or reprogram those feelings.
Reply to the second part: where philosophy runs out of road
The second part of the objection is harder. It’s not easy—maybe not even possible—to explain clearly how God could be the indirect cause of all human actions without being the author of sin and moral corruption.
These are mysteries that unaided human reason handles badly. Whatever system you adopt, you find yourself tangled in deep difficulties and even contradictions as soon as you push too far.
People have tried, and mostly failed, to do things like:
- reconcile human freedom and contingency with divine foreknowledge,
- defend absolute decrees while still keeping God innocent of sin.
So far, these problems have outstripped philosophy’s power.
If philosophy is wise, it learns modesty from that failure. It stops prying into questions so elevated and tangled, and it returns to its proper work: understanding common life. There it will still find more than enough puzzles to occupy it—without sailing out into an endless ocean of doubt, uncertainty, and contradiction.
SECTION IX
Of the REASON of ANIMALS
Everything we call “reasoning about matters of fact” rests on a simple engine: analogy. We look at what has happened before, notice what seems similar now, and expect the same kind of outcome. When two cases match closely, the analogy feels airtight and we treat the conclusion as practically certain. Nobody hesitates, for example, when they see a piece of iron: they expect it to be heavy and to hold together, because iron has always behaved that way in their experience.
When the resemblance is looser, the analogy gets weaker and the conclusion becomes less secure—but it still carries some weight, proportional to how similar the cases are. This is how anatomy generalizes: observations about one animal get extended to others. If we clearly prove, say, that blood circulates in a frog or a fish, that becomes strong evidence that circulation is a general feature of animals.
We can push the same style of thinking into the topic we’re discussing here. Any theory that explains how the human mind works—or how our passions arise and connect—gains credibility if it also explains the same patterns in other animals. So let’s test the hypothesis from the previous section: that what we call “experimental reasoning” (learning from experience and projecting it forward) is driven not by abstract logic, but by something else. Looking at animals from this angle should reinforce what we’ve already seen.
First: it’s obvious that animals, like humans, learn from experience. They come to expect that the same causes will bring about the same effects. Step by step, from early life, they build practical knowledge of the world—fire, water, hard ground, stones, heights, drops, and what those things tend to do to them. You can see the difference immediately between the raw ignorance of the young and the seasoned caution of the old. With time, animals learn what injures them and what brings comfort or pleasure.
A horse that’s spent time in a field learns what it can safely jump and won’t repeatedly try leaps that exceed its strength. An older greyhound will often let the younger dog handle the exhausting part of the chase, while it positions itself for where the hare is likely to double back. That kind of “prediction” isn’t magic; it’s built out of observation and accumulated experience.
This becomes even clearer when you look at training. With well-timed rewards and punishments, you can teach animals patterns of behavior that run straight against their natural impulses. What makes a dog flinch when you threaten him or raise a whip? Experience. What makes him respond to his name—an arbitrary sound—and treat it as meaning him, not the other dogs? Experience again. He learns to connect that particular noise, spoken in a certain way and tone, with your intention to single him out and call him over.
In all these examples, the animal reaches beyond what’s immediately present to the senses. It infers something not directly seen: what is about to happen. And that inference is grounded entirely in the past. Faced with a familiar kind of situation, it expects the familiar kind of consequence—the one it has repeatedly found to follow similar situations before.
Second: it’s impossible that the animal makes these inferences by running an argument in its head—by reasoning its way to a general principle like “nature will stay uniform” or “like causes must always produce like effects.” If arguments of that kind exist at all, they’re far too subtle for an animal’s limited understanding. In fact, teasing them out can take the full effort of a careful philosopher.
So animals aren’t guided by that sort of reasoning. And neither are children. Neither, in everyday life, are most adults. Even philosophers, when they step out of their studies and into the practical business of living, behave mostly like everyone else and follow the same habits of expectation.
Nature must have equipped us with a different mechanism—something quicker, more universal, and more reliable in ordinary life. A mental operation as central as inferring effects from causes couldn’t be left hanging on the fragile, slow, and uncertain process of formal reasoning. Even if this were debatable for humans, it’s hard to doubt it for animals. And once it’s firmly established in their case, analogy strongly suggests we should accept the same account broadly, without special exceptions.
What actually drives these inferences is custom—habit born of repetition. After enough pairings, the mind is carried from what it currently senses to what usually accompanies it. Seeing one thing, it automatically anticipates the other. And it does so in that distinctive way we call belief: not a mere idea floating by, but an expectation with a felt pull toward reality. There isn’t another credible explanation for this pattern across both “higher” and “lower” animals, so far as our observation reaches.
At the same time, although animals learn many things by experience, they also arrive with built-in patterns of behavior—capacities that seem to outrun what their day-to-day intelligence would suggest, and that don’t noticeably improve with long practice. We call these instincts, and we often treat them as astonishing, even mysterious.
But that sense of mystery should shrink when we notice something important: the very “experimental reasoning” we share with animals—the habit-based leap from past to future on which everyday life depends—works like an instinct too. It’s a kind of mechanical power operating in us without our awareness. In its main effects, it isn’t guided by careful comparisons of ideas, the way we imagine “pure thinking” to be. The instinct differs from case to case, but it’s still instinct: the same sort of built-in force that teaches a person to avoid fire also teaches a bird, with remarkable precision, how to incubate its eggs and manage the entire routine of its nest.
SECTION X
Of MIRACLES
Part I — Why testimony can’t beat direct experience
In Dr. Tillotson’s writings there’s a crisp, forceful argument against the doctrine of the “real presence”—a doctrine so at odds with ordinary reasoning that it barely deserves a long rebuttal. His point starts from something most people grant: the authority of Scripture or tradition, as external evidence, ultimately rests on testimony—on what the apostles reported as eyewitnesses to the miracles by which Jesus was said to prove his divine mission.
But notice what follows. If Christianity’s foundation is testimony, then our evidence for it is weaker than the evidence of our senses. Even for the first believers, the evidence was not stronger than direct perception, because it still depended on what others saw and reported. And as testimony passes from the original witnesses to later listeners, it naturally loses force—because no one can rationally place the same confidence in a report as in what they themselves see, hear, and touch.
From there Tillotson draws a simple rule: a weaker kind of evidence can’t overturn a stronger one. So even if the doctrine of the real presence were plainly stated in Scripture, it would still be unreasonable to accept it—because it clashes with sense experience, while Scripture and tradition (considered merely as testimony) don’t carry the same weight as sense. The only way they could match it would be if they were “brought home” to each person by an immediate inner divine operation—something beyond ordinary public evidence.
That kind of argument is convenient, because it at least quiets the loudest superstition: it blocks the move of demanding that we accept what flatly contradicts our senses on the basis of secondhand reports. And I think we can build an argument of the same general type—one that, if sound, would act as a lasting brake on every kind of superstitious deception. That would be useful for as long as people keep recording miracles and wonders in history, whether religious or secular.
How experience guides belief—strongly, but not perfectly
Experience is our only guide when we reason about matters of fact. Still, experience isn’t an infallible oracle; in some situations it can lead us astray. In a place like ours, someone who expects better weather in a week of June than in a week of December is reasoning correctly—from experience. Yet they might still be disappointed.
In that case, though, it isn’t fair to blame experience. Experience usually warns us ahead of time about uncertainty, because careful observation shows that some events don’t consistently follow their supposed causes. In the world:
- Some events are constantly linked to their causes, across countries and ages.
- Others are variable, often frustrating our expectations.
So our confidence comes in degrees—from the highest certainty down to the weakest kind of “moral” evidence.
The basic rule: believe in proportion to evidence
A wise person matches belief to evidence. When experience has been completely uniform, we expect the result with the strongest assurance and treat past experience as full proof of what will happen again. When experience has been mixed, we proceed more cautiously: we compare the results on both sides, count and weigh the observations, and lean toward the side with the stronger support—but with hesitation. Once we settle, what we have is probability, not proof.
Probability, in other words, arises when evidence pulls in opposite directions. One side outweighs the other and produces a degree of confidence proportional to its advantage. A hundred observations supporting one outcome and fifty supporting the other give you a shaky expectation; a hundred uniform observations with only one exception give you a fairly strong assurance. In every case, when evidence conflicts, we effectively "subtract" the weaker side from the stronger to estimate how much confidence is left.
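As a rough picture of that bookkeeping, here is a small sketch, again my own rather than the text's, assuming the crude model in which every observation carries equal weight:

```python
def residual_assurance(favorable, contrary):
    """Crude Humean arithmetic: when experience is mixed, subtract the
    weaker side from the stronger and express what remains as a fraction
    of all observations. An illustrative model only."""
    total = favorable + contrary
    if total == 0:
        return 0.0  # no experience at all, so no expectation either way
    return abs(favorable - contrary) / total

print(residual_assurance(100, 50))  # ~0.33: a shaky expectation
print(residual_assurance(100, 1))   # ~0.98: fairly strong assurance
print(residual_assurance(50, 50))   # 0.0: evidence in perfect balance
```

The exact formula doesn't matter; the point is the shape of the operation: opposing evidence is netted out, and only the surplus supports belief.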
Why we trust witnesses at all
Now apply this to something essential to daily life: reasoning from human testimony—reports from witnesses and observers. You can argue about whether this rests on cause-and-effect; the label doesn’t matter. What matters is the principle behind it.
Our confidence in testimony comes from experience of two things:
- People often tell the truth.
- Reported facts usually match what really happened.
More generally, since we can’t discover any necessary connection between different events, our inferences always come from observing their regular conjunction. Human testimony shouldn’t be treated as a magical exception. A report isn’t tied to reality by logic alone; it earns credibility only because experience shows that, in normal circumstances, it often lines up with the facts.
And that credibility depends on features of human nature we learn through experience: memory holds well enough, people often prefer truth, many have a basic sense of honesty, and shame discourages lying when it’s exposed. Without those regularities, we’d place no trust in testimony at all. That’s why someone delirious—or known for lying and villainy—has no authority with us.
Testimony has grades, just like everything else
Because testimony is validated by experience, it can function as either:
- Proof, when a certain kind of report has been consistently matched by corresponding facts, or
- Probability, when that connection has been irregular.
Judging testimony also requires attention to many circumstances, and the final standard is always experience and observation. When experience isn’t perfectly uniform, we inevitably waver: arguments oppose and partly cancel each other, just as in any other kind of evidence. We often hesitate about what we’re told, balance the circumstances for and against, and then lean toward the stronger side—though with reduced confidence, depending on how strong the opposing considerations are.
When testimony weakens: the usual warning signs
Conflicting evidence can arise from several sources:
- Witnesses contradict each other.
- There are too few witnesses, or their character is doubtful.
- They have an interest in what they claim.
- They speak with hesitation—or, just as suspiciously, with excessive certainty and theatrical insistence.
Many other factors can also weaken or destroy the force of testimony.
The “marvelous” creates a direct clash of experiences
Now suppose the reported fact is extraordinary—marvelous and unusual. In that case, the credibility of testimony drops in proportion to how unusual the claim is. The reason we trust witnesses and historians is not that we can see, in advance, any logical bond between testimony and reality. We trust them because we are used to testimony matching the facts.
But when someone reports something we’ve rarely or never observed, two experiences collide:
- Experience supporting the general reliability of testimony.
- Experience against the kind of event being reported, because it doesn’t fit what we normally see.
Those opposing experiences counterbalance each other; they reduce belief and reduce authority. The stronger side wins, but only with whatever force remains after the contest.
That’s why the Romans had a proverb: “I wouldn’t believe it even if Cato told me.” Even the reputation of a famously serious, virtuous person can be overwhelmed by the sheer implausibility of the claim.
Or consider the story of an Indian prince who refused to believe early reports about frost. His reasoning was perfectly sensible. It takes strong testimony to persuade someone of events produced by a natural state they’ve never encountered, and that bears little resemblance to anything in their steady experience. The reports weren’t directly contrary to what he knew—they just didn’t match it.
From “strange” to “miracle”: proof against proof
Now push the case further. Imagine the reported event isn’t merely unusual but genuinely miraculous. And suppose the testimony, taken by itself, would amount to full proof. Even then, we would have proof against proof. The stronger must prevail—but not without losing strength in proportion to the opposing proof.
Here is the key definition: a miracle is a violation of the laws of nature. And because those laws are established by firm and unalterable experience, the evidence against a miracle—just from the kind of event it is—is as complete as any argument from experience could be.
Why do we treat it as more than merely likely that people die, that lead doesn’t hang in midair by itself, that fire burns wood and water puts fire out? Because these outcomes fit what we repeatedly observe as the settled course of nature. To prevent them would require a violation of those laws—in other words, a miracle.
An event isn't called a miracle if it happens within nature's ordinary patterns. A seemingly healthy man dying suddenly is not a miracle: unusual, yes, but it has happened often enough to have been observed in many places and times. A dead man returning to life, on the other hand, is a miracle, because that has never been observed in any age or country.
So, by definition, there is uniform experience against every miraculous event. If the experience weren’t uniform, the event wouldn’t count as miraculous. And because uniform experience amounts to proof, we have a direct and full proof against the existence of any miracle. Such a proof can be overcome only by an opposing proof that is stronger.
The central maxim
The result is a rule worth remembering:
No testimony is sufficient to establish a miracle unless the testimony is such that its falsehood would be more miraculous than the fact it tries to establish. And even then, the two sides partially cancel; the remaining assurance matches whatever superiority is left after the weaker side is discounted.
So if someone tells me they saw a dead man brought back to life, I immediately ask: Which is more likely—that this person is deceiving me, or has been deceived, or that the reported event truly happened? I compare one “miracle” against the other. And I accept whichever option is less miraculous, rejecting the greater one. Only if it would be even more miraculous for the testimony to be false than for the event to have occurred can the witness reasonably demand my belief.
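If it helps, the maxim can be rendered as a bare comparison of probabilities. The sketch below is a modern gloss with invented numbers; Hume never computes anything like this, and both inputs are assumptions:

```python
def accept_testimony(p_event, p_false_report):
    """Hume's maxim as a comparison: believe the report only if the
    falsehood of the testimony would be the bigger "miracle", i.e. less
    probable than the event it reports."""
    return p_false_report < p_event

# Illustrative values only: a resurrection vs. ordinary error or deceit.
p_resurrection = 1e-12   # never observed in any age or country
p_witness_wrong = 1e-3   # lies, delusion, and exaggeration are common
print(accept_testimony(p_resurrection, p_witness_wrong))  # False: reject
```

On any such numbers, testimony loses: however honest the witness, ordinary human failure remains vastly more probable than a violation of natural law.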
Part II — Why real-life miracle testimony never reaches that bar
Up to now we’ve allowed, for the sake of argument, that testimony for a miracle might sometimes rise to full proof—and that for it to be false would itself be a genuine marvel. But that concession is far too generous. In reality, no miracle has ever been established by evidence that complete.
First: nowhere in all of history do we find a miracle attested by a sufficient number of people who simultaneously meet all the conditions needed for full confidence—people with:
- unquestioned good sense, education, and learning (so they are unlikely to be deluded),
- unquestioned integrity (so they are unlikely to intend deception),
- strong public credit and reputation (so they have much to lose if caught lying),
- and the ability to attest a fact done so publicly, and in so famous a place, that exposure would be unavoidable if it were false.
All of those circumstances are required before testimony deserves full assurance. And we never find them gathered together in miracle reports.
Second: human nature contains a principle that—if you look closely—dramatically weakens the force of testimony in cases of prodigies. Our ordinary rule of reasoning is sensible: unknown things tend to resemble known things; what we’ve most often observed is most probable; and when arguments conflict, we should prefer the side supported by the greatest number of past observations.
By that rule, we easily reject claims that are merely unusual and somewhat incredible. But when someone claims something outright absurd and miraculous, the mind doesn’t always keep the same discipline. Strangely, the very feature that should destroy the claim’s credibility often helps it. Why? Because surprise and wonder are pleasurable emotions, and miracles are tailor-made to produce them. That pleasure pulls the mind toward belief.
This goes so far that even people who can’t quite believe miraculous stories still enjoy them “second-hand.” They take pride in retelling them and in stirring up other people’s amazement.
Think about how eagerly people swallow travellers’ tales: sea monsters, strange lands, bizarre customs, improbable adventures. Add religion to this appetite for wonder, and common sense often collapses entirely. In that setting, testimony loses its claim to authority.
A religious person might be an enthusiast—sincerely imagining they see what isn’t real. Or they might knowingly tell a false story and persist in it, with what they take to be good intentions, because they believe it advances a holy cause. Even when outright deception isn’t present, vanity and self-interest are unusually powerful temptations here.
Meanwhile the audience typically lacks the judgment to examine the evidence. Whatever judgment they do have, they often set aside as a matter of principle in “sublime” and mysterious topics. And even if they tried to reason carefully, passion and an overheated imagination disrupt the mind’s regular operations. Their credulity feeds the speaker’s boldness, and the speaker’s boldness overwhelms their credulity.
Eloquence, at its peak, leaves little room for reflection. It aims at imagination and emotion, and it can capture willing listeners while subduing their understanding. True, this peak is rare. But what a Cicero or a Demosthenes could barely achieve over Roman or Athenian crowds, any wandering preacher or zealot can often achieve over the general public—more thoroughly—by pressing on cruder, more easily stirred passions.
Finally, consider how many forged miracles, prophecies, and supernatural tales have been exposed—either by contrary evidence or by collapsing under their own absurdity. These examples show how strongly people are drawn to the extraordinary. And they should, quite reasonably, make us suspicious of every report of that kind. This suspicion is simply an extension of how we naturally think, even about ordinary events.
However quickly ordinary rumors catch fire—especially in small towns—religious miracle stories spread faster still.
Think about the kind of gossip people can’t resist: marriages. Two young people of similar status get seen together twice, and suddenly the whole neighborhood is pairing them off. Everyone enjoys being the first to “know,” the first to tell, the one who passes on something juicy and consequential. That pleasure alone helps the story travel. And because we all understand how that game works, sensible people don’t put much weight on the rumor until something stronger backs it up.
Now ask yourself: don’t the same human impulses—and often even stronger ones—push people to accept and repeat miracle stories with total confidence?
Why miracle stories cluster where skepticism is scarce
Third, there’s another strong reason to doubt supernatural reports: they show up most often among ignorant and “barbarous” societies. And when a highly developed society does accept such stories, it almost always inherits them from earlier, less informed ancestors—and those inherited beliefs arrive stamped with a kind of “don’t question this” authority that tradition tends to create.
Read the earliest histories of almost any nation and you can feel the shift. It’s like stepping into a different universe where nature itself doesn’t behave normally. In those accounts:
- wars, revolutions, plagues, famines, and deaths rarely have ordinary causes
- instead, everything is chalked up to prodigies, omens, oracles, and divine punishments
- the few natural events that do appear are buried under the supernatural noise
But turn the pages toward more enlightened times and the marvels thin out. The “miracles” don’t disappear because nature changed. They fade because we gradually recognize what was really driving them: humanity’s persistent appetite for the extraordinary. Education and common sense can restrain that appetite for a while, but they never erase it completely.
A thoughtful reader often says, after finishing those old wonder-filled histories, “How come nothing like this ever happens now?” The answer is not mysterious. People have lied in every era. You’ve seen enough of that, surely. And you’ve probably watched plenty of sensational stories flare up, get mocked by anyone with judgment, and eventually be dropped even by the gullible.
Those legendary falsehoods that once grew to monstrous size began the same way: as small tales planted in the right soil. Given an audience eager for marvels, they can shoot up into “prodigies” almost as grand as the ones they claim to describe.
How a con works best: start it far from scrutiny
A classic example is the false prophet Alexander—today mostly forgotten, but once famous. His strategy was smart: he launched his frauds in Paphlagonia, a place where (as Lucian reports) people were deeply uneducated and ready to swallow almost any deception.
Distance does the rest of the work. People far away who are weak enough to care don’t have the chance to check the facts. By the time the story reaches them, it has inflated with a hundred extra details. The foolish spread it eagerly. Meanwhile, the educated mostly settle for laughing at it, without bothering to gather the specific information that would refute it cleanly. That dynamic let Alexander move from fooling local Paphlagonians to recruiting followers among Greek philosophers and high-status Romans—and even to catching the attention of Emperor Marcus Aurelius, who reportedly trusted a military expedition to Alexander’s prophecies.
And here’s the broader point: starting a fraud among an ignorant public has huge advantages. Even if the lie is so clumsy that many locals don’t buy it (rare, but possible), it can still succeed better abroad than it ever could have at home.
Why?
- the least informed people are often the most energetic couriers of the story
- there’s no well-connected, credible local network strong enough to contradict it
- people’s hunger for the marvelous has room to run
So a tale that gets laughed out of the town where it began may be believed with certainty a thousand miles away. If Alexander had tried this in Athens, the philosophers in that hub of learning would have broadcast their judgment across the Roman world—backed by reputation, argument, and eloquence—and the fraud would have collapsed.
It’s true that Lucian happened to pass through Paphlagonia and was able to expose the scheme. But that’s a lucky accident. Not every Alexander runs into a Lucian.
The “miracles cancel each other out” problem
Fourth, there’s an additional reason miracle claims lose their authority: for any given alleged miracle—even one that hasn’t been explicitly debunked—there stands, in effect, an endless crowd of opposing witnesses. In other words, the miracle undermines testimony, and testimony undermines itself.
To see why, notice something about religion: when it comes to competing religions, “different” usually means “contradictory.” The religions of ancient Rome, Turkey, Siam, and China cannot all be true in the way each claims to be. Yet each tradition is full of miracles.
And every miracle is typically offered for a specific purpose: to certify that particular religious system. But by doing so, it also works—indirectly but powerfully—to discredit every rival system. If a miracle “proves” one religion, it simultaneously “disproves” the others. So the miracle reports across religions function like opposing testimony. They collide.
By this reasoning, if you accept a miracle attributed to Muhammad or his successors on the testimony of a small number of early Arab witnesses, you must also treat the testimony of writers and witnesses from other religions—Livy, Plutarch, Tacitus, Greek, Chinese, Roman Catholic, and so on—as if they were directly contradicting that Islamic miracle, with the same certainty they claim for their own.
This may sound overly subtle, but it’s no more abstract than a courtroom case. Imagine two witnesses swear a person committed a crime—while two equally credible witnesses swear he was two hundred leagues away at the exact same time. The testimonies don’t add up to truth; they neutralize each other.
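As a toy illustration of that neutralizing effect, here is a hedged sketch in rough Bayesian terms (the likelihood ratios are invented for the example): each witness for a claim multiplies its odds by some factor, an equally credible witness against it multiplies by the reciprocal, and matched testimony leaves the prior untouched.

```python
# Toy model of testimony as a likelihood ratio on the odds of a claim.
# A witness for the claim multiplies the odds by r > 1; an equally
# credible witness against it multiplies by 1/r. Matched pairs cancel.

def posterior_odds(prior_odds: float, ratios: list[float]) -> float:
    odds = prior_odds
    for r in ratios:
        odds *= r
    return odds

r = 20.0  # illustrative strength of one credible witness
print(posterior_odds(1.0, [r, r, 1 / r, 1 / r]))
# -> 1.0 (up to floating-point rounding): two witnesses for, two equally
#    credible witnesses against; we are back at the prior, no nearer truth.
```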
“Best-attested” miracles can still be obviously false
Consider one of the most strongly supported miracles in non-religious history: a story Tacitus tells about Emperor Vespasian. In Alexandria, Vespasian supposedly cured a blind man with his spit and healed a lame man by touching him with his foot—acting on a vision from the god Serapis, who told the men to seek the emperor for miraculous cures.
If anyone today cared to defend the evidence for that now-discarded pagan superstition, they could make it look extremely impressive. Tacitus supplies circumstances that—on paper—seem to strengthen credibility:
- Vespasian’s reputation: serious, stable, mature, and not prone to theatrical claims of divinity
- Tacitus himself: a contemporary writer famous for sharp judgment and honesty, so skeptical he was sometimes accused of irreligion
- the sources: people of established character, presumably, and said to be eye-witnesses
- the timing: they continued to affirm the story even after the Flavian family lost power, when there was supposedly “no price for a lie”
- the setting: a public event, not a private anecdote
And yet, despite all that, the claim is still a gross falsehood. The point is stark: what looks like the strongest possible human testimony can still attach itself to something plainly incredible.
A careful skeptic can witness the “evidence” and still refuse belief
Here’s another story worth weighing. Cardinal de Retz recounts that while fleeing into Spain, he passed through Saragossa and was shown, in the cathedral, a man who had served as a doorkeeper for seven years—well known to many locals. The man had long been seen with one leg missing, but (so the story went) regained it after holy oil was rubbed on the stump. The cardinal claims he personally saw the man with two legs.
The miracle was backed, he says, by the cathedral’s officials, and the townspeople were called on as a body to confirm it. And by his own observation, their devotion made them entirely convinced.
Notice how strong the case seems if you’re only tallying “evidence”:
- the reporter lived at the same time as the alleged event
- he had a skeptical, worldly personality and considerable intelligence
- the miracle is so unusual it seems hard to fake
- the witnesses were numerous and, in a broad sense, “spectators”
And yet what’s most striking is this: the cardinal himself appears not to believe the story. That matters, because it removes the suspicion that he’s knowingly part of the fraud.
He understood something crucial: you don’t have to be able to untangle every detail of the deception in order to reject a miracle claim. In most cases, that kind of detailed disproof is impossible even a short time later, and it can be difficult even when you’re on the spot—because the world is full of bigotry, ignorance, cunning, and dishonesty. So he concluded, reasonably, that the “evidence” had falsehood written on its face—and that a miracle propped up by human testimony is better met with ridicule than with debate.
A modern “learned age” still manufactures miracles
Or take the flood of miracles said to have occurred in France at the tomb of Abbé Paris, the famous Jansenist whose reputed sanctity had deluded the public for years. People everywhere talked as if the tomb routinely:
- cured the sick
- restored hearing to the deaf
- restored sight to the blind
What’s more, many of these miracles were supposedly investigated on the spot before judges known for integrity, attested by reputable witnesses, and reported in a learned age on one of the world’s most public stages. Reports were published and spread widely. Even the Jesuits—educated, backed by civil authority, and committed enemies of the religious faction favored by the alleged miracles—could not, it was said, specifically expose or refute them.
Where could you find a more impressive pile of supporting circumstances?
And yet, what counters that entire “cloud of witnesses” is simply the nature of the claim itself: the events described are miraculous, meaning they contradict the regular course of experience and the established laws of nature. For any reasonable person, that fact alone is a sufficient refutation.
Testimony isn’t one-size-fits-all
Is it fair to argue like this: “Because testimony can be extremely strong in some cases—say, when describing the battle of Philippi or Pharsalus—therefore testimony must have equal strength in every case”?
Of course not.
Imagine the Caesarean and Pompeian factions each claimed victory in those battles, and historians from each side unanimously credited their own party. From our distance in time, how would we decide? Conflicting testimony can leave us unable to resolve even ordinary historical questions.
That same kind of opposition exists between miracle reports in ancient writers like Herodotus or Plutarch, and miracle reports in later Christian historians and chroniclers. The claims aren’t simply additional data points. They compete, and they cancel.
Why people are tempted to invent—and believe—miracles
Wise people, at least, treat any report with a kind of academic caution when it conveniently flatters the reporter’s passions—when it makes their country look glorious, their family noble, or themselves important.
But what temptation is greater than being seen as a missionary, a prophet, an envoy from heaven? Who wouldn’t brave dangers to gain a title like that?
And even when outright deception isn’t the starting point, vanity and an overheated imagination can do the work first: someone convinces himself, becomes sincerely deluded, and then, in support of a “holy” cause, thinks nothing of using pious frauds to keep the story alive.
Once the conditions are right, the smallest spark can become a blaze. The audience is already prepared. The eager, listening crowd—greedy for stories and impressed by superstition—swallows what it hears without checking.
Ordinary human nature explains the pattern—without breaking nature’s laws
How many miracle stories have been exposed and extinguished while they were still small? How many more enjoyed a moment of fame and then slid into neglect?
When such reports spread, the explanation is straightforward. We can account for them using familiar, well-observed principles: credulity, group enthusiasm, self-interest, and delusion. That is how we interpret things in line with normal experience.
So why, just to avoid that perfectly natural explanation, would we instead accept a miraculous violation of the most stable laws of nature?
And notice how hard it is to catch a falsehood even in ordinary history—whether private or public—right where it supposedly happened. It's even harder once the story has traveled even a small distance from the scene. Courts, with all their authority and care, often can't separate truth from lies even in recent events. If the case is left to argument, rumor, and back-and-forth storytelling—especially when passions are already engaged—nothing ever gets decisively settled.
In the early days of new religions, educated people often treat the whole thing as too trivial to bother with. Later, when they finally want to expose the deception and wake up the crowd, it’s too late. The records are gone. The witnesses are dead. The chance to verify has vanished.
What remains is one tool: scrutinizing the testimony itself. That method is always enough for careful, knowledgeable readers—but it’s often too subtle for the general public to grasp.
The bottom line
Taken together, this shows that no testimony for any miracle has ever reached the level of genuine probability—let alone proof. And even if you imagine testimony that did rise to “proof,” it would still be opposed by another proof: the very nature of the event it tries to establish.
Why? Because experience is what gives weight to human testimony. And the same experience is what teaches us the laws of nature. When these two kinds of experience clash, we do the only rational thing: we weigh them against each other—subtract one from the other—and accept whichever side still has greater remaining force.
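One way to gloss that "subtraction" numerically, as a sketch of my own rather than anything the text specifies: give each side a weight standing for how much uniform experience backs it, take the difference, and read whatever remains as the assurance left to the stronger side.

```python
# Rough numerical gloss on weighing proof against proof. The weights are
# invented stand-ins for the amount of uniform experience behind each side.

def surviving_assurance(weight_for_law: float, weight_for_testimony: float):
    """Subtract the weaker proof from the stronger; the remainder is the
    degree of assurance left, on whichever side survives."""
    diff = weight_for_testimony - weight_for_law
    side = "the testimony" if diff > 0 else "the law of nature"
    return side, abs(diff)

# Experience behind a law of nature is as uniform as experience gets,
# so even excellent testimony leaves the balance on the law's side:
side, force = surviving_assurance(weight_for_law=0.999999,
                                  weight_for_testimony=0.999)
print(f"{side} prevails, with remaining force {force:.6f}")
```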
But once you apply that principle to popular religions, the subtraction wipes out the case entirely. That leads to a clear maxim:
No human testimony is strong enough to prove a miracle in a way that can serve as a solid foundation for any system of religion.
Please notice the limitation built into that maxim: the claim is not that testimony could never establish any extraordinary event whatsoever, but that no miracle can ever be proved to the degree required to ground a system of religion.
For all that, I’ll grant this much: in principle, there could be events that look like miracles—real breaks from nature’s usual patterns—where human testimony might actually be strong enough to count as proof. The catch is that history probably doesn’t give us any clean examples.
Here’s a thought experiment. Imagine that every writer, in every language, independently reports the same thing: starting on January 1, 1600, the entire Earth fell into complete darkness for eight straight days. Now add a few more details:
- The story has remained vivid and consistent in popular memory ever since.
- Travelers returning from every country report the same tradition.
- No one offers a competing version, a contradiction, or even a meaningful variation.
If that were our evidence, modern scientists shouldn’t respond with automatic skepticism. They should treat the event as settled and start asking a different question: What could have caused it? After all, the general idea that nature can decay, break down, or undergo catastrophic change already seems plausible by analogy with countless processes we observe—things wear out, systems collapse, organisms die, civilizations fail. So if an alleged phenomenon points in that direction, and the testimony is truly broad and uniform, it lands within the range of what we can reasonably accept on human reporting.
Now switch the case to something far more dramatic. Suppose every historian of England agrees that, on January 1, 1600, Queen Elizabeth died. Suppose the story includes all the standard public details you’d expect around a monarch’s death:
- Her physicians and the court attended her before her death.
- After her death, they still saw her body, as would normally happen for someone of her status.
- Parliament recognized and proclaimed her successor.
- She was buried.
And then comes the “miracle”: after lying in her grave for a month, she reappears, takes back the throne, and rules England for three more years.
I’ll admit it: the shared details would be bizarre. I’d be struck by how many strange circumstances line up. But I still wouldn’t feel the slightest pull to believe the central miraculous claim. I wouldn’t deny that something happened—Elizabeth’s “death,” the public ceremonies, the political transition. I would simply conclude that the death was staged, and that it wasn’t—and couldn’t have been—real.
You might protest: “But how could you possibly fool the whole world on a matter that big? Elizabeth was famously intelligent and steady-minded. And what would she even gain from such a ridiculous stunt?” Those objections would genuinely surprise me. They might leave me shaking my head at the improbability of the plot. But I’d still answer the same way: human deceit and human stupidity are familiar facts of life. When I have to choose between (a) an extraordinary tangle of fraud, mistakes, and collective gullibility and (b) a direct, unmistakable violation of nature’s laws, I’ll bet on the first. It’s the kind of thing people do. A resurrection from the grave is not.
And if that resurrection story were presented as part of a brand-new religious system, that would push the needle even further away from belief. Across history, people have been taken in by religious miracle-stories so often—and with such enthusiasm—that the religious packaging itself becomes evidence of a con. For anyone with sound judgment, that context doesn’t just make the event doubtful; it’s reason to reject it outright, without needing an extended investigation.
Even if the miracle is attributed to an Almighty Being, that doesn’t automatically make it more likely. Why? Because we don’t get direct access to the attributes or habits of such a being. The only way we ever form ideas about what a creator does is by looking at the world we actually experience—the regular, repeatable course of nature. That means we’re always dragged back to the same comparison: which is more probable?
- That witnesses are lying, mistaken, exaggerating, or being misled (a violation of truth in testimony), or
- That nature itself has been suspended (a violation of natural law).
And since lies, exaggerations, and self-deceptions are especially common in testimony about religious miracles, that kind of testimony deserves less weight than testimony about ordinary events. The sensible result is a general rule: don’t trust religious miracle reports, no matter how polished or persuasive the presentation looks.
Francis Bacon seems to reason in much the same way. He urges us to compile a careful record of unusual natural events—“monsters,” strange births, rare productions, everything new and extraordinary—but to do so under strict discipline, because it’s easy to drift into fiction. And above all, he says, treat any report as suspicious if it leans on religion, like the prodigies described by Livy. Treat with equal suspicion the stories found in writers of “natural magic” or alchemy—authors who, in Bacon’s view, can’t resist making things up.
I’m especially pleased by this line of reasoning because it helps expose a particular group: the dangerous “friends,” or disguised enemies, of Christianity who try to defend it using ordinary human reasoning. Christianity, on this view, rests on faith, not on reason. And nothing is more likely to undermine it than forcing it into a courtroom where the judge is probability and the evidence is human testimony—because that’s a trial the religion simply wasn’t designed to survive.
To see why, consider the miracles reported in scripture. And to keep the discussion from spreading endlessly, focus only on the Pentateuch. Now evaluate it the way these “reasonable” defenders invite us to: not as God’s own testimony, but as a purely human historical document.
What do we have? A book presented by a people described as culturally rough and poorly educated, written in an age even rougher, and most likely composed long after the events it claims to report. It isn’t supported by independent, confirming witnesses. And it resembles the kind of legendary origin stories every nation tells about itself. When you read it, it’s packed with wonders:
- A world and a human nature utterly unlike what we observe now
- A fall from that original condition
- Human lifespans stretched to nearly a thousand years
- The destruction of the world by a global flood
- The arbitrary selection of one nation as heaven’s favorite—and that nation just happens to be the author’s own people
- A liberation from slavery carried out through astonishing supernatural feats
Now here’s the real test. Put your hand on your heart and answer honestly: would it be more miraculous for such a book—supported by that kind of testimony—to be false, or for all the miracles inside it to be true? Because if we follow the earlier standard of probability, the book can only be accepted if its falsehood would be the greater wonder.
Everything said about miracles applies, with no real changes, to prophecy. In fact, every genuine prophecy is a miracle, and only as a miracle can it serve as evidence for a revelation. If predicting future events didn’t exceed human capacity, it would be absurd to treat prophecy as proof of a divine commission.
So the conclusion is blunt: Christianity wasn’t merely accompanied by miracles at its beginning. Even today, no reasonable person can believe it without a miracle. Reason alone can’t get you to its truth. And anyone who believes by faith is aware—within their own mind—of an ongoing miracle: a force that overturns the normal rules of understanding and moves them to accept what runs most strongly against custom and experience.
SECTION XI
Of a PARTICULAR PROVIDENCE and of a FUTURE STATE
I was recently talking with a friend who enjoys skeptical contrarian arguments. I disagree with a lot of his principles—but they’re clever, and they connect to the thread of reasoning in this book—so I’m going to reconstruct them as faithfully as I can from memory and let you judge for yourself.
Our conversation started with me celebrating what looks like philosophy’s unlikely good luck. Philosophy needs one thing above all else: freedom. It thrives when people can openly disagree, challenge each other, and argue without fear. And historically, it’s striking that philosophy first took root in places and times that were unusually tolerant—and that even its wildest ideas were rarely crushed by official creeds, confessions, or punishments.
Yes, there are famous exceptions: Protagoras was banished, and Socrates was executed (though that case had other forces mixed in). But compared to the religious suspicion and policing of thought that infects later ages, ancient history contains surprisingly few examples of this kind of anxious, punitive zeal. Epicurus lived in Athens to old age in peace. Epicureans were even allowed to become priests and serve at the altar in the most sacred rituals of the public religion. And some of the wisest Roman emperors paid salaries and pensions to philosophy teachers of every school. It’s not hard to see why that kind of climate mattered when philosophy was young. Even now—when philosophy is supposedly tougher and more mature—it still struggles against the cold weather of slander and persecution.
My friend pushed back.
“You’re treating that tolerance as a lucky accident,” he said, “when it’s really what you should expect from how human societies naturally develop. The stubborn bigotry you blame for philosophy’s troubles is, in a way, philosophy’s own child. Philosophy mates with superstition, and their offspring—religious dogmatism—eventually turns on its parent and becomes its fiercest enemy.”
His point was historical. In the earliest ages, he said, people couldn't even form the kind of abstract, technical religious doctrines that later become battlegrounds. Most humans were illiterate; religion was shaped for limited understanding. It relied on inherited stories and traditional belief, not on argument and debate. So once the first shock of the philosophers' new paradoxes and unfamiliar principles wore off, philosophy and the established superstitions largely settled into a stable coexistence. They divided humanity between them:
- Philosophy claimed the educated and reflective.
- Popular religion held the majority—those too busy, untrained, or uninterested to follow careful argument.
I replied: “So you’re leaving politics out of it. You’re acting as if a wise government has no reason to worry about certain philosophical views—like Epicurus’s—that deny a divine being, and with it providence and a future state. Doesn’t that seem dangerous? If people stop believing in divine oversight and afterlife rewards and punishments, wouldn’t that weaken morality and threaten civil peace?”
He answered quickly: “In real life, persecutions never came from calm reasoning or from a sober assessment of consequences. They came from emotion and prejudice. But let me go further. Suppose Epicurus had been hauled before the people by the professional informers of Athens. I think he could have defended himself easily. He could have shown that his principles were at least as socially healthy as those of his opponents—who were so eager to whip up public hatred against him.”
I told him I wanted to hear that defense. “Give me the speech,” I said. “Not for an ignorant mob—if you’ll even grant that polished old Athens had a mob—but for the more thoughtful listeners who could actually understand an argument.”
“That’s easy,” he said. “I’ll pretend to be Epicurus for a moment. You be the Athenian people. And I’ll give you a speech that wins the vote completely—an urn full of white beans, not a single black one left for my enemies.”
So I told him to begin. And he did.
“I come before you, Athenians, to defend in public what I teach in my school. Yet I find myself attacked by furious enemies, not questioned by calm inquirers. Your assemblies are meant for the public good—for the welfare of the city. Instead, you’re being dragged into disputes about speculative philosophy. These investigations may sound magnificent, but they may also be fruitless, and they’re certainly displacing the everyday business that actually keeps a community safe and prosperous.
“So let’s be disciplined. We will not debate here the origin of worlds or how the cosmos is governed. We will ask only one thing: Do these questions matter to the public interest? If I can convince you that they don’t—if I can show you they’re irrelevant to the peace of society and the security of government—then you should send us back to our schools, where we can examine, at leisure, the most elevated and also the most speculative problems in philosophy.
“My opponents—the religious philosophers—aren’t satisfied with the traditions of your ancestors or the teachings of your priests, which I’m perfectly willing to live with as the religion of the city. They’re driven by a reckless curiosity: they want to prove religion using reason. And in doing so, they don’t settle doubts; they manufacture new ones. They paint the universe in glowing colors—its order, its beauty, its intricate arrangement—and then they challenge us: could such magnificence come from atoms colliding by accident? Could chance produce what even the greatest genius can’t fully admire?
“I’m not going to fight them on that terrain today. I’ll grant them everything they want. I’ll assume their argument is as strong as they claim. And that’s enough for my purpose—because even on their own reasoning, the dispute you’re angry about is still a purely speculative one. When I deny a providence and a future state, I’m not sawing through the pillars of society. I’m applying a rule of reasoning that they themselves must accept, if they want to be consistent.
“Here’s what you, my accusers, have already conceded. You say the main—perhaps the only—argument for a divine existence comes from the order of nature. You see marks of intelligence and design, and you think it absurd to credit chance or blind matter as the source. You also admit that this is an argument that moves from effects to causes: from the structure of the work, you infer planning and foresight in the maker. And you insist you won’t push your conclusion any farther than the phenomena justify. If you can’t establish that connection, you admit the whole argument collapses.
“Good. Now watch what follows.
“When we infer a cause from an effect, we must keep them in proportion. We’re not allowed to assign the cause any qualities beyond what are strictly needed to produce the effect we actually observe. If you see a ten-ounce weight lifted on a scale, you can infer the counterweight is more than ten ounces. But you cannot infer it’s a hundred. That extra leap isn’t reasoning—it’s fantasy.
“And if the cause you propose isn’t sufficient for the effect, you have two honest options: reject the cause, or add only what’s required to make it adequate. But if you pile on additional qualities—if you declare the cause capable of producing other effects you haven’t seen—then you’ve left evidence behind and entered pure conjecture. You’re inventing powers without warrant.
“This rule doesn’t change depending on what kind of cause you imagine. Whether you posit unconscious matter or an intelligent mind, the logic is the same: if you know the cause only through its effects, you must not attribute to it anything beyond what those effects require. And you certainly can’t reverse direction—starting from your supposed cause and predicting entirely new effects—when the cause itself was inferred from a limited set of observations.
“Think of an artist. If you see a single painting by Zeuxis, you can reasonably conclude he had the skill and taste needed to create that painting. But you can’t conclude, from that alone, that he was also a great sculptor or architect. Those extra talents might be real, but you have no evidence for them. The cause must be fitted to the effect. If you fit it precisely, you won’t find in it any surplus qualities that point to further designs or additional works.
“So if you grant—purely for the sake of argument—that the gods are responsible for the existence and order of the universe, then you are entitled to conclude only this: the gods possess exactly the degree of power, intelligence, and benevolence that the universe displays. Nothing more can be proved unless you substitute flattery and exaggeration for argument. As far as you see traces of attributes, you may infer those attributes. But anything beyond that is hypothesis. And it’s even worse to claim—without new evidence—that in distant regions of space or future ages of time there must be a more splendid display of those attributes, a grander administration that better matches the virtues you wish the gods had.
“You’re not allowed to climb from the universe as an effect up to Jupiter as a cause, and then climb back down again to predict brand-new effects from that imagined cause—as though the world you started from weren’t already the only evidence you have. If your knowledge of the cause comes solely from the effect, then cause and effect must be matched exactly. The cause cannot become a springboard for new conclusions that go beyond the phenomena.
“And this, frankly, is what happens. You see certain features of nature. You hunt for an author. You decide you’ve found one. Then you fall in love with your own mental creation. Soon you’re convinced this being must produce something greater and more perfect than the world we actually live in—a world packed with suffering and disorder. But notice what you’ve done: you’ve smuggled in a superlative intelligence and goodness that your evidence never established. You have no rational basis to assign the cause any qualities other than those it has already displayed.
“So let your gods, philosophers, be fitted to nature as it appears. Don’t try to refit nature to your preferred picture of the gods.
“When priests and poets—speaking with the authority of tradition—talk about a golden or silver age before our present world of vice and misery, I can listen with respectful attention. They aren’t pretending to be mathematicians of the divine. But when philosophers—who claim to ignore authority and follow reason—tell the same story, I don’t feel the same obligation to bow. I ask them: who took you into the heavens? Who invited you into the councils of the gods? Who opened the book of fate for you, so you can assert so confidently that the gods have done, or will do, something beyond what has actually appeared?
“If they answer, ‘We climbed there by reason—step by step—by inferring causes from effects,’ then I insist they must have strapped wings of imagination onto the argument. Otherwise they couldn’t have switched their method midstream. They start with effects and infer causes; then, suddenly, they argue from those causes back to new effects, claiming that a more perfect world must be more fitting for perfect gods. They forget that they had no right to call the gods perfect in the first place, except insofar as perfection is visible in the world we already have.
“That’s why you get all this desperate labor to explain away the ugliness of nature—to rescue the honor of the gods—while still being forced to admit the reality of the evil and disorder that fills the world. You’re told that stubborn matter, or the necessity of general laws, or something like that, limited Jupiter’s power and goodness, and therefore he had to create humans and every sentient creature so flawed and unhappy.
“But notice what that assumes. It assumes, from the start and at full strength, the very attributes you’re trying to protect: boundless power and boundless benevolence. Once you grant those, then yes—maybe you can invent plausible-sounding excuses for the world’s misery. But I ask again: why grant them at all? Why assign to the cause qualities that don’t appear in the effect? Why strain your mind to justify nature using suppositions that may be entirely imaginary, with no trace in the phenomena you’re supposedly explaining?
“So understand what the religious hypothesis really is. It’s a particular way of accounting for the visible features of the universe. If you think the appearances support certain causes, you may infer the existence of those causes. In subjects this vast and tangled, people should be allowed some freedom to propose explanations and argue for them. But that’s where you must stop.
“The moment you reverse direction—once you start from your inferred causes and announce that some other fact must have existed, or will exist, in nature to display these attributes more fully—you’ve abandoned the method you claimed to follow. You’ve smuggled new qualities into the cause beyond what appears in the effect. Otherwise you could never, with any sense, add new features to the world just to make it ‘worthier’ of the god you imagined.
“So where exactly is the supposed offense in what I teach—what I discuss in my school, or rather in my garden? What part of this whole debate touches the security of morals, or the peace and order of society?
“You say I deny providence: a supreme governor who steers events, humiliates the vicious with disgrace and failure, and crowns the virtuous with honor and success in every undertaking. But I don’t deny the actual course of events. That’s open to everyone’s observation. I freely admit what experience shows in the present order of things:
- Virtue tends to bring more peace of mind than vice.
- Virtuous people generally meet with better treatment from others than the vicious do.
- Friendship is, by long human experience, one of life’s greatest joys.
- Moderation is the most reliable path to calm, steady happiness.
“I don’t hesitate between virtue and vice. For anyone with a decent disposition, the advantages overwhelmingly favor the former. And what more do you claim—even with all your additional assumptions?
“You tell me this moral pattern comes from intelligence and design. Fine. But whatever its ultimate source, the pattern itself—the thing that shapes our happiness and misery, and therefore our choices—remains exactly the same. I can still regulate my conduct by what experience has taught me about life, just as you can.
“And if you now say: ‘Yes, but if there is divine providence and a supreme justice running the universe, you should expect some extra, more tailored rewards for the good and punishments for the bad, beyond the ordinary course of events’—then you’re repeating the very mistake I’ve been trying to expose.”
You keep assuming that once we grant the divine existence you argue for, you’re then allowed to do things with it—to draw extra conclusions, to tack on new expectations about the world, and to “improve” the familiar order of nature by reasoning from whatever attributes you assign to your gods. But you’re forgetting a basic constraint: in this whole discussion, the only legitimate direction of reasoning you have is from effects to causes.
And that matters because the reverse move—from causes to effects—only works when you already understand the cause independently. Otherwise it’s just sleight of hand. If the only way you know anything about a cause is by first discovering it in the effect, then you can’t turn around and use that cause to predict new effects you haven’t observed. You’d be pretending to know more about the cause than the effect ever gave you.
Here’s the problem in plain terms:
You can’t squeeze extra information out of a hypothesis when the hypothesis itself was built entirely from the information you started with.
When “This Life Is Just the Lobby” Becomes Bad Philosophy
Now think about the philosophers who treat the world in front of them as merely a waiting room—this life as a corridor to some grander reality, a porch leading to a totally different building, a prologue whose only job is to set up the real drama.
What should a clear-headed philosopher make of that? It looks like they’ve reversed the whole order of nature. Instead of taking the present world as the main object of study, they demote it into a teaser trailer for something else.
So where do they get their idea of the gods—gods who supposedly run that “something else”? Honestly, from their own imagination. Because if their concept of divinity were genuinely derived from what we observe in this world, it wouldn’t naturally point beyond it. It would fit the world we actually see—no more, no less.
To be fair, I’ll grant a modest point: the divinity might have attributes we’ve never seen expressed. It might act from principles we can’t detect. Sure. But that’s just possibility, not evidence. And possibility isn’t a license to assert whatever you like.
The rule is simple:
We have reason to ascribe to a cause only what we’ve actually seen it do—only what its effects reveal.
A Quick Test: Do You See Justice, or Don’t You?
Take distributive justice—the idea that rewards and punishments are handed out fairly.
- If you say, “Yes, the world shows justice,” then I say: fine—justice is satisfied here, as far as it goes.
- If you say, “No, the world doesn’t show justice,” then I say: you have no grounds to attribute justice (in our sense of the word) to the gods.
- If you try to split the difference—“It shows justice sometimes, but not fully”—then I’ll ask: why “not fully”? On what basis do you assign it a larger scope than what you actually observe? You have no right to set the dial above what the evidence displays.
You can’t claim a hidden “full version” of justice behind a partial display unless you have some independent way to know that.
Epicurus’s Core Point: Experience Is the Only Public Standard
So I’ll bring this to a simple conclusion, as if I were addressing an assembly:
The course of nature is open to you and to me alike. The experienced pattern of events is the standard we all use to steer our lives. It’s the only thing anyone can appeal to in the real world—on the battlefield, in government, in public argument. And it should be the only thing we treat as authoritative in private study, too.
Our understanding has limits, and it’s a waste of effort—worse, a temptation to self-deception—to try to leap past them just because imagination wants more room than reality offers.
Even if we argue from the course of nature to some intelligent cause that originated and sustains order in the universe, that move—at best—buys us something that is both:
- uncertain, because it goes beyond anything we can directly experience, and
- useless, because even if we accept it, we still can’t return from that cause to the world and legitimately infer new facts—new rules for life, new expectations about rewards and punishments, new additions to nature’s regular patterns.
If the cause is known only through observed effects, then reasoning can’t magically produce extra effects. It can’t add new chapters to the book of experience.
My Reply: But We Do This All the Time in Everyday Reasoning
When he finished, I told him I noticed he’d borrowed an old political trick: he cast me as “the people,” then tried to win me over by endorsing principles he knows I already like.
Still, I’ll grant the key premise: experience is the only sound standard for judging matters of fact. But I don’t think that premise supports his conclusion—because we do sometimes infer more than what we’ve already seen, and we do it in a way that seems perfectly reasonable.
For example:
If you saw a half-finished building surrounded by bricks, mortar, and tools, wouldn’t you infer that it was designed—and that it would probably be finished soon, and perhaps improved further?
Or if you saw a single human footprint on a beach, wouldn’t you infer that a person walked there—and that the other foot likely left prints too, even if the waves or wind erased them?
So why refuse the same pattern of inference about nature?
Why not treat the world and this life as an unfinished structure: from that, infer a superior intelligence; and from that intelligence—on the thought that a perfect mind wouldn’t leave things incomplete—infer a more complete plan that reaches its finish at some later time or in some other region of existence?
Aren’t these forms of reasoning basically the same?
His Answer: Those Cases Work Only Because We Already Know Humans
He replied that the difference lies in the enormous gap between the subjects.
In human craftsmanship, it’s legitimate to move from effect to cause and then, from the inferred cause, predict further effects—because we already know what humans are like. We know human motives, habits, designs. We’ve observed that human projects have a certain coherence over time, shaped by the regular laws of nature governing creatures like us.
So when we learn that some work came from human skill and effort, we can make many additional predictions about what the person probably did, what they likely intended, what steps are still coming. Those extra inferences don’t come from the single artifact alone—they come from a large background of experience with the species.
But imagine the opposite: suppose we knew “human” only from that one building, with no prior knowledge of people. Then we couldn’t make those further predictions. Any “human attributes” we assigned would be derived only from the building itself, and therefore couldn’t legitimately reach beyond the building.
Even the footprint case shows this. A footprint by itself tells you only that something shaped like that pressed into the sand. It’s our broader experience—our knowledge of how human bodies are built—that lets us infer a second foot, a gait, a direction, and so on.
So yes: we climb from effect to cause, then descend again from cause to predict more effects—but in those everyday cases, we’re silently importing a whole library of other observations. Without that library, the reasoning collapses.
Why That Library Doesn’t Exist for “God”
And this, he said, is exactly why the situation is different with nature.
We know the deity only through the deity’s productions. And this supposed being is unique—one of a kind, not part of a known species or category. There’s no class of comparable beings whose typical traits we’ve observed, so there’s nothing solid to use for analogy.
So the most we can do is this:
- The universe shows wisdom and goodness, so we infer wisdom and goodness.
- The universe shows them in some particular degree, so we infer that degree—precisely as far as the observed effect requires.
But beyond that, we have no warrant. We can’t validly infer more attributes, or higher degrees of the same attributes, unless the world actually shows them.
And without that “permission to suppose,” we can’t reason from the cause to a modified effect. We can’t argue our way into an improved future world we haven’t observed.
If the world later displays greater good, that would support a greater degree of goodness. If it later shows a fairer distribution of rewards and punishments, that would support a stronger commitment to justice. But every imagined upgrade to the world is also an imagined upgrade to the author’s attributes—and since it’s unsupported by evidence, it remains pure conjecture.
The Real Source of the Leap: We Smuggle Ourselves Into God’s Place
He said our biggest mistake—the reason people feel free to invent endlessly here—is that we quietly put ourselves in the role of the Supreme Being. We assume that if we were in that position, we’d run the world according to what we’d call “reasonable” and “good,” so we conclude the deity must do the same.
But the ordinary course of nature already suggests that many things are governed by principles very different from ours. And even if that weren’t so, it violates every rule of analogy to infer the intentions of a being so unlike us from the intentions of humans.
Among humans, we often can infer one intention from another because we’ve seen that human plans typically hang together. Once you learn one part of someone’s character or goal, you can sometimes predict the next move—because you’re reasoning within a familiar, well-studied kind of mind.
But this doesn’t carry over to a being so remote and incomprehensible—so far from us that the comparison between it and any creature we know is weaker than the comparison between the sun and a candle. Such a being, if it exists, reveals itself only in faint traces. Beyond those traces, we have no authority to assign further perfections.
And there’s another twist: what we imagine as a “higher perfection” might not be a perfection at all. Even if it were, praising the deity for qualities that don’t actually show up in the world looks less like careful reasoning and more like flattery—like a speech, not an argument.
So, he concluded, all the philosophy in the world—and religion too, since it’s just another kind of philosophy—can’t carry us beyond ordinary experience. It can’t give us new rules of conduct beyond what common life already teaches.
From the religious hypothesis, we can infer:
- no new facts,
- no forecastable events,
- no extra rewards or punishments we can rationally expect or fear,
- beyond what practice and observation already show.
And on that basis, he said, Epicurus still stands cleared: whatever people’s political anxieties may be, they don’t really hinge on these metaphysical disputes about religion.
My Pushback: “Ought Not” Doesn’t Mean “Does Not”
I replied that he’d missed one important point.
Even if I grant his premises, I don’t accept his conclusion. He says religious doctrines can’t influence life because, by the standards of good reasoning, they shouldn’t influence life. But people don’t reason like that. They routinely draw consequences from belief in a divine existence. They assume the deity will punish vice and reward virtue in ways that go far beyond what we see in the ordinary course of nature.
Whether their reasoning is justified or not isn’t the issue here. The fact is: those beliefs still shape behavior.
And that’s why I’m not sure we should applaud the people who try to strip away those expectations. They may be excellent logicians, for all I know—but I’m not convinced they’re good citizens or good political thinkers, because they remove one restraint on human passions and make it easier for people to break society’s laws with less fear.
Still: Let Philosophers Speak Freely
Even so, I said I might agree with his larger political conclusion—supporting intellectual liberty—but for different reasons.
I think the state should tolerate every philosophical view. I can’t think of a government that suffered politically because it allowed philosophers to argue.
Philosophers don’t produce mass fanaticism. Their ideas usually aren’t that tempting to the public. And if you try to muzzle philosophical reasoning, the cure is worse than the disease: you open the door to persecution and oppression—not just in abstract speculation, but in the areas where most people feel most intensely invested.
A Final Difficulty: Can We Infer a Cause from a Truly Unique Effect?
One more thing occurred to me about his main claim. I’ll only raise it briefly, because it leads into arguments that get delicate fast.
I’m not sure it’s even coherent to say a cause is known only by its effect—especially if that cause is said to be so singular that it has no parallel to anything we’ve ever observed.
We can infer one thing from another only when we’ve repeatedly found two kinds of objects constantly linked. If an effect appeared that was completely unique—unlike anything we could place in a known category—I don’t see how we could form any grounded conjecture about its cause.
If experience, observation, and analogy really are our only guides here, then both cause and effect must resemble other causes and effects we already know—pairs we’ve often seen joined together.
I’ll leave you to follow out what this suggests. I’ll only note this: Epicurus’s opponents always treat the universe—an effect supposedly singular and unparalleled—as proof of a deity—an equally singular and unparalleled cause. If that’s their setup, then your line of reasoning, at the very least, deserves attention. And I admit there’s a real puzzle in how we could ever move from that cause back to the effect and, from our ideas of the former, infer any change or addition to the latter.
SECTION XII
Of the ACADEMICAL or SCEPTICAL PHILOSOPHY
Part I — What “Skeptic” Even Means
It’s hard to think of any topic that’s generated more philosophical argument than this one: proofs for God’s existence, and takedowns of atheists. And yet, even deeply religious philosophers sometimes wonder whether anyone has ever genuinely been a “speculative atheist”—someone who calmly, sincerely believes there is no God.
So what’s going on? How can there be endless books refuting a position that might not even exist in real life?
A comparison helps. The old knight-errant who rode around “saving the world” from dragons and giants never doubted those monsters were real. The enemies in his head were vivid enough. In the same way, the “atheist” or the “skeptic” can become a stock villain in people’s imaginations—someone everyone argues against, even if nobody has actually met the creature.
And that brings us to the skeptic. Preachers and serious philosophers often treat “the skeptic” as an obvious menace to religion. But notice something: no one ever runs into a person who has no opinions—no principles for action, no beliefs about what’s true, no commitments of any kind.
So a fair question is: what do we mean by skepticism? And how far can doubt really go?
Two Kinds of Skepticism
There are two broad styles of skepticism:
- Skepticism before inquiry: doubt as a starting posture, meant to prevent mistakes.
- Skepticism after inquiry: doubt as an outcome, when reflection seems to undermine our ability to know anything solid.
Let’s take them one at a time.
1) Skepticism Before Philosophy: Descartes’ “Universal Doubt”
Some philosophers—Descartes is the famous example—recommend beginning with sweeping doubt. Don’t just question your opinions, they say; question your very faculties. Can you trust your senses? Your memory? Your reasoning? Before you accept anything, you’re supposed to prove those tools are reliable by building a chain of argument from some first principle that cannot possibly be mistaken.
The problem is obvious once you say it out loud.
- If there is no such perfectly privileged “first principle,” the whole project can’t even begin.
- And even if there were, how would you build beyond it without using the very faculties you’ve decided not to trust?
You can’t test your reason without reasoning. You can’t certify your mind’s reliability using anything other than your mind.
So “Cartesian doubt,” taken literally, isn’t just hard—it’s unreachable. And if someone could somehow reach it, it would be a permanent paralysis. No argument could ever bring them back to confidence about anything.
A Reasonable Version of “Start With Doubt”
Still, there’s a moderate, sensible lesson hiding inside that extreme stance.
Used in a more practical way, a skeptical starting point is exactly what you want when you begin philosophy: it keeps you fair-minded, breaks the spell of upbringing and fashionable opinions, and slows you down before you declare victory.
In that reasonable form, the method looks like this:
- Start from principles that feel clear and self-evident.
- Move carefully, step by step.
- Re-check your conclusions often.
- Trace consequences and look for hidden assumptions.
Yes, progress is slow and often limited. But it’s the only way we can realistically hope to reach truth with any stability.
2) Skepticism After Inquiry: When Reflection Turns Against Us
The second kind of skepticism shows up after people have already done serious thinking. It arises when they begin to suspect either:
- that our mental powers are fundamentally unreliable, or
- that these powers simply aren’t suited to reach firm conclusions in the deepest speculative questions.
This style of skepticism doesn’t just poke at theology or metaphysics. It challenges the basics of everyday life. Some philosophers even argue that our senses are questionable in principle—no more trustworthy, in their raw form, than lofty philosophical theories.
And because these views have been argued for (and argued against) by serious thinkers, it’s worth asking: what are the arguments that might lead someone there?
The Classic (“Popular”) Doubts About the Senses
We don’t need to linger on the familiar tricks skeptics use against the senses:
- our organs are imperfect and sometimes mislead us,
- an oar looks bent in water,
- objects look different up close than far away,
- pressing one eye produces double images,
- and so on.
These examples prove something real, but not as much as skeptics sometimes pretend. They show that we shouldn’t treat the senses as infallible on their own. We correct sensory appearances by reasoning about conditions: the medium, the distance, the lighting, the state of the organ. Within their proper limits, the senses can still function as a workable guide to truth and error.
But there are deeper skeptical arguments that aren’t so easily patched up.
Why We Naturally Believe in an External World
Start with a basic fact about human psychology: we’re built to trust our senses. Long before we can argue about anything, we simply assume an external world exists—an “outside” universe that doesn’t depend on our perception and would still be there if no one were around to see it.
Animals live by the same assumption. Their plans and actions presuppose stable objects out there in the world.
The Naïve Picture: “What I See Is the Thing”
There’s another instinctive belief bundled into that: we naturally treat the image given by the senses as if it were the external object itself.
This table I see as white and feel as hard? I don’t spontaneously think, “Ah yes, a private mental representation of an unknown external cause.” I think: “Table.”
I assume it exists whether I’m in the room or not. My presence doesn’t create it. My absence doesn’t erase it. It seems to carry on, steady and complete, independent of any mind paying attention.
Philosophy’s First Disruption: “Only Perceptions Are Present to the Mind”
Now introduce even a little philosophy, and that comfortable picture gets shaken.
Philosophy teaches: what is immediately present to the mind is never the external object itself—it’s a perception, an image, an experience. The senses are channels through which perceptions arrive; they don’t create direct contact between mind and object.
A simple example makes the point: as you back away from a table, it looks smaller. But the “real table,” assumed to exist independently, doesn’t shrink. So what changed? Not the external object (as we conceive it), but the appearance—the image in the mind.
Once you see this, it becomes hard to deny that what we directly encounter—what we call "this house" or "that tree" in experience—is, strictly speaking, a perception: a fleeting copy or representation of something we suppose to exist outside us, something that stays uniform and independent whether or not we perceive it.
The Trap: Reason Forces a New Story, Then Can’t Defend It
So reasoning pushes us away from our first instinctive view. We’re driven to a “more philosophical” system: perceptions are internal representations; external objects (if they exist) are distinct from those perceptions.
But now philosophy runs into a wall. It can no longer claim the irresistible authority of instinct—because instinct led us to a different view, and philosophy itself calls that view mistaken. And when philosophy tries to justify its new picture with a clean chain of argument, it discovers it can’t.
Here’s the challenge: what argument proves that our perceptions must be caused by external objects that are different from perceptions, even if they supposedly resemble them? Why couldn’t perceptions arise from:
- the mind’s own activity,
- the suggestion of some unknown spirit,
- or some other cause entirely beyond our understanding?
We already admit that many perceptions don’t come from external objects at all—dreams, hallucinations, illness, madness. And even if there are external bodies, the idea that matter somehow produces a mental image in a mind—a thing of such different nature—looks deeply mysterious.
Why Experience Can’t Settle the Question
Maybe this is just an ordinary factual question: do external objects produce our sensory perceptions?
If it were, you’d expect experience to answer it.
But experience can’t, because experience never gets you outside your perceptions. The mind only ever has perceptions present to it. It can’t observe a direct connection between perceptions and external objects, because those objects (as “external”) are never directly present for comparison.
So the claim that perceptions are connected to external objects isn’t grounded in experience. And without experience, reason has nothing solid to stand on.
The “God Wouldn’t Deceive Us” Detour
Some try a theological shortcut: God is truthful, therefore our senses must be reliable.
But that move is a strange loop. If God's truthfulness directly guaranteed the senses, our senses would be infallible—a perfectly truthful being would never deceive us at all. Yet our senses plainly do mislead us in many ways.
And there’s a bigger problem: if you start by seriously doubting the external world, you’ve also undermined the usual arguments by which people claim to prove God’s existence and attributes in the first place. You can’t easily use God to rescue the senses when your route to God typically runs through the world the senses deliver.
Why the “Deeper” Skeptic Wins This Round
This is exactly where the more hard-nosed skeptic feels unstoppable. They can press you into a corner like this:
- If you trust your natural instincts, you end up believing the perception is the external object.
- If you reject that and adopt the “philosophical” view—that perceptions are only representations—you’ve abandoned instinct.
- But once you do that, you still can’t produce any convincing argument from experience that proves perceptions are connected to external objects.
Either way, the skeptic says, you don’t get a secure foundation.
A Second Deep Objection: Primary Qualities Aren’t Safe Either
There’s another skeptical line of attack—more technical, more “philosophical,” and arguably even more destructive. Modern thinkers often agree that what we call sensible qualities—hard and soft, hot and cold, white and black—aren’t properties residing in objects themselves. They’re secondary qualities: ways the mind experiences things, with no matching “model” in the object that looks like the sensation.
If you accept that about secondary qualities, skeptics argue, you should accept something similar about so-called primary qualities like extension (spatial size/shape) and solidity. Why? Because our idea of extension is acquired through sight and touch, and those senses deliver qualities that (on this view) exist in the mind, not in the object. If all the sensory qualities are mental, then the idea built entirely from them—extension included—seems mental too.
The usual rescue attempt is: “We get primary qualities not from sensation, but from abstraction.”
But if you inspect that closely, it starts to look empty. Try to imagine extension that is neither visible nor tangible. You can’t. Now try to imagine visible or tangible extension that has none of the sensory character that always comes with it—no hardness or softness, no color, no light/dark contrast. That also collapses.
And the point generalizes. Try to form the idea of “a triangle in general” that is neither isosceles nor scalene, with no particular side lengths or proportions—just pure triangularity floating free of any specifics. If you honestly try, you’ll quickly feel why skeptics think the scholastic talk of abstraction and “general ideas” is confused at best.
Where That Leaves the “External World”
Put these objections together and you get two big skeptical pressures on the common belief in external existence:
- If belief in external objects rests on instinct, it clashes with what reflection seems to show; if it rests on reason, reason can’t supply adequate evidence.
- If all sensible qualities exist in the mind rather than in objects, then even matter’s supposed “primary” structure becomes suspect too.
Strip matter of every intelligible feature—primary and secondary—and what’s left? Not the rich world of common sense, but a vague unknown “something” posited as the cause of our perceptions. It’s so thin and so contentless that a skeptic may not even bother fighting it. There’s almost nothing there to argue about.
Part II — Skeptics Versus Reason Itself
It sounds outrageous to say you can destroy reason with reasoning. But that’s exactly what skeptics aim at. They try to raise doubts both about:
- abstract reasoning (like mathematics and geometry), and
- reasoning about matters of fact and existence (the sort we use constantly in life).
Doubts About Abstract Reasoning: Space and Time
The sharpest attack on abstract reasoning grows out of our ideas of space and time. In everyday life they seem straightforward. But when you put them under the microscope of the “deep sciences” that study quantity and extension, they generate conclusions that look absurd and contradictory.
Nothing in theology—no doctrine invented to discipline the human mind—ever shocks ordinary common sense more than the claim that extension is infinitely divisible, along with its consequences, which mathematicians and metaphysicians sometimes present with a kind of proud triumph.
Think about what that implies: a real magnitude that contains parts infinitely smaller than any finite part; and within each of those, parts smaller still; and so on forever. That’s a structure so extreme that, even if you call it a “demonstration,” it feels too heavy for the mind to carry—because it collides with our clearest, most natural ways of thinking.
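To see the structure concretely, here is a minimal modern sketch (my gloss, not the essay's own notation) of what infinite divisibility asserts about a finite segment of length $\ell$:

$$
\ell \;>\; \frac{\ell}{2} \;>\; \frac{\ell}{4} \;>\; \cdots \;>\; \frac{\ell}{2^n} \;>\; \cdots \;>\; 0
$$

Every term names a real part of the segment, and there is no smallest one: for any part of size $\varepsilon > 0$, the part of size $\varepsilon/2$ is smaller still. The shock described above is that a single finite magnitude is asked to contain this endless hierarchy of actual parts.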
And here’s what makes it unsettling: the reasoning supporting it can seem impeccable. You can feel forced to accept the conclusions once you accept the premises.
We trust geometry. The theorems about circles and triangles can be as convincing as anything humans have ever proved. Yet those same methods yield strange consequences. For example: the "angle of contact" between a circle and its tangent is smaller than any rectilinear (straight-lined) angle; it shrinks further as the circle's diameter increases without limit; and there are curves whose tangents make "angles" smaller still than any circle's angle of contact, and so on without end.
The proof style looks as airtight as the proof that a triangle’s three angles equal two right angles. But the result feels like contradiction wearing a suit and tie.
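In modern terms (a gloss, not the essay's own argument), the angle-of-contact claim can be made precise. Near the point of tangency, a circle of radius $r$ deviates from its tangent line by

$$
d(x) \;=\; r - \sqrt{r^2 - x^2} \;=\; \frac{x^2}{2r} + O\!\left(\frac{x^4}{r^3}\right),
$$

while any rectilinear angle with positive slope $k$ opens linearly, as $kx$. Since $\frac{x^2}{2r} < kx$ for all sufficiently small $x > 0$, the circle hugs its tangent more closely than any straight line through the same point: the "horn angle" counts as smaller than every rectilinear angle. And because the deviation scales as $1/r$, it shrinks further as the diameter grows.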
So reason gets thrown into a kind of stunned hesitation. Even without a skeptic whispering in your ear, reason begins to doubt itself. It sees bright light in some places—clear demonstrations—yet that light fades right into deep darkness. Caught between clarity and incomprehensibility, reason feels dazzled and uncertain, barely able to pronounce confidently on what it’s dealing with.
Time Makes the Problem Feel Even Worse
If anything, the weirdness becomes more obvious when we turn from extension to time.
An infinite number of real parts of time, passing one after another, each “used up” in succession—this seems like a direct contradiction. You might think no one whose judgment hasn’t been warped by over-subtle theorizing could ever accept it.
The Skeptical Twist: Even This Skepticism Is Unstable
And yet reason doesn’t find rest even here. The situation is bizarre: how can a clear and distinct idea contain circumstances that contradict itself, or contradict another clear and distinct idea? That seems incomprehensible—maybe as absurd as anything we can put into words.
So you end up with a kind of skepticism that is itself shaky: a doubt produced by geometry’s paradoxes that becomes, in its own way, the most doubtful thing of all—full of hesitation, with no satisfying place to stand.
Doubts About Matters of Fact: Popular vs. Philosophical
When skeptics attack “moral evidence”—the kind of reasoning we use for facts, existence, and everyday life—the objections come in two styles:
- Popular objections: the ordinary reminders that humans are fallible.
- Philosophical objections: deeper arguments meant to unsettle the foundations.
The popular set is familiar: how weak the human mind is, how different ages and cultures have believed contradictory things, how our judgments shift with illness and health, youth and age, prosperity and hardship, and how each person contradicts themselves across time.
We don’t need to belabor these points. They’re real, but they’re not strong enough to overturn ordinary evidence—because we rely on factual reasoning every moment. We couldn’t survive without it. Any objection that tries to erase that kind of reasoning ends up colliding with the basic conditions of living.
And that leads to the most important point in this whole discussion: the great enemy of extreme skepticism—of full-blown Pyrrhonism—is life itself.
Skeptical principles can sparkle in classrooms and debates, where nothing forces a decision. In that setting, it can be hard—maybe impossible—to refute them decisively. But the moment the skeptic steps out into the daylight, surrounded by real things that stir feelings, needs, fears, and plans, those abstract doubts dissolve. They evaporate like smoke, and even the most stubborn skeptic ends up functioning like everyone else.
The hard-nosed skeptic should stay in their lane and press the objections that come from deeper philosophical digging. And here, honestly, they’ve got plenty to work with.
They can point out, for example, that nearly everything we believe about the world beyond what we directly sense or remember rests on one idea: cause and effect. But what is that idea, really?
- All we ever observe is that two kinds of events have shown up together over and over.
- We don’t perceive a mysterious “power” connecting them—only repeated pairing.
- And we have no logically airtight argument that what has been paired in our past experience must be paired again in the future.
- The only thing that pushes us to expect the future to resemble the past is habit—a built-in instinct of the mind.
That instinct is hard to resist. It’s also not guaranteed to be reliable. So when the skeptic keeps hammering these points, they don’t reveal some superhuman strength of mind. They reveal something more awkward: our shared weakness. For a moment, these arguments can seem to dissolve every kind of confidence we have.
Sure, the skeptic could spin this out for pages. But there’s a practical question hanging over the whole performance: what lasting good would that do?
Because here’s the most devastating objection to extreme skepticism: as long as it’s taken seriously and carried all the way through, it can’t produce anything durable or useful. Ask the radical skeptic what they mean—what they’re trying to accomplish with all this clever doubt—and they quickly run out of answers.
Compare them to other thinkers:
- An astronomer arguing for the Copernican or Ptolemaic system hopes to persuade you of something that sticks.
- A Stoic or an Epicurean lays out principles meant to shape how you live.
- But a full-on Pyrrhonist—someone committed to universal doubt—can’t reasonably expect their philosophy to keep its grip on the mind, much less improve society.
If Pyrrhonism truly took over, life would grind to a halt. Conversation would stop. Action would stop. People would sit in a kind of mental paralysis until hunger, thirst, and exhaustion ended the experiment.
And yet, we don’t seriously fear that outcome. Why not? Because nature is stronger than principle. However dizzy a Pyrrhonist can make you with intricate arguments, everyday life snaps you back almost immediately. A minor interruption—a knock at the door, a sudden noise, a burst of pain, a pressing appointment—chases off the doubts. In practice, the skeptic returns to the same patterns of belief and decision-making as everyone else: other philosophers included, and even people who never opened a philosophy book.
When the Pyrrhonist “wakes up,” they’re usually the first to laugh at themselves and admit what this kind of skepticism really does. It isn’t a blueprint for living. It’s an intellectual amusement—one that highlights a strange feature of the human condition:
We have to act, reason, and believe. And yet, even with our best effort, we can’t fully justify the foundations of those habits or permanently silence the objections that can be raised against them.
That said, there is a gentler, more workable form of skepticism—often called Academic skepticism—that can actually be stable and useful. In fact, it can grow out of Pyrrhonism once its blanket doubts get tempered by common sense and careful reflection.
Most people are naturally confident—often too confident. They see one side of a question, don’t vividly grasp the counterarguments, and rush into whatever view fits their temperament or interests. They don’t just believe; they dig in. Doubt feels like sand in the gears: it complicates their thinking, cools their emotions, and slows their decisions. So they try to escape uncertainty as fast as possible by becoming even more forceful and stubborn in what they affirm.
But if these dogmatic reasoners could really take in the fragility of human understanding—even at its best, even when it’s trying to be accurate and cautious—that realization would soften them. It would produce more modesty, more restraint, and less contempt for opponents.
You can see hints of this even in everyday life:
- People without much education often notice that scholars—despite all their training—tend to speak with caution.
- And if a learned person is naturally arrogant, even a small dose of Pyrrhonian doubt can puncture that pride by reminding them that any advantage they’ve gained is tiny compared with the deep, built-in limitations shared by all human minds.
In general, a good thinker should always carry a certain amount of doubt, caution, and modesty—no matter the topic, no matter the decision.
There’s another kind of “mild skepticism” that can help us, too: restricting our inquiries to subjects that actually fit the human mind.
Human imagination loves the distant and the spectacular. It wants the farthest reaches of space, the deepest past, the furthest future. It gets bored with the familiar, so it runs toward the grand and strange.
But good judgment does the opposite. It avoids the highest and most remote questions and stays closer to common life—the kinds of topics that show up in daily experience and practical work. Let poets and rhetoricians decorate the sublime; let priests and politicians use it for their own purposes. The careful thinker keeps their feet on the ground.
And nothing helps us reach this healthier attitude more than being fully convinced of how powerful Pyrrhonian doubt is—and how impossible it is to escape it by sheer logic alone. Only strong natural instinct pulls us out.
People who genuinely enjoy philosophy will keep thinking, of course. They know philosophy isn’t some alien activity; at its best, it’s just ordinary reasoning cleaned up—made more systematic and corrected where it tends to go wrong. But if they keep in mind how limited their faculties are—how short their reach is, how messy their operations can be—they won’t be tempted to wander too far beyond everyday life.
After all, if we can’t give a satisfying rational explanation for why we believe, after a thousand trials, that stones fall and fire burns, how can we pretend to settle questions about the origin of worlds, or the structure of nature from eternity to eternity?
This self-limitation is so reasonable that it barely takes any work to see it. Just examine the mind’s natural powers and compare them to the kinds of things it tries to understand. That comparison quickly shows what topics are genuinely suited to science and inquiry.
Here’s one proposal: the only subjects that allow truly demonstrative knowledge—knowledge as airtight as proof—are quantity and number.
Any attempt to extend this kind of perfect certainty beyond mathematics slides into illusion.
Why do math and measurement work so differently? Because the parts of quantity and number are perfectly uniform. That sameness makes their relations intricate. It also makes it meaningful—and often crucial—to trace, through many intermediate steps, whether two quantities are equal or unequal.
But with everything else, our ideas are not uniform in that way. They are plainly distinct. No matter how hard we scrutinize them, we don’t uncover hidden necessity; we mostly just recognize difference and say, almost trivially, “this is not that.”
If there’s any confusion in these non-mathematical disputes, it often comes from the vagueness of language—and the cure is clearer definitions.
Notice the contrast:
- “The square of the hypotenuse equals the sum of the squares of the other two sides” can’t be known just by defining the words precisely. You still need a chain of reasoning.
- But “Where there is no property, there can be no injustice” is basically just a dressed-up definition—once you define “injustice” as a violation of property.
This is why so many famous “syllogisms” in fields outside mathematics feel impressive but don’t actually add knowledge. They rearrange meanings. They sharpen definitions. They don’t demonstrate new truths the way math does.
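The contrast can be put schematically (my formalization of the example above, with "injustice" defined as the text suggests):

$$
\text{Injustice}(x) \;\overset{\text{def}}{=}\; \exists p\,\big(\text{Property}(p) \wedge \text{Violates}(x, p)\big)
$$

If nothing is property, the existential clause is empty, so $\text{Injustice}(x)$ fails for every $x$: the maxim is true by definition alone. The Pythagorean relation $a^2 + b^2 = c^2$, by contrast, does not fall out of the definitions of "square" and "hypotenuse"; it still needs a chain of geometric reasoning.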
So, on this view, mathematics—quantity and number—is the proper home of strict proof.
Everything else we investigate concerns matters of fact and existence. And these are not the kinds of things that can be demonstrated.
Here’s the key point: whatever is could have been otherwise. Denying a fact never creates a logical contradiction. The idea that something doesn’t exist is just as clear as the idea that it does.
That’s not how mathematics works. In math, a false statement isn’t merely wrong; it becomes confused and unintelligible—because the relations are fixed by the ideas themselves. “The cube root of 64 equals half of 10” doesn’t just fail; you can’t distinctly conceive it as true.
But statements about existence are different. “Caesar never existed,” or “the angel Gabriel never existed,” may be false—but there’s no contradiction in imagining them. They remain perfectly thinkable.
So how can we ever prove that something exists?
Only by reasoning from cause and effect—from what produces it or what it produces. And that kind of reasoning rests entirely on experience.
If you try to reason purely a priori—without experience—then, for all you can prove, anything could cause anything. A pebble could extinguish the sun. A person’s wish could steer the planets. Logic alone doesn’t draw those boundaries. Only experience teaches us what tends to follow what, and within what limits.
This is the foundation of what the old writers called moral reasoning: not “morality” in the narrow sense, but reasoning about real life—about what happens in the world. It makes up most of what humans know and almost everything that drives human action.
These reasonings deal either with particular facts or general facts:
- Particular facts: everything we deliberate about in daily life, and most work in history, chronology, geography, and astronomy.
- General facts: the sciences that look for broader patterns across a whole kind of thing—politics, natural philosophy, medicine, chemistry, and so on—where we study the typical qualities, causes, and effects of a species of objects.
Where does theology fit?
Insofar as divinity tries to prove that God exists and that souls are immortal, it mixes both kinds of factual reasoning—some about particular events, some about general patterns. It has a footing in reason when it leans on experience. But, by its own lights, its strongest foundation isn’t proof. It’s faith and revelation.
And what about morals and criticism—questions of right and wrong, and of beauty and art?
These aren’t primarily matters for the understanding in the way geometry is. They’re matters of taste and sentiment. We feel beauty—whether moral or natural—more than we “perceive” it as a bare fact.
If we argue about beauty or try to fix standards, what we usually end up reasoning about is a different kind of fact: for example, what people generally prefer, how human psychology tends to respond, or what patterns of judgment show up across cultures. Those can be investigated. But the core experience is still felt.
Now, if you walk through a library with these principles in mind, you’ll become ruthless.
Pick up any book—say, a volume of theology or scholastic metaphysics—and ask two questions:
- Does it contain abstract reasoning about quantity or number?
- Does it contain experimental reasoning about matter of fact and existence?
If the answer to both is no, then, as far as genuine knowledge goes, it has no claim on you.
Throw it in the fire.
Because it can contain nothing but sophistry and illusion.