Should you have faith in John W. Loftus?

Posted on 02/25/12

Yesterday I received an email from my co-author (God or Godless, Baker, 2013), atheist John W. Loftus. He said in the email “I wrote something with you in mind” and then provided a link to the following article in his blog:

Faith is an Irrational Leap over the Probabilities

Good ole’ John.

Let’s go to the heart of what he says:

“Probability is all that matters. Accepting some conclusion because it’s merely possible is irrational. We should never ever do that.”

This is an interesting claim. Let’s call it John’s Belief Stricture (aka John’s BS).  So according to John’s BS, we should only accept claims that are probably true. That raises an interesting question: is John’s BS probably true? If not, we shouldn’t accept it. How would we know when John’s BS moves from being possibly true to probably true? What is the threshold of probability that renders assent to it rational or justified?

It seems to me that John’s BS is probably just confused.

John writes:

“The ONLY sense I can make of the way believers use the word “faith” is that it’s an irrational leap over the probabilities.”

Is that what he thinks John Calvin meant by faith? Thomas Aquinas? Augustine? Alvin Plantinga? Richard Swinburne? That is precisely as dull as a Christian writing:

“The ONLY sense I can make of the way atheists use the concept of “the good” is that it’s something they arbitrarily happen to like.”

Any Christian who would say that just shows his ignorance of what atheists actually say about the good. And any atheist who writes what John has written has likewise merely established that he knows next to nothing about what Christians actually say about faith. Perhaps if John is limited to surveying the belief of the Baptist seniors Bible study down the street from his house he might come up with something in the vicinity of what he wrote, but to make any claim beyond that is just silly.

That’s surprising for somebody like John who has written and edited several books on the topic of atheism and Christianity as well as running his successful blog for several years. How do you spend that much time on a topic and yet still remain that ignorant about it? Presumably you really have to work at it. (By analogy, think of an obese man who begins to jog ten miles a day and yet remains obese a year later. You’d think he must really be working at taking in piles of empty calories to remain that heavy. Perhaps he’s vying for a career as a sumo.)

I suspect that underlying John’s BS is the conviction that people ought not hold beliefs beyond what the evidence warrants (let’s call this the Evidence Belief Principle) and that “faith” simply is the flouting of the EBP.

With that in mind, let me offer some observations.

Get rid of the us-against-them “believers” rhetoric.

The first thing John needs to do is get rid of his insulating rhetorical garbage in which he always contrasts himself and his fans against this amorphous group he calls “believers”. The fact is that everyone can be guilty of holding a belief beyond what the evidence warrants. You can have an irrational commitment to your favorite sports team, or to the rightness of your nation’s foreign policy, or to your own intellectual prowess (*cough cough*).

Not every belief requires positive evidence before it is accepted

Whether you accept John’s BS or the more plausible EBP it should be obvious that you cannot seek positive evidence for every belief. At some point you’re going to have to accept certain principled starting points and reason from them. This doesn’t mean that those principled starting points (what philosophers call “basic beliefs”) are immune to evidence. But it does mean that one’s acceptance of them is prima facie and retained absent defeaters.

Faith is inescapable

When you accept a principle like John’s BS or EBP and seek to reason in accord with it, you are having faith in it. Incidentally, I think you shouldn’t have faith in John’s BS and I offer two defeaters for it, the first here and the second in the next section. The first defeater is rooted in John’s observation that epistemic certainty, however fleeting, does seem to be possible. This requires us to say that John’s BS is hyperbolic. Hyperbole is okay on a Valentine’s Day card (“You mean everything to me!”) but not in a formal epistemological principle.

Before we turn to the next defeater let us not forget that people who accept John’s argument are also having faith in John as a reliable authority on those matters. I guarantee that when John writes something in his blog and someone posts “Right on John! I always accept your arguments!” he doesn’t reply “Have you sought independent evidence to confirm my reliability?”

John’s bigger problem

This brings us to that more serious defeater to John’s principle: it undermines our ability to believe almost everything we actually believe.

John is critical of people leaping from the possible truth of a proposition to believing the proposition. But then the same point applies to probability. If a proposition is merely probable then again, we ought not believe it. According to John’s epistemological advice, we ought only to believe that which is certain. He writes:

It’s probable that if someone jumps off a building he will fall to the ground. How probable is this? Well, since it’s possible he won’t fall (per our examples above) then we cannot say we are certain he will fall. But it’s “virtually certain” he will, like a 99.9999% chance (and I think that’s being very very generous).

Virtual certainty is not certainty and virtually true is not true. Thus, if John is consistent in the way he set up his epistemology then he can only believe truths that are epistemically certain (i.e. indefeasible and incorrigible). This means that John cannot believe that dropping a lead weight off a bridge will result in the lead weight falling. He can only believe it will probably fall. But things get even wackier. Consider this:

The universe is more than five minutes old.

Can John believe that is true? No. At best he can believe it is probably true. After all, as Bertrand Russell famously observed, it is possible that the universe was created five minutes ago with all its crumbling mountains, dusty books and half digested meals in our stomachs.

In fact, it is worse. John doesn’t even know that “The universe is probably more than five minutes old”. The reason should be obvious: John is not certain about those probabilities. So he can only believe “It is probably true that the universe is probably more than five minutes old.” But wait. How does he know that any of this is probably true instead of merely possibly true? He doesn’t. It seems like John doesn’t know much of anything. Talk about digging oneself an epistemological hole.
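To make the regress explicit in shorthand of my own (this is my gloss, not a formula John offers): write Pr(x) for “it is probably the case that x” and let A be “the universe is more than five minutes old.” On John’s stricture the strongest thing he is ever entitled to assert is some member of the chain

Pr(A), Pr(Pr(A)), Pr(Pr(Pr(A))), …

and since every link in that chain is itself merely probable rather than certain, no finite number of steps ever bottoms out in plain assent to A.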

Now let me state with directness and as much charity as I can muster that epistemologies which make it impossible for a person to believe that they existed a day ago are a signal that the person who formulated the epistemology is not a reliable authority on the matter. Would you trust a mechanic if, after “fixing” your car, it ran so poorly that you couldn’t even get it off the lot?

What constitutes evidence?

Let’s close by noting a rather large pachyderm in the room. To the extent that evidence is required for a belief, who decides what constitutes legitimate evidence? Let’s say that Bill is accused of murder but his parents adamantly maintain his innocence. You don’t know Bill, but you do know some of the evidence, and it looks incriminating. To what extent do Bill’s parents have evidence about Bill’s character that you lack? Conversely, to what extent might their personal relationship to Bill have clouded their assessment of the evidence? There is no easy answer. The fact is that personal acquaintance can both yield additional evidence and skew the objective assessment of evidence.

Note as well that what we count as evidence is decided in part relative to the particular plausibility structure we hold. This is not to say that our plausibility structure is wholly immune to external critique. (It is possible to shift paradigms based on evidence.) But our skepticism against evidence is held relative to the set of beliefs we have. “Bill would never murder” is deeply rooted in the plausibility structure of his parents but not of yours, and so you assess the evidence differently than Bill’s parents.

I could go on, but I trust this is sufficient analysis to establish that when it comes to matters of epistemology, one ought not put their faith in the analysis of John Loftus.

  • piero

    A very interesting article.

    A few observations:

    1. As everything else we know about the world, the Probability Truth Principle–or PTP for short–is based on induction. There is no way we can escape the problem of induction, because we have finite knowledge and because we exist within the flow of our local time. That applies to you as well, so I wouldn’t be quite so smug.

    2. Support for a football team or smugness about one’s own intellectual prowess has nothing to do with PTP. They are explained in terms of emotions, i.e. the non-reasoning parts of our brain. By the same token, you could claim that your love for your spouse disproves PTP; that’s simply not the case. To be charitable, let’s call it a category error. Most human beings I know (including myself) behave rationally only to satisfy their desires, which are rooted in their biological make-up. We desire to live, so we take rational steps to avoid being killed. Rationality does not exist in a vacuum. Yourself are probably motivated to keep your blog running because you have an emotional attachment to your beliefs. No human being behaves like Mr. Spock; it is just impossible, given the physical structure of our brain.

    3. Epistemic certainty is obviously unattainable. Does that mean you won’t set up your alarm clock tomorrow because you cannot be certain there will be a Monday? You probably think God will produce a Monday. It does not matter in the least: you are still trapped in the induction problem. You cannot second-guess God, as the Newcomb paradox illustrates quite clearly, so you are still behaving as if your future were somehow predictable. That does not make you rational; at best, it can make you as irrational as Loftus.

    4. Is it true that a dropped weight will probably fall, or is it merely probable that a dropped weight will probably fall? You can nest probability statements indefinitely; that doesn’t change the fact that, based on observations, we can be 99.999999999999% certain that the weight will drop. Your argument just shows that language is a human construct which is restricted by the same laws that rule reality, as identified by our brains: our brains are physically limited objects; hence, it would be unreasonable to expect language to fully describe reality. We must deal with the imperfections, paradoxes and contradictions to which language is liable. The final test of our beliefs is not our declarative sentences, but our behaviour: have you ever jumped out of your office window on the off-chance that you might be able to fly home?

    5.
    a. “Probably the universe is more than five minutes old”

    b. “Probably the universe is probably more than probably 5 minutes old”

    c. “It is probably true that probably the universe is probably more than probably 5 minutes old”

    This is a bit silly. Assigning probabilities to an event is a way to measure our ignorance of the underlying causes, or our inability to quantify them fully. In this case we have a knower (our brain) that has developed methods to quantify its ignorance. It is a category error to expect that that very same brain can quantify the ignorance measure of its own ignorance. That would require a superior instance. In fact, you are surreptitiously arguing in favour of the existence of God, in the restricted sense of a superhuman intelligence: in the formula “it is true that a dropped weight will probably fall”, the judge of that “probably” is my brain; in the formula “it is probably true that a dropped weight will probably fall”, who is the evaluator of the first “probably”? If it’s still my brain, then the second formula is a silly restatement of the first one; if the second formula is not a silly restatement of the first one, then the evaluator must be different. I wonder what or who it could be…

    Anyway, as I said before, I found your article very good and it gave me plenty of food for thought. I thank you for it.

    • http://twitter.com/davidstarlingm davidstarlingm

      Just one question.

      I’m sure this has already been handled by many philosophers wiser than I, but I still wanted to ask.

      You say that all truth propositions are subject to the PTP — that induction is inescapable. Does this hold true for mathematical abstractions? Do we know that 2+2=4 because of induction or because we simply know it to be true?

      • piero

        I’m sure the answer has been provided by many philosophers wiser than me too. In fact, any philosopher at all will do, since I’m not a philosopher.

        Anyway, for what it’s worth, I think there is nothing we just know to be true, if we interpret “knowing” as “being able to provide convincing evidence for”. As I said in a previous post, we cannot prove that reality (by which I mean the source of sensorial inputs) actually exists. Hence, there can be nothing we “know” about what we call reality. So yes, 2+2=4 must be based on induction too, because there is no way evolution could have instilled in a chimp’s brain that fact as an a priori. I am of course referring to 2+2=4 as an empirical fact: if I have 2 stones in one hand, and 2 stones in the other, I can arrange all of them so they mark the corners of a square, and that’s the meaning of “4”. On the other hand, the string of symbols “2+2=4” is just a theorem that can be derived from Peano’s axioms (in bases greater than 4, of course). In other words, we can translate our sensory experience into symbols, then identify regularities, then identify the most fundamental concepts that can account for our experience, then formalize the whole lot and come up with a tautology we call “mathematics”.
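        For what it’s worth, here is a minimal sketch of that last point in Lean 4 (the names PNat, add, two and four are mine, purely for illustration): once addition is defined in Peano style, the equation holds by mere unfolding of the definitions, with no stones required.

            -- Illustrative sketch only: Peano-style naturals built from scratch.
            inductive PNat where
              | zero : PNat
              | succ : PNat → PNat

            open PNat

            -- Addition by recursion on the second argument, mirroring the Peano clauses:
            --   n + 0 = n   and   n + S(m) = S(n + m)
            def add : PNat → PNat → PNat
              | n, zero   => n
              | n, succ m => succ (add n m)

            def two  : PNat := succ (succ zero)
            def four : PNat := succ (succ (succ (succ zero)))

            -- add two two computes to succ (succ (succ (succ zero))), i.e. four,
            -- so reflexivity is enough to close the proof.
            theorem two_add_two : add two two = four := rfl

        In that sense the formal statement is a tautology of the system; whether the system tracks stones is the separate, empirical question.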

        • http://twitter.com/davidstarlingm davidstarlingm

          “So yes, 2+2=4 must be based on induction too, because there is no way evolution could have instilled in a chimp’s brain that fact as an a priori.”

          So it simply “can’t be” because natural processes couldn’t have made it happen? That’s an awfully self-reinforcing and rather silly approach to take, don’t you think?

          • Robert

            Well, it could have been alien teenagers, but at this point one of us would drag out Occam and determine that, chances are, that’s not the best explanation.

          • piero

            Which natural processes exactly? How do you come to know those physical processes?

            You are again trapped by induction. Anything we call a “process” must be regular, otherwise we would not refer to it as a “process”. In fact we would not refer to anything at all, because in chaos no mind could exist.

            You probably think that “2+2=4” is true in all possible universes. For that to work, you would first have to prove that “1” exists in every possible universe, and is not just derived from our brain’s tendency to split the world into disjoint components.

    • randal

      “As everything else we know about the world, the Probability Truth Principle–or PTP for short–is based on induction.”

      Not everything we know is based on induction. Some of what we know is known through deduction, some through abduction, but most of it is properly basic. I recommend you read a good introduction to epistemology like Robert Audi’s Epistemology (Routledge).

      “Rationality does not exist in a vacuum. Yourself are probably motivated to keep your blog running because you have an emotional attachment to your beliefs. No human being behaves like Mr. Spock; it is just impossible, given the physical structure of our brain.”

      Yes, of course. I talk at length about these very issues in two of my books. But this doesn’t have anything to do with my critique of John Loftus.

      “Epistemic certainty is obviously unattainable.”

      That’s not true. There are indefeasible beliefs, though they are far fewer than classical foundationalists believed and they are inadequate to support a noetic structure as classical foundationalism required. I talk about this at length in one of my books.

      As a moderate foundationalist I don’t have any of the problems John does. If you want to read about my views I recommend my book Theology in Search of Foundations (Oxford University Press, 2009).

      Thanks for the compliments.

      • Robert

        When in the history of our planet did proper basicality pop into existence? At some point in our evolution, a bunch of brains noticed that simple explanations work more often than complicated ones. These brains decided not to jump off a cliff – even though a complicated explanation like “people who jump off cliffs usually get hurt until today” was possible for them to believe.

        Organisms which failed to form simple beliefs about reality were taken from our gene pool. Maybe they believed that gravity works always, except right now, and were proven wrong the hard way. In other words, nature is unkind to the one who thinks that Occam’s Razor is false. Try telling reality that you have an a priori truth on your ability to drink poison. Sure, this poison usually kills people, but today is Feb 25th, and it’s entirely justified to think that poison does not kill people on Feb 25th – if we reject Occam.

        If the brain has some amazing “a priori truth factory” that works to produce accurate beliefs, it makes me wonder why a thirsty hunter-gatherer couldn’t use her “a priori truth factory” to locate drinkable water.

        And so, how is it that some beliefs like the existence of a supremely powerful, disembodied mind are justifiable a priori while mundane beliefs like “this mushroom will make me sick” actually required someone to eat that kind of mushroom?

        • randal

          “At some point in our evolution, a bunch of brains noticed that simple explanations work more often than complicated ones. These brains decided not to jump off a cliff – even though a complicated explanation like “people who jump off cliffs usually get hurt until today” was possible for them to believe.”

          That trial and error learning was only possible because it was embedded in a rich epistemological framework that included at its foundation basic sense perception, rudimentary rational intuition and deduction, memory, testimony, et cetera. Induction is part of a rich epistemic network.

          “Try telling reality that you have an a priori truth on your ability to drink poison.”

          Is that your argument against the existence of synthetic a priori knowledge? That’s like arguing against the new Toyota Camry because it sinks in water.

          “If the brain has some amazing “a priori truth factory” that works to produce accurate beliefs, it makes me wonder why a thirsty hunter-gatherer couldn’t use her “a priori truth factory” to locate drinkable water.”

          If the Toyota Camry is such a great car why don’t people take it water-skiing?

          • piero

            Randal:
            “Induction is part of a rich epistemic network.”

            That’s just not the case, as any animal trainer knows. Induction is embedded in our minds even before we acquire language. When we acquire language, we can call our conditioned reflexes “induction”. Then we start to do philosophy and use the name for the abstract principle.

            • randal

              I recommend you begin by getting up to speed not on what animal trainers say, but on what epistemologists say (and why). Read about the epistemology of sense perception, rational intuition, memory, testimony, proprioception, and a host of other sources including intuition. People who train dogs to walk on leash are not reliable authorities in these matters.

          • Robert

            “Is that your argument against the existence of synthetic a priori knowledge?”

            I think so. You can get knowledge of something by [1] interacting with it (directly/indirectly) [2] by being descended from ancestors who were selected by it (DNA contains information, ya know) or [3] by building a concept that applies to some range of things (there’s probably a term for this, I don’t know).

            Number [3] is an “algorithm”; The human brain is able to process information and get new, sometimes useful, information.

            But for any given “algorithm”, we should still ask … “Does it work?” “Does it produce an a priori truth or an a priori belief?” Enter science.

            It was scientists (not philosophers) who disproved Kant’s a priori beliefs on space and time. Contrary to Kant, the true nature of space and time were NOT universally known, independent of sensory experience. Kant did not know about natural selection, general relativity or quantum mechanics, and so it’s no surprise that the “a priori truth factory” in his brain was faulty in these areas.

            If we were to raise a “brain” in a vat with no sensory input and no evolutionary history, I don’t think neurologists would expect it to discover arithmetic. Do you?

            • randal

              I wrote a nice response to you that disappeared thanks to my computer.

              “It was scientists (not philosophers) who disproved Kant’s a priori beliefs on space and time.”

              I think you mean Euclid’s views.

              “Contrary to Kant, the true nature of space and time were NOT universally known….”

              Actually, Kant taught that we don’t know the true nature of space and time. According to Kant, space and time are mental categories that we impose on experience. This leads to an ironic conclusion: while Kant was the first philosopher to distinguish clearly synthetic a priori knowledge over-against a posteriori knowledge and analytic knowledge, he actually didn’t have the epistemological resources to grasp any synthetic a priori knowledge. (This point is well argued by Laurence BonJour.)

              The fact is that synthetic a priori knowledge is not infallible, but that doesn’t mean it doesn’t exist. It clearly does. I’ve talked about this in the past, for example here: http://blogs.christianpost.com/tentativeapologist/2009/10/familiar-fact-or-fantastic-folly-on-the-relativity-of-strangeness-part-1-14/

              Since the middle of the twentieth century it became popular for philosophers who had imbibed 40 proof empiricism to reject synthetic a priori knowledge. That has changed in the last fifteen years. I recommend you check out BonJour’s outstanding book In Defense of Pure Reason (Cambridge University Press, 1998) as a segue into the current literature on the a priori.

        • randal

          Oh, and to get down to brass tacks, you know “nothing can be red and blue all over” a priori.

          • piero

            I disagree. The human brain develops gradually, hence any a priori knowledge would have to be embedded into it at some identifiable time, which makes it not a priori after all.

            Unless you are willing to claim that a fertilized human egg “knows” that an object cannot be simultaneously red and blue, you must admit that this fact is derived from experience, hence not a priori. Similarly, if a solid wooden cube is at point A, no other solid object can simultaneously be at point A. This is clearly a learned fact, not an a priori truth.

            Our brains were developed to allow us to survive in this universe (in a tiny portion of it, anyway), and hence are designed to accept as true our sensory perception. That does not mean that our convictions are true: before Einstein, most philosophers would have claimed that time flows uniformly across the whole universe, and would probably have claimed this was an a priori truth. I mean, it is obvious, isn’t it?

            • randal

              “Unless you are willing to claim that a fertilized human egg “knows” that an object cannot be simultaneously red and blue, you must admit that this fact is derived from experience, hence not a priori.”

              You’ve confused the fact that experience grants us knowledge of categories like “red” and “blue” with the trans-world justification we have for a proposition. I recommend you start by reading Immanuel Kant’s discussion in Critique of Pure Reason.

              Here’s another illustration. Yes, I was first taught 7+5=12 from Mrs. Brown in grade 1. But once I grasp that proposition my justification for it transcends the occasion of experience in which I first grasped it. This is the single most important point in getting a grip on synthetic a priori knowledge. You need to grasp the difference between an occasion of experience and the synthetic a priori knowledge that arises out of the experience.

              • http://twitter.com/davidstarlingm davidstarlingm

                Further illustration: one of my elementary textbooks told me that 2+2=4. Another one of them informed me that there is a large concrete obelisk in Washington, DC named “The Washington Monument”.

                I was a rude, arrogant little snot, and so I was immediately skeptical of anything that my textbooks told me. However, because I recognized the potential usefulness of remembering the information they contained, I filed both “2+2=4” and “giant concrete obelisk” under Provisionally Accepted. Yeah, I was a cheeky little thing.

                Some time later, both of these facts moved out of Provisionally Accepted and into Sufficiently Verified — the latter when I actually visited DC and the former when I gained cognitive control over Abstract Mathematics. But, as I’m sure you can see, the processes by which I accepted these two propositions were very different. My belief in the Washington Monument was built up by successive exposures and observations; my belief that 2+2=4 was built up by cognitive abstractions.

                I never counted four objects by twos and thought, “Ahah! Further evidence that 2+2=4.” Rather, it was my knowledge that 2+2=4 which allowed me to count in that fashion.

                • randal

                  Well put!

              • Robert

                “But once I grasp that proposition my justification for it transcends the occasion of experience in which I first grasped it. This is the single most important point in getting a grip on synthetic a priori knowledge.”

                You are describing a process where your mind finds a pattern or system of getting new, hopefully reliable, information.

                If I were to build a narrow AI that can do the same, would it be gaining synthetic a priori knowledge?

                • randal

                  “You are describing a process where your mind finds a pattern or system of getting new, hopefully reliable, information.”

                  It is also knowledge that goes beyond the actual world to encompass all possible worlds.

                  “If I were to build a narrow AI that can do the same, would it be gaining synthetic a priori knowledge?”

                  What makes you think AI has any knowledge?

                  • Robert

                    “What makes you think AI has any knowledge?”

                    If what you label ‘knowledge’ requires an ontologically basic mental entity, like a soul, then I don’t expect an AI to ever gain ‘knowledge’. Otherwise, I see nothing in the laws of physics to keep us from building an AI with knowledge and experiences that match our own.

                    Related, this is a good read on future AI:
                    Intelligence Explosion: Evidence and Import (draft)

                    • http://twitter.com/davidstarlingm davidstarlingm

                      I’d think that, at best, an AI would be capable of storing patterns of information that it would apply to collected data to produce Boolean decisions. Knowledge, I think, transcends mere data patterns.

                    • randal

                      To know something involves a mind that bears a certain relationship to a proposition with semantic content. Epistemologists debate the nature of that relationship but unless you follow Quine’s reductionistic project of epistemology naturalized (and thank goodness most epistemologists don’t) that relationship of knowing semantic content is not something that occurs simply when something produces all the behaviors to be called conscious (i.e. a Turing test).

            • Robert

              “Unless you are willing to claim that a fertilized human egg “knows” that an object cannot be simultaneously red and blue, you must admit that this fact is derived from experience, hence not a priori.”

              I would not be surprised if, somewhere in all those base pairs, sperms and eggs already have instructions that lead to the near universal human belief of “an object cannot be simultaneously red and blue”. Grasshoppers do not form such beliefs, but then again, they have different DNA.

              piero, I doubt you disagree, but I just want to emphasize that there is a VAST amount of information transmitted from one generation to the next. When we are talking about the origin of beliefs, we should not forget this enormous link we have to the successful beliefs of others before us.

              • http://twitter.com/davidstarlingm davidstarlingm

                Not to disagree entirely, but I would say that a fairly small quantity of cognitive information (i.e. “genetic memory”) is transferred in the genome. I would say that notions like the blue/red contradiction are consequences of developing a self-aware mind capable of abstract cognitive process.

                • Robert

                  I think that’s probably right.

              • http://twitter.com/davidstarlingm davidstarlingm

                In other words, the programmed framework for self-awareness contains the necessary axioms to deduce a priori truth without reference to induction.

      • piero

        Randal:
        “’Epistemic certainty is obviously unattainable.’

        That’s not true. There are indefeasible beliefs, though they are far fewer than classical foundationalists believed and they are inadequate to support a noetic structure as classical foundationalism required.”

        I don’t believe there are indefeasible beliefs (and please don’t reply that my belief in the non-existence of indefeasible beliefs is itself an indefeasible belief, because I swear I’ll go to wherever you are and punch you in the gob).

        Even the most obvious candidate (A=A) is troublesome. In reality, A is never equal to A, because everything exists in time. The principle is only a convenient way of saying that we can ignore small differences, just as we accept that talking about lines and points makes sense, even though we know they don’t exist in reality. On the other hand, if you understand A=A to mean “anything is equal to itself”, the principle is pointless: just drop “equal to” and you get “everything is itself”. Not a very illuminating truth, if I might say so.

    • Grady

      Loftus is an admitted liar. Who cares what he thinks?

      In his book he admits that he committed various crimes in his younger days, including assault.

      This book may have come to the attention of some of his victims.

  • Emilie

    “The first defeater is rooted in John’s observation that epistemic certainty, however fleeting, does seem to be possible.”

    Can you explain why you think that John believes this? At the blog entry you cite, he actually wrote this: “…practically nothing is certain”. I suspect that John is a fallibilist.

    If, as I suspect, John is a fallibilist, then I don’t see how your skeptical scenario (the possibility that everything was created 5 minutes ago with the appearance of age) challenges fallible knowledge. That is, I don’t see how your scenario does anything more than merely demonstrate the logical possibility of error, and I don’t see how that threatens fallible knowledge.

    • randal

      You ask why I wrote that John seems to believe that “epistemic certainty, however fleeting, does seem to be possible.” It is because he wrote “practically nothing is certain”. To say practically nothing is certain is to say a few things are certain. And saying practically no people at the party are murderers is saying at least one person is. Later in his essay John talks about virtual certainty. But virtual certainty is not fleeting (and is not certainty). So all this supports the fact that in addition to the uncertain and the virtually certain John also holds out that there are a few things that are in fact certain.

      And you can be an epistemic fallibilist whilst believing a few things are epistemically certain. I’m of that view, for example. I certainly wasn’t critiquing fallibilism. I was critiquing John’s slack-jawed epistemological musings.

      • Emilie

        It seems that your interpretation turns on the meaning of one word: ‘practically’. From an on-line dictionary:

        [prac·ti·cal·ly
        adv.
        1. In a way that is practical.
        2. For all practical purposes; virtually.
        3. All but; nearly; almost.
        Usage Note: Practically has as its primary sense “in a way that is practical”: We planned the room practically so we can use it as a study as well as a den. The word has an extended meaning of “for all practical purposes,” as in After the accident, the car was practically undrivable. That is, the car can still be driven; it is just no longer practical to do so. Language critics sometimes object when the notion of practicality is stripped from this word in its further extension to mean “all but, nearly,” as in He had practically finished his meal when I arrived. But this usage is widely used by reputable writers and must be considered acceptable.]

        You take him to mean (3), as in: all but nothing is certain, nearly nothing is certain, or almost nothing is certain.

        My interpretation is of uses (1) or (2), and I think that my interpretation is more in keeping with the entirety of what he as written in the post you cite. In fact, if (2) is what he meant, then it seems to me that you have made a great deal out of the tiny (if not pragmatically non-existent) difference between virtual certainty and certainty. I think that this may be an honest mistake on your part, but it does create a straw man.

        First, John goes on at length about a spectrum of possibilities, which only makes sense if certainty does not exist.

        Consider John’s example about someone jumping off of a building, and then consider what Wikipedia has to say about fallibilism:

        [Fallibilism (from medieval Latin fallibilis, “liable to err”) is the philosophical principle that human beings could be wrong about their beliefs, expectations, or their understanding of the world. In the most commonly used sense of the term, this consists in being open to new evidence that would disprove some previously held position or belief, and in the recognition that “any claim justified today may need to be revised or withdrawn in light of new evidence, new arguments, and new experiences.” This position is taken for granted in the natural sciences.]

        This fits well with what John wrote about someone jumping off of a building – we have to be open to the possibility that when someone jumps off of the building, something new or unexpected may happen. Finally, given John’s well known position regarding the primacy of science as a way of learning about the world, I think that the charitable thing to do, given that “fallibilism is taken for granted by the natural sciences”, is to assume that he is a fallibilist.

        Maybe you should ask him?

        In any case, let’s assume, if only for the sake of argument, that all he is getting at is that knowledge is fallible (and that there are varying degrees along a spectrum of potential fallibility). You wrote: “Now let me state with directness and as much charity as I can muster that epistemologies which make it impossible for a person to believe that they existed a day ago are a signal that the person who formulated the epistemology is not a reliable authority on the matter.”

        Pretty harsh words. However, if all that John is getting at is that knowledge is fallible, then I don’t see how your skeptical scenario (the possibility that everything was created 5 minutes ago with the appearance of age) challenges fallible knowledge. That is, I don’t see how your scenario does anything more than merely demonstrate the logical possibility of error, and I don’t see how that threatens fallible knowledge. Do you agree, and if not, why not?

        • randal

          Emilie, I appreciate that you took the time to consult an online dictionary, but your endeavors are for naught. Your definition 2 is fully consistent with what I attributed to John. As I already noted, certain beliefs may exist but are inadequate to support a noetic structure, and thus the things of which we are certain are of no use to transmit justification, just like a severely damaged car with a transmission that can’t shift into second gear is for practical purposes undrivable. Unfortunately you seem to be quite confused.

          “In any case, let’s assume, if only for the sake of argument, that all he is getting at is that knowledge is fallible….”

          That’s not what John said. If he were simply saying that knowledge is fallible there’d be no problem. You should actually read what he says about knowledge, justification, and faith rather than trying to invent something you think is more defensible.

  • http://debunkingchristianity.blogspot.com/ John W. Loftus

    “slack-jawed epistemological musings”?

    More of that is coming. ;-)

    • randal

      Are you referring to the publication of our book?!

  • http://debunkingchristianity.blogspot.com/ John W. Loftus
    • randal

      Goodie. Gimme some time to polish up my brass knuckles (metaphorical brass knuckles of course since I’m a pacifist).

  • http://christthetao.blogspot.com/ David Marshall

    Randal: I also noticed John’s friendly reference to Jerry Coyne’s critique of our fellow contributor to Faith Seeking Understanding, Alvin Plantinga, this morning. I’ve just posted a rebuttal to Coyne’s critique.

    I’d be interested in what you think. I can’t pretend to be as well-read in Western philosophy as you seem to be, but I hope I got the main stuff right.

    • randal

      David, I tried posting a response on your blogsite but it didn’t work for some reason so I’ll post it here:

      Good critique David. Coyne’s comments are complete garbage. He’s a pompous ass who clearly doesn’t know the first thing about epistemology. In that regard let me make two observations.

      First, he describes Plantinga’s work here as “apologetics”. It is a sort of apologetics (i.e. negative apologetics, one that removes defeaters to belief). But even more importantly, it is a work of epistemology. Yes Plantinga rejects evidentialism here, and epistemological internalism more generally. But by doing so he is simply following the trend of epistemology over the last forty years. This all traces to several problems with traditional internalism, most particularly the Gettier problem. Beginning in the late 1960s a growing number of philosophers led by Alvin Goldman have proposed externalist epistemologies which eschew any commitment to traditional evidentialism as a way to escape these problems. Plantinga’s project fits within that wider trend. Since this paper (originally published in 1983) he has gone on to develop his unique form of proper functional externalism.

      Second, yes Mr. Coyne, Plantinga’s proper functionalism dovetails nicely with his Christianity, and in that sense it is apologetic. But so what? Robert Nozick is an atheist and his truth-tracking externalist theory of epistemic warrant dovetails nicely with his atheism. Does Coyne chastise Nozick that his epistemology is congruent with his metaphysics? Don’t bet on it.

      What a complete ignoramus.

  • http://christthetao.blogspot.com/ David Marshall

    Thanks, Randal. That adds to my understanding. Though I will merely admit, more cautiously, that Dr. Coyne also “appears to me pompous assly.” :- )

    • randal

      Nice!

      What really irks me is that Coyne is justly sensitive to uninformed people speaking out of ignorance on his area of expertise. But then he does the exact same thing when it comes to epistemology. Talk about hypocrisy.

  • http://ingles.homeunix.net/ Ray Ingles

    Not every belief requires positive evidence before it is accepted

    As I’ve written:

    However, just because you can’t use evidence to decide on fundamental axioms doesn’t mean all possible axioms are equal, that there’s no principled way to choose among them. There is at least one distinction that can be made among such ‘candidate axioms’. Some propositions fall into the category of “unfalsifiable… but useless”. I can’t prove fundamental notions like ‘my reason has the potential to be effective’ and ‘my senses relay information correlated with an external reality’ and ‘the simplest explanation that covers the facts should be preferred’. And yet… it’s not whimsy or prejudice that drives me to accept these ideas. It’s the fact that not assuming them automatically means ‘game over’.